Those were issues with Xglx (raised mostly by nVidia), which was only intended as a stop-gap measure; Xegl was the long-term approach. The debate wasn't purely technical: it was also about whether the existing X display server was salvageable. Red Hat and nVidia thought it was.
For reference, here is David Reveman's post to the xorg mailing list:
> I think the arguments made by nvidia for why X on OpenGL would be worse
> than the current driver architecture can be debated forever. I think it
> all boils down to whether we want to put some more effort into it and
> take the big scary step to something new, or whether we want to stick
> with the old and well known. Not too surprisingly, we have people who
> are in favor of both, and we'll likely have development being done on
> both, which I don't think is that bad after all.
>
> So far I haven't heard a single argument for why X on OpenGL is a bad
> idea other than that it's a big step and a lot of work will have to be
> done. If that would stop me from working on Xgl, I wouldn't have
> started working on it in the first place.
So yes, I did do my homework. I did it 7 years ago, and the teacher just forgot to collect it.