Comment Re:Software GPU Emulation (Score 5, Informative) 237
Note that compositing != GPU acceleration. Mac OS X has always used compositing, but it was entirely software. There are still good reasons to do so. I'll compare for you:
No compositing, one frontbuffer: you don't get your own pixmap to draw onto. You send drawing commands to the display server, which draws on your behalf so that you can't scribble anywhere you like on the frontbuffer. Unfortunately, if you have something complicated to draw, the user gets to watch it happen. The usual algorithm for drawing a new scene is to paint the background and then paint the objects in order from back to front, so on every update the user may see flicker as regions briefly show the background color before the objects on top are redrawn. To work around this, most modern toolkits (Qt 4, GTK 3) render to a pixmap and then simply ask X to draw that pixmap when they're done. This avoids the flicker at the cost of a bit more RAM.
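The toolkits' double-buffering trick can be sketched in C with a toy framebuffer model (no real Xlib here; all names are made up for illustration):

```c
#include <string.h>

#define W 8
#define H 8
#define BG 0  /* background color */

/* Painter's algorithm into an offscreen buffer: clear to the
 * background, then draw objects in order from back to front. */
static void render_scene(unsigned char buf[H][W])
{
    memset(buf, BG, W * H);
    /* back object: a full-width bar on row 2 */
    for (int x = 0; x < W; x++)
        buf[2][x] = 1;
    /* front object: a small square overlapping the bar */
    for (int y = 1; y < 4; y++)
        for (int x = 1; x < 4; x++)
            buf[y][x] = 2;
}

/* The only step the user ever sees: one copy of the finished
 * pixmap to the front buffer, so no intermediate painting
 * (and hence no background-color flicker) is visible. */
static void present(unsigned char front[H][W], unsigned char back[H][W])
{
    memcpy(front, back, W * H);
}
```

Without the offscreen buffer, each of the drawing steps in `render_scene` would hit the visible frontbuffer directly, which is exactly where the flicker comes from.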
With a compositor, the application still draws to a pixmap, but instead of asking the X server to draw it to the screen immediately, it hands the server a handle to the pixmap, and the display server can draw it whenever it likes. This makes a lot of things easier: vertical sync, effects, and conversions like color format and color space.
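The handle-passing arrangement can be sketched the same way (again a toy model: in real X the compositor would receive a Pixmap XID via the Composite extension, not a raw pointer):

```c
#include <string.h>

#define W 8
#define H 8
#define MAX_WIN 4

/* The "handle" here is just a pointer to the client's pixmap. */
struct window {
    unsigned char (*pix)[W];  /* client-rendered buffer */
    int x, y;                 /* position on screen */
};

static struct window stack[MAX_WIN];
static int nwin = 0;

/* The client hands over its buffer once; after that it never has
 * to request a redraw -- the compositor already has the handle. */
static void register_window(unsigned char (*pix)[W], int x, int y)
{
    stack[nwin].pix = pix;
    stack[nwin].x = x;
    stack[nwin].y = y;
    nwin++;
}

/* The compositor runs this whenever it wants (e.g. on vblank),
 * drawing the windows bottom to top into the screen buffer. */
static void composite(unsigned char screen[H][W])
{
    memset(screen, 0, W * H);
    for (int i = 0; i < nwin; i++)
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++) {
                int sy = stack[i].y + y, sx = stack[i].x + x;
                if (sy < H && sx < W)
                    screen[sy][sx] = stack[i].pix[y][x];
            }
}
```

Because the compositor owns the moment of presentation, syncing `composite` to vertical retrace, or applying an effect or color conversion while copying, becomes its private business rather than something every client has to coordinate.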
Drawing the pixmap to the screen involves the same operations whether compositing is on or off, and the API the compositor uses shouldn't matter much either if the underlying implementation is optimized. A highly optimized Gallium3D blitter is going to be just as fast as the traditional X blitter, if not faster. The only thing slowing it down here is that the OpenGL API is rather overkill for blitting, but hopefully the llvmpipe backend is optimized for this use case. And it's probably not worth making the compositor support two drawing APIs, as KDE does, since they both end up doing the same thing anyway.
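Whatever API sits on top, a software blit reduces to the same inner loop. A minimal sketch (toy byte buffers; the function name is made up):

```c
#include <string.h>
#include <stddef.h>

/* Copy a w*h rectangle from (sx,sy) in src to (dx,dy) in dst.
 * An X server's CopyArea and a GL compositor drawing a textured
 * quad through llvmpipe both end up doing a row loop like this
 * once all the API layers above have been peeled away. */
static void blit_rect(unsigned char *dst, int dst_stride,
                      const unsigned char *src, int src_stride,
                      int dx, int dy, int sx, int sy, int w, int h)
{
    for (int row = 0; row < h; row++)
        memcpy(dst + (size_t)(dy + row) * dst_stride + dx,
               src + (size_t)(sy + row) * src_stride + sx,
               (size_t)w);
}
```

The per-pixel work is identical either way; the difference between the two paths is only how much API machinery (state setup, vertex processing) runs before this loop is reached.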