While I'm sure it would allow customized algorithms, they would have to be rather unusual not to be handled by the current generation of geometry/vertex/fragment shaders. Are they thinking of some kind of non-triangular geometry?
The FA mentions voxel rendering for Minecraft-type applications. Although volume rendering can be achieved with traditional hardware accelerated surface primitives, there are many algorithms that are more naturally described and implemented using data structures that don't translate so easily to hardware accelerated primitives.
Constructive solid geometry, vector-based graphics, and ray tracing are also not such a nice fit for the OpenGL and DirectX APIs. You don't always want to have to tessellate geometry that has an analytic expression, such as conics, rational quadratics, B-splines, and NURBS, so a more software-oriented approach can produce better renderings of those types of mathematical objects.
The challenge here is that graphics primitives that APIs such as OpenGL provide are of course those that the hardware can most readily accelerate. If you don't use primitives and operations that can be massively parallel then you may not get much use out of the hardware.
What's interesting there is we say it reflects reality because it makes the calculations easier.
That really is the most interesting thing in this discussion. Essentially we are making a leap of faith: that simpler models are more likely to be true as long as they continue to fit the data and allow us to make predictions. But it is at root an aesthetic judgement: beauty is truth, and truth is beautiful. It is the essence of rationality.
It's cool to see how Feynman's diagrams may be like the epicycles of the earth-centered view of the universe: they can be made to work as long as you keep refining the model, adding loops within loops within loops. But with this new breakthrough, all that can be thrown away for a much simpler model that leads to deeper insights. And those deeper insights are awe-inspiring: locality and unitarity as emergent phenomena.
X was so "ahead of its time" that its entire architecture was dumped in version 10 to make way for X11, and it has remained so far ahead of its time that to this day NextStep, MacOS, Android and Windows have yet to adopt a single thing from it, unlike the rest of Unix, most of which has made its way into those operating systems.
Mac OS X, Android and Windows are consumer operating systems, for which eye-candy UIs are considered more important than network transparency. Their remote connectivity needs are limited to accessing corporate Web, cloud, and IT services, not other peers on the network.
NeXT was a great OS that used Display PostScript as its rendering engine, but it was also wrapped in a networked desktop environment, NextStep, and was used with X11 and with NeWS (Sun's Network Extensible Window System) as well. I did find NextStep and NeWS superior to X11, and it's a damn shame they didn't succeed (although NextStep evolved into OS X, and Apple did include a rootless X11 implementation with it until Mountain Lion).
As for other companies, there were entire industry consortiums dedicated to expanding X and Unix, such as X/Open and the Open Software Foundation: these included companies like AT&T, DEC, Unisys, Hewlett-Packard, IBM, Sun, Prime, and Apollo.
And no, it was not designed to access resources from the desktop. It was mainly designed so that you could use a dumb terminal to access your server. When it became clear that was pie in the sky, instead of redesigning the turd, they just added layer upon layer of cruft, so you ended up with a dumb-as-a-doornail protocol running on a heavyweight, expensive "dumb" terminal.
The dumb terminal at that time was a VT100. X was designed to run on bitmapped displays. Although there were such bitmapped terminals available at the time, X mostly ran on engineering workstations. You didn't usually use it to access a server (although you could); rather, other networked peers used it to display a UI on your local X display. I'm not sure why you think that is "pie in the sky" since it worked and continues to work rather well. Part of the reason for that was because the protocol was rich enough to transmit graphics primitives at a higher level than a bitmap. Nothing dumb about it.
Lastly the web browser has nothing to do with Unix. It is platform independent. The fact that you think the web==unix shows how little you know about deep OS architecture.
Don't be silly, I'm not conflating the Web with Unix. Sure, web browsers are supported by most computing platforms. But the web browser's roots in Unix go way back to NextStep and the beginnings of the Internet, at that time mostly Unix-based, and the web browser remains a central and crucial component of desktop Linux. My main point was that cleaning up web browser architecture would be vastly more useful and relevant than replacing a stable and functional part of Linux with something that is less useful, but prettier.
X is one of the few remaining *big* mistakes in Unix. It was designed with the wrong philosophy and overtaken by actual usage. Wayland is an effort to clean up and refactor the code.
X was ahead of its time and nothing ever caught up to it. It was designed around the idea that all the resources of the network should be seamlessly accessible from a single user's desktop, and embodied the old Internet ideal of ubiquitous peer-to-peer connectivity (still perfectly reasonable and incredibly useful on a secure LAN). Wayland is an effort to make it easier to develop eye-candy user interfaces for consumers and throw out any functionality that gets in the way of that goal. It's totally appropriate for mobile but unnecessary and counter-productive for the desktop.
If you want to talk about really big mistakes in Unix, and computing in general, take a look at the modern web browser and the development environment that it requires. Doing anything interesting on the web requires an unholy mix of technologies and infrastructure like JavaScript, PHP, HTML, XML, CSS, cookies, DOM, BOM and all the interfaces between them. What we really need is a Wayland for the Web, not a Wayland that destroys much of what is stable and functional in Unix.
Google CEO Page is worth 25 billion dollars and along with Brin owns enough voting shares to completely control the company. Mayer is worth 300 million. They have resources that you and I don't: the ability to hire the best lawyers in the world and media platforms that reach the majority of the people in the US and perhaps the world.
If they had any sense of responsibility, obligation, or patriotism they could fight this thing and have a good chance of winning.
Actually, what the clients are doing right now is assembling bitmaps, widgets, and font glyph assets into a pixmap on the client side, most likely without the benefits of GPU acceleration, and sending the result as an uncompressed pixmap over the wire to the X server, which hands it off to a compositor, which combines the pixmap with images from other applications and hands the result back to the X server.
Yes, I think you are right for the most part, especially with Gnome and GTK applications. It explains why the resource tab of gnome-system-monitor consumes over 1MB/sec of bandwidth on my LAN. It's a shame, really, since it could have been coded to be much more network-efficient if it just drew the damn lines on the server side instead of rendering them into a pixmap on the client side.
In general Gnome is extremely network unfriendly. I get tons of error messages on the console because Gnome applications feel so insecure when they can't connect to a Gnome desktop. They seem to work fine, but it's annoying.
Composited UIs are important for mobile because of the limited physical screen space; compositing conveys additional information beyond the spatial dimensions of the viewing surface. And the lower overhead and simplicity of infrastructure such as DirectFB and Wayland are also essential for mobile. On the desktop, not so much: with enough screen space I can be happy and productive with a tiling window manager and completely opaque windows. That and network transparency, in my opinion, trump any advantage that Wayland would have in terms of desktop environments.
The GPU acceleration issue is puzzling. I've experienced this -- if there is no monitor attached to a Linux machine, then the GPU drivers are not loaded. There's no good reason for that (it's not an issue on Solaris), and I've read that it can be worked around with a dongle and a resistor attached to the display port to make the driver think there's a monitor there.
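For what it's worth, with some drivers the dongle hack can be avoided in software by telling the driver a monitor is attached. This is a sketch from memory for the proprietary NVIDIA driver; the option names and the EDID path are assumptions, so check your driver's documentation before relying on them:

```
Section "Device"
    Identifier "gpu0"
    Driver     "nvidia"
    # Pretend a flat panel is connected even though the port is empty
    Option     "ConnectedMonitor" "DFP-0"
    # Optionally feed it a saved EDID so it picks sane modes
    Option     "CustomEDID" "DFP-0:/etc/X11/edid.bin"
EndSection
```

Other drivers have their own equivalents (or none), which is part of why the resistor-on-the-VGA-port trick became folklore.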
If Wayland will support legacy X11 desktop applications the way you describe, then fine, I guess I'll get used to it. But it seems like a lot of work for not much benefit: work that could be more effective if applied to the mobile use case.
VNC is a pixel-based screen-scraping desktop replicator. I have never seen one that performs better than individual X11 clients over a fast LAN, and over the Internet it's even worse. Besides that, I already have a full X11 desktop running on my local machine, so I don't want another desktop environment intruding. I just want the individual clients to display on my existing X11 server's desktop. This is especially important when working with several remote hosts.
RDP is a little better in that it has some understanding of the higher-level desktop objects it is rendering. But it still functions as a complete desktop that really wants to take over your entire local desktop.
In the real world, the network transparency support features are not used, EVEN WHEN YOU ARE USING A REMOTE DISPLAY because it's easier and more effective to actually render on the remote machine and bang the interface, so that's exactly what every widget toolkit does.
I have three headless Linux machines and the only display I have is on my laptop. My remote X11 clients run on these machines and present their UIs on my local X11 display server running on my laptop. While it is probably true that these clients are not transmitting XDrawLine and XFillArc protocol elements to render their UIs, they are still mostly assembling pre-rendered bitmaps, widgets, and font glyph assets to send down the wire for rendering on the local server. How is this going to work on Wayland?
I keep reading that this will be supported through some backward-compatible protocol, but has anybody actually worked out the details of how existing X11 clients will migrate to this new protocol? My fear is that these clients will stop working with future versions of Linux and their replacements will not support network transparency.
Wayland has a real use case for mobile devices, but why make the same mistake as Microsoft by gratuitously trying to unify mobile with desktop? On a desktop, the only advantage to Wayland is that it facilitates implementing a pretty compositing desktop. This is a fad that is already starting to fade from fashion.
Software production is assumed to be a line function, but it is run like a staff function. -- Paul Licker