Seems like a bit of FUD on both sides of the argument...
A while back, I spent a few minutes skimming the Wayland FAQ and Wayland Architecture Diagram. Interesting stuff, especially the architecture page, and they provide actual detail while the TFA mostly doesn't.
For comparison: when running X, you might have an ordinary window manager or a compositing window manager. In the Wayland model, Wayland *is* the compositor, providing both window manager functions and some of the functionality of an X server.
From what I can see, here are the architectural differences between X and Wayland when it comes to supporting remote app display:
To deliberately oversimplify: it sounds like the reason Wayland doesn't support remote displays is that it doesn't support local displays either! More accurately, Wayland supports local displays (of course), but unlike X11 it provides no way to draw to them. Wayland doesn't do rendering; it apparently "just" knows how to swap video buffers to a display device and coordinate buffers between multiple clients.
I'm thinking that, for example, if you want to write a graphical app, you might target OpenGL or Cairo and then expect your code to work both on Unix (with X) and on Windows (without X). On Unix/X, I'd expect an OpenGL library that handed X primitives to the X server. With Wayland, you'd apparently have an OpenGL library that rendered into a buffer and then handed the buffer off to the Wayland compositor.
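To make the contrast concrete, here's a toy sketch (not the real X or Wayland APIs; every name below is made up for illustration). The X-style client sends drawing *commands* to the server, which does the rendering; the Wayland-style client renders locally and hands over a finished *buffer* of pixels:

```python
def x_style_client(server_queue):
    # X-ish model: the display server receives primitives and draws them.
    server_queue.append(("fill_rect", 0, 0, 4, 4, "red"))
    server_queue.append(("draw_line", 0, 0, 3, 3, "black"))

def wayland_style_client(compositor):
    # Wayland-ish model: the client (via GL, Cairo, etc.) renders into its
    # own buffer, then hands the finished buffer to the compositor.
    buf = [["red"] * 4 for _ in range(4)]   # client-side rendering...
    for i in range(4):
        buf[i][i] = "black"
    compositor.append(buf)                  # ...then one buffer hand-off

server_cmds, compositor_bufs = [], []
x_style_client(server_cmds)
wayland_style_client(compositor_bufs)
print(len(server_cmds), "commands vs", len(compositor_bufs), "finished buffer")
```

The point is just where the rendering happens: on the server side in X, on the client side in Wayland.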
So, Wayland isn't doing some of the things that an X server would do. Wayland never works with drawing primitives. It seems obvious that you'd never be able to run apps that use the old X toolkit libraries against Wayland without an X server in the picture. Both the TFA and the FAQ report this and note that you'll need X server(s) in addition to Wayland for the foreseeable future; the X server(s) talk to Wayland instead of to the hardware.
There's going to be support for running Wayland apps remotely. However, as others have noted, an obvious question is how efficiently a "native" Wayland app could be displayed remotely. If the app and its libraries render graphics primitives into display buffers, the low-level primitive operations are lost by the time Wayland gets the buffers, so you have to be able to efficiently transmit bitmap deltas. Cue arguments over whether drawing primitives or bitmaps are more efficient...
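The bitmap-delta idea is simple enough to sketch: rather than resending the whole frame, send only the rows (or, in practice, tiles/rectangles) that changed since the last frame. The frame format and helper names here are invented for illustration:

```python
def frame_delta(old, new):
    """Return a list of (row_index, row_data) for rows that changed."""
    return [(i, row) for i, (prev, row) in enumerate(zip(old, new)) if prev != row]

def apply_delta(frame, delta):
    """Rebuild the new frame on the receiving side from old frame + delta."""
    out = [row[:] for row in frame]
    for i, row in delta:
        out[i] = row[:]
    return out

old = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
new = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]   # one pixel changed
delta = frame_delta(old, new)
print(len(delta), "of", len(new), "rows cross the network")
assert apply_delta(old, delta) == new
```

Real remote-display protocols (VNC, RDP, etc.) add compression and smarter damage tracking on top, but this is the core of the trade.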
The discussion of doing remote Wayland apps seems to revolve around how to transmit the per-app buffers across the network instead of handing the buffer to the local Wayland compositor / display driver. Or perhaps about having the compositor know that certain app buffers should be transmitted instead of composited to the local display, but that's almost the same thing when viewed as a flow.
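Viewed as a flow, both variants reduce to one dispatch decision per buffer. A hypothetical sketch (surface records and the transport are made up, nothing here is Wayland's actual interface):

```python
def handle_buffer(surface, buffer, local_display, network_out):
    # When a client hands the compositor a finished buffer, the compositor
    # either composites it to the local display or forwards it over the wire.
    if surface.get("remote"):
        network_out.append((surface["id"], buffer))   # ship to remote peer
    else:
        local_display[surface["id"]] = buffer         # composite locally

display, wire = {}, []
handle_buffer({"id": "term", "remote": False}, b"pixels-a", display, wire)
handle_buffer({"id": "editor", "remote": True}, b"pixels-b", display, wire)
```

Whether that decision lives in the compositor or in a separate forwarding process is exactly the design question being discussed.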
Transmitting bitmap deltas works pretty well, but for some apps, sending higher-level information such as the original OpenGL might be better. With X, they defined an extension, GLX, for essentially passing the OpenGL commands to a remote server. Similarly, I imagine it would be useful if an OpenGL app could talk to a remote Wayland server instead of sending bitmap deltas. So, it looks to me like it would be useful to define an extension mechanism for transport that would allow OpenGL commands to be sent to a remote computer, rendered to a buffer there, and handed off to the remote Wayland compositor. Bonus points if the extension mechanism is generic. Transport of audio seems like another good fit for a modern replacement of X.
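A toy comparison shows why a GLX-like command transport can win for some workloads: for a large solid rectangle, one serialized drawing command is far smaller than the pixels it produces. The wire formats below are invented for illustration, and of course real traffic would be compressed:

```python
import json

WIDTH = HEIGHT = 256

# Option 1: send the drawing command (the GLX-style approach).
command = json.dumps({"op": "fill_rect", "x": 0, "y": 0,
                      "w": WIDTH, "h": HEIGHT, "rgb": [30, 30, 30]}).encode()

# Option 2: send the rendered pixels (3 bytes per pixel, uncompressed).
pixels = bytes([30, 30, 30]) * (WIDTH * HEIGHT)

print(len(command), "bytes of command vs", len(pixels), "bytes of pixels")
```

The flip side is that a command stream is only cheaper when the scene compresses well as commands; a photo or video frame is pixels either way, which is why the "primitives vs bitmaps" argument never quite settles.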
I'm neither an X server developer nor a Wayland developer. I may have garbled things somewhat, but perhaps someone can clarify the mistakes and help take a portion of the FUD out of the weekly Wayland discussions.
TL;DR: Wayland should be able to support remote display of individual apps, but it will be based on transmission of bitmap deltas, not higher level drawing primitives. Seems like OpenGL or whatever could be added though.