Comment: Re:I still don't see what's wrong with X (Score 1) 224

by raxx7 (#48185019) Attached to: Lead Mir Developer: 'Mir More Relevant Than Wayland In Two Years'

Here are some facts for you.
Fact: Almost no X application allows itself to be detached from and reattached to the X server.
Fact: Application performance over the network is getting worse and worse.
Fact: A number of people have felt the need to develop and use middlemen to avoid such problems: NX and its many clones, Xpra.
Fact: When I migrated our lab from CentOS 5 to CentOS 6, each and every user of Kate or Kile complained they were unusable from home and I had to teach them to use Xpra.
Fact: I often use Xpra even on the LAN, because of the poor performance and for the ability to detach/reattach applications.

And some more facts.
Fact: nobody depends on X network transparency. Lots of people need to be able to run graphical applications remotely, but X network transparency is neither the only way to do it nor the best. Having to launch an Xpra server is more inconvenient than ssh -X, but hardly a show stopper.
Fact: X network transparency will keep working under Wayland as long as XWayland is around and the toolkits don't remove X11 backends.

No, Wayland won't be network transparent. But there are ways to get the actual job done.
And you'll actually be able to ssh -X for as long as the applications don't drop support for X11.
So... what's the problem?

Comment: Re:I still don't see what's wrong with X (Score 1) 224

by raxx7 (#48182513) Attached to: Lead Mir Developer: 'Mir More Relevant Than Wayland In Two Years'

At no point have I stated that Keith has stopped working on X.
I only pointed to an article which documents (some of) Keith's opinions on the improvements brought by Wayland.

There's one thing you must understand about open source: you can't force others to do your bidding. Nobody can declare X or Wayland to be the future and kill the other.
Open source in general has never been a democracy but a do-ocracy: those who do the work decide, and everyone else is left only with the choice of using (or not using) what has been done. All you can do is do the work and see who chooses to use it.

Like everything else, X and Wayland will continue to work as long as enough people are willing to put in the work to make them work. No more, no less.
People will be forced to change from X to Wayland only if and when the people who are currently doing the work required to put together X-based Linux desktops decide they don't want to do it any more and nobody replaces them.
X's network transparency will continue to work as long as XWayland is kept up to date, the toolkits keep an X backend and the applications take care not to break it.

That said, looking at statements made by the people doing said work, it looks likely that in the not-so-distant future you won't be able to run the latest version of GNOME (maybe even KDE) on top of an X display server.

On the other hand, XWayland will probably be kept up to date forever, as toolkits older than GTK3 and Qt5 will probably never get native Wayland backends.
And removal of X backends from other toolkits is something in a very distant and hazy future.

Finally, Daniel has always qualified what he meant by "network transparency is broken".
It has been broken by countless application writers, who only care about and test their applications in a local environment, where the SHM extension gives them immense bandwidth to the server and latency is measured in a few microseconds.
They have, unintentionally but increasingly, made their applications perform worse and worse in bandwidth- and latency-limited environments.
And they (the application writers) have no intention of doubling back.

And this is the second time this has happened, by the way. Before Xrender, applications were also well on their way to pushing so much data to render anti-aliased forms that using them over the networks of a decade ago was also problematic.

So, heed Daniel's words: X network transparency is broken.
Your ability to use it with the latest applications is decreasing and there isn't anything the X or Wayland developers can do about it.
Because, again, you can't force the application developers to make sure their applications work well over the network.
And this is why Wayland does not support network transparency: it adds a lot of complexity and brings little benefit, because most application developers are not willing to make sure their applications work well over it anyway.

Comment: Re:I still don't see what's wrong with X (Score 1) 224

by raxx7 (#48182491) Attached to: Lead Mir Developer: 'Mir More Relevant Than Wayland In Two Years'

Because you were obviously mixing up pixel scraping (used by RDP) with pixel pushing.
Quoting yourself: "Why? RDP also pushes pixels"

You were also obviously confusing using Xrender to push pixels with using Xrender in a way which leads to low bandwidth usage.
Again, quoting yourself: "The term 'software rasterizer' does not necessarily imply that it does not use Xrender."

Using the X protocol for a screen scraper is a bad idea:
a) it does not provide a means to compress the image on the wire.
b) the display X server won't always keep the contents of the window. E.g., if you minimize and then restore a window, the display X server may require the contents of the window to be re-sent, even though they haven't been changed by the application.

I really can't fathom why you insist on using the X protocol for a pixel scraper. There isn't anything new here. We have had a bunch of pixel scrapers around for a long time (VNC, RDP, Xpra and, to some degree, NX) and they've all found it useful to use a specific protocol.

Some people like RDP because, despite it being owned by MS (and subject to patents), the FreeRDP project provides a good implementation of the RDP protocol which, AFAIK, compares favourably with other pixel scrapers.

Comment: Re:I still don't see what's wrong with X (Score 1) 224

by raxx7 (#48177071) Attached to: Lead Mir Developer: 'Mir More Relevant Than Wayland In Two Years'

You're mixing up three very different methods.

Method 1 is plain dumb pixmap pushing. Applications mostly render client-side and then constantly push the result to the server as large, non-reusable pixmaps.
Of course, X supports this perfectly. Not only can you push pixmaps over the socket, X also has a shared memory extension which lets you do this fast in the local case.
However, while this works "correctly" over the network, it requires too much bandwidth to be useful. E.g., to push an 800x600 RGB window at 30 fps, you're pushing about 43 MB/s. Trivial between two processes on your computer, not so trivial between your home and your work computer.
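To put numbers on that, a quick back-of-the-envelope check (Python, assuming 3 bytes per pixel and no compression):

    # Rough bandwidth needed to push a full window as raw RGB pixmaps.
    width, height = 800, 600     # window size in pixels
    bytes_per_pixel = 3          # RGB, no alpha
    fps = 30                     # update rate

    bytes_per_frame = width * height * bytes_per_pixel
    mb_per_second = bytes_per_frame * fps / 1e6
    print(f"{mb_per_second:.1f} MB/s")   # ~43.2 MB/s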

Method 2 is _clever and effective_ use of server-side storage and rendering. The keywords here are _clever_ and _effective_. If your use of Xrender is not _clever and effective_, then it degrades to plain dumb pixmap pushing.
Using Xrender, for example, an application will upload pixmaps/glyphs to the server side. And then it will issue many small rendering commands which refer to the pixmaps/glyphs stored on the server.
The poster-child example for Xrender: once the glyphs for all the required characters have been uploaded, Xrender allows applications to draw anti-aliased text using relatively little bandwidth between the client and server.
_Clever and effective_ use of core X primitives and Xrender is what allows X applications to work over the network effectively.
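To illustrate why this matters, here's a toy comparison in Python; the glyph and command sizes are made-up assumptions, not actual X protocol numbers:

    # One second (30 frames) of redrawing a full-screen text editor.
    chars_on_screen = 5000        # characters visible at once
    distinct_glyphs = 200         # distinct characters actually used
    glyph_size = 16 * 16          # assumed bytes per 16x16 alpha glyph image
    cmd_size = 16                 # assumed bytes per "draw glyph at (x, y)" command
    frames = 30

    # Xrender-style: upload each distinct glyph once, then send small commands.
    xrender_bytes = distinct_glyphs * glyph_size + frames * chars_on_screen * cmd_size

    # Dumb pixmap pushing: resend the whole 800x600 RGB window every frame.
    pixmap_bytes = frames * 800 * 600 * 3

    print(f"{xrender_bytes / 1e6:.1f} MB vs {pixmap_bytes / 1e6:.1f} MB")  # ~2.5 MB vs ~43.2 MB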

The problem is that, increasingly, application developers are making less and less _clever and effective_ use of server-side rendering. Largely because, as I pointed out before, method 1 works so much better for local clients.

Method 3 is pixel scraping. In this method, we have four actors instead of two. The pixel scraper server presents itself to the applications as a (local) X server, but it renders not to a screen but to a frame buffer. The applications draw normally to this (local) X server; they have no idea about the remote display X server.
The pixel scraper server asynchronously scans the frame buffer for changes and sends them to the pixel scraper client on the other side, usually compressed. The pixel scraper client receives the changes and draws them into the client's X server, which renders them to the screen.
Pixel scraping also eliminates the latency problem, as the applications see a local server.
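A minimal sketch of the scraping idea in Python (pure illustration; the strip size is an arbitrary choice and a real scraper like Xpra or VNC is far more sophisticated):

    import zlib

    STRIP = 32  # scan the frame buffer in 32-pixel-high strips

    def damaged_strips(prev, curr, width, height, bpp=3):
        """Compare two raw frame buffers (bytes) and yield (y, compressed_strip)
        for each horizontal strip that changed since the last scan."""
        row = width * bpp
        for y in range(0, height, STRIP):
            lo, hi = y * row, min(y + STRIP, height) * row
            if prev[lo:hi] != curr[lo:hi]:
                # Only the changed strip goes over the wire, compressed.
                yield y, zlib.compress(curr[lo:hi])

    # The scraper client receives the (y, data) pairs, zlib.decompress()es them
    # and paints each strip into a window on the display X server.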

RDP works effectively by using pixel scraping (alongside Windows drawing primitives).

Comment: Re:I still don't see what's wrong with X (Score 1) 224

by raxx7 (#48175955) Attached to: Lead Mir Developer: 'Mir More Relevant Than Wayland In Two Years'

Well, I also find it strange that people say they see no difference.
My workplace has a simple LAN, with a dozen or so computers attached to a decent switch, perfectly capable of full bandwidth and sub-200 µs pings.
I run remote stuff every day and I have a bunch of applications, using a bunch of toolkits, that perform anywhere from worse-than-local-but-OK to really bad.
Off the top of my head, GTK3 applications are the only ones based on a modern toolkit that still run very well.

The fact that Qt5 needs libXrender doesn't mean it has an Xrender-based backend.
AFAIK, Qt5 only has two backends: software rasterizer and OpenGL. I've never seen any evidence of other backends.
http://qt-project.org/wiki/Qt5GraphicsOverview

Trying to forward Wayland clients over the X protocol is dumb.
Wayland clients (applications) do nothing but push pixmap buffers to the server. Try to forward that directly over the network in any shape or form and you get terrible performance.
The only solution is either to have a pixel scraping mechanism or to have Wayland clients (applications) support some other protocol.
Which most Wayland clients (applications) already do: they support X.

Comment: Re:The gradual middle road (Score 3, Interesting) 519

by raxx7 (#48173183) Attached to: Debian Talks About Systemd Once Again

systemd-journald has long been capable of forwarding the logs to rsyslogd.
And systemd-journald can even be configured to keep its binary log in /var/run/journal, which gets deleted at each reboot.
And Debian uses this configuration by default.
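For reference, something along these lines in /etc/systemd/journald.conf gives that behaviour (a sketch of the relevant options, not necessarily Debian's exact shipped file):

    [Journal]
    # Keep the binary journal only on tmpfs, so it disappears at reboot.
    Storage=volatile
    # Hand every message over to the traditional syslog daemon (e.g. rsyslogd).
    ForwardToSyslog=yes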

Unfortunately, if they acknowledged this, systemd haters would be left with one less thing to hate.

Other functions provided by the systemd package (e.g., session management by systemd-logind) are just a lot of work to implement, especially if you try to go for a more decoupled architecture.
Not that people aren't working on it though.

Comment: Re:I still don't see what's wrong with X (Score 1) 224

by raxx7 (#48172895) Attached to: Lead Mir Developer: 'Mir More Relevant Than Wayland In Two Years'

That's not what he meant, I think.

Modern X applications (well, the toolkits) are resorting so much to simply pushing bitmaps to the X server that X is becoming unusable over the network.
E.g., the Qt developers found some time ago that their client-side rendering backend was much faster for local usage than the Xrender backend. So they made it the default in Qt 4.4, and for Qt5 they didn't even bother with an Xrender-based backend.

The client-side rendering backend pushes so much data that Kate is actually painful to use over a Gigabit Ethernet local network, never mind working from home. And the same can be said for many other applications I use daily.

I run applications remotely on a daily basis. But now it's mostly under Xpra instead of plain X forwarding.
Good performance over the network is buried very deep in the priority list of most graphical application developers, if it's on the list at all.

And an Xpra-like solution is something you can implement for Wayland.

Comment: Re:And yet IBM soldiers on... (Score 1) 156

by raxx7 (#48054899) Attached to: End of an Era: After a 30 Year Run, IBM Drops Support For Lotus 1-2-3

OS/2 was too advanced for the PC market of the time.
OS/2 was something we take for granted now: a preemptive multi-tasking, protected-memory OS.
But such an OS had drawbacks: it required more memory, a bit more CPU, and tended not to work well with old applications that were written to run on a bare-metal "OS" like DOS.
Although Microsoft had had such an OS since 1993 (Windows NT), it was not until 2001 (Windows XP) that they were able to converge the PC market onto using one.

IBM didn't abandon PowerPC, only a given market segment, that of desktop/laptop processors.
Since Apple was the only customer for those processors, there was not enough money in it to fund competitive designs.
Instead, IBM kept designing and producing PowerPC processors for other market segments: Wii/Wii U, Xbox360 and PS3 all use IBM PowerPC designs.
They have also built a number of supercomputers based on PowerPC CPU designs. And of course, they keep making POWER CPUs for their servers.

Comment: Re:IPv6 won't fix this problem (Score 1) 248

by raxx7 (#47666415) Attached to: The IPv4 Internet Hiccups

While BGP routers need to know a route for every prefix, they can compress the routing table by merging prefixes which have the same routing.
The problem is that the IPv4 address space is too fragmented to allow much compression.

IPv6 address allocations should allow for less fragmentation and better compression.
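As a toy illustration of that aggregation, using Python's stdlib ipaddress module (the prefixes are made-up documentation ranges):

    import ipaddress

    # Four contiguous prefixes that share the same next hop collapse into one entry...
    contiguous = [ipaddress.ip_network(p) for p in
                  ("192.0.2.0/26", "192.0.2.64/26", "192.0.2.128/26", "192.0.2.192/26")]
    print(list(ipaddress.collapse_addresses(contiguous)))  # [IPv4Network('192.0.2.0/24')]

    # ...while a fragmented allocation with a hole in the middle cannot be merged.
    fragmented = [ipaddress.ip_network(p) for p in ("192.0.2.0/26", "192.0.2.128/26")]
    print(list(ipaddress.collapse_addresses(fragmented)))  # still two entries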

Comment: Re:complex application example (Score 1) 161

by raxx7 (#47494651) Attached to: Linux Needs Resource Management For Complex Workloads

Interesting. It sounds a bit like an application I have.
Like yours, it involves UDP and Python.
I have 150,000 "jobs" per second arriving in UDP packets. "Job" data can be between 10 and 1400 bytes and as many "jobs" are packed into each UDP packet as possible.

I use Python because, intermixed with the high-performance job processing, I also have slow but complex control sequences (and I'd rather cut my wrists than move all that to C/C++).
But to achieve good performance, I had to reduce Python's contribution to the critical path as much as possible and offload to C++.

My architecture has 3 processes, which communicate through shared memory and FIFOs.
The shared memory is divided into fixed-size blocks, each big enough to contain the information for a maximum-size job.

Process A is C++ and has two threads.
Thread A1 receives the UDP packets, decodes the contents, writes the decoded job into a shared memory block and stores the block index number into a queue.
Thread A2 handles communication with process B. This communication consists mainly of sending process B block index numbers (telling B where to get job data) and receiving block index numbers back from process B (telling A that the block can be re-used).

Process B is single-threaded Python.
When in the critical loop, its main job is to forward block index numbers from process A to process C and from process C back to process A.
(It also does some status checks and control functions, which is why it's in the middle).
In order to keep the overhead low, the block index numbers are passed in batches of 128 to 1024 (each block index number corresponding to a job).

Process C is, again, multi-threaded C++.
The main thread takes the data from the shared memory, returns the block index numbers to process B and pushes the jobs through a sequence of processing modules, in batches of many jobs.
Within each processing module, the batch of jobs is handed out to a thread pool and collected back, while preserving order.
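For the curious, here's a minimal Python-only sketch of the central idea (fixed-size shared-memory blocks plus queues of block index numbers). It's a simplification with made-up names and sizes, not my actual code, which batches the indices and uses C++ on both ends:

    from multiprocessing import Process, Queue, shared_memory

    BLOCK_SIZE = 1400      # big enough for a maximum-size job
    N_BLOCKS = 4096        # pool of reusable blocks

    def producer(shm_name, full_q, free_q):
        shm = shared_memory.SharedMemory(name=shm_name)
        for job_id in range(10000):
            idx = free_q.get()                      # claim a free block
            payload = (b"job %d" % job_id).ljust(BLOCK_SIZE, b"\0")
            shm.buf[idx * BLOCK_SIZE:(idx + 1) * BLOCK_SIZE] = payload
            full_q.put(idx)                         # tell the consumer where to look
        full_q.put(None)                            # no more jobs
        shm.close()

    def consumer(shm_name, full_q, free_q):
        shm = shared_memory.SharedMemory(name=shm_name)
        while (idx := full_q.get()) is not None:
            job = bytes(shm.buf[idx * BLOCK_SIZE:(idx + 1) * BLOCK_SIZE])
            # ... process the job here ...
            free_q.put(idx)                         # hand the block back for reuse
        shm.close()

    if __name__ == "__main__":
        shm = shared_memory.SharedMemory(create=True, size=BLOCK_SIZE * N_BLOCKS)
        full_q, free_q = Queue(), Queue()
        for i in range(N_BLOCKS):
            free_q.put(i)
        p = Process(target=producer, args=(shm.name, full_q, free_q))
        c = Process(target=consumer, args=(shm.name, full_q, free_q))
        p.start(); c.start(); p.join(); c.join()
        shm.close(); shm.unlink()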
