Comment: Re:Let me guess (Score 1) 166

by raxx7 (#49223449) Attached to: Google Introduces Freon, a Replacement For X11 On Chrome OS

Sorry, I forgot to add a point 1.5.

1.5 Even applications that heavily use server-side rendering tend to be more sensitive to network latency and bandwidth than middleman solutions like NX and Xpra.
When working remotely from home, no application I use behaves as well under X forwarding as it does under Xpra. A few are completely useless with X forwarding.
No application I can think of allows itself to be detached from one X server and attached to another.

The usefulness of X11's network capabilities only goes so far, depending on the application, the network latency and bandwidth, and whether or not you need to detach/reattach.
For a lot of remote display users, it's necessary to use middleman solutions like NX and Xpra, which ignore X11's network capabilities.

Comment: Re:Let me guess (Score 1) 166

by raxx7 (#49223355) Attached to: Google Introduces Freon, a Replacement For X11 On Chrome OS

Ultimately, I cannot prove to you that it's not possible; the absence of a solution is not proof of its non-existence.
I can enumerate the known difficulties.

1. The fastest rendering method we have nowadays is direct rendering on the GPU hardware. The second fastest is often server-side software rendering. X11 server-side rendering often comes last.
The issue with methods 1 and 2 is that they involve the application sending large amounts of pixmaps to the display server (akin to playing video).
Only effective use of method 3 makes X11's network capabilities useful.

But, when faced with the decision, application developers tend to favor 1 or 2 and ignore 3.
History teaches us this: it was happening before the Xrender extension; it is happening again now, as Qt5 does not have an effective server-side rendering backend.
Thus, making network capability a core part of the display system can be an exercise in futility, as application developers may simply not make use of it.

2. Server side rendering which, as I've shown, is an integral part of X11's network capabilities, adds considerable complexity to the display server.
This is a huge issue for Wayland. In Wayland, the display server, the window manager and the compositor are rolled into a single program, known as the Wayland compositor.
To make this work, Wayland compositors must be relatively simple, not much more complicated than X11 window managers/compositors.
This is not compatible with supporting the sophisticated server side rendering features provided by X11. The X11 display server is a huge program, in part because the X11 protocol is complex.

Comment: Re:Let me guess (Score 2) 166

by raxx7 (#49220641) Attached to: Google Introduces Freon, a Replacement For X11 On Chrome OS

If you insist on asking the wrong question, you'll always get nonsense as an answer.

The server isn't being bloated with new button styles and widgets. X11 server side rendering is accomplished with a relatively small number of powerful primitives which haven't changed in years.

The reason why applications are doing less server side rendering, however, has to do with much more than fancy buttons and widgets.
For example, the Qt4 folks came to the conclusion that their raster backend was quite a bit faster (for local applications, of course) than their X11 backend.
This matters little for drawing the buttons and widgets of an application. But it matters a lot for the main application area, especially when it's something a bit more complex than a text editor.
It matters when you're navigating a PDF in Okular on a not-so-fast laptop. It matters when you're navigating through the mess of wires in an integrated circuit layout application.

For someone who chastises others over bells and whistles, you don't actually see under the surface.

Comment: Re:Let me guess (Score 1) 166

by raxx7 (#49217113) Attached to: Google Introduces Freon, a Replacement For X11 On Chrome OS

True, it's not about network access per se.

Whether you are using "ssh -X" or just using DISPLAY to point your application at another machine, it is useful because of two properties of the X protocol, which Wayland does not have.

Property 1: Although we routinely use shared memory extensions in modern X setups, a lot (including the core functions to which all applications must be able to fall back) works over a socket, which can be a unix local socket or a TCP socket.
Property 2: The X11 protocol has a slew of very sophisticated features which enable graphical applications to work around the latency of the communication and to reduce communication bandwidth. An application can store glyphs in the X server and then, referencing those glyphs, draw nice anti-aliased text using a very small amount of bandwidth.

Wayland lacks both properties, but property 2 is the big issue.
Redirecting X11 applications over a network doesn't work well if the application sends a ton of commands synchronously, unless the network is low latency (eg, local LAN).
Redirecting X11 applications over the network doesn't work well if the application sends the entire graphics window as a single pixmap, unless it's a high-bandwidth network (eg, local GbE LAN).

Wayland works over a Unix socket too. And you can set which socket the application will attach to with WAYLAND_DISPLAY.
The first problem is that the protocol assumes shared memory, but it would not have complicated things that much to make it work without requiring shared memory.
But the real problem is that the only method Wayland applications have to draw is by sending the entire window (surface) as a single pixmap. Works wonderfully over shared memory, works like crap over my cell phone's 3G connection.
An 800x600 surface rendered at 60 fps requires 86.4 MB/s.
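That figure checks out as a back-of-the-envelope calculation, assuming uncompressed 24-bit color (3 bytes per pixel); 32-bit XRGB buffers would need a third more:

```python
# Uncompressed bandwidth for pushing whole frames over the wire.
# Assumes 24-bit color (3 bytes/pixel), no damage tracking, no compression.
width, height, fps, bytes_per_pixel = 800, 600, 60, 3

bandwidth_bytes_per_s = width * height * bytes_per_pixel * fps
print(bandwidth_bytes_per_s / 1e6, "MB/s")  # -> 86.4 MB/s
```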

Adding the rich set of server-side rendering features to Wayland would complicate things too much for the simple Wayland model to be possible.
And it would be a somewhat futile exercise because, increasingly, X11 applications are doing less server side rendering and more pushing of large pixmaps.

Comment: Re:Let me guess (Score 4, Insightful) 166

by raxx7 (#49214057) Attached to: Google Introduces Freon, a Replacement For X11 On Chrome OS

99% of people don't want X11 style network transparency. They want "ssh -X" and friends to work. They want to be able to remotely run individual graphical applications.
But X11-style network transparency isn't the only way. And it's not the best way.

Despite all the features available in X, application developers put limited effort into making applications work well over high-latency, limited-bandwidth links. An increasing number of applications work poorly over such links. Qt4 applications with the default raster backend sometimes work poorly even on my workplace's GbE LAN (Qt5 doesn't even give you the option). Let's not even think of running Kate from home.
No application I actually use supports detaching and re-attaching to a different X server.

People have been pushed to replace "ssh -X" with NX and Xpra (or, in despair, VNC) because of these limitations (Google them).
Similar solutions can be implemented for Wayland, and they don't need the protocol to become network transparent.

Supporting X11-style network transparency in the Wayland protocol is a futile exercise which compromises the simplicity required by the Wayland model.

Not to mention, if "ssh -X" and friends suit you... then it will keep working for a long time. As long as your Wayland session includes XWayland (which it will, probably forever) and your applications retain an X11 backend, this will still work.
It's not going to die tomorrow just because we switch to Wayland compositors.

Comment: Re:It's almost like the Concord verses the 747 aga (Score 1) 157

by raxx7 (#49155953) Attached to: Hyperloop Testing Starts Next Year

As I wrote in another post[1], you need to limit both lateral and vertical accelerations, which puts constraints on how much you can get from banking.
Eg, if you bank almost 90°, the passengers will experience no lateral acceleration, but they will experience vertical acceleration, for which tolerance is even lower.
At 800 mph, even sloping the line up/down will be a problem.

This is all well understood and researched, as it's a massive and expensive problem for all modes of transportation that go at least as fast as a car on a highway.
The Hyperloop proposal simply glossed over reality in this aspect, along many others.


Comment: Re:It's almost like the Concord verses the 747 aga (Score 2) 157

by raxx7 (#49155263) Attached to: Hyperloop Testing Starts Next Year

You can also tilt the track, which is called canting in railway terms.
Canting in normal railway lines is limited due to the need to handle slow trains, but on high speed rail it's often allowed to be higher.
That's why it's uncommon to use tilting trains above 250 km/h: it's usually preferable to tilt the track rather than to increase the weight of the trains by adding tilting systems.
Though the Japanese have some.

That said, "much" is a relative statement, Whether you tilt the train floor or the track the accelerations experienced by the passenger
- lateral: is v^2/r - g*sin(tilt_angle); acceptable limit is +/- 0.1g
- vertical: 1g + v^2/r * cos(title_angle); limit is 1g+/-0.05g, IIRC

As you can seen from the math, the problem increases with the square of speed, while the benefit from tilt is is limited and bound by the need to keep both lateral and vertical acceleration within bounds.
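These formulas are easy to put into a short sketch (the 300 km/h / 4 km radius / 6° cant numbers below are purely illustrative, not from this thread):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def passenger_accels(speed_ms, radius_m, tilt_deg):
    """Lateral and vertical accelerations (in g) felt in the tilted car frame."""
    a_c = speed_ms ** 2 / radius_m        # centripetal acceleration, m/s^2
    t = math.radians(tilt_deg)
    lateral = (a_c * math.cos(t) - G * math.sin(t)) / G
    vertical = (G * math.cos(t) + a_c * math.sin(t)) / G
    return lateral, vertical

# 300 km/h on a 4 km radius curve with 6 degrees of cant:
lat, vert = passenger_accels(300 / 3.6, 4000.0, 6.0)
print(f"lateral: {lat:.3f} g, vertical: {vert:.3f} g")
```

With these illustrative numbers the result is about 0.07 g lateral and 1.01 g vertical, inside the stated limits; because the centripetal term grows as v^2, quadrupling the speed on the same curve would blow past both.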

Comment: Re:It's almost like the Concord verses the 747 aga (Score 3, Informative) 157

by raxx7 (#49154981) Attached to: Hyperloop Testing Starts Next Year

The Hyperloop Elon Musk proposed was estimated to cost 1/10th of the LA-SF CHSR project, but it also had only 1/10th of the capacity.

Unlike the CHSR project, the proposed Hyperloop actually only connected the outskirts of LA to the Oakland side of the Bay, leaving out the expensive part of going to downtown LA and downtown SF.

The Hyperloop estimates assumed savings on expropriation by placing the track elevated over existing highways.
But to travel at 800 mph without making your passengers sick and barfing, the route actually needs curves 16 times as gentle as those of the 200 mph CHSR.

The estimated Hyperloop costs were low by an order of magnitude even when compared to the known costs of elevated track and even of oil pipelines. Let's not even talk about the actual precision needed to make this work at 800 mph.
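The 16x figure follows directly from the v^2/r relation. A minimal sketch, using the +/- 0.1 g lateral comfort limit quoted elsewhere in this thread and ignoring banking:

```python
G = 9.81             # gravitational acceleration, m/s^2
MPH_TO_MS = 0.44704  # exact mph -> m/s conversion factor

def min_radius_m(speed_mph, lateral_limit_g=0.1):
    """Minimum curve radius keeping lateral acceleration under the limit,
    with no banking: r = v^2 / a."""
    v = speed_mph * MPH_TO_MS
    return v ** 2 / (lateral_limit_g * G)

r_chsr = min_radius_m(200)       # roughly 8 km
r_hyperloop = min_radius_m(800)  # roughly 130 km
print(round(r_hyperloop / r_chsr))  # -> 16: radius scales with speed squared
```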

Comment: Re:Not really happy (Score 2) 171

by raxx7 (#49080067) Attached to: HTTP/2 Finalized

What happened to HTTP/1.1 pipelining is fortunately not a common case: a popular web server breaks the standard and doesn't get fixed, making the standard de facto useless. While it's not impossible for the same to happen with HTTP/2.0, this type of situation is the exception rather than the norm.
All popular webservers today strive for a good degree of standard compliance.

But, maybe as importantly, as pointed out before, that was not the only problem with HTTP/1.1 pipelining. You also have the head of line blocking problem, which is a fundamental design problem.
This weakened the case for enabling HTTP/1.1 pipelining in browsers and for fixing the broken server software.

Comment: Re:Not really happy (Score 5, Informative) 171

by raxx7 (#49079695) Attached to: HTTP/2 Finalized

You might be happier if you researched why HTTP pipelining is off in most browsers and what problem it actually wants to solve.

First, HTTP/1.1 pipelining and HTTP/2.0 are not about increasing the number of requests your server can handle. They're mainly about reducing the effect of round-trip latency on page loading times, which is significant.
If a browser is fetching a simple page with 10 tiny elements (or even cached elements subject to conditional GET) but the server round trip latency is 100 ms, then it will take over 1 second to load the page.

HTTP/1.1 pipelining was a first attempt to solve this problem, by allowing the client to send multiple requests over the connection without waiting for the earlier requests to complete.
If you can pipeline N requests, the round trip latency contribution is divided by N.
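The arithmetic above can be sketched with a latency-only model (transfer time and connection setup ignored; the function is just for illustration):

```python
import math

def page_load_time(n_requests, rtt_s, pipeline_depth=1):
    """Latency-only model: each round trip carries up to pipeline_depth
    requests; transfer time and connection setup are ignored."""
    round_trips = math.ceil(n_requests / pipeline_depth)
    return round_trips * rtt_s

print(page_load_time(10, 0.100))                    # serial: 1.0 s
print(page_load_time(10, 0.100, pipeline_depth=5))  # pipelined: 0.2 s
```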
However, HTTP/1.1 pipelining has two issues which led most browsers to disable it by default (it's not because they enjoy implementing features that won't be used):
- There are or were a number of broken servers which do not handle pipelining correctly.
- HTTP/1.1 pipelining is subject to head of line blocking: the server serves the requests in order and a small/fast request may have to wait inline behind a larger/slow request.

Instead, browsers make multiple parallel requests to each server.
However, some servers (eg, Apache) have problems with large numbers of parallel requests, so browsers use arbitrarily low limits (eg, 4-8 parallel requests per hostname).

HTTP/2.0 attempts to solve these shortcomings by:
a) being new, hopefully without broken servers out there
b) using multiplexing, allowing the server to serve the requests in any order. Thus, no head-of-line blocking.
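A toy model of the difference between in-order pipelined responses and multiplexed streams (the resource names and service times below are hypothetical):

```python
# Toy model comparing completion times: in-order pipelined responses vs
# multiplexed streams. Resource names and service times are made up.
service = {"big.jpg": 0.500, "style.css": 0.010}  # server time per request, s

# HTTP/1.1 pipelining: responses come back in request order, so the fast
# response waits behind the slow one (head-of-line blocking).
t = 0.0
pipelined = {}
for name in ["big.jpg", "style.css"]:
    t += service[name]
    pipelined[name] = t

# HTTP/2 multiplexing: streams are independent, so each response can
# complete as soon as it is ready (shared-bandwidth effects ignored).
multiplexed = dict(service)

print(pipelined["style.css"])    # 0.51 s -- stuck behind big.jpg
print(multiplexed["style.css"])  # 0.01 s
```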

So. HTTP/2.0 is, fundamentally, HTTP/1.1 pipelining without these shortcomings. We hope.

Comment: Re:It used to be fun ... (Score 1) 755

by raxx7 (#49070297) Attached to: Removing Libsystemd0 From a Live-running Debian System

It is that easy. Because some sane and nice people who care about not using systemd as pid1 have actually put in the work to do so.

systemd is only taking in everything insofar as nobody else is willing/able to provide comparable functionality.
Eg, there's no credible replacement for systemd-logind. ConsoleKit is unmaintained and less powerful. Developers of desktop environments and distributions want to take advantage of its functionality and avoid the trouble of trying to get shit done using ConsoleKit.

But enterprising souls (ok, mainly from Ubuntu) have come up with enough functionality (cgmanager) to run systemd-logind without having to run systemd.

Comment: Re:It used to be fun ... (Score 0) 755

by raxx7 (#49065833) Attached to: Removing Libsystemd0 From a Live-running Debian System

Then for f**ks sake, apt-get install something_else and stop bitching.

XFCE, LXDE, KDE, MATE, etc, etc, etc, are only a few apt-get commands away. Likewise, systemd (pid1) can be replaced by sysvinit (and systemd-shim if needed) with a few commands and one reboot.
NOBODY is putting roadblocks in the way of the Debian developers who are willing to put in the work to keep Debian as functional as possible without using systemd as init.

The entire Debian issue is a storm in a teacup, fueled by zealots to whom the mere presence of libsystemd on their hard drive is unacceptable and to whom "freedom of choice" seems to mean that nobody should have the choice of using systemd.

Comment: Re:Pointless (Score 2) 755

by raxx7 (#49063453) Attached to: Removing Libsystemd0 From a Live-running Debian System

You need to get out more.

Most servers run on Windows or Linux, mainly in the form of RHEL and SLES. Anything else tends to mean the hardware and software providers don't support you, which can be quite inconvenient.
Outside hobby servers, servers running BSD or unsupported Linux distros (eg, I run Debian on personal systems) are a minority.

When dealing with systems with more custom hardware designs, things get more varied. Cray XT6 compute nodes run a lightweight Linux installation, while IBM's BlueGene compute nodes run a custom OS which is only a few thousand lines of code.
But the supercomputers we'd call clusters usually run RHEL or SLES or a derivative with some add-ons. Comparing these with the BSDs is nonsense.

Among embedded systems with a multi-tasking memory protected OS, the most common sightings are QNX, VxWorks and Linux [full GNU/Linux, Android, WebOS, etc].
I can't recall the last time I saw a shipping product with NetBSD, actually. Despite its fame for portability, NetBSD has been trailing Linux for a while and it lacks support for a number of modern embedded platforms. Off the top of my head, there's no NetBSD support for the AVR32, NIOS or Blaze architectures.
I don't think there's working support for FPGAs with embedded ARM CPUs either.
