
Comment Re:Open source has won... and then we lost (Score 1) 193

This isn't a step back for open source; it's just staying the same.

Despite the great success of open source software, which I'm using to write this post, the computers this software runs on have always been proprietary hardware implementations, with some proprietary firmware blobs (e.g., the BIOS) stored somewhere.

The fact that some companies have moved from fixed-function hardware, to on-board firmware stored in EEPROMs, to firmware that needs to be uploaded by the driver isn't a real change, as long as the license allows it to be freely redistributed.
It is not an ideal situation; it would be better if this firmware were open source. But it's nothing new or different.

Comment Re:This matters because... (Score 2, Interesting) 193

Unless the system has an I/O MMU, the hardware devices and any firmware they may be running have unrestricted access to RAM.
Until fairly recently, I/O MMUs were almost exclusive to server chipsets.
Nowadays they are more common (spurred mostly by virtualization needs), but not yet universal. Intel likes to disable the feature in its K CPU models (the ones with unlocked frequency multipliers for better overclocking).
I don't keep track of the status of phone/tablet SoCs, but if I had to hazard a guess, I'd bet most of them don't have an I/O MMU.

Comment Re:Why this presumption that you need 3D accelerat (Score 1) 193

For such a short post, you've managed to pack in plenty of errors.

You assume that this is only about 3D.
Now, I don't know about Intel's plans, but with the open source driver and without the firmware blob, I can't even get my AMD card to work at more than 800x600.
No mode setting (screen resolution), no power management, no video decoding, no acceleration of anything: neither 3D nor 2D.
Without the firmware blob, it's just an expensive, power-hungry 800x600 dumb frame buffer.

And there are _not_ plenty of cards out there.
Intel, NVIDIA, and AMD (and Matrox) are the only choices if you want to buy new hardware, and Matrox is not present in the laptop market.
Intel was the last of these not to require a firmware blob.

Comment Re:Surprised? (Score 2) 98

x87 can produce IEEE 754 compliant results if the compiler either sets the correct rounding mode before each operation or stores and reloads the result of each operation to memory (forcing the correct rounding).
However, both are expensive, performance-wise, and no compiler does so by default.
Instead, x87 is normally used in a way which is not IEEE 754 compliant, although it's actually a bit more accurate: internally, everything is done with 80-bit precision.
This results from the fact that the x87 unit predates the final version of the IEEE 754 standard.
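A quick sketch of why the intermediate precision matters. Python floats are IEEE 754 binary64 (53-bit significand), so this can only demonstrate the strict-double side of the comparison; the x87 behavior is described in the comments as an assumption about its 80-bit (64-bit significand) registers:

```python
# Python floats are IEEE 754 binary64, so each operation below rounds
# to a 53-bit significand, just like x87 with forced store/reload.
# At magnitude 1e17 the spacing between adjacent doubles is 16,
# so adding 1.0 is lost entirely when every step rounds to binary64:
lost = (1e17 + 1.0) - 1e17
print(lost)  # 0.0

# An x87 unit keeping the intermediate sum in an 80-bit register
# (64-bit significand, spacing of about 0.008 at this magnitude)
# would preserve the 1.0, and the subtraction would return 1.0 --
# a more accurate answer, but different from strict binary64.
```

This is exactly the "more accurate but not compliant" behavior described above: results depend on whether intermediates stay in registers or get spilled to memory.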

The IEEE 754 standard only covers a few operations: add, subtract, multiply, divide, square root, and FMA.
The transcendental functions (sin, cos, tan, exp, pow, etc.) have never been part of IEEE 754. Historically, most x87 FPUs have had errors larger than 1 ulp, at least for some part of the input range.
If I am not mistaken, only the AMD K5 FPU actually kept the error below 1 ulp over the entire range of inputs. And please, take this with a large grain of salt.

Comment Re:Amtrak's existing signal system (Score 1) 393

The existing system (Pulse Code Cab Signaling) is quite limited in many ways.
http://en.wikipedia.org/wiki/Pulse_code_cab_signaling

One of them is that it offers only a few speed limits, so trains often have to run slower than otherwise needed.
It also doesn't cover all the cases the PTC mandate requires.

I'm not sure whether the combination of PCC with ACSES (another system Amtrak has been deploying) meets the PTC mandate or not.

Comment Re:Works great when you want to be seen (Score 1) 52

Primary radar (what you call search radar) is not widely used for air traffic control.
Primary radar stations are very expensive to build and operate, and their range and accuracy are rather limited too. That is because primary radar stations need to transmit very powerful pulses and listen for the very faint echoes generated by the aircraft, while discerning them from all the other sources of noise and clutter.
In general, only the military operate primary radar stations, and coverage is rather limited. E.g., a large country like the USA will not have much primary radar coverage deep inside its borders. In poorer countries, even at the borders you may have nothing.
Some countries (e.g., the USA, Russia, Australia) have long-wave radar stations which can detect the launch of a missile or a bomber squadron half a world away, but those provide only very rough information.

The current primary tool for air traffic control, being superseded by ADS-B, is secondary radar.
In this type, the radar station emits pulses which are detected by the transponders of (cooperative) aircraft. The transponders then transmit (squawk) a reply to the radar pulse, which is detected by the radar station.
This requires much less power from the radar station, and the transponder reply is a much stronger signal than an echo. The transponder signal can also include information such as aircraft ID, altitude, and so on.

Comment Re:Let me guess (Score 1) 166

Sorry, I forgot to add a point 1.5.

1.5. Even applications which rely heavily on server-side rendering tend to be more sensitive to network latency and bandwidth than middleman solutions like NX and Xpra.
When working remotely from home, no application I use behaves as well under X forwarding as it does under Xpra. A few are completely useless with X forwarding.
And no application I can think of allows itself to be detached from one X server and attached to another.

X11's network capabilities are only useful up to a point, depending on the application, network latency and bandwidth, and whether or not you need detach/reattach.
For a lot of remote display users, it's necessary to use middleman solutions like NX and Xpra, ignoring X11's network capabilities.

Comment Re:Let me guess (Score 1) 166

Ultimately, I cannot prove to you that it's not possible; the absence of a solution is not proof of its non-existence.
I can enumerate the known difficulties.

1. The fastest rendering method we have nowadays is direct rendering on the GPU hardware. The second fastest is often client-side software rendering. X11 server-side rendering often comes last.
The issue with methods 1 and 2 is that they involve the application sending large pixmaps to the display server (akin to playing video).
Only effective use of method 3 makes X11's network capabilities useful.

But, when faced with the decision, application developers tend to favor 1 or 2 and ignore 3.
History teaches us this: it was happening before the Xrender extension, and it has happened again with Qt5, which does not have an effective server-side rendering backend.
Thus, making network capability a core part of the display system can be an exercise in futility, as application developers may simply not make use of it.

2. Server-side rendering, which as I've shown is an integral part of X11's network capabilities, adds considerable complexity to the display server.
This is a huge issue in Wayland, where the display server, the window manager, and the compositor are rolled into a single program, known as the Wayland compositor.
To make this work, Wayland compositors must be relatively simple, not much more complicated than X11 window managers/compositors.
That is not compatible with supporting the sophisticated server-side rendering features provided by X11. The X.org display server is a huge program, in part because the X11 protocol is complex.

Comment Re:Let me guess (Score 2) 166

If you insist on asking the wrong question, you'll always get nonsense as an answer.

The server isn't being bloated with new button styles and widgets. X11 server-side rendering is accomplished with a relatively small number of powerful primitives which haven't changed in years.

The reason why applications are doing less server side rendering, however, has to do with much more than fancy buttons and widgets.
For example, the Qt4 folks came to the conclusion that their raster backend was quite a bit faster (for local applications, of course) than their X11 backend.
This matters little for drawing the buttons and widgets of an application. But it matters a lot for the main application area, especially when it's something more complex than a text editor.
It matters when you're navigating a PDF in Okular on a not-so-fast laptop. It matters when you're navigating through the mess of wires in an integrated circuit layout application.

For someone who chastises others over bells and whistles, you don't actually see what's under the surface.

Comment Re:Let me guess (Score 1) 166

True, it's not about network access per se.

Whether you are using "ssh -X" or just setting DISPLAY to point your application at another machine, it is useful because of two properties of the X protocol, which Wayland lacks.

Property 1: Although we routinely use shared memory extensions in modern X setups, a lot (including the core functions to which all applications must be able to fall back) works over a socket, which can be a local Unix socket or a TCP socket.
Property 2: The X11 protocol has a slew of very sophisticated features which enable graphical applications to work around communication latency and to reduce communication bandwidth. An application can store glyphs in the X server and then, by referencing those glyphs, draw nice anti-aliased text using a very small amount of bandwidth.
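The glyph trick can be put in rough numbers. This is a back-of-the-envelope sketch: the glyph dimensions and per-character message sizes below are illustrative assumptions, not actual X11 protocol encodings:

```python
# Back-of-the-envelope comparison for one 80-character line of text.
# The sizes here are assumptions for illustration, not real X11 wire
# formats.
chars_per_line = 80

# Drawing cached glyphs: roughly one small reference per character.
bytes_as_glyph_refs = chars_per_line * 4             # assumed 4-byte glyph IDs

# Pushing pre-rendered text: one alpha-blended cell per character.
glyph_w, glyph_h, bytes_per_px = 10, 20, 4           # assumed ARGB glyph cell
bytes_as_pixmaps = chars_per_line * glyph_w * glyph_h * bytes_per_px

print(bytes_as_glyph_refs)                       # 320
print(bytes_as_pixmaps)                          # 64000
print(bytes_as_pixmaps // bytes_as_glyph_refs)   # 200 (times more data)
```

Even with made-up numbers, the two orders of magnitude between referencing server-side glyphs and shipping rendered pixels is the whole point of Property 2.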

Wayland lacks both properties, but property 2 is the big issue.
Redirecting X11 applications over a network doesn't work well if the application sends a ton of commands synchronously, unless the network has low latency (e.g., a local LAN).
Redirecting X11 applications over a network doesn't work well if the application sends the entire graphics window as a single pixmap, unless it's a high-bandwidth network (e.g., a local GbE LAN).

Wayland works over a Unix socket too. And you can set which socket the application will attach to with WAYLAND_DISPLAY.
The first problem is that the protocol assumes shared memory, although it would not have complicated things much to make it work without requiring shared memory.
But the real problem is that the only method Wayland applications have to draw is sending the entire window (surface) as a single pixmap. That works wonderfully over shared memory, and like crap over my cell phone's 3G connection.
An 800x600 surface rendered at 60 fps requires 86.4 MB/s.
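For the skeptical, the arithmetic behind that figure, assuming 24-bit color (3 bytes per pixel, which is what the 86.4 MB/s number implies):

```python
# Reproducing the bandwidth figure: full-surface updates at 60 fps.
width, height, fps = 800, 600, 60
bytes_per_px = 3                      # assumed 24-bit color
rate = width * height * bytes_per_px * fps
print(rate / 1e6)                     # 86.4 (MB/s)

# With a 32-bit XRGB surface, as compositors commonly use, it's worse:
print(width * height * 4 * fps / 1e6)  # 115.2 (MB/s)
```

Either number is trivial over shared memory and hopeless over a mobile uplink, which is the asymmetry the whole argument rests on.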

Adding X11's rich set of server-side rendering features to Wayland would complicate it beyond what the Wayland design can bear.
And it would be a somewhat futile exercise because, increasingly, X11 applications are doing less server side rendering and more pushing of large pixmaps.

"Pok pok pok, P'kok!" -- Superchicken

Working...