Comment: Re:It's almost like the Concord verses the 747 aga (Score 1) 134

by raxx7 (#49155953) Attached to: Hyperloop Testing Starts Next Year

As I wrote in another post[1], you need to limit both lateral and vertical accelerations, which constrains how much you can gain from banking.
Eg, if you bank almost 90°, the passengers will experience no lateral acceleration, but they will experience vertical acceleration, for which tolerance is even lower.
At 800 mph, even sloping the line up/down will be a problem.

This is all well understood and researched, as it's a massive and expensive problem for every mode of transportation that moves at least as fast as a car on a highway.
The Hyperloop proposal simply glossed over reality in this aspect, along with many others.

[1] http://tech.slashdot.org/comments.pl?sid=7030445&cid=49155263

Comment: Re:It's almost like the Concord verses the 747 aga (Score 1) 134

by raxx7 (#49155263) Attached to: Hyperloop Testing Starts Next Year

You can also tilt the track, canting in railway terms.
Canting in normal railway lines is limited due to the need to handle slow trains, but on high speed rail it's often allowed to be higher.
That's why it's uncommon to use tilting trains above 250 km/h: it's usually preferable to tilt the track rather than increase the weight of the trains by adding tilting systems.
Though the Japanese have some.

That said, "much" is a relative statement. Whether you tilt the train floor or the track, the accelerations experienced by the passenger are:
- lateral: (v^2/r)*cos(tilt_angle) - g*sin(tilt_angle); acceptable limit is +/- 0.1g
- vertical: g*cos(tilt_angle) + (v^2/r)*sin(tilt_angle); limit is 1g +/- 0.05g, IIRC

As you can see from the math, the problem increases with the square of speed, while the benefit from tilt is limited and bounded by the need to keep both lateral and vertical acceleration within their limits.
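To put numbers on that, here's a sketch of the formulas above in code (the function names and the 10° tilt are my own assumptions; limits as quoted, g = 9.81 m/s²):

```python
import math

G = 9.81  # m/s^2

def accelerations(v_ms, radius_m, tilt_deg):
    """Lateral and vertical accelerations felt by a passenger on a tilted curve."""
    a_c = v_ms ** 2 / radius_m          # centripetal acceleration
    t = math.radians(tilt_deg)
    lateral = a_c * math.cos(t) - G * math.sin(t)
    vertical = G * math.cos(t) + a_c * math.sin(t)
    return lateral, vertical

def min_radius(v_ms, tilt_deg, lat_limit=0.1 * G):
    """Smallest curve radius that keeps lateral acceleration within the limit."""
    t = math.radians(tilt_deg)
    return v_ms ** 2 * math.cos(t) / (lat_limit + G * math.sin(t))

MPH = 0.44704  # m/s per mph
for v_mph in (200, 800):
    r = min_radius(v_mph * MPH, 10)
    print(f"{v_mph} mph: minimum curve radius ~{r / 1000:.1f} km")
```

Because the required radius scales with v², quadrupling the speed from 200 to 800 mph multiplies the minimum radius by exactly 16, whatever tilt angle you pick.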

Comment: Re:It's almost like the Concord verses the 747 aga (Score 2) 134

by raxx7 (#49154981) Attached to: Hyperloop Testing Starts Next Year

The Hyperloop Elon Musk proposed was estimated to cost 1/10th of the LA-SF CHSR project, but it also had only 1/10th of the capacity.

Unlike the CHSR project, the proposed Hyperloop actually only connected the outskirts of LA to the Oakland side of the bay, leaving out the expensive part: reaching downtown LA and downtown SF.

The Hyperloop estimate assumed savings on land expropriation by placing the track elevated over existing highways.
But to travel at 800 mph without making your passengers sick and barfing, the route's curves actually need radii 16 times as large as those of the 200 mph CHSR.

The estimated Hyperloop costs were low by an order of magnitude even when compared to known costs of elevated track and even of oil pipelines. Let's not even talk about the track precision needed to make this work at 800 mph.

Comment: Re:Not really happy (Score 2) 171

by raxx7 (#49080067) Attached to: HTTP/2 Finalized

What happened to HTTP/1.1 pipelining is fortunately not a common case: a popular webserver breaks the standard and never gets fixed, making the standard de facto useless. While it's not impossible for the same to happen with HTTP/2.0, this type of situation is the exception rather than the norm.
All popular webservers today strive for a good degree of standards compliance.

But, maybe as importantly, as pointed out before, that was not the only problem with HTTP/1.1 pipelining. There is also the head-of-line blocking problem, which is a fundamental design flaw.
This weakened the case both for enabling HTTP/1.1 pipelining in browsers and for fixing the broken server software.

Comment: Re:Not really happy (Score 5, Informative) 171

by raxx7 (#49079695) Attached to: HTTP/2 Finalized

You might be happier if you researched why HTTP pipelining is off in most browsers and what problem it actually wants to solve.

First, HTTP/1.1 pipelining and HTTP/2.0 are not about increasing the number of requests your server can handle. They're mainly about reducing the effect of round-trip latency on page loading times, which is significant.
If a browser is fetching a simple page with 10 tiny elements (or even cached elements subject to conditional GET) but the server round-trip latency is 100 ms, then it will take over 1 second to load the page.
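As a back-of-the-envelope sketch of that arithmetic (the 100 ms and 10 elements are the hypothetical numbers above):

```python
RTT = 0.100      # assumed server round-trip latency, seconds
N_REQUESTS = 10  # tiny page elements; transfer time assumed negligible

# One request at a time over a single connection: pay the RTT for every element.
sequential = N_REQUESTS * RTT  # 1.0 s

# Pipelining `depth` requests at a time divides the round-trip contribution.
for depth in (1, 2, 5, 10):
    batches = -(-N_REQUESTS // depth)  # ceiling division
    print(f"pipeline depth {depth:2d}: ~{batches * RTT:.1f} s of round-trip time")
```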

HTTP/1.1 pipelining was a first attempt to solve this problem, by allowing the client to send multiple requests over the connection without waiting for the earlier requests to complete.
If you can pipeline N requests, the round-trip latency contribution is divided by N.
However, HTTP/1.1 pipelining has two issues which led most browsers to disable it by default (it's not because they enjoy implementing features that won't be used):
- There are or were a number of broken servers which do not handle pipelining correctly.
- HTTP/1.1 pipelining is subject to head-of-line blocking: the server serves the requests in order, so a small/fast request may have to wait in line behind a larger/slower request.
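The head-of-line effect is easy to see with toy numbers (hypothetical resource names and service times):

```python
# Two responses requested back-to-back on one connection: a slow one (500 ms of
# server work) ahead of a fast one (10 ms).
service = {"big.css": 0.500, "tiny.png": 0.010}  # request order, seconds each

# HTTP/1.1 pipelining: responses must come back in request order.
t, inorder = 0.0, {}
for name, cost in service.items():
    t += cost
    inorder[name] = t

# HTTP/2-style multiplexing: each stream can complete after its own service time.
multiplexed = dict(service)

print(inorder["tiny.png"])      # ~0.51 s: stuck in line behind big.css
print(multiplexed["tiny.png"])  # ~0.01 s
```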

Instead, browsers make multiple parallel connections to each server.
However, because some servers (eg, Apache) have problems with large numbers of connections, browsers use arbitrarily low limits (eg, 4-8 parallel connections per hostname).

HTTP/2.0 attempts to solve these shortcomings by:
a) being new, hopefully with no broken servers out there yet
b) using multiplexing, allowing the server to serve the requests in whatever order is convenient. Thus, no head-of-line blocking.

So. HTTP/2.0 is, fundamentally, HTTP/1.1 pipelining without these shortcomings. We hope.

Comment: Re:It used to be fun ... (Score 1) 754

by raxx7 (#49070297) Attached to: Removing Libsystemd0 From a Live-running Debian System

It is that easy. Because some sane and nice people who care about not using systemd as pid1 have actually put in the work to do so.

systemd is only taking over everything insofar as nobody else is willing or able to provide comparable functionality.
Eg, there's no credible replacement for systemd-logind. ConsoleKit is unmaintained and less powerful. Developers of desktop environments and distributions want to take advantage of its functionality and avoid the trouble of trying to get shit done using ConsoleKit.

But enterprising souls (ok, mainly from Ubuntu) have come up with enough functionality (cgmanager) to run systemd-logind without having to run systemd.

Comment: Re:It used to be fun ... (Score 0) 754

by raxx7 (#49065833) Attached to: Removing Libsystemd0 From a Live-running Debian System

Then for f**ks sake, apt-get install something_else and stop bitching.

XFCE, LXDE, KDE, MATE, etc, etc, etc, are only a few apt-get commands away. Likewise, systemd (as pid 1) can be replaced by sysvinit (and systemd-shim if needed) with a few commands and one reboot.
NOBODY is putting roadblocks in the way of the Debian developers who are willing to put in the work to keep Debian as functional as possible without using systemd as init.

The entire Debian issue is a storm in a teacup, fueled by zealots to whom the mere presence of libsystemd on their hard drive is unacceptable, and "freedom of choice" seems to mean that nobody should have the choice of using systemd.

Comment: Re:Pointless (Score 2) 754

by raxx7 (#49063453) Attached to: Removing Libsystemd0 From a Live-running Debian System

You need to get out more.

Most servers run Windows or Linux, the latter mainly in the form of RHEL and SLES. Anything else tends to mean the hardware and software providers won't support you, which can be quite inconvenient.
Outside hobby servers, servers running BSD or unsupported Linux distros (eg, I run Debian on my personal systems) are a minority.

When dealing with systems with more custom hardware designs, things get more varied. Cray XT6 compute nodes run a lightweight Linux installation, while IBM's BlueGene compute nodes run a custom OS which is only a few thousand lines of code.
But the supercomputers we'd call clusters usually run RHEL or SLES or a derivative with some add-ons. Comparing with the BSDs is nonsense.

Among embedded systems with a multi-tasking, memory-protected OS, the most common sightings are QNX, VxWorks and Linux [full GNU/Linux, Android, WebOS, etc].
I can't recall the last time I saw a shipping product with NetBSD, actually. Despite its fame for portability, NetBSD has been trailing Linux for a while and it lacks support for a number of modern embedded platforms. Off the top of my head, there's no NetBSD support for the AVR32, Nios II or MicroBlaze architectures.
I don't think there's working support for FPGAs with embedded ARM CPUs either.

Comment: Re:Pointless (Score 1) 754

by raxx7 (#49063147) Attached to: Removing Libsystemd0 From a Live-running Debian System

To me, it looks like you're taking your hobby approach to work.

If you're using professional (for lack of a better word) proprietary applications from Xilinx, Cadence, Mentor, Oracle, Autodesk, etc, on Linux, you should do it on a supported operating system version: RHEL or SLES.
Of course, if you're running Windows it's fundamentally the same situation. Usually, application developers target and support only a few versions of Windows.
Eg, check https://www.cadence.com/rl/Resources/release_info/Supported_Platforms_Matrix.pdf

If you do that, it works relatively well -- no better, no worse than Windows.
If you go with anything else you're a) asking for trouble and b) their support won't even help you.
Even if you run CentOS, which is 99.99% compatible with RHEL, expect their support to refuse assistance until you migrate to RHEL.

Other than that, as your experience with Cadence shows, there is actually a large body of niche applications which are not available for Windows or where Windows is a second-class citizen for the developers.
Another example off the top of my head is ROOT, the main data analysis framework used at CERN. It's mainly developed for Linux, including the graphical user interface parts which make the plots. OS X and Windows are second- and third-class citizens.

And their number seems to be increasing. For example, in the last few years Xilinx brought the Linux version of their tools to parity with Windows (they're equally crappy now). And Altera brought their Linux tools from basic-and-expensive to near parity with their Windows tools (there are a few glitches in the GUI).

Comment: Re:Pulseaudio misconceptions (Score 1) 754

by raxx7 (#49062889) Attached to: Removing Libsystemd0 From a Live-running Debian System

dmix doesn't run in kernel space; it runs in user space.
In an ugly way, by the way: the first application to load the dmix plugin fork()s a process which runs as the sound server.

That said, dmix was a very simple sound server that mostly did the job, but PulseAudio does it better and also implements a long list of wanted features.
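The "first client forks the server" shape looks roughly like this (a minimal sketch, with a hypothetical Unix-socket path and a trivial echo loop standing in for actual mixing; not the real dmix code):

```python
import os
import socket

SOCK_PATH = "/tmp/mix-demo.sock"  # hypothetical rendezvous point

def serve(listener):
    # Stand-in for the mixing daemon: answer a single client, then exit.
    conn, _ = listener.accept()
    conn.sendall(b"mixed:" + conn.recv(64))
    conn.close()

def connect_or_fork():
    """dmix-style startup: try to reach a running server; if there is none,
    the first client forks one, then connects to it like any other client."""
    s = socket.socket(socket.AF_UNIX)
    try:
        s.connect(SOCK_PATH)
        return s                     # a server is already running
    except OSError:
        s.close()
    try:
        os.unlink(SOCK_PATH)         # clear a stale socket from a dead server
    except FileNotFoundError:
        pass
    listener = socket.socket(socket.AF_UNIX)
    listener.bind(SOCK_PATH)
    listener.listen(1)
    if os.fork() == 0:               # child: become the sound server
        serve(listener)
        os._exit(0)
    listener.close()                 # parent: behave as an ordinary client
    s = socket.socket(socket.AF_UNIX)
    s.connect(SOCK_PATH)
    return s

if __name__ == "__main__":
    c = connect_or_fork()
    c.sendall(b"hello")
    print(c.recv(64))
```

PulseAudio sidesteps this pattern by running as a proper per-user daemon instead.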

Comment: Re:Submarines are the undisputed... (Score 1) 439

by raxx7 (#49061575) Attached to: Will Submarines Soon Become As Obsolete As the Battleship?

Despite FlyingGuy's somewhat US-sub fanboyish tone (sorry!!), his fundamental proposition is correct.
Exercise after naval exercise has confirmed one thing: even not-so-modern submarines are a nightmare for even the most modern surface fleets, as all the detection technologies have severe limitations.
Active sonar has limited range; passive sonar depends on the target's noise. Both are seriously affected by the noise environment and the ocean's thermal layers.
Magnetic anomaly detection has very limited range and can be defeated/partially defeated by minimizing the use of ferromagnetic materials (such as Germany's U-212 class).
And they can attack a target without exposing their position too much. Launching a torpedo no longer requires you to yell to the world "Here I am and this is my street number".
Modern torpedoes can be launched quietly and guided through an arbitrary path before closing in on the target and being detected.
By the time a target detects an incoming torpedo, the launching sub can be anywhere within a few hundred km of ocean, which is already a lot to canvass.

And the innovations mentioned in the article smell a lot like bullshit by the way. LED light of the submarine hull???
