
Comment: The question is (Score 1) 221

by buchner.johannes (#49615961) Attached to: No, NASA Did Not Accidentally Invent Warp Drive

If all goes through, what will it mean?
If I understood correctly, it lets you pre-warp some of the space ahead on your route before you depart. For example, to go to Alpha Centauri A, which light takes a few years to reach, you could start the warp drive, wait for a year, then jump into the ship and travel there (taking one year less).

It will not save you anything when going to new places you did not plot a course to in advance.

I am also not sure what speed limits the warp drive imposes. Possibly beyond light speed, if it squeezes space enough? (By light speed I mean relative to flat space.)

Comment: Re: Seriously?! (Score 1) 157

by Samantha Wright (#49607085) Attached to: Statues of Assange, Snowden and Manning Go Up In Berlin
Right, which is why I added the second sentence. My point is that it could've been phrased in a manner that avoids implying Moscow is a trap, e.g. "unable to return home." I'm sure there are schools of propaganda training that are more subtle and don't pooh-pooh that sort of structuring, but at the very least it suggests some restraint on the authors' part, rather than a proverbial anti-US slant.

Comment: Re: I must be old (Score 1) 86

What does that really matter? Almost by definition, a demoscene prod involves clever choices in what to make and display on screen in order to achieve an effect. I'm pretty confident the winners of the competitions for the last few years (a) don't give artists working with their demo engines the same flexibility Square-Enix does, and (b) would never be able to assemble enough assets and people to do the facial-expression work at anywhere near the same quality (an area in which, AFAIK, Nvidia has done almost all the pioneering). The achievement of this video isn't diminished by the achievements of the scene, nor vice-versa.

Comment: Re: The answer has been clear (Score 1) 390

by jd (#49575883) Attached to: Why the Journey To IPv6 Is Still the Road Less Traveled

Multiple IPs was one solution, but the other was much simpler.

The real address of the computer was its MAC; the prefix simply said how to get there. In the event of a failover, the client's computer would be notified that the old prefix was now transitory and that a new prefix was to be used for new connections.

At the last common router, the router would simply swap the transitory prefix for the new prefix. The packet would then go by the new path.

The server would multi-home for all prefixes it was assigned.

At both ends, the stack would handle all the details; the applications never needed to know a thing. That's why nobody cared much about remembering IP addresses: they weren't important except to the stack. You remembered the name, and the address took care of itself.
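A toy sketch of the scheme described above, with hypothetical addresses and assuming an even split of the address into a routing prefix and a MAC-derived identifier (this is an illustration, not real stack code):

```python
# Toy model of locator/identifier split addressing (hypothetical, not real stack code).
# The stable part of the address is the MAC-derived identifier; the prefix is only a
# route hint, so a router can swap it without disturbing the connection.

def split(addr):
    """Split a 128-bit address into (64-bit prefix, 64-bit identifier)."""
    return addr >> 64, addr & ((1 << 64) - 1)

def rewrite_prefix(addr, new_prefix):
    """What the last common router would do: swap the transitory prefix."""
    _, ident = split(addr)
    return (new_prefix << 64) | ident

old = (0x20010DB8_00000001 << 64) | 0x0211_22FF_FE33_4455  # old prefix + EUI-64-style id
new = rewrite_prefix(old, 0x20010DB8_0000BEEF)

assert split(old)[1] == split(new)[1]        # the "real" address is unchanged
assert split(new)[0] == 0x20010DB8_0000BEEF  # only the route prefix changed
```

Since the identifier never changes, both ends of a connection keep matching packets to the same session no matter which prefix carried them.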

One of the benefits was that this worked when switching ISPs. If you changed your provider, you could do so with no loss of connections and no loss of packets.

But the same was true of clients, as well. You could start a telnet session at home, move to a cyber cafe and finish up in a pub, all without breaking the connection, even if all three locations had different ISPs.

This would be great for students or staff at a university. And for the university. You don't need the network to be flat; you can stay on your Internet video session as your laptop leaps from access point to access point.


Verizon Tells Customer He Needs 75Mbps For Smoother Netflix Video 170

Posted by Soulskill
from the selling-your-grandma-upgrades dept.
An anonymous reader writes: Verizon recently told a customer that upgrading his 50 Mbps service to 75 Mbps would result in smoother streaming of Netflix video. Of course, that's not true — Netflix streams at about 3.5 Mbps on average over Verizon's fiber service, so there's more than enough headroom either way. But this customer happens to be an analyst for the online video industry, so he did some testing and snapped screenshots for evidence. He fired up 10 concurrent streams of a Game of Thrones episode and found only 29 Mbps of the connection in use. This guy was savvy enough to see through Verizon's BS, but there are surely millions of customers who wouldn't bat an eye at such a pitch. The analyst "believes that the sales pitch he received is not just an isolated incident, since he got the same pitch from three sales reps over the phone and one online."
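The back-of-envelope arithmetic behind the submitter's point, using the average stream bitrate quoted in the summary:

```python
# Sanity check: does a 50 Mbps line already have headroom for Netflix?
avg_stream_mbps = 3.5    # average Netflix bitrate on Verizon fiber, per the summary
plan_mbps = 50
streams = 10

peak_demand = streams * avg_stream_mbps   # ten concurrent streams
assert peak_demand < plan_mbps            # 35 Mbps: even 10 streams fit on the 50 Mbps plan
print(f"{streams} streams need ~{peak_demand} Mbps of a {plan_mbps} Mbps plan")
```

A single stream uses roughly 7% of the existing plan, which is why the 75 Mbps upsell cannot make any difference to Netflix quality.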

Comment: Re:But why is there only one spot like this? (Score 1) 45

by buchner.johannes (#49550567) Attached to: Mystery of the Coldest Spot In the CMB Solved

You make it sound like the temperature of the (empty) region averages down the background, making it colder. But something way more awesome actually happens: photons enter one side of the Void (the empty region) at an early time and travel through it. During that time, the Void expands, so a photon comes out the far side with less energy than it carried going in. It is the finite speed of light, relative to these enormous, evolving structures, that causes this effect!
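A toy numeric illustration of the mechanism (illustrative numbers, not a real cosmology calculation): treat the void as a potential "hill" whose height decays while the photon crosses. In the linear integrated Sachs-Wolfe picture, the net fractional temperature shift is 2·(Φ at exit − Φ at entry), which is negative for a decaying hill, i.e. a cold spot:

```python
# Toy late-time ISW sketch (illustrative numbers, not a real cosmology calculation).
# A photon crossing a void climbs a potential "hill" going in and descends coming out.
# If the hill has decayed during the crossing, the photon suffers a net energy loss.

phi_entry = 1.0e-5   # potential hill height (in units of c^2) when the photon enters
decay = 0.9          # fraction of the hill remaining when the photon exits
phi_exit = phi_entry * decay

# Linear ISW: fractional temperature shift is 2 * (phi at exit - phi at entry)
dT_over_T = 2.0 * (phi_exit - phi_entry)
assert dT_over_T < 0   # net cooling: the void imprints a cold spot
```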

Comment: Re:systemd is a bad joke (Score 4, Interesting) 493

by buchner.johannes (#49545361) Attached to: Ubuntu 15.04 Released, First Version To Feature systemd

You can't just leave things alone, because computers have also changed. Today we do not work on mainframes or desktop computers, but increasingly on laptops and mobile phones, which constantly change state: network connections, plugged-in devices, location, hibernation.

I think there is consensus that these things did not work well under the old init system, although band-aids were found. I remember that changing the hostname, which can happen when DHCP hands you a new one, stopped X from working. That is 80s design for you. Or that changing the system time messed up the logfiles.

Now you can choose which modern init system you want, and there are a couple out there: OpenRC, Upstart and systemd are the best known.

OpenRC is the familiar runlevel-based approach, which runs scripts that may or may not succeed.

Upstart is an event-triggered framework that runs pre-defined actions (but does not work toward goals). That means you have to write tasks describing how to get your system from A to B.

systemd is a dependency resolver: it knows what to activate next to reach a certain state (its goal). It handles services, mount points and network connections in the same framework. It is essentially an overseer of a tree of services.

There are some upsides to systemd besides parallelizing the tasks of a dependency tree to reach a goal. One is that for every process, it is known which service launched it (modern Linux cgroups allow marking processes this way). Also, each service can be assigned resource limits (memory, number of processes) that it cannot exceed (again via cgroups). And, obviously, you are not limited to a fixed set of runlevels.

Yes, systemd is annoying, because it is a new thing to learn. And it is annoying because the maintainers are inconsiderate. But in the end, it is just a program to start other programs, with one particular way of doing it. I don't get what the big deal is. If the complaint is feature bloat: Linux has a lot of features too, and so does VLC, and there we consider them a good thing. Technically, the dependency-resolution approach of systemd seems like a good thing (as in, progress for Linux) to me.
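The dependency-resolution idea can be sketched in a few lines (hypothetical service graph; this is the concept, not actual systemd internals): given a goal, walk the tree of units and start every dependency before the unit that needs it.

```python
# Toy dependency resolver in the spirit of systemd (not actual systemd code).
# Each "unit" lists what it needs; to reach a goal, start dependencies first.

deps = {
    "multi-user.target": ["sshd.service", "network.target"],
    "sshd.service": ["network.target"],
    "network.target": ["eth0.device"],
    "eth0.device": [],
}

def start_order(goal, deps, done=None, order=None):
    """Depth-first walk: start every dependency before the unit itself."""
    if done is None:
        done, order = set(), []
    if goal in done:
        return order
    for dep in deps[goal]:
        start_order(dep, deps, done, order)
    done.add(goal)
    order.append(goal)
    return order

order = start_order("multi-user.target", deps)
assert order[0] == "eth0.device"          # leaves start first
assert order[-1] == "multi-user.target"   # the goal comes up last
assert order.index("network.target") < order.index("sshd.service")
```

Units that don't depend on each other (siblings in the tree) are exactly the ones systemd can start in parallel, which is where the boot-time win comes from.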

Comment: More LOFAR info (Score 3, Informative) 49

by RogerWilco (#49545093) Attached to: Cosmic Rays Could Reveal Secrets of Lightning On Earth

Here is a presentation by Pim Schellaert (referenced in the article) with some more information:

I've seen a presentation of their more recent results, but that doesn't seem to be public yet; I can't find a link.

One of the coolest things we did recently with the LOFAR telescope was to observe the solar eclipse in real time; I think it has never been done with a radio telescope before:

In general you can find a lot of info about what we're doing with the LOFAR telescope here:
and here:

Comment: Re:I doubt Apple will stay in the market (Score 1) 417

by RogerWilco (#49541267) Attached to: We'll Be the Last PC Company Standing, Acer CEO Says

I think Apple will stay in the desktop/laptop market as long as it's the development platform for their software. I believe it's a core principle of how Apple operates to not be dependent on anyone but Apple. It's why they have all their design and technology in house.
This gives them the immense power to switch suppliers and manufacturing locations and completely control their own future.

PC-clone manufacturers and brands come and go because the parts are interchangeable. Apple doesn't live in that world and actively tries to prevent it.

Apple might let someone like Samsung, Asus or Foxconn manufacture something for them, but they do all the design themselves to prevent what Compaq did to IBM.

There are a few cracks in that Apple philosophy, in that they use Intel chips and some other PC components for their desktops, but they have demonstrated that they can switch successfully when needed, as when they moved from PowerPC to x86. They have enormous leverage because they can believably tell any of their suppliers that they might move. It's why they have their own OS, office suite, cloud, browser, etc.

It gives them long-term security and complete independence. The result is a company that is much more agile than its size and age would lead you to believe. It is one of the core reasons Apple is able to do what it does.
