
Comment Interesting (Score 5, Interesting) 72

Kernel bypass plus zero copy are, of course, old hat. I worked on this sort of thing at Lightfleet, back when it still did this stuff called work. InfiniBand and the RDMA Consortium had been working on it for longer still.

What sort of performance increase can you achieve?

Well, Ethernet latencies tend to run into milliseconds for just the stack, and tens, if not hundreds, of milliseconds for anything real. InfiniBand can achieve eight-microsecond latencies. SPI can get down to two microseconds.

So you can certainly achieve the sorts of latency improvements quoted. It's hard work, especially when operating purely in software, but it can actually be done. It's about bloody time, too. This stuff should have been standard in 2005, not 2015! Bloody slowpokes. Back in my day, we had to shovel our own packets! In the snow! Uphill! Both ways!
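
For anyone who hasn't seen it done, the core trick is just shared memory that both sides poll, so the data path never enters the kernel. Below is a minimal single-producer, single-consumer sketch in C; the ring layout, sizes and names are my own illustration, not any vendor's API, and a real implementation would pin cores, align slots to cache lines and map the ring over memory the NIC can DMA into.

/* Kernel-bypass idea in miniature: two threads exchange messages through a
 * shared ring buffer with no syscalls on the data path. All names and sizes
 * here are illustrative assumptions. Build with -pthread. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define SLOTS    1024            /* power of two, so index wrapping is cheap */
#define MSG_SIZE 64
#define MESSAGES 4               /* how many messages this demo exchanges */

static struct {
    _Atomic uint64_t head;       /* next slot the producer will fill */
    _Atomic uint64_t tail;       /* next slot the consumer will read */
    char slots[SLOTS][MSG_SIZE];
} ring;

/* Consumer thread: busy-polls the ring instead of blocking in the kernel. */
static void *consumer(void *arg)
{
    (void)arg;
    for (uint64_t tail = 0; tail < MESSAGES; tail++) {
        while (atomic_load_explicit(&ring.head, memory_order_acquire) == tail)
            ;                    /* spin: latency traded for a burned core */
        printf("got: %s\n", ring.slots[tail % SLOTS]);
        atomic_store_explicit(&ring.tail, tail + 1, memory_order_release);
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, consumer, NULL);

    for (uint64_t head = 0; head < MESSAGES; head++) {
        /* wait for space, then write the payload straight into shared memory */
        while (head - atomic_load_explicit(&ring.tail, memory_order_acquire) == SLOTS)
            ;
        snprintf(ring.slots[head % SLOTS], MSG_SIZE, "message %llu",
                 (unsigned long long)head);
        atomic_store_explicit(&ring.head, head + 1, memory_order_release);
    }
    pthread_join(t, NULL);
    return 0;
}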

Comment Two thoughts on this. (Score 2) 307

First, they could always use blipverts.

Second, 400+ new shows works out to somewhere between a third and a half of a new show per channel per season, on average, which implies there are on the order of a thousand channels. That suggests that if there's too much new material, there are far, far too many channels. In fact, that might be the best solution: shut down nine in every ten channels. Then you can have exactly the same amount of new material with less channel surfing, and people will stay on a channel because they'll like the next program as well.

The British did perfectly well on four channels. In fact, they mostly did perfectly well on three channels. America is, of course, bigger. They might need fifteen to cater to all the various needs. You don't need several thousand (including local). All it does is dilute the good stuff with a lot of crap.

Comment Re: Even if practical technology was 10-20 years o (Score 1) 399

Maybe. My thought has always been that if fusion is close enough that we can get ballpark figures, we can build the necessary infrastructure and much of the housing in parallel with fusion development. Because the energy distribution will impose novel demands on the grid, it's going to require a major rethink of communications protocols, over-generation procedures and action plans for what to do if lines are taken out.

With fusion especially, learning after the fact is expensive at best. Much better to get all the learning done in the decade before fusion is working.

With all that in place, the ramp time until fusion is fully online at a sensible price will be greatly reduced.

Parallelize, don't serialize. Only shredded wheat should be cerealized.

Comment That is the problem. (Score 1) 30

By trying not to say too much, the advisories are inherently vague and can therefore be read as either insignificant or a dire emergency, depending on the day.

That's not useful to anyone.

Because the NSA and GCHQ have effectively eliminated all network security, thanks to their backdoors in things like Cisco devices, it should be assumed automatically that any bad guys capable of exploiting an issue already have all the information they need, and that the bad guys who aren't capable of exploiting it aren't a threat whether they're informed or not.

Advisories should therefore declare everything. Absolutely everything. And it should be made clear in those advisories that this is being done because the risks created by the backdoors exceed the risks created by the additional information.

The added information will aid in debugging, clearing up the issue faster and validating that no regressions have taken place.

Comment Lots of options (Score 2) 35

Now that pure silicon-28 can be extracted with a simple linear accelerator (which should have been obvious), it should be possible to use much larger dies without running into defect problems. That doesn't keep to Moore's Law, admittedly, but it does mean you can halve the space that double the transistors would otherwise take, since you're eliminating a lot of packaging. Over the area of a motherboard it would more than work out, especially with a move to wafer-scale integration. Think how many regular dies fit onto a single wafer; instead of chopping the wafer up, you throw on interconnects, Transputer-style, and keep the lot as cores.
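
To get a rough sense of how many cores that is, here is the usual dies-per-wafer approximation; the 300 mm wafer and 100 mm^2 die below are assumed values picked purely for illustration, not anyone's actual product.

/* Back-of-the-envelope estimate of how many dies fit on a wafer. Build with -lm. */
#include <math.h>
#include <stdio.h>

static const double PI = 3.14159265358979;

static double dies_per_wafer(double diameter_mm, double die_area_mm2)
{
    double radius = diameter_mm / 2.0;
    /* usable wafer area divided by die area, minus the dies lost around the edge */
    return (PI * radius * radius) / die_area_mm2
         - (PI * diameter_mm) / sqrt(2.0 * die_area_mm2);
}

int main(void)
{
    printf("~%.0f dies per 300 mm wafer at 100 mm^2 each\n",
           dies_per_wafer(300.0, 100.0));
    return 0;
}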

Graphene is troublesome, yes, but there are lots of places where you need regular conductors. If you replace the copper interconnects and the gold links to the pins, you should be able to reduce the heat generated and therefore increase the speed at which you can run the chips. Graphene might also help with 3D chip technology, as you'd be generating less heat between the layers. That would let you double the number of transistors per unit area occupied, even if not per unit area utilized.

Gallium arsenide is still an option. If you can sort pure isotopes, then it may be possible to overcome many of the limitations the technology has had so far. It has been nasty to work with because of the pollution it produces, but we're well into the age where you can convert that pollution into plasma and, again, separate out what's in it. That might be a little expensive, but cleanup will always cost more, and you can sell the products of the separation. It's much harder to sell polluted mud.

In the end, because people want compute power rather than a specific transistor count, Processor-in-Memory is always an option: move logic into the RAM itself and avoid having to push those operations through support chips, a bus and all the layers of a CPU just to get them carried out. DDR4 is nice and all, but main memory is still one of the slow parts of the system, and the caches on the CPU are easily flooded because code always expands to fill the space available.

There is also far too much work going on in managing memory. The current Linux memory manager is probably one of the best around; take that plus all the memory support chips, put it on an oversized ASIC and give it some cache. The POWER8 processor has 96 megabytes of L3 cache; I hate odd amounts, and the memory logic won't be nearly as complex as the POWER8's, so let's round it up to 128 megabytes. Since that cache will run at close to the speed of the CPU, exhaustion and stalling won't be nearly so common.

Actually, the best thing would be for the IMF (since it's not doing anything useful with its money) to buy millions of POWER8 and MIPS64 processors, offering them for free to individual geeks on daughterboards that can be plugged in as expansion cards. At worst, it would make life very interesting.

Comment Re: The answer has been clear (Score 1) 390

Multiple IPs was one solution, but the other was much simpler.

The real address of the computer was its MAC; the prefix simply said how to get there. In the event of a failover, the client's computer would be notified that the old prefix was now transitory and that a new prefix should be used for new connections.
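
To make the "address is the MAC" part concrete: the interface half of the address was built straight from the hardware address, with the routed prefix bolted on in front. Here's a sketch of the standard modified EUI-64 construction; the MAC below is made up.

/* Turn a 48-bit MAC into the 64-bit interface identifier used as the
 * bottom half of an IPv6 address (modified EUI-64). */
#include <stdint.h>
#include <stdio.h>

static void mac_to_eui64(const uint8_t mac[6], uint8_t iid[8])
{
    iid[0] = mac[0] ^ 0x02;   /* flip the universal/local bit */
    iid[1] = mac[1];
    iid[2] = mac[2];
    iid[3] = 0xFF;            /* FF:FE inserted in the middle */
    iid[4] = 0xFE;
    iid[5] = mac[3];
    iid[6] = mac[4];
    iid[7] = mac[5];
}

int main(void)
{
    uint8_t mac[6] = {0x00, 0x1A, 0x2B, 0x3C, 0x4D, 0x5E};  /* made-up MAC */
    uint8_t iid[8];
    mac_to_eui64(mac, iid);

    /* the router's advertised /64 prefix goes in front of this */
    printf("interface id: %02x%02x:%02x%02x:%02x%02x:%02x%02x\n",
           iid[0], iid[1], iid[2], iid[3], iid[4], iid[5], iid[6], iid[7]);
    return 0;
}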

The last common router would simply swap the transitory prefix for the new one, and the packet would then travel by the new path.

The server would multi-home for all prefixes it was assigned.

At both ends, the stack would handle all the detail; the applications never needed to know a thing. That's why nobody cared much about remembering IP addresses: they weren't important to anything except the stack. You remembered the name, and the address took care of itself.

One of the benefits was that this worked when switching ISPs. If you changed your provider, you could do so with no loss of connections and no loss of packets.

But the same was true of clients, as well. You could start a telnet session at home, move to a cyber cafe and finish up in a pub, all without breaking the connection, even if all three locations had different ISPs.

This would be great for students or staff at a university, and for the university itself. You don't need the network to be flat; you can stay on your Internet video session as your laptop leaps from access point to access point.

Comment Re: How about basic security? (Score 5, Informative) 390

IPSec is perfectly usable.

Telebit demonstrated transparent routing (i.e., total invisibility of internal networks without loss of connectivity) in 1996.

IPv6 has a vastly simpler header, which means a vastly simpler stack. This means fewer defects, greater robustness and easier testing. It also means a much smaller stack, lower latency and fewer corner cases.
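
To see just how much simpler: the entire fixed IPv6 header is 40 bytes and eight fields, with no options, no fragmentation fields and no header checksum. The struct below is only an illustration of that layout, not a replacement for the system's own netinet/ip6.h.

#include <stdint.h>
#include <stdio.h>

struct ipv6_fixed_header {
    uint32_t ver_tc_flow;    /* 4-bit version, 8-bit traffic class, 20-bit flow label */
    uint16_t payload_len;    /* length of everything after this header */
    uint8_t  next_header;    /* TCP, UDP, or the first extension header */
    uint8_t  hop_limit;      /* replaces IPv4's TTL */
    uint8_t  src[16];
    uint8_t  dst[16];
};                           /* fixed size, so routers never parse variable-length options */

int main(void)
{
    printf("fixed IPv6 header: %zu bytes, 8 fields\n",
           sizeof(struct ipv6_fixed_header));
    return 0;
}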

IPv6 is secure by design. IPv4 isn't secure and there is nothing you can design to make it so.

Comment Re: Waiting for the killer app ... (Score 3, Informative) 390

IPv6 would help both enormously. Lower latency on routing means faster responses.

IP Mobility means users can move between ISPs without posts breaking, losing responses to queries, losing hangout or other chat service connections, or having to continually re-authenticate.

Autoconfiguration means both can add servers just by switching the new machines on.

Because IPv4 has no native security, it's vulnerable to a much wider range of attacks and there's nothing the vendors can do about them.

Comment Re: DNS without DHCP (Score 4, Informative) 390

Anycast tells you what services are on what IP. There are other service discovery protocols, but anycast was designed specifically for IPv6 bootstrapping. It's very simple: multicast out a request asking who runs a service, and the machine with the service unicasts back that it does.
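
A minimal sketch of that query/reply pattern over IPv6, purely for illustration: the group address ff02::1234, port 4000, the probe string and the interface name are all made-up placeholders, not any registered service.

#include <arpa/inet.h>
#include <net/if.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET6, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in6 group = {0};
    group.sin6_family = AF_INET6;
    group.sin6_port = htons(4000);                        /* assumed port */
    inet_pton(AF_INET6, "ff02::1234", &group.sin6_addr);  /* assumed group address */
    group.sin6_scope_id = if_nametoindex("eth0");         /* assumed interface name */

    /* multicast the "who runs this service?" probe */
    const char probe[] = "who-has my-service?";
    if (sendto(s, probe, sizeof probe - 1, 0,
               (struct sockaddr *)&group, sizeof group) < 0)
        perror("sendto");

    /* whoever runs the service answers unicast to our source address */
    char reply[256];
    ssize_t n = recvfrom(s, reply, sizeof reply - 1, 0, NULL, NULL);
    if (n >= 0) {
        reply[n] = '\0';
        printf("reply: %s\n", reply);
    }
    close(s);
    return 0;
}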

Dynamic DNS lets you tell the DNS server who lives at what IP.

IPv6 used to have other features: being able to move from one network to another without dropping a connection (and sometimes without dropping a packet), for example. Extension headers were actually used to add features to the protocol on the fly. Packet fragmentation was eliminated by having per-connection MTUs. All routing was hierarchical, requiring routers to examine at most three bytes. Encryption was mandated, ad hoc unless otherwise specified. Between the ISPs, the NAT-is-all-you-need lobbyists and the NSA, most of the neat stuff got ripped out.

IPv6 still does far, far more than just add addresses and simplify routing (reducing latency and reducing the memory requirements of routers), but it has been watered down repeatedly by people with an active interest in everyone else being able to do less than them.

I say roll back the protocol definition to where the neat stuff existed and let the security agencies stew.

Comment What is wrong with SCTP and DCCP? (Score 4, Interesting) 84

These are well-established, well-tested, well-designed protocols with no suspect commercial interests involved. QUIC solves nothing that hasn't already been solved.
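
As a reminder of how little ceremony SCTP needs, here's a bare one-to-one-style client sketch; it assumes a kernel with SCTP support, and the peer address and port are placeholders.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef IPPROTO_SCTP
#define IPPROTO_SCTP 132                 /* IANA protocol number for SCTP */
#endif

int main(void)
{
    /* an ordinary stream socket, just with SCTP instead of TCP */
    int s = socket(AF_INET6, SOCK_STREAM, IPPROTO_SCTP);
    if (s < 0) { perror("socket (is SCTP enabled in the kernel?)"); return 1; }

    struct sockaddr_in6 peer = {0};
    peer.sin6_family = AF_INET6;
    peer.sin6_port = htons(5000);                 /* placeholder port */
    inet_pton(AF_INET6, "::1", &peer.sin6_addr);  /* placeholder peer */

    if (connect(s, (struct sockaddr *)&peer, sizeof peer) < 0) {
        perror("connect");
    } else {
        const char msg[] = "hello over SCTP\n";
        write(s, msg, sizeof msg - 1);
    }
    close(s);
    return 0;
}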

If pseudo-open proprietary standards are de rigueur, then adopt the Scheduled Transfer Protocol and the Delay Tolerant Protocol. Hell, bring back TUBA, SKIP and any other obscure protocol nobody is likely to use. It's not like anyone cares any more.

Comment Re: Must hackers be such dicks about this? (Score 1) 270

He claimed he could hack the plane. This was bad and the FBI had every right to determine his motives, his actual capabilities and his actions.

The FBI fraudulently claimed they had evidence that a crime had already taken place. We know it's fraudulent because, if they did have evidence, the guy would be being questioned whilst swinging upside down over a snake pit. Hey, the CIA and Chicago have black sites; the FBI is unlikely to want to miss out. Anyways, they took his laptop, not him, which means they lied and attempted to pervert the course of justice. That's bad, unprofessional and far, far more dangerous. The researcher could have killed himself and everyone else on his plane. The FBI, by using corrupt practices, endangers every aircraft.

"Help Mr. Wizard!" -- Tennessee Tuxedo