Comment Interesting (Score 5, Interesting) 72

Kernel bypass plus zero copy are, of course, old hat. I worked on such stuff at Lightfleet, back when it did this stuff called work. Infiniband and the RDMA Consortium had been working on it for longer still.

What sort of performance increase can you achieve?

Well, Ethernet latencies tend to run into milliseconds for just the stack, and tens, if not hundreds, of milliseconds for anything real. Infiniband can achieve eight-microsecond latencies. SPI can get down to two microseconds.

So you can certainly achieve the sorts of latency improvements quoted. It's hard work, especially when operating purely in software, but it can actually be done. It's about bloody time, too. This stuff should have been standard in 2005, not 2015! Bloody slowpokes. Back in my day, we had to shovel our own packets! In the snow! Uphill! Both ways!
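For anyone who wants to poke at the zero-copy half from user space, here's a minimal Python sketch using socket.sendfile(), which hands the transfer to the kernel's sendfile(2) so the payload never passes through a user-space buffer. True kernel bypass needs something like DPDK or AF_XDP; the host, port, and filename below are made up for illustration.

    import socket

    def send_file_zero_copy(host: str, port: int, path: str) -> int:
        # socket.sendfile() delegates to the kernel's sendfile(2), so file
        # pages move straight from the page cache onto the wire with no
        # copy into a user-space buffer.
        with socket.create_connection((host, port)) as sock:
            with open(path, "rb") as f:
                return sock.sendfile(f)  # returns total bytes sent

    # Hypothetical usage:
    # send_file_zero_copy("192.0.2.10", 9000, "payload.bin")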

Comment Two thoughts on this. (Score 2) 307

First, they could always use blipverts.

Second, 400+ new shows works out to somewhere between a third and a half of a new show per channel per season, on average, which implies somewhere between 800 and 1,200 channels. That suggests that if there's too much new material, there are far, far too many channels. In fact, that might be the best solution: shut down nine in every ten channels. Then you get exactly the same amount of new material with less channel surfing. People will stay on a channel because they'll like the next program as well.

The British did perfectly well on four channels. In fact, they mostly did perfectly well on three channels. America is, of course, bigger. They might need fifteen to cater to all the various needs. You don't need several thousand (including local). All it does is dilute the good stuff with a lot of crap.

Comment Re: Even if practical technology was 10-20 years o (Score 1) 399

Maybe. My thought has always been that if fusion is close enough that we can get ballpark figures, we can build the necessary infrastructure and much of the housing in parallel with fusion development. Because the energy distribution will impose novel demands on the grid, it's going to require a major rethink of communications protocols, over-generation procedures, and action plans for what to do if lines are taken out.

With fusion, especially, it's expensive at best to learn after the fact. Much better to get all the learning done in the decade until working fusion.

With all that in place, the ramp time until fusion is fully online at a sensible price will be greatly reduced.

Parallelize, don't serialize. Only shredded wheat should be cerealized.

Comment That is the problem. (Score 1) 30

By trying not to say too much, the advisories end up inherently vague, so the same advisory can be read as insignificant or as a dire emergency depending on the day.

That's not useful to anyone.

Because the NSA and GCHQ have effectively eliminated all network security, thanks to their backdoors in things like Cisco devices, it should be automatically assumed that any bad guys capable of exploiting an issue already have all the information they need, and that the bad guys not capable of exploiting it aren't an issue whether informed or not.

Advisories should therefore declare everything. Absolutely everything. And it should be made clear in those advisories that this is being done because the risks created by the backdoors exceed the risks created by the additional information.

The added information will aid in debugging, clearing up the issue faster and validating that no regressions have taken place.

Comment Lots of options (Score 2) 35

Now that pure silicon-28 can be extracted with a simple linear accelerator (which, in hindsight, should have been obvious), it should be possible to use much larger dies without running into imperfection problems. That doesn't keep to Moore's Law, admittedly, but it does mean you can halve the space that double the transistors would otherwise take, since you're eliminating a lot of packaging. Over the area of a motherboard it would more than work out, especially with a move to wafer-scale integration: think how many cores fit onto a wafer using regular dies, then, instead of chopping the wafer up, throw on interconnects Transputer-style.

Graphene is troublesome, yes, but there are lots of places where you need regular conductors. If you replace the copper interconnects and the gold links to the pins, you should be able to reduce the heat generated and therefore increase the speed at which you can run the chips. Graphene might also help with 3D chip technology, since you'd be generating less heat between the layers. That would let you double the number of transistors per unit area occupied, even if not per unit area utilized.

Gallium arsenide is still an option. If you can sort pure isotopes, it may be possible to overcome many of the limitations the technology has faced so far. It has been nasty to work with, due to pollution, but we're well into the age where you can simply convert the pollution into plasma and separate out its constituents. It might be a little expensive, but cleanup will always cost more, and you can sell the results of the separation. It's much harder to sell polluted mud.

In the end, because people want compute power rather than a specific transistor count, Processor-in-Memory is always an option: move logic into RAM and avoid routing those functions through support chips, a bus, and all the layers of a CPU. DDR4 is nice and all, but main memory is still a slow part of the system, and the CPU's caches are easily flooded because code always expands to fill the space available. There is also far too much work going into managing memory. The current Linux memory manager is probably one of the best around; take that plus all the memory support chips, put it on an oversized ASIC, and give it some cache. The POWER8 processor has 96 megabytes of L3 cache; I hate odd amounts, and the memory logic won't be nearly as complex as the POWER8's, so let's round it up to 128 megabytes. Since that cache would run at close to CPU speed, exhaustion and stalling wouldn't be nearly so common.

Actually, the best thing would be for the IMF (since it's not doing anything useful with its money) to buy millions of POWER8 and MIPS64 processors and offer them free to geeks, individually, on daughterboards that can be plugged in as expansion cards. At worst, it would make life very interesting.

Comment Re:Kids don't understand sparse arrays (Score 1) 128

It all depends on what you want to do with your matrices. Various operations have different costs in different sparse matrix formats. The standard ones are COO, or coordinate format (a list of triples (i, j, val)); DOK, or dictionary-of-keys format (the hashmap you are thinking of); LIL, or list-of-lists format (one list per row, holding pairs (j, val)); and CSR/CSC, or compressed sparse row/column (an array of indices where each row starts, an array of column indices, and an array of values).

COO and DOK are great for changing the sparsity structure; LIL is very useful if you have a lot of row-wise (or column-wise) operations or need to manipulate rows regularly; CSR is great for matrix operations such as multiplication, addition, etc. You use whatever suits your use case, or convert between formats (relatively cheap) as needed.
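A quick sketch of that round trip in Python with SciPy, which implements all four formats (assuming scipy and numpy are available; the 3x3 matrix is just a made-up example):

    import numpy as np
    from scipy import sparse

    # COO: three parallel arrays (row, col, value); easy to build.
    rows = np.array([0, 1, 2])
    cols = np.array([2, 0, 1])
    vals = np.array([3.0, 4.0, 5.0])
    A = sparse.coo_matrix((vals, (rows, cols)), shape=(3, 3))

    A_csr = A.tocsr()   # CSR: fast arithmetic, e.g. A_csr @ A_csr
    A_lil = A.tolil()   # LIL: cheap row manipulation
    A_lil[0, 0] = 1.0   # fine in LIL; CSR warns when sparsity changes
    A_dok = A.todok()   # DOK: hashmap keyed by (i, j)
    print(A_dok[1, 0])  # 4.0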

Comment Re:a low-IQ child's IQ can be raised in some cases (Score 2) 185

I've sat psychologist-administered IQ tests a year apart and had my score differ by 10 points. I've been told that this is, in fact, perfectly normal and well within the accuracy expected of IQ tests by psychologists who take them seriously. I wouldn't worry about IQ scores changing (they may well change, but measurement error is just as likely). IQ is a very imperfect measure to begin with, and our ability to measure it, even under the best of conditions, is extremely poor. Take most IQ studies with a grain of salt.

Comment Re:The Road Warrior (Score 1) 776

...not a sequel, but a cash-in remake.
It's not a Mad Max movie. The main character isn't Max, the atmosphere isn't Mad Max's, it just happened to have spiked cars chasing plated cars in the wasteland.

Indeed. What they should have done was get the writer/director of the original film, who I gather had been trying to get a sequel made for over a decade, to come and write and direct the new one. Clearly whoever they got to write this didn't really understand Max's character at all.</sarcasm>

Comment Re: The answer has been clear (Score 1) 390

Multiple IPs was one solution, but the other was much simpler.

The real address of the computer was its MAC; the prefix simply said how to get there. In the event of a failover, the client's computer would be notified that the old prefix was now transitory and that a new prefix should be used for new connections.

At the last common router, the router would simply swap the transitory prefix for the new prefix. The packet would then go by the new path.

The server would multi-home for all prefixes it was assigned.

At both ends, the stack would handle all the detail; the applications never needed to know a thing. That's why nobody cared much about remembering IP addresses: they weren't important except to the stack. You remembered the name, and the address took care of itself.
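As a rough illustration of that split, here's the EUI-64 convention IPv6 actually standardized, assuming that's the MAC-derived scheme in question: the 64-bit interface identifier comes from the MAC and stays fixed, so swapping the routing prefix never changes the host's identity. (The MAC and prefixes below are made up, and the failover/rewriting machinery described above isn't shown; this is only the address split.)

    import ipaddress

    def eui64_interface_id(mac: str) -> int:
        """Derive the 64-bit interface identifier from a MAC address."""
        octets = bytearray(int(b, 16) for b in mac.split(":"))
        octets[0] ^= 0x02  # flip the universal/local bit
        eui = octets[:3] + b"\xff\xfe" + octets[3:]  # insert ff:fe
        return int.from_bytes(eui, "big")

    def address_for(prefix: str, mac: str) -> ipaddress.IPv6Address:
        # The routing prefix supplies the top 64 bits; the MAC-derived
        # identifier supplies the bottom 64.
        return ipaddress.IPv6Network(prefix)[eui64_interface_id(mac)]

    mac = "52:54:00:12:34:56"
    old = address_for("2001:db8:aaaa::/64", mac)
    new = address_for("2001:db8:bbbb::/64", mac)
    # Same host identifier, different route:
    assert int(old) & ((1 << 64) - 1) == int(new) & ((1 << 64) - 1)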

One of the benefits was that this worked when switching ISPs. If you changed your provider, you could do so with no loss of connections and no loss of packets.

But the same was true of clients, as well. You could start a telnet session at home, move to a cyber cafe and finish up in a pub, all without breaking the connection, even if all three locations had different ISPs.

This would be great for students or staff at a university. And for the university. You don't need the network to be flat, you can remain on your Internet video session as your laptop leaps from access point to access point.

Comment I really hate reports like this (Score 3, Interesting) 622

1) Combine two things that are sort of similar but not really - e.g. EVs and hybrids, or tablets and e-ink e-readers
2) Make a statistical claim about the combined group - "People are leaving EVs and hybrids", "Tablets and e-readers bad for sleep/eyes"
3) Forget to mention one of the two in the headline - "People dump EVs", "E-readers bad for sleep/eyes"

By combining the two, this report doesn't really tell us anything useful. I'd love to know the different rates of people abandoning EV or hybrids, as I think they are two very different propositions.

Hybrids, at the end of the day, are simply a different way of building efficient petrol/diesel-powered cars. From what I've heard, that efficiency has been a lot lower in real life, with mileage claims for things like the Prius not really living up to the hype. With ever more efficient petrol engines on the market, and gas prices so low, the efficiency improvements have to be pretty significant to make a big difference and to offset the higher purchase price of many hybrids.

EVs, on the other hand, are a totally different beast, and the reasons people might give up on them are different. Are people buying EVs and then finding range is more of a problem than they thought? Did they have trouble finding charging points? Was overnight, at-home charging not good enough for them? Etc., etc.

In addition, this report talks about the number of people trading in their EVs/hybrids for something else. But that doesn't really tell us how much people like EVs and hybrids, as it only counts the people who are switching; it provides no analysis of how many people are keeping their EVs for longer.

What's most annoying is that there are genuinely interesting questions to ask about the EV and hybrid market, but this data isn't really answering any of them well.
