Comment That is the problem. (Score 1) 30

By trying not to say too much, the advisories end up so vague that they can be read as either insignificant or a dire emergency, depending on the day.

That's not useful to anyone.

Because the NSA and GCHQ have effectively eliminated all network security, thanks to their backdoors in things like Cisco devices, it should be assumed that any bad guys capable of exploiting the issue already have all the information they need, and that the bad guys not capable of exploiting it aren't an issue whether informed or not.

Advisories should therefore declare everything. Absolutely everything. And it should be made clear in those advisories that this is being done because the risks created by the backdoors exceed the risks created by the additional information.

The added information will aid in debugging, clearing up the issue faster and validating that no regressions have taken place.

Comment Lots of options (Score 2) 35

Now that they can extract pure silicon-28 with a simple linear accelerator (which should have been obvious), it should be possible to use much larger dies without running into imperfection problems. That doesn't keep to Moore's Law, admittedly, but it does mean you can halve the space that double the transistors would take, since you're eliminating a lot of packaging. Over the space of the motherboard, it would more than work out, especially if they moved to wafer-scale integration. Think how many cores you could put onto a single wafer using regular dies: instead of chopping the wafer up, you throw on interconnects, Transputer-style.

Graphene is troublesome, yes, but there are lots of places where you need regular conductors. If you replace the copper interconnects and the gold links to the pins, you should be able to reduce the heat generated and therefore increase the speed at which you can run the chips. Graphene might also help with 3D chip technology, since you'd be generating less heat between the layers. That would let you double the number of transistors per unit area occupied, even if not per unit area utilized.

Gallium arsenide is still an option. If you can sort pure isotopes, then it may be possible to overcome many of the limitations that have held the technology back so far. It has been nasty to work with, due to pollution, but we're well into the age where you can just convert the pollution into plasma and, again, separate out what's in it. It might be a little expensive, but the cost of cleanup will always be more, and you can sell the results of the separation. It's much harder to sell polluted mud.

In the end, because people want compute power rather than a specific transistor count, processor-in-memory is always an option: move logic into RAM and avoid having to route those operations through support chips, a bus and all the layers of a CPU just to get them carried out. DDR4 is nice and all, but main memory is still a slow part of the system, and the caches on the CPU are easily flooded because code always expands to fill the space available. There is also far too much work going on in managing memory. The current Linux memory manager is probably one of the best around; take it, along with all the memory support chips, put it on an oversized ASIC and give it some cache. The POWER8 processor has 96 megabytes of L3 cache. I hate odd amounts, and the memory logic won't be nearly as complex as the POWER8's, so let's increase it to 128 megabytes. Since the cache will run at close to the speed of the CPU, exhaustion and stalling won't be nearly so common.

Actually, the best thing would be for the IMF (since it's not doing anything useful with its money) to buy millions of POWER8 and MIPS64 processors and offer them for free to individual geeks on daughterboards that can be plugged in as expansion cards. At worst, it would make life very interesting.

Comment Re: The answer has been clear (Score 1) 390

Multiple IPs was one solution, but the other was much simpler.

The real address of the computer was its MAC; the prefix simply said how to get there. In the event of a failover, the client's computer would be notified that the old prefix was now transitory and that a new prefix was to be used for new connections.

At the last common router, the router would simply swap the transitory prefix for the new prefix. The packet would then go by the new path.

The server would multi-home for all prefixes it was assigned.

At both ends, the stack would handle all the detail; the applications never needed to know a thing. That's why nobody cared much about remembering IP addresses: they weren't important except to the stack. You remembered the name, and the address took care of itself.
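A minimal sketch of that split, assuming the standard EUI-64 construction from RFC 4291: the interface identifier derived from the MAC never changes, while the 64-bit routing prefix can be swapped (for example, by the last common router) without touching the node's identity. The MAC and the prefixes below are made-up examples.

/* Prefix says how to get there; the MAC-derived interface ID says who you are. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Build a 64-bit interface ID from a 48-bit MAC (EUI-64, RFC 4291). */
static void mac_to_iid(const uint8_t mac[6], uint8_t iid[8])
{
    iid[0] = mac[0] ^ 0x02;      /* flip the universal/local bit */
    iid[1] = mac[1];
    iid[2] = mac[2];
    iid[3] = 0xFF;               /* insert FF:FE in the middle   */
    iid[4] = 0xFE;
    iid[5] = mac[3];
    iid[6] = mac[4];
    iid[7] = mac[5];
}

/* Glue a 64-bit routing prefix onto the fixed interface ID. */
static void make_addr(const uint8_t prefix[8], const uint8_t iid[8], uint8_t addr[16])
{
    memcpy(addr, prefix, 8);
    memcpy(addr + 8, iid, 8);
}

static void print_addr(const char *label, const uint8_t a[16])
{
    printf("%s ", label);
    for (int i = 0; i < 16; i += 2)
        printf("%02x%02x%s", a[i], a[i + 1], i < 14 ? ":" : "\n");
}

int main(void)
{
    const uint8_t mac[6]     = {0x00, 0x1A, 0x2B, 0x3C, 0x4D, 0x5E};       /* example MAC */
    const uint8_t old_pfx[8] = {0x20, 0x01, 0x0d, 0xb8, 0, 0, 0, 1};       /* 2001:db8:0:1::/64 */
    const uint8_t new_pfx[8] = {0x20, 0x01, 0x0d, 0xb8, 0, 0, 0, 2};       /* after failover    */

    uint8_t iid[8], addr[16];
    mac_to_iid(mac, iid);

    make_addr(old_pfx, iid, addr);
    print_addr("before failover:", addr);

    /* The "address of the computer" (the IID) is untouched; only the
       route-to-it part changes. */
    make_addr(new_pfx, iid, addr);
    print_addr("after failover: ", addr);
    return 0;
}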

One of the benefits was that this worked when switching ISPs. If you changed your provider, you could do so with no loss of connections and no loss of packets.

But the same was true of clients, as well. You could start a telnet session at home, move to a cyber cafe and finish up in a pub, all without breaking the connection, even if all three locations had different ISPs.

This would be great for students or staff at a university, and for the university itself. You don't need the network to be flat; you can stay on your Internet video session as your laptop leaps from access point to access point.

Comment Re: How about basic security? (Score 5, Informative) 390

IPSec is perfectly usable.

Telebit demonstrated transparent routing (i.e. total invisibility of internal networks without loss of connectivity) in 1996.

IPv6 has a vastly simpler header, which means a vastly simpler stack. This means fewer defects, greater robustness and easier testing. It also means a much smaller stack, lower latency and fewer corner cases.
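For scale, here is an illustrative C layout of the fixed IPv6 header from RFC 8200: eight fields in 40 bytes, with no options, no header checksum and no in-router fragmentation. A real stack would also care about byte order, bit-field packing and alignment, so treat this as a picture rather than production code.

#include <stdint.h>

struct ipv6_fixed_header {
    uint32_t ver_tc_flow;     /* 4-bit version, 8-bit traffic class, 20-bit flow label (bit-packed) */
    uint16_t payload_length;  /* length of everything after this header */
    uint8_t  next_header;     /* e.g. TCP, UDP, or an extension header  */
    uint8_t  hop_limit;       /* replaces the IPv4 TTL                  */
    uint8_t  source[16];
    uint8_t  destination[16];
};  /* 40 bytes, always; compare the variable-length IPv4 header with its
       options, checksum and fragmentation fields */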

IPv6 is secure by design. IPv4 isn't secure and there is nothing you can design to make it so.

Comment Re: Waiting for the killer app ... (Score 3, Informative) 390

IPv6 would help both enormously. Lower latency on routing means faster responses.

IP Mobility means users can move between ISPs without posts breaking, losing responses to queries, losing hangout or other chat service connections, or having to continually re-authenticate.

Autoconfiguration means both can add servers just by switching the new machines on.

Because IPv4 has no native security, it's vulnerable to a much wider range of attacks and there's nothing the vendors can do about them.

Comment Re: DNS without DHCP (Score 4, Informative) 390

Anycast tells you what services are on what IP. There are other service discovery protocols, but anycast was designed specifically for IPv6 bootstrapping. It's very simple: multicast out a request asking who runs a service, and the machine with the service unicasts back that it does.
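A minimal sketch of that "multicast a question, get a unicast answer" pattern in C; this is not the wire format of any real IPv6 bootstrap protocol. The group address ff02::1 (all nodes on the link), the port, the interface name and the payload are all placeholders chosen for illustration, and the receive call simply blocks until something answers.

#include <arpa/inet.h>
#include <net/if.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET6, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in6 group;
    memset(&group, 0, sizeof group);
    group.sin6_family = AF_INET6;
    group.sin6_port   = htons(9999);                   /* placeholder port */
    inet_pton(AF_INET6, "ff02::1", &group.sin6_addr);  /* all-nodes multicast */
    group.sin6_scope_id = if_nametoindex("eth0");      /* link-local needs a scope; placeholder iface */

    const char *query = "who-has service:time?";       /* placeholder payload */
    sendto(s, query, strlen(query), 0, (struct sockaddr *)&group, sizeof group);

    /* Whoever runs the service answers with an ordinary unicast datagram. */
    char reply[512];
    struct sockaddr_in6 from;
    socklen_t fromlen = sizeof from;
    ssize_t n = recvfrom(s, reply, sizeof reply - 1, 0, (struct sockaddr *)&from, &fromlen);
    if (n >= 0) {
        reply[n] = '\0';
        char who[INET6_ADDRSTRLEN];
        inet_ntop(AF_INET6, &from.sin6_addr, who, sizeof who);
        printf("%s answered: %s\n", who, reply);
    }
    close(s);
    return 0;
}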

Dynamic DNS lets you tell the DNS server who lives at what IP.

IPv6 used to have other features - being able to move from one network to another without dropping a connection (and sometimes without dropping a packet), for example. Extended headers were actually used to add features to the protocol on-the-fly. Packet fragmentation was eliminated by having per-connection MTUs. All routing was hierarchical, requiring routers to examine at most three bytes. Encryption was mandated, ad-hoc unless otherwise specified. Between the ISPs, the NAT-is-all-you-need lobbyists and the NSA, most of the neat stuff got ripped out.

IPv6 still does far, far more than just add addresses and simplify routing (reducing latency and reducing the memory requirements of routers), but it has been watered down repeatedly by people with an active interest in everyone else being able to do less than them.

I say roll back the protocol definition to where the neat stuff existed and let the security agencies stew.

Comment What is wrong with SCTP and DCCP? (Score 4, Interesting) 84

These are well-established, well-tested, well-designed protocols with no suspect commercial interests involved. QUIC solves nothing that hasn't already been solved.

If pseudo-open proprietary standards are de rigueur, then adopt the Scheduled Transfer Protocol and Delay Tolerant Protocol. Hell, bring back TUBA, SKIP and any other obscure protocol nobody is likely to use. It's not like anyone cares any more.

Comment Re: Must hackers be such dicks about this? (Score 1) 270

He claimed he could hack the plane. This was bad and the FBI had every right to determine his motives, his actual capabilities and his actions.

The FBI fraudulently claimed they had evidence a crime had already taken place. We know it's fraudulent because, if they did have evidence, the guy would be being questioned whilst swinging upside down over a snake pit. Hey, the CIA and Chicago have black sites; the FBI is unlikely to want to miss out. Anyways, they took his laptop, not him, which means they lied and attempted to pervert the course of justice. That's bad, unprofessional and far, far more dangerous. The researcher could have killed himself and everyone else on his plane. The FBI, by using corrupt practices, endangers every aircraft.

Comment Re: Must hackers be such dicks about this? (Score 1) 270

Did the FBI have the evidence that he had actually hacked a previous leg of the flight, or did they not?

If they did not, and they knowingly programmed a suspect with false information, they are guilty of attempted witness tampering through false memory syndrome. There's a lot of work on this: you can program anyone to believe they've done anything, even if the evidence that nothing was done at all is right in front of them. Strong minds make no difference; in fact, they're apparently easier to break.

Falsifying the record is self-evidently failure of restraint.

I have little sympathy for the researcher; this kind of response has been commonplace since 2001, and it wasn't exactly infrequent before then. Slow learners have no business doing science or engineering.

Nor have I any sympathy for the airlines. It isn't hard to build a secure network where the security augments function rather than simply taking up overhead. The same is true of insecure car networks. The manufacturers of computerized vehicles should be given a sensible deadline (say, next week Tuesday) to have fully tested and certified patches installed on all vulnerable vehicles.

Failure should result in fines of ((10 x vehicle worth) + (average number of occupants x average fine for unlawful death)) x number of vehicles in service, compounding at 15% annual interest for every year the manufacturer delays.
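Just to make that penalty concrete, a throwaway calculation in C; every input below is a made-up placeholder, not a figure taken from anywhere.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double vehicle_worth       = 30000.0;    /* placeholder */
    double avg_occupants       = 1.5;        /* placeholder */
    double unlawful_death_fine = 1000000.0;  /* placeholder */
    double vehicles_in_service = 500000.0;   /* placeholder */
    int    years_delayed       = 2;          /* placeholder */

    double per_vehicle = 10.0 * vehicle_worth
                       + avg_occupants * unlawful_death_fine;
    double fine = per_vehicle * vehicles_in_service
                * pow(1.15, years_delayed);  /* 15% compounding per year of delay */

    printf("total fine: %.0f\n", fine);
    return 0;
}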

Comment Re: In summary (Score 1) 57

Ada updates would be good; bringing in the SPARK 2014 and early 2015 extensions would have been nice. (SPARK is a mathematically provable dialect of Ada. Well, mostly: apparently you can't prove floating-point operations yet because nobody knows how. Personally, I think it's as easy as falling off a log table.)

There are also provable dialects of C and it would be nice if GCC had a flag to constrain to that subset. Using multiple compilers is a good way of producing incompatible binaries and nasty interactions. GCC has no business having limitations. :)

With work on KROC at a standstill, we have a reference compiler that talks Occam Pi. Occam is a very nice language to work with but working through archaic Inmos blobs is tiresome and limiting.

Code quality in GCC and glibc is still poor, the stability of internal interfaces is derisory (these should be generated from abstract descriptions, ensuring the flexibility GCC wants and the usability interface developers want), and the egos of the developers should be taken out and shot. However, it's still one of the best environments out there. The ones that are better at specific things usually carry three- to four-digit price tags. I'd write in hand-turned assembly before paying for unquantifiable products that I won't even own.

Comment Re: In summary (Score 1) 57

Different animal. Cilk has specific constructs for parallelising loops and the like. It looks conceptually similar to Fortran's ability to turn anything that can be expressed as a vector operation, rather than a sequential one, into a vector instruction.

OpenMP parallelizes at the block level rather than the instruction level. By all accounts (notably comments on the ATLAS mailing list), the performance is terrible.
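A minimal side-by-side sketch of the two styles; the loop and array sizes are arbitrary examples, and the Cilk variant is shown only as a comment since it needs a Cilk-aware compiler (Cilk Plus or OpenCilk). Build the OpenMP version with something like cc -fopenmp.

#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

    /* OpenMP: the pragma parallelises the structured block (here, a loop)
       that immediately follows it. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    /* Cilk's equivalent is a keyword rather than a pragma:
     *
     *     cilk_for (int i = 0; i < N; i++)
     *         c[i] = a[i] + b[i];
     */

    printf("c[N-1] = %f\n", c[N - 1]);
    return 0;
}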

Submission + - Cancer researcher vanishes with tens of millions of dollars (goerie.com)

jd writes: Steven Curley, MD, who ran the Akesogenx corporation (and may indeed have been its sole employee after the dismissal of Robert Zavala), had been working on a radio-frequency cure for cancer with an engineer by the name of John Kanzius.

Kanzius died; Steven Curley set up the aforementioned parallel company, which bought all the rights and patents to the technology, before shuttering the John Kanzius Foundation. So far, so very uncool.

Last year, just as the company started approaching the FDA about clinical trials, Dr Curley got blasted with lawsuits accusing him of loading his soon-to-be ex-wife's computer with spyware.

Two weeks ago, there was to be a major announcement "within two weeks". Shortly after, the company dropped off the Internet and Dr Curley dropped off the face of the planet.

Robert Zavala is the only name mentioned that could be a fit for the company's DNS record owner. The company does not appear to have any employees other than Dr Curley, making it very unlikely he could have ever run a complex engineering project well enough to get to trial stage. His wife doubtless has a few scores to settle. Donors, some providing several millions, were getting frustrated — and as we know from McAfee, not all in IT are terribly sane. There are many people who might want the money and have no confidence any results were forthcoming.

So, what precisely was the device? Simple enough. Every molecule has an absorption line; it can't absorb energy on any other frequency. The effect is widely exploited in physics, chemistry and astronomy, and people have looked into various ways of using it in medicine for a long time.

The idea was to inject patients with nanoparticles on an absorption line well clear of anything the human body cares about. These particles would be preferentially picked up by cancer cells because they're greedy. Once that's done, you blast the body at the specified frequency. The cancer cells are charbroiled and healthy cells remain intact.

It's an idea that's so obvious I was posting about it here and elsewhere in 1998. The difference is, they had a prototype that seemed to work.

But now there is nothing but the sound of Silence, a suspect list of thousands and a list of things they could be suspected of stretching off to infinity. Most likely, there's a doctor sipping champagne on some island with no extradition treaty. Or a future next-door neighbour to Hans Reiser. Regardless, this will set back cancer research. Money is limited and so is trust. It was, in effect, crowdfunded, and that model, too, will feel a blow if theft was involved.

Or it could just be the usual absent-minded scientist discovering he hasn't the skills or awesomeness needed, but has got too much pride to admit it, as has happened in so many science fraud cases.

Comment Re: stop the pseudo-scientific bullshit (Score 1) 88

The Great Extinction, caused by Siberia becoming one gigantic lava bed (probably after an asteroid strike), was a bit further back in time. Geologically, Siberia is old. You might be confusing the vestiges of Ice Age desiccation (which was 10,000 years ago and involves the organics on the surface) with the geology (i.e. the rocks).

Regardless of how the craters are forming, though, the fact remains that an awful lot of greenhouse gas is being pumped into the air, an awful lot of information on early civilization is being blasted out of existence, and a lot of locals are finding that the land has suddenly become deadly.
