Comment: Re: The answer has been clear (Score 1) 390

by jd (#49575883) Attached to: Why the Journey To IPv6 Is Still the Road Less Traveled

Multiple IPs was one solution, but the other was much simpler.

The real address of the computer was its MAC, the prefix simply said how to get there. In the event of a failover, the client's computer would be notified the old prefix was now transitory and a new prefix was to be used for new connections.

At the last common router, the router would simply swap the transitory prefix for the new prefix. The packet would then go by the new path.

The server would multi-home for all prefixes it was assigned.

At both ends, the stack would handle all the detail, the applications never needed to know a thing. That's why nobody cared much about remembering IP addresses, because those weren't important except to the stack. You remembered the name and the address took care of itself.
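The "address = routing prefix + MAC" idea above can be sketched in a few lines. This is my own illustrative code (the function names and example prefix are assumptions), following the modified EUI-64 construction that SLAAC actually uses to build an interface identifier from a MAC:

```python
def eui64_interface_id(mac: str) -> str:
    """Derive a modified EUI-64 interface identifier from a MAC address."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert ff:fe in the middle
    groups = [f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return ":".join(groups)

def address_from_prefix(prefix: str, mac: str) -> str:
    """Join a 64-bit routing prefix with the MAC-derived interface ID."""
    return prefix.rstrip(":") + ":" + eui64_interface_id(mac)

# e.g. address_from_prefix("2001:db8:0:1", "00:1a:2b:3c:4d:5e")
#      -> "2001:db8:0:1:21a:2bff:fe3c:4d5e"
```

With the interface half of the address fixed by the hardware, renumbering under a new prefix changes only the routing half, which is exactly what makes the prefix-swap trick workable without the applications noticing.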

One of the benefits was that this worked when switching ISPs. If you changed your provider, you could do so with no loss of connections and no loss of packets.

But the same was true of clients, as well. You could start a telnet session at home, move to a cyber cafe and finish up in a pub, all without breaking the connection, even if all three locations had different ISPs.

This would be great for students or staff at a university. And for the university. You don't need the network to be flat, you can remain on your Internet video session as your laptop leaps from access point to access point.

Comment: Re: How about basic security? (Score 5, Informative) 390

by jd (#49516499) Attached to: Why the Journey To IPv6 Is Still the Road Less Traveled

IPSec is perfectly usable.

Telebit demonstrated transparent routing (i.e. total invisibility of internal networks without loss of connectivity) in 1996.

IPv6 has a vastly simpler header, which means a vastly simpler stack. This means fewer defects, greater robustness and easier testing. It also means a much smaller stack, lower latency and fewer corner cases.
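The "vastly simpler header" point is easy to demonstrate: the mandatory IPv6 header is a fixed 40 bytes and parses in one step, with no options, header checksum or in-header fragmentation fields to special-case. A hedged sketch (the function and field names are mine):

```python
import struct
import ipaddress

def parse_ipv6_header(packet: bytes) -> dict:
    """Parse the fixed 40-byte IPv6 header: version/traffic class/flow label,
    payload length, next header, hop limit, then two 16-byte addresses."""
    if len(packet) < 40:
        raise ValueError("truncated IPv6 header")
    vtf, payload_len, next_header, hop_limit = struct.unpack("!IHBB", packet[:8])
    return {
        "version": vtf >> 28,
        "traffic_class": (vtf >> 20) & 0xFF,
        "flow_label": vtf & 0xFFFFF,
        "payload_length": payload_len,
        "next_header": next_header,
        "hop_limit": hop_limit,
        "src": str(ipaddress.IPv6Address(packet[8:24])),
        "dst": str(ipaddress.IPv6Address(packet[24:40])),
    }
```

Compare that to an IPv4 parser, which must handle a variable-length options field, validate a checksum and track fragmentation state before it can do anything useful.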

IPv6 is secure by design. IPv4 isn't secure and there is nothing you can design to make it so.

Comment: Re: Waiting for the killer app ... (Score 3, Informative) 390

by jd (#49516451) Attached to: Why the Journey To IPv6 Is Still the Road Less Traveled

IPv6 would help both enormously. Lower latency on routing means faster responses.

IP Mobility means users can move between ISPs without posts breaking, losing responses to queries, losing hangout or other chat service connections, or having to continually re-authenticate.

Autoconfiguration means both can add servers just by switching the new machines on.

Because IPv4 has no native security, it's vulnerable to a much wider range of attacks and there's nothing the vendors can do about them.

Comment: Re: DNS without DHCP (Score 4, Informative) 390

by jd (#49516387) Attached to: Why the Journey To IPv6 Is Still the Road Less Traveled

Anycast tells you what services are on what IP. There are other service-discovery protocols, but anycast was designed specifically for IPv6 bootstrapping. It's very simple: solicit, via the well-known group address, whoever runs a service, and the machine with the service unicasts back that it does.
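That solicit-and-answer shape is easy to sketch. Real IPv6 discovery uses well-known multicast groups; the sketch below (all names mine) substitutes plain unicast UDP on loopback purely to show the request/reply pattern:

```python
import socket
import threading

SOLICIT = b"WHO-HAS ntp"
ANSWER = b"I-HAVE ntp"

def responder(ready: threading.Event, ports: list) -> None:
    """Stand-in for the machine running the service: answer solicits
    with a unicast reply to whoever asked."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", 0))
    sock.settimeout(5.0)
    ports.append(sock.getsockname()[1])
    ready.set()
    data, addr = sock.recvfrom(1024)
    if data == SOLICIT:
        sock.sendto(ANSWER, addr)  # unicast straight back to the asker
    sock.close()

def discover(port: int, timeout: float = 2.0) -> bytes:
    """Stand-in for the multicast solicit: ask who runs the service
    and wait for the unicast answer."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(SOLICIT, ("127.0.0.1", port))
    data, _ = sock.recvfrom(1024)
    sock.close()
    return data
```

In the real protocol the solicit goes to a group address rather than a known unicast port, but the two-message bootstrap is the same.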

Dynamic DNS lets you tell the DNS server who lives at what IP.

IPv6 used to have other features: being able to move from one network to another without dropping a connection (and sometimes without dropping a packet), for example. Extension headers were actually used to add features to the protocol on the fly. Packet fragmentation was eliminated by having per-connection MTUs. All routing was hierarchical, requiring routers to examine at most three bytes. Encryption was mandated, ad hoc unless otherwise specified. Between the ISPs, the NAT-is-all-you-need lobbyists and the NSA, most of the neat stuff got ripped out.
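The hierarchical-routing claim above (routers examining at most three leading bytes) can be illustrated with a toy longest-prefix match. The dict-based table is my own sketch, not how any real router stores routes:

```python
def next_hop(routes: dict, address: bytes) -> str:
    """Longest-prefix match over at most the first three bytes of the
    destination address; fall through to a default route."""
    for length in (3, 2, 1):
        hop = routes.get(address[:length])
        if hop is not None:
            return hop
    return "default"

# A toy table: one-byte entries for big ISP blocks, deeper entries for
# customer allocations carved out of them.
routes = {
    b"\x20\x01": "ISP-A",
    b"\x20\x01\x0d": "ISP-A-customer",
    b"\x26": "ISP-B",
}
```

Because allocations nest strictly by prefix, the router never needs to look deeper than the bytes it delegates on; everything past that is someone else's problem.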

IPv6 still does far, far more than just add addresses and simplify routing (reducing latency and reducing the memory requirements of routers), but it has been watered down repeatedly by people with an active interest in everyone else being able to do less than them.

I say roll back the protocol definition to where the neat stuff existed and let the security agencies stew.

Comment: What is wrong with SCTP and DCCP? (Score 4, Interesting) 84

by jd (#49503031) Attached to: Google To Propose QUIC As IETF Standard

These are well-established, well-tested, well-designed protocols with no suspect commercial interests involved. QUIC solves nothing that hasn't already been solved.

If pseudo-open proprietary standards are de rigueur, then adopt the Scheduled Transfer Protocol and Delay Tolerant Protocol. Hell, bring back TUBA, SKIP and any other obscure protocol nobody is likely to use. It's not like anyone cares any more.

Comment: Re: Must hackers be such dicks about this? (Score 1) 270

by jd (#49500235) Attached to: FBI Accuses Researcher of Hacking Plane, Seizes Equipment

He claimed he could hack the plane. This was bad and the FBI had every right to determine his motives, his actual capabilities and his actions.

The FBI fraudulently claimed they had evidence a crime had already taken place. We know it's fraudulent because if they did have evidence, the guy would be questioned whilst swinging upside down over a snake pit. Hey, the CIA and Chicago have black sites; the FBI is unlikely to want to miss out. Anyway, they took his laptop, not him, which means they lied and attempted to pervert the course of justice. That's bad, unprofessional and far, far more dangerous. The researcher could have killed himself and everyone else on his plane. The FBI, by using corrupt practices, endangers every aircraft.

Comment: Re: Must hackers be such dicks about this? (Score 1) 270

by jd (#49500221) Attached to: FBI Accuses Researcher of Hacking Plane, Seizes Equipment

Did the FBI have the evidence that he had actually hacked a previous leg of the flight, or did they not?

If they did not, and they knowingly programmed a suspect with false information, they are guilty of attempted witness tampering through false-memory syndrome. There is a lot of research on this: you can program anyone to believe they've done anything, even when the evidence right in front of them shows nothing was done at all. Strong minds make no difference; in fact, they're apparently easier to break.

Falsifying the record is self-evidently failure of restraint.

I have little sympathy for the researcher; this kind of response has been commonplace since 2001 (and wasn't exactly infrequent before then), and slow learners have no business doing science or engineering.

Nor have I any sympathy for the airlines. It isn't hard to build a secure network where the security augments function rather than simply taking up overhead. The same is true of insecure car networks. The manufacturers of computerized vehicles should be given a sensible deadline (say, next week Tuesday) to have fully tested and certified patches installed on all vulnerable vehicles.

Failure should result in fines of ((10 x vehicle worth) + (average number of occupants x average fine for unlawful death)) x number of vehicles in service. At 15% annual rate of interest for every year the manufacturer delays.

Comment: Re: In summary (Score 1) 57

by jd (#49453131) Attached to: GCC 5.0 To Support OpenMP 4.0, Intel Cilk Plus, C++14

Ada updates would be good; bringing in the SPARK 2014 and early-2015 extensions would have been nice. (SPARK is a mathematically provable dialect of Ada. Well, mostly. Apparently, you can't prove floating-point operations yet because nobody knows how. Personally, I think it's as easy as falling off a log table.)

There are also provable dialects of C and it would be nice if GCC had a flag to constrain to that subset. Using multiple compilers is a good way of producing incompatible binaries and nasty interactions. GCC has no business having limitations. :)

With work on KRoC at a standstill, we have a reference compiler that talks occam-pi. Occam is a very nice language to work with, but working through archaic Inmos blobs is tiresome and limiting.

Code quality in GCC and GlibC is still poor, the stability of internal interfaces is derisory (these should be generated from abstract descriptions, ensuring the flexibility GCC wants and the usability interface developers want) and the egos of the developers should be taken out and shot. However, it's still one of the best environments out there. Those that are better at specific things are usually carrying three to four digit price tags. I'd write in hand-turned assembly before paying for unquantifiable products that I won't even own.

Comment: Re: In summary (Score 1) 57

by jd (#49453043) Attached to: GCC 5.0 To Support OpenMP 4.0, Intel Cilk Plus, C++14

Different animal. Cilk has specific constructs for parallelising loops and the like. It looks like a similar concept to Fortran's ability to turn anything expressible as a vector operation, rather than a sequential one, into vector instructions.

OpenMP parallelizes at the block level rather than the instruction level. By all accounts (notably comments on the ATLAS mailing list), the performance is terrible.

Submission: Cancer researcher vanishes with tens of millions of dollars

Submitted by jd
jd writes: Steven Curley, MD, who ran the Akesogenx corporation (and may indeed have been the sole employee after the dismissal of Robert Zavala) had been working on a radio-frequency cure for cancer with an engineer by the name of John Kanzius.

Kanzius died; Steven Curley set up the aforementioned parallel company, which bought all the rights and patents to the technology before shuttering the John Kanzius Foundation. So far, so very uncool.

Last year, just as the company started approaching the FDA about clinical trials, Dr Curley got blasted with lawsuits accusing him of loading his shortly-to-be ex-wife's computer with spyware.

Two weeks ago, there was to be a major announcement "within two weeks". Shortly after, the company dropped off the Internet and Dr Curley dropped off the face of the planet.

Robert Zavala is the only name mentioned that could be a fit for the company's DNS record owner. The company does not appear to have any employees other than Dr Curley, making it very unlikely he could have ever run a complex engineering project well enough to get to trial stage. His wife doubtless has a few scores to settle. Donors, some providing several millions, were getting frustrated — and as we know from McAfee, not all in IT are terribly sane. There are many people who might want the money and have no confidence any results were forthcoming.

So, what precisely was the device? Simple enough. Every molecule has an absorption line: a frequency at which it soaks up energy far more readily than at any other. The technique is widely exploited in physics, chemistry and astronomy, and people have looked into various ways of using it in medicine for a long time.

The idea was to inject patients with nanoparticles on an absorption line well clear of anything the human body cares about. These particles would be preferentially picked up by cancer cells because they're greedy. Once that's done, you blast the body at the specified frequency. The cancer cells are charbroiled and healthy cells remain intact.
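The selectivity argument can be illustrated with a Lorentzian line shape, the standard model for an absorption line. The numbers below are purely illustrative assumptions of mine (13.56 MHz is a common ISM-band RF frequency, not a claim about the actual device):

```python
def lorentzian_absorption(f: float, f0: float, gamma: float) -> float:
    """Relative absorption at frequency f for an absorption line centred
    at f0 with half-width gamma; the on-resonance peak is normalised to 1."""
    return gamma ** 2 / ((f - f0) ** 2 + gamma ** 2)

# On resonance the nanoparticles absorb fully; ten linewidths away the
# absorption has already dropped to about 1/101 of the peak.
```

That steep fall-off is the whole trick: tissue without the nanoparticles sits far off the line and stays cool while the loaded cells heat.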

It's an idea that's so obvious I was posting about it here and elsewhere in 1998. The difference is, they had a prototype that seemed to work.

But now there is nothing but the sound of silence, a suspect list of thousands and a list of things they could be suspected of stretching off to infinity. Most likely, there's a doctor sipping champagne on some island with no extradition treaty. Or a future next-door neighbour to Hans Reiser. Regardless, this will set back cancer research. Money is limited and so is trust. It was, in effect, crowdsource-funded, and that model, too, will feel a blow if theft was involved.

Or it could just be the usual absent-minded scientist discovering he hasn't the skills or awesomeness needed, but has got too much pride to admit it, as has happened in so many science fraud cases.


Comment: Re: stop the pseudo-scientific bullshit (Score 1) 88

by jd (#49156217) Attached to: Mysterious Siberian Crater Is Just One of Many

The Great Extinction, caused by Siberia becoming one gigantic lava bed (probably after an asteroid strike), was a bit further back in time. Geologically, Siberia is old. You might be confusing the vestiges of Ice Age desiccation (which was 10,000 years ago, and which involves the organics on the surface) with the geology (i.e. the rocks).

Regardless, though, of how the craters are forming, the fact remains that an awful lot of greenhouse gas is being pumped into the air, an awful lot of information on early civilization is being blasted out of existence, and a lot of locals are finding that the land has suddenly become deadly.

Comment: Re: Authority (Score 2, Interesting) 234

by jd (#49156167) Attached to: As Big As Net Neutrality? FCC Kills State-Imposed Internet Monopolies

That is a good question. The last time the courts ruled on this, the ruling was that the FCC had ceded power and couldn't claim it back without the will of god. Or Congress, or something.

Personally, I'm all in favour of Thor turning up to the Supreme Court, but he probably wouldn't be allowed in on account of not having a visa.
