
Comment Re:wft ever dude! (Score 1) 154

I found that above about 10Mb/s you start to hit diminishing returns. The jump from 10 to 30 was barely noticeable. The jump from 30 to 100 is noticeable with large downloads, but not much else. From 100 to 1000, the main thing you notice is when you accidentally download a large file to a spinning-rust disk and see how quickly you fill up your RAM with buffer cache...

Over the last 10 years, I've gone from buying the fastest connection my ISP offered to buying the slowest. The jump from 512Kb/s to 1Mb/s was really amazing (though not as good as moving to 512Kb/s from a modem that rarely managed even 33Kb/s), but each subsequent upgrade has been less exciting.

Comment Re:wft ever dude! (Score 1) 154

Because in 1981 or so, everybody was pretty sure that this fairly obscure educational network would *never* need more than about 4 billion addresses... and they were *obviously right*.

Well, maybe. Back then home computers were already a growth area and so it was obvious that one computer per household would eventually become the norm. If you wanted to put these all on IPv4, then it would be cramped. The growth in mobile devices and multi-computer households might have been a bit surprising to someone in 1981, but you'd have wanted to add some headroom.

When 2% of your address space is consumed, you are only about 6 doublings away from full consumption. Even if you assume an entire decade per doubling, that's less than an average lifetime before you're doing it all over again.
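A quick sanity check of that doubling arithmetic, sketched in shell (the 2% starting point is from the paragraph above):

```shell
# how many doublings does 2% utilisation take to reach exhaustion?
pct=2
n=0
while [ "$pct" -lt 100 ]; do
  pct=$(( pct * 2 ))   # 2 -> 4 -> 8 -> 16 -> 32 -> 64 -> 128
  n=$(( n + 1 ))
done
echo "$n doublings"    # the sixth doubling pushes 2% past 100%
```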

With IPv6, you can have 4 billion networks for every IPv4 address. Doublings are much easier to think about in base 2: one bit per doubling. We've used all of the IPv4 addresses. Many of those are for NAT'd networks, so let's assume that they all are and that we're going to want one IPv6 subnet for each IPv4 address currently assigned during the transition. That's 32 bits gone. Assuming that we're using a /48 for every subnet, then that gives us 16 more doublings (160 years by your calculations). If we're using /64s, then that's 32 doublings (320 years). I hope that's within my lifetime, but I suspect that it won't be.
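The bit arithmetic in that paragraph can be checked directly: one doubling per remaining subnet bit, and (as above) a decade per doubling:

```shell
# IPv6 headroom sketch: 32 bits already "spent" (one subnet per
# currently assigned IPv4 address), doublings = remaining subnet bits
spent=32
d48=$(( 48 - spent ))   # doublings left if every subnet is a /48
d64=$(( 64 - spent ))   # doublings left if every subnet is a /64
echo "/48: $d48 doublings (~$(( d48 * 10 )) years at a decade each)"
echo "/64: $d64 doublings (~$(( d64 * 10 )) years at a decade each)"
```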

In practice, I suspect that the growth will be a bit different. Most of the current growth is multiple devices per household, which doesn't affect the number of subnets: that /64 will happily accommodate a household with a nice sparse network, even if every single physical object that you own gets a microcontroller and participates in IoT things using a globally routable address.

IMHO: what needs to happen next is to have a 16 bit packet header to indicate the size of the address in use. This makes the address space not only dynamic, but MASSIVE without requiring all hardware on the face of the Earth to be updated any time the address space runs out.

This isn't really a workable idea. Routing tables need to be fast, which means that the hardware needs to be simple. For IPv4, you basically have a fast RAM block with 2^24 entries and switch on the first three bytes of the address to determine where to send the packet. With IPv6, subnets are intended to be arranged hierarchically, so you end up with a simpler decision. With variable-length fields, you'd need something complex to parse them, and that would send you into the software slow path. This is a problem, because you'd then have a very simple DoS attack on backbone routers: just send them packets with large length headers that chew up CPU before they're dropped.

You'd also have the same deployment headaches that IPv6 has. No one would buy routers with fast paths for very large addresses now, just because in 100 years we might need them, so no one would test that path at a large scale: you'd avoid the DoS by simply dropping all packets that used an address size other than 4 or 16. In 100 years (i.e. well over 50 backbone router upgrades), people might start caring and buy routers that could handle 16- or 32-byte address fields, but that upgrade path is already possible: the field that you're looking for is called the version field in the IP header.

Comment Re:Wait Wait Wait... (Score 1) 154

It depends on the ISP. Some managed to get a lot more assigned to them than they're actually using, some were requesting the assignments as they needed them. If your ISP has a lot of spare ones, then they might start advertising non-NAT'd service as a selling point. If they've just been handing out all of the ones that they had, then you might find that they go down to one per customer unless you pay more.

Comment Re:MenuChoice and HAM (1992) (Score 1) 268

The problem with shell scripts for this kind of thing is that they're a Turing-complete language. This makes it very hard to present to the user what they actually do. .BAT files on DOS / Windows provided that functionality too, but unless you aggressively restrict yourself to a subset of the shell language it's very hard to check a .sh / .bat file and see exactly what command is going to be invoked.

Comment Re:MenuChoice and HAM (1992) (Score 1) 268

This requires the program to be explicitly written that way. GCC and Clang also do this, to detect whether they're invoked as C or C++ compilers, and Clang will detect a target triple if it appears as a prefix of the invocation name. This only goes in argv[0], though: you can't modify the other arguments from a shortcut. It would be really useful to be able to add things like --sysroot=/some/path and -msoft-float to a symlink, so that you had a single cc that you could invoke as a cross compiler, but currently the only way to do this is with a tiny shell script that execs the compiler with the correct flags.
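A minimal sketch of such a wrapper script. The flags are the ones mentioned above, but the sysroot path is illustrative, and REAL_CC is a hypothetical knob added here so the example can be dry-run without a compiler installed:

```shell
# create a tiny wrapper 'cc' that always cross-compiles; exec replaces
# the shell with the real compiler, and "$@" forwards the caller's args
cat > /tmp/cross-cc <<'EOF'
#!/bin/sh
exec "${REAL_CC:-clang}" --sysroot=/some/path -msoft-float "$@"
EOF
chmod +x /tmp/cross-cc

# dry run: substitute echo for the real compiler to see the command line
REAL_CC=echo /tmp/cross-cc -O2 hello.c
```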

Comment Re:The central pro-escrow argument is idiotic. (Score 2) 81

You would think a pair of gloves would render all the police fingerprinting useless, yet careless criminals are caught by it all the time. Like everyone else with limited resources, the police either catch you because you're important or because you make it easy. Heck, I bet many criminals using computers don't even know what crypto is.

Comment Re:The title is terrible (Score 1) 215

The car insurance industry makes a lot of money from the fact that your driving profile is individual, and will trick you into continuing to pay a high premium even after you've moved into a lower-risk segment. All autonomous cars of the same model will drive the same way, which makes it a lot harder to price gouge. It doesn't matter whether you're 18 or 80, male or female, a single driver or whatever. It's one Google car, 10,000 miles/year, parked in a garage: what are you charging? In fact, Google might easily offer insurance themselves, since they're the driver and have deep enough pockets that they don't need an insurance company.

Comment Re:Ha, lower rates lol (Score 2) 215

One of the major reasons traffic deaths went down is that we redesigned cars: instead of being built to withstand a crash without damage to the car, they absorb the crash in a crumple zone, meaning the car itself takes the damage instead of a person.

And this made a lot of lesser crashes, which wouldn't have injured the passengers anyway, far more expensive, because even small damage is spread over a large area. I was in an accident not long ago, and despite it being a fairly low-speed collision in which the airbag did not deploy, the damage to my car alone amounted to about a fifth of the sticker price of a new one; in total, I think it wiped out everything I've paid in insurance premiums over the last ten years. So I've got no reason to complain, really...

Comment Re:awkward! (Score 1) 180

Nonsense. It is true, however, that Windows and Linux use different (overlapping) subsets of the SATA (and SCSI) command sets and, in particular, use very different sequences of commands in common use. If you test heavily with Windows and not with Linux, then you may find that there are code paths in your firmware that Linux uses a lot but which are mostly untested.

Comment Re:Difficulty (Score 1) 268

The 'tray' that Raymond describes in his second article looks very much like the Shelf from OPENSTEP 4.1, which was released just after Windows 95. I wonder if some of the NeXT people were playing with early betas of Windows 95 and, as their company CEO later quipped, started their photocopiers...
