I found that above about 10Mb/s you start to hit diminishing returns. The jump from 10 to 30 was barely noticeable. The jump from 30 to 100 was noticeable with large downloads, but nothing else. From 100 to 1000, the main thing that you notice is if you accidentally download a large file to a spinning-rust disk and see how quickly you fill up your RAM with buffer cache...
Over the last 10 years, I've gone from buying the fastest connection my ISP offered to buying the slowest. The jump from 512Kb/s to 1Mb/s was really amazing (though not as good as moving to 512Kb/s from a modem that rarely managed even 33Kb/s), but each subsequent upgrade has been less exciting.
Because in 1981 or so, everybody was pretty sure that this fairly obscure educational network would *never* need more than about 4 billion addresses... and they were *obviously right*.
Well, maybe. Back then home computers were already a growth area and so it was obvious that one computer per household would eventually become the norm. If you wanted to put these all on IPv4, then it would be cramped. The growth in mobile devices and multi-computer households might have been a bit surprising to someone in 1981, but you'd have wanted to add some headroom.
When 2% of your address space is consumed, you are just under 6 doublings away from exhaustion. Even if you assume an entire decade per doubling, that's less than an average lifetime before you're doing it all over again.
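The doubling arithmetic above is easy to check directly; a quick sketch (the one-decade-per-doubling figure is the comment's own assumption):

```python
import math

# At 2% consumption, how many doublings until the space is exhausted?
used = 0.02
doublings = math.log2(1.0 / used)  # log2(50) ~ 5.64

print(f"{doublings:.2f} doublings until exhaustion")
# At the (generous) assumption of one doubling per decade:
print(f"~{doublings * 10:.0f} years at one doubling per decade")
```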
With IPv6, you can have 4 billion networks for every IPv4 address. Doublings are much easier to think about in base 2: one bit per doubling. We've used all of the IPv4 addresses. Many of those are for NAT'd networks, so let's assume that they all are and that we're going to want one IPv6 subnet for each IPv4 address currently assigned during the transition. That's 32 bits gone. Assuming that we're using a /64 for each subnet, that leaves 32 bits of headroom: 32 doublings before we have to do this all over again.
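The bit budget being described works out like this (the /64-per-subnet figure is the standard IPv6 convention, and the 32 bits for existing IPv4 allocations is the comment's own transition assumption):

```python
TOTAL_BITS = 128        # IPv6 address width
HOST_BITS = 64          # conventional /64 per subnet
IPV4_MAPPED_BITS = 32   # one subnet per currently assigned IPv4 address

headroom_bits = TOTAL_BITS - HOST_BITS - IPV4_MAPPED_BITS
print(f"{headroom_bits} bits of headroom = {headroom_bits} doublings")

# "4 billion networks for every IPv4 address":
subnets = 2 ** (TOTAL_BITS - HOST_BITS)
per_ipv4_address = subnets // 2 ** 32
print(per_ipv4_address)  # 4294967296, i.e. ~4 billion
```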
In practice, I suspect that the growth will be a bit different. Most of the current growth is multiple devices per household, which doesn't affect the number of subnets: that growth consumes host addresses within an existing subnet, not subnets themselves.
IMHO: what needs to happen next is a 16-bit field in the packet header that indicates the size of the address in use. This makes the address space not only dynamic, but MASSIVE, without requiring all hardware on the face of the Earth to be updated any time the address space runs out.
This isn't really a workable idea. Routing tables need to be fast, which means that the hardware needs to be simple. For IPv4, you basically have a fast RAM block with 2^24 entries and switch on the first three bytes to determine where to send the packet. With IPv6, subnets are intended to be arranged hierarchically, so you end up with a simpler decision.

With variable-length fields, you'd need something complex to parse them, and that would send you into the software slow path. This is a problem, because you'd then have a very simple DoS attack on backbone routers (just send them packets with large length headers that chew up CPU before they're dropped).

You'd also have the same deployment headaches that IPv6 has: no one would buy routers that had fast paths for very large addresses now, just because in 100 years we might need them, so no one would test that path at a large scale: you'd avoid the DoS by just dropping all packets that used an address size other than 4 or 16. In 100 years (i.e. well over 50 backbone router upgrades), people might start caring and buy routers that could handle 16 or 32 byte address fields, but that upgrade path is already possible: the field that you're looking for is called the version field in the IP header.
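The fixed-width fast path described above can be sketched in a few lines. This is only an illustration of the idea (a flat table indexed by the first three bytes of the address); the routes and port numbers are made up, and real routers do this in hardware with longest-prefix matching, not Python:

```python
# A flat next-hop table with 2^24 entries, indexed by the first
# three bytes of an IPv4 address. A bytearray keeps it to 16 MB.
NUM_ENTRIES = 2 ** 24
table = bytearray(NUM_ENTRIES)  # every entry defaults to port 0

def prefix24(addr: str) -> int:
    """First three bytes of a dotted-quad address as a 24-bit index."""
    a, b, c, _ = (int(x) for x in addr.split("."))
    return (a << 16) | (b << 8) | c

# Populate a couple of /24 routes (hypothetical output ports).
table[prefix24("203.0.113.0")] = 3
table[prefix24("198.51.100.0")] = 7

def next_hop(addr: str) -> int:
    # A single indexed read, no parsing of variable-length fields:
    # this is why the fast path can live in simple hardware.
    return table[prefix24(addr)]

print(next_hop("203.0.113.42"))  # 3
print(next_hop("8.8.8.8"))       # 0 (default route)
```

A variable-length address field would break exactly this property: the index into the table could no longer be read from a fixed offset in the packet.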
Nobody really needs to enter control characters anymore
Except those of us who use a terminal, who find control-C and control-Z (and, on FreeBSD, control-T) indispensable.
I think David's point is that, while what you're saying about coin flips is true of abstract mathematical "fair coins" or other similar processes, in reality you don't know a priori whether you're actually dealing with a fair coin, and its past results give you information on whether it's a fair coin and thus whether you should expect any bias in the probability of future outcomes. Those future outcomes themselves aren't influenced by the previous flips causally, as the Gambler's Fallacy presumes -- the coin isn't "due" anything -- but the previous flips give epistemic reason to expect certain outcomes to be more likely on each independent flip, including the next one.
You knew the job was dangerous when you took it, Fred. -- Superchicken