We're years away from IPv4 exhaustion.
How many years? Happily, the consumption data is publicly available, so I did some calculations a while back. I ran them a couple of years ago, so if you've run them more recently you may have a more accurate answer. But what I got was this...
...the run rate is such that if we reclaim ALL IPv4 address space, including yours and mine that we're using right now, we still run out in 2019.
I'm not sure that lengthy and expensive reclamation projects really buy us a lot when we outrun two internets' worth of addresses within a decade.
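The back-of-the-envelope arithmetic is easy to reproduce if you want to plug in fresher numbers. A minimal sketch - the run rate below is an illustrative assumption of mine, not authoritative data; pull the real figures from the published IANA/RIR allocation statistics:

```python
# Rough linear extrapolation of IPv4 exhaustion under "total reclamation".
# The run rate here is an ASSUMED illustrative figure, not real data.

TOTAL_IPV4 = 2 ** 32  # every IPv4 address, including reserved space

def years_until_exhaustion(pool_size, addresses_per_year):
    """Years a freshly reclaimed pool lasts at a constant consumption rate."""
    return pool_size / addresses_per_year

# Hypothetical run rate: roughly half a billion addresses per year.
RUN_RATE = 500_000_000

years = years_until_exhaustion(TOTAL_IPV4, RUN_RATE)
print(f"Reclaiming everything buys about {years:.1f} years")
```

At anything like that rate, even the whole 32-bit space only lasts on the order of a decade from a standing start, which is where a "run out in 2019" figure comes from if you count forward from the late 2000s.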
I've watched B5 through a couple of times now with friends who hadn't seen it before. A funny thing happens: at the beginning, people complain that the CGI looks so bad (it was awesome for mid-90s TV, but I get that in this context it's fair to evaluate it against the present day). That fades quite quickly, though, and by the time you're halfway through, at Severed Dreams, people are pretty blown away. It's a good moment to remind them of their complaints.
I assume that there is a combination of two things going on - the viewer gets used to the style, and the CGI quality does improve considerably over time. I can't tell how much of each is involved, though.
Funny thing about the CGI in B5, of course, is that budget constraints were certainly a factor in the decision: CGI was the only way they could possibly do anything as ambitious as they wanted with the cash they had. They were doing some of the most elaborate effects work on TV at the time - I don't think sequences like the ones in Severed Dreams had ever been seen outside of big-budget movies before then. So the tradeoff does show in the early episodes, but they really pushed the technology forward.
I reckon that if you were an IPv6-only user, what you'd want to see is a list of pages you can access, not ones you can't. That's a matter of filtering for the user, not sorting for relevance to the search query. And it assumes the existence of an IPv6-only user with *no* access of any kind to the IPv4 internet. We've a long way to go before we start seeing those in the wild, outside of labs.
I think we need to accept that we can't expect Google to damage their core product by introducing changes like this, even with the best of technical intentions. There isn't any "How to get IPv6 adoption in months, not years." There's a lot of work to be done in crafting proper plans with realistic costs and benefits that can be understood by the people who are going to approve the money. We can do little things here and there, but we can't short-circuit that process on an industry-wide basis.
It sounds daunting, but it's doable one bite at a time. What's happening today is going to contribute to that for the content providers by quantifying something that was previously uncertain: just how big the impact on existing users is if you dual stack. If the day turns out to be so successful that some big sites dual stack permanently - as such experiments in the past have done - then that contributes to the case for the rest of us, because finally there will be some real content out there that will use the stuff we're paying for.
1. Microsoft has a patch that demotes IPv6 access for one day only. Not only does this throw a wrench in the world's ability to gauge problems, but it does nothing to solve the end users' issues. Paradoxically, simply disabling IPv6 is much better at this point, because not breaking IPv4 is much more important to the forward progress of IPv6 deployment than a few end users who can enable IPv6 later, once they can get their issues fixed.
I did at first think the same way, but then I realised - that doesn't appear to be an automatically-pushed patch. It looks like a support article to which an admin can refer a user who is screaming "I don't care, make my internet work NOW." It's something that can be applied in a hurry to temporarily resolve the problem, but doesn't sweep it under the carpet because the underlying problem will still need to be dealt with in time. In that context, I think that this is a more responsible approach than telling users to disable IPv6 permanently.
Nope. It's a scheduled, time-limited way to identify end users whose browsers work normally when presented with sites that resolve to IPv4 only, but have problems when presented with sites that resolve to both IPv4 and IPv6. This is a fraction of one percent of users, but they're holding up the show for the rest of us. Without a day like today, they would never even be aware that something is wrong.
It's been pretty hard to miss in networking circles specifically. Reason I say this is:
A lot of people here seem to be missing the point of the event. It's not really about boosting IPv6 traffic for the day; there are still other links in the chain to get sorted out before we can do that (most visibly, users' LANs and internet connections.) But one big thing that's been holding up the dual stacking of BIG websites, the kind participating today, is a really tiny proportion of users who don't know they have IPv6 configured and it's broken.
The numbers are in or around a fraction of a percent, but for a really big site, that's too many users. We need to find these guys and get them to fix it.
So the target for this one hasn't been mainstream users or even system administrators, but ISPs and IT support departments, so that they can find the problems in advance. (Maybe you fall into this category and missed it, in which case, sorry, but I saw it in pretty much all the networking channels I was aware of over the past seven months.)
So far, eighteen hours in, I've not seen many reports of problems. This is EXCELLENT NEWS, because if the perception of problems turns out to be much greater than reality, some of the participants might decide to leave IPv6 on permanently. That's one more link in the chain, so that ISPs that do deploy IPv6 to their users will actually begin to see some more take-up of traffic. Step by step, such is how the chicken/egg problem is unravelled.
Google will not make changes to PageRank that are not specifically about improving the quality of search results.
This has been thought of before.
One step at a time. Before we start turning off IPv4, we need to sort out the people on nominally IPv4-only connections who actually fail to connect to websites that do no more than offer IPv6 in parallel. That's what the Google site is testing.
The test is aimed squarely at you.
What stops the large content providers from serving over IPv6 right now today is a level of brokenness that affects a fraction of a percent of users. These are computers or networks which are nominally IPv4 only, but have some misconfigured IPv6 setup that is actively causing problems connecting to sites. The proportion of users is tiny, but if you're facebook, that's still a lot of users. Wednesday next will expose these problems on a temporary, scheduled basis.
If you run IT support for an organisation, it would be wise to see the results of, say, the RIPE IPv6 eye chart on your client machines.
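If you'd rather script the check than eyeball a web page on every machine, the eye-chart logic is simple to approximate: probe a known IPv4-only hostname and a known dual-stack hostname, trying only the first address the resolver returns (roughly what a client with no fallback does), and flag the case where IPv4 works but the dual-stack site doesn't. A rough sketch - the hostnames are placeholders of mine, not real test endpoints:

```python
# Rough approximation of the "eye chart" idea. The interesting failure
# mode: a nominally IPv4-only machine with a misconfigured IPv6 stack
# that can reach v4-only sites but chokes on dual-stacked ones.
import socket

def first_address_connect(host, port=80, timeout=5):
    """Try only the FIRST address the resolver hands back, in whatever
    family the OS prefers - mimicking a client that never falls back."""
    try:
        af, st, proto, _name, addr = socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM)[0]
    except socket.gaierror:
        return False
    s = socket.socket(af, st, proto)
    s.settimeout(timeout)
    try:
        s.connect(addr)
        return True
    except OSError:
        return False
    finally:
        s.close()

def verdict(v4_only_ok, dual_stack_ok):
    """Turn the two probe results into something a helpdesk can act on."""
    if v4_only_ok and not dual_stack_ok:
        return "broken dual-stack client"
    if v4_only_ok and dual_stack_ok:
        return "looks fine"
    return "no baseline IPv4 connectivity - different problem"

# Usage on a client machine (hostnames are hypothetical placeholders;
# substitute a site you know is IPv4-only and one with both A and AAAA):
#   verdict(first_address_connect("ipv4only.example"),
#           first_address_connect("dualstack.example"))
```

Real browsers are starting to do smarter fallback between families, which is exactly why the remaining broken clients are so hard to spot without a coordinated day like this.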
NAT destroys the peer-to-peer nature of the network: it restricts running servers of any type to those outside the NAT.
Using NAT at the ISP level is basically evil and shouldn't even be considered when we're going to need to deploy IPv6 anyway.
Cool! I agree.
Glad that's sorted.
So what do we do while we're waiting for everyone else to catch up on IPv6?
To be honest, this is a fair comment. It *should* be a seamless transition, and evidently it's not going to be. My one concern is that, on the internet, this sort of change can't be laid down from on high. The kind of people who should be working on this transition are... pretty much the target audience of slashdot, actually.
I did some calculations a while back, extending the growth curve beyond 256
I mean, if we really needed to buy a bit more time to do a transition, then maybe it'd be worth going through all that trouble. But we've had longer than that to prepare already with this deadline very clearly looming. I don't see how extending it a bit would change the end result.