As they have no life to begin with, at least they won't lose much if things go wrong.
because we couldn't possibly have good service from an ISP.
Don't most ISPs sell good service at a premium? I think that was the entire point of having poor service in the first place. The only other reason I could imagine would be driving customers to the competition, and that doesn't make sense from a business point of view.
I have no imagination, so I have no idea what we might get in the future if we actually had the infrastructure to support it.
I can come up with a couple of additional usages for some
A modern OS is a multi-user system - imagine if each user could get their own IP address. You could allow users to use privileged port numbers on their own IP address, and all port numbers on that address would be protected from use by other users. You could do this by responding to neighbor discovery for as many IPs in your link prefix as you have users on the node, but a more secure and more efficient approach would be to route a prefix to each node.
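To make the idea concrete, here is a minimal sketch of how a node with a routed /64 could hand each user a deterministic address of their own. The scheme (interface ID = numeric user ID) is purely hypothetical and uses the documentation prefix 2001:db8::/32; a real deployment would want less predictable interface IDs.

```python
import ipaddress

def per_user_address(prefix: ipaddress.IPv6Network,
                     uid: int) -> ipaddress.IPv6Address:
    """Hypothetical scheme: derive one address per user from the node's
    routed prefix by using the numeric user ID as the interface ID."""
    if uid >= prefix.num_addresses:
        raise ValueError("uid does not fit in the host part of the prefix")
    return prefix[uid]

# Toy example with the documentation prefix and typical Unix UIDs.
prefix = ipaddress.IPv6Network("2001:db8:0:42::/64")
print(per_user_address(prefix, 1000))  # 2001:db8:0:42::3e8
print(per_user_address(prefix, 1001))  # 2001:db8:0:42::3e9
```

Each user's daemon could then bind to "their" address, and the kernel's normal per-address permission checks would keep other users off those ports.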
a prefix that just got compressed might get split quickly, and vice versa
There is no need to combine routes while there are still free entries in the CAM. Once the CAM is full and another entry needs to be inserted, the pair that has been a candidate for combining the longest can be merged. That algorithm would keep the number of updates down.
However, as the number of routes approaches the limit of what can be handled even with combination of routes, the frequency of updates needing to combine and split entries will go up. It may be that they are already doing this; some sources say the problem did cause reduced performance, which would be consistent with such behavior.
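The combine-on-demand idea above can be sketched in a few lines. This is a toy model, not how any real line card works: a dict stands in for the CAM, and "oldest candidate" is tracked simply by the order in which pairs became combinable.

```python
import ipaddress
from collections import OrderedDict

class LazyCombiningTable:
    """Toy model: keep routes uncombined while there is free space; only
    when the table is full, merge the pair of sibling prefixes (same next
    hop) that has been combinable the longest."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.routes = {}                 # network -> next hop
        self.candidates = OrderedDict()  # combinable supernets, oldest first

    def _sibling(self, net):
        # The other half of net's one-bit-shorter supernet.
        return next(s for s in net.supernet().subnets() if s != net)

    def _note_candidate(self, net, nexthop):
        if self.routes.get(self._sibling(net)) == nexthop:
            self.candidates.setdefault(net.supernet(), None)

    def insert(self, net, nexthop):
        if len(self.routes) >= self.capacity:
            self._combine_oldest()
        self.routes[net] = nexthop
        self._note_candidate(net, nexthop)

    def _combine_oldest(self):
        while self.candidates:
            parent, _ = self.candidates.popitem(last=False)
            a, b = parent.subnets()
            if a in self.routes and self.routes.get(a) == self.routes.get(b):
                nexthop = self.routes.pop(a)
                self.routes.pop(b)
                self.routes[parent] = nexthop
                self._note_candidate(parent, nexthop)
                return
            # Stale candidate (one half was withdrawn or changed); skip it.
        raise RuntimeError("table full and nothing left to combine")
```

With capacity 2, inserting 10.0.0.0/25 and 10.0.0.128/25 (same next hop) and then a third route triggers exactly one combine into 10.0.0.0/24, matching the "update only when forced" behavior described above.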
all Comcast needed to do was write "56" in their config files rather than "60"...
One has got to wonder if that's how it happened. Did some admin arbitrarily decide to write 60 in a configuration file, where he could/should have written 56, and then that was how it was going to be? Or did a lot of bean counters get together and decide on a policy (possibly not even based on real data), and then admins had to implement it like that without asking questions.
But that's not what we should be targeting. We should be targeting "enough for pretty much everybody", and "for the foreseeable future" -- including for any new, fun things that become possible because of easily-available address space.
Even in many areas where there is tough competition among ISPs, it is hard to find even one trying to capture those customers, who want IPv6. That's how bad it looks today. And that's why I would happily take a
I can't yet imagine what I would use more than a
Next, IPv6 addresses are of course 4 times larger than IPv4 addresses. Even if your IPv6 routing table has 5 times fewer entries, you're not getting a 5-fold saving in memory. You're only getting a 5/4-fold saving, or tables that are 80% the size of the IPv4 ones - nowhere near as dramatic.
In IPv4 all 32 bits are used for routing, though on the backbone you tend to only accept
Either way, you only need twice as many bits in the CAM for an IPv6 route as for an IPv4 route. So what you call a 20% saving is more like a 60% saving. The picture is a bit more complicated, because two CAM entries at half the size are not the same as one at full size, so you may have to decide at design time how you are going to use that CAM.
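The two claims are easy to sanity-check with back-of-the-envelope arithmetic (the 500k table size is a made-up round number; only the ratios matter):

```python
# Hypothetical table sizes, just to check the ratios in the thread.
ipv4_entries = 500_000
ipv6_entries = ipv4_entries // 5   # "5 times fewer entries"

# Naive view: every IPv6 entry costs the full 128 bits.
naive_ratio = (ipv6_entries * 128) / (ipv4_entries * 32)
print(naive_ratio)  # 0.8 -> the "tables at 80% of IPv4" / 20%-saving figure

# CAM view: only the top 64 bits (the routed part) need to be matched.
cam_ratio = (ipv6_entries * 64) / (ipv4_entries * 32)
print(cam_ratio)    # 0.4 -> the 60% saving claimed above
```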
Routing tables growing with the size of the network, in terms of # of entries - even if not at all fragmented.
I'd love to take part in solving that problem. Any realistic solution is going to start with a migration to IPv6. And I don't see how we could expect the solution to be deployed any faster, so if we start now, we could probably have it in production by 2040.
it is possible that IPv6 is actually too small to be able to solve routing scalability.
That algorithm has a major drawback: the address of a node depends on which links are up and which are not. You'd have to renumber your networks and update DNS every time a link change somewhere caused your address to change. Even if we assume that issue can be fixed, it doesn't really imply that addresses would have to be larger.
The algorithm in the paper assigns two identifiers to each node. The first one could very well be the IPv6 address assigned to the node. The second is computed from the first and the structure of the network. However, their routing looks awfully similar to source routing, so really the solution might just be to make source routing work.
I can think of a couple of other reasons to consider IPv6 addresses to be too short. That paper isn't one.
Teredo and 6to4 are two "automatic" tunnel protocols. Both embed IPv4 addresses inside IPv6 addresses. Due to the use of NAT, Teredo needs to embed two IPv4 addresses and a port number inside the IPv6 address. That doesn't leave room for a site-level-aggregator or host part. If you wanted one unified protocol which could replace both Teredo and 6to4, you'd need at least 192 bits in the IPv6 address.
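For reference, here is how the two embeddings actually pack bits, per RFC 3056 (6to4) and RFC 4380 (Teredo). The helper names are mine; the bit layouts are the standard ones. Note Teredo stores the client's port and IPv4 address bit-inverted ("obfuscated") so NATs don't rewrite them.

```python
import ipaddress

def sixto4(v4: str) -> ipaddress.IPv6Address:
    # 6to4 (RFC 3056): 2002::/16 followed by the 32-bit IPv4 address,
    # leaving 16 bits of subnet ID and a 64-bit host part.
    v = int(ipaddress.IPv4Address(v4))
    return ipaddress.IPv6Address((0x2002 << 112) | (v << 80))

def teredo(server_v4: str, client_v4: str, client_port: int,
           flags: int = 0) -> ipaddress.IPv6Address:
    # Teredo (RFC 4380): 2001:0::/32 prefix, server IPv4, 16 flag bits,
    # then the client's NAT-mapped port and IPv4 address, bit-inverted.
    s = int(ipaddress.IPv4Address(server_v4))
    c = int(ipaddress.IPv4Address(client_v4)) ^ 0xFFFFFFFF
    p = client_port ^ 0xFFFF
    return ipaddress.IPv6Address(
        (0x20010000 << 96) | (s << 64) | (flags << 48) | (p << 32) | c)

print(sixto4("192.0.2.1"))                        # 2002:c000:201::
print(teredo("192.0.2.45", "203.0.113.9", 40000))
```

Counting the fields makes the point above: Teredo alone already spends 32 (prefix) + 32 (server) + 16 (flags) + 16 (port) + 32 (client) = 128 bits, with nothing left over for an aggregator or host part.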
After IPv6 showed up, people realized that it is sometimes convenient to embed cryptographic information inside the IP address. That was unthinkable with IPv4. With IPv6 it is doable, but you have to choose cryptographic primitives that are not exactly state of the art, due to 128 bits being a bit short for cryptographic values, and not all of them even being available for that purpose.
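Cryptographically Generated Addresses (RFC 3972) are the standard example: the 64-bit interface identifier is derived from a hash of the owner's public key. The sketch below is heavily simplified (the real algorithm also involves a collision count and a Sec parameter that trades hash strength for generation time), but it shows why 128 bits pinch: only ~62 of the 64 interface-ID bits are left to carry hash output.

```python
import hashlib

def cga_interface_id(public_key: bytes, modifier: bytes,
                     subnet_prefix: bytes) -> int:
    """Simplified RFC 3972-style CGA interface identifier: hash the key
    material, keep 64 bits, and clear the u/g bits as the RFC requires.
    SHA-1 truncated to 64 bits is exactly the 'not state of the art'
    situation described above."""
    digest = hashlib.sha1(modifier + subnet_prefix + public_key).digest()
    iid = int.from_bytes(digest[:8], "big")
    return iid & ~(0x03 << 56)  # clear the "u" and "g" bits

# Toy inputs: a placeholder key, zero modifier, documentation prefix.
iid = cga_interface_id(b"example public key", b"\x00" * 16,
                       bytes.fromhex("20010db800000000"))
print(f"{iid:016x}")
```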
On the one hand it saves memory, on the other it adds complexity.
At least it would be complexity in software, which is better than complexity in hardware. If that additional complexity were the only way to keep some already-deployed hardware functioning, it might be worth it.
let's start work on IPv7+!
IPv7 was officially deprecated in 2012. In practice IPv7 was obsolete before IPv6 was finalized.
You won't *keep* that nice clean space. The same processes that led to IPv4 fragmentation, ex space, will start to affect IPv6
With address shortage being the main reason for fragmentation, that doesn't sound so likely.
Mergers
This will not exactly lead to growth in the number of announcements, but it won't lead to a reduction either. Giving incentives to renumber after a merger may help a bit. At least there should be enough addresses that the company can pick which of the two blocks it wants to renumber into, and that block can be extended as needed.
ASes eventually running out of bits in their prefix
Bits are set aside to allow them to grow - for now at least.
that's only 16 in a
/48, a lot but not impossible to exhaust either, a /56 would be even easier to exhaust
Don't all the RIRs hand out addresses in
Ok, you've got a 5 fold linear reduction compared to IPv4. However it still doesn't fix the problem that current Internet routing leads to O(N) routing tables at each AS
That is true. This problem is going to get even worse if we want end user sites to have access to dual homing. Fixing this is going to require some fundamental change to how routing is done.
But if IPv6 gets deployed soon, the reduction in routing table size should buy us some time to come up with a more scalable solution, one that will allow every site to be dual-homed. But of course things will have to break if ISPs keep waiting for breakage to happen before they start deploying scalable solutions.
That 5x linear reduction is, ultimately, a barely noticeable blip
If the tables grow with each generation of hardware, a 5x reduction can last a while. Not forever, but long enough that a long term solution can be deployed, if ISPs want to.
IPv6 doesn't fix routing table growth problems
Not permanently, but IPv6 can help now, and IPv4 can be expected to get worse as allocations get split and traded. Throwing bigger hardware at the problem may help with this one IPv4 issue, but there are other problems with IPv4.
If IPv4 is fragmented it's primarily because of "short-sighted" initial allocations
Those allocations should be the least fragmented ones around, so blaming them for fragmentation is a bit of a stretch. As far as short-sightedness goes, it is not clear to me that IP stacks at the time would have supported doing the allocations differently. Moreover, two decades ago it was already clear that IPv4 wasn't viable as a long-term solution. Should we really blame problems we have now on decisions made back then? If any decisions deserve blame, it would be those that caused IPv6 deployments to be postponed.
The other kind of routing table compaction is due to serendipitous next-hop sharing for ranges of prefixes. E.g., prefixes for European networks are more likely to be assigned by RIPE, so if you're "far" away from Europe in network terms, then there's a better chance that there'll be a number of adjacent prefixes for European networks that will share next hops and can be compressed, etc.
It is true that this approach should reduce the number of table entries you need to put in the CAM. But since its efficiency depends on where you are, the expected outcome would be failures spread out over time rather than happening all at once. Is this sort of compression widespread, and if it is, why the simultaneous failures? Is the rate of failures tied to the rate at which the number of announcements grows? If so, it wasn't one particular announcement that pushed the Internet over the limit; rather, as 15k new announcements made it around the world, about 2.5k of them happened to push some AS over its limit.
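The next-hop-sharing compression being discussed can be sketched with the standard library: group the table by next hop and let `ipaddress.collapse_addresses` merge adjacent or covered prefixes. The route data below is invented for illustration.

```python
import ipaddress
from itertools import groupby

def compress_by_nexthop(routes):
    """routes: iterable of (prefix_str, nexthop) pairs. Collapse adjacent
    or contained prefixes that share a next hop, as a distant router
    effectively can, and return the compressed (prefix, nexthop) list."""
    out = []
    for nexthop, group in groupby(sorted(routes, key=lambda r: r[1]),
                                  key=lambda r: r[1]):
        nets = [ipaddress.ip_network(p) for p, _ in group]
        out += [(str(n), nexthop)
                for n in ipaddress.collapse_addresses(nets)]
    return out

routes = [  # toy table: four announcements, two far-away next hops
    ("198.51.100.0/25", "peer1"), ("198.51.100.128/25", "peer1"),
    ("203.0.113.0/24", "peer2"), ("198.51.100.0/24", "peer1"),
]
print(compress_by_nexthop(routes))  # four entries compress to two
```

Note how much the win depends on which prefixes happen to share a next hop from your vantage point, which is exactly why the savings differ from AS to AS.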
To "fix" this problem, you need routing tables that grow more slowly than linearly, with respect to the size of the network. To the best of my knowledge of the theory, this is impossible with any form of guaranteed-shortest-path routing.
Maybe more work needs to go into the principle of keeping the intelligence at the end-points rather than in the core. Maybe source routing is the way to go, we just need to figure out how to make it secure and not require prohibitively large packet headers.
You knew the job was dangerous when you took it, Fred. -- Superchicken