As they have no life to begin with, at least they won't lose much if things go wrong.
because we couldn't possibly have good service from an ISP.
Don't most ISPs sell good service at a premium? I think that was the entire point of having poor service in the first place. The only other reason I could imagine would be to drive customers to the competition, and that doesn't seem to make sense from a business point of view.
I have no imagination, so I have no idea what we might get in the future if we actually had the infrastructure to support it.
I can come up with a couple of additional usages for some
A modern OS is a multi-user system; imagine if each user could get their own IP address. You could allow users to use privileged port numbers on their own IP address, and all port numbers on their IP address would be protected from use by other users. You could do this by responding to neighbor discovery for as many IPs in your link prefix as you have users on the node. But a more secure and more efficient approach would be to route a prefix to each node.
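A minimal sketch of the per-user idea (the function name is made up for illustration; `::1` is used so it runs anywhere, whereas in the scheme above each user would bind to their own routed address):

```python
import socket

# Sketch: a per-user service binds only to one specific IPv6 address,
# leaving the same ports free on every other address on the node.
# ::1 stands in for a user's own address so the example runs anywhere.
def bind_user_service(addr: str) -> int:
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.bind((addr, 0))              # port 0: let the OS pick a free port
    s.listen(1)
    port = s.getsockname()[1]
    s.close()
    return port

port = bind_user_service("::1")
```

With a routed prefix per node, the kernel could additionally enforce that each user only binds within "their" address, which is the protection described above.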
a prefix that just got compressed might get split quickly, and vice versa
There is no need to combine the routes if there are still free entries in the CAM. Once the CAM is full and another entry needs to be inserted, the pair which has been a candidate for combining for the longest time can then be updated. That algorithm would keep the number of updates down.
However, as the number of routes approaches the limit of what can be handled even with combination of routes, the frequency of updates needing to combine and split entries will go up. It may be that they are already doing this; some sources say the problem did cause reduced performance, which would be consistent with such behavior.
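The lazy-combining idea could be sketched like this (all names hypothetical; a real implementation would operate on TCAM entries rather than Python dicts, and would also handle stale candidates and splitting):

```python
import ipaddress
from collections import OrderedDict

# Sketch: keep routes uncombined while there is room; when the CAM is
# full, merge the pair that has been a combine candidate the longest.
class LazyCam:
    def __init__(self, capacity):
        self.capacity = capacity
        self.routes = {}                 # prefix -> next hop
        self.candidates = OrderedDict()  # supernet -> sibling pair, oldest first

    def _scan(self, prefix):
        # A sibling pair sharing a next hop is a combine candidate.
        if prefix.prefixlen == 0:
            return
        parent = prefix.supernet()
        sibs = [p for p in parent.subnets() if p in self.routes]
        if len(sibs) == 2 and self.routes[sibs[0]] == self.routes[sibs[1]]:
            self.candidates.setdefault(parent, tuple(sibs))

    def insert(self, prefix, nexthop):
        if len(self.routes) >= self.capacity and self.candidates:
            parent, (a, b) = self.candidates.popitem(last=False)
            hop = self.routes.pop(a)
            self.routes.pop(b)
            self.routes[parent] = hop    # two entries become one, freeing a slot
            self._scan(parent)
        self.routes[prefix] = nexthop
        self._scan(prefix)
```

For example, after inserting `10.0.0.0/24` and `10.0.1.0/24` with the same next hop into a two-entry table, a third insert forces them to collapse into `10.0.0.0/23`, and only then does an update touch the table.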
all Comcast needed to do was write "56" in their config files rather than "60"...
One has got to wonder if that's how it happened. Did some admin arbitrarily decide to write 60 in a configuration file, where he could/should have written 56, and then that was how it was going to be? Or did a lot of bean counters get together and decide on a policy (possibly not even based on real data), and then admins had to implement it like that without asking questions.
But that's not what we should be targeting. We should be targeting "enough for pretty much everybody", and "for the foreseeable future" -- including for any new, fun things that become possible because of easily-available address space.
Even in many areas where there is tough competition among ISPs, it is hard to find even one trying to capture those customers who want IPv6. That's how bad it looks today. And that's why I would happily take a
I can't yet imagine what I would use more than a
Next, IPv6 addresses are of course 4 times larger than IPv4 addresses. Even if your IPv6 routing table has 5 times fewer entries, you're not getting a 5 times saving in memory. You're only getting a 5/4 times saving, i.e. tables that are 80% of the IPv4 size - nowhere near as dramatic.
In IPv4 all 32 bits are used for routing, though on the backbone you tend to only accept
Either way, you only need twice as many bits in the CAM to handle an IPv6 route compared to IPv4. So what you call a 20% saving is more like a 60% saving. The picture is a bit more complicated, because two CAM entries at half the size is not the same as one of the full size. So you may have to decide at design time, how you are going to use that CAM.
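Spelling out the arithmetic behind that claim as a quick sanity check:

```python
# Assumption from the discussion above: IPv6 needs ~5x fewer entries,
# and each entry needs 2x the CAM bits (64 routed bits vs. 32 for IPv4).
entries_ratio = 1 / 5
bits_ratio = 64 / 32
cam_ratio = entries_ratio * bits_ratio
print(f"IPv6 CAM usage: {cam_ratio:.0%} of IPv4")  # 40%, i.e. a 60% saving
```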
Routing tables growing with the size of the network, in terms of # of entries - even if not at all fragmented.
I'd love to take part in solving that problem. Any realistic solution is going to start with a migration to IPv6. And I don't see how we could expect the solution to be deployed any faster, so if we start now, we could probably have it in production by 2040.
it is possible that IPv6 is actually too small to be able to solve routing scalability.
That algorithm has a major drawback. The address of a node depends on which links are up and which are not. You'd have to renumber your networks and update DNS every time a link change somewhere causes your address to change. If we assume that issue can be fixed, it doesn't really imply that addresses would have to be larger.
The algorithm in the paper assigns two identifiers to each node. The first one could very well be the IPv6 address assigned to the node. The second address is computed based on the first address and the structure of the network. However, their routing looks awfully similar to source routing. So really the solution might just be to make source routing work.
I can think of a couple of other reasons to consider IPv6 addresses to be too short. That paper isn't one.
Teredo and 6to4 are two "automatic" tunnel protocols. Both embed IPv4 addresses inside IPv6 addresses. Due to the use of NAT, Teredo needs to embed two IPv4 addresses and a port number inside the IPv6 address. That doesn't leave room for a site-level-aggregator or host part. If you wanted one unified protocol which could replace both Teredo and 6to4, you'd need at least 192 bits in the IPv6 address.
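The bit budget becomes obvious when you unpack a Teredo address (layout per RFC 4380: 32-bit prefix, 32-bit server IPv4, 16-bit flags, 16-bit port XORed with all-ones, 32-bit client IPv4 XORed with all-ones; the function name is just for illustration, and the example address is the commonly cited documentation one):

```python
import ipaddress

# All 128 bits are spoken for: 32 prefix + 32 server + 16 flags
# + 16 obfuscated port + 32 obfuscated client = 128.
def teredo_fields(addr: str):
    n = int(ipaddress.IPv6Address(addr))
    server = ipaddress.IPv4Address((n >> 64) & 0xFFFFFFFF)
    port = ((n >> 32) & 0xFFFF) ^ 0xFFFF
    client = ipaddress.IPv4Address((n & 0xFFFFFFFF) ^ 0xFFFFFFFF)
    return server, port, client

server, port, client = teredo_fields("2001:0:4136:e378:8000:63bf:3fff:fdd2")
print(server, port, client)   # 65.54.227.120 40000 192.0.2.45
```

Since the address is already full, there is simply nowhere to put a subnet or host part, which is why a unified 6to4/Teredo replacement would need more than 128 bits.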
After IPv6 showed up, people realized that it is sometimes convenient to embed cryptographic information inside the IP address. That was unthinkable with IPv4. With IPv6 it is doable, but you have to choose cryptographic primitives that are not exactly state of the art, due to 128 bits being a bit short for cryptographic values, and not all of them even being available for that purpose.
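A heavily simplified sketch of the idea (the real Cryptographically Generated Addresses scheme in RFC 3972 also hashes in a modifier, the prefix, a collision count and a "sec" parameter; this just shows why the bits are scarce):

```python
import hashlib
import ipaddress

# Only the 64 interface-ID bits are available for the hash, and the
# u/g bits are reserved on top of that - short by modern crypto standards.
def crypto_interface_id(pubkey: bytes, prefix: str) -> ipaddress.IPv6Address:
    digest = hashlib.sha1(pubkey).digest()
    iid = int.from_bytes(digest[:8], "big") & 0xFCFF_FFFF_FFFF_FFFF  # clear u/g bits
    net = int(ipaddress.IPv6Network(prefix).network_address)
    return ipaddress.IPv6Address(net | iid)

addr = crypto_interface_id(b"example public key bytes", "2001:db8::/64")
```

Anyone can verify the binding by recomputing the hash from the claimed public key, but an attacker only has to brute-force at most 62 bits to find a colliding key, which is the weakness the limited address size imposes.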
On the one hand it saves memory, on the other it adds complexity.
At least it would be complexity in software. That's better than complexity in hardware. If that additional complexity were the only way to keep some already deployed hardware functioning, it might be worth it.
You won't *keep* that nice clean space. The same processes that led to IPv4 fragmentation, ex space, will start to affect IPv6
With address shortage being the main reason for fragmentation, that doesn't sound so likely.
This will not exactly lead to growth in the number of announcements, but it won't lead to a reduction either. Giving incentives to renumber after a merger may help a bit. At least there should be enough addresses that the company can pick which of the two blocks it wants to renumber into, and that block can be extended as needed.
ASes eventually running out of bits in their prefix
Bits are set aside to allow them to grow - for now at least.
that's only 16 in a
/48, a lot but not impossible to exhaust either, a /56 would be even easier to exhaust
Don't all the RIRs hand out addresses in
Ok, you've got a 5 fold linear reduction compared to IPv4. However it still doesn't fix the problem that current Internet routing leads to O(N) routing tables at each AS
That is true. This problem is going to get even worse if we want end user sites to have access to dual homing. Fixing this is going to require some fundamental change to how routing is done.
But if IPv6 gets deployed soon, the reduction in routing table size should buy us some time, that can be used to come up with a more scalable solution, which will allow every site to be dual homed. But of course things will have to break if ISPs will keep waiting for breakage to happen before they start deploying scalable solutions.
That 5x linear reduction is, ultimately, a barely noticeable blip
If the tables grow with each generation of hardware, a 5x reduction can last a while. Not forever, but long enough that a long term solution can be deployed, if ISPs want to.
IPv6 doesn't fix routing table growth problems
Not permanently, but IPv6 can help now, and IPv4 can be expected to get worse as allocations get split and traded. And throwing bigger hardware at the problem may help with this one issue regarding IPv4, but there are other problems with IPv4.
If IPv4 is fragmented it's primarily because of "short-sighted" initial allocations
Those allocations should be the least fragmented ones around, so blaming them for fragmentation is a bit of a stretch. As far as short-sighted goes, it is not clear to me that IP stacks at the time would have supported doing the allocations differently. Moreover, two decades ago it was already clear that IPv4 wasn't viable as a long-term solution. Should we really blame problems we have now on decisions made back then? I'd say if any decisions were to be blamed, it would be those causing IPv6 deployments to get postponed.
The other kind of routing table compaction is due to serendipitous next-hop sharing for ranges of prefixes. E.g., prefixes for European networks are more likely to be assigned by RIPE, so if you're "far" away from Europe in network terms, then there's a better chance that there'll be a number of adjacent prefixes for European networks that will share nexthops and can be compressed, etc.
It is true that this approach should reduce the number of table entries you need to put in the CAM. But since its efficiency depends on where you are, the expected outcome would be that failures would be spread out over time rather than happening all at once. Is this sort of compression widespread, and if it is, then why the simultaneous failures? Is this a matter of the rate of failures being tied to the rate at which the number of announcements grows? If so, it wasn't one particular announcement that pushed the Internet over the limit, but rather that as 15k new announcements made it around the world, about 2.5k of them each happened to push some AS over the limit.
To "fix" this problem, you need routing tables that grow more slowly than linearly, with respect to the size of the network. To the best of my knowledge of the theory, this is impossible with any form of guaranteed-shortest-path routing.
Maybe more work needs to go into the principle of keeping the intelligence at the end-points rather than in the core. Maybe source routing is the way to go, we just need to figure out how to make it secure and not require prohibitively large packet headers.
I'll admit to being willfully ignorant of IPv6 other than seeing it as enormously more complicated than IPv4
IPv6 is slightly simpler than IPv4. Some areas got slightly simplified, other areas are just slightly different. Starting from no knowledge in the field, you can learn IPv6 just as fast as you can learn IPv4.
All the complexity people are talking about comes from not deploying IPv6. Had IPv6 been deployed soon enough to avoid NAT and tunnels, it would all have been very simple.
Tunnels are complicated. And many of the people needing IPv6 are currently forced to use tunnels, because ISPs have decided to postpone deployment of IPv6 beyond the reasonable. As if this wasn't bad enough, the presence of NAT makes tunnels even more complicated. Moreover, looking at tunnels like 6to4 and Teredo, one notices that there aren't even enough bits in the IPv6 address to make a unified tunnel protocol that could work in place of both 6to4 and Teredo. The reason Teredo was designed in the first place was NAT getting in the way of 6to4.
The tunnel technology closest to native IPv6 is probably 6rd. It turns out fragmentation of IPv4 address space (caused by the shortage of IPv4 addresses) makes 6rd deployments more problematic.
There are also some people who have spent so much brain capacity on grasping NAT, that they don't have room left over for anything new.
All of this could have been avoided if people had deployed native dual stack instead of NAT. And everything would have been simpler and cheaper, because it would have been completed before the network grew so big.
trying to solve too many problems at once.
It doesn't. It increases the size of the addresses and fixes a few other small design mistakes in the original IPv4 protocol, that's it.
I sometimes wonder if maybe IPv6 didn't appear so complicated and different that adoption might have been increased.
It only appears complicated to those who want an excuse not to deploy it.
Couldn't they just have added a couple of extra bytes to IPv4 to come up with something that worked like IPv4?
How many bytes you add doesn't change the complexity. Adding a couple and later finding out you didn't add enough and have to do the entire thing over again would have been an utter failure.
The bits need to be split into a network part and a host part. Having the boundary between those two parts move around is complicated; having the boundary at a fixed position in the address is simpler. In IPv4 the question of where the boundary was got more and more complicated over time. In IPv6 each part was designed to be large enough that the boundary could be fixed.
Calculations showed 45 bits should be sufficient for the part before the boundary and 49 should be sufficient after. For simplicity both numbers got rounded up to a power of two, that's how we ended up with 64 + 64.
If you look at the IPv4 and IPv6 headers, you'll see that other than the increase in address size, a few fields got removed because they hadn't been a good idea in the first place. This reduced the size of those other fields from 12 bytes to 8 bytes. A few fields got renamed because the name they had originally been assigned didn't match reality anymore.
Those people who consider IPv6 to be complicated don't even remember all of those fields in the IPv4 header, so the changes wouldn't be a big deal to them. Personally I can remember the IPv6 header fields well enough to be able to write out a valid IPv6 packet by hand, but I can't remember the IPv4 header fields.
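To illustrate how little there is to remember, the whole fixed IPv6 header can be written out in a few lines (a sketch with placeholder values, layout per RFC 8200):

```python
import struct

# The fixed IPv6 header: 8 bytes of non-address fields plus two
# 16-byte addresses = 40 bytes total.
version, traffic_class, flow_label = 6, 0, 0
payload_length, next_header, hop_limit = 0, 59, 64   # 59 = "no next header"
first_word = (version << 28) | (traffic_class << 20) | flow_label
src = dst = bytes(16)                                # :: as a placeholder address
header = struct.pack("!IHBB", first_word, payload_length,
                     next_header, hop_limit) + src + dst
print(len(header), len(header) - 32)                 # 40 total, 8 non-address bytes
```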
I also wonder about an addressing scheme like IPX, where a single network address covers an entire broadcast domain and node addresses are MAC addresses plus the network address.
This would be a slightly larger change than the actual difference between IPv4 and IPv6. But even if this idea is better, you'd not be able to turn that idea into a complete specification in time. You are two decades too late for that.
broadcast domains can scale arbitrarily large without needing to renumber
You cannot scale broadcast domains to arbitrary size. As you try to scale one up, the number of broadcast packets increases with the number of hosts, and you'd end up with every host getting flooded with broadcast packets. IPv6 has a foundation for getting rid of this problem by using multicast addresses for neighbor discovery instead of broadcast like ARP did. How far this can scale is yet to be seen.
Since node addresses are locally determined, ISPs would need to only assign a network address which would allow for basically unlimited public network addresses to each subscriber.
This statement is equally true of IPv6, if you deploy IPv6 as recommended. The customer gets 16 bits for subnetting and 64 bits for each subnet (of which 48 are used for the MAC address).
Mom and pop shops who can't afford their own address space also can't afford their own ASN.
What would break if you multihomed your own address space without having your own ASN? Each of the ISPs you connect to have an ASN, which can be used to announce your address space.
Further, if IPv4 was fragmented
IPv4 is fragmented.
then it'd compress better than IPv6 - there would be more IPv4 prefixes going to the same destination.
IPv4 prefixes going to the same destination only compress well, if they are neighboring prefixes. If no two neighboring prefixes go to the same destination, then it doesn't matter how many prefixes go to the same destination, it still won't compress at all.
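The stdlib `ipaddress` module demonstrates the difference directly: adjacent prefixes collapse into one entry, while scattered prefixes stay separate no matter how many share a destination.

```python
import ipaddress

# Two adjacent /25s sharing a next hop collapse into a single /24...
adjacent = [ipaddress.ip_network("203.0.113.0/25"),
            ipaddress.ip_network("203.0.113.128/25")]
# ...but non-adjacent prefixes don't collapse at all.
scattered = [ipaddress.ip_network("198.51.100.0/24"),
             ipaddress.ip_network("203.0.113.0/24")]
print(list(ipaddress.collapse_addresses(adjacent)))   # [IPv4Network('203.0.113.0/24')]
print(list(ipaddress.collapse_addresses(scattered)))  # both stay as-is
```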
In fact I can only see IPv6 making things worse in that regard because tons more address space means that more AS assignments would be easy to do.
In reality it works the other way around.
With IPv4 there is a shortage of addresses, so ISPs haven't been getting extremely large blocks. They have been getting blocks just large enough to get by for another year, then they could get another block. Renumbering from multiple smaller blocks into a larger block isn't an option for IPv4, because there isn't enough available address space to shift things around.
With IPv6 an ISP can get a single block, which is large enough for years to come. And the address space around it is being kept free by the RIR, such that should the ISP need more space, their existing block can simply be made larger.
This means an ISP that has 20 different IPv4 blocks announced individually could support the same number of customers with a single IPv6 block. On average, each AS announcing IPv4 space announces five times as many IPv4 prefixes as the number of IPv6 prefixes announced by the ASes announcing IPv6 space.
This is all due to the HD-ratio of IPv4 having been pushed way above the reasonable threshold. IPv6 is designed to work with an HD-ratio of only 80-90%.
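The HD-ratio (RFC 3194) is just a log ratio, easy to compute; the input numbers below are illustrative, not real registry data:

```python
import math

# HD-ratio: log(addresses actually used) / log(addresses available).
# Above roughly 0.87, an addressing plan is considered painfully dense.
def hd_ratio(used: int, total_bits: int) -> float:
    return math.log(used) / math.log(2 ** total_bits)

# Illustrative: 3 billion assigned IPv4 addresses out of 2^32, vs. a
# site using 200 of the 65536 subnets in its /48.
print(round(hd_ratio(3_000_000_000, 32), 3))   # ~0.984: way past the threshold
print(round(hd_ratio(200, 16), 3))             # ~0.478: lots of headroom
```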
Your "solution" is a bunch of horrible hacks that don't even work with DNSSEC.
That didn't stop DNS64+NAT64 deployments. DNSSEC is not widely deployed, which is why IPv6 transition mechanisms that are incompatible with DNSSEC would still be usable. DNSSEC also hasn't solved the amplification attacks problem yet. I'd love to see DNSSEC deployed, but I am personally not going to put much effort into DNSSEC until the day I no longer have to worry about IPv4.
Essentially you have "NAT" functioning at the DHCP+router+DNS level, all conspiring to mangle packets in concert.
Turns out it still works better than carrier grade NAT.
Theoretically, any block of IPv4 addresses outside of the local subnet could be used
Squatting on global unicast space would not be a good idea at this time. You still want to be able to communicate with the existing IPv4 backbone. Once the IPv4 backbone is ready to be deprecated, such a system could start reusing global unicast space.
Until then, RFC 1918 addresses do work just fine for the purpose. RFC 6598 addresses might be better, I haven't tested yet. Addresses from the reserved class E address space won't work well for this purpose. I tested and found that it works with some systems, but other systems refuse to communicate with peers on class E addresses.
a pool of 131072 ipv4 addresses, plenty for most use cases.
Depends on how large a network you deploy it to. For a single broadband connection, that size is plenty. But I don't think it would be sufficient if you want to cover an entire ISP. If you can find me a network that wants to deploy it, I'll tell you how far that pool size scales.
I doubt most people will have that many TCP connections at once.
In the end, the number of TCP connections won't even matter.
Since 127 is not used for local networks, it is however the best first choice.
127.0.0.0/8 has special meaning in practically every IPv4 stack. Trying to redefine that won't work well.