Slashdot needs a "-1: Generalising and dismissing others on irrelevant physical attributes" modifier.
“Shocking revelations have come out today that the NSA is using the same kind of computers and Internet technologies as hackers, criminals and even paedophiles! The NSA are known to use PCs and operating systems such as Microsoft Windows - a paedophile's favourite - and even Linux - beloved by hackers. The NSA has even spent money on making Linux more secure, which may help thwart law enforcement from investigating computers used by criminals. Further reports suggest the NSA also regularly use TCP in a variety of ways. TCP is known to be heavily deployed by many criminals worldwide. We contacted the NSA and asked them to comment, but their spokesperson responded only with a sneering "Oh for fuck's sake" before hanging up the phone.”
Well, not just software costs. You may have to update the CAM more often. E.g. a change of nexthop for one prefix might demand that a compressed CAM entry has to be split up into several entries. Alternatively, it might mean several CAM entries can be consolidated into one. Next, you don't know how often that will happen - a prefix that just got compressed might get split quickly, and vice versa, or it might go back and forth a lot. So you could be doing a lot more updates to the CAM than otherwise. Whether that matters, I really don't know.
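To make the split/consolidation churn concrete, here's a toy sketch in Python (the table layout and nexthop names are made up for illustration; real CAM management is nothing this simple). It uses the stdlib's `collapse_addresses` to stand in for the compression step, and shows how changing the nexthop of one /25 forces a compressed /24 entry to be split back into two entries:

```python
import ipaddress

def cam_entries(rib):
    """Group prefixes by nexthop, then collapse each group into the
    minimal set of covering prefixes (our stand-in for CAM compression)."""
    entries = []
    for nh in set(rib.values()):
        nets = [p for p, n in rib.items() if n == nh]
        entries += [(net, nh) for net in ipaddress.collapse_addresses(nets)]
    return entries

# Two adjacent /25s sharing nexthop "A" compress to a single /24 entry.
rib = {
    ipaddress.ip_network("192.0.2.0/25"): "A",
    ipaddress.ip_network("192.0.2.128/25"): "A",
}
before = cam_entries(rib)                           # one entry: (192.0.2.0/24, "A")

# A nexthop change for just one /25 means the /24 must split again.
rib[ipaddress.ip_network("192.0.2.128/25")] = "B"
after = cam_entries(rib)                            # two /25 entries
```

The point is the update cost, not the memory: a single-prefix change can turn one CAM write into a delete plus several inserts, and a later change can merge them back.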
Ah, yes, of course, for the CAMs (or any other relevant longest-match index) you only need to store 64 bits at worst. Still, it's not the 5-fold saving.
The Cowen algorithm: her original paper encodes landmark output ports in the label. That's not practical because of updating. However, with some added restrictions and at the cost of a slight amount of generality (e.g. not being able to work for every possible graph, such as pure star/hub-spoke graphs), you can eliminate that and have the addresses just be (landmark,node). You can do this by having nodes not build local clusters that are overly large, which allows landmarks to also maintain local cluster routing tables - eliminating the output-port hack.
The (landmark,node) association need not change too often. Outages of links in the region between landmark and destination can be dealt with as they are today with routing - the scheme has full shortest-path routing in a region around each node. No need for the label to change. Outages that affect the path between the source and the landmark also similarly are dealt with like normal routing today. The one issue would be if there is a complete loss of a local cluster shortest-path route from the landmark to the destination. Then packets would disappear.
The end-node can at least be informed of this quite quickly, through the local cluster routing protocol (which can be a slightly modified BGP). Which is better than BGP today. Such issues of landmark redundancy, i.e. having associations with multiple landmarks, are perhaps better solved at a layer above the network layer. The theory shows that it is impossible to have both sub-linear routing tables AND full, global, shortest-path routing for all possible networks.
Practice suggests that those who require redundancy at scale already implement it above the network layer. I.e. it is already good practice to locate redundant services on different prefixes precisely to guard against routing fsck-ups and failures. That suggests multi-homing in the "global prefix for one prefix" sense is not something that you should make too many other compromises for in any new routing architecture. Even with IPv4, which does offer multi-homing to all, BGP multi-homing is not reliable enough to rely on. So it's probably better solved at the transport layer or higher, mediated at end nodes, rather than complicating or compromising routing for it, as networks will still go and implement higher-layer redundancy anyway.
Indeed, by providing 2-way signalling in the routing layer, we can make the higher-layer redundancy solutions much better. Today if you advertise a prefix, you have no idea who has and has not received it, beyond your immediate neighbours. Even for your immediate neighbours, you still don't know if they have accepted the route. You can't really improve this in a routing system where all prefixes have global visibility; the communication and state costs would likely be unacceptable. However, in a Cowen Landmark routing scheme, we could at least provide an advertising node knowledge of which landmark nodes have working local cluster connectivity back to it. That's made possible because the scope is restricted, no longer global.
Note that the routing isn't source routing. Just because the address contains (landmark,node) doesn't mean the packet goes via the landmark. As a packet gets near to a landmark it may hit a node that already has the destination in its local cluster routing region, and so the packet goes shortest-path from there to the destination - potentially skipping the landmark. It's more two-stage routing, but each stage is shortest-path, per-hop routing. The 1st stage is routing the packet towards the landmark, the 2nd is when it hits a node with the destination in its local cluster (which, in the worst case, is the landmark).
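The two-stage, per-hop decision can be sketched roughly like this (table and port names are purely illustrative, not from any real implementation):

```python
# Sketch of the per-hop forwarding decision in two-stage, Cowen-style
# landmark routing. Addresses are (landmark, node) pairs.
def next_hop(dest, local_cluster_routes, landmark_routes):
    """Stage 2: if the destination node is already in this router's local
    cluster table, forward shortest-path straight to it (possibly skipping
    the landmark). Stage 1: otherwise, forward towards the landmark."""
    landmark, node = dest
    if node in local_cluster_routes:       # stage 2: local shortest path
        return local_cluster_routes[node]
    return landmark_routes[landmark]       # stage 1: head for the landmark

# A router that already has "d" in its local cluster ignores the landmark:
hop_near = next_hop(("L1", "d"), {"d": "port3"}, {"L1": "port1"})
# A distant router only knows how to reach the landmark:
hop_far = next_hop(("L1", "d"), {}, {"L1": "port1"})
```

Note both branches are ordinary hop-by-hop shortest-path forwarding; nothing about the label forces the packet through the landmark itself.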
On address sizes, that's a very interesting point about Teredo and 6to4. Yet another reason why IPv6 is too small.
If you're interested in this stuff, I'm trying (desperately
I don't know if there are vendors who use this kind of compression to save CAM memory to be honest. On the one hand it saves memory, on the other it adds complexity.
On the last point, yes, it'd be good to see more work on this.
Well, let's agree to disagree on point 1. I do agree with you though that IPv6 will be less fragmented than IPv4, though I also think there are processes besides de-aggregation in the face of address space pressures that cause fragmentation, and IPv6 will face those pressures too. IPv6 needs to be seriously used first, and it may also require time.
Next, IPv6 addresses are of course 4 times larger than IPv4 addresses. Even if your IPv6 routing table has 5 times fewer entries, you're not getting a 5 times saving in memory. You're only getting a 5/4 times saving, i.e. tables that are 80% the size of the IPv4 ones - nowhere near as dramatic.
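The arithmetic, as a sketch (the table size is illustrative, and it assumes table memory scales simply as entries times address bits, which glosses over real CAM layouts):

```python
# 5x fewer entries, but each entry keyed by an address 4x as wide
# (128 bits vs 32 bits).
v4_entries, v4_bits = 500_000, 32           # illustrative IPv4 table
v6_entries, v6_bits = v4_entries // 5, 128  # 5x fewer entries, 4x wider keys
ratio = (v6_entries * v6_bits) / (v4_entries * v4_bits)
# ratio works out to 0.8: the IPv6 table is 80% the size of the IPv4 one
```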
I'd contend 2 is the real underlying problem: routing tables growing with the size of the network, in terms of number of entries - even if not at all fragmented. In terms of overall size, it's O(NlogN); however, given we're using a fixed-length address label, that logN factor makes itself known in quite big jumps, as illustrated in the previous paragraph. That 20% saving will be eaten extremely quickly if the Internet keeps growing at a super-linear pace. Given so much of the world's population isn't yet online, there's every reason to think the Internet still has plenty left to grow. Even in the developed world, there's no reason to think the amount of address space used per person will not grow dramatically. The number of network-enabled devices each of us owns just keeps growing. The "Internet of Things" is the current buzzword, looking at network-enabling many small devices. Granted, that won't directly increase pressure on routable bits where a site upgrades to v6 from an existing v4 connection, e.g. a person's home, however there are surely many use-cases that involve new distinct locations coming online (e.g. cars?).
IPv6 is just neutral on the routing scalability question. Reduced fragmentation seems a trivial saving, at least to me.
IPv4 has been around a lot longer, and has had a lot more real use and legacy concerns. Even if you got that 5-fold reduction in routing table sizes by switching everyone over to IPv6, then:
1. You won't *keep* that nice clean space. The same processes that led to IPv4 fragmentation, space exhaustion aside, will start to affect IPv6: mergers; ASes eventually running out of bits in their prefix, given enough time (and remember, we're talking routable bits - that's only 16 in a
2. Say 1 is wrong, and v6 stays clean. OK, you've got a 5-fold linear reduction compared to IPv4. However, it still doesn't fix the problem that current Internet routing leads to O(N) routing tables at each AS in terms of number of entries (O(NlogN) in terms of total size), where N is the size of the Internet and N keeps growing at a fast rate - even if measured in number of ASes rather than number of prefixes. Internet growth to date has certainly been supra-linear, potentially exponential, and we've still got much of Africa, China and India yet to become rich enough to start using address space like we do in the developed world. That 5x linear reduction is, ultimately, a barely noticeable blip in the face of continued supra-linear, perhaps exponential growth of the Internet.
IPv6 doesn't fix routing table growth problems, at least not in terms of providing a mode change in how routing table sizes grow with respect to the overall network, because IPv6 does nothing to fundamentally change how routing is done.
If IPv4 is fragmented it's primarily because of "short-sighted" initial allocations, the tight space of IPv4, growth and time. I.e. some network got a prefix covering X space, and ended up needing more space eventually. Enough time had passed that it was no longer possible to get a prefix covering both their original space and the new space. So now one AS is using 2 non-contiguous prefixes for its network, both of which it has to advertise. Both prefixes go to the "same" place, as far as Internet routing is concerned.
This problem is less acute in IPv6, because it's been around for far less time and the address space is much bigger, so the pressures of compaction and growth aren't there like in IPv4. So, on this factor, we should expect IPv4 to compress more than IPv6.
The other kind of routing table compaction is due to serendipitous next-hop sharing for ranges of prefixes. E.g. prefixes for European networks are more likely to be assigned by RIPE, so if you're "far" away from Europe in network terms, then there's a better chance that there'll be a number of adjacent prefixes for European networks that will share nexthops and can be compressed. Personally, I don't see why there'd be any huge difference between IPv6 and IPv4. IPv4 does have legacy allocations, pre-RIR, where there might not be these prefix-range to network topology correlations, so perhaps it'd compress ever so slightly less than IPv6.
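A quick sketch of that kind of serendipitous compression, using Python's stdlib (the prefixes and the "peer-west" nexthop are invented; assume all four routes were learned pointing at the same peer):

```python
import ipaddress

# Four adjacent /34s, all happening to share the same nexthop because
# they're all "far away" in the same direction.
routes = {
    ipaddress.ip_network("2001:db8:0000::/34"): "peer-west",
    ipaddress.ip_network("2001:db8:4000::/34"): "peer-west",
    ipaddress.ip_network("2001:db8:8000::/34"): "peer-west",
    ipaddress.ip_network("2001:db8:c000::/34"): "peer-west",
}

# Since they share a nexthop, the contiguous range collapses to one
# covering entry (in a real table you'd group by nexthop first).
merged = list(ipaddress.collapse_addresses(routes))
# the four /34s become a single 2001:db8::/32 entry
```

This only helps by a constant factor, as the theory point below says: however lucky the adjacency is, you still hold an entry set that grows linearly with the number of destinations.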
Still though, my understanding of the theory behind this is that this type of compression can only give linear savings in number of routing table entries. Which means it can't ever "fix" the problem, in terms of fundamentally changing the mode of growth of routing tables in response to network growth. To "fix" this problem, you need routing tables that grow more slowly than linearly, with respect to the size of the network. To the best of my knowledge of the theory, this is impossible with any form of guaranteed-shortest-path routing.
This isn't a huge win. It can provide only a constant improvement. Further, if IPv4 was fragmented, then it'd compress better than IPv6 - there would be more IPv4 prefixes going to the same destination.
People looked into geographical routing for IPv6. It never went anywhere though. Today, IPv6 address assignment and routing works pretty much like IPv4.
See my reply to your sibling comment. Yes, people looked at geographical assignment and routing. No, this wasn't ever rolled out for IPv6.
Geographical routing could have worked well in some contexts, e.g. in regulated Internet connectivity markets, where some monopoly carrier controls end-access and is required to provide wholesale access to other, virtual ISPs. This is the case in at least several European markets, where the monopoly carrier is the former state telco (Ireland, UK). With geographical routing, my packets to another host on, say, the same telephone exchange as mine could have taken a direct route. Sadly, in at least both those markets, the monopoly carrier instead encapsulates packets and delivers them to the virtual ISPs, and the virtual ISPs have to exchange the packets - meaning packets to my next-door neighbour might have to go hundreds of miles to my virtual ISP, then a further distance to a large Internet exchange, then back to their virtual ISP, then hundreds of miles back to my neighbour. The packets in the flow to my geographical neighbour pass by each other in the same switch near us both, while taking a detour of hundreds of miles. Very inefficient.
Generally though, geographical routing would have been very very hard to make work. It is simply not in most ISP or network operators' self-interests.
Possibly better would be using topographical-landmarks (i.e. nodes or ASes important because of some property of their place in the network - not geographical) and using those to implement a hierarchy, while still giving routing flexibility to not have to strictly follow hierarchies. E.g. some scheme based on Lenore Cowen's compact routing work. The issue there is in making it practical.
There's no good reason to think there'll be a significant improvement in the HD (host density) ratio with IPv6, or significantly fewer prefixes advertised.
The issue is orthogonal to IPv6; it's fundamentally about how Internet routing is organised today. No hierarchy, and all prefixes must have global visibility. Hierarchical routing of the 90s has a bit of a bad name, and support for aggregation in BGP has been deprecated. However, there are things like topographical-landmark routing, which improve on the deficiencies of hierarchical routing. These would allow the Internet to grow without routing tables everywhere having to grow in direct proportion. Instead, routing tables would barely grow at all, in relative terms, even as the Internet grew.
This particular problem is due to the way routing on the Internet works, where generally every router must hold routes for every prefix announced on the Internet. That system doesn't change with IPv6. Now, there might be fewer IPv6 prefixes at this time than IPv4, but intrinsically there's nothing about IPv6 that addresses the problem that all prefixes must have global visibility.
To fix this kind of problem requires changing how routing is done.
Thank you for your comment, with its impressive nitpicking that "highly intelligent" should be "relatively intelligent". Also very impressive how you make an argument in your comment that these animals barely rank alongside human children, and back it up with an example of how these animals *outperform* children.
BTW, did you know that Chimpanzees can perform basic arithmetic much *faster* than pretty much *any* human, child or adult? Does that mean we barely measure up to chimpanzees?
There are numerous examples of highly advanced behaviours in Orcas, e.g. hunting strategies that require significant forward planning and close co-operation to pull off, such as washing seals off ice floes by swimming in tight formation to create a large bow wave. They also have complex social structures and behaviours, as do other dolphins and most whales generally. Mothers have been seen teaching calves hunting skills; e.g. in pods that beach-hunt, mothers have been seen "instructing" calves in how to do it, even pushing them toward the beach. This is clear evidence of culture - a very high-order behaviour. There is also strong evidence that Orcas have languages, differing significantly between different groupings.
In "Blackfish" it was reported that a pod of Orcas that had had calves taken before adopted a strategy to try to foil the hunters. They split up, with one group of adults swimming down one sound, breaching regularly to attract the attention of the hunters and divert them, while another group of mothers swam quietly with the calves down another sound (unfortunately, the hunters had a spotter aircraft). That story, if true, shows incredibly advanced planning, problem-solving and organisational abilities.
You could go on and on. There is, to my understanding, *ample* evidence that these are *highly* intelligent animals, used to living very social and inter-dependent lives. On the latter social aspect, their needs may even be much greater than ours.