
Comment Re:Secret Ballot? (Score 1) 480

How is that any different from your employer demanding your Facebook password or your private email history? If you can be fired for refusing either of those demands then sure, they could fire you for refusing to give them your voting ID. But I'd say that's a completely different issue, wouldn't you?

Comment Re:Secret Ballot? (Score 4, Interesting) 480

Voter shows ID to election worker. Worker checks a box. Voter reaches into a giant lottery box full of generated IDs and uses that ID to vote. Later the voter can inspect the blockchain, find his ID and verify that his vote went to the right candidates.

I'm not saying it's a better system but I think there are ways to keep voter anonymity while also allowing the public to audit the result.

Comment #JeSuisCharlie? (Score 5, Insightful) 1350

Maybe instead of representing solidarity with a silly hashtag it'd be better for us all to exercise free speech by posting a picture of Muhammad. Not an overly offensive picture either; a simple stick figure would do.

This craziness isn't going to stop until the media, and people in general, start standing up for the things we're always claiming to hold dear.

Comment Wrong threat maybe? (Score 1) 580

Maybe this has more to do with the threat of releasing more information "if their demands aren't met" than it does the threat of physical attacks? Maybe there really was some backroom discussion between Sony and the big theater chains to scrap the release because of this?

Or maybe not. It's probably just stupidity.

Comment Different kind of risk (Score 1) 151

Maybe there wasn't a legal risk that would have held up in court. What all that legal counsel evidently failed to mention is the very real threat of crippling litigation that, while ultimately unsuccessful, could still wipe you out in the process.

I guess that's one thing separating 'good' legal counsel from the 'best'. The former will stop at examining the laws; the latter will also examine all the ways the laws could be abused to achieve the same result.

Comment Re:just ask carriers. (Score 1) 248

because we couldn't possibly have good service from an ISP.

Don't most ISPs sell good service at a premium? I think that was the entire point of having poor service in the first place. The only other reason I could imagine would be to drive customers to competitors, and that doesn't seem to make sense from a business point of view.

I have no imagination, so I have no idea what we might get in the future if we actually had the infrastructure to support it.

I can come up with a couple of additional uses for some /64s. One /64 could be used to harden your recursive DNS resolver against poisoning. The 16-bit transaction ID in DNS is way too small. The entropy you can get from randomizing port numbers helps a lot, but you will still only get a total of 32 bits of entropy that way. Some have gone to great lengths to squeeze extra entropy into a DNS request, for example by mixing lower-case and upper-case letters in the domain name, but that doesn't give a lot of bits. If you allocate a /64 to the recursive DNS resolver, you can put 64 bits of entropy into the client IP, which instantly triples your entropy, almost for free.
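The numbers above can be checked with a quick back-of-envelope calculation (the field widths are the standard ones; the case-mixing trick is ignored since it contributes few bits, and the /64 case assumes the resolver can send queries from any source address in its own /64):

```python
# Back-of-envelope entropy math for the DNS-poisoning point above.

txid_bits = 16              # DNS transaction ID
port_bits = 16              # randomized UDP source port, at best
classic = txid_bits + port_bits
print(classic)              # 32 bits with port randomization alone

addr_bits = 64              # host part of a /64 used as the source address
with_prefix = classic + addr_bits
print(with_prefix)          # 96 bits with source-address randomization on top
```

An off-path attacker now has to guess the transaction ID, the port, and which of 2^64 source addresses the query came from.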

A modern OS is a multi-user system; imagine if each user could get their own IP address. You could allow users to use privileged port numbers on their own IP address, and all port numbers on that address would be protected from use by other users. You could do this by responding to neighbor discovery for as many IPs in your link prefix as you have users on the node, but a more secure and more efficient approach would be to route a prefix to each node.
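A minimal sketch of the per-user-address idea, with ::1 standing in for the user's own address (in the scheme described, each user would instead get one address out of the node's /64, or out of a prefix routed to the node):

```python
import socket

# ::1 is a stand-in here so the sketch runs anywhere with IPv6 loopback.
user_addr = "::1"

s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
# Binding to the user's own address (rather than the :: wildcard) means
# ports only need to be unique per user, and other users on the same
# node can't squat on "this user's" ports.
s.bind((user_addr, 0))          # port 0: let the kernel pick a free one
addr, port = s.getsockname()[:2]
print(addr, port)
s.close()
```

Enforcement would of course also need the kernel to restrict which users may bind which addresses; this only shows the addressing side.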

Comment Re:IPv6 won't fix this problem (Score 1) 248

a prefix that just got compressed might get split quickly, and vice versa

There is no need to combine the routes as long as there are still free entries in the CAM. Once the CAM is full and another entry needs to be inserted, the pair that has been a candidate for combining for the longest time can then be merged. That algorithm would keep the number of updates down.

However, as the number of routes approaches the limit of what can be handled even with combined routes, the frequency of updates needing to combine and split entries will go up. It may be that they are already doing this; some sources say the problem caused reduced performance, which would be consistent with such behavior.
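The lazy combining described above can be sketched as a toy model, with made-up 32-bit prefixes represented as (base, length) pairs: sibling prefixes sharing a next hop are only merged once the "CAM" is full, oldest merge candidate first.

```python
from collections import deque

CAM_SIZE = 4
table = {}            # (base, plen) -> next hop
candidates = deque()  # oldest-first queue of potentially mergeable pairs

def sibling(p):
    base, plen = p
    return (base ^ (1 << (32 - plen)), plen)

def note_candidate(p):
    s = sibling(p)
    if s in table and table[s] == table[p]:
        candidates.append((min(p, s), max(p, s)))

def insert(p, nexthop):
    if len(table) >= CAM_SIZE:
        # Full: merge the pair that has been combinable the longest.
        while candidates:
            a, b = candidates.popleft()
            if a in table and b in table and table[a] == table[b]:
                hop = table.pop(a)
                table.pop(b)
                merged = (min(a[0], b[0]), a[1] - 1)
                table[merged] = hop
                note_candidate(merged)
                break
    table[p] = nexthop
    note_candidate(p)

# Six adjacent /26s with the same next hop; merging only kicks in once
# all four CAM slots are taken.
for i in range(6):
    insert((i << 6, 26), "hopA")
print(sorted(table))
```

Until the table fills, no merges (and hence no CAM updates) happen at all; after that, each insertion costs at most one merge.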

Comment Re:just ask carriers. (Score 1) 248

all Comcast needed to do was write "56" in their config files rather than "60"...

One has got to wonder if that's how it happened. Did some admin arbitrarily decide to write 60 in a configuration file where he could/should have written 56, and then that was how it was going to be? Or did a lot of bean counters get together and decide on a policy (possibly not even based on real data), which admins then had to implement without asking questions?

But that's not what we should be targeting. We should be targeting "enough for pretty much everybody", and "for the foreseeable future" -- including for any new, fun things that become possible because of easily-available address space.

Even in many areas where there is tough competition among ISPs, it is hard to find even one trying to capture those customers who want IPv6. That's how bad it looks today, and that's why I would happily take a /60. Hopefully once IPv6 is the norm (which it likely will be before the end of the decade), ISPs will start competing on prefix lengths as well.

I can't yet imagine what I would use more than a /60 for. But if I get a /60, I might soon come up with ideas on how to use a /56. All it takes to get that competition among ISPs started is two people independently of each other coming up with something really cool you can do to put your entire /60 to use.

Comment Re:IPv6 would make the problem worse (Score 1) 248

Next, IPv6 addresses are of course 4 times larger than IPv4 addresses. Even if your IPv6 routing table has 5 times fewer entries, you're not getting a 5 times saving in memory. You're only getting a 5/4 times saving or tables that are 80% of the IPv4 - nowhere near as dramatic.

In IPv4 all 32 bits are used for routing, though on the backbone you tend to only accept /24s. In IPv6 the first 64 bits are used for routing, though on the backbone you tend to only accept /48s.

Either way, you only need twice as many bits in the CAM to handle an IPv6 route compared to an IPv4 route. So what you call a 20% saving is more like a 60% saving. The picture is a bit more complicated, because two CAM entries at half the size are not the same as one at the full size, so you may have to decide at design time how you are going to use that CAM.
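The arithmetic behind that 60% figure, with an illustrative table size and the 5x entry reduction taken from the parent post (entry widths follow the routed bits mentioned above: 32 for IPv4, the first 64 for IPv6):

```python
ipv4_entries = 500_000               # illustrative, not a measured figure
ipv6_entries = ipv4_entries // 5     # the assumed 5x entry reduction

ipv4_bits = ipv4_entries * 32        # CAM bits for the IPv4 table
ipv6_bits = ipv6_entries * 64        # IPv6 entries are twice as wide

ratio = ipv6_bits / ipv4_bits
print(ratio)                         # 0.4: 40% of the IPv4 table, a 60% saving
```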

Routing tables growing with the size of the network, in terms of # of entries - even if not at all fragmented.

I'd love to take part in solving that problem. Any realistic solution is going to start with a migration to IPv6. And I don't see how we could expect the solution to be deployed any faster, so if we start now, we could probably have it in production by 2040.

it is possible that IPv6 is actually too small to be able to solve routing scalability.

That algorithm has a major drawback: the address of a node depends on which links are up and which are not. You'd have to renumber your networks and update DNS every time a link change somewhere caused your address to change. Even if we assume that issue can be fixed, it doesn't really imply that addresses would have to be larger.

The algorithm in the paper assigns two identifiers to each node. The first one could very well be the IPv6 address assigned to the node. The second is computed from the first address and the structure of the network. However, their routing looks awfully similar to source routing, so really the solution might just be to make source routing work.

I can think of a couple of other reasons to consider IPv6 addresses too short. That paper isn't one of them.

Teredo and 6to4 are two "automatic" tunnel protocols. Both embed IPv4 addresses inside IPv6 addresses. Due to the use of NAT, Teredo needs to embed two IPv4 addresses and a port number inside the IPv6 address. That doesn't leave room for a site-level-aggregator or host part. If you wanted one unified protocol which could replace both Teredo and 6to4, you'd need at least 192 bits in the IPv6 address.
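The 192-bit figure can be tallied up field by field. The field names below follow the existing Teredo and 6to4 formats; the "unified" layout itself is just the thought experiment above, not a real protocol:

```python
# Bit budget for a hypothetical protocol that embeds everything Teredo
# embeds (server IPv4, flags, client IPv4 and port) plus everything
# 6to4 embeds (router IPv4, subnet, host part).

prefix      = 16   # a 6to4-style /16 well-known prefix
server_v4   = 32   # Teredo server / 6to4 router IPv4 address
client_v4   = 32   # client's public (NATted) IPv4 address
client_port = 16   # client's external UDP port (Teredo)
flags       = 16   # Teredo flags field
sla         = 16   # site-level aggregator (subnet), as in 6to4
host        = 64   # interface identifier

total = prefix + server_v4 + client_v4 + client_port + flags + sla + host
print(total)       # well past the 128 bits an IPv6 address offers
```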

After IPv6 showed up, people realized that it is sometimes convenient to embed cryptographic information inside the IP address. That was unthinkable with IPv4. With IPv6 it is doable, but you have to choose cryptographic primitives that are not exactly state of the art, because 128 bits is a bit short for cryptographic values, and not all of those bits are even available for that purpose.
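A sketch of the idea, loosely modeled on Cryptographically Generated Addresses: hash a public key and keep only as many bits as fit in the 64-bit interface identifier. (Real CGA, RFC 3972, adds modifiers, Sec levels and reserved bits, leaving even fewer than 64 usable hash bits; the key bytes here are a placeholder.)

```python
import hashlib

pubkey = b"-----placeholder public key material-----"
digest = hashlib.sha256(pubkey).digest()

# Only the interface-identifier half of the 128-bit address is available
# to carry the hash, so most of the digest is thrown away.
interface_id = digest[:8]
print(len(interface_id) * 8)   # 64 bits of cryptographic binding, at most
```

64 bits of hash output is well below what current practice considers comfortable for collision resistance, which is the "not exactly state of the art" constraint mentioned above.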

Comment Re:IPv6 won't fix this problem (Score 1) 248

On the one hand it saves memory, on the other it adds complexity.

At least it would be complexity in software, which is better than complexity in hardware. If that additional complexity were the only way to keep some already-deployed hardware functioning, it might be worth it.

Comment Re:IPv6 would make the problem worse (Score 1) 248

You won't *keep* that nice clean space. The same processes that led to IPv4 fragmentation, ex space, will start to affect IPv6

With address shortage being the main reason for fragmentation, that doesn't sound so likely.

Mergers

This will not exactly lead to growth in the number of announcements, but it won't lead to a reduction either. Giving incentives to renumber after a merger may help a bit. At the very least, there should be enough addresses that the company can pick which of the two blocks it wants to renumber into, and that block can be extended as needed.

ASes eventually running out of bits in their prefix

Bits are set aside to allow them to grow - for now at least.

that's only 16 in a /48, a lot but not impossible to exhaust either, a /56 would be even easier to exhaust

Don't all the RIRs hand out addresses in /32 or shorter blocks?

Ok, you've got a 5 fold linear reduction compared to IPv4. However it still doesn't fix the problem that current Internet routing leads to O(N) routing tables at each AS

That is true. This problem is going to get even worse if we want end user sites to have access to dual homing. Fixing this is going to require some fundamental change to how routing is done.

But if IPv6 gets deployed soon, the reduction in routing table size should buy us some time, which can be used to come up with a more scalable solution that will allow every site to be dual-homed. But of course things will have to break if ISPs keep waiting for breakage to happen before they start deploying scalable solutions.

That 5x linear reduction is, ultimately, a barely noticeable blip

If the tables grow with each generation of hardware, a 5x reduction can last a while. Not forever, but long enough that a long term solution can be deployed, if ISPs want to.

IPv6 doesn't fix routing table growth problems

Not permanently, but IPv6 can help now, and IPv4 can be expected to get worse as allocations get split and traded. Throwing bigger hardware at the problem may help with this one IPv4 issue, but there are other problems with IPv4.

Comment Re:IPv6 won't fix this problem (Score 1) 248

If IPv4 is fragmented it's primarily because of "short-sighted" initial allocations

Those allocations should be the least fragmented ones around, so blaming them for fragmentation is a bit of a stretch. As far as short-sightedness goes, it is not clear to me that IP stacks at the time would have supported doing the allocations differently. Moreover, two decades ago it was already clear that IPv4 wasn't viable as a long-term solution. Should we really blame problems we have now on decisions made back then? I'd say if any decisions were to be blamed, it would be those causing IPv6 deployments to get postponed.

The other kind of routing table compaction is due to serendipitous next-hop sharing for ranges of prefixes. E.g., prefixes for European networks are more likely to be assigned by RIPE, so if you're "far" away from Europe in network terms, then there's a better chance that there'll be a number of adjacent prefixes for European networks that will share next hops and can be compressed, etc.

It is true that this approach should reduce the number of table entries you need to put in the CAM. But since the efficiency of this depends on where you are, the expected outcome would be failures spread out over time rather than happening all at once. Is this sort of compression widespread, and if it is, why the simultaneous failures? Is this a matter of the rate of failures being tied to the rate at which the number of announcements grows? If so, it wasn't one particular announcement that pushed the Internet over the limit; rather, as 15k new announcements made it around the world, about 2.5k of them happened to push one AS over the limit.
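The position dependence is easy to illustrate with made-up views of the same four adjacent prefixes (addresses from the 192.0.2.0/24 documentation range): far from the source every next hop matches and the run collapses to one entry; close to the source the next hops differ and nothing can be aggregated.

```python
far_view = {                    # as seen from "far" away: one upstream peer
    "192.0.2.0/26":   "peerA",
    "192.0.2.64/26":  "peerA",
    "192.0.2.128/26": "peerA",
    "192.0.2.192/26": "peerA",
}
near_view = {                   # as seen nearby: distinct customer next hops
    "192.0.2.0/26":   "custA",
    "192.0.2.64/26":  "custB",
    "192.0.2.128/26": "custA",
    "192.0.2.192/26": "custC",
}

def compressible(view):
    # The whole run collapses to a single covering entry only if every
    # prefix in it shares one next hop.
    return len(set(view.values())) == 1

print(compressible(far_view), compressible(near_view))
```

So two ASes with identically sized CAMs can run out at very different times, depending on how much of the global table happens to compress from where they sit.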

To "fix" this problem, you need routing tables that grow more slowly than linearly, with respect to the size of the network. To the best of my knowledge of the theory, this is impossible with any form of guaranteed-shortest-path routing.

Maybe more work needs to go into the principle of keeping the intelligence at the end-points rather than in the core. Maybe source routing is the way to go, we just need to figure out how to make it secure and not require prohibitively large packet headers.
