Comment Re:But 32 bits is enough for anybody (Score 1) 164

One approach would be via the FTC. Simply offering IPv4-only connectivity is no longer connectivity to 'The Internet'. Perhaps the ISPs should be forced to either get v6 up and running or cease advertising themselves as ISPs. Instead, they should be forced to call themselves deprecated ISPs. Perhaps we should legally define provision of v4 only as 'shitty service' and force them to advertise that. As in: Ajax ISP, shitty service for $60/month.

b and c are difficult, but take care of a and d and the pressure on them will mount rapidly.

As for d, there actually has been a big push for government to make sure its public-facing servers are available over v6. The mandate extends to government contractors as well. They really do need to expand that mandate to all hosts within government networks that are allowed any access to the public internet.

Comment Re:wft ever dude! (Score 1) 164

Essentially, all it says is that hosts and routers (meaning end users' routers) should not default to using 192.88.99.1 as a 6to4 relay if they don't get a prefix. The reason is that too many firewalls and clueless network people were breaking the mechanism, causing long timeouts as hosts assumed they had v6 connectivity and used it in preference to v4 (as they should).

The mechanism itself and the associated address space are explicitly not deprecated.
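For anyone fuzzy on how the mechanism works, here's a minimal sketch (mine, not text from any RFC) of how a host derives its 6to4 prefix: take 2002::/16 and append the host's 32-bit public IPv4 address to get the site's /48. The example address 192.0.2.1 is just documentation space.

    import ipaddress

    def sixto4_prefix(v4: str) -> ipaddress.IPv6Network:
        # 2002::/16 followed by the 32-bit IPv4 address gives the site's /48
        v4_int = int(ipaddress.IPv4Address(v4))
        return ipaddress.IPv6Network(((0x2002 << 112) | (v4_int << 80), 48))

    print(sixto4_prefix("192.0.2.1"))  # 2002:c000:201::/48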

Which means they absolutely DO need to stop black-holing customer traffic bound for 2002::/16. All that does is make the sorry state of IPv6 adoption even worse. Since the route exists in their public route views server, I suspect it is unintentional breakage affecting only some customers, but since their entire support structure is designed to make sure nobody can ever talk to anyone with a clue, I have no way to alert anyone who actually knows how a router works that there is a problem.

Comment Obligatory (Score 1) 51

No.

If we look back into the shrouded mists of time, we see that Moblin begat MeeGo, and MeeGo begat Tizen.

Moblin was Linux with a cool OpenGL interface from Intel, on which Intel spent most of its effort ripping out the parts it didn't need.

MeeGo was the effort to put those parts back and make something useful on more than just Intel hardware.

Tizen is the attempt to convince you that this zombie project has life left in it. It doesn't. It's dead. Stick a fork in it.

Comment Re:wft ever dude! (Score 1) 164

I found that above about 10Mb/s you start to hit diminishing returns. The jump from 10 to 30 was barely noticeable. The jump from 30 to 100 is noticeable with large downloads, but nothing else. From 100 to 1000, the main thing that you notice is if you accidentally download a large file to a spinning-rust disk and see how quickly you fill up your RAM with buffer cache...

Over the last 10 years, I've gone from buying the fastest connection my ISP offered to buying the slowest. The jump from 512Kb/s to 1Mb/s was really amazing (though not as good as moving to 512Kb/s from a modem that rarely managed even 33Kb/s), but each subsequent upgrade has been less exciting.

Comment Re:wft ever dude! (Score 1) 164

Because in 1981 or so, everybody was pretty sure that this fairly obscure educational network would *never* need more than about 4 billion addresses... and they were *obviously right*.

Well, maybe. Back then home computers were already a growth area and so it was obvious that one computer per household would eventually become the norm. If you wanted to put these all on IPv4, then it would be cramped. The growth in mobile devices and multi-computer households might have been a bit surprising to someone in 1981, but you'd have wanted to add some headroom.

When 2% of your address space is consumed, you are just under six doublings away from exhaustion. Even if you assume an entire decade per doubling, that's less than an average lifetime before you're doing it all over again.
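Back-of-the-envelope version of that claim (my arithmetic, not the original poster's):

    from math import log2

    doublings = log2(1.00 / 0.02)  # from 2% used to fully consumed
    print(doublings)               # ~5.64 doublings
    print(doublings * 10)          # ~56 years at a decade per doubling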

With IPv6, you can have 4 billion networks for every IPv4 address. Doublings are much easier to think about in base 2: one bit per doubling. We've used all of the IPv4 addresses. Many of those are for NAT'd networks, so let's assume that they all are and that we're going to want one IPv6 subnet for each IPv4 address currently assigned during the transition. That's 32 bits gone. Assuming that we're using a /48 for every subnet, then that gives us 16 more doublings (160 years by your calculations). If we're using /64s, then that's 32 doublings (320 years). I hope that's within my lifetime, but I suspect that it won't be.
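The same accounting, sketched in code (just a restatement of the bit-counting above): reserve 32 bits for one IPv6 subnet per currently assigned IPv4 address, then count how many doublings are left at each subnet size.

    V4_TRANSITION_BITS = 32  # one IPv6 subnet per currently assigned IPv4 address

    for subnet_prefix in (48, 64):
        doublings = subnet_prefix - V4_TRANSITION_BITS  # one bit per doubling
        print(f"/{subnet_prefix} subnets: {doublings} doublings, "
              f"~{doublings * 10} years at a decade per doubling")
    # /48 subnets: 16 doublings, ~160 years at a decade per doubling
    # /64 subnets: 32 doublings, ~320 years at a decade per doubling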

In practice, I suspect that the growth will be a bit different. Most of the current growth is multiple devices per household, which doesn't affect the number of subnets: that /64 will keep a house happy with a nice sparse network, even if every single physical object that you own gets a microcontroller and participates in IoT things using a globally routable address.

IMHO: what needs to happen next is to have a 16 bit packet header to indicate the size of the address in use. This makes the address space not only dynamic, but MASSIVE without requiring all hardware on the face of the Earth to be updated any time the address space runs out.

This isn't really a workable idea. Routing tables need to be fast, which means that the hardware needs to be simple. For IPv4, you basically have a fast RAM block with 2^24 entries and switch on the first three bytes to determine where to send the packet. With IPv6, subnets are intended to be arranged hierarchically, so you end up with a simpler decision. With variable-length fields, you'd need something complex to parse them, and that would send you into the software slow path. This is a problem, because you'd then have a very simple DoS attack on backbone routers (just send them packets with large length headers that chew up CPU before they're dropped).

You'd also have the same deployment headaches that IPv6 has: no one would buy routers that had fast paths for very large addresses now, just because in 100 years we might need them, so no one would test that path at a large scale; you'd avoid the DoS by just dropping all packets that used an address size other than 4 or 16. In 100 years (i.e. well over 50 backbone router upgrades), people might start caring and buy routers that could handle 16- or 32-byte address fields, but that upgrade path is already possible: the field that you're looking for is called the version field in the IP header.
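To make the fixed-width fast path concrete, here's a toy version of that 2^24-entry lookup (a sketch only: real routers do this in dedicated RAM or TCAM, handle longest-prefix match properly, and obviously don't run Python):

    from array import array
    import ipaddress

    # One byte of next-hop ID per /24: a flat 16M-entry table indexed by the
    # first three bytes of the destination address. No parsing required.
    next_hop = array('B', [0]) * (1 << 24)

    def install_route(prefix: str, hop_id: int) -> None:
        # Only handles /24-or-shorter prefixes, which is all this sketch needs.
        net = ipaddress.IPv4Network(prefix)
        base = int(net.network_address) >> 8
        for i in range(max(net.num_addresses >> 8, 1)):
            next_hop[base + i] = hop_id

    def lookup(dst: str) -> int:
        return next_hop[int(ipaddress.IPv4Address(dst)) >> 8]

    install_route("203.0.113.0/24", 7)
    print(lookup("203.0.113.42"))  # 7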

Comment Re:Wait Wait Wait... (Score 1) 164

It depends on the ISP. Some managed to get a lot more assigned to them than they're actually using, some were requesting the assignments as they needed them. If your ISP has a lot of spare ones, then they might start advertising non-NAT'd service as a selling point. If they've just been handing out all of the ones that they had, then you might find that they go down to one per customer unless you pay more.

Comment Re:Interesting argument (Score 1) 110

But hey, should I be allowed to just come into anything you own and make it for the better without your permission or any specific act or law created by your elected officials?

As soon as the carriers surrender their granted right of way and allocations out of the public's spectrum, they can do whatever they want.

Until then, they have been placed under the FCC BY LAW.

Comment Re:But 32 bits is enough for anybody (Score 1) 164

The problem there is that it will cause pain for all the wrong people. New business, need 5 IPs? That'll cost ya! Go with IPv6, and half your customers' ISPs haven't crawled out of the slime yet, so they won't be able to reach you at all.

The ISPs themselves? They have a massive pool of IPs and they aren't afraid to NAT them.

Until major sites start having v4 blackout days, the pain won't hit the right people.

Comment Re:Well it is half true (Score 3, Insightful) 164

Actually, it was never crying wolf. The wolf really was there; it's just that it was a long way off in the '90s. It has been headed our way in a straight line ever since. You needed a telescope to see it in the '90s; now you don't even need to squint.

And apparently, a warning that far in advance wasn't enough since there are still a lot of organizations with their pants down. How pathetic is that?

Comment Re:Hmm... (Score 1) 247

You realize that Cortana is completely disabled if location is disabled. While I agree that location is required for some queries, it's certainly not required for all, yet Microsoft has made it so. Why is that?

Ad-sponsored / pay-for-ad-free Solitaire packaged with a free copy of Windows 10 - sure. But what about people who actually *buy* Windows 10? They have to pay twice to get it ad-free. Seems like a dick move by Microsoft.

I'll start considering myself a product when I get no benefit in return for what I provide, and not a moment sooner.

Be careful what you give up; you might not be able to get it back.

And, personally, I can't stand auto-correct and speech recognition services.

Comment We have no idea what "superintelligent" means. (Score 2) 179

When faced with a tricky question, one thing you have to ask yourself is 'Does this question actually make any sense?' For example, you could ask "Can anything get colder than absolute zero?" and the simplistic answer is "no"; but it might be better to say the question itself makes no sense, like asking "What is north of the North Pole?"

I think when we're talking about "superintelligence", we're using a linguistic construct that sounds like it makes sense, but I don't think we have any precise idea of what we're talking about. What *exactly* do we mean when we say "superintelligent computer" -- if computers today are not already there? After all, they already work on bigger problems than we can. But as Geist notes, there are diminishing returns on many problems which are inherently intractable, so there is no physical possibility of "God-like intelligence" as a result of simply making computers bigger and faster. In any case it's hard to conjure an existential threat out of computers that can, say, determine that two very large regular expressions match exactly the same input.

Someone who has an IQ of 150 is not 1.5 times as smart as an average person with an IQ of 100. General intelligence doesn't work that way. In fact I think IQ is a pretty unreliable way to rank people by "smartness" when you're well away from the mean -- say over 160 (i.e. four standard deviations) or so. Yes, you can rank people in that range by *score*, but that ranking is meaningless. And without a meaningful way to rank two set members by some property, it makes no sense to talk about "increasing" that property.

We can imagine building an AI which is intelligent in the same way people are. Let's say it has an IQ of 100. We fiddle with it and the IQ goes up to 160. That's a clear success, so we fiddle with it some more and the IQ score goes up to 200. That's a more dubious result. Beyond that we make changes, but since we're talking about a machine built to handle questions that are beyond our grasp, we don't know whether we're actually making the machine smarter or just messing it up. This is still true if we leave the changes up to the computer itself.

So the whole issue is just "begging the question"; it's badly framed because we don't know what "God-like" or "super-" intelligence *is*. Here's what I think is a better framing: will we become dependent upon systems whose complexity has grown to the point where we can neither understand nor control them in any meaningful way? I think this describes the concerns about "superintelligent" computers without recourse to words we don't know the meaning of. And I think it's a real concern. In a sense we've been here before as a species. Empires need information processing to function, so before computers humanity developed bureaucracies, which are a kind of human-operated information processing machine. And eventually the administration of a large empire has always lost coherence, leading to the empire falling apart. The only difference is that a complex AI system could continue to run well after human society collapsed.
