Comment Re:knuth's art of computer programming (Score 5, Informative) 247

They're kind of dated, because few people do sorts and list manipulation at that level any more. I have both an original edition and a current edition of vols. 1-3, but haven't looked at them in years.

Sure, for the average programmer these days who relies on existing libraries, these probably aren't all that useful.

As a grad student working on a thesis and other papers, however, I found Knuth's books invaluable for citations. Need to defend the use of a specific algorithm? Cite Knuth. They served me well when I wrote and defended my own thesis a few years back.

This is, of course, good science. You may not need to use Knuth to program your own B* tree, but you have a pretty much universally accepted reference for citation if you use one in your research.

Yaz

Comment Re:Wasn't allocation always the problem? (Score 1) 306

I call BS, it would only take that long if it was a low priority job. If they were told in no uncertain terms to sort it out or be kicked out of the internet I'm sure they could deal with it much quicker than that.

Perhaps, but it's still potentially going to be a very large, costly job, which probably won't free up enough addresses to make it worth anyone's while. It would still take them at least a few months.

The problem here is how organizations with a large allocation (like a /8) have assigned those addresses internally. Typically they didn't dole out the addresses in a contiguous manner -- they may have done something akin to setting up a /16 for each building (they would have received their address block before CIDR, and thus would have had to split things along classful lines), out of which different labs may have got a /24 to use however they wanted. Readdressing all of these subnets and setting up new routes for them is a big job for a large organization like MIT. You'd have to combine subnets, which would change the routing topology, and compress everything down into a few /16's to make the returned address space contiguous.

You could return non-contiguous space instead, but that has a serious negative impact on worldwide routing tables. You can't just add a few hundred thousand /28's to the global routing table (that is, you can't just say "hey, here's a few hundred thousand non-contiguous groups of 16 addresses we aren't using, let's give them back!").
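
To make the contiguity point concrete, here's a quick sketch using Python's ipaddress module (the 18.x.x.x ranges are purely illustrative, borrowing MIT's /8 from above):

    import ipaddress

    # Two adjacent halves of a /16 collapse into one summarizable route:
    contiguous = [ipaddress.ip_network("18.0.0.0/17"),
                  ipaddress.ip_network("18.0.128.0/17")]
    print(list(ipaddress.collapse_addresses(contiguous)))
    # -> [IPv4Network('18.0.0.0/16')] -- a single routing table entry

    # Scattered scraps of the same total size can't be merged at all:
    scattered = [ipaddress.ip_network("18.0.0.0/28"),
                 ipaddress.ip_network("18.0.0.32/28"),
                 ipaddress.ip_network("18.0.0.64/28")]
    print(list(ipaddress.collapse_addresses(scattered)))
    # -> three separate entries, and three separate routes to advertise

Every range that can't be collapsed is another prefix the entire world's routers have to carry.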

And after putting all that effort into making their address space more contiguous (while still allowing room for future growth), they'd probably wind up with enough addresses to extend IPv4 for a month or two at best -- at which point, they might as well have put the effort into migrating to IPv6 instead.

Giving back unused address space only slightly delays the inevitable; it doesn't postpone it indefinitely. If you're going to do the work, you might as well do it right the first time and get everything running on IPv6.

Yaz

Comment Re:About time! (Score 1) 306

Once we finally move on to IPv6, can we all have our own static IP?

That's a good reason to push it.

Actually, you get a prefix -- either a /48 or a /64, from which you can assign your own addresses. A single /64 holds 2^64 addresses, about four billion times as many as the entire public IPv4 Internet. How you use them internally is up to you.
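
The scale falls straight out of the prefix lengths; a couple of lines of Python make it obvious:

    ipv4_internet = 2 ** 32          # the entire IPv4 address space
    one_slash_64  = 2 ** (128 - 64)  # host addresses in a single IPv6 /64
    one_slash_48  = 2 ** (128 - 48)  # a /48 is 65,536 of those /64 subnets

    print(f"{ipv4_internet:,}")                  # 4,294,967,296
    print(f"{one_slash_64:,}")                   # 18,446,744,073,709,551,616
    print(f"{one_slash_64 // ipv4_internet:,}")  # 4,294,967,296 IPv4 Internets per /64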

Yaz

Comment Re:About time! (Score 1) 306

if anyone back then had seen this coming that clearly, they'd have just used 64 bits to start with and we'd be fine for the next thousand years.

Except that on an 8-bit computer running at only a handful of MHz, using 64-bit addresses right from the start would have entailed a performance penalty. There would have been more packet overhead, and more address processing required.

This may not seem like a bad trade-off for clients, but you have to consider what it would have done to the routers of the '70s and '80s, which had to process every packet passing through them, had 64-bit addresses been the norm. Won't anybody think of the routers?
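
For a rough sense of the cost -- strictly back-of-the-envelope, ignoring every other header field that might also have changed:

    # A minimal IPv4 header is 20 bytes, 8 of which are the two 32-bit
    # addresses. Hypothetical 64-bit addresses would add another 8 bytes.
    hdr_32bit = 20
    hdr_64bit = 20 + 8

    for payload in (0, 44, 512):  # bare ACK, small datagram, larger packet
        old = hdr_32bit + payload
        new = hdr_64bit + payload
        print(f"{payload:3d}-byte payload: {100 * (new - old) / old:.1f}% more on the wire")

That's 40% extra on a bare ACK -- significant on hardware of that era, where every byte had to be buffered and forwarded.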

Yaz

Comment Re:Bloody Idiot (Score 1) 588

How long have we been vaccinating kids for? How long have we known about "autism"?

History of Autism

"Autism" as it is currently used was defined in 1938. The first vaccines were developed in the late 1700s, however, the first of the components of what are now the MMR vaccine were introduced in 1963 with the first measles vaccine (Timeline of Vaccines)

.

They tried that crap on my own kid, who didn't behave well in school. Instead, I tried more discipline and a stricter policy and now he's a "Straight A" student.

Really, so how many times a day should I beat my autistic daughter who is completely unable to speak because of her condition? Do you recommend using paddles, straps, or electrocution? Maybe I should just lock her in a closet and feed her a bucket of fish heads once a week? Please Doctor Anonymous, share your wisdom!

Yaz

Comment Re:Knowledge (Score 1) 1037

Well, not necessarily. There is no scientific way I could think of that lets us tell what happens with our "soul" after death.

Of course not. You'd have to prove the existence of a soul scientifically before you could even start to answer such a question.

It would be no different than asking a zoologist about the mating patterns of the one-eyed, one-horned, flying purple people eater. All research is built upon the foundations of prior research; as there is no scientific evidence for a one-eyed, one-horned, flying purple people eater, there is no logical place to start in trying to determine what its mating patterns might be like.

You have to be careful with such statements, as they're the sorts of arguments people of faith like to try to use against science (i.e.: "But science can't prove/disprove X", where X is some construct for which there is no scientific basis in the first place, but which the speaker treats as a given). This is a fallacious line of argument, one which nobody can ever actually learn anything from.

Yaz

Comment Re:Better encourage rather than confront (Score 1) 98

I was using unblock-us for a while, and it worked flawlessly. I only stopped as there wasn't enough additional content on US netflix for me to justify paying for it.

IPv6 tunnels, fortunately, are free. And as I mentioned, if you have router support for it, every Mac, PC, and Linux box in your house will automatically be provisioned for end-to-end IPv6 access to Netflix (and anything else IPv6-accessible on the Internet), along with any set-top boxes that use IPv6 (the Apple TV apparently does, though I don't own one, so I can't confirm it).

Yaz

Comment Re:Better encourage rather than confront (Score 1) 98

Canadian Netflix is pretty crappy compared to the American version and we don't have much else. It's not like the content companies want to sell their products here, at least in an easy to purchase downloadable format

Pro tip:

Netflix is fully IPv6 enabled, which is actually great news for Canadian Netflix users. Just set up an IPv6 tunnel to the nearest Hurricane Electric tunnel server (if you have a router that supports this, you can enable IPv6 invisibly for your entire home quickly and easily -- Apple's routers all support it out of the box, for example), and presto -- you'll have US Netflix.

Note that this only works on IPv6-enabled devices, of course, so your set-top box or smart TV may not benefit. You also have to ensure the browser you're using properly supports Happy Eyeballs, so that it will prefer IPv6 over IPv4. (Safari on Mac OS X since Lion prefers whichever connection responds fastest, which can cause it to initially load Netflix via IPv6 -- showing all the US content you can't otherwise see in Canada -- only to be blocked when you actually try to view something, if OS X has switched down to IPv4 for optimization purposes.)
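
If you want to see what a connection attempt has to choose from, getaddrinfo() lists the candidate addresses a Happy Eyeballs implementation would race. A quick Python check (www.netflix.com is just an illustrative hostname; the records you get back depend on your resolver):

    import socket

    # AF_INET6 entries mean the host is reachable over IPv6
    # from your resolver's point of view.
    for family, _, _, _, sockaddr in socket.getaddrinfo(
            "www.netflix.com", 443, proto=socket.IPPROTO_TCP):
        label = "IPv6" if family == socket.AF_INET6 else "IPv4"
        print(label, sockaddr[0])

If no IPv6 entries show up, the tunnel (or the device's IPv6 configuration) is the first place to look.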

As I have IPv6 tunnelling enabled right at the router, there is no software to install or per-device configuration needed once this is set up, unlike with VPN/proxy solutions. It's also fast -- even though the IPv6 is tunnelled, I can't perceive any speed issues when watching content this way.

Enjoy!

Yaz

Comment Re:That's only part of the story. (Score 2) 60

$5000 per infringer (not per infringement) is the maximum; the minimum is $100, and I've heard that the courts are more likely to impose the minimum. The plaintiff either has to prove actual damages, or can apply for statutory damages of between $100 and $5000, at the judge's discretion. The Copyright Act stipulates that the judge needs to consider whether the infringement was for non-commercial purposes, whether it was for private purposes, and whether paying would constitute hardship for the defendant.

Yaz

Comment Re:SAT is not a brute force loop (Score 3, Interesting) 189

SAT is clearly NP complete, and clearly the existence of good SAT solvers is not a proof that P=NP. This means that there will be relatively small problems that SAT solvers won't be able to solve.

Enjoyed your post, but I have a small quibble.

From a mathematical standpoint at least, being NP-complete doesn't imply that some problems are unsolvable, merely that they won't be solvable in any reasonable amount of computing time. If you have a few hundred billion years of compute time available, a SAT solver might be able to solve even those small problems you mention. Of course, from a practical perspective, none of us would be around to see the result, making them unsolvable in practice.

(On the other hand, once the billions of aeons roll by and the machine goes 'ding' and spits out an answer, we do know that we can verify it in poly time. Huzzah!)

While all of this may seem ultra-pedantic, there is enough confusion about NP out there that someone reading your post might get the idea that NP-complete problems are unsolvable. They're not: we can typically fashion algorithms to solve them. It's just that the best known algorithms take exponential time in the worst case, and thus may have runtimes exceeding the expected lifetime of the solar system, even with every cycle of compute power ever built thrown at them.

...unless, of course, someone comes up with a proof that P = NP, in which case all those NP-complete problems would have polynomial-time algorithms. Sure, a solution might still take a few hundred billion years, but at least we'd know up front how many hundred billion years we'd need!
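
To make the exponential blow-up concrete, here's a toy brute-force SAT solver in Python (DIMACS-style clause encoding; nothing like a real solver such as MiniSat, which prunes aggressively instead of enumerating):

    from itertools import product

    def brute_force_sat(n_vars, clauses):
        # clauses: list of clauses; literal +i means variable i is true,
        # -i means variable i is false (variables are 1-indexed).
        # Trying all 2**n_vars assignments is the exponential part.
        for bits in product((False, True), repeat=n_vars):
            if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return bits        # satisfying assignment found
        return None                # exhausted the space: unsatisfiable

    # (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
    print(brute_force_sat(3, [[1, -2], [2, 3], [-1, -3]]))

At a billion assignments per second, n = 60 variables already costs about 36 years of enumeration, and n = 100 costs around 4 x 10^13 years -- solvable in principle, hopeless in practice.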

Yaz

Comment Re: Bullshit (Score 1) 389

It's really funny to think that Mac OS X, an OS that many Windows users believe is aimed at (and used by) the least technically proficient users in the world, has had virtual desktops for seven years now. So if Apple can figure out how to provide this feature, why can't Microsoft? Yaz.

Comment Re:For anybody paying attention... (Score 1) 574

But the thing that sticks out the most is - why the hell is it such a crisis that IP addresses are doled out where they are needed, instead of what I am sure you would consider "fairly" to everyone? Is there now a social justice aspect to the IPv4 "crisis?"

Thanks for making it obvious you have no idea what you're talking about.

I have no problem with the disproportionate number of /8's ARIN has been assigned. However, having such a large pool means that:

  1. Many of the organizations that want an IPv4 address block (of whatever size) probably already have one. Indeed, due to pre-CIDR allocation rules, many of them have way more addresses than they actually need; and
  2. There are more opportunities for addresses to be shuffled about. ARIN has assigned/controls over 1.3 billion addresses, for a population of roughly 530 million people. You have a lot more flexibility when you have nearly 2.5 addresses for every man, woman, and child in your registry area.

As such, you can't point to the pool with the largest number of addresses and then imply (as the /. article does) that there is no address shortage. APNIC and RIPE NCC are already exhausted. The fact that North America has a historical address advantage means that the effects will be delayed in North America -- not that they simply won't happen.

With that out of the way: if you know anything about routing, you know that there is a technical crisis in doling out addresses wherever they are needed. Any time you break up a contiguous address space, you'll generally need two (or more) additional routing table entries to handle the situation. In pre-CIDR days, the situation was fairly simple (although I'm simplifying a bit to make it easier to communicate): a router only had to look up where to forward a packet based on the value of the first octet, which had only 256 possibilities (actually fewer, due to reserved address spaces such as the unused Class E space). The packet would follow the route until it reached the router in charge of that first octet, which would route based on the second octet, also with a maximum of 256 values. Each hop would hit a router with a table of at most 256 entries, until you got to the destination host.

Post-CIDR, the address space can be broken up at pretty much arbitrary locations, so knowing the next hop requires ever-expanding tables. As soon as you break up, say, 213/8 into geographically separate ranges (say, for simplicity, a series of /16s), what used to be one routing entry is now 256 routing entries. Break some of those /16s into /24s, and each one you break up becomes 256 more entries.
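
The blow-up is easy to quantify with Python's ipaddress module, using that same 213/8:

    import ipaddress

    block = ipaddress.ip_network("213.0.0.0/8")           # one entry if kept whole
    print(sum(1 for _ in block.subnets(new_prefix=16)))   # 256 entries as /16s
    print(sum(1 for _ in block.subnets(new_prefix=24)))   # 65,536 entries as /24s
    print(f"{block.num_addresses:,}")                     # 16,777,216 if fully fragmented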

This is how we've got to the point where there are roughly half a million forwarding entries. Maintaining all of these entries in a constantly changing network, storing them, and searching them is getting to be extraordinarily computationally expensive. If you continue to break them up such that no two contiguous addresses are on the same physical network, you could wind up with roughly 3.7 BILLION routing entries.

IPv4 wasn't designed to be broken up this way. In the early days of CIDR, it was expected that such routing difficulties were far in the future, and that we would have moved to a newer, better protocol by then. Turns out the problems weren't as far in the future as expected, and we've done pretty much squat about it, other than throwing more compute power at packet routing.

So yeah -- you can't just throw addresses where they're needed anymore. Every /8 block from the IANA has been assigned to RIRs, and any transfer of a block smaller than a /8 is going to add yet more entries to the global routing table. Just try to think of how a network is supposed to route 213.0.113.1 to the United States, but 213.0.113.17 to China. Yes, we can make it work -- but every time you break apart contiguous addresses like this, you need yet more routing entries to deal with the exception. The problem isn't ever going to get any easier with IPv4 -- it's only going to get worse. And that's why you can't just put addresses where they're needed. An address is useless if you can't route to it.
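
That "we can make it work" is longest-prefix matching, and every exception literally becomes another table entry. A minimal sketch in Python (the table and next hops are invented for illustration):

    import ipaddress

    # Most of 213/8 routes one way; a /28 carved out of it needs its own,
    # more-specific entry that every router on the path has to carry.
    table = [
        (ipaddress.ip_network("213.0.0.0/8"),     "next hop A (United States)"),
        (ipaddress.ip_network("213.0.113.16/28"), "next hop B (China)"),
    ]

    def lookup(ip):
        addr = ipaddress.ip_address(ip)
        candidates = [(net, hop) for net, hop in table if addr in net]
        # Longest-prefix match: the most specific route wins.
        return max(candidates, key=lambda c: c[0].prefixlen)[1]

    print(lookup("213.0.113.1"))   # next hop A (United States)
    print(lookup("213.0.113.17"))  # next hop B (China)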

Yaz
