Comment Re:Better encourage rather than confront (Score 1) 98

I was using unblock-us for a while, and it worked flawlessly. I only stopped as there wasn't enough additional content on US netflix for me to justify paying for it.

IPv6 tunnels are fortunately free. And as I mentioned, if you have router support for it, then every Mac, PC, and Linux box in your house will automatically be provisioned for end-to-end IPv6 access to Netflix (and anything else IPv6 accessible on the Internet), along with any set-top boxes which may use IPv6 (Apple TV apparently does, but I don't own one to be able to confirm this).

Yaz

Comment Re:Better encourage rather than confront (Score 1) 98

Canadian Netflix is pretty crappy compared to the American version and we don't have much else. It's not like the content companies want to sell their products here, at least in an easy to purchase downloadable format

Pro tip:

Netflix is fully IPv6 enabled, which is actually great news for Canadian Netflix users. Just set up an IPv6 tunnel to the nearest Hurricane Electric tunnel server farm (if you have a router that supports this, you can enable IPv6 invisibly for your entire home quickly and easily -- Apple's routers all support this out of the box, for example), and presto -- you'll have US Netflix.

Note that this only works on IPv6-enabled devices, of course, so your set-top box or smart TV may not benefit. You also have to ensure the browser you're using properly supports Happy Eyeballs, so that it will prefer IPv6 over IPv4. (Safari on Mac OS X since Lion prefers whichever connection responds fastest, which can cause it to initially load Netflix via IPv6 -- showing all the US content you can't otherwise see in Canada -- only to be blocked when you actually try to view something if OS X switches down to IPv4 for optimization purposes.)

As I have IPv6 tunnelling enabled right at the router, there is no software to install or anything to configure once this is set up, unlike VPN/proxy solutions. It's also fast -- even though the IPv6 is tunnelled, I can't perceive any speed issues when watching content this way.
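For the curious, the "prefer IPv6" behaviour can be sketched in a few lines of Python -- a toy illustration, not what any browser actually ships (the function names here are mine):

```python
import socket

def ipv6_first(addrinfos):
    """Order getaddrinfo() results so IPv6 addresses are tried first.

    Unlike Happy Eyeballs (RFC 6555), which races both families and keeps
    whichever answers fastest, this fixed ordering guarantees the tunnelled
    IPv6 path is preferred -- the behaviour you want when the IPv4 path
    would be geo-blocked.
    """
    return sorted(addrinfos, key=lambda ai: ai[0] != socket.AF_INET6)

def connect_prefer_ipv6(host, port=443, timeout=5.0):
    """Try each resolved address in IPv6-first order; return a socket."""
    last_err = None
    for family, socktype, proto, _, addr in ipv6_first(
            socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)):
        try:
            return socket.create_connection(addr[:2], timeout=timeout)
        except OSError as err:
            last_err = err
    raise last_err or OSError("no addresses for %s" % host)
```

A router-level tunnel makes all of this unnecessary on well-behaved clients, which is exactly why I prefer it.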

Enjoy!

Yaz

Comment Re:That's only part of the story. (Score 2) 60

$5000 per infringer (not per infringement) is the maximum. The minimum is $100, and I've heard word that the court is more likely to impose the minimum. The plaintiff either has to prove actual damages, or can apply for statutory damages of between $100 and $5000, at the judge's discretion. The Copyright Act stipulates that the judge needs to consider whether the infringement was for non-commercial purposes, whether it was for private purposes, and whether paying would constitute hardship for the defendant.

Yaz

Comment Re:SAT is not a brute force loop (Score 3, Interesting) 189

SAT is clearly NP complete, and clearly the existence of good SAT solvers is not a proof that P=NP. This means that there will be relatively small problems that SAT solvers won't be able to solve.

Enjoyed your post, but I have a small quibble.

From a mathematical standpoint at least, being NP-complete doesn't imply that some problems are unsolvable; merely that they won't be solvable in any reasonable amount of computing time. If you have a few hundred billion years of compute time available, a SAT solver might be able to solve even those small problems you mention. Of course, from a practical perspective, none of us would be around to see the result, making such problems unsolvable in a practical sense.

(On the other hand, once the billions of aeons roll by and the machine goes 'ding' and spits out an answer, we do know that we can verify it in poly time. Huzzah!)

While all of this may seem ultra-pedantic, there is enough confusion about NP out there that someone reading your post may get the idea that NP-complete problems are unsolvable. They're not -- we can typically fashion algorithms to solve them; it's just that the best known algorithms take exponential time in the worst case, and thus may have runtimes exceeding the expected lifetime of the solar system, even with every cycle of compute power we can muster thrown at them.

...unless, of course, someone comes up with a proof that P = NP, in which case all those NP-complete problems can be transformed into P problems. Sure, they might still take a few hundred billion years to get a solution, but at least we'd know how many hundreds of billions of years would be needed to get a solution!
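To make the asymmetry concrete, here's a toy sketch in Python: verifying a candidate assignment is linear in the size of the formula, while the naive search is exponential in the number of variables. (Real SAT solvers are vastly smarter than this brute force, of course -- that's the whole point of the thread.)

```python
from itertools import product

def verify(clauses, assignment):
    """Check a candidate assignment in time linear in the formula size
    -- this is the polynomial-time verification that defines NP.

    `clauses` is CNF: a list of clauses, each a list of non-zero ints,
    where literal k means variable |k|, negated if k < 0.
    """
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)

def brute_force_sat(clauses, n_vars):
    """Try all 2**n_vars assignments -- exponential time, the reason
    even smallish hard instances can outlive the solar system."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        if verify(clauses, assignment):
            return assignment
    return None  # unsatisfiable

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
cnf = [[1, 2], [-1, 3], [-2, -3]]
model = brute_force_sat(cnf, 3)
```

Once the machine finally goes "ding", `verify` confirms the answer quickly -- the "Huzzah!" above.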

Yaz

Comment Re: Bullshit (Score 1) 389

It's really funny to think that Mac OS X, an OS that many Windows users believe is aimed primarily at the least technically proficient users in the world, has had virtual desktops for seven years now. So if Apple can figure out how to provide this feature, why can't Microsoft?

Yaz

Comment Re:For anybody paying attention... (Score 1) 574

But the thing that sticks out the most is - why the hell is it such a crisis that IP addresses are doled out where they are needed, instead of what I am sure you would consider "fairly" to everyone? Is there now a social justice aspect to the IPv4 "crisis?"

Thanks for making it obvious you have no idea what you're talking about.

I have no problem with the disproportionate number of /8s assigned to ARIN. However, having such a large pool means that:

  1. Many of the organizations that want an IPv4 address block (of whatever size) probably already have one. Indeed, due to pre-CIDR allocation rules, many of them have way more addresses than they actually need; and
  2. There are more opportunities for addresses to be shuffled about. ARIN has assigned/controls over 1.3 billion addresses, for a population of roughly 530 million people. You have a lot more flexibility when you have nearly 2.5 addresses for every man, woman, and child in your registry area.

As such, you can't point to the pool with the largest number of addresses and then imply (as the /. article does) that there are no address shortage issues. APNIC and RIPE NCC are already exhausted. The fact that North America has a historical address advantage means that the effects in North America will be delayed -- not that they simply won't happen.

With that out of the way: if you know anything about routing, you would know that there is a technical crisis in doling out addresses wherever they are needed. Anytime you break up a contiguous address space, you'll generally need two (or more) additional routing table entries to handle the situation. In pre-CIDR days, the situation was fairly simple (although I'm simplifying it a bit to make it easier to communicate): a router only had to look up where to forward a packet based on the value of the first octet, which has only 256 possibilities (actually fewer, due to reserved address spaces, such as the unused Class E space). The packet would follow the route until it reached the router in charge of that first-octet value, which would route based on the second octet, also with a maximum of 256 values. Each hop would hit a router with a table of at most 256 entries, until you got to the destination host.

Post-CIDR, the address space could be broken up at pretty much arbitrary locations, so knowing the next hop required ever-expanding tables. As soon as you break up, say, 213 into geographically separate ranges (say, for simplicity, a series of /16s), what used to be one routing entry is now 256 routing entries. Break some of those /16s into /24s, and each /16 you break up becomes 256 more routing entries.

This is how we've got to the point where there are roughly half a million forwarding entries. Maintaining all of these entries in a constantly changing network, storing them, and searching them is getting to be extraordinarily computationally expensive. If you continue to break them up such that no two contiguous addresses are on the same physical network, you could wind up with roughly 3.7 BILLION routing entries.
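The arithmetic of that blow-up is easy to check with Python's ipaddress module -- a back-of-the-envelope sketch, not anything a real router runs:

```python
import ipaddress

block = ipaddress.ip_network("213.0.0.0/8")

# One contiguous /8 = one routing entry.
# Scatter it as /16s, and every /16 needs its own entry:
sixteens = list(block.subnets(new_prefix=16))     # 256 entries
# Scatter each of those as /24s, and the count multiplies again:
twentyfours = list(block.subnets(new_prefix=24))  # 65,536 entries
# The pathological endpoint described above -- no two adjacent
# addresses on the same network -- is one entry per address:
worst_case = block.num_addresses                  # 16,777,216 for one /8
```

Multiply that last figure across the whole IPv4 space and you get the ~3.7 billion worst-case entries mentioned above.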

IPv4 wasn't designed to be broken up this way. In the early days of CIDR, it was expected that such routing difficulties were far in the future, and that we would have moved to a newer, better protocol by then. Turns out the problems aren't as far into the future as expected, and we've done pretty much squat about it, other than throwing more compute power at packet routing.

So yeah -- you can't just throw addresses where they're needed anymore. Every /8 block from the IANA has been assigned to RIRs, and any transfer of a block smaller than a /8 is going to add yet more entries to the global routing table. Just try to think of how a network is supposed to route 213.0.113.1 to the United States, but 213.0.113.17 to China. Yes, we can make it work -- but every time you break apart contiguous addresses like this, you need yet more routing entries to deal with the exception. The problem isn't ever going to get any easier with IPv4 -- it's only going to get worse. And that's why you can't just put addresses where they're needed. An address is useless if you can't route to it.
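That 213.0.113.1-versus-213.0.113.17 thought experiment is just longest-prefix matching. Here's a sketch in Python -- the prefixes and next-hop names are invented purely for illustration, and real routers do this in specialized hardware, not dictionaries:

```python
import ipaddress

# Hypothetical table: the old aggregate route, plus a more-specific
# carve-out transferred to a different region.
table = {
    ipaddress.ip_network("213.0.0.0/8"):     "peer-US",
    ipaddress.ip_network("213.0.113.16/28"): "peer-CN",
}

def next_hop(addr, table):
    """Longest-prefix match: the most specific covering route wins.

    Every carve-out like the /28 above is one more entry every core
    router on the planet has to store and search.
    """
    addr = ipaddress.ip_address(addr)
    matches = [net for net in table if addr in net]
    return table[max(matches, key=lambda net: net.prefixlen)]
```

Two addresses sixteen apart, two different continents -- and the only way to express that is an extra table entry for the exception.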

Yaz

Comment For anybody paying attention... (Score 5, Informative) 574

For anybody paying any attention over the past few years, this shouldn't come as a surprise.

The IANA ran out of IPv4 address space available for doling out to the Regional Internet Registries (of which there are six) three years ago. APNIC (Asia Pacific) and RIPE NCC (Europe) went below a single /8 three and two years ago respectively. The IPv4 address exhaustion has already begun.

ARIN (North America), however, has 82 /8s. If you consider that there are only 221 usable /8s in total (the IANA keeps 35 for reserved use), this means that ARIN has 37% of all usable Internet addresses assigned to it, for roughly 8% of the world's population. More than a third of all possible addresses for less than a tenth of the world's population.
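For anyone who wants to check the back-of-the-envelope arithmetic (rough numbers, taken straight from the figures above):

```python
ARIN_SLASH8S = 82
USABLE_SLASH8S = 221        # 256 minus the ~35 the IANA reserves
ADDRS_PER_SLASH8 = 2 ** 24  # 16,777,216 addresses in a /8

share = ARIN_SLASH8S / USABLE_SLASH8S          # ~0.37, i.e. 37%
arin_addrs = ARIN_SLASH8S * ADDRS_PER_SLASH8   # ~1.38 billion
per_person = arin_addrs / 530e6                # ~2.5-2.6 per person
```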

Even still, ARIN now only has about 1.3 /8s free. Projections have them running out next year. They've always been estimated to be one of the last RIRs to run out (with AfriNIC being last, as they still have just over 3 of their nearly 13 /8s free) due in part to the huge number of /8s already in use in North America (way out of proportion to the population of the continent).

I feel really ashamed every time this topic comes up on /. at the complete and rampant ignorance of the issues surrounding IPv4 and IPv6. We will run out of IPv4 address space, but address space is hardly the only problem with IPv4. The bigger problem is ROUTABILITY -- the IPv4 routing tables have become seriously unwieldy, they are getting progressively worse (in part due to inter-RIR transfers of address blocks now that Europe and Asia have run out of addresses), and they need more and more compute power thrown at them just to keep up. The number of BGP forwarding entries has doubled from roughly 250k to nearly 500k in just the last six years. The algorithms used for determining routes in IPv4 are complex, the computation is expensive, and it's slowing down the Internet today.

IPv6 solves a lot of the routing problems inherent in IPv4, making routability a lot easier to compute. IPv6 packets have a simpler header, routers don't need to provide fragmentation services, and there is no header checksum. IPv6 also avoids the routing anomalies present in IPv4 due to things such as the switch to CIDR. We know a heck of a lot more about packet routing now than we did in the 1970s when IPv4 was first designed, and those improvements are baked into IPv6.

This is why I cringe whenever I see a post in an IPv4-address-exhaustion-related /. story complaining about a lack of backwards compatibility in IPv6, or anytime anyone says that NAT is good enough for everybody. As the address space fragments even further, and historic /8s and /16s are broken up into ever smaller units which are then distributed to diverse geographies, the routing table in IPv4 is going to continue to blow up, becoming ever uglier -- it simply wasn't designed to scale in the manner in which we're using it. IPv6 brings sanity to global routing again, in a way that no backward-compatible solution could achieve.

The IANA is out of addresses. RIPE and APNIC are virtually out of addresses (with only enough reserved to aid in IPv4-to-IPv6 tunnelling and translation services). ARIN is down to less than 1.5 /8s, and survives purely on the fact that it has a disproportionate number of /8s compared to the population it serves. And worst of all, IPv4 routing is an absolute mess that requires a ton of processing power and compute time to maintain. Remember these things before you post something silly about being pro-NAT, pro-some-untested-IPv4-address-extension-proposal, complaining about backward compatibility, or how people have been predicting IPv4 exhaustion for the last 25 years (just because you see the train coming towards you way off in the distance doesn't mean you won't eventually have to get off the tracks).

/. users, hang your head in shame. You used to be so much better than this. For those of you who do understand the issues involved, bravo on continuing to try to educate the idiot masses about why this is important. I just wish you weren't so few and far between.

Yaz

Comment Re:Radio Shack Ad Best So Far (Score 1) 347

That's probably one of the reasons they closed in Canada. Radio Shack used to be the place to go when you needed some components (which they stopped selling). the 200-1 electronic kits, the Armatron, I miss those kind of things...

Nope -- technically, they never really closed in Canada, but it's a strange story.

RadioShack was operated in Canada by a company called InterTAN. They weren't owned by the US RadioShack at all -- the stores were licensed under an agreement. In 2004, Circuit City in the US bought InterTAN, and one week later RadioShack sued in the US (claiming breach of the agreement) to have the licensing agreement cancelled. All Canadian RadioShack stores were then rebranded as "The Source By Circuit City" (which IMO was always a terrible name).

But wait -- there's more. In 2006 RadioShack US then opened 9 stores in the Toronto area running under the "Radio Shack" name. After only a few months in business, they closed all of them down "to focus on core US business".

In January 2009, Circuit City in the US went out of business; however, as "The Source By Circuit City" in Canada wasn't doing too badly, instead of being shut down with the US stores the entire thing was sold to Bell Canada, who renamed the stores "The Source", and who continues to operate them to this day.

As such, many/most of the pre-2006 RadioShack stores haven't actually closed -- they were simply renamed, first to "The Source By Circuit City", and then just "The Source", which still operates today. InterTAN didn't go out of business -- it's just been swallowed up.

Of course, the product selection has changed over the years -- you're probably not going there anymore to get your zener diodes. They still have some parts, but it's not like back in the heyday.

(Refs: http://en.wikipedia.org/wiki/RadioShack#Operations_in_Canada, http://en.wikipedia.org/wiki/The_Source_(retailer))

Yaz

Submission + - Microsoft fears Chromebooks .. (bgr.com)

An anonymous reader writes: We were surprised last year when Microsoft started launching an anti-Chromebook ad campaign because, quite frankly, we’d never seen anyone really use a Chromebook in the wild before, and Chromebooks were nowhere to be found in the usage statistics published by NetMarketShare. A few weeks later, however, we started hearing stories about Chromebook usage surging in schools, although we didn’t have any real data to back up such claims. Now, however, The Wall Street Journal directs our attention to new research from Futuresource Consulting showing that Chromebooks’ share of the K-12 market for tablets and laptops exploded from just 1% in 2012 to 19% in 2013. What’s more, Windows’s share of the same market declined from 47.5% to 28% over the same period.

Comment Re:Can't directly compare PC and phone sales ... (Score 1) 511

It was the last of the plastic MacBooks, self identifies as "Early 2008". The CPU is a Core Duo and is 64-bit capable but Apple did not write 64-bit drivers (or something like that) for this system. It is not compatible with the 64-bit versions of Mac OS X. That makes it a non-64 bit machine regardless of what the CPU is capable of.

Your system runs a Core 2 Duo, and is indeed 64-bit capable.

Here's the rub, however -- your machine only has a 32-bit EFI, which means it can only boot in 32-bit mode. In OS X, this means it can only boot the 32-bit kernel and associated kernel modules. The 32-bit kernel can still run 64-bit applications -- but you'll still have the various limitations of a 32-bit kernel (although as the OS X 32-bit kernel implements PAE, you can still bust the 4GB addressing limitations you see in 32-bit versions of Windows client OS's).

The most recent OS X releases ship with only a 64-bit kernel; systems with a 32-bit EFI are thus left out in the cold.

As such, it's not that your CPU can't handle 64-bit computation, or that Apple didn't write suitable drivers for your system. It's a boot issue due to the 32-bit EFI. So now you know.

Yaz

Comment Re:9.1 (Score 1) 1009

Did you ever use Windows 3.0?

If you did, you'd understand why people thought Windows 3.1 was... GENERAL PROTECTION FAULT.

I remember some of Microsoft's advertising around the release of Windows 3.1, which heavily promoted the fact that there were no more "Unexpected Application Errors", and thus that Windows 3.1 was so much more stable than Windows 3.0.

Of course, the truth of the matter was that they had just renamed the error condition to "General Protection Fault", and it was no more stable than 3.0.

Windows 3.1 was the last version of Windows I ever ran on personal hardware (and I steer clear of Windows at work as much as possible).

Yaz
