How can a list of "the seven major tech hubs" not include Seattle, which is home to some of the biggest tech companies in the world, but include cities like Atlanta? That is a strangely biased list, so I wonder what the criteria were for "tech hub".
I could never manage more than 1... I was just unable to force myself to concentrate on what I was being asked to do. I'd read 500 pages per night from library books, but couldn't force myself to read more than 10 of the 100 pages I was expected to read from my assigned reading.
I was using unblock-us for a while, and it worked flawlessly. I only stopped as there wasn't enough additional content on US netflix for me to justify paying for it.
IPv6 tunnels are fortunately free. And as I mentioned, if you have router support for it, then every Mac, PC, and Linux box in your house will automatically be provisioned for end-to-end IPv6 access to Netflix (and anything else IPv6 accessible on the Internet), along with any set-top boxes which may use IPv6 (Apple TV apparently does, but I don't own one to be able to confirm this).
Canadian Netflix is pretty crappy compared to the American version, and we don't have much else. It's not like the content companies want to sell their products here, at least in an easy-to-purchase downloadable format.
Netflix is fully IPv6 enabled, which is actually great news for Canadian Netflix users. Just set up an IPv6 tunnel to the nearest Hurricane Electric tunnel server farm (if you have a router that supports this, you can enable IPv6 invisibly for your entire home quickly and easily -- Apple's routers all support this out of the box, for example), and presto -- you'll have US Netflix.
Note that this only works on IPv6-enabled devices, of course, so your set-top box or smart TV may not benefit. You also have to ensure the browser you're using properly supports Happy Eyeballs, so that it will prefer IPv6 over IPv4. (Safari on Mac OS X since Lion prefers whichever connection responds fastest, which can cause it to initially load Netflix via IPv6, showing all the US content you can't otherwise see in Canada, only to block you when you actually try to view something because OS X has switched down to IPv4 for optimization purposes.)
As I have IPv6 tunnelling enabled right at the router, there is no software to install or anything to configure anywhere once this is set up, unlike VPN/proxy solutions. It's also fast -- even though the IPv6 is tunnelled, I can't perceive any speed issues when watching content this way.
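For anyone without router support who wants to try this on a single Linux box, the client side of a 6in4 tunnel is only a few commands. All the addresses below are placeholders from documentation/test ranges, not real tunnel endpoints -- substitute the values from your own tunnelbroker.net tunnel details page:

```shell
# Hypothetical 6in4 tunnel setup on Linux (run as root).
# Replace the remote (HE server IPv4), local (your IPv4), and the
# client IPv6 address with the values your tunnel broker assigns.
ip tunnel add he-ipv6 mode sit remote 203.0.113.1 local 192.0.2.10 ttl 255
ip link set he-ipv6 up
ip addr add 2001:db8:1f04:abc::2/64 dev he-ipv6   # your "Client IPv6 Address"
ip route add ::/0 dev he-ipv6                     # send all IPv6 out the tunnel
```

A router-based setup does essentially the same thing once, on your behalf, for every device in the house.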
$5000 per infringer (not per infringement) is the maximum. The minimum is $100, and I've heard word that the court is more likely to impose the minimum. The plaintiff either has to prove actual damages, or can apply for statutory damages of between $100 and $5000, at the judge's discretion. The Copyright Act stipulates that the judge needs to consider whether the infringement was for non-commercial purposes, whether it was for private purposes, and whether paying would constitute hardship for the defendant.
In addition, the court also found that Voltage Pictures has to pay TekSavvy for all costs associated with gathering the data, and that they'll be limited to $5k in damages per person found to infringe.
May sanity reign!
SAT is clearly NP complete, and clearly the existence of good SAT solvers is not a proof that P=NP. This means that there will be relatively small problems that SAT solvers won't be able to solve.
Enjoyed your post, but I have to raise a small quibble.
From a mathematical standpoint at least, being NP-complete doesn't imply that some problems are unsolvable; merely that they won't be solvable in any reasonable amount of computing time. If you have a few hundred billion years of compute time available, a SAT solver might be able to solve even those small problems you mention. Of course, from a practical perspective, none of us will be around to get the result, making such instances unsolvable in a practical sense.
(On the other hand, once the billions of aeons roll by and the machine goes 'ding' and spits out an answer, we do know that we can verify it in poly time. Huzzah!)
While all of this may seem ultra-pedantic, there is enough confusion about NP out there that someone reading your post may get the idea that NP-complete problems are unsolvable. They're not unsolvable -- we can typically fashion algorithms to solve them; it's simply that the best known algorithms take superpolynomial (often exponential) time, and thus may have runtimes exceeding the expected lifetime of the solar system, even with every cycle of compute power ever invented thrown at them.
...unless, of course, someone comes up with a proof that P = NP, in which case all those NP-complete problems can be transformed into P problems. Sure, they might still take a few hundred billion years to get a solution, but at least we'd know how many hundreds of billions of years would be needed to get a solution!
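To make the "solvable, just slowly" point concrete, here's a minimal brute-force SAT solver in Python (the clause encoding and function name are my own, purely for illustration). It always terminates with a correct answer -- it just takes O(2^n) time, which is exactly the blowup described above:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Clauses are lists of ints: 2 means x2, -2 means NOT x2.
    Try all 2^n truth assignments; correct but exponentially slow."""
    for bits in product([False, True], repeat=n_vars):
        # A clause is satisfied if any one of its literals is true.
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits          # satisfying assignment found
    return None                  # all assignments exhausted: unsatisfiable

# (x1 OR x2) AND (NOT x1 OR x2) AND (x1 OR NOT x2) -- satisfied by x1=x2=True
print(brute_force_sat([[1, 2], [-1, 2], [1, -2]], 2))   # (True, True)
# (x1) AND (NOT x1) -- unsatisfiable
print(brute_force_sat([[1], [-1]], 1))                  # None
```

And note that checking a returned assignment only takes polynomial time -- scan each clause once -- which is the "verify in poly time" half of the story.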
But the thing that sticks out the most is - why the hell is it such a crisis that IP addresses are doled out where they are needed, instead of what I am sure you would consider "fairly" to everyone? Is there now a social justice aspect to the IPv4 "crisis?"
Thanks for making it obvious you have no idea what you're talking about.
I have no problem with the disproportionate amount of
- Many of the organizations that want an IPv4 address block (of whatever size) probably already have one. Indeed, due to pre-CIDR allocation rules, many of them have way more than they actually need to use,
- There are more opportunities for addresses to be shuffled about. ARIN has assigned/controls over 1.3 billion addresses, for a population of roughly 530 million people. You have a lot more flexibility when you have nearly 2.5 addresses for every man, woman, and child in your registry area.
As such, you can't point to the pool with the largest number of addresses, and then imply (as the
With that out of the way, if you know anything about routing, you would know that there is a technical crisis in doling out addresses wherever they are needed. Anytime you break up a contiguous address space, you'll generally need two (or more) additional routing table entries to handle the situation. In pre-CIDR days, the situation was fairly simple (although I'm simplifying it a bit to make it easier to communicate): a router only had to look up where to forward a packet based on the value of the first octet, which has only 256 possible values (actually fewer, due to reserved address spaces, such as the unused Class E space). The packet would follow the route until it reached the router in charge of that first-octet value, which would route based on the second octet, again with a maximum of 256 values. Each hop would hit a router with a table of at most 256 entries, until you got to the destination host.
Post CIDR, the address space could be broken up at pretty much arbitrary locations, so knowing the next hop required ever-expanding tables. As soon as you geographically break up, say, 213, into geographically separate ranges (say, for simplicity, a series of
This is how we've got to the point where there are roughly half a million forwarding entries. Maintaining all of these entries in a constantly changing network, storing them, and searching them is getting to be extraordinarily computationally expensive. If you continue to break them up such that no two contiguous addresses are on the same physical network, you could wind up with roughly 3.7 BILLION routing entries.
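To illustrate what those forwarding entries cost, here's a toy longest-prefix-match lookup in Python (the prefixes and next-hop names are made up for the example). Every deaggregated prefix is one more entry that every core router has to store and search:

```python
import ipaddress

# Hypothetical forwarding table: CIDR prefix -> next hop.
# Splitting up 213/8 geographically adds the more-specific entries.
table = {
    ipaddress.ip_network("213.0.0.0/8"): "hop-A",
    ipaddress.ip_network("213.16.0.0/12"): "hop-B",
    ipaddress.ip_network("213.16.128.0/17"): "hop-C",
}

def next_hop(addr):
    """Longest-prefix match: the most specific prefix containing the
    address wins. Real routers use tries or TCAMs rather than a scan,
    but the table still grows with every deaggregation."""
    ip = ipaddress.ip_address(addr)
    best_net, best_hop = None, None
    for net, hop in table.items():
        if ip in net and (best_net is None or net.prefixlen > best_net.prefixlen):
            best_net, best_hop = net, hop
    return best_hop

print(next_hop("213.16.200.5"))  # all three match; /17 is longest -> hop-C
print(next_hop("213.99.0.1"))    # only the /8 matches -> hop-A
```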
IPv4 wasn't designed to be broken up this way. In the early days of CIDR, it was expected that such routing difficulties were far in the future, and that we would have moved to a newer, better protocol by then. Turns out the problems aren't as far in the future as expected, and we've done pretty much squat about it, other than throwing more compute power at packet routing.
So yeah -- you can't just throw addresses where they're needed anymore. Every
For anybody paying any attention over the past few years, this shouldn't come as a surprise.
The IANA ran out of IPv4 address space available for doling out to the Regional Internet Registries (of which there are six) three years ago. APNIC (Asia Pacific) and RIPE NCC (Europe) went below a single
ARIN (North America), however, has 82
Even still, ARIN now only has about 1.3
I feel really ashamed every time this topic comes up on
IPv6 solves a lot of the routing problems inherent in IPv4, making routability a lot easier to compute. IPv6 packets have a simpler header, routers don't need to provide fragmentation services, and there is no header checksum. IPv6 also avoids the routing anomalies present in IPv4 due to things such as the switch to CIDR. We know a heck of a lot more about packet routing now than we did in the 1970s when IPv4 was first designed, and these improvements are baked into IPv6.
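As a sketch of how much simpler the fixed 40-byte IPv6 header is to process (field layout per RFC 8200; the parser itself is just illustrative):

```python
import struct

def parse_ipv6_header(data):
    """Parse the fixed 40-byte IPv6 header. There are no options,
    no fragmentation fields, and no header checksum to recompute at
    each hop -- a router decodes a constant-size structure."""
    ver_tc_fl, payload_len, next_header, hop_limit = struct.unpack("!IHBB", data[:8])
    return {
        "version": ver_tc_fl >> 28,
        "traffic_class": (ver_tc_fl >> 20) & 0xFF,
        "flow_label": ver_tc_fl & 0xFFFFF,
        "payload_length": payload_len,
        "next_header": next_header,   # e.g. 6 = TCP
        "hop_limit": hop_limit,
        "src": data[8:24],            # 128-bit source address
        "dst": data[24:40],           # 128-bit destination address
    }

# Build a sample header: version 6, flow label 12345, 100-byte TCP payload.
hdr = struct.pack("!IHBB", (6 << 28) | 12345, 100, 6, 64) + bytes(32)
print(parse_ipv6_header(hdr)["version"])     # 6
print(parse_ipv6_header(hdr)["flow_label"])  # 12345
```

Compare that with IPv4, where every router must handle variable-length options and recompute the header checksum after decrementing the TTL.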
This is why I cringe whenever I see a post in an IPv6 address exhaustion related
The IANA is out of addresses. RIPE and APNIC are virtually out of addresses (with only enough reserved to aid in IPv4-to-IPv6 tunnelling and translation services). ARIN is down to less than 1.5
/. users, hang your head in shame. You used to be so much better than this. For those of you who do understand the issues involved, bravo on continuing to try to educate the idiot masses about why this is important. I just wish you weren't so few and far between.
That's probably one of the reasons they closed in Canada. Radio Shack used to be the place to go when you needed some components (which they stopped selling). The 200-in-1 electronic kits, the Armatron... I miss those kinds of things.
Nope -- technically, they have never really closed in Canada, but it's a strange story.
RadioShack was operated in Canada by a company called InterTAN. They weren't owned by the US RadioShack at all -- the stores were licensed under an agreement. In 2004, Circuit City in the US bought InterTAN, and one week later RadioShack sued in the US (claiming breach of agreement) to have the licensing agreement cancelled. All Canadian RadioShack stores were then rebranded as "The Source By Circuit City" (which IMO was always a terrible name).
But wait -- there's more. In 2006 RadioShack US then opened 9 stores in the Toronto area running under the "Radio Shack" name. After only a few months in business, they closed all of them down "to focus on core US business".
In January 2009, Circuit City in the US went out of business; however, as "The Source By Circuit City" in Canada wasn't doing too badly, instead of being shut down with the US stores the entire thing was sold to Bell Canada, who renamed the stores "The Source", and who continues to operate them to this day.
As such, many/most of the pre-2006 RadioShack stores haven't actually closed -- they were simply renamed, first to "The Source By Circuit City", and then to just "The Source", which still operates today. InterTAN didn't go out of business -- it just got swallowed up.
Of course, the product selection has changed over the years -- you're probably not going there anymore to get your zener diodes. They still have some parts, but it's not like back in the heyday.
Does that mean only hatchbacks will be permitted in the EU going forward?
(Note to eds: bad titles are bad, and will be mocked.)
It was the last of the plastic MacBooks, self identifies as "Early 2008". The CPU is a Core Duo and is 64-bit capable but Apple did not write 64-bit drivers (or something like that) for this system. It is not compatible with the 64-bit versions of Mac OS X. That makes it a non-64 bit machine regardless of what the CPU is capable of.
Your system runs a Core 2 Duo, and is indeed 64-bit capable.
Here's the rub, however -- your machine only has a 32-bit EFI, which means it can only boot in 32-bit mode. In OS X, this means it can only boot the 32-bit kernel and associated kernel modules. The 32-bit kernel can still run 64-bit applications -- but you'll still have the various limitations of a 32-bit kernel (although as the OS X 32-bit kernel implements PAE, you can still bust the 4GB addressing limitations you see in 32-bit versions of Windows client OS's).
The most recent OS X releases ship with only a 64-bit kernel; systems with a 32-bit EFI are thus left out in the cold.
As such, it's not that your CPU can't handle 64-bit computation, or that Apple didn't write suitable drivers for your system. It's a boot issue due to the 32-bit EFI. So now you know.
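If you want to check this on your own machine, OS X reports the CPU's 64-bit capability and the firmware type separately -- on a machine like the parent's, you'd expect the CPU to report capable while the firmware reports EFI32:

```shell
# Does the CPU support 64-bit? (prints 1 if yes)
sysctl hw.cpu64bit_capable

# Is the firmware 32-bit or 64-bit? (prints EFI32 or EFI64)
ioreg -l -p IODeviceTree | grep firmware-abi
```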
Did you ever use Windows 3.0?
If you did, you'd understand why people thought Windows 3.1 was... GENERAL PROTECTION FAULT.
I remember some of Microsoft's advertising around the release of Windows 3.1, heavily touting the fact that there were no more "Unexpected Application Errors", and thus Windows 3.1 was so much more stable than Windows 3.0.
Of course, the truth of the matter was that they had just renamed the error condition to "General Protection Fault"; it was no more stable than 3.0.
Windows 3.1 was the last version of Windows I ever ran on personal hardware (and I steer clear of Windows at work as much as possible).