FWIW, a number of critical Foundation-level APIs are C++ under the hood. Whether linking a newer libc++ dylib would cause them to break or not, I couldn't begin to guess.
The H-1B program is different because H-1B workers who leave their jobs are also legally required to leave the country. This makes them captive labor, almost to the same extent that illegal immigrants are. IMO, we should make green cards easier to obtain and kill the H-1B program outright. By ensuring that foreign workers have similar employment mobility to native workers, it would reduce the ability of unscrupulous companies to bring in workers from overseas and pay them wages that are below the regional going rate. (They would still be able to do it, but they wouldn't be able to retain those employees, so they would eventually be forced to pay wages that are competitive within their geographical area.)
There's nothing xenophobic about wanting to stop the H-1B program from being a way to cut costs. If you truly need to bring in talent from overseas because you can't get it in the U.S., that's one thing, but if you are firing American workers and bringing in foreign workers to do the same job at a lower cost, that's quite another. It is abusing the system, and unfortunately, the H-1B system was practically designed to make such abuse easy.
So how does a 40-year-old computer system get replaced, and the replacement can only track double the number of flights?
Tracking double the number of flights likely requires about 4x the amount of computing power. A naive comparison grows at a rate of n(n-1)/2. You might be able to reduce that by not comparing aircraft that aren't going to be anywhere near each other (e.g. a plane in Washington, D.C. cannot readily crash into a plane in Los Angeles, CA until they get close to halfway across the country), but still....
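To illustrate the scaling, here's a toy sketch (hypothetical function names, nothing to do with real ATC software) comparing the naive all-pairs comparison count against a simple spatial-bucketing scheme that only compares aircraft in the same grid cell (a real system would also need to check neighboring cells):

```python
def naive_pair_count(n):
    # Every aircraft compared against every other: n(n-1)/2 pairs.
    return n * (n - 1) // 2

def bucketed_pair_count(positions, cell_size=100.0):
    # Hypothetical optimization: bucket aircraft into grid cells and
    # only compare within a cell. (A real implementation would also
    # compare against the eight neighboring cells.)
    buckets = {}
    for x, y in positions:
        key = (int(x // cell_size), int(y // cell_size))
        buckets.setdefault(key, []).append((x, y))
    return sum(len(b) * (len(b) - 1) // 2 for b in buckets.values())

# Doubling the fleet roughly quadruples the naive comparison count:
print(naive_pair_count(1000))  # 499500
print(naive_pair_count(2000))  # 1999000 (~4x)
```

The ~4x figure falls out directly: n(n-1)/2 is dominated by the n² term, so doubling n quadruples the work unless you prune the search space.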
That's true, but Firefox and Chrome don't maintain backwards compatibility forever, either. Firefox 16 and Chrome 21 are the last versions that support 10.5. And older, 32-bit-only machines are limited to Chrome 38 even if they're running 10.6.x. Otherwise, I think they're both still supporting 10.6.8 for now, but it is probably just a matter of time.
IIRC, they already don't support certain features on old operating systems. For example, Chrome supports WebGL only on 10.8 and later (unless they've changed that recently). So although the UI might be getting updated and security holes might be getting fixed, they're still not getting the full upgrade experience.
Being anti-H-1B is progressive. Progressives generally believe that corporate abuse of workers is bad, and H-1Bs represent the ultimate pathway to worker abuse, by creating a class of people who cannot afford to demand equal pay (because if their employer terminates them, they have to leave the U.S.), who have a harder time moving from company to company (or at least who perceive themselves to have a harder time, which in practice is basically the same thing), and who therefore will end up working for substandard wages by local standards.
And then those H-1B workers end up depending on government subsidies, low-income housing, etc. because the cost of living in high-tech areas is based on typical salaries, not H-1B salaries. In effect, everyone else in the area pays to support these people, solely because their employers were too cheap to pay them properly.
Progressives tend to take a dim view of turning our country into a caste system. Just saying.
All you have to do is put them on tables, with their wires stretching out across the living room floor. Sure, if you only use your laptop on a desk, it will never happen, but that's not how most people (outside of office environments) use laptops in the real world.
With that said, Apple's round plugs were way too big, and thus made great levers, so you didn't even have to trip over them to break them. Placing them on your lap in the wrong way was sufficient....
Plenty of websites use JSON-based GET requests to post comments on web boards. Is it ideal from a design perspective? No. Is it common? You bet.
Magsafe is crap; the cables look ugly and break in no time. They're also no faster to connect than, say, HP or Dell round power connectors.
Round power connectors are crap. The jacks stretch and stop making proper contact in no time. Apple used to use them back in the PowerBook days, and on the PB 145, I broke at least three cables and at least two or three jacks on the back of the device over the course of three or four years.
Magsafe connectors are a godsend by comparison. In the eight years or so that I've been using them, I've broken zero ports. And if you don't count the recycled MacBook Air cables that I was using with my rechargeable external power brick, I've also broken zero cables. (If you count those, I've broken two or three, but given that the external power brick company cut them off of dead power supplies, odds are good that they had been seriously abused long before I got them.)
Magsafe 2, however, is a train wreck. The contact surface is too small to have any real grip, so they tend to fall off while I'm moving my laptop from a tray table to my lap. That "upgrade" was a huge step backwards. That's the one thing I really miss about the pre-retina MacBook Pro, and I'd be more than happy to see Apple add an extra millimeter of thickness at the edge of their case (the center is plenty thick enough) to allow them to go back to the (far superior) earlier design. With that said, I did appreciate the lighter weight of the Retina MBP when one fell edge-first out of an overhead bin onto my head a few months ago... but I digress.
The new USB power connector is doubly bad, because it has all the same problems as the older, breakage-prone designs, plus it steals your ability to use your USB port without plugging in a clumsy adapter cable. The absolute last thing I want to do is have to carry around some weird splitter cable just so I can charge my laptop and a cell phone at the same time. And of course, as an iOS developer, I keep more than one cell phone connected to my laptop for much of the day, so the new MacBook really would be nightmarish from my perspective; I'm hopeful that Apple does not even *think* about taking their Pro line in that direction.
1. The court that handed down the injunction is the arbiter for copyright law.
Agreed so far.
2. The cache-only service is the means of enforcing the injunction.
Nope. The cache-only service isn't the one being enjoined. The party being enjoined is ISP A (the users' ISP). However, they aren't in a position to actually do anything about the injunction because they aren't ISP B (the Pirate Bay mirror's ISP). Their only way of "handling" it is to block the site in a manner that directly harms the business of CDN C (CloudFlare) and hundreds of other innocent businesses. CloudFlare, in turn, is also not capable of truly enforcing the injunction, because the Pirate Bay website mirror can trivially switch off CloudFlare with a simple DNS change and avoid any block that CloudFlare might put up.
The sole plausibly effective means of enforcement is for the courts to order CloudFlare to disclose the source IP for the website, and to then get an injunction against the correct ISP. And if that ISP turns out to be outside the UK, then it is likely beyond the reach of UK law, and that's a reality that the UK government will simply have to accept.
3. If you go to the other end of the spectrum and follow the lowest level of law, then copyright is dead on the Internet.
The reality is that there will always be sites on the Internet in countries that have weak laws. Any government that thinks it can somehow put up road blocks that will adequately prevent people from accessing those sites is a government of fools. Just take a look at how many people pay for VPN service to get around geo-blocking of TV shows, or to avoid censorship by oppressive governments.
As John Gilmore put it, "The Net interprets censorship as damage and routes around it." That's the way it has always been, and practically speaking, that's the way it always will be.
For this reason, if you want to fight piracy, you cannot hope to do so using technical measures. It never worked before, yet in spite of more than thirty years of trying to do so and failing (think Macrovision, floppy disk copy protection, etc.), corporations keep trying to make it work, and idiotic governments keep trying to find ways to legislatively turn this hopeless cause into something that's magically feasible. You know what they say about insanity?
Mind you, I don't have the right answer; if I did, I'd be rich. But I do know how to spot the wrong answers.
4. The cache-only service could segregate the different sources to different IPs, so different countries could enforce their own laws by blocking selected content.
First, there are only so many IP addresses. They can't realistically cache each site on its own IP address. The cost would be astronomical. Second, even if they could, how can you do that without also making it easier for oppressive regimes to suppress information? Ethically and morally speaking, a CDN must be content-neutral. There's simply no acceptable alternative.
It could also be a real boon for windmill users. Store power that the windmill provides at night (when you probably aren't using much, if any, power) and sell it back (or use it) during peak usage periods.
And 48 kWh, which is cited above as "about average", means no home servers running 24x7 (about 200 W × 24 h = 4.8 kWh per server, or roughly 10% of that estimate), no super-duper Christmas lights [komar.org], and other limitations...
My home server runs 24x7. It draws 11W when idling, or about 264 watt-hours per day, and the current versions draw barely half that. Compared with heating and cooling, the server is lost in the noise. Unless you're serving a site that absolutely requires staggering amounts of computing power or desktop-sized hard drives, might I suggest you consider more power-efficient server hardware?
If I were still using such an ancient 200W horror, replacing it with a 6W server would save me almost $650 annually at my current PG&E rate. In other words, the new hardware would be basically free after the first year or so.
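The arithmetic behind that savings claim, as a sketch (the ~$0.38/kWh rate is my assumption chosen to match the stated figure; actual PG&E tiered rates vary by plan and usage):

```python
def annual_savings(old_watts, new_watts, rate_per_kwh):
    # kWh saved per year: wattage difference * hours in a year / 1000
    kwh_saved = (old_watts - new_watts) * 24 * 365 / 1000
    return kwh_saved * rate_per_kwh

# Replacing a 200 W server with a 6 W one, at an assumed high-tier
# rate of about $0.38/kWh:
print(round(annual_savings(200, 6, 0.38), 2))  # 645.79
```

At roughly $646 per year, a few hundred dollars of low-power server hardware does indeed pay for itself in about a year.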
CloudFlare blocks Tor exit nodes heavily; you have to fill out a captcha almost every other page refresh. It makes it almost impossible to navigate a website.
CloudFlare blocks any IP address that sends an insane number of page hits in a short period of time, because the vast majority of those IPs are being used by automated bots running on sites like Amazon EC2 to scan websites and post spam links en masse. There's no good way for CloudFlare to tell the difference.
And yeah, that policy is problematic. It caused me to endure a protracted back-and-forth with Amazon over getting my affiliate account activated, because CloudFlare was treating Amazon's web crawler bot's IP range as a potential spammer and showing it a captcha page for every result.
That seems incompatible with your distaste for "kowtowing to the enemies of freedom" and trying to allow customers access to your books even if a government doesn't want them to have access.
There's also a decided benefit to blocking web-posting mass spammers, and although the captchas are annoying, they don't prevent you from using the site entirely; they merely make it a pain in the backside. On balance, although it isn't ideal, it is acceptable, IMO, because A. it is trivial for end users to get around and thus is not a true block, and B. it serves a very useful purpose in the default case while causing a hassle for only a tiny fraction of a percent of the site's users (at most).
(Incidentally, the book thing was purely hypothetical; my books are pretty tame.)
In any case, you're asking the wrong questions. You're looking at it from the perspective of one of those big cloud providers. The truth is, the big players can't protect your site. The big players have too much to lose. If you want your site protected, you cannot go to the cloud.
On the other hand, the big players are also the only ones that can protect the site. The small players who have nothing to lose will just get blocked and won't have enough pull to do anything about it. They'll have no choice but to bend to any random government's demands if they want to avoid their entire IP range getting blocked en masse. Only a company that is big enough to serve real companies' content can be even slightly effective at protecting you against bullying by world governments.
So basically, when you combine that fact with your statement, you end up with a world in which there can be no protection for free speech, because the only companies big enough to defend it have too much to lose, and thus cannot afford to do so. In effect, the world's free speech becomes limited to the lowest common denominator—to content that complies with the strictest laws anywhere in the world. I know that's what the leadership of those countries would like, but it is simply too high a price.
IMO, what is needed is a U.S. law that says that any U.S. company, being an entity that exists solely at the pleasure of the U.S. government, can be fined for not preserving, protecting, and defending the Constitution, including the First Amendment, against all threats, foreign and domestic. That would at least provide a counterweight—a punishment for bending too far.
In the absence of that, though, the CDNs need to step up on their own. They need to stand up for free speech, and they need to defend their presumed innocence as a blind cache by requiring that all legal actions be taken against the original site directly, and by taking steps to make it painful for anyone who tries to make an end run around that policy. It is a legally defensible position to hold, and more importantly, it is the only morally and ethically reasonable position to hold. All other positions are a slippery slope that eventually leads to blocking speech that truly deserves defending.