What you're saying basically boils down to "in the end you have to trust the people who wrote the OS or built the device". Yes, yes you do. This article is an example of how one such group abused that trust. Of course Apple and Google could do the same, but absent any evidence that they have done so, saying they could is kind of redundant.
According to my update history they automatically uninstalled it the next day (via a new update). So the auto updates worked - no drama.
Apple could certainly pay more tax than they do (and I would totally support legal changes to require that), but to call $5.3 billion "nothing" is a bit of a stretch.
And I honestly don't think Microsoft are trying to control what you do with their software. At least, I've never seen anything like that. All the licensing stuff is about proving you actually did buy it, and thus proving that the first sale doctrine even applies. It's a nuisance for sure, but I'm not sure what the alternative is. That said, as a 20+ year user of their products I've had to call for a license activation precisely once and it took maybe 60 seconds. I can live with that.
I've worked with MaxMind stuff on mobile IP location - as the guy says it's pretty useless. If the user is on wifi it's not too bad, at least the IPv4 stuff could pretty reliably get the state and often city. I never had any luck with IPv6, although they claim to support it better now.
The big kicker is if the user is on cellular - at least in the US most cell networks are natively IPv6, and they tunnel connections through giant NAT devices. This leads to two interesting effects. First, the IPv4 address you see on the server is located at some random data center, usually on the other side of the country from the user. Second, the IP (and therefore the data center) keeps changing - sometimes multiple times within a few minutes. Doing any kind of tracking leads to a device which appears to keep hopping back and forth between California and Kansas.
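One detail that makes this detectable from the device side: the carriers' NAT boxes typically hand devices addresses from the RFC 6598 "shared address space" (100.64.0.0/10), which is reserved for exactly this carrier-grade NAT setup. A minimal sketch using only Python's stdlib (the range is standard; the helper name is mine):

```python
import ipaddress

# RFC 6598 "shared address space", reserved for carrier-grade NAT (CGN).
CGNAT_RANGE = ipaddress.ip_network("100.64.0.0/10")

def looks_like_cgnat(addr: str) -> bool:
    """Return True if addr falls in the CGN shared range.

    Only meaningful for the address the *device* sees locally; the
    address a server sees is the NAT's public exit point, which may
    sit in a data center far from the user - hence the useless geo data.
    """
    ip = ipaddress.ip_address(addr)
    return ip.version == 4 and ip in CGNAT_RANGE

print(looks_like_cgnat("100.70.1.23"))   # True - typical cellular CGN address
print(looks_like_cgnat("203.0.113.9"))   # False - ordinary public IPv4
```

Of course this only tells you the geolocation will be junk; it doesn't fix it.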
This Microsoft Research whitepaper talks more about these issues.
(and before anyone jumps on me for the privacy implications of trying to do this - in my specific case it was tracking devices in an enterprise environment for security purposes and everyone involved had given informed consent)
Why do they have to be exclusive options? I backup locally to a server under my desk, and remotely to the cloud. In the (more likely) event of an HDD failure I can restore as fast as my server can spit the data back out and be up and running in a few hours. In the (less likely) event of a catastrophe like a fire it might take a while to restore everything, but at least it's not gone forever (and if I'm willing to pay they'll FedEx me all my data on a drive). If the cloud provider goes bust I still have my local backup and I can switch to a new offsite provider.
FWIW I pay around $12 a month for unlimited offsite storage (and currently use maybe 4TB) - this is with CrashPlan. If you have anything remotely valuable it seems like an obvious thing to do for a little more peace of mind.
He wrote some software, you weren't charged for it and its existence doesn't affect anyone. Your anger, if it exists, should be directed at those forcing you to use it. Who are "no one" or "your distro maintainers" depending on your POV.
Our memory usage scales with load. Our load scales with usage. Predictions about growth in popularity of our product are all very well, but they're no excuse for not monitoring for impending doom.
Of course. But testing will tell you something like "a single instance with a 32GB heap will support 9000 tx/sec with acceptable 99.9% latency". So you can monitor traffic levels and scale out as appropriate well before something monitoring GCs starts seeing problems. Where I work we deal with request rates in the 100k/s range and so if things go wrong they do so very fast - the trick is to know the limits and stay well away from them!
(especially since we have some legacy code that doesn't scale horizontally and so we have to keep throwing more memory at the problem for those services until we can fix that).
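The arithmetic behind "know the limits and stay well away from them" is simple enough to sketch (the 9000 tx/sec figure is from the example above; the 50% headroom target and the function name are illustrative assumptions, not anyone's actual policy):

```python
import math

def instances_needed(peak_tx_per_sec: float,
                     capacity_per_instance: float,
                     headroom: float = 0.5) -> int:
    """Instances required so each runs at no more than `headroom`
    of its load-tested capacity (0.5 = stay at half of what load
    testing showed a single instance can sustain)."""
    usable = capacity_per_instance * headroom
    return math.ceil(peak_tx_per_sec / usable)

# 100k req/s against instances load-tested to 9000 tx/sec each,
# targeting 50% utilisation:
print(instances_needed(100_000, 9_000))  # 23
```

The point of the headroom factor is that scaling out takes time; you want the alarm to fire while you can still add capacity, not when GC pauses are already eating your latency budget.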
Tracking the frequency/duration of full collections is the usual approach. The GC has to work harder as heap space runs out: a system which is tight on memory will do frequent full GCs, while one running with plenty of headroom won't. In particular if you're using G1, seeing full (single thread) GCs at all is a bad sign. I'd also do this out of process, either by monitoring via JMX or simply scanning GC logs. A process trying to monitor itself rarely works out well.
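The log-scanning variant can be a few lines of script. A minimal sketch against the JDK 9+ unified logging format (`-Xlog:gc`); the exact line layout varies with JVM version and flags, so treat the regex as an assumption to adapt:

```python
import re

# With -Xlog:gc, G1 emits lines like:
#   [12.345s][info][gc] GC(8) Pause Full (G1 Compaction Pause) 30M->12M(64M) 85.3ms
FULL_GC = re.compile(r"\[(\d+\.\d+)s\].*Pause Full")

def full_gc_timestamps(log_lines):
    """Return timestamps (seconds since JVM start) of full collections."""
    return [float(m.group(1)) for line in log_lines
            if (m := FULL_GC.search(line))]

sample = [
    "[10.001s][info][gc] GC(7) Pause Young (Normal) (G1 Evacuation Pause) 20M->8M(64M) 4.1ms",
    "[12.345s][info][gc] GC(8) Pause Full (G1 Compaction Pause) 30M->12M(64M) 85.3ms",
    "[19.900s][info][gc] GC(9) Pause Full (G1 Compaction Pause) 31M->12M(64M) 90.0ms",
]
print(full_gc_timestamps(sample))  # [12.345, 19.9]
# With G1, any hits at all - let alone two this close together - is the alarm.
```

Feed the timestamps into whatever alerting you already have; the frequency trend matters more than any single event.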
The better garbage collector for servers (G1) never pauses the world to free everything it can, so it's not like you can look at post-collection heap size or anything.
It's an oversimplification to call G1 "the better collector for servers", it's more complicated than that - and G1 certainly can do a stop-the-world collection, it just tries to avoid it.
I'd also say this - if you're capable of writing C++ without any resource leaks you're capable of writing Java without any resource leaks. In which case memory usage will be predictable and simple load testing will show you how big a heap you need to allocate.
Except that's entirely untrue. You may wish it were, but it is not. I don't have an HOA at my house but there are myriad laws (federal, state & local) which restrict what I can and can't do in and to my house.
It's not about average usage, it's about instantaneous usage. Most of the time my connection is pretty idle, but when I want to download something big (e.g. multiple gigs) I don't really want to wait around for it. That's what I'm paying for - not having to wait.
Don't forget storage. Bandwidth is one thing, but image storage is a big deal for sites like FB. They often store multiple copies of each one (e.g. at different sizes) and then you also have copies cached on CDNs etc, which also costs money. 5% isn't going to make or break the company, but it's worth investigating.
Which in turn would mean that for the problem space it's capable of operating within, it's no faster than a normal computer. Which reduces to "it's no faster than a normal computer".
Door opening: See above re: neighbor or friend, or hide a key somewhere.
A truly special reply suggesting mitigating a theoretical, limited, network security vulnerability by quite literally leaving the physical keys to the castle out in public. Please hand in your risk assessment credentials at the door.
Or you pay a couple of bucks and complain later. This scenario has never happened to me in years of riding the subway, which makes me quite happy to take the $2 charge every few years to avoid dealing with the police.