The problem with DLLs is that many versions of the same DLL often need to run at the same time. That means you can substitute one version for another and hijack a program. Nothing new here.
If only it were as benign as that. You can even inject a DLL into a system process and then have your code executed as that process, unless things have changed dramatically in the past four years.
Are you suggesting that Windows makes a toy computer? Wouldn't a toy GUI consist mostly of big colored squares, dumbed down applications, and a supervisor monitoring your usage patterns?
And I present
Banks can roll back transactions for various reasons, e.g. bankruptcy proceedings, mistakes by their own operators or by customers, or
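Rollback in that sense is the standard database transaction mechanism: tentative changes are visible inside the transaction but can be discarded before they're committed. A minimal sketch with SQLite (hypothetical accounts, not a real banking schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

try:
    # An operator mistake: debiting far more than the account holds.
    conn.execute("UPDATE accounts SET balance = balance - 500 WHERE name = 'alice'")
    (bal,) = conn.execute("SELECT balance FROM accounts WHERE name = 'alice'").fetchone()
    if bal < 0:
        raise ValueError("overdraft")
    conn.commit()
except ValueError:
    conn.rollback()  # undo the uncommitted debit

(final_balance,) = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'"
).fetchone()
print(final_balance)  # 100 -- the mistaken debit was rolled back
```

Real bank reversals after commit (e.g. in bankruptcy proceedings) are compensating transactions rather than rollbacks, but the principle is the same: the ledger is mutable by design.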
A lossless audio CD rip averages about 350 MB. DVDs usually run 4-8 GB ripped and compress to about 2 GB on average. OTA HD streams average 12-14 Mbps, about 7 GB/hr. BD movies at 32 Mbps average about 30 GB for 100 minutes.
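As a rough sanity check on those figures, bitrate converts to storage as Mbps × 3600 s ÷ 8 ÷ 1000 ≈ GB/hr. A back-of-the-envelope sketch (video track only; container overhead, audio tracks, and extras push real files higher):

```python
def mbps_to_gb_per_hour(mbps: float) -> float:
    """Convert a stream bitrate in megabits per second to gigabytes per hour."""
    return mbps * 3600 / 8 / 1000  # seconds/hr, bits -> bytes, MB -> GB

print(mbps_to_gb_per_hour(14))  # 6.3 GB/hr -- in the ballpark of the ~7 GB/hr quoted
print(mbps_to_gb_per_hour(32) * 100 / 60)  # 24.0 GB for 100 min of 32 Mbps video;
# lossless audio tracks and extras account for the rest of a ~30 GB disc
```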
People with a bit more specialized needs (hardcore gaming, media production, virtual machines, etc.) will probably soon be able to pick up a 1 TB SSD for around $200.
And you can get an 8TB Seagate Archive HDD for $223 at Newegg today; if you need or want to store lots of data, it's still far cheaper. The real issue from the manufacturers' side is that nobody will pay a premium for anything. You get an SSD for everything performance-related and the cheapest, slowest HDD for bulk storage, because for streaming huge media files you just have to be fast enough: they're mostly accessed linearly, and even a video server for a big family only serves a handful of streams at once. And a lot of people are streaming more or doing download-and-delete; to be honest, I hardly ever get around to watching most things again. Every so often I just clean out a few TB of stuff that was collecting dust.
You think, say, the Linux kernel isn't useful? They've been on a three-month cycle for ages: roughly one month of merge window and two months of release candidates.

Basically what you want is for everybody to time-box what they can do before the next release, but you can't plan that if you don't know how long the cycle will be. If it's two months you'll do some quick enhancements and fixes, but if it's six you'll do a deeper restructuring. If 90% of your developers have finished according to plan and 10% are threatening to hold up the release, the great majority won't be able to make effective use of a small extension. It's better to just scrub the parts that aren't ready and say: we're releasing now, sorry, try again next merge window. Of course, that assumes the project is large enough that there'll be some release-worthy items every cycle, and that people don't just submit shit for release no matter what state it's in.

There's a lot less drama about who is important enough to rush patches and delay releases if the answer is always no, you can't. Only bugfixes during RC; if your code breaks shit or needs major rework, you're bumped to the next version. If you don't have a person with the balls to manage that, your releases will suck, but if you can't stand up to the developers, a rolling release will probably suck too.
His cameo in the actual Iron Man movies is a pretty good indicator that at least someone at the movie studios also saw the connection, so yes.
Probably massively distorted by stars who accept all friend requests and serve as hubs.
Basically, when you make such a rule, you need some minimum standard for what qualifies as a "connection". If you bring it down to FB standards, which is basically "I once saw you from afar on the street", the distance becomes minimal. In real-world terms, if you actually counted "once saw you on the street", I'm fairly sure that even for large cities the average would be something like 1.8.
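The hub effect mentioned above is easy to see with a toy graph and plain BFS. A sketch with a hypothetical seven-person network where one "star" is connected to everyone and nobody else knows each other:

```python
from collections import deque

def avg_distance(graph):
    """Average shortest-path length over all ordered node pairs, via BFS from each node."""
    nodes = list(graph)
    total, pairs = 0, 0
    for src in nodes:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for dst in nodes:
            if dst != src:
                total += dist[dst]
                pairs += 1
    return total / pairs

# One hub (node 0) linked to six others; no other edges.
hub_graph = {0: [1, 2, 3, 4, 5, 6]}
for i in range(1, 7):
    hub_graph[i] = [0]

print(avg_distance(hub_graph))  # ~1.71: a single hub pulls the average below 2
```

Every pair is at most two hops apart through the hub, so a handful of stars accepting all friend requests can collapse the average distance of the whole network.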
It's not as simple as how many flops can you do.
This is why I quoted three sets of tests. The Top500 is pretty much flops-focused, a very specific test for a very specific workload, which is what all supercomputers were originally targeting back when that benchmark started. While Intel can compete in this arena, as soon as you move to what we might call more realistic workloads, Intel's weaknesses spring out everywhere. You speak of latency: Intel's x86 base architecture has huge issues with process/thread switching compared to any of the RISC entries. Those effects are what kill Intel in the Graph500 list. The Green500 is just a bonus, showing how power-hungry these processors are, yet as of today they are the hardware most of us are most likely to run. It's kind of like being tied to the current set of inherently dangerous nuclear reactors when a better design has existed for decades, but no one wants to spend the extra cash to get one operational.
AMD also suffers from the process/thread-switching costs, since they were originally x86-based too. I'll be honest that I haven't kept up with what they've done since they developed their RISC-like core, so I can't comment on the extent to which they suffer from those effects today.
Money cannot buy love, nor even friendship.