Somehow I fondly remember VMS running on HP hardware back in the 90s. A local university had a dialup guest account. It was fun. Going back to the DOS prompt after a finished session always made me hurt and long for something better than DOS.
The "somehow" is that you're hallucinating: VMS didn't run on any HP hardware until 2002. Before that it ran only on DEC and Compaq hardware.
nearly 30% of Americans either aren't digitally literate or don't trust the Internet
For that to be true, over 70% of Americans must be BOTH digitally literate AND trust the Internet, which is impossible since anyone who trusts the Internet is not digitally literate.
IPv6 addresses are so long that you can't remember them long enough to read the address from one machine and type it into another.
Which is not a problem because normal people don't have to read the IP address from one machine and type it into another. They use DNS and DHCP, which were specifically intended to eliminate the overwhelming majority of instances of dealing with IP addresses directly.
I've been a networking software engineer for most of my career, so I do have to deal directly with IP addresses (v4 and v6) routinely, and I don't complain about it. My mother is not a networking software engineer or IT person, so she's had to do that exactly ZERO times in the 15+ years that she's used the Internet.
But, it seems unworkable from a human perspective. No I haven't thought of a better solution. I'm just saying that this is a significant usability problem and a barrier to adoption.
It's not a usability problem, because people shouldn't be directly dealing with IP addresses. If people are directly dealing with IP addresses, that is the usability problem which needs fixing, and not the length of the address.
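For a sense of the lengths involved, the standard IPv6 shorthand already does a lot to tame the addresses people occasionally do see. A quick sketch using Python's stdlib ipaddress module (the address here is from the 2001:db8::/32 documentation range, not a real host):

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:db8::8a2e:370:7334")

# Full 39-character form nobody is expected to memorize:
print(addr.exploded)    # 2001:0db8:0000:0000:0000:8a2e:0370:7334

# Compressed form that tools display, with runs of zeros folded into "::":
print(addr.compressed)  # 2001:db8::8a2e:370:7334
```

Even so, the compressed form is something DNS is meant to spare ordinary users from ever typing.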
XFS is prone to data corruption when improperly shut down.
Really? Ugh. I thought most modern file systems were consciously designed to avoid that sort of problem.
They're adding "slow lanes", and moving services that don't pay up into the slow lanes.
The whole thing is nothing but greed. The ISPs at both ends are already being paid for the bandwidth, but the ISP at the consumer end wants to be paid for it twice, once by the consumer and once by Netflix.
Would you argue that if a Microsoft (or other vendor) SSL implementation was used by most of the world's web servers, this would have been less likely to happen? As far as I know, there's no reason to think that any other implementation, open or closed, would be any more immune to such problems. There is little or no evidence that closed source software is generally more reliable, or that substantial effort is made to audit it.
If you're arguing that it's bad that such a high percentage of the world's web servers use the same software, I might agree, but that is completely orthogonal to whether that software is open or closed.
Heartbleed is a perfect example of why software should be written in "safe" languages, which can protect against buffer overruns, rather than unsafe languages like C and C++.
Of course, the problem is that if you try to distribute open source software written in a safe language, everyone bitches and whines about how they don't have a compiler for that language, and how run time checking slows the software down by 10%. Personally I'd rather have more reliable software that ran 10% slower, than less reliable software that ran faster. It's also crazy to turn off the run-time checks "after the software is debugged", as if the debugging process ever succeeded in finding all the bugs. As C.A.R. Hoare famously observed in 1973, "What would we think of a sailing enthusiast who wears his lifejacket when training on dry land, but takes it off as soon as he goes to sea?"
The "with enough eyes" argument, and "if programmers were just more careful" arguments don't justify continued widespread use of unsafe languages. Granted, safe languages don't eliminate all bugs, but they eliminate or negate the exploit value of huge classes of bugs that are not just theoretical, but are being exploited all the time.
I keep hoping that after enough vulnerabilities based on buffer overruns, bad pointer arithmetic, etc. are reported, and cost people real money, that things will change, but if Heartbleed doesn't make a good enough case for that, I despair of it ever happening.
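The over-read at the heart of Heartbleed can be sketched in a few lines. This is a simplified illustration, not OpenSSL's actual code: the hypothetical heartbeat() function trusts an attacker-supplied length, and Python stands in for any bounds-checked language. Where C's memcpy would silently read past the buffer and leak adjacent heap memory, the checked version fails loudly:

```python
def heartbeat(payload: bytes, claimed_len: int) -> bytes:
    """Echo back claimed_len bytes of payload, trusting the claimed length.

    In C, memcpy(reply, payload, claimed_len) would happily copy past the
    end of the buffer, returning whatever sits in adjacent memory -- the
    Heartbleed bug. In a bounds-checked language the over-read is caught.
    """
    reply = bytearray(claimed_len)
    for i in range(claimed_len):
        reply[i] = payload[i]   # raises IndexError once i >= len(payload)
    return bytes(reply)

print(heartbeat(b"bird", 4))    # honest request: b'bird'

try:
    heartbeat(b"bird", 64)      # lying request: crashes instead of leaking
except IndexError:
    print("over-read rejected")
```

The run-time check that makes the second call fail is exactly the kind of check people want to disable for that last few percent of speed.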
so basically if they start building the uranium enrichment plants now, they might have a working nuke in 10-20 years.
There's an existence proof that it can be done in four years, if someone is willing to devote sufficient resources to it.
1.80 x 10^12 furlongs per fortnight -- it's not just a good idea, it's the law!
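The conversion is easy to check from exact definitions (the metre is defined from c, and a furlong is exactly 660 feet):

```python
# Speed of light in furlongs per fortnight, from exact unit definitions.
C_M_PER_S = 299_792_458          # m/s, exact by definition of the metre
FURLONG_M = 201.168              # 1 furlong = 660 ft = 201.168 m exactly
FORTNIGHT_S = 14 * 24 * 3600     # 1,209,600 seconds

c_ff = C_M_PER_S * FORTNIGHT_S / FURLONG_M
print(f"{c_ff:.4e}")             # 1.8026e+12
```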