Integer overflow has absolutely nothing to do with security
Integer overflow has been in the top five causes of CVEs for several years running. Buffer overflows, sadly, are still at the top.
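The classic way integer overflow becomes a security bug is by feeding an allocation size. A minimal sketch (the function names and the packet-reader framing are invented for illustration): an attacker-controlled count is multiplied by an element size, the multiplication wraps, `malloc` returns a tiny buffer, and the subsequent copy runs off the end — an integer overflow that turns directly into a buffer overflow.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical packet reader: the attacker controls `count`.
 * `count * sizeof(uint32_t)` can wrap around, so malloc gets a
 * tiny size while the copy writes `count` elements' worth of
 * data: integer overflow becoming a heap buffer overflow. */
uint32_t *read_items_unsafe(size_t count, const uint32_t *src) {
    uint32_t *buf = malloc(count * sizeof(uint32_t)); /* may wrap */
    if (!buf) return NULL;
    memcpy(buf, src, count * sizeof(uint32_t));       /* overflows heap */
    return buf;
}

/* Safe variant: reject any count that would make the size
 * computation overflow before calling malloc. */
uint32_t *read_items_safe(size_t count, const uint32_t *src) {
    if (count > SIZE_MAX / sizeof(uint32_t)) return NULL;
    uint32_t *buf = malloc(count * sizeof(uint32_t));
    if (!buf) return NULL;
    memcpy(buf, src, count * sizeof(uint32_t));
    return buf;
}
```

The check-before-multiply idiom is the standard fix; some codebases instead use a checked `calloc` or a builtin like `__builtin_mul_overflow`.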
Walled gardens like AOL and CompuServe failed because they had to compete with everyone else. In the early '90s, there was a lot of content that was exclusive to AOL or CompuServe, and there were loads of small BBSes with their own unique content. And then there was the Internet. Anyone could put something on the Internet, and once web browsers became easy to install, anyone could put up a web page. Individuals would put things up on their ISP's web space or somewhere like Geocities; big companies would buy their own servers. Small independent ISPs started to spring up because the cost of entry was low: a rack of modems, a leased line, and a load of phone lines, and you could be an ISP. Local ISPs competed by differentiating themselves in various ways (free email, free web space, static IPs, whatever).
Meanwhile, AOL and CompuServe (OSPs: Online Service Providers) were trying to sell access but also be responsible for all of the content. The parallel with Facebook isn't quite there, because Facebook is only selling the content. The problem is that, while there is some content on Facebook, anyone who can access Facebook can also access the whole of the web. Facebook somehow needs to justify publishers putting content on Facebook (where only Facebook users can see it) rather than just putting it on a web site. Its argument is that publishers who do so can collect lots of data about potential customers, but it's not clear that this is a good long-term trade.
*sends Morse "telegrams" with ham licence and homebrew radio costing about $25 one-off in junk parts*
If you think Morse sent over a Ham radio connection is equivalent to a telegram, then you're missing the point. It's only equivalent if someone prints it off at the far end and couriers it to the recipient.
Can you imagine the number of dit-dah combinations you'd need to memorize for a minimum of 2000 or so kanji?
You'd probably use a short sequence for each of the brushstrokes and compose Kanji from them. There's an input method (Cangjie, I think) that works like this with a QWERTY keyboard. From 26 brush strokes, it can compose any Kanji and is apparently the fastest way of entering Kanji on a computer, although it takes a while to learn.
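The core of such an input method is just a lookup from a short stroke-key sequence to a character. A minimal sketch — the stroke codes here are invented for illustration and do not match real Cangjie key assignments:

```c
#include <string.h>

/* Hypothetical stroke-composition table: each character is keyed by
 * a short sequence of stroke letters.  These codes are made up for
 * illustration; a real input method has thousands of entries and
 * uses a trie or hash table rather than a linear scan. */
struct entry {
    const char *strokes; /* typed stroke sequence */
    const char *utf8;    /* composed character    */
};

static const struct entry table[] = {
    { "h",   "一" }, /* one horizontal stroke                   */
    { "hv",  "十" }, /* horizontal then vertical                */
    { "hvh", "土" }, /* invented code: horiz., vert., horiz.    */
};

/* Return the character for a stroke sequence, or NULL if unknown. */
const char *compose(const char *strokes) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].strokes, strokes) == 0)
            return table[i].utf8;
    return NULL;
}
```

The "takes a while to learn" cost is exactly the memorisation of which stroke letters map to which keys and how characters decompose.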
Oh please, the same arguments were made when APIs like DirectX first started surfacing. You may as well argue that unless you're programming directly at the assembly level, you're wasting your time.
DirectX came with some advantages as well, though. As with OpenGL, you ended up with a slower game than if you'd written specialised code for a specific graphics accelerator, but the trade was that you got to support all shipping accelerators (and ones that hadn't shipped yet): you wrote the code once and got a speedup everywhere. With DirectDraw, you got things like accelerated sprite drawing from hundreds of graphics cards that all had different interfaces for bit blits. This meant that your DirectDraw game would often be faster than a DOS version that was talking to the VGA hardware (or SVGA via VESA) and doing the same thing entirely in software.
An abstraction layer that has a small cost but buys you acceleration support in exchange is worthwhile, because the net result is faster (and simpler) code. An abstraction layer that has a big cost and doesn't give you faster code is much harder to justify.
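The "small cost" of such a layer is typically one indirect call. A minimal sketch of the pattern (function and type names are invented, not any real DirectDraw API): the game calls a single `blit` entry point, and at init time the layer installs either a driver-supplied accelerated routine or a software fallback.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical 2-D blit abstraction: one entry point for the game,
 * with the implementation chosen at init time.  The indirection
 * costs a pointer dereference per call, but buys hardware
 * acceleration on every card that supplies a driver routine. */

typedef void (*blit_fn)(unsigned char *dst, const unsigned char *src,
                        size_t nbytes);

/* Software fallback: a plain memory copy. */
static void blit_software(unsigned char *dst, const unsigned char *src,
                          size_t nbytes) {
    memcpy(dst, src, nbytes);
}

/* The currently active implementation. */
static blit_fn blit = blit_software;

/* A real layer would query the driver for an accelerated routine;
 * here the caller passes one in (or NULL to fall back). */
void blit_init(blit_fn accelerated) {
    blit = accelerated ? accelerated : blit_software;
}
```

With hardware behind `blit`, the indirect call is noise next to the speedup; without it, you pay the indirection and still run the software path — the two cases in the cost/benefit argument above.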