Or maybe it would mean that you were as smart as Einstein. Or, at least, able to plagiarise Einstein, who did explain special relativity to his children and wrote down the explanation that he used. My father told me the same explanation when I was 11. There's a lot of maths involved, and deriving the mass-energy equivalence formula from first principles requires far more maths than a typical child has. But none of that is needed to understand what Einstein showed, any more than you need to understand Newton's laws to appreciate that he worked out how to calculate where a thrown object will land, and the relationship between an object's mass and how fast two things will fall towards each other.
Einstein used a model of two trains moving towards each other, each with headlights on the front, and asked his children what would happen to the light. If you start with the speed of sound on a train, you get to the answer that the sound goes faster because the air is moving along with the train; for light to behave the same way, some substrate (the ether) would have to be moving. The rest falls fairly naturally out from there.
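Where the train picture eventually lands, once you do the maths, is the relativistic velocity-addition formula. A one-line check (just the standard formula, nothing from Einstein's own write-up) shows why the headlight beam doesn't go faster than light no matter how fast the train moves:

```latex
% Galilean addition (what works for sound carried by moving air):
v = u + w
% Relativistic addition (train speed u, emitted speed w):
v = \frac{u + w}{1 + uw/c^2}
% Setting w = c for the headlight beam:
v = \frac{u + c}{1 + u/c} = \frac{c(u + c)}{c + u} = c
```

The train's speed $u$ cancels out entirely, which is the whole point: every observer measures the beam at $c$.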
General relativity, in contrast, is horribly complex.
Slack is a bloated monstrosity that provides IRC and a few other things, using a combination of Node.js and Chromium to produce one of the largest and most memory-hungry desktop applications that you might ever need to run. Snap is Ubuntu's version of the old PC-BSD PBI installer, where each application comes with all of its dependencies and installs them in a directory so that the package maintainers don't have to worry about incompatible upgrades. The combination of the two allows Slack to consume even more resources, by not even sharing memory mappings for common libraries.
The goal of Slack is to minimise productivity, by consuming all available computing resources and all available attention. Snap packaging helps it consume even more of the former, but unfortunately does nothing to increase the amount of time that people waste on Slack.
Neither? Really? You'd honestly prefer to have a 50% chance of having to wait longer to get your information?
Load times for web sites are now under a second - over 100ms is considered slow. I honestly couldn't care less if two sites that are otherwise equally ranked end up sending me to the one that loads in 500ms instead of 50ms.
What about other characteristics of bad sites? They can be slow, ugly, spammy, malware-laden.
A site that is spammy or malware-laden shouldn't make it into the search index at all. A site that is ugly may still have useful information - and often ugliness correlates quite strongly with utility for technical web pages, so I'd be very unhappy if a search engine decided that I wanted to look at pretty and information-light sites instead.
20-year-old, I might give you. Just. As long as it was a cheap and crappy machine from 20 years ago. 10 years? No chance. A 10-year-old machine is going to be at least a Core 2 Solo, which can handle line-rate TLS on a 100Mbit connection without consuming more than a fairly small amount of CPU. The RAM usage per TLS connection is tiny. It was an issue on machines with 4MB of RAM servicing a few hundred connections, but on your low-end VPS with 256MB of RAM it's trivial.
Most modern IoT devices have hardware AES, so aren't even doing most of the hard work in software, but even doing it entirely in software on something like a Cortex-M3 is very feasible at the kinds of network speeds that these devices can handle.
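As a rough sanity check, here's the arithmetic behind those claims. The AES throughput and per-session memory figures are assumed ballparks for illustration, not benchmarks:

```python
# Back-of-the-envelope: can a ~10-year-old CPU keep up with TLS on a
# 100 Mbit/s link? (Throughput figure is an assumed ballpark.)
link_bytes_per_sec = 100e6 / 8      # 100 Mbit/s -> 12.5 MB/s of payload
aes_sw_bytes_per_sec = 100e6        # assume ~100 MB/s of software AES
cpu_fraction = link_bytes_per_sec / aes_sw_bytes_per_sec
print(f"bulk crypto needs about {cpu_fraction:.1%} of one core")

# Per-connection RAM: assume ~50 KB of buffers and state per TLS session.
sessions = (256 * 1024 * 1024) // (50 * 1024)   # on a 256MB VPS
print(f"a 256MB VPS could hold state for roughly {sessions} sessions")
```

Even if the assumed AES figure is off by a factor of a few, saturating the link still leaves most of the CPU idle, which is why hardware AES on IoT parts makes the software cost a non-issue.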
It's due to Apple's insistence on posting characters like ' as unicode even if the site is not using unicode
Apple doesn't do this on sites not using unicode. Take a look at the HTML for this page and you will see a meta tag telling you that the encoding is UTF-8. The problem is that Slashdot explicitly advertises that it is unicode, but isn't.
The fact that it doesn't support unicode in 2017, when even my terminal does, is a secondary incompetence.
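To make the failure mode concrete, here's a small sketch (purely illustrative) of what happens when UTF-8 bytes are interpreted under a legacy single-byte encoding. I'm using cp1252 for the wrong decoder, since that's the combination that produces the classic garbling:

```python
# A page declares (or assumes) a legacy encoding but receives UTF-8 bytes.
right_quote = "\u2019"                     # the ' that Apple devices send
utf8_bytes = right_quote.encode("utf-8")   # three bytes: e2 80 99
garbled = utf8_bytes.decode("cp1252")      # a non-unicode site's rendering
print(garbled)                             # the familiar three-glyph mess
```

The fix is entirely on the site's side: if the meta tag says UTF-8, decode the bytes as UTF-8.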
You really want to integrate this with the DHCP response (though that's also not authenticated in any way). The problem with
A good first step would be for the DHCP response to include a root cert that can be used only for things on the current network. Ideally, you probably also want something integrated with mDNS, so that devices that publish their names via mDNS can publish their cert via the same mechanism, and have other parties automatically reject names if the signing cert changes. Neither of these mechanisms is very secure, but they're both probably better than nothing - at least they give you reasonable protection against passive eavesdroppers.
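The reject-on-change part is just trust-on-first-use pinning. A minimal sketch (the names, the `check_cert` helper, and the in-memory store are all illustrative, not any real protocol):

```python
import hashlib

# mDNS name -> SHA-256 fingerprint of the first cert seen for that name
pins: dict = {}

def check_cert(name, cert_der):
    """Pin the first cert seen for a name; reject any later change."""
    fp = hashlib.sha256(cert_der).hexdigest()
    if name not in pins:
        pins[name] = fp          # first use: remember it
        return True
    return pins[name] == fp      # later uses: must match the pin

# First sighting of a device is accepted and pinned; a repeat with the
# same cert passes, but a changed cert is rejected.
print(check_cert("printer.local", b"cert-A"))   # accepted, pinned
print(check_cert("printer.local", b"cert-A"))   # accepted again
print(check_cert("printer.local", b"cert-B"))   # rejected
```

This does nothing against an attacker who is present for the very first connection, which is exactly the "better than nothing" caveat above.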
The first version always gets thrown away.