Most compiler bugs are not that difficult to debug
Another compiler guy here: Some compiler bugs are not that difficult to debug if you have a reduced test case that triggers the issue. Most, though, are caused by subtle interactions between assumptions made in different optimisations, so they go away with very small changes to the code and are horrible to debug. They involve staring at the before and after output of each step in the pipeline to find out exactly where the incorrect code was introduced (which is often not where the bug is), and then backtracking to find what produced the code containing the invalid assumption that was exploited later.
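The mechanical part of that search can be scripted. Here's a rough sketch of what I mean: given per-pass IR dumps from a known-good build and from the miscompiling build (the directory layout and file names here are hypothetical; use whatever -print-after-all-style option your compiler offers), find the first pass whose output diverges.

```python
import difflib
from pathlib import Path

# Hypothetical layout: good/ and bad/ each contain numbered per-pass dumps,
# e.g. 003-instcombine.ll. This just reports the first pass whose output
# differs between the two builds, which is where you start staring.
def first_diverging_pass(good_dir: str, bad_dir: str) -> None:
    for good in sorted(Path(good_dir).glob("*.ll")):
        bad = Path(bad_dir) / good.name
        good_ir, bad_ir = good.read_text(), bad.read_text()
        if good_ir != bad_ir:
            print(f"first divergence after pass: {good.name}")
            print("".join(difflib.unified_diff(
                good_ir.splitlines(keepends=True),
                bad_ir.splitlines(keepends=True),
                fromfile=str(good), tofile=str(bad), n=3)))
            return
    print("no divergence found")

first_diverging_pass("good", "bad")
```

That only tells you where the bad code first appears, of course; the pass that emitted it is often just exploiting an assumption broken earlier, which is where the backtracking comes in.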
How does this contempt you hold for the people around you serve you in your daily life? Does it make you feel better about yourself to proclaim your moral superiority?
The name thing was a huge deal-breaker for a fair number of people, and the pathologically horrible way they handled it made it a lot worse. I know dozens of people who would have used G+ but walked away from it because at least one person they knew had bad experiences with it. I spent months with my G+ account in various kinds of limbo because the "appeals" process for name decisions was completely dysfunctional. I eventually ran into someone on slashdot who knew a person who knew a person who could unstick my account and get my name approved, but by that time everyone had lost interest.
And one of my friends used to have a Picasa account, and then somehow it got marked as a G+ profile thing (even though she never intentionally activated G+), and then suspended because their algorithm thought the name was unrealistic, and then she lost access to the Picasa stuff. I don't know whether that actually got resolved.
Very badly run at every level. The most frustrating thing is, they had a guy writing about this who was apparently in some kind of leadership role, and he talked about how the appeals process should work and how the name stuff should work... And nothing he said actually had any influence on the behavior of the product. The actual appeals process consisted of a thing that did not include any mechanism at all for stating your case or explaining why you felt a given name was the right name to use for you, which was then ignored by a machine or possibly a person, who knows. That's it. No mechanism for response or interaction.
Google's hatred of actually dealing with things personally interacted very badly with a policy which was inherently personal.
things like the GHS hazard pictograms, DIN 4844-2, ISO 3864, TSCA marks, and similar such things seem like perfectly reasonable additions to Unicode
No, they don't, because they are pictograms with very specific visual appearances. Such things don't belong in a character set, because the things in a character set are characters. Glyphs (the visual presentation of characters) live in fonts, and each font designer is free to represent them differently, as long as they're recognisable. If every font has to represent something in the same way, then it doesn't belong in a character set; it belongs in a set of standard images.
The other issue with this kind of cruft is collation. The Unicode collation algorithm is insanely complex (and often a bottleneck for databases that need to keep strings sorted). Different locales sort things in different orders, and most have well-defined rules for things that are characters. The rules for how you sort a dog-poop emoji relative to a GHS hazard pictogram, relative to a roman letter are... what?
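To make the locale point concrete, here's a tiny sketch of locale-dependent sorting. The locale names are the common glibc spellings and may not be installed on every system, so treat it as an illustration rather than something guaranteed to run everywhere:

```python
import locale

words = ["zebra", "öl", "ost"]

# In Swedish collation 'ö' sorts after 'z'; in German it sorts with 'o'.
# The same three strings come back in a different order per locale.
for loc in ("de_DE.UTF-8", "sv_SE.UTF-8"):
    locale.setlocale(locale.LC_COLLATE, loc)
    print(loc, sorted(words, key=locale.strxfrm))
```

Letters have centuries of locale-specific sorting tradition behind them; hazard pictograms and emoji have nothing comparable, so any ordering the standard picks is arbitrary and still has to be paid for on every comparison.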
The working class doesn't get to pick where they live. It's expensive as hell to up and move
I'm not totally convinced by this. The poorer you are, the less likely you are to own your own house. That makes moving a lot cheaper (selling a house is expensive, changing rented accommodation is inconvenient but not nearly as expensive).
Unfortunately, unions in the USA managed to become completely self-interested and corrupt institutions. This is partly due to lack of competition: in most of the rest of the world you have a choice of at least a couple of unions to join, so if your union isn't representing your interests you can switch to another one. Partly due to the ties between unions and organised crime in the USA coming out of the Prohibition era. Partly due to the demonisation of anything vaguely socialist during the Cold War, which reduced employee involvement in unions (and if most people aren't involved in the union, then the few that are have disproportionate influence).
Even this has been somewhat eroded by automation. If you're replacing 1,000 employees with robots and 100 workers, then a union's threat to have 600 people go on strike doesn't mean much, and even when it does, it's very hard to persuade those 600 that striking won't mean that they're moved to the top of the to-be-redundant list.
But, back to my original point: lack of jobs for life isn't the real problem. A large imbalance in negotiating power between companies and employees is. When employees are in a stronger negotiating position, companies will favour keeping existing employees because it's cheaper than hiring new ones.
I found that above about 10Mb/s you start to hit diminishing returns. The jump from 10 to 30 was barely noticeable. The jump from 30 to 100 is noticeable with large downloads, but nothing else. From 100 to 1000, the main thing that you notice is if you accidentally download a large file to a spinning-rust disk and see how quickly you fill up your RAM with buffer cache...
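A quick back-of-the-envelope sketch of why it feels that way (file size picked arbitrarily for illustration): each speed bump shaves off less absolute waiting time than the one before it.

```python
# Time to download a 1 GB file (~8,000 megabits) at each speed.
# 10 -> 30 saves ~533 s; 100 -> 1000 saves only ~72 s.
file_megabits = 8_000
for mbps in (10, 30, 100, 1000):
    print(f"{mbps:>4} Mb/s: {file_megabits / mbps:7.1f} s")
```

And for anything other than bulk downloads, latency dominates long before the link is saturated, so the difference is even harder to notice.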
Over the last 10 years, I've gone from buying the fastest connection my ISP offered to buying the slowest. The jump from 512Kb/s to 1Mb/s was really amazing (though not as good as moving to 512Kb/s from a modem that rarely managed even 33Kb/s), but each subsequent upgrade has been less exciting.
Because in 1981 or so, everybody was pretty sure that this fairly obscure educational network would *never* need more than about 4 billion addresses... and they were *obviously right*.
Well, maybe. Back then home computers were already a growth area and so it was obvious that one computer per household would eventually become the norm. If you wanted to put these all on IPv4, then it would be cramped. The growth in mobile devices and multi-computer households might have been a bit surprising to someone in 1981, but you'd have wanted to add some headroom.
When 2% of your address space is consumed, you are just under 6 doublings away from exhaustion. Even if you assume an entire decade per doubling, that's less than an average lifetime before you're doing it all over again.
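As a worked check on that figure:

```python
import math

# Starting from 2% utilisation and doubling repeatedly, how long until 100%?
doublings = math.log2(100 / 2)
print(f"{doublings:.2f} doublings")                          # ~5.64
print(f"{doublings * 10:.0f} years at a decade per doubling")  # ~56 years
```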
With IPv6, you can have 4 billion networks for every IPv4 address. Doublings are much easier to think about in base 2: one bit per doubling. We've used all of the IPv4 addresses. Many of those are for NAT'd networks, so let's assume that they all are and that we're going to want one IPv6 subnet for each IPv4 address currently assigned during the transition. That's 32 bits gone. Assuming that we're using a standard /64 for each subnet, that's another 64 bits gone, which still leaves 32 bits: the number of subnets can double 32 more times before we run out.
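The bit accounting behind that, as a tiny sketch under the same assumptions (a /64 per subnet, one subnet per currently assigned IPv4 address):

```python
# IPv6 bit budget: 128 bits total, 64 for hosts within a subnet,
# 32 reserved to map every existing IPv4 address to its own subnet.
total_bits = 128
host_bits = 64          # standard /64 subnet size
ipv4_mapped_bits = 32   # one subnet per currently assigned IPv4 address
headroom_bits = total_bits - host_bits - ipv4_mapped_bits
print(f"{headroom_bits} doublings of the subnet count left")  # 32
```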
In practice, I suspect that the growth will be a bit different. Most of the current growth is multiple devices per household, which doesn't affect the number of subnets: that growth is absorbed within each household's existing subnet, which has 2^64 addresses to play with.
IMHO: what needs to happen next is to have a 16 bit packet header to indicate the size of the address in use. This makes the address space not only dynamic, but MASSIVE without requiring all hardware on the face of the Earth to be updated any time the address space runs out.
This isn't really a workable idea. Routing tables need to be fast, which means that the hardware needs to be simple. For IPv4, you basically have a fast RAM block with 2^24 entries and switch on the first three bytes to determine where to send the packet. With IPv6, subnets are intended to be arranged hierarchically, so you end up with a simpler decision. With variable-length fields, you'd need something complex to parse them and that would send you into the software slow path. This is a problem, because you'd then have a very simple DoS attack on backbone routers (just send them packets with large length headers that chew up CPU before they're dropped). You'd also have the same deployment headaches that IPv6 has: no one would buy routers that had fast paths for very large addresses now, just because in 100 years we might need them, so no one would test that path at a large scale: you'd avoid the DoS by just dropping all packets that used an address size other than 4 or 16. In 100 years (i.e. well over 50 backbone router upgrades), people might start caring and buy routers that could handle 16 or 32 byte address fields, but that upgrade path is already possible: the field that you're looking for is called the version field in the IP header.
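For a sense of how simple the fast path needs to stay, here's a toy version of the "index a table by the first three bytes" lookup, roughly in the spirit of DIR-24-8-style tables. Names and layout are purely illustrative; real routers also need an overflow table for prefixes longer than /24.

```python
import ipaddress

# One next-hop id per /24 prefix: a single memory read per packet,
# which is exactly the kind of thing that's easy to put in hardware.
next_hop = bytearray(1 << 24)

def install_route(prefix: str, hop: int) -> None:
    net = ipaddress.ip_network(prefix)
    start = int(net.network_address) >> 8
    count = max(1, net.num_addresses >> 8)
    next_hop[start:start + count] = bytes([hop]) * count

def lookup(addr: str) -> int:
    # Switch on the first three bytes of the destination address.
    return next_hop[int(ipaddress.ip_address(addr)) >> 8]

install_route("192.168.0.0/16", hop=7)
print(lookup("192.168.42.1"))  # 7
print(lookup("10.0.0.1"))      # 0 (no route installed)
```

A variable-length address field breaks the "fixed offset, fixed-size index" property that makes this kind of lookup cheap, which is why it would push those packets onto the software slow path.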