_Generic - Useful for a few things, mostly tgmath.h (which I've rarely seen used in real code, because C's type promotion rules make it very dangerous, but it was quite embarrassing that, for 12 years, the C standard mandated a header that could not be implemented in standard C). Existing compilers had all already provided a mechanism for doing the same thing (they had to, or they couldn't implement tgmath.h), but it was rarely used in real code. Oh, and the lack of type promotion in _Generic makes it annoyingly verbose: an int won't be silently converted to const int, for example, so if you want to handle both then you need to provide both int and const int cases, even though it's always safe to treat an int argument as a const int.
_Static_assert - useful, but most people had already implemented a similar macro along the lines of:
#define ASSERT_CONCAT_(a, b) a ## b
#define ASSERT_CONCAT(a, b) ASSERT_CONCAT_(a, b)
#define _Static_assert(x) static int ASSERT_CONCAT(_assert_failed_, __COUNTER__)[(x) ? 1 : -1]
This gives a one-element or minus-one-element array, depending on whether x is true. If x is true, the array is optimised away; if x is false, you get a compile-time failure. (The extra concatenation macros are there so that __COUNTER__ is expanded to a number before it is pasted into the identifier.) _Static_assert in the compiler gives better error diagnostics, but doesn't actually increase the power of the language.
And then we get on to the big contributions: threads and atomics. The threading APIs were bogged down in politics: Microsoft wanted a thin wrapper over what Win32 provided, everyone else wanted a thin wrapper over what pthreads provided. Instead, we got an API based on the library of a small company that no one had ever heard of, and it is a clusterfuck of bad design. For example, the timeouts assume that the real-time clock is monotonic. Other threading libraries fixed this in the '90s and provide timeouts expressed relative to a monotonic clock.
The atomics were lifted from a draft version of the C++11 spec (and, amusingly, meant that C11 had to issue errata for things that were fixed in the final version of C++11). They were also not very well thought through. For example, it's completely permitted in C11 to write _Atomic(struct foo) x, for any size of struct foo, but the performance characteristics will be wildly different depending on that size. It's also possible to write _Atomic(double) x, and any operation on x must save and restore the floating point environment (something that no compiler actually does, because hardly anyone fully implements the Fortran-envy parts of even C99).
In contrast, let's look at what WG21 gave us over the same period:
Lambdas. C with the blocks extension (from Apple, supported by clang on all platforms that clang supports now) actually gives us more powerful closures, and even that part of blocks that doesn't require a runtime library (purely downward funargs) would have been a useful addition to C. Closures are really just a little bit of syntactic sugar on a struct with a function pointer as a field, if you ignore the memory management issues (which C++ did, requiring you to use smart pointers if you want them to persist longer than the function in which they're created). C++14 made them even nicer, by allowing auto as a parameter type, so you can use a generic lambda called from within the function to replace small copied and pasted fragments.
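To make the "syntactic sugar on a struct" point concrete, here is a minimal sketch (the names AddN, add_manual and print_twice are mine, and the struct is only the conceptual desugaring, not what any particular compiler emits) of a capturing lambda next to the hand-written equivalent, plus a C++14 generic lambda:

#include <cstdio>

// What a capturing lambda conceptually desugars to: a struct holding the
// captured state, with a call operator.
struct AddN
{
    int n; // captured by value
    int operator()(int x) const { return x + n; }
};

int main()
{
    int n = 5;

    AddN add_manual{n};                              // the hand-written "closure"
    auto add_lambda = [n](int x) { return x + n; };  // the C++11 sugar for it

    // C++14 generic lambda: auto parameter, handy for replacing small
    // fragments that would otherwise be copied and pasted within a function.
    auto print_twice = [](const auto &v) { std::printf("%d %d\n", v, v); };

    std::printf("%d %d\n", add_manual(1), add_lambda(1)); // 6 6
    print_twice(add_lambda(2));                           // 7 7
    return 0;
}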
Atomics, which were provided by the library and not the language in C++11. Efficient implementations use compiler builtins, but it's entirely possible to implement them with inline assembly (or out-of-line assembly) and they can be implemented entirely in terms of a one-bit lock primitive if required for microcontroller applications, all within the library. They scale down to small targets a lot better than the C versions (which require invasive changes to the compiler if you want to do anything different to conventional implementations).
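As a rough illustration of the "entirely in the library" point, here is a sketch (locked_atomic is a made-up name, not anything in the standard library) of an atomic wrapper for an arbitrary T built on a one-bit lock; real std::atomic implementations use compiler builtins for the lock-free sizes, but nothing below needs language support:

#include <atomic>

// Hypothetical library-only atomic for arbitrary T, built on a one-bit lock
// (std::atomic_flag). This is the microcontroller-style fallback described
// above; a real implementation would dispatch to lock-free builtins where
// the target supports them.
template <typename T>
class locked_atomic
{
    mutable std::atomic_flag lock_;
    T value_;

    void acquire() const { while (lock_.test_and_set(std::memory_order_acquire)) { /* spin */ } }
    void release() const { lock_.clear(std::memory_order_release); }

public:
    explicit locked_atomic(T v = T()) : value_(v) { lock_.clear(); }

    T load() const
    {
        acquire();
        T result = value_;
        release();
        return result;
    }

    void store(T v)
    {
        acquire();
        value_ = v;
        release();
    }
};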
Threads: Unlike the C11 mess, C++11 threads provide useful high-level abstractions. Threads that can be started from a closure (with the thread library being responsible for copying arguments to the heap, so you don't have the dance of passing a pointer to your own stack and then waiting for the callee to tell you that it's copied them to its stack). Futures and promises. Locks that are tied to scopes, so that you don't accidentally forget to unlock (even if you use exceptions).
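A small sketch of what that looks like in practice (log_line and the values are invented for illustration): the thread library copies the closure and its captures, the lock is released by the destructor of the guard, and the future carries the result back.

#include <future>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>

std::mutex log_mutex;

void log_line(const std::string &msg)
{
    // Scope-tied lock: released automatically when guard is destroyed,
    // even if the code below throws.
    std::lock_guard<std::mutex> guard(log_mutex);
    std::cout << msg << '\n';
}

int main()
{
    // The captured string is copied into the closure, so there is no dance
    // of handing the new thread a pointer into this function's stack.
    std::string greeting = "hello from a thread";
    std::thread t([greeting] { log_line(greeting); });

    // Futures and promises: std::async returns a future for the result.
    std::future<int> answer = std::async(std::launch::async, [] { return 6 * 7; });
    log_line("answer: " + std::to_string(answer.get()));

    t.join();
    return 0;
}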
Smart pointers. C++11 has unique_ptr and shared_ptr, for exclusive and shared ownership semantics. unique_ptr has zero run-time overhead (it compiles away entirely), but enforces unique ownership and turns a whole bunch of difficult-to-debug use-after-free bugs into simple null-pointer-dereferences. shared_ptr is thread safe (ownership in the presence of multithreading is very hard!) and also allows weak references.
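A quick sketch of the two ownership models (Widget is just a stand-in type for illustration):

#include <iostream>
#include <memory>

struct Widget
{
    explicit Widget(int i) : id(i) {}
    ~Widget() { std::cout << "destroying widget " << id << '\n'; }
    int id;
};

int main()
{
    // Exclusive ownership: cannot be copied, freed exactly once when the
    // owner goes out of scope.
    std::unique_ptr<Widget> owner(new Widget(1));
    // std::unique_ptr<Widget> copy = owner;        // compile error: ownership is unique
    std::unique_ptr<Widget> new_owner = std::move(owner); // explicit transfer
    // `owner` is now null, so a mistaken use is a simple null dereference,
    // not a use-after-free.

    // Shared ownership with thread-safe reference counting, plus weak
    // references that do not keep the object alive.
    std::shared_ptr<Widget> strong = std::make_shared<Widget>(2);
    std::weak_ptr<Widget> weak = strong;
    if (std::shared_ptr<Widget> alive = weak.lock())
        std::cout << "widget " << alive->id << " still alive\n";

    return 0;
}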
C++14 and C++17 both made things even better. I've already mentioned generic lambdas in C++14; C++17 adds structured bindings (so you can return a structure from a function and, in the caller, decompose it into multiple separate return values). It also adds optional (a maybe type), any (a generic value type) and variant (a type-safe union) to the standard library. Variant is going to make a lot of people happy.
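For example (parse_int and find_port are invented for illustration), in C++17:

#include <iostream>
#include <map>
#include <optional>
#include <string>
#include <variant>

// Multiple return values, decomposed with a structured binding at the call site.
struct ParseResult { int value; bool exact; };
ParseResult parse_int(const std::string &s) { return {std::stoi(s), true}; }

// optional: a maybe type instead of a magic sentinel value.
std::optional<int> find_port(const std::map<std::string, int> &cfg)
{
    auto it = cfg.find("port");
    if (it == cfg.end())
        return std::nullopt;
    return it->second;
}

int main()
{
    auto [value, exact] = parse_int("42");          // structured binding
    std::cout << value << ' ' << exact << '\n';

    std::map<std::string, int> cfg{{"port", 8080}};
    if (auto port = find_port(cfg))
        std::cout << "port " << *port << '\n';

    std::variant<int, std::string> v = std::string("hello"); // type-safe union
    std::visit([](const auto &x) { std::cout << x << '\n'; }, v);
    return 0;
}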
With C++11, the language moved from being one I hated and avoided where possible to my go-to language for new projects. With a rich regular expression library, threads, smart pointers, and lambdas, it's now usable for things that I'd traditionally have used a scripting language for as well (and an order of magnitude faster when crunching a load of data). In contrast, C has barely changed since the '80s. It still has no way of doing safe and efficient generic data structures (you either use macros and lose type safety, or you use void* and lose both type safety and performance). It still has no way of expressing ownership semantics or of reasoning about memory management in multithreaded programs. The standard library still doesn't provide any useful data structures more complex than an array (not even a linked list), whereas C++ provides maps and sets (ordered and unordered), resizable and fixed-size arrays, lists, stacks, queues, and so on.
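As a small illustration (the word-counting example is made up, not from any real project), the standard containers and regex library cover this kind of text crunching directly, with the element types known to the compiler throughout:

#include <iostream>
#include <map>
#include <regex>
#include <string>

int main()
{
    // Type-safe generic container from the standard library: no macros, no void*.
    std::map<std::string, int> counts;

    // The sort of job traditionally handed to a scripting language: pull
    // words out of a line with a regex and tally them.
    std::string line = "the quick brown fox jumps over the lazy dog the end";
    std::regex word("[a-z]+");
    for (std::sregex_iterator it(line.begin(), line.end(), word), end; it != end; ++it)
        ++counts[it->str()];

    for (const auto &entry : counts)
        std::cout << entry.first << ": " << entry.second << '\n';
    return 0;
}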
C11 didn't really address parallelism and definitely didn't address reliability or security. Microsoft Research's Checked-C provides some very nice features, but they initially prototyped them in C++ where they could implement them all purely as library features.
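For a flavour of what "purely as library features" can mean (this is a made-up sketch, not Checked-C's actual API or design), a pointer that carries its bounds can be written as an ordinary C++ class template, with no compiler changes at all:

#include <cstddef>
#include <stdexcept>

// Hypothetical bounds-carrying pointer, expressed entirely as a library type.
template <typename T>
class checked_ptr
{
    T *base_;
    std::size_t count_;

public:
    checked_ptr(T *base, std::size_t count) : base_(base), count_(count) {}

    T &operator[](std::size_t i) const
    {
        if (i >= count_)
            throw std::out_of_range("checked_ptr: index out of bounds");
        return base_[i];
    }
};

int main()
{
    int buffer[4] = {1, 2, 3, 4};
    checked_ptr<int> p(buffer, 4);
    p[3] = 42;   // fine
    // p[4] = 0; // would throw instead of silently corrupting memory
    return 0;
}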
In terms of Linux, it's not classical security through obscurity; it's security through diversity. One of the reasons Slammer was so painful a decade ago was that most institutions had a Windows monoculture. The time between one machine being infected on your network and every machine on your network being infected was about 10 minutes (a fresh Windows install on the network was compromised before it finished running Windows Update for the first time). If you'd had a network that was 50% Windows and 50% something else, then it would only have infected half of your infrastructure and you'd have been able to pull the plug on the Windows machines and start recovery. It's possible to write cross-platform malware, but it's a lot harder (though there's some fun stuff out of one of the recent DARPA programs: exploit code that is valid on both x86 and ARM, relying on encodings that are nops on one architecture and valid instructions on the other, interspersed with the converse). Writing malware that can attack half a dozen combinations of OS and application software is difficult.
This is why Verisign's root DNS runs 50% Linux and 50% FreeBSD, and across those they run two or three different userland DNS servers, so an attack on a particular OS or a particular DNS server will only take out (at most) half of the machines. Even an attack on an OS combined with an independent attack on a DNS server will still leave them with about a quarter of the machines functional, which will result in a bit more latency for Internet users, but leave them functioning.
Most of them still use system call interposition. That leaves them vulnerable to a whole raft of time-of-check-to-time-of-use errors, so the only part that actually catches things is the binary signature checking, and that requires you to install updates more frequently than malware authors release new versions - it's a losing battle.
They run some quite buggy code in high privilege. In the last year, all of the major AV vendors have had security vulnerabilities. My favourite one was Norton, which had a buffer overflow in their kernel-mode scanner. Providing crafted data to it allowed an attacker to get kernel privilege (higher than administrator privilege on Windows). You could send someone an email containing an image attachment and compromise their system as long as their mail client downloaded the image, even if they didn't open it. It's hard to argue that software that allows that makes your computer more secure.
The situation in most of the USA is that it's been done using the worst possible mixture of laissez-faire capitalism and central planning. Vast amounts of taxpayer money have been poured into the infrastructure, yet that infrastructure is owned by a few companies that have geographical monopolies and are not owned by their customers, so they have no incentive to improve it. Oh, and regulatory capture means that it's actually illegal to fix the problem in a lot of places. You can provide an incentive in several ways:
Alas, it's a shame that it doesn't mean anything. The point here is that the Earth has undergone many shifts in its climate, sometimes in a startlingly short period of time
Except that the difference in temperature between the peak of the Medieval Warm Period and the bottom of the Little Ice Age was significantly smaller than the difference between the current temperature and the bottom of the Little Ice Age. The last time we saw an increase in temperature equivalent to that of the last 200 years, it happened over a period of tens of thousands of years.
Go and read a news story about an area of science that you know about and compare it to what the original research actually claimed. Now realise that press reports about climate change are no more accurate than that, and go and read some of the papers. The models have been consistently refined for the last century, but those changes have been refinements (typically to predictions about specific local conditions and timescales), not complete reversals. Each year, there are more measurements providing more evidence to support the core parts of the models.
Oh, and I don't think the words objectivist or dualistic mean what you think they mean. You can't discard evidence simply by throwing random words into a discussion.
Considering that the entire selling point behind Signal is that it's supposed to be resistant to "an adversary like the NSA," I would think their ability to trivially associate a key with a real person would kind of turn that on its head.
Any global passive adversary can do traffic analysis on any communication network. Signal's message encryption should stand up against the NSA unless there are any vulnerabilities in the implementation that the NSA has found and not told anyone about or unless they have some magical decryption power that we don't know about (unlikely). Protection of metadata is much harder. If you connect to the Signal server and they can watch your network traffic and that of other Signal users, then they can infer who you are talking to. If they can send men with lawyers, guns, or money around to OWS then they can coerce them into recording when your client connects and from what IP, even without this.
In contrast, Tox uses a DHT, which makes some kinds of interception easier and others harder. There's no central repository mapping Tox IDs to other identifiable information, but anything you push to the DHT is signed with your key, which ties it to your endpoint, so a global passive adversary can use this to track you (Tox over Tor, in theory, protects you against this, but in practice so few people do it that it's probably trivial to track them).
No system is completely secure, but my personal threat model doesn't include the NSA taking an active interest in me - if they did that, then there are probably a few hundred bugs in the operating systems and other programs that I use that they could exploit to compromise the endpoint, without bothering to attack the protocol. I'd like to be relatively secure against bulk data collection though - I don't want any intelligence or law enforcement agency to be able to intercept communications unless at least one participant is actively under suspicion, because if you allow that you end up with something like Hoover's FBI or the Stasi.
It's why we had a change in language from global warming to climate change
We had the change from global warming to climate change because idiots kept ignoring the 'global' part and saying things like 'this summer is rubbish, so much for global warming!'. The weather is a complex chaotic system. Global warming means that the total amount of energy in this system is increasing. This is very simple to understand - more energy is arriving from the Sun than is being radiated into space, by quite a large amount. This is trivially measurable by pointing an infrared camera at the night side of the Earth from space (which NASA does).
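To put the energy-balance point in standard textbook form (generic symbols, not tied to any particular measurement campaign), the claim is simply that this difference is currently positive:

\frac{dE}{dt} = P_{\text{in}} - P_{\text{out}} = \pi R^{2}(1-\alpha)S - 4\pi R^{2}\varepsilon\sigma T^{4} > 0

where S is the incoming solar flux, \alpha the Earth's albedo, R its radius, \varepsilon an effective emissivity, \sigma the Stefan-Boltzmann constant and T the effective radiating temperature. "Global warming" is the observation that the stored energy E (mostly in the oceans) is therefore rising.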
The effects of this are more difficult to communicate, because they're not the same everywhere. Adding more energy to the air and water in the middle of the Atlantic, for example, is likely to cause more hurricanes to form, but it may also disrupt the Gulf Stream and lead to significantly colder weather for a lot of places.
In the 1600s the Thames used to freeze over so that you could safely walk from one side to the other
You mean right at the height of the Little Ice Age?
If that were to happen now climate 'scientists' would be up in arms.
If it were to happen now, then it would not be part of a prolonged cooling trend that had been going on for around 200 years at that point and was just reaching its coldest, before starting to warm again. The global temperature then passed the peak of the previous warm period (the Medieval Warm Period) in the last century and kept climbing. But you knew all of that, right?
"Pull the wool over your own eyes!" -- J.R. "Bob" Dobbs