_Generic - Useful for a few things (mostly tgmath.h, which I've rarely seen used in real code because C's type promotion rules make it very dangerous, but it was quite embarrassing that, for 12 years, the C standard mandated a header that could not be implemented in standard C). Existing compilers had all provided a mechanism for doing the same thing (they had to, or they couldn't have implemented tgmath.h), but it was rarely used in real code. Oh, and the lack of type promotion in _Generic makes it annoyingly verbose: an int won't be silently converted to a const int, for example, so if you want to handle both you need to provide separate int and const int cases, even though it's always safe to use a const int wherever an int is given as the argument.
_Static_assert - useful, but most people had already implemented a similar macro along the lines of:
#define _SA_CAT2(a, b) a##b
#define _SA_CAT(a, b) _SA_CAT2(a, b)
#define _Static_assert(x) static int _SA_CAT(_assert_failed_, __COUNTER__)[(x) ? 1 : -1];
This gives a 1 or -1 element array, depending on whether x is true. If x is true, the array is optimised away; if x is false, you get a compile-time failure. (The two helper macros are needed because pasting with ## directly would prevent __COUNTER__ from being expanded.) _Static_assert in the compiler gives better error diagnostics, but doesn't actually increase the power of the language.
And then we get on to the big contributions: threads and atomics. The threading APIs were bogged down in politics. Microsoft wanted a thin wrapper over what Win32 provided; everyone else wanted a thin wrapper over what pthreads provided. Instead, we got an API based on a library from a small company that no one had ever heard of, and it's a clusterfuck of bad design. For example, the timeouts assume that the real-time clock is monotonic. Other threading libraries fixed this in the '90s and provide timeouts expressed relative to a monotonic clock.
The atomics were lifted from a draft version of the C++11 spec (and, amusingly, meant that C11 had to issue errata for things that were fixed in the final version of C++11). They were also not very well thought through. For example, it's completely permitted in C11 to write _Atomic(struct foo) x, for any size of struct foo, but the performance characteristics will be wildly different depending on that size. It's also possible to write _Atomic(double) x, and any operation on x must save and restore the floating point environment (something that no compiler actually does, because hardly anyone fully implements the Fortran-envy parts of even C99).
In contrast, let's look at what WG21 gave us in the same time:
Lambdas. C with the blocks extension (from Apple, now supported by clang on every platform clang targets) actually gives us more powerful closures, and even the part of blocks that doesn't require a runtime library (purely downward funargs) would have been a useful addition to C. Closures are really just a little bit of syntactic sugar on a struct with a function pointer as a field, if you ignore the memory management issues (which C++ did, requiring you to use smart pointers if you want a closure to persist longer than the function in which it was created). C++14 made them even nicer by allowing auto as a parameter type, so you can use a generic lambda called from within the function to replace small copied-and-pasted fragments.
Atomics, which were provided by the library and not the language in C++11. Efficient implementations use compiler builtins, but it's entirely possible to implement them with inline assembly (or out-of-line assembly) and they can be implemented entirely in terms of a one-bit lock primitive if required for microcontroller applications, all within the library. They scale down to small targets a lot better than the C versions (which require invasive changes to the compiler if you want to do anything different to conventional implementations).
Threads: Unlike the C11 mess, C++11 threads provide useful high-level abstractions. Threads that can be started from a closure (with the thread library being responsible for copying arguments to the heap, so you don't have the dance of passing a pointer to your own stack and then waiting for the callee to tell you that it's copied them to its stack). Futures and promises. Locks that are tied to scopes, so that you don't accidentally forget to unlock (even if you use exceptions).
Smart pointers. C++11 has unique_ptr and shared_ptr, for exclusive and shared ownership semantics. unique_ptr has zero run-time overhead (it compiles away entirely), but enforces unique ownership and turns a whole bunch of difficult-to-debug use-after-free bugs into simple null-pointer-dereferences. shared_ptr is thread safe (ownership in the presence of multithreading is very hard!) and also allows weak references.
C++14 and C++17 both made things even better. I've already mentioned generic lambdas in C++14, C++17 adds structured binding (so you can return a structure from a function and in the caller decompose it into multiple separate return values). It also adds optional (maybe types), any (generic value type) and variant (type-safe union) to the standard library. Variant is going to make a lot of people happy.
With C++11, the language moved from being one I hated and avoided where possible, to my go-to language for new projects. With a rich regular expression library, threads, smart pointers, and lambdas, it's now usable for things that I'd traditionally use a scripting language for as well (and an order of magnitude faster when crunching a load of data). In contrast, C has barely changed since the '80s. It still has no way of doing safe and efficient generic data structures (you either use macros and lose type safety, or you use void* and lose type safety and performance). It still has no way of expressing ownership semantics and reasoning about memory management in multithreaded programs. The standard library still doesn't provide any useful data structures more complex than an array (not even a linked list), whereas C++ provides maps and sets (ordered and unordered), resizable and fixed-size arrays, lists, stacks, queues, and so on.
C11 didn't really address parallelism and definitely didn't address reliability or security. Microsoft Research's Checked C provides some very nice features, but they initially prototyped them in C++, where they could implement them all purely as library features.
Flawed logic. C++ doesn't have a corporate sponsor either, and yet it has a native compiler on Windows.
Multiple vendors pay good money to develop compliant C++ compilers for many of the platforms that we use.
The two main challenges I see for C are the competition with C++ and faster hardware.
Most "C compilers" are actually C++ compilers running in a "C mode". The trouble is that most of the corporate sponsors care more about being compliant with the latest C++ standard than being compliant with the latest C standard.
And because C++ is slightly more type-safe (strict aliasing), those optimizers can do more for C++ code than for C. So despite the more complex language, C++ can be marginally faster (~1%).
Probably not even that. In the tiny number of cases where strict aliasing buys you anything at all (which on modern out-of-order hardware it almost never does), it's around the same order of magnitude as the performance that you lose in C++ due to maintaining exception handling information. There's really nothing between C and C++ given the same code these days. In practice, most of the performance gains in C++ come from metaprogramming.
Which gets us to the faster hardware: how often is that performance even needed?
For the vast majority of "embedded" devices that I own, the main performance impact isn't CPU speed, it's battery life. A program which gets its job done as quickly as possible and then puts the CPU into an idle state is far more desirable than one which is perceptually just as responsive but wears down the battery faster.
Why is that a bad thing?
You may have missed this, but I gave one example in the very next sentence.
c doesn't have "problems"
Sure it does. As TFS notes, C doesn't have a corporate sponsor. That's why (to pick one example) there is no native compliant C compiler for Windows.
(Yes, you can build Clang if you want to. You know what I meant.)
So you think the 1.7 billion that Bernie Madoff defrauded people out of, and then had seized by the FBI [...]
That is a fair point, and it does go to the fact that we don't measure these things well. That $1.7 billion was technically a "seizure", but it was actually a settlement with JPMorgan Chase. A fairer comparison would only count assets seized without an accompanying criminal conviction.
Still, the mere fact that they're in the same ballpark is a problem. Where I live, non-contraband assets may be frozen prior to a conviction, but they may not be seized until after.
I'm pretty sure the headline left out a slash, and it should read "scramjet/rocket engine". Scramjets (as I'm sure you're aware) need a way to get to supersonic speeds before they start working.
There is never a reason to burgle someone.
You be sure to tell the judge that when it's a cop who stole your money.
(Yes, that's the more likely scenario.)
"They that can give up essential liberty to obtain a little temporary saftey deserve neither liberty not saftey." -- Benjamin Franklin, 1759