For fuck's sake, *nix kernels have been implementing complex process and cycle allocation algorithms for four decades now, almost all of it written in C.
LOL. Thanks. As a systems developer specializing in Linux, how could I have missed that!? /s
Seriously though, you might also note that it also took kernels *decades* to get where they are.
Most algorithms are very, very primitive, because you shouldn't put complex or unpredictable logic into the kernel.
The lion's share of memory allocation is static; there are very few truly dynamic structures, because the kernel must not run out of memory (and kernel address space is often very limited).
Data structures are primitive too: lists and hashes are the pillars, because everything else either has lower memory efficiency or performance quirks.
It literally takes years to get it right.
Otherwise, if you are such a huge fan of C, please show me an implementation of a binary tree in C that can be reused to store `int`, `double`, or `void *` data. And no, a crapload of preprocessor macros or type casts on every source line does not cut it.
And that's not even touching the various userland tools that involve fairly complex logic.
You seem to be either inexperienced or undereducated, because you have missed the elephant in the room:
Complex logic != complex implementation.
And it's not like "complexity" has any formal definition.
Having seen and written a plethora of C code in my life, I know well what C is capable of. But still, for any new development it is literally impossible to recommend C over C++.