The Linux kernel is, sadly, an outstanding example of this problem. Syzkaller has found so many thousands of memory vulnerabilities that not only is no one fixing them, hardly anyone is even bothering to look any more. Oh, there are plenty of researchers making a good living by finding and reporting vulnerabilities, because there are plenty of companies -- white hat and black hat -- who will pay for them. But it's a meaningless treadmill, because there are far more than can ever be fixed. The only thing sort of saving us at the moment is that attackers generally find it easier to mine CVEs than to find their own bugs. Deploying kernel updates is slow enough that they can just rely on finding unpatched kernels with published exploits. This dynamic is also what has led to the demise of LTS, but that's a topic for another post. As we get better at patching, we'll eventually force attackers to do their own research, and then we'll have to hope that we can reverse-engineer what they're doing to figure out which of the myriad bugs to fix.
I find it hard to believe the situation is really as dismal as you're implying. The reason is that I simply do not see Linux systems being exploited en masse in the wild, which I'd expect them to be if it really were this bad. Yes, theoretical exploits make the news every few months, but most of them require very specific conditions to trigger, and they are patched quickly. Instead, most actual attacks I read about are "supply chain vulnerabilities", which is fancy speak for "we downloaded and ran some random code from somewhere", and Rust with its "modern package management practices" is far more vulnerable to that than C.
Besides, if there are "more vulnerabilities than can ever be fixed", just who is going to rewrite all this code in Rust? Rewriting a major codebase from scratch in a different language is a far larger project than fixing memory issues in it.
This situation is not going to improve as long as we continue building complex, critical infrastructure with memory-unsafe languages.
My experience says otherwise. I am not a kernel developer, but I do contribute to a large and widely used C codebase, which 15+ years ago would segfault if you looked at it funny. Things have improved dramatically since then, and the main cause was a change in developer mindset. It became culturally unacceptable to crash on invalid input, leak memory on errors, etc. People got sponsored to fix the existing crashes one by one, and though their number seemed endless at first, gradually things became better. Not perfect of course, but no software ever is. I do not believe that a hypothetical Rust rewrite would improve things all that much at this point, and the effort required would be at least an order of magnitude larger, probably more. Not to mention the potential community-destroying drama and myriad other issues.
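To make the kind of bug in question concrete, here is a minimal C sketch (the struct and function names are invented for illustration, not taken from any real project) of the classic error-path leak: an early return skips a free(), and the usual fix is to funnel all error exits through one cleanup path.

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative only: these types and functions are hypothetical. */
struct record {
    char *name;
    char *value;
};

/* Leaky version: the second error return forgets to free `name`. */
int parse_record_leaky(struct record *out, const char *name_src, const char *value_src)
{
    char *name = strdup(name_src);
    if (!name)
        return -1;
    char *value = strdup(value_src);
    if (!value)
        return -1;   /* BUG: `name` leaks on this path */
    out->name = name;
    out->value = value;
    return 0;
}

/* Fixed version: a single cleanup path frees whatever was allocated. */
int parse_record(struct record *out, const char *name_src, const char *value_src)
{
    char *name = strdup(name_src);
    if (!name)
        return -1;
    char *value = strdup(value_src);
    if (!value)
        goto err_free_name;
    out->name = name;
    out->value = value;
    return 0;

err_free_name:
    free(name);
    return -1;
}
```

This goto-based cleanup pattern is the conventional error-handling idiom in the kernel and many other large C codebases precisely because it keeps every exit path honest.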
Another thing that bugs me is the implied equivalence between "memory-safe" and "Rust". The implicit argument runs along the lines of
1. we want memory safety
2. Rust is memory-safe
3. therefore, all C code should be rewritten in Rust
but the conclusion simply does not follow. Rust is far from being "memory-safe C" (whatever such a thing may be); it is a far more complex and ambitious language. It is also neither the first nor the only memory-safe language. So why is "rewrite everything in Rust" being presented as THE solution to all our ills? In fact, given the sheer size of the Linux kernel, and the amount of other critical C code out there, it seems far more viable to me to create this "memory-safe C": a less ambitious language that would prevent memory errors and to which existing codebases could be easily ported. Of course that is nowhere near as glamorous, and so it will never be done.
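For a sense of the class of bug such a less ambitious language would need to rule out, here is an illustrative C fragment (invented for this comment, not from any real codebase): a plain off-by-one that C happily compiles and runs, while any memory-safe language, Rust or otherwise, would reject it or trap.

```c
#include <stdio.h>

int main(void)
{
    int totals[4] = {0};

    /* Off-by-one: i == 4 writes one element past the end of `totals`.
     * C compiles and runs this, silently stomping on adjacent memory
     * (some compilers can warn with extra flags); a memory-safe
     * language rejects it at compile time or aborts at runtime. */
    for (int i = 0; i <= 4; i++)
        totals[i] = i;

    printf("%d\n", totals[0]);
    return 0;
}
```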
I think you're mischaracterizing what happens here. In my experience, it's less that rewrites inject new design or logic bugs and more that they identify unknown requirements which were accidentally, or at least quietly, met by the old code. Often this is because of features that were added to the old code without anyone documenting them. Sometimes it's because the world around the old code has actually adapted to rely on unintentional features (which would perhaps have been called bugs if they'd been noticed).
My experience is similar, but I would characterize it differently. When you're a young, smart, ambitious rewriter of an old and hairy codebase, there is a strong temptation to see things in black and white. The old code sucks, and its authors were obviously stupid and clueless. You are smart and have better tools, and can therefore just discard the old cruft and Do Things Correctly. The typical result is that you miss the reasons why the old code is designed the way it is, and either reintroduce problems that were solved 25 years ago, or write something that fails to account for some less obvious, but important, use cases. Then you hack in fixes for those issues, and suddenly your nice clean new codebase looks just as hairy as the old one, except without the decades of testing and bugfixes. I've seen this scenario happen many times, some of them to myself. The correct way to do it is to understand the old code in detail before rewriting it, but that requires a lot of effort that few people are willing to spend.