From the start, the design of C emphasized speed and efficiency over all else. "Trust the programmer" was one of the mottoes. If programmers are doing something weird, assume they know what they're doing; maybe print a warning, but allow it. C was, by design, weakly typed and minimalist, especially when it came to error checking, because such checks take time.
Often, we've seen efforts to improve C's safety that were eventually sidelined because they were a performance hit. The C++ iostream library is safer, but much slower than stdio. Which one do people prefer? stdio! C libraries are full of routines that do no bounds checking, for the sake of performance and simplicity. gets() is an infamous one. The language itself is easy to use insecurely. Pointers can be set to point absolutely anywhere, and those locations read from and written to at will. If the OS, with help from modern CPU memory management facilities, didn't set boundaries and kill programs whenever they stepped over them, there'd be nothing to stop them.
Another idea was adding memory wipes to dynamic memory allocation: before freeing a block, the allocator zeroed it out. This caused as much as a 10% performance hit and was quickly abandoned. Wiping memory has been proposed at the OS level as well. But there are always apps that don't need it because they aren't handling anything sensitive.
That brings up a big problem with the article. Where should responsibility for security lie? With the OS? I think trying to improve a language's security is the wrong approach. That's roughly what they tried to do with Java. It's like trying to prevent bank robberies by securing the steering wheels of all potential getaway vehicles. Yes, make languages easier to use and less prone to bugs, but don't specifically target security.