Regexes are a PITA to work with, and editing them is even more so when you're having a hard time seeing properly - all those (*$^!.*/ characters look more like comic-book swearing than ever.
So I figured it would just be quicker to write a program in C.
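Something along these lines - a minimal sketch of what I mean by hand-rolled (the file and token arguments are made up for illustration), plain fgets() and strstr() instead of a regex:

/* Print every line of a file that contains a literal token.
 * A stand-in for the sort of throwaway parser I mean,
 * not the actual program. */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s file token\n", argv[0]);
        return 1;
    }
    FILE *fp = fopen(argv[1], "r");
    if (!fp) {
        perror(argv[1]);
        return 1;
    }
    char line[4096];
    while (fgets(line, sizeof line, fp)) {
        if (strstr(line, argv[2]))   /* plain substring match, no regex */
            fputs(line, stdout);
    }
    fclose(fp);
    return 0;
}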
But realloc() kept throwing errors on the 3rd or 4th call, though only for one variable. Did I make a mistake? It happens.
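(For what it's worth, the classic way to get realloc() to die only a few calls in is an earlier overrun corrupting the allocator's bookkeeping - glibc doesn't notice until it walks the heap on a later call. A deliberately buggy sketch of that pattern, not my actual code, with the save-to-a-temp idiom thrown in:

/* Deliberately buggy: the memset writes past the end of the block,
 * stomping the next chunk's header. On glibc this typically dies a
 * pass or two later with "realloc(): invalid next size" or a long
 * assertion failure - not on the call that did the damage. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t cap = 8;
    char *buf = malloc(cap);
    if (!buf)
        return 1;

    for (int i = 0; i < 6; i++) {
        memset(buf, 'x', cap + 16);        /* BUG: 16 bytes past the end */

        char *tmp = realloc(buf, cap * 2); /* aborts here on a later pass */
        if (!tmp) {                        /* keep the old pointer on failure */
            free(buf);
            return 1;
        }
        buf = tmp;
        cap *= 2;
    }
    free(buf);
    return 0;
}

Corruption like this can also make the failure look tied to one particular variable, even when the overrun happened somewhere else entirely.)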
So I went and upgraded the distro once again, and sure enough, it was the compiler. I have to admit I was taken by surprise when I didn't get that long "assertion failed" message instead (and who writes assertions so long and complicated that they'd need an audit just to verify they actually do what they claim to do???).
All this got me thinking - how come the code I wrote today, in C, to parse some files doesn't run faster than the C code I wrote a couple of decades ago on a machine that was 100x slower?
The answer is simple, and disappointing. Past a certain level of complexity of the software stack (OS, libs, compiler), you don't get improved performance. It all gets sucked up by the stack.
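If you want to put a crude number on that, time the inner loop itself - a hypothetical micro-benchmark using clock() (the file name and workload are placeholders):

/* Count newlines in a file and report CPU time for the loop.
 * Whatever the stack underneath (libc buffering, kernel, etc.)
 * adds shows up in the number too. "data.txt" is a placeholder. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    FILE *fp = fopen("data.txt", "r");
    if (!fp) {
        perror("data.txt");
        return 1;
    }
    clock_t t0 = clock();
    long lines = 0;
    int c;
    while ((c = fgetc(fp)) != EOF)
        if (c == '\n')
            lines++;
    clock_t t1 = clock();
    fclose(fp);
    printf("%ld lines in %.3f s of CPU time\n",
           lines, (double)(t1 - t0) / CLOCKS_PER_SEC);
    return 0;
}

Run the same sort of thing on old and new hardware and, if the complaint above holds, the gap will be a lot smaller than the spec sheets suggest.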
20 years from now, are we going to have machines with a terabyte of RAM and 256 cores that, on average, only run as fast as an old 386, because by then we'll have passed the peak into negative-returns territory and can't go back because everything would break even worse? For example, code with so many security checks that it's in an "infinite bug" state, where fixing one exploit opens up another (personally, I think we're there already, but that's another story).