I won't argue he's wrong, but as fast as CPUs change, you'd have to show across-the-board reductions in workload capacity by a significant margin (i.e., the 30% touted initially) to be able to claim harm and justify a recall.
Man, have you ever drunk the software caveat-emptor Kool-Aid whole cloth.
The specification for hardware and software is not that it will work as specified.
The specification is that, with sufficient user cleverness (and sweat streaming off a bulging forehead, and unbroken vigilance), it is possible to almost get the hardware or software to attain its putative performance ratings without catching one or more incurable black-hat diseases along the way.
This ethos dates from the 1980s and 1990s when the upgrade cycle (both hardware and software) was three years at most, and the whole drama could barely play out in that time frame.
I bought my Sandy Bridge Xeon in January 2013 and I'm not sure I'd take a free swap to Intel's newest equivalent, because the hassle of swapping over probably exceeds the expected gain (my uptime, when the power company isn't replacing faulty-from-birth concrete power poles, has averaged well over a year, and there hasn't been a single hardware or OS glitch that I know of since I eliminated one suspect hardware card back in the first few months).
I make fairly heavy use of BSD jails, and I've always regarded process isolation as more important than raw performance.
Punting isolation from hardware to software is a Sisyphean burden. One important binary on your system that wasn't compiled with the right compiler (on the right day) and the game is lost. Worse, if there's a 'retguard' gadget variant of the attack, you also have to use the right linker (on the right day), because security is (potentially) no longer link-order independent. Lovely, lovely, isn't that lovely.
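To make the "right compiler on the right day" point concrete: GCC's and Clang's retpoline mitigation for Spectre v2 (the `-mindirect-branch=thunk` family of flags in GCC, `-mretpoline` in Clang) rewrites indirect branches to go through named thunks (`__x86_indirect_thunk_*`). So one crude, hedged way to spot a binary that may have been built by an unmitigated toolchain is to look for those thunk symbols in its disassembly. A rough sketch, assuming GNU binutils' objdump is on the PATH and the binaries are x86-64; absence of thunks is only a hint, not proof, since hand-written assembly or other mitigations can also be in play:

```shell
#!/bin/sh
# Heuristic scan: report whether each given binary contains
# retpoline thunks (__x86_indirect_thunk_*). Binaries without
# them were possibly built before the compiler mitigation existed,
# or without the relevant flags.
for bin in "$@"; do
    if objdump -d "$bin" 2>/dev/null | grep -q '__x86_indirect_thunk'; then
        echo "$bin: retpoline thunks present"
    else
        echo "$bin: no retpoline thunks found (possibly unmitigated)"
    fi
done
```

This only covers one mitigation for one variant, which is rather the point: the end user is left auditing every binary on the system for artifacts of a particular compiler configuration.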
This giant, amorphous, hard-to-discern attack surface is now my problem because "durf durf, complicated, too many transistors" and "we can't read widely circulated papers pointing out that one of our optimizations is full of shit" and "caveat-emptor grandfather clause—so suck it!".
If there is a paper from long ago pointing this out (as stated elsewhere in this thread, though I haven't checked it myself), then there's going to be an MFT of legal discovery pointed at Intel to find out what they knew and when they knew it.
The GHz wars were a great thing, because it rapidly erased your worst corporate blunders.
The beleaguered end user responded to this constant threat by refusing to buy computer systems with CPUs and memory soldered to the mainboard (back then it was a bug in an ASUS northbridge, newly introduced when they bumped the much-loved board from an A to a B revision, that caused the disk controller to scribble randomly over my hard drive, taking out three different OS boot partitions in a single bound).
Once vendors start soldering CPUs and memory to mainboards, it's like, so long puberty, you're apparently an adult now.
Adults in every other industry recall or replace products that don't work at least marginally to original specifications (without forcing the end user to jump through a thousand flaming hoops).
It's high time to set aside childish ideas that this technology (alone) can demand unlimited user backflips before any restitution is warranted or justified.