I think you can take off the tinfoil hat on the sabotage thoughts. For one, LSE's own quote-providing solution is experiencing the same issue. So unless they have some diabolical plan to sabotage themselves, that fact should rule it out. Also, this is affecting large players. These are billion-dollar companies whose sole purpose is to provide accurate data and provide it fast. Without that they simply don't exist. You'd have to live in a very conspiracy-driven existence to think they'd really all throw their own companies and livelihoods away in some dark plan to get the LSE or Linux.
Now, could all these large players have made the exact same errors in their interfaces, so that they're seeing the same issues while the smaller players seem to have got it right? Sure, it's possible. But the simplest explanation, of course, is that they're all experiencing the same issue because of some upstream problem with scaling to their large volumes. Those actually involved seem to point to some caching issue. Is that true? No clue, but they're certainly in a better position to offer analysis of the issue than you or I. And that doesn't mean Linux is bad, for Christ's sake (so you don't have to go on some crazy conspiracy hunt to explain it away). If it is true (and we certainly don't know that it is, though the information currently available may point that way), you know what? There is software, which just happens to run on Linux, that may have an issue. No big deal. Happens every day on every platform. It's not some big black eye on Linux if software that happens to run on Linux has issues, and there's no need to start jumping to conspiracies to explain it away.