There is not enough public information to make specific conclusions about the contributing factors for this outage.
We can, however, make some broad comments about systems that have these types of requirements, performance and otherwise.
Just as there are platforms whose security model makes them more (or less) secure than other platforms, there are platforms that are inherently better (or worse) at performance.
There are message passing schemes that are well suited to this type of system.
There are programming languages that make it easier to develop robust bug-free applications.
There are systems with built-in high-availability fail-over capabilities (as opposed to a typical multiple-vendor, multi-tiered "solution").
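To make the message-passing and fail-over points concrete, here is a minimal sketch (not from any real platform; all names are hypothetical) of the supervisor pattern those systems use: workers receive messages over a queue, and a supervisor restarts a worker that dies mid-stream instead of letting the whole service fall over.

```python
import queue
import threading

# Silence the default crashed-thread traceback; the supervisor handles it.
threading.excepthook = lambda args: None

def worker(inbox, results):
    # Consume messages until a None sentinel; a bad message raises,
    # killing this worker thread.
    while True:
        msg = inbox.get()
        if msg is None:
            return
        if msg == "crash":  # simulated poison message
            raise ValueError("simulated fault")
        results.append(msg.upper())

def supervise(inbox, results, max_restarts=3):
    # Restart the worker when it dies, up to max_restarts times --
    # a crude stand-in for built-in fail-over. Returns restart count.
    restarts = 0
    while restarts <= max_restarts:
        t = threading.Thread(target=worker, args=(inbox, results))
        t.start()
        t.join()
        if inbox.empty():  # worker drained the queue and exited cleanly
            return restarts
        restarts += 1
    return restarts

inbox = queue.Queue()
for m in ["alpha", "crash", "beta", None]:
    inbox.put(m)
results = []
restarts = supervise(inbox, results)
# results -> ["ALPHA", "BETA"]; restarts -> 1: one message was lost,
# but the service kept running -- degraded, not down.
```

The point of the pattern is that a fault is contained to one worker and one message; the real platforms alluded to above do this at the process or node level rather than the thread level.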
I'm sorry, but if they were still patching the system after three months of running in parallel, they probably have much more fundamental problems than the application not yet being production-ready.
I think the biggest problem of all is the extreme hubris of vendors and consulting firms who sell the idea that they can apply their products, methodologies, and "industry best practices" (what a load of excrement!) to ANY project, even though they have never attacked a problem in the same class before. We'll have our Super Certified Windbags meet with the other vendor's Account Superheroes and your Subject Matter Expuds, and we'll have a full project plan and budget on your desk by Tuesday.
The best case is that they simply fail miserably. The worst case is that they get it almost right and go through the outage/patch cycle for the next decade.
Oh, and for any system that must have near-perfect availability, you want to avoid patching as much as possible. Annually is a nice goal. Every Tuesday, not nice at all. That's begging, pleading, screaming for trouble.