.NET is a general-purpose platform capable of delivering good, or at least acceptable, performance in most cases.
Indeed. And this has been my point about the "new technologies" for a few years (specifically in the Stock Exchange space, but also more generally). These new languages and platforms have lowered the barriers to creating average, adequate systems. If you use excellent people to do good design, you can really lower the implementation and operation costs of these systems and produce excellent applications. By excellent I mean cheap and fast and good, or perhaps rather a triangle of cheap, fast and good with much shorter sides than the traditional model, which makes the overall compromise easier to bear.
BUT, and it is a huge BUT: these systems cannot survive at the edges of performance. You give up too much control to the "framework", regardless of which one it is: .NET, Java, Smalltalk, Rails, whatever. As a result you cannot approach the limits of performance of the hardware at your disposal. Stock exchanges are, by their very nature, a shared-resource, minimum-latency problem, which means that every CPU cycle you spend not doing _real_ work is someone not getting serviced as well as the next guy. The hope is that when you do the design and implementation right, you can take this tradeoff below the threshold of caring for the participants of your market.
At the top end of these systems you have to be using haute design and implementation to cut it.
A comparison that might shed light is cars: family sedan, Ferrari, F1. The family sedan of today is a highly efficient, cheap and really very good quality implementation of a car, but it just can't compete with a Ferrari on performance. And whilst a Ferrari will get close to an F1 car in many aspects of performance, all the extra engineering in the F1 car is designed to wring out the last iota of performance, which means that no matter how good the Ferrari is, it can _never_ get any closer to the F1 car. (As a sidelight, note that the failures that result from a fault in each of these types of vehicle probably correlate to what goes wrong in the system space as well. Not pretty!!)
As such, any system written within the constraints of a "Consultancy/Outsourced/Managed Code" environment simply cannot be at the leading edge of performance.
It is interesting to try and compare the characteristics of a trading system with other, more traditional HPC applications for which supercomputers would ordinarily be used, as well as with other high-requirement problems. I recall looking at VISA's data processing throughput: on their busiest day in history (23rd of December 2006, IIRC) they processed 180 million transactions in a day, and their IT spend was phenomenal. I can't find that data just now, but in the year to June 30 2009 VISA processed just under 40 billion transactions, which translates to about 1,300 per second. A stock exchange is an order of magnitude more transactions than that (perhaps more, depending on how you count it), and the handful of seconds you wait at the checkout for your VISA payment to authorise is 4 orders of magnitude longer than a stock exchange allows (10 seconds compared to 1/1000 of a second). That means a stock exchange is constrained, across those two dimensions combined, by requirements 5 orders of magnitude more demanding than a system as comprehensive as VISA's. I have no real experience of a climate model or a nuclear simulation, but I get the feeling that it is a huge-dataset problem and your "release valve" is that, whilst execution time is ideally to be reduced, the simulation takes as long as it takes and you wait until it is done. Certainly more gigaflops than a stock exchange, but not as demanding in terms of the number or rigour of the constraints under which you must operate.
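The back-of-envelope arithmetic above can be checked in a few lines. This is just a sketch of the comparison: the VISA figures are the ones quoted in the text, and the exchange numbers (roughly 10x the transaction rate, ~1 ms latency) are the illustrative assumptions made there, not measurements.

```python
import math

SECONDS_PER_YEAR = 365 * 24 * 3600

# "just under 40 billion transactions" in the year to June 30 2009
visa_txns_per_year = 40e9
visa_txns_per_sec = visa_txns_per_year / SECONDS_PER_YEAR
print(f"VISA average rate: ~{visa_txns_per_sec:,.0f} txn/s")  # ~1,268 txn/s

# Latency: seconds at the checkout vs. milliseconds at an exchange
visa_latency_s = 10.0        # "the handful of seconds you wait at the checkout"
exchange_latency_s = 1e-3    # "1/1000 of a second"
latency_gap = round(math.log10(visa_latency_s / exchange_latency_s))
print(f"Latency gap: {latency_gap} orders of magnitude")      # 4

# Throughput: "an order of magnitude more transactions"
throughput_gap = 1
print(f"Combined: ~{latency_gap + throughput_gap} orders of magnitude")  # ~5
```

The point of running the numbers is that neither dimension alone looks impossible; it is the product of the two, roughly five orders of magnitude, that pushes the problem out of framework territory.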
To make that work, you kinda need to be down at the metal.
To make sure that you also make it work 100% of the time you need to be bloody careful.
All in all it's a funky, funky problem, and we haven't even got into the hard/interesting stuff about trading more complex products than the simple equities we are talking about for LSE and their direct competitors. Those products add another dimension of constraints, where the number of shared resources increases and the systems must process "many" of the transactions of the underlying equity markets and then provide the same performance metrics to their own customers.
Control over your algorithms is not just important, it's mandatory. The frameworks that work so well in many environments just don't cut it under that set of constraints.