I'd have listed TalkTalk as the third large ISP, since they're the company doing the most LLU (local loop unbundling) work: they install their own equipment in exchanges and only use BT for backhaul. There are quite a few smaller LLU operators, but BT dragged its heels on LLU rollout until it had largely cemented its monopoly.
The problem with the split between BT's retail and wholesale units is that there's no requirement for BT retail to make a profit. The wholesale arm has to sell to BT retail at the same price it sells to everyone else, but the retail division can operate at a loss and be bailed out by the rest of the company...
The important issue is the ratio of investors to speculators. You need speculators in the market to provide liquidity, but you don't want too many, because liquidity is the positive spin on volatility. If the ratio of speculation to investment is too high, the market becomes completely decoupled from the thing it's trying to represent, and it becomes a dangerous place for investors (and companies), because they can lose all of their money as a result of something completely unrelated to the actual profitability of the company. If you have too few speculators, it becomes difficult to buy and sell shares.
The problem with HFT is not really HFT itself; it's that it magnifies the effect of speculators on the market, so that far fewer speculators with far less capital can have a disproportionate effect on how the market functions.
That problem was solved by making the ISA independent of the microarchitecture. It wasn't just solved, it was an industry-wide convention by the time Java appeared.
MIPS IV is nice because it's a 64-bit ISA that's over 20 years old (the magic number for patents). FreeBSD 10 runs on it out of the box with the BERI kernel config, both on the Altera DE4 boards and in simulation, and 10.1 should include a kernel config for the NetFPGA 10G board. These boards are pretty expensive, but we have a couple of configurations that will let it run on smaller FPGAs. Removing the FPU makes it a lot smaller, and you can also build a microcontroller variant (simple static branch predictor, no MMU) that's smaller still. The simulator is slow but just about useable: it takes about an hour to boot to single-user mode, but that's enough for testing.
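For anyone wanting to try this, a standard FreeBSD cross-build should get you a MIPS64 world plus a BERI kernel. This is a sketch of a build-config fragment, not a verified recipe: the exact KERNCONF names (e.g. BERI_DE4_MDROOT for the DE4, BERI_SIM_MDROOT for the simulator) are assumptions here, so check sys/mips/conf in your source tree for what actually ships.

```shell
# Cross-build FreeBSD for 64-bit MIPS from a FreeBSD 10 source tree.
# KERNCONF names below are assumptions -- see sys/mips/conf for the
# configs present in your tree.
cd /usr/src
make TARGET=mips TARGET_ARCH=mips64 buildworld
make TARGET=mips TARGET_ARCH=mips64 KERNCONF=BERI_DE4_MDROOT buildkernel
```

The same invocation with the simulator kernel config lets you test without any FPGA hardware, at the cost of the roughly one-hour boot mentioned above.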
It's only in the last couple of years that FPGAs have become interesting for this kind of thing. A few high-level HDLs are appearing, because hardware is now sufficiently complex that the traditional approach of throwing everything away and starting again with every CPU revision is increasingly impractical. The devices themselves are now fast enough to be useable for prototyping and for getting a reasonable feel for behaviour: with the latest generation we can get 100-200MHz with 4 cores in a single FPGA - not competitive with an ASIC, but fast enough that you can actually use them. I gave a demo that ended up being more compelling than I expected: I was showing people some things running on the UART console, and because I'd left the network cable connected, the screen kept being spammed with messages about invalid ssh connection attempts. Nothing I was doing said 'this is a real computer' quite as much as people on the Internet trying to attack it...
There are two ways to write error-free programs; only the third one works.