Using STL containers is not the issue; most people use STL containers in their default mode. My point was to use custom allocators with such containers, specifically stack-based ones if the container will only exist within a function call etc., perhaps even use EASTL.
As for expression templates, it's not all about compile-time computation; more often than not it's a mechanism to explicitly specify how a computation should be done - this happens a lot in linear-algebra-based implementations. Check out the various uses in Blitz++, Boost Fusion, Boost Graph, etc.
Also note that move semantics weren't used with the compiler/library mentioned in the paper.
In short, if the above were done/used, the difference would have been MUCH greater.
The paper doesn't talk about using:
1. Custom allocators (stack and pool based)
2. Move semantics
3. Modern C++ idioms such as expression templates
I believe if these were used, the performance would have been even better and left the rest "truly" in the dust...
I believe they use two sets of bloom filters, one for known bad sites and one for known good sites - each is roughly 1.5MB and can be found in your Google install dir; search for files with the word "filter" in their name.
I think you misunderstood my point: all the examples I gave, such as separate mini-markets, stealth exchanges and clearing pools, are elements that existed in past markets and inevitably all failed.
The idea is that if you allow people the freedom to choose from these options, over time such options will fail - that is a certainty. The reason for failure centers on the fact that, as they reduce the overall efficiency (true price) of the market, the result is a less profitable market.
For example, dark pools were invented to reduce the undesired market impact that large players may create when trying to offload large orders in a short amount of time (a good idea: keep the market steady). From a technical POV it's working fine today; most of the major IBs provide such services, and it is true that such services skew the true value of stocks. However, as more and more people begin to utilize them, the value of such dark pools and their advantages begins to dissipate; the same can be said for the mini e-markets.
When someone says let the markets be free, they're not talking about an "everyone is equal" paradigm but rather "let them do what they will" and let the outcomes decide. The markets are ruthless; most people are culled out of the competition before they can even blink, and unlike most other national services, such as public schools and hospitals, they are not designed nor willing to cater to the lowest common denominator (be that due to intelligence, server speed or lack of optic fiber). In short, they are not fair, they were never intended to be fair, and making them fair will make them unprofitable. All any one government can do is regulate certain use-cases that make participating in said market unpalatable, such as insider trading/front-running.
The Taiwan and Korea markets are like that; the only thing such delays achieve is to decrease the total number of shares traded in a day, which reduces the overall market value and shifts wealth to other markets that do allow continuous, unfettered trading.
What you don't realise is that a great deal of shares/commodities these days are traded in dark pools and mini electronic markets; the traditional NYSE- or NASDAQ-style exchanges are looking to be a thing of the past, used only as a reporting medium for exchanges of shares from external/independent markets.
I say let the markets be as efficient as they can be.
That is absolutely correct; slashdot has been going downhill over the last couple of years.
As for your comment regarding "single CPU with easy RISC instructions", that's absolutely correct as well. Back in '98, when I was reviewing the ideas coming out of Cryptography Research (cryptography.com), we could only ever get this kind of thing to work on smart cards for that very reason: they all have a single execution pipeline, no prefetch or look-ahead algorithms; essentially, instructions/data are pumped in and executed in that same order, making both the power and temporal analysis quite easy.
A good HP CRO running at about 2GHz was all one needed to sample the system, and yes, the techniques did work. But with today's processors you don't even need salt, as they do enough mixing for you, especially if there are multiple processes actively running over multiple cores while some cryptographic primitive is being executed.
The concept is called Differential Power Analysis (DPA), or for people in the industry it's also known as power cryptography, and it has been a staple of many attack vectors since the mid-90s (at least in open research). Furthermore, simple techniques such as adding salt - in other words, inserting randomly chosen bogus operations into the computation flow - render such attack vectors useless.
Nothing new here, slow news day, move along, people.
A debugged program is one for which you have not yet found the conditions that make it fail. -- Jerry Ogdin