... why are mission-critical devices connected to the Internet?
Sure, we know the weakest link is the meatware, not the hardware, but still...
They aren't, at least not directly. They are, however, generally connected at various points to the "business" network, which is connected to the Internet (people gotta email). The literal air gap is largely fiction. The business network gets hacked, then some vulnerability is exploited in the bridge points or routers (it's a network of networks!).

Why connect the SCADA to the business network at all? To get the data out to do reports, send email alarms, etc. In theory this data exporting should be secure. The problem is: who is hacking your SCADA system? It's not the usual suspects; there's no money in it, and the barrier to entry is too high for the script kiddies. It's other countries wanting to perform espionage. How the hell do you protect against that? Look at Stuxnet, I mean really look at how it took down those centrifuges. Governments have resources that the average hacking group (or SCADA group) simply doesn't. They also have no reason to reveal a compromised system. There could be sleeper, targeted, custom malware sitting on every SCADA server in the US, just waiting for the time when it will be useful to activate. It's a brave new world!
Dick Smith is a hypocrite. All his electronics stores revolved around importing the cheapest crap from overseas, so for him to now say "buy Australian" is a huge backflip. Back when that was happening with Dick Smith, Australia was still manufacturing lots of stuff; now we're just importing everything while exporting the raw materials.
You do realize that the "Dick Smith" electronics store was sold to Woolies in 1982? 60% in 1980, then the rest in 1982. Are you really talking about the store during the '70s? Besides, it doesn't make someone a hypocrite to behave differently from how they once did. Is the reformed alcoholic a hypocrite for wanting tighter alcohol regulation? You really haven't thought this through.
Why generate temporary objects in the millions? Drawing from a (garbage-collected) object pool can often make a colossal difference to performance.
Let's say I'm an I/O server processing data from a moderate number of clients, say 5000. These clients each send me updates for a small data set, say 2000 points, once a second. My job is to pluck that data off the wire, format it as required by the rest of the sub-systems, then commit it off to, say, a database. Say it takes about one second on average for me to receive a response from the system before I can dispose of a data update. 5000 * 2000 means I've got about 10 million little data items to process per second.

Let's add another wrinkle. Worst case, I need to buffer that data across a lost database connection for up to 15 minutes, to give the database enough time to restart or some such thing. That's 9e9 data updates in memory. Let's say each update consists of a 32-bit number, a 64-bit timestamp, and a 16-bit status field: 14 bytes in total. 14 * 9e9 = 1.26e11 bytes, or 117.4 GB. Shit, I may be a big server, but I don't have that much memory!

OK fine, maybe I can make my safety margin smaller. Let's just go for 4 minutes; if we can last 4 minutes, there will be just enough time for a redundancy switch-over for my database. Still need 31.3 GB of memory. My server has 8 cores and 16 GB of RAM, but it's still just not quite enough. 1.5 minutes? OK, now we can handle it: 11.7 GB.

I also need to keep a reference to all these little data updates. If I go for a smart pointer, that's sizeof(std::tr1::shared_ptr), which is 8 bytes, for another 6.7 GB. Dammit, still over the 16 GB! What if I use a bald pointer? sizeof(thing*) = 4 bytes = 3.35 GB. Just fits. There's also a performance penalty for creating all those smart pointers.

This is obviously a contrived example, but it's not too far off the kinds of problems that have had to be solved at my current place of employment. If you go managed for this kind of stuff, the overheads become too large and the ability to scale is greatly impacted.
I know this because we tried it and just weren't able to get the scalability into the same order of magnitude. As I said before: use the right tool for the job.