They are cheap, almost indestructible, small, low-power and ancient enough to comfortably run any legacy application out there, even under pure DOS. Should one break (which is, in itself, rather unlikely even for heavily used units), full service manuals are available and having lots of them means easy replacements. They have traditional, hardware RS232 and LPT ports, one of each. As long as you need a single machine for a single PLC, X40s should be one of the best tools for the job.
Wouldn't they get more if you turned off your computer and donated the money you'd otherwise have to spend on the electricity bill?
Or, if you can't get a job and want to go to the US, contact the people at NYC Resistor, the New York hackerspace. Just tell them you are interested in those things, you'd like to come and hang around for two weeks with people with similar interests who actually do something with them, and you need a couch to crash on. I'm pretty sure it's going to be a cheaper, more interesting and more educational alternative to a summer camp.
The key, I think, is "properly". I've seen some deployments (none as big as yours, to be honest) that started off similarly, but as changes went on, security measures went away or were literally worked around because they (obviously) slowed down the development process. The result was a total mess, neither secure nor efficient.
I haven't really had a chance to ever go totally mad with security, but if I somehow get it, I'm sure going to try all of those things listed in the previous post and much, much more.
This. Especially the layers.
If you can, split the application in two parts - the front end running on a world-facing web server and a back end on a private network. Use a well-defined, high-level protocol for communication between the two. If you can afford (literally, it's just a matter of throwing more hardware at the problem) some overhead, use a text-based serialization format with a solid, well-tested parser. The simpler, the better.

Check every single request at the backend in every possible way; data sanity checking at the door is crucial. Maybe sign the requests - it won't do any good if someone breaks into the frontend boxes (because they will get the private key then), but it will make it impossible to impersonate those boxes without compromising them first. Sign responses. Generate and deploy new keys and certificates often. If you're truly paranoid, use prime numbers (look up the cicada principle) for the intervals between key changes to avoid being predictable.

Log everything, offsite. Send the logs over a smart network bridge that will let through logs, just logs and only logs, and only in one direction, just to be sure. Make this bridge the one and only thing connected to the log server, other than the power cable, a monitor, a keyboard and a tape drive.

Preferably use a similar bridge between the frontend and the backend servers, and have it do sanity checking of all passing traffic in addition to the checks at the backend. Have different people implement the checks at the backend and at the bridge, and do not let them share code. Preferably, use two different parsers for your serialization format of choice.

If you can, put the databases on a third layer behind the backend (so that it only does business logic, not data storage). Try to embed some basic security in the database itself, especially data integrity checks. Have it roll the transaction back, tell the backend to bugger off and raise an alarm if it's told to do something that doesn't quite fit the nature of the data.
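The request-signing idea above can be sketched in a few lines. This is a minimal illustration, not a production design: it uses a shared-secret HMAC as a stand-in for the asymmetric signatures implied above, and the payload and function names are made up for the example.

```python
import hashlib
import hmac
import secrets

def sign_request(key: bytes, payload: bytes) -> str:
    """Frontend side: attach an HMAC-SHA256 signature to a serialized request."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_request(key: bytes, payload: bytes, signature: str) -> bool:
    """Backend side: recompute and compare in constant time, so timing
    differences can't leak signature bytes to an attacker."""
    expected = sign_request(key, payload)
    return hmac.compare_digest(expected, signature)

# Hypothetical request; rotate the key often, as described above.
key = secrets.token_bytes(32)
payload = b'{"action": "get_balance", "account": 42}'
sig = sign_request(key, payload)

assert verify_request(key, payload, sig)          # untampered request passes
assert not verify_request(key, payload + b"x", sig)  # any modification fails
```

The point of the signature isn't secrecy (the payload is still plaintext); it's that a box which doesn't hold the key cannot produce requests the backend will accept.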
And so on, and so on, and so on. It's all about assuming that every single part of the system can and will contain security holes, but with so many layers, cross-checks and variations on the security measures (like using two different parser implementations for the same check), the probability of someone finding a usable chain of exploits is absurdly low. Remember, exploits have to be used several at a time to actually break into a system and not just DoS it.
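A back-of-the-envelope calculation shows why chaining matters. The per-layer breach probabilities below are made-up numbers, and real layers are rarely fully independent, but the multiplication illustrates the point: the attacker has to defeat *every* layer, not any one of them.

```python
# Hypothetical per-layer odds that an attacker finds a usable exploit.
p_frontend = 0.05   # world-facing web tier
p_bridge   = 0.05   # the filtering bridge, independently implemented
p_backend  = 0.05   # backend checks, written by a different team
p_database = 0.05   # in-database integrity checks

# Assuming independence, a full break-in needs all four at once.
p_chain = p_frontend * p_bridge * p_backend * p_database
print(f"{p_chain:.2e}")  # prints 6.25e-06
```

That's the whole argument for using different people and different parser implementations for the cross-checks: shared code would make the layer failures correlated, and correlated failures don't multiply down like this.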
I wonder if any web applications that properly implement all those things and more even exist, but it wouldn't hurt to try to make one, if you have the funds.
Oh, and one last thing. The most important one, actually. If you pull this off, your application might be so impervious to hacking in the usual sense of the word that attacking it that way would simply be impractical, not worth the time and effort. And guess what the determined hacker will probably do at that point? Dress as an air conditioning serviceman, show up at the facility, bullshit the guard and walk away with your data 15 minutes later, using equipment no more high-tech than a screwdriver. Or, if your guards are not as dumb as that, a *very* determined hacker might even get themselves hired at your air conditioning service, cleaning or electrical work company and do the same the next time they *legitimately* show up at the facility. It's been done. In short, consider the other aspects of security, more so if you're actually a valuable target and almost unhackable through the internet.
This question, and the very act of asking it, is full of fallacies and silent assumptions.
What is this "general public" you speak of, what does it mean to "still matter", how do you evaluate this property?
I'm using Linux because I am used to it, the tools I need, many subtle features included, are there (and they aren't there on Windows or OS X), and in general I can get the job done in a most timely, convenient and pleasant manner on Linux compared to any other environment out there. So, yeah, it's relevant for me.
Wait, what did you expect, that some ephemeral being called "The General Public" will descend upon this thread and lay pure truth upon it, drawn from its unbound knowledge? Sorry, no cake for you. I know some people have this tendency to readily extrapolate "I" into "we", "everyone" and such and happily provide answers to these questions, but you shouldn't listen to their bullshit. I mean, respect your own intelligence and try to see through dumb generalisations. And don't ask meaningless questions that invoke them.
Off the top of my head: LLVM and CUPS. Please get your facts straight before posting overly general statements. Your posts will be much more difficult to discredit as a whole on the basis of a single "all", "none", "anything" or whatever disproven by a single example to the contrary, thanks to elementary mathematical logic.
If it's sensible, this could be useful in some areas, for some vehicles. The whole gasification assembly doesn't look like a work of precision engineering, so it could probably be built in somewhat sub-standard conditions. I'd expect that many third-world plantations of easily gasified produce have lots of leftovers, and not all of those have sensible uses to date - some might just be dumped somewhere to rot.
On a different note, if I were the CEO of Starbucks, I'd get such a car as a publicity and marketing stunt, and power it with dried leftovers from brewing.
That's an interesting question. I can't give a definitive answer, but I think such a claim could hold some weight.
Note, however, that if you're an author (or more precisely, a sole copyright holder) of an application, you can't - in the "logical impossibility" sense of "can't" - "violate GPL" by doing that, or just about anything else. You're free to distribute software that is under your copyright in any way and shape you like, but anyone redistributing it after downloading your incomplete source tarball would be unable to comply with the GPL if someone further down the chain asked them to provide the full source.
Yes, you have a point, the comparison to the mnemonic assembly output of gcc is a good one. I was trying to find an example like this, but couldn't think of one at the time.
My explanation, however, still answers the OP's question - what was distributed was enough to recreate the binary without raising any suspicions, and that's why this could happen.
The problem in this case is that the concepts of "source code" and "object code" are a bit fuzzy with generated code that is GPL-licensed.
Someone wrote the bison grammar files (which are the missing source code in this case) and "compiled" them, by running bison over them. The resulting files were "object code" in the light of GPL, as they're not really intended nor suitable to be read or edited by a human (and the GPL's definition of source code is "the preferred form of the work for making modifications to it"), but at the same time, they were still technically source code, as in something that can be fed to another compiler, together with the actual source code of Emacs to build the executable Emacs binary.
Thus, the final binary can be recreated from those tarballs just fine, because *technically* it's the full Emacs source code all right. Legally, though, it's not, because of the definitions in GPL.
They're more durable - you can bang one against the desk, throw it around the room all day, then plug it in and it should still work (or, at worst, require fixing a broken solder joint or two, SMD capacitors sometimes fall off the PCB after a strong enough jolt), while no HDD in the world is going to survive that. Maybe people got that confused, the word "reliable" means many different things in layman's speech.
According to Nietzsche, God has been dead since at least 1882, which means that the source code is in the public domain even by the standards of Disney. Sorry, no bonus.
[Feeding a troll? Sure, the bigger a piece, the better, more chance of choking!]
Oh my. I'm not even 25, and I feel the urge to call "get off my lawn" in response to your "old school games" list and the configuration you call "old hardware"...
AFAIK, on a desktop with two discrete graphics cards, you should be able to run Windows and Linux as guests at the same time, each using one card. I'm not sure about disk access, you might want to add a discrete PCI-E SATA controller for one of the systems to avoid any screwups caused by Windows doing something nasty, but other than that, this seems to be perfectly viable. A recent Sandy Bridge-based Core i7, with 8GB of memory on a good P67-based motherboard should run such a software stack with native performance of an SB i5 (roughly half the cache and threads of an i7 available most of the time for each guest) with 4GB of memory (if split evenly), which is more than adequate for everyday use.