I don't believe that technological privacy is achievable, and I'm skeptical that it's valuable. Whether or not cryptography actually works (an interesting mathematical question in itself), cryptosystems fail fairly often. Even when they do work, to be truly untraceable or private with them you have to effectively opt out of commerce. Don't log on to anything when you're using Tor, kids; also, don't use Google, since they can always watch your referer tags and see 3/4 of your pages that way.

The problem with privacy as we normally talk about it is that it is extremely fragile. What we've historically taken as 'privacy' was really laziness -- going back to my example from the detective firm above, all this information was already there; it was just split into a couple of dozen different archives and databases. Assembling it took time and effort, so you had privacy because unless something was really important, it wasn't worth the effort of searching. Now it's very easy to record and archive -- we've been focused for many years on making recording and archiving easier -- and we elect to be recorded and archived in order to participate with other people: a bank won't serve you if you're wearing a ski mask, visit Vegas and you'll see that any table game has very specific gestures and rules to make what you're doing camera-friendly, want a loan and you need a credit rating.
So, privacy has to be implemented, which means it's going to be a combination of legal, technical and social elements. Technical in the same sense as breaking and entering -- the definition of B&E is that the breaker has to make -an- effort, no matter how trivial. Lifting a latch counts as B&E, and similarly you need some indication that you're trying to achieve privacy. Legal in the sense of limiting the consequences when your privacy is breached.
Running a real company or a real government requires dealing with people who don't want to be there. Not everybody wants a career; some people just want jobs. They want to punch the clock and go home. Some people steal habitually from the till. Had I my druthers, I'd spend all day at home reading, and I'm considered a sociopathic workaholic. Some people are going to cheat. Some people are going to lie on their interviews. The test of any organization isn't how it does when it's doing well, it's how it does when it's under extreme stress. Valve hasn't been under extreme stress, so the question of the effectiveness of their organization is effectively moot. We can look to other game companies with strong egos (Origin for example, or Ion Storm) and get a good idea, though.
Well and good, but all any security implementation buys you is *time*. The real problem with StO is that the time it buys you is unpredictable, and in Kerckhoffs' era of large and slow system upgrades, it might take years to update a cryptosystem once it was broken. Malware authors have happily used StO for years -- for example, evading detection mechanisms by running a number of off-the-shelf packers in sequence. The approach works because they replace their malware faster than anyone figures out the packing sequence. The Navajo code talkers during WWII were a security-through-obscurity approach, and it worked fine for the duration of the war, but would have gone horribly in the next one.
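The packer trick is easy to demonstrate. Here's a hypothetical toy sketch (single-byte XOR passes standing in for real packers, and a made-up marker string standing in for a real signature): the moment the bytes on disk change, a byte-pattern signature built from yesterday's sample stops matching, and reordering the packer sequence yields yet another set of bytes to re-signature.

```python
# Toy demonstration (hypothetical names, not a real packer or AV engine):
# byte-pattern signatures break as soon as the payload is repacked.

SIGNATURE = b"EVIL_MARKER"                    # what the AV engine matches on
sample = b"MZ...code..." + SIGNATURE + b"...data..."

def xor_pack(blob, key):
    """Stand-in for an off-the-shelf packer: rewrites every byte."""
    return bytes(b ^ key for b in blob)

def av_scan(blob):
    """Signature match: does the known byte pattern appear anywhere?"""
    return SIGNATURE in blob

assert av_scan(sample)                               # yesterday's sample: detected
repacked = xor_pack(xor_pack(sample, 0xAA), 0x55)    # two "packers" in sequence
assert not av_scan(repacked)                         # same malware, signature misses
```

The defender now has to either unpack (which requires knowing the sequence) or write a new signature for the packed form -- and the attacker can re-roll the sequence faster than that.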
Now, what we're dealing with here is network defense, which isn't crypto. In network defense, creative lying is enormously helpful because you can use it to differentiate between your ignorant attackers and knowledgeable members of the community. The majority of attackers scan horizontally (all hosts on a fixed number of ports) rather than vertically (all ports on a number of hosts) because vertical scanning is a waste of time. Most attackers normally hit 9-10 ports and then move on to the next potential target -- they don't see the network in terms of what the hosts *are*, just what they can *exploit*. Moving SSH to a random port means that the attacker now has to spend 6000x the effort to figure out if there's anything on the host he cares about, and he's probably not going to bother when there are nice sysadmins out there happy to put everything on port 22 (as always, I don't have to outrun the bear. I just have to outrun you.) Couple it with some aggressive port blocking (like port 22) or a threshold random walk scan detector and you've got a perfectly fine way to ignore idiots. It's also worth noting that the mentioned port is 2222, which tends to be "stupid port manipulation rule #2" among folks (the other one being to add 1 in front of the port number; I can't tell you how fascinating it was to watch port 16888 the first time we blocked bittorrent).
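A threshold random walk scan detector is only a few lines at its core. This is a minimal sketch after Jung et al.'s TRW idea -- the probabilities and threshold are illustrative assumptions, not tuned values: benign clients mostly connect to services that exist, scanners mostly probe ports that don't answer, so a running likelihood ratio separates the two after a handful of observations.

```python
# Minimal sketch of a threshold random walk (TRW) scan detector.
# All parameter values below are illustrative assumptions.

P_BENIGN_SUCCESS = 0.8    # assumed: legit clients' connections usually succeed
P_SCANNER_SUCCESS = 0.2   # assumed: scanners' probes usually hit nothing
THRESHOLD = 10_000.0      # likelihood ratio at which we call it a scanner

class TRWDetector:
    def __init__(self):
        self.ratio = {}   # source address -> running likelihood ratio

    def observe(self, src, succeeded):
        """Update the walk for src; return True once it looks like a scanner."""
        r = self.ratio.get(src, 1.0)
        if succeeded:
            # a completed connection is evidence of benignity (ratio shrinks)
            r *= P_SCANNER_SUCCESS / P_BENIGN_SUCCESS
        else:
            # a failed connection is evidence of scanning (ratio grows 4x)
            r *= (1 - P_SCANNER_SUCCESS) / (1 - P_BENIGN_SUCCESS)
        self.ratio[src] = r
        return r >= THRESHOLD
```

With these numbers, each failed probe multiplies the ratio by 4, so a source gets flagged after 7 straight failures -- which is exactly why the horizontal scanner hitting your 9-10 closed ports trips it while your users never do.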
As for malware signatures, they've been increasingly ineffective for years. Attackers can buy AV as well, and it's easier for them to tweak their software to evade AV than it is for defenders to generate new signatures. AV's very good at protecting you from yesterday's attack. If you don't have a signature, though, it usually takes months to identify a subverted host.