They are the intelligence community, not our national cybersecurity consulting firm, and they only ought to be notifying the public if the risk to national security involved in leaving the vulnerability open is greater than the risk to national security involved in losing the intelligence that could be gained from it.
What you're saying is we HAVE NO national cybersecurity entity whose purpose is to protect our infrastructure from bad actors using exactly the kinds of methods and exploits we're seeing here. And given that, we have to rely on Kaspersky to do it for us. Not only is it then a good thing, it's long overdue.
I suppose you could look for another job.
So when that happened to me, pretty much just like that, what I did was hash the passwords (SHA-256, IIRC -- it was a long time ago), then asymmetrically encrypt the resulting hash with hardcoded keys, just so they could say they secured their passwords with asymmetric encryption. Customers are very unlikely to know the difference (or at least ours were), so there was no real risk if the sales force blabbed about it as if it were a useful feature. When management gets a security buzzword stuck in their heads and can't or won't be convinced it's not the solution they think it is, you give it to them if you want to keep your job, whether it makes sense or not. Some developers won't even bother to find out what the right solution is, or won't have the luxury of actually implementing it. I gave them what they needed, then bolted what they wanted on top as window dressing. And management will never read my comments on that code, which explain exactly what happened.
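A minimal sketch of the pattern described above -- a real SHA-256 hash doing the actual work, with a token "asymmetric encryption" layer bolted on top using hardcoded keys. The function names and the toy textbook-RSA parameters are my own illustration, not from the original system; encrypting byte-by-byte with tiny primes is deliberately worthless as crypto, which is exactly the point:

```python
import hashlib

# Hardcoded toy RSA parameters (textbook RSA, byte-at-a-time).
# This adds no real security over the hash -- it exists only so
# marketing can say "asymmetric encryption". Illustration only.
P, Q = 61, 53                       # tiny primes
N = P * Q                           # modulus (3233)
E = 17                              # "public" exponent
D = pow(E, -1, (P - 1) * (Q - 1))   # "private" exponent, derived at load time

def store_password(password: str) -> list[int]:
    """Hash the password, then 'encrypt' each digest byte with the hardcoded key."""
    digest = hashlib.sha256(password.encode()).digest()
    return [pow(b, E, N) for b in digest]

def check_password(password: str, stored: list[int]) -> bool:
    """'Decrypt' the stored digest with the hardcoded key and compare hashes."""
    digest = bytes(pow(c, D, N) for c in stored)
    return digest == hashlib.sha256(password.encode()).digest()
```

The SHA-256 hash is the part that actually protects the passwords; the RSA layer round-trips each byte through hardcoded keys and back, changing nothing about the security model -- window dressing, as described.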
Good security is hard. VERY HARD. The government is often bad at it. Sony is bad at it. Banks are bad at it. In fact, I can't point to anyone who's known to be good at it except maybe Zimmermann, and I don't even know that for sure. And users don't like it and will often bypass or otherwise subvert it themselves. But it's not because engineers are incompetent. Often they're not even asked to provide security, and it isn't even on their radar. And sometimes when they are asked to provide security, they're saddled with bogus requirements for how it should be done. Good security affects the user interface and users' behavior, and that's an area companies prefer to stay out of, because it's unpopular, at odds with productivity, and isn't readily seen to contribute to the bottom line.
I would suggest there is a much cleaner way for the TLAs to make warrant canaries ineffective. Send a warrant to every company that publishes a canary. In a short space of time, no company of any note will have a canary, and the whole point of issuing a canary is defeated.
Too risky -- it would show up in Canary Watch when they all disappear, and you'd start seeing a lot of new canaries being published by companies that hadn't done it before, which would then all get their own NSLs, and the whole thing would continue to snowball until someone refused to comply with an NSL, and the resulting stink would probably kill off NSLs altogether.