
Bruce Schneier Interview on Salon

citmanual wrote to us with the Schneier interview on Salon. He's promoting his new book Secrets and Lies. I'm just about finished with it, and will be doing a review soon - it's quite good.
This discussion has been archived. No new comments can be posted.

  • no kidding... to be one of the few in the first 1k... I barely missed it. Oh well
  • Just because I didn't have the pleasure of finding /. even after years of surfing ;)
  • by pmokros ( 149847 ) on Thursday August 31, 2000 @05:53AM (#814549)
    Schneier writes, "he's going to choose the dancing pigs over computer security any day."

    That's one of the reasons Schneier rocks as hard as he does. He's a down-to-earth kind of renaissance man and he's pretty much always been like that in the years I've known him. Not only is he one of the best 10 or 50 cryptographers in the world (of which about 40 are probably NSA), but he's a regular guy. I had the good fortune to see Richard Thieme speak and he's similar in that regard (though he's more of a humanities guy than a science guy).

    I read something recently describing a self-proclaimed epiphany he experienced a few years ago that set him in the direction of holistic security versus the crypto-heavy approach he had favored since he entered the field. I'd be curious to know more about it, because I feel he's always had a hint of that approach. Just go back and read his essays "Why Cryptography Is Harder Than It Looks" [counterpane.com] and "Security Pitfalls in Cryptography" [counterpane.com]... substitute "computer security" or "network security" for "cryptography" and you'll see what I mean.

    In 1996 I think it was, Bruce, Niels Ferguson, and I were working on the problem of creating a file system offering plausible deniability of the existence of any files the user wanted to keep secret. We came up with some really neat ideas on how to avoid creating proof that certain files existed. I think the design would have worked, but I realized it couldn't succeed with the technology of the day. Microsoft Word (heck, the Start -> Documents listing broke us too) will list the last several documents accessed: what if one of them was supposed to remain secret? If it's hidden, but the attacker sees you modified it and they can't find it, the game is over--you can't deny it exists and the system is broken.

    Why is this such a big problem? Because if we were to create a special OS and set of applications that didn't track that stuff, the only people who would use it would be those with something to hide (this wasn't a court of law--we couldn't assume you were innocent until proven guilty). So the user loses the game before it even starts, simply by having that special OS and application set.

    Keep on "keeping on", Bruce.

  • So...guns don't kill people, bad passwords do :-)

    Still, the thought of millions of people being permanently connected on-line without adequate security 'common sense' should concern us. You just have to assume that the vast majority will have to put their trust in 'the experts' to keep their networks from being broken into.

    I've been playing around with linux for 2 years now, and quite frankly I am daunted by the amount of RTFM'ing I have to do before I will feel comfortable being permanently on-line.

    I guess the first thing is to shut off most of your network listening services unless you REALLY know your stuff, and change the root password every 30 days.

  • Countless analogies to computer security can be made - banks being one of them, my house being another. There's nothing inherently insecure about my house - strong doors with locks, a security system, new windows, a fenced-in backyard - but one open window allowed my wife to "break in" when she accidentally locked herself out. All it takes is one slip-up, and all other security measures are nullified.

    Still, my house is only one house, on one street, in one neighborhood in this city, and I count on that "anonymity" to provide a measure of safety. All that's necessary is to provide potential thieves ("house hackers?") with sufficient motivation to either overlook the house or move on to an easier target (hence the Security System sign in the front). It's a play on human nature - count on the baddies to take the path of least resistance, and don't call attention to yourself by parading how many secrets you have locked up or how confidential your data is.

    Net privacy may be a concern, but net anonymity is also good cover. My real name is so commonplace and vanilla that I use it frequently as my login - tempting fate perhaps, but do a search on it in any database and you'll be swamped with results. Which one is me? I'll never tell...
  • No security is going to be perfect, but you can make it arbitrarily difficult to break in. It seems to me that the reality is a tradeoff between security and a combination of cost, convenience, and ignorance. These are formidable obstacles, but it seems to me you can make headway on them.

    Take another poster's Java example: if you use a language like Java, you have no buffer overruns, but this comes at the cost of checking array bounds.

    Many of Microsoft's security problems exist because MS wants to give you convenience and damn the security implications. Macros in Word files and email attachments as clickable applications are just two examples.

    The problem of ignorance is twofold. One is that many users are not knowledgeable system admins, and yet they will be on the Internet 24/7. The other is that the current OS model is just not security-conscious. It takes enormous work (e.g., OpenBSD) to get a semi-secure OS, and even then it is easy to slap on telnet, ftp, and every other insecure service. Combining ignorant users with inherently insecure OSes is a potent mix.

    Dealing with cost and convenience is a social issue. Customers will have to demand security as a top priority before anything much happens here. That probably won't happen until we get a disaster that costs a lot of people a lot of money. There is not much you can do about ignorant users. You can educate some, but certainly not most. Making OSes more secure is a matter of research, and this is something that the government and the military are very interested in.
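    The Java tradeoff mentioned above can be sketched in a few lines - a hypothetical illustration, not something from the article: the runtime pays for a bounds check on every array access, and in exchange an overrun becomes a catchable exception instead of silent memory corruption.

```java
// A minimal sketch of the bounds-checking tradeoff: Java checks every
// array index at runtime, so a write past the end of a buffer throws an
// exception instead of silently corrupting adjacent memory as in C/C++.
public class BoundsDemo {
    public static void main(String[] args) {
        byte[] buffer = new byte[16];
        byte[] input = new byte[64]; // oversized input, larger than buffer

        try {
            for (int i = 0; i < input.length; i++) {
                buffer[i] = input[i]; // throws once i reaches 16
            }
            System.out.println("copy completed");
        } catch (ArrayIndexOutOfBoundsException e) {
            // the overrun is stopped at the buffer boundary
            System.out.println("overrun stopped at index 16");
        }
    }
}
```

    The per-access check is exactly the performance cost the poster refers to; the caught exception is the security benefit.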

  • by Jeffrey Baker ( 6191 ) on Thursday August 31, 2000 @06:18AM (#814553)
    I spend a lot of my time evaluating the security of commercial web sites. Most of them have incredibly bad security. But my observation is that human interactions that do not involve computers have as little security as computer transactions, or less.

    For example, it is possible to easily execute money transfers and stock trades as someone else at A Large Internet Broker Who Will Remain Unnamed. This sounds bad, and it is bad, but it isn't much worse than the equivalent security at non-Internet banks and brokerages. At my bank, I can execute a money transfer by simply sending a fax with my signature to the bank. Now, any waiter, billing clerk, or grocery checkout monkey to whom I have ever given a check has my account number, the name of my bank, the bank's routing number, and an original of my signature. Photo reproduction is more than good enough for a forged fax, so it would be trivial to walk down to the local copy shop and start faxing money transfers to people's banks.

    Credit cards are even worse. You need only possess the physical card, or a reproduction thereof, to use the card fraudulently. The number of register operators who actually check the card signature against positive identification and then note the form and number of the identification is incredibly small. With near-full employment here in the U.S.A., the diligence of the rank register operator has become even worse in recent years. Fraud is trivially perpetrated in real-world banking and retail.

    My final example involves my own clients. I am in a line of business that automates business processes in medium and large companies. Invariably, the client wants to ensure that the computer-automated process is totally secure. This is good and I applaud their concern. However, it is funny to note that the manual processes they are replacing haven't the least bit of security at all. Often, the "process" involves one person rubber-stamping a piece of paper and placing it in an open bin on some other person's desk. Yet they insist on impenetrable electronic security.

  • by ucblockhead ( 63650 ) on Thursday August 31, 2000 @06:27AM (#814554) Homepage Journal
    Or the coder throws in a 1024-byte buffer and says to himself, "I'll go make this more robust once it is working." A couple of days later, he's got the system working, so he runs off, shows it to his boss, and moves on to the next project, forgetting about the work he has left to do on the program that "works".

    Usually the tradeoff isn't program performance vs. security but coding time vs. security/performance/whatever.

  • Here are a couple more annoying statements. Buffer overflows are unavoidable? All programs are riddled with this type of bug and no programmer can stop it? OK, first off, there are many solutions. Java offers protection against this. Also, hasn't BSD basically solved most of its buffer overflows? Buffer overflows are a bug in the structure of C++ that is caused by SLOPPY programming. Next time a customer or boss finds a buffer overflow in my code, I'll point to this article and say: "Look, it can't be helped. Every 1000 lines of code I have to randomly place a few of these bugs. Computer security is a farce anyway, so I shouldn't have to make an effort to solve this particular problem." If this guy were in charge of security at a bank, he would state that since the bank can't be guaranteed 100% safe from robbery, we should leave all the money in a big pile outside. Actually, I think the article just misquotes him to be saying that.
  • by Elvis Maximus ( 193433 ) on Thursday August 31, 2000 @06:30AM (#814556) Homepage

    "I got about two-thirds of the way through the book without giving the reader any hope at all", he writes "It was about then that I realized I didn't have the hope to give."

    Salon takes this quote out of context. Schneier goes on to say:

    I had my epiphany in April 1999: that security was about risk management, that detection and response were just as important as prevention, and that reducing the "window of exposure" for an enterprise is security's real purpose. I was finally able to finish the book: offer solutions to the problems I posed, a way out of the darkness, hope for the future of computer security.

    I haven't read the book yet, but my understanding from what Schneier says regularly on his very interesting mailing list is that he and others had been looking at security the wrong way. The analogy he uses frequently is to safes. Safes don't claim to be uncrackable; instead they come with ratings specifying how many minutes it would take a skilled safecracker to open them. Schneier's argument is that this is the same approach we should be taking to information security. Not "this security is crackable and that security isn't," but "this security can be cracked by a skilled intruder after X minutes/hours, giving you that much lead time to respond. Plan accordingly."


  • No. It's more of an interview if you ask me.
  • Exactly. I'm sure a determined person could break into my house. But I don't give up, because moderate security means that such a person will have to go to a lot of effort to do so. That's really the best we can hope for. To make it hard enough that it isn't worth an attacker's time.

    I'm sure that the NSA could break into my home box, no problems. I don't really care about that. There's not much I can do about that. That doesn't mean I won't put up enough security to keep the script-kiddies out.

    The same applies for a corporate box. You don't give up just because you can't keep everyone out. It is good enough to have enough security that any attacker is highly skilled enough that he wouldn't bother with your site.

  • I believe the moral of the book (which I gather from Bruce's comments in Crypto-gram [counterpane.com]) is that security breaches are inevitable, and therefore it's imperative to have a recovery plan.

    He doesn't claim that since perfect security is impossible we should stop trying to attain it. He says that we need to stop assuming our efforts will always protect us and make sure we're prepared for the alternative.

  • The moral argument:

    "Who watches the watchmen?"

    All the rest is just elaboration on that.
  • This is why many security features (and certifications, like C2) are really more about bulletproof audit trails than about prevention. You can't prevent abuse of any remotely automated system (this includes noncomputerised systems, as you note), but you can keep track of everything that's done so that the damage can be undone. This works poorly for cases like hospital equipment and space shuttles where the damage is irreversible, but very well indeed for financial stuff (which isn't strictly "real world" anyway, since money is a shared delusion).

    Many people look at certifications like C2 and think "what's the point of just logging the break-in - shouldn't we prevent it?" These people are usually frontline types (sysadmins who fear an open port, for instance), but there's a lot of security that goes on after the fact, and much of it is done by people in apparently less exciting lines of work (accountants, say ;). Indeed one common use of the term "security" pertains to how people will feel about the *next* time even if there is a problem now - if they see that the people in charge of a system can track down the perps and make amends in damn near every case, they'll feel OK even knowing that an abuse is possible.
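    As a toy sketch of the audit-trail idea above (class and method names are invented for illustration): rather than trying to block every action up front, record each one in an append-only log, so abuse can be traced and, where possible, undone after the fact.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical append-only audit log: entries can be added and read,
// but never modified or deleted, so the trail survives an abuse.
public class AuditLog {
    private final List<String> entries = new ArrayList<>();

    public void record(String actor, String action) {
        // append-only: existing entries are never touched
        entries.add(System.currentTimeMillis() + " " + actor + " " + action);
    }

    public List<String> trail() {
        // readers get a view they cannot alter
        return Collections.unmodifiableList(entries);
    }

    public static void main(String[] args) {
        AuditLog log = new AuditLog();
        log.record("alice", "transfer $500 to acct 42");
        log.record("mallory", "transfer $500 to acct 666");
        // after the fact, every action is still visible for review
        System.out.println(log.trail().size() + " actions recorded");
    }
}
```

    The point is the one made above: the second transfer isn't prevented, but it is recoverable - the damage can be traced to an actor and reversed, which for financial systems is often good enough.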
  • Exactly! Most of us live in houses that can be entered by anyone with a large brick. Does that mean that we give up and just leave our doors sitting wide open when we go to work?
  • I really recommend reading the CRYPTO-GRAM newsletter! If there is one person on the surface of this planet who knows what he's talking about, and especially what is unreasonable to expect of users and programmers, it's gotta be Bruce Schneier.

    In your followup post, you seem to imply that the code you write is exempt from the empirical rule that every thousand lines of quality code has three bugs on average. Of course, I've not read any of your code, but if you write error-free code I've got a job for you. I think I'm a pretty good coder, but I come across the "how did I ever overlook *this*" kind of bug more often than I care for.

    The article obviously was not written with coders as the primary audience, so you'll have to read past some of the hyperbole that's just there to get people's attention. As Bruce explains in his latest CRYPTO-GRAM newsletter, his biggest eye-opener was that attaining perfect security (even if the code itself is perfect, which it rarely if ever is) is impossible, and therefore we (as coders and sysadmins alike) should focus more on detection and damage control, and less on perfection. I cannot fault any of his reasoning.

    Anyway, when judging his work, I'd suggest looking more at his freely available writings and less at the secondhand hype like Salon's. Counterpane [counterpane.com] has a pretty complete history of his musings. They're free and very entertaining to read (my favorite is his "doghouse" column), and they paint a much better picture of what to expect from the book.

  • Looking at the history of physical defense from attack (using fortifications) one can see that there was never (nor will there ever be) the Impregnable Fortress. From the Maginot Line [si.edu] (cf firewalls [frankston.com]) to other defensive military structures [comw.org], we find that massive, static fortifications fail because (in part) they are inflexible and therefore brittle.

    Therefore, the strategy is not to build the super-fort, the one that keeps bad guys out no matter what. That doesn't work.

    Instead, modern thinking on security is all about layered defenses which raise the cost of attack to (hopefully) unacceptable levels for the attacker(s), while preserving flexibility and resiliency.

    Although IANAMH (I am not a military historian), I have read enough to generally agree with these ideas. I don't disagree with Schneier's main thesis, I'm just not that surprised by it.

    Here is a fairly interesting article called From Sandbags to Computers: What's New in Field Fortifications and Protective Structures [army.mil]. Maybe we can analogize some of modern military tactical theory to cyber-defense.

    ---
    In a hundred-mile march,

  • Sorry re. typo above.

    ---
    In a hundred-mile march,

  • You're here now and I/we are glad. Too bad the moderators missed the point that we're kinda on topic, as the book (altho not the contents) and the implication of 200K+ readers for one purchase (Re This) [slashdot.org] were lost on them. Stay cool, dude.
  • how low can we go?

    Well if da Taco replies....
  • Hortatory:
    "Marked by exhortation or strong urging: a hortatory speech."

    That's funny, for a second there, I thought that I had a new word to use in my second job: High-Jammy-Pimpmaster Supreme.

    Shucks.

    Rami
    --
  • CmdrTaco is #1
    not surprisingly
  • by Anonymous Coward
    In my opinion, many of the security problems that plague the internet (and computers in general) are caused by the fact that companies still put their priorities in the wrong place. Most programmers still choose performance over stability and security.

    I sometimes wonder about that. I am a programmer who is reading slashdot before going in to check some network code he just wrote, so I might as well write.

    Observations:
    #1- There is a tradeoff between stability and 'interestingness.' This is an early age of computing; is it not better to make rashes of mistakes in creating something that is interesting overall?

    #2- "Mission-critical systems" are designed for cases where people die from some programmer goof, and they do not depend on input from non-missioncritical systems. These probably have warranties, or are important enough that warranties mean nothing.

    #3- Competition provides some level of accountability. Not a terribly great amount, but companies like Sun and Oracle make money because people who really care about stability will pay for it. Really, when NT crashes, do people care that much? As long as they saved, people do consider failure acceptable. (And when someone is demoing software at a job interview, a crash feels kind of comfortable... has happened to me.)

    #4- Code is thought. That's a hacker mainstay, since we tend to wish to give speech status to code. When you're formulating positions on some subject, don't your thoughts start out a bit unsound, but lead to more insightful ideas? Same with code.

    (And a quiet observation to round out the list: The great majority of programmers don't know what assumptions they're making. High-level languages really do hide them because they try to be self-contained little universes.)
  • I'm not so sure about your comment that "most programmers still choose performance over stability and security."
    Most will choose stability, and I think - no, I hope - more are choosing security when programming. But this also falls under the category of human error.


    A lot of programmers are just not aware of their errors. Most programs with buffer overflows may be very stable in theory, but they assume that the user will not make errors. Or they assume that no one will ever enter a string longer than 1024 characters. Or maybe they don't expect anyone to use an interface other than their own, and put their security in the wrong place.

    A lot of programs still in use today were written in a time when the internet was still a relatively friendly place. I hope books like this one will make programmers today more aware of the problems that plague the internet in its new form.
  • just better than the other guy!

    There is neither cause for irrational exuberance nor hopeless despair. Frankly, in 3.5 years of running an NT net I'm disappointed that we've had no break-ins, security compromises, etc.; sure, there's been the occasional StealthBoot.B virus, and one person executed a 'fireworks' attachment w/ a virus that took a half hour to clean up, but that's been about it. Eternal vigilance and watch is my key: try to convince mgmt to keep things simple enough to keep under control, with the ability to recover should a disaster occur, and all will be fine. I'm looking fwd to e-commerce while watching for some bastard to 'make my day' :))
  • It makes the argument against it just a little more difficult. You can hear it now: "We can't guarantee 100% security, but this box will log 99% of attempted intrusions."
  • This sounds very trolllike, but I'll bite.

    Overbroad monitoring was a fact of life in communist countries - indeed, if you look at the lifestyles of many of those who did speak out, they weren't particularly pleasant.

    By putting a full monitoring system into place you immediately allow such a system to occur. It's not being monitored per se - most people, except for extremists, will understand the necessity of some wiretaps etc. It's bulk monitoring. Many nations, such as Britain (and quite possibly the US), have a number of laws on the books to attempt to stop abuse by the government - and one of them is to give you privacy against search and seizure, and, in certain circumstances, anonymity.

    In the UK, a huge storm erupted over the fact that a number of members of the current cabinet were spied upon - mostly for being socialist. The British RIP bill allows a vast quantity of information to be recorded about you - and some of this can be looked at by people you know.

    If I get my jollies from various obscure forms of (legal) sex, then I might not want Plod who works at the police station down the road to know - indeed, I don't think he has a right to know unless he has *very good reason*. If I were tapped because I had met with lots of dodgy terroristy-like people and so on, and the information was destroyed, and those possessing it were under strict regulation - then I would accept it.

    If however, I want to say that the government laws on drugs are wrong, I don't want to be targeted and monitored simply *because* I disagree with governmental policy. I don't think it's right, and I don't think it's just.

    By setting up the means to abuse free speech on a large scale, you ensure you won't have it.

    (PS, yes, I'm bad at articulating, I'm coding VB at the moment - destroys your brain)
  • The right to privacy and the 'right' to anonymity are two completely different things. One cannot have 'rights' except in one's role as a citizen. Therefore granting anonymity to random individuals is not the same as known individuals who are part of a community having a right to their privacy.

    Jefferson bemoaned the rise of large cities full of strangers. His whole ideology is based on the premise of small- to medium-sized communities of people who know one another. Part of being a member of society means being accountable to others in your community. That's why murder laws are possible. So it's important to get beyond the misconception that we are guaranteed a right to anonymity in all matters, so we can decide just what the community does have a right to know about the individuals who make it up.
  • Ha ha, I'm pretty sure my code helps balance out the better coders who don't have as many errors ;) Plus I code in Java mostly, so that particular problem is not one I worry about much. Your post proves how lame this article was, because what you say in your post is logical, makes sense, and makes me want to read the book. (OK, we can't achieve 100% security of a system. What do we do WHEN the system is breached?)
  • by Anonymous Coward

    I have to take issue with the acclaim Bruce gets as a cryptographer. His work doesn't stand up to the standard set by the many, many computer scientists and mathematicians who have brought the field of cryptology to its glory. Bruce Schneier, by his work, is not a cryptographer (nor is he a cryptologist). He is a computer security engineer whose contributions have been mostly practical symmetric-key cryptosystems. He has also been the layman's "crypto expert". But no cryptographer is he; his papers are imprecise and obvious, his book Applied Cryptography is poorly written (his description of his own Blowfish algorithm is shady; he could learn a lot from the Cormen, Rivest, and Leiserson CS Bible). Finally, there is a better crypto book out there anyway: it's called the Handbook of Applied Cryptography by Vanstone et al.

    For what he is, he has been very effective, I think. But to place him amongst the top cryptographers in the world is to be reckless. Can you honestly associate his work with that of luminaries like Ron Rivest, Shafi Goldwasser, Don Coppersmith, Andrew Odlyzko, Scott Vanstone, Menezes, Joseph Silverman, Neal Koblitz? I cringe at the notion.
  • When I read the announcement of the book in the CRYPTO-GRAM newsletter (we all read it, don't we? www.counterpane.com [counterpane.com]), the thing that struck me was the use of the modern-day buzzword Intrusion Detection. IDS certainly seems to be becoming this year's snake oil.

    What is being sold today under the Intrusion Detection System label usually is a cobbled-together set of attack signatures that spuriously trigger thousands of times per day. The ones I've seen do not have the flexibility to do things like suppress the check for ".asp." if a word-constituent character follows it. And needless to say, the way they work, chances are that successful attempts to break in are overlooked.

    This of course is yet another instance of false security. A lot of work still has to be done to make IDS systems work reasonably out of the box, and that is not even taking issues like training into account.

    Mind you, people who actually read the book will know better, but I've lived in corporate hell much too long not to know just where this IDS thing will end. "Do we have a firewall? Check. Do we have an IDS? Check. Hey, what's this guy doing on our systems? Call legal and sue the IDS vendor."

  • The problem is that government has violated the trust given them.

    So have individuals, yet trust is extended to them. There is no monolithic "government". It is made up of people. The "government" didn't commit Watergate. A few people within it did. Just like a few people within society commit crimes. Yet I'm sure you would agree that we shouldn't revoke our trust of all people on the basis of the violations of a few.
  • However, it is funny to note that the manual processes they are replacing haven't the least bit of security at all. Often, the "process" involves one person rubber-stamping a piece of paper and placing it in an open bin on some other person's desk. Yet they insist on impenetrable electronic security.

    You make an excellent point and that has been my experience as well.

    However, Schneier points out some important differences between a manual process and a computer-based one:

    1. Automation: small repetitive illegalities ignored by manual systems can be magnified a very large number of times (e.g. transferring 0.2 cents per transaction to a Swiss account);
    2. Action at a distance: a bank in NY can be attacked from St. Petersburg;
    3. Technique propagation: a new exploit posted to bugtraq can be used by thousands of kiddies within hours of publication.
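    Point 1 above is easy to quantify with made-up numbers - the figures here are invented for illustration, not taken from Schneier: a skim far too small to notice on any single statement becomes real money once automation multiplies it across transaction volume.

```java
// Hypothetical salami-slicing arithmetic: a tiny per-transaction skim
// scaled up by automation. All figures below are invented.
public class SalamiSlice {
    public static void main(String[] args) {
        double skimPerTransaction = 0.002;   // 0.2 cents, in dollars
        long transactionsPerDay = 5_000_000; // assumed volume for a large bank

        double dailyTake = skimPerTransaction * transactionsPerDay;
        System.out.printf("Daily take: $%.2f%n", dailyTake);        // $10000.00
        System.out.printf("Yearly take: $%.2f%n", dailyTake * 365);
    }
}
```

    No single customer would ever dispute a 0.2-cent discrepancy, which is exactly why a manual review process never catches it - only the aggregate is visible, and only to whoever is looking for it.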
  • Government violates privacy precisely because they expect to labor under secrecy.

    The problem is that government has violated the trust given them. Again, look at Watergate and the Nixon enemies list as an example. Its those kinds of violations that make people nervous about giving them more trust.


    ...phil

  • Yet I'm sure you would agree that we shouldn't revoke our trust of all people on the basis of the violations of a few.

    When the people act in the name of "the government", then yes, there *is* a monolithic entity called "the government", and yes, it's reasonable to revoke trust in it.


    ...phil

  • I find this extremely insightful and would love to spend a mod point on it.

    But what I'd love more is a reference telling me where I can read this view of Jefferson's, and others. Can anyone help?
  • I'm afraid that there aren't any moral arguments that can be presented

    Nonsense. The Net servers and accounts are the property of the ISPs and users. Theft is immoral. Quod Erat Demonstrandum.

    Wake up people, there are some very evil people out there

    Yep -- and the levers of government power are their natural homes.
    /.

  • by tooth ( 111958 ) on Thursday August 31, 2000 @02:58AM (#814585)
    ...the small clique that gets its hardcore jollies from Perl programming.

    Hey Taco, This book is for you man!

  • I find it really, really scary when someone of Bruce Schneier's reputation has seemingly given up on the idea of secure software.

    "I got about two-thirds of the way through the book without giving the reader any hope at all", he writes "It was about then that I realized I didn't have the hope to give."

    If that is the case, what possible arguments can we muster against things like internet regulation? If we can't have better living through software, then there are a bunch of special interest groups and three-letter agencies who would be happy to say "just install this box on every network, and we'll be able to trace all hackers."

  • I think that, just like the real world, you can *never* prevent crimes from happening. You can, however, make the criminal's life as hard as possible with heavier punishment, by making it more difficult to break in, etc. You can only make it more difficult, not impossible. So I think the advice of Mr. Schneier isn't too bad: put more emphasis on prevention and recovery.
    How to make a sig
    without having an idea
  • At least, not for vitally important applications and services. There has been a push recently towards providing every kind of service imaginable over the net, but people have already learnt that some things just aren't safe to do online - the notable failures of internet banks such as Egg [egg.co.uk] to keep their sites secure being one recent incident.

    Another foolhardy venture I read about in the paper today (no links, I couldn't find anything online) is of a company in Thailand which has developed a security robot armed with an air gun as standard, but which can be fitted with any other type of weapon, lethal or not. Whilst it can be set to work automatically, it can also be aimed and fired manually, using a command sent over the Internet! Is this the most stupid thing ever or what? Now hackers can shoot people from the comfort of their own bedrooms!

    Despite the recent rush to get online, I can see that the craze will die down in years to come, as people and companies realise that some things are much safer in a real world environment, where factors such as the difficulty of interception and physical location make every transaction far more secure.

    At the end of the day, the amazing lack of online security and privacy means that the net is good only for unimportant information and trivial communications. Who wants every hacker with a root exploit to know their most personal and important details?

  • I just wanted to complain that, though this looks like a fascinating article, I can't read it, because while at work Salon.com is blocked and deemed non-business related, though every other technical magazine is fine. (Notice I'm reading /.)
    --------------------
  • No system - computer or otherwise - is perfect. For any lock there's a way to get around it. They just deter the amateurs.

    Is this a situation where there is no hope? I don't think that's the right question or conclusion. We all have houses with locks that are adequate 99.9% of the time. They will fail from time to time but that is something we've all accepted (well, 99.9% of us:).

    Computer security is the same way, and Schneier's conclusion seems to me to be common sense. Not that he doesn't, probably, present all sorts of interesting anecdotes along the way.

    To be eternally secure we'd have to be eternally vigilant. No one can do that. So we do a best effort sort of thing, get on with our lives and have to be prepared to pick up the pieces when it falls apart. What else is new.

    IMHO, as per

    J:)
  • The argument against internet regulation remains the same. If installing extra hardware on every network would provide real security, then I think Bruce Schneier would endorse it. Rather, I believe the point is not that we are not yet capable of a secure technology (whether implemented in hardware or software), but that there is no conceivable technology that is truly secure. The best you can do is realistically assess risks and mitigate them where it's cost-effective to do so.
  • I managed to get one of those pre-release galleys and would have to say that it's Bruce's best book from what I've had time to read so far. I highly recommend it to anyone even remotely interested in computers. It's definitely an eye (and mind) opener. ;-)
  • I agree, or perhaps the three-letter agencies will start to distribute a copy of a program with a couple of backdoors in place which monitor all strings like "bombs, guns, explosives, Dilbert".

    I mean, sure, Open Source helps a tremendous amount; however, are you going to audit every StarOffice release that comes along before you install it?

    I like his "dancing pigs" analogy because it fits nicely into this paranoia mentality on the net. My friends are constantly sending me crap (similar to Elf Bowling), but I refuse to open it on Windoze98. When they ask why, and explain that they have all the McAfee updates in place, I just say "that's nice" and delete the files anyway. All it takes is some nice little backdoor (a la Netbus) that is programmed NOT to look like a virus... and either A) bye-bye data, or B) some kiddie is watching everything you are doing.

    **shudder**

  • by harmonica ( 29841 ) on Thursday August 31, 2000 @03:24AM (#814594)
    Bruce has simply given up on the idea of perfect security. I don't think that he (or anyone else) will be accepting some black box that promises magic. Having to fight (in public) against all kinds of agencies wanting to restrict our personal freedoms is another matter; that will not change with or without Bruce's book.
  • If that is the case, what possible arguments can we muster against things like internet regulation?

    One very important argument: our freedom. If we wish to take the risks of communicating online, then it's partially our own fault. I believe in taking personal responsibility for my system; that's why my personal LAN is protected by a carefully managed firewall, and I do backups of any crucial data regularly. Any sensitive information (such as code I'm working on) is kept on another machine, on another network, that is not directly accessible to the outside world. The only way to get it would be to slice through my firewall, gain access to one of my workstations, and then hack the system containing the data, which they don't know is there. If they do all this, which is very possible, I still have backups.



    "Evil beware: I'm armed to the teeth and packing a hampster!"
  • by Shimrod ( 107031 ) on Thursday August 31, 2000 @03:29AM (#814596) Homepage
    In my opinion, many of the security problems that plague the internet (and computers in general) are caused by the fact that companies still put their priorities in the wrong place. Most programmers still choose performance over stability and security.

    An example:
    The article mentions buffer overflows, which, in my experience, have been virtually eliminated in a language like Java. Sure, checking array bounds every single time may be a performance hit, but I will choose a performance hit over a security hit any day.

    Basically, when you write software, don't make assumptions. Not about anything. I've seen plenty of programs crash because they tried to access the network and found it was not installed, or tried to play a sound and found the device busy.

    We may not be able to fix the people, but I think fixing the software is possible. All it requires is ridding the world of software licences that deny responsibility. Once financial liability is at stake, corporations will put a lot more time into security, and hopefully a lot less into screwing each other for financial gain.
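    The bounds-checking point above can be sketched in Python, whose runtime, like Java's, checks every array access: a hostile or buggy index becomes a catchable exception instead of a silent read of adjacent memory, as an unchecked C access might be. (The function and data here are invented for illustration.)

```python
def read_field(record, index):
    """Fetch a field from a record; the runtime bounds-checks every access."""
    try:
        return record[index]
    except IndexError:
        # In unchecked C, a bad index could read (or smash) adjacent memory;
        # here the failure is contained and explicit.
        return None

fields = ["alice", "s3cret", "admin"]
print(read_field(fields, 1))    # a legitimate access -> s3cret
print(read_field(fields, 999))  # out-of-range input, safely refused -> None
```

    The per-access check costs a little time, which is exactly the performance-versus-security trade the comment describes.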
  • Douglas Hofstadter's [indiana.edu] brilliant book "Gödel, Escher, Bach" [fatbrain.com] comes to mind. It's a very good read, and could have suggested to Schneier 20 years ago that every formally defined system is going to have holes in it.
  • by mvw ( 2916 ) on Thursday August 31, 2000 @04:38AM (#814598) Journal
    Yep. Schneier is a bit overreacting, like Bill Joy lately (or people reacting to Gödel's theorem, or Turing..).

    Going from one extreme to the other.

    Of course you can't have full safety, but that holds true for the real world as well. You can't prevent anyone from getting into a building, but you can make it so hard that only a few will manage. And you have to pay for that, one way or another.

  • MPAA will adore the cool 404 page!
  • Yes. And he falls into the same trap, as if this would make these systems totally useless.
  • by ZanshinWedge ( 193324 ) on Thursday August 31, 2000 @04:47AM (#814601)
    Damn straight! Well put.

    Simply because it's not possible to create perfect security does not mean that we should give up the ghost and go home. Quite the contrary: it is simply an indication that computers are indeed a part of the real world. Are real-world banks 100% secure? Do they never get robbed? Obviously not, but we still have trust in them. Simply because "you cannot build a robbery-proof bank" does not mean that we should give up banks (and their like) altogether. And, while the Fort Knox gold repository isn't precisely invulnerable, it is sufficiently close to being so for the purposes necessary.

    Computer security will necessarily ebb and flow as people recognize the issues and concerns and understand how to deal with them, etc. Currently, there are many examples of poor and lax security because A) the favored model for computing has many security problems, B) unix is not a very secure operating system (face it guys, I love unix to death, but in many ways it is fundamentally broken when it comes to security [luckily, unix is so flexible that you can patch up the huge gigantic rents enough to make a box pretty damn secure]), and C) everyone and their mother has some sort of semi-important server, which, when combined with D) the fact that very few people actually understand even the basics of making a network / server / system secure, can only cause problems.

    In the semi-near term, one of two things will most likely happen. Either 1) people will in general become more security conscious, or 2) programs and systems will be made more inherently secure. I think 2 is much more likely, though a combination of 2 and a little bit of 1 would go a very long way.

  • Strawman arguments for privacy only benefit those with something to hide, and allow criminals and terrorists to plan their campaigns behind the shield of anonymity.

    {sarcasm on}

    Then I'm sure you'll have no problem handing over your social security number, street address and phone number, credit card numbers (with exp. dates, please) and bank account info, along with that for your kids. What? You're not going to? Why, then, you must have something to hide!

    {sarcasm off}

    That's the first problem with your stance - you do have an expectation of privacy, and I don't think you're a terrorist or a child pornographer.

    The second problem is that parts of the government have historically shown an annoying tendency to violate privacy rights for no particularly good reason. Watergate and the rest of the Nixon era are a classic example, particularly the 'enemies list'. (For those who don't remember, Nixon and his cronies ordered the FBI and the IRS to investigate and cause trouble for people whom Nixon considered his political enemies - classic misuse of power.) These reasons are enough for people to want to protect their privacy themselves, since we've already seen we can't rely on the government to do it.

    Wake up people, there are some very evil people out there, and it is our duty as decent Christians to do everything we can to help stop them.

    Many people, myself included, find it evil for the powers-that-be to trample all over the innocent in pursuit of the evil. And, given the recent track record of the government in the issue of things like wrongful convictions (remember that the gov. of Illinois has put all death sentences on hold, because of massive uncertainty in the process, or you can check here [truthinjustice.org] for organizations which track this sort of thing), many people are not sure the government is the best organization to give our absolute trust to.

    Also note that, while the vast majority are probably decent, there are a lot of people who are not Christian. You're not calling their decency into question, are you?


    ...phil

  • Just knowing this person's reputation as a poster here (reading some of his previous posts), I could chalk this up to being semi-trollish, maybe flamebait.

    But you know why I wouldnt?

    It is wrong to judge a person for his beliefs, as whoever modded this as flamebait did.

    That said, if this is not a troll, people do truly believe things like that... although we could have done without the final line:

    "It is our duty as decent Christians to do everything we can to help stop them."

    That was rather trollish Jon...

    Jeremy
  • by Forgotten ( 225254 ) on Thursday August 31, 2000 @08:18AM (#814604)

    It's almost enough to convince you to stop choosing the dancing pigs.

    This last line of the article is telling. A user of an Internet-connected computer is in possession of a powerful, potentially dangerous tool - maybe it's been long enough that people look at them as some kind of silly toy. On the one hand computing ought to be fun and people shouldn't have to do it in fear (implying they should be given software tools that, though they can never be completely secure, at least aren't braindamagedly insecure). On the other hand they should probably need some bare cognizance of what they're getting into and what they might, through ignorance and negligence, allow someone else to do in the world. Dancing pigs (or penguins) might be cute, but people need to consider that there can be real-world consequences of what they allow to be done to their networked computer.

    We tell our kids not to take toys from strangers, but then we go and download and play with the software equivalent without a second thought...

  • World-class cryptography is pretty useless, Schneier notes, if the administrator's password is set to "password."

    Doh! There goes my carefully planned security system. Thanks, Bruce.
  • There are two approaches to this I feel:

    1) Use a morphing chip. Transmeta could make a package for server builders that generates a unique assembler/microcode for you; you build the server source and presto, no stock buffer-overrun payload will get you, since this is your ISA, not someone else's. The payload will dump core and the intrusion will be short-lived.

    2) Use VMware and a Linux guest as the portal to the DSL/cable modem, with a Linux host monitoring the md5 tripwire results of the guest. Should the results change, kill the vmware process and start a new one from a fresh copy of the virtual image. The masq'd VPN on the host is not penetrated and script kiddies get limited fun. This is what I call an Etch'a'sketch server.

    Hedley
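    The integrity-check half of idea 2 can be sketched in a few lines of Python; the file paths and restart command would be specific to your setup, so everything here is a hypothetical stand-in for a real tripwire arrangement:

```python
import hashlib
import subprocess

def fingerprint(paths):
    """Hash each watched file, tripwire-style, returning {path: digest}."""
    sums = {}
    for p in paths:
        with open(p, "rb") as f:
            sums[p] = hashlib.md5(f.read()).hexdigest()
    return sums

def check_and_reset(paths, baseline, restart_cmd):
    """If any watched guest file changed, restart from a fresh image
    (shake the Etch-a-Sketch) and re-baseline; else keep the old baseline."""
    if fingerprint(paths) != baseline:
        subprocess.run(restart_cmd)  # hypothetical, e.g. ["./restart-guest.sh"]
        return fingerprint(paths)
    return baseline
```

    Run from the host on a timer against files inside the guest; any drift from the clean baseline throws the guest away and restarts it.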
  • by BranMan ( 29917 ) on Thursday August 31, 2000 @08:48AM (#814607)
    Don't we have a solution for this "special OS" now? Linux and/or BSD allow you access to all the source code, and will accept modifications. All you do is code up the changes, add your "invisible" bit to the file access and change all related tools to handle it correctly. Then submit the patches - if it gets accepted into the next baseline you're done. The nice thing about Open Source stuff is that if something is a good idea, and helps *some* people, it will be adopted and available to ALL.
  • The article mentions buffer overflows, which, in my experience, have been virtually deleted in a language like Java. Sure, checking array bounds every single time may be a performance hit, but I will choose a performance hit over a security hit any day.

    Not knowing anything about Java, I can't comment much but I do have this question:

    If my C program checks all user input, the only thing which could then take me down is a program/compiler bug, no? Do I not get all the benefit of bounds checking without the performance-sapping problem of checking the calculated values on every loop iteration?

  • by rjh ( 40933 ) <rjh@sixdemonbag.org> on Thursday August 31, 2000 @10:13AM (#814609)
    The proof, as they say, is in the pudding. I can definitely associate his work with Ron Rivest, for instance. Rivest wrote RC6, an AES candidate which had 15 rounds broken. Schneier wrote Twofish, an AES candidate which has fared much better. RC6 isn't going to be selected as AES (I'll wager $20 on it), but Twofish is still in the running.

    Insofar as "Bruce Schneier, by his work, is not a cryptographer (nor is he a cryptologist)"... I've got to recommend that you talk to your dealer about the purity of your rock. Strictly speaking, a cryptographer is one who develops and devises codes and ciphers. Schneier has written lots of ciphers, ranging from the lousy (MacGuffin) to the profoundly brilliant (Blowfish, and maybe Twofish).

    He has also published cryptanalytic results against the major AES candidates, including (I believe) RIJNDAEL. I like RIJNDAEL. Joan Daemen (I'm misspelling the name here) is a brilliant cryptographer, responsible for a lot of extremely high-quality stuff--and if Schneier, et al., can cryptanalyze RIJNDAEL, that says something about Schneier's skill.

    As for Applied Cryptography being shady, I'm going to have to ask you for some verification. How is his description of Blowfish "shady"? After reading his description of Blowfish I was able to implement it, from scratch, without looking at any source code. My version passed the Blowfish compatibility vectors, so apparently his description was clear and concise.

    Finally there is a better crypto book out there anyways: it called the Handbook for Applied Cryptography by Vanstone et al.

    Err... no. The Handbook of Applied Cryptography (notice the name), edited by Menezes, Vanstone, et. al., is a very good book. I refer to it often. However, I refer to Applied Cryptography more--why? Because I want to know the accepted way of how to do something, not a formal proof that the accepted way works. If you want to know how to encrypt the last block of a CBC mode cipher so that the cleartext is the same size as the ciphertext, Schneier tells you this in the space of a paragraph or two. The Handbook of Applied Cryptography goes into much more mathematical rigor.

    This is not to say that the Handbook of Applied Cryptography is inferior: it's not. The two books are meant for different audiences who need different things, and claiming that one is superior to the other is pretty specious.

    His papers are imprecise and obvious

    ... Are you telling me that you could have cryptanalyzed RIJNDAEL?

    Seriously. Get a grip. Schneier is no demigod, that much is true; he's a human being just like anyone else, and occasionally screws it up past all recognition (just like anyone else). However, he is a hell of a lot better than I am.

    My own beef with Schneier is something totally different. Schneier works as part of a team, not as a solo operator. His best work has always been collaborative, with Doug Whiting, Niels Ferguson, etc. While I don't begrudge Schneier his fame--I think he's more than earned it--I do wish that people, particularly Slashdotters and the news media, would realize that Schneier may be the crypto version of Buckaroo Banzai, but--just like Buckaroo Banzai--he's nothing without his crew of Hong Kong Cavaliers and Blue Blazes Irregulars backing him up.
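    The last-block CBC trick mentioned above is usually called ciphertext stealing. Here is a toy sketch of the mechanics in Python, using a deliberately fake "block cipher" (keyed byte-wise addition, NOT real crypto), since the point is the length-preserving bookkeeping rather than the cipher itself; it assumes a plaintext longer than one block with a ragged final block:

```python
BLOCK = 8

def enc_block(block, key):
    # Toy invertible stand-in for a real block cipher: add key bytes mod 256.
    return bytes((b + k) % 256 for b, k in zip(block, key))

def dec_block(block, key):
    return bytes((b - k) % 256 for b, k in zip(block, key))

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cts_encrypt(pt, key, iv):
    """CBC with ciphertext stealing; assumes len(pt) > BLOCK, ragged tail."""
    blocks = [pt[i:i + BLOCK] for i in range(0, len(pt), BLOCK)]
    d = len(blocks[-1])
    prev, out = iv, []
    for b in blocks[:-1]:
        prev = enc_block(xor(b, prev), key)
        out.append(prev)
    cn = out[-1][:d]                       # steal the head of the last full block
    padded = blocks[-1] + b"\x00" * (BLOCK - d)
    out[-1] = enc_block(xor(padded, out[-1]), key)
    return b"".join(out) + cn              # same length as the plaintext

def cts_decrypt(ct, key, iv):
    n_full = len(ct) // BLOCK
    d = len(ct) - n_full * BLOCK
    full = [ct[i * BLOCK:(i + 1) * BLOCK] for i in range(n_full)]
    cn = ct[n_full * BLOCK:]
    stolen = dec_block(full[-1], key)      # = (Pn || zeros) xor C_{n-1}
    pn = xor(stolen[:d], cn)
    chain = full[:-1] + [cn + stolen[d:]]  # reconstruct the original C_{n-1}
    prev, out = iv, []
    for c in chain:
        out.append(xor(dec_block(c, key), prev))
        prev = c
    return b"".join(out) + pn
```

    The ciphertext comes out exactly as long as the plaintext, which is the property the paragraph above is talking about; a real implementation would of course swap the toy cipher for AES or the like.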
  • How about me? 3 digits baby! :)
  • I don't know, but this article was just annoying. Its first premise, that books about hacking and security are boring, is just untrue for me; my favorite books are on those two subjects. It then promises this book will be devoid of boring facts. Of course, when you strip away those boring facts you're left with vague, Jon Katz-like statements. It then goes into full my-GOD-we-are-all-going-to-DIE-and-there-is-NOTHING-we-can-do-about-it mode. Was the author really so depressed over the state of computer security that he was thinking about taking his own life? What the heck is up with that? Yes, 100% security is not possible. Yes, Windows 98 (an operating system never designed to be secure) is not overly secure. Just because of that, should we give up even attempting to have security?
  • "If J. Random Websurfer clicks on a button that promises dancing pigs on his computer monitor, and instead gets a hortatory message describing the potential dangers of the applet," Schneier writes, "he's going to choose the dancing pigs over computer security any day."

    Make it dancing penguins and I'll give up security too. <grin>

    Wait a minute... hortatory?

    "Free your mind and your ass will follow"

  • live in constant fear. Yeah! The yin and yang of the Internet: Freedom (good) vs. Fear for Security (bad). Just IMHO.

  • by Alexius ( 148791 ) <alexiusNO@SPAMnauticom.net> on Thursday August 31, 2000 @03:34AM (#814614) Homepage
    It seems to me that while there may not be a hope of totally securing anything, that doesn't mean that the act of breaking in will be worth it. Right now, I'm sure that the FBI's headquarters isn't totally secure, but the effort to break in isn't worth it to the average individual who would do it just for the novelty of it.
    --------------------
  • Who the hell are the Vlad Clones?
    Anyway, how many UIN's are on /. these days?
    ie. do I still count as a newbie !?
  • by PHAEDRU5 ( 213667 ) <instascreed.gmail@com> on Thursday August 31, 2000 @03:36AM (#814616) Homepage
    A long, long time ago I was utterly taken with formal methods. I lived and breathed Z and VDM.

    Then one day I read a paper about the *limits* of formal methods. The one phrase summary of the article was that once a formally-verified program meets the real world, each time it's executed is a conjecture.

    The paper seemed to be an argument against formal methods and as you might guess, all the heavy hitters with PhDs and post-doctoral work to defend generated a storm of complaint, the one phrase summary of which was that while formal methods might not be perfect, they shouldn't be abandoned.

    I mention this because of the author's original notion of protecting ourselves by wrapping ourselves in mathematics, and his current appearance of despair.

    It appears to me that the book's more a reaction to a crisis in faith than anything else. I don't think anyone really expects security to be uncrackable - we've got history going back to the pharaohs on that one - but neither should we throw the baby out with the bath water. I mean, I think I've seen at least one reaction that uses this article to predict the *imminent* *death* *of* *the* *internet*. As if!

  • I find it really, really scary when someone of Bruce Schneier's reputation has seemingly given up on the idea of secure software.

    Wasn't it that great political philosopher, Thomas Paine, who said that the price of a secure network is eternal vigilance? Or something like that.
  • I think you may have to wait until UINs grow to seven digits before your six digit number looks low.
  • so long as there is reciprocal transparency so that I know the same details about every person who accesses mine.

    Government violates privacy precisely because they expect to labor under secrecy. If they didn't have that "expectation of privacy" that you talk about then they wouldn't trample all over the innocent.
  • I'm afraid that there aren't any moral arguments than can be presented against allowing the agencies that protect us and our children to attempt to make the net a safer, more secure place for everyone. Strawman arguments for privacy only benefit those with something to hide, and allow criminals and terrorists to plan their campaigns behind the shield of anonymity. If we were supposed to be able to hide our wrongdoings, the Lord wouldn't have made us all different.

    Anonymous mailers, encryption, and online security are just the application of the ideals of privacy and personal rights in our technological society. Certainly they are used by criminals, in much the same way criminals now use banks, cars, and hand tools to perpetrate their crimes. At some point, the majority of the online population will be computer-savvy enough to use these tools. That is the goal of the developers: to make online security and privacy as available as physical security and privacy.

    I'll accept the risks. I'll do my best to prevent other people from making that decision for me.

    They're coming for the privacy nuts now. Who's going to be left when they come for you?

  • Wake up people, there are some very evil people out there, and it is our duty as decent Christians to do everything we can to help stop them.

    I take offence at this comment: there are many other religions out there. Don't assume that because you are a "decent" Christian you are the only one, or the only group, that cares. This is an issue that affects humanity as a whole, and there is no place for religion in this fight.

  • The fraud committed against Egg had nothing to do with clever hacking. It was a fairly dumb, run-of-the-mill fraud attempt that could have happened IRL as well. More details in The Register [theregister.co.uk].
  • OK, that sounds really scary. Now are the rest of you plagued by all these menaces? I've had net access since 1987, been using the web since you had to telnet to the NeXT browser at CERN and had a Linux box on a full-time connection for three years, running telnetd, wu-ftpd and other services. I use the same (Crack-able) password all over the place, and for a while used my default web password as my Linux root password! I use FTP and telnet, send credit card data over unsecure http, check for security updates maybe every couple of months and use my real email address on Usenet. (Obviously if I were responsible for a server with real consequences I would keep much more up to date.) I would never dream of encrypting anything -- the only thing I can see that accomplishing is losing my own data.

    And nothing bad has ever happened to me. Am I just lucky or is this article a little hyperventilated?

    There are an average of five to 15 bugs in every thousand lines of code, which means that Windows 98 is riddled with somewhere between 90,000 and 270,000 oopsies.

    It's probably the use of "oopsies" by an adult that set me off, but the first half of that statement seems exaggerated and the second half is just bad logic. What, that absurd claim about "65,000+" documented bugs in W2K isn't absurd enough?

    ---------

  • Now hackers can shoot people from the comfort of their own bedrooms!

    So if we put a BFG9000 on it....

  • It read more like a book review with a couple of quotes thrown in from the author....
  • pshaw, you new fangled users don't know what low is....;)
  • If that is the case, what possible arguements can we muster against things like internet regulation.

    I'm afraid that there aren't any moral arguments than can be presented against allowing the agencies that protect us and our children to attempt to make the net a safer, more secure place for everyone. Strawman arguments for privacy only benefit those with something to hide, and allow criminals and terrorists to plan their campaigns behind the shield of anonymity. If we were supposed to be able to hide our wrongdoings, the Lord wouldn't have made us all different.

    With the current increase in the amount of material like child pornography (up 19% last year according to FBI statistics) and terrorist manifestos, we need to have safeguards in place to deal with these threats. Unfortunately, the ivory tower academics that designed the net didn't think to include such measures, and as a result we're forced to add them now.

    And what is the result? People with such petty, insignificant lives that no self-respecting law enforcement agency would want to snoop on them are complaining that their "privacy" is being violated. Wake up people, there are some very evil people out there, and it is our duty as decent Christians to do everything we can to help stop them.

    ---
    Jon E. Erikson

  • > I'm just about finished with it, and will be doing a review soon - it's quite good.

    What did you do, wait until you were almost finished with the book before posting this, so your book wouldn't get slashdotted?

    --
  • I'm just about finished with it, and will be doing a review soon - it's quite good.

    Well hell I don't need a review now.
  • I'm not so sure about your comment on Most programmers still choose performance over stability and security

    Most will choose stability, and I think - no, I hope - more are choosing security when programming, but this also falls under the category of human error.

    You say: We may not be able to fix the people, but I think fixing the the software is possible. All it requires is ridding the world of software licences that deny responsibility. Once financial gain is at stake, corporations will put a lot more time into security, and hopefully a lot less in screwing eachother for financial gain.

    No matter what we do there will always be human error as a factor. No matter how much time is spent on a project looking for security holes there will, dare I say, always be another hole to be found by the bored teenager, security professional, etc... I think that is what Schneier is trying to say. No matter what we do there are always security problems and we just have to be prepared for those problems the best we can.
