
Security Engineering

SilverStr writes: "With all the recent discussion on organizations rethinking their security strategies, I thought I would do a review on one of my favorite books. I have stayed pretty quiet on /. over the years, but security is something I don't think developers anywhere should be taking lightly. Hopefully some of them will get something out of my review and pick this book up." Read on for the rest of his review of Ross Anderson's Security Engineering.
Security Engineering: A Guide to Building Dependable Distributed Systems
author Ross Anderson
pages 612
publisher Wiley Computer Publishing
rating 9.5
reviewer SilverStr
ISBN 0-471-38922-6
summary An exceptional book on the dynamics of security engineering. A must-have for the shelf of any developer who cares about digital security and its impact on system design.

Introduction

The complexities of security engineering go beyond understanding buffer overflows and realizing that simply patching your systems is not enough. Many a Slashdot article (particularly the latest one on Louis Bertrand's OpenBSD presentation) has comments on the failings of code design. In his book Security Engineering: A Guide to Building Dependable Distributed Systems, Ross Anderson goes into impeccable detail on building systems resilient to malicious attack, abuse and programming error.

The book is well laid out, and in my opinion Ross has segmented the topics in a way that makes the sections easy to read. The first section focuses on the core concepts of digital security, such as protocols, access control and cryptography, and is written so that you do not need a technical background to understand it. It was refreshing to read Ross explain cryptography in such a non-threatening manner that you can follow it without having to refer to Applied Cryptography by Bruce Schneier. Many authors have tried this in the past, and failed.

The second part of the book goes into considerable detail about practical and important applications such as banking and network attack and defense. I have to be honest with you: I don't read a lot of books on software engineering that go into radar jamming and nuclear command and control systems, and I found that sort of discussion exciting. (Although I have no interest in writing security code for the next cruise missile that will move the world to a higher DefCon level quicker than in movies like War Games, I was still quite interested in the approach.) Many of the examples and case studies that Ross explains bring the whole topic together and strengthen the point about security engineering and its application to each system. Further to this, Ross' writing made me shudder to think how popular applications like bankcard systems came to be written so weak and vulnerable. Before the book's main content, Ross includes an explanation of the legalities of publishing some of this information. It wasn't until I started reflecting on some of the case studies that I realized how potent and valuable some of this information is, especially when I thought of potential risks that should have been mitigated and were not. Ross' examples should be considered textbook cases, though, and not information to be drastically abused.

The third part looks into the organizational and policy issues faced in security engineering. From office politics to security and the law, this section goes into depth about managing security engineering and its effects on business and people. Compared to the rest of the book I found some of the topics in this section too short on detail, feeling like just a glancing blow, but they still give the reader enough information to seek out more in-depth material if they so choose. (Check out the bibliography for such information.) Discussing issues such as Carnivore, digital copyright, and system evaluation and assurance, this section rounds out the book quite well.

Why Consider This Book

If you are a developer considering security (which should be all developers, anyway), this book provides a good balance on security engineering and serves as an excellent reference work. It can work well as a textbook introducing developers to security engineering, and as a good introduction to the many dynamics of digital security. (Hint to COMP professors outside of Cambridge: get your students to read this book -- after you do, of course.)

Although you might not be able to use the section on radar jamming and its countermeasures directly, you may still be able to apply its principles to writing protected electronic systems while working on that new wireless system for Ma Bell. And finally, you should use this book as a brick in the foundation of learning how to write secure code.

Something else you should consider in this book is the extensive bibliography in the back. If you want to follow up with more detailed information on any one section, Ross did a tremendous job of providing pointers to research papers and work done by others. This in itself made the book well worth the money; I have already read up on and used some of the works I hadn't come across before.

Wrap Up

If you are going to read this book and look for samples to write secure code, you are going to pick up the wrong book. This book is a cornerstone in building a strong foundation and understanding of security engineering. It goes beyond the practical components of buffer overflows, stack smashing and code audits, and takes the reader to a new plane of understanding when it comes to security engineering. It is not a cookbook for lazy script kiddies to learn how to attack weak systems, but it can be used to learn from others' mistakes. You don't have to be a developer working on security systems to gain knowledge from this text. Areas in the book such as the one on e-commerce can very much help bridge the chasm of bad web application design, and can keep you out of the trap of fast application development that is full of vulnerabilities and exposes users to unnecessary online risk.

It is the responsibility of all developers to understand the risks they expose their software and their clients to. I am sure some developers will have some excuse about how their web forms and applications do not require them to learn such silly things. That's fine. Hopefully I will never need to use your systems. For the rest of us, though, this is a must-read.

Table of Contents

Part One

  1. What Is Security Engineering
  2. Protocols
  3. Passwords
  4. Access Control
  5. Cryptography
  6. Distributed Systems

Part Two

  1. Multilevel Security
  2. Multilateral Security
  3. Banking and Bookkeeping
  4. Monitoring Systems
  5. Nuclear Command and Control
  6. Security Printing and Seals
  7. Biometrics
  8. Physical Tamper Resistance
  9. Emission Security
  10. Electronic and Information Warfare
  11. Telecom System Security
  12. Network Attack and Defense
  13. Protecting E-Commerce Systems
  14. Copyright and Privacy Protection

Part Three

  1. E-Policy
  2. Management Issues
  3. System Evaluation and Assurance
  4. Conclusions

Bibliography


You can purchase Security Engineering from Fatbrain. Want to see your own review here? Just read the book review guidelines, then use Slashdot's handy submission form.

Comments
  • by ksw2 ( 520093 ) <[moc.liamg] [ta] [retaeyebo]> on Thursday February 28, 2002 @01:05PM (#3084897) Homepage
    This reminds me, looks like the speeches from Defcon [defcon.org] 9 will be going up online soon.
  • Sugaring the pill (Score:3, Insightful)

    by GSV NegotiableEthics ( 560121 ) <autecfmuk001@sneakemail.com> on Thursday February 28, 2002 @01:05PM (#3084898) Homepage
    Let's face it, most developers would rather gnaw their own left leg off than read about something as dull as building secure systems -- it's so often left to a security audit or (even more often) a malicious cracker to discover the inevitable vulnerabilities. So it's nice to see a book that capitalizes on the glamor of nuclear defence systems to try and kickstart interest.
  • If you are going to read this book and look for samples to write secure code, you are going to pick up the wrong book. This book is a cornerstone in building a strong foundation and understanding of security engineering.

    If one was looking for a book with samples of writing secure code, does anyone have any recommendations?
    • If one was looking for a book with samples of writing secure code, does anyone have any recommendations?

      http://www.openbsd.org/slides/musess_2002/index.html [openbsd.org]

      This website gives a few tips on avoiding the main pitfalls of insecure coding, including how to avoid buffer overflow exploits.

    • If one was looking for a book with samples of writing secure code, does anyone have any recommendations?

      I heartily recommend the book Building Secure Software (How to Avoid Security Problems the Right Way) [buildingse...ftware.com].

      It also shows that security is mostly a human problem.

      On the other hand, I would like to know how crackers find security holes. For example: how was the buffer overflow in XP's PnP service found? Did the guy sort of fuzz [sourceforge.net] it?

      What I mean is: Before trying to secure software, it would be nice to know how the bad guys (or the security researchers) find the weaknesses.

      • There are many good articles on how to do this. Check @Stake and SecurityFocus. In essence, a typical buffer overflow is discovered by hammering away at software until you can reproduce a bug by making it crash. You then reproduce the crash under a debugger such as SoftICE, trace the stack, and figure out just what is going on in memory. You then craft your input strings to hijack the registers so that execution jumps to a location of your choosing and begins running your code.

        A good example is: http://www.atstake.com/research/reports/wprasbuf.html

        It's a simple exploit of RASMAN.EXE and takes you through, step by step, how an exploit is discovered, researched and ultimately exploited. It's an interesting subject to say the least, but not for the faint of heart. Brush up on your assembly!
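
        To make that concrete, here is a minimal, self-contained C sketch of the crash-hunting loop. The parse_request target and its 64-byte buffer are made up for illustration (they are not taken from the RASMAN report): a deliberately unsafe parser is fed ever-longer inputs, each in a child process, and a crash signal marks the input length worth examining in a debugger.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        /* Toy "target": copies its input into a fixed 64-byte buffer with no
           bounds check -- the classic mistake described above. */
        static void parse_request(const char *input)
        {
            char field[64];
            strcpy(field, input);          /* overflows once input exceeds 63 bytes */
            (void)field;
        }

        int main(void)
        {
            /* Hammer the target with ever-longer inputs and watch for the crash,
               the same idea as fuzzing by hand. */
            for (size_t len = 8; len <= 4096; len *= 2) {
                char *input = malloc(len + 1);
                if (input == NULL)
                    return 1;
                memset(input, 'A', len);
                input[len] = '\0';

                pid_t pid = fork();
                if (pid < 0)
                    return 1;
                if (pid == 0) {            /* child runs the risky code */
                    parse_request(input);
                    _exit(0);
                }
                int status;
                waitpid(pid, &status, 0);
                if (WIFSIGNALED(status))
                    printf("crash (signal %d) at input length %zu -- time for the debugger\n",
                           WTERMSIG(status), len);
                else
                    printf("length %zu handled without crashing\n", len);
                free(input);
            }
            return 0;
        }

        Real fuzzers randomize content as well as length, but even this crude loop reproduces the "make it crash, then look at memory in a debugger" workflow described above.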
    • I haven't read the book, but for something with a perhaps more practical approach, check out the Secure Programming for Linux and Unix HOWTO [dwheeler.com].
  • Secrets and Lies (Score:5, Informative)

    by ksw2 ( 520093 ) <[moc.liamg] [ta] [retaeyebo]> on Thursday February 28, 2002 @01:08PM (#3084917) Homepage
    He mentioned Applied Cryptography... I wanted to point out Schneier's latest book Secrets and Lies, kind of a real-world threat analysis in contrast to the mathematical analysis of Applied Cryptography. Good read.
  • by Anonymous Coward
    Another book which is even more accessible to non-techies (perfect for clueless CEOs and management in general) is Secrets and Lies by Bruce Schneier. An excellent high-level view of why security is often sub-standard in most organizations, with many useful discussions on what to do and not to do with regard to security in general. Every man and woman should read this excellent book!
  • by Dr. Bent ( 533421 ) <ben&int,com> on Thursday February 28, 2002 @01:17PM (#3084983) Homepage
    Software development, on the whole, is not practiced as an engineering discipline. It needs to be. Software engineers should be certified just like civil engineers or electrical engineers, and when something breaks, the company shouldn't be able to hide behind the EULA...they should be fully accountable.

    This would make software much more reliable and significantly reduce the maintenance cost for users.
    • I don't know about the certification bit...perhaps it would be a kudo, but not a requirement...hmmm...

      But I do agree with the engineering approach. (By the way, I'm not an engineer, but a graduate of the much-maligned Business School - Information Systems.)

      Software design, and the structure of the programming/QA/design teams, seems really weak to me. To get a good program (virtually bug-free and good security...realize that these are one and the same) the structure must be quite rigid. We can't all go off coding as we wish, and just throwing the design and QA portions together on the fly.

      I know I mentioned it before, but Mark Minasi's "The Software Conspiracy" is a great book that lays out these principles in overall detail. Look at the references to find more concrete/detailed examples of structured coding and design.

      If we ever expect to get decent software, and secure software, we must take a more rigorous and structured approach to software creation and design. Until this gets done, we'll all be running around BEHIND the 8 ball trying to patch things after the fact. I can attest that this approach is a losing one - in any discipline. Security has to be built in up front. Writing good code and having a very structured development environment both need to be carefully engineered.

      If we built bridges the way we build software, about 30% of all bridges would catastrophically fail. Now, you'll say, "oh, software isn't as important, or at least not most software." Well, sure, that's true. But I don't think the reason we build good bridges the first time is because a failure could kill someone - although that's probably one of the reasons. The most likely reason is that REBUILDING bridges when they fail is very expensive. When that happens, we know who to blame, and the costs that follow are very apparent.

      What happens in software is that the costs are not tracked and traced to their source. If they were, we'd all of a sudden realize that the cost of crappy software is HUGE! I don't recall the source, but I think the estimated cost of the NIMDA virus was something like 2 billion dollars. Let's assume the cost was overestimated by 100%, so the real cost was only 1 billion. We're talking about some real cash here. I don't know what Windows development costs were, but I'd bet that for an extra billion, and some real care to fix the problem, we could have had a whole lot better software.

      Now finally for the market-based solution. Make vendors liable for the bugs and insecurity of their software. The government doesn't have to mandate a standard. A jury decides if the software vendor used due diligence in writing good code. All I'm asking is that the artificial protections for software be lifted. Treat it like any other good. We wouldn't put up with the same lack of quality from our lawn-mowers, toasters, cars, microwaves or even our TiVos.

      When vendors find that the real costs get shifted back to them, in the form of civil negligence suits, they'll get serious about fixing the problem. Until then, they'll laugh their way to the bank, and we (the techs) get to fix the problems over and over and over again. Not only that, but WE look bad, rather than the vendor.

      Ok, do your damage.

      Cheers!
      • To get a good program (virtually bug-free and good security...realize that these are one and the same) the structure must be quite rigid. We can't all go off coding as we wish, and just throwing the design and QA portions together on the fly.


        Oh, wow. It's another "structured programming guru." And like all the others, he knows nothing about programming.


        Good programming is not accomplished by managers or business structures, useful as those are. Good programming is done by good programmers, using a language that allows them to express themselves elegantly.


        In case you were wondering, programmers generally use the term "bugs" to refer to flaws in the implementation of a program, not flaws in the design of a program. For example, the fact that Mac OS 7 had no multi-user mode was not a security "bug"-- it was never designed with multi-user mode in mind. The fact that Windows 95 had memory leaks WAS a bug, because it represented an incorrect implementation of the design specification.


        It is possible for a program to be buggy, but secure. For example, an FTP server can be immune to exploits, but still have memory leaks and random crashes. It is also possible for a program to be free of bugs, but insecure. If your FTP server is rock-solid stable, but never asks anyone for a password, you are in this situation. If you think, "but nobody could be stupid enough to ignore security in their product design!"... well, like I said, you haven't been in this business long.


        There are a lot of things programmers can do to improve security:


        1) Don't use C for networking software. All at once, buffer overflows (as well as faulty pointer arithmetic) become a thing of the past. Unfortunately, there are very few languages that can replace C for systems-level and speed-critical software. But even if you do end up using C, DON'T use the C string functions. (A short sketch of the bounded alternatives follows at the end of this post.)


        2) Think about your design BEFORE you implement it. It doesn't matter who the CEO of the company is, or how many managers are on the Managerial Board, if the programmers can't think for themselves. For example, can you think of situations where having your email program automatically download and run .exe attachments would be a bad idea? Apparently the folks at Microsoft -- or at least the ones making decisions -- can't.


        In my opinion, the best programming teams are small, focused teams of individuals, who can work with little interference from management. Trying to squeeze software development into "top-heavy" business models has never worked very well (*cough*, IBM, *cough*, OS/2). And finally, before you convince yourself that you know how to manage programmers-- try programming.
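
        To illustrate the earlier point about the C string functions, here is a small sketch; the log-line scenario and the function names are hypothetical. The unbounded strcpy/strcat calls can write past the fixed buffer, while the bounded snprintf form cannot.

        #include <stdio.h>
        #include <string.h>

        /* Hypothetical handler that builds a log line from untrusted input. */
        void log_request_unsafe(const char *user, const char *path)
        {
            char line[128];
            strcpy(line, user);            /* no bounds check: overflows on a long user */
            strcat(line, " requested ");   /* keeps writing past the end of line[]      */
            strcat(line, path);
            puts(line);
        }

        void log_request_safe(const char *user, const char *path)
        {
            char line[128];
            /* snprintf never writes more than sizeof line and always NUL-terminates;
               over-long input is truncated instead of smashing the stack. */
            snprintf(line, sizeof line, "%s requested %s", user, path);
            puts(line);
        }

        On BSD systems strlcpy/strlcat do the same job for plain copies; the important thing is that the destination size travels with every call.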

        • It is possible for a program to be buggy, but secure.

          Only by luck. Bugs will eventually lead to security holes. That's like saying "you can have accidents in your car without injuring anyone..." and then using this logic to claim that accidents aren't that big a deal when it comes to injuring people.

          I'm sorry, but that's a stupid approach. I'm not a manager. I DO know how to program - in fact, my first programming language was C. I'm not too bad, though I know what I excel at and what I don't. I'm mostly a security and networking guru.

          You can take pot-shots all you like. The fact is that a wild-west approach to design, programming and QA won't get the job done right. Heavy handed, totally inflexible management PHB's won't either. There's a middle approach that will work. But I dare say that most projects I've been on and worked with took the wild-west approach much more often than the heavy management approach.

          So, if I were asked to put effort into fixing the problem in general, I would focus on fixing the wild-west approach before anything else.

          Cheers!
          • The best way is just to get a bunch of developers who know that the wild-west approach is a bad idea. If they know going in that they have to have a cohesive and well thought-out design, you won't need the heavy hand of management to get a reliable, well-engineered system.
    • Not to criticize your post, because I agree, but electrical engineers are not certified. There is no certification process required in order to work besides the Bachelor's degree -- or even available, I believe. I think Civil and Mechanical do require something, depending on the type of work.

      I agree though, some formal certification/education program should be developed for general system administration and especially for security specialists.

    • At the current state of the art, software engineering is about as codified and scientifically verifiable as medicine was in the 12th century.
      • I think there is considerably more literature and research going into software engineering than 12th century medicine.

        A few examples:

        http://sdg.lcs.mit.edu
        http://research.microsoft.com/foundations
        http://hillside.net/patterns/
        http://xprogramming.com

        Although you bring up a valid point that many new programmers still treat it as a dark art shrouded in mysticism. Random debugging, not using source control, etc. I guess this is where certification would be useful, but as a badge of pride or mark of excellence (like a TopCoder ranking) rather than a shock collar to electrocute you when things go wrong.
    • The only problem is:

      Engineered security is and should be intuitive. It is not a matter of difficulty in the actual engineering process but rather a lack of effort that the software engineers devote to the process. This is pretty simple for most pieces of software with the main exception being cryptography (which should be far more rigorous than many engineering disciplines).

      When I have designed secure distributed applications, I have generally considered carefully what level of trust to give each component and then layered the security accordingly. I can then delegate the security-related tasks to the operating system, database manager, main application, etc. as appropriate, so that a security failure in one module is never a complete system compromise (can you tell I don't like IIS or Sendmail?). In short, I design my applications with the idea that they will fail, but that such failure will not be complete, and then do what I can to prevent the failure in the first place. (A minimal sketch of one such measure follows at the end of this post.)

      To me this is pretty simple and straight-forward and does not require an engineering degree. What it does require is a little forethought, and the humility to realize that as a programmer, no one is perfect.
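
      As promised above, here is a minimal sketch of one such containment measure: dropping privileges once they are no longer needed. It assumes a dedicated unprivileged service account named "appsrv" exists on the host; the name is made up.

      #include <grp.h>
      #include <pwd.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>

      /* Permanently give up root before touching untrusted input, so a bug in
         the request-handling code yields only an unprivileged account. */
      static void drop_privileges(const char *account)
      {
          struct passwd *pw = getpwnam(account);
          if (pw == NULL) {
              fprintf(stderr, "no such account: %s\n", account);
              exit(1);
          }
          /* Order matters: supplementary groups first, then gid, then uid. */
          if (setgroups(0, NULL) != 0 ||
              setgid(pw->pw_gid) != 0 ||
              setuid(pw->pw_uid) != 0) {
              perror("drop_privileges");
              exit(1);
          }
      }

      int main(void)
      {
          /* ... acquire privileged resources here (e.g. bind to port 443) ... */
          drop_privileges("appsrv");    /* "appsrv" is a made-up service account */
          /* ... now parse untrusted network input as the unprivileged user ... */
          return 0;
      }

      The same idea scales up: give each component (web front end, database account, mail gateway) only the rights it needs, so a failure in one of them stays local.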
    • I agree to a point. The software world is dangerously sales-driven (my uncle coined the term "Real Time Sales Driven Development", or RTSDD for short; it makes extreme programming look like a gentle stroll through the park), and it definitely could do with some professional certification regulations that slow down the clamour to develop the latest buzzword. To some degree, I'm sure the situation is much better at institutions that have used software systems for many years (i.e., banks) than in consumer-facing industries ('net access providers, ASP providers, OS *cough* vendor). The CS degree seems to be how companies evaluate the 'professionalism' of a programmer, but I think it's a misplaced faith. More important than that would be audits and guidelines that must be adhered to, enforced by a regulatory body.

      Of course, those addicted to the wonderful world of 'slave-to-the-market innovation at the speed of e-business and shit' probably shudder at the thought, but can you imagine if designing buildings were subjected to the kind of rush-out-the-door development tempo that many dev houses have? Sure, buildings hold people, but software systems hold processes and business logic, which supplanted people long ago in terms of marketplace value.

      It'd probably result in greater long range vision for the software industry as a whole as well.
      • How about a market based approach. Eliminate the protections software gets from civil liability.

        The market would then respond to additional costs from suits that prevail and cost the vendor lots of money. Investors would be wary of investing in companies that didn't have rigorous design, testing, and production methods. Etc. etc. etc.

        The jury would decide the merits of the case. Primarily, did the vendor use due diligence in producing the software.

        Now, for all the libertarians. You'll all yell that government shouldn't be involved...Well, fine, software copyright just went out the window too. Software vendors want it both ways. They want full government protection of IP and copyright. But they don't want the legal system to be involved when they make crappy software.

        Take one, expect to take the other. This situation seems to me to be the great fraud of the twenty-first century.

        Cheers!
        • You're simply mistaken. Government agencies have long, long lists of requirements that need to be satisfied before they will purchase a particular piece of software. POSIX adherence comes to mind.

          The problem is: these lists don't help. Vendors just put in the minimal effort required to satisfy the government's checklists, and you end up with an implementation that is for all purposes useless, except to pass the checklists and close the contract. This is what Microsoft did when they needed Windows NT to support POSIX. That the customer ends up with a severely braindamaged (but still compliant) product does not seem to matter.

          • Did I miss something here? I didn't ever say ANYTHING about government regulation!

            The only portion of my post that mentioned government was that software vendors want government to protect their copyright, but not subject themselves to a civil court for negligence in creating that same software.

            They want one part of government, but not the other. If you want copyright protection, then expect to also stand trial in civil court and defend yourself for shoddy software.

            Seems fair to me.

            Cheers!
            • This is what you wrote:
              Investors would be wary of investing in companies that didn't have rigorous design, testing, and production methods. Etc. etc. etc.
              In other words, you want software to conform to a wide range of standards. A checklist, so to speak. The example I gave you shows you how these ideas turn out in reality. The government requires POSIX? Microsoft writes you the most miserable POSIX implementation around. Problem solved!

              No, not really. Problem not solved. See my point?

              • You're missing the point.

                It's not the FUNCTIONALITY we want to conform to a standard, it's the DESIGN METHODOLOGY. There's a big difference.

                You can write any application you want using OOP. It doesn't limit your functionality at all. You can use the POSIX standard, or an M$ standard, or the standard from BillyBob's house of code...

                Engineering is using a standard methodology to solve general problems. If the problem set the methodology can solve isn't totally general, then the methodology is useless.
                • Well, okay, you're with the Church of OOP and DESIGN METHODOLOGY. That is wonderful. I suppose that by obscuring the workings of your program (by encapsulating stuff inside, say, an object), it becomes possible to maintain code without having to understand what it does. That's good. The problem is that it leads to people maintaining code without understanding what it does. And that's bad. What any of this has to do with engineering I'm not sure.
        • How about a market based approach. Eliminate the protections software gets from civil liability.
          Software vendors want it both ways. They want full government protection of IP and copyright. But they don't want the legal system to be involved when they make crappy software.

          There are basic problems with the Market Based approach which make it unusable -- Since there are so many distinct parts of software (OS Kernel, OS Services, Applications, Libraries, Device Drivers) that may be made by different vendors, how can you pin the blame for a software problem on a particular vendor if you're not 100% sure where the problem is? Or what if the code from two vendors is technically correct in isolation, but creates a security problem when used together? Who gets sued then?

          I'm not sure there's a practical way to enforce civil liability for software products unless the actual code is examined in the trial. Which eliminates the whole concept of Proprietary software. Most /.ers would welcome that, but I guarantee their employers won't.

          And given that these trials are decided by twelve people who couldn't get out of Jury Duty, how will they possibly be able to understand the technical merits of such a case? They'll probably be swayed by the lawyers in the better suits, which will belong to the big software companies. Or the class-action law firm that sues the small companies under this statute.

          In short, eliminating civil liability protection for software will ensure that only big companies that can afford to go to trial with the class-action lawyers will be able to write software. Forget about small companies. And forget about cooperative Open-Source development -- that just gives class-action lawyers more people to sue.

            There are basic problems with the Market Based approach which make it unusable -- Since there are so many distinct parts of software (OS Kernel, OS Services, Applications, Libraries, Device Drivers) that may be made by different vendors, how can you pin the blame for a software problem on a particular vendor if you're not 100% sure where the problem is?
            Isolating bugs isn't that hard -- while there are a number of different layers, they each do different things; find out what the system was doing when the failure occurred, and then you know the layer. But then maybe I'm speaking from the wrong viewpoint -- I've been working (and playing) with OSS for the last seven years (jeesh, it doesn't seem like that many!) and I've not yet seen a bug that, given enough time, can't be tracked down to its source.
            Or what if the code from two vendors is technically correct in isolation, but creates a security problem when used together? Who gets sued then?
            Whoever you hired to do systems integration, of course.
            I'm not sure there's a practical way to enforce civil liability for software products unless the actual code is examined in the trial. Which eliminates the whole concept of Proprietary software. Most /.ers would welcome that, but I guarantee their employers won't.
            Ya know, it's not unheard of for evidence of a proprietary nature (trade secrets and such) to be sealed or otherwise withheld from the public record. Further, my employer wouldn't mind -- the source we work with is already public.
            And given that these trials are decided by twelve people who couldn't get out of Jury Duty, how will they possibly be able to understand the technical merits of such a case? They'll probably be swayed by the lawyers in the better suits, which will belong to the big software companies. Or the class-action law firm that sues the small companies under this statute.
            What do you think expert witnesses are for? Alternately, if need be, the court can appoint their own experts. As for the class-actions, lawsuits don't just happen -- someone has to at least think they've been wronged, and convince a lawyer that they really have been wronged (presuming said lawyer is working on a contingency fee, there's quite a motivation to be sure you only take cases you really can win).
            In short, eliminating civil liability protection for software will ensure that only big companies that can afford to go to trial with the class-action lawyers will be able to write software.
            You mean big companies with deep pocketbooks will be the only ones able to write bad software.
            Forget about small companies. And forget about cooperative Open-Source development -- that just gives Class-action lawyers more people to sue.
            I agree that folks giving away a product at zero cost should be able to disclaim even implied warranties -- that's not so hard a thing to put into law. Heck, I take the position that anyone should be able to disclaim warranties -- as long as the customer is very, very aware of it before the purchase. Symantec wants to sell an undertested release of SystemWorks and be immune from lawsuits from folks who lose data? Fine -- but they put their notice on the outside of the box, in bold letters. Then I'll be happy.
    • No, they shouldn't be. Unless you are paying for the certifications, that is.
    • Software development, on the whole, is not practiced as an engineering discipline. It needs to be.
      Your blanket statement is incorrect. Anything on this earth could be designed better. Bridges, watches, popsicle sticks, etc. Common sense tells us that we have to make tradeoffs. Higher levels of security, redundancy and certainty require more time and money. I do agree that some software engineers should be certified and depending on the software, some companies should be accountable. But not all. Does it take a certified engineer to "ok" a doormat? Would you pay $1000 for a "certified" doormat?
      • Yes, you don't need a civil engineering degree to design a dollhouse. But on any non-trivial software project, if you don't apply some good software engineering practices (OOP and UML come to mind), it's going to take you twice as long, cost twice as much, and have twice as many bugs.

        The inherent problem in "methodless" software development is that there's no standard way of thinking about design. Engineering is all about formalizing the design process so that any other engineer can understand it. Give the blueprint for a bridge from one engineer to another and it will take him about a day to understand it all. Try doing that with any reasonably sized software system. Unless you've stuck to a strict design principle that both developers understand, it'll take you months to sort out how the system works.
    • by OblongPlatypus ( 233746 ) on Thursday February 28, 2002 @02:11PM (#3085411)
      I have little to no knowledge about how the whole engineer certification thing works in the states, but I thought I'd share my situation anyway. If anyone would like to enlighten me about how it works in the states in a reply, I'd be most grateful.

      I live in Norway, and I'm currently three quarters through the first year of a 5-year study to become a civil engineer in "computer technology". Although it may not sound like it, this study branches into eight different subjects, most of which are entirely software oriented. In four years, I'll graduate as a civil software engineer, every bit as much of an engineer as an engineer of electronics or other traditional engineering sciences. There are similar degrees in such diverse subjects as chemistry, industrial economics and technology management, mathematics, and communication technology.
    • Interestingly enough, the "certifications" that are currently out there, like MCSE, are prohibitively expensive and are meant to be eye candy on your C.V. rather than make you accountable for your software.

      An engineering discipline also entails a systematic approach to solving problems and a way of codifying best practices; I think software engineering does both of these things. Perhaps not as well as they could be done, but good attempts are being made.
  • Sample Chapters (Score:4, Informative)

    by Akatosh ( 80189 ) on Thursday February 28, 2002 @01:18PM (#3084987) Homepage
    Here you can find a PDF of chapter 10 [cam.ac.uk], chapter 18 [cam.ac.uk], and chapter 1 [amazon.com].
    • Re:Sample Chapters (Score:1, Informative)

      by Anonymous Coward
      The book is simply brilliant. I have been working in security (professionally) for almost 7 years and have read a good number of books and a huge number of papers, but nothing compares to this one. Incredible breadth and deep insight.

      If you haven't got the money to buy the book, or access to a library, have a look at Ross's website; many of the papers on which the book is based are there.

      http://www.cl.cam.ac.uk/users/rja14/

      jl
  • Monitoring (Score:3, Insightful)

    by yintercept ( 517362 ) on Thursday February 28, 2002 @01:19PM (#3085006) Homepage Journal
    I threw this book on my to-read list. BTW, I've found that people are probably a more important part of your security strategy than just code. Every program I write begins with a security mechanism that drops any anomalies into a database. Severe problems get emailed to an admin or activate their pager. The program does a great job of detecting and reporting security breaches, but it means squat if no one ever acts on the problems or they turn off their pagers. It is generally an uphill battle to get a company to train people to monitor their systems, and to give adequate rewards to the admin who gets woken up at 3AM because someone is trying to hack a password on the system.
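
    For what it's worth, a minimal C sketch of that kind of reporting hook might look like the following. The severity levels are invented, plain syslog stands in for the database collector, and the stderr line stands in for the mail/pager plumbing described above.

    #include <stdarg.h>
    #include <stdio.h>
    #include <syslog.h>

    enum severity { ANOM_INFO, ANOM_WARN, ANOM_SEVERE };

    /* Format the event, hand it to syslog (a collector behind syslog can feed
       the database), and flag severe events for whatever alerts the admin. */
    void report_anomaly(enum severity sev, const char *fmt, ...)
    {
        char msg[512];
        va_list ap;

        va_start(ap, fmt);
        vsnprintf(msg, sizeof msg, fmt, ap);
        va_end(ap);

        syslog(sev == ANOM_SEVERE ? LOG_CRIT :
               sev == ANOM_WARN   ? LOG_WARNING : LOG_INFO, "%s", msg);

        if (sev == ANOM_SEVERE)
            fprintf(stderr, "ALERT: %s\n", msg);   /* stand-in for the mail/pager hook */
    }

    /* e.g.  report_anomaly(ANOM_SEVERE, "repeated login failures for %s from %s",
                            user, remote_ip); */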
    • by JMZero ( 449047 )
      How do you detect intrusion? Of course you can do things like the following (in a login-based web app; a sketch of the first item follows after this comment):

      1. Watching for too many password tries
      2. Watching for too many page views/write attempts by a particular user - logging things
      3. Blocking and logging long queries/odd characters/queries with errors

      What else can I do? I try to write clean, secure code - but I don't know what the big threats are around the corner.

      Should I be working harder to avoid cross-site scripting issues on pages past the login page? What are the odds of it being exploited? I use a session key; should I be changing it on every page view? Should I tie session keys to the request IP (I do now), or is that pointless? Should I be extending my SSL key length?

      And while I think about this, I notice some user has their password on a little sticky note on their monitor.

      There are so many security threats to worry about. Does anyone have a resource that might suggest which ones I should spend my time on first? Or a list of security failures and how they happened?

      .
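
      A sketch of the bookkeeping behind the first item in that list; the five-failure threshold and fifteen-minute window are arbitrary, purely for illustration.

      #include <stdio.h>
      #include <time.h>

      #define MAX_FAILURES   5
      #define WINDOW_SECONDS (15 * 60)

      /* Per-account count of recent failed password attempts. */
      struct login_tracker {
          int    failures;
          time_t window_start;
      };

      /* Record one failed attempt; returns 1 if the account should be locked
         (or an admin alerted), 0 otherwise. */
      int note_failed_login(struct login_tracker *t, time_t now)
      {
          if (now - t->window_start > WINDOW_SECONDS) {
              t->failures = 0;              /* stale window: start counting again */
              t->window_start = now;
          }
          t->failures++;
          return t->failures >= MAX_FAILURES;
      }

      int main(void)
      {
          struct login_tracker t = { 0, 0 };
          time_t now = time(NULL);

          for (int i = 1; i <= 6; i++)
              if (note_failed_login(&t, now))
                  printf("attempt %d: too many failures -- lock the account and log it\n", i);
          return 0;
      }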
      • A question for you... how do you get by tying session keys to the request IP? I'd love to do that for my web app, and I think it's far from pointless (IP spoofing is beyond the level of the sort of person who would try to hack my system), but I'm stumped by the fact that some ISPs (including AOL) will give their users different IPs from one request to the next. How do you handle that? Or doesn't your app have AOL-using users?
        • Occasionally someone's IP will change, especially if they're logging in from home. If it does, we force them to enter their login again (and we have a mechanism to preserve their last work).

          This would, of course, get mighty tiresome if your IP changed after every request - so far it hasn't come up.

          Like you say, IP spoofing does add another burden to the hacker. But is it that much of a burden to a hacker who's already managed to obtain a session key? Who knows.

          We've avoided using client certificates because they're onerous to administer - but we may end up doing that soon. Security is a tough game for me - I know some of the moves, but I don't know who my opponent is and I can't see his pieces.

          .
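
          For what it's worth, the check being described is only a few lines. A sketch, assuming a session record created at login; the field names and the 30-minute idle limit are made up.

          #include <string.h>
          #include <time.h>

          #define SESSION_IDLE_LIMIT (30 * 60)   /* made-up idle timeout */

          struct session {
              char   token[64];      /* opaque key issued at login */
              char   bound_ip[46];   /* client IP recorded at login (room for IPv6 text) */
              time_t last_seen;
          };

          enum session_check { SESSION_OK, SESSION_REAUTH, SESSION_EXPIRED };

          /* Validate a presented token for this request; any mismatch sends the
             user back to the login page, as described above. */
          enum session_check check_session(const struct session *s,
                                           const char *token,
                                           const char *request_ip,
                                           time_t now)
          {
              if (strcmp(s->token, token) != 0)
                  return SESSION_REAUTH;          /* unknown or forged key            */
              if (now - s->last_seen > SESSION_IDLE_LIMIT)
                  return SESSION_EXPIRED;         /* idle too long                    */
              if (strcmp(s->bound_ip, request_ip) != 0)
                  return SESSION_REAUTH;          /* IP changed: ask for the password */
              return SESSION_OK;
          }

          A constant-time comparison of the token would be preferable to strcmp, but the shape of the check is the same.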
  • by Cyberdyne ( 104305 ) on Thursday February 28, 2002 @01:24PM (#3085029) Journal
    One point the reviewer missed is that Ross put a few chapters of the book on his home page [cam.ac.uk] here. There's a page about the book itself here [cam.ac.uk] with links to a couple of chapters.

    From what I've seen of it so far, it's a good book (disclaimer: yes, he was my project supervisor last year!). A few funny typos etc. in the errata [cam.ac.uk], which is well worth a look too, especially for anyone wondering who the hell this "Prince Schneier" guy on p. 113 is ;-)

  • When someone starts off by saying
    "I thought I would do a review on one of my favorite books."

    I guess this is objective, since it is one of his favorites.
  • by SuperKendall ( 25149 ) on Thursday February 28, 2002 @01:31PM (#3085078)
    I know it's all the rage to hate Amazon. As for myself, I think the annoyance of things like one-click patent support does not quite outweigh the good Amazon has done, or the fact that they have a well-built site that I enjoy more than almost any other.

    I do buy books from FatBrain from time to time, and even wear a FatBrain baseball hat they were kind enough to send me some time ago. But when a book is $60 at FatBrain, and $36 at Amazon... well, I like donating money to the EFF but I'm not sure I am quite THAT supportive of FatBrain. So if you're on a limited budget, you might want to order the book here [amazon.com] instead (and no, I don't get anything from the link - go to Amazon yourself if you are paranoid).

    Just for completeness, it's $51.95 at Bookpool.
  • Just FYI, this book is $60.00 at Fatbrain but $36.00 at Amazon. I like Fatbrain and all, but $24 is $24...
  • Yes, you thought dumb crypto laws were a thing of the past, but no, here comes the UK trying to copy what the USA already gave up, only without that tedious constitutional protection of free speech stuff.

    http://www.cl.cam.ac.uk/~rja14/exportbill.html
    (yes, rja14 is the Ross Anderson who wrote this book).
  • by essiescreet ( 553257 ) on Thursday February 28, 2002 @01:50PM (#3085194)
    This is valid, to a point; here's where it falls apart. You want to build a bridge:

    1. Draw plans
    2. Have plans approved by inspector
    3. Dig and pour foundation
    4. Have foundation approved by inspector
    5. Put up pylons, supports, whatever
    6. Have them approved by inspector
    7. Put the horizontal top on the bridge
    8. Have top approved by inspector
    9. Pave road
    10. Have pavement approved by inspector
    11. Bridge gets reviewed every year by inspector to check for flaws, maintenance needs, etc.

    Now, not all of these apply to software development, but I hope you can see some parallels. Also, there is not much innovation in this type of method. You know your load, materials (and their thresholds), traffic load, weather, and most other variables, and you can use a mathematical formula to solve for them. Show me this for software development! Now, certifying developers is a good idea, as is holding people accountable for their mistakes, but treating software developers like you would civil/mech/aero engineers is a farce, and is only purported to work by people who heard it in school and are just spouting it back out.
    • It would help for everyone to read some history (humans don't live long enough) regarding the evolution of engineering disciplines from their respective sciences. Software Engineering is just that. Computer Science lacks many of the formalisms to properly discipline the engineering that results. And that is all there is to it. This does diminish software engineering as a proper discipline, however. If you have been building systems as long as I have (20+ years) and you have an engineering degree at least (and you have an open mind), then you have ground to talk on. Otherwise, you are arguing with yourself to no end.

      My opinion, in short, is that if you think that all there is to software engineering is just code, then I'll tell you what I tell the "coders" in my lab that keep repeating the same engineering mistakes: you don't understand the problem space and that's why your "code" does not work. Understand the problem (a science) and the code solution will present itself.

      By the way, the "coders" work for me, not the other way around.
    • Two remarks on this:
      1. At least engineers make time to do drawings, documentation and inspections. If software people were qualified engineers, they'd ask for that time as well.
      2. A bridge engineer is responsible enough not to use the equivalent of sprintf() and other buffer-overflow-inducing things (in fact, he'd probably not use C but some safer language).
      Both of these would arguably slow down software development, cut back heavily on features (and on feature bloat), and would improve quality and security. When society finds the latter more important than the former, we'll see licensed software engineers.
  • $60.00 from Fatbrain?! Go to bestbookbuys.com (a comparison shopping site much like Pricegrabber) and pick it up for under $40.00.
    • (Yes, I'm that guy... The one that replies to his own post... )

      here [bestwebbuys.com] is a link for the book for $36.00 shipped.

      Note to new slashdotters: Slashdot puts spaces in the links, so just 'right click' and 'copy link location' then paste it in your URL bar and remove any spaces.
  • What programming languages do you consider secure? I've been told Java is secure, but how about C and C++? This is a very important topic for those in the developer community.
  • I just finished reading it, and it is mandatory reading if you are developing security these days. It covers a very broad range of situations and assumes a fair amount of familiarity with the math behind crypto, so this isn't "Teach Yourself Security in 24 Hours for Dummies".

  • I recently worked on contract at a *major* bank doing web (intranet) programming in one of their IT departments. There were about two dozen people in this department, working on everything from banking apps to back-end HR systems, and *none* of them really had a clue when it came to security.

    We had people embedding SQL userid/passwords in client-side javascript, passing stuff in plain text in query strings that should have been SSL'd, and other fun stuff. I tried to hit a few of these people with a clueX4, but most just couldn't be bothered learning even the basics of security standards. It was pretty scary.

    All developers should be forced to take a few security courses before they even get to touch a computer in the workplace, and even then, audit audit audit...

  • I'm no security expert; I've only just recently started reading. And incidentally, a couple of days ago I began reading "Security Engineering". So far I share the reviewer's very good impression.

    I'd like to recommend some complementary books; each of these approaches security from a different angle:

    • Secrets & Lies [counterpane.com] by Bruce Schneier. Deals with the "soft" issues. What are the threads to networked systems? Who are the attackers? One of the messages: Risks can't be avoided -- manage them.
    • Building Secure Software [aw.com] by John Viega and Gary McGraw. This one's closer to the technological issues related to security. Risks of various base technologies (languages, middleware). Introductory details on buffer overflow attacks, random numbers, cryptography. Some organizational/dev process stuff.
    • Secure Programming for Linux and Unix HOWTO [dwheeler.com] by David A. Wheeler. Technical security down to the C level. Programming techniques.

    Michael
