
Programmers Learn to Check Code Earlier for Holes

Carl Bialik from WSJ writes "Many companies are teaching programmers to write safer code and test their security as software is built, not afterward, the Wall Street Journal reports. This stands in contrast to an earlier ethos of rushing to beat rivals with new software, and, of course, brings tradeoffs: 'Revamping the software-development process creates a Catch 22: being more careful can mean missing deadlines.' The WSJ focuses on RIM and Herb Little, its security director, who 'uses Coverity every night to scan the code turned in by engineers. The tool sends Mr. Little an email listing potential red flags. He figures out which problems are real and tracks down each offending programmer, who has to fix the flaw before moving on. Mr. Little has also ramped up security training and requires programmers to double-check each other's code more regularly.'"
  • I hold any bet (Score:5, Insightful)

    by Opportunist ( 166417 ) on Thursday May 04, 2006 @02:20PM (#15264352)
    After missing a few deadlines, the marketing goons will push to abandon security for more crap on the shelves.

    After all, that's how the software market works. People buy anything. "LOOK! THE NEW (insert program/OS name here)! I MUST HAVE IT!"

    Mem-leak free?
    In one word: FINISHED?

    Who cares? It's new, it's shiny, it's been all over all the mags and preview pages, the hype is on, WANNAHAVE!

    And as long as we keep buying the unfinished crap, it won't change.

    Yes, I'm sure everyone in the tech departments would see this as the right way to go. Test your software, preferably during development, not afterwards. Go through memleak tests, go through stability tests, have some experienced whitehats poke at it, and if it survives, let it go into beta.

    If anyone gets that idea past marketing, I will bow down to him.
  • Catch 22? (Score:3, Insightful)

    by Tourney3p0 ( 772619 ) on Thursday May 04, 2006 @02:21PM (#15264354)
    Revamping the software-development process creates a Catch 22: being more careful can mean missing deadlines.

    Alright, so writing better code means you might miss a deadline. But not writing better code means... things stay exactly as they've always been, or the software-development cycle gets revamped appropriately?

    Not much of a catch 22.

  • by cableshaft ( 708700 ) <> on Thursday May 04, 2006 @02:26PM (#15264400) Homepage
    I usually do some quick general design and planning beforehand, then go in and write the software one element at a time, testing each to make certain it works properly before moving on to the next. For me, the benefits seem to far outweigh doing it the other way: it reveals design or implementation problems early that I wouldn't have noticed in the planning stages, and it also helps isolate where any bugs are located, so I'm not checking all over the place.

    I'm not sure if it really saves me any time in the long run, but I'm much more comfortable coding this way, which is probably more important.

    Also, so far, I've been the only coder for my projects at work and my games at home, so it *might* not be quite as effective for large teams, although what I've read on XP seems to suggest that it can still be very effective.
  • gets() and people (Score:4, Insightful)

    by mkiwi ( 585287 ) on Thursday May 04, 2006 @02:32PM (#15264466)
    Sometimes it amazes me what people do with the C programming language, for good or for bad. Take some pro programmers whom I caught using gets() instead of fgets(). I'm not a rocket scientist, but I'd say anything that uses gets() is a serious problem, since that function does no bounds checking and is prone to buffer-overflow attacks.

    How do people learn to code like this? Is it just early habits that do not go away?

  • OT: not a Catch 22 (Score:4, Insightful)

    by cain ( 14472 ) on Thursday May 04, 2006 @02:32PM (#15264468) Journal
    The example in the write-up is not a catch 22. A catch 22 requires that two things each be done before the other can be, so neither can be done.
  • Thinly veiled ad? (Score:5, Insightful)

    by Mr Z ( 6791 ) on Thursday May 04, 2006 @02:37PM (#15264515) Homepage Journal
    Is it just me, or does the article read like a thinly veiled advertisement for Coverity? It reads like a generic commercial template: "Meet Bob. Bob thought everything was fine. But then he discovered he had Problem X. That's when Bob discovered Company Y with Solution Z." (etc. etc.)
  • by Jonboy X ( 319895 ) <jonathan@oexner.alum@wpi@edu> on Thursday May 04, 2006 @02:39PM (#15264534) Journal
    Umm, about your comment: The link goes to a blog entry of yours about the inefficiency of using StringBuffer.append(String) to append a single-character string instead of just using StringBuffer.append(char). Sure, it's a good idea, but there's another kinda-orthogonal piece of advice that will likely improve runtime performance a good bit more:

    The vast majority of the code that uses StringBuffer could save a bunch of time by using the new-ish (JDK 1.5) StringBuilder class, which has the same API but is not internally synchronized. This translates to a runtime savings of approximately a KAJILLION percent by avoiding the horrendous synchronization overhead hit when the StringB*er in question is only being used by one thread. It's very similar to using an old-skool Vector when an ArrayList will do just as well and not slow down your code.

    Like I say every time this kind of thing comes up, Java isn't slow (any more), but we're certainly not helping matters with this kind of sloppy coding.

    Also, back on topic, try writing financial software some time. It's like a different world. Everything is unit tested, and the unit tests don't so much check for bugs as prove that your code works. That way, when a million-dollar bank wire doesn't go through, you can prove that it's not your head that should be on the chopping block. It's actually kind of refreshing knowing that any code you touch is pre-vetted so you don't have to worry about trusting it enough to build on it.
  • Re:I hold any bet (Score:2, Insightful)

    by Gnavpot ( 708731 ) on Thursday May 04, 2006 @02:41PM (#15264550)
    After missing a few deadlines, the marketing goons will push to abandon security for more crap on the shelves.
    Is it a fact that early testing will delay a project?

    I must admit that I don't know much about large software development projects. But I do know a lot about large development projects in my own profession. It seems that any problem which was unresolved/ignored/insignificant during early development will turn into huge problems a few days before a deadline.

    Are software projects different? I would think that early warnings about bad coding practices at least would make a programmer change his coding habits so he doesn't make the same error again and again and finally has to correct it in 200 different parts of the code after the final quality check.
  • by 192939495969798999 ( 58312 ) <info&devinmoore,com> on Thursday May 04, 2006 @02:45PM (#15264586) Homepage Journal
    If being careful makes you miss the deadline, then the deadline is set wrong. Shipping a product with security holes that you knew about and could've fixed with a bit more time is how we got into the position we're in. Pushing back a release date to fix them first should be the rule, not the exception.
  • by Anonymous Coward on Thursday May 04, 2006 @02:53PM (#15264641)
    We know how to code securely, at least in the same way that every profession has its skill levels on a bell curve.

    What the industry needs, as has been pointed out here, is companies that are
    A) willing to give developers the time to design software correctly,
    B) willing to give testers the time to test software thoroughly, and
    C) willing to delay software that the testers find holes in.

  • by Greyfox ( 87712 ) on Thursday May 04, 2006 @03:04PM (#15264719) Homepage Journal
    I see static analysis and code auditing as an excellent step on the road to security, but at a completely different level you also have to make sure that the processes you're coding are secure. All the secure programming techniques in the world will not help you if your design itself has flawed assumptions. So not only should you program for security, you should also design for security.
  • Laws? (Score:3, Insightful)

    by VGR ( 467274 ) on Thursday May 04, 2006 @03:04PM (#15264721)
    From the article:
    Many companies rushed to beat rivals with new software, and checking for bugs that could later be exploited by hackers was often seen as a waste of time. That has begun to change in the past few years as new laws force the disclosure of security holes and breaches...
    What laws are these? This is the first I've heard of such a thing. And why do I have a feeling these laws have a clause that directly or indirectly exempts certain large software companies?
  • Re:This just in: (Score:5, Insightful)

    by Anonymous Coward on Thursday May 04, 2006 @03:15PM (#15264823)
    There's no point proofreading your own code. You see what you think you've written, not what you've actually written, and therefore don't spot the problems.
    The trick is to get 2-3 other people to review it.

    1. The earlier you spot a defect, the cheaper it is to fix.
    2. Test results are only as good as the test code written.
    3. Edge cases don't normally show up in test code. Test cases are typically designed to show that the code works, rather than finding the boundary where it fails.
    4. You can suggest better ways of writing the code/learn new tricks during code reviews.

  • by Coryoth ( 254751 ) on Thursday May 04, 2006 @03:18PM (#15264844) Homepage Journal
    Also, back on topic, try writing financial software some time. It's like a different world. Everything is unit tested, and the unit tests don't so much check for bugs as prove that your code works.

    Unit tests don't prove your code works any more than drawing a few right-angled triangles and measuring the sides proves Pythagoras' theorem. If you want to prove your code works you use a theorem prover. To do that you usually need to provide a more detailed specification (beyond just type signatures) of how your code is intended to function. That tends to be more work, though if you really need to know your code is going to work it can often save time in the long run (over ridiculously long and exhaustive testing). There are things out there that provide tool support for theorem proving about your code: SPARK Ada along with the SPARK tools provides a powerful theorem prover, and HasCASL with the CASL tools (including the HOL theorem prover) provides strong theorem proving for Haskell. Even ESC/Java2 utilises a theorem prover (called Simplify) to provide extended static checking of Java code. I'm sure there are more examples.

    My point is not that unit testing is bad (it's very good), but that you shouldn't overstate its effectiveness. Unit tests are a great way to provide a reasonable degree of assurance that your code will, hopefully, work as intended. It isn't a substitute for actual assurance, however. It really depends on exactly how sure you need to be: how much an error will cost, and whether that can be tolerated.

  • Re:Catch 22? (Score:2, Insightful)

    by Kapsar ( 585863 ) on Thursday May 04, 2006 @03:33PM (#15264952)
    A Catch 22 is a "damned if you do, damned if you don't" circumstance: in this case you either miss your deadline because you created a better product, or you make your deadline but end up with crap and pay for it later. A catch 22 is not an old-school mentality, it's a realistic way of looking at situations; read the book and you'll understand.
  • by Coryoth ( 254751 ) on Thursday May 04, 2006 @03:42PM (#15265027) Homepage Journal
    I've worked in industry as a mathematician. When we say we're going to prove something we actually prove it, rather than just tossing out a few random examples for demonstration. Given that a piece of software is, at its heart, just a lot of mathematics, and the fact that it really is possible to prove things about code in the real sense of the word, I would be very careful about saying you "prove" your software works.

  • by Allnighterking ( 74212 ) on Thursday May 04, 2006 @03:57PM (#15265134) Homepage
    The problem is one of doing things in software the way automobile companies did in the 60's and Japan stopped doing in the 70's. Traditionally in software development you design, then send it to engineering to build, then send it to QA for an endless cycle of test, bitch, fix, bitch, retest, bitch, fix, bitch, test, bitch, deadline, oops, market. QA should be involved the moment some fool says "I have an idea" and stay in the loop all the way, testing in increments as things are built. I've written this up in more detail in a white paper on my site, but this is the gist: integration of quality control from the start means fewer problems. The idea of "I'll fix it later" sucks, because it never gets to be later.
  • by LargeWu ( 766266 ) on Thursday May 04, 2006 @04:01PM (#15265177)
    No, I think you have failed to comprehend the example from the book. Go back and read it again. A catch-22 is a circular set of conditions, each of which can only be fulfilled if the other is true. The second condition in your example falsely assumes that code which is released on schedule cannot also be bug-free. Furthermore, "damned if you do, damned if you don't" is a lose-lose situation, not a catch-22. That phrase assumes that "do" and "don't" are not dependent on each other. Of course, if you have to "do" in order to "not do", and have to "not do" in order to "do", then you've got a catch-22.
  • by z4pp4 ( 923705 ) on Thursday May 04, 2006 @04:02PM (#15265190)
    Everybody makes mistakes; that is how we learn and progress to a more experienced state of being.
    Telling people not to make mistakes tells them they cannot try out new and inventive, sometimes even shorter, ways of doing things.
    Unit testing is fine and should be encouraged, but really what you want to do here is make your build process do as much of the donkey work as possible, and let your programmers worry about the programming issues: doing things smarter and achieving the most with the least possible effort.
    The build process can do the following, if you do it right:
    -> Build the code to executable format and even CD ISO distributables (duh)
    -> Do code indenting and formatting etc. to conform to a standard.
    -> Do unit testing on code segments, and even tell you what % parts were not tested.
    -> Scan the code for bad practices such as strcpy and unmatched mallocs.
    -> Gather all your TODO's and your FIXME's into an output file.
    -> Run the program live and do input fuzzing testing, with extended debugging logs.
    -> Run Nessus and other attack scripts to take care of the obvious issues.
    With all these measures in place, it is a simple matter of having *somebody* go through the build logs and make a priority / TODO list, fixing security first and stability later, and the small imperfections last.

    But alas, nobody looks at the logs. Logs are boring. That's why you have to keep them visible. Maybe via RSS, IM or email?
  • by ishmalius ( 153450 ) on Thursday May 04, 2006 @04:14PM (#15265303)
    I'm just so happy that a "Developer" article actually made the front page. I have been afraid that the tech level of the audience of Slashdot has been falling lately. Compare it to the number of "Game" articles on the front page.

    But to stay with the topic, analysis tools are just that: tools. They are not a cure to chronic software problems. Developers are not excused from the responsibility of at least attempting to write quality code.

    Some current project development methods really contribute to buggy and insecure code. Example: XP. I really think that some aspects of XP programming are a bad idea. Namely, the "code as fast as you can" aspect of it is fraught with errors. A more thoughtful, disciplined approach might seem like it is terribly slow. Yet being inherently less buggy, it can reach the target faster than the sloppier, more haphazard approach. This is much like the Tortoise and the Hare. Or maybe a better analogy would be like a rally driver who is more careful with his fuel and tires.

    Don't get me wrong. Some parts of XP are fine. The Buddy System is an excellent way to get things done quickly by short-circuiting the collaboration cycle.

  • Re:I hold any bet (Score:3, Insightful)

    by TheGreek ( 2403 ) on Thursday May 04, 2006 @04:22PM (#15265375)
    If I'm guilty of infringement, I can't give a shit. Legal or not, I don't believe that copyright should be binding past ten years - check my past posts, I've got a record of saying five-ten should be the copyright limit, and I do live by that.

    Unfortunately for you, what you believe doesn't matter.

    "I'm sorry, officer. I don't believe that the speed limit should only be 45 on this road. I'm far enough away from the urban area, aren't I?"
  • by IamTheRealMike ( 537420 ) on Thursday May 04, 2006 @04:31PM (#15265454)
    FYI, it costs about $50,000 for a medium-sized project (500,000 lines)

    Yes it's incredibly expensive. Yet, plenty of well known companies pay for it, so I suspect it's worth it to them.

    is no more than a lint on steroids.

    Er, no. No, no, wrong, no.

    I've got access to the Coverity results for WineHQ. It's already found many problems that evaded both manual code review and unit testing. Its rate of false positives is remarkably low once properly configured. A lot of these problems would only occur in obscure circumstances or on error paths - but these are precisely the kind of errors that unit testing tends not to reveal. It can detect problems like race conditions or memory leaks that lint cannot. The recent X security bugs were revealed by the tool first.

    I've seen tools like this before, but not one as good as this. I've never used competing commercial products, so cannot speak as to their effectiveness, but for a large C++ codebase I would certainly be happy to have such a tool helping me out.

    Microsoft have used similar programs developed by MS Research on the Windows codebase for some time now and they're apparently very effective. Quite a lot of security problems revealed by them were silently fixed along with other problems in updates.

    None of these tools is a match for a manual audit performed by a professional.

    Totally wrong. Every patch that gets checked into Wine passes code review by at least Alexandre who is without question the best programmer I've ever met. He is easily as good as Linus but his much quieter and more conservative personality means he doesn't get Linus' press attention (a good thing, imo). And all the patches are posted to a public mailing list where several other people can and do review patches too.

    Static analysis can reveal problems that simply don't get spotted by the human eye because they're too complicated to follow, because they occur in very weird situations, or because the code evolves over time under the direction of many different people and inconsistencies creep in.

  • by nietsch ( 112711 ) on Thursday May 04, 2006 @04:43PM (#15265550) Homepage Journal
    It is very nice that this bozo has a (very expensive, I read) little program that tries to detect problems when they have already happened. So along comes Mr. Friendly one day (or more?) after the fact to discipline the programmer? That does not sound like a very positive approach to me.
    If you want to teach somebody something (I hope Mr. Belittle does), it works much better with a quick feedback loop: react immediately when something is going wrong, not one weekend later when the programmer has all but forgotten why he did it that way. I agree you cannot use a Mr. Little for such feedback, but unit tests and other tests that have to run before the developer can turn in his work can be run automagically. Tests are not partial, do not have favorites, and are easy for a programmer to understand. Mr. Little is probably the opposite. You will need either a pair-programming or a review process to prevent programmers from just disabling the tests that fail, but with such a process you will have good software and happy programmers. Mr. Little does not make programmers happy.
    Have a look at Aegis, a configuration management system that can enforce such a process and do a lot of other commonsense things. The "problem" with Aegis is that it does not have a pretty-pictures interface, so its advantages are hard to explain to pointy-haired bosses.
  • by hotsauce ( 514237 ) on Thursday May 04, 2006 @06:49PM (#15266673)
    You should really read Unit Testing in Java: How the Tests Drive the Code. XP is about small, direct steps, and when these are done with tests first, they greatly improve the quality of the code. You can draw all the big, fancy, pie-in-the-sky diagrams you want, and still get sloppy code.
  • Re:This just in: (Score:2, Insightful)

    by Psyonic ( 547207 ) on Thursday May 04, 2006 @09:45PM (#15267605) Homepage
    I hate to nitpick, but I have to object to your statement that, "If every piece of software written was tested in every conceivable scenario we wouldn't have any bugs; when that day comes I'll be a happy coder." That would be true if they all got fixed, but having worked in the business for a little while I can say that most companies know of hundreds, perhaps thousands, of small bugs in their code; they just don't have the manpower or motivation to get them all fixed.
  • by chthonicdaemon ( 670385 ) on Friday May 05, 2006 @01:41AM (#15268482) Homepage Journal
    In many cases they learn to program for their own projects, where speed and ease of coding may be more important than security. If you just pick up an old C book, no one warns you to stay away from gets(), so people learn to use it, and it works. Then they get told to use something else that is more secure, but it is slower or more difficult to learn, so they don't.

    In my building there is a whole floor of guys doing simulations in Fortran 77. When I tell them about new functions in F90 or about ways they could solve their problem in C++ they have only one question: "isn't that going to be slow?".

    So ultimately the problem is that most "bad" code comes from people who have hacked together a useful tool by leveraging their experience in fields where security or stability doesn't really matter. They have probably been coding successfully for some time without ever seeing anything wrong with their approach until they used it out of context.
