PMD Applied

Simon P. Chappell writes "It's a fundamentally agreed fact within our industry that code reviews are good. Really good. Sliced bread good. But have you actually tried organizing one? Even if you can get everyone who needs to be there into the same meeting room at the same time, you still have the challenge of trying to keep a roomful of geeks from getting trapped in minutiae and squabbling over details like formatting conventions. Well, what if I told you that you could get your code reviews done in less than five minutes and that there would be no arguing? Enter PMD, an open-source Java static analyzer. Think of it as a code review in a box. As if that weren't wonderful enough, there's even a book, PMD Applied, written by Tom Copeland, the author of PMD." Read on for the rest of Simon's review.
PMD Applied
author: Tom Copeland
pages: 221
publisher: Centennial Books
rating: 9
reviewer: Simon P. Chappell
ISBN: 0976221411
summary: A good book if you use or are considering using PMD for static analysis of Java code


I hope that it's not too much of a stretch to suggest that this book is primarily written for users of PMD. That said, I actually purchased it (yup, this one wasn't a freebie) because I was considering recommending PMD as part of the build process at a previous employer. This is not a slam on the documentation on the PMD website, but Mr. Copeland was offering signed editions of the book if you pre-ordered, and I wanted to add to my growing collection of books signed by the author. (No contest; geeky as charged.)

PMD Applied has ten chapters. The first chapter is the mandatory introduction and the second covers installation. Nothing unusual there. Chapter three covers using PMD from within Ant or Maven; I'm glad to see both build systems here, since each has a pretty good-sized user base and each deserves to be represented. Chapter four covers using PMD within an IDE. The range of IDEs covered is excellent and includes all of the ones I would have chosen and a couple of others that I hadn't heard of.

Chapter five begins the serious job of using PMD for static analysis by examining PMD's ability to detect duplicate code using the wonderfully named Copy and Paste Detector (CPD). Chapter six is titled Best Practices. It's a wonderful collection of advice from Mr. Copeland on how to begin applying PMD to your Java development process. As a team leader, I understand many of the points he makes. Teams generally don't enjoy having impartial tools that do nothing but nitpick their code and programming style dumped on them in the middle of a project; such things are a fast track to being unpopular. Mr. Copeland's guidelines will increase the chance of your team accepting PMD, as you start with a subset of helpful rules with a low chance of false positives.
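To make the duplicate-detection idea concrete, here's a made-up pair of methods of the kind CPD exists to find; the names are my own invention, not an example from the book, and whether CPD actually fires depends on the minimum-token threshold you configure.

    import java.util.List;

    // Two methods that differ by a single token (the getter call).
    // A token-based detector like CPD reports the repeated run of
    // tokens regardless of what the identifiers are called.
    interface Order { boolean isValid(); double getNet(); double getGross(); }

    class OrderReport {
        double totalNet(List<Order> orders) {
            double total = 0.0;
            for (Order o : orders) {
                if (o != null && o.isValid()) {
                    total += o.getNet();
                }
            }
            return total;
        }

        double totalGross(List<Order> orders) {
            double total = 0.0;
            for (Order o : orders) {
                if (o != null && o.isValid()) {
                    total += o.getGross();   // the only line that differs
                }
            }
            return total;
        }
    }

The fix, of course, is to hoist the shared loop into one method and pass in the varying part; CPD just tells you where to look.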

For all of its cleverness, PMD is nothing without well-written rules, and chapter seven has this covered. PMD ships with a good-sized rule set of its own, even though not all of its rules will suit every site. Creating custom rule sets is an important first step in customizing PMD for your team. Once the team is used to PMD, they're likely to want to start adding new custom rules to the mix, and the book has you covered there as well, with instructions on writing rules as both Java classes and XPath expressions.
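To give a feel for the Java route, here's a sketch of a braces-enforcing rule in the style of the PMD tutorials of that era. I'm writing the class and method names (AbstractRule, ASTWhileStatement, addViolation) from memory of the 3.x-series API, so treat them as assumptions rather than gospel; the XPath route expresses roughly the same check as a one-liner, something like //WhileStatement[not(Statement/Block)].

    import net.sourceforge.pmd.AbstractRule;
    import net.sourceforge.pmd.ast.ASTBlock;
    import net.sourceforge.pmd.ast.ASTWhileStatement;
    import net.sourceforge.pmd.ast.SimpleNode;

    // Sketch (assumed 3.x-era API): flag while-loops whose body is
    // not a braced block. PMD parses each source file into an AST and
    // calls visit() for every node type you override; addViolation()
    // files the report against the offending node.
    public class WhileLoopsMustUseBraces extends AbstractRule {
        public Object visit(ASTWhileStatement node, Object data) {
            SimpleNode body = (SimpleNode) node.jjtGetChild(1);
            if (body.jjtGetNumChildren() == 0
                    || !(body.jjtGetChild(0) instanceof ASTBlock)) {
                addViolation(data, node);
            }
            return super.visit(node, data);
        }
    }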

Chapter eight addresses the matter of customizing PMD. Naturally, making changes to the code and recompiling is the first customization covered, but the chapter then looks at custom report formats, adding new languages to the CPD, and advice on contributing code to the PMD project. Chapter nine is a look at the internals of PMD. Now, perhaps this won't be of much interest to some readers, but I found it fascinating: there's nothing quite like having the internal workings of a useful tool explained clearly by the person who wrote it. If you don't like it, it's only twenty pages, so skip over it and check out chapter ten, where the playing field of similar open-source tools is examined. The book wraps up with a large appendix containing a very useful explanation of all of the rules that ship in the standard PMD deliverable.

I'm going to let a good part of my personal biases show here: I absolutely love slim and detailed technical books. PMD Applied is 221 pages, which is about the size of the pre-ANSI version of Kernighan and Ritchie's The C Programming Language, the greatest technical book in the known universe. This is the perfect size for a technical book, allowing the author to introduce the subject, explain everything that needs saying, and wrap up before the reader's eyes roll back into their head and information overload descends upon them.

The level of technical discussion matches the size of the book and is just right. While the first few chapters deal with what PMD is and how to install and run it, the rest of the book deals with good, solid code analysis using PMD. This is why we bought the book. We're geeks ... show us the code! Mr. Copeland doesn't disappoint, and there are many excellent code examples throughout the book.

The appendix is a snapshot of the rules that shipped with PMD at the time of the book's publishing. While new rules have been added since then, the listing of every rule, each with a snippet of example code that would trigger it, remains useful. Even if you don't use PMD, buy the book so you can use the appendix as a comprehensive list of examples of how not to write Java code!
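In that spirit, here's a made-up specimen of appendix-bait. The snippet is mine, not the book's, but UnusedLocalVariable and EmptyCatchBlock are real PMD rules, and both would fire on it.

    import java.io.FileInputStream;
    import java.io.IOException;

    public class HowNotTo {
        public void slurp(String path) {
            int unused = 42;                    // UnusedLocalVariable
            try {
                FileInputStream in = new FileInputStream(path);
                in.read();
                in.close();
            } catch (IOException e) {
                // EmptyCatchBlock: the failure is silently swallowed
            }
        }
    }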

The book has now been out for a year, so the exact version of the software it describes has been passed by. This may bother you, but I feel that the basic principles of PMD have remained true, so I wouldn't let the version numbers dissuade you from purchasing.

The book is available only from the publisher, Centennial Books. They're a smaller publisher, but I had no problem purchasing through their website.

If you use or are considering using PMD for static analysis of Java code, then this book should be by your side."


Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
  • Misses the point (Score:5, Insightful)

    by Azarael ( 896715 ) on Friday February 16, 2007 @04:32PM (#18043582) Homepage
    Code review is as much about semantics as syntax. It's great to be able to find duplicated code and the like, but what about more subtle mistakes? The only way you will really be able to benefit from code reviews is if you frequently have experienced eyes going over the code. In addition, there are lots of suggestions around for how to set up a code review to avoid squabbling (reviewers should only be looking for specific problems, code formatting is not one of them).
    • I agree; but it's still worth running static analysis tools - like lint or PMD - to catch the admittedly obvious mistakes before you do the semantic (expensive, human) analysis. I mean, we turn all the warnings on when we compile, don't we?
      • by Azarael ( 896715 )
        I agree, every QA check list needs to have a bunch of steps.
      • Re: (Score:3, Insightful)

        by jrumney ( 197329 )
        It's generally worth catching as many errors as you can, as early as you can. I used to run an on-the-fly spell checker, which caught a lot of typos as soon as I'd made them, saving me a compile-and-fix cycle. In the last few years, though, CodingStandardsThatActivelyPreventSpellChecking have become popular, with the justification that underscores (which my spellchecker could treat as whitespace) are not allowed in variable names in some obscure language that someone used 30 years ago, so they shouldn't be used…
      • Re: (Score:3, Insightful)

        by ThosLives ( 686517 )

        Heh - you would think people compile with all warnings turned on. I've been working on a project recently that didn't.

        Also, the thing that neither static checkers nor semantic analysis can replace is really good up-front requirements and architecture work. Those are often the most lacking, and really have the most impact. A bad architectural decision early on can make life so much worse later, and that's not something that can be "fixed" in the traditional code review process…

        • by Azarael ( 896715 )
          It's kind of an endless loop though. The customers of my company aren't usually that technical. We might be able to come up with a great set of requirements with a whole lot of research, but it's likely that we'd never hit the sweet spot (because the customer changed their mind or plain explained something incorrectly). I figure that most developers are in the same boat: they have to do the best with the requirements that they have and be prepared to shift direction down the road. Sadly, we can't redesign…
        • Re: (Score:3, Funny)

          by jgrahn ( 181062 )

          Heh - you would think people compile with all warnings turned on. I've been working on a project recently that didn't.

          This is, in my experience, more the rule than the exception. Some moron starts programming, doesn't bother with the compiler, figures someone will clean that up later ... and five years later you cannot even add CFLAGS=-ansi -Wall without having the whole codebase barf thousands of warnings at you.

    • Re: (Score:3, Informative)

      PMD is not just a format checker. If you want syntax checking, try to compile the code; if it doesn't compile, the syntax is wrong. PMD does exactly what you are talking about. For example, it flags negative tests and if-then blocks without braces, which, while valid code, are the source of many subtle mistakes.

      It is an excellent thing to run before doing code review. It flags piddling sources of code errors and formatting non-compliance so that your peer review process doesn't have to. The au
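      For anyone who hasn't been bitten yet, here's a hypothetical illustration (mine, not from the comment above) of both of those examples. It compiles and is perfectly "valid", yet does the wrong thing -- exactly the class of mistake worth catching mechanically before a human ever looks at it:

          class AuditExample {
              private int warnings;

              void check(boolean suspicious) {
                  if (suspicious)
                      warnings++;
                      System.out.println("count: " + warnings);
                      // the println always runs: the indentation lies,
                      // because the braceless if guards only warnings++
              }

              // A negative test that reads as the opposite of what it does:
              boolean allowed(boolean banned, boolean admin) {
                  return !(banned && !admin);   // clearer: !banned || admin
              }
          }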
      • by caseydk ( 203763 )
        I agree wholeheartedly... and said the same thing in 2005: http://caseysoftware.com/pmdapplied [caseysoftware.com] ;)
      • Re: (Score:3, Insightful)

        by mollymoo ( 202721 )

        I LIKE ternary expressions and will use them whether or not beginners find them "confusing."

        So using a style which leads to the minimum of errors is less important than your personal preference? It's not surprising that most software is bug-laden shit when so many programmers have attitudes like yours.

        • Why do you think that PMD allows customization at all? Because labelling any particular thing as "bad" is not universally agreed upon, or not appropriate for different levels of programmers.

          When you're not a first-year graduate anymore, you start seeing why they're useful. They are in fact MORE clear and MORE logical than a bulky if-then-else. Where I work no one has a problem with that, so we disabled that rule and everyone is happy. You're SUPPOSED to do that.
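          Concretely, the rule objects to code like the first method here (my example, not mollymoo's); whether the second is an improvement is the whole argument, which is why it's a per-site toggle:

              class Plural {
                  // Ternary form: one expression, no mutable flow.
                  static String label(int count) {
                      return (count == 1) ? "item" : "items";
                  }

                  // The if-else form the rule steers beginners toward.
                  static String labelVerbose(int count) {
                      if (count == 1) {
                          return "item";
                      } else {
                          return "items";
                      }
                  }
              }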
    • You are right about semantics versus syntax, and of course static code analysis cannot catch a lot of things. But it is very useful for catching a lot of things, so it doesn't miss the point; it just doesn't cover all the points that a manual code review could. What it does do, though, is free up more time for doing more in-depth code reviews, because you don't have to waste as much time manually reviewing syntactic junk.
    • Re: (Score:1, Informative)

      by emphatic ( 671123 )
      Completely agree. PMD, along with FindBugs and optionally CheckStyle, are great tools to get your development team in the habit of using *before* code reviews. This allows the review to focus on higher-level issues rather than "that's a possible NPE right there". Code reviews are irreplaceable. These tools are *very* helpful for locating potential issues/bugs in your codebase automatically. (Both PMD and FindBugs have Eclipse plugins too, which helps integrate the tools into the act of code writing.)
      • You are absolutely correct.

        I've done [Java] code reviews with and without static analysis tools.

        If you don't have static analysis tools in place, the review tends to spend its time on finding and discussing little bugs. You'll still find the big ones, but the room is likely to be so worked up (or tuned out) by the discussion of what is "basics" to some and "nit-picking" to others that it's difficult to keep the discussion of the important stuff constructive.

        If you do have static analysis tools in place, you can se

    • by OldManAndTheC++ ( 723450 ) on Friday February 16, 2007 @05:35PM (#18044538)
                 (if the purpose
          of code) {
      is             to be }
      [    understood,       then
        formatting        standards]
      //       can     really     help.
      • by Azarael ( 896715 )
        That's why we have
        astyle dirty-file
      • Re:Misses the point (Score:4, Interesting)

        by rthille ( 8526 ) <web-slashdot@@@rangat...org> on Friday February 16, 2007 @06:15PM (#18045106) Homepage Journal

        I've felt for many years that source should be stored as syntax trees, and that editors should show you the source however _you_ want to see it, so it's most clear to you, not to the author.
        • Re: (Score:3, Interesting)

          by joggle ( 594025 )
          Your idea has merit but I disagree that the files should be stored in this way. This would probably either be some binary format that would make it more difficult to track changes using version-control software or an XML file that would also be more difficult to keep track of. This would also prevent common text editors from being able to edit the file.

          However, it would be cool if your text editor automatically cleaned up the code for your display without actually changing the file. The trick here though wo
        • by EvanED ( 569694 )
          An idea, if you can find a source code formatter that you can customize to your liking, is to have a canonical format for storage in the repository: on checkin, run the code through a formatter to convert it to the canonical format, and on checkout, run it through your own formatter.
        • by jgrahn ( 181062 )

          I've felt for many years that source should be stored as syntax trees and that editors should show you the source how every _you_ want to see it so it's most clear to you, not to the author.

          I do not believe that is possible. My tastes in formatting cannot be encoded into an algorithm.

          And besides, how many fellow programmers do you know who have horrible formatting, but at the same time happen to name variables and types well, and document tastefully? I know none.

          This idea was hot back in the early 1990s…

          • by Raenex ( 947668 )

            My tastes in formatting cannot be encoded into an algorithm.

            I'm curious. Do you have examples?

            And besides, how many fellow programmers do you know who have horrible formatting, but at the same time happen to name variables and types well, and document tastefully? I know none.

            I don't think it's a matter of "horrible" formatting. It's just a matter of personal taste, like where the brace belongs, how many spaces to indent, etc. It certainly would be nicer if everybody could use their personal style

      • by Dastardly ( 4204 )
        If the code is hard to write it should be hard to read. :-)
    • Re: (Score:3, Insightful)

      by Tim C ( 15259 )
      reviewers should only be looking for specific problems, code formatting is not one of them

      On the contrary, if you have a code formatting standard (and imho you should), then formal code reviews most certainly should flag transgressions.
    • There are static analysis tools that do full semantic analysis as well as syntactic analysis, tools such as SPARK and MALPAS. They look at your code and tell you what it does. They'll only work on a subset of programs (in a subset of the relevant programming language), of course, and you have to give them some hints (to overcome the halting problem, for one thing), and the output is a mass of predicate calculus that you are then expected to prove matches your formal specification for the program. Huge fun,
  • What about large-scale systemic problems that can't easily be gathered up for a "round-the-table" walkthrough? Security vulnerabilities are the obvious candidates there -- some level of review is obviously tractable (the "hey, you're slapping that user-supplied string straight into a SQL statement there, bad lad!" kind of thing), but the more pernicious and less obvious candidates (e.g. malformed tracer packets being propagated through system services), I would say, require the formalism that a static analysis…
  • PMD defined (Score:4, Informative)

    by martyb ( 196687 ) on Friday February 16, 2007 @04:33PM (#18043602)
    WTF is PMD???

    Look at the sourceforge definition of PMD [sourceforge.net].

    • by bubbl07 ( 777082 )
      Well, according to the summary:

      Think of it as a code review in a box.

        However, I don't trust boxes. The first box I opened had Justin Timberlake inside; I'm still paying child support to the second box.
    • WTF is PMD???

      Easy, Erick and Parrish Making Dollars [wikipedia.org]; well, PMD was when Parrish went solo.
    • Indeed. It can be quite helpful in reviews if you actually provide a somewhat detailed definition, at the beginning, of what you're talking about, rather than assuming everyone has heard of it. Calling it a "static analyzer" only makes me think of balloons with hair stuck to them. And, "The first chapter is the mandatory introduction... Nothing unusual there," is simply ironic, given its absence here.

      Freshmeat [freshmeat.net] says, "PMD is a Java source code analyzer. It finds unused variables, empty catch blocks, unnecessary…
    • It's pretty annoying when people use obscure acronyms in the /. summaries without resolving them, but acronyms that can't even be resolved are taking the annoyingness to a whole new level.

      Won't somebody think of the /.ers?
  • I've run many code reviews, most of them very successfully. They just need someone to run them who isn't attached to the code.
    • by jrumney ( 197329 )
      It also helps if your "room full of geeks" is a small room - the person who wrote the code, the person running the meeting, and two or even just one reviewer works best in my experience. Also helps if the reviewers have reviewed the code prior to the meeting, so the rest of the team are not spending the whole meeting watching them skim the code.
  • by Chirs ( 87576 ) on Friday February 16, 2007 @04:34PM (#18043628)
    There are lots of static checkers, and they're valuable to use before a formal review. The real usefulness of human reviewers is to find bugs in the logic, not the coding. Math errors, bad assumptions, missed corner cases, etc.
    • agreed... and not only those sorts of things, but peers with more time invested in a non-trivial-sized project can also spot where you reinvented the wheel (due to unfamiliarity with the code base) and where, thinking down the road, a refactor now will save tons of time later. These are the sorts of things you do code/peer reviews for.
  • Sewwiously [homestarrunner.com]. I don't know why Microsoft doesn't (appear to) invest in static analysis in their code... Imagine outputting a to-do list with, say, *every* freaking buffer overflow exploit in the Windows code tree.
    • Re: (Score:2, Informative)

      Imagine outputting a to-do list with, say, *every* freaking buffer overflow exploit in the Windows code tree.

      From what I understand from my friend who works for M$, one of the main reasons it took so bloody long to write Vista was they did precisely that. He may be wrong or the auto-buffer overflow exploit finder may not have been perfect, but at least they're trying.

    • by jrumney ( 197329 )
      Static analysis of C and C++ is much more difficult, and prone to miss things and flag things inappropriately, than static analysis of a language like Java.
    • Re: (Score:3, Insightful)

      by Procyon101 ( 61366 )
      As someone else said, they do. Rigorously. And with some of the best next-gen lint like tools there are.

      As an aside though, I question the sanity of such tools. Why does the language allow you to use constructs that an automated tool could tell you were screwy?!? Shouldn't such practices the tool catches be compiler errors, or at least glaring warnings? It seems to me that the underlying problems lie in the language specifications and the tools are only there as a band-aid.
      • by EvanED ( 569694 )
        As an aside though, I question the sanity of such tools. Why does the language allow you to use constructs that an automated tool could tell you were screwy?!? Shouldn't such practices the tool catches be compiler errors, or at least glaring warnings? It seems to me that the underlying problems lie in the language specifications and the tools are only there as a band-aid.

        If you're writing a device driver, how are you going to have a language that tells you not to call a sleeping function when the IRQ level is too high?
        • how are you going to have a language that tells you not to call a sleeping function when the IRQ level is too high?

          You aren't. But I doubt a static analyzer is going to catch that either.

          How are you going to have a language that will prevent you from writing 'close(fd);' twice in a row?

          By abstracting away the mutation. You can do this with C++ RAII, or Monadically, or any number of other ways.

          We will always need device drivers. C and ASM are great for that, no question about it. On the other hand, when
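          In Java terms, since that's the tool under review, the parent's "abstract away the mutation" point looks like the execute-around idiom below -- a hand-rolled sketch, not any particular library's API. The caller never sees close(), so a double close can't even be written:

              import java.io.FileInputStream;
              import java.io.IOException;
              import java.io.InputStream;

              class Streams {
                  interface Use {
                      void use(InputStream in) throws IOException;
                  }

                  // The one and only close() lives here, guaranteed
                  // by the finally block.
                  static void withStream(String path, Use body)
                          throws IOException {
                      InputStream in = new FileInputStream(path);
                      try {
                          body.use(in);
                      } finally {
                          in.close();
                      }
                  }
              }

          Callers pass an anonymous Use implementation and never hold the stream outside the callback.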

          • by EvanED ( 569694 )
            You aren't. But I doubt a static analyzer is going to catch that either.

            There are tools that are powerful enough to find if small programs (such as drivers) have these bugs. They are limited by assumptions they make so they can't catch all of them, but they will usually either prove a program free of these bugs up to their assumptions or find a concrete counterexample.

            (In general such a problem is of course undecidable, but for the tiny subsets of possible programs that represent what we actually write, the
            • I agree that there will always be classes of bugs that cannot be solved by language design. The halting problem is the classic example of this. However, if the static analysis tool can determine that the code you wrote is not in a subset of possible code you probably meant to write, then it seems perfectly reasonable that you can construct a syntax in which only programs lying in the subset of programs you mean to write are expressible, or at least a syntax that maps them closely enough that a static analyzer…
      • by jgrahn ( 181062 )

        As an aside though, I question the sanity of such tools. Why does the language allow you to use constructs that an automated tool could tell you were screwy?!?

        Because the linter is only right 90% of the time. A language has to be flexible to be useful; a linter doesn't.

        • A language need only be as flexible as its problem domain requires. Many make systems are not even Turing-complete (I think GNU Make is), but that doesn't make them non-useful! Java abstracts away pointers and forces you into an OO-only paradigm, but that lack of flexibility shelters you from a large class of bugs while retaining enough flexibility to solve a great range of problems. (Please don't take this statement as a personal endorsement of Java :)

          Without even getting into non-mutable semantics, we
    • by EvanED ( 569694 )
      WTF? MS has invested HEAVILY in static analysis. MS Research's SLAM project, which has been semi-commercialized in the static driver verifier for driver development, is one of the most important projects in the field.
  • Think of it as a code review in a box.

    Hey, if you want to call it [youtube.com] a "code review", that's your prerogative.
  • by stoicfaux ( 466273 ) on Friday February 16, 2007 @04:35PM (#18043656)
    Err... call me old-fashioned, but aren't code reviews supposed to focus on the business logic implementation, potential side effects, and the algorithms used? Don't get me wrong, it's nice to have an automated tool to check syntax and look for duplicated code, but that's not where the savings come from. Until you get a human-level A.I. to do code reviews, automated code review tools are just the gravy, not the meat, of solving code problems.
    • > aren't code reviews supposed to focus on the business logic implementation

      I think that's exactly right - use tools like PMD to find nickel-and-dime things like certain null pointer exceptions [sf.net], unused code [sf.net], empty try blocks [sourceforge.net], etc. Let the code reviews be focused on things like "hey, we don't need all these accessors", "we should be using the business rule package here", "this is really more of a Map than a List" and that sort of thing.

      The tools do the gruntwork and the people do the thinking... good tim
    • Re: (Score:3, Insightful)

      This has to do with the human element. If you get six people in a room and give them a pile of badly formatted code, even with specific instructions to the contrary they are likely to at least mentally pick apart all the obvious problems before they move on to subtle logic flaws and business logic. I think the savings come in because you can be a better code reviewer precisely because you are not being distracted by formatting or stupid little potential errors. I personally have a much easier time…
    • by jgrahn ( 181062 )

      Err... call me old-fashioned, but aren't code reviews supposed to focus on the business logic

      (What is "business logic", anyway? Can I assume it's a synonym for "logic", only made more palatable to PHBs?)

      implementation, potential side effects, and the algorithms used?

      I guess that's one way of putting it, yes. It's also a tool for people to learn each other's parts of the code, and for understanding each other's styles and design choices.

  • by neutralstone ( 121350 ) on Friday February 16, 2007 @04:44PM (#18043782)

    Enter PMD, an open-source Java static analyzer. Think of it as a code review in a box.

    No, no, no!

    Neither of these development tools is a substitute for the other. Static analyzers are very good at catching *certain* *limited* *kinds* of obvious and likely or conceivable bugs in a short span of time, but in general they cannot deduce the intentions of the developer -- let alone the desires of the user. That's where your peers come in: they review your designs before you write the code, and review patches prior to merging them into a revision control system. Static analyzers are *debugging tools*: use them while coding and/or when you run regression tests.

    Why wasn't that obvious to the submitter?
    • Yes! Yes! Yes!

      Having worked in corporate I/S for far too many years now, I understand that the "right thing" is rarely what happens.

      I agree with your point that PMD is not a substitute for a REAL code review, undertaken by a small group of talented programmers. Unfortunately, these rarely happen, so PMD starts to be a very useful second line of defense. For the pragmatic among us, using PMD makes much sense. I wish it wasn't so, but I find that it is. If you work in an environment unlike that which I've described…
      • I agree with your point that PMD is not a substitute for a REAL code review, undertaken by a small group of talented programmers.

        Ok, we agree so far...

        Unfortunately, these rarely happen, so PMD starts to be a very useful second line of defense. For the pragmatic among us, using PMD makes much sense. I wish it wasn't so, but I find that it is.

        Why do you wish that using PMD wouldn't make sense?

        I think perhaps you missed my other point: Code review is not a substitute for running a static analyzer.

        Static analysis is a Good Thing.

        Peer review is a Good Thing.

        • I think perhaps you missed my other point: Code review is not a substitute for running a static analyzer.

          Static analysis is a Good Thing.

          Peer review is a Good Thing.

          Nice summary. Static analysis tools have a firm place in a well-planned development cycle, and that place is on the developer's desktop, while they're coding. Applying static analysis as a "last thing before ship" stage is a recipe for failure.

          So, apply static analysis while coding and get all the advantages that it brings in terms of de

    • by EvanED ( 569694 )
      Static analyzers are very good at catching *certain* *limited* *kinds* of obvious and likely or conceivable bugs...

      To be fair, good static analysis tools can catch a LOT of non-obvious bugs. MS Research's SLAM project found a bug in the parallel port driver, a race condition that was triggered if a program was closing the port at the same time the port was being physically removed from the computer. (Hey, it could happen if you have a laptop docking station.) This bug went undetected for like a decade to be
      • by EvanED ( 569694 )
        (I'm not denigrating code reviews either in the slightest; you're very right that they are nearly complementary. But I do think that they can do more than you acknowledge.)

        they = static analysis tools

        Got my antecedents confused...
      • Static analyzers are very good at catching *certain* *limited* *kinds* of obvious and likely or conceivable bugs...

        To be fair, good static analysis tools can catch a LOT of non-obvious bugs.

        Very true; and I've seen SAs catch some very non-obvious bugs so I shouldn't have said "obvious". (:

        MS Research's SLAM project found a bug in the parallel port driver, a race condition that was triggered if a program was closing the port at the same time the port was being physically removed from the computer. (Hey,

  • by 140Mandak262Jamuna ( 970587 ) on Friday February 16, 2007 @04:46PM (#18043814) Journal
    My friend wrote a syntax analyzer to report gross statistics about code: the ratio of lines of comments to lines of code, the average number of lines per function, etc. Finally, it would come up with a letter grade between A and F for code quality, based on some heuristics. That was his master's project.

    Our diabolical prof fed the analyzer its own code as data. It spat out a D. He got a D.

    • Re: (Score:3, Interesting)

      by tcopeland ( 32225 )
      Nice... yup, metrics are nice, but don't let them rule.

      PMD has a bunch of metrics stuff, including NPathComplexity [sourceforge.net] (thanks to Jason Bennett for writing that one) and CyclomaticComplexity [sourceforge.net]. All the "codesize" PMD rules are here [sourceforge.net].
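      For anyone new to the metric: cyclomatic complexity is roughly one plus the number of decision points. In this made-up method the for, the if, and the && each count, giving 4; exactly which constructs count varies between tools and versions, so treat the arithmetic as illustrative.

          class Scorer {
              // Decision points: for (+1), if (+1), && (+1)
              //   => complexity 1 + 3 = 4
              static int countInRange(int[] values, int limit) {
                  int n = 0;
                  for (int v : values) {
                      if (v > 0 && v < limit) {
                          n++;
                      }
                  }
                  return n;
              }
          }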
    • Our diabolical prof fed the analyzer its own code as data. It spat out a D. He got a D.

      What an idiot! Come on, if it's his freaking *project*, you'd think he'd at least do the vanity/sanity check of running it on itself. Something like that had better damned well produce an 'A' for itself. Did it never occur to him to at least practice the principles of design he's preaching?

      I'd have done the same thing if I were the prof, it's just too damned obvious. And incredibly funny. If I was feeling charitable…

  • findbugs is better (Score:2, Insightful)

    by Nightshade ( 37114 )
    I've used both PMD and findbugs and PMD is pretty good, but I find findbugs to be much easier to set up and use. (http://findbugs.sourceforge.net/ [sourceforge.net])
  • I've used PMD, and it's quite good, but irritating at times. It complained when I was creating an object within a loop. But what I had to do was populate a list of objects, so the whole point was to create multiple objects using a loop. It still complained...

    It's like checkstyle - good for catching some things, but sometimes too irritating to take seriously.
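    Presumably something like this was the culprit (my reconstruction, not the poster's actual code). Allocating inside the loop is the entire point, but the rule fires anyway -- which is why the book's advice about starting from a curated subset of rules matters:

        import java.util.ArrayList;
        import java.util.List;

        class Point {
            final int x, y;
            Point(int x, int y) { this.x = x; this.y = y; }
        }

        class Loader {
            static List<Point> parse(int[][] raw) {
                List<Point> points = new ArrayList<Point>();
                for (int[] pair : raw) {
                    // "object creation in a loop": intended, but flagged
                    points.add(new Point(pair[0], pair[1]));
                }
                return points;
            }
        }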
  • by Anonymous Coward
    I really thought that IBM was headed somewhere when they cooked up Structural Analysis for Java (SA4J)
    http://www.alphaworks.ibm.com/tech/sa4j [ibm.com] back in '04.

    Yes, heuristics can catch probable mistakes in developed code, but the above sort of thing, if sophisticated enough, could warn you about serious architectural mistakes as you are making them.

  • I used to hate going into code reviews at this one place I worked. Because nobody had time to prepare for it properly, it often collapsed into a series of "You need to indent 4 spaces instead of 3" kind of criticisms, while ignoring issues like code reuse, poorly commented code, and misuse of basic OOP principles.

    I don't believe a machine, no matter how well programmed, can replace a proper review done by real live human beings. But to do that, you need to give people the time to do it right. Unfortunately,
  • Where is all the whining about Slashdot linking off to some evil corporate bookseller, when the book can be had for less at Amazon?
  • This was shown not to be viable in 1936, when Alan Turing proved that no machine, given a description of a program and a finite input, can decide whether the program will finish running or run forever on that input.

    Thus a general algorithm to solve the halting problem for all possible program-input pairs cannot exist, let alone validate the semantic approach or validity of the code itself.

    Unfortunately there are enough managers who want to devalue software development from a skilled art into a brain-dead…
    • by mture ( 1053660 )

      I understand the computer theory behind your comment. However, I don't see what this has to do with the task of improving code quality. Do I need to prove whether a program halts to be able to suggest improvements through static analysis?

      Static analysis tools are one part of the complex software development process. I wouldn't be so quick to throw them out completely because of something a very smart computer scientist once said!

      • by JustNiz ( 692889 )
        Sure, but my understanding from the article is that this software is intended to replace the need for code reviews by humans. My original comment was really trying to say why that could never happen.
        • by Raenex ( 947668 )

          Sure but my understanding from the article is that this software is intended to replace the need for code reviews by humans.

          There were many posts that pointed out that this software was not meant to take the place of code reviews. The original submitter/reviewer had it wrong. No need to rant about the Halting Problem and managers.

  • by steveoc ( 2661 ) on Friday February 16, 2007 @10:49PM (#18047356)
    Used to do this all the time when writing software for defence/weapons systems (in Oz, at least).

    It's easy: the source is printed out and circulated. Peers have one week to go over the code and make markups in the coloured pen of their choice, after which a formal code review meeting is held.

    It is the responsibility of each peer to be familiar with the requirements spec (even if it's not part of their project) and with the design that has been signed off. If they don't agree with the design - too bad - the process allows plenty of time to look at design options, document those options, and document the reasons for choosing one approach over another. Don't waste people's time by trying to flog a dead horse at a code review.

    The meetings are known in advance, so it's never a problem getting people in the same room at the same time.

    The code review meeting is informally divided into four sections:

    1) Intro - if this is NOT the first code review, quickly list the TODO items that came up from the last one. Good developers will refer to these recurrent issues quickly, and stick to making use of the time by covering new ground. 10 minutes to discuss any code formatting / spelling / naming convention issues. These ARE important for maintainability and consistency, so egos need to be put on hold for the duration of this 10-minute period. It's not hard to do.

    2) Critical issues section - no time limit. During this phase of the meeting, we look at: coding errors, buffer overflows, and other WTFs in general. Requires serious, devious, rat-like cunning on the part of the reviewers to think up diabolical scenarios in which the reviewed software might fail. Again, egos are put to one side, since everyone eventually ends up on the receiving end of this treatment. OK, it's hard to watch 'your baby' being ripped apart like this, but it's all for a good cause. A lot of really good things come out of this exercise.

    Strangely, when your peers rip the shit out of your code, yes it hurts, but it gives you more respect for those peers. There is nothing better than feeling that you are surrounded by really top-notch people that you CAN bounce ideas off, and get better insights into what you are working on. That is one side-benefit of tight code reviews.

    Good, disciplined peers will stick to the agenda, and not wander off into talking about design options or formatting issues during this period.

    Document everything - even if the conversation winds around and ends up coming to the conclusion that the code is not at fault after all, document it, so you don't end up going back down the same path later on down the track.

    3) Requirements review: a quick session to discuss whether the code matches the requirements. Don't bother turning up to a meeting without a thorough understanding of those requirements - you will look like a dick, and nobody will invite you to a pissup on Friday arvo. Document any areas where the code falls short of the requirements, AND ALSO any areas where the developer has gone over the top and added a pile of 'features' that are outside the requirements. Keep the code lean and mean.

    4) 5-minute wrap-up: write up a TODO list of things that need to be done, estimate a timeframe, make sure it gets plugged into the Gantt chart, and set a fixed time for the next review.

    Time and schedule should be on everyone's mind, but that's a management and sales problem - not a technical one. Good developers will ignore schedule constraints and stick to doing things right. Having this attitude MAY turn some people's hair grey, but that's not our problem ... fact is, it's the quickest way to finish off a good product.

    And we do build good shit.

    Most importantly, everyone should come out of a code review with that smug feeling that they really are part of an elite team of coders, who look at things from every possible angle. It builds morale and raises standards all round.
  • Assuming your manager is competent, what's wrong with 1 developer and 1 senior/manager for a code review?

    The key here is that the developer improves his code, or learns more about how the other developers work. That's all.

  • So it seems that PMD is something like lint, splint, or QAC, but for Java.

    Where I work we use QAC on C code (embedded development), to obtain information and statistics on such things. However, there is also an extensive code review practice set up.

  • The classic approach - sit in a room and review - doesn't work; the engagement factor drops considerably.

    The process that I have implemented within my team is quite simple.

    1) Before the review, the developers prepare the submission comments as well (low-quality checkin comments are as bad as low-quality code)
    2) The developers then run a script to prepare a diff for review
    3) The review is sent to *all* developers
    4) Any developer can hold the review based on logic
  • PMD is decent but really not all that useful in the grand scheme of things. Two tools are absolutely critical at my workplace:

    checkstyle - enforce a coding style. We integrate this into our SCM system so people can't check in unless their code meets the official style. It's draconian but our codebase would have 12 different styles and be close to unreadable without it.

    SmartBear's CodeCollaborator - a tool which makes code reviews easy. This is a necessity when working with junior developers on another c
