Getting Development Group To Adopt New Practices?

maiden_taiwan asks: "At my software company, we occasionally need all engineers to adopt a new standard or 'best practice.' Some are small, like the use of CamelCase for function names, while others have tangible business value, such as 'every check-in must be accompanied by a unit test.' As you might guess, some new practices get ignored, not because people are evil or lazy, but because they're simply too busy to pay attention and change their work habits. So we are seeking creative ways to announce, roll out, and enforce a standard for 100+ engineers so they will actually follow it." What ways have you used to convince your developers and engineers to adopt a new set of practices that may or may not get in the way of their daily work habits?
"We already know to automate compliance when possible (e.g., the revision control system could reject check-ins without unit tests), and simple platitudes like 'tie compliance to their year-end bonuses' aren't helpful by themselves, as someone will still need to check compliance. The engineers here are smart people, so we want to spend less time on enforcement (having architects read the code and flag any non-standard practices) and more on evangelization (getting engineers to see the benefits of the standards and -want- to follow them). I'd welcome any advice on formal processes or just plain fun ways to get people's attention."
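
For the automation angle, here is a minimal sketch of the "reject check-ins without unit tests" idea as a pre-commit hook. It assumes a hook interface that supplies the changed paths on stdin and a tests/test_<module>.py naming convention -- both illustrative assumptions, not any particular version control system's actual API:

    #!/usr/bin/env python
    # Hypothetical pre-commit hook: reject check-ins that touch source
    # files without touching a matching test. Assumes changed paths
    # arrive one per line on stdin and that tests live at
    # tests/test_<module>.py -- conventions invented for this sketch.
    import os
    import sys

    def main():
        changed = [line.strip() for line in sys.stdin if line.strip()]
        sources = [p for p in changed
                   if p.endswith('.py') and not p.startswith('tests/')]
        tests = set(p for p in changed if p.startswith('tests/'))
        missing = []
        for src in sources:
            name = os.path.splitext(os.path.basename(src))[0]
            expected = 'tests/test_%s.py' % name
            if expected not in tests:
                missing.append((src, expected))
        for src, expected in missing:
            sys.stderr.write('%s changed without %s\n' % (src, expected))
        return 1 if missing else 0  # nonzero exit aborts the check-in

    if __name__ == '__main__':
        sys.exit(main())

The virtue of a mechanical gate like this is that nobody has to play enforcer by hand, which is the whole point of automating compliance.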
  • by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Wednesday November 15, 2006 @12:41AM (#16848510)
    One thing that really needs to be understood by all you "best practice" guys is that test code is not a shippable product. Requiring that all checkins be accompanied by unit test code is ridiculous because two developers working on the same code will need to update not only the code itself but also any test cases that rely on the behavior of the executing code. And if code A is supported by test code X, Y, Z, are you also going to require that any changes to A also be accompanied by changes to X, Y, Z? What happens if A is some fundamental architectural change (or maybe simply a refactoring) that affects all tests in the test suite? You can't seriously be talking about forcing the developer to go through the entire test suite looking for compilation errors and runtime errors just because those early tests don't make sense anymore with the new code.

    You need developer buy-in to make a large-scale standards effort work. You can do that by culling those developers who are realistic about development practices and by augmenting the remaining programmers with new hires who are just as standards-oriented as you are. Other than that, you'll be facing an uphill battle that will come to its conclusion the first time you approach a project milestone that is significantly behind schedule.

    Do yourself a favor and get some test developers and testers. Let them worry about the test suite and let your developers worry about the product.
  • What I'd Do (Score:4, Insightful)

    by hahafaha ( 844574 ) * <lgrinberg@gmail.com> on Wednesday November 15, 2006 @12:45AM (#16848528)
    > What ways have you used to convince your developers and engineers to adopt a new set of practices that may or may not get in the way of their daily work habits?

    Adopt the new practice. Or you're fired.
  • Advance notice (Score:2, Insightful)

    by MeanMF ( 631837 ) on Wednesday November 15, 2006 @12:46AM (#16848534) Homepage
    Make sure you give your programmers two weeks' notice about the changes... Because that's probably what they'll be giving you soon afterwards.
  • by Anonymous Coward on Wednesday November 15, 2006 @12:54AM (#16848586)
    Bring new development practices in incrementally. Make sure there is a strong case and rationale for each new practice. Equally important is communicating this rationale to the engineers. Management and programming leads need to step in and crack the whip if needed: engineers either follow the new standards or find new employment.

    Many development practices show immediate tangible benefit. Anybody who has tried moving or renaming files or directories in CVS can easily be sold on Subversion. Bonus points for Subversion's command-line client being syntax-compatible with the CVS client.
  • Evangelize (Score:2, Insightful)

    by ubergnome ( 242049 ) on Wednesday November 15, 2006 @12:59AM (#16848624)
    It's been my experience that most good developers will adopt a standard practice because a) it makes sense and/or b) the new standard doesn't require any more effort than the old way, so it doesn't hurt to play the game.

    So, make it easy to adopt and make an effort to educate.

    Remember, though, that this is a two-way street; if it's hard to adopt or hard to argue for, then maybe management/architects should rethink their reasons for requiring the standard.

    Also -- a training program might be a good carrot to help get the more junior devs on board (at least with more complex standards like unit testing or patterns-based development).
  • by kfg ( 145172 ) on Wednesday November 15, 2006 @01:05AM (#16848662)
    This Ask Slashdot question just cries out for its companion question:

    How do you get management not to implement a new standard or 'best practice'?

    KFG
  • Try some structure (Score:1, Insightful)

    by Anonymous Coward on Wednesday November 15, 2006 @01:11AM (#16848700)
    Keep your process changes to three-month cycles (or something equally regular), so that people know they're coming. There's nothing more aggravating for a coder than getting yelled at over some new process that some guy thought of last week and just 'mentioned' to a few people.

    If you have a regular release cycle (like any good software product) and actually notify people of the changes (as well as providing a reference guide containing all current policies), people are more likely to follow them. If you just announce new things you want people to do as you feel like it, and don't even bother to document what you want them to do, all you're going to do is piss people off. When you're making up new practices all the time and then ripping into people for not following them, it just feels like you're looking for excuses to pick on them.
  • by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Wednesday November 15, 2006 @01:16AM (#16848728)
    Lots of "clean up our practice" Ask Slashdots lately.

    Here's another one that seemed a little bit hopeless: http://ask.slashdot.org/article.pl?sid=06/11/10/041257 [slashdot.org]

    I have a feeling that as word gets out that Microsoft is getting deep into the whole Agile phenomenon, using it as much as possible throughout its teams, other companies will want to emulate that dynamism for themselves. However, what works for one company (especially one focused on building consumer applications) may not be a realistic or useful methodology for another company (like one trying to implement a system based on pre-packaged components).

    That's not to say that Agile or XP or DP or Booch or Rumbaugh or Rational or Waterfall or any of the myriad development methodologies aren't useful. They are eminently useful *within their useful domains*. Once you start trying to apply a square methodology to a round development team, you're bound to run into problems. If you are dead set on implementing a set of standards, make sure you understand the methodology completely first; otherwise you'll end up with a half-assed implementation of piecemeal tactics without a comprehensive strategy.

    Just look at Iraq for an onerous example of a project with capable team members and highly effective tactics led by incompetent managers with no overall strategy. You don't want to build that sort of company.
  • by Osty ( 16825 ) on Wednesday November 15, 2006 @01:28AM (#16848776)

    One thing that really needs to be understood by all you "best practice" guys is that test code is not a shippable product.

    Code without tests is not shippable code!

    And if code A is supported by test code X, Y, Z, are you also going to require that any changes to A also be accompanied by changes to X, Y, Z? What happens if A is some fundamental architectural change (or maybe simply a refactoring) that affects all tests in the test suite? You can't seriously be talking about forcing the developer to go through the entire test suite looking for compilation errors and runtime errors just because those early tests don't make sense anymore with the new code.

    You can, and you should. If your build is not organized enough that each developer can do a full build (or at least a build of modified components) prior to check-in, then you have some work to do. As for architectural changes and refactorings, think of this as a small barrier to entry: it prevents frivolous re-architecting when your developers should be getting on with their real work. If a new architecture or a refactoring is important enough, then it's also important enough to fix or deprecate the test suite.

    Do yourself a favor and get some test developers and testers. Let them worry about the test suite and let your developers worry about the product.

    I agree with the idea, but not the sentiment. Testers and test developers are there to ensure quality. They have their own job to do, and it's not mopping up after lazy developers who break the build or check in non-functional code because they couldn't be bothered to unit test. Bear in mind that the advocated "developer testing" really is minimum-bar stuff -- does your code work for the mainline scenarios? Maybe you have an obscure boundary condition or off-by-one error that you missed, and that's okay. What's not okay is just checking in a bunch of code and throwing it over the wall to the test department. Your quality will suffer and your testers will hate you. An adversarial test-dev relationship benefits nobody.
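
    To make the minimum-bar idea concrete, here is a sketch of the kind of mainline-scenario check being described; the function under test is invented purely for illustration:

        import unittest

        # Hypothetical function under test -- stands in for whatever
        # code the developer is about to check in.
        def parse_version(s):
            """Parse a 'major.minor' version string into a tuple of ints."""
            major, minor = s.split('.')
            return int(major), int(minor)

        class ParseVersionMainline(unittest.TestCase):
            # Minimum-bar developer test: the mainline scenario only.
            # Obscure boundary cases ('', 'v1.2', '1.2.3') are left
            # for the test team, per the comment above.
            def test_simple_version(self):
                self.assertEqual(parse_version('2.11'), (2, 11))

        if __name__ == '__main__':
            unittest.main()

    A gate this small costs the developer minutes per check-in, not days.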

  • by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Wednesday November 15, 2006 @01:33AM (#16848798)
    How fast does your codebase evolve? My money is on a glacial rate of change.

    As for "dumping" messes on others, I find it laughable that anyone would think that a developer whose mind is so geared towards developing a certain section of code would be able to objectively analyze and test their own code, much less write a test suite to cover the new code effectively. This is the same as asking an author to edit his own book. Sure, he may find the occasional misspelling or figure a better way to reword a chapter, but on the whole the work of editing is best left to editors.

    The work of testing is best left to testers. An intermediate 'clear box' test team focused on the development and maintenance of unit tests is far more effective than the traditional dev/test organization that is so prevalent. By getting the development of tough tests out of the hands of the code creators, the tests are more likely to be fiercely objective and less prone to leniency.

    As for creating a feedback loop: a developer who consistently puts out good code will consistently pass the unit tests and get to work on new code, but one who consistently puts out bad code will forever be stuck in the code-fix-code-fix cycle. If you were to leave the developers to their own devices, the first one would spend an inordinate amount of time writing good unit tests while the second one would write tests that were designed to pass. Or you'd have both writing the bare minimum of unit tests, which is worse than useless because you'd come to the incorrect conclusion that the code was reaching some minimum level of quality when in reality it is no less buggy than usual.

    I don't doubt that unit tests are a good idea. I question whether it is a good idea to have the developers themselves write them, and whether it is a good idea to force the development of tests as a prerequisite to checking in code. Both sound great on paper, but reality shows that unless management is ready to step on the necks of the development team, the practices get thrown out the window when projects hit the death-march stage.
  • by seebs ( 15766 ) on Wednesday November 15, 2006 @01:49AM (#16848860) Homepage
    One thing that might help would be to involve them in the decision-making process. A lot of the "best practices" I've seen handed down were monumentally stupid ivory-tower junk.
  • by Coryoth ( 254751 ) on Wednesday November 15, 2006 @02:54AM (#16849154) Homepage Journal
    Requiring that all checkins be accompanied by unit test code is ridiculous because two developers working on the same code will need to update not only the code itself but also any test cases that rely on the behavior of the executing code.

    The solution to that, of course, is to integrate test definitions into the code itself [wikipedia.org] so that it all gets updated together. As a bonus your API documentation is more precise and gets updated along with the tests and the code. You can then push a button, walk away, and have an automated testing system fully exercise your code [inf.ethz.ch]. And yes you can have all of that for Java [iastate.edu] if Eiffel isn't to your taste (though it might be worth having a second look at Eiffel [kuro5hin.org]).

    In other words, yes if you want software with lower defect rates you should expect developers to update their specification (and documentation) of how they intend code to be used and to work at the same time that they update the actual code. If you want to specify working code by having it pass a separately written unit test, fine; but that's not the only way, nor is it always the most efficient.
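
    To make the "specification lives in the code" idea concrete without switching languages, here is a rough contract-style sketch in plain Python, with assertions standing in for executable pre- and postconditions. The function is invented for illustration, and real Design by Contract tooling (Eiffel, JML) does considerably more, such as inheriting contracts and driving automated test generation:

        def dedupe_sorted(items):
            """Remove duplicates from a sorted list, preserving order.

            The contract sits next to the code, so anyone changing the
            behavior must update the specification in the same edit.
            """
            # Precondition: what callers must guarantee.
            assert all(items[i] <= items[i + 1]
                       for i in range(len(items) - 1)), "input must be sorted"

            result = []
            for x in items:
                if not result or result[-1] != x:
                    result.append(x)

            # Postconditions: what the function guarantees in return.
            assert all(result[i] < result[i + 1]
                       for i in range(len(result) - 1))
            assert set(result) == set(items)
            return result

        print(dedupe_sorted([1, 1, 2, 3, 3]))  # -> [1, 2, 3]
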
  • by chromatic ( 9471 ) on Wednesday November 15, 2006 @03:21AM (#16849260) Homepage
    Having the developers create them themselves seems to be a waste of time.

    In my experience, any developer worthy of the title "developer" already tests his code, though perhaps not in an automated fashion. I've seen plenty of developers (and I've done this myself) use debug statements to build up a function or method in pieces, checking the output manually each time.

    Actual test-driven development (not after-the-fact writing a few half-hearted test cases) is a refinement of that process. It has three benefits. First, it captures those manual tests in a permanent format that I can run any time I want. Second, it gives immediate feedback as to the state of the code and allows me to work in much smaller increments of time. Third, it changes the way I approach code and low-level design. My code tends to be simpler and shorter and more cohesive and less coupled (and much, much easier to test) this way.

    Having a poor developer create dozens of lousy tests does no one any good. However, my experience with doing TDD myself (and teaching other people how to do it and writing libraries to help them do it) makes me think that any decent developer who tries to do it seriously and with discipline will produce better code.
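
    As one concrete way to "capture those manual tests in a permanent format": Python's doctest module turns exactly that print-and-inspect loop into rerunnable tests. The function here is invented for illustration:

        def rle(s):
            """Run-length encode a string into (char, count) pairs.

            The examples below started life as manual print-and-check
            debugging; doctest keeps them as permanent, rerunnable tests.

            >>> rle('')
            []
            >>> rle('aaab')
            [('a', 3), ('b', 1)]
            """
            out = []
            for ch in s:
                if out and out[-1][0] == ch:
                    out[-1] = (ch, out[-1][1] + 1)
                else:
                    out.append((ch, 1))
            return out

        if __name__ == '__main__':
            import doctest
            doctest.testmod()  # runs every >>> example in the docstrings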

  • by mrjb ( 547783 ) on Wednesday November 15, 2006 @04:50AM (#16849596)
    The main problem I see with adopting new practices is that project management relies on discipline as a replacement for motivation. This Does Not Work. Discipline alone is not enough; that is why, for instance, source control systems were invented.

    At some point our company started 'crosschecking', which essentially means all developers have their code tested by another developer, mostly before things get into the nightly build. (Other quality assurance practices are also already in place.)

    The result was twofold. First of all, incorrect code was flagged the same day, or even several times the same day, which saved a huge amount of time in detecting problems. Second, because each developer knows their code is going to be reviewed by another developer, they try harder to avoid the bugs in the first place.

    The first release after we introduced this practice, we noticed that the bugs reported by the client mostly had to do with previous releases. The bug rate has steadily decreased since, and we're not constantly patching holes anymore. We catch most bugs before we deliver the product to the client, so the client is happier too. We spend more time doing 'fun' stuff now, rather than bug hunting. And it is quite an ego boost to ship a product and not hear of any defects. If we stopped crosschecking, we would feel we were shipping an inferior product.

    Make sure the practices you want adopted are insanely useful, save time, and make the work more fun, and developers will adopt them without complaining.
  • by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Wednesday November 15, 2006 @05:55AM (#16849856)
    fifteen months for a decent mid-sized project.

    I envy your team. You have a great PM who is able to pad a schedule quite ably. (Seriously, over a year? Where do you work?)

    if they're such crummy developers that they are truly unable to determine which lines of code a given unit test exercises, there are these nifty new tools to do code coverage analysis for them.

    The point was that developers have a different view of their code than objective outsiders. They will not create tests as good as those from someone not intimately associated with the code.

    No, this is the same as reminding the carpenter to "measure twice, cut once". Most developers don't even bother to measure once; they just saw off a chunk by eye and ask somebody else if it's the right length.

    I don't think you'll find anyone is arguing that developers should be cowboys and just spit out code all day long. I'm certainly not advocating such a stupid thing.

    What I am saying is that development time is finite and asking the developers to write unit tests is a waste of time since 1) it takes up actual time and 2) they have a vested interest in successful unit tests.

    UNIT TESTING HAS NOTHING TO DO WITH QA

    And if you had read past the first sentence you would have understood that I am not saying that QA should be writing unit tests but that the development team should have its own set of unit test developers who work in parallel with the code developers towards the goals of the specification. QA (the separate testing team) has nothing to do with what I am talking about.

    Even if the QA at most shops weren't little better than monkeys, wasting your best testers on tracking down simple "Oops, oh yeah I forgot that case" bugs instead of actually doing what they do best is an idiotic waste of money.

    I am very disappointed to see this attitude. While I do agree that many QA departments are staffed badly with people who are ill-qualified, I find the attitude that they are "little better than monkeys" atrociously condescending. It is the persistence of this attitude that drives a wedge between dev and QA and makes many QA engineers view development as a step up rather than a step sideways. Some of the best engineers I have ever worked with have been QA engineers.

    Unit testing code is similar enough to testing that repurposing a handful of development-oriented QA engineers for unit test development would allow extra breathing room for the code developers and the increased code quality would see to it that the QA cycle went smoothly. Are you going to argue that better unit tests are going to be worse for the product cycle?

    Let the person who knows their code best (or one of his/her peers) double-check their own work.

    I am in agreement about code reviews and informal walkthroughs before checkin. When done well, they have been more effective than any other development technique at sniffing out coding mistakes in my experience.

    I do not agree that the person who wrote the code is the best person to judge the fitness of their own code, even if they take serious pains to double check their work well. It is because they have the code fresh in their mind that they cannot see the mistakes. It is because their ego is intimately tied to the code that they look for success rather than failure. An objective outsider (whether it be a peer developer or someone else) will be far more qualified to judge the fitness of the code than the coder himself.

    We aren't talking about sawing pieces of wood. We're talking about the design and implementation of complicated machines that require intense focus and attention to detail. That work forces the developer into a certain pattern of thinking, potentially blinding him to his own errors, and the difficulty and rewards of success bind his ego to the code such that he cannot judge its fitness objectively.
