
Why New Systems Fail

bfwebster writes "Over the last forty years, a small set of classic works on risks and pitfalls in software engineering and IT project management have been published and remained in print. The authors are well known, or should be: Gerry Weinberg, Fred Brooks, Ed Yourdon, Capers Jones, Stephen Flowers, Robert Glass, Tom DeMarco, Tim Lister, Steve McConnell, Steve Maguire, and so on. These books all focus largely on projects where actual software development is going on. A new book by Phil Simon, Why New Systems Fail, is likewise a risks-and-pitfalls book, but Simon covers largely uncharted territory for the genre: selection and implementation of enterprise-level, customizable, commercial off-the-shelf (COTS) software packages, such as accounting systems, human resource systems, and enterprise resource planning (ERP) software. As such, Simon's book is not only useful, it is important." Read on for the rest of Bruce's thoughts on this book.
Why New Systems Fail: Theory and Practice Collide
author Phil Simon
pages 251
publisher AuthorHouse, 2009
rating 8/10
reviewer Bruce F. Webster
ISBN 978-1-4389-4424-1
summary Risks and pitfalls of enterprise COTS projects
Phil Simon has written a long-needed and long-overdue book. Most risks-and-pitfalls books in the IT category focus primarily on projects where actual software engineering is the principal activity. However, many of the large, expensive and often spectacular IT project failures over the past 20 years have little to do with software design and development. Instead, they involve a given organization selecting and implementing — or trying to implement — a commercial off-the-shelf (COTS) software package to replace existing legacy systems, either homegrown or commercial. The reasons for such a move can be many: standardizing IT and data management across the enterprise, seeking new functionality, retiring systems that are no longer supported or supportable, and so on. By so doing, the firm (usually rightly) expects to avoid the risks and expense of from-scratch custom software development. However, the firm (usually wrongly) thinks that such a project comprises nothing more than installing the software, training some users, converting some data, and turning a switch. A quick search on the terms "ERP" and "lawsuit" shows just how mistaken that idea can be.

Simon's book is far more informative and instructive than a Google search and should be required reading for all CIOs, IT project managers, and involved business managers prior to starting any such enterprise COTS project. He covers the complete lifecycle of such projects, starting with the typical expectations of upper management ("Fantasy World") and following it through system selection, implementation, and production, along with a final section on how to maximize the chances of success. Along the way, he uses several real-world case studies (with names changed), as well as a few hypothetical ones, to demonstrate just how such efforts go wrong.

What Simon writes is spot on. For roughly 15 years now, my primary professional focus has been on why IT projects fail. I do that both as a consultant (brought in to review troubled projects to get them back on track) and as a consulting or testifying expert (brought in to review troubled or failed projects now in litigation). I have reviewed hundreds of thousands of pages of project documentation and communication; I have likewise traced or reconstructed project histories for many major IT projects, including enterprise COTS projects. It's clear that Simon knows exactly what he's talking about and knows where all the bodies are buried.

The book itself is very readable. Simon's tone is conversational and a bit humorous; he occasionally dives into technicalities that would be lost on upper management, but always comes back to basic principles. The real-world and hypothetical case studies will have those of us who have been on such projects nodding our heads even as we occasionally wince or shudder. His coverage is exhaustive (and at times a bit exhausting), but his goal appears to be to give those managing and overseeing such projects the information they need to navigate the shoals. He goes into detail about COTS pitfalls such as project estimation, vendor selection, use of consultants, group responsibility, integration with legacy systems, data conversion, and report generation.

The first section of the book covers how and why firms decide to initiate a major COTS project. Besides the "Fantasy World" section that compares management expectations to what really happens, the book also covers why firms hold onto legacy systems, why they buy new (replacement) systems, and how they can (or should) make the decision among building a custom system, buying a COTS system, and "renting" enterprise software from web-based software-as-a-service (SaaS) vendors such as Workday and Salesforce.

The second section covers COTS system selection. The book divides current ERP and COTS vendors into four different tiers based on company size and use (e.g., SAP, Oracle and Baan are all Tier 1) and warns of the, ah, enthusiasm of vendor salespersons. (Old-but-still-timely joke: What's the difference between a used car salesman and a software salesman? The used car salesman knows how to use his own product and knows when he's lying.) The book then raises up front an issue often left (by customers) until much later: how will business processes change as a result of the COTS system we're acquiring? It then talks about selecting, if necessary, a consulting firm to help with the installation and project management.

The third section covers the actual COTS implementation process, including the overall strategy, roles and responsibilities, providing the necessary environments, data migration, testing, reports, and documentation. This section is a bit exhausting at times, but it is critical for exactly that reason: far too many firms launch into a major COTS acquisition without fully realizing just what it will take to get the system into production.

The fourth section briefly deals with life after implementation. In theory, one of the reasons a firm buys a COTS system is to avoid doing its own maintenance and support; the reality is that the firm often doesn't like paying those large annual maintenance fees and instead goes off on its own path, which is seldom a good idea.

The fifth and final section talks about how to maximize the chance of success in a large COTS implementation. This section builds upon the rest of the book, which has provided suggestions along the way. In particular, it talks about how to deal with a troubled project mid-course in order to get it back on track.

Throughout the book, Simon puts a significant focus on human factors in project success and failure. He identifies issues such as internal politics, kingdom-building, reluctance to learn new systems, internal project sabotage, end-user resistance, and staff allocation. Simon divides firm personnel assigned to work on the COTS project into four groups — willing and able (WAA); willing but not able (WBNA); not willing but able (NWBA); and neither willing nor able (NWNA) — and talks about how each group helps or hurts. Similarly, he identifies four dangerous types of project managers: the Yes Man, the Micromanager, the Procrastinator, and the Know-It-All. Again, those of us who have been on major IT projects, particularly those involving COTS implementations, will recognize both sets of categories and the risks they entail.

While Simon is himself a consultant, he is also quite frank about the role consultancies can play in COTS project failures. In particular, he notes the tendency of consulting firms to underestimate project duration and cost in order to win business, as well as their frequent unwillingness to point out risks and pitfalls to the client, particularly if they represent something the client wants to do.

My few complaints with Why New Systems Fail are mostly production-related. Simon self-published the book; as such, the book's internal layout and graphic design leave something to be desired. Likewise, his organization and prose could use a bit of editing in spots; he has a propensity for throwing in terms and abbreviations without clarification, and the technical level can vary within a given chapter. Almost all of his footnote references come from Wikipedia; his bibliography is small (just four books) and cites only Brooks from the cadre of authors listed above. None of this makes the book's content any less important or useful, but some of the very people who should be reading this book might well skip or skim it for those reasons. My understanding is that Simon is working on finding a publisher for the book, which will likely solve all those problems.

In the meantime, if you or someone you love is about to embark on an enterprise-level COTS project, get this book; I've added it to my own short-list of recommended readings in software engineering.

You can purchase Why New Systems Fail: Theory and Practice Collide from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

Why New Systems Fail

  • the users don't understand what I write!
  • by religious freak ( 1005821 ) on Wednesday July 15, 2009 @02:52PM (#28706607)
    I was discussing with a friend how software projects are probably the most difficult to run and predict, especially with very large projects. He disagreed and said that all large projects are difficult - when you're building a bridge a multitude of things can and do go wrong.

    That's obviously true, but how many bridges never get finished compared to the number of software projects that never get finished? It seems project management is very difficult for IT related stuff. So am I just being IT centric in thinking our projects are more difficult than most?
    • Simple. If you build a bridge, and it falls over, you have just wasted a lot of money in materials and labor that you have to reinvest. Plus you have to dispose of the broken bridge. It is worth it to have a reasonable schedule and proper design.

      If your software fails, you just fix the bug and recompile. It simply makes economic sense to rush it out the door.

      • by Splab ( 574204 ) on Wednesday July 15, 2009 @03:15PM (#28706927)

        Not quite, you could do the same with a bridge, if parts of it collapse you can rebuild it.

        The reason why software is so buggy is that no one is being held responsible. Software is the only product out there where catastrophic failures are accepted and people happily sit around waiting for new stuff.

        If a bridge fails, some contractor is going to lose a lot of money, if someone is killed in the process they will be out of money (most likely). If software fails people just get a new update.

        • Re: (Score:3, Interesting)

          by Splab ( 574204 )

          And just to add to my own post (give us a friggin edit, Slashdot!), the EU is currently working on adding the same form of guarantee for software that hardware manufacturers have to supply. This means any bug found within 6 months of purchase has to be fixed within a reasonable time (rule of thumb, 4 weeks); if not, the customer has the option of a full refund.

          This will probably mean:
          1. Software is going to be a heck of a lot simpler, most stuff I've worked on where things didn't go according to plan is the scope of th

          • I wonder if they'll do the same thing for laws. If it's bad, it has to be fixed or repealed.

            As for what it would mean, I doubt you'll see simple software, or subcontracting.

            But you will likely see larger legal departments; they have to account for things.

            Huge EULAs that state the software is guaranteed if you click X, Y, Z, in that order any other use is not permitted and may cause damage. Similar to the safety signs on curling irons "For External Use Only" which don't make it any safer.

        • by NateTech ( 50881 )

          No one is "happy" about it. They just have no other option.

      • Except there are plenty of software projects where the original project is such a huge mess, due to poor planning, impossible deadlines and all the regular management issues, that it ends up taking less time to rebuild the whole thing than it would to "just fix the bug and recompile". I've been involved in a couple of them myself.

        What's disturbing is how a lot of people in management don't seem to realise what a waste of money this is, if the developers say it's gonna take three months to build it's probably not a

        • by dave562 ( 969951 )
          Not only should you not shortchange the developers, you should probably give them even more time. If they say they need four months, give them six. That leaves some leeway for the frequent delays that most rookie project managers and developers fail to account for. If you come in ahead of schedule then you look good. When I was a consultant, we would always overbid our hours. Given the choice between coming in under budget, or having to go to the client with delays and ask for more money, the choice
      • Re: (Score:3, Insightful)

        by Rastl ( 955935 )

        People are taken out of the equation when it comes to bridges. You don't have to teach people how to use *your* bridge since the use of a bridge is the same regardless of which bridge they use.

        People are the main reason why I see projects fail. Incomplete/incorrect requirements, artificial deadlines, glory seekers, scope creep, poor training, process change, resistance to process change, etc. These are all variables that don't have to be considered when building a physical structure.

        And unfortunately the

    • by MightyYar ( 622222 ) on Wednesday July 15, 2009 @03:07PM (#28706825)

      how many bridges never get finished compared to the number of software projects that never get finished?

      All bridges are essentially "open source". Plenty of bridges have failed, but the failures are right out there in the open, ready to be studied by anyone who wants to build another bridge.

      In contrast, when a company's software project fails, the only people who learn from it are the ones involved with the project.

      • by hemp ( 36945 )

        In contrast, when a company's software project fails, the only people who learn from it are the ones involved with the project.

        And a lot of times, the ones involved don't learn from it either, but merely continue on their merry way selling their consulting services to the next client.

      • In contrast, when a company's software project fails, the only people who learn from it are some of the ones involved with the project.

        You make an incorrect assumption, namely that all those involved in the project will learn anything. Least of all the guy who decided to start the (possibly completely doomed) project with insufficient time available for things to go wrong, who has managed to successfully blame the failures completely on his most junior subordinate.

      • by davek ( 18465 )

        All bridges are essentially "open source". Plenty of bridges have failed, but the failures are right out there in the open, ready to be studied by anyone who wants to build another bridge.

        I'm afraid I can't understand your analogy. Perhaps if it were in terms of a car design failure...

        • I'm afraid I can't understand your analogy. Perhaps if it were in terms of a car design failure...

          There's a good one, but that would lead to a discussion of presidential politics :)

          • by genner ( 694963 )

            I'm afraid I can't understand your analogy. Perhaps if it were in terms of a car design failure...

            There's a good one, but that would lead to a discussion of presidential politics :)

            Change! Hope!

      • by guruevi ( 827432 )

        There are plenty of projects out there that get killed every day. Even bridge projects get killed every day. It's just that you don't notice because it hasn't gone to the compile stage (the building of the bridge) - most bridges get stuck in the political, feasibility or 'is this really necessary' stage. By then a bridge-building project has already consumed thousands if not millions in contractors, engineers, lobbyists and hookers and then in the backlash vendors will sue because somebody already promised t

    • Re: (Score:3, Insightful)

      by Dr_Barnowl ( 709838 )

      Indeed. With a bridge, the requirements are simple and obvious - you want a structure that permits transit from one side of some geographical divide to the other. All the detail is just detail, the end requirement is invariable.

      With a software project, the requirements are often poorly understood or even unknown - a nebulous sense that things could be better if only we had better software. Often the software itself will reveal the real requirements.

      • All the detail is just detail, the end requirement is invariable.

        While the functional requirements for bridges tend to be invariant, the "ility" requirements often go [msn.com] wrong [wikipedia.org]. And those performance requirements are the places where software projects often go wrong, too.

    • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday July 15, 2009 @03:20PM (#28707003) Journal
      Software certainly does have the disadvantage of being extremely complex (both internally and, perhaps ultimately more serious, in its interfaces to other software and systems). What gives it an extra edge of danger, I suspect, versus some other complex projects is the difficulty (particularly for those of limited technical understanding who happen to be involved) of intuitively grasping how the project is going.

      You wouldn't want a non-engineer trying to micromanage bridge construction; but anybody can stick his head out the window and see how far across the water the bridge is today. The incipient cracks in the foundation might well be missed (as they often are), and the project can still easily go over time, or over budget; but it is hard(er) for fundamental delusions about progress to crop up.

      A layman looking at a complex piece of software doesn't have nearly the same chances. A "nearly there" system with a few serious but non-systemic bugs looks like a broken unusable mess. A brittle demo being coaxed through by a guy who knows the system better than the back of his hand looks a lot like an intuitive interface. If your institutional structure or culture has any of the factors that encourage delusions, lying, yes-manning, or similar, the people who are supposed to have a grand vision of the project won't have the foggiest idea what is going on.
      • by Mr Z ( 6791 )

        Speaking of bridges going badly, the Michigan Department of Transportation issued a great report about the Zilwaukee Bridge. [michiganhighways.org] (Site isn't M-DOT, but it reproduces their report.) It had all sorts of problems during its storied construction, but it eventually was completed and stands tall and strong today. The report's conclusion summarizes:

        Completion of the new high level bridge at Zilwaukee will bring to a conclusion some 20 years of planning, designing and construction of one of the biggest and most

    • by Z34107 ( 925136 )

      My theory: If you build a bridge, and it falls over, you go to jail. Additionally, people die.

      I guess the kinds of project managers described ("the know-it-all, the micromanager") that evidently exist in IT would all be felons in the bridge-building world. But dammit Jim, IANACE (I Am Not a Civil Engineer), nor a project manager, nor an IT manager.

    • I think it has something to do with the fact that it's easier for bigshot investors/CEOs/bigwigs to wrap their head around physical problems brought up by engineers.

      If a structural engineer says "this cabling isn't strong enough, we need this other type and it will cost X more," that's a pretty concrete statement. And nobody wants to argue with a structural engineer.

      If a software engineer says "we should just buy X software package so I won't have to spend the next 2 weeks reinventing the wheel," a ba
    • by Chirs ( 87576 )

      While it's true that things can go wrong building a bridge, it's also true that the fundamental physics of bridge-building are fairly well understood. There are standard tables that are used to spec girder strength, fastener type, etc. I suspect that most bridges don't involve custom-making the metal alloys specifically for that bridge, or nonstandard concrete mixes.

      With software, for many fields it just isn't standardized. I suspect that there are many more original problems being solved in software tha

    • by radtea ( 464814 )

      I was discussing with a friend how software projects are probably the most difficult to run and predict, especially with very large projects. He disagreed and said that all large projects are difficult...

      Actually, all large projects are equally easy with regard to prediction. A larger project is even more amenable to a statistical estimation approach, because the workforce and circumstances are averaged over the larger size, creating a more homogeneous, stable mass.

      For any organization that has built more

    • Re: (Score:2, Interesting)

      Software projects fail more easily than their physical counterparts due to the brittleness of software programs. It is unlikely a bridge falls apart if it's missing a single screw, but for software a single line of bad code could cause the application to crash. Currently software has a very small tolerance for errors, which makes it very difficult to successfully complete large projects.
      • by NateTech ( 50881 )

        Horse-shit. "Brittle" software is badly designed and written. There ARE systems that aren't brittle out there. Building avionics software, for example, also has a very small tolerance for error, but the vast majority of aircraft fly just fine using that software. It's all about time, effort, money, and professionalism.

    • That's obviously true, but how many bridges never get finished compared to the number of software projects that never get finished?

      You tend to have to make an expensive investment in resources (securing real estate, preparing the site, etc.) to even begin building a bridge, which is less true of software projects; therefore, a bridge project that gets cancelled between when it starts and when it is complete is a lot more likely to be cancelled before any construction is done than a software project is.

    • Bridges are designed by engineers. Software is designed by people who CALL themselves engineers. There's a big difference.

      • Software is designed by people who CALL themselves engineers.

        That's good software. Some of the software I've seen shows no evidence of being designed at all.

    • We've been building bridges for thousands of years. We've been building software for a few decades.

      Nobody thinks you can turn a bridge into a hospital by moving a few spars. But changing a stock control program to do invoicing needs "just a bit of typing".

  • by Em Emalb ( 452530 ) <ememalb AT gmail DOT com> on Wednesday July 15, 2009 @02:53PM (#28706615) Homepage Journal

    Not trying to be a jerk (hah, stupid buttface!) but most new "systems" fail for one of 4 reasons:

    1) the decision maker(s) not understanding the actual requirements thereby causing a situation where they end up with a system that doesn't fit their needs

    2) the third party or in-house developers not understanding the actual requirements thereby causing a situation where the system they've created either doesn't work or doesn't work as it should

    3) the new system is too complicated/buggy/worthless and the end users of the system refuse to use it and/or complain constantly (I HATE CHANGE!)

    4) all of the above.

    There are more, but those are the big 3.

    • by MightyMartian ( 840721 ) on Wednesday July 15, 2009 @02:59PM (#28706703) Journal

      I've had two projects fail ignominiously. One was my fault for not getting much more concrete requirements, and getting caught up in the "oh, and can you add this to it too?" The second was because I was basically lied to by a supplier who claimed their own product could do what it ultimately could not, and since it was a core feature of the system we were putting in place, the whole thing died, but not before consuming heap loads of money.

      I learned a few things. The first is to get exact specifications. Let there be no wiggle room, no "well I thought it would do that" crapola. Extras can be added on once the core system is demonstrated to work, not before. Have a design philosophy and stick to it. As to lying suppliers, well, it's a lot easier to assess these things nowadays than when the one project failed in the mid-1990s. Still, I always keep in the back of my mind "if software/library/whatever doesn't work, is there something that can?"

      • by dave562 ( 969951 ) on Wednesday July 15, 2009 @03:21PM (#28707013) Journal
        As to lying suppliers, well, it's a lot easier to assess these things nowadays than when the one project failed in the mid-1990s.

        What is different now from the 1990s? I've been involved with one software project that failed because the vendor promised functionality that they couldn't deliver. The client spent a significant amount of money on the project. Once it came out that the software couldn't do what the vendor promised it could do, the client sued the vendor and recouped all of their money plus legal fees. The client was able to sue because the vendor put it in writing.

        Getting things in writing from the vendor is of paramount importance. Doing a needs analysis with the client before shopping around for software vendors is key. With a needs analysis in hand, you can present that to the vendors and ask them point blank whether or not their software fits the needs. If they say it does, make them sign a contract to back up their claims. Then they either deliver what they promised or they get sued.

        • What is different now from the 1990s?

          GP's got ten years more experience?

          Also, if you consider the early 90s, intarwebs.

      • by rgviza ( 1303161 ) on Wednesday July 15, 2009 @04:33PM (#28707921)

        > I learned a few things. The first is to get exact specifications. Let there be no wiggle room, no "well I thought it would do that" crapola.

        This is not realistic. You _can't_ get the requirements right up front because the users don't even know what they want until they have a system that doesn't have it. They think they told you what they want, but they didn't because they don't know themselves. A more realistic approach is to get the best requirements you can, and build enough time into the project to handle 1.5-2 years worth of scope creep because that's what's going to happen with any huge system.

        If you try to hardline your users by forcing them into a corner with rigid up-front requirements that they cannot possibly help you formulate, they'll simply go outside the company and work with someone who knows how to run a project better and you'll get laid off. (Refer to Linus Torvalds' rant about specs to see why specs and requirements done this way are useless, except as a starting point.)

        If you are prepared for scope creep, and frame the first release as an alpha, you will succeed. I've been doing this for 20 years and I've seen the approach you are talking about fail over and over, even with PMs who have 30 years' experience. They knew better but corporate policy forced them to operate this way. Inevitably the requirements were hopelessly incomplete and the users were pissed off when they had to sign off the project as complete because of what they agreed to, and in the end, the product did not meet their needs. The whole idea is to give the users the product they need. So even if you succeed in beating them on paper, and they are forced to sign off complete, you've failed.

        Know what happens when you do this to your users? They hire contractors, who will be more flexible and give them what they want, and fire you. You are better off with a "Look, this is a big system and it's going to take a while to get it right. Let's figure out what you think you need now, we'll build it, and use that as a starting point to flesh out your system."

        XP for the win for corporate development, Waterfall = FAIL. Waterfall can only succeed if you are a software company building a boxed static product produced by someone that knows what they want.

        At the end of the day a large corporate software project will take 10x longer than you think it will. I've never seen one that didn't. I've seen plenty that failed. Plan accordingly.

        -Viz

        • Re: (Score:2, Insightful)

          by Anonymous Coward

          I think you need to take it one step up the abstraction chain. First, find out what the goals of the users are: what real-world problem are they trying to solve? Forget traditional "requirements" and such, which just leads to discussion about the "system" in the minds of the user and the "system" in the minds of IT. Which, by the way, are greatly different. Once you first figure out what they are trying to accomplish well above requirements, only THEN can you start to figure out the requirements to meet the

        • I learned a few things. The first is to get exact specifications.
          ==========
          This is not realistic. You _can't_ get the requirements right up front because the users don't even know what they want until they have a system that doesn't have it. They think they told you what they want, but they didn't because they don't know themselves.

          I have to agree with the parent. I've had an opportunity to watch the post-mortems on several large, failed software procurements by my state's government. My opinion is tha

        • I think the problem there comes in that a huge proportion of corporate software is contracted. Your method, which I agree is the proper way to build a successful system, simply doesn't work when the programmers are required by a quote-to-order system to do as little work as possible and may be paid by the hour for labor, and managers are required to keep costs as low as possible.

          Yes, this means software contracting generally designs your system for failure. You absolutely must have in-house capabilities f

        • by 3ryon ( 415000 )

          I know that this thread is too old for anyone but the author of the parent to read my comment....so this is for you. Having just come off a 2 year project I can say that your comments are the most insightful that I've read in years. Technically the project is only 85% done, but the PM has closed the project so that the deadline is met...all the remaining work is 'phase two'. Of the classic project triangle (Cost, Time, Quality) everything was deemed flexible except Time and Cost.

          In defense of our BAs they

          • by rdebath ( 884132 )

            Your "BAs" asked the wrong question, it should have been "What do you need the solution to do". Nobody needs unicorn farts.

            You then lock down that fixed specification for the fixed price.

            When the specification changes so does the price.

            Or you can work time & materials, but that needs trust.

            • Re: (Score:3, Insightful)

              by Hognoxious ( 631665 )
              Unfortunately end users do not know the difference between needs, wants, and wishes. And when they go crying "it doesn't work!" and disturb the IT director's afternoon nap he doesn't care; you're mean and horrid and not customer focussed.
        • Re: (Score:3, Insightful)

          by johannesg ( 664142 )

          You make a good point, but it is totally unrealistic. Let me demonstrate where it will fail:

          A more realistic approach is to get the best requirements you can, and build enough time into the project to handle 1.5-2 years worth of scope creep because that's what's going to happen with any huge system.

          If you overbid by 1.5-2 years, you are sure to be outbid by a competitor who will stick to the "rigid requirements" method. So you will never receive a contract in the first place.

          If you try to hardline your users by forcing them into a corner with rigid up front requirements that they cannot possibly help you formulate, they'll simply go outside the company and work with someone who knows how to run a project better and you'll get laid off.

          Yes, but if you let the schedule slip those very same users will suddenly remember that you have a contract and force you into a corner with rigid contract stipulations about deadlines. If you want to avoid that, you'd better act first.

          Don'

        • by NateTech ( 50881 )

          And 10x the cost. Meaning that most management still doesn't REALLY know what the software projects they're asking for really cost. They're long gone by the time it's truly "finished", usually, too. Thus... if technology is supposed to make a company more competitive by spending money today on the tech that will make X work Y amount better than the previous system did... they can't really put values in for Y or compare that value to the true costs. Businesspeople should know better. SOMETIM

      • by NateTech ( 50881 )

        Your comment hides a KEY point that most software "engineers" miss: "core system" shows that you understand the concept of critical/must-have functionality and "nice to have" functionality. Many people calling themselves "engineer" think you can build/test/deploy it all at once...

    • by Coz ( 178857 )

      YOU may not need a book to know this, but there are intelligent-in-their-area bean-counters who get sold on these things at major companies every year. THEY need this book, and as responsible techies, it's our job to make sure they have it. Remember, if it's in a book, it's not just OUR opinion - it's Official :)

    • 2.1) Bad communication of requirements with third-party coders. Happening right now with one project I'm working on (aka "Why didn't you implement X? It's logical that it must be that way!")
    • Re: (Score:2, Funny)

      by lazyforker ( 957705 )
      When I read your post I was reminded of this old old old picture. http://www.cubiccompass.com/blogs/main/content/binary/TireSwingCartoon.png [cubiccompass.com] I first saw it on paper so I had to Google around for an example.
    • by Matje ( 183300 )
      So what you're saying is that developers should become better at:

      - eliciting requirements
      - designing simple, usable software

      The prevailing mood on /. seems to be that users should become better at expressing their requirements. I find that baffling; it's like walking into a restaurant and having the chef come up to you to ask how many teaspoons of cumin should go into your dish. Users don't have requirements - the whole concept was made up by developers in response to complaints that they were develop
  • Buying software that takes the user 4 times longer to use, requires 4 times more key presses, and needs a 4-times-faster processor to do the same task as the "old" software, just because it enables some sodding PHB to get a report in one click.
  • by Anonymous Coward

    Let's be real here for a moment...

    People who make a lot more than I do look at a list of features. They don't tend to ask peons like me if these features are implemented in a reasonable way.

    I'm not given the opportunity to warn of an impending clusterfuck until it's too late. By then it's not just my problem, it's everybody's.

    Of course by the time it gets that far it is too late to turn around.

    By the way, I heard second hand of very senior people at a company being fired because of an SAP implementation gone a

    • by PhxBlue ( 562201 )

      Me - I cram good websites into shitty content management systems. Generally I could personally develop most of the features that are in these CMSes if, instead of dicking around with them, I was just writing code.

      So you work with the Air Force Portal, then? *Rimshot!*

  • Clients are over expecting.
    Salespersons are over promising.
    Developers are over outsourcing.

  • Tolstoy's version (Score:5, Insightful)

    by T.E.D. ( 34228 ) on Wednesday July 15, 2009 @03:20PM (#28706997)

    People have written oodles of books on this subject, because there are oodles of different ways to screw up a project.

    The best insight on this subject comes from Tolstoy, not Brooks. He was talking about families being functional, not software, but the principle is the same.

    All happy families are alike; every unhappy family is unhappy in its own way.

    A far better method of approaching this issue is to study projects that don't fail, not ones that do.

    • by ljw1004 ( 764174 )

      Why do you think the principle is the same?

      My experience is that it's the opposite. Project failures always come from the same sources (bad expectations and bad specifications and bad process control).

      But project success comes about for many different reasons, most of them unexpected -- and often it's enough for just a couple of these "success wildcards" to arise in order for the project to succeed. Sometimes a star programmer pulled it off, sometimes a star manager, sometimes the solution just fell out instantly

      • The original quote was overly simplistic. The causes of success and failure are both diverse, in families as much as in software.

        • The tricky bit is finding out which factors really contributed and which were coincidental. Look, two projects in Minnesota succeeded - pack your bags, everybody!
    • All happy families are alike; every unhappy family is unhappy in its own way.

      That's interesting. I think just yesterday I said exactly the opposite.

      Everyone I know who's furiously pursued the American married-with-2-kids nuclear family ideal is miserable. Everyone I know who has some alternative-y lifestyle (without marriage, kids, lucrative job, white picket fence, or some combination) seems a lot happier.

      • Re: (Score:3, Insightful)

        by genner ( 694963 )

        All happy families are alike; every unhappy family is unhappy in its own way.

        That's interesting. I think just yesterday I said exactly the opposite.

        Everyone I know who's furiously pursued the American married-with-2-kids nuclear family ideal is miserable. Everyone I know who has some alternative-y lifestyle (without marriage, kids, lucrative job, white picket fence, or some combination) seems a lot happier.

        In reality everyone is miserable.

  • I have seen several COTS projects break up in flight, once as a participant, three times as an observer, and a big part seems to be management not wanting to change their business process to match that of the COTS package. It seems simple to me. Either do things the way the purchased software wants them done, or write your own software to automate your existing processes. Unfortunately, the people that make those kind of decisions will not be the ones that read this book.
    • People often fail to distinguish between "can't do X" and "does X but not in the same way as the old system".

      In consequence the make-or-buy decision is "solved" by doing both.

  • by filesiteguy ( 695431 ) <perfectreign@gmail.com> on Wednesday July 15, 2009 @03:31PM (#28707131)
    ...or at least one of them. I haven't read the book yet - but it is now listed as a to-do in my list of to-do items taking up space on my blackberry.

    In any case, I'd be curious what the answer is. In my short software development experience - only since '93 have I been doing enterprise-level development - I see one factor being the overwhelming key to failure.

    Communication.

    When you have analysts and developers (who are notorious for not being communicative in the first place) trying to interface with executives and managers (who are trying to CYA) then you have a perfect storm brewing.

    Add to it the fact that COTS solutions rarely actually fit the needs of everyone, and you subscribe to failure. A classic example that I just saw this week is with the California State Child Welfare lien processing system, written by Accenture. I asked for a minor change in the file layout some months ago. Only this week did I hear that I'd need a change request and that they'd get back to me in a few months. :P

    By contrast, I've written my own in-house custom software systems for the enterprise. (One system in production has well over 500 concurrent users on any of fifteen different modules.) When the customer(s) request a change, it can - depending on complexity - be implemented and tested in a matter of days. Of course, harping on the communication theme, I'm in constant communication with the customer, the end-user (if different), my developers, and my analysts. (I'm a PHB in the middle.) I make sure that we under-promise and over-deliver whenever possible.
  • A good system is one that evolves constantly from humble beginnings with smart
    people making and guiding decisions at every step in its evolution.

    You start with a good idea, implement it. Add more good ideas, discard the bad ones.

    If your system is useful, and supersedes the older slow/bloated one, then it "becomes" the "new system".

    • by kigrwik ( 462930 )

      A good system is indeed one that can evolve and adapt to shifting requirements.

      A good product/project manager is one that can say "NO" and prevent feature creep. But you need some backbone to say no to the boss/client.

  • In my experience, the main reason software projects fail is a failure to collect adequate requirements. The tendency to jump in and code something is extremely great, but that is the absolute worst thing to do. For projects I manage, it is about 2/3 requirements gathering to 1/3 or less code writing. A lot of people hate gathering requirements, figuring out how people do their job / what people actually need, and following up with minor changes that are extremely important in the different in a system th
  • by petes_PoV ( 912422 ) on Wednesday July 15, 2009 @03:43PM (#28707255)
    "But we don't have time for a pilot"

    Also heard as "Why, don't you have confidence in your project?"

    Putting aside the sheer commonsense approach of not giving a Porsche to a newly passed driver, most projects are run in a state of panic. Panic that the timetable is slipping (although this is almost always due to poor time-estimating, it seems to get presented as being due to slothful or untalented techies), Panic that it's costing too much - again due to poor cost estimation, rather than overspending. Panic about bugs, Panic about training (ha!). Panic about compatibility with other systems. Panic about all the little patches, workarounds, working practices and hacks that have developed in the old system - that everyone knows about, but have never been documented.

    All these could have been identified, and most of them fixed, just by running a small-scale prototype in parallel to the existing system. However by the time the project is halfway through, most of the directors are firmly engaged in either "buyer's remorse" or utter denial. They become deaf to bad news and generally take full aim at the messenger, while leaving the culprits of all the problems unscathed. This is usually because all the biggest mistakes are made right at the start - in the design stages. However, these have been completed and signed off, so by definition cannot be at fault. The blame gets transferred down the line, to the people who have their hands on it right at the time the deadline is due. It's the original smoking gun: "The project ran over time / budget today - you were working on it when that happened, therefore you must be to blame". It's simplistic, always wrong and always starts off the finger-pointing part of the process. You can't get away from it.

    The biggest problem I see, though, is "seagull" consultants. They fly in, make a lot of noise, crap over everything and fly off. The trouble usually only surfaces once they've disappeared.

  • by rev_sanchez ( 691443 ) on Wednesday July 15, 2009 @03:43PM (#28707259)
    Communication - Ill-defined or changing specifications and poor documentation make development and testing very difficult.
    Technical - Large systems tend to be very complicated and it's difficult and expensive to make them fault tolerant and build in the sort of redundancy, validation, and security that make critical systems reliable.
    Leadership - Decision makers on the client and supplier side often don't know enough details about various parts of the project to really know what they want much less what they need.
    Organizational - Setting deadlines before defining the scope of the project, belligerent coworkers and other HR issues, uncooperative clients, cutting testing time to meet deadlines, and other general issues within the organizations can lead to death march development and other undesirable situations.
  • by n6kuy ( 172098 ) on Wednesday July 15, 2009 @04:17PM (#28707703)

    90% of Everything is Crap.

  • All projects can be described by:

    1. define the problem
    2. entertain solutions
    3. iterate

    Requirements are the interface between #1 and #2. Thus, one way or another all system failures are about requirements. Either the true project requirements were never discovered - or the customer was allowed to impose unnecessary and counterproductive pseudo-requirements - or the domain requirements weren't correctly elaborated into appropriate functional requirements - or the process for managing the requirements was top

  • Everyone knows when a project is doomed. However, no-one is willing to report it. The reason is that so many people have bonuses, contracts and reputations at stake that they will always hold out, right up to the last, in the hope that someone else will fold first. Once the first guy admits "there might be a slight problem", everyone else piles in on top of that. Typically blaming the fall-guy for all of their problems, shortcomings, missed deliveries and failures.

    The higher up an organisation you go, the

  • by russotto ( 537200 ) on Wednesday July 15, 2009 @05:38PM (#28708681) Journal

    Because why they fail is not all that interesting. A project specified mostly by people who don't know what the system is supposed to do, implemented by people who don't understand the business, replacing a legacy system containing within its labyrinthine bowels the combined knowledge of tens or hundreds of expert users past and present. What could possibly go wrong?

    Add on top of that a COTS requirement, so it's a matter of making the requirements fit the software's limitations (while still fitting the business), and you have an almost guaranteed recipe for failure. Particularly when the users _won't_ adapt.

  • Software is hard (Score:3, Insightful)

    by plopez ( 54068 ) on Wednesday July 15, 2009 @07:08PM (#28709889) Journal

    You can't see it, touch it, smell it, taste it, etc. Most of it is an intellectual abstraction that many programmers, not to mention the general population, aren't very good at.

    Doing software well requires being an expert in complex problem domains. The domains may require knowledge of complex financial, legal, engineering and manufacturing systems. It may require modeling human relationships. Or combinations of all of the above.

    Where people get things wrong is they do not take the time to understand their problem domain. They look for magic bullets. They need to spend time with their end users and understand the work processes. A little business process modeling goes a long way.

  • ERP systems fail (and this is by no means an exhaustive list, just what I have seen myself)
    ...because the sales pitch is to the board of directors and the implementation is at the user level.
    ...because I (financial analyst that I am) have a job to do. Your system helps or it doesn't, but I've got to get my job done.

    The common theme here is that ERP implementations lack humility and respect for the existing business and the people who actually run it. In pursuit of relatively nebulous "strategic" advanta

  • Another book, which isn't software focused but contains principles *absolutely* useful for software as well as other types of engineering, is Inviting Disaster [invitingdisaster.com]. It's an easy, highly entertaining read, with the bonus that you (ought to) learn something as well. I highly recommend it.

  • Software projects only fail because people agree and/or commit to features or schedules that are not thought out ahead of time. Anytime such a commitment is made, either that part of the project will fail, or some part that connects to that part will be forced to fail as a result of the developers being forced not to design the software properly. Software projects are supposed to be really expensive (just look at the early days for examples), but to cut costs, sales and non-technical people agree to nonsensical schedules and features. The clients won't sign onto a project unless it is cheap, so the managers/sales folks that agree to the MOST nonsensical stuff are initially seen as winners. Developers are then given the responsibility for delivering on a deal that they didn't design or agree with. Since every development outfit does this, none of the clients out there have any idea how complicated and expensive it would be to actually do things the right way.

  • Another interesting read: Software Project Failures [lsbu.ac.uk] by Sabina Seifert.
