Tools To Automate Checking of Software Design

heck writes "Scientific American describes some of the work to develop tools for examining the design of software for logical inconsistencies. The article is by one of the developers of Alloy, but it does reference other tools (open and closed source) in development. The author admits that widespread usage of the tools is years away, but the approach they are taking to design validation makes for interesting reading."
  • too hard. (Score:5, Insightful)

    by yagu ( 721525 ) * <{yayagu} {at} {gmail.com}> on Friday June 02, 2006 @05:34PM (#15458140) Journal

    Back in the mid-80s I attended a seminar in Atlanta about automated software engineering... and tools and workbenches that would take specifications and design parameters as input and crank out entire suites of software/applications. (Heck, there was even a new acronym for it; can't remember what it was, but it was a hot, hot, hot button for a few years.) We were pretty much warned our careers were over: automation was here to generate what we as professionals had studied years to create.

    It never happened. It never came close to happening. We are as far away today or further from tools that can generate applications transcendentally.

    I was skeptical then, I'm skeptical now. Tools like the ones described are useful, but they're not foolproof, and they hardly supplant the intuition and "art" that is programming.

    At best these tools are an adjunct to the software development process, not a replacement for common-sense testing, design, and code walkthroughs. I could construct many scenarios that are logically consistent but have no relationship to the desired end of the application, i.e., a logic consistency tool would not detect a problem. Any poorly designed system with these "new" tools applied will merely be a rigorously poor system.

    As for the prime example (in the Scientific American article) of the Denver International Airport baggage handling debacle, I doubt logic analysis tools would have had much impact on the success or failure of that effort. I knew people involved in the project, and "logic consistency" in their software was the least of their problems. (I would have loved to have been on a team to design and develop that system -- I think it was a cool concept, and ultimately VERY feasible... )

    I did get one benefit from the Atlanta Seminar -- I got a football signed by Fran Tarkenton (he was CEO of one of the startup companies fielding a software generating workbench).

    • CASE (Computer Aided/Assisted Software Engineering?)?
    • We are as far away today or further from tools that can generate applications transcendentally.

      True, but the fallout's been useful. Ever used Rational XDE? I see Sun has something similar in the latest Sun Studio 8 Enterprise, but I haven't used it. Basically, it's a round-trip UML modeler: lay out your class diagram, and XDE will generate the code for it. Update the generated skeleton with "real" code, and XDE will update your model from the changes. It's much nicer than trying to do things with Ratio
      • True, but the fallout's been useful. Ever used Rational XDE? [...] Basically, it's a round-trip UML modeler: lay out your class diagram, and XDE will generate the code for it. Update the generated skeleton with "real" code, and XDE will update your model from the changes. It's much nicer than trying to do things with Rational Rose -- then again, pulling our your toenails with rust pliers is nicer than trying to do some things in Rose [...]

        You have a realistic view on Rational Rose and that makes me want t

        • Well, even if you don't trust Rational, I worked a few years ago with a company that (over six years ago) had a tool that would do this round-tripping between UML and Java code. Worked like a charm, proved handy for very large systems we were designing where the object hierarchy and relations were helpful to see all laid out in one big sheet.

          Tools are getting smarter. We need to leverage them to help us write the code, check the code, and maintain the code. They are, however, just tools -- not panaceas.
    • Re:too hard. (Score:3, Insightful)

      by deuterium ( 96874 )

      We were pretty much warned our careers were over, automation was here to generate what we as professionals had studied years to create.

      I vaguely recall that fad as well. A lot of executives were jazzed about the idea, as they seemed to assume that software was rote and procedural anyway. They viewed programmers as simple translators, not realizing that program code doesn't just facilitate the resulting software; it is the software. Regardless of how many tools you devise to commoditize the basic functions

      • By the same token, code checkers can't know what your intentions are for every variable and class relationship. They can tell you if you generate invalid or null variables, or if a function is orphaned -- stuff that is strictly boolean. Beyond mistakes like that, you'll have to tell the checker in explicit terms what to look for, negating the benefit of the tool.

        If you state your intentions in a language that the code checker understands (preferably a language designed to be expressive when it comes to mak
    • CASE, code generators, etc. have their place and can be successful. I have created and used these types of tools and continue to enjoy analyzing where they are a good fit.

      OLTP business applications tend to be the best fit because there are many many forms/transactions to create and they tend to be relatively simple (mostly data validation, some calculations, database update, etc.). The work that is being automated is not creative problem solving but rather the application of a solution multiple times t
    • Yeah, that initiative was called "AD/Cycle", and Fran Tarkenton was the CEO of KnowledgeWare at the time.

      The thing is, on an abstract level, "Designing Code from Logically-Proven Constructs" (the title of a book by James Martin) makes total sense: If the base elements are logically proven, and if the complex elements are constructed of base elements, then the output will have no un-proven output. However, the design of the programs needs to be at a meta-level to the operation. (Thanks, Goedel!) I coul
    • I could construct many scenarios that logically would be consistent but have no relationship to the desired end of the application. . .

      Bingo! In fact, unless you are working at the very cutting edge of science and/or technology, going where no man has gone before, the really hard part of program design is figuring out just what the heck that desired end really is.

      The rest is just a programming exercise.

      Computers may be able to prove a program correct, logically consistent and even generate algorithms, but t
    • I got a football signed by Fran Tarkenton (he was CEO of one of the startup companies fielding a software generating workbench).

      "That's Incredible!"

      (Boy am I showing my age...)

      --
      It's Better to Have It and Not Need It ... than Need It and Not Have It.
    • Program testing can be used to show the presence of bugs, but never to show their absence! - Edsger Wybe Dijkstra

      It remains an art.
      • by Anonymous Coward
        Maybe he was against testing software after it was programmed, but he was FOR formal verification

        From the 1970s, Dijkstra's chief interest was formal verification. The prevailing opinion at the time was that one should first write a program and then provide a mathematical proof of correctness. Dijkstra objected that the resulting proofs are long and cumbersome, and that the proof gives no insight as to how the program was developed. An alternative method is program derivation, to "develop proof and program hand in hand". On

    • I was skeptical then, I'm skeptical now. Tools like the ones described are useful, but they're not foolproof, and they hardly supplant the intuition and "art" that is programming.

      I once studied "Z", a specification language that was supposed to eventually be able to feed automatic correctness checkers. I realized how bad the language was when one of the canonical examples required that the design of the code itself be contaminated by the constraints of the specification language.

      For some very narrowly defi
    • I think the key word here is "tools". People usually "use" tools, they are not replaced by them.

      If the only tool that you have is a hammer, then every problem becomes a nail.
    • Wow. You're thinking about CASE tools in general.... I recall some of the marketing nonsense going on then. In fact, in the early 90's a group from my workplace traveled to Atlanta to talk to a software vendor about their CASE tool. The CEO came by during lunch to meet the group.... his name was Fran Tarkenton! And, just to help you out, the company was "KnowledgeWare".

      Of course, we should remember that the original FORTRAN 0 manual stated that the use of the language would eliminate coding errors and the
  • This may be great for catching some bugs, but I think the majority of problems within software are not from "conflicting" instructions; they come from the program doing the wrong thing (i.e. not what we wanted) or simply performing an illegal operation in the process of getting the correct results. Neither of these cases is a logical inconsistency. Now maybe if we all programmed in Prolog this would be more useful ...
    • This is why you really want to state clearly the assumptions made on entry to every function (the preconditions) and the consequences that logically follow from applying what you think the function does to the data passed in, given those preconditions (the postconditions).

      You then want the automated tool to map out the paths through the code, substituting every variable for an equivalent expression consisting of the inputs that would go into producing that variable, for each path. (This works for loops, so
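The pre/postcondition idea above can be sketched with a small runtime-checked contract. Everything here is a hypothetical illustration (the `contract` decorator and condition lambdas are invented for this sketch, not any specific tool's API):

```python
# Minimal sketch of runtime pre/postcondition checking. The decorator and
# the condition lambdas are hypothetical, not a real tool's API.
def contract(pre, post):
    def wrap(fn):
        def inner(*args):
            assert pre(*args), "precondition violated"
            result = fn(*args)
            # the postcondition relates the result back to the original inputs
            assert post(result, *args), "postcondition violated"
            return result
        return inner
    return wrap

@contract(pre=lambda xs: len(xs) > 0,
          post=lambda r, xs: r in xs and all(r <= x for x in xs))
def minimum(xs):
    return min(xs)

print(minimum([3, 1, 2]))  # 1
```

A static tool would try to discharge these same conditions along every path at analysis time rather than at runtime, but the contract is the shared starting point.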

  • by cavemanf16 ( 303184 ) on Friday June 02, 2006 @05:43PM (#15458196) Homepage Journal
    For members of IEEE with a subscription to IEEE Computer Society's Transactions on Software Engineering, the last issue (April) has a very interesting article related to this stuff titled: Interactive Fault Localization Techniques in a Spreadsheet Environment [computer.org]. Basically, the article explains how they have worked to develop and test techniques that allow "end-user programmers" (people who create formulas in spreadsheets and such) to use automated fault localization testing tools to help debug their "applications" (spreadsheets) at runtime. Pretty interesting stuff that they found in their analysis. (It's easier for you to just go read it than for me to attempt to summarize it at the end of my work week. ;)
  • From what I'm reading it looks like these programs perform all sorts of different executions, and that's great and all, but they probably don't behave like real people do. The average user isn't going to create a file and then (in the middle of that) start running the delete file interface. Also I doubt these tests include other common user issues (like clicking the same function over and over again if it doesn't respond immediately). Maybe I'm just not understanding what these do... but if I'm even half right, rea
    • From what I'm reading it looks like these programs perform all sorts of different executions, and that's great and all, but they probably don't behave like real people do. The average user isn't going to create a file and then (in the middle of that) start running the delete file interface. Also I doubt these tests include other common user issues (like clicking the same function over and over again if it doesn't respond immediately). Maybe I'm just not understanding what these do... but if I'm even half right, rea

      • Thank you! (Score:1, Insightful)

        by Anonymous Coward
        Someone who doesn't see it as an all or nothing proposition.
        Tools that make sure programs are self-consistent are good!
        What's the point of having testing and real world trials if your program doesn't even agree with itself?
    • But it's still important to test those kind of things. A user MAY do that. Apple used to have a way of testing things that was rather ingenious. They used it to get rid of the bugs in the original Mac OS. Check out the story at Folklore.org [folklore.org].
  • LWN - Lock Checker (Score:4, Interesting)

    by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Friday June 02, 2006 @05:48PM (#15458230) Homepage
    LWN [lwn.net] just did a piece on a lock validator that just went into the kernel. The article [lwn.net] is currently subscriber only and won't be visible to non-subscribers until next Tuesday, IIRC.

    It was a very interesting piece. It talked about the problems of locking (more locks make deadlocks easier to create but harder to track down) and the way the code goes about finding problems. Basically, it remembers when any lock is taken or released, which locks were already held at that point, etc. Through this it can produce warnings. For example, if lock B requires lock A, but there is a situation where lock B is taken without A being held, it will flag that.

    The article has MUCH better descriptions. But through the use of this, the software can find locking bugs that may not be triggered on a normal system under normal loads. Here is the summary bit:

    "So, even though a particular deadlock might only happen as the result of unfortunate timing caused by a specific combination of strange hardware, a rare set of configuration options, 220V power, a slightly flaky video controller, Mars transiting through Leo, an old version of gcc, an application which severely stresses the system (yum, say), and an especially bad Darl McBride hair day, the validator has a good chance of catching it. So this code should result in a whole class of bugs being eliminated from the kernel code base; that can only be a good thing."

    It was a piece of code from Ingo Molnar; you should be able to find it on the kernel mailing list and read about it.

    Kudos, by the way, to LWN for the great article.
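As a rough illustration of the idea (this is a toy sketch, not the kernel validator's actual mechanism), one can record every "held lock -> acquired lock" edge and flag a reversed ordering even on a run where no deadlock actually happens:

```python
import threading

# Toy sketch of lock-order validation; purely illustrative, not kernel code.
# Each acquisition records "held -> acquired" edges; taking locks in the
# reverse of a previously observed order is reported as a potential
# deadlock, even though no deadlock occurred on this particular run.
class OrderCheckedLock:
    _edges = set()              # (held, acquired) pairs seen so far, shared
    _held = threading.local()   # per-thread stack of held lock names
    warnings = []

    def __init__(self, name):
        self.name = name
        self._lock = threading.Lock()

    def acquire(self):
        held = getattr(OrderCheckedLock._held, "stack", [])
        for h in held:
            if (self.name, h) in OrderCheckedLock._edges:
                OrderCheckedLock.warnings.append(
                    f"lock order inversion: {h} -> {self.name}")
            OrderCheckedLock._edges.add((h, self.name))
        self._lock.acquire()
        OrderCheckedLock._held.stack = held + [self.name]

    def release(self):
        OrderCheckedLock._held.stack.remove(self.name)
        self._lock.release()

a, b = OrderCheckedLock("A"), OrderCheckedLock("B")
a.acquire(); b.acquire(); b.release(); a.release()  # establishes order A -> B
b.acquire(); a.acquire()                            # reversed: B then A
a.release(); b.release()
print(OrderCheckedLock.warnings)  # flags the B -> A inversion
```

The real validator does far more (interrupt contexts, lock classes rather than instances), but the core trick is the same: validate orderings observed across all runs, not just the interleaving that happened to occur.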

    • > The article [lwn.net] is currently subscriber only and won't be visible to non-subscribers until next Tuesday, IIRC.

      No problem, just right click the login-box and select "Login with BugMeNot".

      You need Firefox and the BugMeNot-extension, though. Firefox can be found in your favorite repository or at http://www.mozilla.com/firefox/ [mozilla.com].

      The BugMeNot extension is here: http://extensions.roachfiend.com/bugmenot.xpi [roachfiend.com]
      • You have to PAY to get a LWN subscription. I'm not talking about a general login, I'm talking about a PAYING account. Unless someone PAID and then put the login up on BugMeNot, that won't work.

        SECOND, how kind of you to encourage people to steal from such a great website. LWN is the only one I subscribe to because I like them so much. They aren't a "pay us or you won't see anything" site (like most science journals). They aren't a "pay us and we won't put large flash ads between each page" site (their only

  • software snake oil (Score:4, Insightful)

    by penguin-collective ( 932038 ) on Friday June 02, 2006 @05:50PM (#15458241)
    None of those tools have ever been demonstrated to be cost-effective means of making software more dependable. It's an article of faith that adding a complex notation and another complex set of tools to the development process makes the product any better.
    • Spot on... we all know the proper way to ensure high-quality software design is with a heavy Waterfall [waterfall2006.com] methodology.
      Ooo... and throw in lots of bureaucratic layers in your organization, too!
      Lord knows software can't be high-quality without at least 10 separate management rubber-stamps on it. ;)
    • None of those tools have ever been demonstrated to be cost-effective means of making software more dependable.

      Your choice of word ordering is interesting. The article is not about making software more dependable; it's about making more dependable software!

    • None of those tools have ever been demonstrated to be cost-effective means of making software more dependable. It's an article of faith that adding a complex notation and another complex set of tools to the development process makes the product any better.

      You're absolutely right. Unfortunately, that also applies to pretty much every other software engineering tool, including ones that are currently being used. There are very few tools and techniques that have been experimentally validated (inspections ar

  • by Anonymous Coward on Friday June 02, 2006 @05:57PM (#15458287)
    This reminds me of my previous job. One day the owner of the company came up with a brilliant idea. He had just watched the movie "Finding Nemo" and asked me, "Have you ever seen Finding Nemo? You know those little silver fish? I think we should design a system based on those little silver fish. If we get enough small components they can be combined into any piece of software. Eventually we wouldn't need any more components and thus no more software developers. All of our software would be made by sales guys who could just combine these components into any software we need." I remember thinking to myself that we could just start with quarks and we could build everything in the universe. But I didn't say anything and was just happy to not be chosen to be on the team creating the silver fish.

    Six years, dozens of programmers, and millions of dollars later, the Finding Nemo architecture has been a bust. The owner of the company refuses to give up on the idea. They have currently created components of "and" and "or" gates and use "wires" to put them together. It reminds me of entry-level electrical engineering, back before they tell you that writing software on flash is usually easier and cheaper than wiring together dozens of ICs. In any case, I eventually did get sucked into the project and promptly left the company.

    • Well, he didn't totally have the wrong idea, he just took it too far and went the wrong way. Reusable components are good. If you can just download a library that can do X, write a bit of glue code, and be done, your productivity has skyrocketed. But there will always be glue code to write. And the idea isn't to write every component you'll ever need first, it's to write/find libraries as you need them, being careful to write them in a reusable fashion. You'll never have your sales guys as your main code
      • by TapeCutter ( 624760 ) on Friday June 02, 2006 @11:13PM (#15459981) Journal
        "If you can just download a library that can do X, write a bit of glue code, and be done your productivity has skyrocketed."

        When I worked for IBM in the 90's the CEO made the pronouncement: "All code has been written, it just needs to be managed". We all thought he was clueless, nevertheless here I am 10yrs later writing "glue code" for somebody else and IBM is still the largest "software as a service" company on the planet.
    • That story showcases the weakest component in the software design process: humans. In this case, the owner of the company.
    • I don't think the idea is bad, but your boss/company seems to equate the "magic" of schools of fish to their small size, whereas it's really their capability to self-organize using simple rules, a la the Game of Life, cellular automata, etc.

    • It has been six years since Finding Nemo was released? Seems like it was yesterday that I saw the movie. [quick googling "finding nemo year release"] 2003. How long would it have taken the "tool" to find this contradiction in your posting?
    • Fish school through autonomous interaction with the state of their observable surroundings. Most times, the local heuristic on movement is not very linear, and the swarm folds on itself, moves randomly, and changes volume and surface topology.

      When this type of intelligence is directed toward some more concrete goal, such as getting away from a predator (for fish), it turns out that the average path can be near optimal if the proper heuristics are chosen.

      http://en.wikipedia.org/wiki/Swarm_intelligence [wikipedia.org]
    • Yep.

      If we get enough small components they can be combined into any piece of software. Eventually we wouldn't need any more components and thus no more software developers.

      The key is that phrase "can be combined", although my second pick is "eventually". Your Finding Nemo system will have to be self-organizing because it is too vast to have organization imposed from without. You already have that kind of system today anyway. So if you have a self-organizing system, two questions are a) how does it arise and b)
    • If we get enough small components they can be combined into any piece of software.

      In the news today: Tanenbaum [oreilly.com] charged with subliminally brainwashing people with his microkernel design. People with tin-foil hats [wikipedia.org] live to tell the tale!

      * lon3st4r *

    • If we get enough small components they can be combined into any piece of software.

      Apparently he wants Assembler.
    • You have two kinds of people in a company: people whose job it is to be skeptical (engineers) and people whose job it is to be optimistic (sales/marketing). The former think in terms of logic, the latter in terms of perception. That's why sales guys writing code doesn't make sense.

      The idea of sales guys looking at code scares the crap out of me.
    • Otherwise your salespeople would already be creating applications in Java. There's certainly no shortage of components there.
  • Snakeoil/Panacea (Score:1, Interesting)

    by Anonymous Coward
    Yet another article about a supposed solution to software quality problems by an author who just happens to have such a solution to sell you.

    Software design and coding isn't easy, and beyond the fundamentals (code coverage tools, unit testing frameworks, etc.), I have yet to see automation tools that increase software quality in any real way.

    Any person who knows anything about software quality knows that the answer is not to use "a tool that explores billions of possible executions of the system, looking fo
  • Now a computer can discover the flaws in the design of a piece of software, and advise the developers of them. Who, if they're in any way involved in the design of games, will promptly ignore them and release a post-release patch to fix the issues they knew were there anyway. But hang on.. just how will you check for inconsistencies in the design of the analysis tools?
  • These types of algorithmic testing tools are useful for small, truly critical functionality that has to work perfectly. It's not cost effective to try to model typical complex software in a manner that supports testing as described in the article. Most programming is not about designing the next great single algorithm, it's about integrating data, interacting with users, and providing all the logic to handle the myriad special cases that make up user requirements. Rarely is such a testing tool going to cov
    • Rarely is such a testing tool going to cover all the possibilities without a gargantuan effort to model the software -- which effort will most likely not be able to keep up with the actual development anyway. These tools won't be widely accepted until they can automatically read source code and create a software's model without programmer input.

      did you RTFA? design first, code later.

      there are other tools for analyzing code after it's written. this article is not about those, but there's lots of work

      • Yes, the magazine actually came out three weeks ago. "Design first" at the level of detail required for this type of testing to work (complete pseudocode) is pretty much never going to happen with a major software system. Perhaps with small critical algorithms, yes, or where the risk/reward of that level of design is warranted. Most software is not going to qualify.
  • by cryptomancer ( 158526 ) on Friday June 02, 2006 @06:08PM (#15458363)
    Sounds like some producer wants a magic-bullet program to replace some bad-performing designer. Even in the case of a 'useful' tool to apply to projects, this is likely to become an excuse for when an inconsistency is found later on by QA- "the program said it was good!"

    It's not going to find everything, let alone fix it. See Turing: the halting problem.
    • It's not going to find everything, let alone fix it.
      OTOH, it may find plenty of things that would otherwise be missed. Of course, people will misuse it sometimes and blame the results on the software. That's not a reason to think the software is a bad idea -- it's not like not having automated validation software will stop people from doing inadequate QA or poor design.
    • You should read more about the halting problem.

      Turing did not say "you cannot prove an algorithm will terminate"; he said "you cannot prove that all arbitrary algorithms will terminate on a Turing machine".
      First, computers are NOT Turing machines. They are bounded in storage (although the bounds are pretty high nowadays).
      Second, it is possible to prove that a given algorithm will terminate. Just not all algorithms. Therefore a tool can prove that your algorithm terminates. If it does terminate you are set. If
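One standard way a checker proves a *specific* loop terminates is by verifying a termination argument the programmer supplies: a nonnegative integer "variant" that strictly decreases on every iteration. A hypothetical runtime sketch of that idea (the variant-function trick is standard; the code itself is invented for illustration):

```python
# Sketch: checking termination of one specific loop via a variant function.
# This sidesteps the halting problem: the tool doesn't decide termination
# for arbitrary programs, it only verifies the supplied argument -- a
# nonnegative measure that must strictly decrease on every iteration.
def gcd_with_termination_check(a, b):
    variant = lambda a, b: b          # the measure: the second argument
    while b != 0:
        before = variant(a, b)
        a, b = b, a % b               # one loop step of Euclid's algorithm
        assert 0 <= variant(a, b) < before, "variant check failed"
    return a

print(gcd_with_termination_check(48, 18))  # 6
```

Since `b` is a nonnegative integer that strictly shrinks each pass, the loop cannot run forever; a static prover would discharge the same inequality symbolically instead of checking it at runtime.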
  • Detecting too late (Score:3, Interesting)

    by Doctor Memory ( 6336 ) on Friday June 02, 2006 @06:13PM (#15458403)
    If they're checking the software design for inconsistencies, then they're too late. What is needed is some way to formally specify user requirements, so that they can be checked for completeness and consistency. Use cases are nice, but they're not sufficiently rigorous to capture absolutely all the requirements. I know there have been some schemes tossed around for requirements validation, but none that I've seen have really been general-purpose enough for the average project.
    • There are formal methods such as Z Notation [wikipedia.org] for specifying the behavior of a general-purpose (not even necessarily computer-based) system in a provable, checkable way. Unfortunately, precisely specifying the behavior of a nontrivial system is a lot of work, as is learning how to use a formal method in the first place. Theoretically, someday tools will make it easier, but specifying the intended behavior ahead of time will still be time-consuming and often difficult to justify to management.

      Other posters

      • Maybe what is needed is a dual source code -- one source code specifying what is supposed to happen (specification) and another source code specifying how it is supposed to happen (algorithm). If you had a complete specification, you would think you could automatically generate the algorithm, but in that case, your specification would be a form of programming and would have to be airtight.

        But why does the specification have to be a one-to-one mapping into the algorithm? Couldn't the specification be of

  • by roger6106 ( 847020 ) on Friday June 02, 2006 @06:14PM (#15458409)
    Making a tool to check most programs for errors sounds extremely complicated, but wouldn't it be possible to make a simpler tool that checks the security of a PHP/MySQL website?
    • Making a tool to check most programs for errors sounds extremely complicated, but wouldn't it be possible to make a simpler tool that checks the security of a PHP/MySQL website?

      If this were a thread about car safety, you would be saying something along the lines of "It's nice that cars are safe and all, but can I have an apple?"
    • All PHP programmers use the same method. It's called "the internet".
  • by KidSock ( 150684 ) on Friday June 02, 2006 @06:14PM (#15458413)
    A good design correctly models the concept of what it is you're trying to achieve with the program. Ultimately this means the programming interfaces (APIs) for each concept are correct [1]. Don't design interfaces around procedures. Don't design interfaces around the physical world. Design to *concepts* and *ideas*. This is superior because you will never discover at a later time that the code is fundamentally flawed and needs to be totally re-written. If the interface correctly models the concept, by definition, it CANNOT be wrong. If it is wrong then you simply didn't understand the concept well enough, or you failed to translate that concept into a suitable interface, and you just need to think more and type less. If you do get things right you'll find that major pieces dovetail together perfectly [2]. The implementation can be wrong and may need to be re-written, but if the interface correctly represents the concept the re-write will be localized to one library or part of a library. That is a much more straightforward matter than using a bad design and finding halfway through a project that the required changes transcend the whole system.

    And thus you cannot validate a design because that would require representing a concept and determining if an interface suitably models it. That is HARD. If that were possible you would effectively have a thinking, rationalizing, brain (Artifical Intelligence) in which case you wouldn't be dorking around with validating programs, you would be dynamically generating them.

    [1] Frequently people advocate that interfaces be "well defined". That just means there are no holes in the logic of its use. Personally I think a well defined interface is useless if it does not correctly model a concept. You can always go back and fill in the holes later.
    [2] Although this is also when you discover that you didn't get the concept right and need to adjust the interface (hopefully not by much)
    • A well defined interface means that if you build 1 million holes in a plank and I deliver 1 million pegs, when they "meet" they fit.

      A square hole and a round hole, one 1 inch in diameter and one 1 foot wide, all of them model the concept, but are utterly useless.

      You can always go back and fill in the holes later.

      No you cannot, otherwise we would all use Dvorak keyboards, not this stupid Qwerty. And we would have had HDTV 15 years ago. And...

      • Of course, a software plank might be able to ask the peg what size and shape it is and adjust its holes accordingly.

        Also, color TV managed to build on top of black and white. Black and white TVs receive color signals and display black and white; color TVs do fine with black and white signals.

        Qwerty sticks because it is good enough. Dvorak might be better (it isn't as clear as people want it to be), but it really doesn't matter that much. People that hunt and peck don't care what the field looks like, people
  • Wow. It's indeed a table, even a good target for implementation as an HTML table. It has links in it, which point to places to look at software. And yet it's presented as an image, not even an image map, so despite already being on a website, we have no choice but to type them in!

    If this is reliable, I don't mind unreliability, but at least let me copy and paste!
  • I actually read this article last week, unlike most of the people who've responded so far. The principle behind this concept is reasonably sound, except that the example they've given in it (seating Montagues and Capulets at Romeo & Juliet's reception) requires you to understand every unspoken assumption to make the tester work properly.

    Jackson doesn't claim it'll find everything. What he says is that it carefully synthesizes the two previous approaches to software testing, reducing the amount of time-e
  • .. remain the same: describing the problem in the first place. Alloy (or any other set of design programs) can only analyze the information that it's provided. It may be able to flag problems that would have been missed by a human analyst, but it can't possibly deal with real world systems which will invariably produce conditions that weren't considered in the first place. Patching never produces a system as reliable as one that might have been described thoroughly in the first place (a practical impossibil
  • by lilnobody ( 148653 ) on Friday June 02, 2006 @08:13PM (#15459191)
    The article sounded good, so I went to the alloy site. Having just read through the tutorials, and some of the docs, I can't imagine what possible use this software could have in the majority of software development.

    It's basically a nifty, graphical declarative programming language. Anyone familiar with Prolog and set theory will breeze through the docs, only to ask "Why?" at the end.

    One of the tutorials, for example, is a way to get Alloy to create a set of actions for the river crossing problem, and list them. Thus, Alloy has saved the poor farmer's chicken. It's actually quite a cool set of toys for set theory: it generates all possible permutations of a system with rules and facts, based on how many total entities there are in the system, and checks the system for consistency. There are doubtless uses for this, but software development is, at the moment, not one of them.
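    For a sense of what the analyzer is doing, the same puzzle can be brute-forced directly with an exhaustive state search. This is a hypothetical Python sketch of the idea, not Alloy's actual engine:

    ```python
    from collections import deque

    # The classic puzzle the Alloy tutorial models: farmer, fox, chicken, grain.
    # EATS encodes what gets eaten if a pair is left together without the farmer.
    EATS = {("fox", "chicken"), ("chicken", "grain")}
    ITEMS = ("fox", "chicken", "grain")

    def safe(bank):
        # a bank is safe if no predator/prey pair is left there unattended
        return not any((a, b) in EATS for a in bank for b in bank)

    def solve():
        # state: (farmer's bank, frozenset of items on bank 0); banks are 0 and 1
        start = (0, frozenset(ITEMS))
        goal = (1, frozenset())
        seen = {start}
        queue = deque([(start, [])])
        while queue:
            (farmer, left), path = queue.popleft()
            if (farmer, left) == goal:
                return path
            here = left if farmer == 0 else frozenset(ITEMS) - left
            # the farmer crosses alone, or takes one item from his bank
            for item in (None, *here):
                new_left = left
                if item is not None:
                    new_left = left - {item} if farmer == 0 else left | {item}
                # the bank the farmer leaves behind must stay safe
                unattended = new_left if farmer == 0 else frozenset(ITEMS) - new_left
                if not safe(unattended):
                    continue
                state = (1 - farmer, new_left)
                if state not in seen:
                    seen.add(state)
                    queue.append((state, path + [item or "nothing"]))
        return None

    print(solve())  # a shortest sequence of cargoes, starting with the chicken
    ```

    The breadth-first search plays the role of Alloy's bounded exhaustive enumeration: every reachable state is visited, so a returned plan is a genuine counterexample to "the farmer can never get everything across".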

    The other tutorial is about how to set up the concept of a file system. The tutorial sets up a false assertion (assertion = something for Alloy to test), which fails. Here is the reasoning, with a summary to follow for the uninterested:

    Why can this happen? First, let's note that both delete functions cause the rows of the contents relation in the post-state to be a subset of the rows in the pre-state. And we know the FileSystem appended facts "objects = root.*contents" and "contents in objects->objects" constrains the root of the file system to be the root of the contents relation. So if the post-state has a non-empty contents relation, it will have the same root as the pre-state. However if delete function causes the post-state to have an empty contents relation, then the root is free to change arbitrarily, to any directory available. Bet you didn't see that coming. Good thing we wrote a model!

    Basically this says that in a 2-node scenario, i.e. a root directory with one subdirectory, they delete the subdirectory with their delete function. The way they defined the delete function basically meant that the 'deleted' node could, in theory, be the root of the file system after the deletion operation occurred, since there was no constraint on which node of the file system was root after the change. They basically said that under these constraints, it was possible to define a 'delete' function that deletes the subdirectory in a 2-node filesystem and then makes that same subdirectory the root of the filesystem.

    Good thing we built a model, indeed! Finding a bug in the programming of your own model is hardly a payoff that justifies spending a significant amount of effort modeling a concept in set theory. The best part is that all of your effort amounts to mental masturbation--there are no tools for turning this into a programming contract, test cases, or anything. Some projects are in the works in the links area, but they aren't there yet, and only for Java. I don't see how the amount of effort that would be required to model a large scale, realistic project in this obtuse set notation would be worth it for absolutely no concrete programming payoff. Writing HR's latest payroll widget, or even their entire payroll system, is just not going to get any benefit from this.
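    The under-constrained delete can be reproduced with a tiny hand-rolled "model finder". This Python sketch paraphrases the tutorial's constraints (node names and the spec predicate are invented for illustration, not Alloy's actual semantics):

    ```python
    from itertools import combinations

    # Pre-state: a 2-node file system where Root has one child, Sub.
    # The buggy delete spec only says contents must shrink to exclude the
    # deleted edge, and that the post-root must be the parent of whatever
    # contents remain -- which constrains nothing when contents is empty.
    NODES = ("Root", "Sub")
    PRE_CONTENTS = frozenset({("Root", "Sub")})

    def all_relations():
        # every possible contents relation over the two nodes
        edges = [(a, b) for a in NODES for b in NODES if a != b]
        for r in range(len(edges) + 1):
            for combo in combinations(edges, r):
                yield frozenset(combo)

    def delete_spec_holds(post_root, post_contents):
        # contents may only lose the deleted edge (Root -> Sub)
        if not post_contents <= PRE_CONTENTS - {("Root", "Sub")}:
            return False
        # post_root must be the parent in every remaining edge -- vacuously
        # true for the empty relation
        return all(parent == post_root for (parent, _) in post_contents)

    models = [(root, contents)
              for root in NODES
              for contents in all_relations()
              if delete_spec_holds(root, contents)]
    print(models)  # Sub is a legal new root: the spec under-constrains it
    ```

    Enumerating every candidate post-state shows both Root and the just-deleted Sub satisfy the spec as the new root, which is exactly the surprise the tutorial's model check turns up.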

    All that aside, it's conceivable that this sort of model programming could find use in complicated systems in which high reliability is paramount. The usual suspects, such as satellites, space, deep sea robots or whatever come to mind--this system could prove, for example, that a given system for firmware upgrades cannot leave a robot on Mars in an inoperable state, unable to download new, non-buggy firmware.

    But it still can't prove the implementation works. *slaps forehead*

    nobody

  • Although you should design at this level, many problems hit LONG before design. The big problems I've seen have been in the analysis stage where you gather customer requirements and translate them into a very detailed requirements document.

    If your non-trivial project lacks such a document, it will probably fail.

    The only way to overcome a lack of requirements is to have a heroic effort by one or more engineers, and even then you end up with many of the same problems.

    The problems will stem from missing a few
  • FTA: More recently, researchers have adopted a very different approach, one that harnesses the power of today's faster processors to test every possible scenario. This method, known as model checking, is now used extensively to verify integrated-circuit designs.

    The problem with this is that algorithmic software does not work like ICs. The only way to solve the crisis is to abandon the algorithmic software model and embrace a non-algorithmic, signal-based, synchronous model. This is the model used in hardwar
  • A tool not listed is autotest http://se.inf.ethz.ch/people/leitner/auto_test/ [inf.ethz.ch], which makes use of Eiffel's contracts. I did use it to some degree and it can really help to find some errors.
  • One of my friends did a project for his masters: some simple code that reads the submitted source, counts the number of code lines, comment lines, average number of lines per function, etc., and prints out some stats about the "quality of the code". His prof ran the project's own source as the input! It flunked itself for not having enough comments, for having functions that were too long, for not breaking up large source files, for using too many nested levels of code, etc.

    Microsoft sells collaboration software a
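    A toy version of the kind of self-flunking metrics checker described above might look like this in Python (the thresholds and heuristics are invented, and, like the original, it is easily fooled):

    ```python
    def crude_metrics(source: str) -> dict:
        # Naive line-counting "quality" stats for C-like source: classify
        # each line as code, comment, or blank by its first characters.
        code = comments = blank = 0
        for line in source.splitlines():
            stripped = line.strip()
            if not stripped:
                blank += 1
            elif stripped.startswith(("//", "/*", "*")):
                comments += 1
            else:
                code += 1
        total = code + comments
        return {
            "code": code,
            "comments": comments,
            "blank": blank,
            "comment_ratio": comments / total if total else 0.0,
        }

    snippet = """\
    // add two numbers
    int add(int a, int b) {
        return a + b;
    }
    """
    print(crude_metrics(snippet))
    ```

    Such counters say nothing about whether the code is correct, which is rather the point of the anecdote.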

  • by aeroz3 ( 306042 ) on Friday June 02, 2006 @10:56PM (#15459929)
    The point of these tools is simply to verify the consistency of a design, not to execute or examine existing source code. The steps involved are:
    1) Come up with a software design
    2) Implement the design in one of these tools (model it in Z, or as a state machine using FSP/LTSA)
    3) Use said tool to verify the consistency of the design.

    Now, this activity, of course, takes a lot of time, and is unlikely to ever be of any use to your average J2EE/Ajax/Enterprise application. Areas where they CAN be of use are things such as life-critical systems, for instance medical devices or airplane control systems. Using something like FSP/LTSA you can model, check, and verify that your design does not ever allow the system to enter an invalid state. Now, remember, this says nothing about the final code; there is a separate issue of the code not matching the design, but it is possible to verify that the design never leads to invalid states.
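    That design-level safety check boils down to reachability analysis over the modeled state machine. Here is a toy Python sketch of the idea; the door-interlock machine and its state names are invented for illustration, and real tools like LTSA do far more:

    ```python
    # Enumerate every reachable state of a design-level machine and check
    # that no unsafe state is among them.  States are (door, beam) pairs
    # for an imaginary radiation device with a door interlock.
    TRANSITIONS = {
        ("door_closed", "idle"):   [("door_open", "idle"), ("door_closed", "firing")],
        ("door_open", "idle"):     [("door_closed", "idle")],
        ("door_closed", "firing"): [("door_closed", "idle")],
        # a buggy design might also allow a transition into ("door_open", "firing")
    }
    UNSAFE = {("door_open", "firing")}

    def reachable(start):
        # depth-first exploration of the whole state space
        seen, stack = set(), [start]
        while stack:
            state = stack.pop()
            if state in seen:
                continue
            seen.add(state)
            stack.extend(TRANSITIONS.get(state, []))
        return seen

    states = reachable(("door_closed", "idle"))
    violations = states & UNSAFE
    print("safe" if not violations else f"unsafe states reachable: {violations}")
    ```

    Because the design's state space is finite and fully enumerated, an empty `violations` set is a proof about the design (though, as the parent says, not about the code that implements it).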
    • If the abstractions available in the design verification tool simplify testing for correctness, why not also use them for the actual implementation? Performance is a weaker excuse every day that hardware costs go down. Currently, that's every day.

      "Agile methods" are largely an acknowledgement that mistakes are inevitable and should be planned for. Unit testing helps find mistakes by stating everything twice, once inside (the code) and once outside (the tests) the implementation. Test failures are mistakes
  • by 3seas ( 184403 ) on Friday June 02, 2006 @11:35PM (#15460052) Homepage Journal
    ...science of abstraction physics.

    Yes, the software industry is still playing with magic potions and introductory alchemy.

    Why? The answer is simple to give:

    money, job security and social status.

    Someone posted that they were warned that their jobs would become extinct upon automated software development.
    But the fact is... who but those who have their jobs at risk are in a position to employ such tools?

    Snake-oil software development is a self-supported dependency... far from genuine computer software science (of which we haven't really seen any since the US government held up the money carrot for code breakers during WWII).
  • by dwheeler ( 321049 ) on Saturday June 03, 2006 @12:33AM (#15460216) Homepage Journal
    The referenced article has a lot about formal methods tools (including "light" formal methods tools). See the paper High Assurance (for Security or Safety) and Free-Libre / Open Source Software (FLOSS)... with Lots on Formal Methods [dwheeler.com] for FLOSS programs that support this. For a list of some tools that look for security vulnerabilities, see the FlawFinder web site [dwheeler.com], which links to others.

    Alloy is a cool tool, if it does something you want done. But nobody should be fooled into thinking that you can just run Alloy and suddenly your code is perfect. Alloy just helps you check out a model based on set theory, etc... it's a long distance from models like that to the actual code.

  • Even before any code is written, there is the failure to develop a design science.
  • I read this in the dead tree version. It sounded good until I read the Alloy example:

    /*
     * Constrains at most one item to move from 'from' to 'to'.
     * Also constrains which objects get eaten.
     */
    pred crossRiver (from, from', to, to': set Object) {
      // either the Farmer takes no items
      ( from' = from - Farmer &&
        to' = to - to.eats + Farmer ) ||
      // or the Farmer takes one item
      some item: from - Farmer {
        from' = from - Farmer - item
        to' = to - to.eats + Farmer + item
      }
    }

  • Related to proving the correctness of software design is the research field that focuses on one particular area: rule verification. It is similar to the SAT solvers; however, the focus is on the conditions in a rule and its inferred actions. Verification examines the technical aspects of an expert system (a.k.a. rule based system) in order to determine whether the expert system is built correctly. Verifying the expert system involves examining consistency, completeness and correctness of the knowledge by de
  • There are different means of checking software; the article describes a technique wherein a specification language is used to describe the essential features of an application which is then checked using theorem-proving techniques. The power of Alloy seems to be based in part on its ability to efficiently process the multitude of variations in program execution by establishing the equivalency of many internal states. There are other methods which the article does mention, albeit briefly.

    Floyd (whose early
