Tools To Automate Checking of Software Design

heck writes "Scientific American describes some of the work to develop tools for examining software designs for logical inconsistencies. The article is by one of the developers of Alloy, but it does reference other tools (open and closed source) in development. The author admits that widespread use of the tools is years away, but the approach they are taking to validating designs makes for interesting reading."
  • by cavemanf16 ( 303184 ) on Friday June 02, 2006 @05:43PM (#15458196) Homepage Journal
    For members of IEEE with a subscription to IEEE Computer Society's Transactions on Software Engineering, the last issue (April) has a very interesting article related to this stuff titled: Interactive Fault Localization Techniques in a Spreadsheet Environment [computer.org]. Basically, the article explains how they have worked to develop and test techniques that allow "end-user programmers" (people who create formulas in spreadsheets and such) to use automated fault localization testing tools to help debug their "applications" (spreadsheets) at runtime. Pretty interesting stuff that they found in their analysis. (It's easier for you to just go read it than for me to attempt to summarize it at the end of my work week. ;)
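
    To make that concrete, here is a toy sketch in Python of dataflow-based fault localization (mine, not the paper's actual algorithm): formula cells that feed many outputs the user marked wrong, and few marked right, come out as the most suspicious.

        # Hypothetical sketch of fault localization for spreadsheet formulas.
        def suspiciousness(contributors, verdicts):
            """contributors: output cell -> set of formula cells feeding it
               verdicts:     output cell -> True if user marked it correct"""
            scores = {}
            for out, cells in contributors.items():
                for c in cells:
                    ok, bad = scores.get(c, (0, 0))
                    scores[c] = (ok + verdicts[out], bad + (not verdicts[out]))
            return {c: bad / (ok + bad) for c, (ok, bad) in scores.items()}

        # Example: B1 and C1 both read A1; only C1's value was marked wrong,
        # so C0 (feeding only C1) scores highest.
        print(suspiciousness({"B1": {"A1", "B0"}, "C1": {"A1", "C0"}},
                             {"B1": True, "C1": False}))
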
  • LWN - Lock Checker (Score:4, Interesting)

    by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Friday June 02, 2006 @05:48PM (#15458230) Homepage
    LWN [lwn.net] just did a piece on a lock validator that recently went into the kernel. The article [lwn.net] is currently subscriber only and won't be visible to non-subscribers until next Tuesday, IIRC.

    It was a very interesting piece. It talked about the problems of locking (more locks make deadlocks more likely, but harder to track down) and the way the code goes about finding problems. Basically, it remembers whenever any lock is taken or released, which locks were already held at the time, and so on. Through this it can produce warnings. For example, if lock B is normally taken while holding lock A, but there is a code path where B is taken without A, it will flag that.
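
    To make the bookkeeping concrete, here is a toy sketch in Python (mine, not the actual kernel C, and without the real validator's transitive analysis): record the acquisition order between every pair of locks, and warn when a later acquisition reverses an order seen earlier.

        # Toy lock-order validator: warn on reversed acquisition order.
        held = []        # locks currently held, in acquisition order
        order = set()    # (first, second) acquisition pairs observed so far

        def acquire(lock):
            for h in held:
                if (lock, h) in order:
                    print(f"possible deadlock: {h} -> {lock} reverses {lock} -> {h}")
                order.add((h, lock))
            held.append(lock)

        def release(lock):
            held.remove(lock)

        acquire("A"); acquire("B")    # records the order A -> B
        release("B"); release("A")
        acquire("B"); acquire("A")    # B -> A reverses it: warning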

    The article has MUCH better descriptions, but through the use of this the software can find locking bugs that might never be triggered on a normal system under normal loads. Here is the summary bit:

    "So, even though a particular deadlock might only happen as the result of unfortunate timing caused by a specific combination of strange hardware, a rare set of configuration options, 220V power, a slightly flaky video controller, Mars transiting through Leo, an old version of gcc, an application which severely stresses the system (yum, say), and an especially bad Darl McBride hair day, the validator has a good chance of catching it. So this code should result in a whole class of bugs being eliminated from the kernel code base; that can only be a good thing."

    The code comes from Ingo Molnar; you should be able to find it on the kernel mailing list and read about it.

    Kudos, by the way, to LWN for the great article.

  • by Anonymous Coward on Friday June 02, 2006 @05:57PM (#15458287)
    This reminds me of my previous job. One day the owner of the company came up with a brilliant idea. He had just watched the movie "Finding Nemo" and asked me, "Have you ever seen Finding Nemo? You know those little silver fish? I think we should design a system based on those little silver fish. If we get enough small components, they can be combined into any piece of software. Eventually we wouldn't need any more components, and thus no more software developers. All of our software would be made by sales guys who could just combine these components into whatever software we need." I remember thinking to myself that we could just start with quarks and build everything in the universe. But I didn't say anything and was just happy not to be chosen for the team creating the silver fish.

    Six years, dozens of programmers, and millions of dollars later, the Finding Nemo architecture has been a bust. The owner of the company refuses to give up on the idea. They have currently created components for "and" and "or" gates and use "wires" to put them together. It reminds me of entry-level electrical engineering, back before they tell you that writing software on flash is usually easier and cheaper than wiring together dozens of ICs. In any case, I eventually did get sucked into the project, and promptly left the company.

  • Snakeoil/Panacea (Score:1, Interesting)

    by Anonymous Coward on Friday June 02, 2006 @06:05PM (#15458334)
    Yet another article about a supposed solution to software quality problems by an author who just happens to have such a solution to sell you.

    Software design and coding isn't easy, and beyond the fundamentals (code coverage tools, unit testing frameworks, etc.), I have yet to see automation tools that increase software quality in any real way.

    Any person who knows anything about software quality knows that the answer is not to use "a tool that explores billions of possible executions of the system, looking for unusual conditions that would cause it to behave in an unexpected way."

    For one thing, you need to approach at least the level of human intelligence to understand what is intended by a software design or code module, and often even humans don't understand it. If you don't understand what was expected, how in the hell can you look for unexpected conditions??

    In short, if this is truly what the author is proposing, i.e., brute force checking of design/code, I would have to say the author just doesn't get it.
  • Detecting too late (Score:3, Interesting)

    by Doctor Memory ( 6336 ) on Friday June 02, 2006 @06:13PM (#15458403)
    If they're checking the software design for inconsistencies, then they're too late. What is needed is some way to formally specify user requirements, so that they can be checked for completeness and consistency. Use cases are nice, but they're not sufficiently rigorous to capture absolutely all the requirements. I know there have been some schemes tossed around for requirements validation, but none that I've seen have really been general-purpose enough for the average project.
  • by jvkjvk ( 102057 ) on Friday June 02, 2006 @08:06PM (#15459154)
    Fish school through autonomous interaction with the state of their observable surroundings. Most of the time the local movement heuristic is far from linear, and the swarm folds in on itself, moves randomly, and changes volume and surface topology.

    When this kind of intelligence is directed toward some more concrete goal, such as getting away from a predator (for fish), it turns out that the average path can be near optimal if the proper heuristics are chosen.

    http://en.wikipedia.org/wiki/Swarm_intelligence [wikipedia.org]
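
    A minimal sketch of such a local heuristic (a toy boids-style update in Python, illustrative only, with made-up weights):

        import random

        def step(school, radius=1.0):
            """One update of a toy schooling rule: cohere toward nearby
            neighbors, avoid the very closest, plus a little noise."""
            new = []
            for (x, y) in school:
                dx = dy = 0.0
                for (a, b) in school:
                    d2 = (a - x) ** 2 + (b - y) ** 2
                    if 0 < d2 < radius ** 2:
                        w = 0.05 if d2 > 0.04 else -0.2  # cohere far, avoid near
                        dx += w * (a - x)
                        dy += w * (b - y)
                new.append((x + dx + random.uniform(-0.02, 0.02),
                            y + dy + random.uniform(-0.02, 0.02)))
            return new
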
  • by lilnobody ( 148653 ) on Friday June 02, 2006 @08:13PM (#15459191)
    The article sounded good, so I went to the alloy site. Having just read through the tutorials, and some of the docs, I can't imagine what possible use this software could have in the majority of software development.

    It's basically a nifty, graphical declarative programming language. Anyone familiar with Prolog and set theory will breeze through the docs, only to ask "Why?" at the end.

    One of the tutorials, for example, gets Alloy to generate a sequence of actions for the river crossing problem and list them; thus Alloy has saved the poor farmer's chicken. It's actually quite a cool set of toys for set theory: it generates all possible configurations of a system with rules and facts, up to a bound on how many entities the system contains, and checks the system for consistency. There are doubtless uses for this, but software development is, at the moment, not one of them.
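
    For flavor, here is the same kind of exhaustive search written out by hand in Python (my sketch, not Alloy's notation): enumerate states, forbid the unsafe ones, and search for a sequence that gets everyone across.

        from collections import deque

        # States are frozensets of who is on the left bank; a bank without
        # the farmer must not hold fox+chicken or chicken+grain.
        ITEMS = {"farmer", "fox", "chicken", "grain"}

        def safe(side):
            return "farmer" in side or not ({"fox", "chicken"} <= side or
                                            {"chicken", "grain"} <= side)

        def solve():
            start, goal = frozenset(ITEMS), frozenset()
            seen, queue = {start}, deque([(start, [])])
            while queue:
                left, path = queue.popleft()
                if left == goal:
                    return path
                bank = left if "farmer" in left else ITEMS - left
                for cargo in [None] + sorted(bank - {"farmer"}):
                    move = {"farmer"} | ({cargo} if cargo else set())
                    nxt = left - move if "farmer" in left else left | move
                    if safe(nxt) and safe(ITEMS - nxt) and nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, path + [cargo or "nothing"]))

        print(solve())  # e.g. ['chicken', 'nothing', 'fox', 'chicken', ...]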

    The other tutorial is about how to set up the concept of a file system. It sets up a false assertion (assertion = something for Alloy to test), which fails. Here is the reasoning, with a summary to follow for the uninterested:

    Why can this happen? First, let's note that both delete functions cause the rows of the contents relation in the post-state to be a subset of the rows in the pre-state. And we know the FileSystem appended facts "objects = root.*contents" and "contents in objects->objects" constrains the root of the file system to be the root of the contents relation. So if the post-state has a non-empty contents relation, it will have the same root as the pre-state. However if delete function causes the post-state to have an empty contents relation, then the root is free to change arbitrarily, to any directory available. Bet you didn't see that coming. Good thing we wrote a model!

    Basically this says that in a 2-node scenario, i.e., a root directory with one subdirectory, they delete the subdirectory with their delete function. The way they defined the delete function meant that the 'deleted' node could, in theory, be the root of the file system after the deletion operation occurred, since there was no constraint on which node of the file system was root after the change. In other words, under these constraints it was possible to define a 'delete' function that deletes the subdirectory in a 2-node filesystem and then makes that same subdirectory the root of the filesystem.
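
    You can see why a bounded exhaustive check catches this with a few lines of Python (my sketch, in the spirit of Alloy's analysis rather than its syntax): enumerate every post-state the loose spec allows in a 2-node scope and test the assertion that the root survives.

        # Tiny scope: directories 0 (the root) and 1 (its child).
        pre_root, pre_contents = 0, {(0, 1)}  # rows of the contents relation

        def delete(node):
            # The tutorial's loose spec: post contents is the pre contents
            # minus the deleted node's rows; the post root is unconstrained.
            post = {(p, c) for (p, c) in pre_contents if node not in (p, c)}
            return [(root, post) for root in (0, 1)]  # any root is allowed!

        # Assertion: deleting a subdirectory should not change the root.
        for root, contents in delete(1):
            if root != pre_root:
                print("counterexample: deleted node 1 became root", root)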

    Good thing we built a model, indeed! Catching a bug in the model itself is hardly a payoff that justifies spending significant effort modeling a concept in set theory. The best part is that all of your effort amounts to mental masturbation--there are no tools for turning this into a programming contract, test cases, or anything. Some projects along those lines are in the works in the links area, but they aren't there yet, and only for Java. I don't see how the effort required to model a large-scale, realistic project in this obtuse set notation would be worth it for absolutely no concrete programming payoff. Writing HR's latest payroll widget, or even their entire payroll system, is just not going to get any benefit from this.

    All that aside, it's conceivable that this sort of model programming could find use in complicated systems where high reliability is paramount. The usual suspects, such as satellites, spacecraft, or deep-sea robots, come to mind--this system could prove, for example, that a given firmware upgrade scheme cannot leave a robot on Mars in an inoperable state, unable to download new, non-buggy firmware.

    But it still can't prove the implementation works. *slaps forehead*

    nobody

  • by 140Mandak262Jamuna ( 970587 ) on Friday June 02, 2006 @10:36PM (#15459855) Journal
    One of my friends did a project for his master's: some simple code that reads the submitted source, counts the number of code lines, comment lines, the average number of lines per function, etc., and prints out some stats about the "quality of the code." His prof ran the project's own source as the input! It flunked itself for not having enough comments, for having functions that were too long, for not breaking up large source files, for using too many nested levels of code, and so on.
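
    The whole tool fits in a sketch like this (Python, thresholds invented for illustration):

        # Rough sketch of such a metrics checker.
        def code_stats(path, min_comment_ratio=0.1):
            code = comments = 0
            with open(path) as f:
                for line in f:
                    s = line.strip()
                    if not s:
                        continue
                    if s.startswith(("#", "//", "/*", "*")):
                        comments += 1
                    else:
                        code += 1
            ratio = comments / max(code + comments, 1)
            verdict = "pass" if ratio >= min_comment_ratio else "flunk: too few comments"
            return code, comments, round(ratio, 2), verdict

        print(code_stats(__file__))  # run it on itself, as the prof did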

    Microsoft sells collaboration software and project management software. And its products are not shipping any faster.

    These guys are touting Alloy and its tools; it sounds like old CASE wine in a new bottle. Did they use these tools to design the tools? At least, would they use these tools and Alloy to create the next version? Could they demonstrate that these tools can handle a project of that complexity, and produce provably better code with no bugs?

    Please forgive me if I am underwhelmed.

  • by TapeCutter ( 624760 ) on Friday June 02, 2006 @11:13PM (#15459981) Journal
    "If you can just download a library that can do X, write a bit of glue code, and be done your productivity has skyrocketed."

    When I worked for IBM in the '90s, the CEO made the pronouncement: "All code has been written, it just needs to be managed." We all thought he was clueless; nevertheless, here I am ten years later writing "glue code" for somebody else, and IBM is still the largest "software as a service" company on the planet.
  • by 3seas ( 184403 ) on Friday June 02, 2006 @11:35PM (#15460052) Homepage Journal
    ...science of abstraction physics.

    Yes, the software industry is still playing with magic potions and introductory alchemy.

    Why? That's a simple answer to give: money, job security, and social status.

    Someone posted that they were warned their jobs would become extinct upon automated software development. But the fact is: who but those whose jobs are at risk are in a position to employ such tools?

    Snake-oil software development is a self-supporting dependency... far from genuine computer software science (of which we haven't really seen any since the US government held up the money carrot for code breakers during WWII).
  • by JurassicPizza ( 972175 ) on Saturday June 03, 2006 @12:11AM (#15460150)
    Yes, the magazine actually came out three weeks ago. "Design first" at the level of detail required for this type of testing to work (complete pseudocode) is pretty much never going to happen with a major software system. Perhaps with small critical algorithms, yes, or where the risk/reward of that level of design is warranted. Most software is not going to qualify.
  • by Anonymous Coward on Saturday June 03, 2006 @12:11AM (#15460153)
    Maybe he was against testing software after programming it, but FOR formal verification:

    From the 1970s, Dijkstra's chief interest was formal verification. The prevailing opinion at the time was that one should first write a program and then provide a mathematical proof of correctness. Dijkstra objected that the resulting proofs are long and cumbersome, and that the proof gives no insight as to how the program was developed. An alternative method is program derivation, to "develop proof and program hand in hand". One starts with a mathematical specification of what a program is supposed to do and applies mathematical transformations to the specification until it is turned into a program that can be executed. The resulting program is then known to be correct by construction. Much of Dijkstra's later work concerns ways to streamline mathematical argument. In a 2001 interview, he stated a desire for "elegance," whereby the correct approach would be to process thoughts mentally, rather than attempt to render them until they are complete. The analogy he made was to contrast the compositional approaches of Mozart and Beethoven.

    From WP [wikipedia.org]
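
    As a toy illustration of developing "proof and program hand in hand" (my example in Python, not Dijkstra's): pick the specification, choose the loop invariant first, and let the invariant dictate the loop body.

        def total(xs):
            """Spec: return the sum of xs.
            Invariant chosen up front: after i steps, s == sum(xs[:i]).
            The body s, i = s + xs[i], i + 1 is then forced, because it is
            exactly what re-establishes the invariant for i + 1."""
            s, i = 0, 0
            while i < len(xs):
                assert s == sum(xs[:i])  # the invariant, checked at runtime
                s, i = s + xs[i], i + 1
            assert s == sum(xs)          # invariant + exit condition = spec
            return s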
