Education

Journal: If We Taught English the Way We Teach Mathematics

Journal by Coryoth
Imagine that your only contact with "English" as a subject was through classes in school. Suppose that those classes, from elementary school right through to high school, amounted to nothing more than reading dictionaries, getting drilled in spelling and formal grammatical construction, and memorizing vast vocabulary lists -- you never read a novel, nor a poem; never had contact with anything beyond the pedantic complexity of English spelling and formal grammar, and precise definitions for an endless array of words. You would probably hate the subject.

You might come to wonder what the point of learning English was. In response perhaps the teachers and education system might decide that, to help make English relevant to students, they need to introduce more "Applied English". This means teaching English students with examples from "real life" (for varying degrees of "real") where English skills are important, like how to read a contract and locate the superfluous comma. Maybe (in an effort by the teachers to be "trendy") you'll get lessons on formal diary composition so you can better update your MySpace page. All of that, of course, will be taught using a formulaic cookbook approach based on templates, with no effort to consider underlying principles or the larger picture. Locating the superfluous comma will be a matter of systematically identifying subjects, objects, and verbs and grouping them into clauses until the extra comma has been caught. Your diary will be constructed from a formal template that leaves a few blanks for you to fill in. Perhaps you might also get a few tasks that are just the same old drills, just with a few mentions of "real world" things to make them "Applied": "Here is an advertisement for carpets. How many adjectives does it contain?".

In such a world it wouldn't be hard to imagine lots of people developing "English anxiety", and most people having a general underlying dislike for the subject. Many people would simply avoid reading books because of the bad associations with English class in school. With so few people taking a real interest in the subject, teachers who were truly passionate about English would become few and far between. The result, naturally, would be teachers who had little real interest in the subject simply following the drilling procedures outlined in the textbooks they were provided; the cycle would repeat again, with students even worse off this time.

And yet this is very much how mathematics tends to be taught in our schools today. There is a great focus on the minutiae of the subject, and almost no effort to help students grasp the bigger picture of why the subject might be interesting, and what it can say about us, and about the world. Mathematics has become hopelessly detail-oriented. There is more to mathematics than mindlessly learning formulas and recipes for solving problems. And just as in our imaginary example, the response to students' lack of interest in mathematics has only served to make the problem worse. The "applications" and examples of using the mathematics in the "real world" are hopelessly contrived at best, and completely artificial at worst, and still keep a laser-like focus on formulas and memorizing methods without ever understanding why they work.

Of course the opposite situation, with no focus on details, can be just as bad. Indeed, that is where English instruction finds itself today, with students never learning the spelling, formal grammar, and vocabulary needed to decently express the grand, big-picture ideas they are encouraged to explore. What is needed is a middle ground. Certainly being fluent in the basic skills of mathematics is necessary, just as having a solid grounding in spelling and grammar is necessary. What is lacking in mathematics instruction is any discussion of what mathematics is, and why mathematics works as well as it does.

The discovery and development of mathematics is one of the great achievements of mankind -- it provides the foundation upon which almost all of modern science and technology rests. This is because mathematics, as the art of abstraction, provides us with the ability to make simple statements that have incredibly broad application. For example, the reason that numbers and arithmetic are so unreasonably effective is that they describe a single simple property that every possible collection possesses, and a set of rules that are unchanged regardless of the specific nature of the collections involved. No matter what collection you consider, abstract or concrete, it has a number that describes its size; no matter what type of objects your collections are made up of, the results of arithmetic operations will always describe the resulting collection accurately. Thus the simple statement that 2 + 3 = 5 is a statement that describes the behaviour of every possible collection of 2 objects, and every possible collection of 3 objects. Algebra can be viewed the same way, except that instead of abstracting over collections we are abstracting over numbers: elementary algebra is the combination of objects that represent any possible number (as numbers represent any possible collection with the given quantity), and the set of arithmetic rules for which all numbers behave identically. Numbers let us speak about all possible collections, and algebra lets us speak about all possible numbers. Each layer of abstraction allows us to use an ever broader brush with which to paint our vision of the world.
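The ladder of abstraction described here can be made concrete with a small schematic example; each line restates the previous one at a broader level (this is my illustration, not part of any formal development):

```latex
\begin{align*}
2 + 3 &= 5 && \text{true of every collection of two things and every collection of three} \\
a + b &= b + a && \text{true of every pair of numbers, whatever collections they count}
\end{align*}
```

The first line abstracts over collections; the second abstracts over the numbers themselves, which is precisely the step from arithmetic to elementary algebra.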

If you climb up those layers of abstraction you can use that broad brush to paint beautiful pictures -- the vast scope of the language that mathematics gives you allows simple statements to draw together and connect the unruly diversity of the world. A good mathematical theorem can be like a succinct poem; but only if the reader has the context to see the rich connections that the theorem lays bare. Without the opportunity to step back and see the forest for the trees, to see the broad landscape that the abstract nature of mathematics allows us to address, it is rare for people to see the elegance of mathematical statements. By failing to address how mathematics works, how it speaks broadly about the world, and what it means, we hobble children's ability to appreciate mathematics -- how can they appreciate something when they never learn what it is? The formulas and manipulations children learn, while a necessary part of mathematics, are ultimately just the mechanics of the subject; equally important is why those mechanics are valuable, not just in terms of what they can do, but in terms of why they can do so much.

So why is it that this broader view is so rarely taught? There are, of course, many reasons, and it is not worth trying to discuss them all here. Instead I will point to one reason, for which clear remedies exist and immediate action could be taken. That reason is, simply, that far too many people who teach mathematics are unaware of this broader view themselves. It is unfortunately the case that it is only at the upper levels of education, such as university, that any broader conception of mathematics becomes apparent. Since it is rare for people going into elementary school teaching to take any university level mathematics, the vast majority of elementary teachers -- the math teachers for all our children in their early years -- have little real appreciation of mathematics. They teach the specific trees outlined in textbooks, with no real idea of the forest. A simple but effective measure that could be taken is to provide stronger incentives and encouragement for prospective elementary school teachers to take extra math; whether it takes the form of courses or math clubs doesn't matter; the aim is to get teachers more involved and better exposed to mathematics in general so that they can become familiar with the richer world beyond the specific formulas and algorithms. This exact approach was tried in Finland as part of their LUMA project starting in 1992. As a result the number of teachers graduating with higher-level mathematics training had increased dramatically by 1999. The results are also clear: Finland showed continued improvement in mathematics and science and finished first overall in the 2003 PISA survey of the reading, math, and science skills of 15-year-olds in OECD countries (finishing second, just behind Hong Kong, in the mathematics section). Finland has continued to do extremely well in other more recent (though less major) studies.

Whether you view mathematics as an important subject or not, it is hard to deny that, currently, it is being taught poorly in many countries around the world. With such scope for improvement, and clear examples such as Finland showing the way, isn't it time that we took at least some of the obvious steps toward improving the quality of mathematics education?

Programming

Journal: Better Programming in Java

Journal by Coryoth

Sometimes ideas don't get the attention that they deserve. The Java Modelling Language (JML) is just such an idea. JML is something every Java programmer should know about. That does not mean every project in Java should make use of JML, only that developers should be aware of it so they can use it wherever it does make sense. The fact is that JML provides so many benefits for so little extra work that it deserves far more use than it gets. If you program in Java it could make your life a lot easier.

What is JML?

In a sense JML is just a way to document your code better. JML provides a simple, intuitive syntax for giving concise descriptions of the behaviour of a Java module. JML uses annotations to document all the mundane things you would normally document. For example you might want to note which fields are allowed to ever be null, or whether a particular method requires one of its parameters to be greater than zero, or if a method is side-effect free. What JML really offers, however, is the tools to make use of these extra comments to save you time and effort later. JML simply asks you to do what you would normally do, and in return offers improved productivity when it comes to documenting, testing, and debugging your code.

So what exactly does JML look like? Here's a simple toy example that, while contrived, manages to demonstrate a lot of basic JML and will be useful later for demonstrating what tools that use JML can do.

public class Example {

    public int modifiable = 0;
    private /*@ spec_public @*/ int additional_value = 5;
    private /*@ spec_public non_null @*/ String current_substring;

    //@ invariant additional_value >= 5;
    //@ invariant current_substring.length() == modifiable;

    //@ requires ((String) o).length() > modifiable;
    //@ assignable current_substring;
    public void castExample(/*@ non_null @*/ Object o) {
        additional_value = o.hashCode()%5;
        current_substring = ((String) o).substring(0, modifiable);
    }

    //@ requires x >= -5;
    //@ ensures \result > 0;
    public /*@ pure @*/ int pureFunction(int x) {
        return x + additional_value;
    }

    //@ assignable modifiable;
    //@ signals (ExampleException e) additional_value <= 4;
    public void unreachable() throws ExampleException {
        if (additional_value > 4 && modifiable > 0) {
            modifiable++;
        }
        else if (modifiable == 0) {
            throw new ExampleException();
        }
        else {
            //@ unreachable;
        }
    }
}

Most of the JML annotations should be clear: we are simply documenting constraints on inputs (requires clauses) and outputs (ensures clauses), noting when fields or parameters cannot be null or when code is (we presume) unreachable, and what invariant properties we expect the objects to have. A little less immediately obvious are the assignable and signals clauses. An assignable clause declares what is allowed to be written/assigned to from within the method, and a signals clause declares conditions that must be met in the case of an exception being thrown. Finally, spec_public simply declares that these private fields have public visibility to the specification. You may note a few errors where the specification and the code don't agree - we'll come to those shortly.

This is, of course, only a small sample of what JML offers. JML provides a lot of useful syntactic sugar to make stating your intentions as easy and natural as possible. For a proper introduction to JML there are some presentations on basic JML and more advanced JML. Full documentation can be found in the JML Reference Manual.

What do I get out of it?

JML provides an easy-to-use language with which you can concisely state how you intend your code to behave. More importantly, it provides tools to make full use of those statements. For the small amount of extra effort of documenting your intentions with JML you get a lot of easy gains.

The JML tools

The JML distribution comes with a number of tools that use JML annotations. The four most useful are jmldoc, jmlc, jmlrac, and jmlunit. The first, jmldoc, performs the same function as Javadoc, producing almost identical API documentation. The advantage of jmldoc is that it understands JML annotations and includes suitable extra information in the API documents based on them. As long as you document requirements and guarantees for modules in JML you can be sure that the API documentation will always include those constraints. Furthermore, because JML annotations become integral to testing and debugging, it is far more likely that the annotations are kept up to date with the code, meaning that the documentation is also more likely to be up to date.

The tools jmlc and jmlrac provide a JML aware compiler and runtime environment. The advantage of compiling and running code with such tools is that JML annotations are converted into assertions which are checked at runtime, making it easier and faster to discover and locate bugs. For the small cost of writing a few annotations you can significantly speed up the testing and debugging cycle.
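To see concretely what this runtime checking amounts to, here is a hand-rolled sketch of pureFunction from the example above with its requires and ensures clauses turned into explicit checks. The class name, exception choices, and checking style are my own; jmlc's actual generated checks are considerably more elaborate:

```java
// Sketch of what a jmlc-compiled method effectively does: the JML
// clauses become checks at method entry and exit.
class RuntimeCheckSketch {
    private int additionalValue = 5;

    //@ requires x >= -5;
    //@ ensures \result > 0;
    public int pureFunction(int x) {
        if (x < -5)          // precondition check (requires clause)
            throw new IllegalArgumentException("precondition violated: x >= -5");
        int result = x + additionalValue;
        if (result <= 0)     // postcondition check (ensures clause)
            throw new IllegalStateException("postcondition violated: \\result > 0");
        return result;
    }
}
```

Note that calling pureFunction(-5) on a fresh object trips the postcondition check at runtime, which is exactly the boundary case the extended static checker flags later in this article.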

You can get similar results by liberally sprinkling assertions through your code, but JML offers several advantages over such a scheme:

  1. JML provides a lot of syntactic sugar like universal and existential quantifiers (\forall and \exists), and logical connectives like "implies" (==>) and "if and only if" (<==>).
  2. JML elegantly handles inheritance. With plain assertions, overriding a method means rewriting all the assertions by hand, while JML annotations are automatically inherited. If you want to add annotations to those inherited from the overridden method, JML provides the also keyword.
  3. As JML annotations are comments, and only converted into runtime assertions when you use the jmlc compiler, turning off runtime checking is as easy as compiling with a standard Java compiler. Using assertions will have you jumping through hoops to arrange anything similar.
  4. JML annotations can be automatically included in your JavaDoc documentation. If you use assertions you'll have to repeat the information elsewhere to get it included in the documentation.

Of course having assertions based on intended behaviour is no substitute for a decent unit testing framework. JML has an answer for this too, however. The jmlunit tool can read a Java module with JML annotations and automatically generate a JUnit test class based on the annotations. For example, running jmlunit over our example above creates two new files, Example_JML_Test.java and Example_JML_TestData.java. The first of those files is a JUnit test suite, and the second is used to generate test data. JML attempts to come up with a reasonable test data generation strategy, but leaves space for you to enter your own set of test data instead should you prefer.

Extended Static Checking

Extended static checking is where JML annotations begin to truly come into their own. ESC/Java2 is an extended static checking tool for Java that uses JML annotations to catch errors before you even compile, let alone run, the code. You can think of it as type checking on steroids - using the information in the annotations to warn you about a wide variety of potential errors. It runs about as fast as compilation, so there's no waiting for results.

Consider what happens when we run the example above through ESC/Java2. We get a series of warnings (some not revealed until others have been corrected):

  1. We are warned that current_substring is uninitialized and could be null
  2. We are warned that additional_value is modified in castExample, whose assignable clause permits only current_substring to be modified
  3. We are warned that the cast made in castExample may be invalid
  4. We are warned that the postcondition for pureFunction may be violated (it's possible that x = -5 while the invariant only ensures that additional_value is at least 5, thus 0 could be returned).
  5. We are warned that an exception may be thrown in unreachable without the conditions being met
  6. We are warned that the //@ unreachable portion of code is, in fact, reachable
  7. We are warned that the invariant current_substring.length() == modifiable can be violated.

Most of the errors listed above are either errors in code, or in our expectations of what the code will do. Some of these errors can be remedied by strengthening the constraints. For example we can add //@ requires o instanceof String to castExample to fix the warning, but at the cost that any code calling that method must meet that precondition (though the API documentation will state the precondition). Other errors can only be corrected in the code itself.
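As a sketch of what the corrected method might look like once the warnings are addressed: the strengthened precondition guards the cast, and the stray write to additional_value (which the assignable clause never permitted) is removed. The class name and the getter are mine, added for illustration:

```java
// Corrected version of castExample: the added "requires o instanceof String"
// makes the cast safe, and only current_substring (the one assignable
// field) is written to.
class ExampleFixed {
    public int modifiable = 0;
    private /*@ spec_public non_null @*/ String current_substring = "";

    //@ requires o instanceof String;
    //@ requires ((String) o).length() > modifiable;
    //@ assignable current_substring;
    public void castExample(/*@ non_null @*/ Object o) {
        current_substring = ((String) o).substring(0, modifiable);
    }

    // Accessor added purely so the behaviour can be observed in tests.
    public String getCurrentSubstring() { return current_substring; }
}
```

Since the JML annotations are ordinary comments, this compiles with any Java compiler; only jmlc or ESC/Java2 give them meaning.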

While the errors are necessarily simple in this toy example, hopefully this gives you the flavour of what ESC/Java2 can do for you - it has very powerful algorithms and scales to remarkably complex code, easily finding errors that would be hard to spot by inspection. For more examples of what ESC/Java2 can do for you (and it is well worth reading) try this introduction to ESC/Java2. ESC/Java2 offers a robust and powerful tool for finding subtle errors in your code earlier when they are easier to fix.

Conclusions

JML is simple to use: it has an expressive and intuitive syntax, and you can write the annotations at the same time as the code. JML is also flexible: you can use as much or as little in the way of annotations as you need. Perhaps you simply want to add some pre- and post-conditions, or just keep track of which methods can access or alter certain fields; perhaps you want to use the full power of the theorem provers in ESC/Java2. Most of all JML makes many aspects of development, from debugging and testing to documenting, easier and more efficient, all for very little extra effort. If you program in Java for a living then JML can almost certainly make your code better, and your life easier.

Programming

Journal: A Case for Formal Specification

Journal by Coryoth

Formal Specification helps build more robust, more maintainable software with fewer bugs and defects. It has a long history, but it is still a developing field. While it may not be suitable for all software projects, a case can be made that there are many projects not currently using formal specification that stand to benefit from it. As the methods and tools for formal specification develop it is increasingly becoming something that developers and software engineers should learn to use to their advantage.

What is Formal Specification?

Formal specification is simply a matter of being more explicit and specific in defining the requirements of software. At the simplest level this can take the form of Design by Contract, where functions and procedures specify pre- and post-conditions and loops and objects include a set of invariant properties. In the more rigorous case formal specification involves building an explicit mathematical definition of the requirements of the software. Using such a definition one can prove the correctness of the system, or simply prove theorems about properties of the system. An implementation can also be checked against such a formalised specification, verifying that the implemented code does indeed do precisely what the requirements claim. At the most rigorous level the initial formal requirement specification can be expanded, through (mathematically rigorous) refinement, to ever more specific and detailed specifications resulting eventually in executable code.

All of these different levels allow a significantly greater degree of analysis of the software, from improved static and runtime checking with contracts to more complex data flow analysis and proofs over more complete specifications. In the same way that static types allow more rigorous checking at compile time, catching a lot of simple errors, contracts and specifications allow even more analysis and checking, catching even more errors at the early stages of development when they are most easily and efficiently fixed.

A Simple Example

For good examples of formal specification at work I would suggest you try this Z case study or the case study for CASL in the CASL user manual. For those who want something more immediate I have a warning: To be short enough to present easily in this space the specifications are necessarily very simple and should be taken as a sample of the flavour of formal specifications rather than a serious example.

The specification for a simple stack, in SPARK, a version of Ada that adds formal specification semantics, would look something like the following:


package Stack
--# own State: Stack_Type;
--# initializes State;
is
        --# type Stack_Type is abstract

        --# function Not_Full(S: Stack_Type) return Boolean;
        --# function Not_Empty(S: Stack_Type) return Boolean;
        --# function Append(S: Stack_Type; X: Integer) return Stack_Type;

        procedure Push(X: in Integer);
        --# global in out State;
        --# derives State from State, X;
        --# pre Not_Full(State);
        --# post State = Append(State~, X);

        procedure Pop(X: out Integer);
        --# global in out State;
        --# derives State, X from State;
        --# pre Not_Empty(State);
        --# post State~ = Append(State, X);
end Stack;

The package body, containing the implementation, can then follow.

The own annotation simply declares the variable State to be a package global variable, and initializes means that the variable State must be initialized internally in this package. We then declare an abstract type (to be defined in the implementation) and some simple functions. The in and out keywords in the procedure declarations tag the parameters: in means that the parameter's current value will be read in the procedure, and out means the parameter will be written to in the procedure. The global keyword declares that the package global variable State will be used in the procedure (and both read from and written to). The derives keyword provides explicit declarations of which inputs will be used in determining the values of variables that are output or written to. The pre- and post-conditions should be largely self explanatory.

As you can see, for the most part we are simply making explicit exactly how we intend the procedures to operate. This specificity provides automated verification tools the information necessary to properly analyse and validate implemented code: prior to a compile step you can run a verification and flow analysis tool that catches many subtle errors.

A similar object, specified in an algebraic specification language like CASL might look something like this:


spec Stack[sort Elem] =
        sort
                stack
        ops
                empty: stack
                push: stack * Elem -> stack
                pop: stack ->? stack
                top: stack ->? Elem
        axioms
                not def pop(empty)
                not def top(empty)
                forall s : stack; e : Elem
                          pop(push(s,e)) = s
                          top(push(s,e)) = e
end

CASL offers syntax that can compact this considerably, but this longer version makes the workings of the specification more clear.

We declare a specification for a stack of generic elements, of sort stack, with operations empty (which is in a sense the constructor, creating an empty stack); push, which maps a stack and an element to a "new" stack; pop, a partial function (denoted by the question mark) which maps a stack to a "new" stack; and top, another partial function, which maps a stack to an element. We then define axioms: first, as pop and top are partial functions, we say they are not defined on empty stacks; secondly we declare that pop and top should behave as (partial) inverses of push for all stacks and elements.

This is sufficient information to completely specify stacks as a universal algebra and brings a great deal of powerful mathematical machinery to bear on the problem, much of which has been coded into a set of analysis tools for CASL.
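The axioms translate directly into executable code and checks. Here is a minimal Java sketch of an immutable stack satisfying the CASL specification above (the class name and list-based representation are my own choices):

```java
import java.util.ArrayList;
import java.util.List;

// An immutable stack mirroring the CASL spec: push/pop/top return new
// values rather than mutating, and the partial operations pop and top
// are "not defined" (throw) on the empty stack, as the axioms require.
final class AlgebraicStack<E> {
    private final List<E> elems;
    private AlgebraicStack(List<E> elems) { this.elems = elems; }

    public static <E> AlgebraicStack<E> empty() {      // op empty : stack
        return new AlgebraicStack<>(new ArrayList<>());
    }
    public AlgebraicStack<E> push(E e) {               // op push : stack * Elem -> stack
        List<E> next = new ArrayList<>(elems);
        next.add(e);
        return new AlgebraicStack<>(next);
    }
    public AlgebraicStack<E> pop() {                   // op pop : stack ->? stack
        if (elems.isEmpty()) throw new IllegalStateException("not def pop(empty)");
        return new AlgebraicStack<>(new ArrayList<>(elems.subList(0, elems.size() - 1)));
    }
    public E top() {                                   // op top : stack ->? Elem
        if (elems.isEmpty()) throw new IllegalStateException("not def top(empty)");
        return elems.get(elems.size() - 1);
    }
    @Override public boolean equals(Object o) {
        return o instanceof AlgebraicStack && elems.equals(((AlgebraicStack<?>) o).elems);
    }
    @Override public int hashCode() { return elems.hashCode(); }
}
```

The two equational axioms, pop(push(s,e)) = s and top(push(s,e)) = e, then hold for every stack s and element e, and can be checked directly as assertions.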

While these examples are very simple, they should give you the flavour of how formal specification can work. Both SPARK and CASL support structured specifications, allowing you to build up libraries of specifications, making them scalable to very large problems.

Why Use Formal Specification?

Why use static types? Why do any compile time checking? Why produce design documents for your software? Not every project is worth writing design documents for, and some projects are better off being developed quickly using a dynamically typed language and runtime checking. Not every project really needs formal specification, but there are a great many software projects that could benefit greatly from some level of formal specification - a great many more than make use of it at present.

I would suggest that it is probably unimportant whether your paint program, music player, word processor or desktop weather applet uses formal specification. What about your spreadsheet application? Bothering with formal specification for the GUI might be a waste of time. Bothering to do some specification for the various mathematical routines, ensuring their correctness, would potentially be worth the extra trouble. Formal specification doesn't need to be used for a whole project, only those parts of it that are sensitive to error. Likewise any network services could easily benefit from formal specification on the network facing portions of the code to significantly reduce the possibility of exploits: it is far easier to audit and verify code that has been properly specified. Security software, and implementations of cryptographic protocols, are far safer if formally specified: with cryptography the protocols are often rigorously checked, and many exploits relate to errors where the implementation fails to correctly follow the protocol. Finally, mission critical business software, where downtime can cost millions of dollars, could most assuredly benefit from the extra assurances that formal specification and verification can provide.

Nonetheless, barring a few industries, formal specification has seen little use in the past 25 years. Developers and software engineers offer many excuses and complaints as to why formal specification isn't suitable, isn't required, or is too hard. Many of these complaints are founded in a poor understanding of how formal specification works and how it can be used. Let's consider some of the common complaints:

But Formal Specification is Too Much Work...

Formal specification can be as much work as you choose to make it. You can do as little as adding contracts to critical functions or classes, or you can write the entire project from the top down by progressive refinement of (and verification against) a specification for which you have created formal proofs of all important properties. There is a sliding scale from a dynamically typed script with no documentation or comments, all the way up to a completely explicitly specified and proven system. You can choose the level of specificity and verification, and you can specify as much or as little of the system as you need. Formal specification covers everything from adding contracts to a couple of critical routines to complete specification of the entire project.

Formal specification isn't necessarily significantly more work than what you do now. Writing contracts is only a little more work than static typing: you are simply declaring formally what you intend the function to do, something you ought to have a clear idea of before you write the function anyway. Writing a basic specification is only a little more work than writing a design document: you are simply formalising the diagrams and comments into explicit statements. For a little extra work in the initial stages you gain considerably in debugging and testing. Extended static checking based on contracts can catch many simple errors that would otherwise go unnoticed, and debugging is significantly accelerated by the added information contracts supply about runtime errors. If you've gone to the trouble of writing a formal specification, you can statically verify properties of the system before it is even implemented. Once implemented, you can validate the implementation against the specification. When performing testing you can use the specification to help determine the best coverage with the least testing.
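As an illustration of just how lightweight a contract can be, here is a hand-checked Design-by-Contract-style routine in plain Java. The example and its checking style are my own, not taken from any particular tool; the point is that the contract is two lines of intent you already had in your head:

```java
class Contracts {
    // Integer square root with its contract stated up front:
    //   requires: n >= 0
    //   ensures:  result*result <= n < (result+1)*(result+1)
    // The if-checks are the (optional) runtime enforcement of that contract.
    public static int isqrt(int n) {
        if (n < 0)
            throw new IllegalArgumentException("requires n >= 0");
        int r = 0;
        while ((r + 1) * (r + 1) <= n) r++;   // largest r with r*r <= n
        if (!(r * r <= n && n < (r + 1) * (r + 1)))
            throw new IllegalStateException("postcondition violated");
        return r;
    }
}
```

A static checker given the same two clauses could verify the loop against them before the code ever runs; the runtime checks are just the fallback.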

Formal specification does work in practice, and the gains in efficiency in testing and debugging can often outweigh the potential extra overhead during design. Design by Contract, for instance, is often cited as saving time in the development process, and there are many real world success stories where systems were developed faster by using contracts. A 20 person team completed in 5 months what took a team of 100 people using standard C++ almost a year. For projects where robustness is critical, full formal specification has been used successfully. Companies like Praxis High Integrity Systems use formal methods successfully in the rail, aerospace, nuclear, finance and telecommunications industries (among others), and have sold their formal method tool-set to a variety of large companies. Reading through some of their recent work and case studies makes it clear that formal specification and verification can prove to be more efficient and faster to use.

But Formal Specification is Not Practical...

On the contrary, formal specification is used all the time in many industries, and there are various companies like Praxis High Integrity Systems, B-Core, Eiffel Software, Kind Software, and Escher Technologies who provide formal specification tools as a major part of their business.

Formal specification has proved to be practical in the real world. What it has lacked is mature tools to make development using formal specification faster and easier (try writing code without a good compiler; try doing formal specification without good static checking tools), developer awareness, and developers skilled in the languages and techniques of formal specification. It doesn't take much to learn - learning a specification language is not much harder than learning a programming language. Just as programming appears to be a difficult and daunting task to a person who doesn't know any programming languages, specification looks difficult and impractical to people who don't know specification languages. Take the time to learn and your perspective will change.

But the Project Requirements are Always Being Changed...

It is true that many projects face ever changing requirements. The question is not whether this renders formal specification useless (it doesn't) but how formal specification compares to informally specified software in such a situation.

First of all it should be noted that engaging in formal specification can help reduce the problem to begin with. By requiring you to rigorously specify at least some portions of the software, formal specification can assist in finding ambiguities, cases that were not covered, and other problems much earlier in the development cycle when changes are easiest to make. Those parts of the system, such as GUIs, that are often the target of more whimsical design or requirements changes are precisely the parts that benefit least from, and are least likely to use, formal specification. Changes to design or requirements for those parts of the system, then, are not likely to be more detrimental than for informally specified projects.

Secondly, formal specification is quite capable of dealing with change, and even offers benefits in the face of changing requirements. A change in requirements results in a change in specification, but because the specification is formally written the impact of the changes can be analysed. You can check that new requirements remain consistent with the rest of the system. You can also determine all the parts of the system that will be impacted by the change, and so avoid bugs introduced by it. Having a formal specification for the system when a change in requirements occurs makes it easier to see how the change will affect the system, and easier to make the changes consistently and completely. Formal specification can make changes in requirements easier to deal with, not harder.

But I Already do Unit Testing...

Unit testing is fantastic, and there's no reason to stop just because you can write proofs for some aspects of your code. Unit testing is not a replacement for formal specification, however. Testing only verifies a component for a very limited and specific range of cases; specifically, it covers the cases that the developer thought of. A great many bugs, however, arise from situations the developer didn't consider, and probably wouldn't have included in their tests. You can improve the situation with testing tools like fuzzers, but you're still only checking a sampling of the whole space. Formal specification, on the other hand, allows the construction of formal proofs that cover all possible cases. I would much rather specify and prove Pythagoras' theorem than simply assume it is true because it holds for some randomly chosen right-angled triangles. Likewise, while testing is valuable (and easy), when assurance matters it is no replacement for proof based on a specification.
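A small hypothetical Python example of the difference: the developer's hand-written tests all pass, yet the stated property fails for inputs the developer never thought of. Exhaustively checking a bounded domain here stands in for what a prover would establish symbolically over all inputs.

```python
def discount(price: int) -> int:
    """Specified property: 0 <= discount(price) <= price for every
    non-negative price (a discount can never exceed the price itself)."""
    amount = price // 10  # a 10% discount, rounded down
    if price > 0 and amount == 0:
        amount = 1 + price  # bug: the developer meant `amount = 1`
    return amount

# The cases the developer thought of -- all of them pass.
assert discount(100) == 10
assert discount(50) == 5
assert discount(0) == 0

# Checking the specified property over a whole bounded domain instead
# exposes every price from 1 to 9 as a violation of the specification.
violations = [p for p in range(1000) if not 0 <= discount(p) <= p]
```

The hand-picked tests give false confidence; the property, checked over the whole domain, does not.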

Does this mean you should give up unit testing? In no way, shape, or form. Just as formal proofs in mathematics are much harder than trying a few cases, formal proofs from specifications require significant work - you most likely can't prove everything. The best approach, then, is to prove those properties that are critical, and continue testing the system much as you normally would. With luck you'll have far fewer errors to debug, thanks to the extended static checking.
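A hypothetical sketch of that blend in Python: the critical property (here, that money is conserved across a transfer) is stated as a contract that a static checker or prover could discharge, while ordinary behaviour keeps its usual unit tests. Both the `transfer` function and its contract are invented for illustration.

```python
def transfer(accounts: dict, src: str, dst: str, amount: int) -> None:
    """Move `amount` from account `src` to account `dst` in place."""
    total_before = sum(accounts.values())
    assert amount > 0 and accounts[src] >= amount  # precondition

    accounts[src] -= amount
    accounts[dst] += amount

    # The critical, proof-worthy property: total money is conserved.
    assert sum(accounts.values()) == total_before

def test_simple_transfer():
    # An ordinary unit test, covering a case we thought of.
    accounts = {"alice": 100, "bob": 0}
    transfer(accounts, "alice", "bob", 40)
    assert accounts == {"alice": 60, "bob": 40}

test_simple_transfer()
```

The contract guards every call, in every test and in production; the unit test still documents and checks the expected concrete behaviour.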

But Formal Specification Still Can't Guarantee Perfect Software...

Correct, there are no guarantees. Complete proofs are often too complicated to perform, so theorem proving for specific properties is usually undertaken instead. Static checking can be enhanced significantly with suitable specifications, but not all errors can be caught. Your software is only going to be as stable and secure as the system it is running on. If you can't be perfect, why try at all? Because significant gains can be made, and the costs may be smaller than you think. Even if the whole system can't be proved, managing to prove key properties may well be all the assurance of correctness that is required for the application at hand. Even if you can't catch all the errors statically, reducing the number and severity of runtime errors can bring significant efficiency gains in the testing phase. Furthermore, the cost of errors found after release can be sufficiently high that any reduction is worthwhile. Finally, if you can't trust your compiler and operating system, then everything is compromised: at some level you have to trust something. We all currently trust a great deal of software that we expect to have flaws; reducing the sources of error to trusted components is still a significant gain.

But Formal Specification Is Only Viable for Trivial Examples...

Not even close to the truth. Formal specification and formal methods have been used for large projects involving hundreds of thousands of lines of code. Just look at the examples provided above: these were not small projects, and they were not trivial toy examples; they were large-scale, real-world software systems. Examples of formal specification tend to be small, simplistic cases because they are meant to be just that: easily comprehensible examples. In the same way that programming languages often use "Hello World" and quicksort as examples, formal specification languages use equally simple but demonstrative examples. Don't take this to mean that the languages and methods don't scale.

Conclusions

Formal specification isn't a silver bullet, and it isn't the right choice for every project. On the other hand, you can use as much specification and verification as is suitable for your project: you can apply formal specification only to the portions that are critical, and develop the rest normally. More importantly, the techniques, languages and tools for formal specification continue to improve. The more powerful the methods and the tools, the more projects for which formal specification becomes a viable option. In an increasingly network-oriented computing world, where security and software assurance matter more and more, the place for formal specification is growing. If you are a software engineer, it makes sense to be aware of the options available to you.

The Almighty Buck

Journal: Elephants in the Living Room

Journal by Coryoth

The United States is facing the possibility of a severe economic correction. Yet most of the causes of such a correction are, for the most part, being completely ignored in preference to partisan bickering. It is the proverbial elephant in the living room - except it is not just one, but several elephants that everyone is doing their best to ignore. While the likelihood that any of these issues could result in disaster is low, these are issues worth taking the time to discuss.

The first elephant is debt. There are three kinds of debt that are of concern: household debt, the budget deficit, and the current account (or trade) deficit. Of those three, only the budget deficit gets any real attention, and even then it is often brushed aside.

The current US budget deficit stands at $413,000,000,000 in the 2004 fiscal year. This is a record deficit. To put it in perspective, that's 3.6% of the Gross Domestic Product, almost a quarter of total federal spending, or 80% of the total receipts from Federal income taxes. Clearly such a situation is not good. But there are deeper implications for such a large fiscal gap. Some economists argue that such large deficits are detrimental to private investment. The deficit effectively soaks up a large portion of the US's savings, which would otherwise be invested back into the private sector. There is historical precedent for this - investment fell to record lows during the previous record budget deficits under President Reagan. But there is something else to consider as well. This debt is managed by selling Government bonds and securities. These securities can be, and often are, sold overseas.

Currently, the two largest foreign owners of US Government debt are Japan and China. Japan had an economic boom, but collapsed in the 90s and has been struggling since. The Japanese government is very conservative, and has been remarkably slow to introduce reforms that are seen by many as necessary to stimulate the Japanese economy. Recently, however, the Japanese Prime Minister, Junichiro Koizumi, has been fighting to push through the necessary reforms, with some success. In the last couple of years the Japanese economy has shown signs of improvement. Meanwhile, the Chinese economy is strong, and only getting stronger. Why is this important? Because as both of these economies gear up there will be far less interest in investing overseas, and much greater emphasis on investing in the growing local economy. This would essentially amount to a mass sell-off of US debt in the form of government bonds, something that would almost certainly fuel higher inflation in the US, putting pressure on the Federal Reserve to take action by increasing interest rates. That is to say, it would almost certainly lead to a large-scale recession in the US.

The US current account deficit stood at $530,668,000,000 in 2003 and $313,341,000,000 for just the first two quarters of the 2004 financial year. That's a record figure for each of the first two quarters of 2004. This could represent a country living beyond its means, or it could represent an economic power attracting large amounts of foreign investment. Certainly as long as the US remains a significant economic powerhouse it can sustain high current account deficits. That is, as long as the US remains an attractive place for foreigners to invest, an imbalance is of limited concern. Whether such a high current account deficit is sustainable is a complicated issue affected by many factors. A reasonably coherent explanation of some of those factors can be found in a 2002 paper by Catherine Mann. In rough précis, the current account deficit is balanced by private savings, but widened by budget deficits, yet at the same time is driven by the attractiveness of US investments to foreigners. Should the current account grow too large, the perception of the ability of the US to repay the investment may decrease, causing an economic correction. In the conclusion of her paper Mann indicates that a change in trajectory (from growing to shrinking) is inevitable, and the concern is whether this will occur through slow structural and policy changes, or whether it will be caused by a sharp correction in overseas investment. Two of the major requirements she lists for structural change are greater fiscal discipline (resulting in budget surpluses) and increased personal savings. Since the paper was published in 2002 we have seen massive increases in the budget deficit, as just discussed. At the same time, the prospects for increased personal savings are very limited (which I will touch on in a moment). Add to that the beginnings of resurgent Asian economies attracting investment, and the risk of a sharp correction is certainly much greater. 
The consequences of a sudden shift in global investment away from the US would be extremely rapid depreciation of the US dollar, most certainly resulting in considerable economic hardship in the US.

US household debt stood at $8,454,400,000,000 in 2002, and has grown since. A quick look at the associated chart shows just how serious the upward trend in household debt is. This debt is driven both by mortgages and by credit card debt in an increasingly consumerist society. Do you recall the "Shop for America" campaigns following September 11? It is exactly that kind of thinking that helps drive the consumer society even further into debt. In 2004 household debt increased four times faster than the economy, and average credit card debt for households with at least one credit card increased 300%, to over $9000. Of course this is not necessarily crippling, as a recent speech by Alan Greenspan points out. It is, however, trending in the wrong direction, and getting worse fast. It is certainly far from the increase in household and personal savings required to help curb the current account deficit.

The concern about debt for the US is, in the end, quite simple. US debt, in all three forms, is huge, and it is only getting larger. All three forms of debt, while different, are connected in that both household debt and government debt have a significant influence on the sustainability of such a large (and growing) current account deficit. At the same time, either the current account deficit or the budget deficit, if they continue to grow, could easily trigger a rapid and severe depreciation of the US dollar. Which brings us to our next elephant.

The second elephant is the US Dollar. At the time of writing, the US Dollar is running at about 0.77 Euros to the Dollar. One could claim that this is simply due to a strong Euro, but in reality most world currencies, including the Japanese yen and the British pound, are trading strongly against the US Dollar. A quick look at the historical record of the US Dollar against the Euro, the yen, and the pound shows a distinct downward trend over the last two years in all cases. Of course this need not be seen as a bad thing; it is certainly beneficial to US exports, an increase in which would be highly beneficial for the current account deficit. There are potential issues, however. The US dollar has, to some extent, remained as strong as it has due to its position as the de facto global currency, in which most major commodities, including oil and gold, are traded. The Federal Reserve estimated that in 2003 around $400 billion of the $680 billion US dollars in circulation were held outside the United States. This high demand for US Dollars overseas is an effective prop for the US Dollar, meaning it is unlikely to ever fall too low too quickly. This prop could, however, disappear. The possibility of the Euro becoming a new alternative global currency is increasing.

In 2001 Alan Greenspan gave a cogent speech on the possibilities of the Euro as a global currency. The first salient point is the fact that a global currency tends toward a natural monopoly - as use increases, it becomes an increasingly attractive currency to hold, while the decreasing liquidity of competing currencies makes them less and less desirable as a global currency. Transitions can of course be slow - for example, the transition from the Pound Sterling to the US Dollar between the world wars - but once one begins it becomes inevitable. Greenspan then notes that, at least on the surface, the Euro possesses all the traits required of a global currency (a stable currency based in a strong economy with a well developed financial system and deep, liquid financial markets). Greenspan continues by discussing reasons why the US Dollar remained so dominant after the introduction of the Euro. He cites the strength of the US Dollar against the Euro (the Euro depreciated against the US Dollar in its first two years after introduction), the strength of the US economy in comparison to the EU, and the Euro's apparent inability to expand into foreign equity markets. As already noted, the strength of the US dollar against the Euro, and in fact most world currencies, has been in decline. On the other hand, while the relative strength of the EU economy to that of the US has increased during the US recession, the US economy is beginning to show signs of increasing growth. Lastly, however, the Euro is now beginning to extend into foreign equity markets, most notably oil. An increasing amount of Middle Eastern oil is being traded in Euros, and while the US Dollar remains dominant, both Iran and Saudi Arabia have flirted with the idea of completely converting to Euros.
Equally significantly, Russia, the second largest oil producer in the world, has expressed serious interest in trading its oil in Euros instead of US Dollars, though it has not yet made such a conversion en masse. For now the US dollar comfortably remains the dominant player, but there are enough signs for concern, and as Alan Greenspan pointed out, a transition will have a tipping point, after which it will be carried by its own momentum.

The threat to the US economy is that this transition, if it occurs, may not be slow. Because the strength of the US dollar is currently supported in part by its position as a global currency, a shift toward the Euro could trigger a further decline in a weakening dollar, initiating a panic-driven feedback cycle resulting in an almost complete collapse of the US dollar from its current point of strength. Such a collapse would only be compounded by the fact that the US currently repays its debt in US Dollars - the only country in the world that is granted the privilege of paying off debt in its home currency. Should the US Dollar destabilise significantly (or the Euro establish itself as the global currency), many debt holders could choose to request payment in Euros. This would force the US to purchase Euros (at a markup) to service its debt, both reinforcing the Euro as the global currency and inflating US debt. Such a situation would only serve to exacerbate the US Dollar's collapse.

Of course ideally such a collapse would be halted by the closing current account deficit as the price of imports skyrockets and US exports become ever more attractive. The US populace, however, is heavily addicted to consumption, more than willing to spend its way into increasing household debt (as already noted). Worse still, US exports have been on the decline despite the recent weakness of the US dollar. Increasingly, the US is importing goods from China, and services from India. Which leads us to another elephant.

The third elephant is the rise of India and China. Both the Indian and Chinese economies are growing very rapidly. These are the two most populous nations on earth, so they should not be taken lightly. Both countries are filled with young and talented people eager to make the most of their education and climb the global economic ladder. In the case of India this has taken the form of, for example, outsourced IT jobs from the US. Increasingly, US companies are importing their IT services from India, where there is a vast pool of highly capable people who do not face the high cost of living that their counterparts in the US do. In many ways this can simply be seen as globalization and free trade beginning to more evenly distribute the wealth of the world, and it is not a threat as long as the US continues to innovate and create new industries for itself. There is a question as to whether this is actually occurring, however.

The most common statistic for measuring economic strength, Gross Domestic Product (GDP), shows the US economy to be in good shape. Current US GDP is between 10 and 11 trillion US dollars, depending on whether you measure by Purchasing Power Parity (PPP) or Current Exchange Rate (CER). GDP does have problems, the most important to consider here being its inclusion of work that produces no net gain, and its lack of consideration for negative externalities. It is worth considering how much work in the US is included in the GDP that has very minimal net gain. For those familiar with the SCO Group versus IBM court case, it's worth noting that tens of millions of dollars have been spent, all counted toward GDP, and yet most observers would point out that, in comparison to the money spent, the gains are negligible if they exist at all. This is common to a surprisingly large number of other court cases in an increasingly litigious country. All of it counts toward GDP; much of it returns little if any gain. A more debatable issue is the current cost of management, particularly in larger corporations that have exceptionally high salaries for the many tiers of upper management: is the net gain provided by management actually comparable to the money being spent? It is certain that management provides significant value; what is unclear is exactly how much value, and how that compares to the salaries involved. As stated, this is a point for debate. The fear is that if, in fact, US GDP is inflated by such issues, the economy could find itself hollow when it comes time to compete in earnest.

Currently India is exporting low and mid level IT services to the US, but given current Indian growth, it is only a matter of time before India is in a position to cease selling its services piecemeal, and instead sell complete packages. At present, while a certain amount of work is being outsourced to India as its economy grows, management and the corporations have remained in the US. Given the growing number of capable and experienced IT workers in India, however, it is inevitable that new companies will arise in India, taking advantage of the considerably lower cost of living to compete head to head with US corporations for complete solutions. Again we have to question the ability of the US to forge ahead into new industries. It could be argued that ambition for education, science, and research is lower among the current youth of the US than among their would-be competitors in India, China, Korea and Japan. While there is involvement from the US in the very ambitious fusion reactor project, it will most likely be sited in Europe, and if not there, then in Japan. On the other hand, it was a US based company, Scaled Composites, that recently won the Ansari X-Prize and looks to be at the forefront of commercial spaceflight. The future remains uncertain, but this is certainly an issue for concern.

Finally, the rapid growth in India and China is having other visible effects. China in particular, with its rapidly growing manufacturing sector, has had an equally rapid increase in its demand for oil, adding a new element to the global oil market. While China is currently trying to slow its economic growth to more sustainable levels, the growth in its hunger for oil is not expected to be similarly dampened. Oil prices are already being driven ever higher, and unlike the crises of the 1970s, this is not due to disruptions in supply, but simply due to demand. That this is detrimental to the world's largest oil user, the US, is obvious. Of more concern is that with increasing Chinese oil needs stretching current oil production capacity to its limits, a disruption in supply now could be catastrophic. This issue is discussed in detail in an article by Paul Krugman. In the wake of ever higher oil prices, alternative energy sources may prove to be one of the most significant new industries in the coming decade. With this in mind, the question must be asked: which countries are going to be at the leading edge of research in alternative energy sources?

There remains one significant issue to discuss: the domino effect. The three major issues discussed so far are all interconnected. Increasing strength in the Chinese and Indian economies, providing goods and services to consumption-addicted Americans willing to go into personal debt to maintain their standard of living, could easily lead to a widening of the US current account deficit. If the current account deficit grows too large it could easily trigger significant depreciation of the US Dollar. Should the US Dollar start to look too volatile, or become a significant financial risk to hold (due to depreciation), global markets could easily start to embrace the far more stable Euro, potentially sending the US Dollar into free fall. During a period of such extreme uncertainty in the US Dollar, foreign investors may well seek to diversify their investments away from the US toward rapidly growing countries such as India or China. That is to say, any one of these issues could trigger the others, to a devastating end.

Much of what has been discussed is speculative. Far from being probable, the potentially disastrous outcomes outlined are somewhat unlikely. Furthermore, even if the worst did come to pass, the US would rebound, and may well come back stronger than ever. The reason to consider these issues, despite the low probabilities and the eventual recovery, is that the effects could be so pernicious during the time required for such an economic correction to shake itself out. The severity of the possible outcome demands our attention. These are issues that US politicians should be discussing instead of quoting the standard divisive talking points about the usual false dichotomies. You won't hear these issues raised, however, because politicians can't offer a soundbite about spending some money, or making a law, that will fix them. Ignoring the problems won't make them go away, and just because there are no easy solutions doesn't mean that nothing can be done. What is certain is that nothing will be done if people aren't aware of the problems.
