
Strategies for Test Databases?

youngcfan asks: "I've been tasked with finding strategies for a test database that can be used effectively by both software developers and the QA team. We're a J2EE shop with most of the interesting pieces of the application interacting heavily with the database -- so we need to test it. We're ramping up on JUnit, but are looking for ways to test the database-driven pieces of code. Since QA needs the same database for functional testing separate from developers' unit tests, DBUnit doesn't seem to suffice. We also have the challenge of working on multiple releases at the same time, which only complicates how and when to add new data to the test database in a way that's useful and valid for everyone. We're looking for strategies for using a test database in a way that meets both the QA's and the developers' needs, works for multiple releases, and isn't a heavy burden to maintain given that the schema and code can change anytime before any of the multiple upcoming releases. Any suggestions?"
This discussion has been archived. No new comments can be posted.

  • by Dr. Hok ( 702268 ) on Wednesday September 20, 2006 @07:26AM (#16144802)
    Why do you insist on using one DB for both developers and QA? They have different test scopes, so they should use different DBs. It's like using an axe to both chop wood and cut fingernails.

    You'll find it much easier to create dedicated DBs for each test scope.

    • Re: (Score:2, Funny)

      by Dr. Hok ( 702268 )
      It's like using an axe to both chop wood and cut fingernails.

      Simultaneously, I should add.

      • Re: (Score:3, Funny)

        by kfg ( 145172 ) *
        Ok, maybe it didn't turn out to be such a hot idea, but as a side effect it is easier for me to compute in base 8 now.

        KFG
    • by djbckr ( 673156 ) on Wednesday September 20, 2006 @09:31AM (#16145409)

      As the parent alludes to, the only way to do it The Right Way (tm) is to have a Development environment, a QA environment, and a Production system.

      Each of these systems should be using the same architecture when it comes to hardware and configuration.

      The Development system is always in a state of flux, as its name implies.

      The QA system should *at least* approximate (if not be identical to) the data and load of the production system, and it should be treated like a production system that QA tries to break.

      It is only in this fashion that you will be able to test and make sure your system will work as expected. Leave nothing to chance. Expensive, yes. But it's less expensive than a downed production system, and definitely less expensive than building a complete system and realising it doesn't perform as expected.

      • You are right as far as you go. With larger companies, you need multiple copies of each. One shop I was in had 1 Prod environment, 5 QA environments, and a separate test environment on each developer's PC.

        Another had 1 P, 2 QA and 2 Test.

        My current company has 2 Prod (one is a daily clone for reporting from ), 1 QA and 2 test. And QA and test may have duplicate tables at any time of the normal tables, due to special testing.

        Now if you will excuse me, The space manager just mounted that new pack for me, I
      • by gosand ( 234100 )
        As the parent alludes to, the only way to do it The Right Way (tm) is to have a Development environment, a QA environment, and a Production system.

        Each of these systems should be using the same architecture when it comes to hardware and configuration.

        The Development system is always in a state of flux, as its name implies.

        The QA system should *at least* approximate (if not be identical to) the data and load of the production system, and it should be treated like a production system that QA tries to break.

        We

      • Re: (Score:3, Interesting)

        Oh my god ....

        You are nearly as wrong as your parent!

        1st: the QA system very likely won't be the current production system, but the production system as it will run in the future.
        2nd: DEFINITELY the development system is the same as the QA system. And no: it is not in flux!!! It is reset after each developer test, or developer access to it, either by erasing it and using a backup or by "roll back" of all transactions (that likely is not possible).

        How the hell should a developer figure if his actual "attempt of a new worki
        • And no: it is not in flux!!! It is reset after each developer test, or developer access to it, either by erasing it and using a backup or by "roll back" of all transactions (that likely is not possible).

          How the hell do you get anything done? That would spell disaster where I work - 1000 developers split into about 80 teams, more or less, communicating via service interfaces, all using separate databases. Devo is Devo so you can trash it and nobody cares - this makes you design stuff that is resilient in

      • One step further:
        2 QA Systems. One for testing the next release, one set up identically to the production server so QA can reproduce problems found with the production software.

        The company I'm at now is extremely small. I constitute 50% of the software engineers and 33% of the entire IT department. We have a production setup, 2 developer setups, and a QA server that we've configured to rapidly switch between our production software and release software. The transition takes about 15 minutes.
    • I had one client who had a bunch of customer records compromised when they sent out some data to a development firm for "testing purposes". There are several products out there that will take actual records, scramble them and spit out a "test" database. I'd highly recommend doing that, no matter what other methodology you use.

      2 cents,

      QueenB
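The scrambling step above can be sketched in Java. This is a minimal, hypothetical example (the Scrubber class and its field handling are invented, not taken from any of the products mentioned): a salted one-way hash replaces each sensitive value, and because the mapping is deterministic, the same real value always becomes the same fake value, so relationships in the scrubbed data stay internally consistent.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Hypothetical sketch: deterministically scramble identifying fields so a
// scrubbed test database stays internally consistent without exposing
// real customer data.
public class Scrubber {
    // Hash the value with a secret salt and keep a short hex prefix.
    static String mask(String value, String salt) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest((salt + value).getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (int i = 0; i < 4; i++) {
                hex.append(String.format("%02x", digest[i]));
            }
            return hex.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Preserve the domain so email-format validation still passes.
    static String maskEmail(String email, String salt) {
        int at = email.indexOf('@');
        return "user" + mask(email.substring(0, at), salt) + email.substring(at);
    }
}
```

A real scrambling tool would walk every sensitive column in the export, and the salt would never leave the production side.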
  • Test databases (Score:2, Informative)

    by Anonymous Coward

    Oracle, Sybase and MySQL can all be used as test databases.

    Perhaps you really want to know how to test code that uses databases, which is a different question.

    There are many refactorings that can be done to reduce your dependency on a particular database install... but that's a rather large topic. I'm available for consultancy; post here and I can get in touch...

    Some things you might like to consider:

    • Per-developer databases (obviously using automated schema building/destruction)
    • Dependency-injection of
  • by SMQ ( 241278 ) on Wednesday September 20, 2006 @07:42AM (#16144858)

    Test data sucks: there are too many real-world situations the developers fail to think of.

    We're a pretty small shop, but here's what we do: The production server backup is loaded to the test server daily. Every developer maintains a set of scripts which make any needed database structure modifications after the backup has loaded. All development and QA testing is done against this test database. Where the production data isn't stable enough for unit testing we force-feed a few specific rows (as few as possible). This gives us fresh, real-world data for development and testing, and when an application rolls out, the exact same set of modification scripts are usually run on the production server (i.e. the modification scripts have been indirectly but repeatedly tested themselves).
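Those per-developer modification scripts become manageable if you track which ones the freshly loaded backup has already seen and replay only the rest, in order. A rough sketch of the idea, assuming each script has a unique id recorded somewhere such as a schema_version table (the MigrationRunner class and the script ids are invented; applying the SQL would go through JDBC in a real setup):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: each schema change is a named script; after the
// production backup is reloaded, only the scripts that backup has not yet
// seen are replayed, in registration order.
public class MigrationRunner {
    private final Map<String, String> scripts = new LinkedHashMap<>(); // id -> SQL

    void register(String id, String sql) {
        scripts.put(id, sql);
    }

    // Return the ids of scripts that still need to run, preserving order.
    List<String> pending(Set<String> alreadyApplied) {
        List<String> todo = new ArrayList<>();
        for (String id : scripts.keySet()) {
            if (!alreadyApplied.contains(id)) {
                todo.add(id);
            }
        }
        return todo;
    }
}
```

Because the same ordered list is eventually replayed against production, every daily refresh doubles as a rehearsal of the release-day migration.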

    • Re: (Score:1, Interesting)

      by Anonymous Coward
      A problem we had in a shop I used to work for was that the production dataset was huge. There were some plans to try and take subsets of data... but the schema was quite large and complex - making it a pain to keep integrity (which is crucial for performing tests against). In the end, we ended up doing a big refresh of test every few months. This was for user acceptance testing. The developers' box got updated even less often - and as you can imagine - this caused huge problems (developers were expected to up
    • Re: (Score:3, Insightful)

      In some cases a developer can't or shouldn't have access to production data. Our production data contains confidential client information -- including information about our own employees. There are federal laws in place regarding access to it, and our developers and QA people must not have unfettered access to it, and it should never be placed on a system that is not access-restricted with the utmost diligence and paranoia.

      We do take a QA snapshot of the production server about once a week. Its confide

      • Additionally, live production data had better be good data, but for testing and QA you'll want some bad data; how well the bad data is handled is important for a robust system.
        • Good point. We do have test clients and such for regression testing that get merged into the QA database during the weekly munge.
        • No. You should never have bad data in the database to start with. If you manually put bad data into a DB, you are of course going to be running into problems that should never exist.

          If you have to test code for handling bad data in the DB, then you are not testing the code that should be properly validating the data *before* it is inserted into the DB.

          • Good point SpaceLifeForm, does your planet accept Earthling immigrants? On my world, just before I send in the bug report to the language developers about the bug I've been ripping my hair out over for two days, the thought occurs to me to double-check the test data, and sure enough the program is doing exactly what it should be with the data it's getting.
          • No. You should never have bad data in the database to start with. If you manually put bad data into a DB, you are of course going to be running into problems that should never exist.

            Yes, because that never happens.
            /furiously rolls eyes

    • Unit tests should be as minimal as possible. E.g., you might have a single record loaded to test the basic CRUD operations for a class.

      Why? You can set up your JUnit failure method so it takes a snapshot of the database at the point of failure and mails it to you (as an XML attachment). This means you can run smoke tests nightly -- try doing that with a "complete" database that's been scribbled on by other tests and developers since the problem occurred.
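That failure hook can be sketched without the mail plumbing: wrap the test body, and the instant an assertion fails, capture the database state (supplied here as a string by an injected snapshot function; a real version would export the tables as XML and mail the attachment) before letting the failure propagate. Everything here is hypothetical scaffolding, not actual JUnit or DBUnit API:

```java
import java.util.function.Supplier;

// Hypothetical sketch: capture the database state at the moment of failure,
// then rethrow so the test still fails normally.
public class SnapshotOnFailure {
    static String lastSnapshot; // what would be mailed to the developer

    static void run(Runnable testBody, Supplier<String> dbSnapshot) {
        try {
            testBody.run();
        } catch (AssertionError failed) {
            lastSnapshot = dbSnapshot.get(); // state at the point of failure
            throw failed;
        }
    }
}
```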
    • Re: (Score:3, Insightful)

      by CastrTroy ( 595695 )
      I agree with this completely. For any sufficiently sized application, there are too many permutations of data for the developers to think up and make on their own. The only thing you're missing out on, which you probably do, is to create a set of scripts to clear or change any data that the devs or QA team shouldn't see. Confidentiality is an issue, but you should be able to identify the data and delete or change it accordingly. Also, devs probably have access to production data in some form or another an
      • by cavac ( 640390 )
        Actually, for developing a reliable system, at least one developer MUST use real data. Unless you're ready to send him to the production database after releasing the software to fix some unexpected problems.

        But if you can't trust your techs, devs and sysadmins to handle sensitive data, then how are you expecting them to fix a problem on a production system?

        While I do most developing and testing on test data (to simplify backup, restore and bugtracking), I *always* use a backup of the real database for fina
    • Test data sucks: there are too many real-world situations the developers fail to think of.

      It's not the developer's business to define test data, but the business of either the business analyst, or the test engineer in cooperation with the business analyst.

      Sure, lots of business cases are so simple a developer could define the test case. But, if you have a contract with a customer to develop something ... who should define whether you get paid, whether you did it right? You or the customer?

      angel'o'sphere
  • If you have to work on UNIT tests (or single-developer tests) there are a lot of tools, but if you are talking about SIZING, TUNING and so on, you cannot reach your goals without using complex tools and working with multiple RDBMSs. In the last 4 years I have worked on and designed testing processes on J2EE, and without "high-level" tools we cannot understand whether the probs are in the Java code, the IO SW&HW subsystems, the RDBMS, or concurrency on classes or table rows. You have to develop testing code for specific goals. We
  • You'd have 3 servers as close in configuration as possible: one houses your production environment DB, one houses your test environment, and one the QA environment. You can get away with QA and TEST on the same server, but you REALLY don't want a developer to crash the test box or bog it down with a bad query when they're doing QA.
    • by beacher ( 82033 )
      That's what we do in our shop although we do a few tweaks as well. On the weekends, we nuke our test and dev environments and then copy production back to test and dev. We then apply all outstanding data & ddl logs to test and dev in order to get the database back to where it should be.

      Developers have DBA rights on Dev and are locked out of our Prod instances. Developers script all changes so that their work can be reapplied with the same results on every instance. We also log object changes so we c
    • Re: (Score:3, Interesting)

      You'd have 3 servers as close in configuration as possible: one houses your production environment DB, one houses your test environment, and one the QA environment. You can get away with QA and TEST on the same server, but you REALLY don't want a developer to crash the test box or bog it down with a bad query when they're doing QA.

      Seconded. I'm on a project right now where we (the programmers) have finally gotten management to allocate time for us so we can get going on doing more unit testing, integration tes
  • Doesn't Suffice? (Score:4, Insightful)

    by Aladrin ( 926209 ) on Wednesday September 20, 2006 @08:09AM (#16144966)
    DBUnit doesn't suffice? What's it missing? Its only function is to place the database into a known state before the test, to make sure the data is correct before you test with it. How can that not do what you want?

    It also occurs to me that if you can't even decide what data is 'useful and valid to everyone' then your test data is nothing like the live data you will have. Here's my suggestion: If it seems like it'll be even slightly relevant to anyone, use it. Otherwise you aren't testing everything.

    The constantly changing schema is puzzling also. Did you not plan your database beforehand? I'm guessing this is an XP shop then, eh? XP doesn't stand for 'no planning'. I can understand changes to the schema in the early stages of programming, but if you're getting close to 'multiple releases' then the schema should be pretty solid by now, and the little changes you need to make to DBUnit shouldn't be a big bother.
    • Re: (Score:3, Insightful)

      The constantly changing schema is puzzling also. Did you not plan your database beforehand? I'm guessing this is an XP shop then, eh? XP doesn't stand for 'no planning'. I can understand changes to the schema in the early stages of programming, but if you're getting close to 'multiple releases' then the schema should be pretty solid by now, and the little changes you need to make to DBUnit shouldn't be a big bother.

      In theory I'd agree with you, but in practice I've rarely worked on a project of significan
      • Re: (Score:3, Insightful)

        by Aladrin ( 926209 )
        On the other hand, if you've got to make changes to the schema, you really should not be upset about having to make changes to the tests that go with it... It's all part and parcel. I don't foresee a magic version of DBUnit that handles all that for you.
    • As a DBA in a similar situation to what the OP seems to be in, I can sympathize with him. The problem on my project, however, isn't a lack of planning. The problem is that the customer can request requirements changes, and in order to ensure the software can do what the customer needs it to do, schema changes can be necessary.

      As to the question of a way to test the DB, the use of a test system, or possibly even multiple test schemas is the correct way to accomplish this. If it's an issue with constructed
  • I'm confused by your statement. A single database server (Oracle, PostgreSQL, whatever) can hold many databases. You should definitely have two separate databases for each release (for developers and testing), and arguably a database for each developer for unit tests. It's a one-line change in your config files to switch from one database to another, hardly an onerous burden.
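That one-line switch usually lives in a properties file, one JDBC URL per environment. A small sketch (the db.&lt;env&gt;.url key scheme is made up for illustration):

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;

// Pick the JDBC URL for a named environment out of properties text.
// Changing which database a developer points at is a one-line edit.
public class DbConfig {
    static String jdbcUrl(String propsText, String env) {
        Properties p = new Properties();
        try {
            p.load(new StringReader(propsText));
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen for a StringReader
        }
        return p.getProperty("db." + env + ".url");
    }
}
```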

    I guess some toys would only be able to handle a single database, but I can't imagine why anyone would use one when there are so many
  • Do you have a DBA? (Score:3, Insightful)

    by duffbeer703 ( 177751 ) * on Wednesday September 20, 2006 @08:43AM (#16145126)
    It sounds like you need someone intimately familiar with the database who is not a developer, but can do things like create scripts to build your schema and populate it with useful test data... this person is usually called a DBA.

    DBAs are usually viewed by devs as complete assholes, because they scream and holler at devs who make gratuitous changes to schemas and stored procedures. But a good DBA will make your database issues go away.
  • Consider getting storage that can provide point-in-time copies of data (Snapshots). Use Snapshots of your production database for development, with different Snapshots for different releases. If you don't like the changes, make a new Snapshot and rework the tables. You can also use Snapshots for upgrade testing.

    You should use caution here. Moving your production data is never trivial. Snapshots are not free. Development machines can load the point-in-time copy to the point where it could impact the pr
  • I make a copy of the production database with *real data*. I augment the copy of the production DB with the new schema. I then merge the schema back into the production database when I am happy with the testing.
  • In your case, it sounds like a traditional test environment of separate machines and multiple instances is not the way to go. I would suggest using a virtualization server like VMware or MS Virtual Server or other related software. What this allows you to do is get one environment set up and established, and then make an image of it. Then you can mount this image into a virtual environment where everyone can bang away at it, and no matter how badly they destroy the database, all you have to do is mount the
  • Ok, with a structured approach you can make testing a walk in the park. First, listen to your costumer: what are his needs? What does he want to do? Define input and output, and of course what information needs to be stored, and what information can be tabulated. It's no use storing, for example, age when you have a birthdate registered. Remove ALL information that can be derived. Make a paper drawing of the structure of your database. Plan out the relations. Make sure to obey Codd's rules for design of a rel
    • First, listen to your costumer... Costumer? LOL! I'm pretty sure most costumers wouldn't have the faintest clue about how to set up a database testing environment. They might know about floppy hats and masks, but not floppy disks and markup. Sorry, dude. That was just the funniest typo I've seen in a long time. :-) I keep picturing some dude dressed like Will Shakespeare hunkering over a server, muttering "Verily, thou are a varlet!", or some such silliness.
  • We maintain an SQL script that creates the database or, when run in an existing database, upgrades any stored procedures that are out of date, alters tables, etc. This script (actually the smaller scripts it is assembled from) is checked in to Subversion like any other source code.

    Our unit tests work at the C# level, not SQL (they test the objects implemented using the database, rather than the database itself). Most tests start by running the creation script to create a fresh database, do things to it, a
  • by Ramses0 ( 63476 ) on Wednesday September 20, 2006 @09:34AM (#16145418)
    For (QA) test databases, it's generally not enough to just have a separate instance, you also need to support the following capabilities:

        1- "Clone" whatever is most recent on production

        2- Revert to "known good QA state" (ie: big red reset button)

        3- Dump current state for later use.

    You need to be able to clone so that ad-hoc testing can be run against production data w/o making production impact. This doesn't have to be live, but can be like a once-a-week/once-a-month activity, or rotate out a slave DB every once in a while, or have your DB people test your backups / etc.

    You need the ability to revert to a known good state so that specific tests can be run and those can be more easily automated. Like: search "foo", 7 results found (not 6, not 8, not "it was 8 a few seconds ago but now it's 9 because there's a new result that was just added") ... the more confident QA is in the data, the more confident (and/or prone-to-automation) they can be.

    The ability to dump out DB state is a very distant third, but can be helpful for post-testing analysis or being able to modify a particular DB snapshot to fit some particular testing needs and then dump that out to the file-system for later use.

    QA is hard, thank you for trying to make it easier.

    --Robert
  • For the scope of unit testing, try out HSQLDB. It's an in-memory database that you can connect to over JDBC, so even if you're using Hibernate or some other layered persistence engine you can simply switch your DataSource. If you're writing Java that follows the tenets of dependency injection, this is really straightforward.

    Now, this can only really effectively test a few things, and generally, I find that it can only really be useful for exercising small operations, like individual DAO methods. This is act
    • by curunir ( 98273 ) *
      I think the only time HSQL would make sense is if you are using a persistence layer like Hibernate (where you can just change the dialect during the test). Otherwise, the differences in SQL parsing mean that queries that run fine against Oracle, PostgreSQL or MySQL will either cause an error or just not work properly under HSQL. MySQL is particularly bad about relying on MySQLisms to get things done, but the other ones have their quirks too. So there really isn't a substitute for running the actual datab
  • Use an embedded (or at least small) database like McKoi or Apache Derby, have a script that defines the tables and some test data (which you can grab from a real test system). Then simply create the db once, and use the embedded jdbc url with your unit tests. Clear the database out, or destroy it before or after each unit test (you probably want to do it before each test, because there's no guarantee the last test exited cleanly). Ta da.
  • by toybuilder ( 161045 ) on Wednesday September 20, 2006 @12:41PM (#16146902)
    I also second the idea that developers and QA should normally each have their own database running on separate servers.
    Ideally, developers and QA run against a smaller database populated from scratch with a small dataset to speed development, and then for release testing use a much larger populated database or (if that's too difficult) a copy of the production database that has been appropriately scrubbed to get rid of confidential data.

    The database offerings from the various major vendors allow you to "quiesce" the database, which suspends new transactions, completes all pending transactions, and then ensures that all data and logs are flushed to disk. Then, with the production system paused, take a hot point-in-time snapshot of the filesystem, effectively giving you a complete database dump in a few seconds. (This requires a storage system that allows you to make snapshots -- NetApp's do this, for example.) Resume the database to let the production system continue, and then copy the snapshot of database files to another server and reconstruct a clone of the database.

    Run the appropriate trimming/cleansing/schema update on the clone database, and then make a snapshot of THAT. You can then revert the database to a known starting point as you like. If your development requires schema changes, don't let developers make the schema changes directly -- instead, insist that schema change DDLs be scripted, and reapply the script to the snapshot at each refresh.

    When doing the final release testing, get the latest snapshot of the production database, run the update scripts, and run the tests. If everything looks good, make another snapshot of the production database, and apply the updates to the production database.

    Done right, you can always roll back the test
  • by Slashdot Parent ( 995749 ) on Wednesday September 20, 2006 @01:08PM (#16147106)
    A couple of points.
    1. Typically, the term Unit Testing refers to the testing of a single, fine-grained unit of code. In other words, to do your true Unit Tests, you should not be accessing any database.
    2. The question that I think you are asking, is "How do I get databases initialized with the correct schema and correct data for integration testing?" The answer is, as always, "It depends."
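For point 1, the usual trick is to have the code under test depend on a narrow interface, so a true unit test can substitute an in-memory fake and never open a JDBC connection. A sketch with invented names (UserDao, UserService):

```java
import java.util.HashMap;
import java.util.Map;

// The code under test depends only on this small interface.
interface UserDao {
    String findName(int id);
}

// Business logic that can be unit-tested with no database at all.
class UserService {
    private final UserDao dao;
    UserService(UserDao dao) { this.dao = dao; }

    String greeting(int id) {
        String name = dao.findName(id);
        return name == null ? "Hello, guest" : "Hello, " + name;
    }
}

// In-memory fake standing in for the JDBC-backed implementation.
class InMemoryUserDao implements UserDao {
    private final Map<Integer, String> rows = new HashMap<>();
    void insert(int id, String name) { rows.put(id, name); }
    public String findName(int id) { return rows.get(id); }
}
```

The JDBC-backed implementation of UserDao is then exercised separately, in the integration tests the rest of this comment is about.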
    The two biggest factors for creating useful test environments are: "How often does your schema change?", and "How much data do you need in your database for meaningful test cases?"

    Schema Changes: As a J2EE architect, the first time I saw Ruby on Rails' database migrations, my first impulse was to wonder, "Why the !@#$ is this not in Hibernate?" I am not aware of any slick framework for J2EE apps to manage DB migrations, so you may have to use your own migration scripts. Hopefully, your schema is not changing much.

    Getting Data In There: This totally depends on how much data you need. My "favorite" reply to you was to have one snapshot of your production data per developer. That works great, as long as you don't have much data. My last project had I don't even remember how many terabytes of data in prod. Do you really think the client was going to spring for that much storage and that many Oracle licenses to get one instance per developer? Yeah right. We had a full snapshot for performance testing, but regular integration testing was done on a representative subset of data.

    DBUnit is a great way to initialize a small amount of data. For larger datasets, you cannot get away with things like DBUnit, as it would take hours, if not days, to get the data in there. For our performance testing databases, we had the prod data snapshot stored on a RAID-1. Before testing started, we broke the mirror and did testing against the degraded array. When it came time to reset the data, we shut down Oracle and rebuilt the array to the good snapshot. That wound up being very fast for us. For medium amounts of data, you could probably get away with using SQL*Loader.

  • by phamlen ( 304054 ) <phamlen&mail,com> on Wednesday September 20, 2006 @01:48PM (#16147399) Homepage
    One approach that has worked for me in the past is the "backup and recover" approach. Basically, it works like this:
    1) You maintain a canonical "test" database (or multiple ones). This database has the same functionality as the production database but generally contains much less data. No one touches this database unless they need to permanently modify the test data. After each release, you make a backup of the database and release that backup to everyone who needs a test database. They restore it to their own environment.
    2) You always write changes to the database as scripts so that you can run them against your test database and your production database. Your release process has to change to include running any database modification scripts on the canonical test database as well as the production database. This ensures that your new test database matches the production database for that release.
    3) You need to modify your test process so that it runs a database restore at the appropriate points. In our case, we always restore before QA functional tests (because they leave the database in an altered state) but we don't restore for unit tests (because we insist they leave the database in the same state they found it).

    The advantage of this approach is that everyone has a copy of an actual database and you get to see all the funkiness of your real environment. The downside is that you have to be very disciplined in keeping the backups for all releases, and in running modification scripts against both the test and production databases appropriately.

    -Peter
  • Like a Forest Fire (Score:3, Interesting)

    by Flwyd ( 607088 ) on Wednesday September 20, 2006 @04:05PM (#16148641) Homepage
    We define our schema in an XML format. We have a class that builds a DB from that format, subclassed by database type, making skeletal DB install an automated process. This also means it's the same process to install a client site using Oracle as it is to install a test database on a developer machine using Postgres.

    When our master build runs test cases, it drops all tables and creates them all fresh using the XML definitions. Each JUnit test case is responsible for ensuring it has the data it needs. In some cases, this is done by setting up a facade on the regular service so that the test can worry about semantics and not data storage. In other cases, the test (or a utility) creates test data. You could presumably also copy part of your live data, though that makes it much more difficult to know what the correct answer is in advance.

    If you follow this structure, multiple releases with different schemas is trivial. Just have a parameter for the DB URL in your test suite and let it build the correct database version for you when it checks your schema out of your source repository.

    (Incidentally, keeping your database schema in your source repository also allows easy comparison of database structure between code versions, making it easier to figure out what must happen when you upgrade.)
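A toy version of that XML-to-DDL step might look like the following. The &lt;table&gt;/&lt;column&gt; vocabulary here is invented, and a real builder would be subclassed per vendor to map types and emit keys and indexes, as the comment describes:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Turn an XML schema definition into CREATE TABLE statements.
public class SchemaBuilder {
    static List<String> toDdl(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            List<String> ddl = new ArrayList<>();
            NodeList tables = doc.getElementsByTagName("table");
            for (int t = 0; t < tables.getLength(); t++) {
                Element table = (Element) tables.item(t);
                StringBuilder sql = new StringBuilder("CREATE TABLE ")
                        .append(table.getAttribute("name")).append(" (");
                NodeList cols = table.getElementsByTagName("column");
                for (int c = 0; c < cols.getLength(); c++) {
                    Element col = (Element) cols.item(c);
                    if (c > 0) sql.append(", ");
                    sql.append(col.getAttribute("name")).append(' ')
                       .append(col.getAttribute("type"));
                }
                ddl.add(sql.append(')').toString());
            }
            return ddl;
        } catch (Exception e) {
            throw new IllegalStateException("bad schema XML", e);
        }
    }
}
```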
  • by mikeburke ( 683778 ) on Wednesday September 20, 2006 @08:38PM (#16150596)
    I work with a large, legacy codebase - about 2 million lines of code, 600 tables. Some bits are nicely written, some aren't. Concepts such as dependency injection, separation via interfaces etc. are not pervasive, so traditional unit testing approaches of mocks or HSQL are not useful (in fact I find they do not scale for 'meaningful' tests anyway).

    So you have this legacy code base - you want to make changes, but how can you validate the result? One approach is to compare database states - one from a known good codebase, one from a modified codebase. DBUnit can be tremendously useful here - this is what I've done (perhaps too complex for explaining on Slashdot):

    Create a common Unit Test base class that extends DBUnit's DatabaseTestCase. It will:

        a) receive a list of modified table names from the concrete test class
        b) if a system property is set, export a pristine copy of these tables prior to running the test - 'reference data'.
        c) execute the use case (register a user, perform a transaction, whatever) - this just makes a 'blind' call into the
              code proper.
        d) if a system property is set, export the modified table data ('known good results')

    The idea is you run this test twice:

    1) With the original codebase, with result exporting enabled to generate known good results.

    2) With the codebase under test - the results generated will be compared against known good results and DBUnit will flag any differences. You can get it to ignore stuff like sequences and dates that will differ between runs.

    The reference data generated in (b) is reloaded prior to running the second test, so you start from the same point. Each concrete test class just has to:

    * figure out what tables change within the test
    * provide the test code itself

    Everything else is managed by DBUnit - exporting/importing datasets, comparing datasets, etc.
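The comparison step - ignoring volatile columns like sequences and dates - can be sketched with plain maps standing in for the datasets. DatasetDiff and its row model are invented for illustration; DBUnit's real comparison works on IDataSet/ITable objects:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

// A table snapshot is modelled as a list of row maps (column -> value).
// Rows from the known-good run are compared to the run under test,
// skipping columns declared volatile.
public class DatasetDiff {
    static List<String> diff(List<Map<String, String>> expected,
                             List<Map<String, String>> actual,
                             Set<String> ignoredColumns) {
        List<String> problems = new ArrayList<>();
        if (expected.size() != actual.size()) {
            problems.add("row count: expected " + expected.size()
                    + ", got " + actual.size());
            return problems;
        }
        for (int i = 0; i < expected.size(); i++) {
            for (Map.Entry<String, String> col : expected.get(i).entrySet()) {
                if (ignoredColumns.contains(col.getKey())) continue;
                String got = actual.get(i).get(col.getKey());
                if (!col.getValue().equals(got)) {
                    problems.add("row " + i + " column " + col.getKey()
                            + ": expected " + col.getValue() + ", got " + got);
                }
            }
        }
        return problems;
    }
}
```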
  • What about SQLUnit? (Score:2, Informative)

    by Abobo ( 704508 )
    http://sqlunit.sourceforge.net/ [sourceforge.net] is based on JUnit and is specifically designed to test databases and result sets. It is what I use when building automated test streams. It supports many databases on a fresh download and can be extended easily if required.
  • Spring provides TestCase subclasses that provide a Spring ApplicationContext and a TransactionManager. Spring automagically starts a transaction in setUp() and rolls it back in tearDown(). They provide hooks to execute setUp() and tearDown() code both inside and outside the transaction. You can force the transaction to commit if you want, but that's not really what you want to do. I've found that this works really well for a number of reasons 1) initialize the database once, 2) unit tests are independen
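The same transaction-per-test idea can be shown without Spring. The TxStore below is a toy stand-in for a JDBC connection with autocommit off (both classes are invented); the point is only the shape of the pattern: begin in setUp(), roll back in tearDown(), so no test ever dirties the shared data and tests stay independent of each other:

```java
import java.util.HashMap;
import java.util.Map;

// Toy transactional store: mutations go to a working copy that is
// discarded on rollback, leaving the committed state untouched.
class TxStore {
    private final Map<String, String> committed = new HashMap<>();
    private Map<String, String> working;

    void begin() { working = new HashMap<>(committed); }
    void put(String k, String v) { working.put(k, v); }
    String get(String k) { return working.get(k); }
    void rollback() { working = null; }
    int committedSize() { return committed.size(); }
}

class RollbackTestTemplate {
    static void runTest(TxStore store, Runnable body) {
        store.begin();        // setUp(): start the transaction
        try {
            body.run();       // the test mutates data inside the transaction
        } finally {
            store.rollback(); // tearDown(): undo everything, pass or fail
        }
    }
}
```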
  • I used this recently for running test cases against Python code, and it worked great! I placed some DB population code in my setUp() method so you can run the test from any dir and it works -- no DB server needed!

    It works like MS Access (file-based) but supports most of the SQL92 standard.

    http://sqlite.org/ [sqlite.org]
