PostgreSQL vs. MySQL comparison 390

prostoalex writes "Ever find yourself wondering which open source database is the best tool for the job? Well, wonder no more, and let your tax dollars do the work in the form of Fermi National Accelerator Laboratory publishing this unbiased review of MySQL vs. PostgreSQL. After reading it, however, it seems that MySQL ranks the same or better on most counts." My poor sleepy eyes misread the date of posting here; caveat that this is more than 15 months old.
This discussion has been archived. No new comments can be posted.

  • From TFA:

    Having foreign keys [...] can all be very attractive in PostgreSql—if you need them and you will make any use of them.

    Foreign keys are nice, I have to say; I implement them in mysql anyway, in spite of the fact that they're ignored for MyISAM.

    • Re:Foreign Keys (Score:5, Insightful)

      by mwanaheri ( 933794 ) on Monday December 18, 2006 @09:34AM (#17284976)
Foreign keys are more than nice, they are essential. Unless, maybe, you don't care about the integrity of your data, or you want to make the necessary checks in the application. The latter camp should keep their eyes down and their mouths shut whenever the talk turns to the 'speed' of any RDBMS, of course.
      • Re: (Score:2, Interesting)

        by Tony Hoyle ( 11698 )
        Foreign keys don't speed anything up, they just add an extra layer of checks on your database. Your app should be checking itself anyway.

        It's the subselects that get me - without them you have to jump through a lot of hoops. The sentence quoted basically translates as 'as long as you are only storing your CD collection and not doing anything serious with a database, then use mysql'.

        TFA also fails to mention that mysql cannot be used in commercial development without paying $200 per client - which makes it
        • Re: (Score:3, Insightful)

          by mwanaheri ( 933794 )
>Foreign keys don't speed anything up, they just add an extra layer of checks on your database.

Right, they even make the DBMS slower. But the DBMS certainly does those checks faster than the application you write, so relying on checks made in the application wastes speed on the application side. And if I don't care about speed in the application, why make a fuss about the speed of the RDBMS? By the way: I'd rather have things reliable than fast. Subselects are something I also heavily us
          • Re: (Score:3, Informative)

            by Tony Hoyle ( 11698 )
An app can do its checks with full knowledge of the structure of the data it's writing, and often it's just the comparison of a couple of integers anyway, with no impact on speed. You don't want to rely solely on the DB to do that... you end up having to handle a lot of nasty exception cases. Far better to avoid them first. Put foreign keys in, but definitely don't make them your first line of defence.
            • Re:Foreign Keys (Score:5, Insightful)

              by vadim_t ( 324782 ) on Monday December 18, 2006 @11:10AM (#17286088) Homepage
              Well, I sure hope you never work on anything serious.

              The database's function is to provide a RELIABLE storage for your data. Part of the whole reliability thing is making sure crap can't get in, because once it's there everything goes to heck.

              For instance, let's take a shopping cart. Can an order be for a negative quantity? If your app doesn't work that way (it could, using a negative amount for returns for example), and you still allow it in the DB, then all your reporting goes to heck, as SELECT SUM... now returns the wrong thing.

A proper database is set up in such a way that every piece of data in it makes sense. This means, for instance, not having orders hanging in the void without being linked to some client. This is something easily ensured by foreign keys. Otherwise you have an utter mess - the total of the orders in the database doesn't match the sum of the orders of all clients!

If you put your checks in the database, you have a guarantee that when somebody else codes another frontend to it (say, you had a website and now are making a special version for PDAs), and the application does the wrong thing, the database simply won't let it happen. This may cost a bit of speed, but I assure you that peace, your sanity and your ASS (if you have a boss with any sense, he's not going to like it at ALL if it turns out that reports don't match reality, and that reality can't even be easily extracted) are far, far more valuable.
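The shopping-cart argument above can be made concrete with database-level constraints. A minimal sketch using Python's sqlite3 as a stand-in (the CHECK and FOREIGN KEY syntax is close to PostgreSQL's; the clients/orders schema is hypothetical, and SQLite needs an explicit pragma before it enforces foreign keys at all):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in

conn.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        client_id INTEGER NOT NULL REFERENCES clients(id),
        quantity INTEGER NOT NULL CHECK (quantity > 0)  -- no negative quantities
    )""")

conn.execute("INSERT INTO clients (id, name) VALUES (1, 'alice')")
conn.execute("INSERT INTO orders (client_id, quantity) VALUES (1, 3)")  # fine

# A negative quantity is rejected by the database, not by app code:
try:
    conn.execute("INSERT INTO orders (client_id, quantity) VALUES (1, -2)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)

# An order hanging in the void without a client is rejected too:
try:
    conn.execute("INSERT INTO orders (client_id, quantity) VALUES (99, 1)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Whatever frontend talks to this schema, the bad rows never get in, so SELECT SUM(quantity) stays meaningful.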
            • by alienmole ( 15522 ) on Monday December 18, 2006 @02:29PM (#17289414)
              I suspect you're thinking of relatively simple cases where you have a single application working against a database. Which of course is the standard MySQL scenario.

              Where foreign keys and the other referential integrity features really shine is in true enterprise scenarios, when you may have hundreds or thousands of applications, written in multiple languages, working against the same shared database(s).

              In that scenario, the only viable way to duplicate the functionality of foreign keys at the application level is to have a middle layer which all other applications are required to go through. Realistically, that middle layer has to be implemented as a server, serving requests for object/record creation, update and delete over the network. Implementing it as a library to be linked into applications doesn't work well, because there are multiple applications accessing the database, and integrity enforcement needs to be centrally coordinated.

              Implementing a middleware data server for an application isn't all that difficult, but integrating it into applications can be. Most application development environments know how to talk to databases, but don't automatically know how to talk to your application-specific, language-independent, data server. So now you're writing a client library for each app dev platform used in the enterprise, and dealing with things like integrating your custom interface with data-bound controls in the user interface. BTW, this is where people start resorting to e.g. SOAP, and projects start going off the rails (no pun intended, Ruby fans).

              Luckily, as it turns out, there are already standardized, widely-available, well-supported systems that implement a centralized data serving service which enforces referential integrity. They're called databases. And foreign keys are an essential part of the service they provide.
        • Re:Foreign Keys (Score:5, Insightful)

          by Branko ( 806086 ) on Monday December 18, 2006 @10:45AM (#17285760)
          Your app should be checking itself anyway.

          Actually it shouldn't (in this context). Typically, one database will have several client applications attached to it. If data consistency is not checked at DB level, then:

          • Bug in single application might compromise the data consistency of the whole system.
          • You must keep all of your applications precisely synchronized.
          • You are repeating the job of implementing the same consistency logic across all applications instead of implementing it only once - in database.
          • Implementing these sorts of checks can be difficult to do correctly at the application level in a concurrent environment typical for a DBMS.
          • Data consistency at DB level is directly supported by modeling tools, so you can plan for it and visualize it early enough to spot problems and communicate it to the other team members more easily.
          • Re:Foreign Keys (Score:5, Insightful)

            by tacocat ( 527354 ) <tallison1&twmi,rr,com> on Monday December 18, 2006 @01:58PM (#17288880)

Additionally, databases can generally do this faster than application code. I can say this because databases are written in C and have been optimized and debugged for years. Applications are comparatively rarely written in C, and have not been debugged for years by the time they are released.

This is something that actually really pisses me off about Ruby, Rails, and ActiveRecord. ActiveRecord is an insane violation of everything a database has been built to do. It breaks consistency, violates keys, ignores so many rules... And it beats the crap out of a database to do what a database is designed to do, and can handle much faster.

            This is regardless of the flame wars of Postgres vs MySQL.

        • Re:Foreign Keys (Score:4, Insightful)

          by cloudmaster ( 10662 ) on Monday December 18, 2006 @12:03PM (#17286844) Homepage Journal
Umm, I'm pretty sure MySQL has had subselects for several years now. It's not clear whether you knew that or not.
        • Re:Foreign Keys (Score:4, Insightful)

          by poot_rootbeer ( 188613 ) on Monday December 18, 2006 @01:01PM (#17287908)
          Foreign keys don't speed anything up, they just add an extra layer of checks on your database.

          Correct. That extra layer of checks will probably actually slow things down a bit.

          But foreign keys aren't about performance. They're about data integrity, which I would hope every database administrator or developer is more concerned with anyway. It doesn't matter how many requests/second your DBMS can handle if the data is fuxxored.

          Your app should be checking itself anyway.

          Yes, it should be catching "foreign key constraint violation" exceptions thrown by the DB interface and handling them appropriately. I hope that's what you meant.
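Concretely, "checking itself" in that sense means translating the database's integrity error for the user rather than re-implementing the constraint. A hedged sketch with Python's sqlite3 (the place_order helper and schema are invented for illustration; real drivers raise their own DB-API IntegrityError subclass):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    client_id INTEGER NOT NULL REFERENCES clients(id))""")

def place_order(client_id):
    """Let the DB enforce the constraint; translate the failure for the caller."""
    try:
        conn.execute("INSERT INTO orders (client_id) VALUES (?)", (client_id,))
        return True
    except sqlite3.IntegrityError:
        return False  # e.g. report "no such client" instead of crashing

conn.execute("INSERT INTO clients (id) VALUES (1)")
print(place_order(1))   # True
print(place_order(42))  # False: foreign key violation caught and handled
```

The check itself lives in the database; the app's job is only the graceful handling.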
        • Re:Foreign Keys (Score:5, Insightful)

          by tacocat ( 527354 ) <tallison1&twmi,rr,com> on Monday December 18, 2006 @01:54PM (#17288818)

          When are you non-database types going to stop saying "Your app should be checking itself anyway."

This is an insanely inefficient method of execution. It's also highly presumptive.

Inefficient: if you are going to insert a record, you first have to check that it's not already there, and if it is, change your INSERT to an UPDATE. This is dumb. Some databases support an INSERT-or-UPDATE statement, but even where they don't, it's faster to INSERT, handle the failure, then UPDATE. Alternatively: UPDATE, and INSERT on ZERO ROWS CHANGED. Either way you run fewer than two queries on average, whereas the "your app should check" method guarantees two SQL statements are executed every single time.

Dumb: say you check whether a record exists and get a "NO" answer. While you are preparing and executing your INSERT, some other process or thread inserts that same record into the database. Now you have an error and you still don't know what to do. In short, you're in a pretty bad way.

Presumptive: in all my years I've never seen any company happy with the application interface being the only way into the data, especially with a database on the back end. The business types, Marketing in particular, love to dig through database information to identify trends, patterns, and correlations between customer behaviour, product representation, and sales metrics. It is presumptive to assume that the application can safely contain all of the business logic and that no one will ever come in the back end and change something - thereby breaking all your business rules.

The other consideration is that business logic contained in the database is going to run a heck of a lot faster there than anything you can dream up in your application, unless the application is written in C. Databases are generally written in C/C++; applications are generally written in Java, Perl, Python, or Ruby, none of which can compete with C. Add to that the fact that databases have been designed for years to do only one thing - manage data. Do you seriously think you can outperform a decade of database optimization in a Ruby script?

          If you are going to base an application on data it would be useful to know how to capitalize on the features of a database rather than trying to repeat it. At the very least, you are less likely to introduce bugs.
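The insert-or-update patterns described above can be sketched with Python's sqlite3; the counters table and bump_* helpers are hypothetical, and the one-statement form is spelled INSERT ... ON CONFLICT in SQLite and modern PostgreSQL (MySQL's equivalent is INSERT ... ON DUPLICATE KEY UPDATE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, hits INTEGER)")

def bump_check_first(name):
    # The racy "app should check" way: two statements every single time,
    # and another writer can sneak in between them.
    row = conn.execute("SELECT hits FROM counters WHERE name = ?", (name,)).fetchone()
    if row is None:
        conn.execute("INSERT INTO counters VALUES (?, 1)", (name,))
    else:
        conn.execute("UPDATE counters SET hits = hits + 1 WHERE name = ?", (name,))

def bump_insert_first(name):
    # INSERT, handle the failure, UPDATE: at most two statements, no lost race.
    try:
        conn.execute("INSERT INTO counters VALUES (?, 1)", (name,))
    except sqlite3.IntegrityError:
        conn.execute("UPDATE counters SET hits = hits + 1 WHERE name = ?", (name,))

def bump_upsert(name):
    # The one-statement form, where the database supports it.
    conn.execute("""INSERT INTO counters VALUES (?, 1)
                    ON CONFLICT(name) DO UPDATE SET hits = hits + 1""", (name,))

for _ in range(3):
    bump_upsert("home")
print(conn.execute("SELECT hits FROM counters WHERE name='home'").fetchone()[0])  # 3
```

The check-then-insert variant is exactly the race described in the "Dumb" paragraph; the other two let the database's own constraint arbitrate.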

          • Re: (Score:3, Informative)

            by killjoe ( 766577 )
Personally I think putting business logic in the database is wrong. It's harder to debug, harder to version control, and harder to write.

Putting in some code to handle data integrity is OK, but not business logic. Performance isn't everything. I would rather have an application that runs slower and is easier to maintain than the other way around.

            "unless the application is written in C. Databases are generally written in C/C++. Applications are generally written in Java,Perl,Python,Ruby. None of these can compete wi
        • Re: (Score:3, Insightful)

          by nuzak ( 959558 )
          > Your app should be checking itself anyway.

          No it shouldn't -- the purpose of a database is to make it happen whether the client apps care about data integrity or not. Now a good client will gracefully handle the errors that a database throws back at it, but the database is supposed to take care of the checks in the first place.

          TFA reminds me of the anti-transaction FUD in old MySQL docs, which suddenly disappeared as soon as MySQL got a transactional backend. But hey, its system tables are still MyISA
      • Re:Foreign Keys (Score:5, Insightful)

        by CaptainZapp ( 182233 ) * on Monday December 18, 2006 @10:04AM (#17285256) Homepage
        Foreign keys are more than nice, they are essential.

        Bingo!

It never ceases to amaze me when the MySQL crowd argues that "you don't really need all that pesky integrity stuff, it just slows down the database."

Guess what, guys: you're dead wrong!

        Any DBA worth his salary will enforce data integrity on the lowest possible level, which means constraints (however implemented) on the object level.

        Sure, you can let your coders in Bengaluru ensure that the primary key is unique instead of just applying a unique index and the same goes for referential constraints between tables. You can implement them in the application just fine until somebody overlooks some minor detail in the code and you're royally fucked!

Again! Foreign keys or triggers are not "niceties". They are essential to implementing an industrial-strength database; period!
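The unique-index point in a minimal sketch, again with Python's sqlite3 standing in for a real server (the users table is hypothetical; CREATE UNIQUE INDEX is essentially the same in MySQL and PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT)")
# One line of DDL replaces every app-side "does this email exist yet?" check:
conn.execute("CREATE UNIQUE INDEX users_email ON users(email)")

conn.execute("INSERT INTO users VALUES ('a@example.com')")
try:
    conn.execute("INSERT INTO users VALUES ('a@example.com')")
except sqlite3.IntegrityError:
    print("duplicate rejected by the index")
```

No coder anywhere can overlook this check, because it isn't in the code.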

    • Re:Foreign Keys (Score:5, Informative)

      by ShieldW0lf ( 601553 ) on Monday December 18, 2006 @09:39AM (#17285048) Journal
      This is unbiased? Give me a break.

      WTF is with putting up an "unbiased comparison" between Postgres 7.2 and MySQL 5.0 when Postgres is now up to 8.2 and has most of their concerns addressed in that release, whereas MySQL is still at 5.0?

      MySQL is a great database, if you need clustering but not referencial integrity or ACID compliance, that is.

      • Re:Foreign Keys (Score:4, Informative)

        by Tet ( 2721 ) <slashdot AT astradyne DOT co DOT uk> on Monday December 18, 2006 @09:49AM (#17285134) Homepage Journal
        WTF is with putting up an "unbiased comparison" between Postgres 7.2 and MySQL 5.0 when Postgres is now up to 8.2

        That'd be because the article was written in 2005. Unbiased? Maybe. Vague, unscientific and out of date? Definitely.

        • Could it be coincidence that Digg had a 'dated' article on this earlier today - I can't check as Digg is currently down, but did Hemos 'nick' this from Digg?

          I'd hate to think that Digg, Fark, Slashdot, Boing Boing etc. are nothing more than a big news circle-jerk now. (well, more than they are already!)

        • Re: (Score:3, Interesting)

          by Trifthen ( 40989 )
          I think the concern here was that even 15 months ago, Postgres was at least in the 8.0 tree, a vast improvement over the 7.4 tree which itself offered not insignificant advances over 7.2 and 7.3. The linked table of pros/cons for each database is dated February 2005, and browsing the postgres archives informs me that 8.0.1 was released by that point. If they waited two months, 8.0.2 would have been available. As it stands, the 8.0 tree was deprecated back in November of 2005, and the future now sits squa
      • Re: (Score:3, Informative)

        by Phil John ( 576633 )
        MySQL is a great database, if you need clustering but not referencial(sic) integrity or ACID compliance, that is.

Is that the same referential integrity and ACID compliance afforded by using InnoDB as your table type in MySQL? ;o)

    • Re: (Score:3, Insightful)

      by Brummund ( 447393 )
Foreign keys aren't "nice", THEY ARE ESSENTIAL TO AN RDBMS.

It is the same thinking that probably made the retards at MySQL AB create a date type that accepts the 30th of February. (At least it did, a few years ago.) Why even include a datetime datatype if it isn't capable of the simplest validations ever?

Yes, I'm fuming. Those MySQL retards have made a generation of programmers think they can do SQL because they manage to put crap into MySQL. Gahhh, I hope their puny webapps come back to haunt them sometime.

      (I was once
      • 1. "SELECT * FROM ARTICLES ORDER BY ID DESC"
        2. Retrieve all articles from the database
        3. Then a for loop printing out the 5 first entries.

They basically transferred all the data in the articles table every time, just to iterate over the first 5 rows. Gahhhhhh)


        Depends if it was cross-database or not, and whether that 'retrieve all articles' really did that or simply created the recordset/cursor.

        SQL is a bit fuzzy once you get beyond simple selects... On one database you might want 'SELECT TOP 5 * ...' on anoth
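For the fetch-everything anti-pattern quoted above, the portable fix in MySQL, PostgreSQL and SQLite is to push the limit into the query with LIMIT ('SELECT TOP 5' being the SQL Server spelling). A sketch with Python's sqlite3 (the articles table is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO articles (title) VALUES (?)",
                 [(f"article {i}",) for i in range(1000)])

# Bad: ships all 1000 rows to the client just to show five.
all_rows = conn.execute("SELECT * FROM articles ORDER BY id DESC").fetchall()
front_page = all_rows[:5]

# Good: the database returns only the five rows that are needed.
latest = conn.execute(
    "SELECT * FROM articles ORDER BY id DESC LIMIT 5").fetchall()
print([row[0] for row in latest])  # [1000, 999, 998, 997, 996]
```

Whether the naive version actually transfers everything depends on the driver's cursor behavior, as the parent notes, but LIMIT removes the question entirely.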
      • Re: (Score:2, Insightful)

        by walt-sjc ( 145127 )
        People coding shitty SQL is independent of their database of choice. MySQL is (IMHO) easier to install, configure, and use than postgres which just makes it more common to use, but MySQL is not responsible for shitty SQL in poorly written PHP apps.
        • Re: (Score:3, Insightful)

          by Brummund ( 447393 )
A database that allows even simple datatypes to contain crap totally inconsistent with any calendar in use for the last 5000 years is responsible for some of that crap.
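For comparison, the application side has no trouble rejecting such a date; Python's datetime does the calendar validation the comments above wish the database had done (a sketch; old MySQL's exact behavior with invalid dates depended on version and SQL mode):

```python
from datetime import date

# No calendar in use has ever had a February 30th, and the type refuses it:
try:
    date(2005, 2, 30)
except ValueError as e:
    print("rejected:", e)

print(date(2005, 2, 28))  # a real date is accepted: 2005-02-28
```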
    • by hey! ( 33014 )
This kind of entanglement of logical design (foreign keys) with physical design (ISAM) is precisely what a database management system is supposed to prevent.

      It's just like separation of concerns in web site design. You want to leave graphic design to artists and programming to programmers, and to be able to vary each (within limits) somewhat freely.

      The same SOC principle applies to database applications.

      Foreign key constraints are declarative specifications of a behavior; either you need the behavior or you don
  • No Digg (Score:5, Informative)

    by AKAImBatman ( 238306 ) * <akaimbatman AT gmail DOT com> on Monday December 18, 2006 @09:30AM (#17284936) Homepage Journal
    1. There's no such thing as unbiased. Especially on a page that gives a fairly abstract review.

    2. This article is 2 years old. Everything in its comparisons is out of date.
    • by suso ( 153703 ) *
      2. This article is 2 years old. Everything in its comparisons is out of date.

It's a government website, and so it was written in Government Time (GVT). That means that February 2005 is actually about May 2009. So clearly the author of the page can't be trusted.
    • Funny you should mention that. Guess what article appeared on Digg very recently...
    • Re:No Digg (Score:5, Informative)

      by electroniceric ( 468976 ) on Monday December 18, 2006 @09:59AM (#17285212)
      Just to continue on your good points, especially troubling is the fact that this article compares the then-unreleased MySQL 5 to the Postgres 7.x series. Nearly all the drawbacks to Postgres that this article highlights have been addressed in the 8.x series.

      We run Postgres for our main business application and the main limitations are of two forms:
      1) Depth of community
The Postgres community is great - very responsive and knowledgeable - but its size is a limitation in a number of ways. The ODBC driver is a bit of a stepchild to the main project, and some key functions like dblink, which address missing features like cross-database selects, are relegated to /contrib and rely on their individual authors for nearly all maintenance. This means that as a user you are more likely to bump up against the bleeding edge earlier than in communities where these outside-the-core projects are better supported.

For the same reason, a key subset of its documentation is very sparse. Documentation for the core system is thorough, clear and concise, but anything in contrib, or projects like the ODBC or .NET drivers, is much less likely to have the same quality of documentation. Postgres' extremely powerful GiST indexes are unparalleled as a feature, but you need a background in theoretical CompSci to figure them out, thanks to limited documentation (note to aspiring database index geeks - I would gladly buy a book on GiST aimed at proficient DBAs who are not giants of theoretical CS). Likewise its procedural languages: thanks to its architecture and open codebase, Postgres offers more server-side languages than any other database I know of, but few of them have more than basic documentation, let alone the stacks of books you'd find for other procedural languages.

      2) Postgres is very close to being a true enterprise contender (unlike MySQL, which is evolving that direction but distinctly further off), but lacks some key features like XML handling, a more comprehensible approach to result sets (anyone who's dealt with rowtypes and casting resultsets can attest to the steep learning curve), and a userbase that has put the product through the wringer. Now that some corporate heads are getting interested (e.g. Sun, Red Hat, EnterpriseDB) hopefully some of these shortcomings will be addressed in short order.

      Don't let this outdated, apples to oranges comparison fool you: Postgres is a very solid and usable database.
  • Old news (Score:5, Informative)

    by daffmeister ( 602502 ) on Monday December 18, 2006 @09:31AM (#17284944) Homepage
    From the site:

    "Last modified: February 15, 2005."
  • stability (Score:2, Informative)

    Having foreign keys, views, subselects, and transactions can all be very attractive in PostgreSql -
    if you need them and you will make any use of them. If you don't need them or won't use them, then
    you're probably better off with MySQL and its superior performance.


PostgreSQL is more stable than MySQL (and has better performance when saturated); shouldn't you take that into consideration?
    • Re: (Score:3, Insightful)

      by TheRaven64 ( 641858 )
      According to TFA, 'MySQL does very good job even on the busiest sites,' while for PostgreSQL 'Random disconnects, core dumps and memory leaks are usual.' This flies in the face of my own experience and testing results I have seen. Under heavy load, PostgreSQL has a habit of slowing to a crawl, while MySQL just dies. How many web pages have you seen where the entire text was a PHP error saying it was unable to connect to the MySQL server?
      • Re: (Score:3, Interesting)

        by scribblej ( 195445 )
        This is what blows my mind, too. I've been using postgresql 7.x (yeah, I'm a Debian user) for years and I have never, ever seen it crash, disconnect, dump core, or leak memory. Ever.

        I've never seen MySQL do those things, either, to be fair, but I don't use it as much, and I *did* have it destroy a bunch of my data once when a machine was rebooted without being properly shut down.

        The comment that Postgresql is unstable, even in the older version numbers, seems to indicate some kind of separation from reali
      • Re: (Score:3, Interesting)

        by Blimey85 ( 609949 )
        Just because you see that error doesn't mean that MySQL has "died". If the number of people hammering your site exceeds the number of connections you have allowed, that message will come up. Granted by that point the server is probably crawling but maybe not. You may be doing a dump from a particular table causing it to remain locked while the dump is in progress. All queries to that table will stack waiting for it to get unlocked and that can cause your connections to max which would in turn cause that par
      • Re: (Score:3, Interesting)

        by rycamor ( 194164 )
It flies in the face of my experience, too. Even with the older 7.x series, if I accidentally ran a stupid query (for example, a join on multiple tables, forgetting the WHERE clause, resulting in a ridiculous multiplication of returned rows), PostgreSQL would literally run that query for 24 hours without dying, until I finally killed the query or restarted the server.

        Come to think of it, I haven't had PostgreSQL ever die on me, once. Even when my server crashed and rebooted, PostgreSQL would just happily pick u
      • Re: (Score:3, Insightful)

        by localman ( 111171 )
        Any DB misconfigured is going to die under load. MySQL can be configured to be extremely stable -- we've been running the fastest [internetretailer.com] & most reliable [internetretailer.com] retail site online for the past year now with MySQL as the DB.

        I've got nothing against PostgreSQL -- just never used it. I'm sure it's a fine piece of software, but please don't spread falsehoods about MySQL just because people don't know how to configure it. That would be like me claiming PostgreSQL sucks because I couldn't get it working easily. It's all
  • Old and wrong (Score:5, Informative)

    by ldapboy ( 946366 ) on Monday December 18, 2006 @09:32AM (#17284962)
    postgresql has a native Win32 version, complete with an installer, service support and does not depend on cygwin.
    • Re: (Score:3, Funny)

      by Anonymous Coward
      Yeah but In 2005 it didn't!

      You can't say it is "old" and "wrong" when it is wrong because it is old.

  • Unbiased ? (Score:2, Interesting)

    by UncleH ( 8863 )
    Just take a look at the description per item. I couldn't possibly call this unbiased in any way.
  • by seebs ( 15766 ) on Monday December 18, 2006 @09:36AM (#17285010) Homepage
    I have been involved with a smallish ("hundreds") installation of Movable Type using a mysql backend.

    One comment spammer can completely annihilate it.

    One developer I talked to once did some testing. On one simultaneous connection, mysql was way faster. By five or so, they were close. At ten, PostgreSQL was definitely winning. At a hundred, he was simply unable to get a single MySQL server to complete the test successfully, let alone do it quickly.

The impression I get is that PostgreSQL uses more robust algorithms, with higher constant costs and lower quadratic costs. In any event, I've never had any problems with it.

    As noted elsewhere, these comparisons are quite old...

    But in any event, in my own experience, mysql is a lot easier to blow up by overloading than postgres is, at least if you have a lot of writes going on. For pure-lookup functions, it might do better -- but a lot of modern database apps are pretty compulsive about saving at least something every time someone touches them. (For instance, modern vBulletin saves last visits, threads seen, and so on; all of that adds up to a huge load on the database server.)
    • by Rich0 ( 548339 )
      I've been wanting to switch to postgres for a long time, but do you know what the main thing is that is holding me back?

      THERE IS NO ODBC FOR LINUX (or equivalent).

Why should apps that use a DB be linked against libraries that are DB-specific? Why not make everything modular? Then developers can stick to ANSI SQL and let the user pick whatever database they want (mysql, postgres, oracle, sql-server, access, whatever).

      Right now all of my apps support mysql, and a few support postgres. So either I run two da
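ODBC managers such as unixODBC do exist on Linux, and at the language level Python's DB-API plays a similar role: code written against it can swap backends by changing only the connection setup. A hedged sketch (the connect dispatcher is hypothetical, and in practice placeholder styles and SQL dialect corners still differ between drivers):

```python
import sqlite3

def connect(backend="sqlite"):
    # Only this function knows which database is in use. Hypothetical
    # dispatch: psycopg2.connect(...) or MySQLdb.connect(...) would slot
    # in here for other backends, returning the same DB-API interface.
    if backend == "sqlite":
        return sqlite3.connect(":memory:")
    raise NotImplementedError(backend)

conn = connect()
cur = conn.cursor()
cur.execute("CREATE TABLE t (x INTEGER)")   # plain ANSI SQL throughout
cur.execute("INSERT INTO t VALUES (1)")
cur.execute("SELECT x FROM t")
print(cur.fetchone())  # (1,)
```

The leaks in this abstraction (parameter markers, LIMIT vs TOP, and so on) are exactly why both the parent's wish and the ODBC answers below it exist.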
      • THERE IS NO ODBC FOR LINUX (or equivalent).

        WTF? You *do* know Microsoft didn't invent ODBC???

        There are at least 2 different ones for Linux that I can think of. Every commercial Unix either has ODBC or has it available as an addon.
      • Re: (Score:3, Informative)

        by AKAImBatman ( 238306 ) *

        THERE IS NO ODBC FOR LINUX (or equivalent).

        What the hell? If that were true, then what is this? [pgfoundry.org] And this? [unixodbc.org] Not to mention this. [mysql.com]

        Are you making a reference to Windows-specific APIs or something? Because I don't understand how you can shout that ODBC doesn't exist for Linux/Unix?

    • Re: (Score:3, Informative)

      by walt-sjc ( 145127 )
      We use both PostgreSQL and MySQL for a large web-based application that does a reasonable mix of reads / writes (sessions / profiles are in MySQL so it gets MANY MANY writes.) Neither MySQL nor PostgreSQL has problems handling many many connections. Our load frequently hits around 1000 connections on Postgres and 4000 on MySQL on individual database servers (we replicate too.)

      Obviously you need to tune your environment (there are a plethora of options including table types which can impact things a LOT) to
    • Re: (Score:3, Insightful)

      by hey! ( 33014 )
      Somewhere in here, there's a tortoise and hare analogy trying to get out.

      It seems to me that if you step back from the details, there is a fundamental difference in style between the two systems that could be summarized thus:

      Postgres: emphasizes completeness, correctness, and conformance.

      MySQL: emphasizes immediate practicality.

      One style is not intrinsically better than the other. Given time their results may begin to converge, which I think is starting to happen. However, I am not surprised that many peop
    • Re: (Score:3, Informative)

      by Wdomburg ( 141264 )
For all its faults, MySQL does scale with a largely read-only data set. We currently have twenty-eight production servers running and about twenty development and testing machines. On the busiest servers we're pushing somewhere in the neighborhood of 4000 queries a second sustained.

      Write performance can certainly be an issue, but it depends largely on the application and the table backend. For example, if you can avoid doing deletes on a MyISAM table INSERTs get appended, allowing concurrent reads.

      I've
  • by Anonymous Coward
Glad to see the comparison, but what I would really like to see is a comparison that includes the new 2.0 release of Firebird. Their new release is impressive, but I don't know how its features stack up against MySQL or Postgres. Including Derby would also be nice.

    The most important factor to me in any comparison is the licensing agreement. I like a very open agreement and the MySQL license requires you to release the source code to your product in some cases, or you have to purchase a license from them.
  • Comment removed (Score:3, Insightful)

    by account_deleted ( 4530225 ) on Monday December 18, 2006 @09:38AM (#17285038)
    Comment removed based on user account deletion
    • by Schraegstrichpunkt ( 931443 ) on Monday December 18, 2006 @09:46AM (#17285106) Homepage
      You have to give the Notepad guys credit for the fact that it is an incredibly easy product when it comes to configuring it for your needs. For me, out of college, going to Vim was a culture shock because the process of learning Vim was so convoluted and drawn out for simple stuff. I know that Vim and Emacs can be much more powerful than Notepad, but there is something to be said for how easy it is for a developer to install Notepad and just start working with it.
      • Re: (Score:3, Insightful)

        by MP3Chuck ( 652277 )
        Part of me sees the point you're making, but another part of me says "Yeah, and ... what?" Does Notepad, embarrassingly simple though it may be, not still have appropriate uses?
      • Maybe your notepad.exe is a different one from mine, but I can imagine very few bona fide developers (apart from the very newest) using notepad, since IMHO it's unusably bad for all but the simplest of stuff. No syntax highlighting and annoying cursor placement issues being two of the most obvious problems. There are a million and one text editors superior to notepad available for win32 that are just as easy to use for base functionality.

        OTOH, no one would think of comparing vi/emacs/whatever to notepad. A mo
      • I don't think your example cuts the way you intend it to.

        1. For a lot of tasks, Notepad is all the editor you need (where "editor" is defined as a non-WYSIWYG text processor). For making a quick change to a config file, Notepad works just as well as Vim does.
        2. In Notepad, you open the program and start typing. Very intuitive for new/untrained users. In Vim, you can't start typing until you figure out how to turn on Insert mode. Very un-intuitive.
        3. Notepad on Windows is like Vim on *nix: it's the only ed
      • Nice troll. Comparing notepad to emacs is not the same as MySQL to PostgreSQL. Not even close.

        I think the point that the OP is trying to make is that PostgreSQL can be more difficult to install and use for no damn reason. As someone who uses both MySQL and PostgreSQL, the Postgres guys could (should) take some of the simplicity ideas of MySQL and incorporate them into PostgreSQL which would be a Good Thing. Most of the things that make MySQL easier to install / configure / use have NOTHING AT ALL to do with
    • You don't need to set up anything to run it for the first time; only if you want to play with performance do you start to modify parameters such as memory, max connections, etc. PostgreSQL is easy and powerful, just give it a try.
    • Oracle Express on Windows is dead simple. Click 'setup'. It creates the default tablespaces for you and there's a nice web frontend for creating tables etc.

      Perfectly fine for learning it. You can start on the real configuration later.

  • by kahei ( 466208 ) on Monday December 18, 2006 @09:41AM (#17285058) Homepage

    1 -- This article is years old.

    2 -- This article is posted solely to stir up (repetitive) discussion.

    3 -- This article pretends that MySQL is a real database, even though in order to do so it has to make gigantic leaps like considering data integrity to be not really all that important in a database.

    4 -- This article trolled me.

  • I'd rather (Score:3, Insightful)

    by Klaidas ( 981300 ) on Monday December 18, 2006 @09:43AM (#17285078)
    I'd rather have a new (not THAT old) comparison between Oracle and MySQL
  • Outdated and Silly (Score:3, Informative)

    by ClayDowling ( 629804 ) on Monday December 18, 2006 @09:46AM (#17285094) Homepage
    It's been a long time since any of their PostgreSQL statements were true. It's a very happy native windows service with a nice installer, and the administrative interface is very easy to use. Let's try posting current reviews of software, rather than reruns from a year or two ago.
  • by Stumbles ( 602007 ) on Monday December 18, 2006 @09:49AM (#17285136)
    February 15, 2005
  • Clustering and High-Availability aspects are not mentioned at all.

    MySQL speed will really depend on the database engine you use (MyISAM or InnoDB do not perform the same!). PostgreSQL performance is pretty much consistent across platforms.

    On the HA side, PostgreSQL has maybe fewer options: Slony-I ( http://gborg.postgresql.org/project/slony1/ [postgresql.org]) for master/slave or Sequoia (http://sequoia.continuent.org/ [continuent.org]) for multi-master.
    MySQL offers MySQL replication (http://dev.mysql.com/doc/refman/5.0/en/replication.htm [mysql.com]

  • I know slashdot is for nerds (and I happen to use mysql databases myself), but honestly - an old article comparing databases? Must... keep... eyelids.. open...
  • Crap! (Score:4, Informative)

    by CaptainZapp ( 182233 ) * on Monday December 18, 2006 @09:54AM (#17285174) Homepage
    MySQL runs as a native Windows application (a service on NT/Win2000/WinXP), while PostgreSQL is run under the cygwin emulation.

    I call pure, unadulterated crap on this one.

    One of the major new features in Postgresql 8 was native Windows support. It runs just fine as a service.

    This comparison is either very old news, incompetence in action, or, um, strongly biased.

  • by punker ( 320575 ) on Monday December 18, 2006 @10:17AM (#17285390)
    This almost seems like the same comparisons we've been hearing for years.
    1) Postgresql is more full featured than MySQL
    2) MySQL is faster in a read-mostly environment
    That's pretty much the same as the anecdotal arguments have been for years.

    In my job, we moved from mysql to postgres several years ago (around PG 7.0). At the time, we needed to make the move for performance reasons. We are in a read-write system, and MySQL's locking was killing us (this was before InnoDB was well established). The features are better too, as our developers were used to having data integrity features, server side programming, and all of the SQL92 constructs available. We also learned a bit about PG performance, which I'll share.

    1) Run EXPLAIN ANALYZE on everything. Postgresql is touchier about query performance than MySQL was. This just needs to be a habit if you're using PG. (You really should do performance analysis no matter your DB. It's just a good practice). The biggest gain will be making sure you're using index scans rather than sequential scans.

    2) Use persistent connections. Everyone likes to point out PG's process-per-connection forking vs. MySQL's threading. PG's connection handling is slow, there's no doubt about it. But there's an easy answer: just limit how often you connect. If you can keep a connection pool and reuse those connections, you'll save this big hit.

    3) Full vacuum and reindex regularly. We've found the docs to be a bit off on this. It indicates that you should run these occasionally. If you're in a read-write system, a full vacuum on a regular basis is very important. It really doesn't take that long if you do it regularly. Also, we've had trouble with indexes getting unbalanced (we see 50->90% tuple turnover daily). This has gotten better, but it doesn't hurt to let your maintenance scripts make things ideal for you. So, we run a full vacuum and reindex of our tables nightly through cron.

    4) Get your shared memory right. PG's shared buffers is probably the most important config attribute. It controls how much of your DB is memory resident vs disk resident. Avoiding disk hits is a big deal for any DB, so get this right. If you can fit your whole DB in memory, then do it. If not, make sure your primary tables will fit. The more you use the shared memory, and the less you have to page data in/out, the better your overall performance will be.

    Most DB systems seem to be read-mostly, so I can understand the performance comparisons focusing on that. In our read-write system though, the locking was the biggest issue and it tilted the performance comparison toward PG.
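
    A compact sketch of points 1, 3, and 4 above (table and index names are made up for illustration; exact tuning values depend on your installation):

    ```sql
    -- 1) Check the plan: a "Seq Scan" on a large table usually means
    --    a missing or unused index.
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
    CREATE INDEX orders_customer_id_idx ON orders (customer_id);

    -- 3) Nightly maintenance for high-turnover tables (run from cron
    --    via psql): reclaim dead space and rebuild bloated indexes.
    VACUUM FULL ANALYZE orders;
    REINDEX TABLE orders;

    -- 4) Inspect the shared buffer setting; the value itself is
    --    changed in postgresql.conf and takes effect after a restart.
    SHOW shared_buffers;
    ```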
  • by itsdapead ( 734413 ) on Monday December 18, 2006 @10:19AM (#17285424)

    MySQL/MyISAM is the one with the massive legacy code base, the one that your open-source blogging software uses and probably the one that your web host supports. It beautifully hits the "sweet spot" for data-driven web sites with infrequent and simple updates, where trading integrity for "read only" performance is sensible. It does not even purport to compete with PostgreSQL on features - but it does offer fulltext searches, again

    MySQL/InnoDB is the one that offers transactions, foreign keys etc. (ISTR it doesn't do fulltext indexes, though) - this is the "version" that bears comparison with PostgreSQL. I wonder how its user base compares?

    (OK - you can mix InnoDB and MyISAM tables in a single database, but you can't use InnoDB if your web host hasn't installed it - heck, one provider I use is still on MySQL V3.23)
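
    As a rough illustration of that mixing, the engine is chosen per table in MySQL, so the two can coexist in one database (table names here are hypothetical):

    ```sql
    -- Fast reads and fulltext indexes, but no transactions or FKs:
    CREATE TABLE articles (body TEXT, FULLTEXT (body)) ENGINE=MyISAM;

    -- Transactions, row-level locking, foreign key support:
    CREATE TABLE accounts (balance DECIMAL(10,2)) ENGINE=InnoDB;
    ```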

    Flamewars have tended to pit PostgreSQL against a mythical database with the performance of MyISAM and the features of InnoDB...

    As for the GUI software, the MySQL GUI Admin/query browser stuff is shinier than PgAdmin3 - but the MacOS version of the former is a complete crashfest! Neither of them steps up to the plate of providing a FOSS equivalent of (the good bits) of MS Access.

  • " let your tax dollars do the work in the form of Fermi National Accelerator Laboratory"

    I don't begrudge the world the science (and other investigations) that Americans pay for. But that summary should read "let US tax dollars do the work".

    I wonder how many of Slashdot's foreign readers who usually rail against US-centric language in posts here will complain about how they get a free ride on this research.
  • by xyvimur ( 268026 ) <koo3ahzi AT hulboj DOT org> on Monday December 18, 2006 @10:31AM (#17285584) Homepage
    Ok, this is yet another outdated report comparing three mainstream RDBMSes - MySQL, PostgreSQL and Oracle. It was done for yet another physics experiment - choosing the proper system for storing data about the construction process of one of the LHC detectors, ALICE.
    And this report is at least professional, which cannot be said about the one mentioned in the article.
    http://dcdbappl1.cern.ch:8080/dcdb/archive/ttraczyk/db_compare/db_compare.html [dcdbappl1.cern.ch]
  • by darekana ( 205478 ) on Monday December 18, 2006 @10:42AM (#17285728) Homepage
    http://tweakers.net/reviews/657 [tweakers.net]

    They compare PostgreSQL 8.2 vs MySQL 4.1.20 and MySQL 5.1.20a.
    • by daBass ( 56811 ) on Monday December 18, 2006 @12:33PM (#17287382)
      That is a great review; an actual real-world scenario by guys who seem to know their database, not some spotty teenager who thinks a database benchmark is seeing how fast you can load 1 million records and then seeing how long a "select *" on that table takes...

      As the article shows, every time they double the number of cores, Postgres gains 75% in performance - like any good application should. At 4 cores, it is already twice as fast as MySQL under reasonable concurrency; I'd like to see this test on an 8-core server - my guess is MySQL wouldn't be much faster than it is now and Postgres would perform at least 3 times better than MySQL.

      Oh, and Postgres doesn't think 0000-00-00 is a valid date, which is nice too.
  • by brentlaminack ( 513462 ) on Monday December 18, 2006 @10:56AM (#17285892) Homepage Journal
    I did a presentation [laminack.com] at the Atlanta Unix Users' Group [auug.org] this month that is a more up-to-date comparison. It's available in Open Office [openoffice.org] format. You can also get to it from my home page [laminack.com]. I did a similar talk almost four years ago. My conclusion is that MySQL has closed the feature gap with PostgreSQL in recent years. I still give PostgreSQL the edge in features, and MySQL the edge in out-of-the-box untuned performance. I also discuss replication and clustering.
  • by Hornsby ( 63501 ) on Monday December 18, 2006 @11:25AM (#17286280) Homepage
    I had to make a decision recently between MySQL and PostgreSQL for a database composed of many tables with greater than 50,000,000 rows. While going through the decision-making process, I loaded a sample table with 50,000,000 rows to do some benchmarks. The first thing I had to do to run my tests was index the table. I started with MySQL using an InnoDB table type. I had both database servers relatively tuned to the hardware they were running on. I ran the create index with MySQL and detached my screen session. I came back several hours later to find MySQL was doing something along the lines of INDEX via REPAIR SORT. After some reading, I learned that this takes an order of magnitude longer than building an index the "normal" way and is caused by the index becoming corrupted during creation. Okay... so, I restarted this process several times and encountered the same problem. This is clean data, mind you, that had already been exported from an existing SQL server. I duplicated my install on a second server and had the same problems. Very annoyed with MySQL, I gave Postgres a try. It worked the first time, in less than 25 minutes, without issue. Since then I've been using it on 250,000,000-row datasets without issue. It's always reliable, and as long as you remember to use CURSORs for huge SELECT statements, it's painless to work with.
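
    For what it's worth, a minimal sketch of the cursor pattern the parent describes, with a hypothetical table name:

    ```sql
    BEGIN;
    DECLARE big_scan CURSOR FOR SELECT * FROM huge_table;
    FETCH 10000 FROM big_scan;  -- repeat until a fetch returns no rows
    CLOSE big_scan;
    COMMIT;
    ```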
  • Disappointing start (Score:3, Insightful)

    by smchris ( 464899 ) on Monday December 18, 2006 @11:30AM (#17286332)
    The first screen says MySQL supports ODBC and doesn't mention that PostgreSQL does as well -- so why should I read further? Either sloppy, ignorant, or biased writing.

    There were a couple of comparisons a couple of years ago. It was my understanding that PostgreSQL did better with large data sets in a P vs. M match. In another test that hammered MySQL, PostgreSQL, DB2, Oracle, and SQL Server with connections, if I remember, Microsoft's offering started to crap out along a power curve at maybe just 200(?) hits and the others degraded pretty equally along a straight line.

    My client/server experience started with some Oracle classes and managing a department server. I must say I am _much_ more comfortable with PostgreSQL and find MySQL a little alien no matter how popular it is. Just my 2 cents.
  • Unbiased? (Score:3, Insightful)

    by evrybodygonsurfin ( 360132 ) on Monday December 18, 2006 @11:54AM (#17286718)

    From the comparison table:

    • Postgres: Lacks binary distribution for all the supported plataforms.[sic]
    • MySQL: There are binary distribution for most of the supported plataforms.

    These statements convey the same information but that the author has presented them in different lights suggests to me a premeditated bias in favour of MySQL.

  • by Frater 219 ( 1455 ) on Monday December 18, 2006 @12:12PM (#17286996) Journal

    Does the Internet's favorite DBMS have an IP address datatype yet?

    How about MAC address? CIDR block?

    "An IP address is just a 32-bit unsigned int, duh. Any DBMS can store those."

    Wrong. A datatype isn't just about storage, but also about operations. In PostgreSQL, when you do a SELECT across a table with IP addresses in it, you get them formatted and displayed as IP addresses, not as opaque ints. Likewise with CIDR blocks, like "192.168.42.0/23". There's also a comparison operator for asking whether an IP address is within a CIDR block.

    If you're implementing a network registration system or an incident logging system, how much of your time do you want to waste staring at opaque ints like 3232246272 rather than IP addresses like 192.168.42.0 when you're trying to debug it?
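
    A small sketch of what this looks like in practice, with hypothetical table and column names; `<<=` is PostgreSQL's "is contained within or equals" operator for the inet/cidr types:

    ```sql
    CREATE TABLE incidents (src inet, note text);
    INSERT INTO incidents VALUES ('192.168.42.17', 'port scan');

    -- Matches because 192.168.42.17 falls inside 192.168.42.0/23:
    SELECT * FROM incidents WHERE src <<= cidr '192.168.42.0/23';
    ```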

    MySQL is a bimbo, a fratboy: it's easy, but so shallow! The amount of time you save in one-time setup, you will lose many times over in all the little annoyances and deficiencies of a DBMS that was originally designed by folks who didn't really believe in DBMSes. Over time they've slowly been shamed into including many of the features they used to despise: transactions, relational integrity checks, and so on. But there's still so much missing ... not just essential integrity features, but little fiddly bits like good datatype support, the kinds of things that make your life easier (as a programmer or as a DBA) in the long run.

  • by naChoZ ( 61273 ) on Monday December 18, 2006 @01:53PM (#17288808) Homepage Journal

    MySQL has a much larger user base than PostgreSQL, therefore the code is more tested and has historically been more stable than PostgreSQL and more used in production environments.

    "Claiming that your RDBMS is the best in the world because more people use it is like saying McDonalds makes the best food in the world."

    Sorry, just an old joke that deserved retreading... ;)
