Top 5 Reasons People Dismiss PostgreSQL 704
Jane Walker writes "In an effort to dispel some of the FUD surrounding this impressive product, this article puts forth several of the most commonplace reasons for a user to dismiss PostgreSQL." From the article: "While PostgreSQL's adoption rate continues to accelerate, some folks wonder why that rate isn't even steeper given its impressive array of features. One can speculate that many of the reasons for not considering its adoption tend to be based on either outdated or misinformed sources."
Availability (Score:5, Insightful)
First post?
Other things... (Score:5, Insightful)
Number 1 (missing) reason (Score:5, Insightful)
#1 Reason (Score:5, Insightful)
Fulltext Indexes (Score:5, Insightful)
Re:Web developers... (Score:5, Insightful)
I think what the grandparent meant is that a database can do more than handle SELECT, UPDATE, and INSERT queries, and that web apps that use DB backends contain a lot of code for functionality that could have been handled more efficiently and cleanly by the DB itself.
Ironically, I think your post kind of validates the grandparent, in that you seem to implicitly be thinking of SQL databases as little more than a better place to store data than a flat file.
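The point about a database being more than "a better flat file" can be sketched with a plain CHECK constraint. This is an illustrative sketch only, using Python's bundled sqlite3 rather than PostgreSQL, and the `orders` table and its columns are invented for the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Push validation into the database instead of duplicating it in every app:
con.execute("""
    CREATE TABLE orders (
        id       INTEGER PRIMARY KEY,
        quantity INTEGER NOT NULL CHECK (quantity > 0),
        status   TEXT NOT NULL CHECK (status IN ('new', 'paid', 'shipped'))
    )
""")
con.execute("INSERT INTO orders (quantity, status) VALUES (3, 'new')")  # accepted

try:
    # Any application, buggy or not, gets stopped at the door:
    con.execute("INSERT INTO orders (quantity, status) VALUES (-1, 'new')")
except sqlite3.IntegrityError as e:
    print("rejected by the database:", e)
```

In PostgreSQL the same constraints go in the CREATE TABLE DDL, and every application that touches the table gets the validation for free, with no application-layer code to keep in sync.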
Cheers.
Re:The name (Score:4, Insightful)
Why I have so far resisted PostgreSQL (Score:2, Insightful)
Having been forced to work with Oracle before they had a usable GUI (it can be argued they still don't), then MySQL Server, I learned to appreciate a database GUI. I've grown to *HEART* mysqlcc, and more recently mysql-administrator, mysql-query-browser, AND phpMyAdmin. Wake me up when the same are available for PostgreSQL AND they are bundled with major distributions like the MySQL tools are. Oh, and they need to WORK, too.
2. Familiarity
When I switched BACK to Windows without having touched Linux for 5+ years, the apps we initially standardized on used MySQL as the back end, many of them exclusively so. MySQL seems to be more ubiquitous in the OSS world, despite its license being less free than PostgreSQL's.
3. Time
Who has the time to investigate or extend PostgreSQL, and why bother when there is MySQL? I've read up a little on PostgreSQL and I like its feature set better than MySQL's, but I'd have to spend time learning about administering, backing up, restoring, configuring, and tuning it properly. I've already put that time into MySQL, and right now I need to learn the ins and outs of Asterisk on top of my usual workload. MySQL is running just fine; why switch now? When we develop an app for distribution which would not meet MySQL's licensing requirements (e.g., requiring us to GPL the product), THEN I will put time into learning PostgreSQL.
Re:Other things... (Score:5, Insightful)
If your application has a bug and inserts an invalid date, you don't want that error to cascade to another application (or another part of your application) and cause more errors down the line. By the time you detect the bug, it could be almost impossible to determine its source.
Putting consistency checking in application A doesn't prevent application B from inserting invalid data. And when application A reports an error (due to its wonderful in-application consistency checking), you don't know what caused it. It's long past the time when you could get meaningful state information from application B; at most you have database auditing tools that tell you "application B did it", but that's easily implemented in PostgreSQL as well (triggers).
And I'm not talking about super-advanced users only. I am talking about everyone who wants to catch the error early when they have the most possible information. Everyone who's just a programmer who wants to be able to trust that data from the database comes in a meaningful form. Everyone that just wants the database to do either what they expect, or throw an error.
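The "application B did it" auditing mentioned above can be sketched with a trigger. The schema is hypothetical, and sqlite3 stands in for PostgreSQL (where the trigger would call a procedure), but the idea is the same:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT, app TEXT);
    CREATE TABLE audit  (event_id INTEGER, app TEXT, at TEXT);

    -- Record which application wrote each row, at insert time,
    -- so "who did it?" is answerable long after the fact.
    CREATE TRIGGER audit_events AFTER INSERT ON events
    BEGIN
        INSERT INTO audit VALUES (NEW.id, NEW.app, datetime('now'));
    END;
""")
con.execute("INSERT INTO events (payload, app) VALUES ('order #1', 'app_a')")
con.execute("INSERT INTO events (payload, app) VALUES ('order #2', 'app_b')")
print(con.execute("SELECT event_id, app FROM audit").fetchall())
# → [(1, 'app_a'), (2, 'app_b')]
```

Because the trigger lives in the database, application B cannot forget to audit itself.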
Re:Or this? (Score:5, Insightful)
Benchmarks like that should be run with a couple of hundred active users all doing different things (mix of updates, insert, deletes, and selects) -- a real world use-case.
You will quickly find that scores change.
We had a MySQL versus PostgreSQL battle once. The MySQL people put together a benchmark showing MySQL was nearly 10 times faster. The benchmark was a single user going through the steps.
We then took the exact same thing and put Apache in front and benchmarked with 5 active users -- now they were about even. At 10 users PostgreSQL was about 5x faster. At 100 users MySQL was unable to finish the test. PostgreSQL was serving each of the 100 users at a rate comparable to how it dealt with 10 active users.
Benchmarks for benchmarking's sake are useless. Benchmarks that model your actual use case are quite valuable.
None of Those 5 Are My Reasons... (Score:3, Insightful)
A few years ago, I decided to learn a DBMS and teach myself SQL. I tried Access because it's "user friendly." Call me crazy, but I felt it was anything but. So I tried Postgres because everyone spoke so highly of it (and I'm very comfy with the command line). I read a lot of documentation and did a lot of things that felt like "progress" before I gave up.
I picked up MySQL next. It had some quirks, sure, but it was maybe an hour before I was comfortable enough with the DBMS that it didn't stand between me and learning SQL.
I picked up Postgres again last year and got much further along with it. I actually made a database, and it had tables and everything. I gave up because everything just "felt" more complicated than in MySQL.
I really want to learn Postgres. I do. I'm convinced it's more powerful and flexible. I just don't have the time, patience, or need.
Both MySQL and Postgres have their quirks that make it so you can't just jump in and start playing with SQL, and that sets the bar higher than it needs to be. Sure, every product will have some such complexity, but the lower the bar, the wider the userbase.
I don't think so (Score:2, Insightful)
One query - select count(*) from .... (Score:2, Insightful)
Try this on a table with a couple of million rows
select count(*) from tablename
or
select count(fieldname) from tablename
This is incredibly slow, as PostgreSQL scans the entire table! I know there are workarounds that will return an approximate count, but this isn't good enough. I keep hearing how it isn't possible, that the table stats can't be updated, etc., but other DBs handle this extremely fast.
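When an exact fast count is needed, one common workaround is to maintain a counter table with triggers. A sketch with Python's sqlite3 (table names invented); in PostgreSQL the approximate route the comment alludes to is reading the planner statistics (`reltuples` in `pg_class`) instead:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE big (id INTEGER PRIMARY KEY, data TEXT);
    CREATE TABLE row_count (n INTEGER);
    INSERT INTO row_count VALUES (0);

    -- Keep the counter in sync on every insert and delete:
    CREATE TRIGGER big_ins AFTER INSERT ON big
    BEGIN UPDATE row_count SET n = n + 1; END;

    CREATE TRIGGER big_del AFTER DELETE ON big
    BEGIN UPDATE row_count SET n = n - 1; END;
""")
con.executemany("INSERT INTO big (data) VALUES (?)", [("x",)] * 1000)
con.execute("DELETE FROM big WHERE id <= 10")

# O(1) read instead of scanning the whole table:
print(con.execute("SELECT n FROM row_count").fetchone()[0])  # → 990
```

The trade-off is a small write overhead per insert/delete, which is why the database doesn't do this for you by default.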
I love PostgreSQL but I won't recommend it to Clients yet.
Re:For me, one thing remains a [sad] fact (Score:3, Insightful)
Good for you for getting that right. A lot of skilled devs won't use anything but the tools they already know, even if there is an astonishingly simpler tool that will get the job done in 10% of the time.
Personally, I think of everything as Java + PostgreSQL, so I've still got a lot of room to grow.
Regards,
Ross
Why I chose MySQL (Score:5, Insightful)
About a year ago, I had a client that wanted a web site back-end written. Now, I wasn't sure what the future of that site was going to be, whether I was going to be involved, etc. I also knew that it would probably be run on inexpensive shared hosting solutions.
Guess what I chose? MySQL and PHP. The reason was because those are always available. It gives my client the flexibility to move it to any hosting solution. PostgreSQL simply is not everywhere. In my case, I run my own servers and can afford to have to understand it. But my client needs a hosting solution that does all the work for him (including back-ups). There's something to be said for using "the standard".
And you know what? I originally chose PostgreSQL because it was ACID compliant, but I have to say that MySQL sucks a lot less than it used to. It defaults to tables that support commit/rollback. It supports sub-transactions (which PostgreSQL v7 doesn't support, not sure about 8). It (FINALLY) supports sub-selects. If you're still turning up your nose at MySQL, it really isn't as bad as it used to be.
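The commit/rollback support mentioned here is what makes multi-statement changes safe: either everything lands or nothing does. A minimal sketch with Python's sqlite3 (the accounts table is invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
con.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])
con.commit()

try:
    # First leg of a transfer...
    con.execute("UPDATE accounts SET balance = balance - 80 WHERE id = 1")
    # ...then something goes wrong before the second leg can commit:
    raise RuntimeError("simulated crash mid-transfer")
except RuntimeError:
    con.rollback()  # the half-done transfer is undone as a unit

print(con.execute("SELECT balance FROM accounts ORDER BY id").fetchall())
# → [(100,), (50,)]  (unchanged)
```

On a storage engine without transactions, that first UPDATE would have stuck, leaving the books out of balance.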
How about... (Score:3, Insightful)
Good installation documentation for Postgres is pretty sparse. It's not too complicated, but it's not easy to find answers. This mainly includes how to properly set up pg_hba.conf, whose documentation is vague at best.
It might be better in newer installs but in RHEL3 I was just scraping along.
Application Support
As mentioned there are some great apps, but there just are not many applications supporting Postgres; most web apps are LAMP (with the M very much represented). I think it would help Postgres if there were a comprehensive PHP-PGSQL/PHP-MYSQL conversion-equivalents document or tool to help developers either transition or at least open up cross-platform support for multiple DB engines.
Documentation
Recently there have been a growing number of updated books on Postgres, including some which cover PHP as well, so that situation is improving; the books I had to work with were circa 2000 or earlier, before schema support.
So, yes, I tried it, for a while, almost got there, but I just wasn't achieving as much progress as I had hoped. Maybe later I'll go back when conditions get better.
Keep up the good work, I'll be watching.
Re:Other things... (Score:2, Insightful)
you mean "betamax", right?
No newbie guides (Score:4, Insightful)
Two of PostgreSQL's biggest problems: very little documentation that mere mortals can read (if they can even find it), and a rude, elitist cheering squad. The product may be the greatest thing to hit Open Source since RMS, but most people who need a database (usually for web dev, but yes, often for "real" applications as well) will never find out about all of PostgreSQL's golden features.
Re:Other things... (Score:5, Insightful)
Postgres -- getting closer to the kinds of features Oracle users want. Probably about at 8i level now. OTOH speed is nowhere near Oracle
MySQL -- Roughly the same speed as Oracle. Doesn't scale. Features ain't even close (though 5 is miles ahead of 3, another 10 years and...)
The problem for Oracle corporation is that 95% of Oracle users don't need the features of Oracle and thus MySQL is a good replacement for them.
Re:Availability (Score:5, Insightful)
(Only partially being sarcastic here. Access is fine for a lot of things. It's just the mentality that kills me. LAMP Developers are the true heir to the VB/Access Guy Mentality)
Re:Other things... (Score:5, Insightful)
MySQL's developers have never cared about enforcing database integrity and are only just starting to do so. The sad part is that the vast majority of the people who use MySQL because it is the default database don't fully understand what data integrity or consistency is.
Cheers,
Adolfo
Re:Other things... (Score:3, Insightful)
Yes, the fact is that MySQL is trying to adopt a more PostgreSQL-like philosophy, using their market share and name recognition. In the meantime, the current users are in for a rough ride of backwards-incompatibility.
So, the question is: does MySQL get PostgreSQL's features and philosophy in place before PostgreSQL gets MySQL's market share?
Re:Other things... (Score:2, Insightful)
For simple web stuff, sure, mysql is the way to go. But mysql's query optimizer still wets the bed for serious data churning. If you have heavy lifting to do and don't have the money for Oracle, postgres is golden. I just wish those guys could sell themselves better. Assuming they want to, of course.
Re:Other things... (Score:5, Insightful)
What if you have a table and you need a functional index on a user defined function? MySQL can't even do that, so it will be forced to scan the whole table. PostgreSQL makes it trivial.
PostgreSQL can also combine indexes into in-memory bitmaps before looking in the table. That means you don't have to make a multi-column index for every combination of attributes you select. This is done automatically by the planner.
When you actually make use of some of the "features that nobody needs" in PostgreSQL, you can see huge performance gains. I'm not sure how it stacks up against Oracle, because they don't let their users publish results.
But yeah, you're right. If you just port some MySQL code to PostgreSQL, the PostgreSQL code will likely not perform as well (unless there are joins where the planner can use some of the features like the bitmap index).
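A PostgreSQL functional (expression) index is written `CREATE INDEX ... ON t (lower(col))`. The sketch below uses Python's sqlite3, which supports the same idea, purely to illustrate the planner picking up an expression index; the table and index names are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO users (name) VALUES (?)",
                [("Alice",), ("Bob",), ("Carol",)])

# Index the expression itself, so WHERE lower(name) = ? needn't scan the table:
con.execute("CREATE INDEX idx_users_lower_name ON users (lower(name))")

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM users WHERE lower(name) = 'bob'"
).fetchall()
print(plan)  # plan shows a SEARCH USING INDEX idx_users_lower_name
```

Without the expression index, the function call on the column defeats any plain index on `name`, which is exactly the full-scan trap described above.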
Re:PostgreSQL is BSD licensed (Score:3, Insightful)
There isn't any way for someone to "take control" of a BSD-licensed project. Sure, someone could use a BSD project as the base for a proprietary project, but that isn't going to discourage anybody else from working on the open original. We have in fact watched several proprietary "improvements" of postgres quietly tank over the years.
On the other hand, MySQL AB owns mysql lock, stock, and barrel, and only releases GPL versions because it chooses to. They could announce tomorrow that all their future versions will be high-price closed-source shrink-wrap, and no one could say boo to them about it. The difference from the postgres situation is that MySQL AB could take with them the vast majority of the existing development expertise for the code base. Postgres will continue as an open-source project no matter what any one company thinks about it --- you cannot say the same about mysql.
Re:Other things... (Score:4, Insightful)
When it should be done, it should be done on the outside.
When it must be done, it must be done on the inside.
Cheers,
Adolfo
Re:Why I'd rather not use PostgreSQL (Score:2, Insightful)
Guys, guys, it may be that certain coding styles trigger issues that other coding styles don't. I've had that happen to me with other products. Syntax and coding techniques that I preferred just happened to bother that particular product while it worked fine for other styles. No product will please everybody.
Reason number 6 (Score:5, Insightful)
A few more reasons (Score:5, Insightful)
Vacuum kills performance. Some uses may be OK with losing 50% or more throughput while VACUUM runs; in some uses it's unacceptable. In our case (a lot of inserts, with the majority of selects going for the newly inserted records) performance degrades within 6-8 hours of running VACUUM & friends. VACUUM takes ~20 minutes to complete, which is completely unacceptable during the day, and we can't delay it till night.
No, AUTOVACUUM is not an answer, because it kicks in unexpectedly and makes random queries run unexpectedly slow at unexpected times. A regular VACUUM makes all queries run slow at a predetermined time. Not a very appealing choice.
More reasons:
Re:Why I'd rather not use PostgreSQL (Score:4, Insightful)
Here is a little background on why vacuuming is used:
When a user starts a transaction to a database the database will show data that was valid as of the start of the transaction. So if I issue a select that takes some time to return results and another user updates rows that I would be selecting after I issue my select, I will only see data that was valid at the time of my select rather than the newly updated data.
When that other user updated a row I was selecting from, a new row is inserted (or written to a previously deleted row). Any new transactions that select this row will get the new row rather than the old one my long-running transaction is still seeing. Once my transaction is complete, then the row with the old data isn't needed any more because newer data is in another row. This is called a non-overwriting storage manager.
What vacuum does is look for these unused rows. It goes through the tables in the background and sees if any transactions are using that data. If no transactions are using the row it marks them as free for the storage manager to write to. Future inserts can use that row to store data rather than adding to the end of the file. Vacuum doesn't move any data in this way. It's just marking rows that can be overwritten with new data. It's completely unobtrusive and a normal part of keeping the database running.
Some databases take a different approach. For example, Oracle uses an overwriting storage manager: when you update a row, the data in the row is physically overwritten. For crash recovery Oracle keeps what it calls a REDO log, which is like the journal in a journaled filesystem: it keeps track of all changes to the data in the database. For read consistency it also keeps undo information (rollback segments): any transaction that was open before the update notices that the data changed and looks at the undo records to see what the correct data should be for its snapshot.
I have heard that implementing an overwriting storage manager like Oracle has is very complicated, much more complicated than a non-overwriting storage manager like Postgres uses. I'm not a DB programmer so I don't know if that is true. I also heard a while back that the Postgres developers were investigating overwriting storage manager algorithms but I don't know what came of that.
Now, back to vacuum. In Postgres there is "vacuum full", which does move data around. It is used to compact the datafile, removing the space that's marked as unused and shrinking the datafile size. You should rarely, if ever, have to use this, as your unused rows will be reused by new inserts. At least "vacuum full" is easier than the Oracle equivalent.
Again, "vacuum full" is rarely, if ever, needed, and the Oracle equivalent even more rarely, due to how Oracle's storage manager works.
Sometimes the tool IS the problem (Score:4, Insightful)
No, the tool IS part of the problem in such cases. If the programmer was not that good at C then C was the wrong tool and thus part of the problem. The programmer should've taken the time to study up on C, or picked a different tool. There are times when the tool is not appropriate for the job--you probably shouldn't use C if you need to do heavy text processing and need to get the job done fast (use Perl instead), or if you are less experienced and want a language that supports sound object-oriented programming maybe try Python, etc.
MySQL was not designed as a robust relational database, and its creators didn't seem intent on making it one, or else they'd have designed it differently. It was designed as a very quick and quite dirty SQL frontend/ISAM backend system to support small, informal databases (or so it seems): basically, its heritage is to be like the old Ashton-Tate dBase, but using SQL to query the tables. Since then it has lost that focus, and now we have large websites storing millions of records in MySQL.
MySQL is a great tool if used as intended; however, it definitely IS a problem if, for example, your accounting system uses it. People started doing crap like that and complained about MySQL's lack of features, and thus we have things tacked on, like InnoDB tables, to add this robustness.
PostgreSQL was not always as super-robust as it is now, and in its present form its source code is probably almost unrecognisable from 10 years ago, however its architecture was more sound and thought out from the start, as its heritage was as an academic project. Its challenge was not to add features as was the case with mySQL--PgSQL was designed for extensibility. PostgreSQL had to catch up in performance and stability, which it has done in spades.
Personally, I always use UNIX timestamps (seconds since 1970). They can be directly added, sorted, and converted into any timezone, and they're very portable. But that's just me. (Yes, UNIX timestamps do nothing before 1970, etc., etc.)
It seems somehow wrong that your business logic has to perform low-level validation of basic datatypes, and it is cumbersome and error-prone to deal with unrecognisable representations. Only the geekiest of geeks could tell me whether 1984293617 falls on a Thursday without running it through some kind of conversion program (simple as that may be for a geek). What about people who point-and-click their way through some report designer? They're going to have to deal with some giant integer in a column entitled "something-date". The other problem is that it is not precise enough for applications that need sub-second timestamp values.
Personally, I like PostgreSQL because it accepts ISO standard formats, and you don't need to do anything to convert timezones--you simply specify the time zone when you insert or query, and it is smart enough to figure it out when you query for data in the Eastern time zone that was inserted in the Pacific time zone. Furthermore, it knows Feb 30 isn't valid, knows when leap years occur, can format the date in many different ways with simple built-in functions, can be accurate to the millisecond, and won't crash and burn in 2038.
FYI, I believe the "seconds since UNIX epoch" representation of date/time values is a SIGNED integer, so it is in fact good for dates earlier than 1970 (back to some time in late 1901, in fact). That is still a pretty limited range, which is why early systems often didn't use that representation (it couldn't store birthdates for a lot of people who were still alive in the early 1970s, because they were born before 1901). It is still a problem in some applications, and that is why 32-bit "UNIX-style" time is discouraged.
I think it's a shame that people resort to such kludges without adequately looking for more appropriate alternatives... but that's just me.
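For what it's worth, both complaints above are easy to demonstrate in a few lines of Python (the timestamp is the one from the comment; whether it "falls on a Thursday" is opaque until converted):

```python
from datetime import datetime, timezone

# The "giant integer in a column entitled something-date":
dt = datetime.fromtimestamp(1984293617, tz=timezone.utc)
print(dt.isoformat(), dt.strftime("%A"))
# → 2032-11-17T08:40:17+00:00 Wednesday

# A real date/time type refuses impossible dates such as Feb 30:
try:
    datetime(2033, 2, 30)
except ValueError as e:
    print("rejected:", e)
```

A raw integer column would have accepted any value at all; a typed date column gives you validation, formatting, and timezone handling for free.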
What an attitude (Score:5, Insightful)
I'm glad you don't work for me with that attitude. I'd rather work with someone who is interested in learning new things and will bring some creativity to the job. People of your mentality have to be careful they don't fall into the "false laziness" trap: using some tool or technique or technology because you are too lazy to learn something new, only to end up doing loads of extra work to avoid the shortcomings of your inappropriate design choices. The result is scads of legacy code at higher layers of an application to handle things like datatype verification, basic referential integrity, and so on.
All the various different executables to do different tasks rather than one shell like MySQL's, and a permission system which seemed, from my limited usage, more perverse than MySQL's
I've never found it to be a major struggle to use PostgreSQL, though being a more full-featured database it will naturally be a bit more complex to manage.
I'm puzzled about the "all the various executables" part too, since many of them are invocable from the psql shell anyway. Also, it sounds like you've not looked at PostgreSQL for a while, because its permissions system has undergone a lot of work--certainly it can be complex, but it is very flexible and powerful, and honestly it gets rid of most excuses for executing all your database operations under the database superuser (or some other single user account) in your backend code.
I have better things to do with my time, like write cool code that uses MySQL.
You might want to examine how you use your time... if you had spent a few hours or a couple of days learning something new for a change (like PostgreSQL), it might have saved you hours or even weeks of frustration trying to use MySQL for too-complex tasks.
MySQL might have grown up a lot in recent years, but at its heart it was meant for much more modest tasks: storing guestbook entries, record collections, serving as a temporary datastore/embedded database, high-performance querying of relatively static and/or non-critical data, and so on.
Re:Other things... (Score:5, Insightful)
In particular this is likely to allow security problems in the application to cause malformed data to be entered into the database, thereby affecting other applications.
Re:Avoidance through Ignorance (Score:4, Insightful)
"Most people avoid Linux because they are totally ignorant and are going with the popular flow no matter how ugly it is. They've jumped on the Windows bandwagon with no regard for what they are missing."
I think that you, as someone not proficient in Linux, should at least appreciate that some things are a pain in the ass to learn, even if there are considerable benefits. Don't be so quick to judge people who haven't adopted your favorite database package or OS or seafood dish.
Re:Personal .02 (Score:3, Insightful)
I'm fairly sure that if Ferrari produced a bunch of promotional material showing the horse crapping on the Porsche logo someone would say the same sort of thing :)
Comment removed (Score:5, Insightful)
Re:Web developers... (Score:3, Insightful)
There is some logic that is easily moved to the database layer and does not need to be part of the application logic. I have seen web apps that do running totals or category totals completely on the fly when a simple SUM statement would have sufficed. Logic for INSERT vs. UPDATE statements can actually be handled by the database in some circumstances, and doing so allows greater flexibility in the future. Want to add traceability? If you already took the previous step, this now becomes trivial, because it can be done in the database without ever touching the application code.
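The running-totals point can be sketched in a few lines; this is illustrative only, with sqlite3 standing in for the web app's database and an invented `sales` table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (category TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("books", 10.0), ("books", 5.0), ("toys", 7.5)])

# Instead of looping over every row in PHP/ASP/JSP to accumulate totals,
# let the database aggregate:
totals = con.execute(
    "SELECT category, SUM(amount) FROM sales GROUP BY category ORDER BY category"
).fetchall()
print(totals)  # → [('books', 15.0), ('toys', 7.5)]
```

One declarative statement replaces a page of accumulator code, and the database can use its indexes and planner to do the work efficiently.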
Same thing with the front-end. Since we are talking web, there are many reasons (beyond efficiency) to keep your HTML sparse and to place most of the formatting and prettiness in external CSS files. This does add complexity if you judge complexity by number of languages, but it adds maintainability. Need to make vast sweeping changes to the color of something on the site? No problem: you don't need to know PHP/ASP/JSP/whatever, just modify the CSS file (if it was done right). You still have some crossover when it comes to the server-side code operating behind the scenes, as you do need to know CSS, HTML, and whatever server-side language you're using, but what you won't be doing is searching through who knows how much code for the single remaining bgcolor attribute or inline style attribute.
I agree that taking this concept too far is worse than no separation at all. But in moderation you do get a large number of advantages. I actually had a good example of "so far as to be useless" the other day, when our co-op explained the restrictions their teachers were putting on them for a software project:
Five distinct layers in a web app, not counting CSS, with every layer bleeding into at least two others. A change to the database directly affected a minimum of two other layers, as the hardcoded SQL statements were in the next layer and the table-specific classes were in an additional layer (relationships were variable references between the objects)... there was more, but it bothers me every time he mentions it.
Re:Other things... (Score:3, Insightful)
right on.
MySql should be considered for things that only need to be structured a little better than a plain text file, when the number of entries renders the text-file approach impractical. This is, say, web apps where you only need to save name/email/address, etc. For this, MySql is better and faster than Postgres and therefore a better option.
For applications whose main need is to have a large number of complex related data stored, don't even think MySQL here.
Especially if your queries use a lot of joins (say 3 or more), Postgres is the way to go: good performance, better data integrity, and many more (used/useful) features. If you do a lot of deletes/updates, then you'll have to have a good "cleanup" procedure or performance will become very low. This can be done on the fly, usually during a time period when the data isn't much used (most usages have a period when access is low, but not all).
Version 4 was a great improvement on almost all of this.
Re:The name (Score:3, Insightful)
Re:Sometimes the tool IS the problem (Score:2, Insightful)
On MySQL doing only "informal" databases, you have a point: it is not good on points like referential integrity. But on "small", I think you are unfair. It scales very well to very large numbers of records/gigabytes, and performs well when doing so. If your challenge is sheer numbers of records, MySQL is good. It is still OK if you have very few developers who understand the entire database and its constraints. If you have many tables with complex relationships and many developers, some of indifferent quality, you need a lot more enforcement than MySQL gives you.
why I don't use postgresql (Score:3, Insightful)
With postgresql you have to run pg_dumpall and then restore it after upgrading to the next major version, which is extremely annoying. And half the time it doesn't work, or doesn't work according to the postgresql docs, or sometimes loads the tables but not the users, or vice versa. That's stupid, and it's what keeps me from using postgresql.
My reason - upgrading woes (Score:2, Insightful)
From time to time they change the on-disk structure of the database. This is toxic waste. If you are not aware of it, and you upgrade your system and Postgres is updated, you will have to move the data to a machine that can still understand the old database format, dump it, and migrate it back in. This has kicked my ass more than once, and it seems like it is always at the most inopportune time as well. Then there is the vacuum nonsense to deal with.
With Mysql, I have never had this problem. The database just works regardless of an upgrade to the way they do things. It takes care of it. I don't recall one time where I had to dump and migrate my data, except from one machine to another. Usually I just move the files, even from platform to platform, and they are still fine. In fact I have had mysql applications running for years without any intervention. It just works. I can't say that about Postgres, nor about Oracle.
Re:Why I choose MySQL... (Score:2, Insightful)
So MySQL replication does support a master to multiple slave setup where you can failover to one slave and the new master inherits all other slaves without the requirement of resynchronization. And when you later repair the original master, you can fail-back without significant downtime, right?
I might not have looked at it for a long time, but the last time I did, it only allowed promoting a slave to a single standalone database.
I do admit, the Slony-I replication system has a lot of shortcomings, most of which are due to the original design goal of "being able to install on an existing, old Postgres version and use it to upgrade to newer ones". But that mostly affected the implementation, not the initial design of features.
Jan