.org TLD Now Runs on PostgreSQL

johnnyb writes "The .org domain, which has long run on Oracle systems, is now being transferred to a PostgreSQL system. I guess we can now dispel the 'untested in mission-critical applications' myth."
This discussion has been archived. No new comments can be posted.


  • by LinuxCumShot ( 582742 ) on Friday January 24, 2003 @03:50PM (#5152845) Homepage Journal
    .ca runs on MS-DOS running some home brew DB that is just a bunch of batch files
  • Oracle... (Score:5, Funny)

    by killthiskid ( 197397 ) on Friday January 24, 2003 @03:51PM (#5152848) Homepage Journal
    "No one ever got fired for selecting Oracle, so we asked ourselves, Do we take that option?" he said.

    Not true! I know someone who got fired for choosing Oracle and then being unable to implement it properly.

  • by TerryAtWork ( 598364 ) on Friday January 24, 2003 @03:51PM (#5152851)
    Now we get to see how PostgreSQL handles the 98% of wasted queries from DNS servers that don't know .elvis is not a TLD.

  • by etcshadow ( 579275 ) on Friday January 24, 2003 @03:55PM (#5152882)
    Because they don't take context or purpose into account at all. There are things that Postgres may be better for and things that Oracle certainly shines at. I mean, hell, I love MySQL, too, but I wouldn't want to use it as the backend for _my_ system. Not that the others are holistically "bad", it's just that Oracle is the most appropriate for this situation.

    What's a TLD doing with a database? Making ridiculous numbers of extremely lightweight queries, and managing redundancy. That's not necessarily the same thing that everybody wants an "enterprise class" "tested" database to do for "mission critical" tasks.
    • Actually, this is a good question. What is the database used for? Profile information for WHOIS searches? That would make the most sense, and isn't *that* big a deal. A database to handle name resolution is a bit of overkill I think.

      And that's not to detract from the fact that, yes, it's good to see PostgreSQL getting some mainstream fame.
      • by Zeinfeld ( 263942 ) on Friday January 24, 2003 @04:43PM (#5153209) Homepage
        Actually, this is a good question. What is the database used for?

        The database is a buffer between the requests coming in from the registrars and the DNS resolvers. So you get a bunch of requests coming in once a day saying stuff like 'change DNS to', and the registry has to decide what to do with them. To do that they need to have a bunch of info stating which registrar owns the account at the time, and so on. And yes, it is not unknown for registrars to attempt to do things they should not.

        The DNS infrastructure that is queried by your DNS server is completely separate. Every hour or so the SQL database will do a dump, which will then be checked and, if it passes, sent to the production DNS infrastructure, which is essentially a read-only affair.

        So no, this does not mean that every DNS lookup in .org is going to result in a PostgreSQL transaction. Nor can you say anything about whether this deployment proves PostgreSQL is ready for prime time, at least not yet you can't. You probably want to wait and see how the zone holds up over the next few months before drawing any conclusions.

        BTW the technical name for Oracle features is 'complications'.
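
        A rough sketch of the hourly dump flow described above, with a completely invented schema (these are not the actual .org registry tables), might look like:

        ```sql
        -- Hypothetical sketch only: table and column names are invented,
        -- not the real registry schema. Registrar change requests update
        -- the live database; a periodic job then exports the active
        -- delegations for the DNS side to load.
        SELECT d.name, ns.hostname
          FROM domains d
          JOIN domain_nameservers ns ON ns.domain_id = d.id
         WHERE d.status = 'active'
         ORDER BY d.name;
        -- A cron job could write that result to a file, e.g.
        --   psql -A -t -c "..." registry > org.zone.new
        -- which gets sanity-checked before being pushed to the
        -- (read-only) production DNS servers.
        ```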

    • The point (Score:5, Insightful)

      by Synn ( 6288 ) on Friday January 24, 2003 @04:05PM (#5152957)
      I don't think the issue is that PostgreSQL will crunch data as well as Oracle. It's just that PostgreSQL has always had an undeserved reputation as "the database to use when you can't afford a REAL database", when actually it's a very robust and secure system that can compete quite well with commercial systems.

      I'd really like to see some serious tests done with PostgreSQL. Database systems, especially Oracle, can be an expensive part of a datacenter. Considering that with Linux/PostgreSQL your only cost is hardware/support, it may very well scale more cost effectively than Oracle.

      There's currently way too much marketing and FUD to get a real idea how these systems compare though.
      • Re:The point (Score:2, Interesting)

        by etcshadow ( 579275 )
        Well, that's exactly my point. I'm not saying "I don't trust PostgreSQL"; I'm just saying that this doesn't really prove anything on its own.

        Good for them. Hell, great for them. I'll admit that I really like Oracle, but it's not the one and only universal hammer.

        The truth is that it is very difficult to really express what any particular DBMS is good at, bad at, or worth in $$$. Too much of the time, the people who actually make purchasing and deployment decisions on database platforms don't really understand the issues. I think that is a large part of why such comparisons aren't very prevalent: the people who could understand them are not the ones who would be using them, so why bother? Just publish FUD, and claim that you either Innovate or are Unbreakable. :-)
      • TCO (Score:4, Informative)

        by oliverthered ( 187439 ) on Friday January 24, 2003 @04:24PM (#5153094) Journal
        from the article it didn't look like TCO was a factor.

        1: they liked versioning in Postgres.
        2: they liked the open source community.
        3: Oracle didn't have anything [that was useful] over Postgres.

        Maybe 2 relates to TCO; the amount you'd have to pay to get the same level of developer support on Oracle would be huge.

      • Re:The point (Score:5, Interesting)

        by ortholattice ( 175065 ) on Friday January 24, 2003 @05:19PM (#5153443)
        I'd really like to see some serious tests done with PostgreSQL.

        I love PostgreSQL, have used it in a small (million-record) transactional application with great success, and am pleased to see the implied advocacy of having .org run on it. Nonetheless 2.4 million records is hardly enterprise-level stress. I would really like to see some serious benchmarks against Oracle. My tests on a small PC-based Linux server last year showed that pg beat Oracle mainly because the bloat of Oracle caused excessive thrashing, but on a large mainframe-type application - billion-record type stuff - I simply have no idea. A couple of years ago some benchmarks were published on the web but got quickly taken down by Oracle under threat of lawsuit - their license doesn't allow publication of benchmarks - and I never got to see them. I think this is wrong. Perhaps the recent ruling against EDA benchmark restrictions [] will open a door towards Oracle benchmarks?

    • by Anonymous Coward on Friday January 24, 2003 @05:11PM (#5153398)
      If I hear those words again, I think my head will explode. I don't remember anyone saying to use a wrench as a hammer if you have both. When people argue these things, they are arguing that a tool is the right tool for a job. PostgreSQL is being argued to be a tool that can be used for enterprise jobs. Either confront that or don't; don't just state the obvious. No one said PostgreSQL is the only database to use.

      I hear this every time a programming language is mentioned too. Either say Java can't do what Perl can, or that Java is slower than Perl, and back those up. Don't say Java is good and Perl is good, because everyone knows that.

      I don't mean to take my frustrations out on you, poor poster; it's just high time people realized that this is like arguing Phillips or flat-head. It should be a poll option because it's preference, not because there is ever a right or wrong database for a job. It's a choice. After this has been going for a while without problems, we can then proceed to choose PostgreSQL to save money or because we like it better than Oracle or DB2.

      It's as annoying as:
      1: In Soviet Russia, Vi > Emacs
      2: ?
      3: "MOD PARENT UP"

  • slashdotted (Score:4, Funny)

    by gokubi ( 413425 ) on Friday January 24, 2003 @03:55PM (#5152883) Homepage
    I hope computerworld isn't running on PostgreSQL!
  • Overkill. (Score:5, Funny)

    by grub ( 11606 ) on Friday January 24, 2003 @03:56PM (#5152891) Homepage Journal

    All they need is netcat, shell scripts and grep.
  • by kschendel ( 644489 ) on Friday January 24, 2003 @03:57PM (#5152900) Homepage
    Verisign runs the shared registry with Oracle, but the registrar-specific data was and still is stored using Ingres.
    • If so, that's interesting, when you realize that Ingres is the predecessor to Postgres, which later became the PostgreSQL project. Postgres originally used POSTQUEL, a variant of Ingres's (theoretically more relational) QUEL language, but with SQL being the de facto standard, they decided to switch, renaming the project PostgreSQL.
  • Um (Score:5, Insightful)

    by TekReggard ( 552826 ) on Friday January 24, 2003 @03:59PM (#5152918)
    "Mohan said the decision to award the contract to a vendor deploying PostgreSQL vindicates the database as a reliable, stable management system."

    No, it simply means that it's going to be tested in a larger environment, and if it does well they get to party and say "woohoo, it worked!", and if it flops they're all gonna feel really stupid. It doesn't mean it's stable at all. The common practice of saying "LOOK! Someone is using our product, so it MUST work perfectly" is actually quite disturbing.

    • I don't know how process works where you work, TekReggard, but by the time most of us go live with a project, the testing period is over. If you wait until the world starts hammering on the service to see how well it can hold up, you've waited too long and shouldn't be surprised if/when it does fail.

      True, no testing environment will ever duplicate real world conditions exactly, but since this project ended up going live it's safe to assume that PostgreSQL passed the tests with flying colors. While failure is still a possibility, it seems unlikely.

  • by Kenneth Stephen ( 1950 ) on Friday January 24, 2003 @03:59PM (#5152920) Journal

    Please, please, please tell me that there is some commercial entity that they have contracted with for support. I really don't want my domain to be unreachable because they do their own support and are debating which fix is the "right thing to do" so that upstream accepts it.

    • Why do you assume that an outside, commercial entity would be more competent than somebody on their own staff? If they need a PostgreSQL expert, why would it be better to pay a "commercial entity" to provide them with an expert, rather than just hire the expert?

      I'm not saying that's what they did; of course it would depend on whether they have enough work to hire the expert, for example. But I don't understand the reasoning that says "there's no way that we can hire competent staff, but surely if we pay another company enough, they'll have competent staff."

      • by Kenneth Stephen ( 1950 ) on Friday January 24, 2003 @04:21PM (#5153064) Journal

        Competency isn't the issue here. I am assuming that whoever the actual developer of the fix is, they will be extremely competent in fixing the problem. With an external entity, contractual terms of delivery will twist their arms into fixing severity-1 problems with the urgency they deserve, regardless of whether the fix is the best possible coding/architectural solution for the overall Postgres project. With an internal entity, the pressure will be less, because if management threatens to "chop the head off" someone for trying to do the "right thing" instead of just fixing the problem, they will have to stop and consider that they are damaging their own organization. It is always easier for management to be brutal with external entities than with one of their own.

        • So you can "chop the head off" the consultant, because they're less important than internal employees. Therefore they are more valuable. Yeah, whatever.

          I have to say, I'm pretty suspicious of any management theory which is predicated on the notion that your best option is the one that allows you to treat people the shittiest. And God forbid anyone would do the "right thing".

          Just exactly why do you think an internal employee's idea of what constitutes the "right thing" would be inconsistent with management's, anyway? If indeed you find yourself dealing with employees who let the world collapse around them while they fritter away their time on trivia, they should, in fact, have their heads chopped off. But personally, I know precious few people who would behave in such a manner.

      • Go to []. They have commercial support, just like any regular company. In fact, at the higher levels of support, they even throw in a commercial replication/distributed-querying system. This is really the best of both worlds. You get the full source of the application for your own review, and you get to call an expert anytime you like.
    • Well, Red Hat is using PostgreSQL for their Red Hat Database [] package and presumably would provide support for it. You can also find support partners for PGSQL at [].
  • How fitting... (Score:3, Redundant)

    by exhilaration ( 587191 ) on Friday January 24, 2003 @04:00PM (#5152924)
    That the .org TLD, where one will find the vast majority of open source projects, is using open source software for mission-critical tasks.

    Today is a good day for open source and free software!

  • Not a surprise... (Score:4, Interesting)

    by frodo from middle ea ( 602941 ) on Friday January 24, 2003 @04:04PM (#5152943) Homepage

    I had the misfortune of dealing with the Oracle tech support team once, and I can say I am not surprised the ".org" domain has shifted to PG.

    The DB was locking up when trying to retrieve data from a large table (>10 M rows) using a very complex query. The Oracle guys kept suggesting that we reduce the size of the table.

    Now seriously, is that a valid option? Hey man, I have a million bucks in my account and I can't withdraw from the ATM?
    Just delete some of it and then try again?
    Or the most common answer from the Oracle tech team: "we know it's a problem but we will not fix it in this release. Just buy the next version if you want it fixed."

    • Re:Not a surprise... (Score:2, Informative)

      by BigGerman ( 541312 )
      Just did another Oracle TAR (technical assistance request) via their Metalink site.
      In 5 minutes, there was a real person working on it.
      In 20 minutes, he explained the behaviour (an Oracle bug) and suggested a workaround.

      Disclaimer: I do not work for them, do not rely on income from DBA work, and do prefer Postgres for my own projects.
      • Just did another Oracle TAR (technical assistance request) via their Metalink site.

        Ya know, PostgreSQL has multiple levels of support as well... I believe you would have as good response times with them, especially at their Platinum level of support.

      • Hm. I filed a TAR a couple of years ago for a trivial bug which I had already completely diagnosed, related to a bug fixed in an earlier version of Oracle, and posted the name of the function that needed to be altered; 4 months later (when I left the company I was working for), they had not fixed it.

        This was on the Linux version of 8.1.5 we used for development, and we were deployed on HP-UX, so it wasn't a huge issue, especially since I managed to patch it myself within a few days of reporting the bug, but I really expected better of them.
    • In my company, we have an Oracle DB, from which some very mission critical applications written in FORTRAN get inputs. A few years ago, we had to upgrade from Oracle 7.3 to 8i, because the old AIX server was developing some old-age problems. To our disgust and surprise, we found that Oracle had completely dropped FORTRAN support from their product.

      Worse than that, it took several months for the Oracle support people to actually find out what had happened to FORTRAN. At first they told us that it was still there, but our new system wasn't configured right. Dozens of emails later, they finally found out the truth, and admitted it to us: you cannot rely on Oracle.

    • by Chazmyrr ( 145612 ) on Friday January 24, 2003 @05:26PM (#5153486)
      You are retrieving data from a 10 million row table using a very complex query and you are having performance problems? Who would have thought that?

      Normally I get paid a lot of money to solve problems like this but I'll give you a little guidance for free since you didn't like Oracle's answer.

      1) Maybe you should think about optimizing your query a bit. Running complex queries against 10+ million rows can be problematic even when the RDBMS has a good optimizer. Is there a less complex way to accomplish the same thing? If not, you may have to give the optimizer hints. Can you use an index to pull a smaller dataset into a working table where you do your complex operations?

      2) Profile your system to determine where the bottleneck is. Is it CPU-bound or IO-bound? If it's IO-bound, would more memory help? Can your tablespace be spread across more disks? Would a beefier system be appropriate? Cost-effective?

      This is why you hire qualified developers and administrators. I'm not surprised the tech team gave you that answer. You call the tech team when there is a real problem with the software. If you were paying Oracle to develop the database for you, you might have a case. But then, if that were true, you wouldn't have called tech support, would you?
      • Re:Not a surprise... (Score:3, Informative)

        by OrenWolf ( 140914 )
        My god.

        Go crawl up your database and hide in a cell.

        Firstly, reducing the amount of data is crap. What if you have 10 million records? Should the answer from *tech support* be "well, don't!"?? That's what you advocate.

        Oracle dropped the ball here. First, because the database *crashed* on the query. If you're telling me that *any* query I run should be able to outright *crash* the database then go work for Microsoft on MS-SQL. Worst case, the database should thrash incessantly (and accept a kill) or consume too much RAM and kill itself off, but certainly not HANG. I can't believe you suggest that's the fault of the person running the query and not the developer.

        But secondly, and most importantly, Oracle should definitely offer tips on what to do. I mean, regardless of the situation, the thing ran, and *died*. Not slow. Not exceeding resources. Died. If it's a bug, fine. Then you offer a bloody workaround, *especially* if you have no intention of fixing the bug!

        I mean since when is *crashing* an app not a reason to call tech support? Is it because you run Windows and are *used* to the tech support response of "Reboot, try again"??

    • Re:Not a surprise... (Score:3, Informative)

      by MmmmAqua ( 613624 )
      Or the most common answer from the Oracle tech team: "we know it's a problem but we will not fix it in this release. Just buy the next version if you want it fixed."

      Actually, they suggest you upgrade to the newest version, not that you buy anything new. Licenses purchased from Oracle are for a product family for a length of time determined by the license. For example: if you bought a four-year single cpu Enterprise Edition license two years ago when 8i was the current release, you have the right to use 9i, and 10i when it appears, until the end of your license term.

      ...according to my Oracle sales rep.
  • I hope this isn't the reason why they sent me an email yesterday morning with a list of over 86,000 valid contact email addresses. Here's an article [] about it.
  • by Kenja ( 541830 ) on Friday January 24, 2003 @04:06PM (#5152965)
    If it breaks they can just go to to get updates and.... oh wait.
  • Ummmm.... (Score:3, Funny)

    by Grip3n ( 470031 ) on Friday January 24, 2003 @04:10PM (#5152992) Homepage
    ...for some reason I can't resolve anymore...
  • by j_kenpo ( 571930 ) on Friday January 24, 2003 @04:13PM (#5153009)
    Well, we ran Postgres as the primary database for a managed network security system, and the Postgres database stored all alerts coming in from all our sensors, which included a .EDU that had quite a bit of traffic going through it (our own implemented honeypot). The only issue we ran into was with disk space for packet logging, which was unrelated to the Postgres database. We would get any number of hits per day into the database (sometimes over a million in a week's time). I've come to prefer Postgres over MySQL, although I'd still take Oracle over either if I could afford the license.
  • by Karora ( 214807 ) on Friday January 24, 2003 @04:15PM (#5153030) Homepage

    I was a designer of the system that runs .nz (New Zealand), which is also based around PostgreSQL, running on three replicated back-end application servers.

    The system was developed in mod_perl and went live on October 14th, 2002.

    The plan is to release this (including client software) under the GPL after a stabilisation period.

  • ...don't you have to periodically stop a PostgreSQL server to run VACUUM or something similar to clean out old deleted rows? I know some of you who are actual PostgreSQL users will probably correct me (or tell me I'm talking out of my hat), but I think this might make it hard to run PostgreSQL in a 24/7/365 environment.
    • At one point it was recommended not to 'vacuum' a live database, but that was fixed years ago.
    • by Anonymous Coward on Friday January 24, 2003 @04:43PM (#5153216)
      You can vacuum any time without shutting things down. You don't even lock a table, thanks to the wonderful MVCC. But...

      The real problem with PostgreSQL, however, is that if you are doing lots of updates where the keys increase forever, the index files grow forever. You can, of course, drop and recreate them (which we do in a cron job), but in a real 24/7 environment you've got a real problem when your queries all turn into table scans because the indexes aren't built yet.

      Here is some more information [] (see "Index Maintenance").

      The only option I know of is to have two sets of tables and swap between them.

      -- ac at work
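
      The drop-and-recreate cron job mentioned above might look roughly like this (table and index names are invented for illustration):

      ```sql
      -- Rebuild a bloated index on an ever-increasing key.
      -- The table is effectively reduced to sequential scans between
      -- DROP and CREATE, which is exactly the 24/7 gap complained
      -- about above.
      DROP INDEX events_stamp_idx;
      CREATE INDEX events_stamp_idx ON events (stamp);
      -- REINDEX TABLE events; is the built-in alternative, but it
      -- also locks the table while it runs.
      ```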

    • I was wondering the same thing myself.

      Over the past two years, I've spent a great deal of time working with postgresql with relation to an online game I've been helping to develop (Open Merchant Empires []).

      We've been able to get good performance out of postgresql as long as we don't expect 24/7/365 availability. They've made great progress in making the VACUUMs less intrusive, but we've always run into trouble if we don't impose a regular maintenance schedule on database availability (very regular partial vacuums, which slow the database down considerably; semi-regular full vacuums, which lock up the database; and occasional full rebuilds).

      I'd love to learn how they achieve the high availability I'd expect you'd need for a TLD database server.
    • by JohanV ( 536228 ) on Friday January 24, 2003 @05:25PM (#5153475) Homepage
      Yes, you are wrong, as of PostgreSQL 7.2 VACUUM can run without locking the table completely.

      Garbage collection is a problem every database faces. Due to ACID requirements it is pretty much (absolutely?) impossible to run a database that updates rows without having multiple versions of the same row on disk at some time during the operation. So at some point in time you have to get rid of that duplicate. You can choose to do that after commit of a transaction (or the last transaction for which the row is still visible), but that would potentially make every transaction slower. So in PostgreSQL the choice was made to do this at an administrator determined moment (and I presume that choice also was the easy one).
      In older versions of PostgreSQL VACUUM would lock the entire table and physically force all the valid rows to be rewritten consecutively and then reclaim the space at the end. This mode is still available as VACUUM FULL, but nowadays there is a new mode (sometimes called lazy vacuum) that only marks space safe to be overwritten. Subsequent updates/inserts will overwrite it eventually.
      Regular running of this command will eventually lead to some steady state where there is some x% of bloat in the table, but there is no significant amount of locking required.
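
      To illustrate the difference between the two modes (table name invented):

      ```sql
      -- Lazy VACUUM (7.2+): marks dead row versions as reusable
      -- without an exclusive lock, so readers and writers keep going.
      VACUUM ANALYZE accounts;

      -- VACUUM FULL: the old behaviour; physically compacts the table
      -- and reclaims disk space, but locks it for the duration.
      VACUUM FULL accounts;
      ```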
    • You don't stop the PostgreSQL server to run vacuum in 7.x versions; you can do it in the background.

      What you'll really miss in PostgreSQL for 24/7 is good replication. But they are working on it.

      By the way, are you sure you want 24/7/365? I think 24/7/52 will be more correct, no? I don't think that 7 years of uptime is a good idea when you want to upgrade your software (usually you stop/restart the service for it) about ever year.

  • vs. MySQL (Score:2, Interesting)

    by dracocat ( 554744 )
    Besides the "MySQL rulez" comments, how DOES MySQL compare with PostgreSQL? I must admit I was turned off of MySQL a long time ago, as soon as I realized it didn't support transactions.

    However, I have never been happy with Microsoft's SQL Server, and have heard rumors that MySQL has come a long way since I looked at it 3 years ago.

    But what I don't know is where PostgreSQL fits into all of this. I mean, if it IS the better system, why do I only hear MySQL when someone is talking about open source databases?
    • Re:vs. MySQL (Score:4, Informative)

      by Styx ( 15057 ) on Friday January 24, 2003 @04:37PM (#5153187) Homepage

      MySQL performed better than Postgres, especially on select-only queries, until not too long ago. I did some profiling on a web-based app at work where MySQL outperforms Postgres, and it turns out that only approx. 0.02% of queries are INSERTs or UPDATEs, so it seems MySQL still has an edge in some applications.

      Postgres also seems to have an (unfair, IMHO) reputation for being hard to set up.

      And yes, MySQL has come a long way in the last 3 years, and does support transactions now.

    • Re:vs. MySQL (Score:3, Interesting)

      by PizzaFace ( 593587 )
      MySQL is faster for simple reads, and therefore a better match for the read-mostly databases that back most websites. PostgreSQL uses versioning for concurrency control, so it scales better with write-often databases. PostgreSQL also has more programming features (triggers, stored procedures, etc.).

      But if you're looking for something to replace Microsoft SQL Server on Windows servers, PostgreSQL is probably not your best bet, because it's really a Unix database and still runs on Windows through a Unix-emulation layer.
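
      As a small illustration of those programming features, here is a sketch of a PL/pgSQL trigger that stamps rows on update; all table, column, and function names are invented, and it assumes the plpgsql language is installed (createlang plpgsql) on a reasonably recent PostgreSQL:

      ```sql
      -- Trigger function: set the "modified" column on every update.
      CREATE FUNCTION touch_modified() RETURNS trigger AS '
      BEGIN
          NEW.modified := now();
          RETURN NEW;
      END;
      ' LANGUAGE 'plpgsql';

      -- Fire it before each row update on the (hypothetical) table.
      CREATE TRIGGER orders_touch
          BEFORE UPDATE ON orders
          FOR EACH ROW EXECUTE PROCEDURE touch_modified();
      ```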
    • Re:vs. MySQL (Score:5, Informative)

      by Admiral Burrito ( 11807 ) on Friday January 24, 2003 @05:30PM (#5153511)
      But what I don't know is where PostgreSQL fits into all of this. I mean, if it IS the better system, why do I only hear mySQL when someone is talking about open source databases?


      • MySQL has a commercial entity backing it, that actually makes money selling commercial MySQL licences (the MySQL licence terms are kind of weird, "fully-viral GPL unless you pay us $$$"). This seems to have resulted in some marketoid-speak, which is unusual in the context of an open-source project. For example, "MySQL now supports transactions!" and various other "features", ignoring how fundamental such things are to a real RDBMS and how they should have been part of the design from the start.
      • There are lots of people who don't understand why you would need "subselects" or "outer joins", and didn't know about "transactions" until they read about them in the MySQL change log. And MySQL will be a real RDBMS Real Soon Now (tm), so there's no need to switch to anything else, and besides, you don't really need a real RDBMS anyway.
      • MySQL has a nice Windows installer.
      • PostgreSQL used to suck, once upon a time.
  • isn't it cool... (Score:5, Informative)

    by ubiquitin ( 28396 ) on Friday January 24, 2003 @04:31PM (#5153146) Homepage Journal
    ...that the entire O'Reilly Practical PostgreSQL book [] was put online?

    I've spent so much time lately in the (relatively) flat-table world of MySQL that I had forgotten about inherited tables, subselects, constraints in table definitions, and oh yes, vacuuming. ;) Looks like it is time to revisit postgres, especially for some db-agnostic PEAR apps I'm building. For me, it's the subselects that really make it worth the effort.
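
    For instance, a subselect of the sort MySQL couldn't do at the time (table and column names invented):

    ```sql
    -- Customers who have never placed an order: one query in Postgres,
    -- a temp-table dance in the MySQL of this era.
    -- (NOT EXISTS is safer if orders.customer_id can be NULL.)
    SELECT name
      FROM customers
     WHERE id NOT IN (SELECT customer_id FROM orders);
    ```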
  • Oracle, as most commercial DBMSs, doesn't let you export the database in SQL format. Of course, you can write scripts to do that, but it shows how the commercial companies are always trying to find ways to lock you in.
    • No shit? Is there an SQL dump feature for MS SQL? I need to convert one to PostgreSQL soon.
    • Of course it doesn't. SQL isn't a format. It's a "Structured Query Language". I don't even know what you're trying to do.

      As far as exporting, of course you can export. You can export whatever you want in virtually any format you want, and have been since, well, for as long as I remember. Ask a DBA.
  • Non-commercial? (Score:2, Interesting)

    by mcoko ( 464175 )
    Known mostly as the domain for non-commercial organisations, .org is the Internet's fifth largest top-level domain, with more than 2.4 million registered domain names worldwide.

    So Slashdot is non-commercial? I don't know. Is non-commercial the same as non-profit? Is /. non-profit?

    How strict are they about that? You would think that they would be, but I have not heard. Slashdot used to be free/no-ads (except for the one at the top), but now there is an ad on every comment page unless you pay. Is that non-commercial?

    • Re:Non-commercial? (Score:4, Informative)

      by stevel ( 64802 ) on Friday January 24, 2003 @05:13PM (#5153409) Homepage
      As far as I know, there has never been any regulation as to who can and can't register a .org domain. The association with not-for-profits is a convention, not a rule. Same with .net, which initially was for ISPs and other network service providers.

      Nowadays, .org and .net are largely used by registrants who couldn't get the .com they wanted. (On the other hand, I have two .org domains registered for legitimate non-profits, a town band and a cat shelter.)
    • /. non-profit?

      Non-profit, no. No-profit, you bet.
  • by shessel ( 180577 ) on Friday January 24, 2003 @04:48PM (#5153244)
    The transition details can be found on the Public Interest Registry's [] Homepage. In short, they'll close the registry at 14:00 UTC tomorrow, transfer to Afilias's systems, and reopen the registrations on Sunday at 23:00 UTC.
  • dispel which myth? (Score:3, Insightful)

    by dan_bethe ( 134253 ) on Friday January 24, 2003 @04:55PM (#5153297)
    I guess we can now dispel the "untested in mission-critical applications" myth.

    Yeah. Or we could do that in regard to all the other mission-critical applications it's been in all this time! :)

  • ... or maybe it is for an ISP... but they're not gonna lose millions of $$ because of a one-minute or even 10-second glitch in DNS. People who complain that PostgreSQL is not their choice for mission-critical applications are talking about running their enterprise apps on it, not DNS.
  • How to pronounce? (Score:3, Interesting)

    by FireBreathingDog ( 559649 ) on Friday January 24, 2003 @05:02PM (#5153339)
    Will someone please tell me how the hell to pronounce PostgreSQL?

    Or are we supposed to pronounce it POST-GRE-SEE-KWEL? Or POST-GRES-CUE-ELL? Or POST-GRES-QUERY LANGUAGE?

    And where the hell did that name come from? Did they take "Ingres", and increment it (like how C became C++), thereby making it "Postgres"? Then "PostgreSQL" means "the better-than-Ingres query language"?

    I hate it when techies come up with names. It always ends up being something that's either stupid and meaningless, like C#, or self-referential and too-cute-by-half, like GNU. Recursive acronym my ass.

  • Smoking crack... (Score:4, Interesting)

    by NerveGas ( 168686 ) on Friday January 24, 2003 @05:06PM (#5153367)
    "untested in mission-critical applications"?

    You'd have to be a completely ignorant moron to believe that. A good number of large companies have been running PostgreSQL successfully in mission-critical situations for *years*.

    It's been used in network-monitoring apps deployed in military vehicles, $30 million POS systems, medical systems, Ticketmaster, a good number of heavy-traffic web sites, and just about everything else you can think of.

    Anybody who told you it hadn't been tested was living long in the past.

    • ...I can tell you without question that none of the effects associated with crack include the forming of erroneous conclusions regarding the current state of database field testing.

      Marijuana, on the other hand, allowed me to accept such conclusions as valid, mostly because I was too lazy to doublecheck.
  • When I google looking for benchmarks comparing PostgreSQL to MySQL, I can't find anything more recent than June 2001.

    I know that PostgreSQL has come a long way in the last two years, so I'm unwilling to form any opinions on benchmark information that is out of date.
  • by The Gline ( 173269 ) on Friday January 24, 2003 @08:33PM (#5154474) Homepage
    I run a text-chat site that is -- please don't lynch me -- based on Win2K and MS SQL Server. The site does about 10-12 DB transactions a second on a slow day and about 100-150/sec on a fast day. At peak hours we have something like 30% CPU usage on average (it's a 700 MHz box, not bleeding-edge).

    A friend of mine put someone in touch with me who was trying to build a vaguely similar system and was having no end of problems. Transactions were timing out left and right, and his machine was more than twice as fast as mine. From his experiences -- and from what I've seen in a lot of parallel setups -- there is a difference between being able to code something functional and being able to code something that functions intelligently. I'd learned a lot of ways to cut down massively on system overhead -- use stored procedures, turn off locks when they're not required, don't use transactions unless they're absolutely needed, etc., etc. -- and all of them add up and pay off.

    As far as PostgreSQL goes, it's probably going to depend on how good a job they do coding it into their system. If they do it well, I'd imagine PostgreSQL is gonna be quite solid. If they do it like idiots, not even the best database solution in the world -- not Oracle, nothing -- is going to save them.

    Heck, even Oracle is going to break if you try to fetch a billion rows at once; the trick is to find smarter ways to partition and subdivide the data, to cut down the amount of time needed for every little step on the way. (I found out that adding ONE index in my system sped things up by about 30% alone, an index I would not have realized I needed until I ran a performance profile.)

    Let's see how well they do before we sling tomatoes, OK?
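The index win described in the parent comment is easy to reproduce in miniature. Here's a minimal sketch -- SQLite from the Python standard library stands in for the parent's MS SQL Server setup, and the table and column names are made up for illustration:

```python
import sqlite3

# Toy illustration: how one well-chosen index turns a lookup from a
# full table scan into an index search. (SQLite, not MS SQL Server.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, room TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO messages (room, body) VALUES (?, ?)",
    ((f"room{i % 50}", f"msg {i}") for i in range(10_000)),
)

query = "SELECT COUNT(*) FROM messages WHERE room = ?"

# Without an index on `room`, the planner must scan every row.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, ("room7",)).fetchone()[-1]
print(plan_before)  # e.g. "SCAN messages" (exact wording varies by SQLite version)

conn.execute("CREATE INDEX idx_messages_room ON messages (room)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, ("room7",)).fetchone()[-1]
print(plan_after)  # e.g. "SEARCH messages USING COVERING INDEX idx_messages_room ..."
```

The plan flips from a scan of all 10,000 rows to an index search -- the same kind of change that produced the parent's 30% speedup, and the kind of thing a performance profile reveals.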
  • Overpricedacle (Score:4, Interesting)

    by digitaltraveller ( 167469 ) on Friday January 24, 2003 @09:50PM (#5154789) Homepage
    One of the PostgreSQL developers mentioned during his talk on Wednesday that Oracle accused the .org registry guys of "criminal negligence" if they switched to PostgreSQL over Oracle. All I can say is: "HAH!" Feeling the pressure...
  • by binux ( 136998 ) on Saturday January 25, 2003 @02:07AM (#5155514) Homepage
    We have been using and recommending Postgres for our telephony server. Recently we had to test high-load conditions involving lots of writes into a database with a hundred thousand subscriber profiles. The database would slow to rates of ten transactions per second after a day. When we dumped the database and restored it, it would start working fine again.

    We then found out that the Postgres DB has to be "vacuumed" often to maintain performance levels if there are many inserts and updates into the DB. We repeated our tests with a scheduled vacuum every hour; here again, the system was unusable during the vacuum run. With a mission-critical DB I wouldn't expect such issues, and there are no clear guidelines available on how often a VACUUM has to be performed. This is not a rant against Postgres: I appreciate the good work being done, and I know that the Postgres folks are working on this problem.

    With Oracle we haven't run into such problems. We do have other problems with the OCI that we use, though. We find lots of uninitialized memory reads and leaks during connection recovery, and technical support for the OCI libraries is really bad. No support issues with Postgres.
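The bloat-then-vacuum cycle the parent describes can be sketched in miniature. This toy uses SQLite from the Python standard library purely for portability -- it is not the parent's PostgreSQL setup, and the mechanics differ (PostgreSQL's plain VACUUM marks dead row versions reusable rather than shrinking the file) -- but the underlying idea of dead space accumulating under churn is the same:

```python
import os
import sqlite3
import tempfile

# Toy illustration of database bloat under heavy delete/update churn,
# and reclaiming it with VACUUM. (SQLite stands in for PostgreSQL.)
path = os.path.join(tempfile.mkdtemp(), "bloat.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE profiles (id INTEGER PRIMARY KEY, data TEXT)")
conn.executemany(
    "INSERT INTO profiles (data) VALUES (?)",
    (("x" * 500,) for _ in range(5_000)),
)
conn.commit()

# Churn: delete most rows, as a heavy update/delete workload would.
conn.execute("DELETE FROM profiles WHERE id % 10 != 0")
conn.commit()

# Dead pages pile up on the freelist until a VACUUM rebuilds the file.
free_before = conn.execute("PRAGMA freelist_count").fetchone()[0]
conn.execute("VACUUM")
free_after = conn.execute("PRAGMA freelist_count").fetchone()[0]
print(free_before, free_after)  # dead pages before; 0 after the VACUUM
```

Until the vacuum runs, that dead space is pure overhead on every query -- which is why scheduling it (and how disruptive the run itself is) matters for a write-heavy system like the parent's.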
