Comment Re:[Sigh]... Still waiting for bulk loading... (Score 1) 191

While I see your point of view (to an extent), I have to disagree with you on almost all points:

1) It only takes one unique constraint violation (UCV) in millions of rows to ruin the load. Also, the data may come from another source, and it may be dirty with UCVs by the time we get it.
2) The field(s) is/are marked UNIQUE - and they are supposed to be. We know this.
3) I wholeheartedly agree with PG protecting the table from violations and faults, but I am telling PG ***exactly*** how to handle the fault. Either:
  a) Keep the old and IGNORE the new
  b) REPLACE the old with the new or
  c) INSERT all-or-nothing - the current default (well... only) behavior

  *) I suppose there is a (d) here: "If the row has a created_at value older than 8 days and the qty_sold is 10 but the completed_flg is false then replace the row else..."
        But then, of course, you really are getting into application logic. (a) - (c) are simple, DB-oriented actions.
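For what it's worth, SQLite exposes exactly these three behaviors as conflict clauses, so they're easy to sketch (a minimal illustration; the items table and its columns are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (sku TEXT PRIMARY KEY, qty INTEGER)")
conn.execute("INSERT INTO items VALUES ('A1', 10)")

# (a) keep the old row, IGNORE the new one on a duplicate key
conn.execute("INSERT OR IGNORE INTO items VALUES ('A1', 99)")
assert conn.execute("SELECT qty FROM items WHERE sku='A1'").fetchone()[0] == 10

# (b) REPLACE the old row with the new one
conn.execute("INSERT OR REPLACE INTO items VALUES ('A1', 99)")
assert conn.execute("SELECT qty FROM items WHERE sku='A1'").fetchone()[0] == 99

# (c) all-or-nothing: a plain INSERT raises on the duplicate key
try:
    conn.execute("INSERT INTO items VALUES ('A1', 5)")
except sqlite3.IntegrityError:
    pass  # the statement fails; the existing row is untouched
```

In every case the unique constraint stays in full effect afterward; the only difference is how the fault on the duplicate key is handled.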

After any one of these, I expect - heck, I am *demanding* - the constraints to be in full effect. But I would like a choice as to how the faults are handled.

4) If I lose data in a way that is consistent with the constraints and the command (e.g. a row is IGNOREd), that is my fault. I know what I am asking for.

You know, in general, I expect the RDBMS and its rules and constraints to *work for me*; not *me to work for it*. ;-) I want the 999,999 new unique rows in the DB. I *want* the 1 UCV kept out (or at least handled properly). Computer, take care of it! Sure, report back to me what was done... but just do it!

But I will agree that "...losing some data in a way that you don't notice... for anything mission-critical is really really bad." ;-)

Comment [Sigh]... Still waiting for bulk loading... (Score 1) 191

...comparable to MySQL. I think Postgres kicks MySQL's ass (to the extent that DBMSes have asses) in almost every respect. But MySQL wipes the floor with PG when it comes to bulk loading data with possible unique constraint violations. INSERT IGNORE, REPLACE INTO, and the mysqlimport CLI command wrapping those statements (via its --ignore and --replace flags) make life soooooooooo much easier when one has to deal with millions and millions of overlapping rows. The typical workaround offered in the PG community is always a clumsy combination of temp tables, rules, triggers, seances, and goat sacrifice, usually ending with the phrase, "See? Simple really!".
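On the MySQL side the two statements look like this (a sketch; the items table and its columns are made up for illustration):

```sql
-- Duplicate keys are silently skipped; the existing row wins.
INSERT IGNORE INTO items (sku, qty) VALUES ('A1', 10);

-- Duplicate keys cause the old row to be deleted and the new one inserted.
REPLACE INTO items (sku, qty) VALUES ('A1', 99);
```

mysqlimport applies the same choice to a whole data file at once, which is what makes million-row loads with overlaps so painless.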

I think the addition of convenient bulk loading tools could be a game changer for potential enterprise users, or anyone loading high volumes of data.
