Comment Re:[Sigh]... Still waiting for bulk loading... (Score 1) 191
I thoroughly agree with your second paragraph. And obviously large data loads can suck total ass.
While I see your point of view (to an extent), I have to disagree with you on almost all points:
1) It only takes one unique-constraint violation (UCV) in millions of rows to ruin the load. Also, the data may come from another source, and it may be dirty with UCVs when we get it.
2) The field(s) is/are marked UNIQUE - and they are supposed to be. We know this.
3) I wholeheartedly agree with PG protecting the table from violations and faults, but I am telling PG ***exactly*** how to handle the fault. Either:
a) Keep the old and IGNORE the new
b) REPLACE the old with the new or
c) INSERT All or nothing - the current default (well... only) behavior
*) I suppose there is a (d) here: "If the row has a created_at value older than 8 days and the qty_sold is 10 but the completed_flg is false then replace the row else..."
But then, of course, you really are getting into application logic. (a)-(c) are simply DB-oriented actions.
After any one of these, I expect - heck, I am *demanding* - the constraints to be in full effect. But I would like a choice as to how the faults are handled.
4) If they lose data in a way that is consistent with the constraints and the command (e.g. a row is IGNOREd), that is my fault. I know what I am asking for.
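For anyone who hasn't seen these modes in the wild: other engines already expose exactly the (a)-(c) choices above. Here is a minimal sketch of all three using SQLite's dialect via Python's stdlib `sqlite3` (PostgreSQL didn't offer them at the time; 9.5+ added `INSERT ... ON CONFLICT`). The `sku` table and its rows are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sku (id INTEGER PRIMARY KEY, qty INTEGER)")
conn.execute("INSERT INTO sku VALUES (1, 10)")  # pre-existing row
conn.commit()

rows = [(1, 99), (2, 20)]  # first row collides with the existing id=1

# (c) All or nothing: one UCV aborts the whole batch.
try:
    with conn:  # commits on success, rolls back on exception
        conn.executemany("INSERT INTO sku VALUES (?, ?)", rows)
except sqlite3.IntegrityError:
    pass  # nothing from `rows` was inserted

# (a) Keep the old and IGNORE the new: (2, 20) lands, (1, 99) is skipped.
conn.executemany("INSERT OR IGNORE INTO sku VALUES (?, ?)", rows)
print(conn.execute("SELECT qty FROM sku WHERE id = 1").fetchone()[0])  # 10

# (b) REPLACE the old with the new: (1, 99) overwrites (1, 10).
conn.executemany("INSERT OR REPLACE INTO sku VALUES (?, ?)", rows)
print(conn.execute("SELECT qty FROM sku WHERE id = 1").fetchone()[0])  # 99
```

In every case the UNIQUE constraint stays in full effect afterward; the only thing that varies is how the fault during the load is resolved.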
You know, in general, I expect the RDBMS and its rules and constraints to *work for me*; not *me to work for it*.
But I will agree that "...losing some data in a way that you don't notice... for anything mission-critical is really really bad."
I think the addition of convenient bulk loading tools could be a game changer for potential enterprise users, or anyone loading high volumes of data.
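Absent such tools, the usual workaround is to bulk-load the dirty feed into an unconstrained staging table and then merge it into the real table under its constraints. A minimal sketch of that pattern, again using stdlib `sqlite3` for portability (the `target`/`staging` tables and the feed data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("INSERT INTO target VALUES (1, 'old')")

# Staging table has no constraints, so the raw bulk load cannot fail.
conn.execute("CREATE TABLE staging (id INTEGER, val TEXT)")
feed = [(1, 'dup'), (2, 'new'), (2, 'dup2')]  # dirty: duplicate keys
conn.executemany("INSERT INTO staging VALUES (?, ?)", feed)

# Merge with keep-the-old (IGNORE) semantics, deduplicating the feed,
# using only plain SQL -- no ON CONFLICT support needed.
conn.execute("""
    INSERT INTO target
    SELECT s.id, MIN(s.val)
    FROM staging s
    WHERE s.id NOT IN (SELECT id FROM target)
    GROUP BY s.id
""")
conn.commit()
```

The merge step is where you get to choose (a), (b), or your own (d)-style application logic, while the target table's constraints never relax.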
All your files have been destroyed (sorry). Paul.