Chacham's Journal: Chronicle: Oracle, sqlldr, and speeding up things.
Well, i came onto a team that imports some thousands of files nightly, and things are going slowly. One of my tasks is to speed up the database side of things, so of course i'm converting the record-by-record processing to set processing, as in removing CURSORs and replacing them with intelligent queries.
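To sketch the kind of change i mean (table and column names here are made up for illustration, not our actual schema), a row-by-row cursor loop gets collapsed into one set-based statement:

```sql
-- Hypothetical before: a cursor loop touching one row at a time.
BEGIN
  FOR r IN (SELECT id, amount FROM staging) LOOP
    UPDATE target SET amount = r.amount WHERE id = r.id;
  END LOOP;
END;
/

-- Hypothetical after: one statement, letting Oracle do the set work.
MERGE INTO target t
USING staging s
ON (t.id = s.id)
WHEN MATCHED THEN UPDATE SET t.amount = s.amount;
```

The single MERGE avoids the per-row context switching between PL/SQL and SQL, which is usually where the cursor version loses its time.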
One problem is loading the data into the database in the first place. On a file of a few hundred lines, the current process can take up to 10 seconds per file! Whereas sqlldr seems to do it in a fraction of that time.
So, we're trying to figure out sqlldr and error handling. Then we came across this:
Oracle documentation
ERRORS specifies the maximum number of insert errors to allow. If the number of errors exceeds the value of the ERRORS parameter, SQL*Loader terminates the load. The default is 50. To permit no errors at all, set ERRORS=0. To specify that all errors be allowed, use a very high number.
Did you catch that? To specify that all errors be allowed, use a very high number.
Silly puppy.
Now to figure out how to have the entire file rejected on any error. Because ERRORS will COMMIT everything in the file up to the erroring record (the one that causes the abort).
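The knob in question lives in the control file's OPTIONS clause. A minimal sketch (table, file, and field names are invented for illustration):

```
-- load.ctl: minimal SQL*Loader control file (all names hypothetical)
OPTIONS (ERRORS=0, ROWS=100000)
LOAD DATA
INFILE 'nightly.dat'
APPEND INTO TABLE staging
FIELDS TERMINATED BY ','
(id, amount)
```

ERRORS=0 makes the first bad record abort the load, but sqlldr commits each ROWS-sized batch as it goes, so anything loaded before the abort stays committed. Loading into a staging table and truncating it when the load fails is one common workaround.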
The developer... (Score:1)
Wow, that is just silly. You would think that there would be some commit option: when my error condition occurs, let me roll back the entire set of data. Now, one question: does ERRORS=1 mean ignore the first error, or does it mean stop on the first error?
Re: (Score:1)
Heh.
Now, one question, does 1 mean ignore the first error, or does it mean stop on the first error?
It means ignore the error. We set it to 0 to test.
Now we decided the opposite. At least for now, we'll accept what we can, and on errors take the file and the log into a bad-data directory.
Figuring out max errors was interesting. I kept using 9s, and the maximum allowed number was
9999999999999999999999999999, with 10000000000000000000
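The bad-data handoff could be scripted as a thin wrapper around sqlldr. This is a sketch, not our actual script: the file names and directory are placeholders, and the actual sqlldr call is commented out so the decision logic can be read (and run) on its own. The exit codes are sqlldr's documented Unix codes: 0 for success, 1 for a fatal or command-line error, 2 for a warning (some records rejected), 3 for an OS-level fatal error.

```shell
#!/bin/sh
# Sketch of a nightly-load wrapper (hypothetical names throughout).

handle_exit() {
  # Map sqlldr's documented Unix exit codes to an action.
  case "$1" in
    0) echo "loaded"  ;;  # all rows loaded
    2) echo "partial" ;;  # warning: some records were rejected
    *) echo "failed"  ;;  # 1 or 3: fatal error
  esac
}

quarantine() {
  # Move the data file and its log into the bad-data directory.
  mkdir -p bad-data
  mv -- "$1" "${1%.dat}.log" bad-data/ 2>/dev/null
}

# sqlldr control=load.ctl data="$1" log="${1%.dat}.log"
# status=$(handle_exit $?)
# [ "$status" = "loaded" ] || quarantine "$1"
```

Anything other than a clean "loaded" sends the file and its log to quarantine, which matches the "accept what we can" decision above.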