SQLLDR ignoring APT_ORACLE_LOAD_OPTIONS
Moderators: chulett, rschirm, roy
Hi,
Hi,
I am using the Oracle load option to insert records into the table. My source can contain duplicates, and the target table is too large to check the input against for existing records, so I am using the ERRORS parameter of sqlldr to skip the bad records and continue loading the good ones.
The options I am passing are $APT_ORACLE_LOAD_OPTIONS=OPTIONS(PARALLEL=TRUE,DIRECT=FALSE,ERRORS=9999)
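For reference, this is how that environment variable would be set at the shell level (e.g. in dsenv); in practice it is often supplied as a DataStage job parameter instead, so treat this as an illustrative sketch rather than the only way to configure it:

```shell
# Set the loader options exactly as given in the post; DataStage passes
# this string through to the sqlldr OPTIONS clause.
export APT_ORACLE_LOAD_OPTIONS='OPTIONS(PARALLEL=TRUE,DIRECT=FALSE,ERRORS=9999)'
echo "$APT_ORACLE_LOAD_OPTIONS"
```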
I am testing this job with only 1 record in the source, which already exists in the target table. Since ERRORS is set to 9999, I expect the job to finish without aborting, but instead it fails with:
ORA-00001: unique constraint violated
This is what I see in the loader log file:
Number to load: ALL
Number to skip: 0
Errors allowed: 9999
Bind array: 64 rows, maximum of 256000 bytes
Continuation: none specified
Path used: Conventional
I don't understand why Oracle fails even though ERRORS is set to a high value.
Thoughts please...
This is what is reported in the DataStage log:
The call to sqlldr failed; the return code = 2;
please see the loader logfile: /export/home/dsadm/Ascential/DataStage/Projects/scratch/ora.6483.383524.6.log for details.
Since the return code is 2, is this just a warning from sqlldr that DataStage nevertheless treats as an abort?
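That return code is the key. On UNIX, SQL*Loader's documented exit codes are 0 (success), 1 (failure), 2 (warning, e.g. some rows rejected) and 3 (fatal OS error); DataStage appears to treat anything non-zero as a failed load. A hedged sketch of how a calling script could tell these apart (the function name and messages are my own, not anything DataStage provides):

```shell
# Map sqlldr's documented exit codes to an outcome; only rc=1 and rc=3
# are treated as hard failures here, unlike DataStage's "non-zero = abort".
rc_ok() {
  case "$1" in
    0) echo "success" ;;
    2) echo "warning: some rows rejected, see the .bad file" ;;
    *) echo "failure (rc=$1)"; return 1 ;;
  esac
}

rc_ok 2   # the scenario from the post: load ran, but rows were rejected
```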
Kryt0n wrote: It's clearly not ignoring your load options, as your errors allowed is being set to what you specify. This setting is more likely to tell the loader how many records it should allow through before falling over. Have you got a reject link out of the Oracle stage?
As I mentioned in my first post, I am running that job with only 1 input record in the source, which is also present in the target table, and since ERRORS is set to 9999 the job should not abort.
For the actual job I specified the value as 99999999 and got the same unique constraint error, so I am testing with 1 record in the source.
No, I don't have a reject link. Should I have one when using Write Method = "Load"?
pavankvk wrote: As I mentioned in my first post, I am running that job with only 1 input record in the source, which is also present in the target table, and since ERRORS is set to 9999 the job should not abort. For the actual job I specified the value as 99999999 and got the same unique constraint error, so I am testing with 1 record in the source.
True about the reject link... I forgot it isn't allowed for the load option.
However, as already stated, this is more likely an instruction on how many errors sqlldr should accept; it does not mean sqlldr should consider the load successful. Try adding a few valid records to your input after the invalid row: this should show that sqlldr continues, but the load is still reported as a failure. Try setting errors allowed to 1 with 1 or 2 invalid records and see what it does.
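The distinction Kryt0n is drawing can be sketched as a toy model (my own simulation, not real sqlldr code): a row that violates the unique constraint is rejected; the load only aborts outright when the reject count exceeds the ERRORS limit, but any rejects at all still downgrade the final status from success (rc=0) to warning (rc=2):

```shell
# Toy model of the ERRORS semantics. EXISTING holds the keys already in
# the target table; each duplicate row simulates an ORA-00001 reject.
simulate_load() {
  errors_allowed=$1; shift
  existing="$EXISTING"
  rejected=0
  for row in "$@"; do
    case " $existing " in
      *" $row "*)                       # duplicate key: reject the row
        rejected=$((rejected + 1))
        if [ "$rejected" -gt "$errors_allowed" ]; then
          echo "rc=1 (failure: error limit exceeded)"; return 0
        fi ;;
      *) existing="$existing $row" ;;   # new key: row loads fine
    esac
  done
  if [ "$rejected" -gt 0 ]; then
    echo "rc=2 (warning: $rejected row(s) rejected, rest loaded)"
  else
    echo "rc=0 (success)"
  fi
}

EXISTING="A"
simulate_load 9999 A B   # the post's scenario: one duplicate, ERRORS=9999
```

Even with ERRORS=9999, one rejected row means the run ends with rc=2, which DataStage then reports as a failed call to sqlldr.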
Kryt0n wrote: However, as already stated, this is more likely an instruction on how many errors sqlldr should accept; it does not mean it should consider the load successful... Try setting errors allowed to 1 and have 1 or 2 invalid records and see what it does.
What I did now is delete that source record from the target table and rerun the job, and it inserts properly. So it is a valid record that should be inserted, and on the second run, with ERRORS set to 9999, sqlldr should reject the duplicate and carry on rather than fail the job with a unique constraint error.