
SQLLDR (Direct=True, Parallel=False)

Posted: Mon Dec 08, 2008 4:11 pm
by UPS
I have created a DS job that will load data to an Oracle DB table. The SQL*Loader options, set via $APT_ORACLE_LOAD_OPTIONS, are "OPTIONS(DIRECT=TRUE,PARALLEL=TRUE,SKIP_INDEX_MAINTENANCE=YES,SKIP_UNUSABLE_INDEXES=YES)".

The data is being loaded to the table, but the job completes with a status of "Failed" due to the unusable indexes (which is okay for this testing). Is there a way to set the status to "Ok" instead of "Failed"?

Any ideas on this will be greatly appreciated.

Thank you.

Posted: Mon Dec 08, 2008 11:23 pm
by mahadev.v
SKIP_INDEX_MAINTENANCE=YES,SKIP_UNUSABLE_INDEXES=YES
Did you try tweaking either of these properties? I believe this is what is causing the problem during restart. We have a similar problem with the load option when there are rejects: although the load is functionally successful, the job aborts, so we check the job log for "BAD" (reject) files and set the checkpoint to success.
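Something along these lines can do the log check. This is only a minimal sketch written as a DataStage BASIC after-job subroutine; the routine name, the ".BAD" search text and the wide date range are illustrative assumptions, not anything from this thread:

      Subroutine CheckForRejects(InputArg, ErrorCode)
$INCLUDE DSINCLUDE JOBCONTROL.H

      * An after-job subroutine must set ErrorCode; 0 means "do not fail the job".
      ErrorCode = 0

      * Pull the warning entries logged by the current job (DSJ.ME).
      * The date range is a crude "everything" window; tighten it if required.
      Summary = DSGetLogSummary(DSJ.ME, DSJ.LOGWARNING, "1990-01-01", "2099-12-31", 0)

      * If any warning mentions a .bad (reject) file, just record the fact so the
      * sequence can decide whether the run should still count as a success.
      If INDEX(UPCASE(Summary), ".BAD", 1) > 0 Then
         Call DSLogInfo("Loader reported a .bad (reject) file", "CheckForRejects")
      End

      Return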

Posted: Wed Dec 10, 2008 8:13 am
by UPS
We tried tweaking the settings for SKIP_INDEX_MAINTENANCE and SKIP_UNUSABLE_INDEXES, with DIRECT=TRUE and PARALLEL=TRUE, and the job still completed with a status of "Failed" because of the unusable indexes.

The table that it is loading to is a range-partitioned table with two indexes. We created a second DataStage job that rebuilds the indexes, and this runs without any problems. These two jobs will be run via a DataStage sequence.

With the first job ending in the "Failed" status, we can still set the sequence to continue on to the second job. However, we'd like to distinguish whether the first job failed only because of the unusable indexes or because of some other database issue. Is there a way to tell these apart at the DataStage sequence level, say via return codes? Or, better yet, is there a way to have the first job complete with an "Ok" status when the only problem is the unusable indexes?

Any ideas on this are greatly appreciated. Thank you.

Posted: Wed Dec 10, 2008 8:31 am
by mahadev.v
How are you triggering these jobs? Through a Job Activity? One way to check the status of the first job is with a BASIC routine that retrieves the job log and searches it for the error (to the best of my knowledge), and then triggers the downstream jobs accordingly.
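A minimal sketch of such a routine (DataStage server BASIC, intended to be called from a Routine Activity in the sequence) is below. The routine name, its arguments and the search text are assumptions for illustration only, not something from this thread:

      Function CheckLoadStatus(JobName, SearchText)
$INCLUDE DSINCLUDE JOBCONTROL.H

      * Attach to the load job so its status and log can be read.
      hJob = DSAttachJob(JobName, DSJ.ERRNONE)
      If NOT(hJob) Then
         Call DSLogWarn("Could not attach to job " : JobName, "CheckLoadStatus")
         Ans = -1
         RETURN(Ans)
      End

      Status = DSGetJobInfo(hJob, DSJ.JOBSTATUS)

      Begin Case
         Case Status = DSJS.RUNOK Or Status = DSJS.RUNWARN
            Ans = 0      ;* finished OK (or with warnings only)
         Case Status = DSJS.RUNFAILED
            * Crude "whole log" date range; narrow it to the last run if needed.
            * The final 0 asks for no limit on the number of entries returned.
            Fatals = DSGetLogSummary(hJob, DSJ.LOGFATAL, "1990-01-01", "2099-12-31", 0)
            If INDEX(UPCASE(Fatals), UPCASE(SearchText), 1) > 0 Then
               Ans = 1   ;* failed, but only the unusable-index message was found
            End Else
               Ans = 2   ;* failed for some other reason
            End
         Case @TRUE
            Ans = 2      ;* any other status: treat as a real failure
      End Case

      Junk = DSDetachJob(hJob)

      RETURN(Ans)

The Routine Activity's triggers in the sequence could then branch on the returned value, for example 0 or 1 continues on to the index-rebuild job while 2 raises the failure.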