SQLLDR (Direct=True, Parallel=False)

UPS
Premium Member
Posts: 56
Joined: Tue Oct 10, 2006 12:18 pm
Location: New Jersey

Post by UPS »

I have created a DS job that will load data to an Oracle DB table. The SQL*LOAD options, $APT_ORACLE_LOAD_OPTIONS, are set to "OPTIONS(DIRECT=TRUE,PARALLEL=TRUE,SKIP_INDEX_MAINTENANCE=YES,SKIP_UNUSABLE_INDEXES=YES)"
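For reference, an options string like that is supplied through the environment variable; a quick way to sanity-check the string outside DataStage is simply to export it in a shell (a sketch; in DataStage this would normally be set as a project- or job-level parameter):

```shell
# The load-options string from the job, exported as an environment variable.
# Inside DataStage this would be defined as a job/project environment parameter
# rather than in a shell profile.
APT_ORACLE_LOAD_OPTIONS='OPTIONS(DIRECT=TRUE,PARALLEL=TRUE,SKIP_INDEX_MAINTENANCE=YES,SKIP_UNUSABLE_INDEXES=YES)'
export APT_ORACLE_LOAD_OPTIONS
echo "$APT_ORACLE_LOAD_OPTIONS"
```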

The data is being loaded to the table but the job completes with the status of "Failed," due to the unusable indexes (which is okay for this testing). Is there a way to set the status to "Ok" instead of "Failed?"

Any ideas on this will be greatly appreciated.

Thank you.
mahadev.v
Participant
Posts: 111
Joined: Tue May 06, 2008 5:29 am
Location: Bangalore

Post by mahadev.v »

SKIP_INDEX_MAINTENANCE=YES,SKIP_UNUSABLE_INDEXES=YES
Did you try tweaking either of these properties? I believe this is what causes the problem during restart. We have a similar problem with the Load option when there are rejects: though the load is functionally a success, the job aborts, so we check the job log for "BAD" files and set the checkpoint to success.
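That "check for BAD files" step could be sketched in shell along these lines (hypothetical; the path would be wherever the load stage writes its SQL*Loader rejects file):

```shell
# Hypothetical sketch: treat the load step as successful only when the
# SQL*Loader .bad (rejects) file is absent or empty. The path passed in is
# whatever the load stage was configured to write rejects to.
check_rejects() {
    bad_file="$1"
    if [ -s "$bad_file" ]; then
        return 1    # rejects present: fail the checkpoint
    fi
    return 0        # no rejects: mark the checkpoint as success
}
```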
"given enough eyeballs, all bugs are shallow" - Eric S. Raymond

Post by UPS »

We tried tweaking the settings for the SKIP_INDEX_MAINTENANCE and SKIP_UNUSABLE_INDEXES parameters, with DIRECT=TRUE and PARALLEL=TRUE, and the job still completed with a status of "Failed" because of unusable indexes.

The table that it is loading to is a range-partitioned table with two indexes. We created a second DataStage job that rebuilds the indexes, and this runs without any problems. These two jobs will be run via a DataStage sequence.

Since the first job ends in the "Failed" status, we can set the sequence to continue on to the second job anyway. However, we'd like to distinguish whether the first job failed only because of unusable indexes or because of some other database issue. Is there a way to tell them apart at the DataStage sequence level, say by return codes? Or, better yet, is there a way to set the first job's completion status to "Ok" when the only problem is unusable indexes?

Any ideas on this are greatly appreciated. Thank you.
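For what it's worth, if the loader were driven from a script rather than the stage, the cases could be separated at the sqlldr return code itself: on UNIX, SQL*Loader conventionally exits 0 for success, 1 for failure, 2 for a warning (e.g. rejected rows), and 3 for a fatal error. A sketch, assuming those codes:

```shell
# Hypothetical sketch: map a sqlldr exit code to a coarse disposition so a
# warning-only run can be treated differently from a hard failure.
classify_sqlldr_rc() {
    case "$1" in
        0) echo "ok" ;;         # EX_SUCC: clean load
        2) echo "warning" ;;    # EX_WARN: loaded with warnings - may be acceptable
        *) echo "error" ;;      # EX_FAIL / EX_FTL or anything unexpected
    esac
}
```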

Post by mahadev.v »

How are you triggering these jobs? A Job Activity stage? One way to check the status of the first job is through a BASIC routine that retrieves the job log and searches it for the error (to the best of my knowledge), and then triggers the downstream jobs.
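That log-scan idea can also be done outside a BASIC routine, e.g. by capturing the log with the dsjob command line and grepping it. A sketch, assuming the only "acceptable" error is Oracle's unusable-index message ORA-26028 (the capture step and the error number to whitelist are assumptions; adjust to what actually appears in your log):

```shell
# Hypothetical sketch: given a captured job log (e.g. the output of
# `dsjob -logsum <project> <job>` redirected to a file), succeed only if
# every ORA- error in it is the expected unusable-index message ORA-26028.
only_expected_ora_errors() {
    logfile="$1"
    if grep "ORA-" "$logfile" | grep -v "ORA-26028" > /dev/null; then
        return 1    # some other database error is present
    fi
    return 0        # clean, or only the expected unusable-index messages
}
```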