Date Conversion cache disabled, job aborts

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

aaryabhatta
Premium Member
Posts: 20
Joined: Mon Dec 19, 2005 10:00 pm
Location: UK

Date Conversion cache disabled, job aborts

Post by aaryabhatta »

Hi,

Sequential file --> transformer --> surrogate key generator --> Oracle enterprise stage.

When I try to run the job, it aborts with the error below after loading 23023 records into the table, even though more rows than that are read from the sequential file.

APT_CombinedOperatorController(1),0: Caught unknown exception from runLocally().
APT_CombinedOperatorController(1),0: The runLocally() of the operator failed.
APT_CombinedOperatorController(1),0: Operator terminated abnormally: runLocally() did not return APT_StatusOk

The log file says

Path used: Direct - with parallel option.

Table NIP:
23023 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.

Date conversion cache disabled due to overflow (default size: 1000)

Bind array size not used in direct path.
Column array rows : 5000
Stream buffer bytes: 256000
Read buffer bytes: 1048576

Total logical records skipped: 0
Total logical records read: 23023
Total logical records rejected: 0
Total logical records discarded: 0
Total stream buffers loaded by SQL*Loader main thread: 2464
Total stream buffers loaded by SQL*Loader load thread: 0


The table has 2 date fields and 2 timestamp fields, which are converted from varchar to date/timestamp. I tried changing the data, yet the same number of records is loaded and the same error is thrown. Can someone help with this error, please?
Sainath.Srinivasan
Participant
Posts: 3337
Joined: Mon Jan 17, 2005 4:49 am
Location: United Kingdom

Post by Sainath.Srinivasan »

As a rule of thumb, the messages from a combined operator controller cannot be deciphered down to the individual operator that failed.

First - set APT_DISABLE_COMBINATION to TRUE.

Second, try setting the two date values to constants.

It appears to be an issue in your transformation rather than in the load.
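
For the first step, a minimal sketch, assuming a Unix engine: the variable can go into $DSHOME/dsenv, or be added as a job-level environment variable through the Administrator.

# Disable operator combination so each operator logs its own messages.
APT_DISABLE_COMBINATION=True
export APT_DISABLE_COMBINATION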
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Seeing as how it's a sqlldr message, I'd see if your DBA could help with it.
-craig

"You can never have too many knives" -- Logan Nine Fingers
aaryabhatta
Premium Member
Posts: 20
Joined: Mon Dec 19, 2005 10:00 pm
Location: UK

Post by aaryabhatta »

I tried setting APT_DISABLE_COMBINATION to True and found that the problem is with the Surrogate Key stage, but the log in Director is the same.

The DBA is not of much help; they say the DATE_CACHE has to be increased when sqlldr is used. I guess DataStage is using it by default.

In the Surrogate Key stage I am using a sequence created by the DBA, not by DataStage. Any idea how I can create the sequence using the stage, as suggested in the Parallel Job Developer's Guide?

One more thing: the sequence ends with the value 23219 in the table, whereas the next value in the sequence is 23221.
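
For reference, the DBA-created sequence is roughly of this shape (owner, names and connect string below are made up, not the real ones):

# Placeholder names and credentials -- substitute your own.
sqlplus nip_owner/secret@nipdb <<'EOF'
CREATE SEQUENCE nip_key_seq
  START WITH 1
  INCREMENT BY 1
  CACHE 20
  NOCYCLE;
EOF

As far as I understand, Oracle sequences are never gap-free: a NEXTVAL fetched for a row that never reaches the table, or a cached value the database discards, is simply lost, so a gap like 23219 followed by 23221 does not by itself mean rows went missing.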

Any thoughts on this please?
aaryabhatta
Premium Member
Posts: 20
Joined: Mon Dec 19, 2005 10:00 pm
Location: UK

Post by aaryabhatta »

I have created the sequence using the Surrogate Key stage, but it makes no difference; the job still fails at the Surrogate Key stage.

The date fields have been made constant and yet the same error occurs.

Can someone who has successfully used a surrogate key for a huge input volume help?
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

aaryabhatta wrote: The DBA is not of much help; they say the DATE_CACHE has to be increased when sqlldr is used. I guess DataStage is using it by default.
Ask them how one would do that taking DataStage out of the picture.
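
Outside of DataStage it is just a loader parameter; a minimal sketch (connect string, control file name and the 5000 figure are placeholders):

# date_cache is a standard SQL*Loader command-line parameter (default 1000,
# used for direct path loads; 0 disables the cache).
sqlldr userid=nip_owner/secret@nipdb control=nip.ctl direct=true parallel=true date_cache=5000

To get the same setting into the load that DataStage generates, it would have to end up in the control file's OPTIONS clause, e.g. via the Oracle Enterprise stage's loader options or the APT_ORACLE_LOAD_OPTIONS environment variable; check that against the documentation for your release.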
-craig

"You can never have too many knives" -- Logan Nine Fingers
aaryabhatta
Premium Member
Posts: 20
Joined: Mon Dec 19, 2005 10:00 pm
Location: UK

Post by aaryabhatta »

Craig, running the job without the Surrogate Key stage is successful and 20 million records are loaded into the table. I guess this has to do with the Surrogate Key stage, which is throwing this strange error:

Surrogate_Key_Generator_26,0: Caught unknown exception from runLocally().
Surrogate_Key_Generator_26,0: The runLocally() of the operator failed.
Surrogate_Key_Generator_26,0: Input 0 consumed 23220 records.
Surrogate_Key_Generator_26,0: Output 0 produced 23219 records.
Surrogate_Key_Generator_26,0: Operator terminated abnormally: runLocally() did not return APT_StatusOk
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Then it seems like time to involve your official support provider.
-craig

"You can never have too many knives" -- Logan Nine Fingers
aaryabhatta
Premium Member
Posts: 20
Joined: Mon Dec 19, 2005 10:00 pm
Location: UK

Post by aaryabhatta »

IBM says that when the Surrogate Key stage is used with an Oracle sequence, the OSH process grows continuously, storing everything, and when it crosses the 2 GB limit the job aborts. It seems to work fine with DB2 and with a state file.
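
In the meantime, a crude way to watch the osh processes creep towards that limit while the job runs (assuming a Unix engine; the ps options and the exact limit vary by platform):

# Print pid, virtual size and command for the osh processes every 30 seconds.
while true; do
  ps -eo pid,vsz,comm | grep '[o]sh'
  sleep 30
done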

Will update again when IBM comes up with a solution.

Thanks!