The partition was evidently corrupted


thanush9sep
Premium Member
Posts: 54
Joined: Thu Oct 18, 2007 4:20 am
Location: Chennai


Post by thanush9sep »

Dear All,

Job Description:

Dataset --> CPY --> Oracle stage

Method: Load Append
Partition Type: Same
Index mode: Rebuild
Node: 4 nodes
Disk space: 30 GB

When the data volume is around 593,708 records, the job runs fine.
When the data volume is 3,742,694 records, the job aborts with the following message:


Error Message:
CPY,0: Failure during execution of operator logic.
CPY,0: Fatal Error: I/O subsystem: partition 0 must be a multiple of 131072 in size (was 9943826432). The partition was evidently corrupted.
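
As a quick sanity check on the numbers in that message (simple arithmetic, with both figures taken straight from the error):

    # Check whether the reported partition size is block-aligned.
    BLOCK = 131072          # block size the engine expects, from the error message
    SIZE = 9943826432       # reported size of partition 0, in bytes
    print(SIZE % BLOCK)     # 49152 -- not a multiple, so the file ends mid-block
    # A partial trailing block is consistent with a write that was cut short
    # (e.g. by the disk filling up), rather than with logically bad data.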
Regards
LakshmiNarayanan
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

First guess would be you ran out of space. Evidently. :wink:
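
A back-of-envelope figure supporting that guess (my own arithmetic, assuming all four partitions grew to roughly the size reported for partition 0):

    # Partition 0 alone reached ~9.26 GiB; four similar partitions would need
    # ~37 GiB, comfortably overshooting the 30 GB quoted in the job description.
    part0 = 9943826432                  # bytes, from the error message
    nodes = 4                           # from the job description
    print(f"~{part0 * nodes / 2**30:.1f} GiB needed vs ~30 GB available")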
-craig

"You can never have too many knives" -- Logan Nine Fingers
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Or the transport buffer size was somehow misconfigured.
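
For what it's worth, the 131072 in the error matches what I believe is the engine's default transport block size, governed by the APT_DEFAULT_TRANSPORT_BLOCK_SIZE environment variable. A quick way to see what a job's environment resolves it to (a minimal sketch, assuming the usual default of 131072 bytes applies when the variable is unset):

    import os
    # Falls back to 131072 bytes (128 KB) when the variable is unset.
    block = int(os.environ.get("APT_DEFAULT_TRANSPORT_BLOCK_SIZE", "131072"))
    print(f"transport block size: {block} bytes ({block // 1024} KB)")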
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
thanush9sep
Premium Member
Posts: 54
Joined: Thu Oct 18, 2007 4:20 am
Location: Chennai

Post by thanush9sep »

Thanks Chulett

You are absolutely correct. However, I stumbled onto something new in DataStage.
My job sequence ran for around 1 hour and 30 minutes for approximately 3,742,694 records.

I was told to remove the length from all the VarChar columns in the target dataset in order to reduce the size of the dataset created.
For example, take this design:

oracle stage --> copy ---> dataset

Imagine the Oracle stage contains 11 columns, each of datatype VarChar(50).

While mapping them to the target dataset, I removed the length from all the VarChar columns, and what happened was really surprising.

The dataset was much smaller and the job ran faster.

When I changed all the jobs in the job sequence, the run took 30 minutes for 3,742,694 records and occupied only 3 GB of space, where before it consumed 30 GB.
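
As a rough model of why the size drops (my own sketch, not DataStage internals: the assumption is that a bounded VarChar(n) is stored in a dataset at its full declared width, while an unbounded VarChar costs only the actual content plus a small length prefix):

    # Rough storage estimate for 11 VarChar(50) columns over 3,742,694 records.
    records, cols, declared = 3_742_694, 11, 50
    avg_len, prefix = 5, 4          # assumed average content length and prefix, bytes

    bounded = records * cols * declared             # every value padded to 50 bytes
    unbounded = records * cols * (avg_len + prefix)

    print(f"bounded:   ~{bounded / 2**30:.2f} GiB")    # ~1.92 GiB
    print(f"unbounded: ~{unbounded / 2**30:.2f} GiB")  # ~0.35 GiB, 5-6x smaller

The exact ratio depends on how full the columns actually are; with mostly short values the saving approaches the 10x reported above.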
Regards
LakshmiNarayanan
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

That's known, standard behaviour of DataStage with regard to unbounded varchar fields; there have been many discussions here on the subject.
-craig

"You can never have too many knives" -- Logan Nine Fingers
thanush9sep
Premium Member
Posts: 54
Joined: Thu Oct 18, 2007 4:20 am
Location: Chennai

Post by thanush9sep »

Sadly, I have been missing them ...

Thanks
Regards
LakshmiNarayanan