
The partition was evidently corrupted

Posted: Tue Mar 08, 2011 3:40 am
by thanush9sep
Dear All,

Job Description:

Dataset --> cpy --> Oracle stage

Method: Load Append
Partition Type: Same
Index mode: Rebuild
Node: 4 nodes
Disk space: 30 GB

When the amount of data is around 593,708 records, this job runs fine.
When the amount of data is 3,742,694 records, the job aborts with the following message:


Error Message:
CPY,0: Failure during execution of operator logic.
CPY,0: Fatal Error: I/O subsystem: partition 0 must be a multiple of 131072 in size (was 9943826432). The partition was evidently corrupted.
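The arithmetic in that message can be checked directly: the reported partition size is not a multiple of the 128 KB (131072-byte) block size, which is exactly what you would see if a write was cut short, for example by running out of disk space. A quick sketch (not part of DataStage itself, just verifying the numbers from the error):

```python
# Sanity-check the figures from the error message.
BLOCK_SIZE = 131072       # 128 KB block size named in the error
reported = 9943826432     # partition size reported by the error

remainder = reported % BLOCK_SIZE
print(remainder)          # non-zero: the partition ends mid-block
print(reported / 1024**3) # roughly how many GB partition 0 held
```

A clean partition would leave a remainder of zero; here it does not, so the file on disk really is truncated or damaged rather than merely large.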

Posted: Tue Mar 08, 2011 7:47 am
by chulett
First guess would be you ran out of space. Evidently. :wink:

Posted: Tue Mar 08, 2011 3:10 pm
by ray.wurlod
Or transport buffer size was somehow misconfigured.

Posted: Wed Mar 09, 2011 5:44 am
by thanush9sep
Thanks Chulett

You are absolutely correct. However, I stumbled onto something new in DataStage.
My job sequence ran for around 1 hour and 30 minutes for approximately 3,742,694 records.

I was told to remove the length from all varchar columns in the target dataset in order to reduce the size of the dataset created.
For example, consider this design:

oracle stage --> copy ---> dataset

Imagine the Oracle stage contains 11 columns, each of datatype varchar(50).

While mapping to the target dataset, I removed the length from all the varchar columns, and what happened was really striking.

The size of the dataset dropped dramatically and the job ran faster.

After I changed all the jobs in the job sequence, it ran in 30 minutes for 3,742,694 records and occupied only 3 GB of space, where before it consumed 30 GB.
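The saving is consistent with bounded varchar fields being stored at their full declared width in a parallel dataset, while unbounded varchars take only their actual length plus a small per-field overhead. A rough back-of-the-envelope sketch (the average string length and the length-prefix size are assumptions for illustration, not the exact DataStage on-disk format):

```python
# Rough storage estimate: bounded vs unbounded varchar columns.
ROWS = 3_742_694     # record count from the job above
COLS = 11            # varchar columns in the example
DECLARED_LEN = 50    # varchar(50), padded to full width when bounded
AVG_ACTUAL_LEN = 8   # assumed average string length in the data
LENGTH_PREFIX = 4    # assumed per-field overhead for unbounded varchar

bounded = ROWS * COLS * DECLARED_LEN
unbounded = ROWS * COLS * (AVG_ACTUAL_LEN + LENGTH_PREFIX)

print(f"bounded:   {bounded / 1024**2:.0f} MB")
print(f"unbounded: {unbounded / 1024**2:.0f} MB")
```

With these assumed figures the bounded layout costs roughly four times as much disk as the unbounded one; with shorter real data the gap only widens, which matches the 30 GB down to 3 GB result reported above.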

Posted: Wed Mar 09, 2011 8:08 am
by chulett
That's known, standard behaviour of DataStage with regard to unbounded varchar fields; there have been many discussions here on the subject.

Posted: Wed Mar 09, 2011 12:44 pm
by thanush9sep
Sadly, I have been missing them ....

Thanks