Error in join stage

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

sanjay
Premium Member
Posts: 203
Joined: Fri Apr 23, 2004 2:22 am

Post by sanjay »

Hi SomaRaju

You need to remove unwanted files, because the error clearly indicates there is no space left on the device. Try to remove datasets and files which are no longer used.
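If it helps, here is a rough Python sketch (the directory paths are only examples; substitute the resource and scratch directories from your own configuration file) to check how much space is free and to spot the largest dataset segment files before deciding what to clean up. Delete datasets through the Data Set Management tool or orchadmin rather than removing their data files by hand.

    import os
    import shutil

    # Example paths only - use the resource disk and scratch disk
    # directories defined in your own APT configuration file.
    DIRS = [
        "/DataStage/751A/Ascential/DataStage/Datasets",
        "/DataStage/751A/Ascential/DataStage/Scratch",
    ]

    for d in DIRS:
        total, used, free = shutil.disk_usage(d)
        print(f"{d}: {free / 1024 ** 2:.0f} MB free of {total / 1024 ** 2:.0f} MB")

        # List the ten largest files so the old dataset segments that
        # are eating the space are easy to spot.
        sizes = []
        for root, _, names in os.walk(d):
            for name in names:
                path = os.path.join(root, name)
                if os.path.isfile(path):
                    sizes.append((os.path.getsize(path), path))
        for size, path in sorted(sizes, reverse=True)[:10]:
            print(f"  {size / 1024 ** 2:10.1f} MB  {path}")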

Sanjay
somu_june wrote:Hi Sanjay,

I split the job into two jobs, taking the first job's output as one of the inputs to the second and performing the join there, but I am getting an error like this:

Errors:

Join_Src_Tables,0: Write to dataset failed: No space left on device
The error occurred on Orchestrate node node1 (hostname d3crs40)

Join_Src_Tables,0: Orchestrate was unable to write to any of the following files:

Join_Src_Tables,0: /DataStage/751A/Ascential/DataStage/Datasets/Src_File_Cable.txt.Raju0212.d25was39.0000.0000.0000.e1a0.c81b41a4.0000.d43d544e

Join_Src_Tables,0: Block write failure. Partition: 0

Join_Src_Tables,0: Failure during execution of operator logic.

Join_Src_Tables,0: Fatal Error: File data set, file "{0}".; output of "APT_JoinSubOperatorNC in Join_Src_Tables": DM getOutputRecord error.

buffer(1),0: Error in writeBlock - could not write 21872

buffer(1),0: Failure during execution of operator logic.

buffer(1),0: Fatal Error: APT_BufferOperator::writeAllData() write failed. This is probably due to a downstream operator failure.

node_node1: Player 3 terminated unexpectedly.

buffer(0),0: Fatal Error: APT_BufferOperator::writeAllData() write failed. This is probably due to a downstream operator failure.


Thanks,
SomaRaju.
somu_june
Premium Member
Posts: 439
Joined: Wed Sep 14, 2005 9:28 am
Location: 36p,reading road

Post by somu_june »

Hi,

Can somebody tell me how much disk space I need to add to the Datasets and Scratch directories for 10 million records, and how much free space I require, in order to overcome this disk-space problem?



Thanks,
SomaRaju
somaraju
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

No, because we have no idea how big one record is. You can determine the size of each field based on its data type, and sum these to get the record size. Multiply by the number of records and round up to the next higher multiple of 128KB. That's the Data Set calculation.
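As a rough illustration of that arithmetic, a small Python sketch (the field names and sizes are invented for the example; derive yours from your own record schema):

    # Sum the field sizes to get the record size, multiply by the record
    # count, then round up to the next multiple of 128 KB.
    # These field sizes are illustrative only - use your own schema.
    field_sizes = {
        "cust_id": 4,      # int32
        "cust_name": 50,   # char(50)
        "balance": 8,      # double
    }

    record_size = sum(field_sizes.values())    # bytes per record
    num_records = 10_000_000

    block = 128 * 1024                         # 128 KB
    raw_bytes = record_size * num_records
    estimate = -(-raw_bytes // block) * block  # round up to a 128 KB multiple

    print(f"record size : {record_size} bytes")
    print(f"data set    : {estimate / 1024 ** 2:.1f} MB (approx.)")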

For scratch disk and free space the answer can only be "more" if you're running out - there is no easy calculation.

After you upgrade to version 8.0 you will have a delightful resource estimation tool available that will perform these calculations for you.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.