Aggregator failing: write failed: Output file full

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

vinothkumar
Participant
Posts: 342
Joined: Tue Nov 04, 2008 10:38 am
Location: Chennai, India

Aggregator failing: write failed: Output file full

Post by vinothkumar »

The job fails at the Aggregator stage with the following fatal errors when processing a huge number of records:
write failed: Output file full, and no more output files
Fatal Error: Tsort merger aborting: mergeOneRecord() punted

Has anyone faced the same issue with a large number of records? If so, please let me know the possible solutions.

Thanks,
Vinoth
asorrell
Posts: 1707
Joined: Fri Apr 04, 2003 2:00 pm
Location: Colleyville, Texas

Post by asorrell »

A "tsort" is an inserted sort that is automatically put in by DataStage because it is required for correct operation of the stage. It sounds like the inserted sort is filling up your scratch space. You should either clean up scratch or move the scratch to an area with more space. That means changing the APT config file.

You can also improve performance by putting an explicit Sort stage in front of the Aggregator, setting it with the correct keys, and increasing its "Restrict Memory Usage (MB)" option from the 20 MB default to something appropriate for your system. That allows the job to grab more memory for the sort, which reduces the amount of scratch space it needs, and it will create fewer, larger temporary files as it goes.
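
As a hedged aside: on the versions I have used, the same knob is also exposed as an environment variable, which can be handy if you would rather tune it per job than per stage (treat the exact value as an assumption to test on your system):

    # APT_TSORT_STRESS_BLOCKSIZE sets the per-partition sort memory, in MB,
    # and takes precedence over the Sort stage's "Restrict Memory Usage" option.
    APT_TSORT_STRESS_BLOCKSIZE=256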

One other possibility - is your output target a sequential file? If so, depending on your operating system you might be exceeding the file size limit for the sequential file.
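
A quick way to check that from the shell on a Unix-type system (the output path below is just an example):

    ulimit -f             # per-process file size limit; units vary by shell, "unlimited" is what you want
    df -h /data/output    # and make sure the target filesystem itself has room

If the limit comes back as a finite number, the DataStage engine user may need its limits raised (for example in /etc/security/limits.conf on Linux).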

Try this thread:
viewtopic.php?t=147925&highlight=Output+file+full
Andy Sorrell
Certified DataStage Consultant
IBM Analytics Champion 2009 - 2020