You have filled up your T30FILE descriptors with all of the hashed files you have opened. Look at your uvconfig file settings; you can increase the value to accommodate all these concurrently opened type 30 files, or you can make these files static and thus use other descriptors.
The best solution is to rethink your job. Can't each instance use the same hashed files? If you need to keep the data distinct, you might add a column to the hashed files that stores which job each record came from.
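To make the first option concrete, here is a minimal sketch of checking and raising the T30FILE tunable. The value 512 is an arbitrary example, and a temporary stand-in file is used in place of a real engine installation; on a typical system $DSHOME points at the DataStage engine directory and the change must be followed by regenerating the configuration and restarting the engine.

```shell
# Sketch only: fake a $DSHOME so the steps can be demonstrated safely.
DSHOME=$(mktemp -d)
printf 'T30FILE 200\n' > "$DSHOME/uvconfig"    # stand-in for the real uvconfig

# 1. Check the current limit on concurrently open type 30 (dynamic) files:
grep '^T30FILE' "$DSHOME/uvconfig"

# 2. Raise it (512 is an example value; size it to your workload):
sed -i 's/^T30FILE .*/T30FILE 512/' "$DSHOME/uvconfig"
grep '^T30FILE' "$DSHOME/uvconfig"

# 3. On a real installation, regenerate the engine configuration and
#    restart the engine afterwards (do this with all jobs stopped):
#      $DSHOME/bin/uvregen
#      $DSHOME/bin/uv -admin -stop && $DSHOME/bin/uv -admin -start
```

Each entry in the T30FILE table is shared by all processes, so size it for the peak number of distinct type 30 files open at once across every concurrent job, not per job.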
Job Abort --> Error log doubt
Last edited by ArndW on Wed Jun 20, 2007 7:08 pm, edited 1 time in total.
Once you have rationalized your use of hashed files (note: not "hash" files), search the forum for T30FILE. This is both the name of the table and of the configuration parameter (in the uvconfig file) for setting its size. You will also need to research, either here or in the manuals, how to reconfigure the DataStage Engine.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Hi, thank you for the reply; I shall try the same.
ArndW wrote: You have filled up your T30FILE descriptors with all of the hashed files you have opened. Look at your uvconfig file settings; you can increase the value to accommodate all these concurrently opened type 30 files, or you can make these files static and thus use other descriptors. The best solution is to rethink your job. Can't each instance use the same hashed files? If you need to keep the data distinct, you might add a column to the hashed files that stores which job each record came from.