We have a multi-instance job that extracts data from seven servers and loads it into a DSS server. During extraction we create a hashed file for each server, and during loading we read from those hashed files. For each server we create a hashed file named like:
Hash_file_11
Hash_file_12
Hash_file_13
etc., so there is no chance of the same file being written and read at the same time. This application has been running for the last six months; yesterday we got the following error.
While loading data into the DSS server we get "Unable to Open Hash file":
DSD.UVOpen Unable to open file HProductCancelReason_51.
DataStage Job 836 Phantom 25631
Program "DSD.UVOpen": Line 396, Unable to allocate Type 30 descriptor, table is full.
Job Aborted after Fatal Error logged.
Attempting to Cleanup after ABORT raised in stage J10IcaLor.51.T1
DataStage Phantom Aborting with @ABORT.CODE = 1
Please help me to resolve this issue.
Thanks and Regards
Arun Verma
Unable To Open Hash File - MultiInstance Job
Arun,
"File descriptor table is full" refers to the in-memory table of concurrently open dynamic hashed files.
The size of this table is set by the T30FILE configuration parameter in the uvconfig file. Once you have increased it, you need to run the uvregen utility, then stop and restart DataStage.
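The steps above might look like the following sketch. The paths are assumptions — $DSHOME is the DataStage engine directory on your server (check your install), and the demo below edits a sample uvconfig fragment in /tmp so the sed command can be seen working; on the real machine you would edit the actual $DSHOME/uvconfig instead. Note also that `sed -i` is a GNU extension and may not exist on older Solaris sed.

```shell
# Demonstration on a sample uvconfig fragment (illustration only; on the
# real server, work in $DSHOME on the actual uvconfig file)
mkdir -p /tmp/uvdemo && cd /tmp/uvdemo
printf 'MFILES 150\nT30FILE 1000\nGLTABSZ 75\n' > uvconfig

cp uvconfig uvconfig.bak                            # always back up first
sed -i 's/^T30FILE[[:space:]]*1000$/T30FILE 2000/' uvconfig
grep '^T30FILE' uvconfig                            # confirm the new value

# On the real server, after editing $DSHOME/uvconfig:
#   cd $DSHOME
#   bin/uv -admin -stop     # stop the DataStage engine
#   bin/uvregen             # regenerate the shared-memory configuration
#   bin/uv -admin -start    # restart the DataStage engine
```

The backup copy matters: a malformed uvconfig can prevent the engine from starting, and uvregen will report if the resulting shared memory segment exceeds the system limit.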
Are you trying to make your machine work too hard?!!
Regards,
Ray
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Dear Mr. Ray
We have checked the server log; when this error occurred, a lot of applications were running. We have changed the schedule time — let's see tomorrow.
We have a SUN server with 24 CPUs and 48 GB RAM. The value in uvconfig is "T30FILE 1000" — should we increase it?
Thanks and Regards
Arun Verma
It's the only way to fix this problem.
Provided uvregen doesn't report a figure too close to the maximum shared memory segment size for your system, you could try increasing T30FILE to, say, 1500 or 2000. Each additional slot requires, as far as I can recall, just over 100 bytes of memory.
The only other approach is to investigate the use of static hashed files rather than dynamic, because static hashed files do not occupy a slot in the T30FILE table in memory.
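Using the ~100-bytes-per-slot figure from the post above (an approximation, as noted), the extra shared memory for a larger T30FILE is tiny relative to a 48 GB machine:

```shell
# Back-of-envelope shared-memory cost of T30FILE, assuming ~100 bytes/slot
for slots in 1000 1500 2000; do
  echo "T30FILE $slots: ~$(( slots * 100 / 1024 )) KB"
done
```

Even at 2000 slots this is well under a megabyte, so the practical constraint is the maximum shared memory segment size that uvregen checks, not RAM.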
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.