Unable To Open Hash File - MultiInstance Job

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

arunverma
Participant
Posts: 90
Joined: Tue Apr 20, 2004 8:20 am
Location: MUMBAI
Contact:

Unable To Open Hash File - MultiInstance Job

Post by arunverma »

We have a multi-instance job which extracts data from seven servers and loads it into a DSS server. During extraction we create a hashed file for each server, and during loading we read from these hashed files.

For each server we create a hashed file named like
Hash_file_11
Hash_file_12
Hash_file_13

etc., so there is no chance of a write and a read happening at the same time. This application has been running for the last six months, but yesterday we got the following error while loading data into the DSS server: unable to open hashed file.

DSD.UVOpen Unable to open file HProductCancelReason_51.

DataStage Job 836 Phantom 25631
Program "DSD.UVOpen": Line 396, Unable to allocate Type 30 descriptor, table is full.
Job Aborted after Fatal Error logged.
Attempting to Cleanup after ABORT raised in stage J10IcaLor.51.T1
DataStage Phantom Aborting with @ABORT.CODE = 1


Please help me to resolve this issue.

Thanks and Regards

Arun Verma
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Arun,

"File descriptor table is full" refers to the in-memory table of concurrently open dynamic hashed files.

The size of this table is set by the T30FILE configuration parameter in the uvconfig file. Once you have increased this, you need to run the uvregen utility, then stop and re-start DataStage.
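
Roughly, the steps look like this, assuming a default install with $DSHOME pointing at the engine directory and that you are logged in as the DataStage administrator (adjust paths for your environment):

cd $DSHOME
# Raise the T30FILE value in uvconfig, e.g. from 1000 to 2000
vi uvconfig
# Regenerate the binary configuration from uvconfig
bin/uvregen
# Stop and restart the server so the new table size takes effect
bin/uv -admin -stop
bin/uv -admin -start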

Are you trying to make your machine work too hard?!! :roll:

Regards,
Ray
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
arunverma
Participant
Posts: 90
Joined: Tue Apr 20, 2004 8:20 am
Location: MUMBAI
Contact:

Post by arunverma »

Dear Mr. Ray

We have checked the server log; when this error occurred, a lot of applications were running. We have changed the schedule time, so let's see what happens tomorrow.

We have a SUN server with 24 CPUs and 48 GB of RAM. The value in uvconfig for T30FILE is "T30FILE 1000", so should we increase it?
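
For reference, this is roughly how we checked the current value (assuming $DSHOME points to our engine directory):

cd $DSHOME
grep T30FILE uvconfig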


Thanks and Regards


Arun Verma
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

It's the only way to fix this problem.

Provided uvregen doesn't report a figure too close to the maximum shared memory segment size for your system, you could try increasing T30FILE to, say, 1500 or 2000. Each additional slot requires, as far as I can recall, just over 100 bytes of memory.
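
As a rough sanity check: going from 1000 to 2000 slots at a little over 100 bytes each adds only on the order of 100 KB to the shared memory segment. On Solaris you can compare that against the kernel's segment limit, which on older releases is set in /etc/system; and, if I recall correctly, smat -t reports the tunables currently in effect (both commands assume a default install):

# Kernel shared memory segment limit (older Solaris releases)
grep -i shmmax /etc/system
# Tunables in the current engine configuration
$DSHOME/bin/smat -t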

The only other approach is to investigate the use of static hashed files rather than dynamic, because static hashed files do not occupy a slot in the T30FILE table in memory.
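
If you want to experiment, a static hashed file can be created from the project directory at the engine command level. This is only a sketch: the file name, modulo and separation below are illustrative, you would need to size the modulo for your own data, and the exact CREATE.FILE syntax should be checked against your release (this assumes uvsh is available under $DSHOME/bin):

# From the DataStage project directory; type 18 is one of the
# static hashed file types (2 to 18), whereas type 30 is dynamic
cd /path/to/your/project
echo "CREATE.FILE HashFile_Static 18 401 4" | $DSHOME/bin/uvsh

If I remember rightly, the Hashed File stage's create-file options also let you choose a static file type and modulus instead of the default Type 30.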
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.