DSD.UVOpen

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

arunverma
Participant
Posts: 90
Joined: Tue Apr 20, 2004 8:20 am
Location: MUMBAI
Contact:

DSD.UVOpen

Post by arunverma »

Hi All ,


We have a multi-instance job. During data extraction some hashed files are created, and the same files are used at the time of data loading. We got an error while creating a hashed file and the job aborted. Please help.

J10IcaSrc.41.HProductCancelReason.ToCancHFile: DSD.UVOpen "HProductCancelReason_41" is already in your VOC file as a file definition record.
File name =
File not created.
.DataStage Job 868 Phantom 21295
Program "DSD.UVOpen": Line 396, Unable to allocate Type 30 descriptor, table is full.
Job Aborted after Fatal Error logged.
Attempting to Cleanup after ABORT raised in stage J10IcaSrc.41.TCancellationReason
DataStage Phantom Aborting with @ABORT.CODE = 1
Arun Verma
rasi
Participant
Posts: 464
Joined: Fri Oct 25, 2002 1:33 am
Location: Australia, Sydney

Post by rasi »

Hi Arun,

This could be because of the limit on the number of files that can be open. Change the T30FILE configuration parameter in the uvconfig file.
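
For reference, the change is usually made on the engine host roughly as follows (paths and the value are illustrative, not from this thread; stop DataStage first and back up uvconfig):

    cd $DSHOME                # DSEngine directory, e.g. .../Server/DSEngine
    bin/uv -admin -stop       # quiesce the engine so the new setting can take effect
    vi uvconfig               # raise the T30FILE line, e.g.  T30FILE 2000
    bin/uvregen               # regenerate the shared memory configuration
    bin/uv -admin -start      # restart the engine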

Best of luck

Rasi
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

This IS because you have too many hashed files open. The clue is in the message "Unable to allocate Type 30 descriptor, table is full.".

Don't forget that DataStage itself uses hashed files, as many as ten per job, in addition to any that you are using.

What can you do?
  • Investigate increasing the T30FILE parameter.
  • Investigate the use of shared hashed files.
  • Investigate the use of static, rather than dynamic, hashed files.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
arunverma
Participant
Posts: 90
Joined: Tue Apr 20, 2004 8:20 am
Location: MUMBAI
Contact:

Post by arunverma »

Hi Rasi ,

As per Mr. Ray's advice, we have already changed the T30FILE limit to 2000, which is the maximum limit.


Arun ,
Arun Verma
arunverma
Participant
Posts: 90
Joined: Tue Apr 20, 2004 8:20 am
Location: MUMBAI
Contact:

Post by arunverma »

HI Mr. Ray ,

We have increased the T30FILE limit to 2000.

This hashed file is not a shared hashed file; one job creates it at the time of extraction and another job uses it at the time of loading. We have to use a dynamic hashed file because it is a master code-and-name hashed file: if any record is added in the source system we have to keep that record, and none of the master files is of fixed size. So please advise what we should do.


arun
Arun Verma
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Where is that maximum documented?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
arunverma
Participant
Posts: 90
Joined: Tue Apr 20, 2004 8:20 am
Location: MUMBAI
Contact:

Post by arunverma »

Hi Ray,

Thanks for your prompt reply. I understand from your reply that T30FILE set to 2000 is also not the upper bound. Can I request you to suggest the optimal value for this parameter? Do I need to set it to, say, 10000 or 20000? Is this likely to resolve my problem? Could it add any other problems to my system?

Sorry to trouble you so much, but this has become a chronic problem on our production box.

Thanks in advance.

Arun Verma
Arun Verma
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

I am not aware of any upper bound on the size of T30FILE. Certainly none is documented in the Administering UniVerse manual.

As I have explained in the past, the T30FILE parameter determines the number of rows in a shared memory table in which the dynamic sizing values for each open hashed file are stored. The more rows, the more dynamic hashed files can be open at the same time.

There is a physical limit imposed by the maximum size of a shared memory segment on your hardware, and by the other structures that must fit in it. When you execute uvregen successfully, the size of this segment is reported. Each extra row in the T30FILE table takes slightly over 100 bytes.
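
As a rough illustration (taking the figure of slightly over 100 bytes per row at face value): raising T30FILE from 2000 to 10000 adds about 8000 x 100 bytes, i.e. a little under 1 MB, to the segment. That is usually modest, but it still has to fit within the platform's maximum segment size alongside those other structures.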

You can monitor the use of the T30FILE table with the analyze.shm -d command (the executable is in the DS Engine's bin directory).
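
For example (assuming $DSHOME points at the DSEngine directory):

    cd $DSHOME
    bin/analyze.shm -d        # reports the dynamic (Type 30) file table, so you can see how full it is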

2000 is a lot of hashed files, even on a large server like yours.

In your place I'd be looking at using some static hashed files (which perform better for larger sizes than dynamic hashed files), rather than testing the limits of T30FILE.
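
If you do try static hashed files, one way to create one is from the engine's TCL prompt; this is a sketch only, and the file name, type, modulo and separation below are illustrative values, not taken from your job:

    CREATE.FILE HProductCancelReason_Static 18 4001 2

Here 18 is the static file type, 4001 the modulo and 2 the separation; size the modulo from the expected number of rows and their average size.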

Since you are using a lot of multi-instance jobs, I'd also look at using shared hashed files. Details of these can be found in the Hash Stage Disk Caching manual (dsdskche.pdf), which is in your documentation set.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
arunverma
Participant
Posts: 90
Joined: Tue Apr 20, 2004 8:20 am
Location: MUMBAI
Contact:

Post by arunverma »

Thanks Mr. Ray ,

OK, I will go through the document.

arun
Arun Verma