
DSD.UVOpen

Posted: Tue Jul 06, 2004 11:09 pm
by arunverma
Hi All,

We have a multi-instance job. During data extraction some hashed files are created, and the same files are used again at the time of data loading. We got the error below while creating one of the hashed files, and the job aborted. Please help.

J10IcaSrc.41.HProductCancelReason.ToCancHFile: DSD.UVOpen "HProductCancelReason_41" is already in your VOC file as a file definition record.
File name =
File not created.
.DataStage Job 868 Phantom 21295
Program "DSD.UVOpen": Line 396, Unable to allocate Type 30 descriptor, table is full.
Job Aborted after Fatal Error logged.
Attempting to Cleanup after ABORT raised in stage J10IcaSrc.41.TCancellationReason
DataStage Phantom Aborting with @ABORT.CODE = 1

Posted: Tue Jul 06, 2004 11:34 pm
by rasi
Hi Arun,

This could be because of the limit on the number of files that can be open at once. Change the T30FILE configuration parameter in the uvconfig file.
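
A minimal sketch of how that change is usually applied on UNIX (the paths and the stop/start commands here are assumptions for a typical install in which $DSHOME points at the DSEngine directory, so please verify them against your own environment):

    cd $DSHOME
    bin/uv -admin -stop          (stop the DataStage server first)
    vi uvconfig                  (raise the T30FILE value)
    bin/uvregen                  (regenerate the shared memory configuration)
    bin/uv -admin -start         (restart the server)

The new value only takes effect after uvregen has been run and the engine restarted.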

Best of luck

Rasi

Posted: Wed Jul 07, 2004 12:23 am
by ray.wurlod
This IS because you have too many hashed files open. The clue is in the message "Unable to allocate Type 30 descriptor, table is full."

Don't forget that DataStage itself uses hashed files, as many as ten per job, in addition to any that you are using.

What can you do?
  • Investigate increasing the T30FILE parameter.
  • Investigate the use of shared hashed files.
  • Investigate the use of static, rather than dynamic, hashed files (see the sketch after this list).
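
For the static route, a sketch (the file name, type, modulo and separation below are illustrative, not taken from your project) is to pre-create the file at the UniVerse TCL prompt and let the job write to it rather than create it:

    CREATE.FILE CancelReasonLookup 18 4001 4

That creates a static type 18 file with modulo 4001 and separation 4; the right modulo depends on how many rows the master data holds, so size it from your own row counts (and check the CREATE.FILE syntax against the UniVerse documentation for your release). The Hashed File stage's create-file options should also let you choose a static file type if you prefer to keep creation inside the job.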

Posted: Wed Jul 07, 2004 12:23 am
by arunverma
Hi Rasi,

As per Mr. Ray's advice, we have already changed the T30FILE limit to 2000; this is the maximum limit.

Arun

Posted: Wed Jul 07, 2004 12:31 am
by arunverma
Hi Mr. Ray,

We have increased the T30FILE limit to 2000.

These hashed files are not shared hashed files: one job creates them at the time of extraction and another job uses them at the time of loading. We have to use dynamic hashed files because these are master code-and-name hashed files; if any record is added in the source system we have to keep it, so the master files are not fixed in size. Please advise what we should do.

Arun

Posted: Wed Jul 07, 2004 1:05 am
by ray.wurlod
Where is that maximum documented?

Posted: Wed Jul 07, 2004 1:27 am
by arunverma
Hi Ray,

Thanks for your prompt reply. I understand from your reply that 2000 is not the upper bound for T30FILE. Can I request you to suggest an optimal value for this parameter? Do I need to set it to, say, 10000 or 20000? Is this likely to resolve my problem? Could it cause any other problems on my system?

Sorry to trouble you so much, but this has become a chronic problem on our production box.

Thanks in advance.

Arun Verma

Posted: Wed Jul 07, 2004 1:58 am
by ray.wurlod
I am not aware of any upper bound on the size of T30FILE. Certainly none is documented in the Administering UniVerse manual.

As I have explained in the past, the T30FILE parameter determines the number of rows in a shared memory table in which the dynamic sizing values for each open hashed file are stored. The more rows, the more dynamic hashed files can be open at the same time.

There is a physical limit imposed by the maximum size of a shared memory segment on your hardware, and by the other structures that must fit in it. When you execute uvregen successfully, the size of this segment is reported. Each extra row in the T30FILE table takes slightly over 100 bytes.
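
To put rough numbers on that: raising T30FILE from 2000 to 20000 would add about 18000 x 100 bytes, a little under 2 MB, to the segment. That is only an estimate from the per-row figure above, so compare the segment size that uvregen reports before and after the change.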

You can monitor the use of the T30FILE table with the analyze.shm -d command (the executable is in the DS Engine's bin directory).
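
For example (assuming $DSHOME points at the DSEngine directory):

    cd $DSHOME
    bin/analyze.shm -d

Run it while your multi-instance jobs are at their peak and you can see how close the dynamic file table actually gets to the T30FILE limit.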

2000 is a lot of hashed files, even on a large server like yours.

In your place I'd be looking at using some static hashed files (which perform better for larger sizes than dynamic hashed files), rather than testing the limits of T30FILE.

Since you are using a lot of multi-instance jobs, I'd also look at using shared hashed files. Details of these can be found in the Hash Stage Disk Caching manual (dsdskche.pdf), which is in your documentation set.

Posted: Wed Jul 07, 2004 3:44 am
by arunverma
Thanks, Mr. Ray.

OK, I will go through the document.

Arun