
Type 30 descriptor, table is full.

Posted: Thu Mar 02, 2006 10:02 am
by vcannadevula
We have both the Server engine and the Parallel Extender on the same box. When the parallel jobs are running, the server jobs get the following message and abort:

Unable to allocate Type 30 descriptor, table is full.
DataStage Job 3096 Phantom 24343
DataStage Phantom Finished



Has anyone encountered this error? Is it running out of RAM?

Posted: Thu Mar 02, 2006 10:07 am
by I_Server_Whale
Hi,

Please USE the SEARCH facility. Here is what I found:

LINK1

LINK2

Thanks!
Naveen.

Re: Type 30 descriptor, table is full.

Posted: Thu Mar 02, 2006 10:08 am
by vcannadevula
vcannadevula wrote:We have both the Server engine and the Parallel Extender on the same box. When the parallel jobs are running, the server jobs get the following message and abort:

Unable to allocate Type 30 descriptor, table is full.
DataStage Job 3096 Phantom 24343
DataStage Phantom Finished



Has anyone encountered this error? Is it running out of RAM?

Please ignore this message. I got the answer.

Posted: Thu Mar 02, 2006 10:09 am
by ArndW
You will need to modify your DataStage engine configuration, specifically the T30FILE parameter, to allocate enough internal table space to handle all of these concurrently open dynamic files. You can use the search facility to locate threads on this topic, including (I think) some recommendations on sizing.
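
A minimal sketch of checking the current value, assuming the default engine layout where uvconfig lives in $DSHOME (the path is an assumption, not something stated in this thread):

Code: Select all

# Hypothetical sketch: view the currently configured T30FILE value
cd $DSHOME                       # DataStage engine home (assumed)
grep "^T30FILE" uvconfig         # e.g. "T30FILE 200"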

Posted: Thu Mar 02, 2006 10:20 am
by vcannadevula
ArndW wrote:You will need to modify your DataStage engine configuration, specifically the T30FILE parameter, to allocate enough internal table space to handle all of these concurrently open dynamic files. You can use the search facility to locate threads on this topic, including (I think) some recommendations on sizing.

Does anyone have a help file explaining how to read the output of

"analyze.shm -d"

I would like to know by how much I should increase the T30FILE limit based on this command's output.

Posted: Thu Mar 02, 2006 10:33 am
by ArndW
The command tells you what you have, not what you need.

The original configuration parameters were designed for machines with far less physical memory. In the '80s I used uvconfig settings identical to the current defaults on machines with only 16 MB of physical memory, so back then increasing a non-pageable resident table by a couple of KB could have a significant impact on system swapping!

Without knowing much about your environment, it should be safe to take your current T30FILE value and add 50% to it.
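
As a rough illustration of that 50% rule of thumb, here is a hedged sketch of the usual edit-and-regenerate cycle; the commands and example values are assumptions based on standard engine administration rather than anything stated in this thread, so check your release's documentation and take the engine down with no jobs running:

Code: Select all

# Hypothetical sketch: raise T30FILE by ~50% and rebuild shared memory
cd $DSHOME
grep "^T30FILE" uvconfig      # suppose it reports 200; 200 * 1.5 = 300
vi uvconfig                   # change the T30FILE line to 300
bin/uv -admin -stop           # stop the engine
bin/uvregen                   # regenerate the shared-memory configuration
bin/uv -admin -start          # restart the engine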

Posted: Thu Mar 02, 2006 11:48 am
by ray.wurlod
The T30FILE configuration parameter sets the number of slots ("rows") in a table in shared memory in which the current settings for each open dynamic (Type 30) hashed file reside.

The table contains the following columns, which are displayed by ANALYZE.SHM -d:

Code: Select all

Slot #     Slot number in table, beginning at 0      
Inode      File's inode number
Device     File's device number 
Ref Count  Number of processes with this file open 
Htype      Hashing algorithm (20 = GENERAL, 21 = SEQ.NUM) 
Split      SPLIT.LOAD value (default 80)
Merge      MERGE.LOAD value (default 50)
Curmod     Current modulus (number of groups)
Basemod    Largest power of 2 less than or equal to Curmod
Largerec   LARGE.RECORD value (default 80% of group size)
Filesp     Physical size of file (bytes)
Selects    Number of currently active SELECT operations on file
Nextsplit  Number of next group to split
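
As a rough way to see how close you are to the limit, a minimal sketch follows; the analyze.shm path and the line counting are assumptions, and the output layout varies by release, so treat the count as approximate:

Code: Select all

# Hypothetical sketch: compare configured slots with rows currently in use
grep "^T30FILE" $DSHOME/uvconfig       # configured number of slots
$DSHOME/bin/analyze.shm -d | wc -l     # rough count of listed rows (includes headers)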

Re: Type 30 descriptor, table is full.

Posted: Fri Apr 14, 2006 2:21 pm
by anu123
vcannadevula wrote:
vcannadevula wrote:We have both the Server engine and the Parallel Extender on the same box. When the parallel jobs are running, the server jobs get the following message and abort:

Unable to allocate Type 30 descriptor, table is full.
DataStage Job 3096 Phantom 24343
DataStage Phantom Finished



Has anyone encountered this error? Is it running out of RAM?

Please ignore this message. I got the answer.
Hi,

Could you please post your findings/solution for the above problem? We are getting the same error.

Thanks in advance,

Re: Type 30 descriptor, table is full.

Posted: Fri Apr 14, 2006 3:42 pm
by vcannadevula
Our problem was with a Unix OS-level limitation: you cannot create more than 32767 subdirectories within a single directory. Once that limit is reached, the link count for the parent directory is exhausted and the OS will not let you create any more subdirectories. Since a Type 30 hashed file is a directory, DataStage could not create any more hashed files.

A simple test is to run mkdir in the directory where the hashed file is created (see the sketch below). If it succeeds, the directory limit is not the problem and you may indeed be hitting the T30FILE limit. If it fails, you need to re-create the directory in which you are creating the hashed file.
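
A minimal sketch of that test, assuming the hashed files are created under a hypothetical /data/hashfiles directory (the path is illustrative; the 32767 limit applies per directory):

Code: Select all

# Hypothetical sketch: probe the per-directory subdirectory (link) limit
cd /data/hashfiles                      # directory holding the hashed files (assumed)
ls -l | grep -c '^d'                    # how many subdirectories already exist
mkdir _t30_probe && rmdir _t30_probe    # succeeds: limit not hit; fails: link limit reached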

Re: Type 30 descriptor, table is full.

Posted: Fri Apr 14, 2006 4:16 pm
by anu123
vcannadevula wrote:Our problem was with a Unix OS-level limitation: you cannot create more than 32767 subdirectories within a single directory. Once that limit is reached, the link count for the parent directory is exhausted and the OS will not let you create any more subdirectories. Since a Type 30 hashed file is a directory, DataStage could not create any more hashed files.

A simple test is to run mkdir in the directory where the hashed file is created. If it succeeds, the directory limit is not the problem and you may indeed be hitting the T30FILE limit. If it fails, you need to re-create the directory in which you are creating the hashed file.
Thanks for the info, VC. I will work on it and keep you updated.

Posted: Fri Apr 14, 2006 4:50 pm
by ray.wurlod
The error message indicates that it is the T30FILE setting that needs to be fixed. You are not hitting the "sub-directories in a directory" limit - that would generate a rather different error message.

The problem is not being caused by running both parallel and server jobs; it's just the total number of jobs. Every job has to open a number of hashed files in the Repository (such as RT_STATUS, RT_LOG and RT_CONFIG), and the total of these, plus the hashed files opened by server jobs, is what has led to the T30FILE table becoming full.

As you can see from my earlier post, each row in the T30FILE table is quite small, so increasing T30FILE by 50% or even 100% is quite feasible.
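
As a hedged back-of-envelope illustration of that total (the job counts and per-job figures below are made-up numbers; only the three Repository file names come from this thread):

Code: Select all

# Hypothetical sketch: estimate the slots needed in the T30FILE table
JOBS=300                 # assumed number of concurrently running jobs
REPO_FILES=3             # e.g. RT_STATUS, RT_LOG, RT_CONFIG per job
SERVER_HASHED=100        # assumed hashed files opened explicitly by server jobs
echo $(( JOBS * REPO_FILES + SERVER_HASHED ))   # ~1000 slots, so size T30FILE above this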

Posted: Fri Apr 14, 2006 4:58 pm
by vcannadevula
[quote="ray.wurlod"]The error message indicates that it is the T30FILE setting that needs to be fixed. You are not hitting the "sub-directories in a directory" limit - that would generate a rather different error message.

Ray,
This might be a bug in DataStage 7.5.1. It gives the "Type 30 descriptor, table is full" message even in the scenario I described.