Type 30 descriptor, table is full.

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

vcannadevula
Charter Member
Posts: 143
Joined: Thu Nov 04, 2004 6:53 am

Type 30 descriptor, table is full.

Post by vcannadevula »

We have both the Server and Parallel Extender editions on the same box. When the parallel jobs are running, the server jobs get the following message and abort:

Unable to allocate Type 30 descriptor, table is full.
DataStage Job 3096 Phantom 24343
DataStage Phantom Finished



Has anyone encountered this error? Is it running out of RAM?
I_Server_Whale
Premium Member
Posts: 1255
Joined: Wed Feb 02, 2005 11:54 am
Location: United States of America

Post by I_Server_Whale »

Hi,

Please USE the SEARCH facility. Here is what I found:

LINK1

LINK2

Thanks!
Naveen.
Anything that won't sell, I don't want to invent. Its sale is proof of utility, and utility is success.
Author: Thomas A. Edison 1847-1931, American Inventor, Entrepreneur, Founder of GE
vcannadevula
Charter Member
Posts: 143
Joined: Thu Nov 04, 2004 6:53 am

Re: Type 30 descriptor, table is full.

Post by vcannadevula »

vcannadevula wrote:We have both the Server and Parallel Extender editions on the same box. When the parallel jobs are running, the server jobs get the following message and abort:

Unable to allocate Type 30 descriptor, table is full.
DataStage Job 3096 Phantom 24343
DataStage Phantom Finished



Has anyone encountered this error? Is it running out of RAM?

Please ignore this message. I got the answer.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

You will need to modify your DataStage engine configuration, specifically the T30FILE parameter, to allocate enough internal table space to handle all of these concurrently open dynamic files. You can use the search facility to locate threads on this topic, including (I think) some recommendations on sizing.
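
For reference, a minimal sketch of the usual procedure on the engine host follows. The paths, the admin stop/start commands and the example values are assumptions about a typical installation; check your own uvconfig and the threads mentioned above before changing anything.

Code:

# Sketch only - run as the DataStage administrator with no jobs running.
cd $DSHOME                 # DataStage engine directory (assumption)
grep T30FILE uvconfig      # show the current setting
bin/uv -admin -stop        # stop the engine before regenerating
vi uvconfig                # raise T30FILE, e.g. from 200 to 300 (example values only)
bin/uvregen                # rebuild the binary configuration from uvconfig
bin/uv -admin -start       # restart the engine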
vcannadevula
Charter Member
Posts: 143
Joined: Thu Nov 04, 2004 6:53 am

Post by vcannadevula »

ArndW wrote:You will need to modify your DataStage engine configuration, specifically the T30FILE parameter, to allocate enough internal table space to handle all of these concurrently open dynamic files. You can use the search facility to locate threads on this topic, including (I think) some recommendations on sizing.

Does anyone have a help file on how to read the output of

"analyze.shm -d"?

I would like to know by how much I should increase the T30FILE limit based on this command.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

The command tells you what you have, not what you need.

The original configuration parameters were designed for machines with a lot less physical memory. I used uvconfig settings identical to the current defaults in the 1980s on machines with only 16 MB of physical memory - so increasing a non-pageable resident table by a couple of KB could have a significant impact on system swapping!

Without knowing much about your environment, it should be safe to take your current T30FILE value and add 50% to it.
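
To illustrate that rule of thumb only (the uvconfig location and line format are assumptions):

Code:

# Sketch: print the current T30FILE value and a value 50% larger
awk '/^T30FILE/ { print "current:", $2, "  suggested:", int($2 * 1.5) }' $DSHOME/uvconfig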
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

The T30FILE configuration parameter sets the number of slots ("rows") in a table in shared memory in which the current settings for each open dynamic (Type 30) hashed file reside.

The table contains the following columns, displayed by ANALYZE.SHM -d:

Code:

Slot #     Slot number in table, beginning at 0      
Inode      File's inode number
Device     File's device number 
Ref Count  Number of processes with this file open 
Htype      Hashing algorithm (20 = GENERAL, 21 = SEQ.NUM) 
Split      SPLIT.LOAD value (default 80)
Merge      MERGE.LOAD value (default 50)
Curmod     Current modulus (number of groups)
Basemod    Largest power of 2 less than or equal to Curmod
Largerec   LARGE.RECORD value (default 80% of group size)
Filesp     Physical size of file (bytes)
Selects    Number of currently active SELECT operations on file
Nextsplit  Number of next group to split
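
One way to gauge how full the table is getting is to compare the number of populated slots in that output with the configured T30FILE value. The commands below are only a sketch; the path to analyze.shm and the exact output layout vary by release, so the line count is approximate.

Code:

# Sketch: configured limit vs. an approximate count of occupied slots
cd $DSHOME
grep T30FILE uvconfig                      # configured number of slots
bin/analyze.shm -d | grep -c '^ *[0-9]'    # rough count of slot rows (layout-dependent)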
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
anu123
Premium Member
Posts: 143
Joined: Sun Feb 05, 2006 1:05 pm
Location: Columbus, OH, USA

Re: Type 30 descriptor, table is full.

Post by anu123 »

vcannadevula wrote:
vcannadevula wrote:We have both the Server and Parallel Extender editions on the same box. When the parallel jobs are running, the server jobs get the following message and abort:

Unable to allocate Type 30 descriptor, table is full.
DataStage Job 3096 Phantom 24343
DataStage Phantom Finished



Has anyone encountered this error? Is it running out of RAM?

Please ignore this message. I got the answer.
Hi,

Could you please post your findings/solution for the above problem? We are getting the same error, too.

Thanks in advance,
Thank you,
Anu
vcannadevula
Charter Member
Posts: 143
Joined: Thu Nov 04, 2004 6:53 am

Re: Type 30 descriptor, table is full.

Post by vcannadevula »

Our problem was a Unix OS-level limitation.
We cannot create more than 32767 directories within a single directory.
Beyond that, the link limit for the directory is exhausted and Unix will not allow you to create any more sub-directories. Because a Type 30 hashed file is a directory, it did not allow us to create any more hashed files.

A simple test we did was to run the mkdir command in the directory where you are creating the hashed file. If it succeeds, you might be facing the T30FILE limit.
If it does not succeed, you need to re-create the directory in which you are creating the hashed file.
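
The test described above looks roughly like this; the path is a placeholder for wherever your jobs create their hashed files.

Code:

# Sketch of the quick mkdir check described above
cd /path/to/hashed/file/directory          # placeholder path
if mkdir t30_limit_test 2>/dev/null; then
    rmdir t30_limit_test
    echo "mkdir works - not the sub-directory limit; suspect the T30FILE setting"
else
    echo "mkdir fails - this directory has hit the sub-directory (link) limit"
fi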
anu123
Premium Member
Posts: 143
Joined: Sun Feb 05, 2006 1:05 pm
Location: Columbus, OH, USA

Re: Type 30 descriptor, table is full.

Post by anu123 »

vcannadevula wrote:Our problem was a Unix OS-level limitation.
We cannot create more than 32767 directories within a single directory.
Beyond that, the link limit for the directory is exhausted and Unix will not allow you to create any more sub-directories. Because a Type 30 hashed file is a directory, it did not allow us to create any more hashed files.

A simple test we did was to run the mkdir command in the directory where you are creating the hashed file. If it succeeds, you might be facing the T30FILE limit.
If it does not succeed, you need to re-create the directory in which you are creating the hashed file.
Thanks for the info, VC. I will work on it and keep you updated.
Thank you,
Anu
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

The error message indicates that it is the T30FILE setting that needs to be fixed. You are not hitting the "sub-directories in a directory" limit - that would generate a rather different error message.

The problem is not being caused by running both parallel and server jobs; it's just the total number of jobs. Every job has to open a number of hashed files in the Repository (such as RT_STATUS, RT_LOG, RT_CONFIG) and the total of these, plus hashed files opened by server jobs, is what's led to the T30FILE table becoming full.

As you can see from my earlier post, each row in the T30FILE table is quite small, so increasing T30FILE by 50% or even 100% is quite feasible.
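
A very rough way to size a new value along those lines is sketched below. The per-job figure and the placeholders are assumptions rather than measurements: each running job opens a handful of repository hashed files (RT_STATUS, RT_LOG, RT_CONFIG and so on), plus whatever hashed files your server jobs open themselves.

Code:

# Back-of-envelope sizing sketch - adjust the placeholders for your site
PEAK_JOBS=200            # placeholder: peak number of concurrently running jobs
REPO_FILES_PER_JOB=4     # assumption: repository hashed files opened per job
SERVER_HASHED_FILES=100  # placeholder: hashed files opened by your server jobs
echo "suggested minimum T30FILE: $(( PEAK_JOBS * REPO_FILES_PER_JOB + SERVER_HASHED_FILES ))"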
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
vcannadevula
Charter Member
Posts: 143
Joined: Thu Nov 04, 2004 6:53 am

Post by vcannadevula »

[quote="ray.wurlod"]The error message indicates that it is the T30FILE setting that needs to be fixed. You are not hitting the "sub-directories in a directory" limit - that would generate a rather different error message.

Ray,
This might be a bug in DataStage 7.5.1. It gives the "Type 30 descriptor, table is full" message even in the scenario I have specified.