Type 30 descriptor, table is full.
Moderators: chulett, rschirm, roy
-
- Charter Member
- Posts: 143
- Joined: Thu Nov 04, 2004 6:53 am
Type 30 descriptor, table is full.
We have both the Server and Parallel Extender engines on the same box. When the parallel jobs are running, the server jobs get the following message and abort:
Unable to allocate Type 30 descriptor, table is full.
DataStage Job 3096 Phantom 24343
DataStage Phantom Finished
Has anyone encountered this error? Is it running out of RAM?
-
- Premium Member
- Posts: 1255
- Joined: Wed Feb 02, 2005 11:54 am
- Location: United States of America
-
- Charter Member
- Posts: 143
- Joined: Thu Nov 04, 2004 6:53 am
Re: Type 30 descriptor, table is full.
vcannadevula wrote: We have both the Server and Parallel Extender engines on the same box. When the parallel jobs are running, the server jobs get the following message and abort:
Unable to allocate Type 30 descriptor, table is full.
DataStage Job 3096 Phantom 24343
DataStage Phantom Finished
Has anyone encountered this error? Is it running out of RAM?
Please ignore this message. I got the answer.
You will need to modify your DataStage engine configuration, specifically the T30FILE parameter, to allocate enough internal table space to handle all of these concurrently open dynamic files. You can use the search facility to locate threads on this topic, including (I think) some recommendations on sizing.
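To illustrate the sizing arithmetic, here is a minimal sketch run against a made-up uvconfig fragment. The file name, surrounding parameters, and the value 200 are all illustrative; on a real engine the file lives under $DSHOME, and an edit only takes effect after regenerating the shared configuration (bin/uvregen) with the engine stopped.

```shell
# Hypothetical uvconfig fragment, for illustration only.
cat > /tmp/uvconfig.demo <<'EOF'
MFILES 100
T30FILE 200
GLTABSZ 75
EOF

# Read the current T30FILE value and compute a 50% increase:
cur=$(awk '$1 == "T30FILE" {print $2}' /tmp/uvconfig.demo)
new=$((cur + cur / 2))
echo "T30FILE: $cur -> $new"   # prints "T30FILE: 200 -> 300"
```

The same one-liner works against your real uvconfig to check the current setting before deciding on a new value.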
-
- Charter Member
- Posts: 143
- Joined: Thu Nov 04, 2004 6:53 am
ArndW wrote: You will need to modify your DataStage engine configuration, specifically the T30FILE parameter, to allocate enough internal table space to handle all of these concurrently open dynamic files. You can use the search facility to locate threads on this topic, including (I think) some recommendations on sizing.
Does anyone have a help file on how to read the output of
"analyze.shm -d"?
I would like to know by how much I should increase the T30FILE limit using this command.
The command tells you what you have, not what you need.
The original configuration parameters were designed for machines with far less physical memory. In the '80s I used uvconfig settings identical to the current defaults on machines with only 16 MB of physical memory, so increasing a non-pageable resident table by even a couple of KB could have a significant impact on system swapping!
Without knowing much about your environment, it should be safe to take your current T30FILE value and add 50% to it.
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
The T30FILE configuration parameter sets the number of slots ("rows") in a shared-memory table in which the current settings for each open dynamic (Type 30) hashed file reside.
The table contains the following columns, displayed by ANALYZE.SHM -d:
Code:
Slot # Slot number in table, beginning at 0
Inode File's inode number
Device File's device number
Ref Count Number of processes with this file open
Htype Hashing algorithm (20 = GENERAL, 21 = SEQ.NUM)
Split SPLIT.LOAD value (default 80)
Merge MERGE.LOAD value (default 50)
Curmod Current modulus (number of groups)
Basemod Largest power of 2 less than or equal to Curmod
Largerec LARGE.RECORD value (default 80% of group size)
Filesp Physical size of file (bytes)
Selects Number of currently active SELECT operations on file
Nextsplit Number of next group to split
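One way to use the listing: each data row is an occupied slot, so counting rows tells you how close the table is to your T30FILE setting. The two-row sample below is fabricated purely to match the column layout above; on a real system you would capture the actual ANALYZE.SHM -d output instead.

```shell
# Fabricated two-row sample matching the column layout above; only the
# row count matters for this check.
cat > /tmp/shm.demo <<'EOF'
Slot # Inode Device Ref Count Htype Split Merge Curmod Basemod
0      12345 2049   3         20    80    50    17     16
1      12346 2049   1         20    80    50    9      8
EOF

# Occupied slots = data rows after the header; if this number is near
# your T30FILE setting, the table is close to full.
used=$(awk 'NR > 1' /tmp/shm.demo | wc -l | tr -d ' ')
echo "slots in use: $used"
```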
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Re: Type 30 descriptor, table is full.
Hi,
vcannadevula wrote: We have both the Server and Parallel Extender engines on the same box. When the parallel jobs are running, the server jobs get the following message and abort:
Unable to allocate Type 30 descriptor, table is full.
DataStage Job 3096 Phantom 24343
DataStage Phantom Finished
Has anyone encountered this error? Is it running out of RAM?
Please ignore this message. I got the answer.
Could you please post your findings/solution for the above problem? We are getting the same error.
Thanks in advance,
Anu
-
- Charter Member
- Posts: 143
- Joined: Thu Nov 04, 2004 6:53 am
Re: Type 30 descriptor, table is full.
Our problem was with a Unix OS-level limitation.
You cannot create more than 32767 directories within a single directory.
Beyond that, the Unix link limit is exhausted and no further subdirectories can be created. Because a Type 30 hashed file is a directory, we could not create any more hashed files.
A simple test is to run the mkdir command in the directory where you are creating the hashed file. If it succeeds, you might be facing the T30FILE limit instead.
If it does not succeed, you need to re-create the directory in which you are creating the hashed file.
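The mkdir test described above can be sketched as follows. A scratch directory stands in for the real hashed-file directory, which is an assumption of this example; substitute the directory your jobs actually write to.

```shell
# Probe whether the directory can still take subdirectories (a scratch
# directory is used here; substitute your real hashed-file directory).
dir=/tmp/hashdir.demo
mkdir -p "$dir"

if mkdir "$dir/_t30_probe" 2>/dev/null; then
    echo "mkdir OK - not at the per-directory subdirectory limit"
    rmdir "$dir/_t30_probe"
else
    echo "mkdir failed - likely at the link limit (~32767 subdirectories)"
fi

# How many subdirectories are already there?
count=$(find "$dir" -maxdepth 1 -type d | wc -l | tr -d ' ')
echo "subdirectories: $((count - 1))"   # excludes the directory itself
```

If mkdir succeeds but jobs still fail with the Type 30 message, the T30FILE table, not the filesystem, is the more likely culprit.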
Re: Type 30 descriptor, table is full.
Thanks for the info, VC. I will work on it and give you an update.
vcannadevula wrote: Our problem was with a Unix OS-level limitation.
You cannot create more than 32767 directories within a single directory.
Beyond that, the Unix link limit is exhausted and no further subdirectories can be created. Because a Type 30 hashed file is a directory, we could not create any more hashed files.
A simple test is to run the mkdir command in the directory where you are creating the hashed file. If it succeeds, you might be facing the T30FILE limit instead.
If it does not succeed, you need to re-create the directory in which you are creating the hashed file.
Thank you,
Anu
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
The error message indicates that it is the T30FILE setting that needs to be fixed. You are not hitting the "sub-directories in a directory" limit - that would generate a rather different error message.
The problem is not being caused by running both parallel and server jobs; it's just the total number of jobs. Every job has to open a number of hashed files in the Repository (such as RT_STATUS, RT_LOG, RT_CONFIG) and the total of these, plus hashed files opened by server jobs, is what's led to the T30FILE table becoming full.
As you can see from my earlier post, each row in the T30FILE table is quite small, so increasing T30FILE by 50% or even 100% is quite feasible.
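To put a rough number on "quite small": even under a deliberately generous per-slot size, doubling the table costs on the order of tens of KB of shared memory. The slot size below is an assumption for illustration only; the real size depends on the engine build.

```shell
# Back-of-envelope cost of doubling T30FILE (the 100-byte slot size is an
# assumed upper bound, not a documented figure):
slot_bytes=100
extra_slots=200        # e.g. going from T30FILE 200 to 400
echo "extra shared memory: $((slot_bytes * extra_slots)) bytes"   # 20000
```

Compared with the memory footprint of a single running job, 20 KB of extra non-pageable table space is negligible on a modern machine.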
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
-
- Charter Member
- Posts: 143
- Joined: Thu Nov 04, 2004 6:53 am
ray.wurlod wrote: The error message indicates that it is the T30FILE setting that needs to be fixed. You are not hitting the "sub-directories in a directory" limit - that would generate a rather different error message.
Ray,
This might be a bug in DataStage 7.5.1. It gives the "Type 30 descriptor, table is full" message even in the scenario I have described.