error code = -14

panic
Participant
Posts: 17
Joined: Fri Sep 23, 2005 5:58 pm

error code = -14

Post by panic »

I am getting error code = -14. Does anyone know how to resolve it?
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Search the forum for what it means.

Then stop overloading your machine.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
fridge
Premium Member
Posts: 136
Joined: Sat Jan 10, 2004 8:51 am

bit more detail

Post by fridge »

We have been plagued by this one. Basically, the out-of-the-box DataStage configuration allows only a set number of dynamic and static hashed files to be open at once.

These limits are specified in uvconfig (see the T30FILE and MFILES entries).

Once this limit is hit, UniVerse starts closing file handles in order to open new ones. On a busy box this can take some time, and if it does not complete within 60 seconds it will cause timeout (-14) errors.

The solution is to increase MFILES and T30FILE, run the uvregen command from the dsadm account, and then restart the engine.
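
As a minimal sketch of those steps on a UNIX engine, assuming $DSHOME points at the engine directory and using illustrative values only (size them for your own box; note that sed -i is a GNU extension, so on other platforms edit uvconfig by hand in vi):

    # Run as dsadm. The values 300 and 600 are illustrative, not recommendations.
    cd $DSHOME
    cp uvconfig uvconfig.bak                      # keep a backup of the original
    sed -i 's/^MFILES.*/MFILES 300/' uvconfig     # raise the rotating file pool
    sed -i 's/^T30FILE.*/T30FILE 600/' uvconfig   # raise the dynamic file table
    bin/uv -admin -stop                           # stop the engine
    bin/uvregen                                   # regenerate the shared configuration
    bin/uv -admin -start                          # restart the engine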

I would speak to Ascential for guidance, as this is a bit in-depth, but it can basically free up the engine to run as many jobs as the box can handle.

Hope this points you in the right direction.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

MFILES should default to 200 on Windows, which ought to be ample. Of course, you can monitor it, and increase it if it isn't. On UNIX systems it should be set to a value eight less than the value of the kernel parameter NFILE (number of file units that a process may have open).
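
As an illustration of that sizing rule, a hedged sketch for a Linux box, where the closest per-process analogue of NFILE is the descriptor limit reported by ulimit -n (on other UNIX flavours query whichever kernel parameter your platform actually uses):

    # Derive a candidate MFILES value as NFILE - 8.
    # Assumption: ulimit -n stands in for the kernel NFILE parameter here.
    NFILE=`ulimit -n`
    MFILES=`expr $NFILE - 8`
    echo "Candidate MFILES setting for uvconfig: $MFILES"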

T30FILE usually does require increasing on busier systems with lots of jobs. If T30FILE is insufficient, the failure is usually more catastrophic than a mere timeout. Resetting the aborted job may yield additional diagnostic information. You can also monitor the number of dynamic hashed files open system-wide, to determine whether you're getting near to this limit.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
moekham
Premium Member
Posts: 2
Joined: Fri Sep 24, 2004 1:44 am

Post by moekham »

Hi
What are the ways to monitor the number of dynamic hashed files open?

Thx
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

$DSHOME/bin/analyze.shm -d
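
If you want to watch that figure over time, a rough sketch that samples the listing once a minute. The line count assumes analyze.shm -d prints roughly one line per open dynamic hashed file; the exact layout varies by release, so treat it as a trend indicator rather than an exact figure:

    # Log an approximate count of open dynamic hashed files every minute,
    # to see how close the system gets to the T30FILE limit.
    while true
    do
        echo "`date` entries: `$DSHOME/bin/analyze.shm -d | wc -l`"
        sleep 60
    done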
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.