error code = -14
Moderators: chulett, rschirm, roy
I was getting error code = -14 when running jobs. Does anyone know how to resolve it?
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
A bit more detail:
We have been plagued by this one. Basically, the out-of-the-box DataStage configuration allows only a set number of dynamic and static hashed files to be open at once. These limits are specified in uvconfig (see the T30FILE and MFILES entries).

Once that number is hit, UniVerse starts closing file handles in order to open new ones. On a busy box this can take some time, and if it doesn't complete within 60 seconds you get timeout (-14) errors.

The solution is to increase MFILES and T30FILE, run the uvregen command from the dsadm account, and then restart the engine. I would speak to Ascential for guidance, as this is a bit in-depth, but it can basically free up the engine to run as many jobs as the box can handle.

Hope this points you in the right direction.
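A rough sketch of that procedure (paths and the example values are illustrative only; $DSHOME and sensible limits depend on your install, and the engine must be idle before you stop it):

```shell
# Run as the dsadm account. $DSHOME is the engine directory,
# e.g. /Ascential/DataStage/DSEngine on older installs.
cd "$DSHOME"

# 1. Stop the engine (make sure no jobs are running):
bin/uv -admin -stop

# 2. Edit uvconfig and raise the limits, for example:
#      MFILES  200     (rotating file unit pool)
#      T30FILE 512     (dynamic hashed file headers)

# 3. Regenerate the engine configuration from the edited uvconfig:
bin/uvregen

# 4. Restart the engine:
bin/uv -admin -start
```

The uvregen step is what actually applies the uvconfig edits; changing the file alone has no effect until the engine is regenerated and restarted.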
MFILES should default to 200 on Windows, which ought to be ample. Of course, you can monitor it, and increase it if it isn't. On UNIX systems it should be set to a value eight less than the value of the kernel parameter NFILE (number of file units that a process may have open).
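The UNIX sizing rule above is simple arithmetic; a minimal sketch, assuming you have already looked up NFILE in your kernel configuration (the value here is just an example):

```shell
# Example NFILE value - read the real one from your kernel tunables
# (e.g. sysctl, or /etc/system on Solaris).
NFILE=2048

# Rule of thumb from this thread: MFILES = NFILE - 8.
MFILES=$((NFILE - 8))
echo "MFILES=$MFILES"   # prints MFILES=2040 for this example
```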
T30FILE usually does require increasing on busier systems with lots of jobs. If T30FILE is insufficient, the failure is usually more catastrophic than a mere timeout. Resetting the aborted job may yield additional diagnostic information. You can also monitor the number of dynamic hashed files open system-wide, to determine whether you're getting near to this limit.
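One crude way to monitor open dynamic hashed files system-wide, assuming lsof is available: each dynamic (type 30) hashed file is a directory containing DATA.30 and OVER.30 parts, so counting open DATA.30 handles approximates the count. This is an ad-hoc sketch, not an official DataStage monitoring tool:

```shell
# count_dyn30: count open dynamic hashed files from an lsof-style
# listing on stdin, by counting file paths ending in /DATA.30.
count_dyn30() {
  grep -c '/DATA\.30$'
}

# In practice (typically needs root to see all processes):
#   lsof 2>/dev/null | awk '{print $NF}' | count_dyn30
```

Compare the count against your T30FILE setting to see how close you are to the limit.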
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.