Datastage Jobs Not Running
Moderators: chulett, rschirm, roy
-
- Premium Member
- Posts: 1735
- Joined: Thu Mar 01, 2007 5:44 am
- Location: Troy, MI
-
- Participant
- Posts: 3337
- Joined: Mon Jan 17, 2005 4:49 am
- Location: United Kingdom
Removing or purging DS logs is a different thing from clearing &PH&.
Logs are stored in RT_LOG hashed files and are related to the specific job's logs.
&PH& stores the phantom process history; it sits at more of a kernel level and is one central area where your process communication occurs.
To clear it, log into DS Administrator, select the project you wish to clear, and run the command below. Clearing &PH& will help DataStage 'turn around' better.
Code:
CLEAR.FILE &PH&
Most people set up a cron script to prune that directory, i.e. delete files older than x days to keep a rolling x days in the phantom directory. It's important to note that CLEAR.FILE is like a truncate so if you take that route you should only do that when no jobs are running.
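A minimal sketch of such a cron-driven prune, assuming a POSIX shell; the project path and the 7-day window below are assumptions, not something from this thread:

```shell
#!/bin/sh
# prune_ph: delete plain files older than $2 days under directory $1.
# Run from cron against your project's &PH& directory -- and, as noted
# above, only while no jobs are running.
prune_ph() {
    dir="$1"
    days="$2"
    # -type f skips subdirectories; -mtime +N matches files last
    # modified more than N*24 hours ago.
    find "$dir" -type f -mtime +"$days" -exec rm -f {} \;
}

# Hypothetical crontab entry (path is an assumption):
# 0 2 * * * /usr/local/bin/prune_ph.sh "/opt/IBM/InformationServer/Server/Projects/MYPROJ/&PH&" 7
```

Unlike CLEAR.FILE, this keeps a rolling window of recent phantom files rather than truncating everything.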
And the solution to your -14 error is to stop overloading your server. Sometimes that's not about how many are running but rather it means don't attempt to start so many at the same time - stagger out their start a little bit. While there (allegedly) is a "patch" for this to increase the timeout value, it's not something IBM just gives anyone that asks, you have to make a pretty compelling business case for this from what I understand.
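The staggering idea above can be sketched as a small launcher loop. This is a hedged sketch with the actual launch command injected as a parameter; the `dsjob` line, `MYPROJ`, and the job names in the comment are illustrative assumptions:

```shell
#!/bin/sh
# stagger_runs: invoke "$runner <job>" for each job name, sleeping $gap
# seconds between starts so the engine never has to spawn every phantom
# at the same instant.
stagger_runs() {
    runner="$1"
    gap="$2"
    shift 2
    for job in "$@"; do
        $runner "$job"     # word-splitting on $runner is intentional here
        sleep "$gap"
    done
}

# Illustrative use with the DataStage CLI (project/job names are assumptions):
# stagger_runs "dsjob -run MYPROJ" 30 LoadCustomers LoadOrders LoadInvoices
```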
-craig
"You can never have too many knives" -- Logan Nine Fingers
-
- Premium Member
- Posts: 457
- Joined: Tue Sep 25, 2007 4:05 pm
Sue,
paranoid wrote: The jobs failed today as well and finally I could find the error code when running manually on the server. It says "Status code = -14 DSJE_TIMEOUT".
This was exactly our problem too. Are the jobs getting fired at all later? I mean, if you try to run them manually, do they run? Or do they just keep quiet, not logging anything, and stay as they were before?
Vivek Gadwal
Experience is what you get when you didn't get what you wanted
-
- Premium Member
- Posts: 1735
- Joined: Thu Mar 01, 2007 5:44 am
- Location: Troy, MI
I would first suggest cleaning the &PH& folder and the old job logs, then checking again. Also monitor the resource usage after you trigger the jobs; if it's taking more than 90-95% CPU, try to reduce the number of jobs running concurrently, otherwise performance will suffer and you may see other error messages as well. IMO.
paranoid wrote: I would contact IBM on this as you suggested.
Priyadarshi Kunal
Genius may have its limitations, but stupidity is not thus handicapped.
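The pre-flight check suggested above (file count in &PH&, plus CPU headroom) can be sketched in shell. This assumes a Linux host with `vmstat` available; the &PH& path in the comment is an assumption:

```shell
#!/bin/sh
# count_ph: report how many files are sitting in a &PH& directory --
# a large count there is itself a warning sign.
count_ph() {
    ls -1 "$1" 2>/dev/null | wc -l
}

# cpu_idle: sample overall CPU idle % (Linux vmstat; field 15 is "id").
# If idle falls toward 5-10%, consider running fewer jobs concurrently.
cpu_idle() {
    vmstat 1 2 | tail -1 | awk '{print $15}'
}

# Hypothetical usage -- the path is an assumption:
# echo "&PH& files: $(count_ph '/opt/IBM/InformationServer/Server/Projects/MYPROJ/&PH&')"
# echo "CPU idle %: $(cpu_idle)"
```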
Thanks all. Can I clear all of the files in the &PH& folder? Will there be any issue with the existing jobs if I delete all the files in it?
If we need to clear that folder, should we make sure that no job is running at that time?
@Vivek -- Yes, after some time when we ran them manually, they ran fine.
Sue
-
- Participant
- Posts: 3337
- Joined: Mon Jan 17, 2005 4:49 am
- Location: United Kingdom
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Hi everyone,
We have finally resolved this issue after a struggle of 10 days or so.
After a DS restart and clearing the files in the &PH& folder, we could resolve this issue. No more failures today.
As per IBM Support, the number of files in the &PH& folder should not exceed 1000. Since we had more than 3000 files, they advised us to clear the files in the &PH& folder. The same suggestion was given by the DS experts in this post as well.
Before clearing the folder, we restarted the DS server as well.
I am very much thankful to each and every one who replied to this post.
Have a nice day!!
Sue :D