Can I have a job that checks to see if a job is running?
I have a couple of jobs that keep hanging suddenly. They are very simple jobs, but they are processing a lot of data. This is what the jobs look like:
ODBC select data -> Transformer -> File -> ODBC Insert Data
The ODBC Insert Data step does a "clear the table, insert the rows". The job hangs after writing to the file and before starting the ODBC insert. It just sits there, doing nothing.
In an effort to prevent this from happening in the future, I plan to break this into 2 jobs - One to write the file and one to do the insert. I'm going to investigate the bulk insert again (I have never had it work successfully).
I'd also like to find a way to check to see if the job was hung, and if so, send out an email. So I need to create a job that runs at a certain time and checks to see if job XXX is running and if it is running....do stuff. Does anyone know how I might accomplish this?
The job normally runs in about 2 hours total. When it gets stuck, it just sits there like it's about to begin the clear then insert. The table is actually already empty when it gets to that step (it's emptied in another job earlier in the sequence).
It just sits there until I kill it. When I got in this morning, it had been sitting at that step for 13 hours. I killed it and started it again. Ran fine and finished in 2 hours as expected.
Since this has happened twice, I wanted to create another job to check on that job and if it was still running, to send out an email notifying the team that it was stuck, so that we could respond sooner than we are currently responding (when a user calls to complain!).
Per the last response: when I script out the call to the job, that will actually launch the job, won't it? Currently we use sequence jobs to run all of our server jobs.
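No, you don't have to launch the job to check on it. One way to check from outside DataStage is the `dsjob` command-line interface: `dsjob -jobinfo project job` only reports the current status, it never starts the job. A minimal sketch (the project name, job name, and email address are hypothetical placeholders; status code 0 corresponds to RUNNING in the `dsjob` output):

```shell
#!/bin/sh
# Check whether a DataStage job is still running and mail the team if so.
# PROJECT, JOB, and the address below are hypothetical placeholders.
PROJECT=MyProject
JOB=JobXXX

# dsjob -jobinfo prints a line like: "Job Status      : RUNNING (0)"
# Extract the numeric status code from the parentheses.
CODE=$(dsjob -jobinfo "$PROJECT" "$JOB" 2>/dev/null \
       | sed -n 's/^Job Status[^(]*(\([0-9][0-9]*\)).*/\1/p')

if [ "$CODE" = "0" ]; then    # 0 = RUNNING
    echo "$JOB is still running and may be hung" \
        | mail -s "$JOB appears hung" team@example.com
fi
```

Scheduled shortly after the job's normal two-hour window, this only reads status; it will not launch anything.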
For example, DELETE FROM table will always wait for any update lock on any row in the table to be released if no "escape clause" is issued (syntax varies with the database, but none is provided by the "Clear table" method).
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
DELETE FROM <table> will be efficient only if the table is exclusively accessible to the job that issues the DELETE FROM ... SQL.
You may issue a LOCK TABLE <table name> IN EXCLUSIVE MODE (native DB syntax may differ) as a Before SQL statement.
Also, a COMMIT must be issued after the DELETE succeeds.
To be notified upon completion of the job, use DSSendMail (as an after-stage routine).
HTH :)
Share to Learn, and Learn to Share.
Hope the locking and DELETE advice in the earlier replies helps. Since you want to create a job that tracks your job sequence, you can use job-control code like the following to check the sequencer's status:
* DSGetJobInfo needs a handle from DSAttachJob, not a bare job name
hJob = DSAttachJob("JobXXX", DSJ.ERRNONE)
Status = DSGetJobInfo(hJob, DSJ.JOBSTATUS)
If Status = DSJS.RUNNING Then
   Call DSLogInfo("JobXXX is still running", "JobControl")
End
Dummy = DSDetachJob(hJob)
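Whichever way the check is implemented (job control, or an external script), it still needs to run at a certain time. If it is a shell script, cron can drive it; a hypothetical crontab entry (the path and schedule are assumptions, not anything from this thread):

```
# m h dom mon dow  command -- run the hung-job check at 06:00 every day
0 6 * * * /opt/datastage/scripts/check_hung_job.sh >> /tmp/check_hung_job.log 2>&1
```

Redirecting output to a log file keeps a record of each check alongside any email alerts.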
Re: Can I have a job that checks to see if a job is running?
I don't know whether it is possible in your case, but if your database is Oracle you can use the update action 'Truncate table then insert' instead of 'Clear table then insert'. This will be faster and helps a lot, because TRUNCATE does not have to write to the transaction log, so it is always faster than DELETE. Hope it helps: if you can replace the ODBC stage with an Oracle stage, you won't require a separate job to check whether the job is running or stuck.
SMB