job control advice

Archive of postings to DataStageUsers@Oliver.com. This forum is intended only as a reference and cannot be posted to.

Moderators: chulett, rschirm

Locked
admin
Posts: 8720
Joined: Sun Jan 12, 2003 11:26 pm

job control advice

Post by admin »

I have a number of jobs (about 6 or 7) which are all fairly similar in structure. Basically, they have job control code which copies the source files from a remote system to a local directory. The stages in the job then use ODBC to read these files.

Very occasionally (except for the last week when it has been all too frequent), there is a problem in the source system which gives me grief when trying to copy the files (remote system is Unix and file is accessed via a variation of Samba - too messy to explain here).

I like the jobs the way they are because they operate as a unit, copying the files and then loading them. Also, the last stage in the job writes to a timestamp table to record the fact that these files have been loaded successfully.

I just have one problem. If there is a problem accessing the source files, I don't want to attempt to load the data at all; I just want to exit the job. At present, the only way I know to do this is to abort the job with DSLogFatal. However, these jobs can be called anywhere up to 200 times in a single night, and even though the data for one shift may be bad, I still want to load all the other shifts.

My question then is this: is there any way to stop the job from the job control (without aborting it) such that the stages will not be run?

I have considered breaking it up into 2 jobs, which obviously would work. (I'm still trying to find reasons other than emotional ones for not doing this.)

If I get a failure, I could substitute an empty file, but I'd still have no way to tell the final stage that it didn't work. (An empty file may be completely valid from the source system on occasions.) I think this is part of my problem: the job control cannot communicate with the stages of the job. I suppose UserStatus might work for this, if I weren't using 3.5, which is a bit buggy in relation to setting UserStatus in job control.

I suppose another option is to abort the job and then reset it in the calling job.
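In DataStage BASIC, that abort-then-reset pattern might look something like the sketch below. The job name is illustrative, and this assumes the standard job-control API (DSAttachJob, DSRunJob with DSJ.RUNRESET); treat it as a sketch rather than tested code.

```basic
* Sketch: calling job resets an aborted controlled job, then re-runs it.
* "LoadShiftData" is an illustrative job name.
hJob = DSAttachJob("LoadShiftData", DSJ.ERRFATAL)
Status = DSGetJobInfo(hJob, DSJ.JOBSTATUS)
If Status = DSJS.RUNFAILED Or Status = DSJS.STOPPED Then
   * A reset run clears the aborted state; it does not process data
   ErrCode = DSRunJob(hJob, DSJ.RUNRESET)
   ErrCode = DSWaitForJob(hJob)
End
* Now run the job normally for the next shift's data
ErrCode = DSRunJob(hJob, DSJ.RUNNORMAL)
ErrCode = DSWaitForJob(hJob)
ErrCode = DSDetachJob(hJob)
```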

Thanks in advance. Let's see what you can do with this one, Ray??? Why do I get the feeling that someone is going to tell me that there is a new feature in 4.0 which solves this for me???

David Barham
Information Technology Consultant
CoalMIS Project
Anglo Coal Australia Pty Ltd
Brisbane, Australia


*************************************************************************
This e-mail and any files transmitted with it may be confidential and are intended solely for the use of the individual or entity to whom they are addressed. If you have received this e-mail in
error, please notify the sender by return e-mail, and delete this e-mail from your in-box. Do not copy it to anybody else

*************************************************************************
admin
Posts: 8720
Joined: Sun Jan 12, 2003 11:26 pm

Post by admin »

The job control code in question is actually in the controlled job. These jobs have both job control code (which is executed before the stages start) and stages.
-----Original Message-----
From: Ray Wurlod [SMTP:ray.wurlod@informix.com]
Sent: Tuesday, 15 August 2000 11:21
To: informix-datastage@oliver.com
Subject: RE: job control advice

A quick, little-thought suggestion is to write a generic before-job
subroutine that could detect a value in a file, or something like that, set
by the job control job. The before-job subroutine can then set its
ErrorCode argument to a non-zero value to prevent the job from running.
This is still an "abort" condition requiring the job to be reset.
However, if the job control code is detecting the non-arrival of the
file(s), why can't it simply make the decision not to run the controlled job?

> [ huge snip ]
admin
Posts: 8720
Joined: Sun Jan 12, 2003 11:26 pm

Post by admin »

A quick, little-thought suggestion is to write a generic before-job subroutine that could detect a value in a file, or something like that, set by the job control job. The before-job subroutine can then set its ErrorCode argument to a non-zero value to prevent the job from running. This is still an "abort" condition requiring the job to be reset. However, if the job control code is detecting the non-arrival of the file(s), why can't it simply make the decision not to run the controlled job?

> ----------
> From: David Barham[SMTP:David.Barham@anglocoal.com.au]
> Reply To: informix-datastage@oliver.com
> Sent: Tuesday, 15 August 2000 09:46
> To: informix-datastage@oliver.com
> Subject: job control advice
> [ huge snip ]
admin
Posts: 8720
Joined: Sun Jan 12, 2003 11:26 pm

Post by admin »

None of which stops you having a before-job subroutine as well. I have used DSStopJob(DSJ.ME) in the past and, while it stops the job, it's ugly.

> ----------
> From: David Barham[SMTP:David.Barham@anglocoal.com.au]
> Reply To: informix-datastage@oliver.com
> Sent: Tuesday, 15 August 2000 11:14
> To: informix-datastage@oliver.com
> Subject: RE: job control advice
>
> The job control code in question is actually in the controlled job.
> These jobs have both job control code (which is executed before the
> stages start) as well as stages.
> [ huge snip ]
admin
Posts: 8720
Joined: Sun Jan 12, 2003 11:26 pm

Post by admin »

True, job control or before-job routine: six of one and half a dozen of the other. As I said, my current approach is to abort the job (in this case, from the job control) to stop the stages from running. My point is, though, that I'd like to stop the job without aborting it.

You're not wrong about "ugly". Does DSStopJob offer any advantages over a simple DSLogFatal? Either way, the calling job would still have to reset it.


-----Original Message-----
From: Ray Wurlod [SMTP:ray.wurlod@informix.com]
Sent: Tuesday, 15 August 2000 11:27
To: informix-datastage@oliver.com
Subject: RE: job control advice

None of which stops you having a before-job subroutine as well. I have used DSStopJob(DSJ.ME) in the past and, while it stops the job, it's ugly.

> [ huge snip ]
admin
Posts: 8720
Joined: Sun Jan 12, 2003 11:26 pm

Post by admin »

I feared as much. Yes, separate jobs are probably the cleanest approach. Oh well, I'll add it to my list... (along with 101 other things).

-----Original Message-----
From: Ray Wurlod [SMTP:ray.wurlod@informix.com]
Sent: Tuesday, 15 August 2000 14:31
To: informix-datastage@oliver.com
Subject: RE: job control advice

[ huge snip ]


admin
Posts: 8720
Joined: Sun Jan 12, 2003 11:26 pm

Post by admin »

There are only three exit status values once a job has been started. The job either completes (with or without warnings) or is stopped/aborted. If it does not complete, it must be reset. Perchance an enhancement request? Currently we can't help you. It would be easier if the job control were in a separate job, so that the problem is handled by not starting the controlled job in the first place. You can still write entries into the log file of an attached job (DSLogEvent), so the controlling job can report why the controlled job did not run.

> ----------
> From: David Barham[SMTP:David.Barham@anglocoal.com.au]
> Reply To: informix-datastage@oliver.com
> Sent: Tuesday, 15 August 2000 11:27
> To: informix-datastage@oliver.com
> Subject: RE: job control advice
[ huge snip ]
Locked