Hi,
I imagine we have a few people out there using Autosys to run DataStage jobs. We are at an architectural decision point and wanted to see if anyone else had some thoughts. We have some fairly complex flows, some with hundreds of DataStage jobs. We have used DataStage Sequences to chunk some of our flows down to about 30 or 40 Sequences, which we run with dsjob from Autosys. We have decided we don't want one DataStage Sequence to run our entire flow, and it seems extreme to use Autosys to run each DataStage job individually. Does anyone have thoughts on the tradeoffs of using DataStage Sequences versus building the equivalent of a DataStage Sequence in Autosys? Everything's on the table: whether the log messages would appear in Director or somewhere in Autosys, restartability, operator capability, etc.
Autosys vs. Datastage Sequences
Moderators: chulett, rschirm, roy
-
- Premium Member
- Posts: 99
- Joined: Tue Aug 17, 2004 7:50 am
- Location: Boulder, Colorado
Flash Gordon
Hyperborean Software Solution
-
- Premium Member
- Posts: 385
- Joined: Tue Oct 07, 2003 4:55 am
Hi,
People usually like to merge DataStage jobs into their organizational batch monitor in order to combine jobs with other activities.
DataStage Director is a very basic job monitoring tool, and usually it is not enough.
What I usually recommend is a combination. Create main sequencers that activate other jobs, but activate the sequencers through the batch monitor via the dsjob command (you can use Autosys or BMC products or any other).
So gather your jobs into a main sequencer and activate it with your Autosys.
Amos
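To make the shape of Amos's suggestion concrete, here is a minimal sketch of what the Autosys side might look like: one JIL command job per main sequencer, with the command being a dsjob invocation. All names here (the job, machine, owner, project `MyProject`, and sequence `seqMainLoad`) are placeholders of mine, not anything from this thread:

```
/* Autosys JIL: one command job per main DataStage sequence */
insert_job: ds_seq_main_load
job_type: c
machine: etl_server1
owner: dsadm
command: dsjob -run -jobstatus MyProject seqMainLoad
alarm_if_fail: 1
std_out_file: /var/log/autosys/ds_seq_main_load.out
std_err_file: /var/log/autosys/ds_seq_main_load.err
```

With `-jobstatus`, dsjob waits for the sequence to finish and sets its exit code from the job's finishing status, which is what lets Autosys distinguish success from failure and drive downstream `condition:` dependencies.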
Here's one useful metric:
The level of Autosys granularity should equal the set of operations
that you don't need to be paged at 3:00 AM to resolve.
In other words, if you can code it so that an operator can handle scheduling, initiation, aborts and restarts through Autosys instead of Datastage, do so. Otherwise, use a Sequence.
Carter
Carter,
Thank you. That was the sort of experience we were looking for. I actually find Autosys's messaging about the step that was run a little terse. For any sophisticated DataStage error message, the operators would have to look in Director. Or maybe you could write something that sends them the last 10 lines of the job log from DataStage if it abends. Thanks for your insight.
... Tom
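Tom's "mail the last 10 lines" idea can be sketched with standard Unix tools. This is a hypothetical helper of mine (the function name, paths, and defaults are not from this thread); it assumes the job log has already been dumped to a flat file, for example with `dsjob -logsum project job`:

```shell
# abend_excerpt: print the last N entries of a job-log dump so the
# result can be piped to a mail command when a job abends.
# The function name and paths are illustrative, not a DataStage standard.
abend_excerpt() {
    logfile="$1"      # flat file, e.g. output of: dsjob -logsum project job
    lines="${2:-10}"  # default to the last 10 lines, as suggested above
    tail -n "$lines" "$logfile"
}
```

A notification hook could then look like `abend_excerpt /var/log/ds/seqMainLoad.log | mailx -s "seqMainLoad abended" oncall@example.com` (assuming mailx or a similar mailer is available on the server).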
Yah,
Anything you can do to add a little more info to the error messages helps.
Cover the common stuff in the runbooks, with specific advice on how to interpret the error messages. Anything more complex will require someone to examine the tea leaves in Director. You can fine-tune it with experience running the application.
Carter
Tom,
We have in place a standard Unix script for executing all our DataStage jobs, whether they are individual server jobs or job sequences that call other server jobs. This script grabs the log entries from the last run of the job and places the results in a flat file. Production support only needs to look at the Unix log file to determine what happened with the job (no Director access necessary).
One of our decision points in deciding whether to put individual jobs in the scheduler (we use Maestro, not Autosys, but they are similar concepts) or in job sequences is defining a "unit of work." In the case of a job failure, we instruct our scheduler (Maestro) to restart the failed entry (after corrective action, of course) and continue forward through the remaining dependent jobs. We have developed our jobs with restartability as a primary requirement; we don't want to have to go back to a prior step, manually intervene, and re-run jobs that already ran successfully.
I hope this approach helps you out when developing your job schedules.
Alan
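A minimal sketch of the kind of wrapper Alan describes might look like the following. The function name `run_and_log`, the default log directory, and the specific dsjob flags are assumptions of mine, not his actual script:

```shell
# run_and_log: start a DataStage job (or sequence), wait for it, then
# dump its log summary to a flat file that support staff can read
# without Director access. Names and paths are illustrative only.
run_and_log() {
    project="$1"
    job="$2"
    logdir="${3:-/tmp}"

    if [ -z "$project" ] || [ -z "$job" ]; then
        echo "usage: run_and_log <project> <job> [logdir]" >&2
        return 2
    fi

    # -run starts the job; -jobstatus waits for completion and makes
    # dsjob's exit code reflect the job's finishing status, which the
    # scheduler (Maestro, Autosys, ...) uses to decide pass or fail.
    dsjob -run -jobstatus "$project" "$job"
    rc=$?

    # Capture the run's log summary so nobody needs Director access.
    dsjob -logsum "$project" "$job" > "$logdir/$job.log" 2>&1

    return $rc
}
```

The scheduler entry then simply runs `run_and_log MyProject seqMainLoad /var/log/ds`, and on failure the operator reads the flat file rather than opening Director.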
We found that the DataStage scheduler and Sequencers did not work well and did not cover enough of the support aspects that we needed. We ended up using Autosys to call a C program that we developed to interface with the DataStage API, Oracle, and our notification system; Autosys ran the jobs, logged errors, and notified the necessary people. Autosys worked wonderfully and I would recommend it to anyone!