Using an array as a job handler

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

gpbarsky
Participant
Posts: 160
Joined: Tue May 06, 2003 8:20 pm
Location: Argentina

Using an array as a job handler

Post by gpbarsky »

Hi there.

I would like to know if there is any problem with using an array to hold job handles.

Suppose the following code:

DIMENSION jobs(10)
...
i = i + 1
jobs(i) = DSAttachJob("StartCobranzas.221", DSJ.ERRFATAL)
IF NOT(jobs(i)) THEN
   Call DSLogFatal .....
END ELSE
   * set parameters
   * run the job
END

Is this valid?

Thanks in advance.


Guillermo P. Barsky
Buenos Aires - Argentina
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX
Contact:

Post by kduke »

Guillermo

I doubt that will work. The DSAttachJob() function returns a job handle, and I do not think you can have an array of job handles. If it works, let us know.

Kim.

Kim Duke
DsWebMon - Monitor DataStage over the web
www.Duke-Consulting.com
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

Absolutely, you can put job handles into an array.
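For example, here is a minimal sketch of that pattern using only the documented DSAttachJob/DSSetParam/DSRunJob/DSWaitForJob/DSDetachJob calls; the job name, instance count, and parameter name are hypothetical, not code from our tool:

```
* Attach N instances of a multi-instance job into an array of handles.
* Job name, instance count and parameter name are illustrative only.
DIMENSION jobs(10)
FOR i = 1 TO 10
   jobs(i) = DSAttachJob("StartCobranzas.":i, DSJ.ERRFATAL)
   IF NOT(jobs(i)) THEN
      Call DSLogFatal("Cannot attach instance ":i, "JobControl")
   END ELSE
      ErrCode = DSSetParam(jobs(i), "PartitionNumber", i)
      ErrCode = DSRunJob(jobs(i), DSJ.RUNNORMAL)
   END
NEXT i
* Wait for every instance to finish, then release the handles.
FOR i = 1 TO 10
   ErrCode = DSWaitForJob(jobs(i))
   Status = DSGetJobInfo(jobs(i), DSJ.JOBSTATUS)
   ErrCode = DSDetachJob(jobs(i))
NEXT i
```

Each element of jobs() is just an ordinary BASIC variable holding whatever DSAttachJob returned, which is why storing handles in an array works.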

In fact, the job control that my associates and I have developed and freely distribute does everything that Guillermo has asked about so far. Namely:

1. Uses a predecessor/successor array of jobs to control job execution. Think of Microsoft Project with tasks as jobs. In fact, my associates and I have built a Project .mpp file with macros and templates as the graphical metaphor for our job control sequencer. Jobs belong to groups, with rolled-up tasks, predecessor relationships, etc.
2. Jobs have individual attributes such as maximum runtime before a warning notification needs to be sent, maximum number of warning messages before a warning notification needs to be sent, and maximum runtime and warnings before the job is automatically killed.
3. Jobs have an instantiation parameter, meaning when that job is run the job control should actually run N number of those jobs, passing parameters PartitionCount set to N and PartitionNumber set from 1 to N for the N jobs. This means at runtime the jobs dynamically expand to N instances for maximized throughput in Instantiate-Empowered jobs.
4. Jobs have automatic parameter resolution at a job level. If you supply a runtime parameter file, plus any custom parameters at startup, all jobs are resolved against that file+custom list and set at runtime.
5. Job control has an auto-compile option to compile the job stream at runtime to ensure all jobs are compiled and ready to run, as well as clean up a previous run's instantiated job sets.
6. Job control automatically resets aborted jobs.
7. Job control publishes log metadata, as well as a spreadsheet of jobs run, their completion status, start time, and end time.
8. Job control publishes link statistic information in spreadsheet form for each job executed.
9. Job control has a hook in it to load all process metadata created during runtime, so that an external audit repository is kept up-to-date during job control execution. Really cool if you run an intranet and want your process metadata in Oracle/SQL-Server, etc, without a MetaStage import/cleanup/publish/export lag.
10. Job control has a resurrect capability, so that it can be restarted in resurrection mode, read its own log file of jobs executed, retry failed jobs, and continue through the predecessor/successor tree.
11. Job control uses categorization of jobs into subject areas and source system supplied data so that you can "prune" the jobstream at runtime with parameters that dictate which subject areas or source system related jobs you want run.
12. Job control has a milestone feature where a synchronization step can be put into the stream to provide a predecessor/successor stall point. That allows you to "jump" to a point in the job stream, skipping over predecessor jobs all the way to the inception point, and also end at a point in the job stream. Think of many parallel streams connecting and diverging, with the ability to prune branches and roots by either source system, subject area, or jump-to/stop-at capabilities.
13. Customizable notification based on job finish statuses, jobs running long, blah blah blah.
14. WORK-AROUNDS for bugs in DataStage, such as running out of disk space and jobs not dying. We check job logs during/after execution to make sure some of these bugs are treated as fatal.
15. Runtime link statistic monitoring. You can set rules to watch jobs as they execute and make sure link values are within tolerance. Imagine a reference lookup where you require a 90% or better hit rate. You can define a simple rule in a spreadsheet to require a 90% ratio between two link values after N number of rows have processed, else the job is automatically killed or just a warning is sent out. You could even say warn at 90% and kill at 75%, for example.
16. Load throttling, where you set the maximum number of jobs to be running at any given time in your jobstream. Kind of handy for CPU-challenged systems.
17. External file polling mechanism, to introduce PAUSE, RESUME, STOP, KILL, STATUS controls for third-parties to interact once a job stream has initiated. Pause means don't start any new jobs, resume means continue starting new jobs, stop means stop starting any new, wait for jobs already running to finish and then stop, kill of course means kill off any jobs currently running and then stop, and status says dump a current status file out of what jobs have run, those that are waiting to be run and their predecessors. It's pretty cool to STOP a job stream, then resurrect it, because it picks up right where it left off.
18. Written by consultants who love DataStage and have used it in many environments since the earliest releases. We know what is really expected out there and find the one-size-fits-all, as-long-as-it's-black Sequencer not to our liking.
...
...
99. Free. It was developed as a cooperative effort by consultants and clients over the last few years. All we ask is that you improve it, don't sell it, and give us a copy back with your enhancements if you think anyone else out there might benefit from it.
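As a rough illustration of the tolerance rule in item 15 (the stage name, link names, row threshold, and limits below are hypothetical, not the tool's actual code), such a check can be built on the documented DSGetLinkInfo call:

```
* Hypothetical sketch of a link-statistic tolerance rule (item 15).
* Stage/link names and thresholds are illustrative only.
InRows  = DSGetLinkInfo(hJob, "xfm", "lnk_in",  DSJ.LINKROWCOUNT)
RefRows = DSGetLinkInfo(hJob, "xfm", "lnk_hit", DSJ.LINKROWCOUNT)
IF InRows > 1000 THEN            ;* only judge after N rows have processed
   Ratio = RefRows / InRows
   IF Ratio < 0.75 THEN
      ErrCode = DSStopJob(hJob)  ;* below the hard limit: kill the job
   END ELSE
      IF Ratio < 0.90 THEN
         Call DSLogWarn("Hit ratio ":Ratio:" below 90%", "JobControl")
      END
   END
END
```

Polling this periodically while the job runs gives the warn-at-90%/kill-at-75% behavior described above.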

If anyone is interested, drop me an email. I have a release 6.0 compatible version available with documentation. At worst, it may be helpful simply to look at the logic to see how we used the inherent APIs.

Kenneth Bland
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX
Contact:

Post by kduke »

Ken

I am sold. I want a copy.

Kim.

Kim Duke
DsWebMon - Monitor DataStage over the web
www.Duke-Consulting.com
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Jeez... me, too!

-craig
gpbarsky
Participant
Posts: 160
Joined: Tue May 06, 2003 8:20 pm
Location: Argentina

Post by gpbarsky »

Ken:

Thanks for your explanation.

Before asking you for your product, I would like to read a complete list of the features it offers, prerequisites, etc. Could you send me a document with the full description?

And thank you again.


Guillermo P. Barsky
Buenos Aires - Argentina
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

I'll make it available for download shortly. However, it's not a product, nor do I offer it as such. It is simply a compilation/body of work/collection of logic that others and I have worked on over the years. I have been the primary architect, whilst others have improved the coding.

That being said, as freely shared logic there is no guarantee or warranty, and the documentation is written by programmers, not professional writers. I'll share the current state of the logic and documentation with anyone interested. I just have to finish updating the documentation with some additions that just came in from some customers using it in Texas and Minnesota.

Kenneth Bland
gpbarsky
Participant
Posts: 160
Joined: Tue May 06, 2003 8:20 pm
Location: Argentina

Post by gpbarsky »

Ken:

I'll be happy to read the documentation. So, let me know where it is and I will download it.

Thanks.


Guillermo P. Barsky
Buenos Aires - Argentina
Teej
Participant
Posts: 677
Joined: Fri Aug 08, 2003 9:26 am
Location: USA

Post by Teej »

I have no idea why I missed this topic. Must've been on vacation or something...

Where can I find this?

-T.J.
Developer of DataStage Parallel Engine (Orchestrate).
tonystark622
Premium Member
Posts: 483
Joined: Thu Jun 12, 2003 4:47 pm
Location: St. Louis, Missouri USA

Post by tonystark622 »

Kenneth,

I would also be interested in this code.

Many thanks.

Tony
trobinson
Participant
Posts: 208
Joined: Thu Apr 11, 2002 6:02 am
Location: Saint Louis
Contact:

Post by trobinson »

Me too! Sounds cool.
asvictor
Participant
Posts: 31
Joined: Tue Sep 02, 2003 3:06 am
Location: Singapore
Contact:

Can I have a copy of the code please?

Post by asvictor »

Hi Kenneth,

I would like a copy of the code.

Can you please send it?

Cheers
Victor Auxilium
girishoak
Participant
Posts: 65
Joined: Wed Oct 29, 2003 3:54 am

Post by girishoak »

Dear Ken,
I too need a copy of the code. Please make it available online ASAP. Seems very cool.

Thanks in advance

Girish Oak
raju_chvr
Premium Member
Posts: 165
Joined: Sat Sep 27, 2003 9:19 am
Location: USA

Post by raju_chvr »

Hello Kenneth, where can I get a copy of the program mentioned in this topic for download?

Thanks in advance.
sivatallapaneni
Participant
Posts: 53
Joined: Wed Nov 05, 2003 8:36 am

Post by sivatallapaneni »

Hi Ken,
Is there any way you could send me a copy of it? I'm trying to do some of the things you mentioned: running multiple instances of a job in batch job control, which is in turn invoked by a sequencer. We can get the start time, end time, link info, and job status for every instance; we did this by calling an after-job routine. It looks like the one you are talking about really comes in handy.

Regards,
Siva.
Post Reply