
Error calling job through sequencer

Posted: Tue Feb 15, 2005 1:33 pm
by bgs
I have 10 sequencers calling the same job. When I tried to run all the sequencers in parallel, I got the following error:

Controller problem: Error calling DSAttachJob(commonjob.14)
(DSOpenJob) Cannot get shared access to executable file for job commonjob

Thanks in advance

Re: Error calling job through sequencer

Posted: Tue Feb 15, 2005 1:47 pm
by vcannadevula
bgs wrote: I have 10 sequencers calling the same job. When I tried to run all the sequencers in parallel, I got the following error:

Controller problem: Error calling DSAttachJob(commonjob.14)
(DSOpenJob) Cannot get shared access to executable file for job commonjob

Thanks in advance

This is a known error in DataStage. Whenever a sequence calls a job, it must attach a handle to that job before it can run it.

In your case, because you are running 10 sequences against the same job, each sequence tries to do its own DSAttachJob. Some will succeed and some will abort with the error above.

What exactly is your requirement? Why are you calling the same job from 10 sequences, and are you really running them all at once? Why are you doing it that way?

Posted: Tue Feb 15, 2005 1:56 pm
by bgs
We get files from different sources, so as soon as a file arrives the corresponding sequencer is triggered. If we get 10 files at a time, all 10 sequencers are triggered at the same time.
The common job has the multiple-instance flag set, so I thought it should not be a problem when the job is called by 10 sequencers in parallel.

Posted: Tue Feb 15, 2005 2:39 pm
by T42
If you are running the same job, ensure that the "Allow Multiple Instance" option is checked (in the job's properties), and provide a different invocation ID for each run.
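
Roughly what that looks like from a controlling routine, as a sketch only (the job name "commonjob" comes from the error above; the invocation ID, parameter name and file path are just examples). The invocation ID is simply appended to the job name after a dot:

    * Attach the multiple-instance job under its own invocation ID,
    * set a parameter, run it and wait for it to finish.
    InvId   = "FILE001"                           ;* example: derive this from the incoming file
    hJob    = DSAttachJob("commonjob.":InvId, DSJ.ERRFATAL)
    ErrCode = DSSetParam(hJob, "SourceFile", "/data/in/file001.dat")   ;* example parameter
    ErrCode = DSRunJob(hJob, DSJ.RUNNORMAL)
    ErrCode = DSWaitForJob(hJob)
    Status  = DSGetJobInfo(hJob, DSJ.JOBSTATUS)
    ErrCode = DSDetachJob(hJob)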

Posted: Tue Feb 15, 2005 3:06 pm
by vcannadevula
bgs wrote: We get files from different sources, so as soon as a file arrives the corresponding sequencer is triggered. If we get 10 files at a time, all 10 sequencers are triggered at the same time.
The common job has the multiple-instance flag set, so I thought it should not be a problem when the job is called by 10 sequencers in parallel.

Even though you have enabled multiple instances on the job, here is how multiple instance actually works:

The calling sequence has to attach a handle to the called job, and those attaches effectively happen one at a time; it is not strictly sequential, but there is a limit.
When 3-5 sequences call the job, it will usually succeed. When you try to run more than that, there is always a chance of this error occurring, because while one sequence is trying to get the handle, another may already hold it and not yet have released it. It all comes down to timing.
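
One way to ride out that timing window, purely as an untested sketch (the invocation ID is illustrative), is to attach with DSJ.ERRNONE so a failed attach does not abort the routine, and retry a few times before giving up:

    * Sketch: retry the attach a few times, since the handle that blocked
    * us may be released a moment later.
    MaxTries = 5
    Attempt  = 0
    hJob     = 0
    Loop
       Attempt = Attempt + 1
       hJob = DSAttachJob("commonjob.FILE001", DSJ.ERRNONE)
    Until hJob Or Attempt >= MaxTries Do
       Sleep 2               ;* back off briefly before the next attempt
    Repeat
    If Not(hJob) Then
       Call DSLogFatal("Could not attach commonjob after ":MaxTries:" tries", "AttachWithRetry")
    End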

Posted: Thu Feb 17, 2005 7:23 pm
by sharath
Do try checking the multiple instance option and supply an invocation ID via a parameter, so that whenever a different sequencer attaches the job, a separate instance of the job is created.
I am not sure that what you are saying always holds true, though: I run my audit job in parallel with each and every job, and at one point I had more than 10 jobs running, yet I never had this problem!

Posted: Tue Feb 22, 2005 2:26 pm
by T42
Yes, but you are probably running DIFFERENT jobs.

If you want to run the SAME job at the same time, you must enable multiple instances and ensure that the invocation ID is unique for each run.
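
For what it's worth, here is a sketch of two instances of the same job started side by side (job name and invocation IDs are examples only). DSRunJob returns once the instance has been started, so the waits happen afterwards:

    * Two instances of the same job, each under its own invocation ID.
    hJobA = DSAttachJob("commonjob.FILE001", DSJ.ERRFATAL)
    hJobB = DSAttachJob("commonjob.FILE002", DSJ.ERRFATAL)
    ErrCode = DSRunJob(hJobA, DSJ.RUNNORMAL)    ;* returns once the instance has started
    ErrCode = DSRunJob(hJobB, DSJ.RUNNORMAL)
    ErrCode = DSWaitForJob(hJobA)
    ErrCode = DSWaitForJob(hJobB)
    ErrCode = DSDetachJob(hJobA)
    ErrCode = DSDetachJob(hJobB)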

Posted: Tue Feb 22, 2005 3:05 pm
by bgs
I am running different sequencers that call the same job, each with a unique invocation ID. Is there any limit on the number of instances of a single job?

Posted: Tue Feb 22, 2005 3:42 pm
by xlnc
The most we ever ran at a time was 19 instances of a job, so up to 19 there is no harm. I don't know beyond that.

Posted: Tue Feb 22, 2005 4:12 pm
by T42
It is limited only by the system you are running on. Well, there may be some maximum limit within DataStage, but I doubt we'll ever reach that limit before the system barfs. 32 million calls or something like that.

Posted: Tue Feb 22, 2005 4:53 pm
by ray.wurlod
The limit is initially imposed by the physical size limit on the job log (2GB by default). More instances = more log entries. It's also affected by how many events are logged by each instance.
One could, of course, convert the log file to 64-bit addressing. Then the limit is probably constrained by disk space, since the theoretical upper limit on log file size is then of the order of 19 million TB. Do you have that much free disk space?!
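(For anyone checking the arithmetic behind that figure: with 64-bit addressing the ceiling is 2^64 bytes, which is about 1.8 x 10^19 bytes, i.e. roughly 18-19 million TB.)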