(DSOpenJob) Cannot get shared access to executable file for

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

HSBCdev
Premium Member
Posts: 141
Joined: Tue Mar 16, 2004 8:22 am
Location: HSBC - UK and India

(DSOpenJob) Cannot get shared access to executable file for

Post by HSBCdev »

I've got a multiple-instance job, but when I call it with more than 3 or 4 instances I get the error "(DSOpenJob) Cannot get shared access to executable file for job ".

The job is an unload from a DB2 table into a hash file - each instance unloads data from the same table but into a different copy of the hash file.

Can you advise me on why this might be happening? How many instances of a multi-instance job can I run?

Thanks,
sumitgulati
Participant
Posts: 197
Joined: Mon Feb 17, 2003 11:20 pm
Location: India

Re: (DSOpenJob) Cannot get shared access to executable file

Post by sumitgulati »

Are you giving each call an invocation ID that is different from all the other invocation IDs currently running?
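
For what it's worth, here is a minimal shell sketch of how each instance can be started with its own invocation ID through the dsjob command line - the project name MyProject and job name UnloadJob are just placeholders:

    #!/bin/ksh
    # Start several instances of the same multi-instance job;
    # the invocation ID is appended to the job name as job.invocationid.
    PROJECT=MyProject      # placeholder project name
    JOB=UnloadJob          # placeholder multi-instance job name

    i=1
    while [ $i -le 4 ]
    do
        # each run gets a distinct invocation ID (INST1, INST2, ...)
        dsjob -run -mode NORMAL $PROJECT $JOB.INST$i &
        i=`expr $i + 1`
    done
    wait    # wait for the dsjob calls themselves to return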

Regards,
-Sumit
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL

Post by kcbland »

I commonly use this technique on varying job designs, some just OCI --> XFM --> SEQ, others really complex with lots of hash lookups. I regularly have designs with 10-20 instances of the same job.

I've seen your error only when the number of instances has vastly overwhelmed the system resources - say, a 20-CPU server running 80 jobs where each job itself is capable of fully utilizing a CPU. The machine is so overwhelmed that DataStage struggles to do what it does. I've seen it have internal locking issues, as well as internal timers (unable to start job after 60 seconds) timing out. Make sure you're using glance or top or prstat or some other utility to watch resource allocation, usage, and availability.
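
For example (the exact tools depend on your platform - prstat is Solaris, glance is HP-UX), something along these lines on the server while the instances are running:

    vmstat 5           # run queue ('r' column) and memory, every 5 seconds
    top                # per-process CPU and memory usage
    prstat -s cpu 5    # Solaris: processes sorted by CPU, 5-second refresh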
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
HSBCdev
Premium Member
Posts: 141
Joined: Tue Mar 16, 2004 8:22 am
Location: HSBC - UK and India

Post by HSBCdev »

Thanks all.

I was calling them with individual invocation IDs, so that's not the problem.

I'm trying to run about 70 instances on 4 CPUs. Our guess is that it's DataStage itself which can't handle the number of jobs - we've looked at the CPU usage and it is actually fairly low.

Another symptom of this problem is that when we try to log on to DataStage we get an error message saying that the project has been locked by the administrator. This seems to happen whenever there are a lot of DataStage jobs running at the same time.

Any thoughts?
ogmios
Participant
Posts: 659
Joined: Tue Mar 11, 2003 3:40 pm

Post by ogmios »

What are the values of MFILES and T30FILE in the uvconfig file, and how many jobs are you running concurrently?

My hunch would be that you don't have enough concurrently open hash files. This is governed by T30FILE: each running job already needs 4 hash files, plus any hash files you explicitly use in your jobs.
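
You can check what is currently set straight from the uvconfig file in the engine directory (a quick sketch - this assumes $DSHOME points at your DSEngine directory):

    cd $DSHOME
    egrep '^(MFILES|T30FILE)' uvconfig    # current tunable settings as text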

Ogmios
In theory there's no difference between theory and practice. In practice there is.
HSBCdev
Premium Member
Posts: 141
Joined: Tue Mar 16, 2004 8:22 am
Location: HSBC - UK and India

Post by HSBCdev »

MFILES is 50
T30FILE is 1500

My job is actually a sequence of multi-instance jobs:

1 loads from a table to a hash file,
1 uses 3 hash files as lookups (although it has more than one stage for each hash - a total of 10 hash stages),
1 uses 2 hash files,
plus the 4 hash files you mention (not explicitly opened in the job).

So if it's the number of hash stages that counts, then there are 17 hash stages open for each sequence. With 70 instances of the sequence, that gives 1,190 stages - and that's just for the job I'm running!

Would changing the values in uvconfig help? What are the effects and side effects of changing these values?
ogmios
Participant
Posts: 659
Joined: Tue Mar 11, 2003 3:40 pm

Post by ogmios »

T30FILE is the maximum number of concurrently open hash files; if you reach the limit, jobs will most of the time fail during execution or fail to start at all.

MFILES is the size of the rotating pool of open files for DataStage; if you reach this limit, DataStage will just become slower. On MFILES there's a restriction tied to a kernel parameter (MFILES needs to be smaller than the kernel's NFILES), I think.

The effect of raising either is that a little more memory is allocated for DataStage, but to be honest I didn't notice anything the last time I went from 200 to 2000 for T30FILE. I usually also try to set MFILES to 400 or 500.

In your case you may even need more than 2000 for T30FILE if you have a lot of other jobs running.

On the other hand, ask yourself: if you have only 4 CPUs, why do you want to run 70 jobs concurrently (each job probably consists of a number of real OS processes)? Wouldn't it be better to switch to a smaller number of jobs - for 4 CPUs, about 20 or so?

There's a procedure described for changing the uvconfig file; on DSv7 it includes executing: uv -admin -regen
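
Roughly, the steps look like this (just a sketch - make sure no jobs are running and keep a backup of uvconfig; $DSHOME is assumed to be the DSEngine directory):

    cd $DSHOME
    bin/uv -admin -stop        # stop the DataStage engine
    cp uvconfig uvconfig.bak   # back up the current settings
    vi uvconfig                # raise T30FILE / MFILES as needed
    bin/uv -admin -regen       # regenerate the binary configuration (DSv7)
    bin/uv -admin -start       # restart the engine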

Ogmios

P.S. Search for T30FILE on this forum; you will find more.
In theory there's no difference between theory and practice. In practice there is.