
Controller problem

Posted: Tue Jun 08, 2004 11:39 pm
by arunverma
Can anybody help with the following error?

In our application, we have created an extraction job sequencer that calls 5 data extraction jobs in parallel, but the job sequencer could not trigger all of the extraction jobs and gives the following error:

J0BlaSrc..JobControl (@J3BlaSrc): Controller problem: Error calling DSAttachJob(J3BlaSrc)
(DSGetJobInfo) Failed to open RT_STATUS1886 file.
(DSOpenJob) Cannot open job J3BlaSrc - not a runnable job

Posted: Tue Jun 08, 2004 11:58 pm
by amitdurve
Check if the job being called is in a runnable state ("compiled"/"finished"/"has been reset").
Try running the sequence job after recompiling the jobs being controlled (the child jobs).
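
If it helps to check this from the command line rather than from the Director, a dsjob call along these lines (just a sketch; the project name here is a placeholder, and it assumes the DataStage environment, e.g. dsenv, has been sourced on the server) should report the child job's current status:

Code:

# Report the current status of a child job (placeholder project name)
dsjob -jobinfo MyProject J3BlaSrc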

Posted: Wed Jun 09, 2004 12:03 am
by Sreenivasulu
Set the jobs in the sequencer to "Reset" mode. This would make the uncompiled jobs compile and run automatically.
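
For reference, an aborted child job can also be reset outside the sequencer. A rough command-line equivalent (a sketch only, using the dsjob CLI with a placeholder project name) would be:

Code:

# Reset the aborted child job, then run it normally (placeholder project name)
dsjob -run -mode RESET -wait MyProject J3BlaSrc
dsjob -run -mode NORMAL MyProject J3BlaSrc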

Regards

Posted: Wed Jun 09, 2004 12:10 am
by arunverma
amitdurve wrote: Check if the job being called is in a runnable state ("compiled"/"finished"/"has been reset").
Try running the sequence job after recompiling the jobs being controlled (the child jobs).

The job is in a runnable state; this has been working for the last six months.

arun

Posted: Wed Jun 09, 2004 12:11 am
by arunverma
Yes, all the jobs are in reset mode. This has been working for the last six months; we faced this problem only yesterday.

arun

Posted: Wed Jun 09, 2004 12:22 am
by amitdurve
Can you try running the child job alone in a test environment?
Can you open the child job?

Posted: Wed Jun 09, 2004 12:33 am
by arunverma
No.

This job is in production, so nobody can open it. We tested all the child jobs individually and they are working fine. The reset option is set.


arun

Posted: Wed Jun 09, 2004 12:35 am
by amitdurve
Save the child job that cannot be invoked under some other name, and use the new name to call it from the sequencer.

Posted: Wed Jun 09, 2004 1:30 am
by arunverma
Hi amitdurve ,

We could change the job name, but this is a running application, and if we change the child job name we would have to change it in a lot of places. The real problem is why it was not triggered.
Is there any other solution?


Arun

Posted: Wed Jun 09, 2004 4:58 am
by amitdurve
I am not sure about this one, but I think the problem will be solved if the DataStage server is restarted.
I am not sure exactly why this problem happens.

Posted: Wed Jun 09, 2004 9:34 am
by ogmios
My 2 cents: your T30FILE setting is too low. You run a lot of jobs at the same time and you hit the limit. Search for T30FILE on this site.

And if that's not it, look at the access rights of the physical directory RT_STATUS1886 in your project.
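
As a rough sketch of how you might check, and if necessary raise, that setting — assuming $DSHOME points at the DSEngine directory and that dsenv has been sourced; check the documentation and back up uvconfig before changing anything:

Code:

# Show the currently configured T30FILE limit
grep T30FILE $DSHOME/uvconfig

# To raise it: stop the engine, edit uvconfig, regenerate, restart
cd $DSHOME
bin/uv -admin -stop      # stop the DataStage engine
vi uvconfig              # increase the T30FILE value
bin/uvregen              # rebuild the shared memory configuration from uvconfig
bin/uv -admin -start     # restart the engine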

Ogmios

Posted: Wed Jun 09, 2004 11:01 pm
by arunverma
Yes, it is true that a lot of processes were running at the time. Let me do the search and test,
and then I will reply.

Thanks

Arun

Posted: Thu Jun 10, 2004 12:38 am
by ray.wurlod
The command to monitor how many dynamic hashed files are open on the system (that is, the figure that is limited by T30FILE) is

Code:

ANALYZE.SHM -d
You need to be in the UV account to execute ANALYZE.SHM from the TCL prompt, or you can execute analyze.shm -d (or smat -d) from the operating system command prompt. Make sure that the DataStage bin directory is in PATH, otherwise you'll need to type in the full pathname of the command.

Each line in the report represents one open dynamic hashed file. Beware that, on an 80-column display, the report wraps around; it is better to capture the output into a file, pipe it through wc, or use some similar method of answering the "how many" question.

Notes
  • smat stands for "shared memory analysis tool"; it's the old name for the analyze.shm command, but they use exactly the same executable.
  • The option is case sensitive; it must be lower case.
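
Following Ray's suggestion of piping the report through wc, a quick way to get a figure to compare against T30FILE from the operating system prompt might be the following (assuming analyze.shm is in $DSHOME/bin; any header lines in the report would need to be discounted from the count):

Code:

# Count the report lines; each data line is one open dynamic hashed file
$DSHOME/bin/analyze.shm -d | wc -l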

Posted: Thu Jun 10, 2004 1:48 am
by arunverma
We have checked in the config file that the T30FILE value is 1000. Should we increase it further?

Posted: Thu Jun 10, 2004 1:54 am
by arunverma
Hi Ray ,

Is this related to the controller problem?

Thanks & Regards

arun