
Strange Issue - Datastage Going down

Posted: Fri Apr 16, 2010 2:27 am
by prasad v
Hi

We have created a new job that fetches data from a flat file and loads it into an Oracle table on the same box.

When we run this job, it runs for some time and then the entire DataStage application goes down; we cannot log in to DataStage. We tried running it on multiple servers (Development and Production), and the same thing happened.

The file contains around 900,000 (9 lakh) records.
Commit interval is 0.
Version: 7.5 Server Edition.

Can anyone help me resolve this issue?

Posted: Fri Apr 16, 2010 2:45 am
by Sainath.Srinivasan
Very difficult to predict with the limited information supplied. It is similar to saying "Every time I start my car, it breaks down."

Is there a before/after routine, or a call from a transformer, that may be stopping it?

Is it writing to some restricted structure ?

Are you running it while the system is being hit hard by other processes?

What about any other similar jobs ?

If you cannot resolve in simple steps, run the Unix trace to see the system calls.
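An OS-level trace like the one suggested above might look like the following minimal sketch. It assumes a Linux box with `strace` available (on Solaris or AIX, the equivalent tool is `truss`); the log path is an arbitrary choice, and tracing a real DataStage process would mean attaching to its PID with `strace -p` instead of the demo command used here.

```shell
#!/bin/sh
# Minimal sketch: capture a process's system calls to a log file, then
# inspect the final calls -- the tail of the trace usually shows what the
# process was doing at the moment it died.
# Assumptions: Linux + strace; on Solaris/AIX substitute truss.
TRACE_LOG=/tmp/ds_trace.log

if command -v strace >/dev/null 2>&1 \
   && strace -f -o "$TRACE_LOG" ls / >/dev/null 2>&1; then
    # For a real job you would attach to the running process instead:
    #   strace -f -o "$TRACE_LOG" -p <pid-of-dsapi_slave>
    tail -n 5 "$TRACE_LOG"
else
    echo "strace unavailable here; on Solaris/AIX use truss instead"
fi
```

After the crash, look at the last few lines of the log for failing calls (e.g. `open` or `write` errors) that point at the file or resource involved.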

Posted: Fri Apr 16, 2010 5:19 am
by prasad v
There is no before/after routine in the job.
No restricted structure.
No; we ran it on Development and no other jobs were running at that time.
Other jobs are running fine.

Posted: Fri Apr 16, 2010 5:27 am
by Sainath.Srinivasan
Copy the job with different name and run that.

Post your findings.

Is there anything in the log ?

Posted: Fri Apr 16, 2010 6:30 am
by prasad v
Yes, we did the same, but without success.

We restarted DataStage again.

After we run this job, it runs for some time and then the whole DataStage server goes down. We cannot find the log.

Posted: Fri Apr 16, 2010 6:38 am
by ArndW
What do you mean, you cannot find the log? What is meant here is the output from the job, which is visible in Director for that job.

Posted: Fri Apr 16, 2010 7:05 am
by chulett
Can you be more specific as to what exactly "the entire Datastage apps is going down" means? :?

Posted: Fri Apr 16, 2010 10:50 am
by ArjunK
A few years back I saw a similar thing happen in one of my projects. On investigating the issue, it was found that the DataStage flow fired a Unix delete command, rm *, which deleted some internal setup files because it did not have the right path specified.
Good stuff :D ... I hope you don't have that happening!
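The failure mode described above can be guarded against with a defensive pattern in any before/after delete command: only run `rm` after confirming the `cd` into the intended directory succeeded, so a wrong or missing path can never wipe files elsewhere. This is a generic shell sketch, not the poster's actual job; `WORK_DIR` and the `.tmp` pattern are assumptions for illustration.

```shell
#!/bin/sh
# Hypothetical sketch of a safer delete step for a job's before/after command.
# If the cd fails (bad or missing path), abort instead of running rm from
# whatever directory we happen to be in (e.g. the DataStage project directory).
WORK_DIR=/tmp/ds_job_work.$$

# Set up some scratch files to demonstrate with.
mkdir -p "$WORK_DIR"
touch "$WORK_DIR/scratch1.tmp" "$WORK_DIR/scratch2.tmp"

cd "$WORK_DIR" || { echo "cd to $WORK_DIR failed; aborting delete" >&2; exit 1; }
rm -f ./*.tmp            # delete only inside the verified directory

ls "$WORK_DIR"           # prints nothing: the scratch files are gone
```

The key detail is the `|| { ...; exit 1; }` after `cd`: a bare `cd $DIR; rm *` silently falls through to the current directory when `$DIR` does not exist.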

Posted: Sun Apr 18, 2010 11:56 pm
by prasad v
There is no such command in the job.

About the log: it shows entries up to the INSERT command.

After that there are no log entries.

Posted: Mon Apr 19, 2010 2:11 am
by ArndW
I still cannot understand the exact problem.

Can you change your job to write to a sequential file or a PEEK stage instead? Does that work correctly? If yes, then the cause is most likely in the Oracle portion. Do you have any custom SQL in the Oracle stage?

Posted: Mon Apr 19, 2010 9:10 am
by prasad v
We tried this as well, and it still went down. It had loaded around 90,000 records before the whole DataStage server went down.

We restarted again and started investigating.

Posted: Mon Apr 19, 2010 9:17 am
by Sainath.Srinivasan
Are you running out of space ?

Is this your largest load ?

Did you set up the OS trace?

Posted: Mon Apr 26, 2010 7:25 am
by Abhijeet1980
Hi Prasad,

Try deleting the job from your live environment, import it again from your development environment, and share the results.

"The DataStage app is going down" is unclear. Your machine could be saturated by the Oracle load process so that no user can log in; this is quite possible.

-Abhijit

Posted: Tue May 21, 2013 2:14 am
by rohitagarwal15
Ideally, a running job will not be killed unless something explicitly sends a kill to its process. Please share your job design (.dsx file).

Posted: Tue May 21, 2013 6:20 am
by chulett
Three-year-old post, so probably not much "sharing" going to happen at this point.