stop not working

koojo
Premium Member
Posts: 43
Joined: Sun Jul 11, 2004 1:30 pm
Location: USA

stop not working

Post by koojo »

I have a simple server job that extracts from an SAP table and loads it into an Oracle database.

The database tables are huge (greater than 100 lakh rows), i.e. the SAP source tables are very large and the number of records pulled in is quite large as well.

Every time I stop the DataStage job, the status shown is Aborted, and yet the ABAP program (I use the ABAP Extract Pack) is still running and the Oracle load also seems to be running (the Oracle table being loaded has a lock on it).

The Unix phantom for that job is also still running.

I am not sure what to do; I have to stop these processes.

Has anyone come across this error?
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

How, exactly, do you "stop the DataStage job"?

Why do you claim that it's "not working", and what do you claim that the "error" is?

After you attempt to "stop" the DataStage job, what messages are logged in the DataStage job log?

Are you aware that a DataStage job may comprise quite a number of related processes on the server machines - DataStage conductor, section leader and player processes, as well as any child processes used to interact with data sources?
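
For anyone unsure what to look for, here is a minimal sketch (in Python, wrapping ps) of listing the processes that are usually DataStage-related on a Unix server. The names it searches for - dsrpcd, dsapi_slave, dsdlockd, phantom, DSD.RUN, osh - are the typical ones, but exact names vary by release and platform, so treat that list as an assumption to adjust for your install.

```python
# Minimal sketch: list processes on the DataStage server that look job-related.
# The process names below (dsrpcd, dsapi_slave, dsdlockd, phantom, DSD.RUN, osh)
# are the usual suspects on Unix, but exact names vary by release and platform.
import subprocess

PATTERNS = ("dsrpcd", "dsapi_slave", "dsdlockd", "phantom", "DSD.RUN", "osh")

def datastage_processes():
    """Return the ps -ef lines that mention any of the DataStage process names."""
    out = subprocess.run(["ps", "-ef"], capture_output=True, text=True).stdout
    return [line for line in out.splitlines()
            if any(p in line for p in PATTERNS)]

if __name__ == "__main__":
    for line in datastage_processes():
        print(line)
```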
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
koojo
Premium Member
Posts: 43
Joined: Sun Jul 11, 2004 1:30 pm
Location: USA

Post by koojo »

I checked the ABAP program on the SAP server: one time it was still running, and the other time an SAP kernel stop command had been issued to it. There was one DataStage job phantom and an exclusive lock on the Oracle table. I found these processes after the job had supposedly stopped, which makes me think the job was actually still running on the server but the client was showing the wrong status for some reason.

I just was not sure whether this kind of error needed a patch - I guess not?

There was no error message.
The last message was "Job * has aborted" - no warnings, no error message. Now that you bring it up, that does seem weird.

Anyway, I know it is an odd thing to have happen, but here I am. The source has a lot of records in it. The odd thing is that the initial thousand rows run very slowly, and then the rows/sec seems to pick up considerably.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Using Director, reset the aborted job. Then check the log for any message "from previous run..." which may contain additional diagnostic information.
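
If you would rather script that than click through Director, a rough sketch below wraps the dsjob command-line client from Python. The project and job names are hypothetical placeholders, and it assumes your release's dsjob supports "-run -mode RESET" and "-logsum" - check the exact syntax for your version before relying on it.

```python
# Sketch only: reset the aborted job and pull the log summary via the dsjob
# client instead of Director. Assumes dsjob is on PATH; project and job names
# below are hypothetical placeholders.
import subprocess

PROJECT = "MyProject"    # hypothetical project name
JOB = "SapToOracleLoad"  # hypothetical job name

def run(cmd):
    """Run a command and return its combined output."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout + result.stderr

# Reset the aborted job so the "From previous run..." entries are written.
print(run(["dsjob", "-run", "-mode", "RESET", PROJECT, JOB]))

# Then scan the log summary for those extra diagnostic entries.
for line in run(["dsjob", "-logsum", PROJECT, JOB]).splitlines():
    if "previous run" in line.lower():
        print(line)
```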

What you are looking at in Director is the last updated status, which is not necessarily the current status of a job.

Timings include all time needed to establish connections, parse SQL, get the ABAP process started and so on. This is why you see a slow initial rate climbing later to a plateau.
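
A toy calculation shows why (the numbers are purely illustrative, not measured): with a fixed start-up cost, the average rate observed over the first thousand rows is tiny, and it climbs toward the steady-state rate only as the row count grows.

```python
# Illustrative arithmetic only: a one-off start-up cost (connections, SQL
# parsing, ABAP submission) drags the observed average rows/sec down early,
# even though the steady-state rate never changes.
startup_seconds = 120          # assumed one-off overhead before rows flow
steady_rows_per_sec = 2000     # assumed steady-state throughput

for rows in (1_000, 100_000, 10_000_000):
    elapsed = startup_seconds + rows / steady_rows_per_sec
    print(f"{rows:>12,} rows -> average {rows / elapsed:,.0f} rows/sec observed")
```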

You also need to show that the phantom process you found is actually associated with the aborted job. Look at the detail record or the DSD.RUN file for that run in &PH& to get the process IDs (pids) of the DataStage processes. Please also advise the command being executed by this phantom process. It may be a DataStage daemon (such as dsdlockd or dsrpcd) that is completely unrelated to any executed job.
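
A rough way to gather that evidence, sketched in Python: dump the most recent phantom output files under &PH& and compare any pids they mention against the live process list. The project directory path is a hypothetical example, and the naming and layout of the &PH& files vary by release, so this is only a starting point.

```python
# Sketch: show the newest phantom output files under the project's &PH&
# directory (their contents include the DataStage process ids) so they can be
# matched against what ps reports. Path below is a hypothetical example.
import os
import subprocess

PROJECT_DIR = "/opt/IBM/InformationServer/Server/Projects/MyProject"  # hypothetical
PH_DIR = os.path.join(PROJECT_DIR, "&PH&")

# Newest phantom output files first.
entries = sorted(
    (os.path.join(PH_DIR, name) for name in os.listdir(PH_DIR)),
    key=os.path.getmtime,
    reverse=True,
)

for path in entries[:5]:
    print("----", path)
    with open(path, errors="replace") as fh:
        print(fh.read())

# Compare any pids found above with the live process list.
print(subprocess.run(["ps", "-ef"], capture_output=True, text=True).stdout)
```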
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.