Weird thing happening while running a job

Post questions here relating to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

sathyanveshi
Participant
Posts: 66
Joined: Tue Dec 07, 2004 12:48 pm

Weird thing happening while running a job

Post by sathyanveshi »

Hi,

I observed something weird today while running a job. After it had loaded around 200,000 records, I aborted it. When I later reset and ran the job again, the performance statistics showed 210,000 rows, 0 rows/sec. After some time I aborted again, reset, and ran the job once more. This time the performance statistics showed 240,000 rows, 0 rows/sec. This clearly suggests it started reading from the point where it was previously aborted.

Any reasons for this peculiar behaviour?

Cheers,
Mohan
Sainath.Srinivasan
Participant
Posts: 3337
Joined: Mon Jan 17, 2005 4:49 am
Location: United Kingdom

Post by Sainath.Srinivasan »

Sometimes when you stop / reset a job, the background engine and the DataStage controller go out of sync, which may be causing this problem. Try cleaning up resources and locks in such cases.
sathyanveshi
Participant
Posts: 66
Joined: Tue Dec 07, 2004 12:48 pm

Post by sathyanveshi »

Hi,

When I try to clean up resources, I encounter the following message:

"Cannot find any process numbers for stages in job LOAD_PS_RO_LINE".

What does that mean?

Cheers,
Mohan
amsh76
Charter Member
Posts: 118
Joined: Wed Mar 10, 2004 10:58 pm

Post by amsh76 »

Hey, are you doing any aggregation in that job? If so, this kind of behaviour is quite possible if your incoming data is not sorted.

Describe your job, please.
sathyanveshi
Participant
Posts: 66
Joined: Tue Dec 07, 2004 12:48 pm

Post by sathyanveshi »

Hi,

I'm not doing any aggregation. It's a plain one-to-one job (matching columns).

Cheers,
Mohan
amsh76
Charter Member
Posts: 118
Joined: Wed Mar 10, 2004 10:58 pm

Post by amsh76 »

Do you have enough space?
sathyanveshi
Participant
Posts: 66
Joined: Tue Dec 07, 2004 12:48 pm

Post by sathyanveshi »

Well... that's a good question. We are short on space, but we have enough to accommodate the data. What's the point that you are trying to make?

Cheers,
Mohan
amsh76
Charter Member
Posts: 118
Joined: Wed Mar 10, 2004 10:58 pm

Post by amsh76 »

What I am trying to say is: is it possible that, after writing around 200,000 rows, it is running out of space?
sathyanveshi
Participant
Posts: 66
Joined: Tue Dec 07, 2004 12:48 pm

Post by sathyanveshi »

Nope... it didn't.

It didn't write even a single record.

Cheers,
Mohan
T42
Participant
Posts: 499
Joined: Thu Nov 11, 2004 6:45 pm

Post by T42 »

Do not worry very much about the Job Monitor. It is a very imprecise tool provided as a 'visual aid' at the expense of performance.

Instead, focus on the log output for precise information.

DataStage does not 'continue' at the job level; after a reset, the job starts again from the beginning.
sathyanveshi
Participant
Posts: 66
Joined: Tue Dec 07, 2004 12:48 pm

Post by sathyanveshi »

Thanks a lot for your responses..

Cheers,
Mohan
Sainath.Srinivasan
Participant
Posts: 3337
Joined: Mon Jan 17, 2005 4:49 am
Location: United Kingdom

Post by Sainath.Srinivasan »

Try resetting the job and running it again. If nothing works, export the job, delete it from DataStage, and import it back again. This will remove and rebuild the job's metadata.
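The reset-and-rerun and log-inspection steps discussed in this thread can also be driven from the command line with the `dsjob` utility that ships with DataStage. A minimal sketch, assuming a project named `MYPROJ` (hypothetical) and the job `LOAD_PS_RO_LINE` mentioned earlier; exact paths and server arguments vary by installation:

```shell
# Reset the job so it starts from scratch (clears the aborted state)
dsjob -run -mode RESET MYPROJ LOAD_PS_RO_LINE

# Run the job normally and wait for it to complete
dsjob -run -mode NORMAL -wait MYPROJ LOAD_PS_RO_LINE

# Check job status and summarize the log -- the log is more reliable
# than the Job Monitor's row counts, as noted above
dsjob -jobinfo MYPROJ LOAD_PS_RO_LINE
dsjob -logsum MYPROJ LOAD_PS_RO_LINE
```

The log summary (`-logsum`) is the place to confirm actual row counts and warnings, rather than the Monitor's performance statistics.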