Job Performance

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

pklcnu
Premium Member
Posts: 50
Joined: Wed Aug 06, 2008 4:39 pm

Job Performance

Post by pklcnu »

Dear Experts

I have a job that reads data from an Oracle table, performs a few lookups, and then writes the output to a dataset.

There are a few million records to process. During the first few minutes after the job starts, it shows around 15,000 rows/sec being processed, but over time that number keeps dropping, and by the time the job completes it is down to around 4,000 rows/sec. The whole job takes around 45 minutes.

I was wondering why it shows 15,000 rows/sec at the beginning but only 4,000 rows/sec at the end. What should I do to keep the processing rate as high as it is at the start of the job?

Thanks
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

The same reason that file download rates vary when using Windows: the figure is an average taken over the entire elapsed period, so any delay increases the elapsed time without increasing the row count as fast, and the average appears to decrease.
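To see the effect, here is a minimal sketch (with made-up, illustrative phase durations and rates, not measurements from any actual job) of how a cumulative rows/sec display behaves: the monitor shows total rows divided by total elapsed seconds, so a fast start followed by a slower sustained stretch pulls the displayed average down for the rest of the run even though nothing is "getting slower" at that moment.

```python
def cumulative_rate(phases):
    """phases: list of (duration_seconds, instantaneous_rows_per_sec).

    Returns the cumulative average rows/sec as it would be displayed
    at the end of each phase: total rows so far / total seconds so far.
    """
    total_rows = 0.0
    elapsed = 0.0
    averages = []
    for duration, rate in phases:
        total_rows += duration * rate   # rows produced in this phase
        elapsed += duration             # wall-clock time so far
        averages.append(total_rows / elapsed)
    return averages

# Hypothetical run: 2 minutes at 15,000 rows/sec, then 10 minutes at
# a slower 3,000 rows/sec instantaneous rate.
for avg in cumulative_rate([(120, 15000), (600, 3000)]):
    print(round(avg))  # prints 15000, then 5000
```

The displayed average starts at the initial burst rate and then drifts down toward (but never instantly to) the slower instantaneous rate, which matches the 15,000-then-4,000 pattern described above.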
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Exactly, it's an average.
-craig

"You can never have too many knives" -- Logan Nine Fingers