
Performance question

Posted: Mon Jun 04, 2007 9:15 am
by yiminghu
Hi All,

I'm running out of ideas. I have a complicated job that processes a file row by row (the order of the file content is also very important). Depending on the value of the incoming row, it goes in one of four directions, each of which involves a lookup and an update or insert. The lookup table and the update/insert table are the same table.
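
In case it helps, the per-row logic is roughly the following. This is a minimal Python sketch only; the real job is a DataStage design, the table and column names are placeholders, and the four branches are collapsed into the final insert/update choice:

import sqlite3  # stand-in for the real target database

def process_file(conn, path):
    """Process the file row by row; file order matters, so no parallel split."""
    cur = conn.cursor()
    with open(path) as f:
        for line in f:
            key, value = line.rstrip("\n").split(",")  # placeholder row layout
            # Each of the four "directions" does a lookup followed by an
            # update or insert against the SAME table that was looked up.
            cur.execute("SELECT value FROM target WHERE key = ?", (key,))
            if cur.fetchone() is None:
                cur.execute("INSERT INTO target (key, value) VALUES (?, ?)",
                            (key, value))
            else:
                cur.execute("UPDATE target SET value = ? WHERE key = ?",
                            (value, key))
    conn.commit()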

The problem I'm having is that when the job first starts, it runs at more than 200 rows/second, then the rate keeps decreasing until it is down to 10 rows/second, which is really crawling. I don't know why the performance degrades so much, or whether there is any way to prevent it.

Thanks a lot,

Carol

Posted: Tue Jun 05, 2007 1:08 am
by ray.wurlod
It is probably something in the target database. Are you sending all rows as a single transaction?

Make a copy of the job and replace the database stage with a Sequential File stage, and see whether that design produces the same behaviour. If it does not, there is something in the database (or the connection to it) that is causing the slowdown, and we can concentrate on that. But without testing to isolate the cause it would be unwise to offer suggestions.
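
If the test does point at the database, also look at how often you commit. Purely as an illustration (a generic Python DB-API sketch, not DataStage; the table name and batch size are hypothetical), committing every N rows keeps the open transaction, with all the locks and rollback log it holds, from growing across the whole run:

COMMIT_INTERVAL = 1000  # hypothetical batch size

def load_with_periodic_commits(conn, rows):
    """Insert rows, committing every COMMIT_INTERVAL rows rather than
    holding one transaction open for the entire file."""
    cur = conn.cursor()
    for i, (key, value) in enumerate(rows, start=1):
        cur.execute("INSERT INTO target (key, value) VALUES (?, ?)",
                    (key, value))
        if i % COMMIT_INTERVAL == 0:
            conn.commit()  # release the locks and log space built up so far
    conn.commit()          # commit any remaining rows

In a DataStage job you would not code this by hand; the equivalent is the transaction handling setting (rows per transaction) on the database stage, so check what yours is set to.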

Re: Performance question

Posted: Tue Jun 05, 2007 3:19 am
by p.jain
Did you check the job log? Normally when the row count falls like that, there are tons of warnings in the job log. Are there any warnings for your job?

Re: Performance question

Posted: Tue Jun 05, 2007 6:17 am
by VCInDSX
yiminghu wrote: ...each involves a lookup and an update or insert. The lookup table and the update/insert table are the same table.
Hi,
You mentioned a lookup in your job. Could you throw some light on this, please? How is the lookup implemented?
Do you have a separate instance of the lookup table (as an ODBC stage) that you join against?

Thanks,