
Slow performance.

Posted: Mon Nov 19, 2007 3:43 am
by raji
Hi,

In my job I use the Dynamic RDBMS stage for both source and target.

The source query contains a UNION.

Initially the job fetches about 200 rows/sec, but it slowly degrades to 2 rows/sec.

Is this issue due to the hardware DataStage is installed on, or a problem with the query itself?

Could anyone give some input on this?

Posted: Mon Nov 19, 2007 3:47 am
by ray.wurlod
What is your disposal of the rows? That is, what is the update method you are specifying in the target stage?

Posted: Mon Nov 19, 2007 7:32 am
by chulett
Typically a target SQL issue - I'll wager no indexes for an update and thus full table scans are being performed.

Posted: Mon Nov 19, 2007 7:46 am
by gateleys
As Craig put it, it's gotta be a database write issue. To confirm it, replace the target DB stage with a Sequential File stage and check the fetch performance. If that is OK, then check whether the identifying key is indexed for target updates; for pure inserts, if there are too many constraints, try disabling them to perform the inserts.

Posted: Mon Nov 19, 2007 8:55 am
by kcbland
Put @FALSE in the output link constraint. When you run the job, you'll see the performance of the query and output rate to the DataStage process. It will probably be illuminating.

Posted: Mon Nov 19, 2007 9:37 am
by chulett
:idea:

Posted: Mon Nov 19, 2007 11:45 pm
by raji
Hi All,

Thanks for the help.

The update method used is 'Insert or Update'.

The table is indexed on some of the columns, not all of them.

We tried writing to a sequential file, which is very fast; when we put the target table back, performance drops even though there are no constraints on the insert.

We also ran runstats on the tables, but it is still very slow.

Please advise.

Posted: Tue Nov 20, 2007 12:42 am
by ray.wurlod
A better design is to segregate the inserts from the updates, and run them as separate input links (insert only and update only) into the target stage.

The "double-barrelled" update strategy will get slower over time; it has to try the insert, then, if there's a failure, determine whether it was caused by a primary key uniqueness constraint violation and, if so, submit an update request.
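The try-insert-then-fall-back-to-update pattern Ray describes can be sketched outside DataStage; a minimal Python/sqlite3 illustration (table and column names are hypothetical, not from the original job):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")

def insert_or_update(row_id, val):
    # "Double-barrelled" strategy: attempt the insert first; on a
    # primary-key violation, fall back to an update. Every row that
    # already exists costs a failed insert plus a second statement,
    # which is why throughput sinks as the target table fills up.
    try:
        conn.execute("INSERT INTO target (id, val) VALUES (?, ?)",
                     (row_id, val))
    except sqlite3.IntegrityError:
        conn.execute("UPDATE target SET val = ? WHERE id = ?",
                     (val, row_id))

insert_or_update(1, "first")   # plain insert succeeds
insert_or_update(1, "second")  # insert fails on the PK, update runs instead
```

Splitting the stream into insert-only and update-only links, as suggested above, avoids paying for the failed inserts entirely.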

Posted: Tue Nov 20, 2007 8:43 am
by chulett
raji wrote:The table is indexed on some of the columns, not all of them.
'Some' of the columns can be OK as long as the columns you use in your update statement are on the leading edge of the index - i.e. are the first columns in a compound index. Otherwise the index cannot be used and we're back to full table scans.
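The "leading edge" point can be checked with the database's own plan output; a hedged sqlite3 sketch (column and index names are made up for illustration - the real check belongs in your target database's explain facility):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (a INTEGER, b INTEGER, c TEXT)")
# Compound index: 'a' is the leading column, 'b' is second.
conn.execute("CREATE INDEX idx_ab ON target (a, b)")

def plan(where_clause):
    # Ask the optimizer how it would locate the rows for this predicate.
    rows = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM target WHERE " + where_clause
    ).fetchall()
    return rows[0][3]  # the human-readable plan description

# Filtering on the leading column can use the index;
# filtering on 'b' alone forces a scan of the table.
print(plan("a = 1"))
print(plan("b = 1"))
```

If the columns in the update's WHERE clause don't start the compound index, the plan falls back to a scan, exactly the degradation described in this thread.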