
Help

Posted: Wed Aug 18, 2004 12:21 pm
by marc_brown98
I have 2 similar DS jobs with the same structure, but they touch different files/tables. Both are almost identical and update/insert about 400 records. One takes a few seconds and the other runs for 12 minutes. Any suggestions on what to look for? Indexing is similar. Any help is much appreciated.

Posted: Wed Aug 18, 2004 2:24 pm
by kcbland
1. Are both tables same size? (number of rows, columns, data types)
2. Are you talking pure inserts, updates, or mixture?
3. What about triggers, any doing behind the scenes work?
4. What about foreign key constraints?
5. Have you divided inserts from updates and can you benchmark how long insert-only processing takes versus update-only processing?
6. Have you looked at where the physical tables reside, are they on the same filesystems, etc?
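Point 5 above can be tried outside DataStage against any scratch database. A minimal, hypothetical sketch (sqlite3 stands in for the real target, and the table/column names are made up) that times an insert-only pass versus an update-only pass on the same ~400 rows:

```python
import sqlite3
import time

# Hypothetical benchmark for suggestion #5: time inserts and updates
# separately so you can see which side of the job is slow.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE target (pk INTEGER PRIMARY KEY, val TEXT)")

rows = [(i, f"row{i}") for i in range(400)]

# Insert-only pass
t0 = time.perf_counter()
cur.executemany("INSERT INTO target (pk, val) VALUES (?, ?)", rows)
conn.commit()
insert_secs = time.perf_counter() - t0

# Update-only pass over the same keys
t0 = time.perf_counter()
cur.executemany("UPDATE target SET val = ? WHERE pk = ?",
                [(f"new{i}", i) for i in range(400)])
conn.commit()
update_secs = time.perf_counter() - t0

print(f"inserts: {insert_secs:.4f}s, updates: {update_secs:.4f}s")
```

If the update pass is the slow one, the usual suspect is the WHERE clause: an update keyed on an unindexed column forces a scan per row, which easily turns 400 updates into minutes.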

Posted: Wed Aug 18, 2004 2:28 pm
by kommven
I guess the one that takes longer has more keys to update; check that out...

It will also depend on:
What prefetch row size are you using?
Array and transaction size?
Are there any lookups?
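The transaction-size point is easy to demonstrate in isolation. A hypothetical sketch (sqlite3 as a stand-in, made-up table name) comparing a commit after every row against a single commit per batch; on a file-backed database each commit forces a sync, so per-row commits can dominate the run time:

```python
import os
import sqlite3
import tempfile
import time

# Hypothetical illustration of transaction-size impact: commit after every
# row vs. once per batch of 400.
def load(db_path, rows, commit_every):
    """Insert rows into a fresh table, committing every `commit_every` rows."""
    conn = sqlite3.connect(db_path)
    cur = conn.cursor()
    cur.execute("CREATE TABLE t (pk INTEGER PRIMARY KEY, val TEXT)")
    start = time.perf_counter()
    for i, row in enumerate(rows, 1):
        cur.execute("INSERT INTO t VALUES (?, ?)", row)
        if i % commit_every == 0:
            conn.commit()
    conn.commit()
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

rows = [(i, f"row{i}") for i in range(400)]
for commit_every in (1, 400):
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    os.remove(path)
    secs = load(path, rows, commit_every)
    print(f"commit every {commit_every:>3} rows: {secs:.3f}s")
    os.remove(path)
```

In DataStage terms, a transaction size of 1 on one job and a larger batch on the other could by itself explain a seconds-vs-minutes gap.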

Posted: Wed Aug 18, 2004 2:44 pm
by marc_brown98
kcbland wrote:1. Are both tables same size? (number of rows, columns, data types)
2. Are you talking pure inserts, updates, or mixture?
3. What about triggers, any doing behind the scenes work?
4. What about foreign key constraints?
5. Have you divided inserts from updates and can you benchmark how long insert-only processing takes versus update-only processing?
6. Have you looked at where the physical tables reside, are they on the same filesystems, etc?
Ken,
1. Tables are almost the same size in terms of rows & data types (about 400 records).
2. The job does a mixture of inserts and updates, but they are separated.
3. No triggers
4. The PK of each of the 2 tables is a FK in 1 other table for each.
5. Yes, I have a stage to take care of inserts and 1 for updates.
6. Same filesystem.

I'm considering removing the PK/FK on the long-running update table. The FK table is a helper table that gets completely refreshed regularly, and the build process uses the table in question via a hash file, so referential integrity is effectively already being enforced upstream.