Delete

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

asitagrawal
Premium Member
Posts: 273
Joined: Wed Oct 18, 2006 12:20 pm
Location: Porto

Post by asitagrawal »

chulett wrote:... About all you could control from a Transaction Size standpoint would be if each set of records from any given 'key set' were committed per key change. That would take a value of 1.
If I give Transaction Size as 1, then Array Size also comes down to 1, right?
i.e. one set of keys goes to the database, is deleted and committed, and then the next key set is processed, correct?
Also, if I am right, then Transaction Size cannot be less than Array Size, right?
That time for those volumes might be great in a weekly or monthly process, less so for a daily one.
Yes, this is running on a daily basis, twice per day.

Actually, I was wondering about Array Size and Transaction Size because, earlier, when I was using Array Size = 1000 and Transaction Size = 1000, the DBA said that no COMMITs were being issued. Over this weekend I have implemented the changes, so I will check with the DBA on Monday how frequently the commits are being issued now (i.e. after Array Size = 100 and Transaction Size = 200).

Now, what more I need to explore is Bulk Deletes, i.e. framing a single WHERE clause and issuing one DELETE command. For this I need to check with the DBA what the maximum length of a WHERE clause is.
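Something like the following is what I have in mind; the table and column names here are just placeholders, not the real job's names:

Code:
-- one bulk DELETE instead of one DELETE per key set
DELETE FROM target_table
WHERE  key_col IN (101, 102, 103 /* ... all key values for this run ... */);
COMMIT;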
Share to Learn, and Learn to Share.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

As I said, in this case going from A/T sizes of 1000/1000 to 100/200 won't change a dang thing if all you are sending to the stage is nine records. Your DBA still won't see any commits being done. :?

Best you can do is set T Size to 1 and then commit on every key change. That would mean, based on the numbers you posted earlier, that you would:

1: Delete 2 records, then commit once
2: Delete 5,189,142 records then commit once
3: Delete 87,425 records then commit once

Etc, etc. If that's what your DBA wants then fine. Anything else, other than to continue to commit only once at the end, would require a completely different approach. Not sure how many different ways I can say it.
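Per key set, the stage would then be issuing something along these lines (just a sketch of the pattern, with made-up table and column names):

Code:
DELETE FROM target_table WHERE key_col = ?;  -- removes every row for one key set
COMMIT;                                      -- Transaction Size = 1: commit on each key change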

On the subject of Transaction Size versus Array Size - yes, you are asking for trouble if you make TS < AS. TS should always be a multiple of AS - an even multiple to make it more clear. From what I've seen and heard here, with anything else results can be... unpredictable at best. Unless, of course, all of the numbers you are using exceed the record counts being processed; then those settings are moot.
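For example, with your new settings of AS = 100 and TS = 200 - a clean 2x multiple - the sequence would look roughly like this:

Code:
-- array 1: 100 rows sent to the database in one round trip
-- array 2: another 100 rows
COMMIT;   -- 200 rows reached, so one commit every two arrays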
-craig

"You can never have too many knives" -- Logan Nine Fingers
asitagrawal
Premium Member
Posts: 273
Joined: Wed Oct 18, 2006 12:20 pm
Location: Porto

Post by asitagrawal »

If DB2 UDB server is on the same machine as DataStage server, set Array Size to 1. Otherwise set Array Size to some multiple of (packet_size / row_size). Is there a large number of deletes? If not, set rows per transaction to 0, so that all the deletes form one transaction.
This has helped, thanks.
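Just to check my understanding of the arithmetic, with made-up numbers (the real packet and row sizes have to come from the DBA):

Code:
-- hypothetical: packet_size = 32768 bytes, row_size = 200 bytes
-- 32768 / 200 = 163.84, so an Array Size around 160 fills each packet,
-- with Transaction Size a multiple of that (or 0 for one big transaction)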
Share to Learn, and Learn to Share.