Deleting 2 million rows

Post questions here related to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.


cosec
Premium Member
Posts: 230
Joined: Tue May 08, 2007 8:10 pm

Deleting 2 million rows

Post by cosec »

Hi,

I have a job that deletes rows in the target based on the 'delete existing rows' option in a DB2 stage.

To avoid getting a transaction log error, do I need to issue a commit, or does it commit automatically when I use the 'Cursor Stability' option?

Thanks.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

All DELETEs will be logged - that is, you will always "get a log". DB2 does not have the option of disabling transaction logging.

Auto-commit (rows per transaction = 1) means that the number of pending rows is minimized, so you don't risk filling the transaction log.
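You can get the same bounded-log effect in plain SQL by deleting in committed chunks. A rough sketch only - the table name and predicate here are invented, and you should verify the deletable-fullselect idiom against your DB2 version:

-- Delete a batch, commit, and repeat until no rows qualify,
-- so the active log never has to hold all 2 million deletes at once.
DELETE FROM (
    SELECT 1 FROM MYSCHEMA.BIG_TABLE
    WHERE load_date < '2007-01-01'
    FETCH FIRST 10000 ROWS ONLY
) AS chunk;
COMMIT;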

Based on far too little research, I don't believe that "cursor stability" will make any difference to how transactions are logged. I'd be happy to be corrected here. From the on-line help for this stage:
Cursor Stability. Takes exclusive locks on modified data and sharable locks on all other data. This is the default. This level is equivalent to read committed. Exclusive locks are held until a commit or rollback is executed. Uncommitted changes are not readable by other transactions. Shared locks are released immediately after the data has been processed, allowing other transactions to modify it.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Mike
Premium Member
Posts: 1021
Joined: Sun Mar 03, 2002 6:01 pm
Location: Tampa, FL

Post by Mike »

I believe Ray is mistaken about disabling transaction logging... though he may have been talking in the context of a DataStage stage property. See the ACTIVATE NOT LOGGED INITIALLY clause for the ALTER TABLE statement in the SQL reference.
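For illustration only (the table name is invented, and note that everything changed in that unit of work goes unlogged, with the recovery caveat described further down):

-- Suppress logging for the current unit of work:
ALTER TABLE MYSCHEMA.BIG_TABLE ACTIVATE NOT LOGGED INITIALLY
-- Or empty the table in one unlogged operation:
ALTER TABLE MYSCHEMA.BIG_TABLE ACTIVATE NOT LOGGED INITIALLY WITH EMPTY TABLE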

He is correct about "cursor stability". That applies to managing locks and has nothing to do with transaction logging.

What percentage of the total row count are your 2 million deletes?

Deletes are an expensive operation. You will likely be better off creating a load file with all rows minus the 2 million and doing a load replace operation.
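Something along these lines from the DB2 command line, where the file names, message files, and table are all placeholders:

EXPORT TO keep_rows.ixf OF IXF MESSAGES exp.msg SELECT * FROM MYSCHEMA.BIG_TABLE WHERE NOT (<delete condition>)
LOAD FROM keep_rows.ixf OF IXF MESSAGES load.msg REPLACE INTO MYSCHEMA.BIG_TABLE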

Mike
thurmy34
Premium Member
Posts: 198
Joined: Fri Mar 31, 2006 8:27 am
Location: Paris

Post by thurmy34 »

Hi
I use the ACTIVATE NOT LOGGED INITIALLY clause in the before-SQL of my DB2 stage when I want to perform a conditional delete.
It works fine, but if the SQL command following it fails for whatever reason, the table will no longer be available.
You'll have to drop and re-create it.
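Something like this, where the table and condition are just examples - the ALTER and the DELETE have to stay in the same unit of work, so don't commit between them:

ALTER TABLE MYSCHEMA.BIG_TABLE ACTIVATE NOT LOGGED INITIALLY;
DELETE FROM MYSCHEMA.BIG_TABLE WHERE load_date < '2007-01-01';
COMMIT;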
Hope This Helps
Regards
KadetG
Participant
Posts: 30
Joined: Mon Nov 06, 2006 12:43 pm

Post by KadetG »

Hi

I use the DB2 LOAD command to quickly delete all rows. That works for tables without any referential constraints (otherwise you will have to clear the check pending state on the dependent tables). You could issue the LOAD command from a sequence. The LOAD command could be:
LOAD FROM /dev/null OF IXF MESSAGES /dev/null REPLACE INTO <table>
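For example, run from an Execute Command activity in the sequence (the database and table names are just examples):

db2 CONNECT TO MYDB
db2 "LOAD FROM /dev/null OF IXF MESSAGES /dev/null REPLACE INTO MYSCHEMA.MYTABLE"
db2 CONNECT RESET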
Alex