
Deleting 2 million rows

Posted: Sun Mar 01, 2009 10:31 pm
by cosec
Hi,

I have a job that deletes rows in the target based on the 'delete existing rows' option in a DB2 stage.

To avoid getting a transaction log error, do I need to issue a commit? Or does it commit automatically when I use the 'Cursor Stability' option?

Thanks.

Posted: Sun Mar 01, 2009 11:08 pm
by ray.wurlod
All DELETEs will be logged - that is, you will always "get a log". DB2 does not have the option of disabling transaction logging.

Auto-commit (rows per transaction = 1) means that the number of pending rows is minimized, so you don't risk filling the transaction log.
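
As a rough sketch of the same idea applied in batches rather than row by row (table name, predicate and batch size are only examples, and deleting from a fullselect with FETCH FIRST needs a reasonably recent DB2 version):

DELETE FROM (SELECT 1 FROM myschema.mytable WHERE load_date < '2009-01-01' FETCH FIRST 100000 ROWS ONLY);  -- delete one batch
COMMIT;  -- commit so no single transaction holds millions of uncommitted deletes in the active log
-- repeat until SQL0100W (no row found) is returned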

Based on far too little research, I don't believe that "cursor stability" will make any difference to how transactions are logged. I'd be happy to be corrected here. From the on-line help for this stage:
Cursor Stability. Takes exclusive locks on modified data and sharable locks on all other data. This is the default. This level is equivalent to read committed. Exclusive locks are held until a commit or rollback is executed. Uncommitted changes are not readable by other transactions. Shared locks are released immediately after the data has been processed, allowing other transactions to modify it.

Posted: Mon Mar 02, 2009 6:45 am
by Mike
I believe Ray is mistaken about disabling transaction logging... though he may have been talking in the context of a DataStage stage property. See the ACTIVATE NOT LOGGED INITIALLY clause for the ALTER TABLE statement in the SQL reference.
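
For illustration, the clause is activated in the same unit of work as the delete, along these lines (table and predicate are placeholders; autocommit must be off, and depending on your DB2 version the table may need to have been created with the NOT LOGGED INITIALLY attribute):

ALTER TABLE myschema.mytable ACTIVATE NOT LOGGED INITIALLY;  -- logging suspended for this unit of work only
DELETE FROM myschema.mytable WHERE <condition>;
COMMIT;  -- logging resumes once the unit of work ends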

He is correct about "cursor stability". That applies to managing locks and has nothing to do with transaction logging.

What percentage of the total row count are your 2 million deletes?

Deletes are an expensive operation. You will likely be better off creating a load file with all rows minus the 2 million and doing a load replace operation.
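
A rough sketch of that approach from the DB2 command line (file names, table and predicate are illustrative only):

EXPORT TO /tmp/keep_rows.ixf OF IXF SELECT * FROM myschema.mytable WHERE <rows to keep>;  -- export everything except the 2 million
LOAD FROM /tmp/keep_rows.ixf OF IXF MESSAGES /tmp/load.msg REPLACE INTO myschema.mytable;  -- LOAD REPLACE avoids row-by-row logging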

Mike

Posted: Mon Mar 16, 2009 3:59 am
by thurmy34
Hi
I use the ACTIVATE NOT LOGGED INITIALLY clause in the before SQL of my DB2 stage when I want to perform a conditional delete.
It works fine, but if the SQL command following it fails for whatever reason, the table will not be available.
You'll have to drop and recreate it.

Posted: Fri Mar 27, 2009 1:58 am
by KadetG
Hi

I use the DB2 LOAD command to quickly delete all rows. That works for tables without any referential constraints (otherwise you will have to clear the check pending state on the affected tables). You could issue the LOAD command from a sequence. The load command could be:
LOAD FROM /dev/null OF IXF MESSAGES /dev/null REPLACE INTO <table>
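
If the table does have referential constraints, the LOAD may leave it and its dependent tables in check pending (set integrity pending) state; something along these lines (table name is a placeholder) is the usual way to clear it:

SET INTEGRITY FOR <table> IMMEDIATE CHECKED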