Hi,
I have a job that deletes rows in the target based on the 'delete existing rows' option in a DB2 stage.
To avoid getting a transaction log error, do I need to issue a commit? Or does it automatically commit when I use the 'Cursor Stability' option?
Thanks.
Deleting 2 million rows
All DELETEs will be logged - that is, you will always "get a log". DB2 does not have the option of disabling transaction logging.
Auto-commit (rows/transaction = 1) means that the number of pending rows will be minimized, so you don't risk blowing your undo tablespace.
Based on far too little research, I don't believe that "cursor stability" will make any difference to how transactions are logged. I'd be happy to be corrected here. From the on-line help for this stage:
Cursor Stability. Takes exclusive locks on modified data and sharable locks on all other data. This is the default. This level is equivalent to read committed. Exclusive locks are held until a commit or rollback is executed. Uncommitted changes are not readable by other transactions. Shared locks are released immediately after the data has been processed, allowing other transactions to modify it.
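Ray's point about auto-commit generalizes: committing every N rows (rather than once at the end) keeps the number of pending, logged deletes bounded. A minimal sketch of that pattern, using sqlite3 purely as a stand-in database; the table, column names, and batch size are all hypothetical, and on DB2 the equivalent is the stage's rows-per-transaction property or a driver-level commit loop:

```python
import sqlite3

# Stand-in database; the batched-commit pattern is the same for any DB-API driver.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, flag INTEGER)")
cur.executemany("INSERT INTO target VALUES (?, ?)",
                [(i, i % 2) for i in range(10_000)])
conn.commit()

BATCH = 1_000  # rows per transaction; 1 would be full auto-commit
deleted = 0
while True:
    # SQLite has no "FETCH FIRST n ROWS ONLY" on DELETE, so the batch
    # is limited via a subquery on the primary key.
    cur.execute(
        "DELETE FROM target WHERE id IN "
        "(SELECT id FROM target WHERE flag = 1 LIMIT ?)", (BATCH,))
    if cur.rowcount == 0:
        break
    deleted += cur.rowcount
    conn.commit()  # release the log space held by this batch

print(deleted)
```

Each commit ends a unit of work, so the transaction log never has to hold millions of uncommitted deletes at once, which is exactly the log-full error the original poster is trying to avoid.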
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
I believe Ray is mistaken about disabling transaction logging... though he may have been talking in the context of a DataStage stage property. See the ACTIVATE NOT LOGGED INITIALLY clause for the ALTER TABLE statement in the SQL reference.
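For reference, a hedged sketch of that clause; the table name and predicate are hypothetical. NOT LOGGED INITIALLY lasts only for the unit of work in which it is activated, and if that unit of work fails the table can be left in an unusable state, so treat this as a sharp tool:

```sql
-- Must all happen in one unit of work (no intervening COMMIT):
ALTER TABLE big_table ACTIVATE NOT LOGGED INITIALLY;
DELETE FROM big_table WHERE load_date < '2005-01-01';
COMMIT;
```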
He is correct about "cursor stability". That applies to managing locks and has nothing to do with transaction logging.
What percentage of total are your 2 million deletes?
Deletes are an expensive operation. You will likely be better off creating a load file with all rows minus the 2 million and doing a load replace operation.
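A sketch of that export-and-replace approach from the DB2 command line; the file name, table name, and keep-predicate are hypothetical:

```sql
-- Export only the rows to keep, then load them back with REPLACE,
-- which truncates the table first and is minimally logged:
EXPORT TO keep.ixf OF IXF
  SELECT * FROM big_table WHERE status <> 'OBSOLETE';
LOAD FROM keep.ixf OF IXF REPLACE INTO big_table;
```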
Mike
Hi
I use the DB2 LOAD command to quickly delete all rows. That works for tables without any referential constraints (otherwise you will have to clear the check-pending state on the dependent tables). You could issue the LOAD command from a sequence. The LOAD command could be:
LOAD FROM /dev/null OF IXF MESSAGES /dev/null REPLACE INTO <table>
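If the truncated table does have dependents, they are placed in check-pending state after the LOAD. A hedged sketch of clearing it (table name hypothetical):

```sql
-- Re-validate constraints and take the table out of check-pending:
SET INTEGRITY FOR child_table IMMEDIATE CHECKED;
```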
Alex