
DB2 "Delete" failing with Transaction log full msg

Posted: Mon May 27, 2013 7:11 am
by RK72
Hi Friends,

I am facing a serious issue in production.
There is a job which deletes records from a table every weekend from DataStage using the ODBC Enterprise stage.

Target DB: DB2 V9.x
Records to be deleted each run: around 7.5 million
Total size of the table: 18.5 million records

It fails with the reason "Transaction log full". We increased the log size but there is still no improvement.
I imported the environment variables APT_ORAUPSERT_COMMIT_ROW_INTERVAL and its time-interval counterpart into the job and defaulted them to 1000000 and 400 seconds respectively. In the target ODBC stage, Insert Array Size = 1000000.

Table has index on two key columns.
My DBA told me that if I commit every 1 million records, the log space will be released. But after doing the above, the job is still failing.

Please give me your thoughts.

Thank you so much

Posted: Mon May 27, 2013 7:45 am
by chandra.shekhar@tcs.com
Then I guess you have to lower the commit size (like 500000).
By the way, how are you deleting the records?

Posted: Mon May 27, 2013 8:06 am
by RK72
Upsert method: Delete only
DELETE FROM schema.ABCtabl
WHERE DATE(TIMESTAMP) < CURRENT DATE - 42 DAYS
Close command: REORG the table
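One pattern that often helps with log-full errors on big deletes (a hedged sketch, not from the thread): delete a bounded batch, commit, and repeat, so the active log only ever holds one batch's worth of uncommitted work. DB2 allows DELETE against a fullselect, so FETCH FIRST can cap the batch. The table and predicate below are copied from the post; the chunk size, the DB-API cursor, and the loop itself are my assumptions.

```python
# Hedged sketch of a chunked delete (not the original DataStage job).
# `cur` is any DB-API cursor, e.g. pyodbc over the same ODBC DSN;
# table/column names are from the post and may differ in your schema.

CHUNK = 100_000  # arbitrary example size; tune with your DBA

# DB2 permits DELETE against a fullselect; FETCH FIRST caps the batch.
CHUNKED_DELETE = (
    "DELETE FROM (SELECT 1 FROM schema.ABCtabl "
    "WHERE DATE(TIMESTAMP) < CURRENT DATE - 42 DAYS "
    f"FETCH FIRST {CHUNK} ROWS ONLY)"
)

def delete_in_chunks(cur, commit):
    """Run the bounded DELETE until no rows remain; return rows deleted."""
    total = 0
    while True:
        cur.execute(CHUNKED_DELETE)
        commit()  # each COMMIT lets DB2 reuse this batch's log space
        total += cur.rowcount
        if cur.rowcount < CHUNK:
            return total
```

This trades one huge transaction (which must fit entirely in the active log) for many small ones that each fit comfortably.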

Posted: Mon May 27, 2013 8:10 am
by chulett
This is all about your database but wanted to point out that you can't set APT_ORAUPSERT_COMMIT_ROW_INTERVAL for DB2 - the "ORA" part means Oracle.

Posted: Mon May 27, 2013 11:04 am
by chulett
Use APT_RDBMS_COMMIT_ROWS for DB2. Default is 2048.

Posted: Mon May 27, 2013 11:49 pm
by RK72
chulett wrote:Use APT_RDBMS_COMMIT_ROWS for DB2. Default is 2048. ...
Hi Chulett,

I tried the option you mentioned and set APT_RDBMS_COMMIT_ROWS=500000,
and also tried Insert Array Size = 500000... the job is still failing for the same reason. Any other options?

FYI, the log size is set to 25 GB in DB2.

Error:
Failure during execution of operator logic. [api/operator_rep.C:401]

Fatal Error: [IBM(DataDirect OEM)][ODBC DB2 Wire Protocol driver][UDB DB2 for Windows, UNIX, and Linux]The transaction log for the database is full. [odbcUtils.C:1347]

Posted: Tue May 28, 2013 12:27 am
by chandra.shekhar@tcs.com
@RK72
If the job is still getting aborted, then I guess you have only two options. We also faced this issue in our project, so we:

1) Increased the transaction log space.
2) Reduced the commit size.
RK72 wrote: using ODBC Enterprise stage
Why not the DB2 Connector stage? :?
RK72 wrote: also tried with Insert Array Size = 500000
That's not the commit size, it's the array size. It has nothing to do with committing records.

As you know, the transaction log fills up during inserts/updates/deletes.
So no log space is released until 500000 records are committed, which means the log records for those 500000 rows exceed the defined log space.
Also, any other transactions running against the database at the same time will use log space too.
By the way, is the log space 25 GB per partition?
How many partitions does your database have?
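For a rough sanity check (my numbers, not from the thread): DB2's active log capacity per partition is LOGFILSIZ (the log file size, in 4 KB pages) times the number of log files (LOGPRIMARY + LOGSECOND). The configuration values below are hypothetical, chosen only to show one combination that works out to 25 GiB.

```python
# Estimate DB2 active-log capacity per partition from the db cfg
# parameters LOGFILSIZ (log file size, in 4 KB pages), LOGPRIMARY
# and LOGSECOND (number of primary/secondary log files).

def active_log_bytes(logfilsiz_pages: int, logprimary: int, logsecond: int) -> int:
    return logfilsiz_pages * 4096 * (logprimary + logsecond)

# Hypothetical configuration: 51200-page (200 MiB) log files,
# 64 primary + 64 secondary files.
print(active_log_bytes(51200, 64, 64) / 2**30)  # 25.0
```

If the 25 GB is the total across partitions rather than per partition, the limit each delete actually runs into is correspondingly smaller.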

Posted: Tue May 28, 2013 2:51 am
by ray.wurlod
Why such large arrays? Bring it back to the default, say 2000 or 5000. See if that works. Then ramp it up gradually. Get the DBA to help with keeping the transaction log clean. Beware of any long-running transactions.

If you're completely replacing all the records, why not truncate the table?

Posted: Tue May 28, 2013 6:32 am
by chulett
Ray - They're deleting based on date, about half of the records each time according to the stats posted earlier.

As noted, continue to work with your DBA on what the commit interval should be and possibly see about adding more transaction log space.
Also as noted, that Array Size is the number of records sent to the database 'at the same time' over the network. While it is not part of your problem, it is CRaZy Big and should be dropped to something much smaller.