DB2 "Delete" failing with Transaction log full msg
Moderators: chulett, rschirm, roy
Hi Friends,
I am facing a serious issue in production.
There is a job which deletes records from a table every weekend from DataStage using the ODBC Enterprise stage.
Target DB: DB2 V9.x
Records to be deleted each time: around 7.5 million
Total size of the table: 18.5 million records
Failing with the reason: "Transaction log full". We increased the log size but still no improvement.
I imported the environment variables APT_ORAUPSERT_COMMIT_ROW_INTERVAL and the time interval variable into the job and defaulted them to 1000000 and 400 sec respectively. In the target ODBC stage, Insert Array Size = 1000000.
The table has an index on two key columns.
My DBA told me that if I commit rows every 1 million records, then log space will be released. But after doing the above, the job is still failing.
Pls give your thoughts??
Thank you so much
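A sketch of the "delete in batches, commit each batch" pattern the DBA is suggesting, using sqlite3 as a stand-in for DB2 (the table name, column names, and batch size here are made up for illustration; in DB2 the per-batch delete is commonly written as a `DELETE FROM (SELECT ... FETCH FIRST n ROWS ONLY)` followed by a COMMIT):

```python
import sqlite3

# In-memory stand-in for the real table: half the rows match the delete condition.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, load_flag INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i % 2) for i in range(100_000)])
conn.commit()

BATCH = 10_000  # keep each unit of work small so log space can be reclaimed

while True:
    cur = conn.execute(
        "DELETE FROM orders WHERE rowid IN "
        "(SELECT rowid FROM orders WHERE load_flag = 0 LIMIT ?)", (BATCH,))
    conn.commit()  # ends the unit of work; its log space becomes reusable
    if cur.rowcount == 0:
        break

remaining = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(remaining)  # half the rows (load_flag = 1) survive
```

The point is that the log only has to hold one batch at a time, instead of all 7.5 million deletes in a single transaction.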
-
- Premium Member
- Posts: 353
- Joined: Mon Jan 17, 2011 5:03 am
- Location: Mumbai, India
chulett wrote: Use APT_RDBMS_COMMIT_ROWS for DB2. Default is 2048. ...
Hi Chulett,
I tried the option you mentioned and set APT_RDBMS_COMMIT_ROWS=500000, and also tried with Insert Array Size=500000... the job is still failing with the same reason. Any other options??
FYI, the log size is set to 25 GB in DB2.
Error:
Failure during execution of operator logic. [api/operator_rep.C:401]
Fatal Error: [IBM(DataDirect OEM)][ODBC DB2 Wire Protocol driver][UDB DB2 for Windows, UNIX, and Linux]The transaction log for the database is full. [odbcUtils.C:1347]
@RK72
If the job is still getting aborted then I guess you have only two options. We also faced this issue in our project, so we:
1) Increased the transaction log space.
2) Reduced the commit size.
RK72 wrote: using ODBC Enterprise stage
Why not the DB2 Connector stage??
RK72 wrote: also tried with Insert array size=500000
That's not the commit size, it's the array size. It has nothing to do with committing records.
As you know, the transaction log gets filled during insert/update/delete, so no log space is released until 500000 records are committed; if the job still fails, it means the size of those 500000 records is greater than the defined log space. Also, any other transactions running on the database at the same time will use log space too.
By the way, is the log space 25 GB per partition? How many partitions does your database have?
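The arithmetic above can be sanity-checked: the log has to hold every row deleted since the last commit. A rough sketch, where the per-row log footprint is an assumed number for illustration, not a measured DB2 figure:

```python
LOG_SPACE_BYTES = 25 * 1024**3   # 25 GB, as posted
BYTES_PER_ROW = 2048             # assumed log bytes per deleted row (row image + index keys)
COMMIT_INTERVAL = 500_000        # rows per unit of work

# Peak log space one unit of work needs under these assumptions:
peak = COMMIT_INTERVAL * BYTES_PER_ROW
print(peak)                      # about 1 GB

# Rows that would fit in the log before it filled, if nothing else used it:
fits = LOG_SPACE_BYTES // BYTES_PER_ROW
print(fits)                      # roughly 13 million rows
```

Under these assumed numbers a 500,000-row interval uses only about 1 GB, so a genuine "log full" with a 25 GB log points at one of the factors above: the 25 GB being split per partition, concurrent transactions holding log space, or a much larger real per-row footprint.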
Last edited by chandra.shekhar@tcs.com on Wed May 29, 2013 12:10 am, edited 3 times in total.
Thanx and Regards,
ETL User
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Why such large arrays? Bring it back to the default, say 2000 or 5000. See if that works. Then ramp it up gradually. Get the DBA to help with keeping the transaction log clean. Beware of any long-running transactions.
If you're completely replacing all the records, why not truncate the table?
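Since roughly half the rows survive each week, a related option worth asking the DBA about is copying the keepers and swapping tables instead of deleting. Sketched here with sqlite3 as a stand-in (in DB2 the swap would use its own utilities, and any indexes and privileges on the original table would need recreating):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, keep INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i % 2) for i in range(1000)])
conn.commit()

# Copy only the rows to keep, then swap the tables. The copy logs far less
# than deleting 7.5 million rows, and the drop/rename are cheap operations.
conn.execute("CREATE TABLE orders_new AS SELECT * FROM orders WHERE keep = 1")
conn.execute("DROP TABLE orders")
conn.execute("ALTER TABLE orders_new RENAME TO orders")
conn.commit()

kept = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(kept)  # only the keep = 1 half remains
```

Whether this is viable depends on downtime windows and on other readers of the table during the swap, which is exactly the kind of thing to settle with the DBA.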
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Ray - They're deleting based on date, about half of the records each time according to the stats posted earlier.
As noted, continue to work with your DBA on what the commit interval should be and possibly see about adding more transaction log space.
Also as noted, that Array Size is the number of records sent to the database 'at the same time' over the network. While it is not part of your problem, it is CRaZy Big and should be dropped to something much smaller.
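That distinction can be sketched as follows, with sqlite3 standing in for DB2 and both parameter names invented for illustration: the array size only controls how many rows travel together per statement, while the commit interval bounds how much uncommitted work the log must hold.

```python
import sqlite3

ARRAY_SIZE = 2000     # rows batched into one executemany() call ("array")
COMMIT_ROWS = 10_000  # rows per transaction - this is what bounds log use

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")

rows = [(i,) for i in range(50_000)]
since_commit = 0
for start in range(0, len(rows), ARRAY_SIZE):
    chunk = rows[start:start + ARRAY_SIZE]
    conn.executemany("INSERT INTO t VALUES (?)", chunk)  # one "array" of rows
    since_commit += len(chunk)
    if since_commit >= COMMIT_ROWS:
        conn.commit()      # ends the unit of work; its log space can be reused
        since_commit = 0
conn.commit()

total = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(total)
```

Shrinking ARRAY_SIZE changes nothing about log consumption; only COMMIT_ROWS does, which is why a huge array size is wasteful but not the cause of the "log full" error.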
-craig
"You can never have too many knives" -- Logan Nine Fingers