DB2 "Delete" failing with Transaction log full msg

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

RK72
Participant
Posts: 154
Joined: Wed Sep 29, 2010 4:10 pm

DB2 "Delete" failing with Transaction log full msg

Post by RK72 »

Hi Friends,

I am facing a serious issue in production.
There is a job which deletes records from a table every weekend from DataStage, using the ODBC Enterprise stage.

Target DB: DB2 V9.x
Records to be deleted each run: around 7.5 million
Total size of the table: 18.5 million records

It is failing with the reason "Transaction log full". We increased the log size, but still no improvement.
I imported the APT_ORAUPSERT_COMMIT_ROW_INTERVAL and time interval environment variables into the job and defaulted them to 1000000 and 400 seconds respectively. In the target ODBC stage, Insert Array Size = 1000000.

The table has an index on two key columns.
My DBA told me that if I commit every 1 million records, the log space will be released. But even after doing the above, the job is still failing.

Please give your thoughts.

Thank you so much
chandra.shekhar@tcs.com
Premium Member
Posts: 353
Joined: Mon Jan 17, 2011 5:03 am
Location: Mumbai, India

Post by chandra.shekhar@tcs.com »

Then I guess you have to lower the commit size (e.g. 500000).
By the way, how are you deleting the records?
Thanx and Regards,
ETL User
RK72
Participant
Posts: 154
Joined: Wed Sep 29, 2010 4:10 pm

Post by RK72 »

Upsert method: Delete only
DELETE FROM schema.ABCtabl
WHERE DATE(TIMESTAMP) < CURRENT DATE - 42 DAYS
Close command: REORG the table
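
A hedged sketch of what the DBA's suggestion amounts to, run as plain SQL outside the job: delete in chunks and commit between chunks, so the active log never has to hold all 7.5 million deletes at once. Table and column names are as posted above; the 100000-row chunk size is only illustrative, and the DELETE-from-fullselect form may need adjusting on older 9.x fixpacks (splitting the date range into smaller windows achieves the same effect).

-- delete one chunk, then release the log space for it
DELETE FROM (
    SELECT 1 FROM schema.ABCtabl
    WHERE  DATE(TIMESTAMP) < CURRENT DATE - 42 DAYS
    FETCH FIRST 100000 ROWS ONLY
);
COMMIT;
-- repeat until no more rows qualify (SQLCODE +100)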
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

This is all about your database, but I wanted to point out that you can't set APT_ORAUPSERT_COMMIT_ROW_INTERVAL for DB2 - the "ORA" part means Oracle.
-craig

"You can never have too many knives" -- Logan Nine Fingers
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Use APT_RDBMS_COMMIT_ROWS for DB2. Default is 2048.
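
For example (a sketch only; the values below are illustrative, not recommendations), the variable can be added to the job as a parameter alongside a much smaller array size:

$APT_RDBMS_COMMIT_ROWS = 10000   (rows written per transaction before a commit)
Insert Array Size = 2000         (rows sent to the database per round trip)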
-craig

"You can never have too many knives" -- Logan Nine Fingers
RK72
Participant
Posts: 154
Joined: Wed Sep 29, 2010 4:10 pm

Post by RK72 »

chulett wrote:Use APT_RDBMS_COMMIT_ROWS for DB2. Default is 2048. ...
Hi Chulett,

I tried the option you mentioned and set APT_RDBMS_COMMIT_ROWS=500000, and also tried with Insert Array Size = 500000. The job is still failing with the same reason. Any other options?

FYI, the log size is set to 25 GB in DB2.

Error:
Failure during execution of operator logic. [api/operator_rep.C:401]

Fatal Error: [IBM(DataDirect OEM)][ODBC DB2 Wire Protocol driver][UDB DB2 for Windows, UNIX, and Linux]The transaction log for the database is full. [odbcUtils.C:1347]
chandra.shekhar@tcs.com
Premium Member
Posts: 353
Joined: Mon Jan 17, 2011 5:03 am
Location: Mumbai, India

Post by chandra.shekhar@tcs.com »

@RK72
If the job is still getting aborted, then I guess you have only two options.
We also faced this issue in our project, so we:

1) Increased the transaction log space.
2) Reduced the commit size.
RK72 wrote: using ODBC Enterprise stage
Why not the DB2 Connector stage? :?
RK72 wrote: also tried with Insert array size=500000
That's not the commit size, it's the array size. It has nothing to do with the committing of records.

As you know, the transaction log gets filled during inserts/updates/deletes.
Your log space will not be released until 500000 records have been committed, which means the combined size of those 500000 deletes can exceed the defined log space.
Also, any other transactions running on the database at the same time will use the log space too.
By the way, is the log space 25 GB per partition?
How many partitions does your database have?
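
One hedged way to sanity-check that, assuming the SYSIBMADM administrative views are enabled and readable on this database, is to read the log parameters straight from the database configuration:

-- Active log capacity per partition is roughly
-- LOGFILSIZ (in 4 KB pages) x (LOGPRIMARY + LOGSECOND)
SELECT NAME, VALUE
FROM   SYSIBMADM.DBCFG
WHERE  NAME IN ('logfilsiz', 'logprimary', 'logsecond')
ORDER BY NAME;

As an illustrative example only: LOGFILSIZ = 64000 with LOGPRIMARY + LOGSECOND = 100 works out to about 64000 x 4 KB x 100, i.e. roughly 25 GB of active log per partition.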
Last edited by chandra.shekhar@tcs.com on Wed May 29, 2013 12:10 am, edited 3 times in total.
Thanx and Regards,
ETL User
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Why such large arrays? Bring it back to the default, say 2000 or 5000. See if that works. Then ramp it up gradually. Get the DBA to help with keeping the transaction log clean. Beware of any long-running transactions.

If you're completely replacing all the records, why not truncate the table?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Ray - They're deleting based on date, about half of the records each time according to the stats posted earlier.

As noted, continue to work with your DBA on what the commit interval should be, and possibly see about adding more transaction log space.
Also as noted, that Array Size is the number of records sent to the database 'at the same time' over the network. While it is not part of your problem, it is CRaZy Big and should be dropped to something much smaller.
-craig

"You can never have too many knives" -- Logan Nine Fingers