An insert, update, or delete statement failed to run

Posted: Thu Apr 12, 2012 2:17 am
by patelamit009
Hi All,

I am also facing this problem, with the fatal error mentioned below.

Code:

An insert, update, or delete statement failed to run. (CC_DB2DBStatement::processRowStatusArray, file CC_DB2DBStatement.cpp, line 1,984).
I am using the DB2 Connector as the target and have set the write mode to Insert then Update. The curious thing is that the job works fine for smaller volumes but aborts for huge volumes (in the millions).

Has anyone found the root cause of this issue?

Posted: Thu Apr 12, 2012 2:33 am
by pandeesh
Could you post how you are configuring the connector stage?
Especially the array size.
Also, diagnose by checking for locks while the job is running.
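For example, on DB2 for LUW a query along these lines can show who is holding and who is waiting (a sketch only, assuming the SYSIBMADM.MON_LOCKWAITS administrative view of DB2 9.7+ is available; on DB2 for z/OS the rough equivalent would be something like the -DISPLAY DATABASE(...) LOCKS command):

Code:

-- Sketch only, assuming DB2 for LUW 9.7 or later: list applications
-- currently waiting on locks while the job runs.
SELECT * FROM SYSIBMADM.MON_LOCKWAITS;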

Posted: Thu Apr 12, 2012 3:00 am
by patelamit009
Hi Pandeesh,

Thanks for the quick reply.


My Connector stage settings are:
  • Write Mode: Insert then Update
  • Record Count: 5000
  • Isolation Level: Cursor stability
  • Auto commit: On
  • Array size: 5000
  • Fail on size mismatch: Yes
  • Fail on type mismatch: Yes
  • Drop unmatched fields: Yes
  • Lock wait mode: Use the lock timeout database configuration parameter
I am not able to diagnose the job and check while it is running because it aborts as soon as it starts. Could you please clarify if you meant something else?

Posted: Thu Apr 12, 2012 6:52 am
by chulett
:!: Split to your own post.

Posted: Thu Apr 12, 2012 11:30 am
by chandra.shekhar@tcs.com
Hi patelamit009,
The only solution we have tried is to run the job on a single node, or at least to write the data on one node only.
My DBA told me that while writing the data, multiple processes (active/inactive) try to write at the same time, which in turn leads to locking at the database level. We still haven't found the right solution; the only thing we are doing is writing the data on one node (using the node constraint property).
We are thinking of raising a PMR with IBM to get an answer as to why this happens only with some particular jobs/tables.

Posted: Thu Apr 12, 2012 11:32 am
by chandra.shekhar@tcs.com
On second thoughts, try to run the job with auto commit as "off" and then let us know.

Posted: Thu Apr 12, 2012 4:54 pm
by Kryt0n
Make sure there is a suitable index based on the keys you use to determine an update... at least that is what helped us but since you are doing an insert else update, you would hope an index already exists...

If the insert/update can use an index, it's less likely to lock unused rows that the other side of the partition wants to update.
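For instance (schema, table, and column names below are made up), an index that matches the key columns the connector puts in the UPDATE ... WHERE clause keeps the update from scanning and locking rows it does not need:

Code:

-- Illustrative sketch only; replace the made-up names with the real
-- target table and the key columns used to determine the update.
CREATE INDEX MYSCHEMA.IX_TARGET_KEYS
    ON MYSCHEMA.TARGET_TABLE (ORDER_ID, LINE_NO);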

Posted: Fri Apr 13, 2012 12:26 am
by sri_vin
1. First, check for record locks.
2. Second, turn the score dump on (APT_DUMP_SCORE) and check where it is failing.
3. If it fails as soon as you start the job, reduce the number of nodes.

Hope it helps.

Posted: Fri Apr 13, 2012 12:33 am
by SURA
Ensure that you selected the KEY columns for Update / Insert.

DS User

Posted: Fri Apr 13, 2012 1:12 pm
by patelamit009
Hi All,

Thanks for the valuable replies. But I forgot to mention that the table is on DB2 for z/OS, cataloged in the DB2 environment, and I am loading it using the DB2 Connector stage.

The fatal error mentioned is from the DB2 Connector stage while loading the mainframe table. I have tried all the options discussed above with no luck; the job still aborts with the same error mentioned at the beginning of the post.

A few observations:

1. Primary keys are defined on the table.
2. The job still aborts with Auto commit set to Off.
3. I have executed it on a single node, both sequentially and in parallel, but no luck.


As per the IBM documentation,
Insert then update: Writes data to the target table; runs the INSERT statement first, and if the INSERT fails with a duplicate key violation, runs the UPDATE statement.
But in my case, when the INSERT statement fails, the UPDATE statement is not executed and hence the job aborts.
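In other words, per input row the connector is supposed to do something like the following (a sketch with hypothetical table and column names; 23505 is the duplicate-key SQLSTATE):

Code:

-- Per-row behaviour described in the documentation quoted above;
-- names are hypothetical, ? markers stand for the row's values.
INSERT INTO MYSCHEMA.TARGET_TABLE (ORDER_ID, LINE_NO, AMOUNT)
VALUES (?, ?, ?);

-- Only if the INSERT fails with SQLSTATE 23505 (duplicate key):
UPDATE MYSCHEMA.TARGET_TABLE
   SET AMOUNT = ?
 WHERE ORDER_ID = ?
   AND LINE_NO  = ?;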

Can anyone please share further updates on this?

Thanks in advance.

Posted: Fri Apr 13, 2012 2:50 pm
by chulett
WHY is your insert statement failing? Is it for a... duplicate key violation?

Posted: Fri Apr 13, 2012 8:34 pm
by qt_ky
With the "Insert then Update" setting you would need to set the commit count (record count) to 1. You have it set to 5000 so that may be causing problems, unless the connector is smart enough to override that setting to 1 in the background w/o telling you, which it may or may not be doing.

Set your record count to 1 and see if it works. If it does, then you may come to know the root cause.

I would not run millions of inserts and updates into DB2/z using the "Insert then Update" setting; I would only use it for consistently low volumes. For better performance, split the two: run an insert job and then an update job, each with large commit counts (record counts).
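Just to illustrate (hypothetical names again), after the split each job issues only one kind of statement, so there is no per-row fallback and the record count can stay high:

Code:

-- Sketch of the split approach; names are hypothetical.
-- Target statement of the insert-only job (new keys only):
INSERT INTO MYSCHEMA.TARGET_TABLE (ORDER_ID, LINE_NO, AMOUNT)
VALUES (?, ?, ?);

-- Target statement of the update-only job (existing keys only):
UPDATE MYSCHEMA.TARGET_TABLE
   SET AMOUNT = ?
 WHERE ORDER_ID = ?
   AND LINE_NO  = ?;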

Posted: Sun Apr 15, 2012 5:43 am
by chandra.shekhar@tcs.com
Did you try running the job with auto commit off?
Or running the job on a single node?

Posted: Tue Apr 17, 2012 11:05 am
by patelamit009
Thanks Eric. The job certainly worked after setting the record count to 1. So, my assumption about the issue would be:
The DB2 Connector, when the write mode is set to "Insert then Update" against a z/OS table, requires each record to be put in its own array. (Here I set the array size to 1, which is dependent on the record count.)

Also, for performance reasons I have taken your suggestion to split the Insert and Update into two different jobs, and it is working as expected.


Hi Chandra, I have executed the job with auto commit off and also on a single node, but I could not get it to succeed. Please have a look at my previous post. Thanks.

Posted: Tue Apr 17, 2012 5:43 pm
by qt_ky
That's good. Problem resolved now?