DB2 UPSERT , Records silently rejected

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

prabhu_dc
Premium Member
Posts: 5
Joined: Fri Mar 09, 2007 12:28 pm
Location: Bangalore

DB2 UPSERT , Records silently rejected

Post by prabhu_dc »

Hi All,


We have a job that loads data into a table in UPSERT mode. The Update SQL has been masked with a "1=0" condition in its WHERE clause, so it can never match any rows. The destination table has a single key column, LD_SEQ_I, which is set as auto-increment at the database level.
Basically, every record coming through the input link needs to be inserted, and the database automatically generates the LD_SEQ_I value for each inserted row.
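A minimal sketch of that setup, with sqlite3 standing in for DB2. The table and payload column names here are made up for illustration; only LD_SEQ_I comes from the job description. It shows why the 1=0 mask makes the stage effectively insert-only:

```python
import sqlite3

# In-memory stand-in for the DB2 target table; LD_SEQ_I is the
# auto-increment key and is never supplied by the job.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE REF_INITEM (
        LD_SEQ_I INTEGER PRIMARY KEY AUTOINCREMENT,  -- generated by the DB
        ITEM_DATA TEXT                               -- hypothetical payload column
    )
""")

def upsert(row):
    """Upsert where the update half is masked with 1=0:
    every row is effectively insert-only."""
    try:
        con.execute("INSERT INTO REF_INITEM (ITEM_DATA) VALUES (?)", (row,))
        return "inserted"
    except sqlite3.IntegrityError:
        # Fallback update can never match because of the 1=0 predicate.
        cur = con.execute(
            "UPDATE REF_INITEM SET ITEM_DATA = ? WHERE ITEM_DATA = ? AND 1=0",
            (row, row))
        return "updated" if cur.rowcount else "rejected"

results = [upsert(r) for r in ("a", "b", "c")]
print(results)  # every record inserts; LD_SEQ_I is assigned by the database
```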

What has been happening is that, on occasion, only 20-30% of the records get loaded and the rest simply get rejected. There is no apparent reason for the records to be rejected at all. As a temporary fix I just clean up the table and rerun the job, and the records go in without issue.

The success and reject logs are as follows.


Project:XXXXXXXX
Job name:XXXXXXX
Event #:92227
Timestamp:7/31/2009 3:36:37 AM
Event type:Info
User:xxxxxxx
Message:
db2API_REF_INITEM_E,92: Records Inserted: 163
Records Used for Update: 0
Records Rejected: 0
Total Records: 163.



Project:XXXXXXXX
Job name:XXXXXXX
Event #:92231
Timestamp:7/31/2009 3:45:58 AM
Event type:Info
User:xxxxxxx
Message:
db2API_REF_INITEM_E,102: Records Inserted: 0
Records Used for Update: 0
Records Rejected: 152
Total Records: 152.



This is a daily job and it only fails like this intermittently; it has occurred 10 times this month.


Any thoughts or suggestions?
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

More than likely, your records are failing to "upsert" due to duplicate key issues... but we'd really need to know what you mean by the "clean up the table" action you do that fixes things. Truncate?
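One way duplicates could surface as silent rejects rather than errors, sketched with sqlite3 and a hypothetical unique business key (the thread itself only names LD_SEQ_I): the insert fails on the unique constraint, the stage falls back to the update, and the 1=0 predicate guarantees the update touches zero rows, so the record is simply counted as rejected with no error raised:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical table: a unique business key alongside the
# auto-generated LD_SEQ_I.
con.execute("""
    CREATE TABLE T (
        LD_SEQ_I INTEGER PRIMARY KEY AUTOINCREMENT,
        BUS_KEY  TEXT UNIQUE
    )
""")

rejected = 0
for key in ("k1", "k2", "k1"):          # "k1" arrives twice
    try:
        con.execute("INSERT INTO T (BUS_KEY) VALUES (?)", (key,))
    except sqlite3.IntegrityError:
        # Upsert fallback: the masked update matches nothing.
        cur = con.execute("UPDATE T SET BUS_KEY = ? WHERE 1=0", (key,))
        if cur.rowcount == 0:
            rejected += 1               # counted as a reject, no error raised

print(rejected)
```

After a "clean up" that deletes the previously loaded rows, the same input no longer collides, which would match the reported rerun behaviour.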
-craig

"You can never have too many knives" -- Logan Nine Fingers
prabhu_dc
Premium Member
Posts: 5
Joined: Fri Mar 09, 2007 12:28 pm
Location: Bangalore

Post by prabhu_dc »

By "clean up" I mean I delete the records that got loaded in the most recent run. As I said, only 20-30% get in and the rest are silently rejected, so before rerunning I delete the partial set of records to maintain data integrity.

I too suspected a unique key violation as the cause, but I can't connect the dots because LD_SEQ_I is auto-incremented by the database and does not come from the job.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

So, are you saying that when 70% of the data gets rejected, you can go in and remove the 30% that loaded, rerun the exact same data again and then they load 100%? :?
-craig

"You can never have too many knives" -- Logan Nine Fingers
prabhu_dc
Premium Member
Posts: 5
Joined: Fri Mar 09, 2007 12:28 pm
Location: Bangalore

Post by prabhu_dc »

Yes, that's exactly what's happening. I have no way to reproduce the problem because it simply works when I try it again.
miwinter
Participant
Posts: 396
Joined: Thu Jun 22, 2006 7:00 am
Location: England, UK

Post by miwinter »

How about contention with other processes? Maybe your silent rejects are actually failures due to a lock-related issue (e.g. a deadlock), and on rerun the issue isn't there a second time, by chance or by nature of timing.
Mark Winter
<i>Nothing appeases a troubled mind more than <b>good</b> music</i>
Shamanth_vk
Premium Member
Posts: 9
Joined: Sat Jul 14, 2007 10:47 am

Enable Output Reject Records

Post by Shamanth_vk »

Hi prabhu_dc,

Enable "Output Reject Records" in the target DB stage and send the rejects to a dataset.

Then view the data of the reject dataset; it will show you the SQLCODE for the exact DB error.

Once you have the error code, look up the corresponding error description.

This should give you the exact error.
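Once the reject dataset captures the SQLCODE, it can be interpreted against the DB2 messages. A tiny lookup sketch covering a few common codes relevant to this thread (an illustrative subset, not a complete table; the function name is made up):

```python
# A few common DB2 SQLCODEs and their meanings (illustrative subset).
DB2_SQLCODES = {
    -803: "SQL0803N: duplicate value violates a unique index or constraint",
    -911: "SQL0911N: transaction rolled back due to deadlock or lock timeout",
    -904: "SQL0904N: unsuccessful execution caused by an unavailable resource",
}

def describe(sqlcode: int) -> str:
    """Map a captured SQLCODE to a short description, falling back to
    a pointer at the DB2 Messages Reference for unlisted codes."""
    return DB2_SQLCODES.get(sqlcode, f"SQLCODE {sqlcode}: see DB2 Messages Reference")

print(describe(-803))
```

A -803 here would support the duplicate-key theory; a -911 would support the lock-contention theory.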
prabhu_dc
Premium Member
Posts: 5
Joined: Fri Mar 09, 2007 12:28 pm
Location: Bangalore

Post by prabhu_dc »

@miwinter: I don't see that as a possibility. This is a heavy-duty production system with ample resources, and these inserts are, moreover, simple and low-cost, so I don't think there is contention.

@Shamanth_vk: That is the challenge I am facing. I cannot modify the production job as it is protected. I could, however, copy the job to a personal project and connect to the same source and destination tables. Unfortunately the job works fine when I rerun it; whether in production or in a personal project, it just works on a rerun. The annoying thing turned up again today. I usually have a solution for most such problems; this is the first one that has really baffled me.
Sainath.Srinivasan
Participant
Posts: 3337
Joined: Mon Jan 17, 2005 4:49 am
Location: United Kingdom

Post by Sainath.Srinivasan »

Set a database-level trace to see what is happening.

From that you can trace everything, including the offending object, the data, and the error code.

What happens if you run with a single-node config file?
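For reference, a single-node parallel configuration file looks roughly like this; the hostname and directory paths below are placeholders that would need to match your environment:

```
{
    node "node1"
    {
        fastname "your_hostname"
        pools ""
        resource disk "/path/to/resource" {pools ""}
        resource scratchdisk "/path/to/scratch" {pools ""}
    }
}
```

Running on one node removes partition-level parallelism from the picture, which can help rule out partitioning or concurrency effects as the cause.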