Update then insert not working in 8.7

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Hmmm... hard to say based on what was posted. For a "combo" action to work, the first action must fail before the second option will fire. So the update must update zero records before the insert is attempted. It sounds like that happens and then you get a PK violation on the insert, yes?
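
(For anyone reading along, the per-row logic described above boils down to something like the sketch below. This is only an illustration in Python against SQLite with made-up table and column names, not how the Oracle Connector is actually implemented.)

Code:

import sqlite3

# Illustrative stand-in for the target table; the table and column
# names here are hypothetical, not from the original job.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (pk INTEGER PRIMARY KEY, val TEXT)")

def update_then_insert(pk, val):
    # "Update then insert": the update runs first; only if it touches
    # zero rows is the insert attempted.
    cur = conn.execute("UPDATE target SET val = ? WHERE pk = ?", (val, pk))
    if cur.rowcount == 0:
        # If the key already exists by the time this runs, this is
        # where the PK violation would surface.
        conn.execute("INSERT INTO target (pk, val) VALUES (?, ?)", (pk, val))

update_then_insert(1, "first")   # update hits nothing, so the row is inserted
update_then_insert(1, "second")  # update hits the existing row, no insert
print(conn.execute("SELECT * FROM target").fetchall())  # [(1, 'second')]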

Your PK column, is it a surrogate key? If so, how are you handling it? And what is your "key" field for the update - one or more business key values?
-craig

"You can never have too many knives" -- Logan Nine Fingers
prasannakumarkk
Participant
Posts: 117
Joined: Wed Feb 06, 2013 9:24 am
Location: Chennai,TN, India

Post by prasannakumarkk »

1) Have you specified auto partitioning in the transformer stage?
2) Did you check the key option for the primary key column in the connector stage?

You have mentioned there are duplicates in the source. If there are duplicates, which records should be inserted and which should be updated?
Thanks,
Prasanna
akarsh
Participant
Posts: 51
Joined: Fri May 09, 2008 4:03 am
Location: Pune

Post by akarsh »

No, it's not a primary key. It comes in the file itself and may contain duplicates.
That's the reason I want to use update then insert.
Thanks,
Akarsh Kapoor
akarsh
Participant
Posts: 51
Joined: Fri May 09, 2008 4:03 am
Location: Pune

Post by akarsh »

Yes, in the transformer it's auto and in the connector it's hash partitioning.

There is no specific requirement as to which of the duplicates needs to be updated.
Thanks,
Akarsh Kapoor
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Hey, you're the one who said you had a "single primary key (numeric) column in table". :wink:

So these duplicates... do you have the duplicates in the file itself, or is there just a single instance of the key value in the file which may or may not exist in the table? The answer changes how you need to handle this.
-craig

"You can never have too many knives" -- Logan Nine Fingers
chandra.shekhar@tcs.com
Premium Member
Posts: 353
Joined: Mon Jan 17, 2011 5:03 am
Location: Mumbai, India

Post by chandra.shekhar@tcs.com »

@akarsh
We have also experienced the same issue while updating data on our DB2 tables. Try one of these solutions:
1) Use Oracle Connector partitioning instead of Hash partitioning on the target.
OR
2) Reduce the "Array Size" to 1, keeping the Record Count at a higher value, e.g. 2000 (see the sketch after this list).
OR
3) Run your job on a single node.
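
To illustrate why option 2 can help: if two copies of the same key land in one array, both can decide to take the insert path before either has hit the table. The sketch below is a simplified simulation in Python, not the DB2 or Oracle connector's actual internals.

Code:

def load(rows, array_size):
    table = {}                        # stand-in for the target table
    for i in range(0, len(rows), array_size):
        batch = rows[i:i + array_size]
        # Every row in the batch decides update-vs-insert against the
        # table as it looked before the batch was applied.
        actions = ["update" if pk in table else "insert" for pk, _ in batch]
        for (pk, val), action in zip(batch, actions):
            if action == "insert" and pk in table:
                raise ValueError("PK violation on key %s" % pk)
            table[pk] = val
    return table

rows = [(1, "a"), (1, "b")]           # duplicate key in the source file
print(load(rows, array_size=1))       # {1: 'b'} - second copy becomes an update
try:
    load(rows, array_size=2)          # both copies choose "insert"
except ValueError as exc:
    print(exc)                        # PK violation on key 1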
Thanx and Regards,
ETL User
akarsh
Participant
Posts: 51
Joined: Fri May 09, 2008 4:03 am
Location: Pune

Post by akarsh »

chulett wrote:So these duplicates... do you have the duplicates in the file itself, or is there just a single instance of the key value in the file which may or may not exist in the table? The answer changes how you need to handle this.
We have duplicate key values in the file itself.
Thanks,
Akarsh Kapoor
akarsh
Participant
Posts: 51
Joined: Fri May 09, 2008 4:03 am
Location: Pune

Post by akarsh »

Nothing worked out.

Finally I had to split the job and create separate update and insert jobs :(
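
(In case it helps anyone later, the split approach amounts to collapsing the duplicates and then routing each key to an insert-only or update-only path depending on whether it already exists in the target. A rough Python sketch with made-up names, not the actual job design:)

Code:

existing_keys = {1, 2, 3}             # e.g. keys already present in the target table

source = [
    {"pk": 1, "val": "x"},
    {"pk": 4, "val": "y"},
    {"pk": 1, "val": "z"},            # duplicate key, last one wins
]

deduped = {}
for row in source:                    # later rows overwrite earlier ones
    deduped[row["pk"]] = row

updates = [r for r in deduped.values() if r["pk"] in existing_keys]
inserts = [r for r in deduped.values() if r["pk"] not in existing_keys]

print(updates)   # feed the update-only job: [{'pk': 1, 'val': 'z'}]
print(inserts)   # feed the insert-only job: [{'pk': 4, 'val': 'y'}]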

Thanks all for your suggestions.
Thanks,
Akarsh Kapoor
chandra.shekhar@tcs.com
Premium Member
Posts: 353
Joined: Mon Jan 17, 2011 5:03 am
Location: Mumbai, India

Post by chandra.shekhar@tcs.com »

@akarsh
Did you try any of my options?
Thanx and Regards,
ETL User
akarsh
Participant
Posts: 51
Joined: Fri May 09, 2008 4:03 am
Location: Pune

Post by akarsh »

Yeah... but they didn't work as expected :cry:
Thanks,
Akarsh Kapoor