Hi,
I am migrating a job from version 8.1 to 8.7. The job loads data into a Teradata table that has a UPI on the key columns, and the source data is expected to contain duplicate rows. In 8.1 a Multiload stage running in sequential mode handled this situation. In 8.7, however, the same job using the Teradata Connector aborts with a constraint violation. I have set the connector to sequential execution mode, but it still aborts.
Loading duplicate rows using teradata connector
- Participant
- Posts: 50
- Joined: Tue Jan 19, 2010 4:14 am
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
Running in Sequential mode does not eliminate duplicates - it merely processes the duplicates in one node. Eliminating duplicates must be coded for, for example using a Remove Duplicates stage.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
@ Ray - I agree with your point about removing duplicates beforehand, but since the job runs fine in 8.1 with Multiload, we have been told not to make any changes other than switching to the connector.
@ prasson - Table doesn't have duplicates, they are trying to retain the last update as per requirement.
@ ramesh - you can change the execution mode to sequential in the connector: when opening properties, click the connector icon itself instead of the link (which opens by default).
:(
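For anyone landing on this thread later: the dedup-before-load that Ray describes, combined with the "retain the last update" requirement, can be sketched outside DataStage as "keep the last row seen for each key." This is only an illustration of the logic a Remove Duplicates stage (set to retain the last duplicate) would apply; the column names below are made up for the example, not from the actual job.

```python
# Sketch: drop duplicates on the UPI key, keeping the LAST occurrence,
# so the load into a table with a unique primary index cannot hit a
# constraint violation. Column names (emp_id, value) are hypothetical.

rows = [
    {"emp_id": 1, "value": "a"},
    {"emp_id": 2, "value": "b"},
    {"emp_id": 1, "value": "c"},  # later update for key 1
]

def dedupe_keep_last(rows, key):
    """Return rows with duplicates on `key` removed, keeping the last one."""
    latest = {}
    for row in rows:
        latest[row[key]] = row  # later rows overwrite earlier ones
    return list(latest.values())

deduped = dedupe_keep_last(rows, "emp_id")
```

Note this assumes the input is already ordered so that "last" means "most recent update"; if not, the data would need to be sorted on an update timestamp first, exactly as a Sort stage ahead of Remove Duplicates would do.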