Issue with ODBC Stage

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

monaz
Participant
Posts: 98
Joined: Sat Aug 23, 2008 3:14 am

Issue with ODBC Stage

Post by monaz »

Hi All,

Can anyone please guide me on the below issue.

I have a server job (job1) that inserts records into Sybase via the Sybase stage; loading 1 million records took around an hour and completed without any warnings.

Similarly, I designed a parallel job (job2) to insert the same 1 million records; it took around half an hour and completed with the warning:

Attempt to insert duplicate key row in object 'od_trans_map_output_new' with unique index 'od_trans_map_output_idx'


Please let me know why the ODBC stage gives this warning when the Sybase stage does not.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

All things being equal, I'd venture to guess it was the data and not the stage.
-craig

"You can never have too many knives" -- Logan Nine Fingers
miwinter
Participant
Posts: 396
Joined: Thu Jun 22, 2006 7:00 am
Location: England, UK

Post by miwinter »

Definitely with Craig on that one... what changed between the two jobs data-wise? Did you definitely use the same input data? Moreover, was the data you loaded with job1 actually removed from the target before re-running the data in with job2...?

Apologies if I'm asking the obvious, but best to start there in my experience. You need to verify the unique key index identified in the warning from job2 (i.e. the key members) against the data you are loading in with that job and the data already in existence.
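To make that check concrete, a query along these lines would list any key values that already violate the unique index. This is only a sketch: the column name trans_id is a placeholder for the actual members of the od_trans_map_output_idx index, and the isql flags follow the script posted later in the thread.

```shell
# Hypothetical duplicate-key check. trans_id is a placeholder -- substitute
# the actual key columns of the od_trans_map_output_idx index.
find_duplicate_keys() {
    isql -U"$1" -P"$2" -S"$3" <<EOQ
use database
go
select trans_id, count(*)
from database..od_trans_map_output_new
group by trans_id
having count(*) > 1
go
EOQ
}

# Example: find_duplicate_keys user password server
```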

Place your bets...
Mark Winter
<i>Nothing appeases a troubled mind more than <b>good</b> music</i>
monaz
Participant
Posts: 98
Joined: Sat Aug 23, 2008 3:14 am

Post by monaz »

miwinter wrote:Definitely with Craig on that one... what changed between the two jobs data-wise? Did you definitely use the same input data? Moreover, was the data you loaded with job1 actually removed from the target before re-running the data in with job2...?

Apologies if I'm asking the obvious, but best to start there in my experience. You need to verify the unique key index identified in the warning from job2 (i.e. the key members) against the data you are loading in with that job and the data already in existence.

Place your bets...
Thanks a lot.

I am using the same data in both jobs. In job1 (the Sybase stage in the server job) I use the truncate-and-insert option for loading the data.

In job2, with the ODBC stage, I use the insert-only option and a before-job subroutine script to truncate the table.

Here is the script:

#! /usr/bin/ksh
# Load into od_trans_map_output table for HALO Transaction Mapping

isql -Uuser -Ppassword -Sserver <<-EOI1
use database
go
truncate table database..od_trans_map_output_new
go


Please also check the before-job subroutine code above; the job log does not show whether the truncate command was executed or not.
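For what it's worth, as posted the script never closes its here-document (there is no terminating EOI1 line) and never checks isql's exit status, so a failed truncate would be silent in the job log. A more defensive sketch, under the assumption that the flags and table name above are correct (the log messages are illustrative):

```shell
#!/usr/bin/ksh
# Truncate the target table and report success/failure explicitly,
# so the DataStage job log shows whether the command actually ran.
truncate_target() {
    isql -Uuser -Ppassword -Sserver <<EOI1
use database
go
truncate table database..od_trans_map_output_new
go
EOI1
    rc=$?
    if [ $rc -ne 0 ]; then
        echo "truncate of od_trans_map_output_new FAILED (isql rc=$rc)" >&2
        return $rc
    fi
    echo "truncate of od_trans_map_output_new completed"
}

# In the before-job subroutine, propagate failure so the job aborts
# instead of loading on top of stale rows:
#   truncate_target || exit $?
```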
miwinter
Participant
Posts: 396
Joined: Thu Jun 22, 2006 7:00 am
Location: England, UK

Post by miwinter »

OK, I'm not familiar with Sybase per se, but I'd bank on that truncate command not working.

Could you remove the before routine and, before running Job2 again, run this manually instead and then confirm that the data has gone from the table? Then, run Job2 again to see the behaviour.

If it works this way, you've at least proven the failing is in the actions of the before routine, namely that the table isn't being truncated prior to load, which would certainly explain the issue you first reported.
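To confirm the manual truncate took effect, a simple row count before re-running Job2 would settle it. A sketch reusing the isql invocation from the script above (credentials and server are placeholders, as in the original):

```shell
# Count rows in the target table; 0 after the truncate means it worked.
count_target_rows() {
    isql -Uuser -Ppassword -Sserver <<EOQ
use database
go
select count(*) from database..od_trans_map_output_new
go
EOQ
}

# Example: count_target_rows
```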
Mark Winter
<i>Nothing appeases a troubled mind more than <b>good</b> music</i>