Hi all,
Can anyone please guide me on the issue below?
I have a server job (job1) that inserts records into Sybase using the Sybase stage; inserting 1 million records took around an hour and completed without any warnings.
I then designed a parallel job (job2) to insert the same 1 million records through an ODBC stage; it took around half an hour but completed with this warning:
Attempt to insert duplicate key row in object 'od_trans_map_output_new' with unique index 'od_trans_map_output_idx'
Please let me know how the ODBC stage can give this warning when the Sybase stage does not.
Issue with ODBC Stage
Definitely with Craig on that one... what changed between the two jobs data-wise? Did you definitely use the same input data? Moreover, was the data you loaded with job1 actually removed from the target before re-running the data in with job2...?
Apologies if I'm asking the obvious, but best to start there in my experience. You need to verify the unique key index identified in the warning from job2 (i.e. the key members) against the data you are loading in with that job and the data already in existence.
Place your bets...
Mark Winter
<i>Nothing appeases a troubled mind more than <b>good</b> music</i>
Thanks a lot!
miwinter wrote:Definitely with Craig on that one... what changed between the two jobs data-wise? Did you definitely use the same input data? Moreover, was the data you loaded with job1 actually removed from the target before re-running the data in with job2...?
Apologies if I'm asking the obvious, but best to start there in my experience. You need to verify the unique key index identified in the warning from job2 (i.e. the key members) against the data you are loading in with that job and the data already in existence.
Place your bets...
I am using the same data in both jobs. In job1 (the Sybase stage in the server job) I am using the Truncate and Insert option to load the data.
In job2, with the ODBC stage, I am using just the Insert Only option, plus a before-job subroutine that runs a script to truncate the table.
Here is the script:
#! /usr/bin/ksh
# Load into od_trans_map_output table for HALO Transaction Mapping
isql -Uuser -Ppassword -Sserver <<-EOI1
use database
go
truncate table database..od_trans_map_output_new
go
EOI1
Please also check the before-job subroutine code; the log does not show whether the above command was executed or not.
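One way to make the log show what actually happened is to capture the isql output and exit status explicitly. A minimal sketch, assuming the same table and connection details as the original script; the isql call is stubbed with cat here so the sketch runs without a live Sybase server, so replace the stub with the real isql -Uuser -Ppassword -Sserver invocation:

```shell
#!/usr/bin/ksh
# Sketch: capture the truncate output and exit status so the job log
# shows whether the command actually ran.
# Assumption: "cat" stands in for the real client; replace it with:
#   isql -Uuser -Ppassword -Sserver
ISQL="cat"
LOG=/tmp/od_trans_map_truncate.log

$ISQL <<-EOI1 > "$LOG" 2>&1
use database
go
truncate table database..od_trans_map_output_new
go
EOI1
rc=$?

# Echoing these makes them visible in the DataStage job log.
echo "truncate exit status: $rc"
cat "$LOG"
```

Note that the here-document must be closed by its terminator (EOI1) on a line of its own; without it, the shell never hands the truncate command to isql.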
OK, I'm not familiar with Sybase per se, but I'd bank on that truncate command not working.
Could you remove the before routine and, before running Job2 again, run this manually instead and then confirm that the data has gone from the table? Then, run Job2 again to see the behaviour.
If it works this way, you've at least proven the failing is in the actions of the before routine, namely that the table isn't being truncated prior to load, which would certainly explain the issue you first reported.
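To confirm the data has actually gone, a quick row count after the manual truncate is enough. A sketch, assuming the same connection details as the original script; as before, cat stubs in for isql so this runs offline, and the real invocation should replace it:

```shell
# Sketch: count the rows after the manual truncate to prove the table is empty.
# Assumption: "cat" stands in for: isql -Uuser -Ppassword -Sserver
ISQL="cat"

OUT=$($ISQL <<-EOI2
use database
go
select count(*) from database..od_trans_map_output_new
go
EOI2
)
echo "$OUT"
```

With the real isql, a count of 0 here proves the truncate worked before job2 starts loading.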
Mark Winter
<i>Nothing appeases a troubled mind more than <b>good</b> music</i>