Duplicate data - Un-usable index

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

dsuser_cai
Premium Member
Posts: 151
Joined: Fri Feb 13, 2009 4:19 pm

Duplicate data - Un-usable index

Post by dsuser_cai »

Hi

I posted this question yesterday, and now I have some more questions about the same topic.

Source: Oracle
Target: Oracle

I have a primary key constraint and a not-null constraint on the target table, and several hash lookups in the DataStage job. The target table already contained some data, which is used for the lookups.

I designed the job so that it updates records that are already present and inserts new ones, and that is what it has been doing so far. Recently I made some changes to the staging job (which creates the staging file, a text file); no other changes were made. How could DataStage have introduced duplicate records?
This has caused the indexes on those tables to become 'UNUSABLE'. Can somebody explain to me how this happens?
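[For context, a hedged sketch not from the original post: when rows that violate a unique constraint reach an Oracle table through a direct-path load, Oracle can leave the unique index marked UNUSABLE rather than rejecting the rows one by one. You can check index status from the data dictionary and rebuild once the duplicates are cleaned up; the index name below is hypothetical.]

```sql
-- List any indexes in the current schema that Oracle has marked unusable
SELECT index_name, table_name, status
  FROM user_indexes
 WHERE status = 'UNUSABLE';

-- After removing the duplicate rows, rebuild the affected index
-- (TARGET_PK is a hypothetical index name for the target table's primary key)
ALTER INDEX target_pk REBUILD;
```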
Thanks
Karthick
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Hmmm... I don't see more questions, I see the same questions we already went over in your other thread on this topic. And I answered them over there. :?

Two points:

1) You're still posting in the wrong forum.
2) More questions means adding them to your existing topic; please don't start a new one to do that.
-craig

"You can never have too many knives" -- Logan Nine Fingers
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Moderator, please move to the Server forum.
-craig

"You can never have too many knives" -- Logan Nine Fingers