Unusable index while loading tables

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

dsuser_cai
Premium Member
Posts: 151
Joined: Fri Feb 13, 2009 4:19 pm

Unusable index while loading tables

Post by dsuser_cai »

Hi, I'm loading into Oracle tables. I'm using DS 7.5.2.

The job design has a source text file, lookup hashed files and a target ODBC stage. While loading data into the table I got the following error:

ORA-01502: index 'CAIODS.SYS_C0091669' or partition of such index is in unusable state

So I checked with the DBA and he said that I had duplicate records in the table. He got the following error:

SQL> /
alter index CAIODS.SYS_C0091669 rebuild online
*
ERROR at line 1:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found

The issue was fixed by dropping the index, deleting the duplicates and rebuilding the index, after which I loaded the tables properly.
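For reference, the cleanup probably looked something like the sketch below. Only the index name comes from the ORA-01502 message above; the table and key column names are assumptions for illustration.

```sql
-- Find the duplicate keys (table/column names are assumed)
SELECT key_col, COUNT(*)
  FROM caiods.target_table
 GROUP BY key_col
HAVING COUNT(*) > 1;

-- Delete the extra copies, keeping one row per key value
DELETE FROM caiods.target_table t
 WHERE t.ROWID > (SELECT MIN(t2.ROWID)
                    FROM caiods.target_table t2
                   WHERE t2.key_col = t.key_col);

-- Rebuild the unique index named in the ORA-01502 error
ALTER INDEX caiods.SYS_C0091669 REBUILD;
```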

Can somebody explain why this happens even though I had a primary key constraint at the table level? I saw many duplicates. How did DataStage load duplicate data into the table?
Thanks
Karthick
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Re: Unusable index while loading tables

Post by chulett »

dsuser_cai wrote:Can somebody explain why this happens even though I had a primary key constraint at the table level? I saw many duplicates. How did DataStage load duplicate data into the table?
There isn't any way you could create this situation with that job design. Worst case, your job will fail with duplicate key errors if your hashed file check isn't proper and the table wasn't already horked up. Someone or something else created this problem before your job ran and you were a victim of it.

Typically this happens as a result of a direct path bulk load where duplicates were not handled properly and because of that the PK constraint could not be re-enabled and thus the index that supports it was left in an UNUSABLE status.
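As an aside (not from the original post): an index left behind in that state shows up in the data dictionary, so a query like this one would reveal it.

```sql
-- List any indexes a direct-path load has left in UNUSABLE status
SELECT owner, index_name, status
  FROM dba_indexes
 WHERE status = 'UNUSABLE';
```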
-craig

"You can never have too many knives" -- Logan Nine Fingers
dsuser_cai
Premium Member
Posts: 151
Joined: Fri Feb 13, 2009 4:19 pm

Post by dsuser_cai »

Thank you chulett for the explanation.

I remember now: I was testing the job, and after loading the data the first time I didn't truncate the tables before testing again (I was testing with the same data). I believe that's why it loaded the duplicates; when I checked the number of duplicates, it was exactly twice the source. Now I understand the problem.

Could you please tell me:

1) Does DataStage drop the constraints while loading data into tables, or does it drop the indexes and rebuild them afterwards?
Thanks
Karthick
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Not in a Server job, unless you specifically add those steps in the before- and after-SQL tabs. Even the Server ORABULK stage doesn't include an option for that. That happens either from command-line sqlldr or when using the Enterprise stage in a PX job.
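If you did want to take that approach, the before- and after-SQL entries would be something along these lines (the table and constraint names here are made up for illustration):

```sql
-- Before-SQL (names assumed): disable the PK so the load runs unchecked
ALTER TABLE caiods.target_table DISABLE CONSTRAINT target_table_pk;

-- After-SQL: re-enable it; this fails with ORA-02437 if the load
-- introduced duplicate keys, leaving the supporting index unusable
ALTER TABLE caiods.target_table ENABLE CONSTRAINT target_table_pk;
```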

And no, simply truncating the table would not allow you to load duplicate values into a PK. However, if the index / constraint was already blown when you did so, then yes, it would have allowed it regardless of whether you truncated the table first or not.
-craig

"You can never have too many knives" -- Logan Nine Fingers
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Moderator: please move to Server forum
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.