error in hashfile

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

nag0143
Premium Member
Posts: 159
Joined: Fri Nov 14, 2003 1:05 am

error in hashfile

Post by nag0143 »

hi all,

I have a table in Oracle 8i with 7,987,234 records and 98 columns. I tried to look it up against another table with 9,654,344 records, so I loaded that table into a hashed file and then did the lookup. I kicked off the load last night, but this morning I saw that the job had aborted with this error:

"sddcompare04..Hashed_File_10.D0106Link1: ds_uvput() - Write failed for record id '03294684
2002-07-23 00:00:00
BU
BU003'"

I can't figure out what this error means.

Also, can anyone guess approximately how long this job should take?

Thanks in advance.

nag
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Several questions:

Any chance you ran out of disk space during the load? Was this the only error or were there a series of them?

When building your hash file lookup, are you loading the bare minimum number of columns needed for the lookup or all 98? Do you really need all of the records or just an active subset? Did you use the Hash File Calculator (HFC) to precreate the hash or did you just let the job autocreate it with all the defaults?

As to how long it will take... much longer than it should if the hash isn't properly precreated. :wink: In the end, I think you'll have to tell us.
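
For a rough feel of the sizing involved: assuming (purely for illustration) that a trimmed-down lookup row averages around 200 bytes, 9,654,344 rows works out to roughly 9,654,344 x 200 bytes, or about 1.9 GB before hashed file overhead. Loading all 98 columns instead of just the key plus the columns the lookup returns multiplies that figure accordingly.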
-craig

"You can never have too many knives" -- Logan Nine Fingers
raju_chvr
Premium Member
Posts: 165
Joined: Sat Sep 27, 2003 9:19 am
Location: USA

Re: error in hashfile

Post by raju_chvr »

Any guess at the job time, or the time to write into a hashed file, would be a wild guess.

Did you calculate the approximate size of the file? You are writing almost 10 million rows into the hashed file.

Can you filter the rows? If you have any constraints in the job, check whether you can move them into the ORAOCI 8i stage instead.
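
For illustration only, with made-up table and column names, a Transformer constraint could be pushed down into the OCI stage's user-defined SQL like this:

    SELECT cust_id,
           cust_name,
           status_code
    FROM   reference_table           -- hypothetical table
    WHERE  status_code = 'ACTIVE'    -- the constraint, applied by Oracle instead of the job

That way Oracle throws away the unwanted rows before they ever reach the hashed file.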

The error has something to do with that particular record. Check that record in your source and look for any data anomalies.

Maybe some other people can throw more light on this issue. :?:
Last edited by raju_chvr on Thu Jan 29, 2004 8:58 am, edited 1 time in total.
nag0143
Premium Member
Posts: 159
Joined: Fri Nov 14, 2003 1:05 am

Post by nag0143 »

I am using the Oracle 8i plug-in, and I am looking into the records.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Another possibility is that the hashed file has reached its 2GB limit and you have not created it so that it can grow beyond that (that is, with 64-bit internal pointers).
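
As a sketch only (the file name and modulus are placeholders, and keywords can vary by engine release, so check the CREATE.FILE and RESIZE documentation for your version), a 64-bit dynamic hashed file can be pre-created from the Administrator command window or uvsh:

    CREATE.FILE MyLookupHash DYNAMIC MINIMUM.MODULUS 500000 64BIT

or an existing hashed file converted in place:

    RESIZE MyLookupHash * * * 64BIT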

When loading the hashed file, select only the columns that you will actually need in the job. This will mean a much smaller hashed file than loading the entire contents of the table. There will still be 9,654,344 rows (assuming no duplicates), but with much smaller rows.

Is it possible to have Oracle perform the join as part of the extraction (SELECT) process? In this case, you would not need the hashed file at all.
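
For illustration only (table and column names are made up), the user-defined SQL in the Oracle stage could perform the join itself:

    SELECT src.key_col,
           src.col_a,
           ref.col_b
    FROM   source_table src,          -- hypothetical driving table
           lookup_table  ref          -- hypothetical reference table
    WHERE  src.key_col = ref.key_col

With the join done in Oracle, the rows arrive already enriched and the hashed file lookup disappears from the job altogether.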
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.