
error in hashfile

Posted: Thu Jan 29, 2004 8:32 am
by nag0143
hi all,

I have a table in Oracle 8i with 7,987,234 records and 98 columns. I tried to look it up against another table with 9,654,344 records, so I sent that second table to a hashed file and then did the lookup. I kicked off the load last night, but this morning I saw that my job had aborted with this error:

"sddcompare04..Hashed_File_10.D0106Link1: ds_uvput() - Write failed for record id '03294684
2002-07-23 00:00:00
BU
BU003'"

I can't figure out what this error means.

Also, can anyone guess approximately how long this job will take?

Thanks in advance.

nag

Posted: Thu Jan 29, 2004 8:46 am
by chulett
Several questions:

Any chance you ran out of disk space during the load? Was this the only error or were there a series of them?

When building your hash file lookup, are you loading the bare minimum number of columns needed for the lookup or all 98? Do you really need all of the records or just an active subset? Did you use the Hash File Calculator (HFC) to precreate the hash or did you just let the job autocreate it with all the defaults?
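
For what it's worth, precreating the hashed file just means issuing the file-creation command yourself (from the Administrator command line or a TCL prompt) before the job runs, instead of letting the stage build a default dynamic file. A rough sketch only, with a made-up file name and a made-up modulus; take the real type and modulus figures from HFC, and if your engine version doesn't accept the 64BIT keyword on CREATE.FILE you can create the file first and RESIZE it to 64-bit afterwards:

   CREATE.FILE MyLookupHash DYNAMIC MINIMUM.MODULUS 200000 64BIT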

As to how long it will take... much longer than it should if the hash isn't properly precreated. :wink: In the end, I think you'll have to tell us.

Re: error in hashfile

Posted: Thu Jan 29, 2004 8:47 am
by raju_chvr
Any estimate of the job time, or of the time to write into the hashed file, would be a wild guess.

Did you calculate the approximate size of the file? You are writing almost 10 million rows into the hashed file.

Can you filter the rows? If you have any constraints in the job, check whether you can move them into the ORAOCI 8i stage.

The error has something to do with that particular record. Check that record in your source and look for any data anomalies.

Maybe some other people can throw some more light on this issue. :?:

Posted: Thu Jan 29, 2004 8:51 am
by nag0143
I am using the Oracle 8i plug-in and I am looking into the records.

Posted: Thu Jan 29, 2004 3:18 pm
by ray.wurlod
Another possibility is that the hashed file has reached its 2GB limit, and you have not created it so that it can grow beyond that (that is, with 64-bit internal pointers).
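
If that turns out to be the case, the file can be converted to 64-bit addressing in place, provided it was created in the project (account) so that it has a VOC pointer. A sketch, using a placeholder name; substitute the actual hashed file name, and run the commands from the Administrator command line or a TCL prompt:

   ANALYZE.FILE MyHashedFile
   RESIZE MyHashedFile * * * 64BIT

The ANALYZE.FILE is just to confirm the file's type and shape before converting; the three asterisks tell RESIZE to leave type, modulus and separation unchanged and only switch the internal addressing to 64-bit.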

When loading the hashed file, select only the columns that you will actually need in the job. This will mean a much smaller hashed file than loading the entire contents of the table. There will still be 9,654,344 rows (assuming no duplicates), but much smaller row sizes.
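
As an illustration, if the lookup only needs to return a status code keyed on an account number and a date, the SELECT feeding the hashed file could be cut down to something like this (the table and column names here are invented, so substitute your own, and the WHERE clause is only needed if you can restrict the rows):

   SELECT acct_no, eff_date, status_cd
   FROM   lookup_table
   WHERE  status_cd IS NOT NULL

Three narrow columns hash into a far smaller file than 98 wide ones, and any filter that drops rows you will never look up helps as well.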

Is it possible to have Oracle perform the join as part of the extraction (SELECT) process? In this case, you would not need the hashed file at all.
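
A sketch of what that might look like as user-defined SQL in the source OCI stage, using the same invented table and column names (Oracle 8i does not support ANSI JOIN syntax, hence the old-style comma join):

   SELECT s.acct_no, s.eff_date, s.amount, l.status_cd
   FROM   source_table s, lookup_table l
   WHERE  l.acct_no  = s.acct_no
   AND    l.eff_date = s.eff_date

If you need to keep source rows that have no match, put Oracle's (+) outer join operator on the lookup table's columns, which matches the usual behaviour of a hashed-file lookup where unmatched rows still pass through.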