
Problem while Looking up in Distributed Hashed File

Posted: Wed Mar 22, 2006 1:22 am
by Gokul
Hi All,

I have created a distributed hashed file EMPLOYEESALL with part files EmployeesATOL, EmployeesMTOV and EmployeesWTOZ.

Data is loaded into the hashed file EMPLOYEESALL. But when I use EMPLOYEESALL as a lookup with Pre-load file to memory set to Enabled or Enabled, Lock for Updates, it gives me this error:

DataStage Job 46 Phantom 2356
Program "DSD.UVOpen": Line 572, Exception raised in GCI subroutine:
Access violation.
Attempting to Cleanup after ABORT raised in stage LookupMulitpartHash..Hashed_File_2
DataStage Phantom Aborting with @ABORT.CODE = 3

When I set Pre-load file to memory to Disabled or Disabled, Lock for Updates, the lookup works fine.

Why does the distributed file behave like this?
If distributed hashed files work this way, won't it slow down the process, since we cannot pre-load the hashed file for the lookup?

Thanks,
Gokul

Posted: Wed Mar 22, 2006 1:33 am
by ArndW
Gokul,
this is a bug that should be reported to your support provider. Most likely the resolution will be that this functionality is explicitly disallowed, since it would be a lot of work to duplicate the distributed-file mechanism in a memory file. I would not hold my breath waiting for a fix.

Posted: Wed Mar 22, 2006 6:30 am
by chulett
I don't believe this is a bug but rather a known limitation of distributed hashed files. You cannot cache them - individual parts, yes - but the whole, no. Should be documented somewhere...
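To illustrate the idea, here is a minimal conceptual sketch in plain Python (not DataStage code; the part ranges, data and helper names are made up for illustration):

[code]
# Conceptual sketch only: a distributed hashed file is a logical umbrella
# over several part files, with each key routed to exactly one part by a
# partitioning rule. The names below (PART_RANGES, route_to_part, lookup)
# are hypothetical and purely illustrative.

PART_RANGES = {
    "EmployeesATOL": ("A", "L"),   # keys starting A..L
    "EmployeesMTOV": ("M", "V"),   # keys starting M..V
    "EmployeesWTOZ": ("W", "Z"),   # keys starting W..Z
}

# Each part file can be opened (and, in principle, cached) on its own.
part_data = {
    "EmployeesATOL": {"ADAMS": "Sales"},
    "EmployeesMTOV": {"MILLER": "Finance"},
    "EmployeesWTOZ": {"WATSON": "IT"},
}

def route_to_part(key: str) -> str:
    """Pick the part file whose key range covers the lookup key."""
    first = key[0].upper()
    for part, (lo, hi) in PART_RANGES.items():
        if lo <= first <= hi:
            return part
    raise KeyError(f"No part file covers key {key!r}")

def lookup(key: str) -> str:
    # Every lookup goes through the routing step; there is no single
    # physical file behind EMPLOYEESALL that could be pre-loaded as one
    # in-memory image, which is the gist of the limitation.
    return part_data[route_to_part(key)][key]

print(lookup("MILLER"))   # -> Finance
[/code]

In other words, pre-loading an individual part is just caching one ordinary hashed file, while pre-loading "the whole" would have to reproduce the routing layer in the cache as well.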

Posted: Wed Mar 22, 2006 7:36 am
by ArndW
Craig - I did a quick search to see if I could find this limitation documented, but couldn't find it anywhere. Once this is submitted, though, I'm sure it will show up in future documentation.

Posted: Wed Mar 22, 2006 8:12 am
by kcbland
64-bit and distributed hashed files have never supported caching, AFAIK.

Posted: Wed Mar 22, 2006 3:56 pm
by ray.wurlod
Distributed hashed files have never supported caching (definitely), and never will.