
Job Hangs when using hash file as a Lookup

Posted: Fri Feb 12, 2010 3:32 am
by mohdtausifsh
Hi All,

I have a job which hangs when using a hash file as a lookup.

My job design is like this:

             HASHFILE1
                 |
                 |
DRS-----------TRANS1------------TRANS2-----------SEQFILE
                 |                |
                 |                |
                AGG-----------HASHFILE2


The first HASHFILE1 is looked up by TRANS1 and the second HASHFILE2 is looked up with HASHFILE2. The job works fine as long as I don't use HASHFILE2 as a lookup; there is no error message thrown, but upon running, the job goes into a waiting state, i.e. it hangs.


Please advise.

Posted: Fri Feb 12, 2010 6:38 am
by ArndW
Remove the HASHFILE1 lookup in a copy of the job; does that still hang? Have you turned on 'Preload file to memory' in the HASHFILE2 stage, and is it a large reference file?

Posted: Fri Feb 12, 2010 7:13 am
by mohdtausifsh
If I remove HASHFILE1, I am still facing the same issue.

"preload file to memory " ? where do i need to set that,the size of refrence file is not that big.

Posted: Fri Feb 12, 2010 7:21 am
by mohdtausifsh
The metadata and number of records in both the hash files are the same; the file names and data are different, though. I don't think this is the problem, because the data gets loaded into HASHFILE1, while the job hangs while loading into HASHFILE2.

Posted: Fri Feb 12, 2010 7:50 am
by chulett
People, please wrap your ASCII art with 'code' tags to preserve all of the lovely work you've done lining crap up. That, and liberal use of the 'Preview' option.

That being said, I have no idea what you mean by this:

The first HASHFILE1 is looked up by TRANS1 and the second HASHFILE2 is looked up with HASHFILE2. The job works fine as long as I don't use HASHFILE2 as a lookup; there is no error message thrown, but upon running, the job goes into a waiting state, i.e. it hangs.

So, HASHFILE2 is 'looked up' by TRANS2, I assume. I don't think that's a valid job design, and I'm somewhat surprised it will actually compile. HASH2 has to be completely populated before it can be used as a lookup by TRANS2, and the job isn't just going to stop there and wait for the Aggregator to do its work.

Build HASH2 from a separate source.
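
For what it's worth, here is a minimal sketch of the ordering problem and of what the "separate source" fix amounts to. It is plain Python with made-up names, not DataStage code, purely to illustrate why the aggregated lookup has to exist in full before the main stream touches its first row.

# Minimal illustration only -- plain Python, not DataStage; all names are made up.

rows = [
    {"key": 1, "val": 10},
    {"key": 2, "val": 20},
    {"key": 1, "val": 30},
]

def aggregate(stream):
    """Like the AGG stage: it must consume the WHOLE stream before it can emit."""
    totals = {}
    for r in stream:
        totals[r["key"]] = totals.get(r["key"], 0) + r["val"]
    return totals  # only at this point is the 'HASHFILE2' lookup complete

def main_stream(stream, lookup1, lookup2):
    """Like TRANS1 -> TRANS2: per-row reference lookups against both files."""
    for r in stream:
        r = {**r, "desc": lookup1.get(r["key"])}      # TRANS1 vs HASHFILE1
        yield {**r, "total": lookup2.get(r["key"])}   # TRANS2 vs HASHFILE2

lookup1 = {1: "A", 2: "B"}        # HASHFILE1, built beforehand -- no conflict

# Fix: build HASHFILE2 in a separate pass (i.e. a separate job) over the source,
# so it is finished before the main stream starts.
lookup2 = aggregate(iter(rows))

# Main job: every row can now be looked up immediately; nothing has to wait.
for out in main_stream(iter(rows), lookup1, lookup2):
    print(out)

# In the original design both things happen in ONE pass: TRANS2 needs lookup2
# finished before the first row arrives, but lookup2 can only be finished after
# the same stream of rows has all gone through -- so the job just sits and waits.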