Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.
Moderators: chulett , rschirm , roy
rprasanna28
Participant
Posts: 10 Joined: Fri May 12, 2006 12:31 am
Post
by rprasanna28 » Wed Jun 13, 2007 11:46 pm
I have designed a lookup job. When I try to execute it, I get the following error message:
"LKP_AL_YES_COEP,0: Could not map table file "/dsetlsoft/datastage/Ascential/DataStage/Datasets/lookuptable.20070613.ujo03nc (size 1733722784 bytes)": Not enough space
Error finalizing / saving table /tmp/dynLUT6390006fc4e533"
We tried executing the job again after increasing the space available to the temp directory, but we still get the same error.
Could anyone suggest how to solve this?
Regards
Prasanna Lakshmi R
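Before redesigning the job, it is worth confirming how much room the Datasets directory and the temp directory actually have, since the file being mapped is about 1.7 GB. A minimal check (Python sketch; the two paths are taken from the error message above and may differ on your install):

```python
import shutil

def free_gb(path):
    """Free space on the filesystem holding `path`, in GB."""
    return shutil.disk_usage(path).free / 1024 ** 3

# Paths taken from the error message above -- adjust for your install.
for p in ("/dsetlsoft/datastage/Ascential/DataStage/Datasets", "/tmp"):
    try:
        print(f"{p}: {free_gb(p):.1f} GB free")
    except FileNotFoundError:
        print(f"{p}: path not found on this machine")
```

Note that "Could not map table file" may also point at the process running out of address space when mapping a file that large, not only at a full disk, so freeing temp space alone will not necessarily help.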
balajisr
Charter Member
Posts: 785 Joined: Thu Jul 28, 2005 8:58 am
Post
by balajisr » Thu Jun 14, 2007 12:05 am
What is the size of your reference data?
Use join instead of lookup if your reference data is huge.
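The reason a Join scales where a Lookup does not: a normal Lookup stage materializes the entire reference table in memory (or in a memory-mapped table file, as in the error above), while a Join sorts both inputs and merges them row by row. A toy Python sketch of the two access patterns (illustrative only, not DataStage code):

```python
def hash_lookup(stream, reference):
    """Lookup-style: the WHOLE reference is materialized up front."""
    table = dict(reference)                      # all reference rows in memory
    return [(k, v, table.get(k)) for k, v in stream]

def merge_join(stream, reference):
    """Join-style: both inputs sorted, then merged; only the current
    row of each input needs to be held at any one time."""
    out = []
    ref_iter = iter(sorted(reference))
    ref = next(ref_iter, None)
    for k, v in sorted(stream):
        while ref is not None and ref[0] < k:    # advance past smaller keys
            ref = next(ref_iter, None)
        out.append((k, v, ref[1] if ref is not None and ref[0] == k else None))
    return out
```

With 33 lakh reference rows, the `dict` in `hash_lookup` is what blows past the available space; `merge_join` trades that for the cost of sorting both inputs, which the parallel framework can spill to scratch disk.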
rprasanna28
Participant
Posts: 10 Joined: Fri May 12, 2006 12:31 am
Post
by rprasanna28 » Thu Jun 14, 2007 12:26 am
I have around 33 lakh records in my reference table.
Hemant_Kulkarni
Premium Member
Posts: 50 Joined: Tue Jan 02, 2007 1:40 am
Post
by Hemant_Kulkarni » Thu Jun 14, 2007 3:01 am
If your input data is relatively small, use a sparse lookup instead.
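A sparse lookup inverts the tradeoff: instead of pulling the whole reference table into memory, it sends one query to the database per input row. That is cheap when the input stream is small and expensive when it is large (one round trip per row). A toy sketch of the access pattern (illustrative only; `db_query` is a hypothetical stand-in for the SQL DataStage would fire):

```python
REFERENCE_DB = {1: "X", 3: "Z"}   # stand-in for the reference table in the DB

def db_query(key):
    """Stand-in for 'SELECT val FROM ref WHERE key = ?'."""
    return REFERENCE_DB.get(key)

def sparse_lookup(stream):
    # Memory use is O(1) in the reference size: nothing is cached locally,
    # but every input row costs a database round trip.
    return [(k, v, db_query(k)) for k, v in stream]
```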
rameshrr3
Premium Member
Posts: 609 Joined: Mon May 10, 2004 3:32 am
Location: BRENTWOOD, TN
Post
by rameshrr3 » Thu Jun 14, 2007 7:06 am
For those not in the know: 33 lakh = 3.3 million.
DSguru2B
Charter Member
Posts: 6854 Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX
Post
by DSguru2B » Thu Jun 14, 2007 7:50 am
If your source is equally huge, use a Join stage to join the records. If your source is small, use a sparse lookup.
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.