ODBC LOOK UP ERROR

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

Rubu
Premium Member
Posts: 82
Joined: Sun Feb 27, 2005 9:09 pm
Location: Bangalore

ODBC LOOK UP ERROR

Post by Rubu »

Hi

I have been using a look up through ODBC as follows.

Input -----> TX -----> Target (ODBC)
             ^
             |
        ODBC (tab1)

Key columns for tab1 are K1 (Decimal(13,0)) and K2 (Date). For smaller tables this works fine, but for tables with a large volume of data (more than 2-3 lakhs of rows) the lookup sometimes returns values and sometimes returns null, even though matching records are present in tab1.

If I create a hash file and look up against it instead of the ODBC stage, it works fine.

But I do not want to use a hash file, as tab1 is very big and the input is very small (say 50-60 rows) compared to it.

Any thought?

Thanks in Advance
rubu
    elavenil
    Premium Member
    Posts: 467
    Joined: Thu Jan 31, 2002 10:20 pm
    Location: Singapore

    Post by elavenil »

    A hash file is the fastest lookup, and the job works fine when one is used. Check the input data; it might contain invalid characters.

    Since your input is very small, you can load that input data into a database table, do the join in the database, and write the required output to a sequential file.

    You can choose a left, right, or full outer join, whichever fits your requirement.
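
    A minimal sketch of that approach, with hypothetical names throughout (input_stage for the landed 50-60 input rows, VAL for the column you need back from tab1):

    Code:

        -- Left outer join in the database: keep every input row and
        -- pick up the matching tab1 columns where they exist.
        SELECT i.K1,
               i.K2,
               t.VAL
        FROM   input_stage i
        LEFT OUTER JOIN tab1 t
               ON  t.K1 = i.K1
               AND t.K2 = i.K2

    Unmatched input rows come back with VAL as null, which also gives you a ready-made list of the keys the lookup is missing.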

    Regards
    Saravanan
    Sainath.Srinivasan
    Participant
    Posts: 3337
    Joined: Mon Jan 17, 2005 4:49 am
    Location: United Kingdom

    Post by Sainath.Srinivasan »

    You need to check the values being stored in the hash file. In particular, check the format of the date you are passing in.
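
    One way to rule the date format out (a sketch only: the source turns out later in the thread to be Informix, and TO_DATE with the directives below is Informix syntax, so adjust for your driver; VAL is a placeholder for whatever you return from tab1) is user-defined SQL in the reference ODBC stage that converts the bound parameter explicitly:

    Code:

        -- Compare the DATE key in one known format instead of relying
        -- on the driver's default conversion; the stream-side key must
        -- then be derived as a 'YYYY-MM-DD' string.
        SELECT K1, K2, VAL
        FROM   tab1
        WHERE  K1 = ?
        AND    K2 = TO_DATE(?, '%Y-%m-%d')
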
    ray.wurlod
    Participant
    Posts: 54607
    Joined: Wed Oct 23, 2002 10:52 pm
    Location: Sydney, Australia

    Post by ray.wurlod »

    3 lakhs of records is not "big" for a hashed file. Go for it. Only load the columns that you actually need, so that the run goes as fast as possible (without carrying redundant data, which you are - necessarily - doing in the ODBC table). Pre-allocate the space (use MINIMUM.MODULUS for a dynamic hashed file, or use a static hashed file) and enable write cache for fastest loading. Use read cache for fastest lookup.
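
    For example, from the project's TCL prompt in the Administrator command line (a sketch: the file name is a placeholder, and you would size the modulus from your own row count and average record length):

    Code:

        CREATE.FILE Tab1Lookup DYNAMIC MINIMUM.MODULUS 6007

    The same minimum modulus setting is also exposed in the Hashed File stage's create-file options.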

    For the non-Indian posters, 3 lakhs is 300,000 (written as 3,00,000 in Indian nomenclature).
    IBM Software Services Group
    Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
    chulett
    Charter Member
    Posts: 43085
    Joined: Tue Nov 12, 2002 4:34 pm
    Location: Denver, CO

    Re: ODBC LOOK UP ERROR

    Post by chulett »

    Rubu wrote: But I do not want to use a hash file, as tab1 is very big and the input is very small (say 50-60 rows) compared to it.
    Is there really a need to load all of tab1 into the hash? As Ray noted, 3 lakhs is not all that big. However, you are correct that you don't need all of it in the hash, only the subset of records that match your input data. So...

    Any way you can constrain your hash build to that? Sometimes all it takes is adding a join to the staged data in the query that populates the hash.
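
    A sketch of that, with hypothetical names (staged_input for wherever your 50-100 input rows are staged, VAL for the columns you need):

    Code:

        -- Load the hash from only the tab1 rows whose keys appear
        -- in the staged input, rather than from all of tab1.
        SELECT t.K1, t.K2, t.VAL
        FROM   tab1 t
        INNER JOIN staged_input s
               ON  s.K1 = t.K1
               AND s.K2 = t.K2
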
    -craig

    "You can never have too many knives" -- Logan Nine Fingers
    Rubu
    Premium Member
    Posts: 82
    Joined: Sun Feb 27, 2005 9:09 pm
    Location: Bangalore

    Re: ODBC LOOK UP ERROR

    Post by Rubu »

    Hi

    I agree that 300,000 is not a big number, but it is the smallest in the bigger table group; the bulkiest is around 1 million. And that is much bigger than the number of rows to be processed, which is 50-100 on average.

    I have tried all the combinations of formats for the DATE column, but it still gives the same result. Interestingly, the lookup fails for a few records and succeeds for a few others.

    Does it have anything to do with DataStage and Informix compatibility?

    Rubu
    Sainath.Srinivasan
    Participant
    Posts: 3337
    Joined: Mon Jan 17, 2005 4:49 am
    Location: United Kingdom

    Post by Sainath.Srinivasan »

    You can write the rejected rows to a separate file and manually verify their occurrence in the hash file.

    Also check the constraint logic used to reject the rows.
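
    For example, every reference link in a Server job Transformer exposes a NOTFOUND link variable, so a constraint like the one below (LkpTab1 is a placeholder for your reference link name) on a second output link will route the unmatched rows to a sequential file:

    Code:

        LkpTab1.NOTFOUND

    Dumping those rows lets you compare the exact key values that failed against what is actually stored.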