Accessing a SQL table from a seq file

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

gpbarsky
Participant
Posts: 160
Joined: Tue May 06, 2003 8:20 pm
Location: Argentina

Accessing a SQL table from a seq file

Post by gpbarsky »

Hello my friends..... :D

I have a seq file with fields KEY1 + KEY2. This file has several records.

For each record in the file, I need to look up a SQL table using the exact values of KEY1 + KEY2.

How can I do this in a server job?

I don't know how to take the KEY1 + KEY2 fields from the file and feed them into the SQL stage.

Any help, will be appreciated.
:wink:
Guillermo P. Barsky
Buenos Aires - Argentina
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

Load the SQL table into a hashed file and perform a lookup based on those two keys. That way, for each key pair present in the sequential file, you will get its corresponding record from the SQL table.
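A rough sketch of the extract SQL that could feed the hashed file (table and column names are placeholders; select only the columns the job actually needs):

    SELECT KEY1,
           KEY2,
           NEEDED_COL
    FROM   YOUR_SQL_TABLE;

In the Hashed File stage, mark both KEY1 and KEY2 as key columns so the lookup matches on the pair.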
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
gpbarsky
Participant
Posts: 160
Joined: Tue May 06, 2003 8:20 pm
Location: Argentina

Post by gpbarsky »

Thanks, but the table may have about 3,500,000 records, and the file about 20,000 records.

The process runs daily, and I think loading the whole table could take a significant amount of time.

Thanks anyway.
Guillermo P. Barsky
Buenos Aires - Argentina
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

It will take time, but it won't crawl. Load only the columns you need into the hashed file.
OR
Have a temp table created in the same schema and pass a join SQL to get the values. That will be much faster.
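A minimal sketch of that join, assuming the temp table KEYS_TEMP has already been loaded with the key pairs from the file (all object names here are illustrative):

    SELECT t.KEY1,
           t.KEY2,
           t.NEEDED_COL
    FROM   YOUR_SQL_TABLE t
    INNER JOIN KEYS_TEMP k
        ON  t.KEY1 = k.KEY1
        AND t.KEY2 = k.KEY2;

The database then returns at most the 20,000 matching rows instead of all 3.5 million.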
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

gpbarsky wrote:Thanks, but the table may have about 3,500,000 records, and the file about 20,000 records.

The process is a daily process, and I think that this can take an important time to load the table.
This is a common requirement and will not be slow - if done properly. For a file with 20,000 records, the maximum number of records you can pull from the 3.5M record table is just that - 20,000. No need to load everything every run.

Load only the records you need for each run. Load only the fields you actually need. Take a moment to farm the keys from your source file and build a work table with them. Join those keys to your large table when populating the hashed file. You then have only 'what you need' run over run, as sketched below.
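One possible shape for that daily cycle, with WORK_KEYS and BIG_TABLE as hypothetical names (the INSERT is what a server job writing the file's keys to the database would execute for each row):

    -- Start of each run: clear out yesterday's keys
    TRUNCATE TABLE WORK_KEYS;

    -- A server job inserts today's ~20,000 key pairs from the seq file
    INSERT INTO WORK_KEYS (KEY1, KEY2) VALUES (?, ?);

    -- The hashed file load then selects only the matching rows
    SELECT t.KEY1, t.KEY2, t.COL_A
    FROM   BIG_TABLE t, WORK_KEYS w
    WHERE  t.KEY1 = w.KEY1
      AND  t.KEY2 = w.KEY2;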
-craig

"You can never have too many knives" -- Logan Nine Fingers
gpbarsky
Participant
Posts: 160
Joined: Tue May 06, 2003 8:20 pm
Location: Argentina

Post by gpbarsky »

Thanks, Craig.

We will try the option of loading a temp table and then doing the join.

Really, I love this forum because you answer me faster than the consultants here in Argentina.

You are so great..... :wink:

Have a nice day.
Guillermo P. Barsky
Buenos Aires - Argentina