gpbarsky wrote: Thanks, but the table may have about 3,500,000 records, and the file about 20,000 records.
The process runs daily, and I think that loading the table could take a significant amount of time.
This is a common requirement and will not be slow - if done properly. For a file with 20,000 records, the maximum number of records you need from the 3.5M record table is just that - 20,000. No need to load everything every run.
Load only the records you need for each run, and load only the fields you actually need. Take a moment to farm the keys from your source file and build a work table with them. Join those keys to your large table when populating the hashed file. You then have only 'what you need', run over run.
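As a minimal SQL sketch of that pattern (all table and column names here, such as SRC_KEYS, BIG_TABLE and CUST_ID, are hypothetical placeholders, and the bulk-load step will vary by database):

    -- Small work table holding the ~20,000 keys farmed from the source file
    CREATE TABLE SRC_KEYS
    (
        CUST_ID  INTEGER NOT NULL PRIMARY KEY
    );

    -- (Bulk-load the keys from the flat file here, using your database's
    --  loader utility or a simple load job.)

    -- Pull only the matching rows, and only the columns the lookup needs;
    -- this result set populates the hashed file instead of a full
    -- 3.5M-row extract.
    SELECT b.CUST_ID,
           b.CUST_NAME,      -- only the fields the job actually uses
           b.CREDIT_LIMIT
    FROM   BIG_TABLE b
    INNER JOIN SRC_KEYS k
            ON k.CUST_ID = b.CUST_ID;

With an index on the large table's key column, the join touches at most 20,000 rows rather than scanning all 3.5 million, so the daily load stays fast.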