limit the lookup data in a hashed file
Posted: Sat Oct 06, 2007 1:54 am
There should be some way to limit the lookup data that is written to a hashed file.
e.g.
I extract 100 000 rows from a source (let's assume a subset of the customer base) and need to look up the latest order from a table in a different system with 10 000 000 rows (let's assume it's the orders summary table). It would be beneficial to pass the list of account numbers into the job that creates the hashed file. That way I only load 100 000 rows into the lookup, not a record for every customer!
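In essence this is pre-filtering the reference rows against the extracted key set before writing them to the hashed file. A minimal Python sketch of the idea, just to illustrate the logic (the field names and row layout here are illustrative assumptions, not DataStage structures; in a real Server job you would achieve the same thing with a WHERE clause or user-defined SQL in the stage that feeds the hashed file):

```python
def build_limited_lookup(extracted_accounts, order_rows):
    """Return {account: latest order row} for the extracted accounts only."""
    wanted = set(extracted_accounts)      # 100 000 keys; fast membership test
    lookup = {}
    for row in order_rows:                # 10 000 000 rows streamed once
        acct = row["account"]
        if acct in wanted:
            # keep only the latest order per account
            prev = lookup.get(acct)
            if prev is None or row["order_date"] > prev["order_date"]:
                lookup[acct] = row
    return lookup

# Illustrative usage with made-up data
accounts = ["A1", "A3"]
orders = [
    {"account": "A1", "order_date": "2007-01-01", "total": 10},
    {"account": "A1", "order_date": "2007-09-30", "total": 25},
    {"account": "A2", "order_date": "2007-05-05", "total": 99},
    {"account": "A3", "order_date": "2007-02-02", "total": 7},
]
lookup = build_limited_lookup(accounts, orders)
# only A1 and A3 are loaded; A2 is skipped
```

The point is the same as in the job design: the 100 000-key set drives what goes into the lookup, so the hashed file never has to hold all 10 000 000 reference rows.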