Hi,
We have to FTP a file from the mainframe. It has around 400 thousand rows. I wanted to know whether I should use a database temp table or a sequential file as temporary storage for this file before further processing. With a sequential file as the target, it came out to around 39 MB. Which is better performance-wise?
Thanks.
Sam
If all you are doing in the next processing step is reading the file, then always use a sequential file. Otherwise, the choice is between a hash file and a table.
One angle: if you're processing this data sequentially, nothing beats the performance of a sequential file over a database. Another angle: if the intent is to persist the data rather than delete it at the end of the job, then the database is the better option.
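As a rough illustration of that first angle (this is plain Python, not DataStage; the file names, column layout, and the use of the sqlite3 stdlib module as a stand-in database are assumptions for the sketch), landing the same rows in a flat sequential file is one streaming, append-only pass, while a database table adds SQL parsing and transaction overhead on top of the raw I/O:

```python
# Sketch only: compare landing rows in a flat sequential file vs. a
# database table. sqlite3 stands in for the real database here.
import csv
import os
import sqlite3
import tempfile
import time

ROWS = 100_000  # scaled down from the ~400k rows discussed above

def make_rows(n):
    # Fabricated sample rows: (id, account key, amount).
    for i in range(n):
        yield (i, f"ACCT{i:08d}", i * 1.25)

tmpdir = tempfile.mkdtemp()
seq_path = os.path.join(tmpdir, "extract.csv")
db_path = os.path.join(tmpdir, "extract.db")

# Sequential file: one streaming pass, append-only writes.
t0 = time.perf_counter()
with open(seq_path, "w", newline="") as f:
    csv.writer(f).writerows(make_rows(ROWS))
seq_secs = time.perf_counter() - t0

# Database table: same rows, but with SQL parsing, page management
# and commit overhead on top of the raw I/O.
t0 = time.perf_counter()
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE extract (id INTEGER, acct TEXT, amt REAL)")
conn.executemany("INSERT INTO extract VALUES (?, ?, ?)", make_rows(ROWS))
conn.commit()
db_secs = time.perf_counter() - t0
conn.close()

print(f"sequential file: {seq_secs:.3f}s, database table: {db_secs:.3f}s")
```

Actual timings depend on the database and hardware, but the shape of the overhead is the point: the file write has no per-row bookkeeping, which is why a pure pass-through staging step favors the sequential file.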
Thanks for the replies. I think a sequential file will work well for temporary storage, and I'll also build a hash file on it to run lookups later on. I am creating both the sequential file and the hash file at the same time from the Transformer stage when FTPing from the source mainframe.
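For anyone wondering why the hash file is worth building alongside the sequential extract: the hash file gives keyed lookups instead of a full scan per probe. A loose analogy in plain Python (this is not DataStage's hashed file format; the field names and sample rows are made up) is building a dict keyed on the lookup column in one pass over the extract:

```python
# Analogy only: a dict built from the sequential extract plays the
# role of the hash file, giving O(1) keyed lookups for later jobs.
import csv
import io

# Stand-in for the landed sequential file (fabricated sample data).
extract = io.StringIO(
    "acct,amount\n"
    "ACCT00000001,10.00\n"
    "ACCT00000002,25.50\n"
)

# One pass over the extract builds the keyed structure.
lookup = {row["acct"]: row for row in csv.DictReader(extract)}

# Later lookups probe by key instead of re-reading the file.
print(lookup["ACCT00000002"]["amount"])  # prints 25.50
```

That one-pass build then many-probe pattern is exactly why creating both files from the same Transformer stage is a reasonable design: the sequential file stays cheap to write and stream, while the hash file serves the lookups.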