Processing a file which has more than 1500 columns
Posted: Mon Apr 05, 2010 12:41 am
Hi,
We are processing a file which has more than 1500 columns in it.
Please let me know what complications we might face while processing files of this size.
Our planned approach:
1) Split the file into 5 pieces by column (for example, columns 1-300 in the first file, 301-600 in the second, and so on).
2) Add a row number (rownum) to each split file, and after processing join the pieces back together on that key and load to the target/staging table.
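To make the idea concrete, here is a rough sketch of the split-and-rejoin step in Python (this is only an illustration of the plan, not DataStage code; the comma-delimited format, the 300-column chunk size, and the file names are all assumptions):

```python
import csv

def split_by_columns(src, prefix, chunk=300):
    """Split src into files of `chunk` columns each, prepending a rownum key."""
    with open(src, newline="") as f:
        rows = list(csv.reader(f))
    ncols = len(rows[0])
    parts = []
    for start in range(0, ncols, chunk):
        name = f"{prefix}_{start // chunk + 1}.csv"
        with open(name, "w", newline="") as out:
            w = csv.writer(out)
            # rownum (1-based) becomes the join key in every split file
            for rownum, row in enumerate(rows, 1):
                w.writerow([rownum] + row[start:start + chunk])
        parts.append(name)
    return parts

def join_parts(parts, dest):
    """Re-join the split files on the leading rownum key, dropping the key."""
    data = []
    for name in parts:
        with open(name, newline="") as f:
            data.append(list(csv.reader(f)))
    with open(dest, "w", newline="") as out:
        w = csv.writer(out)
        for rows in zip(*data):
            # every piece of this record must carry the same rownum key
            assert len({r[0] for r in rows}) == 1
            merged = []
            for r in rows:
                merged.extend(r[1:])  # strip the key before writing
            w.writerow(merged)
```

The rownum key is what guarantees the pieces line up again after independent processing, since the join cannot rely on row order being preserved.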
Please let me know if our approach will cause any issues later.
We could process the file without splitting it (I believe DataStage does not impose any limitation here); we are considering this approach only to make our lives easier while testing.
Please advise if there is a better alternative.
Thanks for your help.
Thanks & Regards,
Shamanth VK