I have a job that takes complex flat sequential files and converts them to SQL tables using the ODBC driver. The current process works great and we are having no problems with it.
Now the question: What do we do when the layout of the input files changes and those changes need to be replicated to the SQL tables?
What has happened is that fields were added to the flat files that we now need in the SQL tables. Do I need to add the new fields to the SQL table first (outside of DS) and then change my DS jobs? The SQL tables are only updated with changes or new rows added, not rebuilt every time we run the job.
Thanks for any help.
Changing ODBC SQL file layouts?
DataStage is driven by metadata.
If the metadata changes, DataStage has no mechanism to detect that fact. It must all be altered manually.
That is, you need to change your DataStage job design to incorporate metadata for the new fields in the source file. You also need, if applicable, to alter the target table (to add the new column definitions), bring these into DataStage (import), then modify the job that loads data into the table so that the new columns are derived.
If that seems "unfair", consider what you would have to do if DataStage were not in the picture, and you were (for example) bulk loading directly from the text file. You would need to create a new script for the bulk loader specifying the new columns, and you would have to alter the table so that it had those new columns.
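The two steps are the same whatever the database. As a minimal sketch (using Python's built-in sqlite3 as a stand-in for the real ODBC target, with hypothetical table and column names), altering the table and then including the new column in the load looks like:

```python
import sqlite3

# In-memory database stands in for the real ODBC target (assumption).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Existing target table, before the flat-file layout change.
cur.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")

# Step 1: alter the table to add the new column from the flat file.
cur.execute("ALTER TABLE customer ADD COLUMN region TEXT")

# Step 2: the modified job now derives the new column in its insert.
cur.execute(
    "INSERT INTO customer (id, name, region) VALUES (?, ?, ?)",
    (1, "Acme", "APAC"),
)
conn.commit()

# Verify the new column is populated for the loaded row.
row = cur.execute("SELECT id, name, region FROM customer").fetchone()
print(row)
```

In DataStage terms, step 1 is the DBA's ALTER TABLE, and step 2 is what you get after re-importing the table definition and adding a derivation for the new column in the job.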
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Correct - but technically, it doesn't matter what order you do the two things in, as long as both are done before you try running the jobs again.
Some people prefer changing the tables first, as this allows them to re-import the metadata and test right away, but for a simple change you can manually edit the metadata and change the job(s) while you are waiting for your DBA resources to get the changes done.
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers