Hi,
We have a lot of jobs running in our production environment, and of course a lot of hashed files too. The volume of data loaded into these files during job runs is extremely high, and it has become difficult to validate the data within the hashed files.
To validate the data, it would be easy if I could quickly dump it into a CSV file and test it with UNIX commands. What is the DS command or TCL command to copy hashed file data to a flat file?
Please note that I do not want to change the job design just for this purpose. These situations might come up once a month, and we have many hashed files in our jobs. I don't think it's worth the time spent modifying the job design.
Is there any ad hoc or easy way to achieve this? Thanks.
Kumar
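Once the data is in a flat file, the "test it with UNIX commands" part is straightforward. A minimal sketch, assuming a hypothetical dump file `dump.csv` with the hashed-file key in the first comma-separated column:

```shell
# dump.csv is a hypothetical comma-delimited dump of a hashed file,
# with the record key in column 1.

# Record count in the dump
wc -l < dump.csv

# Keys that appear more than once (hashed files should have unique keys,
# so any output here indicates a problem with the dump)
cut -d',' -f1 dump.csv | sort | uniq -d
```

The same pattern extends to `grep` for spot-checking individual keys or `awk` for column-level checks.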
Hashed File to a Sequential file
Moderators: chulett, rschirm, roy
With a simple modification to the job design that populates the hashed file (not the one that uses it as a reference lookup) the task is even easier: an extra output link to a text file, with post-processing to remove duplicates (perhaps a sort -u command from an ExecSH after-stage subroutine).
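The post-processing step mentioned above can be a one-liner. A minimal sketch, assuming the extra output link writes to a hypothetical path `/tmp/hashdump.txt`; this is the kind of command an ExecSH after-stage subroutine could invoke:

```shell
# /tmp/hashdump.txt is a hypothetical path written by the extra output link.
# sort -u sorts the dump and removes duplicate lines; -o writes the result
# back over the input file in place.
sort -u /tmp/hashdump.txt -o /tmp/hashdump.txt
```

Because `sort` reads its whole input before `-o` opens the output, sorting a file onto itself like this is safe.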
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.