
Loading data from datasets to text files

Posted: Wed Jul 04, 2012 3:29 am
by dspxlearn
Hello,

We have a requirement to unload our datasets to text files, archive the text files, and delete the datasets because of space constraints. We are considering using the orchadmin utility to write the data from each dataset to a text file.

Can anyone suggest whether using the orchadmin utility is a good approach, or whether we should use a DataStage job instead?

Note: We have many datasets, each with different metadata, and the text files will be used to load the data back into the datasets if needed.
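
For reference, this is roughly the kind of orchadmin command we had in mind (a sketch only; the paths are placeholders and the exact dump options available depend on the version):

    # Placeholder paths; APT_CONFIG_FILE must point to a valid parallel configuration file.
    export APT_CONFIG_FILE=/opt/IBM/InformationServer/Server/Configurations/default.apt
    # orchadmin dump writes the records of the dataset to stdout as text.
    orchadmin dump /data/archive/mydata.ds > /data/archive/mydata.txt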

Posted: Wed Jul 04, 2012 5:09 am
by ray.wurlod
Ultimately, a DataStage job and orchadmin execute the same underlying functionality.

Posted: Wed Jul 04, 2012 5:12 am
by ArndW
I would write a generic job to do the dataset -> text transfer. The job would declare no columns at all and use RCP (runtime column propagation), with the dataset name and the output file name passed in as job parameters. In the after-job routine I would call the shell command "orchadmin rm {path-and-name of dataset}" to delete the dataset.
If the text files are for archival, I would also add a filter such as "gzip -" to the output Sequential File stage to compress the archive as it is written. See the sketch below.
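
As a rough sketch of what I mean (paths, parameter names, and property labels are illustrative; #...# denotes DataStage job parameters):

    Sequential File stage (output link):
      File   = #ArchiveDir#/#DatasetName#.txt.gz
      Filter = gzip -              (compresses the rows as they are written)

    Job Properties -> After-job subroutine: ExecSH
      Input value = orchadmin rm #DatasetDir#/#DatasetName#.ds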

Posted: Wed Jul 04, 2012 8:16 am
by qt_ky
Have you compared the cost of adding disk space vs. all the extra labor?

Have you considered using the Compress and Expand stages?