Extraction of a huge number of records to a sequential file

Post questions here related to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

debasisp
Premium Member
Posts: 34
Joined: Wed Feb 01, 2006 1:53 am

Extraction of a huge number of records to a sequential file

Post by debasisp »

Hi,
I have to extract more than 400 million records from an Oracle table to sequential file(s) and FTP the file(s) to a mainframe.

Can someone suggest the most feasible approach?

Thanks
Debasis
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

The best way would be to extract to multiple files, then run a cat command as an after-job subroutine to combine them into a single file, and then FTP it. The cat and FTP can also be coupled in a simple few-line shell script.
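A minimal sketch of such a script, assuming the extract files land in /data/extracts; the directory, host name, credentials, and dataset name are all placeholders to adjust for your environment:

Code: Select all

#!/bin/sh
# Combine the per-partition extract files into a single file,
# then FTP that file to the mainframe in one batch session.

EXTRACT_DIR=/data/extracts                # assumed landing directory
TARGET=$EXTRACT_DIR/full_extract.dat      # combined output file

# cat preserves record order file-by-file; name the part files so
# the shell glob expands them in the order you want.
cat "$EXTRACT_DIR"/extract_part*.dat > "$TARGET"

# -n suppresses auto-login; the here-document supplies the commands.
ftp -n mainframe.example.com <<EOF
user MYUSER MYPASS
put $TARGET 'PROD.EXTRACT.DATA'
quit
EOF

You can call a script like this from the job's after-job subroutine (ExecSH) once all the partition files have been written.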
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
debasisp
Premium Member
Posts: 34
Joined: Wed Feb 01, 2006 1:53 am

Post by debasisp »

Hi,
Can you tell me the maximum size limit for a sequential file created through DataStage? Is there any way to increase the sequential file size limit?

Because if I extract the records from the table partition by partition, I assume each file will be more than 3 GB. Is it feasible through DataStage to create files of that size?
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

That would be an O/S limitation, not a DataStage one.
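As a quick sanity check on the server, a POSIX shell can report the per-process file size limit; "unlimited" (or a limit comfortably above 3 GB) means the O/S and filesystem, not DataStage, decide how large the file can grow. The path below is a placeholder:

Code: Select all

# Per-process file size limit for the current session
# (reported in 512-byte blocks on most shells;
# "unlimited" means no O/S-imposed cap).
ulimit -f

# Confirm the sizes of the extract files already written.
ls -lh /data/extracts/extract_part*.dat

On older filesystems without large-file support, 2 GB is the classic ceiling, so it is worth confirming the target filesystem before running a 3 GB extract.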
-craig

"You can never have too many knives" -- Logan Nine Fingers