Import from fixed-length file

Post questions here related to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

laknar
Participant
Posts: 162
Joined: Thu Apr 26, 2007 5:59 am
Location: Chennai

Post by laknar »

Try creating two temp files, one holding the 30-byte records and the other the 33-byte records (name them however is convenient):

awk 'length==30' Filename > 30byteFilename_temp

awk 'length>30' Filename > 33byteFilename_temp
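
A single-pass variant (just a sketch; the output file names here are placeholders) writes both temp files in one awk invocation:

Code: Select all

awk 'length==30 { print > "30byte_temp" }
     length==33 { print > "33byte_temp" }' Filename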
Regards
LakNar
dxk9
Participant
Posts: 105
Joined: Wed Aug 19, 2009 12:46 am
Location: Chennai, Tamil Nadu

Post by dxk9 »

I still don't understand. Create the files for what??

Regards,
Divya
laknar
Participant
Posts: 162
Joined: Thu Apr 26, 2007 5:59 am
Location: Chennai

Post by laknar »

Read the file as a single column, then check the length and extract the columns using the substring function.

In the Transformer, define six columns in total.

Column1

Code: Select all

SingleColumn[1,5]

Column2

Code: Select all

If Len(SingleColumn)=30 Then SingleColumn[6,9] Else SingleColumn[6,8]
and so on for the other columns (see the sketch below).

This code follows the record lengths given in the previous post.
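
For the remaining columns the pattern continues, with the start position shifted by the difference in record lengths. For example, Column3 would begin at position 15 in a 30-byte record and at position 14 in a 33-byte record; the 6-character width below is only a placeholder, since the real widths come from the layout in the previous post:

Code: Select all

If Len(SingleColumn)=30 Then SingleColumn[15,6] Else SingleColumn[14,6]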
Regards
LakNar
dxk9
Participant
Posts: 105
Joined: Wed Aug 19, 2009 12:46 am
Location: Chennai, Tamil Nadu

Post by dxk9 »

laknar wrote:Read the file as a single column, then check the length and extract the columns using the substring function. [...]
But this logic again imports the file as a single column and splits it into various targets based on the row length, right?

Regards,
Divya
vinnz
Participant
Posts: 92
Joined: Tue Feb 17, 2004 9:23 pm

Post by vinnz »

dxk9 wrote:But this logic again imports the file as a single column and splits it into various targets based on the row length, right?
You could split it into different columns using substring functions to extract each field, as mentioned, or split the file and process the pieces downstream.

Unless you want to process both record types in the same job, you could split the file based on the first character/record type after importing each line as a single column. You could then import those files separately in successor jobs.
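
For instance, assuming the record type is the first character and (hypothetically) takes the values "A" and "B", the constraint on the first Transformer output link would be

Code: Select all

SingleColumn[1,1] = "A"

and on the second

Code: Select all

SingleColumn[1,1] = "B"

Each link would then write to its own file.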

Hope that helps.
dxk9
Participant
Posts: 105
Joined: Wed Aug 19, 2009 12:46 am
Location: Chennai, Tamil Nadu

Post by dxk9 »

That helps!!

I was just wondering if I can use the same job for the processing too. To be more specific, I want to split the input file into various other files based on the record type and then process the individual record types separately.

I can put these into different parallel jobs and run them as a sequence. Is there any other way, like using the same parallel job?

And can anybody tell me the specific use of the Complex Flat File stage? I went through the online docs, but I am not sure of its specific usage.
Regards,
Divya
vinnz
Participant
Posts: 92
Joined: Tue Feb 17, 2004 9:23 pm

Post by vinnz »

dxk9 wrote:I can put these into different parallel jobs and run them as a sequence. Is there any other way, like using the same parallel job?
That might not work since, if I remember correctly, the Sequential File stage does not support non-reject output links when an input link is writing to it.

HTH
Post Reply