Try creating two temp files: one holding the 30-byte records and the other holding the 33-byte records (name them whatever is convenient):

Code:
awk 'length == 30' Filename > 30byteFilename_temp
awk 'length == 33' Filename > 33byteFilename_temp
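If a single pass over the file is preferable, awk can route each record by its length directly. A minimal sketch, assuming records_30.txt and records_33.txt as the (hypothetical) output names:

Code:
awk '{
    if (length($0) == 30)      print > "records_30.txt"   # 30-byte records
    else if (length($0) == 33) print > "records_33.txt"   # 33-byte records
}' Filename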
Import from fixed-length file
Read the file as a single column, then check the length and extract the individual columns using the substring function.
In the Transformer, define 6 columns in total.
Column1:
Code:
SingleColumn[1,5]
Column2:
Code:
If Len(SingleColumn) = 30 Then SingleColumn[6,9] Else SingleColumn[6,8]
and so on for the other columns.
These derivations follow the record lengths given in the previous post.
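To sanity-check those substring boundaries outside DataStage, here is a rough awk equivalent of the two derivations above (not the Transformer code itself, just a sketch mirroring it):

Code:
awk '{
    c1 = substr($0, 1, 5)                           # Column1: positions 1-5
    if (length($0) == 30) c2 = substr($0, 6, 9)     # Column2 on a 30-byte record
    else                  c2 = substr($0, 6, 8)     # Column2 on a 33-byte record
    print c1 "|" c2
}' Filename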
Regards
LakNar
laknar wrote: Read as a Single Column then check for the length and extract the Columns by using substring Function.
In the Transformer, define 6 columns in total.
Column1:
SingleColumn[1,5]
Column2:
If Len(SingleColumn)=30 Then SingleColumn[6,9] Else SingleColumn[6,8]
and so on for the other columns. This code is as per the lengths in the previous post.

But this logic is again importing the file as a single column and splitting it to various targets based on the row length, right?
Regards,
Divya
dxk9 wrote: But this logic is again importing the file as a single column and splitting it to various targets based on the row length, right?

You could split it into different columns using substring functions to extract each field, as mentioned, or split the file and process the pieces downstream.
Regards,
Divya
Unless you want to process both record types in the same job, you could split the file based on the first character/record type after importing each line as a single column. You could then import those files separately in successor jobs.
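For a quick prototype of that first-character split outside the job, something like the following awk one-liner would work; it assumes (hypothetically) that the record-type indicator sits in position 1 and names each output file after it:

Code:
awk '{ print > ("records_" substr($0, 1, 1) ".txt") }' Filename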
Hope that helps.
That helps!!
I was just wondering if I can use the same job for the processing too. To be more specific, I want to split the input file into several other files based on the record type and then process the individual record types separately.
I can put these into different parallel jobs and run them as a sequence. Is there any other way, like doing it all within the same parallel job?
Also, can anybody tell me the specific use of the Complex Flat File stage? I went through the online docs, but I'm not sure of its specific usage.
Regards,
Divya