
Posted: Mon Sep 07, 2009 12:22 am
by laknar
Try creating two temp files, one with the 30-byte records and another with the 33-byte records (name the temp files whatever is convenient):

awk 'length == 30' Filename > 30byteFilename_temp

awk 'length == 33' Filename > 33byteFilename_temp
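
If it helps, a single awk pass can write both files at once; a minimal sketch, assuming every record is exactly 30 or 33 bytes (the output names are placeholders):

Code: Select all

awk '{
  if (length($0) == 30) print > "30byteFilename_temp"
  else if (length($0) == 33) print > "33byteFilename_temp"
}' Filename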

Posted: Mon Sep 07, 2009 2:53 am
by dxk9
I still don't understand. Create the files for what??

Regards,
Divya

Posted: Mon Sep 07, 2009 7:05 am
by laknar
Read the file as a single column, then check the length and extract the columns using the substring function.

In the Transformer, define six columns in total.

Column1

Code: Select all

SingleColumn[1,5]

Column2

Code: Select all

If Len(SingleColumn)=30 Then SingleColumn[6,9] Else SingleColumn[6,8]

and so on for the other columns.

This code is as per the lengths in the previous post.
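
Following the same pattern, Column3 would start at position 15 for a 30-byte record (5 + 9 preceding bytes) and at position 14 for a 33-byte record (5 + 8). The widths below are placeholders; take the real ones from the record layout in the previous post:

Code: Select all

If Len(SingleColumn)=30 Then SingleColumn[15,4] Else SingleColumn[14,5]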

Posted: Wed Sep 09, 2009 5:27 am
by dxk9
laknar wrote: Read the file as a single column, then check the length and extract the columns using the substring function. [...]
But this logic again imports the file as a single column and splits it into various targets based on the row length, right??

Regards,
Divya

Posted: Wed Sep 09, 2009 4:37 pm
by vinnz
dxk9 wrote: But this logic again imports the file as a single column and splits it into various targets based on the row length, right??
You could split it into different columns using substring functions to extract each field, as mentioned, or split the file and process the pieces downstream.

Unless you want to process both record types in the same job, you could split the file based on the first character/record type after importing each line as a single column. You could then import those files separately in successor jobs.
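
For example, if the first character carries the record type, the constraint on each output link of the splitting Transformer could be as simple as the following; "A" and "B" are placeholder type codes, so use the real values from your file:

Code: Select all

SingleColumn[1,1] = "A"

for the link feeding the first file, and

Code: Select all

SingleColumn[1,1] = "B"

for the link feeding the second.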

Hope that helps.

Posted: Thu Sep 10, 2009 12:13 am
by dxk9
That helps!!

I was just wondering if I can use the same job for the processing too. To be more specific, I want to split the input file into various other files based on the record type and then process the individual record types separately.

I can put these into different parallel jobs and run them as a sequence. Is there any other way, like using the same parallel job??

And can anybody tell me the specific use of the Complex Flat File stage?? I went through the online docs, but I am not sure of its specific usage.

Regards,
Divya

Posted: Tue Sep 15, 2009 3:13 pm
by vinnz
dxk9 wrote: I can put these into different parallel jobs and run them as a sequence. Is there any other way, like using the same parallel job??
That might not work; if I remember correctly, the Sequential File stage does not support non-reject output links when there is an input link writing to it.

HTH