Posted: Thu Sep 03, 2009 7:52 am
by miwinter
True.. but not in Windows :wink:

Posted: Thu Sep 03, 2009 8:03 am
by chulett
Unless they have the MKS Toolkit installed. :wink:

Posted: Thu Sep 03, 2009 8:12 am
by miwinter
"MKS" as in Mortice Kern Systems? Info on what the 'toolkit' is please! :D

(Apologies to OP for the semi-hijack) :wink:

Posted: Thu Sep 03, 2009 9:19 am
by chulett
I believe so, yes. The MKS Toolkit ships with the Enterprise Edition for Windows and brings UNIX capabilities to the table. Pretty cool stuff.

Posted: Thu Sep 03, 2009 9:23 am
by miwinter
:shock:

Sweet... candy! :D Ta Craig :wink:

Posted: Thu Sep 03, 2009 11:20 pm
by dxk9
dxk9 wrote:
I have a file containing different row types. Each row type has a different number of fixed-length columns.

Eg:
Header
AAA field1(4) field2(11) field3(6) field4(7) field5(14)
BBB field1(9) field2(1) field3(16) field4(6) field5(4) field6(10) field7(11) field8(6) field9(2) field10(14)
CCC field1(7) field2(23)
-----------------
-------------
Footer

I need to import this entire file and filter selected row types to specific destinations.

I first imported the entire row as one single field and then used different transformers to send it to different targets. In each transformer, I used string functions to extract the fields for that target's row type. But this becomes very complex when a row type has hundreds of fields.

Is there a better way to do this?
Can anybody help me with this? :(


Regards,
Divya

Posted: Fri Sep 04, 2009 2:30 am
by ray.wurlod
Read the file using a single VarChar column.

Your file is NOT fixed-length format, which would require every record to contain the same number of characters. Specify none for the field delimiter.

Determine the record type in a Filter stage or in Transformer stage constraints and parse the separate record types using separate Column Import stages or separate output links from the Transformer stage.
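The same "read as one column, then parse per record type" logic can be sketched outside DataStage in a few lines of Python. The widths are the AAA/CCC layouts from the earlier post; the helper names are illustrative, not part of any DataStage API:

```python
# Sketch of the approach above: read each record as a single string,
# dispatch on the record-type prefix, then slice fixed-width fields.

def split_fixed(line, widths):
    """Slice a record into fields of the given fixed widths."""
    fields, pos = [], 0
    for w in widths:
        fields.append(line[pos:pos + w])
        pos += w
    return fields

# record-type prefix -> field widths for the rest of the record
LAYOUTS = {
    "AAA": [4, 11, 6, 7, 14],
    "CCC": [7, 23],
}

def parse(lines):
    out = {t: [] for t in LAYOUTS}
    for line in lines:
        rec_type = line[:3]            # first field identifies the row type
        if rec_type in LAYOUTS:        # header/footer rows fall through
            out[rec_type].append(split_fixed(line[3:], LAYOUTS[rec_type]))
    return out

rows = ["Header", "AAA" + "x" * 42, "CCC" + "y" * 30, "Footer"]
print(parse(rows)["AAA"][0])   # five fields of widths 4, 11, 6, 7, 14
```

In DataStage terms, the `rec_type` test plays the role of the Filter stage or Transformer constraints, and `split_fixed` plays the role of the Column Import stage on each output link.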

Posted: Fri Sep 04, 2009 3:22 am
by dxk9
I have already imported the entire file as a single field and separated it into various targets based on the row type (the first field). Now, when I try to read one of those target files (fixed length) as input, I am not able to view the data.

The properties I set were:

Record Length = fixed
datatype for all columns as "Char"

It says:
<Sequential_File> "record_length=fixed" (no length given) and record field format is variable-length. The first variable-length field is "REC_TYPE".

"REC_TYPE" is my 1st field.

When I remove the 'Record Length = fixed' property, I get the error:

Input buffer overrun at field "user_id", at offset: 276 :(

Regards,
Divya

Posted: Fri Sep 04, 2009 3:33 am
by laknar
Your problem may be in the last line.
Check whether the last line ends with a newline (line feed) character.

Posted: Fri Sep 04, 2009 3:38 am
by dxk9
I checked the file; it seems to be fine.
Can anybody suggest a way to view a fixed-length sequential file?

Assume this is the format,

fields length
AAA 15
BBB 16
CCC 11
DDD 64
EEE 64
FFF 24
GGG 32
HHH 50


Regards,
Divya

Posted: Fri Sep 04, 2009 3:44 am
by Sainath.Srinivasan
Count the characters until you hit 'user_id' (around the 276th).
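Outside DataStage, the offset arithmetic is easy to check with a short script. This is a sketch using the AAA–HHH widths posted above (15, 16, 11, 64, 64, 24, 32, 50); they sum to exactly 276, so a failure "at offset 276" points just past the last field, i.e. at the record delimiter, not at a data field:

```python
# Cumulative offsets for the fixed-width layout posted above.
# The widths sum to 276, matching the reported overrun offset.

widths = [15, 16, 11, 64, 64, 24, 32, 50]  # AAA..HHH from the post

offset = 0
for name, w in zip("ABCDEFGH", widths):
    print(f"field {name * 3}: starts at {offset}, width {w}")
    offset += w
print("total record length:", offset)  # 276
```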

Posted: Fri Sep 04, 2009 4:18 am
by dxk9
I have only 276 characters per record; all records are the same length.

I have no clue about the error, as it's the first time I am working with a fixed-length file as input.


Regards,
Divya

Posted: Fri Sep 04, 2009 5:10 am
by laknar
Specify Record Delimiter = UNIX Newline, or the file may be in DOS format.
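The line-ending check can be done outside DataStage with a couple of lines of Python. This is a sketch only (the sample data is made up): with 276-character records, a DOS carriage return makes each raw line 277 bytes, which would produce exactly the "buffer overrun at offset 276" reported above.

```python
# Distinguish DOS (CRLF) from UNIX (LF) line endings on a raw line,
# read in binary mode so the carriage return is visible.

def line_ending(raw_line):
    if raw_line.endswith(b"\r\n"):
        return "DOS (CRLF)"
    if raw_line.endswith(b"\n"):
        return "UNIX (LF)"
    return "none"

dos_line = b"A" * 276 + b"\r\n"        # illustrative 276-char record
print(line_ending(dos_line))           # DOS (CRLF)
print(len(dos_line.rstrip(b"\r\n")))   # 276 -- the true record length
```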

Posted: Fri Sep 04, 2009 5:14 am
by dxk9
Yes, I also tried with Record Delimiter as UNIX Newline, but I am still not able to view the data.

Can you provide me with the entire flow and also the settings for the import?
dxk9 wrote: The file format,

fields length
AAA 15
BBB 16
CCC 11
DDD 64
EEE 64
FFF 24
GGG 32
HHH 50


Regards,
Divya

Posted: Fri Sep 04, 2009 3:04 pm
by ray.wurlod
Use Record Delimiter String = DOS Format, not Record Delimiter = UNIX newline (because your server is on Windows).