
Compress Stage

Posted: Tue Nov 17, 2009 12:38 pm
by ppp
Hello,

I am trying to compress a data file with 3 fields (ID, CODE, TITLE). I am getting the following errors; can someone please explain what I am doing wrong?

1) sequential_file_compressed: Error when checking operator: Export validation failed.

2) sequential_file_compressed: Error when checking operator: At field "t": Tagged aggregate must have a prefix or link reference

Posted: Wed Nov 18, 2009 1:34 am
by ArndW
If you look at page 283 of the Parallel Job Developer Guide (V8 documentation), the output schema is described there.

Posted: Wed Nov 18, 2009 8:32 am
by ppp
I am using version 7.5.1.

Posted: Wed Nov 18, 2009 9:21 am
by ArndW
<sigh> That just means the same information is on a different page in your manual. Have you looked for the section on the Compress stage in your developer guide?

Posted: Wed Nov 18, 2009 9:32 am
by ppp
In the Parallel Job Developer's Guide there is no mention of the output schema. I have tried different options but I still get the same error.
When I load the table definition for the output/target, I get an error saying
sequential_file_compressed: Error when checking operator: Could not find input field "ID".
and so on for the other fields.

Posted: Wed Nov 18, 2009 9:40 am
by ppp
When I use a Peek or Data Set stage on the output side, the job runs fine, but when I use a Sequential File stage I get all those errors.
Also, when using the Data Set as the output, I did not load the column definitions.
Thanks

Posted: Wed Nov 18, 2009 9:44 am
by ArndW
The schema for the compressed data set would be:
record
( t: tagged {preservePartitioning=no}
  ( encoded: subrec
    ( bufferNumber: dfloat;
      bufferLength: int32;
      bufferData: raw[32000];
    );
    schema: subrec
    ( a: int32;
      b: string[50];
    );
  );
)