I am trying to compress a data file with 3 fields (ID, CODE, TITLE). I am getting the following errors; can someone please explain what I am doing wrong?
1) sequential_file_compressed: Error when checking operator: Export validation failed.
2) sequential_file_compressed: Error when checking operator: At field "t": Tagged aggregate must have a prefix or link reference
<sigh> that just means the same information is on a different page in the manual. Have you looked for the section on the Compress stage in your developer guide?
In the Parallel Job Developer's Guide there is no mention of the output schema. I have tried different options but I still get the same errors.
When I load the table definition for the Output/Target, I get an error saying
sequential_file_compressed: Error when checking operator: Could not find input field "ID".
and so on for the other fields.
When I use a Peek or Data Set stage on the output side, the job runs fine, but when I use a Sequential File stage I get all those errors.
Also, when using the Data Set as the output I did not load the column definitions.
Thanks
The schema for the compressed data set would be:
record
( t: tagged {preservePartitioning=no}
  ( encoded: subrec
    ( bufferNumber: dfloat;
      bufferLength: int32;
      bufferData: raw[32000];
    );
    schema: subrec
    ( a: int32;
      b: string[50];
    );
  );
)
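For your job, the inner schema subrec would carry whatever column definitions your input actually has; the field names and types below are only a guess at your ID/CODE/TITLE columns, sketched to show the shape:

    schema: subrec
    ( ID: int32;
      CODE: string[10];
      TITLE: string[50];
    );

The key point is that the compressed output no longer exposes ID, CODE and TITLE as top-level fields; the whole record is wrapped inside the tagged aggregate. That is why loading your original table definition on the Sequential File stage produces the "Could not find input field" errors, while a Data Set or Peek stage with no columns loaded runs fine.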