Compress Stage

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

ppp
Participant
Posts: 21
Joined: Mon Aug 31, 2009 11:53 am

Compress Stage

Post by ppp »

Hello,

I am trying to compress a data file with 3 fields (ID, CODE, TITLE). I am getting the following errors; can someone please explain what I am doing wrong?

1) sequential_file_compressed: Error when checking operator: Export validation failed.

2) sequential_file_compressed: Error when checking operator: At field "t": Tagged aggregate must have a prefix or link reference
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

If you look at page 283 of the Parallel Job Developer Guide (V8 documentation), the output schema is described there.
ppp
Participant
Posts: 21
Joined: Mon Aug 31, 2009 11:53 am

Post by ppp »

I am using version 7.5.1.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

&lt;sigh&gt; That just means the same information is on a different page of your manual. Have you looked for the section on the Compress stage in your developer guide?
ppp
Participant
Posts: 21
Joined: Mon Aug 31, 2009 11:53 am

Post by ppp »

In the Parallel Job Developer's Guide there is no mention of the output schema. I have tried different options, but I still get the same error.
When I load the table definition for the output/target, I get an error saying:
sequential_file_compressed: Error when checking operator: Could not find input field "ID".
and so on for the other fields.
ppp
Participant
Posts: 21
Joined: Mon Aug 31, 2009 11:53 am

Post by ppp »

When I use a Peek or Data Set stage on the output side, the job runs fine, but when I use a Sequential File stage I get all those errors.
Also, when using the Data Set as the output, I did not load the column definitions.
Thanks
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

The schema for the compressed data set would be:
record
( t: tagged {preservePartitioning=no}
  ( encoded: subrec
    ( bufferNumber: dfloat;
      bufferLength: int32;
      bufferData: raw[32000];
    );
    schema: subrec
    ( a: int32;
      b: string[50];
    );
  )
)
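The "Tagged aggregate must have a prefix or link reference" error comes from exporting that schema to a flat sequential file: a tagged aggregate stores one of several differently shaped branches per record, so a flat byte stream is unreadable unless each record is prefixed with which branch it contains. The following toy Python sketch (not DataStage internals; the two branches and their layouts are invented for illustration) shows why the prefix is required:

```python
import struct

# Toy tagged aggregate with two branches of different byte layouts:
#   tag 0 -> int32 branch (4 bytes)
#   tag 1 -> dfloat branch (8 bytes)
def write_tagged(records):
    out = bytearray()
    for tag, value in records:
        out += struct.pack("B", tag)            # the tag prefix: says which branch follows
        if tag == 0:
            out += struct.pack("<i", value)     # int32 branch
        else:
            out += struct.pack("<d", value)     # dfloat branch
    return bytes(out)

def read_tagged(data):
    records, pos = [], 0
    while pos < len(data):
        tag = data[pos]                         # without this byte, the reader
        pos += 1                                # cannot know how many bytes
        if tag == 0:                            # the next record occupies
            (v,) = struct.unpack_from("<i", data, pos)
            pos += 4
        else:
            (v,) = struct.unpack_from("<d", data, pos)
            pos += 8
        records.append((tag, v))
    return records
```

A Data Set stage keeps its own internal descriptor, so it can store tagged records without an explicit prefix; a sequential export has no such descriptor, which is why the job only fails with the Sequential File stage.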