Expand stage Input Metadata

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

madsongtel
Participant
Posts: 31
Joined: Fri Aug 01, 2008 2:56 am


Post by madsongtel »

Hi All,

I have a dataset called ABC.ds that I compress and load into BCD.ds, and a second job that reads the compressed dataset and loads it into CDE.ds.

The first job's design is:
ABC.ds ---> Compress stage (compress) ---> BCD.ds

Now I want to expand BCD.ds into CDE.ds, so the second job's design is:
BCD.ds ---> Expand stage (uncompress) ---> CDE.ds

In the second job, what metadata do I have to give for the BCD.ds dataset?
When I enable RCP I am able to run the second job.

If I give it the same metadata as ABC.ds, I get the errors below:
Data_Set_11: Error when checking operator: When binding output schema variable "outRec": Cannot drop the case from tagged output; case "t.encoded".
Data_Set_11: Error when checking operator: When binding output schema variable "outRec": Cannot drop the case from tagged output; case "t.schema".
Error when checking operator: When binding output schema variable "outRec": Cannot drop the case from tagged output; case "t.noPreservePartitioning".
Data_Set_11: Error when checking operator: When binding output schema variable "outRec": Output tag case 2 is not associated with any concrete tag case.

Please help me.

Thanks in Advance.
Pavan.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

You need exactly the same record schema that was used to populate the Data Set, that is, the schema from upstream of the Compress stage. In particular, because that record schema includes tagged subrecords, you must include the fields that contain the tag values. According to the error messages, these are missing.
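To illustrate the point, a compressed Data Set's record schema is a tagged record whose case names match those in the error log (t.encoded, t.schema, t.noPreservePartitioning). The sketch below is hypothetical, written in osh-style schema notation, and only shows the shape of such a schema; the actual field definitions inside each case are elided and would have to come from the schema DataStage generated when BCD.ds was written:

```
// Hypothetical sketch only -- not the exact generated schema.
// The case names mirror those reported in the error messages.
record (
  t: tagged (
    encoded: subrec ( ... );              // the compressed data blocks
    schema: subrec ( ... );               // the original record schema
    noPreservePartitioning: subrec ( ... )
  )
)
```

This is why supplying the plain ABC.ds metadata fails: it has no tag field, so the operator cannot bind the tagged output cases. Enabling RCP works because the schema stored in the Data Set descriptor is then propagated automatically.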
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.