Hi All,
i have a dataset called ABC.ds, doing compress and loading to BCD.ds
and second job is for read compressed dataset and load into CDE.ds
job design is like
ABC.ds --->Compress stage(compress)--->BCD.ds
Now i want to expand BCD.ds to CDE.ds, my job design is like
BCD.ds --->Expand Stage(Uncompress)--->CDE.ds
In second job BCD.ds dataset what metadata i have to give...
When i enable RCP i am able to run second job.
if i give the same ABC.ds metadata i am getting below error,
Data_Set_11: Error when checking operator: When binding output schema variable "outRec": Cannot drop the case from tagged output; case "t.encoded".
Data_Set_11: Error when checking operator: When binding output schema variable "outRec": Cannot drop the case from tagged output; case "t.schema".
Error when checking operator: When binding output schema variable "outRec": Cannot drop the case from tagged output; case "t.noPreservePartitioning".
Data_Set_11: Error when checking operator: When binding output schema variable "outRec": Output tag case 2 is not associated with any concrete tag case.
Please help me.
Thanks in Advance.
Pavan.
Expand stage Input Metadata
Moderators: chulett, rschirm, roy
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
You need exactly the same record schema that was used to populate the Data Set (from upstream of the Compress stage). In particular because that record schema includes tagged subrecords, you must have the fields that contain the tag values. These, according to the error message, are missing.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.