![Image](http://img232.imageshack.us/img232/8770/getrow7gb.jpg)
Thanks in advance.
Hi all,

chulett wrote: Also, if this is coming from a Link Collector, which it looks like it might, then double-check that all links into the stage provide the exact same metadata.
Hi Ray... Thanks for the help. Below are some screens to shed light on the design.

ray.wurlod wrote: This error means that DataStage found 71 columns in a row, whereas your job design has only six columns defined for that link.
Can you please let us know what stage types you are using in your job design, and how they are linked together?
One place this can occur is when reading from a sequential file; because of the nature of a sequential file (you must read every byte to get to the next row), you cannot skip columns the way you can when selecting rows from a database table.
Something similar can happen when using row buffering. If you push 71 columns into the row buffer, you must retrieve 71 columns. Row buffering may be implicit (row buffering on) or explicit (use of IPC, Link Partitioner or Link Collector stages).
6 columns on the input link to Agg01.

ray.wurlod wrote: How many columns are defined on the input link to Agg01?
How many columns are defined on the output link from Agg01?
How many columns are defined on the other output link from Agg01?
(Optional: What are you trying to achieve by aggregating aggregated data?)
Hi all,

jzparad wrote: I vaguely remember a similar problem in a previous project and, from memory, the problem disappeared when we put an IPC stage between the active stages.
Sorry, that should have read:

Active stages are simply stages that spawn a process. Two active stages in sequence should only spawn a single process according to the DataStage manual.