Hi,
I am writing to a dataset and then using it in another job. In the job that creates the dataset, I am able to view the data. But when I use the same dataset for reading, I get the following error saying that one of the fields is not in the input dataset. Any clue on this, please?
##I TOSH 000002 15:05:29(001) <main_program> orchgeneral: loaded
##I TOSH 000002 15:05:29(002) <main_program> orchsort: loaded
##I TOSH 000002 15:05:29(003) <main_program> orchstats: loaded
##I TFSC 000001 15:05:29(004) <main_program> APT configuration file: /opt/biretl2dev/apps/ascential/Ascential/DataStage/Configurations/default.apt
##W TCOS 000049 15:05:29(005) <main_program> Parameter specified but not used in flow: DSProjectMapName
>##E TOPK 000000 15:05:30(003) <_PEEK_IDENT_> Input dataset does not have field: "WORKSTATION_ID".
>##E TFSR 000019 15:05:30(005) <main_program> Could not check all operators because of previous error(s)
##W TFOR 000000 15:05:30(006) <Data_Set_0> When checking operator: The modify operator has a binding for the non-existent output field "WORKSTATION_ID".
##W TFOP 000073 15:05:30(007) <_Head> When checking operator: A sequential operator cannot preserve the partitioning
of the parallel data set on input port 0.
>##E TCOS 000029 15:05:30(008) <main_program> Creation of a step finished with status = FAILED.
But for the same WORKSTATION_ID, I am able to view the data in the job that writes the dataset.
Hi,
It turns out that the metadata for the dataset should be loaded from the DataStage Manager instead of being typed in by hand. I saved the column definitions of the dataset created in the writing job and loaded the same definitions into the job that reads the dataset. This works fine and I am now able to read data from the dataset. So anyone searching the forum for this error keyword should find this fix useful.
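To illustrate why loading the saved definition matters: a parallel dataset (.ds file) carries its own stored record schema, and the reading job's column definitions must match it field for field, including the exact field names. A hand-typed definition that drops or misspells a column produces exactly the "Input dataset does not have field" error above. As a rough sketch (the field names and types below are hypothetical, not taken from the original job), the stored schema is an Orchestrate record definition along these lines:

```
record (
    WORKSTATION_ID: string[10];
    TRANSACTION_DT: timestamp;
    AMOUNT: decimal[10,2];
)
```

Loading the table definition saved from the writing job via the DataStage Manager guarantees the reading job uses this same record layout rather than a retyped copy that can silently drift out of sync.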