Hi,
I created a server job that maps a field to a record through a hashed file. Later it was found that the mapping was incorrect, and it was changed.
The problem: when I run the job individually, it brings back the correct records, but when a sequence runs the job, the old field value is brought back instead.
The hashed file paths are different when we run the job individually versus from the sequence, and we use deferred write on all hashed files.
Could someone suggest the possible cause?
Thanks,
ranga
Changes made are appearing in the job but not in the sequence
Have you at some point copied and renamed your job? Perhaps your sequence job is pointing to an old copy of the job you are working on. Using DataStage Manager click on your job and choose the Usage Analysis option to see what sequence jobs are using it.
Certus Solutions
Blog: Tooling Around in the InfoSphere
Twitter: @vmcburney
LinkedIn:Vincent McBurney LinkedIn
When your job sequence runs, you can order the jobs by status in DataStage Director to check whether the jobs you expect to run are the ones that are actually running.
You can also look at the first few lines of the job log to check the parameters passed to the job and their values, to confirm the flow is correct.
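A common cause of this symptom is a parameterized hashed file path whose value differs between the two run modes: run individually, the job uses its own default parameter value, while the sequence passes an old value. Here is a minimal sketch of that resolution logic; the parameter name `HashDir` and the paths are hypothetical, purely for illustration.

```python
# Hypothetical sketch: a hashed file directory built from a job
# parameter. If the sequence passes a different value than the
# job's own default, each run mode touches a different hashed file.

def resolve_hash_path(param_values, default_dir="/data/proj/hash"):
    """Resolve the hashed file directory the way a parameterized
    server job would: use the caller-supplied value if present,
    otherwise fall back to the job's default."""
    return param_values.get("HashDir", default_dir)

# Run individually: Director uses the job's default parameter value.
individual = resolve_hash_path({})
# Run from a sequence that still passes the old location.
from_sequence = resolve_hash_path({"HashDir": "/data/proj/old_hash"})

print(individual)      # /data/proj/hash
print(from_sequence)   # /data/proj/old_hash
```

Comparing the resolved path in the individual run's log against the sequence run's log (the parameter values appear in the first few log entries) would confirm or rule this out.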