Run on single node in multi-node environment

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

Shruthi
Participant
Posts: 74
Joined: Sun Oct 05, 2008 10:59 pm
Location: Bangalore

Run on single node in multi-node environment

Post by Shruthi »

Hi,

Is there any parameter that can be defined at job level to make a job run on a single node in a multi-node (4-node) environment?
We don't have access to change the config file. Is there any other parameter available in DataStage?

I'm facing an issue with the Join stage, which gives different output on each run in a multi-node environment but works fine on a single node. I'm not able to trace this issue, so I want to run the job on just one of the 4 nodes.
Shruthi
kandyshandy
Participant
Posts: 597
Joined: Fri Apr 29, 2005 6:19 am
Location: Singapore

Post by kandyshandy »

I am not sure whether this will help you, but you don't have to log on to the UNIX server.

Check the Tools menu in Designer and click "Configurations"; there you can create a new config file and save it. In the job where you want to use this single-node file, just go to job properties, click 'Add Environment Variable', and select $APT_CONFIG_FILE to specify the new config file.
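
For reference, here is a minimal sketch of what a single-node configuration file looks like. The fastname and resource paths below are placeholders; copy the real values from your existing default.apt:

    {
        node "node1"
        {
            fastname "your_server_name"
            pools ""
            resource disk "/path/to/datasets" {pools ""}
            resource scratchdisk "/path/to/scratch" {pools ""}
        }
    }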
Kandy
_________________
Try and try again…You will succeed at last!!
Shruthi
Participant
Posts: 74
Joined: Sun Oct 05, 2008 10:59 pm
Location: Bangalore

Thanks

Post by Shruthi »

Thanks so much. Do you know where this new config file we create in Designer will be stored? If it gets created on UNIX, I just want to make sure that no other user deletes it.
Shruthi
kandyshandy
Participant
Posts: 597
Joined: Fri Apr 29, 2005 6:19 am
Location: Singapore

Post by kandyshandy »

It will be created in the same place as default.apt, so you may want to keep others informed of that.
Kandy
_________________
Try and try again…You will succeed at last!!
Shruthi
Participant
Posts: 74
Joined: Sun Oct 05, 2008 10:59 pm
Location: Bangalore

Post by Shruthi »

Thanks.
Shruthi
prasadduvasi
Participant
Posts: 19
Joined: Wed Feb 15, 2006 11:08 am

Post by prasadduvasi »

You can create different config files with different numbers of nodes (single node, 2 nodes, 4 nodes) at the following location on your UNIX box:

/opt/datastage/Ascential/DataStage/Configurations

and use these different config files for different jobs in the same project via the $APT_CONFIG_FILE environment variable.
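
For example, assuming a single-node file named 1node.apt (a hypothetical name) saved in that directory, the job's environment variable would be set to:

    $APT_CONFIG_FILE = /opt/datastage/Ascential/DataStage/Configurations/1node.apt

Since the config file is read at run time, switching this one value between the 1-node and 4-node files changes the degree of parallelism without recompiling the job.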
Prasad Duvasi
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Another way is to compile the job in trace mode, which then gives you the ability to force all stages (operators) to execute in sequential mode. When you're done, recompile without trace mode.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
balajisr
Charter Member
Charter Member
Posts: 785
Joined: Thu Jul 28, 2005 8:58 am

Post by balajisr »

Shruthi,

If this issue is the same as the one in your previous thread about the Join stage, then my advice to run on a single node was for debugging purposes only. I advise you to find the root cause of the problem.

It is very difficult for me, sitting in a remote location, to find the exact cause of your issue; I cannot reproduce it here.

For the time being, why don't you try running the job with the default or Auto partitioning method, which will usually be much faster than running it in sequential mode, and verify the results? Once you identify the root cause you can modify the original job.
Shruthi
Participant
Posts: 74
Joined: Sun Oct 05, 2008 10:59 pm
Location: Bangalore

Post by Shruthi »

We tried Auto partitioning as well; it didn't work. As this issue occurs only with huge data volumes, we are not able to trace it back to the root cause. We are still debugging the issue. Meanwhile, due to time constraints, we plan to proceed with a single node.
nani0907
Participant
Posts: 155
Joined: Wed Apr 18, 2007 10:30 am

Post by nani0907 »

Run the job in sequential mode, or use hash partitioning on the key column, because hash partitioning makes sure that the same key values are processed on the same node (see the sketch below). Check it once.
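
To illustrate the principle only (plain Python, not DataStage code; the function and parameter names are made up): hash partitioning routes every row with the same key value to the same partition, so a join performed independently inside each partition still sees all of its matches.

    def hash_partition(rows, key, num_partitions=4):
        # Rows sharing a key value always hash to the same bucket,
        # so matching rows from both join inputs land on the same node.
        partitions = [[] for _ in range(num_partitions)]
        for row in rows:
            partitions[hash(row[key]) % num_partitions].append(row)
        return partitions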
thanks n regards
nani
abhishekachrekar
Participant
Posts: 45
Joined: Wed May 02, 2007 8:30 am
Location: Prague, Czech Republic

Post by abhishekachrekar »

Yes, hash partitioning on both input links will solve this issue.
Regards,
Abhishek