Run on Single node in multi node environment
Moderators: chulett, rschirm, roy
Hi,
Is there any parameter that can be defined at the job level to make a job run on a single node in a multi-node (4-node) environment?
We don't have access to change the config file. Is there any other parameter available in DataStage?
I'm facing an issue with a Join stage that gives different output on each run in a multi-node environment but works fine on a single node. I'm not able to trace this issue, so I want to run the job on just one of the four nodes.
Shruthi
-
- Participant
- Posts: 597
- Joined: Fri Apr 29, 2005 6:19 am
- Location: Singapore
I am not sure whether this will help you, but you don't have to log on to the UNIX server.
Check the Tools menu in Designer and click "Configurations"; from there you can create a new config file and save it. In the job where you want to use this single-node file, go to Job Properties, click 'Add Environment Variable', and select $APT_CONFIG_FILE to specify the new config file.
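For reference, a single-node configuration file created this way follows the standard parallel engine syntax. This is only a sketch: the hostname and the disk/scratchdisk paths below are placeholders you would replace with values valid on your own server.

```
{
	node "node1"
	{
		fastname "your_server_hostname"
		pools ""
		resource disk "/opt/datastage/data" {pools ""}
		resource scratchdisk "/opt/datastage/scratch" {pools ""}
	}
}
```

With only one node defined, every stage in the job runs with a degree of parallelism of one, which is usually enough to check whether the Join results stabilise.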
Kandy
_________________
Try and Try again…You will succeed atlast!!
-
- Participant
- Posts: 19
- Joined: Wed Feb 15, 2006 11:08 am
You can create different config files with different numbers of nodes (single node, 2 nodes, 4 nodes) at the following location on your UNIX box:
/opt/datastage/Ascential/DataStage/Configurations
and use these different config files for different jobs in the same project via the $APT_CONFIG_FILE environment variable.
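If $APT_CONFIG_FILE has been exposed as a job parameter (as described above), you can also override it per run from the command line with dsjob. This is a sketch only: the project name, job name, and config file name below are hypothetical examples.

```shell
# Run the job against a single-node configuration file.
# MyProject, MyJob, and one_node.apt are example names only.
dsjob -run \
      -param '$APT_CONFIG_FILE=/opt/datastage/Ascential/DataStage/Configurations/one_node.apt' \
      -jobstatus \
      MyProject MyJob
```

This lets you switch between the 1-node and 4-node configurations for debugging without recompiling or editing the job design.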
Prasad Duvasi
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Another way is to compile the job in trace mode, which gives you the ability to force all stages (operators) to execute in sequential mode. When you're done, re-compile without trace mode.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Shruthi,
If this issue is the same as in your previous thread regarding the Join stage, then my advice of running on a single node was for debugging purposes only. I advise you to find the root cause of the problem.
It is very difficult for me, sitting in a remote location, to find the exact cause of your issue; I cannot reproduce it here.
For the time being, why don't you try running this job using the default (Auto) partitioning method, which will usually be much faster than running it in sequential mode, and verify the results? Once you identify the root cause, you can modify the original job.
-
- Participant
- Posts: 45
- Joined: Wed May 02, 2007 8:30 am
- Location: Prague, Czech Republic