ORCHESTRATE program must be started on a node

cleonard1261
Premium Member
Posts: 1
Joined: Tue May 16, 2006 8:48 am

ORCHESTRATE program must be started on a node

Post by cleonard1261 »

I'm getting the error message below. I've seen this before, and it usually means fixing the config file. The first time I got the message, the config file had the IP address as the fastname. I then changed the fastname to the value returned by 'uname -n' and got the same error message. Below is the config file I'm using; I've replaced the server name with HOSTNAME in this post. I've also included the output of hostname and uname -n just to show that my config file is correct. Has anyone seen this behavior caused by something other than a bad config file? Any help would be greatly appreciated.

##F TFSC 000000 17:58:49(007) <main_program> Fatal Error: An ORCHESTRATE program must be started on a node
in the configuration file. This program is running on HOSTNAME
which is not in the configuration file: /home/dsadm/Ascential/DataStage/Configurations/1node.apt
$ hostname
HOSTNAME
$ uname -n
HOSTNAME
$ uname -s
Linux
$ uname -a
Linux HOSTNAME 2.4.21-4.ELsmp #1 SMP Fri Oct 3 17:52:56 EDT 2003 i686 i686 i386 GNU/Linux
$ uname -r
2.4.21-4.ELsmp
$ cat 1node.apt
{
node "node0"
{
fastname "HOSTNAME"
pools "" "node0"
resource disk "/home/dsadm/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/home/dsadm/Ascential/DataStage/Scratch" {pools ""}
}
}
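
A quick way to compare the name in the error with what the config file actually contains, and to confirm that the name resolves locally (a sketch only, using the paths from above; output omitted):

$ uname -n
$ grep fastname /home/dsadm/Ascential/DataStage/Configurations/1node.apt
$ grep -i "$(uname -n)" /etc/hosts

The string after fastname has to match the conductor host's name exactly, including any domain suffix, and that name has to resolve on the machine where the job is started.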
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Welcome aboard. :D

This error is often caused by the configuration file having no nodes in the default node pool, which is not the case here.

Is the DataStage server installed on HOSTNAME? That is, are all the pieces needed by the conductor process, such as access to the DataStage logs, available on HOSTNAME?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Andet
Charter Member
Posts: 63
Joined: Mon Nov 01, 2004 9:40 am
Location: Clayton, MO

Post by Andet »

Check the permissions on the configuration file....
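
For example, with the path from the first post (a sketch only):

$ ls -l /home/dsadm/Ascential/DataStage/Configurations/1node.apt
$ ls -ld /home/dsadm/Ascential/DataStage/Configurations

The user the job runs under (typically dsadm) needs read access to the file and execute (search) permission on every directory in that path.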

Ande
roblew
Charter Member
Posts: 123
Joined: Mon Mar 27, 2006 7:32 pm
Location: San Ramon

Post by roblew »

I have a related question. We are running DSEE 7.5.1A (with the SAP R/3 and BW PACKs) in an MPP environment with two physical servers. We are trying to isolate projects to one server or the other by using different config files. For example, project A would run on server0 and project B would run on server1.

config file for projectA:
{
node "node1"
{
fastname "server0.net"
pools ""
resource disk "/vend/dsadm/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/vend/dsadm/Ascential/DataStage/Scratch" {pools ""}
}
node "node2"
{
fastname "server0.net"
pools ""
resource disk "/vend/dsadm/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/vend/dsadm/Ascential/DataStage/Scratch" {pools ""}
}
}

config file for projectB:
{
node "node0"
{
fastname "server0.net"
pools "conductor"
}
node "node1"
{
fastname "server1.net"
pools ""
resource disk "/vend/dsadm/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/vend/dsadm/Ascential/DataStage/Scratch" {pools ""}
}
node "node2"
{
fastname "server1.net"
pools ""
resource disk "/vend/dsadm/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/vend/dsadm/Ascential/DataStage/Scratch" {pools ""}
}
}

Has this sort of configuration been used and tested with success? We tried it at one point earlier this year but ran into some errors (which I have since forgotten). Some jobs would run successfully, but I remember that others would not. I'm not sure whether the problems were limited to the SAP R/3 jobs or not.

I also remember asking IBM support whether this configuration is supported, and they said it was not.

Also, I noticed the APT_PM_CONDUCTOR_HOSTNAME parameter. Would it help at all in my situation? From the description, it sounds like I could use it to omit the "conductor" pool altogether. I haven't noticed whether setting this parameter makes any difference.

APT_PM_CONDUCTOR_HOSTNAME
The network name of the processing node from which you invoke a
job should be included in the configuration file as either a node or a
fastname. If the network name is not included in the configuration file,
DataStage users must set the environment variable
APT_PM_CONDUCTOR_HOSTNAME to the name of the node invoking
the DataStage job.
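
For reference, it is just an ordinary environment variable, so it can be exported before the run, set in dsenv, or added as a project-level environment variable in Administrator (a sketch only; server0.net here is the conductor host from my example configs):

$ export APT_PM_CONDUCTOR_HOSTNAME=server0.net
$ echo $APT_PM_CONDUCTOR_HOSTNAME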


I have also tried this configuration for projectB (with and without setting APT_PM_CONDUCTOR_HOSTNAME):
{
node "node1"
{
fastname "server1.net"
pools ""
resource disk "/vend/dsadm/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/vend/dsadm/Ascential/DataStage/Scratch" {pools ""}
}
node "node2"
{
fastname "server1.net"
pools ""
resource disk "/vend/dsadm/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/vend/dsadm/Ascential/DataStage/Scratch" {pools ""}
}
}


This is our current working configuration for projectB:

{
node "node0"
{
fastname "server0.net"
pools "" "conductor"
resource disk "/vend/dsadm/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/vend/dsadm/Ascential/DataStage/Scratch" {pools ""}
}
node "node1"
{
fastname "server1.net"
pools ""
resource disk "/vend/dsadm/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/vend/dsadm/Ascential/DataStage/Scratch" {pools ""}
}
node "node2"
{
fastname "server1.net"
pools ""
resource disk "/vend/dsadm/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/vend/dsadm/Ascential/DataStage/Scratch" {pools ""}
}
}
Andet
Charter Member
Posts: 63
Joined: Mon Nov 01, 2004 9:40 am
Location: Clayton, MO

Post by Andet »

Whew, I guess you had to be there....
DSEE has to be installed on each of your servers, or installed on one server and NFS-mounted on the other.
The fastname of the server where the job is running has to be in the config file, along with the fastnames of all of the servers where the database has nodes defined.
I am assuming that you are using a partitioned database. Are the tables for both projects defined on both servers?

Ande
roblew
Charter Member
Posts: 123
Joined: Mon Mar 27, 2006 7:32 pm
Location: San Ramon

Post by roblew »

In our cluster, the DataStage binaries are NFS mounted across the nodes and databases are all remote from the DataStage servers.

Also, the databases are not usually used across projects, but there's nothing saying they could not be used.

So are you saying that, since the job is running on server0 (with the binaries NFS-mounted to server1), server0 needs to be in the config file?

If so, can we safely put that single node only in the "conductor" pool, and not in the "" default pool, in order to avoid actual job processing on server0 (as in the second example I posted above)?
Andet
Charter Member
Posts: 63
Joined: Mon Nov 01, 2004 9:40 am
Location: Clayton, MO

Post by Andet »

You can't avoid some processing on the DataStage server. If nothing else, the startup and shutdown phases of the job run locally.
When you said you were running an MPP system, I guess I assumed the tables and databases were defined across the nodes.
If the database you are going against is only on server1, and its partitions only exist on server1, I would think that, at a minimum, the DataStage server's node and the server1 nodes would need to be in the configuration file. There is no requirement that all nodes be used in any processing. I believe you can define a node pool that only exists on server1 and specify that node pool in all stages of the job, roughly as in the sketch below.
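
Something along these lines (a sketch only; the pool name "server1only" is illustrative, and the paths are reused from your configs):

{
node "node0"
{
fastname "server0.net"
pools "" "conductor"
resource disk "/vend/dsadm/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/vend/dsadm/Ascential/DataStage/Scratch" {pools ""}
}
node "node1"
{
fastname "server1.net"
pools "" "server1only"
resource disk "/vend/dsadm/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/vend/dsadm/Ascential/DataStage/Scratch" {pools ""}
}
node "node2"
{
fastname "server1.net"
pools "" "server1only"
resource disk "/vend/dsadm/Ascential/DataStage/Datasets" {pools ""}
resource scratchdisk "/vend/dsadm/Ascential/DataStage/Scratch" {pools ""}
}
}

Then constrain each stage to the "server1only" pool (the node pool constraint on the stage's Advanced tab), or point the project's jobs at this file with $APT_CONFIG_FILE, so that actual processing stays on server1 while the conductor host remains in the file.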

Ande