I have two configurations:
node1:
{
node "node1"
{
fastname "tnet368"
pools ""
resource disk "/prop/loc/tnbilfor/dtstnbilfor/work/node01/disk00/" {pools ""}
resource scratchdisk "/prop/loc/tnbilfor/dtstnbilfor/work/node01/scratch00/" {pools ""}
}
}
node2:
{
node "node2"
{
fastname "tnet368"
pools ""
resource disk "/prop/loc/tnbilfor/dtstnbilfor/work/node02/disk00/" {pools ""}
resource scratchdisk "/prop/loc/tnbilfor/dtstnbilfor/work/node02/scratch00/" {pools ""}
}
}
If I run a parallel job with the first configuration, it runs fine. If I run the same job with the second configuration, it aborts with:
descriptions: Error when checking operator: Node name "node1" not in config file
If I reset the job and run it again with the second configuration, it runs fine. If I then run it with the first, it aborts.
It seems that the engine remembers the last configuration and expects the same node to be available.
Can anybody help?
parallel job on one node
I couldn't reproduce that here. Which "operator" is giving the error?
Re: parallel job on one node
Hi,
Since the configuration is related to the server, you need to restart the server for the configuration file to get updated.
Regards
Thiru
samythiru - no, there is no need to restart the DataStage engine when using different PX configuration files.
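For reference, the active configuration is normally selected per job run through the APT_CONFIG_FILE environment variable (often exposed as a job parameter), which is why no engine restart is needed. A minimal sketch, using a placeholder path rather than any real location:

```shell
# APT_CONFIG_FILE selects the PX configuration file for this run;
# the path below is purely illustrative, not a real location.
export APT_CONFIG_FILE=/path/to/node2.apt
echo "Using config: $APT_CONFIG_FILE"
```

Each run reads whichever file the variable points to at that moment, so switching files between runs requires no restart.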
p.s. No need to quote messages, either.
Is your job design using Data Set or File Set stage(s)? These are, in some sense, tied to the configuration file that was in effect when they were created. In particular, the resource disk that specifies the location of their data files must be accessible under whatever configuration is subsequently used to access them. That is not the case between your two configuration files.
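If the intent is only to redirect the work directories, one possible workaround (a sketch based on the two configurations quoted above, not a tested recipe) is to keep the same logical node name in both files, since it is the node name, not the fastname or the resource paths, that the error message complains about:

```
{
node "node1"
{
fastname "tnet368"
pools ""
resource disk "/prop/loc/tnbilfor/dtstnbilfor/work/node02/disk00/" {pools ""}
resource scratchdisk "/prop/loc/tnbilfor/dtstnbilfor/work/node02/scratch00/" {pools ""}
}
}
```

Note that a Data Set's segment files still live at the paths recorded when it was written, so the original resource disk directories must also remain readable from the node.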
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
This will happen if your job has a Data Set or File Set stage and
1. You have used the 'Overwrite' option.
2. The data set or file set already exists.
So when PX tries to find the part file (I can't remember the correct term for the data file on a particular partition) to delete, it cannot find it because the node name has changed, and it throws the error.
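The failure mode described above can be sketched in a few lines. This is illustrative pseudologic only, not actual DataStage code: it just mimics how the engine resolves the node names recorded in an existing Data Set against the currently active configuration file.

```python
# Illustrative sketch, NOT DataStage code: a Data Set records the logical
# node names of the config it was written under; opening it (e.g. to
# overwrite) checks those names against the current config's nodes.

def check_nodes(dataset_nodes, config_nodes):
    """Raise if any node recorded in the dataset is absent from the config."""
    for node in dataset_nodes:
        if node not in config_nodes:
            raise ValueError('Node name "%s" not in config file' % node)

# A Data Set written under the first configuration records "node1":
recorded = ["node1"]

check_nodes(recorded, ["node1"])        # run under the first config: passes

try:
    check_nodes(recorded, ["node2"])    # run under the second config
except ValueError as e:
    print(e)                            # Node name "node1" not in config file
```

Resetting the job and recreating the Data Set under the second configuration simply records "node2" instead, which is why the error then flips to the first configuration.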
Cheers
Aakash
Stop trying to be perfect… let's evolve.