We're deploying jobs to USS on a mainframe.
In development I'd like to be able to run parallel jobs on a different LPAR on the mainframe from time to time (for example, we normally run on LPAR1, but occasionally I'd like to run on LPAR2).
I'd assumed it would be enough to change the remote deployment settings on the project in Administrator to point at LPAR2, change the config file to reference fastname lpar2, and then just re-run the parallel job. However, when the job starts we get this message:

    Parallel job initiated on LPAR2

but then an error like this:

    main_program: Fatal Error: An ORCHESTRATE program must be started on a node
    in the configuration file. This program is running on LPAR1
    which is not in the configuration file: /dev/devuser/Configurations/default.apt

The parallel job appears to attempt to start on LPAR2, but somehow it "knows" where it was compiled.
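For reference, the config file change really is just the fastname entry; a minimal single-node configuration in the usual APT format would look something like this (the node name, pools and resource paths here are made-up placeholders, not our real entries):

    {
        node "node1"
        {
            fastname "lpar2"
            pools ""
            resource disk "/dev/devuser/datasets" {pools ""}
            resource scratchdisk "/dev/devuser/scratch" {pools ""}
        }
    }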
We don't have the APT_PM_CONDUCTOR_NODE variable set.
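If that variable does turn out to be relevant, I assume setting it would just be a plain export in the job's environment; this is a guess on my part, not something we've tried:

    export APT_PM_CONDUCTOR_NODE=lpar2   # untested guess: point the conductor at the LPAR2 node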
Compiling a job with the remote deployment options set results in a series of files on the remote machine:
OshExecute.sh OshScript.osh evdepfile jpdepfile pxrun.sh
I can see no reference to a machine name in any of these, and running the parallel job outside of DataStage (using pxrun.sh) results in a successful run of the job. Running the job under DataStage's control doesn't work unless the job is recompiled.
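For completeness, the manual run outside DataStage is nothing more elaborate than invoking the deployed script from its directory; the path below is a placeholder for our real deployment directory, and as far as I can tell pxrun.sh picks up its environment (including the configuration file) from the deployed evdepfile/jpdepfile:

    cd /dev/devuser/deploy/MyProject/MyJob   # hypothetical deployed-job directory on LPAR2
    sh pxrun.sh                              # runs the deployed OSH script; this works fine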
I suspect very few people actually use the remote deployment option, but does anyone know whether it's possible to switch the remote execution environment without having to recompile parallel jobs? Or are the remote deployment project settings stored with the job in the DataStage server repository at compile time?
Thanks.