
Unable to open project 'XYZ' - 81016 (Job Migrated)

Posted: Tue Oct 13, 2009 3:47 am
by pavans
I have a job design that was migrated from SUN Solaris/DS 7.5.1 to AIX/DS 7.5.3.
The job design is as follows:
Oracle -- Transformer (4 links coming out) -- Funnel -- Seq File
In the Transformer I call a server routine that takes 4 arguments: job name, table name, source name and file name.
Here we capture the table count, tables loaded, tables not loaded, etc.
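
For illustration, a minimal sketch of what such a routine looks like (the variable names and the audit-record format here are hypothetical, not our actual code, and a real routine would also handle appending and file creation):

Code:

* Hypothetical sketch of a server routine taking the four arguments
* described above: JobName, TableName, SourceName, FileName.
* A server routine returns its result through Ans.
      Ans = 0
      OpenSeq FileName To AuditFile Then
         Line = JobName : "," : SourceName : "," : TableName
         WriteSeq Line To AuditFile Else Ans = -1
         CloseSeq AuditFile
      End Else
         Ans = -1   ;* audit file missing or could not be opened
      End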

The job is aborting with the fatal error below:

Trns,0: Unable to open project 'XYZ' - 81016.
Trns,0: The runLocally() of the operator failed. [api/operator_rep.C:4069]
Trns,0: Operator terminated abnormally: runLocally did not return APT_StatusOk [processmgr/rtpexecutil.C:167]
main_program: Unexpected exit status 1 [processmgr/slprocess.C:420]
Unexpected exit status 1 [processmgr/slprocess.C:420] (this line repeated six times)

I searched the forum for "Unable to open project 'XYZ' - 81016." but could not find any solution; the error message looks misleading.

Any ideas....

Thanks in advance.

Posted: Tue Oct 13, 2009 1:47 pm
by ray.wurlod
How was the actual migration performed?

Code:

SELECT * FROM SYS.MESSAGE WHERE @ID = '081016';
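
That will retrieve the text of message 081016 from the engine's SYS.MESSAGE file; you can run it from the Administrator client's Command window while attached to the project.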

Posted: Tue Oct 13, 2009 11:15 pm
by pavans
Thanks Ray for the quick reply.

We manually exported/imported the .dsx from SUN Solaris/DS 7.5.1 to AIX/DS 7.5.3.
We have all the necessary config files created on the new AIX server, and the rest of the code is working fine.
This is the only job with a BASIC Transformer calling a server routine, and it is the only one aborting.

Posted: Tue Oct 13, 2009 11:51 pm
by chulett
If you create a new job from scratch with a BASIC Transformer stage in it, does it work correctly or abort as well?

Posted: Wed Oct 14, 2009 2:54 am
by pavans
I created a simple job (Oracle -- BASIC Transformer -- Seq File) with a BASIC Transformer in it. Even this, without any routine call, aborts with the same error:
BASIC_Transformer_10,1: Unable to open project 'EDW' - 81016.

And when I call a server routine in the BASIC Transformer, the job aborts with the same error again:
BASIC_Transformer_10,1: Unable to open project 'EDW' - 81016.

Posted: Tue Oct 20, 2009 2:25 am
by pavans
Currently we are using DataStage 7.5.3 on AIX 5.3.
Do you think we need to request any patches from IBM to resolve this issue?
Is there a workaround? Any ideas?

Posted: Tue Oct 20, 2009 5:24 am
by chulett
I'm not aware of any circumstances that would cause the BASIC Transformer in a PX job to fail like that. I would think the next step would be to contact your official support provider and see if they can resolve it, then post the results back here.

As to a 'workaround', to me that would be to not use that stage. Convert whatever the server routine does into its PX equivalent, for example deriving the counts in a parallel Transformer.

Posted: Tue Oct 20, 2009 5:47 am
by ArndW
Have you tried a forced compile on the job? Is "XYZ" a project name on the old machine?

Posted: Tue Oct 20, 2009 5:53 am
by chulett
Seeing as how this happens with newly created jobs as well, I don't see how...

Posted: Tue Oct 20, 2009 5:59 am
by ArndW
Craig - I missed that part :oops: I wonder if this could be a distributed installation... what does the APT config file look like?

Posted: Tue Oct 20, 2009 7:50 am
by chulett
I was wondering the same thing, but all that's been mentioned is a single server. Pavan, can you confirm?

Posted: Tue Oct 20, 2009 11:23 pm
by pavans
The project name is the same on the old and new machines.
I tried a forced compile as well, but it didn't work.
The config file we are using is:
main_program: APT configuration file: /apps/Ascential/DataStage/Configurations/XYZDefault.apt

Code:

/*  DataStage Configuration File - Project=XYZ

    File automatically generated - 2009/07/16 09:25
*/
{
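   /* Layout, for reference: one conductor-only node in pool "Conductor"
      on dseax006, plus four compute nodes (two logical nodes on each of
      the hosts etlax007 and etlax008), all sharing the same eight
      /worknodeNN dataset and scratch mounts. */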
   node "Conductor_01"
   {
      fastname "dseax006"
      pools "Conductor"
      resource disk "/worknode/datasets" {pools ""}
      resource scratchdisk "/worknode/scratch" {pools ""}
   }
   node "etlax007_01"
   {
      fastname "etlax007"
      pools ""
      resource disk "/worknode07/datasets" {pools ""}
      resource disk "/worknode01/datasets" {pools ""}
      resource disk "/worknode08/datasets" {pools ""}
      resource disk "/worknode05/datasets" {pools ""}
      resource disk "/worknode03/datasets" {pools ""}
      resource disk "/worknode04/datasets" {pools ""}
      resource disk "/worknode06/datasets" {pools ""}
      resource disk "/worknode02/datasets" {pools ""}
      resource scratchdisk "/worknode07/scratch" {pools ""}
      resource scratchdisk "/worknode01/scratch" {pools ""}
      resource scratchdisk "/worknode08/scratch" {pools ""}
      resource scratchdisk "/worknode05/scratch" {pools ""}
      resource scratchdisk "/worknode03/scratch" {pools ""}
      resource scratchdisk "/worknode04/scratch" {pools ""}
      resource scratchdisk "/worknode06/scratch" {pools ""}
      resource scratchdisk "/worknode02/scratch" {pools ""}
   }
   node "etlax008_02"
   {
      fastname "etlax008"
      pools ""
      resource disk "/worknode07/datasets" {pools ""}
      resource disk "/worknode01/datasets" {pools ""}
      resource disk "/worknode08/datasets" {pools ""}
      resource disk "/worknode05/datasets" {pools ""}
      resource disk "/worknode03/datasets" {pools ""}
      resource disk "/worknode04/datasets" {pools ""}
      resource disk "/worknode06/datasets" {pools ""}
      resource disk "/worknode02/datasets" {pools ""}
      resource scratchdisk "/worknode07/scratch" {pools ""}
      resource scratchdisk "/worknode01/scratch" {pools ""}
      resource scratchdisk "/worknode08/scratch" {pools ""}
      resource scratchdisk "/worknode05/scratch" {pools ""}
      resource scratchdisk "/worknode03/scratch" {pools ""}
      resource scratchdisk "/worknode04/scratch" {pools ""}
      resource scratchdisk "/worknode06/scratch" {pools ""}
      resource scratchdisk "/worknode02/scratch" {pools ""}
   }
   node "etlax007_03"
   {
      fastname "etlax007"
      pools ""
      resource disk "/worknode07/datasets" {pools ""}
      resource disk "/worknode01/datasets" {pools ""}
      resource disk "/worknode08/datasets" {pools ""}
      resource disk "/worknode05/datasets" {pools ""}
      resource disk "/worknode03/datasets" {pools ""}
      resource disk "/worknode04/datasets" {pools ""}
      resource disk "/worknode06/datasets" {pools ""}
      resource disk "/worknode02/datasets" {pools ""}
      resource scratchdisk "/worknode07/scratch" {pools ""}
      resource scratchdisk "/worknode01/scratch" {pools ""}
      resource scratchdisk "/worknode08/scratch" {pools ""}
      resource scratchdisk "/worknode05/scratch" {pools ""}
      resource scratchdisk "/worknode03/scratch" {pools ""}
      resource scratchdisk "/worknode04/scratch" {pools ""}
      resource scratchdisk "/worknode06/scratch" {pools ""}
      resource scratchdisk "/worknode02/scratch" {pools ""}
   }
   node "etlax008_04"
   {
      fastname "etlax008"
      pools ""
      resource disk "/worknode07/datasets" {pools ""}
      resource disk "/worknode01/datasets" {pools ""}
      resource disk "/worknode08/datasets" {pools ""}
      resource disk "/worknode05/datasets" {pools ""}
      resource disk "/worknode03/datasets" {pools ""}
      resource disk "/worknode04/datasets" {pools ""}
      resource disk "/worknode06/datasets" {pools ""}
      resource disk "/worknode02/datasets" {pools ""}
      resource scratchdisk "/worknode07/scratch" {pools ""}
      resource scratchdisk "/worknode01/scratch" {pools ""}
      resource scratchdisk "/worknode08/scratch" {pools ""}
      resource scratchdisk "/worknode05/scratch" {pools ""}
      resource scratchdisk "/worknode03/scratch" {pools ""}
      resource scratchdisk "/worknode04/scratch" {pools ""}
      resource scratchdisk "/worknode06/scratch" {pools ""}
      resource scratchdisk "/worknode02/scratch" {pools ""}
   }
}
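(The job picks this file up through the APT_CONFIG_FILE environment variable, which is how the path appears in the log line above.)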
We have one main server and two cluster servers.

The job that aborts runs on this same environment, which is our Dev server.
Let me know if I can provide any further information.

Posted: Thu Nov 28, 2013 1:41 am
by ulab
Please update: what was the resolution? How was this issue resolved?

Posted: Thu Nov 28, 2013 10:20 am
by chulett
Even if you have a similar error message, please start a new post and tell us the gory details of the issue you are facing. I sincerely doubt you are migrating a 7.5 PX job from SUN to AIX. :wink: