
Could not load "V10S0_JobName"

Posted: Mon Jun 07, 2010 5:03 pm
by kaps
All of a sudden we are getting the following error.

node_db2node0: Warning: the following libraries failed to load: "V10S0_JobName_trn": Could not load "V10S0_JobName_trn": Could not load module .
System error: No such file or directory; for class "APT_TransformOperatorImplV10S0_JobName_trn".

Any ideas?

Posted: Mon Jun 07, 2010 5:06 pm
by Sreedhar
Hi,

1) Force compile your job
2) If that doesn't work, save the job with a different name and compile.

Posted: Tue Jun 08, 2010 8:59 am
by kaps
This is not a compilation error. The warning appears when we run the job, and the job even finishes fine. I am just wondering why this warning is appearing.

Just FYI...We just upgraded to EE...

Posted: Tue Jun 08, 2010 5:40 pm
by Sreedhar
I am surprised. When I get that error the job does not even compile. What stages does that job have?

Posted: Wed Jun 09, 2010 3:51 am
by antonyraj.deva
Check the value of the APT_TRANSFORM environment variable on the server.

Thanks,
Tony

Posted: Tue Nov 30, 2010 6:13 pm
by kaps
Does anyone know why we get this warning and how I can correct it?
node_db2node0: Warning: the following libraries failed to load: "V14S0_JOBNAME_trn_fmt_tpa": Could not load "V14S0_JOBNAME_trn_fmt_tpa": Could not load module .
System error: No such file or directory; for class "APT_TransformOperatorImplV14S0_JOBNAME_trn_fmt_tpa".
Any input is appreciated.

Posted: Tue Nov 30, 2010 7:54 pm
by ray.wurlod
Your shared library search path is not being correctly adjusted for the location of the callable Transformer code. Check in the job log for the environment variable that contains your shared library search path (one of LD_LIBRARY_PATH, SHLIB_PATH or LIBPATH).
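
For anyone who wants to verify this on AIX (where the loader uses LIBPATH), a minimal check along these lines may help. The project path and RT_BP1094.O directory name are taken from the dsenv posted later in this thread and are per-job, so adjust them to your environment:

# Break the library search path into one entry per line and confirm it
# includes the job's RT_BPnnnn.O directory, which is where the compiled
# Transformer shared library is written.
echo $LIBPATH | tr ':' '\n' | grep RT_BP

# List the compiled Transformer libraries for that job; the file names should
# match the operator named in the warning (e.g. V10S0_JobName_trn).
ls /datastage/DataStage/Projects/dmttst5/RT_BP1094.O | grep _trn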

Posted: Tue Dec 07, 2010 4:54 pm
by kaps
Thanks for the reply Ray.

I see both LIBPATH and LD_LIBRARY_PATH in the job log. Is that correct?
I thought we should only see LIBPATH since the OS is AIX, but I am not sure
where the job gets that LD variable, as it is not in the dsenv file
or in the profile of the user. I even tried explicitly nulling out
LD_LIBRARY_PATH in the profile of the user that runs the job, but that did not help.

I see the following in the dsenv file.
LIBPATH=/datastage/DataStage/Projects/dmttst5/RT_BP1094.O:/datastage/DataStage/Projects/dmttst5/buildop:/opt/dsadm/Ascential/DataStage/DSCAPIOp:
/opt/dsadm/Ascential/DataStage/RTIOperators:/opt/dsadm/Ascential/DataStage/DSParallel:/opt/dsadm/Ascential/DataStage/PXEngine/user_lib:
/opt/dsadm/Ascential/DataStage/PXEngine/lib:/opt/dsadm/Ascential/DataStage/branded_odbc/lib:/opt/dsadm/Ascential/DataStage/DSEngine/lib:
/opt/dsadm/Ascential/DataStage/DSEngine/uvdlls:/opt/dsadm/Ascential/DataStage/DSEngine/java/jre/bin/classic:
/opt/dsadm/Ascential/DataStage/DSEngine/java/jre/bin:/usr/opt/db2_08_01/lib:/opt/IBM/db2/V9.5.0.5/lib32:/usr/lib:/lib:/opt/IBM/db2inst1/sqllib/lib

LD_LIBRARY_PATH=/opt/IBM/db2inst1/sqllib/lib
How can I remove the LD variable? Please advise.
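
For reference, here is one way I have been hunting for where that LD_LIBRARY_PATH entry could be getting set. The db2profile path is only a guess based on the /opt/IBM/db2inst1/sqllib/lib value above, and the dsenv path comes from the LIBPATH entries:

# Search the usual suspects for anything that sets or exports LD_LIBRARY_PATH:
# dsenv itself, the DB2 instance profile that dsenv may source, and the login
# profiles of the user that runs the jobs.
grep -n "LD_LIBRARY_PATH" /opt/dsadm/Ascential/DataStage/DSEngine/dsenv
grep -n "LD_LIBRARY_PATH" /opt/IBM/db2inst1/sqllib/db2profile
grep -n "LD_LIBRARY_PATH" ~/.profile /etc/profile /etc/environment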

Posted: Tue Dec 07, 2010 6:35 pm
by ray.wurlod
At least for testing, add the environment variable $LD_LIBRARY_PATH as a parameter to your job and make its default value the special token $UNSET.

Posted: Wed Dec 08, 2010 11:06 am
by kaps
Ray

I did as you suggested. I added an environment variable called LD_LIBRARY_PATH in Administrator, added the same to the job, and set its value to $UNSET. Now I don't see the variable in my job log, but I still get the same error. Anything else I need to check?

Any input is appreciated.

Thanks

Posted: Wed Dec 08, 2010 12:07 pm
by Mike
"node_db2node0" - is this a node on a remote DB2 server?

My guess is you're trying to execute the process on a remote DB2 server that doesn't have visibility to the load module sitting on your engine tier.

Mike

Posted: Wed Dec 08, 2010 6:34 pm
by kaps
My db2nodes.cfg file looks like this...
1 pmudb05t 1
2 pmudb05t 2
My APT config file looks like this...
{
node "node0"
{
fastname "pmetl05t"
pools "" "node1" "pmetl05t" "mnode"
resource disk "/datastage/tst5/dmttst5/Datasets" {pools ""}
resource scratchdisk "/datastage/tst5/dmttst5/Scratch" {pools ""}
}
node "node1"
{
fastname "pmetl05t"
pools "" "node1" "pmetl05t" "pnode"
resource disk "/datastage/tst5/dmttst5/Datasets" {pools ""}
resource scratchdisk "/datastage/tst5/dmttst5/Scratch" {pools ""}
}
node "node2"
{
fastname "pmetl05t"
pools "" "node2" "pmetl05t" "pnode"
resource disk "/datastage/tst5/dmttst5/Datasets" {pools ""}
resource scratchdisk "/datastage/tst5/dmttst5/Scratch" {pools ""}
}
node "db2node0"
{
fastname "pmudb05t"
pools "db2"
resource disk "/datastage/tst5/dmttst5/Datasets" {pools ""}
resource scratchdisk "/datastage/tst5/dmttst5/" {pools ""}
}
node "db2node1"
{
fastname "pmudb05t"
pools "db2"
resource disk "/datastage/tst5/dmttst5/Datasets" {pools ""}
resource scratchdisk "/datastage/tst5/dmttst5/" {pools ""}
}
node "db2node2"
{
fastname "pmudb05t"
pools "db2"
resource disk "/datastage/tst5/dmttst5/Datasets" {pools ""}
resource scratchdisk "/datastage/tst5/dmttst5/" {pools ""}
}
}
Do I need to have the database node's resource disk and scratchdisk directories created on the DB2 server or on the ETL server?
I have them on the ETL server. Does anyone know why I get the above-mentioned error?
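
One hedged way to sanity-check both points from the engine tier, using the fastnames and paths in the configuration above (this assumes ssh access to the DB2 host and is only a check, not an answer to which server needs them):

# Confirm the resource disk and scratchdisk paths exist on each fastname
# named in the configuration (pmetl05t = ETL tier, pmudb05t = DB2 tier).
for host in pmetl05t pmudb05t; do
  echo "== $host =="
  ssh $host "ls -ld /datastage/tst5/dmttst5 /datastage/tst5/dmttst5/Datasets /datastage/tst5/dmttst5/Scratch"
done

# Also check whether the DB2 host can see the project directory holding the
# compiled Transformer library named in the warning (path from the dsenv
# posted earlier in this thread).
ssh pmudb05t "ls /datastage/DataStage/Projects/dmttst5/RT_BP1094.O"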

Posted: Tue Dec 14, 2010 9:58 am
by kaps
Does anyone have a clue about this?

Posted: Tue Apr 26, 2011 12:36 pm
by sjfearnside
Did you ever get a solution for this problem? If so please post it.

thanks

Posted: Thu Nov 03, 2011 3:40 pm
by darrreever
Hello DataStagers:

Please see this link, which explains what is going on (in some circumstances):

https://www-304.ibm.com/support/docview ... wg21404595

Here is my understanding; please post a correction if I am wrong.

As you know, the Orchestrate engine will try to run (parallel, pipelined and partitioned) on every node in the APT config file. However, as the linked note explains, a node that does not have access to a current copy of the project directory will raise this warning when the job contains a compiled Transformer (or other compiled code), unless the environment is configured to copy transformers between servers.
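
If it helps, the "copy transformers between servers" configuration the note refers to is normally an engine-level environment variable; I believe the one intended is APT_COPY_TRANSFORM_OPERATOR, but please treat that name as my recollection rather than a quote from the linked page. A dsenv-style sketch:

# When set to true, the parallel engine is meant to distribute the compiled
# Transformer shared library to remote processing nodes at run time, instead
# of expecting it to already exist under that node's copy of the project
# directory.
APT_COPY_TRANSFORM_OPERATOR=true
export APT_COPY_TRANSFORM_OPERATOR

It can also be added per project (or per job) through the Administrator, the same way the LD_LIBRARY_PATH parameter was handled earlier in this thread.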

Further, recall that on nearly every stage, under Stage > Advanced, you will see the node pool and resource constraints. If you constrain the stage to run only on a node that has access to the current copy of the project directory, you will not get the error.

Here is what I did: I added the warning message to a message handler (in Director, select the job, open the log, right-click the message, select "Add rule to message handler" and choose the appropriate Action, in my case "Suppress from Log") and reran.

This will prevent the message from showing up as a warning message and you should get a clean run.

Hope this helps.

God Bless :)