Could not load "V10S0_JobName"
Moderators: chulett, rschirm, roy
All of a sudden we are getting the following error.
node_db2node0: Warning: the following libraries failed to load: "V10S0_JobName_trn": Could not load "V10S0_JobName_trn": Could not load module .
System error: No such file or directory; for class "APT_TransformOperatorImplV10S0_JobName_trn".
Any idea?
- Premium Member
- Posts: 138
- Joined: Wed Jul 16, 2008 9:51 pm
- Location: Kolkata
Does anyone know why we get this warning and how I can correct it? Any input is appreciated.

node_db2node0: Warning: the following libraries failed to load: "V14S0_JOBNAME_trn_fmt_tpa": Could not load "V14S0_JOBNAME_trn_fmt_tpa": Could not load module .
System error: No such file or directory; for class "APT_TransformOperatorImplV14S0_JOBNAME_trn_fmt_tpa".
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Your shared library search path is not being correctly adjusted for the location of the callable Transformer code. Check in the job log for the environment variable that contains your shared library search path (one of LD_LIBRARY_PATH, SHLIB_PATH or LIBPATH).
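For example, on AIX (where LIBPATH is the variable that matters) a quick sanity check might look like the following. This is only a sketch: the RT_BP directory name and project path are placeholders you would take from your own job log.

# What the engine exports (assumes a default $DSHOME):
grep -E '^(LIBPATH|LD_LIBRARY_PATH)=' $DSHOME/dsenv

# Does the compiled Transformer library actually exist where the
# job's RT_BP directory says it should? (placeholder path and job name)
ls -l /your/project/path/RT_BPnnnn.O/V10S0_JobName_trn.*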
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Thanks for the reply, Ray.
I see both LIBPATH and LD_LIBRARY_PATH in the job log. Is that correct?
I thought we should only see LIBPATH, since the OS is AIX, but I am not sure where the job gets that LD variable: it is not in the dsenv file, nor in the profile of the user. I even tried explicitly nulling out LD_LIBRARY_PATH in the profile of the user who runs the job, but that did not help.
I see the following in the dsenv file.
How can I remove the LD variable? Please advise.

LIBPATH=/datastage/DataStage/Projects/dmttst5/RT_BP1094.O:/datastage/DataStage/Projects/dmttst5/buildop:/opt/dsadm/Ascential/DataStage/DSCAPIOp:
/opt/dsadm/Ascential/DataStage/RTIOperators:/opt/dsadm/Ascential/DataStage/DSParallel:/opt/dsadm/Ascential/DataStage/PXEngine/user_lib:
/opt/dsadm/Ascential/DataStage/PXEngine/lib:/opt/dsadm/Ascential/DataStage/branded_odbc/lib:/opt/dsadm/Ascential/DataStage/DSEngine/lib:
/opt/dsadm/Ascential/DataStage/DSEngine/uvdlls:/opt/dsadm/Ascential/DataStage/DSEngine/java/jre/bin/classic:
/opt/dsadm/Ascential/DataStage/DSEngine/java/jre/bin:/usr/opt/db2_08_01/lib:/opt/IBM/db2/V9.5.0.5/lib32:/usr/lib:/lib:/opt/IBM/db2inst1/sqllib/lib
LD_LIBRARY_PATH=/opt/IBM/db2inst1/sqllib/lib
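(A sketch of one way to take LD_LIBRARY_PATH out of the picture, assuming your site lets you edit dsenv and that the classic uv -admin engine restart applies to your version:

# in $DSHOME/dsenv, before the LIBPATH export:
unset LD_LIBRARY_PATH

# restart the engine so new processes stop inheriting the old environment:
cd $DSHOME && ./bin/uv -admin -stop && ./bin/uv -admin -start

Note that anything sourced by the shell that starts the engine, for example a DB2 profile script pulled in from dsenv, can reintroduce the variable.)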
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Ray,

I did as you suggested. I added an environment variable called LD_LIBRARY_PATH in Administrator, added the same in the job, and set its value to $UNSET. Now I don't see the variable in my job log, but I still get the same error. Anything else I need to check?
Any input is appreciated.
Thanks
My db2nodes.cfg file looks like this:

1 pmudb05t 1
2 pmudb05t 2

My APT_CONFIG file looks like this:

{
node "node0"
{
fastname "pmetl05t"
pools "" "node1" "pmetl05t" "mnode"
resource disk "/datastage/tst5/dmttst5/Datasets" {pools ""}
resource scratchdisk "/datastage/tst5/dmttst5/Scratch" {pools ""}
}
node "node1"
{
fastname "pmetl05t"
pools "" "node1" "pmetl05t" "pnode"
resource disk "/datastage/tst5/dmttst5/Datasets" {pools ""}
resource scratchdisk "/datastage/tst5/dmttst5/Scratch" {pools ""}
}
node "node2"
{
fastname "pmetl05t"
pools "" "node2" "pmetl05t" "pnode"
resource disk "/datastage/tst5/dmttst5/Datasets" {pools ""}
resource scratchdisk "/datastage/tst5/dmttst5/Scratch" {pools ""}
}
node "db2node0"
{
fastname "pmudb05t"
pools "db2"
resource disk "/datastage/tst5/dmttst5/Datasets" {pools ""}
resource scratchdisk "/datastage/tst5/dmttst5/" {pools ""}
}
node "db2node1"
{
fastname "pmudb05t"
pools "db2"
resource disk "/datastage/tst5/dmttst5/Datasets" {pools ""}
resource scratchdisk "/datastage/tst5/dmttst5/" {pools ""}
}
node "db2node2"
{
fastname "pmudb05t"
pools "db2"
resource disk "/datastage/tst5/dmttst5/Datasets" {pools ""}
resource scratchdisk "/datastage/tst5/dmttst5/" {pools ""}
}
}
Do I need to have the database node resource disk and scratch disk directories created on the DB2 server or on the ETL server? I have them on the ETL server. Does anyone know why I get the above-mentioned error?
- Premium Member
- Posts: 278
- Joined: Wed Oct 03, 2007 8:45 am
- Participant
- Posts: 19
- Joined: Tue Feb 23, 2010 11:15 am
- Location: Los Angeles
Hello DataStagers:
Please see this link to explain what is going on (in some circumstances):
https://www-304.ibm.com/support/docview ... wg21404595
Here is my understanding; please post a correction if I am wrong.
As you know, the Orchestrate engine will try to run the job, pipelined and partitioned in parallel, on every node in the APT_CONFIG file. However, per the link above, a job that runs on a node without access to the current copy of the project directory "will result in warning message if that job contains a compiled transformer (or other compiled code) unless environment is configured to copy transformers between servers."
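In the configuration posted above, a quick way to test this (assuming rsh or ssh access from the ETL host to the DB2 host is configured, as the parallel engine requires) is to check whether the project's RT_BP directory is visible from the remote fastname:

# from the ETL host (pmetl05t); substitute ssh if that is what your site uses:
rsh pmudb05t ls /datastage/DataStage/Projects/dmttst5/RT_BP1094.O

# "No such file or directory" here matches the node_db2node0 warning: the
# compiled transform library is simply not reachable from that node.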
Further, recall that on nearly every stage, under Stage > Advanced, you will see the Node Pool and Resource Constraints. If you constrain the stage to run only on nodes that have access to the current copy of the project directory, you will not get the error.
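Alternatively, the technote mentions configuring the environment to copy transformers between servers; my understanding (again, please correct me if this is wrong for your version) is that this is the APT_COPY_TRANSFORM_OPERATOR environment variable, which you could set project-wide in Administrator or in dsenv, for example:

# when True (and remote shell access works), the engine copies the compiled
# transform library out to the remote nodes at run time:
APT_COPY_TRANSFORM_OPERATOR=True
export APT_COPY_TRANSFORM_OPERATOR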
Here is what I did: I added the warning message to a Message Handler (in Director, select the job, open the log, right-click the message, select "Add rule to message handler", and choose the appropriate Action, in my case "Suppress from Log"), and reran.
This prevents the message from showing up as a warning, and you should get a clean run.
Hope this helps.
God Bless
Darryl