Could not load "V10S0_JobName"

kaps
Participant
Posts: 452
Joined: Tue May 10, 2005 12:36 pm

Could not load "V10S0_JobName"

Post by kaps »

All of a sudden we are getting the following error.

node_db2node0: Warning: the following libraries failed to load: "V10S0_JobName_trn": Could not load "V10S0_JobName_trn": Could not load module .
System error: No such file or directory; for class "APT_TransformOperatorImplV10S0_JobName_trn".

Any idea?
Sreedhar
Participant
Posts: 187
Joined: Mon Oct 30, 2006 12:16 am

Post by Sreedhar »

Hi,

1) Force compile your job
2) If that doesn't work, save the job with a different name and compile.
Regards,
Shree
785-816-0728
kaps
Participant
Posts: 452
Joined: Tue May 10, 2005 12:36 pm

Post by kaps »

This is not a compilation error. This warning appears when we run the job, and the job even finishes fine. I am just wondering why this warning is coming up.

Just FYI... we just upgraded to EE...
Sreedhar
Participant
Posts: 187
Joined: Mon Oct 30, 2006 12:16 am

Post by Sreedhar »

I am surprised. When I get that error, the job does not even compile. What stages does that job have?
Regards,
Shree
785-816-0728
antonyraj.deva
Premium Member
Posts: 138
Joined: Wed Jul 16, 2008 9:51 pm
Location: Kolkata

Post by antonyraj.deva »

Check the value of the APT_TRANSFORM environment variable on the server.

Thanks,
Tony
kaps
Participant
Posts: 452
Joined: Tue May 10, 2005 12:36 pm

Post by kaps »

Does anyone know why we get this warning and how I can correct it?
node_db2node0: Warning: the following libraries failed to load: "V14S0_JOBNAME_trn_fmt_tpa": Could not load "V14S0_JOBNAJME_trn_fmt_tpa": Could not load module .
System error: No such file or directory; for class "APT_TransformOperatorImplV14S0_JOBNAME_trn_fmt_tpa".
Any input is appreciated.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Your shared library search path is not being correctly adjusted for the location of the callable Transformer code. Check in the job log for the environment variable that contains your shared library search path (one of LD_LIBRARY_PATH, SHLIB_PATH or LIBPATH).
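
For example, a minimal check from the engine tier (a sketch only, assuming AIX, where LIBPATH is the relevant variable, and assuming the compiled transformer libraries live under the project's RT_BPnnnn.O directory, as the dsenv excerpt later in this thread suggests):

# Assumed paths for illustration only; substitute your own project directory.
# 1. Does the compiled transformer library actually exist?
ls -l /datastage/DataStage/Projects/dmttst5/RT_BP*.O/*_trn*
# 2. Is that directory on the library search path the job log reports?
echo "$LIBPATH" | tr ':' '\n' | grep 'RT_BP'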
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
kaps
Participant
Posts: 452
Joined: Tue May 10, 2005 12:36 pm

Post by kaps »

Thanks for the reply Ray.

I see both LIBPATH and LD_LIBRARY_PATH in the job log. Is that correct?
I thought we should only see LIBPATH since the OS is AIX, but I am not sure
where the job gets the LD variable from, as it is not in the dsenv file
or in the profile of the user. I even tried explicitly nulling out
LD_LIBRARY_PATH in the profile of the user that runs the job, but that did not help.

I see the following in the dsenv file.
LIBPATH=/datastage/DataStage/Projects/dmttst5/RT_BP1094.O:/datastage/DataStage/Projects/dmttst5/buildop:/opt/dsadm/Ascential/DataStage/DSCAPIOp:
/opt/dsadm/Ascential/DataStage/RTIOperators:/opt/dsadm/Ascential/DataStage/DSParallel:/opt/dsadm/Ascential/DataStage/PXEngine/user_lib:
/opt/dsadm/Ascential/DataStage/PXEngine/lib:/opt/dsadm/Ascential/DataStage/branded_odbc/lib:/opt/dsadm/Ascential/DataStage/DSEngine/lib:
/opt/dsadm/Ascential/DataStage/DSEngine/uvdlls:/opt/dsadm/Ascential/DataStage/DSEngine/java/jre/bin/classic:
/opt/dsadm/Ascential/DataStage/DSEngine/java/jre/bin:/usr/opt/db2_08_01/lib:/opt/IBM/db2/V9.5.0.5/lib32:/usr/lib:/lib:/opt/IBM/db2inst1/sqllib/lib

LD_LIBRARY_PATH=/opt/IBM/db2inst1/sqllib/lib
How can I remove the LD variable? Please advise.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

At least for testing, add the environment variable $LD_LIBRARY_PATH as a parameter to your job and make its default value the special token $UNSET.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
kaps
Participant
Posts: 452
Joined: Tue May 10, 2005 12:36 pm

Post by kaps »

Ray

I did as you suggested. I added an environment variable called LD_LIBRARY_PATH in Administrator, added it to the job, and set its value to $UNSET. Now I don't see the variable in my job log, but I still get the same error. Is there anything else I need to check?

Any input is appreciated.

Thanks
Mike
Premium Member
Posts: 1021
Joined: Sun Mar 03, 2002 6:01 pm
Location: Tampa, FL

Post by Mike »

"node_db2node0" - is this a node on a remote DB2 server?

My guess is you're trying to execute the process on a remote DB2 server that doesn't have visibility to the load module sitting on your engine tier.
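
One way to test that guess (a hedged sketch; the host name below is a placeholder and the path is the engine-tier project directory quoted earlier in this thread):

# db2_host is a placeholder for the remote DB2 server's host name.
ssh db2_host "ls -l /datastage/DataStage/Projects/dmttst5/RT_BP1094.O"
# If that directory (or the *_trn* libraries inside it) is not visible on the
# remote node, the transform operator cannot be loaded there at run time.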

Mike
kaps
Participant
Posts: 452
Joined: Tue May 10, 2005 12:36 pm

Post by kaps »

My db2nodes.cfg file looks like this...
1 pmudb05t 1
2 pmudb05t 2
My APT config file looks like this...
{
node "node0"
{
fastname "pmetl05t"
pools "" "node1" "pmetl05t" "mnode"
resource disk "/datastage/tst5/dmttst5/Datasets" {pools ""}
resource scratchdisk "/datastage/tst5/dmttst5/Scratch" {pools ""}
}
node "node1"
{
fastname "pmetl05t"
pools "" "node1" "pmetl05t" "pnode"
resource disk "/datastage/tst5/dmttst5/Datasets" {pools ""}
resource scratchdisk "/datastage/tst5/dmttst5/Scratch" {pools ""}
}
node "node2"
{
fastname "pmetl05t"
pools "" "node2" "pmetl05t" "pnode"
resource disk "/datastage/tst5/dmttst5/Datasets" {pools ""}
resource scratchdisk "/datastage/tst5/dmttst5/Scratch" {pools ""}
}
node "db2node0"
{
fastname "pmudb05t"
pools "db2"
resource disk "/datastage/tst5/dmttst5/Datasets" {pools ""}
resource scratchdisk "/datastage/tst5/dmttst5/" {pools ""}
}
node "db2node1"
{
fastname "pmudb05t"
pools "db2"
resource disk "/datastage/tst5/dmttst5/Datasets" {pools ""}
resource scratchdisk "/datastage/tst5/dmttst5/" {pools ""}
}
node "db2node2"
{
fastname "pmudb05t"
pools "db2"
resource disk "/datastage/tst5/dmttst5/Datasets" {pools ""}
resource scratchdisk "/datastage/tst5/dmttst5/" {pools ""}
}
}
Do I need to have the database node's resource disk and scratch disk directories created on the DB2 server or on the ETL server?
I have them on the ETL server. Does anyone know why I get the above-mentioned error?
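
One isolation test worth trying (my own suggestion, not something confirmed in this thread): run the job against a trimmed copy of the configuration file that contains only the pmetl05t nodes, for example the single-node sketch below; if the warning disappears, it is the db2-pool nodes on pmudb05t that cannot load the transformer library.

{
node "node0"
{
fastname "pmetl05t"
pools ""
resource disk "/datastage/tst5/dmttst5/Datasets" {pools ""}
resource scratchdisk "/datastage/tst5/dmttst5/Scratch" {pools ""}
}
}

(If the DB2 stage insists on the db2 node pool, the run itself may fail, but the point is only to see whether the library-load warning follows the remote nodes.)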
kaps
Participant
Posts: 452
Joined: Tue May 10, 2005 12:36 pm

Post by kaps »

Does anyone have a clue about this?
sjfearnside
Premium Member
Premium Member
Posts: 278
Joined: Wed Oct 03, 2007 8:45 am

Post by sjfearnside »

Did you ever get a solution to this problem? If so, please post it.

thanks
darrreever
Participant
Posts: 19
Joined: Tue Feb 23, 2010 11:15 am
Location: Los Angeles

Post by darrreever »

Hello DataStagers:

Please see this link, which explains what is going on (in some circumstances):

https://www-304.ibm.com/support/docview ... wg21404595

Here is my understanding; please post a correction if I am wrong.

As you know, the Orchestrate engine will try to run the job in parallel on every node in the APT config file. However, as the link above explains, every node must have access to a current copy of the project directory; if a node does not, a job that contains a compiled Transformer (or other compiled code) will log this warning unless the environment is configured to copy the transformers between servers.
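
If I remember correctly (an assumption on my part; please verify against the technote above and your release), the setting that enables copying the transformers is the APT_COPY_TRANSFORM_OPERATOR environment variable. A minimal sketch of turning it on in dsenv:

# Assumption: APT_COPY_TRANSFORM_OPERATOR is the "copy transformers between
# servers" setting referred to above; confirm against the IBM technote first.
APT_COPY_TRANSFORM_OPERATOR=True
export APT_COPY_TRANSFORM_OPERATOR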

Further, recall that on nearly every stage, under Stage > Advanced, you will see the Node Pool and Resource Constraints options. If you constrain the stage to run only on a node that has access to the current copy of the project directory, you will not get the warning.

Here is what I did: I added the warning message to a Message Handler (in Director, select the job, open the log, right-click on the message, select "Add rule to message handler", and choose the appropriate Action, in my case "Suppress from Log"), then reran the job.

This will prevent the message from showing up as a warning message and you should get a clean run.

Hope this helps.

God Bless :)
Darryl