
path search failed - Unable to locate dscapiop

Posted: Sat Oct 03, 2009 5:53 am
by pxraja
Hi all,

I am getting the following error message (the job aborts) for only one job, whereas other jobs are running fine.

main_program: orchgeneral: loaded
orchsort: loaded
orchstats: loaded

main_program: PATH search failure:

main_program: Error loading "@dscapiop": Could not load "dscapiop": Could not load module .
System error: No such file or directory.

main_program: Could not locate operator definition, wrapper, or Unix command for "dscapiop"; please check that all needed libraries are preloaded, and check the PATH for the wrappers

main_program: Creation of a step finished with status = FAILED.

Job Job_name aborted.

(Sequence_name) <- Job_name: Job under control finished.

Note:

I made a change to .odbc.ini after the job aborted because the database was down. Since we are running in a cluster environment, I gave the virtual IP address of the database (earlier it was pointing to only one of the two databases).
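
For context, the edit amounted to swapping the HostName in the DSN entry over to the virtual IP. Roughly like this (the DSN name, driver path, port and SID are placeholders rather than our real values, and the exact keys depend on your driver):

[PROD_DSN]
Driver=/opt/branded_odbc/lib/driver.so
Description=production database, reached via the virtual IP
HostName=20.300.140.73
PortNumber=1521
SID=PROD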


After that I triggered the sequence. It ran fine for the other parallel jobs, but when control reaches this particular job it aborts with the above error.



I have no clue how to debug the error. Since it's in production, I cannot skip this job and run the others.

Please share your views on how to debug and rectify the above error.

Thanks in advance.

Posted: Sat Oct 03, 2009 8:11 am
by chulett
So... to make sure we understand, you made an ODBC change to a working job (IP change) and now it no longer works?

Posted: Sat Oct 03, 2009 6:13 pm
by ray.wurlod
Is the change visible on all machines in your cluster/grid?

Posted: Sun Oct 04, 2009 9:46 pm
by pxraja
Hi Ray,

Other jobs are working fine; only this one particular job gets aborted. Its structure is:

ODBC-->TRANSFORMER--->PIVOT---->MODIFY---->FILTER---->ODBC

Before the change to the odbc file, everything was running fine.


For example, the odbc file was changed to the virtual IP 20.300.140.73, which resolves to whichever database is up (currently 20.300.140.69):

database (down) --- 20.300.140.71
database (up) --- 20.300.140.69

netstat | grep ds gives the following status:

datastage_hostname  database_hostname(20.300.140.71)  CLOSE_WAIT
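
The checks were along these lines (from memory, so flags may differ on your Unix):

netstat -an | grep 20.300.140.71    # sockets still half-closed to the down database
ping -c 1 20.300.140.73             # does the virtual IP answer from the DataStage host?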


Also, the warning says:

"Error loading "@dscapiop": Could not load "dscapiop": Could not load module .
System error: No such file or directory."

Why does this warning appear? How can we avoid it and get the job running?

please share your views

thanks in advance

Posted: Mon Oct 05, 2009 2:59 am
by ArndW
The "dscapiop.o" library is in the "/DSParallel" subdirectory and is a library object, not an executable. If you revert your changes, does the job work again? (My feeling is that the two aren't directly related)

Posted: Mon Oct 05, 2009 8:46 pm
by pxraja
Hi all,

I think I misled you all with the words "cluster environment"; I am correcting that to HIGH AVAILABILITY.

Also, I have noticed that the pivot stage seems to be the problem.

I tested a job that keeps the pivot stage and writes the records to a dataset -- it aborts with the same warnings.

After that I tested without the pivot stage, and the job runs fine.

But how do I fix this? Is the problem due to the database change or due to the pivot stage?

Posted: Mon Oct 05, 2009 9:12 pm
by chulett
Which pivot stage, exactly?

Posted: Tue Oct 06, 2009 12:13 am
by pxraja
Hi Craig,

It's the PivotPX stage.

I designed a sample job like:

ODBC-->TRF--->PIVOT---->Dataset

The job aborted with the same message posted above, whereas it is successful for:

ODBC-->TRF--->Dataset

Posted: Mon Sep 06, 2010 1:39 am
by pxraja
Hi all,

The job is running fine after I included the path of "dscapiop.o" in the environment for this particular job alone.
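
For anyone hitting the same thing, the change was along these lines, added as a job-level environment variable (our real install path differs, so treat it as a placeholder; on AIX the variable is LIBPATH, on HP-UX SHLIB_PATH):

LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/IBM/InformationServer/Server/DSParallel
export LD_LIBRARY_PATH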

Thanks for your suggestions.

Posted: Mon Sep 06, 2010 6:37 am
by chulett
Interesting... hope it didn't take a year to figure that out! :wink:

I would imagine you could have put those same changes in your dsenv file so that any job with that particular stage would now work.
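
Something like this near the end of dsenv would do it (the variable name and relative path are assumptions, adjust for your platform and install):

# $DSHOME/dsenv is sourced for every job, so every job picks this up
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:`dirname $DSHOME`/DSParallel
export LD_LIBRARY_PATH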