
job stops running after some time

Posted: Tue Jun 24, 2008 10:19 pm
by Ragunathan Gunasekaran
Hi ,
The job pulls from an Oracle source and aggregates the information, so the design is something like this:


Oracle Stage ---> Transformer ---> Aggregator ---> Text file


Following are the environment settings used to run the job, which I captured from the Director log. The job automatically stops after pulling the 100th row from the Oracle database. Any clue on this, please?

Code:

Environment variable settings:
_=/usr/bin/nohup
LANG=en_US
LOGIN=dsadm
APT_ORCHHOME=/opt/biretl2dev/apps/ascential/Ascential/DataStage/PXEngine
PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java14/jre/bin:/usr/java14/bin:/usr/java131/jre/bin:/usr/java131/bin:/usr/local/bin:/usr/seos/bin:/QualityStage/bin:/opt/biretl2dev/apps/ascential/Ascential/DataStage/PXEngine.752.1/bin:/opt/biretl2dev/apps/ascential/Ascential/DataStage/DSEngine/bin:/opt/biretl2dev/apps/db2/db2inst1/sqllib/bin:/opt/biretl2dev/apps/db2/db2inst1/sqllib/adm:/opt/biretl2dev/apps/oracle/product/10.2.0/bin:/usr:/usr/vacpp:/usr/vacpp/bin
NLS_LANG=ENGLISH_UNITED KINGDOM.WE8MSWIN1252
LC__FASTMSG=true
LOCPATH=/usr/lib/nls/loc
ORACLE_SID=BI02DBIR
LDR_CNTRL=MAXDATA=0x30000000
NLS_DATE_FORMAT=DD-MON-YYYY HH24:MI:SS

DSHOME=/opt/biretl2dev/apps/ascential/Ascential/DataStage/DSEngine
ODMDIR=/etc/objrepos

ODBCINI=/opt/biretl2dev/apps/ascential/Ascential/DataStage/DSEngine/.odbc.ini
HOME=/
DB2INSTANCE=db2inst1
QSHOME=/QualityStage
ORACLE_HOME=/opt/biretl2dev/apps/oracle/product/10.2.0
PWD=/opt/biretl2dev/apps/ascential/Ascential/DataStage/DSEngine
INTBIN=/QualityStage/bin
TZ=GMT0BST,M3.5.0,M10.5.0
INSTHOME=/opt/biretl2dev/apps/db2/db2inst1/sqllib
UDTHOME=/opt/biretl2dev/apps/ascential/Ascential/DataStage/ud41
UDTBIN=/opt/biretl2dev/apps/ascential/Ascential/DataStage/ud41/bin
LOGNAME=l2013480
DS_USERNO=-12570
WHO=sys
TERM=
BELL=^G
FLAVOR=-1
DSIPC_OPEN_TIMEOUT=30
APT_CONFIG_FILE=/opt/biretl2dev/apps/ascential/Ascential/DataStage/Configurations/default.apt
APT_MONITOR_MINTIME=10
DS_ENABLE_RESERVED_CHAR_CONVERT=0
DS_OPERATOR_BUILDOP_DIR=buildop
DS_OPERATOR_WRAPPED_DIR=wrapped
DS_TDM_TRACE_SUBROUTINE_CALLS=0
DS_TDM_PIPE_OPEN_TIMEOUT=720
APT_COMPILER=/usr/vacpp/bin/xlC_r
APT_COMPILEOPT=-O -c -qspill=32704
APT_LINKER=/usr/vacpp/bin/xlC_r
APT_LINKOPT=-G
NLSPATH=/usr/lib/nls/msg/%L/%N:/usr/lib/nls/msg/en_US/%N:/usr/lib/nls/msg/%L/%N.cat:/usr/lib/nls/msg/en_US/%N.cat
LIBPATH=/opt/biretl2dev/apps/ascential/Ascential/DataStage/branded_odbc/lib:/opt/biretl2dev/apps/ascential/Ascential/DataStage/DSEngine/lib:/opt/biretl2dev/apps/ascential/Ascential/DataStage/DSEngine/uvdlls:/opt/biretl2dev/apps/ascential/Ascential/DataStage/DSEngine/java/jre/bin/classic:/opt/biretl2dev/apps/ascential/Ascential/DataStage/DSEngine/java/jre/bin::/QualityStage/bin:/opt/biretl2dev/apps/ascential/Ascential/DataStage/PXEngine.752.1/lib:/opt/biretl2dev/apps/ascential/Ascential/DataStage/DSEngine/lib:/opt/biretl2dev/apps/db2/db2inst1/sqllib/lib:/opt/biretl2dev/apps/oracle/product/10.2.0/lib32:/usr/lib

Posted: Tue Jun 24, 2008 10:55 pm
by chulett
Automatically stops? Sounds like you are running this from the Director with a Row Limit of 100. If so, it would log an entry similar to this:

At row 100, link "X"
Run stopped

Posted: Tue Jun 24, 2008 11:02 pm
by Ragunathan Gunasekaran
Hi ,
I am running from the Designer without setting any row limits (No Row Limits). I took the same Oracle stage out of the job, directly dumped the query result from the Oracle stage to a text file, and then ran that sample job through the Designer; it pulled around 713,475 rows. So something must be wrong with the environment, or some timeout is happening. Any clue, please?

Posted: Tue Jun 24, 2008 11:58 pm
by ag_ram
I think your compile trace mode is on. Disable that option in the job properties and try running the job again.

Posted: Wed Jun 25, 2008 1:12 am
by Ragunathan Gunasekaran
I have tested that and it's not working. It still stops at the 100th row.

Posted: Wed Jun 25, 2008 1:44 am
by Ragunathan Gunasekaran
How do I remove the DSIPC_OPEN_TIMEOUT environment variable for one particular job alone? Any clue on this, please?

Posted: Wed Jun 25, 2008 2:33 am
by ArndW
Do you have a constraint in your Transformer stage - either an explicit clause using @INROWNUM or perhaps a row limiter? When your job stops, is it with a status of aborted?
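
For illustration, a Transformer output-link constraint like the sketch below would produce exactly this symptom. The limit of 100 is hypothetical, chosen to match the point at which this job stops; it is not taken from the poster's design.

Code:

@INROWNUM <= 100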

Posted: Wed Jun 25, 2008 3:18 am
by Ragunathan Gunasekaran
Hi ,
No, there are no such constraints or system variables used. The status of the job is aborted after the execution.

Posted: Wed Jun 25, 2008 6:42 am
by chulett
Post the other log messages. Reset the aborted job and post any 'From previous run...' message as well.

Debugging

Posted: Thu Jun 26, 2008 2:54 am
by sajarman
As a debug option, you can remove the stages from the job and then add them back one by one, running the job after each stage is added (with a Copy stage as the destination). The job might be aborting in one of the later stages (other than the Oracle stage).

Posted: Thu Jun 26, 2008 5:20 am
by chulett
Server job. No 'Copy' stage. Sequential as the universal end point.

Posted: Thu Jun 26, 2008 2:05 pm
by kcbland
There's probably a datatype issue (wrong datatype or a NULL) happening in the Aggregator stage and the job is blowing up. The 100 rows is not an indication of which row has the issue, just the last time the job updated its link statistics. If I were you, I would take the sequential file, try to run it through the rest of the job, and see what the Aggregator does.
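
If a NULL does turn out to be the culprit, one common guard is a Transformer derivation that substitutes a default before the row reaches the Aggregator. A minimal sketch, assuming a link named DSLink3 and a column named Amount (both hypothetical):

Code:

If IsNull(DSLink3.Amount) Then 0 Else DSLink3.Amount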

Posted: Mon Jun 30, 2008 5:23 am
by ushas
The best thing is to check for numeric data in those columns before doing the aggregation:
(If Num(columnname) Then columnname Else 0).
Try this; maybe it will work.
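
To make that concrete: Num() returns 1 (true) when its argument is a valid number, so a derivation built on it passes numeric values through and replaces everything else with 0. A sketch with a hypothetical link and column name:

Code:

If Num(DSLink3.Quantity) Then DSLink3.Quantity Else 0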

Posted: Mon Jul 21, 2008 2:06 pm
by roy
I was wondering: how many rows are there?
If you say fewer than 200 and you're running the OCI stage: I once encountered a situation where an array size of more than 1 (let's say 100) made the row count advance only in increments of 100, so the resulting row number equalled Div(row number, 100).
The workaround we used then was an array size of 1.
I hope this helps,
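
As a worked example of the arithmetic roy describes (his scenario, not confirmed against this job): with an array size of 100 and, say, 185 source rows, only Div(185, 100) = 1 full array completes, so the monitor would report just 100 rows even though more were read. An array size of 1 makes the reported count track the true row count.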