job stops running after some time

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

Post Reply
Ragunathan Gunasekaran
Participant
Posts: 247
Joined: Mon Jan 22, 2007 11:33 pm

job stops running after some time

Post by Ragunathan Gunasekaran »

Hi,
The job pulls from an Oracle source and aggregates the information, so the design is something like this:


Oracle Stage --->Transformer----> Aggregator------>Text file


Following are the environment settings used to run the job, which I captured from the Director log. The job automatically stops after pulling the 100th row from the Oracle database. Any clue on this, please?

Code:

Environment variable settings:
_=/usr/bin/nohup
LANG=en_US
LOGIN=dsadm
APT_ORCHHOME=/opt/biretl2dev/apps/ascential/Ascential/DataStage/PXEngine
PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java14/jre/bin:/usr/java14/bin:/usr/java131/jre/bin:/usr/java131/bin:/usr/local/bin:/usr/seos/bin:/QualityStage/bin:/opt/biretl2dev/apps/ascential/Ascential/DataStage/PXEngine.752.1/bin:/opt/biretl2dev/apps/ascential/Ascential/DataStage/DSEngine/bin:/opt/biretl2dev/apps/db2/db2inst1/sqllib/bin:/opt/biretl2dev/apps/db2/db2inst1/sqllib/adm:/opt/biretl2dev/apps/oracle/product/10.2.0/bin:/usr:/usr/vacpp:/usr/vacpp/bin
NLS_LANG=ENGLISH_UNITED KINGDOM.WE8MSWIN1252
LC__FASTMSG=true
LOCPATH=/usr/lib/nls/loc
ORACLE_SID=BI02DBIR
LDR_CNTRL=MAXDATA=0x30000000
NLS_DATE_FORMAT=DD-MON-YYYY HH24:MI:SS

DSHOME=/opt/biretl2dev/apps/ascential/Ascential/DataStage/DSEngine
ODMDIR=/etc/objrepos

ODBCINI=/opt/biretl2dev/apps/ascential/Ascential/DataStage/DSEngine/.odbc.ini
HOME=/
DB2INSTANCE=db2inst1
QSHOME=/QualityStage
ORACLE_HOME=/opt/biretl2dev/apps/oracle/product/10.2.0
PWD=/opt/biretl2dev/apps/ascential/Ascential/DataStage/DSEngine
INTBIN=/QualityStage/bin
TZ=GMT0BST,M3.5.0,M10.5.0
INSTHOME=/opt/biretl2dev/apps/db2/db2inst1/sqllib
UDTHOME=/opt/biretl2dev/apps/ascential/Ascential/DataStage/ud41
UDTBIN=/opt/biretl2dev/apps/ascential/Ascential/DataStage/ud41/bin
LOGNAME=l2013480
DS_USERNO=-12570
WHO=sys
TERM=
BELL=^G
FLAVOR=-1
DSIPC_OPEN_TIMEOUT=30
APT_CONFIG_FILE=/opt/biretl2dev/apps/ascential/Ascential/DataStage/Configurations/default.apt
APT_MONITOR_MINTIME=10
DS_ENABLE_RESERVED_CHAR_CONVERT=0
DS_OPERATOR_BUILDOP_DIR=buildop
DS_OPERATOR_WRAPPED_DIR=wrapped
DS_TDM_TRACE_SUBROUTINE_CALLS=0
DS_TDM_PIPE_OPEN_TIMEOUT=720
APT_COMPILER=/usr/vacpp/bin/xlC_r
APT_COMPILEOPT=-O -c -qspill=32704
APT_LINKER=/usr/vacpp/bin/xlC_r
APT_LINKOPT=-G
NLSPATH=/usr/lib/nls/msg/%L/%N:/usr/lib/nls/msg/en_US/%N:/usr/lib/nls/msg/%L/%N.cat:/usr/lib/nls/msg/en_US/%N.cat
LIBPATH=/opt/biretl2dev/apps/ascential/Ascential/DataStage/branded_odbc/lib:/opt/biretl2dev/apps/ascential/Ascential/DataStage/DSEngine/lib:/opt/biretl2dev/apps/ascential/Ascential/DataStage/DSEngine/uvdlls:/opt/biretl2dev/apps/ascential/Ascential/DataStage/DSEngine/java/jre/bin/classic:/opt/biretl2dev/apps/ascential/Ascential/DataStage/DSEngine/java/jre/bin::/QualityStage/bin:/opt/biretl2dev/apps/ascential/Ascential/DataStage/PXEngine.752.1/lib:/opt/biretl2dev/apps/ascential/Ascential/DataStage/DSEngine/lib:/opt/biretl2dev/apps/db2/db2inst1/sqllib/lib:/opt/biretl2dev/apps/oracle/product/10.2.0/lib32:/usr/lib
Regards
Ragu
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Automatically stops? Sounds like you are running this from the Director with a Row Limit of 100. If so, it would log an entry similar to this:

At row 100, link "X"
Run stopped
-craig

"You can never have too many knives" -- Logan Nine Fingers
Ragunathan Gunasekaran
Participant
Posts: 247
Joined: Mon Jan 22, 2007 11:33 pm

Post by Ragunathan Gunasekaran »

Hi,
I am running from the Designer without giving any row limits (No Row Limits). I took the same Oracle stage out of the job, dumped the query result from the Oracle stage directly to a text file, and ran that sample job through the Designer: it pulled around 713475 rows. I suspect something is wrong with the environment, or some timeout is happening. Any clue, please?
Regards
Ragu
ag_ram
Premium Member
Posts: 524
Joined: Wed Feb 28, 2007 3:51 am

Post by ag_ram »

I think your compile trace mode is on. Disable that option in the job properties and try running the job again.
Ragunathan Gunasekaran
Participant
Posts: 247
Joined: Mon Jan 22, 2007 11:33 pm

Post by Ragunathan Gunasekaran »

I have tested that and it is not working; the job still stops at the 100th row.
Regards
Ragu
Ragunathan Gunasekaran
Participant
Posts: 247
Joined: Mon Jan 22, 2007 11:33 pm

Post by Ragunathan Gunasekaran »

How do I remove the DSIPC_OPEN_TIMEOUT environment variable for one particular job only? Any clue on this, please?
Regards
Ragu
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

Do you have a constraint in your Transformer stage - either an explicit clause using @INROWNUM or perhaps a row limiter? When your job stops, is it with a status of aborted?
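For reference, a constraint of the kind being asked about would produce exactly this symptom. A minimal sketch in DataStage BASIC (the link and limit are illustrative; @INROWNUM is the built-in system variable counting input rows):

```
* Output-link constraint in the Transformer: pass only the first 100
* input rows; every row beyond 100 is silently dropped.
@INROWNUM <= 100
```

If such an expression appears in the output link's constraint grid, removing it restores the full row flow.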
Ragunathan Gunasekaran
Participant
Posts: 247
Joined: Mon Jan 22, 2007 11:33 pm

Post by Ragunathan Gunasekaran »

Hi,
No, there are no such constraints or system variables used. The state of the job is aborted after execution.
Regards
Ragu
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Post the other log messages. Reset the aborted job and post any 'From previous run...' message as well.
-craig

"You can never have too many knives" -- Logan Nine Fingers
sajarman
Participant
Posts: 41
Joined: Mon Nov 28, 2005 6:29 am

Debugging

Post by sajarman »

As a debug option, you can remove the stages from the job and then add them back one by one. Run the job after adding each stage (add a Copy stage as the destination). The job might be aborting due to one of the later stages (other than the Oracle stage).
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Server job. No 'Copy' stage. Sequential as the universal end point.
-craig

"You can never have too many knives" -- Logan Nine Fingers
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

There's probably a datatype issue (wrong datatype or NULL) in the Aggregator stage that is blowing the job up. The 100 rows is not an indication of which row has the issue, just the last time the job updated its link statistics. If I were you, I would take that sequential file, run it through the rest of the job, and see what the Aggregator does.
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
ushas
Participant
Posts: 9
Joined: Mon Apr 09, 2007 3:08 am

Post by ushas »

The best thing is to check for numeric data in those columns before doing the aggregation:
(If Num(columnname) Then columnname Else 0).
Try this; it may work.
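A minimal sketch of that defensive derivation in DataStage BASIC, assuming an input link named `In` and a column named `Amount` (both names are illustrative, not from the original job):

```
* Transformer derivation feeding the Aggregator: if the value is
* numeric, pass it through unchanged; otherwise substitute 0 so a
* stray non-numeric or NULL value cannot abort the aggregation.
If Num(In.Amount) Then In.Amount Else 0
```

The same pattern can be applied to each column the Aggregator sums or averages.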
roy
Participant
Posts: 2598
Joined: Wed Jul 30, 2003 2:05 am
Location: Israel

Post by roy »

I was wondering,
How many rows are there?
If you say fewer than 200, and you are running an OCI stage:
I once encountered a situation where an array size greater than 1, say 100, reported the row count only in increments of 100, so the resulting row number equalled Div(row number, 100).
The workaround we used then was to set the array size to 1.
I Hope This Helps,
Roy R.
Time is money but when you don't have money time is all you can afford.

Search before posting:)

Join the DataStagers team effort at:
http://www.worldcommunitygrid.org
Post Reply