WRITE failure another Phantom Error

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

datastagedummy
Participant
Posts: 56
Joined: Thu Feb 13, 2003 6:08 pm
Location: USA

WRITE failure another Phantom Error

Post by datastagedummy »

Hello everybody,

I have 2 jobs

1. Batch4ShiftLoadDBFtoCSV
2. DbfExt4thShiftMergeCsv

The first job is a controller job which just runs the second job in a loop with a different parameter value every time.
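The loop itself is the standard job-control pattern - something along these lines (the parameter name pShiftFile and the hard-coded value list below are just placeholders for illustration; the real code gets its values from elsewhere):

   * Sketch of the controller loop; pShiftFile and the value list are placeholders.
   ParamValues = "SHIFT01" : @FM : "SHIFT02" : @FM : "SHIFT03"
   FOR I = 1 TO DCOUNT(ParamValues, @FM)
      hJob = DSAttachJob("DbfExt4thShiftMergeCsv", DSJ.ERRFATAL)
      ErrCode = DSSetParam(hJob, "pShiftFile", ParamValues<I>)
      ErrCode = DSRunJob(hJob, DSJ.RUNNORMAL)
      ErrCode = DSWaitForJob(hJob)
      Status = DSGetJobInfo(hJob, DSJ.JOBSTATUS)
      IF Status <> DSJ.RUNOK AND Status <> DSJ.RUNWARN THEN
         Call DSLogFatal("Job Failed: DbfExt4thShiftMergeCsv", "JobControl")
      END
      ErrCode = DSDetachJob(hJob)
   NEXT I

So the "Job Failed" fatal from Batch4ShiftLoadDBFtoCSV..JobControl (shown further down) I take to be just the controller reacting to the child job aborting.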

These jobs had been working fine so far, but we just upgraded the ETL server from Server version 6.0 to 6.0.1, and with this upgrade we also had NLS enabled.

After the upgrade we have started having problems with these jobs.

The job has started behaving erratically: sometimes it fails with the first parameter, and sometimes it runs for 10 or 15 parameters and then aborts. The error messages that I get are given below.

I tried looking at line number 180 of the generated code in the RT_BP1143 directory (copied below).

174 IF STAGECOM.TRACE.STATS THEN CALL $PERF.END(-13)
175
176 PUT.Pin%%V0S61P2
177 IF NOT(Pin%%V0S61P2.REJECTEDCODE) THEN
178 REJECTED = @FALSE
179 END ELSE
180 Pin%%V0S61P2.REJECTED = @TRUE
181 END
182 END
183 ELSE
184 Pin%%V0S61P2.REJECTED = @TRUE
185 Pin%%V0S61P2.REJECTEDCODE = 0
186 END
187
188
189 UPDATE.COUNT -= 1

----------------------------------------------------------------------
Error Messages:

1. From Batch4ShiftLoadDBFtoCSV

Batch4ShiftLoadDBFtoCSV..JobControl (fatal error from JobControl): Job
Failed: DbfExt4thShiftMergeCsv

When I reset the job, the log shows:


From previous run
DataStage Job 1140 Phantom 28341
Job Aborted after Fatal Error logged.
Program "DSD.WriteLog": Line 238, Abort.
Attempting to Cleanup after ABORT raised in stage
Batch4ShiftLoadDBFtoCSV..JobControl

2. DbfExt4thShiftMergeCsv

DataStage Job 1143 Phantom 3016
Program "DSD.Startup": Line 180, WRITE failure.
Attempting to Cleanup after ABORT raised in stage
DbfExt4thShiftMergeCsv..XfmBKFLC
DataStage Phantom Aborting with @ABORT.CODE = 3

----------------------------------------------------------------------

Does anybody know what DSD.Startup and DSD.WriteLog do?

Please help the dummy.

Thanks
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

The error was not in the Transformer stage (whose code you are inspecting from RT_BP1143) but, rather, in the subroutines DSD.Startup and DSD.WriteLog, both of which are part of DataStage and whose source code, therefore, you don't have.

Do you keep your log files regularly purged? DSD.WriteLog (fairly obviously) is an underlying routine for writing an event to a job log. A job log is a hashed file, which has an upper limit of 2GB by default. However, sometimes - particularly when many instances of the same job are running - contention on the job log can be experienced.

The WRITE failure in DSD.Startup (which is the routine responsible for starting a job) may also indicate a problem with the job log; this routine writes the event indicating that the job or active stage has started.

It may be the case that the log file has become corrupted; to scan it, execute UVFIXFILE RT_LOG1143 from the Administrator client's Command window, or from the DataStage environment on the server.
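To see how big the log has grown before deciding anything, you can also run the following from the same Command window (assuming the job is still number 1143, so its log is RT_LOG1143):

COUNT RT_LOG1143
FILE.STAT RT_LOG1143

COUNT reports how many log events have accumulated; FILE.STAT reports the size and structure of the hashed file. If the log is simply huge, clear it from the Director (Job > Clear Log) and consider enabling auto-purge; if it is damaged, UVFIXFILE is the way to go.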


Ray Wurlod
Education and Consulting Services
ABN 57 092 448 518
datastagedummy
Participant
Posts: 56
Joined: Thu Feb 13, 2003 6:08 pm
Location: USA

Post by datastagedummy »

Ray, thanks for your reply.

This job just extracts from dBASE III files and writes to a Sequential File stage, with one lookup to a hashed file in between.

But this is done for 21 dBASE files in this one job, which means I have 21 ODBC stages sourcing from the dBASE files and writing to 21 Sequential File stages, with 21 Transformer stages in between.

Is there a limit on the number of Transformer stages in a job?

I split the job into two and now it seems to work fine (still testing).
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

There is no limit on the number of stages in a job. There is a limit on the number of links that a stage can handle; this varies between stage types and is a set of properties of the stage type.

Best practice, on the other hand, dictates that you keep your jobs as simple as possible, and document them as completely as possible (particularly the tricky bits), so that they are most easily maintained in the future.

You don't necessarily need 21 ODBC stages. The ODBC stage links to the data source; a link connects to a table (well, technically, to the database server to execute a prepared SQL statement). You can have multiple output links from the one ODBC stage. It may simplify future maintenance of your job design to cut back the number of ODBC stages to, say, six. (This is only a suggestion. Your design is OK.)