Wave.sequence.error in phantom job

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

katz
Charter Member
Posts: 52
Joined: Thu Jan 20, 2005 8:13 am

Wave.sequence.error in phantom job

Post by katz »

I am getting the following error during the last stage of my job (which happens to be an OCI) and my job aborts.

DSD.StageStatus: Wave sequence error, resource "testMEtemplate.Transform_and_Validate.OverWrite_RS_HF.SI_Pass_Thru_read" has CUR.WAVE.NUM=2, was looking for wave 1

Although the error occurs in the last stage of the job, the stage referenced in the error occurs much earlier in the design. My guess is that the error is happening during an end-of-job clean-up process that happens to coincide with the execution of the final OCI stage. The final OCI stage performs an insert and is not directly linked to the stage named in the error message (OverWrite_RS_HF).

The stage named in the error message is a transformer, and the link referenced in the error (SI_Pass_Thru_read) is its primary input. The transformer also has a lookup, but the primary input link is simply mapped to the output link - nothing fancy.

The job is also quite straightforward. It's not multi-instance enabled and uses a fairly routine set of stages (OCI, Hashed and Sequential Files, Transformers, and a Link Collector). Nothing in the job design violates any rules that I am aware of.

I am unclear about what the error message means, so do not know how to address the problem in my job design. Does anyone know what this message means and what events can cause it?
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

You probably had an orphaned process from a previous failure. If you stop a job or it aborts, sometimes a part of the job that's talking to a database stays active. The next time the job is run, odd things happen if that process is still out there. Use "ps -ef | grep phantom", look for processes belonging to your job, and kill them.
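
For example, on a Unix engine something like the following would show any leftover phantom processes. This is only a rough sketch; the PID shown is a placeholder, not a value from this thread:

    # list DataStage phantom processes still running
    ps -ef | grep phantom | grep -v grep

    # if any of them belong to the aborted job, note the PID and kill it
    kill 12345
    # use -9 only as a last resort if the process refuses to exit
    kill -9 12345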
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Resetting or recompiling the job should correct any wave synchronization problems. A wave number is simply an internal DataStage mechanism for keeping track of unique runs - for example, resetting a job uses a new wave number because it can't be certain that the previous wave is in a runnable state (indeed, it probably isn't if you're resetting the job).
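
If you would rather not do this through the Director client, the dsjob command-line client can also reset a job. A minimal sketch, where MyProject and MyJob are placeholder names:

    # reset the job so the next run starts with a fresh wave
    dsjob -run -mode RESET MyProject MyJob

    # confirm the job is back in a runnable state
    dsjob -jobinfo MyProject MyJob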
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

No, because the wave number gets reset all the time. Simply recompiling resets it to 1. Just ignore that it's there.
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle