Source with zero rows but target with one row

shave
Participant
Posts: 15
Joined: Thu Mar 15, 2007 1:31 am

Source with zero rows but target with one row

Post by shave »

Hello,

My source is a sequential file and the target is Teradata. I am using a Transformer stage to process the data.

If the source file arrives with zero records, the requirement is that the target Teradata table should still show a row containing the run date. My target has 4 columns, one of which is the run date. When there are zero source records, I should get a single row populated only with the run date (passed as a parameter).

Thanks in Advance
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Post-process your job to detect whether it sent zero rows and, if so, run another job to write the required information.

You can do this in an after-job subroutine or in a later activity in the job sequence from which the job was run.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
shave
Participant
Posts: 15
Joined: Thu Mar 15, 2007 1:31 am

Re: Source with zero rows but target with one row

Post by shave »

Thanks, Ray. Can I do it without using a routine? If not, can you tell me how to use such a routine?
ag_ram
Premium Member
Posts: 524
Joined: Wed Feb 28, 2007 3:51 am

Re: Source with zero rows but target with one row

Post by ag_ram »

shave wrote: Can I do it without using a routine?
Yes. Instead, insert an Execute Command activity in the job sequence to run a shell script (for your OS) that checks whether the sequential file is empty. If the sequential file is not empty, the job sequence continues on to call the main load job.

With this approach there is no need to create another job or to use a server routine. It is simply another way of solving your problem if you would rather not write a server routine.
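
A minimal sketch of such a check, assuming a Unix shell and that the sequential file's path is passed as the script's first argument (the path and the exit-code convention are assumptions, so adapt them to your sequence's triggers):

Code:

#!/bin/sh
# Exit 0 when the source file exists and has data, 1 otherwise, so the
# Execute Command activity's return value can drive the sequence triggers.
SRCFILE="$1"   # hypothetical: path to the source sequential file

if [ -s "$SRCFILE" ]
then
   exit 0   # not empty - carry on and call the main load job
else
   exit 1   # empty or missing - skip the main load job
fi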
throbinson
Charter Member
Posts: 299
Joined: Wed Nov 13, 2002 5:38 pm
Location: USA

Post by throbinson »

I can name that tune in fewer notes, but it might be too unmaintainable.

Use ExecSH in the before-job subroutine.
Input Value: echo `date` >> #InputFile# (plus whatever other formatting is needed to make this line look like an input line in the file).

Check for this line in the job's Transformer (assuming there is one). If it is the first and only line read from the file, write it to Teradata.
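
For example, a sketch of what that Input Value might look like if the file were comma-delimited with four columns and the run date in the first column (the delimiter and column layout are assumptions):

Code:

# append a dummy line: run date in column 1, the remaining three columns empty
echo "`date +%Y-%m-%d`,,," >> #InputFile#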
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Here is a sample of how you might go about it; I have supplied a suitable after-job subroutine. Full error handling has been omitted for clarity.

Code:

SUBROUTINE NoRows(InputArg,ErrorCode)

$IFNDEF JOBCONTROL.H
$INCLUDE DSINCLUDE JOBCONTROL.H
$ENDIF

DEFFUN UtilityRunJob(Arg1,Arg2,Arg3,Arg4) Calling "DSX.UTILITYRUNJOB"

ErrorCode = 0

* InputArg contains active stage name and link name.
StageName = Field((InputArg), " ", 1, 1)
LinkName = Field((InputArg), " ", 2, 1)
RowCount = DSGetLinkInfo(DSJ.ME,StageName,LinkName,DSJ.LINKROWCOUNT)

* If link row count is zero, run other job to write row to target.
If RowCount = 0
Then
   * Substitute here for OtherJob (the one that writes a row to target)
   * and OtherParams (parameters for that job as name=value pairs
   * delimited by "|" - see code for SDK routine UtilityRunJob)
   Result = UtilityRunJob(OtherJob,OtherParams,0,0)
End

RETURN
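
To use this, compile it as a server routine and name it as the after-job subroutine of the load job, supplying the active stage name and the output link name, separated by a space, as the Input Value, for example "xfmProcess lnkToTeradata" (hypothetical names; substitute your own). OtherJob and OtherParams are likewise placeholders for the name of the job that writes the run-date row and its "name=value" parameter list.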
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
OddJob
Participant
Posts: 163
Joined: Tue Feb 28, 2006 5:00 am
Location: Sheffield, UK

Post by OddJob »

I had a similar situation to this recently.

I used a Row Generator to create 1 row with a field 'Key' and a value of '1'.

To my source data (in your case the sequential file) I added an extra field 'Key' using a Transformer, again hard-coded to '1'.

Then, taking these two streams, use a Join stage to do a Full Outer join on the field 'Key'.

After the join, use a Filter stage to detect whether one of the sequential file fields - the best bet is a key field that must be populated - is null. If it is null then you haven't received any records from the sequential file, and you can move on to your DB stage to insert a single record with the date hard-coded from a parameter.

The other output from the Filter checks for not null, i.e. the case when one or more records are received from the source file. In that case you can go ahead and insert the source file data.

Hope this helps.