Stop the job after 1 read row

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

wahil
Participant
Posts: 23
Joined: Tue Oct 25, 2005 11:14 am

Stop the job after 1 read row

Post by wahil »

Hi to all,

I have the job below:

HASH ---> TRANSFORM ---> SEQFILE

This hashed file has millions of rows, but I need to stop the job after reading only the first one. In the Transformer stage I put @INROWNUM in the constraint, but DataStage still reads all the rows in the hashed file... it takes 6 hours to finish.
I tried @OUTROWNUM too, but the result was the same.

Any idea?

Thanks a lot!
thumsup9
Charter Member
Posts: 168
Joined: Fri Feb 18, 2005 11:29 am

Post by thumsup9 »

Use a Before Job Routine with Head or Tail to get the number of rows you want.
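One way to do that is a before-job subroutine of type ExecSH that trims the source file with the Unix head command before the job reads it. A minimal sketch, where the file paths are assumptions for illustration:

```shell
# Sketch of a command an ExecSH before-job routine might run.
# /tmp/source.txt stands in for the real source file.
printf 'row1\nrow2\nrow3\n' > /tmp/source.txt
# Keep only the first record for the job to read.
head -1 /tmp/source.txt > /tmp/limited.txt
cat /tmp/limited.txt
```

The job would then read /tmp/limited.txt instead of the full source.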
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

Or you can specify the limit as 1 row in the Director. You can also specify it on the command line.
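On the command line, the row limit can be passed with the -rows option of dsjob; a sketch, where the project and job names below are placeholders:

```
dsjob -run -rows 1 MyProject MyJob
```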
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
wahil
Participant
Posts: 23
Joined: Tue Oct 25, 2005 11:14 am

Post by wahil »

DSGuru,

Excuse me, but I have other stages before the hashed file. I didn't mention them to keep the description simple...
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

Split the job. Simple as that. Aborting or even stopping a job before its true end doesn't always come out with good results, IMHO. That's why I advised finishing the job by limiting the execution to one row.
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
gateleys
Premium Member
Posts: 992
Joined: Mon Aug 08, 2005 5:08 pm
Location: USA

Re: Stop the job after 1 read row

Post by gateleys »

Can you post the exact constraint that you put in your Xfmr stage?

gateleys
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL

Post by kcbland »

HASH --> XFM --> SEQ means ALL rows are output from the hashed file to reach the XFM, only to be thrown away there. Split the jobs if your logic is messy.


As for the first row out of the hashed file, how do you qualify the first row? By order written into the file? Sorry, hashed data is randomized throughout the file. There is no "first" row concept.

If you use the UV/ODBC (Universe) stage, you could use SQL to limit the rows via a WHERE clause. Or, if your hashed file is created in the project (if not, use SETFILE; search the forum), you can use WITH @ID = xxxx, where xxxx is the single key value for the row you want output.
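A single-row selection in the UV stage could look like the following sketch, where the hashed file name and key value are placeholders, not taken from the job above:

```sql
-- Hypothetical UniVerse SQL; MY_HASH and 'KEY1' are assumptions.
SELECT * FROM MY_HASH WHERE @ID = 'KEY1';
```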
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

The Hashed File stage has a Selection tab, does it not? An appropriate selection phrase to put in here is

Code:

FIRST 1
If that builds a WITH phrase that causes a syntax error, trick DataStage into generating a legal phrase, for example

Code:

1 = 1 FIRST 1
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
pigpen
Participant
Posts: 38
Joined: Thu Jul 13, 2006 2:51 am

Post by pigpen »

When you create the hashed file, also write the first record to a separate sequential file and use that file afterwards.