How to run a Server job by setting a row limit

Post questions here related to DataStage Server Edition, in areas such as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

pongal
Participant
Posts: 77
Joined: Thu Mar 04, 2004 4:46 am

How to run a Server job by setting a row limit

Post by pongal »

Hi guys.... :D
I want to know how to run a job with a row limit set.
I am running a job whose source (a database table) has 70 lakh records.
I want to check how the first 100 rows are transformed on their way to the target.
Is there a constraint or predefined command to limit the rows passed to the target?
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

If you are running the job from Director or the Designer debugger, the invocation dialog box has a tab that allows you to set a row limit.

If you are using custom job control, there is an API function to call after you have attached a job handle. Look in your DS BASIC manual; I think it's called DSSetJobLimit.
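
A minimal job-control sketch along those lines (the job name is a placeholder and error checking is omitted):

    hJob = DSAttachJob("MyServerJob", DSJ.ERRFATAL)
    ErrCode = DSSetJobLimit(hJob, DSJ.LIMITROWS, 100)  ;* stop the run after 100 rows
    ErrCode = DSRunJob(hJob, DSJ.RUNNORMAL)
    ErrCode = DSWaitForJob(hJob)
    ErrCode = DSDetachJob(hJob)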
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

There are a few possibilities.

Ken mentioned setting a row limit in Director when submitting the run request. The same limit can also be set through the other ways of submitting a job run request, for example the dsjob command line interface.
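
For example, from the command line something like this should work (project and job names are placeholders):

    dsjob -run -rows 100 MyProject MyServerJob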

The downside to this approach is that the job's exit status will be Stopped, which means you'll need to reset the job before running it again.

While you're still in the development environment, why not constrain the Transformer output link with (@INROWNUM <= 100) in addition to any other constraints? Of course this does mean that your job will read the entire 70 lakh records (for the non-Indian folks, that's 7 million), but you will then get some feel for how long that will take.

Yet another possibility is to use the Debugger to step through the job design a row, or even a link, at a time. By varying the breakpoint expression, you can determine when the job is to pause. And the watch window will, provided the columns are in context, let you see their values going into and coming out of a Transformer stage, so you can determine whether transformations (and link navigation) are proceeding as you believe they should. You can stop the job run in the Debugger (instead of processing all 70 lakh input rows) and re-compile as a quick way to reset.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Another possibility!

Create a 'local' copy of the table (perhaps in your own schema / workspace) and populate it with only 100 of the rows from your source table. Fast and easy to work with when it's bite-sized. :wink:
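
For example, on Oracle something like the following would do it (schema and table names are placeholders; other databases have their own row-limiting syntax):

    CREATE TABLE myschema.source_sample AS
    SELECT * FROM source_table WHERE ROWNUM <= 100;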
-craig

"You can never have too many knives" -- Logan Nine Fingers