I have written the routine code like this:
#INCLUDE DSINCLUDE JOBCONTROL.H
#INCLUDE DSINCLUDE DSJ_XFUNCS.H
intTotAmount = 0.00
intTotOpenAmount = 0.00
OpenSeq '/Ascential/datastage/Projects/Account.txt' To objFileVar
Else Create objFileVar Else ErrorCode = 1
WeofSeq objFileVar
intNoRecLoaded = DSGetLinkInfo(DSJ.ME, "TRNS_PurchaseOrders", "Out_SEQL_PO", DSJ.LINKROWCOUNT)
Call DSLogInfo("No of Records " : intNoRecLoaded, DSJ.ME)
For Ans = 1 To intNoRecLoaded
   intTotAmount = intTotAmount + Amount
Next Ans
For j = 1 To intNoRecLoaded
   intTotOpenAmount = intTotOpenAmount + OpenAmount
Next j
WriteSeq FMT("TOTAL AMOUNT :", "31L") : FMT(intTotAmount, "15'0'R") To objFileVar Then
End
WriteSeq FMT("TOTAL OPEN AMOUNT :", "31L") : FMT(intTotOpenAmount, "15'0'R") To objFileVar Then
End
WriteSeq FMT("TOTAL RECORD COUNT :", "31L") : FMT(intNoRecLoaded, "12'0'R") To objFileVar Then
End
WriteSeq "END OF REPORT" To objFileVar Then
End
CloseSeq objFileVar
Call DSLogInfo("Total Amount " : intTotAmount, DSJ.ME)
Call DSLogInfo("Total Open Amount " : intTotOpenAmount, DSJ.ME)
Call DSLogInfo("ACR Balancing file created", DSJ.ME)
ErrorCode = 0 ; * set this to non-zero to stop the stage/job
I am calling the above routine from a Transformer stage, passing the input column values Amount and OpenAmount in a stage variable. The routine executes successfully but produces no results.
The output comes out like this:
TOTAL AMOUNT : 000000000000000
TOTAL OPEN AMOUNT : 000000000000000
TOTAL RECORD COUNT : 000000000000
What could be the reason? If I convert the same routine to an after-job routine, the DSGetLinkInfo result comes back correctly, but then how can I pass the column values Amount and OpenAmount as inputs?
How to capture the link info from a server routine
You run this on every row? So for the first row it counts from 1 thru 1, the second row 1 thru 2, then 1 thru 3, etc? Is that really what you want? For example, the Amount passed in on row 3 gets added to intTotAmount three times, does it not?
Why not just write this information out to a flat file and then (either in the same job or in a 'post process') read it back in and roll it up? An Aggregator stage would work quite nicely for that, I would think.
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers
Thanks, chulett...
I want just the total amount, calculated over all the amounts from the source records (say 1000 records), and the same for the total open amounts. I want a separate text file that contains all the details in exactly this format:

Total Amount : 9999.99
Total Open Amount : 999999.99
Total Record Count : 00000000

If I put in an Aggregator, do the calculations, and pass the result to a Sequential File stage in the same job, it gets stored in a different format, like this:

Total Amount    Total Open Amount    Total Record Count
99999.99        9999999.99           000000000

How can I read those file contents and write them to another file in the format shown above? Can you throw some light on this, just giving the procedure or code?
You could work this out after the Aggregator by building a 'single' record with all of your desired fields in it, formatted as shown but with a record terminator between each 'record'. For UNIX that would be a line feed, for Windows a CR/LF pair. It would then be three records when read by a subsequent process. There's a FAQ in the FAQ forum that discusses this technique.
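A minimal sketch of that idea, assuming illustrative names (InAmount, InOpenAmount, InRecCount and ReportRec are not from your job): in a derivation after the Aggregator, glue the three formatted lines into one field with embedded record terminators, then write that single field through the Sequential File stage:

```
* Hypothetical derivation/routine fragment: build one field holding three
* "records", separated by line feeds (use Char(13):Char(10) on Windows)
ReportRec = FMT("Total Amount :", "20L") : FMT(InAmount, "15R") : Char(10)
ReportRec := FMT("Total Open Amount :", "20L") : FMT(InOpenAmount, "15R") : Char(10)
ReportRec := FMT("Total Record Count :", "20L") : FMT(InRecCount, "12R")
```

When a downstream process reads the file back, the embedded terminators make it see three separate records in your desired layout.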
Or make a smaller version of your routine and call it after the Aggregator. Pass in the resulting three values and then let it do the 'write out the three records' part.
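A minimal sketch of such a cut-down routine, assuming three arguments TotAmount, TotOpenAmount and RecCount (the file path and FMT masks are placeholders taken from the original post; adjust to your environment):

```
* WriteBalanceFile(TotAmount, TotOpenAmount, RecCount)
* Called once, after the Aggregator, with the three rolled-up values.
$INCLUDE DSINCLUDE JOBCONTROL.H
FilePath = '/Ascential/datastage/Projects/Account.txt'
OpenSeq FilePath To objFileVar
Else Create objFileVar Else Call DSLogWarn('Cannot open ' : FilePath, 'WriteBalanceFile')
WeofSeq objFileVar   ;* truncate any previous contents
WriteSeq FMT('TOTAL AMOUNT :', '31L') : FMT(TotAmount, '15R') To objFileVar Else Null
WriteSeq FMT('TOTAL OPEN AMOUNT :', '31L') : FMT(TotOpenAmount, '15R') To objFileVar Else Null
WriteSeq FMT('TOTAL RECORD COUNT :', '31L') : FMT(RecCount, '12R') To objFileVar Else Null
WriteSeq 'END OF REPORT' To objFileVar Else Null
CloseSeq objFileVar
Ans = 0
```

Because it runs once rather than once per row, it avoids both the re-counting problem and the repeated re-creation of the output file.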
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers
Lose the code. It's horribly inefficient, re-creating the output file for each row processed! Do it all in a job, with an Aggregator stage, as Craig suggests.
Is this really a parallel job, in which you're trying to use DataStage BASIC code? What kind of a routine is it (transform function, before/after subroutine or custom UniVerse function)?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.