Basic transformer in Parallel Job

mary
Participant
Posts: 23
Joined: Fri Jun 02, 2006 1:28 am
Location: Bng

Basic transformer in Parallel Job

Post by mary »

Hi All,

We are using a BASIC Transformer stage in our parallel job to call a server routine.

Our job flow is like this:
Sequential File ---> BASIC Transformer ---> DB2 API stage

The Sequential File stage has one reject link.
In the BASIC Transformer we do the null check and the special character check. The special character check on the last column is done by the server routine called from the transformer.
In the DB2 API stage we have a before-SQL statement that deletes records from the table based on a condition, and the insert is done by the generated SQL.
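For reference, the server routine is along the lines of the sketch below. This is only an illustration: the routine name CheckSpecialChars, the argument Arg1, the column name LAST_COL and the allowed character set are assumptions, not the actual code.

* Server routine sketch: CheckSpecialChars(Arg1)
* Returns 1 if Arg1 contains any character outside the allowed set, else 0.
* Called from a BASIC Transformer derivation, for example:
*   If IsNull(Input_records.LAST_COL) Then 0 Else CheckSpecialChars(Input_records.LAST_COL)
Allowed = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789 "
* Convert() strips every allowed character from Arg1; anything left over is a special character.
Ans = 0
If Len(Convert(Allowed, "", Arg1)) > 0 Then Ans = 1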

The problem we are facing is that when we run the job, sometimes it inserts the records and sometimes it does not.

Whenever it does insert the records, we get the log below:
"ABAJBAMZN_NODE_ITEM(Trns_Format).ABAJBAMZN_NODE_ITEM_BDA#0.Trns_Format: DSD.StageRun Active stage finishing.
1211786 rows read from Input_records
1211786 rows written to Output_records
127.830 CPU seconds used, 4937.000 seconds elapsed."


Whenever it does not insert the records, we get the log below:
"ABAJBAMZN_NODE_ITEM(Trns_Format).ABAJBAMZN_NODE_ITEM_BDA#0.Trns_Format: DSD.StageRun Active stage finishing.
1211786 rows read from Input_records
0 rows written to Output_records
127.840 CPU seconds used, 5162.000 seconds elapsed."

Thanks in advance.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

What is different (possibly in the data) between when the inserts are successful and when they are not? Are there any other warnings (for example database not available) also logged?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
mary
Participant
Posts: 23
Joined: Fri Jun 02, 2006 1:28 am
Location: Bng

Post by mary »

In both scenarios the job finishes successfully, and there are no warnings related to the database.

We ran the same job in a different environment. There the job aborted, but at the back end we can see the records being processed in the table (the records are first deleted and then inserted).
mary
Participant
Posts: 23
Joined: Fri Jun 02, 2006 1:28 am
Location: Bng

Post by mary »

Another issue we faced: if the job aborts once and we try to run the sequencer again, we get an "unable to run the job" error at the job level.

If we compile the routine and then run it again, the job runs fine.