MQ Connector as a target stage/re-processing on job failure

Posted: Wed Aug 20, 2014 12:53 pm
by dj
I am new to the WebSphere MQ Connector stage :( . I am trying to understand how the MQ Connector stage can prevent sending the same messages to the queue when the job fails after processing a few messages and I then re-run it. Is there a setting I can use in the connector stage, or how can I design my job to prevent sending the same messages again after a job failure/re-run?

Thanks in advance for your advice.

Posted: Thu Aug 21, 2014 7:21 am
by prasson_ibm
Hi,
How are you currently running your job? I mean, how are you passing multiple messages to MQ?

Are you using the message segmentation feature?

Posted: Thu Aug 21, 2014 11:59 am
by dj
My job design is as shown below

db2_connector --> XFM --> XML_Output --> MQ_Connector

I am picking the required records from a table in DB2, creating the required XML (sequentially) for each record, and sending them on to the cluster queue. I am running the job with the default config file and with default properties on the MQ Connector stage.

Posted: Fri Aug 22, 2014 2:27 am
by prasson_ibm
If you can add an indicator column to the source table (e.g. Indicator='P'), your work will be much easier:

1. Initially flag all rows with Indicator='P', then add this condition to the WHERE clause of your DB2 SELECT statement.

2. Select the ROWID from the source table and store it in a dataset within your job.

3. Design a post-update job that reads the ROWIDs stored in the dataset and updates those rows' flag to 'Y'.

4. If your job fails, run the post-update job first. When you rerun the main job, you will pick up only the records still flagged Indicator='P'. (A sketch of the SQL is below.)
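
A minimal sketch of the SQL these steps imply, assuming a hypothetical source table SRC_TABLE with an INDICATOR column (the names are illustrative, not from this thread; RID_BIT is the DB2 for LUW row-identifier function, so substitute ROWID or a primary-key column on other platforms):

    -- Step 1: initialise every unsent row to 'P' (pending)
    UPDATE SRC_TABLE
       SET INDICATOR = 'P';

    -- Main job's source query: pick up only pending rows and
    -- carry the row identifier along so it can land in the dataset
    SELECT RID_BIT(T) AS ROW_ID, T.*
      FROM SRC_TABLE T
     WHERE T.INDICATOR = 'P';

    -- Post-update job: for each ROW_ID read back from the dataset,
    -- mark the row as sent so a rerun skips it
    UPDATE SRC_TABLE
       SET INDICATOR = 'Y'
     WHERE RID_BIT(SRC_TABLE) = ?;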

Posted: Fri Aug 22, 2014 7:12 am
by Mike
First thing to ask yourself:
Is there any harm to the downstream consumers if they reprocess a message that they've already processed?

If there is no harm, then don't worry about it.

If there is a potential for harm, then Prasoon's solution is practical as long as it's not a matter of life and death. There is some risk in updating a database table and writing to a queue in separate transactions, which is what you'll have when using a post-update job.

If it is a matter of life and death, then you need to update the processed database row and write to the MQ queue in a single transaction. You'll need to use the DTS (Distributed Transaction Stage) to make that happen.

Mike

Posted: Mon Sep 08, 2014 4:20 pm
by dj
Thank you Prasson and Mike for your inputs.

I discussed re-sending messages with the downstream consumers, and we concluded that they would implement logic on their side (similar to what prasson_ibm described) so that previously sent messages are not re-processed; roughly the idea sketched below. So there was no change to the DataStage job on my side, and the functionality is addressed.
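
For reference, a minimal sketch of what such consumer-side de-duplication could look like in DB2 SQL, assuming a hypothetical PROCESSED_MSGS table keyed on the MQ message ID (table and column names are illustrative, not something the consumers confirmed):

    -- Track every message ID the consumer has handled; the
    -- primary key makes a second insert of the same ID fail
    CREATE TABLE PROCESSED_MSGS (
        MSG_ID       VARCHAR(48) NOT NULL PRIMARY KEY,
        PROCESSED_TS TIMESTAMP   NOT NULL WITH DEFAULT
    );

    -- Before processing a message, try to record its ID.
    -- SQLSTATE 23505 (duplicate key) means it was already
    -- processed, so the consumer can discard the message.
    INSERT INTO PROCESSED_MSGS (MSG_ID) VALUES (?);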

Thank you.