MQ 2024 Error

Post questions here related to DataStage Enterprise/PX Edition in such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

ArjunK
Participant
Posts: 30
Joined: Sun Apr 30, 2006 6:32 pm

MQ 2024 Error

Post by ArjunK »

Hi,
We have a DataStage process in place that reads messages off a queue for processing.

We have the following properties set:

Wait time = 60 sec
Message limit = 0
Destructive read with Commit/backout option checked

In the Queue Manager there is a property (MAXUMSGS) that limits the number of messages that can be read in a single unit of work. If more messages than this limit are read, the following error is thrown: MQRC_SYNCPOINT_LIMIT_REACHED (2024).
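
Outside of DataStage, my understanding of the mechanics is roughly the following (a minimal pymqi sketch; the queue manager, channel, and queue names are made up):

import pymqi

def process(msg):
    # placeholder for whatever the ETL does with the message body
    pass

qmgr = pymqi.connect('QM1', 'SVRCONN.CHANNEL', 'mqhost(1414)')
queue = pymqi.Queue(qmgr, 'APP.INPUT.QUEUE')

gmo = pymqi.GMO()
gmo.Options = (pymqi.CMQC.MQGMO_SYNCPOINT |
               pymqi.CMQC.MQGMO_WAIT |
               pymqi.CMQC.MQGMO_FAIL_IF_QUIESCING)
gmo.WaitInterval = 60 * 1000  # the job's "Wait time = 60 sec"

try:
    while True:
        # every destructive get joins the same open unit of work; once the
        # uncommitted count passes MAXUMSGS, MQ returns reason 2024
        msg = queue.get(None, pymqi.MD(), gmo)
        process(msg)
except pymqi.MQMIError as e:
    if e.reason == pymqi.CMQC.MQRC_NO_MSG_AVAILABLE:
        qmgr.commit()    # queue drained within the wait interval
    else:
        qmgr.backout()   # e.g. MQRC_SYNCPOINT_LIMIT_REACHED (2024)
        raise
finally:
    queue.close()
    qmgr.disconnect()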

Currently this limit is set to 5000 for us, and the MQ admin team has told us it should be kept low.

So now, any time there are more than 5000 messages in the queue, our job aborts as it tries to read them all.

We could think of a couple of ways to overcome this issue:

1. Limit the number of messages read to only 5000 and then run the entire ETL flow in a loop.

2. Create a separate job that just reads all the messages and generates a local file for each one. This job can be run in a loop. Then have all the files processed together by the main ETL flow.

Neither of these options seems that attractive. I just wanted to know if anybody else has faced a similar situation and has some better options/suggestions.
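
For what it's worth, option 1 would probably be driven by a wrapper along these lines (a rough sketch only; the project, job, and helper names are invented, and the stop condition would need a real implementation):

import subprocess

PROJECT = 'MYPROJ'    # placeholder project name
JOB = 'jbReadQueue'   # placeholder job with Message limit = 5000

def last_run_was_partial():
    # placeholder: in reality, check something the job leaves behind,
    # e.g. a row count written to a file; returns True here so the
    # sketch terminates after one pass
    return True

while True:
    # dsjob -run -jobstatus waits for the job and exits with its status:
    # 1 = finished OK, 2 = finished with warnings
    rc = subprocess.call(['dsjob', '-run', '-jobstatus', PROJECT, JOB])
    if rc not in (1, 2):
        raise SystemExit('%s failed with status %d' % (JOB, rc))
    if last_run_was_partial():
        break   # fewer than 5000 messages read: the queue is empty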

Thanks,
Arjun
eostic
Premium Member
Posts: 3838
Joined: Mon Oct 17, 2005 9:34 am

Post by eostic »

Hi Arjun....

Can you describe more about the overall process? What are you doing with the messages on the target side? Are they being applied to another queue? An RDBMS? I'd like to get a better idea of the overall "unit of work" that you are trying to encapsulate.

Thanks....

Ernie
Ernie Ostic

blogit!
Open IGC is Here! (https://dsrealtime.wordpress.com/2015/0 ... ere/)
eostic
Premium Member
Posts: 3838
Joined: Mon Oct 17, 2005 9:34 am

Post by eostic »

Never mind, for now...I inferred what you are trying to do from the other MQ thread: read "all" the messages, but treat them as a single "unit" for the entire job, and if the job finishes successfully, then it's OK to delete them?

Assuming that's the case, it would still be interesting to know what you are doing with the messages. I assume you don't want to drop any. You might consider setting yourself up with a "side queue." Use the ensured-delivery technique (it's well documented in the PDF) that supports the ability to put each message into an alternate queue under syncpoint. It has its own complications for performance and for multiple readers, because it uses a browse at the original source, and it also complicates your recovery scenario, but it would reduce your MQ transaction size to one, allow you to do whatever you need to (send messages to flat files, folders, wherever), and still not lose any messages if the job aborts overall.
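
To illustrate the shape of that technique outside of DataStage, the per-message unit of work looks roughly like this (a pymqi sketch with invented connection and queue names; the MQ Stage does the equivalent internally):

import pymqi

qmgr = pymqi.connect('QM1', 'SVRCONN.CHANNEL', 'mqhost(1414)')
src = pymqi.Queue(qmgr, 'APP.INPUT.QUEUE',
                  pymqi.CMQC.MQOO_BROWSE | pymqi.CMQC.MQOO_INPUT_SHARED)
side = pymqi.Queue(qmgr, 'APP.SIDE.QUEUE', pymqi.CMQC.MQOO_OUTPUT)

browse = pymqi.GMO()
browse.Options = pymqi.CMQC.MQGMO_BROWSE_NEXT | pymqi.CMQC.MQGMO_NO_WAIT

under_cursor = pymqi.GMO()
under_cursor.Options = (pymqi.CMQC.MQGMO_MSG_UNDER_CURSOR |
                        pymqi.CMQC.MQGMO_SYNCPOINT)

pmo = pymqi.PMO()
pmo.Options = pymqi.CMQC.MQPMO_SYNCPOINT

while True:
    try:
        body = src.get(None, pymqi.MD(), browse)   # non-destructive browse
    except pymqi.MQMIError as e:
        if e.reason == pymqi.CMQC.MQRC_NO_MSG_AVAILABLE:
            break                                  # nothing left to read
        raise
    # ...hand `body` to the rest of the flow (flat file, folder, etc.)...
    side.put(body, pymqi.MD(), pmo)          # copy to the side queue
    src.get(None, pymqi.MD(), under_cursor)  # delete the browsed message
    qmgr.commit()                            # a unit of work of size one

src.close()
side.close()
qmgr.disconnect()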

Your loop idea shouldn't be all that painful to build either.

Ernie
Ernie Ostic

blogit!
Open IGC is Here! (https://dsrealtime.wordpress.com/2015/0 ... ere/)
ArjunK
Participant
Posts: 30
Joined: Sun Apr 30, 2006 6:32 pm

Post by ArjunK »

Hi Ernie,
Yes, we are reading all the messages from the queue. Each message is an XML document, and our job flow is something like

MQ -> XML Input -> Transformer -> Dataset

Then we have downstream jobs which consume the dataset and load tables.

We don't want to run our entire ETL flow with just 5000 records in every loop iteration.

I will look into your suggestion.

Thanks.
eostic
Premium Member
Posts: 3838
Joined: Mon Oct 17, 2005 9:34 am

Post by eostic »

...gee...the loop seems really easy to implement....

However, you could consider something like the following:

1. Add the "message id" to the output from your initial MQ Stage, and also carry the original contents of your message somewhere. Use "browse" at this Source Stage.

2. Go through your normal job flow, reaching a Transformer close to, or just before, your final target.

3. Make a decision there about whether you like the data you've just processed. If so, write it to your target, and then also send two links from that same Transformer into an MQ Stage: one carrying the message ID and the other carrying the original message content. Use the technique in the PDF to "commit" this new message to a special application queue, causing the message in the source queue to be deleted under syncpoint. This "special" queue is effectively a "dummy queue," but it could be used for later auditing purposes.

4. Avoid firing those two links if something goes awry.

5. Somewhere I have an example of this using an RDBMS; I will pull it from an archive and put it on my blog. With an RDBMS and Server edition, you have the added ability to fire those two links only if the RDBMS doesn't complain. There's not quite as much control with a sequential target, but if the whole job were to abort, you'd still have your messages in the source queue, and the successfully processed ones in the special target queue.
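
If it helps to see the logic of step 3 spelled out, here is a hypothetical pymqi helper that does the equivalent outside of DataStage (all names are invented; this is a sketch of the idea, not what the Stage literally runs):

import pymqi

def commit_message(qmgr, src_queue, audit_queue, msg_id, body):
    # put the original message on the "dummy"/audit queue under syncpoint
    pmo = pymqi.PMO()
    pmo.Options = pymqi.CMQC.MQPMO_SYNCPOINT
    audit_queue.put(body, pymqi.MD(), pmo)

    # destructively get exactly that message from the source queue,
    # matching on the message ID carried through the job
    md = pymqi.MD()
    md.MsgId = msg_id
    gmo = pymqi.GMO()
    gmo.Options = pymqi.CMQC.MQGMO_SYNCPOINT | pymqi.CMQC.MQGMO_NO_WAIT
    gmo.MatchOptions = pymqi.CMQC.MQMO_MATCH_MSG_ID
    src_queue.get(None, md, gmo)

    # both operations land or neither does
    qmgr.commit()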

Ernie
Ernie Ostic

blogit!
Open IGC is Here! (https://dsrealtime.wordpress.com/2015/0 ... ere/)
eostic
Premium Member
Posts: 3838
Joined: Mon Oct 17, 2005 9:34 am

Post by eostic »

Found it and uploaded it to the site below. Let me know if it is useful.

Ernie
Ernie Ostic

blogit!
Open IGC is Here! (https://dsrealtime.wordpress.com/2015/0 ... ere/)