MQ stage and parallel process on remote node

Posted: Thu Jan 27, 2005 10:53 am
by lgharis
Is it possible to run a parallel job with two processes, one local and one remote, using MQ as input? If it is possible I would appreciate suggestions on what to put in the QMGR name field. Also, does it require any special setup in MQ, like alias queues or clusters?

Posted: Sun Jan 30, 2005 2:36 am
by roy
Hi,
Read Chapter 58, Section 10 of parjdev.pdf in the docs directory of your client installation.

IHTH,

Posted: Mon Jan 31, 2005 9:00 am
by lgharis
Thanks for the response. I've read through that before, and it does not help with my problem. Maybe I missed something, but I don't see it. The problem is that when you use an MQ stage you must specify the queue manager name. The two systems involved have different queue manager names with queues of the same name.

Let's say we have node1 and node2. On node1 we have QMGR1 with Q_local. On node2 we have QMGR2 with Q_local. The intent is to make the queues clustered so that the load is shared between the nodes. The process on node1 will process the messages on QMGR1 and the process on node2 will process the messages on QMGR2.
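For reference, a minimal MQSC sketch of the clustered-queue setup described above (the cluster name CLUS1 is an assumption; the repository, listener, and cluster channel definitions needed to join the cluster are omitted):

```
* On node1, inside "runmqsc QMGR1":
DEFINE QLOCAL(Q_local) CLUSTER(CLUS1)

* On node2, inside "runmqsc QMGR2":
DEFINE QLOCAL(Q_local) CLUSTER(CLUS1)
```

Note that clustering load-balances messages being put to the queue; each local instance must still be consumed by an application connected to that particular queue manager, which is exactly the constraint described below.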

If we put QMGR1 in the MQ stage, the process on node2 will fail because it cannot connect to QMGR1. The same is true for the process on node1 if we specify QMGR2. Is there an MQ resource for the APT config file similar to DB2 and SAS?

The developers have worked around the apparent limitation by creating two jobs with different QMGR specifications. One job runs a process on the local node1 with QMGR1; the second job is tricked into running a process on the remote node2 with QMGR2. Unless we are missing something, it seems there is a limitation: parallel processing cannot run MQ processes in parallel on multiple nodes.

Posted: Tue Feb 01, 2005 1:44 pm
by T42
You cannot directly control processes with DataStage. Allow me to rephrase what you are asking:

You want to pull something from a single source on one specific location, and run the job on multiple locations, and land the data... somewhere.

You will need a node constraint on the input stage if your MQ stage cannot reach the queue manager from the remote node. Look at how DB2 nodes are defined and handled, and follow that example in your configuration file.

That way, MQ data will be pulled on only one node and then partitioned to all nodes for processing.
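Following the DB2 analogy, a sketch of what such an APT configuration file could look like (hostnames, resource paths, and the pool name mq_pool are assumptions; the MQ stage would then be constrained to the mq_pool node pool via its Stage > Advanced properties):

```
{
    node "node1" {
        fastname "host1"
        pools "" "mq_pool"      /* node1 belongs to the default pool and to mq_pool */
        resource disk "/ds/data" {pools ""}
        resource scratchdisk "/ds/scratch" {pools ""}
    }
    node "node2" {
        fastname "host2"
        pools ""                /* node2 is in the default pool only */
        resource disk "/ds/data" {pools ""}
        resource scratchdisk "/ds/scratch" {pools ""}
    }
}
```

With the MQ stage constrained to mq_pool, the read runs only on node1 against QMGR1, and the downstream stages repartition the records across both nodes.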