WebSphere MQ - Transactional Data Move

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

turash
Premium Member
Posts: 51
Joined: Fri Apr 06, 2007 10:09 am
Location: Edison, NJ

WebSphere MQ - Transactional Data Move

Post by turash »

Scenario:
Transferring transactions/data between two transactional databases.
The work-unit key to be transferred will arrive as an MQ message. I have to extract this key and pass it to a job sequence, which will extract approximately 50 tables and transform them into staging; a last job will
move this data to another transactional database and finally write the status of each job in the sequence back to WebSphere MQ.

As a newbie to DataStage, I know how to design a job that accepts parameters; the rest of the approach, and how to do it, is not very clear to me.

Please share your thoughts and expertise on how to design this kind of job.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Probably the optimal model is to destructively read the input queue, and establish an intermediate "hold" queue in which you can manage your units of work. You might also send your outputs to one queue and your rejects to another.
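Purely for illustration, the hold-queue pattern Ray describes can be simulated in Python; plain `queue.Queue` objects stand in for the real MQ queues, and all names and the `process` step are invented:

```python
from queue import Queue

# Stand-ins for the four MQ queues (hypothetical names)
input_q, hold_q, output_q, reject_q = Queue(), Queue(), Queue(), Queue()

def process(unit):
    """Pretend to process a unit of work; fail on a missing key."""
    if not unit.get("key"):
        raise ValueError("missing work-unit key")
    return unit

# Load some sample messages onto the input queue
for msg in [{"key": "WU-001"}, {"key": ""}, {"key": "WU-002"}]:
    input_q.put(msg)

# Destructive read: each get() removes the message from the input queue
while not input_q.empty():
    msg = input_q.get()   # destructive read
    hold_q.put(msg)       # park it so it can be managed/replayed on failure

# Work off the hold queue, routing successes and rejects separately
while not hold_q.empty():
    msg = hold_q.get()
    try:
        output_q.put(process(msg))
    except ValueError:
        reject_q.put(msg)

print(output_q.qsize(), reject_q.qsize())  # 2 1
```

The point of the hold queue is that a unit of work survives a mid-run failure: it has already left the input queue but has not yet been committed to an output or reject queue.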
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
turash
Premium Member
Posts: 51
Joined: Fri Apr 06, 2007 10:09 am
Location: Edison, NJ

Post by turash »

ray.wurlod wrote:Probably the optimal model is to destructively read the input queue, and establish an intermediate "hold" queue in which you can manage your units of work. You might also send your outputs to one queue and your rejects to another.
Ray,
How do I extract the work unit from the XML file and assign it to the jobs that will extract this information from the source database?
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

The answer to that totally depends on how work units are defined in your case. There is no general answer that can be given.

IBM appears to prefer the term "unit of work", possibly because UOW can be expressed as a three letter acronym, and possibly because this terminology seems to be more widely used in XML-related publications.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
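As Ray says, the answer depends on the message format. As a hypothetical sketch only, assuming the MQ payload is XML and the key lives in an element such as `<workUnitKey>` (an invented tag name), the extraction step could look like:

```python
import xml.etree.ElementTree as ET

# Hypothetical payload; the element name <workUnitKey> is an assumption
payload = """
<transaction>
  <workUnitKey>WU-12345</workUnitKey>
  <source>ORDERS</source>
</transaction>
"""

root = ET.fromstring(payload)
key = root.findtext("workUnitKey")   # returns None if the element is absent
if key is None:
    raise ValueError("no work-unit key in message")

print(key)  # WU-12345
```

In DataStage terms, a value extracted like this would typically be passed to the job sequence as a job parameter.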
eostic
Premium Member
Posts: 3838
Joined: Mon Oct 17, 2005 9:34 am

Post by eostic »

With this many tables, you are definitely going to need, as Ray notes, some type of intermediate scratch queue, or a place you can go to if the whole thing falls over. We have a certain degree of UOW capability within DS Server (one transaction across multiple tables -- a special feature that uses "n" links from a single Transformer Stage), and EE goes even further (one transaction across an MQ queue as well as multiple tables using special Operators), but much of that is within the confines of a single job. With sequencing in this, it will be difficult to carry the entire unit of work across all the tables in the database... set up a good staging strategy and work out the reconciliation effort in case parts and pieces die along the way.

Ernie
turash
Premium Member
Posts: 51
Joined: Fri Apr 06, 2007 10:09 am
Location: Edison, NJ

Post by turash »

eostic wrote:With this many tables, you are definitely going to need, as Ray notes, some type of intermediate scratch queue, or a place you can go to if the whole thing falls over. We have a certain degree of UOW capability within DS Server (one transaction across multiple tables -- a special feature that uses "n" links from a single Transformer Stage), and EE goes even further (one transaction across an MQ queue as well as multiple tables using special Operators), but much of that is within the confines of a single job. With sequencing in this, it will be difficult to carry the entire unit of work across all the tables in the database... set up a good staging strategy and work out the reconciliation effort in case parts and pieces die along the way.

Ernie
We are going to have staging. Each table will have its own job to move data between source and staging. Once that is successfully done, we will move all the data
from staging in one server job using the ODBC transaction grouping feature.
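The plan above (one load job per table, then a single final move that runs only if every staging job succeeds, with per-job statuses reported back) can be sketched as sequence logic. This is an illustrative Python mock, not DataStage code; all names are invented:

```python
def run_job(name):
    """Stand-in for a Job Activity in the sequence; returns a status string."""
    # A real sequence would capture "Finished", "Finished (see log)", "Aborted", etc.
    return "Finished"

# One staging-load job per source table (hypothetical table names)
table_jobs = [f"stg_load_{t}" for t in ("CUSTOMER", "ORDERS", "ITEMS")]
statuses = {job: run_job(job) for job in table_jobs}

# Run the final staging-to-target move only if every staging job succeeded
if all(s == "Finished" for s in statuses.values()):
    statuses["final_move"] = run_job("final_move")
else:
    statuses["final_move"] = "Skipped"

# This status map is what would be written back to the WebSphere MQ status queue
print(statuses)
```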
eostic
Premium Member
Posts: 3838
Joined: Mon Oct 17, 2005 9:34 am

Post by eostic »

Ok... great. Then you are all set. The DB2 and Oracle Stages also support transaction grouping from a single Transformer, but I've found that the ODBC Stage is much better when you want to exploit SQLSTATE and return codes on the links within your single transaction, also allowing you to have simple "rollback" and "commit" links established that only fire "if" all the other links complete successfully (or not).

Ernie
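The all-or-nothing behaviour Ernie describes (commit only if every link succeeds, otherwise roll back the lot) can be illustrated with an ordinary database transaction. Here `sqlite3` stands in for the ODBC targets, and the table names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amt REAL)")
conn.execute("CREATE TABLE order_lines (order_id INTEGER, qty INTEGER CHECK (qty > 0))")

def move_unit(order_rows, line_rows):
    """Write to both tables in one transaction; commit only if every insert works."""
    try:
        with conn:  # opens a transaction, commits on success, rolls back on error
            conn.executemany("INSERT INTO orders VALUES (?, ?)", order_rows)
            conn.executemany("INSERT INTO order_lines VALUES (?, ?)", line_rows)
        return "commit"
    except sqlite3.IntegrityError:
        return "rollback"

print(move_unit([(1, 9.99)], [(1, 2)]))   # commit
print(move_unit([(2, 5.00)], [(2, -1)]))  # rollback: the CHECK constraint fails
# The failed unit's orders row was rolled back along with its lines row:
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 1
```

The key property is the same one transaction grouping gives you: the write to `orders` is undone when the write to `order_lines` fails, so the target never holds half a unit of work.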