moving data to Mainframe via MQ - not all data getting there
Moderators: chulett, rschirm, roy
Moving data (CFF defined) to a mainframe queue. It's getting there, but only the first field (2 bytes) is showing up in the queue. All 4 records are there, but only the first field of each.
Options used in MQ Stage:
Server Mode
Context mode: Set all
Cluster queue: No
Dynamic queue: No
Message write mode: Create
Record count: 0
Thanks in advance.
Have you tried using the MQ Connector stage? We are using MQ from mainframe with very long and variable message sizes with no issues.
Sorry, my mistake.
Don't you just have one data output column called "Payload", which comprises all the column data, to which you write? The template for this is in Table Definitions -> Real Time -> WebSphere MQ Connector -> MQMessage.
ArndW wrote: Sorry, my mistake.
Don't you just have one data output column called "Payload", which comprises all the column data, to which you write? The template for this is in Table Definitions -> Real Tim ...

Well, unfortunately, I can't see your whole message, so I'll make a guess and look for the table def. However, since we have 27 fields in our input, would we now need a transform between CFF and MQ that concatenates all 27 fields into one output field (the Payload)? Or should I redefine my input file to be just one field and use that for this job?
Thanks.
Yes, you will need to concatenate the fields to one; you can use a transform stage or a column export stage to effect this.
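To illustrate the idea (this is a conceptual sketch in Python, not DataStage code — field names and widths are made up; a real layout would come from the CFF copybook), the Column Export / Transformer concatenation amounts to padding each field to its fixed width and joining them into one payload:

```python
def concat_fields(record, widths):
    """Pad each field to its fixed width and join into one payload string."""
    parts = []
    for value, width in zip(record, widths):
        # Space-pad (or truncate) each value to its declared column width
        parts.append(str(value).ljust(width)[:width])
    return "".join(parts)

# Example: three fields with widths 2, 5 and 3 become one 10-byte payload
payload = concat_fields(["AB", "123", "X"], [2, 5, 3])
```

The resulting single column is what gets written to the Payload of the MQ message.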
ArndW wrote: Yes, you will need to concatenate the fields to one; you can use a transform stage or a column export stage to effect this. ...

Okay - I've concatenated a couple of different ways and was able to send to the queue. However, only 72 of the 775 bytes of the now-single field were received (again, all 4 records were sent and received). I'm thinking I should be specifying something in the header instead of taking all the defaults. I'm investigating that now, but if anyone has any revelations, I'll gladly listen. :D
Thanks
The "Maximum Message Length" is a Queue setup parameter - perhaps that is set too small in your queue?
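For reference, the queue's maximum message length can be checked (and, if needed, raised) with MQSC on the queue manager — the queue name here is just a placeholder:

```
* Check the current limit on the local queue (name is an example)
DISPLAY QLOCAL(MY.TARGET.QUEUE) MAXMSGL
* Raise it if messages exceed the current setting
ALTER QLOCAL(MY.TARGET.QUEUE) MAXMSGL(4194304)
```

Note the effective limit is the smaller of the queue's MAXMSGL and the queue manager's MAXMSGL.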
ArndW wrote: The "Maximum Message Length" is a Queue setup parameter - perhaps that is set too small in your queue? ...

Finally figured it out. I had to lie to DS and tell it that the file I created in one job with CFF (127 fields, about a third of them defined as COMP-3) was actually a Sequential File with a single binary field of length 775. Then things work.
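Conceptually (again a Python sketch, not DataStage), that "one binary field of length 775" definition just means reading the file as fixed-length opaque records and handing each one to MQ untouched, COMP-3 bytes and all:

```python
RECORD_LEN = 775  # the full record length from the CFF layout

def read_records(path, record_len=RECORD_LEN):
    """Yield each fixed-length record as one opaque bytes payload."""
    with open(path, "rb") as f:
        while True:
            rec = f.read(record_len)
            if not rec:
                break
            if len(rec) != record_len:
                raise ValueError("short record: %d bytes" % len(rec))
            yield rec
```

Because the record is never reparsed as text, packed-decimal (COMP-3) bytes pass through intact instead of being mangled by a character-based column definition.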
Thanks!