I have a job design like:
MQ --> Transformer --> Server Shared Container (with a Folder stage in it).
I am reading around 4 messages at a time from MQ, building one XML document per message, and writing each to a Folder stage. Each XML is around 1.2 MB. I have already increased APT_DEFAULT_TRANSPORT_BLOCK_SIZE to a higher value.
I get the error below when reading from MQ and writing to the Folder on the server:
SCLoadXML,0: row is to big to fit in shared memory buffer
Can someone help me with this?
MQ and Server Shared container
Moderators: chulett, rschirm, roy
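The arithmetic behind the error in the question above can be sketched quickly. This is a back-of-envelope check, not DataStage code; the 128 KB figure is a commonly cited default transport block size and the message size is taken from the post, so both values are assumptions:

```shell
# Rough check: does one XML row fit in one transport block?
MSG_BYTES=1258291      # ~1.2 MB per XML message (assumed, from the post)
BLOCK_BYTES=131072     # 128 KB, a commonly cited default block size (assumed)
if [ "$MSG_BYTES" -gt "$BLOCK_BYTES" ]; then
    echo "row of $MSG_BYTES bytes cannot fit in a $BLOCK_BYTES-byte block"
fi
```

If the row is roughly ten times the block size, raising the default alone may not be enough if a separate maximum still caps it, which is where the block-size settings discussed below come in.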
It has been two or more years since I did a DataStage-with-MQ project, but I seem to recall hitting the same problem you are seeing. What OS are you working on? We were on AIX, and I believe we did something AIX-specific. There was also an interrelationship between the default and max block sizes in DSParams.
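That interrelationship is worth spelling out: the parallel engine has both a default and a maximum transport block size, and raising the default past the maximum does nothing. A minimal sketch of setting both before a run, assuming the standard parallel-engine variable names (the 2 MB values are illustrative, not a recommendation):

```shell
# Hedged sketch: APT_DEFAULT_TRANSPORT_BLOCK_SIZE and
# APT_MAX_TRANSPORT_BLOCK_SIZE are the usual parallel-engine settings;
# the byte values here are illustrative only.
export APT_DEFAULT_TRANSPORT_BLOCK_SIZE=2097152   # 2 MB, > the 1.2 MB row
export APT_MAX_TRANSPORT_BLOCK_SIZE=2097152       # max must be >= default

# Sanity check: a default larger than the max is effectively ignored.
if [ "$APT_DEFAULT_TRANSPORT_BLOCK_SIZE" -gt "$APT_MAX_TRANSPORT_BLOCK_SIZE" ]; then
    echo "default block size exceeds max; raise the max too" >&2
fi
```

In practice you would set these at the project level (DSParams) or as job parameters rather than in a login shell, but the default-versus-max relationship is the same either way.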
Participant | Posts: 42 | Joined: Thu Dec 11, 2008 11:07 am
Skip the EE job altogether. For real time, do you "really" need the deep parallelism that EE gives you? Sometimes you do, but often a Server job is just fine in a real-time paradigm, maybe even better. Unless this is QualityStage, or you can really justify the need for massive parallelism, drop EE as the host and use a Server job all the way. The barrier to shipping large rows across the container "boundary" goes away, and Server is better at handling huge, totally undefined varchar strings anyway.
Ernie
Ernie Ostic
blogit!
<a href="https://dsrealtime.wordpress.com/2015/0 ... ere/">Open IGC is Here!</a>