Hi,
We have a job which aborts due to garbage data trailing behind one or two columns, as it appears in the Director log. But when we reset the job and run it again, it goes just fine.
The only cause I can see is the IPC stage; when I remove the IPC stage, as we have done in many other jobs, it runs fine. Is this happening due to a memory leak or the timeout property, and how can we avoid it without removing the IPC stage?
The data appears to be like 1234#some funny chrs......
Thanks,
Garbage data
That's the default. That number specifies a block of memory to hold the data: one block for writing and the other for reading. So if you put 128, that means 256 is actually allocated. So I would say increase it. Make it a multiple of the complete record size plus a few bytes.
DSguru2B wrote: Did you try increasing the buffer size? How huge is your record?

Just checked, it's 512. Will it make a difference to the performance of the jobs? We have a batch cycle of nearly 300 jobs, and more than 200 jobs might be using the IPC stage. If we make it 1024, can we do that by changing the environment variable for the IPC buffer?
Thanks
DSguru2B wrote: That's the default. That number specifies a block of memory to hold the data: one block for writing and the other for reading. So if you put 128, that means 256 is actually allocated. So I would say increase it. Make it a multiple of the complete record size plus a few bytes.

So I have seventeen fields with a total length of 144, i.e. the sum of the lengths of all fields. So you suggest I might increase it to 200?
Thanks
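The sizing rule of thumb discussed above can be sketched numerically (an illustrative Python check, not DataStage internals; the helper names and the slack value are assumptions, while the 144-byte record and the 128-means-256 doubling come from the thread):

```python
# Rule of thumb from the thread: the configured IPC buffer value is used for
# TWO blocks (one for the writing process, one for the reading process), and
# the buffer itself should be a multiple of the full record size plus a few
# spare bytes.

def suggested_buffer(record_len: int, records: int = 1, slack: int = 8) -> int:
    """Buffer size holding `records` whole records plus a few spare bytes."""
    return records * record_len + slack

def total_ipc_memory(buffer_size: int) -> int:
    """Actual memory reserved: one block for writing plus one for reading."""
    return 2 * buffer_size

record_len = 144                    # seventeen fields totalling 144 bytes
buf = suggested_buffer(record_len)  # 152: one whole record plus slack
print(buf, total_ipc_memory(buf))   # 152 304
print(total_ipc_memory(128))        # 256, as described in the reply above
```

Whatever value is chosen, the memory actually reserved is double it, so a larger buffer across 200+ jobs in the batch cycle multiplies accordingly.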