Contents of phantom output file =>
RT_SC767/OshExecuter.sh[16]: 3842258 Segmentation fault(coredump).
I get the above error after running a parallel DataStage job. The original job is a row generator with 6 DB2 stages, each with a user-defined delete statement. The job worked fine for a year, but it has been failing this way for the last few days. It even worked fine after we migrated from 6.x to 7.x.
The workaround I found was to split the job into two jobs with 3 delete statements each; that runs fine. But since the original job is part of a sequence, I would prefer not to change it.
My question: if I want to keep the original job, is there some parameter tuning or other configuration change I can make to fix this memory error, "Segmentation fault (coredump)"? If so, what are those settings?
Any help is appreciated.
[quote="Eric"]Have the server resources changed?
Maybe you only see the problem now due to machine resources being used by another process?[/quote]
We usually have a lot of processes running at the same time. I was able to reproduce this error on our development box.
Apparently it was a bug for which Ascential has just released a patch. We are testing this patch now. Thanks for your help.