Segmentation fault(coredump)

Post questions here related to DataStage Enterprise/PX Edition for such areas as parallel job design, parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

roshanp_21
Participant
Posts: 5
Joined: Tue Oct 07, 2003 8:20 am
Location: USA

Segmentation fault(coredump)

Post by roshanp_21 »

Contents of phantom output file =>
RT_SC767/OshExecuter.sh[16]: 3842258 Segmentation fault(coredump).

I get the above error after running a parallel DataStage job. The original job is a row generator feeding 6 DB2 stages, each with a user-defined delete statement. This job worked fine for a year, and even continued to work after we migrated from 6.x to 7.x, but for the last few days it has been failing with this error.

The workaround I found was to split the job into two jobs with 3 delete statements each, and that runs fine. But since this job is part of a sequence, I would rather not change it if possible.

My question: if I want to keep the original job, is there some parameter tuning or other configuration change I can make to fix this memory error ("Segmentation fault(coredump)")? If so, what are those settings?
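As a generic first check when chasing segfaults on a Unix DataStage server (this is a sketch of standard OS-level diagnostics, not a confirmed fix for this particular crash, which the thread below traces to a product bug), you can inspect the per-process resource limits for the user account that runs the engine; undersized data or stack limits can surface as segmentation faults, and the core-file limit controls whether a usable core dump is written at all:

```shell
# Inspect per-process limits for the current user (values vary by machine):
ulimit -c        # max core file size -- 0 means no core dump is written
ulimit -d        # max data segment size (kbytes)
ulimit -s        # max stack size (kbytes)

# Allow full core dumps for this shell session so a crash can be analyzed
# (takes effect only if the hard limit permits it):
ulimit -c unlimited
echo "core limit is now: $(ulimit -c)"
```

Raising the soft core limit only affects processes started from that shell, so it would need to go in the profile of the account that launches the DataStage engine to influence job processes.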

Any help is appreciated.
Eric
Participant
Posts: 254
Joined: Mon Sep 29, 2003 4:35 am

Post by Eric »

Have the server resources changed?
Maybe you only see the problem now because machine resources are being used by another process?
roshanp_21
Participant
Posts: 5
Joined: Tue Oct 07, 2003 8:20 am
Location: USA

Post by roshanp_21 »

[quote="Eric"]Have the server resources changed?
Maybe you only see the problem now because machine resources are being used by another process?[/quote]

We usually have a lot of processes running at the same time, and I was able to reproduce this error on our development box as well.
Apparently it was a bug for which Ascential has just released a patch. We are testing this patch now. Thanks for your help.