
Abnormal termination of stage sample..T1 detected

Posted: Mon Jul 30, 2007 3:38 am
by skumar
Hi all,

My job is frequently aborting due to the error

Abnormal termination of stage sample..T1 detected.

I searched the forum but didn't find the information I'm looking for. Can somebody please help me understand why this error occurs so frequently? The interesting thing is that simply recompiling the job, without making any modifications, makes it run fine.

Thanks in advance.

Regards,
skumar.

Posted: Mon Jul 30, 2007 4:41 am
by ray.wurlod
Reset the job in Director and report back whether any "from previous run" message is logged and, if so, its contents.

I am assuming here that your job name is "sample", that there is a Transformer stage called T1, and that the job is not running as a multi-instance job.

Posted: Mon Jul 30, 2007 4:42 am
by soumik
Hi,

Can you post the job design? What other warnings are shown in the log when you reset the aborted job?
If you are using an OCI stage, what array size are you using?

Having the same problem

Posted: Thu Aug 02, 2007 11:20 am
by johm73
[quote="soumik"]Hi,

Can you give the job design? What are the other warnings shown in the log when you reset the aborted job?
If you are using OCI stage, then what is the array size you are using?[/quote]


I'm having the same problem with a job of mine. I am using an OCI stage that connects to 10g and the array size is 32767. Any ideas?

Posted: Thu Aug 02, 2007 11:40 am
by chulett
There's typically more to these kinds of issues than just the array size. However, try dropping it to 1 and see if that helps.

Posted: Thu Aug 02, 2007 12:13 pm
by johm73
[quote="chulett"]There's typically more to these kinds of issues than just the array size. However, try dropping it to 1 and see if that helps.[/quote]


Setting it to 1 does resolve the issue. However, isn't an array size of 1 an issue in itself!? Fetching one row at a time just doesn't make much sense to me. Do you know the root cause of this issue?

Thanks.

Posted: Thu Aug 02, 2007 12:24 pm
by chulett
No, it's not an issue per se. It will affect the performance of the job, but you'd have to decide whether the performance is 'acceptable' with it set to 1.

Finding the root cause will require you to provide a great deal more information than you have so far. However, I'd guess that it is related to how much information (how 'fat') each record carries. Take your average record size in bytes and multiply it by 32767 to see how much data you are asking to be shoved across the network at any given time.
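For example, here is a minimal back-of-the-envelope calculation in Python; the 500-byte average record width is purely an assumption for illustration, so substitute your own table's figure:

[code]
# Rough estimate of how much data one array fetch moves across the network.
avg_record_bytes = 500      # assumed average row width -- use your own value
array_size = 32767          # the array size reported above

bytes_per_fetch = avg_record_bytes * array_size
print(f"{bytes_per_fetch:,} bytes (~{bytes_per_fetch / 1024 / 1024:.1f} MB) per fetch")
# -> 16,383,500 bytes (~15.6 MB) per fetch
[/code]

Nearly 16 MB buffered on every fetch goes a long way toward explaining why a stage can die under memory pressure.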

I've also seen CLOB columns cause this; with a CLOB in the link, any array size other than 1 can make the job fall over dead.

Try bumping the value up in smaller increments and see what you get. There's a law of diminishing returns here: while a larger array size can help at first, setting it 'too high' can degrade performance as well. Experiment.
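If you want to experiment outside DataStage first, the same concept exists as the fetch array size in the Oracle client libraries. Here's an illustrative sketch using the cx_Oracle Python driver; the connection string, credentials, and table name are placeholders, not anything from this thread:

[code]
# Illustrative benchmark: time a full fetch of one query at several array sizes.
import time
import cx_Oracle

conn = cx_Oracle.connect("user/password@dbhost/ORCL")  # placeholder credentials

for size in (1, 50, 500, 5000, 32767):
    cur = conn.cursor()
    cur.arraysize = size                    # rows fetched per round trip
    start = time.time()
    cur.execute("SELECT * FROM my_table")   # placeholder table
    while cur.fetchmany():                  # fetchmany() honours cursor.arraysize
        pass
    print(f"arraysize={size:>5}: {time.time() - start:.2f}s")
    cur.close()

conn.close()
[/code]

Plot the timings and you will usually see the curve flatten out well before the maximum; that knee is a sensible starting point for the stage's array size.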

Posted: Thu Aug 02, 2007 2:54 pm
by ray.wurlod
Is johm73 the same poster as skumar (the original poster)?
:?

Posted: Fri Aug 17, 2007 6:12 am
by soumik
Hi Johm,

Why did you choose an array size of 32767?

The real cause of this sort of error is usually the memory allocated to DataStage. Ask your administrator to check the kernel-level parameters and the UVCONFIG file.

I faced similar issues with the array size; re-tuning the kernel-level parameters and the UVCONFIG file resolved them.
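As a quick illustration only (the parameter names below are common UniVerse engine tunables, but the exact list and the uvconfig location depend on your release, and $DSHOME is assumed to be set), your administrator could pull the usual suspects out of the file like this:

[code]
# Prints a few uvconfig tunables commonly reviewed for memory/lock problems.
# Parameter list is illustrative; check the documentation for your release.
import os

dshome = os.environ["DSHOME"]               # e.g. .../DSEngine -- assumed set
uvconfig = os.path.join(dshome, "uvconfig")
watch = {"MFILES", "T30FILE", "GLTABSZ", "RLTABSZ", "MAXRLOCK"}

with open(uvconfig) as f:
    for line in f:
        parts = line.split()
        if len(parts) >= 2 and parts[0] in watch:
            print(f"{parts[0]:<10} {parts[1]}")
[/code]

Remember that uvconfig changes only take effect after the engine is regenerated and restarted, which is an administrator task.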

Hope that helps you to find the root cause.

Posted: Mon Aug 20, 2007 6:55 am
by ash_singh84
The issue is not yet resolved.

Posted: Mon Aug 20, 2007 7:04 am
by chulett
Whose issue is not yet resolved? What issue? :?

The original poster never came back after their first message. Someone else popped in with 'the same problem', which generated more questions and answers. And now you.

It's time to start over. Start a new thread. Let us know your particulars, including job design and what you found when you 'Reset' the aborted job.