Abnormal termination of transformer

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

vmcburney
Participant
Posts: 3593
Joined: Thu Jan 23, 2003 5:25 pm
Location: Australia, Melbourne
Contact:

Abnormal termination of transformer

Post by vmcburney »

I'm using DataStage 5.1 on Solaris.

I have a job that reads from a sequential file, uses a transformer to map columns and apply a filter and outputs via Oracle OCI to an Oracle table. It worked well. I added a second transformer to perform a lookup on a single field. Same inputs and outputs. It aborted after about a million rows with this error:
Abnormal termination of stage WP2AVLoadandMapList..tranfilter detected

I have reproduced this now in a few jobs, all aborting at around the 1 to 1.3 million row mark. When I have two transformers in a row the jobs always abort with an error caused by the first transformer. When I run with just one transformer it always works.

Can anyone tell me why two transformers in a single job could be causing this problem? I have already checked the &PH& directory and there is no additional error information.

Vincent McBurney
Data Integration Services
www.intramatix.com
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX
Contact:

Post by kduke »

Vincent

You may be running out of temp space. Is there some reason you need two transformers? Can you do it in one?

Kim.

Kim Duke
DwNav - ETL Navigator
www.Duke-Consulting.com
vmcburney
Participant
Posts: 3593
Joined: Thu Jan 23, 2003 5:25 pm
Location: Australia, Melbourne
Contact:

Post by vmcburney »

I have got it working now: I removed the first transformer and now do the filter in Unix using a cat/grep combination. I would still like to find out why my jobs consistently fail.
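For anyone trying the same workaround, a minimal sketch of that kind of Unix pre-filter is below. The file paths and the match pattern are purely illustrative, not Vincent's actual job; the point is just that the filtering a transformer constraint would do can be pushed upstream of DataStage:

```shell
# Pre-filter the extract in Unix before DataStage reads it, standing in for
# the first transformer's constraint. Paths and pattern are illustrative.
printf 'A,keep\nB,drop\nC,keep\n' > /tmp/extract.csv        # stand-in input
cat /tmp/extract.csv | grep ',keep$' > /tmp/extract_filtered.csv
wc -l < /tmp/extract_filtered.csv                           # 2 rows survive
```

The job then points its Sequential File stage at the filtered file instead of the raw extract.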

I don't think it's temp space; the box is only using 1% of a large temp folder. I would have thought the job was using the same amount of resources at row 1 million that it was at row 1000.

Vincent McBurney
Data Integration Services
www.intramatix.com
degraciavg
Premium Member
Posts: 39
Joined: Tue May 20, 2003 3:36 am
Location: Singapore

Post by degraciavg »

It sounds like a memory-leak problem on the OCI stage. The OCI8 plugin used to have this problem in DS4.2. In AIX, you can use the ps v command to monitor it... but I don't know its equivalent in Solaris...
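On Solaris, one rough equivalent of AIX's ps v is polling ps -o for the process's virtual and resident sizes; steady growth in VSZ as rows flow is the leak signature. A sketch, with the pid assignment purely illustrative (point it at the actual DataStage phantom process):

```shell
# Watch a process's memory over time; steadily climbing VSZ/RSS while row
# counts grow suggests a leak. pid=$$ is a stand-in for the real process id.
pid=$$
for sample in 1 2 3; do
    ps -o pid= -o vsz= -o rss= -p "$pid"   # virtual and resident KB
    sleep 1
done
```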

It would be easier to diagnose though if we see your job design... but whatever it is, strange phenomena like this should be raised to Ascential tech support.

regards,
vladimir
msigal
Participant
Posts: 31
Joined: Tue Nov 26, 2002 3:19 pm
Location: Denver Metro

Post by msigal »

We'll often get this error when we have two transformers back to back. The job will abort even on the first record. Taking out the second transformer, if possible, usually takes care of it, or we'll land the data to a file. These are extremely frustrating and time-consuming to diagnose. I'm not much help on this one, but I had to put in my two cents.

Myles
mhester
Participant
Posts: 622
Joined: Tue Mar 04, 2003 5:26 am
Location: Phoenix, AZ
Contact:

Post by mhester »

Vincent,

Does each job that produces this error reference the same lookup? Is the lookup a Hash or ODBC? If each job is using the same file as a lookup then I might focus my efforts on the lookup - specifically the key derivation or data in the Hash table.

Regards,

Michael Hester
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX
Contact:

Post by kduke »

Vincent

I think vladimir is right. It may be a memory leak. You should get support involved. It is reproducible. You found a bug.

Kim.

Kim Duke
DwNav - ETL Navigator
www.Duke-Consulting.com
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

If you have the time to experiment, try putting a Sequential File stage between the two Transformer stages, to try to focus a bit more accurately on the location of the problem. (In DS 6.0 and later you can use an Inter Process stage, and achieve the same effect in memory.)

Ray Wurlod
Education and Consulting Services
ABN 57 092 448 518
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Ray - I'm curious how the IPC stage would 'achieve the same effect'?

It won't cause the two 'halves' of the job to execute separately, at least not in the same manner that dropping a sequential stage in the middle would. Yes, they will run as two different processes, but the data will flow through immediately, will it not? I haven't had an opportunity to do much more than play with IPC stages and read up on them in the manuals, so I'm curious how this could be leveraged to determine the problem.

Thanks for any enlightenment!

-craig
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Conceptually the IPC stage is like two sequential file stages, the first writing to a named pipe and the second (executing in a separate process) reading from the named pipe.
So I think it's fair to say that it does, in fact, cause the two halves of the job to execute separately.
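The named-pipe arrangement Ray describes can be sketched directly at the shell, as a rough analogy rather than what the IPC stage literally executes: the writer (the "first half" of the job) and the reader (the "second half") run as separate processes, with rows streaming through the pipe as they are produced.

```shell
# Two processes joined by a named pipe, analogous to the IPC stage:
# writer and reader run concurrently in separate processes.
fifo=/tmp/ipc_demo.$$
mkfifo "$fifo"
( printf 'row1\nrow2\n' > "$fifo" ) &    # first half: writes rows
rows=$(cat "$fifo")                       # second half: reads as they arrive
wait
rm -f "$fifo"
printf '%s\n' "$rows"
```

If the writer process dies, the reader sees end-of-file, which is why an abort in the first half can surface as an error attributed to that first stage.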
vmcburney
Participant
Posts: 3593
Joined: Thu Jan 23, 2003 5:25 pm
Location: Australia, Melbourne
Contact:

Post by vmcburney »

Thanks for all your replies. I've been away from DataStage for a couple of days, but I will try the sequential file when I get the chance. This client is running DataStage 5.0 on Solaris 2.6 (with some circa-2.8 libraries loaded), which is not supported by Ascential, and I suspect it may have something to do with the problem. Once again I'll suggest that they upgrade to Solaris 2.8.

Vincent McBurney
Data Integration Services
www.intramatix.com