Phantom Error in job

Post questions here related to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

narsingrp
Premium Member
Posts: 37
Joined: Wed Jan 21, 2004 10:38 pm

Phantom Error in job

Post by narsingrp »

My job design is like this:

seq --> xfrm --> xfrm --> hashfile

It gives the following error and aborts:

DataStage Job 76 Phantom 5364
Program "DSD.UVOpen": Line 572, Exception raised in GCI subroutine:
Integer division by zero.
Attempting to Cleanup after ABORT raised in stage JobCreateSrcHdrs..hshTargetHdrs
DataStage Phantom Aborting with @ABORT.CODE = 3


Row buffering is enabled in the job. The buffer size is 1024 and the timeout is set to 200.

Any help would be appreciated.
narsingrp
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Don't worry about the word "phantom" - that's just DataStage terminology for "background process" - all DataStage jobs run as background processes.

When you reset the job after it aborts, do you get any additional diagnostic information "from previous run"?

The routine mentioned, DSD.UVOpen, is the one that opens the hashed file. Does a hashed file of that name actually exist? Beware that hashed file names are case sensitive.
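If you want to check that programmatically, a small server routine along these lines will attempt the open and log the result. This is only a sketch: the routine name is made up, and "hshTargetHdrs" is taken from the stage name in your log, so substitute the actual hashed file name if it differs.

   * Hypothetical check: try to open the hashed file by its VOC name and log the outcome.
   * For a hashed file created in a directory path, OPENPATH would be used instead.
   FileName = "hshTargetHdrs"
   OPEN FileName TO HashFile THEN
      CALL DSLogInfo("Hashed file " : FileName : " opened OK", "CheckHashedFile")
      CLOSE HashFile
   END ELSE
      CALL DSLogWarn("Cannot open hashed file " : FileName, "CheckHashedFile")
   END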
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
narsingrp
Premium Member
Posts: 37
Joined: Wed Jan 21, 2004 10:38 pm

Post by narsingrp »

Thanks Ray. Yes, the hashed file does exist under this name. I will try resetting and see if there is any additional message.
narsingrp
Premium Member
Posts: 37
Joined: Wed Jan 21, 2004 10:38 pm

Post by narsingrp »

I tried resetting and re-running. There is nothing other than this error message in the log.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

Perhaps the hashed file is broken. Do you have the "clear file" switch turned on in your job? If not, could you try setting it to see if the error goes away (assuming you can easily re-create the data)?
narsingrp
Premium Member
Posts: 37
Joined: Wed Jan 21, 2004 10:38 pm

Post by narsingrp »

The clear file option is on. This error occurs on and off; sometimes the job runs fine without errors. Do we need to change any settings?
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

Uncheck the clear option, enable the create file option, then open the options box and check "Delete file before create". See if you get this error again.
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
narsingrp
Premium Member
Posts: 37
Joined: Wed Jan 21, 2004 10:38 pm

Post by narsingrp »

Thanks guys. I tried both options but the problem still occurs sometimes. Are there any other reasons for this to happen?
narsingrp
Premium Member
Posts: 37
Joined: Wed Jan 21, 2004 10:38 pm

Post by narsingrp »

I am also getting the following error on and off when trying to read a CSV file and create another CSV in job control.

Attempting to Cleanup after ABORT raised in stage JobExtractFileNameValue..JobControl.
(SeqChkScrubFiles) <- JobExtractFileNameValue: Job under control finished.

I reset the jobs and this is the log from the previous run.

From previous run
DataStage Job 80 Phantom 3400
Program "JOB.693873716.DT.1429657750": Line 101, WRITE failure.
Attempting to Cleanup after ABORT raised in stage JobExtractFileNameValue..JobControl

DataStage Phantom Aborting with @ABORT.CODE = 3

Can anyone help me understand this problem?
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

The four most common causes of this write failure:

1. A NULL or @FM in the key (see the sketch below)
2. Insufficient OS-level access rights
3. A standard (32-bit) file growing beyond the 2 GB limit
4. A corrupt hashed file

You have ruled out #4.
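For #1, a transformer derivation along these lines is one way to guard the key column before it reaches the hashed file. This is only a sketch; in.KeyCol and the "UNKNOWN" default are placeholder names, not anything from your job:

   * Key-column derivation (in.KeyCol is a hypothetical column name):
   * substitute a default for NULL, then strip any field marks.
   If IsNull(in.KeyCol) Then "UNKNOWN" Else Convert(@FM, "", in.KeyCol)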
narsingrp
Premium Member
Posts: 37
Joined: Wed Jan 21, 2004 10:38 pm

Post by narsingrp »

Sorry, I might have confused you by mixing two issues here.
One is reading the hashed file and the other is with the CSV file.

The issue with the CSV file is resolved. I am guessing there was a non-printable character or CR+LF in the source file that caused the write failure.
I cleansed the data before writing to the output file and removed the CR+LF. It is working fine now.
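For illustration, the cleansing was a derivation of this sort; in.DataCol is a placeholder column name, not the real one:

   * Strip carriage return and line feed from the output column before the write.
   Convert(Char(13) : Char(10), "", in.DataCol)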


The issue with the hashed file is still a problem. My job is like this:

seq --> xfrm --> seq

In the xfrm I am doing 18 hashed file lookups, which may be causing the problem. The lookup data is negligible, which is why I am doing it all in the same job.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

No, Arnd meant you have "ruled out #4" because you are dealing with a flat file. :wink:
-craig

"You can never have too many knives" -- Logan Nine Fingers
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Are you writing to any hashed files? After all, the error is a WRITE failure.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
narsingrp
Premium Member
Posts: 37
Joined: Wed Jan 21, 2004 10:38 pm

Post by narsingrp »

I am writing to a flat file. That was the write failure and it is resolved; I cleaned the data before writing by removing the CR+LF, etc.

The other problem is when I am reading from the hashed file in another job. It gives the following error and aborts:

DataStage Job 76 Phantom 5364
Program "DSD.UVOpen": Line 572, Exception raised in GCI subroutine:
Integer division by zero.
Attempting to Cleanup after ABORT raised in stage JobCreateSrcHdrs..hshTargetHdrs
DataStage Phantom Aborting with @ABORT.CODE = 3
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

The job design in the first post on this thread had you writing to a hashed file. Was that problem solved? If so, please close this thread as Resolved, and open a new thread for a new problem.

We will not enter into wandering discourses about various, changing problems. It makes life too difficult for those seeking answers in future.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.