
Phantom Error in job

Posted: Fri Feb 23, 2007 5:32 pm
by narsingrp
My job design is like this:

seq-->xfrm-->xfrm-->hashfile

It gives the following error and aborts:

DataStage Job 76 Phantom 5364
Program "DSD.UVOpen": Line 572, Exception raised in GCI subroutine:
Integer division by zero.
Attempting to Cleanup after ABORT raised in stage JobCreateSrcHdrs..hshTargetHdrs
DataStage Phantom Aborting with @ABORT.CODE = 3


Row buffering is enabled in the job. The buffer size is 1024 and the timeout is set to 200.

Any help would be appreciated.

Posted: Fri Feb 23, 2007 5:45 pm
by ray.wurlod
Don't worry about the word "phantom" - that's just DataStage terminology for "background process" - all DataStage jobs run as background processes.

When you reset the job after it aborts, do you get any additional diagnostic information "from previous run"?

The routine mentioned, DSD.UVOpen, is the one that opens the hashed file. Does a hashed file of that name actually exist? Beware that hashed file names are case sensitive.
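
If the hashed file was created in the project (account) rather than in a directory path, one quick way to confirm it exists is from the Administrator client's Command window. The name below is only what your log suggests; substitute your actual hashed file name:

   LIST VOC "hshTargetHdrs"

If it was created in a directory path instead, it needs a VOC pointer (via SETFILE) before a command like that will see it.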

Posted: Fri Feb 23, 2007 5:49 pm
by narsingrp
Thanks Ray. Yes, the hashed file does exist by that name. I will try to reset and see if there is any additional message.

Posted: Fri Feb 23, 2007 5:57 pm
by narsingrp
I tried resetting and re-running the job. There is nothing other than this error message in the log.

Posted: Sat Feb 24, 2007 9:40 am
by ArndW
Perhaps the hashed file is broken. Do you have the "clear file" switch turned on in your job? If not, could you try setting it to see if the error goes away (assuming you can easily re-create the data)?
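
If you want to check the file itself (again assuming it was created in the account, and the name is only illustrative), you could run something like this from the Administrator's Command window; a damaged file will usually error out:

   ANALYZE.FILE hshTargetHdrs
   COUNT hshTargetHdrs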

Posted: Sat Feb 24, 2007 6:20 pm
by narsingrp
The clear file option is on. This error occurs on and off; sometimes the job runs fine without errors. Do we need to change any settings?

Posted: Sat Feb 24, 2007 9:53 pm
by DSguru2B
Uncheck the clear option, enable the create file option, then open the options box and check "Delete file before create". See if the error occurs again.

Posted: Mon Feb 26, 2007 4:16 am
by narsingrp
Thanks guys. I tried both options, but the problem still occurs sometimes. Are there any other reasons for this to happen?

Posted: Mon Feb 26, 2007 3:02 pm
by narsingrp
I am also getting the following error on and off when I try to read a CSV file and create another CSV in job control.

Attempting to Cleanup after ABORT raised in stage JobExtractFileNameValue..JobControl.
(SeqChkScrubFiles) <- JobExtractFileNameValue: Job under control finished.

I reset the job, and this is the log from the previous run.

From previous run
DataStage Job 80 Phantom 3400
Program "JOB.693873716.DT.1429657750": Line 101, WRITE failure.
Attempting to Cleanup after ABORT raised in stage JobExtractFileNameValue..JobControl

DataStage Phantom Aborting with @ABORT.CODE = 3

Can anyone help me understand this problem?

Posted: Mon Feb 26, 2007 3:44 pm
by ArndW
The four most common causes of a write failure:

1. NULL or @FM in the key (a guard for this is sketched below)
2. Insufficient OS-level access rights
3. A standard file growing beyond the 2GB limit
4. A corrupt hashed file

You have ruled out #4.
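
For #1, here is a minimal sketch of the body of a server transform routine (the routine and its argument name KeyValue are only illustrative) that you could call in a constraint on the link feeding the hashed file:

   * Return @TRUE when the key is safe to write to a hashed file,
   * @FALSE when it is null, empty, or contains a field mark.
   Ans = @TRUE
   If IsNull(KeyValue) Or KeyValue = "" Then Ans = @FALSE
   If Index(KeyValue, @FM, 1) > 0 Then Ans = @FALSE

Rows failing the check can then be diverted to a reject link instead of aborting the write.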

Posted: Mon Feb 26, 2007 5:50 pm
by narsingrp
Sorry, I might have confused you by mixing two issues here.
One is reading from a hashed file; the other is with the CSV file.

The issue with the CSV file is resolved. I am guessing there were non-printable characters or CR+LF in the source file that caused the write failure.
I cleansed the data before writing to the output file and removed the CR+LF characters. It is working fine now.
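
For reference, the job control code now looks roughly like this (a sketch only; the path variables, file variables and messages are illustrative, not my exact code):

   * Copy the source CSV to the target, stripping CR/LF from each record.
   OpenSeq SrcPath To SrcFile Else Call DSLogFatal("Cannot open ":SrcPath, "JobControl")
   OpenSeq TgtPath To TgtFile Then
      WeofSeq TgtFile   ;* truncate an existing target file
   End Else
      Create TgtFile Else Call DSLogFatal("Cannot create ":TgtPath, "JobControl")
   End
   Loop
      ReadSeq Line From SrcFile Else Exit   ;* end of file
      CleanLine = Convert(Char(13):Char(10), "", Line)
      WriteSeq CleanLine To TgtFile Else Call DSLogFatal("WRITE failure on ":TgtPath, "JobControl")
   Repeat
   CloseSeq SrcFile
   CloseSeq TgtFile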


The issue with the hashed file is still a problem. My job design is like this:

seq-->xfrm-->seq

In the xfrm, I am doing 18 hashed file lookups, which may be causing the problem. The lookup data is negligible, which is why I am doing it all in the same job.

Posted: Mon Feb 26, 2007 7:04 pm
by chulett
No, Arnd meant you have "ruled out #4" because you are dealing with a flat file. :wink:

Posted: Mon Feb 26, 2007 7:51 pm
by ray.wurlod
Are you writing to any hashed files? After all, the error is a WRITE failure.

Posted: Fri Mar 02, 2007 9:57 am
by narsingrp
I am writing to a flat file. That was the write failure, and it is resolved; I cleaned the data before writing by removing CR+LF etc.
The other problem is that when I am reading from a hashed file in another job, I get the following error and the job aborts:

DataStage Job 76 Phantom 5364
Program "DSD.UVOpen": Line 572, Exception raised in GCI subroutine:
Integer division by zero.
Attempting to Cleanup after ABORT raised in stage JobCreateSrcHdrs..hshTargetHdrs
DataStage Phantom Aborting with @ABORT.CODE = 3

Posted: Fri Mar 02, 2007 3:16 pm
by ray.wurlod
The job design in the first post on this thread had you writing to a hashed file. Was that problem solved? If so, please close this thread as Resolved, and open a new thread for a new problem.

We will not enter into wandering discourses about various, changing problems. It makes life too difficult for those seeking answers in future.