Phantom Error in job
My job design is as follows:
seq --> xfrm --> xfrm --> hashfile
It gives the following error and aborts:
DataStage Job 76 Phantom 5364
Program "DSD.UVOpen": Line 572, Exception raised in GCI subroutine:
Integer division by zero.
Attempting to Cleanup after ABORT raised in stage JobCreateSrcHdrs..hshTargetHdrs
DataStage Phantom Aborting with @ABORT.CODE = 3
Row buffering is enabled in the job; the buffer size is 1024 and the timeout is set to 200.
Any help would be appreciated.
narsingrp
Don't worry about the word "phantom" - that's just DataStage terminology for "background process" - all DataStage jobs run as background processes.
When you reset the job after it aborts, do you get any additional diagnostic information "from previous run"?
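If you want to capture that "from previous run" detail from the command line, the dsjob client can print a log summary after the reset. A minimal sketch - the project and job names here are placeholders, not from your post:

dsjob -logsum -max 20 yourproject yourjobname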
The mentioned routine DSD.UVOpen is the one that opens the hashed file. Does a hashed file of that name actually exist? Beware that hashed file names are case sensitive.
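You can also confirm that the file exists (and that the case matches) from the Administrator command window. A sketch, assuming the hashed file was created at account level (so it has a VOC entry) and is named MyHash - a placeholder:

LIST VOC 'MyHash'

If the file was created by pathname into a directory instead, check for it at the operating system level rather than in the VOC.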
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Perhaps the hashed file is broken. Do you have the "clear file" switch turned on in your job? If not, could you try setting it to see if the error goes away (if you can easily re-create the data)?
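If the clear-file option doesn't help and corruption is the suspect, another option is to drop and re-create the hashed file and let the job repopulate it. A sketch from the Administrator command window, assuming an account-level hashed file named MyHash (a placeholder) - note this destroys the contents, so only do it if you can rebuild the data:

DELETE.FILE MyHash
CREATE.FILE MyHash 30

Type 30 creates a dynamic hashed file; use whatever type the file had before.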
Also, I am getting the following error on and off when I am trying to read a CSV file and create another CSV in job control.
Attempting to Cleanup after ABORT raised in stage JobExtractFileNameValue..JobControl.
(SeqChkScrubFiles) <- JobExtractFileNameValue: Job under control finished.
I reset the job, and this is the log from the previous run.
From previous run
DataStage Job 80 Phantom 3400
Program "JOB.693873716.DT.1429657750": Line 101, WRITE failure.
Attempting to Cleanup after ABORT raised in stage JobExtractFileNameValue..JobControl
DataStage Phantom Aborting with @ABORT.CODE = 3
Can anyone help me understand this problem?
The four most common causes of a write failure are:
1. NULL or @FM in the key (see the sketch after this list)
2. Insufficient OS-level access rights
3. Standard file growing beyond the 2GB limit
4. Corrupt hashed file
You have ruled out #4.
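For #1, you can guard the write in the transformer that feeds the hashed file. A sketch of an output-link constraint, assuming the key column is In.KeyCol (a placeholder name, not from your job):

Not(IsNull(In.KeyCol)) And Index(In.KeyCol, @FM, 1) = 0

Rows failing the constraint can then go down a reject link instead of blowing up the write.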
Sorry, I might have confused you by mixing two issues here.
One is reading a hashed file, and the other is with a CSV file.
The issue with the CSV file is resolved. I am guessing there was a non-printable character or CR+LF in the source file that caused the write failure.
I cleansed the data before writing to the output file and removed the CR+LF. It is working fine now.
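For anyone hitting the same thing, a transformer derivation along these lines does the stripping - a sketch, with In.Col standing in for the real column name:

Convert(Char(13) : Char(10), "", In.Col)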
The issue with the hashed file is still a problem. My job is like this:
seq --> xfrm --> seq
And in the xfrm I am doing 18 hashed file lookups, which may be causing the problem. The lookup data is negligible, and that is why I am doing them all in the same job.
I am writing to a flat file; that was the write failure, and it is resolved. I cleaned the data before writing by removing CR+LF, etc.
The other problem is that when I read from the hashed file in another job, I get the following error and the job aborts:
DataStage Job 76 Phantom 5364
Program "DSD.UVOpen": Line 572, Exception raised in GCI subroutine:
Integer division by zero.
Attempting to Cleanup after ABORT raised in stage JobCreateSrcHdrs..hshTargetHdrs
DataStage Phantom Aborting with @ABORT.CODE = 3
The job design in the first post on this thread had you writing to a hashed file. Was that problem solved? If so, please close this thread as Resolved, and open a new thread for a new problem.
We will not enter into wandering discourses about various, changing problems. It makes life too difficult for those seeking answers in future.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.