ds_uvput() - Write failed for record id
Moderators: chulett, rschirm, roy
I was trying to run 3 ETLs in the same environment (each of them processes around 7 million rows). All of them failed at the same time with the following error (the record id is different on all three) while writing their respective hashed files. I think it may be a resource issue, so I'm trying to run them individually. But I've seen in some posts that it may also be due to the hashed file size hitting 2 GB; if that's the case, there is no clear resolution posted. Any help? (Maybe my search capability is also not that good ;)) -
ds_uvput() - Write failed for record id '439
10461
11
10017
2147483646
13234
2147483646
2147483646
2147483646
2644
2971
2147483646
2147483646
2147483646
2147483646
USD
2147483646
2147483646
2147483646
2147483646
13558
13036
279'
All failing "at the same time" sounds like a disk space issue to me. Any kind of "bigger than 2GB" situation would corrupt the hashed file and get you "blink" errors, do you see anything like that? If so, the solution to that is to switch to 64bit hashed files and that has been discussed here quite a bit.
-craig
"You can never have too many knives" -- Logan Nine Fingers
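To make the "bigger than 2GB" check concrete: a 32-bit dynamic hashed file stores its data in DATA.30 and OVER.30 components, and either one hitting the 2 GB boundary corrupts the file. The helper below is a hypothetical sketch (not a DataStage utility) that walks a project directory and flags components approaching the limit; the 90% warning margin and the directory layout are assumptions for illustration.

```python
import os

# 2147483648 bytes: the addressing boundary of a 32-bit hashed file
TWO_GB = 2 ** 31

def find_near_limit(root, limit=TWO_GB, margin=0.9):
    """Return (path, size) pairs for DATA.30/OVER.30 files at or above margin * limit."""
    hits = []
    for dirpath, _subdirs, files in os.walk(root):
        for name in files:
            if name in ("DATA.30", "OVER.30"):
                path = os.path.join(dirpath, name)
                size = os.path.getsize(path)
                if size >= margin * limit:
                    hits.append((path, size))
    return hits
```

Running something like this over the project directory before the nightly batch would show whether any hashed file is close to the boundary, before it corrupts mid-load.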
chulett wrote:All failing "at the same time" sounds like a disk space issue to me. Any kind of "bigger than 2GB" situation would corrupt the hashed file and get you "blink" errors, do you see anything like that? If so, the solution to that is to switch to 64bit hashed files and that has been discussed here quite a bit.
Yeah, at the end of the job I am getting:
DataStage Job 2985 Phantom 1996
Program "JOB.156795596.DT.1546540536.TRANS4": Line 1295, Internal data error.
Program "JOB.156795596.DT.1546540536.TRANS4": Line 1407, Internal data error.
Job Aborted after Fatal Error logged.
File 'E:\\Ascential\\DataStage\\Projects\\BI/HASH_TARGET/DATA.30':
Computed blink of 0x934 does not match expected blink of 0x0!
Detected within group starting at address 0x800CC000!
Attempting to Cleanup after ABORT raised in stage OH_J_Fact_FY2008..Trans_Assign_Values
DataStage Phantom Aborting with @ABORT.CODE = 1
-
- Premium Member
- Posts: 457
- Joined: Tue Sep 25, 2007 4:05 pm
So I ran the resize command:
RESIZE HASHEDFILE DYNAMIC 64BIT
The command runs for some time and then returns with the same blink error. Does it mean I have to empty the file and then do it?
Error message:
Computed blink of 0x908 does not match expected blink of 0x0!
Detected within group starting at address 0x80000000!
RESIZE: Error on HASHEDFILE. File not resized.
Last edited by ORACLE_1 on Wed Aug 18, 2010 1:37 pm, edited 1 time in total.
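For what it's worth, the group addresses reported in these blink errors line up exactly with the 2 GB boundary, which supports the 32-bit-overflow theory. A quick arithmetic check (not DataStage code):

```python
# Compare the group addresses from the blink errors against the
# 2 GB (2**31 byte) limit of a 32-bit hashed file.
TWO_GB = 2 ** 31  # 2147483648

resize_addr = 0x80000000   # from the failed RESIZE
phantom_addr = 0x800CC000  # from the phantom log in the aborted job

print(resize_addr == TWO_GB)  # True: the RESIZE error is exactly at the boundary
print(phantom_addr - TWO_GB)  # 835584: bytes past the boundary
```

In other words, the corruption is detected right where a 32-bit file runs out of addressable space, which is why RESIZE on the already-corrupt file fails and the file has to be recreated (or restored) before converting to 64BIT.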
ORACLE_1 wrote:The command runs for some time and then returns with the same blink error. Does it mean I have to empty the file and then do it?
Can you post the error that you are getting?
You could also try this post to see if it helps you.
Vivek Gadwal
Experience is what you get when you didn't get what you wanted
ds_uvput() - Write failed for record id
We sometimes get this error, but I don't think it's a disk space issue (we still have lots), and I don't think it's a file size issue either. We just resubmit the same job, with the same input, and it usually runs OK the second time. Could it be caused by running out of file handles? We're running under AIX, and the max file handles limit is currently set to 2000. That seems like a lot, but sometimes we have 20 or 30 jobs running at the same time.
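A back-of-the-envelope way to sanity-check the file handle theory is to compare the process's soft descriptor limit against a rough estimate of demand. The sketch below uses Python's standard `resource` module (Unix-only); the "hashed files per job" figure and the x2 factor (each dynamic hashed file has at least a DATA and an OVER component) are assumptions for illustration, not numbers DataStage reports.

```python
import resource  # Unix-only; reflects the same limit ulimit -n shows

# Soft/hard file-descriptor limits for the current process
soft_limit, hard_limit = resource.getrlimit(resource.RLIMIT_NOFILE)

def enough_handles(jobs, hashed_files_per_job, limit=soft_limit):
    """True if the estimated descriptor demand fits under the limit."""
    needed = jobs * hashed_files_per_job * 2  # DATA + OVER per hashed file
    return needed <= limit
```

With the numbers from the post, 30 concurrent jobs each holding, say, 40 hashed files open would need roughly 2400 descriptors, already over a 2000 limit, which would match the intermittent "works on rerun" behaviour.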
Re: ds_uvput() - Write failed for record id
They must be colliding while writing, as they are in the same environment. Try giving the hashed files different names.
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Right, but for dfkettle, where a rerun solves the issue, that doesn't sound like it could be the case. Being transient, it does sound resource related. I don't know the answer to the file handle question; perhaps others do. The other thing that comes to mind is the AIX-specific limit on the number of directory entries; perhaps that is playing a role here?
-craig
"You can never have too many knives" -- Logan Nine Fingers