
ds_uvput() - Write failed for record id

Posted: Wed Aug 18, 2010 12:10 pm
by ORACLE_1
I was trying to run 3 ETLs in the same environment (each of them processes around 7 million rows), and all of them failed at the same time with the following error (the record id is different in all three) while writing their respective Hashed files. I think it may be a resource issue, so I am trying to run them individually. But I see in some posts it's also said that it may be due to the Hashed file size hitting 2 GB.. if that is the case, there is no clear resolution posted. Any help? (maybe my search capability is also not that good ;)) -

ds_uvput() - Write failed for record id '439
10461
11
10017
2147483646
13234
2147483646
2147483646
2147483646
2644
2971
2147483646
2147483646
2147483646
2147483646
USD
2147483646
2147483646
2147483646
2147483646
13558
13036
279'

Posted: Wed Aug 18, 2010 12:39 pm
by chulett
All failing "at the same time" sounds like a disk space issue to me. Any kind of "bigger than 2GB" situation would corrupt the hashed file and get you "blink" errors. Do you see anything like that? If so, the solution is to switch to 64-bit hashed files; that has been discussed here quite a bit.
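
For reference, and only as a sketch (HASH_TARGET is just a placeholder for the actual hashed file name; the commands are run from uvsh or the Administrator client's Command window in the project):

RESIZE HASH_TARGET DYNAMIC 64BIT

converts an existing dynamic hashed file to 64-bit addressing, and, if I remember the syntax correctly,

CREATE.FILE HASH_TARGET DYNAMIC 64BIT

creates a new account-level hashed file as 64-bit from the start.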

Posted: Wed Aug 18, 2010 12:54 pm
by ORACLE_1
chulett wrote:All failing "at the same time" sounds like a disk space issue to me. Any kind of "bigger than 2GB" situation would corrupt the hashed file and get you "blink" errors. Do you see anything like that? If so, the solution is to switch to 64-bit hashed files; that has been discussed here quite a bit.
yeah , at the end of the job I am getting -

DataStage Job 2985 Phantom 1996
Program "JOB.156795596.DT.1546540536.TRANS4": Line 1295, Internal data error.
Program "JOB.156795596.DT.1546540536.TRANS4": Line 1407, Internal data error.
Job Aborted after Fatal Error logged.
File 'E:\\Ascential\\DataStage\\Projects\\BI/HASH_TARGET/DATA.30':
Computed blink of 0x934 does not match expected blink of 0x0!
Detected within group starting at address 0x800CC000!
File 'E:\\Ascential\\DataStage\\Projects\\BI/HASH_TARGET/DATA.30':
Computed blink of 0x934 does not match expected blink of 0x0!
Detected within group starting at address 0x800CC000!
Attempting to Cleanup after ABORT raised in stage OH_J_Fact_FY2008..Trans_Assign_Values
DataStage Phantom Aborting with @ABORT.CODE = 1

Posted: Wed Aug 18, 2010 1:29 pm
by vivekgadwal
ORACLE_1 wrote: Computed blink of 0x934 does not match expected blink of 0x0!
Then this Hashed file should be resized to 64-bit, as noted by Craig.

Posted: Wed Aug 18, 2010 1:31 pm
by ORACLE_1
So I ran the resize command -

RESIZE HASHEDFILE DYNAMIC 64BIT

The command runs for some time and then returns with the same blink error. Does that mean I have to empty the file first and then resize it?

Error message -

Computed blink of 0x908 does not match expected blink of 0x0!
Detected within group starting at address 0x80000000!
RESIZE: Error on HASHEDFILE. File not resized.

Posted: Wed Aug 18, 2010 1:36 pm
by vivekgadwal
ORACLE_1 wrote:The command runs for some time and then returns with the same blink error. Does that mean I have to empty the file first and then resize it?
Can you post the error that you are getting?

You could also try this post to see if it helps you.

Posted: Wed Aug 18, 2010 1:37 pm
by ORACLE_1
Here is the error message -

Computed blink of 0x908 does not match expected blink of 0x0!
Detected within group starting at address 0x80000000!
RESIZE: Error on HASHEDFILE. File not resized.

Posted: Wed Aug 18, 2010 3:15 pm
by ORACLE_1
Alright guys, just an FYI: I think the file had already hit the 2 GB limit and hence could not be resized.
So I cleared the file, resized it, and now it works. Running the job now, fingers crossed ;).
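
For the record, the clean-up sequence from the engine command line was roughly this (HASH_TARGET standing in for the actual hashed file name):

CLEAR.FILE HASH_TARGET
RESIZE HASH_TARGET DYNAMIC 64BIT

i.e. empty the data portion first, and then the RESIZE to 64-bit goes through.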

Will mark it resolved once it completes!!

Thanks
Dk

Posted: Wed Aug 18, 2010 5:24 pm
by chulett
Right, once it was corrupted you were out of luck with the resize.

ds_uvput() - Write failed for record id

Posted: Thu Jan 20, 2011 8:18 am
by dfkettle
We sometimes get this error, but I don't think it's a disk space issue (we still have lots), and I don't think it's a file size issue either. We just resubmit the same job, with the same input, and it usually runs OK the second time. Could it be caused by running out of file handles? We're running under AIX, and the max file handles setting is currently 2000. That seems like a lot, but sometimes we have 20 or 30 jobs running at the same time.
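
For reference, the limit in question is the per-user nofiles attribute on AIX (it defaults to 2000); it can be checked or raised with standard AIX commands along these lines, where dsadm is only an example of the id the jobs run under:

ulimit -n                  # soft limit for the current shell
lsuser -a nofiles dsadm    # per-user value from /etc/security/limits
chuser nofiles=4096 dsadm  # raise it (as root; takes effect at the user's next login)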

Re: ds_uvput() - Write failed for record id

Posted: Thu Jan 20, 2011 10:17 am
by India2000
They must be colliding while writing, as they are in the same environment. Try giving them different hashed file names.
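
One way, just as a sketch, is to parameterise the file name in the Hashed File stage with a job parameter, for example:

HASH_TARGET_#RUN_SUFFIX#

where RUN_SUFFIX is only an illustrative parameter name, set to a different value per job or environment.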

Posted: Thu Jan 20, 2011 3:18 pm
by ray.wurlod
This error can also be caused by an illegal key (null, or one containing dynamic array delimiter characters ("mark characters")).
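
As an illustration only (the link and column names are placeholders), a server Transformer can guard the key before the write with something like:

Constraint:  Not(IsNull(in.KEY_COL))
Derivation:  Convert(@FM : @VM : @SM : @TM, '', in.KEY_COL)

i.e. drop rows with a null key and strip any field/value/subvalue/text mark characters from the key column. Convert and the @FM-style system variables are standard server-job functions, but check the exact behaviour on your release.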

Posted: Fri Jan 21, 2011 8:03 am
by chulett
Right, but for dfkettle, where a rerun solves the issue, that doesn't sound like it could be the case. Being transient, it does sound resource related. I don't know the answer to the file handle question, perhaps others do, but the other thing that comes to mind is the AIX-specific limit on the number of directory entries; perhaps that is playing a role here?

Posted: Fri Jan 21, 2011 1:52 pm
by ray.wurlod
No, the "group starting at address 0x80000000" part of the message indicates the 2GB limit for a hashed file with 32-bit addressing has been encountered.
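
(For the arithmetic: the largest offset a signed 32-bit file pointer can hold is 0x7FFFFFFF = 2,147,483,647 bytes, so a group starting at address 0x80000000 = 2,147,483,648 bytes = 2 GiB is the first address past that limit.)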