ds_uvput() - Write failed for record id

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

ORACLE_1
Premium Member
Posts: 35
Joined: Mon Feb 16, 2009 1:19 pm

ds_uvput() - Write failed for record id

Post by ORACLE_1 »

I was trying to run 3 ETLs in the same environment (each of them processes around 7 million rows). All of them failed at the same time with the following error (the record id is different on all three) while writing their respective Hashed files. I think it may be a resource issue, so I am trying to run them individually. But I have seen in some posts that it may also be due to the Hashed file size hitting 2 GB, and if that is the case there is no clear resolution posted. Any help? (maybe my search capability is also not that good ;)) -

ds_uvput() - Write failed for record id '439
10461
11
10017
2147483646
13234
2147483646
2147483646
2147483646
2644
2971
2147483646
2147483646
2147483646
2147483646
USD
2147483646
2147483646
2147483646
2147483646
13558
13036
279'
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

All failing "at the same time" sounds like a disk space issue to me. Any kind of "bigger than 2GB" situation would corrupt the hashed file and get you "blink" errors - do you see anything like that? If so, the solution is to switch to 64-bit hashed files, which has been discussed here quite a bit.
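
If you do end up going the 64-bit route, the usual approach (from the project's TCL / Administrator command line - the file name below is just a placeholder, and the exact syntax can vary a little by release, so check the command help) is something like:

RESIZE MYHASH DYNAMIC 64BIT

or creating the file 64-bit in the first place with CREATE.FILE and the 64BIT keyword. Do it while the file is still healthy, well before it gets anywhere near 2GB.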
-craig

"You can never have too many knives" -- Logan Nine Fingers
ORACLE_1
Premium Member
Posts: 35
Joined: Mon Feb 16, 2009 1:19 pm

Post by ORACLE_1 »

chulett wrote: All failing "at the same time" sounds like a disk space issue to me. Any kind of "bigger than 2GB" situation would corrupt the hashed file and get you "blink" errors - do you see anything like that? If so, the solution is to switch to 64-bit hashed files, which has been discussed here quite a bit.
Yeah, at the end of the job I am getting -

DataStage Job 2985 Phantom 1996
Program "JOB.156795596.DT.1546540536.TRANS4": Line 1295, Internal data error.
Program "JOB.156795596.DT.1546540536.TRANS4": Line 1407, Internal data error.
Job Aborted after Fatal Error logged.
File 'E:\\Ascential\\DataStage\\Projects\\BI/HASH_TARGET/DATA.30':
Computed blink of 0x934 does not match expected blink of 0x0!
Detected within group starting at address 0x800CC000!
File 'E:\\Ascential\\DataStage\\Projects\\BI/HASH_TARGET/DATA.30':
Computed blink of 0x934 does not match expected blink of 0x0!
Detected within group starting at address 0x800CC000!
Attempting to Cleanup after ABORT raised in stage OH_J_Fact_FY2008..Trans_Assign_Values
DataStage Phantom Aborting with @ABORT.CODE = 1
vivekgadwal
Premium Member
Posts: 457
Joined: Tue Sep 25, 2007 4:05 pm

Post by vivekgadwal »

ORACLE_1 wrote: Computed blink of 0x934 does not match expected blink of 0x0!
Then this Hashed file should be resized to 64-bit, as noted by Craig.
Vivek Gadwal

Experience is what you get when you didn't get what you wanted
ORACLE_1
Premium Member
Posts: 35
Joined: Mon Feb 16, 2009 1:19 pm

Post by ORACLE_1 »

So I ran the resize command -

RESIZE HASHEDFILE DYNAMIC 64BIT

The command runs for some time and then returns with the same blink error. Does it mean I have to empty the file and then do it?
vivekgadwal
Premium Member
Posts: 457
Joined: Tue Sep 25, 2007 4:05 pm

Post by vivekgadwal »

ORACLE_1 wrote: The command runs for some time and then returns with the same blink error. Does it mean I have to empty the file and then do it?
Can you post the error that you are getting?

You could also try this post to see if it helps you.
Vivek Gadwal

Experience is what you get when you didn't get what you wanted
ORACLE_1
Premium Member
Posts: 35
Joined: Mon Feb 16, 2009 1:19 pm

Post by ORACLE_1 »

Here is the error message -

Computed blink of 0x908 does not match expected blink of 0x0!
Detected within group starting at address 0x80000000!
RESIZE: Error on HASHEDFILE. File not resized.
ORACLE_1
Premium Member
Posts: 35
Joined: Mon Feb 16, 2009 1:19 pm

Post by ORACLE_1 »

Alright guys, just an FYI - I think the file had already hit the 2 GB limit and hence could not be resized.
So I cleared the file, resized it to 64-bit, and now it works. Running the job now, fingers crossed ;).
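
For anyone who hits this later, from the Administrator / TCL command line the sequence is essentially the following (CLEAR.FILE is shown for illustration - ticking "clear file before writing" on the Hashed File stage amounts to the same thing):

CLEAR.FILE HASHEDFILE
RESIZE HASHEDFILE DYNAMIC 64BIT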

Will mark it resolved once it completes!

Thanks
Dk
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Right, once it was corrupted you were out of luck with the resize.
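
For future reference, you can usually see it coming by keeping an eye on the size of DATA.30 and OVER.30 under the hashed file's directory, for example (path taken from your log):

dir E:\Ascential\DataStage\Projects\BI\HASH_TARGET

Once either of those files is pushing up against 2GB on a 32-bit hashed file - or you've already taken the blink error - clearing or recreating it is really the only way forward.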
-craig

"You can never have too many knives" -- Logan Nine Fingers
dfkettle
Participant
Posts: 27
Joined: Tue Jun 13, 2006 6:56 am

ds_uvput() - Write failed for record id

Post by dfkettle »

We sometimes get this error, but I don't think it's a disk space issue (we still have lots of space), and I don't think it's a file size issue either. We just resubmit the same job with the same input, and it usually runs OK the second time. Could it be caused by running out of file handles? We're running under AIX, and the max file handles is currently set to 2000. That seems like a lot, but sometimes we have 20 or 30 jobs running at the same time.
India2000
Participant
Posts: 274
Joined: Sun Aug 22, 2010 11:07 am

Re: ds_uvput() - Write failed for record id

Post by India2000 »

They must be colliding while writing, as they are in the same environment. Try giving them different hashed file names.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

This error can also be caused by an illegal key (null, or one containing dynamic array delimiter characters ("mark characters")).
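
If you suspect that, a quick constraint in the Transformer feeding the Hashed File stage can filter such rows out before the write - link and column names here are purely illustrative:

Not(IsNull(InLink.KeyCol)) And Len(InLink.KeyCol) > 0 And Index(InLink.KeyCol, @FM, 1) = 0 And Index(InLink.KeyCol, @VM, 1) = 0 And Index(InLink.KeyCol, @SM, 1) = 0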
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Right, but for dfkettle, where a rerun solves the issue, that doesn't sound like it could be the case. Being transient, it does sound resource related. I don't know the answer to the file handle question (perhaps others do), but the other thing that comes to mind is the AIX-specific limit on the number of directory entries - perhaps that is playing a role here?
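
If you want to rule the limits in or out, a couple of things are worth checking (off the top of my head, so verify on your box): the per-process descriptor limit for the userid running the jobs, e.g.

ulimit -n

and the T30FILE and MFILES settings in the engine's uvconfig file, which cap how many dynamic (hashed) files the engine will hold open at once.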
-craig

"You can never have too many knives" -- Logan Nine Fingers
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

No, the "group starting at address 0x80000000" part of the message indicates the 2GB limit for a hashed file with 32-bit addressing has been encountered.
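(0x80000000 is 2^31 = 2,147,483,648 bytes, i.e. the group that failed sits exactly on the 2GB boundary that a 32-bit file offset cannot address beyond.)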
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.