HashFile Problems
Moderators: chulett, rschirm, roy
Hi!
I have a job that uses a hashfile stage; the directory is cleared before creating the hashfile, there is no VOC entry for the hashfile, and the job works nearly all the time. But once in a blue moon I get the error "DSD.UVOpen": Line 503, WRITE failure. If I trace this line, it seems to be something like this:
write PASSIVE.STAGE.STATUS DSRTCOM.RTSTATUS.FVAR STAGE.ID
Can someone explain what may be going wrong to cause this error, and how I can correct it?
Thanks
ap_nat
You probably didn't paste the whole message. You sure you aren't writing a NULL primary key value on a row? Is there a constraint to check for this and prevent it?
Kenneth Bland
Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
kcbland wrote: You probably didn't paste the whole message. You sure you aren't writing a NULL primary key value on a row? Is there a constraint to check for this and prevent it?

But, Kenneth, DataStage accepts null values in hash keys from version 6.0, doesn't it? What's more, it accepts them even when the Nullable box in the column definition is set to "No".
Please, let me know if I'm wrong.
Thanks, Cecilia
I believe that writing a null key, or a key that evaluates to char(000), to a hash is perfectly fine, although I would question someone doing this. However, if an attempt is made to write a record to a hash and the key evaluates to NULL.STR / char(128), then you will receive a write failure. This is not DS, but UniVerse.
Also, if you have a multi-part key and one of the "parts" evaluates to the NULL.STR or NULL this is ok too, although you may get results you were not counting on.
As Ken points out, check for this condition in the constraint and take appropriate action.
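To make the multi-part behaviour concrete, here is a small sketch (Python, purely illustrative, not DataStage itself). It assumes, based on the byte values quoted in this thread, that key parts are joined with the UniVerse text mark char(251) and that a null part is stored as NULL.STR, char(128):

```python
# Sketch: simulate how a multi-part hashed-file key might be assembled.
# Assumptions (from this thread): parts are joined with the text mark
# char(251), and a null part is represented by NULL.STR, char(128).

TM = chr(251)        # @TM, the separator between key parts
NULL_STR = chr(128)  # @NULL.STR, the stored representation of null

def make_key(*parts):
    """Join key parts, substituting NULL.STR for None (SQL null)."""
    return TM.join(NULL_STR if p is None else str(p) for p in parts)

# A null part alongside a real part still assembles into a writable key,
# matching the ^128^251SOMETHING bytes described later in the thread:
k1 = make_key(None, "SOMETHING")
assert k1 == NULL_STR + TM + "SOMETHING"

# An all-null compound key is still one single key value (^128^251^128),
# so every such row collapses onto the same record:
k2 = make_key(None, None)
assert k2 == NULL_STR + TM + NULL_STR
```

The point of the sketch is that once nulls are folded into a single stored byte, distinct source rows can produce identical keys, which is exactly the lost-uniqueness problem described above.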
Regards,
Michael Hester
Last edited by mhester on Thu Jan 22, 2004 9:00 pm, edited 1 time in total.
Mike Hester
mhester@petra-ps.com
cecilia wrote: But, Kenneth, DataStage accepts null values in hash keys from version 6.0, doesn't it? And what's more, it accepts them even when the Nullable box in the column definition is set to "No". Please, let me know if I'm wrong. Thanks, Cecilia

No, you cannot have a NULL primary key value in a hash file. If your key is compound, meaning more than one column comprises the primary key, then it also does not work if one of those columns is NULL (which is not the same as BLANK). The metadata "null" settings do not prevent anything.
If you wish to avoid writing a row to a hash file with a NULL as the primary key, you must put in the output link constraint checks to prevent this.
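The guard logic that such a constraint performs can be sketched like this (Python, purely illustrative; in a real job this would be a transformer output-link constraint expression testing the key columns for null, and the row data here is hypothetical):

```python
# Sketch of the check an output-link constraint would apply: pass a row
# through to the hash file write only when every key column is non-null;
# anything else goes to a reject path instead.

rows = [
    {"KEY": "A100", "VAL": 1},
    {"KEY": None,   "VAL": 2},   # null primary key: must not reach the hash file
    {"KEY": "A101", "VAL": 3},
]

KEY_COLS = ["KEY"]

good = [r for r in rows if all(r[k] is not None for k in KEY_COLS)]
bad  = [r for r in rows if any(r[k] is None for k in KEY_COLS)]

assert [r["KEY"] for r in good] == ["A100", "A101"]
assert len(bad) == 1  # route these to a reject link, or log and investigate
```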
Kenneth Bland
Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
Sorry Mike, didn't know you were posting at the same time. My understanding is that multi-part keys cannot have NULL in them, as any concatenation with a NULL always returns NULL. I did not differentiate NULL and NULL.STR. You're probably right, but it's a good practice to never map NULL into any column designated a primary key. NULLs are so much fun to troubleshoot, it's best to not have them proliferate.
Either way, a single column primary key of NULL should be mapped to a NULL representative. I believe this is probably an error situation by the user, and she needs to code appropriately for this condition.
Kenneth Bland
Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
Ken,
No problem. Whenever I see questions like this I always feel that I know the answer (been there, done that), although I always write a small program in UV to test my theory. There are many tech tips out there from Ardent, PRIME, etc... that outline the havoc this can cause in a system if you allow nulls or null.str (multi-part) to be written. I like your answer, because in reality for what we do, there is no good reason to be writing keys with null parts or at least I have not heard of one.
Regards,
Michael Hester
Mike Hester
mhester@petra-ps.com
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Hashed files are how the UniVerse database implements its tables.
Every column in the primary key must be defined as NOT NULL. This is the same as every table in every database product I have ever worked with.
It follows that no key column in a hashed file can accept NULL as a value.
That definite enough?
One difficulty in this discussion is that old-time UniVerse users, from before when an SQL capability was added (UniVerse's precursors actually pre-date SQL), used to refer to the zero-length string as "null", which can cause confusion, since it is legal that a primary key column can have "" as a value (this is a known value, rather than unknown as in NULL).
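To make that distinction concrete, here is a tiny sketch (Python, purely illustrative): the zero-length string is a known value and passes a key check, while a true SQL NULL (modelled here as None) does not.

```python
# "" is a known value and is legal as a primary key;
# None models SQL NULL (unknown) and must be rejected.
candidate_keys = ["", "CUST001", None]

legal   = [k for k in candidate_keys if k is not None]
illegal = [k for k in candidate_keys if k is None]

assert legal == ["", "CUST001"]   # the zero-length string is accepted
assert illegal == [None]          # the true NULL is not
```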
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
To all,
Both Ken and Ray are correct. I was reverting back to my old days prior to SQL implementation and was actually thinking of a slightly different issue I had run into many years ago.
My apologies if this added to the confusion.
Regards,
Michael Hester
Mike Hester
mhester@petra-ps.com
Re: HashFile Problems
ap_nat wrote: Hi! I have a job that uses a hashfile stage, the directory is cleared before creating the hashfile, there is no VOC entry for the hashfile and the job works nearly all the time. But once in a blue moon I get the error "DSD.UVOpen": Line 503, WRITE failure. If I track this line it seems to be something like this: write PASSIVE.STAGE.STATUS DSRTCOM.RTSTATUS.FVAR STAGE.ID. Can someone explain to me what may be going wrong for this error to be caused and how I can correct it. Thanks, ap_nat

That much of the message suggests that an attempt is being made to access the status of a passive stage (or resource). If this is the case, there is a problem accessing the status file for the job (RT_STATUSnn, where nn is the job number). It may be an intermittent contention issue.
Nonetheless, you should determine the job number and check that the run time repository files (RT_CONFIGnn, RT_LOGnn and RT_STATUSnn) are intact. Just to be certain.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Re: HashFile Problems
ray.wurlod wrote: That much of the message suggests that an attempt is being made to access the status of a passive stage (or resource). If this is the case, there is a problem accessing the status file for the job (RT_STATUSnn, where nn is the job number). It may be an intermittent contention issue. Nonetheless, you should determine the job number and check that the run time repository files (RT_CONFIGnn, RT_LOGnn and RT_STATUSnn) are intact. Just to be certain.

Thanks for the replies. I am not passing any null string as the key; I take proper care to validate that the field used as the key in this hashfile is not null.

Coming to Ray's suggestions, all the RT_ files appear to be proper; I mean I can run queries against them from the uv prompt. Is there any other way of checking whether they are intact?

If possible, can you also explain what you mean by contention issues?
Contention issues?
I imagine (without access to source code) that the BASIC code in which DataStage is written incorporates a timeout when it has to do things like wait for a lock to be released by another process. So, if it waits too long, it reports a write failure.
It would be nice if the error message could be more specific, even if it just reported the error code.
Log a call with your support provider to find out what's happening at line 503 of DSD.UVOpen; this will help to ascertain the cause of the message.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
mhester wrote: Also, if you have a multi-part key and one of the "parts" evaluates to the NULL.STR or NULL this is ok too, although you may get results you were not counting on. As Ken points out, check for this condition in the constraint and take appropriate action. Regards, Michael Hester

Sorry to insist on this issue, but I think it's important to understand hash file behaviour. I made tests with DataStage 6.0, and Mike is right.

If you want, I can send you a job that moves @NULL to the columns of a multi-part key in a hash (except one column), and it creates the record. And I have another job that uses this hash as a reference, and the record is found.

Regards, Cecilia
Cecilia,
What both Ray and Ken pointed out is correct, although the intent of my original reply was to illustrate that you can indeed write a multi-part key to a hash that contains the @NULL or @NULL.STR in any portion of the key (or the entire key).
Here are some scenarios that can happen -
Multi-part
KEY1 = @NULL
KEY2 = @NULL
Key in UV looks like -
^128^251^128
Result = record written
Keep in mind that if 10 records come in and the keys evaluated as above that you would only have one record.
KEY1 = @NULL.STR
KEY2 = "SOMETHING"
Key in UV looks like -
^128^251SOMETHING
Result = record written
With this you may lose records since you have lost uniqueness in the key.
Single column keys
KEY = @NULL
Result = write failure
KEY = @NULL.STR
Result = write failure
The @NULL is apparently being translated by DS to char(128), i.e. @NULL.STR.
Conclusion:
Again, my position would be as Ken stated - ensure that this condition does not occur in your data and take appropriate action if it does.
Ray pointed out that no database allows a true SQL null in any part of the key; this is correct, and UV is no different. As far as I know, any value from the ASCII character set is valid in the key. There may be some values in the extended ASCII set that are not, but I don't know about those (never tried).
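The scenarios above can be sketched as a toy model (Python, purely illustrative): the hash file is modelled as a dict keyed by the assembled key string, with char(128) standing in for @NULL.STR and char(251) for the text mark, per the bytes quoted above.

```python
# Toy model of the multi-part scenarios: a hashed file as a dict keyed by
# the assembled key string. chr(128) stands in for @NULL.STR and chr(251)
# for the text mark between key parts.

NULL_STR, TM = chr(128), chr(251)

hash_file = {}

# Ten incoming rows whose two key parts are both null all assemble to the
# same key (^128^251^128), so each write replaces the previous record:
for i in range(10):
    key = NULL_STR + TM + NULL_STR
    hash_file[key] = {"row": i}

assert len(hash_file) == 1           # ten writes, one record kept

# A null part alongside a real part (^128^251SOMETHING) also writes,
# but uniqueness across source rows is gone:
hash_file[NULL_STR + TM + "SOMETHING"] = {"row": 10}
assert len(hash_file) == 2
```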
I hope this helps and can finally put this subject to rest!
Regards,
Michael Hester
Mike Hester
mhester@petra-ps.com