HashFile Problems

Post questions here relating to DataStage Server Edition, for areas such as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

ap_nat
Participant
Posts: 9
Joined: Thu Apr 24, 2003 3:51 pm

HashFile Problems

Post by ap_nat »

Hi!

I have a job that uses a hashfile stage. The directory is cleared before the hashfile is created, there is no VOC entry for the hashfile, and the job works nearly all the time. But once in a blue moon I get the error "DSD.UVOpen": Line 503, WRITE failure. If I track down this line, it appears to be something like this:
write PASSIVE.STAGE.STATUS DSRTCOM.RTSTATUS.FVAR STAGE.ID

Can someone explain what might be going wrong to cause this error, and how I can correct it?

Thanks
ap_nat
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

You probably didn't paste the whole message. You sure you aren't writing a NULL primary key value on a row? Is there a constraint to check for this and prevent it?
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
cecilia
Participant
Posts: 33
Joined: Thu Jan 15, 2004 9:55 am
Location: Argentina
Contact:

Post by cecilia »

kcbland wrote:You probably didn't paste the whole message. You sure you aren't writing a NULL primary key value on a row? Is there a constraint to check for this and prevent it?
But, Kenneth, DataStage accepts null values in a hash key as of version 6.0, doesn't it? What's more, it accepts them even when the Nullable box in the column definition is set to "No".
Please let me know if I'm wrong.
Thanks, Cecilia
mhester
Participant
Posts: 622
Joined: Tue Mar 04, 2003 5:26 am
Location: Phoenix, AZ
Contact:

Post by mhester »

I believe that writing a null key, or a key that evaluates to char(000), to a Hash is perfectly fine, although I would question someone doing this. However, if an attempt is made to write a record to a Hash and the key evaluates to the NULL.STR / char(128), then you will receive a write failure. This is not DS, but UniVerse.

Also, if you have a multi-part key and one of the "parts" evaluates to the NULL.STR or NULL, this is OK too, although you may get results you were not counting on.

As Ken points out, check for this condition in the constraint and take appropriate action.

Regards,

Michael Hester
Last edited by mhester on Thu Jan 22, 2004 9:00 pm, edited 1 time in total.
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

cecilia wrote: But, Kenneth, DataStage accepts null values in a hash key as of version 6.0, doesn't it? What's more, it accepts them even when the Nullable box in the column definition is set to "No".
Please let me know if I'm wrong.
Thanks, Cecilia
No, you cannot have a NULL primary key value in a hash file. If your key is compound, meaning more than one column comprises the primary key, then this also does not work when one of the columns is NULL (this is not the same as BLANK). MetaData "null" settings do not prevent anything.

If you wish to avoid writing a row to a hash file with a NULL as the primary key, you must put in the output link constraint checks to prevent this.
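For example (just a sketch; the link and column names here are made up), an output link constraint in the Transformer along these lines would do it:

   Not(IsNull(DSLink3.CUST_ID))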
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

Sorry Mike, didn't know you were posting at the same time. My understanding is that multi-part keys cannot have NULL in them, as any concatenation with a NULL always returns NULL. I did not differentiate NULL and NULL.STR. You're probably right, but it's good practice never to map NULL into any column designated as a primary key. NULLs are so much fun to troubleshoot that it's best not to let them proliferate.

Either way, a single-column primary key of NULL should be mapped to a NULL representative. I believe this is probably an error situation by the user, and she needs to code appropriately for this condition.
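For example (a sketch; the link and column names are made up), a derivation like this substitutes a representative value:

   If IsNull(DSLink3.CUST_ID) Then '<NULL>' Else DSLink3.CUST_ID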
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
cecilia
Participant
Posts: 33
Joined: Thu Jan 15, 2004 9:55 am
Location: Argentina
Contact:

Post by cecilia »

Thanks Kenneth and Mike, and apologies to ap_nat for taking over your case to raise this situation :oops:
That said, I will check what Kenneth and Mike have said against my own jobs, but those jobs loaded hash files and then aborted when they inserted the null value from the hash key into the database.
Thanks again, Cecilia
mhester
Participant
Posts: 622
Joined: Tue Mar 04, 2003 5:26 am
Location: Phoenix, AZ
Contact:

Post by mhester »

Ken,

No problem. Whenever I see questions like this I always feel that I know the answer (been there, done that), although I always write a small program in UV to test my theory. There are many tech tips out there from Ardent, PRIME, etc. that outline the havoc this can cause in a system if you allow nulls or null.str (in multi-part keys) to be written. I like your answer, because in reality, for what we do, there is no good reason to be writing keys with null parts, or at least I have not heard of one.
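Something like this small UV BASIC program (a rough sketch; the file name is made up) is enough to see the behaviour:

   OPEN 'TEST.HASH' TO F.TEST ELSE STOP 'Cannot open TEST.HASH'
   KEY = @NULL.STR                        ;* i.e. CHAR(128)
   WRITE 'SOME DATA' ON F.TEST, KEY ON ERROR PRINT 'WRITE failure'
   PRINT 'Done'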

Regards,

Michael Hester
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Hashed files are how the UniVerse database implements its tables.

Every column in the primary key must be defined as NOT NULL. This is the same as every table in every database product I have ever worked with.

It follows that no key column in a hashed file can accept NULL as a value.

That definite enough? :)


One difficulty in this discussion is that old-time UniVerse users, from before an SQL capability was added (UniVerse's precursors actually pre-date SQL), used to refer to the zero-length string as "null". This can cause confusion, since it is legal for a primary key column to have "" as a value (a known value, rather than unknown as in NULL).
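In BASIC terms (just a quick illustration), the distinction is:

   X = ''          ;* zero-length string: a known value, legal in a key
   Y = @NULL       ;* the SQL null: unknown, not legal in a key
   PRINT ISNULL(X) ;* prints 0
   PRINT ISNULL(Y) ;* prints 1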
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
mhester
Participant
Posts: 622
Joined: Tue Mar 04, 2003 5:26 am
Location: Phoenix, AZ
Contact:

Post by mhester »

To all,

Both Ken and Ray are correct. I was reverting back to my old days prior to SQL implementation and was actually thinking of a slightly different issue I had run into many years ago.

My apologies if this added to the confusion.

Regards,

Michael Hester
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Re: HashFile Problems

Post by ray.wurlod »

ap_nat wrote: Hi!

I have a job that uses a hashfile stage. The directory is cleared before the hashfile is created, there is no VOC entry for the hashfile, and the job works nearly all the time. But once in a blue moon I get the error "DSD.UVOpen": Line 503, WRITE failure. If I track down this line, it appears to be something like this:
write PASSIVE.STAGE.STATUS DSRTCOM.RTSTATUS.FVAR STAGE.ID

Can someone explain what might be going wrong to cause this error, and how I can correct it?

Thanks
ap_nat
That much of the message suggests that an attempt is being made to access the status of a passive stage (or resource). If this is the case, there is a problem accessing the status file for the job (RT_STATUSnn, where nn is the job number). It may be an intermittent contention issue.

Nonetheless, you should determine the job number and check that the run time repository files (RT_CONFIGnn, RT_LOGnn and RT_STATUSnn) are intact. Just to be certain.
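For example (assuming the job number turned out to be 42), something like this from the uv prompt would exercise each file:

   COUNT RT_CONFIG42
   COUNT RT_LOG42
   COUNT RT_STATUS42

If COUNT completes without complaint, the file at least opens and scans cleanly.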
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
ap_nat
Participant
Posts: 9
Joined: Thu Apr 24, 2003 3:51 pm

Re: HashFile Problems

Post by ap_nat »

ray.wurlod wrote: That much of the message suggests that an attempt is being made to access the status of a passive stage (or resource). If this is the case, there is a problem accessing the status file for the job (RT_STATUSnn, where nn is the job number). It may be an intermittent contention issue.

Nonetheless, you should determine the job number and check that the run time repository files (RT_CONFIGnn, RT_LOGnn and RT_STATUSnn) are intact. Just to be certain.
Thanks for the replies. I am not passing any null string as the key; I take proper care to validate that the field used as the key in this hashfile is not null.

Coming to Ray's suggestions: all the RT_ files seem proper, in the sense that I can run queries against them from the uv prompt. Is there any other way of checking whether they are intact?

If possible, can you also explain what you mean by contention issues?
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Contention issues?

Post by ray.wurlod »

I imagine (without access to source code) that the BASIC code in which DataStage is written incorporates a timeout when it has to do things like wait for a lock to be released by another process. So, if it waits too long, it reports a write failure.

It would be nice if the error message could be more specific, even if it just reported the error code.

Log a call with your support provider to find out what's happening at line 503 of DSD.UVOpen; this will help to ascertain the cause of the message.
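One way (just a suggestion) to watch for this is to list the locks from the uv prompt while the job is running:

   LIST.READU EVERY

The EVERY option shows file and group locks as well as record locks; a lock that never goes away points to the contending process.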
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
cecilia
Participant
Posts: 33
Joined: Thu Jan 15, 2004 9:55 am
Location: Argentina
Contact:

Post by cecilia »

mhester wrote: Also, if you have a multi-part key and one of the "parts" evaluates to the NULL.STR or NULL, this is OK too, although you may get results you were not counting on.

As Ken points out, check for this condition in the constraint and take appropriate action.

Regards,

Michael Hester

Sorry to insist on this issue, but I think it's important to understand the behaviour of hash files. I ran tests with DataStage 6.0, and Mike is right.
If you want, I can send you a job that moves @NULL into the columns of a multi-part key in a hash (all except one column), and it creates the record. I also have another job that uses this hash as a reference, and that record is found.
Regards, Cecilia
mhester
Participant
Posts: 622
Joined: Tue Mar 04, 2003 5:26 am
Location: Phoenix, AZ
Contact:

Post by mhester »

Cecilia,

What both Ray and Ken pointed out is correct, although the intent of my original reply was to illustrate that you can indeed write a multi-part key to a hash that contains the @NULL or @NULL.STR in any portion of the key (or the entire key).

Here are some scenarios that can happen -

Multi-part

KEY1 = @NULL
KEY2 = @NULL

Key in UV looks like -

^128^251^128

Result = record written

Keep in mind that if 10 records come in and the keys evaluate as above, you would end up with only one record.

KEY1 = @NULL.STR
KEY2 = "SOMETHING"

Key in UV looks like -

^128^251SOMETHING

Result = record written

With this you may lose records since you have lost uniqueness in the key.

Single column keys

KEY = @NULL

Result = write failure

KEY = @NULL.STR

Result = write failure

The @NULL is apparently being translated by DS to char(128), i.e. @NULL.STR.
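One quick way (a sketch) to see what actually landed in the key is to dump its byte values in UV BASIC:

   KEY = @NULL.STR : @TM : 'SOMETHING'   ;* mimic the two-part key above
   FOR I = 1 TO LEN(KEY)
      PRINT SEQ(KEY[I,1]) : ' ' :        ;* prints 128 251 83 ...
   NEXT I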

Conclusion:

Again, my position would be as Ken stated - ensure that this condition does not occur in your data and take appropriate action if it does.
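For the multi-part case, a constraint along these lines (link and column names invented) covers both conditions:

   Not(IsNull(in.KEY1)) And Not(IsNull(in.KEY2)) And in.KEY1 <> @NULL.STR And in.KEY2 <> @NULL.STR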

Ray pointed out that no database allows a true (SQL) null in any part of the key; this is correct, and UV is no different. As far as I know, any value from the ASCII character set is valid in the key. There may be some values in the extended ASCII set that are not, but I don't know about those (never tried).

I hope this helps and can finally put this subject to rest :-)

Regards,

Michael Hester