unable to read from a hash file...
Hi,
Let me make my question easier and clearer. I have two keys, x and y, in my hash file (the columns are x, y, z). The condition I am using in the output field is
if input.y = hash.y then hash.z else 0
The key expressions I used for x and y in the hash are a constant and input.y respectively.
I have tried all the known methods, such as trimming in the hash ETL as well as trimming hash, input, and output in the main ETL that uses the above logic. Whatever I do, the ETL does not read the hash file at all.
I request some guidance on this issue.
Thanks in advance,
Raj
Raj,
When you read from a hash file that has multiple columns defined as keys, you need to specify all of the keys to read a record. So in your case you are not getting anything because you have keys x and y but are accessing using only y.
If you wish to use this type of select on a hash file, you need to access the file as if it were a table, either through the Universe or the ODBC stage; then your query will return the appropriate rows.
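As an aside for readers, the composite-key behaviour described above can be sketched in Python (this is not DataStage code: a plain dict keyed on an (x, y) tuple stands in for the hashed file, and the key values are purely illustrative):

```python
# A hashed file with two key columns x and y behaves like a dict keyed
# on the (x, y) tuple; a lookup that supplies only y can never match.
hash_file = {
    ("CONST", "A001"): 10,   # (x, y) -> z
    ("CONST", "B002"): 20,
}

def lookup(x, y):
    """Return z for the full composite key (x, y), or None on failure."""
    return hash_file.get((x, y))

print(lookup("CONST", "A001"))  # both keys supplied: record found
print(lookup("CONST", "ZZZZ"))  # no such record: lookup fails
```

With only one of the two keys there is simply no way to address a record, which is why the job appears not to read the hash file at all.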
Let us take a step back: are you using this hash file as your main input or as a lookup? If it is your main input, then you need to use different syntax in your select.
Seems like there are two problems here.
One seems to be the fact that the lookup is failing. The hashed file is being read, however every row fails the lookup due to key mis-matches so your reference link total is zero after the job finishes. You are going to have to do all of the 'normal' things people need to do when they are in that situation - make sure the metadata used to create the hashed file is the same being used in the lookup, key fields are being trimmed if needed, and keys are being correctly mapped in from the input source. Etc. Etc. Something is not quite kosher or the lookup would be working.
The second is the proper method to check if a lookup has succeeded or failed. The expression you are using in the derivation downstream that is reliant on the result of the lookup is not proper:

if input.y = hash.y and hash.x = constant then hash.z else 0

You should either use the 'Input Link Variables' for this or check a key field for a null value. When the lookup succeeds, all of the key fields will still match whatever values you loaded into them. When it fails, all hashed fields will be null. Take your pick:

LookupLink.NOTFOUND [failure]
Not(LookupLink.NOTFOUND) [success]
-or-
IsNull(LookupLink.KeyField) [failure]
Not(IsNull(LookupLink.KeyField)) [success]

Taking the first (more common) approach, your derivation would change to:

if hash.NOTFOUND then 0 else hash.z
-or-
if Not(hash.NOTFOUND) then hash.z else 0

Not saying your expression won't work... guess it might... but it is decidedly non-standard.
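A rough Python sketch of this NOTFOUND pattern for readers (a dict stands in for the hashed file, and None stands in for the nulls a failed lookup returns; names are illustrative):

```python
# Check whether the lookup succeeded before using any of its fields,
# instead of comparing a returned field against the input value.
hash_file = {("CONST", "A001"): 10}

def derive_z(x, y):
    z = hash_file.get((x, y))      # None plays the role of NOTFOUND/null
    notfound = z is None
    return 0 if notfound else z    # "if hash.NOTFOUND then 0 else hash.z"

print(derive_z("CONST", "A001"))   # successful lookup returns z
print(derive_z("CONST", "ZZZZ"))   # failed lookup falls back to 0
```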
-craig
"You can never have too many knives" -- Logan Nine Fingers
Hi chulett,
I used this expression in my output:
if hash.NOTFOUND THEN 1 ELSE 0
and the result returned a '1'.
Does this mean that the hash file has null values in it? If so, what would be my next step in the process to debug the issue?
I appreciate your time and precise response.
Thanks,
Raj
honeyraj wrote: Does this mean that the hash file has null values in it? If so, what would be my next step in the process to debug the issue?
No, it means the fields returned by the lookup will be null - not the data in the hashed file itself. So, yes, time to dig in and start debugging why the lookup is failing.
Does everything look 'ok' when you 'View data' on the hashed file?
-craig
"You can never have too many knives" -- Logan Nine Fingers
Yes, I can see the data in the hash file and the keys match.
As part of the debugging, I am using constraints on the output and error files.
For output, the constraint I am using is NOT(hash1.NOTFOUND).
For the error output file, the constraint is hash1.NOTFOUND.
When I ran the ETL, all the rows went into the error file.
So, I request you guys to help debug this issue in a step-wise manner. I have been struggling with this issue for the past two days.
Thanks for all your support,
Raj
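For readers, the constraint-based routing Raj describes can be sketched in Python (dict and row values are illustrative; the trailing space on one key previews the eventual diagnosis of a key mismatch):

```python
# Rows whose lookup succeeds go to the output link; failures go to the
# error link - the same split as NOT(hash1.NOTFOUND) vs hash1.NOTFOUND.
hash_file = {"A001": 10, "B002": 20}
rows = ["A001", "B002 "]            # note the trailing space on "B002 "

output, errors = [], []
for key in rows:
    if key in hash_file:            # Not(hash1.NOTFOUND)
        output.append(key)
    else:                           # hash1.NOTFOUND
        errors.append(key)

# The untrimmed "B002 " lands on the error link even though "B002"
# exists in the hash file - an exact key match is required.
print(output, errors)
```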
Guys,
I appreciate everyone who has responded to my query. I was able to clear up my problem.
The solution is that I needed to use 'trim(left(input,12))' in the key expression under the hash.y field. Now the hash is read correctly. I understand now that the number of characters in the key expression and in the field should match (this might be DataStage basics, but I missed it!!)
Thanks once again,
Raj
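For readers, the fix amounts to normalizing the key the same way on both sides of the lookup. A rough Python equivalent of trim(left(input, 12)) (assuming Trim here only strips surrounding blanks; the width and values are illustrative):

```python
# Normalize a key the same way when loading the hashed file and when
# looking up against it: keep the first 12 characters, strip blanks.
def normalize_key(value, width=12):
    return value[:width].strip()

# Key normalized on load...
hash_file = {normalize_key("A001        "): 10}

# ...and normalized again on lookup, so the lengths always match.
print(hash_file.get(normalize_key(" A001")))
```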
The more complete solution is to TRIM all string type (Char and VarChar) key values when loading them into the hashed file as well as when performing lookups against them.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.