Hi,
I am using a hashed file in a job for looking up a DB2 table. I have read that a hashed file removes duplicates, and I want to know how that will affect my job. For example, I have Material, Mandt and Variant as the keys in the hashed file:
Material   Mandt    Variant   (all three are keys)
XXXX       YYYYYY   uuuuu
XXXX       YYYYYY   ''
XXXX       YYYYYY   wwww
RRRR       yyyyyy   wwww
If the data in the DB2 table looks like the above, I think I can achieve my output: all the records get written to the hashed file, since every key combination is distinct. Is that right?
But if I have data like this:
Material   Mandt   Variant   (all three are keys)
XXXXX      YYYYY   uuuuuu
XXXXX      YYYYY   uuuuuu
SSSSS      TTTTT   RRRRRR
SSSSS      TTTTT   RRRRRR
then I will get only one record per key from the DB2 table into the hashed file, because of the duplicate keys. Please correct me if I am wrong.
Thanks,
Somaraju.
Doubt in Hash file
Moderators: chulett, rschirm, roy
When writing to a hashed file, if you have a duplicate record key, the last record written is what remains in the file.
For your second example, only two records would exist in the hashed file:

Material   Mandt   Variant
XXXXX      YYYYY   uuuuuu
SSSSS      TTTTT   RRRRRR
Sephen
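The last-write-wins behaviour described above can be sketched with an ordinary Python dictionary keyed on the (Material, Mandt, Variant) tuple. This is purely an illustration of the semantics, not DataStage code; the function and field names are invented for the example:

```python
# Model a hashed file as a dict keyed on the composite key.
# Writing a record with a duplicate key replaces the earlier record
# (last write wins), which is why duplicates disappear.

def write_to_hash_file(rows, key_fields):
    hash_file = {}
    for row in rows:
        key = tuple(row[f] for f in key_fields)
        hash_file[key] = row  # duplicate key: old record is overwritten
    return list(hash_file.values())

# Second example from the question: four input rows, two distinct keys.
rows = [
    {"Material": "XXXXX", "Mandt": "YYYYY", "Variant": "uuuuuu"},
    {"Material": "XXXXX", "Mandt": "YYYYY", "Variant": "uuuuuu"},
    {"Material": "SSSSS", "Mandt": "TTTTT", "Variant": "RRRRRR"},
    {"Material": "SSSSS", "Mandt": "TTTTT", "Variant": "RRRRRR"},
]
result = write_to_hash_file(rows, ["Material", "Mandt", "Variant"])
print(len(result))  # 2 -- only one record per distinct key survives
```

In the first example every key combination is distinct, so the same function would keep all five rows.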
Hi Ray,
I'm just curious: what is the difference between a normal overwrite and a destructive overwrite?
I'm thinking that a write to a hashed file is a destructive write, not a destructive overwrite. Isn't an overwrite always destructive?
Please correct me if I'm wrong. Your help is very much appreciated.
Many Thanks,
Naveen.
Anything that won't sell, I don't want to invent. Its sale is proof of utility, and utility is success.
Author: Thomas A. Edison 1847-1931, American Inventor, Entrepreneur, Founder of GE
When a Hashed File stage (or DataStage BASIC routine) writes to a hashed file, if the key value is one that already exists on the hashed file, the old record is destroyed and the new one replaces it completely. That's why it's called destructive. However, this is all one operation in hashed files (not delete followed by insert, which you might do using one of the SQL-based stages, and which is a double operation). That's why it's called overwriting.
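The single-operation replace versus the two-operation delete-then-insert can be sketched with another toy dictionary example (again Python for illustration only; the record contents are invented):

```python
# Destructive overwrite: one operation. The new record replaces the old
# one COMPLETELY, including fields the new record does not mention.
key = ("XXXXX", "YYYYY", "uuuuuu")
store = {key: {"qty": 10, "plant": "P1"}}
store[key] = {"qty": 25}     # single operation; 'plant' is gone entirely
print(store[key])            # {'qty': 25}

# Delete followed by insert: two separate operations, as an SQL-based
# stage might issue (DELETE, then INSERT).
if key in store:
    del store[key]           # operation 1: DELETE
store[key] = {"qty": 30}     # operation 2: INSERT
print(store[key])            # {'qty': 30}
```

The end state is the same; the point is that the hashed-file write gets there in one step rather than two.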
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.