Hash file Location
Hi,
I have a question regarding hashed files. When you create a hashed file, you can use an account name (usually I select this option and leave the account name empty) or use a direct path.
We have a DataStage consultant doing consulting work at our company. He suggested we use a direct path instead of an account name. His reason is that by using a direct path we can bypass the uvtemp directory (we sometimes get 'cannot open file' errors in that folder). Is this true? Are there any other reasons?
I would really appreciate it if somebody could help clarify this.
Thanks,
Carol
Hello Carol,
It is usually a very good idea to separate your program (i.e. the DataStage project account) from your data storage. It is all too easy to run out of space through larger-than-expected data volumes, forgotten large temporary files, or any number of other causes.
Normally this is not a huge issue, but DataStage is particularly finicky about a disk-full condition while writing within the project (i.e. saving a job, compiling a job, or writing to the log file while running a job). When that happens a project may become corrupt and require some work to get back into a runnable state - just search this forum on that subject.
uvtemp should be explicitly specified and located elsewhere, so that is not directly the issue, but I think your consultant has it right. Just make sure the path is on a different volume, not merely a different directory on the same volume as your projects and/or server.
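For what it's worth, a pathed hashed file can also be created ahead of time from the operating system with the mkdbfile utility shipped in $DSHOME/bin. The path below is invented for illustration, and the numeric arguments shown are the defaults DataStage server jobs are commonly reported to use - treat them as a sketch, not gospel, and tune to your data:

```
# Create a dynamic (type 30) hashed file at an explicit path,
# outside the project account, ideally on a separate data volume.
# Arguments after the path: file type, then modulo/separation and
# dynamic-file tuning values.
$DSHOME/bin/mkdbfile /data/hashfiles/CustomerLookup 30 1 4 20 50 80 1628
```

In the Hashed File stage you would then pick the directory-path option and point it at /data/hashfiles, keeping the data well clear of the project.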
You might find this information helpful regarding hash files:
viewtopic.php?t=85364
Kenneth Bland
Thank you for all the info.
I still have one more question: if I use an external directory path, how do I check the contents of the hashed file using a SQL statement? (I usually execute it through the DataStage Administrator command button.) I find this feature very convenient during job development.
Carol
I think I read somewhere that you can actually use SETFILE to create a VOC record for the parent directory that your hash files live in. This would allow you to treat the directory as an 'account' and so avoid having to add individual VOC records for each hash file in the directory.
Is this true?
Can something like this be done?
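If it works as described, the command would presumably look something like this - the directory path and pointer name here are invented for illustration:

```
SETFILE /data/hashfiles HASHDIR OVERWRITING
```

That should leave a VOC entry named HASHDIR pointing at the parent directory; whether it then behaves like an 'account' is exactly the question.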
-craig
"You can never have too many knives" -- Logan Nine Fingers
Such a pointer can be created, but it does not treat the directory as an account. Rather it treats the directory as a table, the file names within the directory as key values, and the file contents as data records. Not all of which will be readable (hashed files, for example) via queries.
Also, you probably then need to modify the VOC entry to use a dictionary of directories. For example:

```
UPDATE VOC SET F3 = '<<F(VOC,&UFD&,3)>>' WHERE @ID = 'MyPointer';
```

But this is not a solution for Carol's question.
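For Carol's original question, the more direct route is a VOC pointer to each individual pathed hashed file; once that exists, the usual SQL works from the Administrator command window. The path and table name below are hypothetical:

```
SETFILE /data/hashfiles/CustomerLookup CustomerLookup OVERWRITING
SELECT COUNT(*) FROM CustomerLookup;
```

One pointer per hashed file, which is exactly the per-file bookkeeping the directory-as-account idea was trying to avoid.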
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
I remembered where I found the little tidbit I was referring to - here. Curious what anyone thinks of the specifics of this 'tip'.
I haven't tried to implement it, so can't vouch for it... just had it filed away in the back of my brain.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Craig,
I looked at the code, and it makes sense and ought to work. It is a bit of a roundabout way to edit the UV.ACCOUNT record, but it does the job. Note that this is not a valid DataStage account that one can attach to from the client front end, but if you specify this account name in the Hashed File stage it will work.
p.s. I did notice that my UV.ACCOUNT file at 7.5 is read-only.
I have been able to update UV.ACCOUNT on a 7.5 Windows-based server. I was logged into the server under the user ID that installed DataStage.
Chuck Smith
www.anotheritco.com