Let me see if I understand what you're doing. You have a Hashed File stage with an input link and an output file link.
On that basis let's investigate further.
A passive stage (for example Sequential File, Hashed File) with an output link in the job serves to READ from that file or table. Therefore the file or table must already exist.
A passive stage with an input link in the job serves to WRITE to that file or table, which therefore need not exist, and is why there is an option on the input link to create same.
You can force it to create a different file name each run by using a job parameter for the file name. There is a "delete file first" check box in the create file dialog. You would presumably want to use the same parameter on the output link, so you'd be reading from the same hashed file you've just written to.
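As a sketch of the "job parameter per run" idea, you could generate a unique name in a wrapper script and pass it to the job with the `dsjob` command-line interface. The parameter name `HashedFile` and the project/job names here are assumptions for illustration only:

```shell
#!/bin/sh
# Build a run-specific hashed file name: timestamp plus the shell's PID
# keeps concurrent runs from colliding.
HASHED_FILE="TempHash_$(date +%Y%m%d_%H%M%S)_$$"
echo "Using hashed file name: $HASHED_FILE"

# Pass the same value to both the input and output link via one parameter
# (hypothetical project and job names; adjust to your environment):
# dsjob -run -param HashedFile="$HASHED_FILE" MyProject MyJob
```

Because both links reference `#HashedFile#`, the job writes to and reads from the same run-specific file.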
Of course, your job design then has to clean up (delete) the hashed file once the job run is done. This could be effected in an after-stage or after-job subroutine. The actual subroutine and command used will depend on whether you created the hashed file in an account or directory.
- If you created the hashed file in an account, use ExecTCL and the command DELETE.FILE #HashedFile#.
- If you created the hashed file in a directory, use ExecSH and the command rm #HashedFile# ; rm D_#HashedFile#.
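For the directory case, the ExecSH cleanup might look like the sketch below. Note one assumption worth checking in your environment: a dynamic (Type 30) hashed file is itself a directory (containing DATA.30 and OVER.30), so `rm -rf` is the safer form there, while a static hashed file is a single file and plain `rm` suffices. `D_<name>` is the dictionary portion:

```shell
#!/bin/sh
# Sketch of an after-job ExecSH cleanup. The hashed file name would come
# from the job parameter (#HashedFile#); here it is taken as $1 for testing.
HASHED_FILE="$1"

# -rf handles both the static case (plain file) and the dynamic case
# (a directory holding DATA.30 and OVER.30).
rm -rf "$HASHED_FILE"

# Remove the dictionary file as well.
rm -f "D_$HASHED_FILE"
```

In the ExecSH input you would write the equivalent one-liner, e.g. `rm -rf #HashedFile# ; rm -f D_#HashedFile#`.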