Unable to write to a sequential file
Posted: Wed Feb 22, 2012 8:22 am
Hello everyone,
I tried searching this forum for a post similar to the problem I have, but did not have much luck. Please forgive me if I have missed anything!
I have a job that reads data from a sequential file, performs some lookups and transformations, and writes the results to another sequential file. An ExecSH command then strips nulls from the generated output file (see below):
tr -d '\000' < /myprod_targetdirectory/xfilename.txt > /myprod_targetdirectory/filename.txt | rm /myprod_targetdirectory/xfilename.txt
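For context, a pipe (`|`) starts both commands at the same time and connects `tr`'s stdout to `rm`'s stdin; since `tr`'s stdout is already redirected to a file, the pipe carries nothing, and `rm` can unlink the input file before `tr` has opened it, which would intermittently produce a "No such file or directory" error. A minimal sketch of the sequential alternative, using hypothetical paths under /tmp in place of /myprod_targetdirectory:

```shell
# Create a sample file containing NUL bytes (hypothetical stand-in for
# /myprod_targetdirectory/xfilename.txt).
printf 'abc\000def\000\n' > /tmp/xfilename.txt

# Strip the NULs, then delete the intermediate file ONLY if tr succeeded.
# '&&' waits for tr to finish and checks its exit status; the original '|'
# starts rm concurrently with tr, so rm can race ahead of tr's open().
tr -d '\000' < /tmp/xfilename.txt > /tmp/filename.txt \
  && rm /tmp/xfilename.txt
```

With `&&`, the cleanup is also skipped entirely when `tr` fails, so the input file is preserved for debugging instead of being deleted unconditionally.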
The first two times the job ran in production, the file was not created and I got the following message. The third time it ran (i.e. today), the file was created.
"Executed command: tr -d '\000' < /myprod_targetdirectory/xfilename.txt > /myprod_targetdirectory/filename.txt | rm /myprod_targetdirectory/xfilename.txt
*** Output from command was: ***
SH: /myprod_targetdirectory/xfilename.txt: No such file or directory"
The job completed successfully without any error; the message above is NOT a warning. Two other jobs execute the same kind of ExecSH command, each for a different file but in the same target directory, so I have ruled out a permission issue, though I may be wrong.
I can work around the issue by having the job check whether the file was created and re-running it until it is, since the file was created on the third run. But I need to understand why the file was not created the first two times.
Dear experts, I humbly request your help in finding out why the job did not create the file, and what should be done when something like this happens. Should I remove the ExecSH command and handle the nulls in my job instead, to see whether the rm command had anything to do with it? Or should I just check whether the job created the file and re-run it?
I appreciate your help in this!
Thanks,
Hiral