Hi folks,
This is a strange issue. The first time we run this job, it aborts with the error below, but when we run the same job again it executes successfully. This has happened about three times now: the first run aborts with the message below, and the second run succeeds. We also run this job every day in DEV, but the issue occurs only in Prod, never in DEV.
main_program: When deleting data set /IISData/cfp/cfpprd010/Work/Lkup/d_PFTCTR_Lookup_Data_Map.ds, previous delete attempt not complete; removing /IISData/cfp/cfpprd010/Work/Lkup/d_PFTCTR_Lookup_Data_Map.ds.being_deleted in order to proceed.
Any ideas would be appreciated. Thanks in advance.
deleting data set previous delete attempt not complete
I've seen this message (only on Windows) several times, but it seems to be caused by a previous run having aborted. It is only a warning message and does not prevent the job from running.
ArndW wrote:I've seen this message (only on Windows) several times, but it seems to be caused by a previous run having aborted and is only a warning message and does not prevent the job from running.
Yes, I think these are warnings, and with our settings the job aborts after 50 warnings. In the log I see this message:
Fatal
main_program: ORCHESTRATE step execution terminating due to SIGINT
Would recompiling the job solve this issue?
No, recompilation won't change anything. What is odd is that this warning should only occur once (or do you have more than 50 datasets in the job?).
If you go to the directory you will see files named "...ds.being_deleted". If no jobs are running, you can manually do an "orchadmin rm {file_name.ds.being_deleted}". Again, these files should only be present if something went wrong on the previous run and the cleanup process could not complete.
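As a hedged sketch (the example path is copied from the error message in the first post; adjust it for your environment), the leftover markers can be listed as a dry run before actually executing any orchadmin commands:

```shell
# Dry-run helper: print, but do NOT execute, an "orchadmin rm" command for
# every leftover "*.ds.being_deleted" marker in the given directory.
# Pass whatever Work path your job uses as the argument.
list_cleanup_cmds() {
    find "$1" -maxdepth 1 -name '*.ds.being_deleted' 2>/dev/null |
    while read -r f; do
        echo "orchadmin rm \"$f\""
    done
}

# Example (path from the original post, shown only as an illustration):
# list_cleanup_cmds /IISData/cfp/cfpprd010/Work/Lkup
```

Reviewing the printed commands before pasting them back into the shell avoids removing a marker that a still-running job might own.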
Last edited by ArndW on Mon Aug 02, 2010 8:58 am, edited 1 time in total.
ArndW wrote:I've seen this message (only on windows) several times, but it seems to be caused by a previous run having aborted and is only a warning message and does not prevent the job from running.
My question is: even if it is only a warning, how do we resolve it? The job should be able to delete the existing dataset, and simply increasing the warning limit does not address the cause.
ArndW wrote:No, recompilation won't change anything. What is odd is that this warning should only occur once (or do you have more than 50 DataSets in the job?).
If you go to the directory you will see files named "...ds.being_deleted". If no jobs are running, you can manually do an "orchadmin rm {file_name.ds.being_deleted}". Again, these files should only be present if something went wrong on the previous run and it couldn't complete the cleanup process.
Yes, we have more than 50 datasets in this job, but the second run completes successfully.
The question here is why the datasets aren't being deleted. You need to look at the log file of the run before the one which shows all the warnings.
Once you have found the cause, if you cannot avoid it, an option would be to demote the warning to an informational message with a message handler.
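As a quick way to start on that earlier log (a generic shell sketch; the log file name is a placeholder, assuming you have exported the run's log from Director to a text file first):

```shell
# Print the first line containing "Fatal" in an exported job log, with its
# line number, to see which message appeared before the cleanup failed.
first_fatal() {
    grep -n -m 1 'Fatal' "$1" || echo "No Fatal entries found in $1"
}

# Example (file name is a placeholder):
# first_fatal aborted_run.log
```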
ArndW wrote:The question here is why aren't the datasets being deleted. You need to look at the log file of the run before the one which shows all the warnings.
Once you have found the cause and cannot avoid it, then an option would be to deprecate the warning to an informational message in the job log handler.
Yes, I agree with you. I have the log files of both the successful and the aborted runs. I am going to delete the ds.being_deleted files using orchadmin and will see what happens.