deleting data set previous delete attempt not complete

Posted: Mon Aug 02, 2010 8:32 am
by ajaykumar
Hi Folks,

This is a strange issue

The first time we run this job, it aborts with the error below. But when we run the same job again, it executes successfully.

This has happened the same way about three times: the first run aborts with the message below, and the second run completes successfully.

We run this job every day in DEV as well, but the issue occurs only in Prod, never in DEV.

main_program: When deleting data set /IISData/cfp/cfpprd010/Work/Lkup/d_PFTCTR_Lookup_Data_Map.ds, previous delete attempt not complete; removing /IISData/cfp/cfpprd010/Work/Lkup/d_PFTCTR_Lookup_Data_Map.ds.being_deleted in order to proceed.

Any ideas will be appreciated

Thanks in Advance

Posted: Mon Aug 02, 2010 8:36 am
by ArndW
I've seen this message several times (only on Windows); it seems to be caused by a previous run having aborted. It is only a warning message and does not prevent the job from running.

Posted: Mon Aug 02, 2010 8:40 am
by ajaykumar
ArndW wrote: I've seen this message several times (only on Windows); it seems to be caused by a previous run having aborted. It is only a warning message and does not prevent the job from running.
Yes, I think these are warnings, and with our settings the job aborts after 50 warnings. In the log I see this message:

Fatal
main_program: ORCHESTRATE step execution terminating due to SIGINT

Would recompiling the job solve this issue?

Posted: Mon Aug 02, 2010 8:56 am
by ArndW
No, recompilation won't change anything. What is odd is that this warning should only occur once (or do you have more than 50 DataSets in the job?).

If you go to the directory you will see files named "...ds.being_deleted"; if no jobs are running, you can manually do an "orchadmin rm {file_name.ds.being_deleted}". Again, these files should only be present if something went wrong on the previous run and the cleanup process couldn't complete.
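A minimal sketch of that cleanup, assuming a POSIX shell on the engine host and that orchadmin is on the PATH; the directory in the comment is just the one from the error message above, so substitute your own dataset directory:

```shell
#!/bin/sh
# Print any leftover ".ds.being_deleted" descriptor files in a dataset
# directory. These should only exist when a previous run died before
# its cleanup finished.
list_leftovers() {
    for f in "$1"/*.ds.being_deleted; do
        # An unmatched glob stays literal, so confirm the file exists.
        [ -e "$f" ] && printf '%s\n' "$f"
    done
    return 0
}

# Review the list first, then (only while no jobs are running) remove
# each descriptor with orchadmin, for example:
#
#   list_leftovers /IISData/cfp/cfpprd010/Work/Lkup |
#   while read -r f; do
#       orchadmin rm "$f"
#   done
```

Using orchadmin rather than a plain rm matters because a dataset descriptor points at segment files spread across the nodes in the configuration file; orchadmin removes those too.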

Posted: Mon Aug 02, 2010 8:58 am
by ajaykumar
ArndW wrote: I've seen this message several times (only on Windows); it seems to be caused by a previous run having aborted. It is only a warning message and does not prevent the job from running.

My question here: even if it is only a warning, how do we resolve it? The job should be able to delete the existing dataset, even if I increase the warning limit.

Posted: Mon Aug 02, 2010 9:01 am
by ajaykumar
ArndW wrote:No, recompilation won't change anything. What is odd is that this warning should only occur once (or do you have more than 50 DataSets in the job?).

If you go to the directory you will see files named "...ds.being_deleted"; if no jobs are running, you can manually do an "orchadmin rm {file_name.ds.being_deleted}". Again, these files should only be present if something went wrong on the previous run and the cleanup process couldn't complete.
Yes, we have more than 50 datasets in our job, but on the second run it completes successfully.

Posted: Mon Aug 02, 2010 9:14 am
by ArndW
The question here is why the datasets aren't being deleted. You need to look at the log file of the run before the one that shows all the warnings.
Once you have found the cause and cannot avoid it, an option would be to demote the warning to an informational message via a message handler in the job log.

Posted: Mon Aug 02, 2010 9:38 am
by ajaykumar
ArndW wrote: The question here is why the datasets aren't being deleted. You need to look at the log file of the run before the one that shows all the warnings.
Once you have found the cause and cannot av ...
Yes, I agree with you. I have the log files for both the successful and the aborted runs. I am going to delete the .ds.being_deleted files using orchadmin and will see how it goes.