deleting data set previous delete attempt not complete

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

ajaykumar
Participant
Posts: 49
Joined: Tue Sep 01, 2009 7:56 am

deleting data set previous delete attempt not complete

Post by ajaykumar »

Hi Folks,

This is a strange issue.

The first time we run this job it fails with the error below, but when we run the same job again it executes successfully.

This has happened about three times now: the first run aborts with the message below and the second run completes successfully.

We run this job every day in DEV and do not see this issue there; it only occurs in Prod.

main_program: When deleting data set /IISData/cfp/cfpprd010/Work/Lkup/d_PFTCTR_Lookup_Data_Map.ds, previous delete attempt not complete; removing /IISData/cfp/cfpprd010/Work/Lkup/d_PFTCTR_Lookup_Data_Map.ds.being_deleted in order to proceed.

Any ideas would be appreciated.

Thanks in Advance
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

I've seen this message several times (only on Windows), but it seems to be caused by a previous run having aborted. It is only a warning message and does not prevent the job from running.
ajaykumar
Participant
Posts: 49
Joined: Tue Sep 01, 2009 7:56 am

Post by ajaykumar »

ArndW wrote:I've seen this message several times (only on Windows), but it seems to be caused by a previous run having aborted. It is only a warning message and does not prevent the job from running.
Yes, I think these are warnings, and in our settings the job aborts after 50 warnings. In the log I see this message:

Fatal
main_program: ORCHESTRATE step execution terminating due to SIGINT

Would recompiling the job solve this issue?
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

No, recompilation won't change anything. What is odd is that this warning should only occur once (or do you have more than 50 DataSets in the job?).

If you go to the directory you will see files named "...ds.being_deleted". If no jobs are running, you can manually run "orchadmin rm {file_name.ds.being_deleted}". Again, these files should only be present if something went wrong on the previous run and the cleanup process couldn't complete.
Last edited by ArndW on Mon Aug 02, 2010 8:58 am, edited 1 time in total.
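The manual cleanup described above can be sketched as a small shell loop. This is only an illustration, not from the original post: the directory path, the `cleanup_being_deleted` name, and the dry-run flag are all assumptions, and `orchadmin` must be on the PATH with the DataStage environment sourced for the real deletion to work.

```shell
# Hedged sketch: remove leftover "*.ds.being_deleted" files with orchadmin.
# The function name and dry-run flag are illustrative, not from the thread.
cleanup_being_deleted() {
    dir="$1"
    dry_run="${2:-yes}"    # pass "no" to actually delete

    for f in "$dir"/*.ds.being_deleted; do
        [ -e "$f" ] || continue     # no matches: the glob stays literal, skip
        if [ "$dry_run" = "yes" ]; then
            echo "would run: orchadmin rm $f"
        else
            # orchadmin rm removes the dataset descriptor and its segment files
            orchadmin rm "$f"
        fi
    done
}
```

Run it first in dry-run mode against the dataset directory to review what would be deleted before passing "no".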
ajaykumar
Participant
Posts: 49
Joined: Tue Sep 01, 2009 7:56 am

Post by ajaykumar »

ArndW wrote:I've seen this message several times (only on Windows), but it seems to be caused by a previous run having aborted. It is only a warning message and does not prevent the job from running.

My question is: even if it's only a warning, how do we resolve it? The job should still be able to delete the existing dataset, even if I increase the warning limit.
ajaykumar
Participant
Posts: 49
Joined: Tue Sep 01, 2009 7:56 am

Post by ajaykumar »

ArndW wrote:No, recompilation won't change anything. What is odd is that this warning should only occur once (or do you have more than 50 DataSets in the job?).

If you go to the directory you will see files named "...ds.being_deleted". If no jobs are running, you can manually run "orchadmin rm {file_name.ds.being_deleted}". Again, these files should only be present if something went wrong on the previous run and the cleanup process couldn't complete.
Yes, we have more than 50 datasets in our job, but the second run completes successfully.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

The question here is why aren't the datasets being deleted. You need to look at the log file of the run before the one which shows all the warnings.
Once you have found the cause and cannot avoid it, an option would be to demote the warning to an informational message with a message handler.
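One command-line way to inspect the earlier run's log is the dsjob CLI. A sketch, with the caveat that the project and job names below are placeholders, not names from this thread; the helper only assembles the command string so it can be shown without a running DataStage engine.

```shell
# Hedged sketch: build a dsjob command that summarizes warning events
# from a job's log. "MyProject"/"MyJob" are placeholder names.
build_logsum_cmd() {
    project="$1"
    job="$2"
    # -logsum summarizes log events; -type WARNING filters to warnings
    echo "dsjob -logsum -type WARNING $project $job"
}

# On a real engine you would execute the generated command, e.g.:
#   eval "$(build_logsum_cmd MyProject MyJob)"
```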
ajaykumar
Participant
Posts: 49
Joined: Tue Sep 01, 2009 7:56 am

Post by ajaykumar »

ArndW wrote:The question here is why aren't the datasets being deleted. You need to look at the log file of the run before the one which shows all the warnings.
Once you have found the cause and cannot av ...
Yes, I agree with you. I have the log files for both the successful and the aborted runs. I am going to delete the .ds.being_deleted files using orchadmin and will see how it goes.