When deleting data set - previous delete attempt not complete

djbarham
Participant
Posts: 34
Joined: Wed May 07, 2003 4:39 pm
Location: Brisbane, Australia

When deleting data set - previous delete attempt not complete

Post by djbarham »

Yes, there are other threads on this; I have read most, if not all, of them and didn't really come up with a conclusive answer. I am experiencing a different pattern from those other threads.

Job A creates about 10 data sets that job B uses for further processing. A sequence has a loop that runs A and B multiple times, depending on the nature of the source data. Job A failed last night, probably due to resource issues (server overloaded / lack of memory).

So, this morning, I reset job A and reran the sequence. First time through the loop, Job A works fine and Job B reads and processes the data in the data sets. SECOND time through the loop, Job A returns warnings like the following:

main_program: When deleting data set /dstage/RACQi/Development/Dataset/CTPLoad/PC_PolicyPeriod, previous delete attempt not complete; removing /dstage/RACQi/Development/Dataset/CTPLoad/PC_PolicyPeriod.being_deleted in order to proceed.
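
Incidentally, a quick way to check whether any of these stale markers are still hanging around is to look for .being_deleted entries under the dataset directory. A minimal shell sketch, using the path from the warning above (adjust it for your project):

    # List any leftover delete markers next to the dataset descriptors
    ls -l /dstage/RACQi/Development/Dataset/CTPLoad/*.being_deleted 2>/dev/null

    # Or sweep the whole dataset tree for stale markers
    find /dstage/RACQi/Development/Dataset -name '*.being_deleted' -print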

What does not make sense to me is why the first run after the failure and reset would be fine, but the second run has the problem.

This is starting to get frustrating (it happens too often), and I am thinking the simple solution is to use a message handler to demote these warnings to informational.

Job A has the Update Policy set to "Overwrite" for all the datasets it creates. Would it be better if I changed this to "Use Existing (Discard schema & records)"?

Any thoughts as to why this only happens on the second run of the job, not the first?
Mike
Premium Member
Posts: 1021
Joined: Sun Mar 03, 2002 6:01 pm
Location: Tampa, FL

Post by Mike »

I've seen this exact scenario triggered by the /tmp directory running out of space while jobs are in progress (they of course aborted and left a mess).

To prevent it from recurring, make sure your /tmp does not fill up.
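
If you want to catch that before the nightly run rather than after, a cheap guard in whatever wrapper script kicks off the sequence will do it. A rough sketch (the 90% threshold is just an assumed example value; tune it for your server):

    # Abort early if /tmp is nearly full (90% is an assumed threshold)
    PCT_USED=$(df -P /tmp | awk 'NR==2 {gsub(/%/, ""); print $5}')
    if [ "$PCT_USED" -ge 90 ]; then
        echo "/tmp is ${PCT_USED}% full - clean up before running the sequence" >&2
        exit 1
    fi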

To get back to a normal state, you'll have some manual cleanup of partially deleted data sets to do. You will probably find orphaned data files in your resource disk directories.
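
For that cleanup, orchadmin (it ships with the PX engine under $APT_ORCHHOME/bin, and needs APT_CONFIG_FILE set as usual) is safer than deleting files by hand, because it removes the descriptor and the data files it points to together. Something along these lines, using the descriptor path from the warning above:

    # Show which data files the descriptor points to before removing anything
    orchadmin ll /dstage/RACQi/Development/Dataset/CTPLoad/PC_PolicyPeriod

    # Remove the dataset cleanly: descriptor plus data files together,
    # unlike a plain "rm" on the descriptor, which orphans the data files
    orchadmin rm /dstage/RACQi/Development/Dataset/CTPLoad/PC_PolicyPeriod

Anything still sitting in the resource disk directories after that is a candidate orphan.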

Mike