
dsob waiting to finish even though -wait not specified

Posted: Sat Jan 06, 2007 8:28 am
by jking123
We are running a multi-instance job with multiple instances and need them to run in parallel, but each instance seems to wait for the previous one to finish before starting.
We are not using the -wait parameter. Has anyone seen this? Here is the command.
Any help would be appreciated.

dsjob -run -param ETL_PROCESS_ID=1028 -param RESET=0 projload LoadTargetType2.customer_auth_1028

Posted: Sat Jan 06, 2007 8:52 am
by chulett
Well... it's certainly not supposed to work that way unless you specify one of the following options:

-wait
-jobstatus
-userstatus


Otherwise, it should start the job and exit. If you are certain you are seeing this behaviour and you are not using one of the above options with your dsjob command, I'd suggest you report it to Support. Let us know what you find out.

PS. The fact that it's a "multi-instance" job shouldn't matter, but can you tell us whether you see this issue on non-"mi" jobs as well? :?
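For reference, the difference described above can be sketched on the command line like this. The job and parameter names are taken from the original post; DSJOB defaults to `echo` here so the sketch runs anywhere, but on a real server it would point at `$DSHOME/bin/dsjob`.

```shell
#!/bin/sh
# Sketch of the two dsjob behaviours. DSJOB defaults to `echo` so the
# commands are merely printed; substitute the real binary on a server.
DSJOB=${DSJOB:-echo}

# Fire-and-forget: dsjob should start the instance and return at once.
"$DSJOB" -run -param ETL_PROCESS_ID=1028 -param RESET=0 \
    projload LoadTargetType2.customer_auth_1028

# Blocking: -jobstatus (or -wait / -userstatus) makes dsjob wait for
# the instance to finish and return its status.
"$DSJOB" -run -jobstatus -param ETL_PROCESS_ID=1028 -param RESET=0 \
    projload LoadTargetType2.customer_auth_1028
```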

Posted: Sat Jan 06, 2007 9:26 pm
by kumar_s
Perhaps you can explain how you run all the jobs. In a loop? In sequential order? If so, how do you move on to the next one? Do you wait for something?

Think we found it (Possibly a logging issue)

Posted: Sun Jan 07, 2007 6:21 am
by jking123
1. We are executing in a loop for about 100 instances.
2. This has been running non-stop for the last two months, and we recently noticed that the run time had increased from about 5 minutes for all 100 instances to about an hour. Looking deeper, we found they were being run sequentially.
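A launch loop like the one described might look like the sketch below: each instance is started in the background so they run in parallel, and the script waits for all of them at the end. The ID range is illustrative, and DSJOB again defaults to `echo` so the sketch is runnable without a DataStage install.

```shell
#!/bin/sh
# Sketch: launch every instance in the background (parallel), then
# wait for all of them to exit. DSJOB and the ID range are placeholders.
DSJOB=${DSJOB:-echo}
started=0
for id in $(seq 1001 1100); do
    "$DSJOB" -run -param ETL_PROCESS_ID="$id" -param RESET=0 \
        projload "LoadTargetType2.customer_auth_${id}" &
    started=$((started + 1))
done
wait    # block until every background instance has finished
echo "launched $started instances"
```

Because no waiting option is passed to dsjob, each background invocation should return as soon as its instance starts, which is what makes the parallel fan-out work.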

Yesterday we stopped everything and deleted all the logs (even though we have auto-purge set up). Now they are running in parallel again, and the run time is back down to 5 minutes.

Has anyone else seen this?
Related question: does anyone know of a dsjob or any other command to delete logs? We don't want to rely on auto-purge.

Posted: Sun Jan 07, 2007 4:00 pm
by DSguru2B
The auto-purge option works without fault. Someone must have cleared one of the logs using CLEAR.FILE, which disables the auto-purge option. If you want a routine to purge the logs, you can create one. It's simple: get the job IDs and pass them one by one to the TCL command CLEAR.FILE RT_LOGnnn, where nnn is the job ID. But I wouldn't recommend it.
Also make sure you clean up the &PH& folder from time to time.
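Bearing in mind the caution above, a minimal sketch of that routine might look like the following. DSSH defaults to `echo` so the commands are only printed; on a real server it would be the project's TCL shell run from the project directory. The job name, the job number, the DS_JOBS lookup, and the &PH& path are all assumptions, not tested syntax.

```shell
#!/bin/sh
# Hypothetical log-purge sketch. DSSH defaults to `echo` so nothing is
# actually cleared; point it at the real TCL shell on a server.
DSSH=${DSSH:-echo}

# 1. Look up the job's number (nnn) in the DS_JOBS file
#    (assumed query; job name is a placeholder).
lookup="SELECT JOBNO FROM DS_JOBS WHERE NAME = 'LoadTargetType2';"
"$DSSH" "$lookup"

# 2. Clear that job's log with the TCL command named in the post,
#    substituting the number returned above (123 is a stand-in).
"$DSSH" "CLEAR.FILE RT_LOG123"

# 3. Housekeeping for the project's &PH& folder (path is a placeholder;
#    left commented out so the sketch deletes nothing by default):
# find /path/to/project/'&PH&' -type f -mtime +7 -exec rm -f {} +
```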

Posted: Sun Jan 07, 2007 4:37 pm
by kumar_s
Or add a script after the loop, which compiles the main job.

Posted: Sun Jan 07, 2007 4:59 pm
by jking123
Is there a new option to compile from the command line, or any other way I can do it in a script? That would be a good option. Yup, agree that someone may have done a clear status or something.
I can always ask the admins to compile the whole project once a week or so, but that looks like a bit of a hack.

Posted: Sun Jan 07, 2007 5:47 pm
by kumar_s
There is a command-line compile option, dscc, but it is only available on the client:
C:\Program Files\Ascential\DataStagexxx>dscc /h <hostname> /u <userid> /p <password> <Project> /j * [/ouc]

Posted: Mon Jan 08, 2007 6:43 am
by jking123
Yup, dscc on the client is not going to help.
Also, I don't think compiling actually deletes the logs.
So it is back to deleting the logs, or finding out the scenarios in which auto-purge stops.

Posted: Mon Jan 08, 2007 7:47 am
by chulett
The only 'help' compiling would bring would be to remove the 'multi-instance' entries from the Director's Status view. As you noted, however, the log entries would remain.

The only times I'm aware of that Auto Purge doesn't work as expected are 1) when someone has done a CLEAR.FILE on the log, or 2) when the job aborts. For #1, that's easy to tell if you know the setting was there previously, because when you go to check the logs it is no longer set. And #2 is actually normal behaviour. Other than that, it should work fine.

The only other time might be if the type of purge or the limit is too high for the nature of the job runs. For example, you run a multi-instance job with 100 instances a day and then tell it to purge after 14 days. Given that there's only one log they all share, that could be a lot of messages accumulating before any purging takes place.

Other than that, however... [shrug] Unless you've corrupted the hashed file, it should all be working. I'd suggest opening a Support case if you think you've got something broken in some other fashion.

I believe that people have posted snippets of Job Control code that can work with logs. Ken Bland, for one, has utilities available for download from his website, ones that clear logs and set the Auto Purge setting across groups of jobs. Not sure it's my place to post them here, so find any post of his and visit his site - there's no charge for them, last I heard. Kim Duke may have some as well, or even Ray. Check 'em out! :wink:

Posted: Mon Jan 08, 2007 10:40 pm
by jking123
Thanks Craig. Actually, that helps a lot. We do have a failure rate of about 1 in 1,000 due to resource contention on the DB; IBM is investigating that issue. That is probably what causes auto-purge to reset.
I will look for the utilities.