dsjob waiting to finish even though -wait not specified

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

jking123
Premium Member
Posts: 29
Joined: Tue Mar 23, 2004 9:18 pm

dsjob waiting to finish even though -wait not specified

Post by jking123 »

We are running a multi-instance job with multiple instances and we need to run them in parallel, but each instance seems to wait for the previous one to finish before it starts.
We are not using the -wait parameter. Has anyone seen this? Here is the command.
Any help would be appreciated.

dsjob -run -param ETL_PROCESS_ID=1028 -param RESET=0 projload LoadTargetType2.customer_auth_1028
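For reference, a minimal shell sketch of how a launcher loop for these instances might look. The job and parameter names come from the command above; the instance range and the backgrounding `&` are illustrative assumptions, not a tested setup:

```shell
# Sketch only: build the dsjob command line for one instance
# (job and parameter names are from the post above).
build_cmd() {
  id="$1"
  echo "dsjob -run -param ETL_PROCESS_ID=$id -param RESET=0 projload LoadTargetType2.customer_auth_$id"
}

# Illustrative loop (commented out -- requires a DataStage server):
# for id in 1028 1029 1030; do
#   eval "$(build_cmd "$id")" &
# done
# wait   # only here does the shell itself block, not dsjob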
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Well... it's certainly not supposed to work that way unless you specify one of the following options:

-wait
-jobstatus
-userstatus


Otherwise, it should start the job and exit. If you are certain you are seeing this behaviour and you are not using one of the above options with your dsjob command, I'd suggest you report that to Support. Let us know what you find out.
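To make the flag list above concrete, here is a small hypothetical helper a wrapper script could use to check whether a dsjob invocation accidentally carries one of the blocking options (the flag names are from the post; the helper itself is an assumption, not part of dsjob):

```shell
# Returns success (0) if any flag that makes dsjob wait for the job
# to finish is present in the argument list, failure (1) otherwise.
has_blocking_flag() {
  case " $* " in
    *" -wait "*|*" -jobstatus "*|*" -userstatus "*) return 0 ;;
    *) return 1 ;;
  esac
}
```

A launcher script could call this before starting each instance to confirm nothing will serialize the runs.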

PS. The fact that it's a "multi-instance" job shouldn't matter, but can you tell us if you see this issue on non-"mi" jobs as well? :?
-craig

"You can never have too many knives" -- Logan Nine Fingers
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

Perhaps you can explain how you run all the jobs. In a loop? In sequential order? If so, how do you move on to the next one? Do you wait for something?
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
jking123
Premium Member
Posts: 29
Joined: Tue Mar 23, 2004 9:18 pm

Think we found it (Possibly a logging issue)

Post by jking123 »

1. We are executing in a loop for about 100 instances.
2. Now this has been running non-stop for the last 2 months, and we recently noticed that the timings had increased from about 5 minutes for all 100 instances to about an hour. Looking deeper, we found they were being run sequentially.

Yesterday we stopped everything and deleted all the logs (even though we have auto purge set up). Now they are running in parallel again and are back down to 5 minutes.

Has anyone else seen this?
Related question: does anyone know of a dsjob option or any other command to delete logs? We don't want to rely on auto purge.
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

The auto purge option works without fault. Someone must have cleared one of the logs using CLEAR.FILE, which disables the auto purge option. If you want a routine to purge the logs, you can create one. It's simple: all you need is to get the job IDs and pass them one by one to the TCL command CLEAR.FILE RT_LOGnnn, where nnn is the job ID. But I wouldn't recommend it.
Also make sure you clean up the &PH& folder from time to time.
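A hedged sketch of the routine described above. The CLEAR.FILE RT_LOGnnn command comes from the post; the dssh engine-shell path and the job numbers are assumptions to verify against your install, and, as cautioned above, clearing a log this way disables auto purge on it:

```shell
# Build the TCL command for one job's log file; the argument is the
# job ID (the nnn in RT_LOGnnn).
clear_log_cmd() {
  echo "CLEAR.FILE RT_LOG$1"
}

# Illustrative only (needs a DataStage engine; path is an assumption):
# for n in 123 124 125; do
#   clear_log_cmd "$n" | "$DSHOME/bin/dssh"
# done
```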
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

Or add a script after the loop that compiles the main job.
jking123
Premium Member
Posts: 29
Joined: Tue Mar 23, 2004 9:18 pm

Post by jking123 »

Is there a new option to compile from the command line, or any other way I can do it in the script? That would be a good option. Yup, I agree someone may have done a clear status or something.
I can always ask the admins to compile the whole project once a week or so, but that looks like a bit of a hack.
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

There is a command-line compile option, dscc, but it is only available on the client.
C:\Program Files\Ascential\DataStagexxx>dscc /h <hostname> /u <userid> /p <password> <Project> /j * [/ouc]
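Spelled out with placeholder values (hostname, credentials, and project below are made up; run it from the DataStage client directory, and check the /j and /ouc switches against your client version's dscc help):

```shell
# Assemble the client-side compile command using kumar_s's syntax;
# every value passed in here is a placeholder, not a real server.
build_dscc_cmd() {
  echo "dscc /h $1 /u $2 /p $3 $4 /j * /ouc"
}

build_dscc_cmd etlhost dsadm secret projload
```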
jking123
Premium Member
Posts: 29
Joined: Tue Mar 23, 2004 9:18 pm

Post by jking123 »

Yup, dscc on the client is not going to help.
Also, I don't think compiling actually deletes the logs.
So it's back to deleting logs or finding out the scenarios in which auto-purge stops.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

The only 'help' compiling would bring would be to remove the 'multi-instance' entries from the Director's Status view. As you noted, however, the log entries would remain.

The only times I'm aware of that Auto Purge doesn't work as expected are 1) when someone has done a CLEAR.FILE on the log, or 2) when the job aborts. For #1, that's easy to tell if you know it was set previously, because when you go to check the logs it is no longer set. And #2 is actually normal behaviour. Other than that, it should work fine.

The only other time might be if the type of purge or the limit is too high for the nature of the job runs. For example, you run a multi-instance something with 100 instances a day and then tell it to purge after 14 days. Understanding that there's only one log that they all share, that could be a lot of messages accumulating before any purging takes place.

Other than that, however... [shrug] Unless you've corrupted the hashed file then it should all be working. I'd suggest opening a Support case if you think you've got something broken in some other fashion.

I believe people have posted snippets of Job Control code that can work with logs. Ken Bland, for one, has utilities available for download from his website that clear logs and set the Auto Purge setting across groups of jobs. Not sure it's my place to post them here, so find any post of his and visit his site - there's no charge for them, last I heard. Kim Duke may have some as well, or even Ray. Check 'em out! :wink:
-craig
jking123
Premium Member
Posts: 29
Joined: Tue Mar 23, 2004 9:18 pm

Post by jking123 »

Thanks craig. Actually that helps a lot. We do have a failure rate of about 1 in 1,000 due to DB resource contention; IBM is investigating that issue. That is probably what is causing auto-purge to reset.
I will look for the utility.