
Auto Clearing Status File With Jobs

Posted: Wed Jan 25, 2006 2:26 am
by Nick_6789
Greetings all!

I was wondering if it's possible to create a job that clears the status files of other jobs each time before those jobs run. Is this possible?

Thanks in advance.

Posted: Wed Jan 25, 2006 2:57 am
by ArndW
Nick,

DataStage does not include a programmatic call to clear the log file for a job; it can be done explicitly (and the methods have been posted here) if necessary, but it is preferable to use the purge settings to clear aged log entries automatically, either by age or by number of previous runs.


Posted: Wed Jan 25, 2006 3:26 am
by Nick_6789
ArndW wrote:Nick,

DataStage does not include a programmatic call to clear the log file for a job; it can be done explicitly (and the methods have been posted here) if necessary, but it is preferable to use the purge settings to clear aged log entries automatically, either by age or by number of previous runs.
Yups, I noticed the auto-purge setting for the log. Still looking for the post where this can be done explicitly...

What about clearing status files and resources? I don't see an option to auto-clear them.

Thanks.

Auto Clearing Status File With Jobs

Posted: Wed Jan 25, 2006 3:30 am
by Nick_6789
ArndW wrote:Nick,

DataStage does not include a programmatic call to clear the log file for a job; it can be done explicitly (and the methods have been posted here) if necessary, but it is preferable to use the purge settings to clear aged log entries automatically, either by age or by number of previous runs.
Hi further adding on...

I am curious whether this can be done... because in the event my job fails... or hangs... it would manage itself and clear the logs, clean up available resources, as well as clear status files.

Or... as a prerequisite before running jobs, I would execute something explicitly to do those things...

I have encountered situations where my jobs hang too often.

Thanks.

Posted: Wed Jan 25, 2006 3:56 am
by ArndW
Manually clearing the STATUS file of any job is not a good idea; it can cause a lot of problems.

If you start your jobs from sequences, you can set the Job Activity to reset the job if required prior to a run. The same result can be achieved if you start your jobs via the UNIX dsjob command, in which case you should use the dsjob -run -mode RESET syntax.
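
For anyone wanting to script this, a minimal sketch of the reset-then-run approach follows; the project and job names are placeholders (substitute your own), and it assumes $DSHOME points at your DSEngine directory:

#!/bin/sh
# Sketch only: reset a job before running it with dsjob.
# PROJECT and JOB are hypothetical names; substitute your own.
PROJECT=MyProject
JOB=JOB_STG_Staff_Contact_C

# Reset the job first (clears the aborted/stopped state from the
# previous run; no manual log or status file clearing needed).
$DSHOME/bin/dsjob -run -mode RESET -wait $PROJECT $JOB

# Then run it normally and wait for completion.
$DSHOME/bin/dsjob -run -mode NORMAL -wait $PROJECT $JOB

# Report the finishing status.
$DSHOME/bin/dsjob -jobinfo $PROJECT $JOB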

The contents of the log file won't affect a job's execution (although a huge log file might slow a job down, it won't cause it to abort), so you don't really need to go to the effort of clearing it manually; the purge settings you decide on should suffice.

Posted: Wed Jan 25, 2006 7:53 am
by kumar_s
Hi,

To avoid deadlocks, you can alter the deadlock daemon settings in $DSHOME/dsdlockd.config
as

start=1
timer=900
res=0
log=

The value of timer can be altered according to your needs; it is the frequency (in seconds) at which the daemon checks for and clears locks.
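
If it helps, a sketch of making that change from the shell; the settings are exactly as above, but the engine restart is an assumption about how the daemon picks up new values, so check your release before relying on it (and note the restart stops all running jobs):

#!/bin/sh
# Sketch: enable the deadlock daemon with a 900-second sweep.
cd $DSHOME
cp dsdlockd.config dsdlockd.config.bak   # keep a backup

cat > dsdlockd.config <<'EOF'
start=1
timer=900
res=0
log=
EOF

# Assumption: the daemon reads this config at engine startup, so
# restart the engine in a quiet window (this stops running jobs).
bin/uv -admin -stop
bin/uv -admin -start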

-Kumar

Posted: Wed Jan 25, 2006 4:23 pm
by ray.wurlod
Automatically clearing the status file is a Bad Idea.

Forget you ever had the idea.

Even manually clearing a status file is considered an operation of last resort. The text in the "are you sure?" message box makes this point. The fact that a second, "are you REALLY sure?" message box appears should really drive the point home.

Don't do it.

Auto Clearing Status File With Jobs

Posted: Thu Jan 26, 2006 7:49 pm
by Nick_6789
kumar_s wrote:Hi,

To avoid deadlocks, you can alter the deadlock daemon settings in $DSHOME/dsdlockd.config
as

start=1
timer=900
res=0
log=

The value of timer can be altered according to your needs; it is the frequency (in seconds) at which the daemon checks for and clears locks.

-Kumar
Hi all, thanks for the advice.

The reason why I brought this up is that I am constantly facing the error below (the job aborts):

-------------------------------------------------------------------------------------

JOB_STG_Staff_Contact_C..ShrContSTGcurrentM.Trans_01.Lnk_SRC_DB_ESD_lb_staff_cntct: DSD.BCIGetNext call to SQLFetch failed.
SQL statement:SELECT S.trnsctn_fl, S.staff_cntct_fl, TRIM(staff_cntct_nm)||TRIM(NVL(srv_area_cd,''))||TRIM(NVL(fclty_id,''))||TRIM(NVL(dept_type_cd,'')), S.image_cd, L.command_type, L.chnge_ts FROM lb_staff_cntct S, lb_trnsctn L WHERE S.trnsctn_fl= L.trnsctn_fl AND (L.chnge_ts > "2006-01-25 08:18:42.000" AND L.chnge_ts <= "2006-01-26 08:00:11.000")
SQLSTATE=S1000, DBMS.CODE=-245
[DataStage][SQL Client][ODBC][DataDirect][ODBC Informix driver][Informix]Could not position within a file via an index.

SQLSTATE=S1000, DBMS.CODE=-144
[DataStage][SQL Client][ODBC][DataDirect][ODBC Informix driver][Informix]ISAM error: key value locked


-------------------------------------------------------------------------------------

Is this to do with deadlocks as mentioned by Mr. Kumar S?

Whenever I encounter this error, I clear the log files and status files and release all resources via Director before re-running my job.

This error occurs while I am reading values from the source database, presumably with no one on the other side of the world running transactions into it. We have timed the jobs so as not to clash with anyone keying information into that database, so I really doubt it's caused by someone accessing the database at that time.

If it is indeed deadlocks... I guess I will try out Mr. Kumar's solution. Thanks, guys. =)

Re: Auto Clearing Status File With Jobs

Posted: Thu Jan 26, 2006 8:04 pm
by Nick_6789
Hi, in addition,

In order to overcome the problem I am having...

Is it recommended that I do a read-uncommitted (dirty) read from my source database?

What are the cons?

Posted: Thu Jan 26, 2006 9:22 pm
by chulett
You and Kumar are a little... out of sync. The 'deadlocks' Kumar mentions are within DataStage and are totally unrelated to the issue you are facing in your Informix database.

Without intimate knowledge of your job design, it would be hard to say exactly what is going on. You are either running into other activity during the time your job runs or your job is stepping on its own toes. Do you update multiple tables in this job? Do you update the same table via multiple links in this job? Do you use more than one target ODBC stage for all this? Or are you updating the same table you are sourcing from? Perhaps you could take a stab at posting a picture of the job design so we can see it.

Why all the shenanigans when the job aborts? Simply reset and rerun. You only need to clear the log if the size is getting excessively large and it affects the run time of the job. There's no reason at all to be clearing the status file in this situation, nor for you to be releasing 'all resources via Director'. :?

Work on resolving your deadlock issue, that's the root of all evil here. Don't worry about all this 'Auto Clearing Status File with Jobs' stuff.

Auto Clearing Status File With Jobs

Posted: Thu Jan 26, 2006 9:55 pm
by Nick_6789
chulett wrote:You and Kumar are a little... out of sync. The 'deadlocks' Kumar mentions are within DataStage and are totally unrelated to the issue you are facing in your Informix database.

Without intimate knowledge of your job design, it would be hard to say exactly what is going on. You are either running into other activity during the time your job runs or your job is stepping on its own toes. Do you update multiple tables in this job? Do you update the same table via multiple links in this job? Do you use more than one target ODBC stage for all this? Or are you updating the same table you are sourcing from? Perhaps you could take a stab at posting a picture of the job design so we can see it.

Why all the shenanigans when the job aborts? Simply reset and rerun. You only need to clear the log if the size is getting excessively large and it affects the run time of the job. There's no reason at all to be clearing the status file in this situation, nor for you to be releasing 'all resources via Director'. :?

Work on resolving your deadlock issue, that's the root of all evil here. Don't worry about all this 'Auto Clearing Status File with Jobs' stuff.
Hi thanks for the reply.

My design simply retrieves from an ODBC stage, filters some things via Transformers, and then places the data into a staging file.

And nope... no multiple links to a destination ODBC stage or anything like that... just as mentioned above.

Yeah, I've gathered that I should not be clearing logs or status files or cleaning up resources... those are not directly related to the issue I am facing.

I guess that since the source database is referenced worldwide, it's probably just users in some corner of the world using that db after all...

Posted: Thu Jan 26, 2006 10:15 pm
by chulett
I don't use the ODBC stage all that much... is there an option to do a 'Read Only' or dirty read type query from the source database? As you mentioned, that may solve this issue for you.

Auto Clearing Status File With Jobs

Posted: Fri Jan 27, 2006 1:59 am
by Nick_6789
chulett wrote:I don't use the ODBC stage all that much... is there an option to do a 'Read Only' or dirty read type query from the source database? As you mentioned, that may solve this issue for you.
Yups, there is an option in the Transaction Handling tab to set the read to read uncommitted (a dirty read).

However, I think this is not recommended, as there may be inconsistencies in the data we extract from the ODBC stage... given the design and frequency of running the jobs...
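
For what it's worth, a hedged sketch of trying that outside DataStage with the Informix dbaccess utility, to see what the stage option amounts to; 'mydb' is a placeholder database name, and the query is abbreviated from the error log above:

#!/bin/sh
# Sketch: run the extract with read-uncommitted isolation via dbaccess.
# 'mydb' is a hypothetical database name; adjust the query as needed.
dbaccess mydb - <<'EOF'
-- Dirty read: do not wait on writers' locks; may return uncommitted rows.
SET ISOLATION TO DIRTY READ;

SELECT S.trnsctn_fl, S.staff_cntct_fl, S.image_cd,
       L.command_type, L.chnge_ts
  FROM lb_staff_cntct S, lb_trnsctn L
 WHERE S.trnsctn_fl = L.trnsctn_fl;
EOF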

Posted: Fri Jan 27, 2006 6:56 am
by chulett
Then you've got yourself a wee bit of a Catch 22, it would seem. :?

Unless I'm mistaken, doing a dirty read just means your query won't wait on other sessions' locks, so you may see uncommitted changes that are still 'in the air' and could later be rolled back. Not sure if I'd label that an inconsistency or not, but that could depend on the design of the job as you noted and how your source system is handling those transactions.

In any case, did you click the Help button inside the stage and read up on what exactly 'Read Uncommitted' means there? Not sure what other solution there is in this case. Anyone else? Thoughts?