DSJob -logdetails Hanging

DSJob -logdetails Hanging

Post by hamzaqk »

Hi, I am trying to get the job log from the UNIX command prompt, as Director is far too slow even after changing its refresh interval. When I run the command below it just goes to sleep and never returns anything. Other options such as -jobinfo work fine; it is only anything related to the job log that does not work. I am using the following command, with hard-coded values for the parameters.

Code: Select all

 ./dsjob -domain domain_name:9080 -server server_name -user dsadm -password dsadm -logdetail project_name job_name
If I abort the command with Ctrl+C, it shows the following message:
DSICTerminate, pthread_mutex_destroy failed: Device or resource busy
Any help on this please?

Many thanks!
Teradata Certified Master V2R5
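
For anyone else hitting this, the log-related dsjob options normally work on event ids rather than dumping everything in one go. A minimal sketch, assuming the documented -logsum, -lognewest and -logdetail syntax; the domain, server, credentials and event id are placeholders:

Code: Select all

 # Summary of the log entries for the job: event id, type, timestamp and first line of text
 ./dsjob -domain domain_name:9080 -server server_name -user dsadm -password dsadm -logsum project_name job_name

 # Id of the newest entry of a given type (WARNING here, purely illustrative)
 ./dsjob -domain domain_name:9080 -server server_name -user dsadm -password dsadm -lognewest project_name job_name WARNING

 # Full detail for one specific event id (42 is a placeholder)
 ./dsjob -domain domain_name:9080 -server server_name -user dsadm -password dsadm -logdetail project_name job_name 42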

Post by hamzaqk »

Well, it took ages to return the result, so I guess this workaround is as slow as Director itself! Is there any faster way to retrieve the log?

Thanks
Teradata Certified Master V2R5

Post by ray.wurlod »

Where are your logs, local to the project or in the unified metadata repository? The former is considerably faster.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.

Post by hamzaqk »

They are in the metadata repository. I did some research and found that we can switch back to the old method; it is mentioned on the IBM site. Is this recommended, though? Or is there a way of making the UMR accessible in a faster manner?

I think that if we have to switch back to the old method, there was no point in coming up with the new way of storing the metadata.
http://www-01.ibm.com/support/docview ... -8&lang=en

Thanks
Teradata Certified Master V2R5
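
For reference, the change that technote describes is normally made in the project's DSParams file. The sketch below is only a guess at the mechanics, assuming the RTLogging/ORLogging switches the document mentions, an illustrative install path and GNU sed; check the article itself before touching anything:

Code: Select all

 # Switch one project back to the traditional project-local (RT_LOG) logging.
 # RTLogging=1 re-enables the local log; ORLogging=0 stops writing events to XMETA.
 # Flag names per the IBM technote; the path and sed -i (GNU) are assumptions.
 cd /opt/IBM/InformationServer/Server/Projects/project_name
 cp DSParams DSParams.bak
 sed -i 's/^RTLogging=0/RTLogging=1/' DSParams
 sed -i 's/^ORLogging=1/ORLogging=0/' DSParams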

Post by chulett »

You might want to ping your official support provider and see if you have an XMETA issue - bad or missing index, perhaps? Shouldn't be *that* slow.
-craig

"You can never have too many knives" -- Logan Nine Fingers

Post by ray.wurlod »

I don't think they've got it right yet - and the log viewer in Web Console still leaves a lot to be desired.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.

Post by hamzaqk »

@Ray: yes, I guess so. I hope they do something about it, as it is pretty frustrating.

@Craig: sounds like a good idea. I just opened the XMETA database and it has loads of tables in it. Which ones should I be looking at for DataStage logs and job-related information? We have DS_JOBS etc. in UniVerse, but I doubt they kept the same names in the new repository.

Many thanks
Teradata Certified Master V2R5

Post by chulett »

The XMETA database is well and truly obfuscated, so you'd need to involve your official support provider for help with that.
-craig

"You can never have too many knives" -- Logan Nine Fingers

Post by Ultramundane »

chulett wrote:The XMETA database is well and truly obfuscated, so you'd need to involve your official support provider for help with that. ...
I ran a lot of tracing on the DB2 UDB instance while pulling out log information.

Here is what I found:

#1: DB2 UDB is extremely inefficient at cursor handling, specifically the cursor close event.

#2: Reading out the log information:

DataStage opens a cursor and runs a select statement for every event in the log. It should not be running that many select statements, and it certainly should not be opening that many cursors.

IMHO, DataStage needs to run one select statement and open one cursor (if a cursor is even necessary), with the results returned in the required order.
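
To make the contrast concrete, the single-cursor approach would be one ordered query per job run rather than one query per event. The sketch below uses the table name from the index DDL later in the thread, but which cis*_xmeta columns identify the project, job and timestamp is purely an assumption:

Code: Select all

 # One ordered SELECT over the logging table instead of one cursor per log event.
 # Column meanings are guesses; map them to the real schema before using this.
 db2 connect to xmeta
 db2 "SELECT * FROM XMETA.LOGGING_XMETAGEN_LOGGINGEVENT1466CB5F WHERE cis14_xmeta = 'project_name' AND cis12_xmeta = 'job_name' ORDER BY cis7_xmeta"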

Post by nagarjuna »

Is the job whose log you are trying to view a multi-instance job?
Nag

Post by Ultramundane »

nagarjuna wrote: Is the job whose log you are trying to view a multi-instance job?
The indexing is not very good on the XMETA tables. There are not any composite indexes.

This helps (about 20 times faster, but still too slow):

Code: Select all

 CREATE INDEX XMETA.IDX_1466CB5F_RYAN
 ON XMETA.LOGGING_XMETAGEN_LOGGINGEVENT1466CB5F(cis14_xmeta, cis12_xmeta, cis13_xmeta, cis7_xmeta, categoryName_xmeta, deleted_xmeta)
 ALLOW REVERSE SCANS
 ;

 GRANT CONTROL ON INDEX XMETA.IDX_1466CB5F_RYAN TO USER XMETA
 ;

 REORGCHK UPDATE STATISTICS ON SCHEMA XMETA
 ;
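
If anyone wants to try this, a sketch of applying it from the DB2 command line; the instance user, database name and file name are illustrative:

Code: Select all

 # Run as the DB2 instance owner (instance and user names are assumptions).
 db2 connect to xmeta
 db2 -tvf create_logging_index.sql           # file containing the CREATE INDEX / GRANT above
 db2 "REORGCHK UPDATE STATISTICS ON SCHEMA XMETA"
 db2 connect reset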

Post by Ultramundane »

Ultramundane wrote: The indexing is not very good on the XMETA tables. There are not any composite indexes.
The data model is not very good either. It looks as if it was just thrown together: a whole bunch of columns and a whole bunch of indexes, in the hope that if they can be used, they will be used.

The columns have bad names (so what, though). The columns also have terrible choices of datatypes, and this is a big deal: numeric values should not be stored as VARGRAPHIC, because that makes querying the repository difficult and leads to very expensive queries.
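
For example, filtering or sorting numerically on a value stored as VARGRAPHIC ends up wrapping the column in casts, and a predicate on a function of a column generally cannot use an index on that column. A purely hypothetical illustration (table and column names are made up):

Code: Select all

 # Hypothetical: the id is stored as VARGRAPHIC, so numeric filtering needs a cast,
 # which typically forces a scan instead of an index lookup.
 db2 "SELECT * FROM XMETA.SOME_LOGGING_TABLE WHERE DECIMAL(VARCHAR(event_id_as_vargraphic)) > 100"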

So, the data model first needs to be fixed, and then the developers and DBAs need to work together to fix the code and enhance the schema with a good choice of indexes.

From what I can tell, the best thing you can do is to turn this feature off, as described in the article you posted.