Very slow log refresh under director

Post questions here related to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

RodBarnes
Charter Member
Posts: 182
Joined: Fri Mar 18, 2005 2:10 pm

Very slow log refresh under director

Post by RodBarnes »

In 8.1, has anyone else experienced a very slow log refresh when viewing a job log in Director? It is significantly slower under 8.1 than under 7.5 -- to the point that it is pretty useless for troubleshooting.

Example: I can run a job and then go to view the log. Even though the refresh is set to 5 seconds (Tools -> General -> Refresh Interval), and I can see the display physically refresh, the data doesn't show up for quite a while -- sometimes as much as a minute or more later. Not very helpful when you are trying to troubleshoot an issue in a job.
asorrell
Posts: 1707
Joined: Fri Apr 04, 2003 2:00 pm
Location: Colleyville, Texas

Post by asorrell »

There's a PMR open with IBM for this. It has to do with the fact that, starting with 8.1, the logs are now in the xmeta operational repository (i.e., Oracle/DB2/SQL Server) instead of in UniVerse tables in the UV account. On some systems it can lead to loss of log entries.

http://www-01.ibm.com/support/docview.w ... -8&lang=en

Just in case you can't see that link, here's a partial quote:
Abstract
DataStage logging was changed at release 8.1 to log job run detail records into the operational repository (xmeta) rather than the local project-level log files (RT_LOGxxx) that were utilized for each job in prior releases. As a result of this change we have seen the following issues:

Job run times can increase and Information Server client applications may not be as responsive depending on the amount of log data that is generated by the jobs.

In some cases no job log entries are written to the log as viewed via the DataStage Director client or Web Console, even though jobs appear to run and in some cases job log detail entries may be missing or do not show up until later.

Job log purge operations may run slowly or fail depending on the amount of log entries.

Content
These issues can be worked around by reverting to the logging mechanism that was in place prior to release 8.1. To do so, make the following project-level changes on the DataStage engine server.
Edit the project-level DSParams file, typically located in /opt/IBM/InformationServer/Server/Projects/%projectName% for Linux/UNIX or C:\IBM\InformationServer\Server\Projects\%projectName% for Windows, and modify the following two lines as shown to revert to pre-8.1 logging:

RTLogging=1
ORLogging=0

Keep in mind that newly created projects inherit their settings from the template DSParams file, located by default in /opt/IBM/InformationServer/Server/Template/DSParams for Linux/UNIX and C:\IBM\InformationServer\Server\Template\DSParams for Windows. That file should also be modified to ensure that new projects use the pre-8.1 logging mechanism.
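For anyone applying this change across many projects, the two-line edit can be scripted. This is a minimal sketch, assuming DSParams is a plain text file of KEY=VALUE lines (which is how the RTLogging/ORLogging settings above appear); it is an illustration, not an official IBM utility:

```python
def set_logging_mode(dsparams_text: str, rt_logging: bool = True) -> str:
    """Rewrite DSParams content to revert to pre-8.1 file-based logging
    (RTLogging=1, ORLogging=0), or back to repository logging if
    rt_logging is False. All other lines pass through unchanged."""
    values = {"RTLogging": "1" if rt_logging else "0",
              "ORLogging": "0" if rt_logging else "1"}
    out = []
    for line in dsparams_text.splitlines():
        key = line.split("=", 1)[0]
        if key in values:
            line = f"{key}={values[key]}"
        out.append(line)
    return "\n".join(out)
```

In practice you would read the project's DSParams file, pass its contents through a function like this, and write the result back (after taking a backup).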

After switching to RTLogging, the existing log detail entries in the repository can still be viewed via the Web Console or Server Console, but they will not be accessible from the DataStage Director. These log entries should be purged once they are no longer required by scheduling a background purge; purging up to 10,000 entries at a time is suggested, to minimize memory requirements and to avoid WebSphere Application Server out-of-memory issues from trying to purge all the log entries as one task.
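The batched-purge idea above (deleting at most ~10,000 entries per pass rather than everything in one transaction) can be sketched as a generic SQL loop. The table and column names here are hypothetical, and SQLite stands in for the actual xmeta repository purely for illustration:

```python
import sqlite3

def purge_in_batches(conn, cutoff_ts, batch_size=10_000):
    """Delete old log entries in bounded batches so no single
    transaction has to touch all rows at once. Returns the total
    number of rows deleted."""
    total = 0
    while True:
        cur = conn.execute(
            # LOG_EVENTS / created_ts are hypothetical names, not the
            # real xmeta schema.
            "DELETE FROM LOG_EVENTS WHERE rowid IN ("
            "  SELECT rowid FROM LOG_EVENTS"
            "  WHERE created_ts < ? LIMIT ?)",
            (cutoff_ts, batch_size))
        conn.commit()
        if cur.rowcount == 0:
            break
        total += cur.rowcount
    return total
```

Committing after each small batch is what keeps the memory footprint bounded; a single `DELETE` of millions of rows is exactly the pattern the IBM note warns against.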
Andy Sorrell
Certified DataStage Consultant
IBM Analytics Champion 2009 - 2020
JRodriguez
Premium Member
Posts: 425
Joined: Sat Nov 19, 2005 9:26 am
Location: New York City

Re: Very slow log refresh under director

Post by JRodriguez »

RodBarnes,

How about clearing the log? If you have a lot of log entries, it gets slower and slower. In the Administrator you can set the auto-purge option by number of days or number of previous runs.

Also, in 8.1 you can set the logs to be stored in files -- the old way -- instead of being stored in the metadata repository by default.
Julio Rodriguez
ETL Developer by choice

"Sure we have lots of reasons for being rude - But no excuses
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

No matter where the log is stored, it's in a database table, and its retrieval query includes an ORDER BY clause on the date sorter field (a hidden field in the log view, but it's there).

Therefore it behoves you always to keep the log as small as possible. That is, have as few entries as possible. If you need to keep older log entries, archive them, and keep only one or two runs in the current working log.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
RodBarnes
Charter Member
Posts: 182
Joined: Fri Mar 18, 2005 2:10 pm

Post by RodBarnes »

Just for reference: I am not experiencing anything related to log size. It doesn't matter how many entries there are -- even if I have just cleared it out. In fact, if the latest entry hasn't displayed yet (i.e., I am still waiting for it) and I attempt to clear the log, it tells me that no entries were removed. The issue I am encountering is very much the one described in the first reply.
asorrell
Posts: 1707
Joined: Fri Apr 04, 2003 2:00 pm
Location: Colleyville, Texas

Post by asorrell »

I figured that might be your problem. FYI: if you talk to IBM about it, you may need to refer to APAR JR31806.
Andy Sorrell
Certified DataStage Consultant
IBM Analytics Champion 2009 - 2020
chanaka
Premium Member
Posts: 96
Joined: Tue Sep 15, 2009 4:06 am
Location: United States

Post by chanaka »

Hi Guys,

Say we purged the logs without any issue. How do we make the xmeta database shrink and release the unused tablespace allocated in DB2? I searched a bit but was unable to find a clear answer.

FYI, my DataStage 8.1 Red Hat box now has 0 space left on the / drive (full capacity is 62 GB). We don't have anything else that can be cleaned. I know this is somewhat off-topic here, but it would be great if any of you could help us.
db2 => CALL GET_DBSIZE_INFO(?, ?, ?, -1)

Value of output parameters
--------------------------
Parameter Name : SNAPSHOTTIMESTAMP
Parameter Value : 2009-11-12-16.13.19.674878

Parameter Name : DATABASESIZE
Parameter Value : 31141191680

Parameter Name : DATABASECAPACITY
Parameter Value : 31330205692

Return Status = 0
Chanaka Wagoda
venky144
Participant
Posts: 4
Joined: Mon Mar 29, 2010 2:45 pm
Location: Irving

Post by venky144 »

I am facing the same issue as chanaka, but the database I have is SQL Server 2005; everything else is similar.
asorrell
Posts: 1707
Joined: Fri Apr 04, 2003 2:00 pm
Location: Colleyville, Texas

Post by asorrell »

Recent update: my understanding is that the performance issue will finally be fixed in release 8.5. Something to look forward to...
Andy Sorrell
Certified DataStage Consultant
IBM Analytics Champion 2009 - 2020