Reducing Joblogsize after Purge?
Moderators: chulett, rschirm, roy
-
- Participant
- Posts: 42
- Joined: Tue Oct 20, 2009 8:36 am
Is it possible to reduce the size of the joblog after purging log entries (I've purged all entries older than 30 days)? Perhaps by some kind of Universe Database command like "Optimize Database" or so (I already know the "RT_LOGXXX" table which I need via uvsh)?
I know that it's possible to delete the job and recreate it, but I don't want to lose all the entries...
-
- Participant
- Posts: 42
- Joined: Tue Oct 20, 2009 8:36 am
We've built a job that lists log files which have grown too large. One of them belongs to a multiple-instance job and is larger than 1 GB. After deleting entries older than 30 days via the Director, the size stayed almost exactly the same (the older log entries really are deleted): 1207 MB before, 1206 MB now. The deleted entries should definitely amount to more than 1 MB; normally the file should end up about half the size or smaller.
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Transactional DELETE does not free up disk space - it remains within the hashed file structure for re-use.
Commands CLEAR.FILE and RESIZE are non-transactional and do free up disk space. An appropriate syntax is
Code: Select all
RESIZE RT_LOGnnn * * *
This does not alter any of the hashed file's tuning parameters, but does free up disk space.
If disk space is at a premium in the project directory, add a USING clause to the RESIZE command to use a different file system as temporary workspace.
Code: Select all
RESIZE RT_LOGnnn * * * USING /usr/tmp
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
-
- Participant
- Posts: 42
- Joined: Tue Oct 20, 2009 8:36 am
uvsh "RESIZE RT_LOGxxxx *"
worked so far and recovered about 200MB.
Analyze.File before resize:
Code: Select all
File name .................. RT_LOGxxxx
Pathname ................... RT_LOGxxxx
File type .................. DYNAMIC
NLS Character Set Mapping .. NONE
Hashing Algorithm .......... GENERAL
No. of groups (modulus) .... 6544 current ( minimum 1 )
Large record size .......... 1628 bytes
Group size ................. 2048 bytes
Load factors ............... 80% (split), 50% (merge) and 50% (actual)
Total size ................. 1265258496 bytes
After:
Code: Select all
File name .................. RT_LOGxxxx
Pathname ................... RT_LOGxxxx
File type .................. DYNAMIC
NLS Character Set Mapping .. NONE
Hashing Algorithm .......... GENERAL
No. of groups (modulus) .... 4040 current ( minimum 1 )
Large record size .......... 1628 bytes
Group size ................. 2048 bytes
Load factors ............... 80% (split), 50% (merge) and 80% (actual)
Total size ................. 1088081920 bytes
I think the reason more was not recovered is this load factor. I think I'll have to live with that and just delete more...
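The numbers in the two ANALYZE.FILE listings can be checked with a little arithmetic. This is a rough sanity-check sketch, assuming (as a simplification) that the data volume in a dynamic file is roughly proportional to modulus times the actual load factor:

```python
# Sizes taken from the two ANALYZE.FILE listings above.
before_bytes = 1265258496
after_bytes = 1088081920
recovered = before_bytes - after_bytes
print(recovered // 2**20, "MiB recovered")  # ~169 MiB, the "about 200MB"

# Rough consistency check: modulus * actual load approximates data volume.
# Before: 6544 groups at 50% actual; after: 4040 groups at 80% actual.
before_data = 6544 * 0.50
after_data = 4040 * 0.80
print(before_data, after_data)  # 3272.0 vs 3232.0 -- within ~1.2%
```

The near-equal products suggest the resize simply repacked the same data more densely (50% to 80% actual load), which is why deleting more entries, not resizing again, is what would shrink the file further.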