Yes, I know it's not possible, and I've never seen it before, but one of the DataStage job log files (RT_LOGnnnn) at UniVerse level does have duplicates in it.
As well as duplicate log records, even the //SEQUENCE.NO record is duplicated!
Trying to view the log in Director makes Director crash out completely with the error:
Run-time error '457':
This key is already associated with an element of this collection
However, by trawling through the log entries in the UniVerse RT_LOGnnnn file I found this error:
Incorrect group hash in DBsplit30!!
Trying to copy the records into another dummy hashed file had the predictable result: it told me "Record already exists in file." However, I did manage to copy the data into another, unused RT_LOG file at UNIX level.
Doing a sort on the file at UniVerse level gives:
SORT ONLY RT_LOG3238 02:57:31pm 02 Aug 2006 PAGE 1
RT_LOG........................
0
0
1
1
2
2
3
3
4
etc. down to...
864
865
866
867
//JOB.STARTED.NO
//PURGE.SETTINGS
//SEQUENCE.NO
//SEQUENCE.NO
857 records listed.
>
Duplicate records in a Hash File !!!
You have a broken file. You can try running UVFIXFILE at TCL to fix it or, if the log doesn't contain vital information, do a "CLEAR.FILE RT_LOGnnnn" and then manually re-add the purge settings.
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
It is not possible to have true duplicates without the hashed file being damaged. <pedantry>Keys that differ only by non-printing or space characters are not duplicate keys, although they may appear to be to the human viewer.</pedantry>
There are a couple of ways that duplicate keys can occur in a damaged hashed file. One results from an older version of the record, which had been marked as deleted (free space), getting its free-space bit reset (highly unusual). The other is the one you've encountered, where an invalid operation (perhaps a split, perhaps a merge, perhaps an overflow allocation) has resulted in mismatched pointers, so that one copy of the record is not in the correct physical group but can still be found on a sequential scan through the file. You might also get "Record ID xxx does not hash to group nnn!!" messages when processing such a hashed file.
Obviously clearing the file means that it has no logical groups, and any physical structure will be marked entirely as free space, so the invalid pointers will become extinct.
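The mismatched-pointer scenario above can be sketched with a toy model (illustrative Python only; the class, names and hashing here are invented for the sketch, and real UniVerse group hashing, splitting and free-space chains are far more involved). A keyed read looks only in the group the key hashes to, so a stale copy stranded in the wrong physical group is invisible to keyed access yet still appears on a sequential scan, producing the "duplicate" keys seen in the SORT output:

```python
# Toy model of a hashed file: records live in groups, and a record's
# group is chosen by hashing its key modulo the group count.
class ToyHashedFile:
    def __init__(self, modulus=4):
        self.groups = [[] for _ in range(modulus)]

    def _group(self, key):
        return hash(key) % len(self.groups)

    def write(self, key, data):
        # A keyed write looks only in the "correct" group, replacing in place.
        grp = self.groups[self._group(key)]
        for i, (k, _) in enumerate(grp):
            if k == key:
                grp[i] = (key, data)
                return
        grp.append((key, data))

    def read(self, key):
        # A keyed read likewise looks ONLY in the group the key hashes to.
        for k, v in self.groups[self._group(key)]:
            if k == key:
                return v
        return None

    def scan(self):
        # A sequential scan (like SORT or SELECT) walks every group.
        return [k for grp in self.groups for k, _ in grp]


f = ToyHashedFile()
f.write("1", "log entry A")

# Simulate the corruption: a stale copy of record "1" is stranded in the
# wrong physical group, as if left behind by a bad split/merge.
wrong_group = (f._group("1") + 1) % len(f.groups)
f.groups[wrong_group].append(("1", "stale copy"))

f.write("1", "log entry B")  # the keyed write only ever sees one copy
print(f.read("1"))           # keyed read finds only one record
print(sorted(f.scan()))      # sequential scan finds both: ['1', '1']
```

This is also why copying the records out keyed-by-keyed fails with "Record already exists in file": the second copy of each key collides with the first on write.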
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.