Duplicate records in a Hash File !!!
Posted: Wed Aug 02, 2006 8:07 am
Yes, I know it's not possible, and I've never seen it before, but one of the DataStage job log files (RT_LOGnnnn) at UniVerse level does have duplicates in it.
As well as the duplicate log records, even the //SEQUENCE.NO control record is duplicated!
Trying to view the log in Director makes Director crash out completely with the error:
Run-time error '457':
This key is already associated with an element of this collection
However, by trawling through the log entries in the UniVerse RT_LOGnnnn file I found this error:
Incorrect group hash in DBsplit30!!
Trying to copy the records into another dummy hashed file had the predictable result - it told me "Record already exists in file." But I have managed to copy the data into another, unused RT_LOG file at UNIX level.
Doing a sort on the file at UniVerse level gives:
SORT ONLY RT_LOG3238 02:57:31pm 02 Aug 2006 PAGE 1
RT_LOG........................
0
0
1
1
2
2
3
3
4
etc. down to...
864
865
866
867
//JOB.STARTED.NO
//PURGE.SETTINGS
//SEQUENCE.NO
//SEQUENCE.NO
857 records listed.
>
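For what it's worth, the Director crash is consistent with what a keyed collection does: the client reads each log record id into a collection, and adding the same key a second time raises run-time error 457. A minimal Python sketch of that failure mode (function and entry names are hypothetical, not Director's actual code):

```python
def load_log(entries):
    """Load (key, detail) pairs into a keyed collection, rejecting duplicates.

    Mimics a VB collection, which raises run-time error 457
    ("This key is already associated with an element of this collection")
    when the same key is added twice.
    """
    collection = {}
    for key, detail in entries:
        if key in collection:
            # This is the condition Director trips over when the corrupted
            # RT_LOG file hands back the same record id twice.
            raise KeyError(
                "This key is already associated with an element "
                "of this collection: " + repr(key)
            )
        collection[key] = detail
    return collection

# A healthy log loads fine...
load_log([("0", "job start"), ("1", "stage begin")])

# ...but a file with duplicated ids, like the corrupted RT_LOG above, fails:
try:
    load_log([("//SEQUENCE.NO", "868"), ("//SEQUENCE.NO", "868")])
except KeyError as exc:
    print(exc)
```

So the error is a symptom, not the disease: Director is behaving correctly, and the underlying hashed file is what needs repair.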