
cleanup resources

Posted: Thu Mar 10, 2005 4:04 pm
by scottr
If I select Cleanup Resources for a particular job from Director, will it affect any other jobs that are currently running (which are using hashed files)?

Today I did this and another job stopped with the following error:

Program "JOB.180808767.DT.1358363097.TRANS1": Line 172, Internal data error.

k of 0x834 does not match expected blink of 0x0!
Detected within group starting at address 0x80024000!
File 'xxx/Lookup1996to2002/DATA.30':
Computed blink of 0x834 does not match expected blink of 0x0!
Detected within group starting at address 0x80024000!

thanks

Posted: Thu Mar 10, 2005 7:20 pm
by ray.wurlod
Cleanup resources only affects the currently selected job.

However, your error message indicates that this is not the answer to your problem. Your problem is in the hashed file Lookup1996to2002, which has become corrupted, almost certainly because it has hit the 2GB size limit: 0x80000000 bytes is exactly 2GB, and the group address reported in the error (0x80024000) lies just past that boundary.

You almost certainly will have lost some data, and will not readily be able to determine what data you have lost. The safest thing to do is to clear the hashed file, RESIZE it to use 64-bit addressing, and re-load it.

From the Administrator client command window, or in a dssh session on the server, execute:

Code:

CLEAR.FILE hashedfilename
RESIZE hashedfilename * * * 64BIT
You may need to use SETFILE to establish a VOC pointer to the hashed file first.
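If there is no VOC pointer yet, a SETFILE command along the following lines creates one; the pathname shown here is only a placeholder for wherever your hashed file actually lives on the server:

Code:

SETFILE /path/to/Lookup1996to2002 Lookup1996to2002 OVERWRITING

Once the pointer exists, CLEAR.FILE and RESIZE can refer to the hashed file by the VOC name you gave it.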

You might also contemplate whether you really need every row and every column that you are loading into the hashed file. For example, if you only need current rows, don't load any non-current ones. Any column that is not used in downstream processing should never be loaded into the hashed file. If you can be savage enough with these cuts, you may be able to fit within the 2GB limit.