
Abnormal termination of stage - Hash files

Posted: Thu Jan 26, 2006 11:14 am
by sankarsadasivan
Abnormal termination of stage
.CHashedFileStage336.IDENT5 detected

There is a problem with the creation of hash files in one of the jobs.
The job fails with the above error. It failed while writing 15,894,729 rows (2 columns), but the same job was running fine before with the same amount of data.

Is it a problem with the size or the data?

Any ideas? Please suggest.

Posted: Thu Jan 26, 2006 12:36 pm
by ArndW
You should tell us what error message it failed with before we can give you an answer. A common error is trying to write an empty key.

Posted: Thu Jan 26, 2006 4:02 pm
by ray.wurlod
Point 1 - the problem is not with creation of the hashed file (note, it's "hashed", not "hash"). Claiming that it is can be misleading.

Point 2 - how large are the rows? Does this size of load get anywhere near 2GB of data? On the server, find out the sizes of the DATA.30 and OVER.30 files associated with this hashed file.
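
For example, on Unix you can check the sizes from the hashed file's directory (MyHashedFile and the path are placeholders - substitute your own):

    cd /path/to/project/MyHashedFile
    ls -l DATA.30 OVER.30

If either file is approaching 2,147,483,648 bytes, you have hit the 32-bit file size limit. If memory serves, a dynamic hashed file can be converted to 64-bit addressing with something like RESIZE MyHashedFile * * * 64BIT from the TCL prompt, but check your UniVerse documentation before trying it.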

Point 3 - has the hashed file become corrupted in any way? Execute a query against it (a simple count will suffice).
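
For example, from the Administrator client's Command window (or a telnet TCL session in the project):

    COUNT MyHashedFile

where MyHashedFile is a placeholder for your hashed file's name. A corrupted file will typically abort the count with a read error rather than return a row count. If the file was created at a directory path rather than in the account, you may first need to point a VOC entry at it, for example with SETFILE /path/to/MyHashedFile MyHashedFile.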

Point 4 - does your job design allow a NULL key to be attempted to be written? (Merely having not null in the Columns grid does not enforce anything - you need explicitly to prevent null keys from getting through.)
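
For example, a constraint along these lines on the Transformer output link that feeds the Hashed File stage will drop NULL and empty keys (InLink and KeyCol are placeholder names):

    Not(IsNull(InLink.KeyCol)) And Len(Trim(InLink.KeyCol)) > 0

Anything failing the constraint can be sent down a reject link for investigation.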

Abnormal termination of stage - Hash Files

Posted: Sat Jan 28, 2006 5:44 am
by maliaydin
Hi,
When you are using large data (greater than 2 GB), the Aggregator stage gives an error due to the method option. Change the Aggregator stage method from hash to sort.

Posted: Sat Jan 28, 2006 9:02 am
by chulett
Welcome aboard, maliaydin. One thing to know here is that this forum is for Server jobs, and the first post in each thread should be marked the same. This means you'd need to stick to Server answers here. Answers relevant to PX jobs would belong over in the EE forum unless someone has specifically marked their question as a Parallel question. As there is no specific 'method' in a Server Aggregator stage, I'm assuming you've given a PX-specific answer. Apologies if that's not the case. Plus I'm not sure why we're talking about that stage, as it doesn't seem to be related to this post. :?

Anyway, just to finish this off, you can accomplish something similar in a Server Aggregator by pre-sorting the data in an order that supports the aggregation being done and then by asserting that Sort Order in the Aggregator.
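
For instance (hypothetical table and column names), you could sort in the source SQL:

    SELECT account_id, amount FROM transactions ORDER BY account_id

and then, on the Aggregator's input link, declare account_id as sorted Ascending. The stage can then output each group as soon as the key value changes instead of holding the whole data set in memory.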

Re: Abnormal termination of stage - Hash files

Posted: Sat Jan 28, 2006 3:10 pm
by ceenu
Hi,

When you are using large data, the Aggregator stage gives an error due to the method option. Change the method option in the Aggregator stage to SORT.
If possible, give us the exact error you are getting.

Thanks
cnu

Posted: Sat Jan 28, 2006 3:22 pm
by chulett
Ok... :roll:

Hello? [tap tap] Is this thing on?

Posted: Sat Jan 28, 2006 3:29 pm
by ray.wurlod
This thread began as a question about hashed files. I have no idea why people responded with thoughts on Aggregator stages.

We continue to await a response from sankarsadasivan to the questions posed about the job design, data and hashed file size.

Please begin a separate thread if you want to discuss Aggregator - it only confuses people searching the forum when threads get hijacked.

Moderator: can you untangle this thread into two?