Writing and reading a hash file in the same stage problem

Posted: Wed Aug 08, 2007 8:43 pm
by jiegao
I have a job like this

Folder stage -> XML stage -> Hash File stage(with input and output on the same file) -> OCI

The reason I use the Hash File stage is to remove duplicate data. Most of the time it works fine, and I have used this pattern a lot in my jobs, with a hash file stage having an input and an output link for the same file. But I ran into a problem today: 10,000 records were written to the hash file, yet only 100 records were inserted into the table through the OCI stage. When I viewed the file, it definitely had more than 100 records. The job completed successfully without any error or warning. Has anyone experienced the same problem? Thanks
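(Editorial aside: the de-duplication works because a hashed file is keyed; a write that carries an already-stored key overwrites the existing row, so only the last row per key survives for the output link to read. Below is a minimal sketch of that behaviour in Python; the column names are hypothetical and the single-column key is an assumption.)

    # Sketch of hashed-file de-duplication: writes are keyed, and a write with an
    # existing key overwrites the stored row, so the last row per key wins.
    # Column names here are hypothetical.
    rows = [
        {"cust_id": 1, "amount": 10},
        {"cust_id": 2, "amount": 20},
        {"cust_id": 1, "amount": 30},   # duplicate key: replaces the first row
    ]

    hashed_file = {}                        # stands in for the keyed hashed file
    for row in rows:
        hashed_file[row["cust_id"]] = row   # write on the input link

    deduplicated = list(hashed_file.values())   # what the output link reads
    print(deduplicated)
    # [{'cust_id': 1, 'amount': 30}, {'cust_id': 2, 'amount': 20}]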

Posted: Wed Aug 08, 2007 8:51 pm
by ArndW
How many rows went "into" the OCI stage - perhaps you are getting insert errors due to constraints. Add a reject link in the job to ensure that this isn't causing your dropped records.

Posted: Wed Aug 08, 2007 8:54 pm
by ray.wurlod
Look in the job log for Oracle warnings.

It's hashed file, not hash file.

Posted: Wed Aug 08, 2007 9:04 pm
by jiegao
No error. Only 100 rows went into the OCI stage, but there are 10,000 rows in the hash file.
ArndW wrote:How many rows went "into" the OCI stage - perhaps you are getting insert errors due to constraints. Add a reject link in the job to ensure that this isn't causing your dropped records. ...

Posted: Wed Aug 08, 2007 9:06 pm
by jiegao
Thanks Ray. There is no warning.

ray.wurlod wrote:Look in the job log for Oracle warnings.

It's hashed file, not hash file. ...

Posted: Wed Aug 08, 2007 9:07 pm
by ArndW
Do you have a SELECT on your hashed file read or any warnings in the log file?

Posted: Wed Aug 08, 2007 9:09 pm
by jiegao
No constraint is used. The hashed file stage is used only for removing duplicate records.

ArndW wrote:Do you have a SELECT on your hashed file read or any warnings in the log file? ...

Posted: Wed Aug 08, 2007 9:10 pm
by ArndW
Also, do you re-create the hashed file each run or append to it? Is the error visible on each run or just occasionally? Do other jobs use the same hashed file? If you turn off buffering on the write, does the error remain?

Posted: Wed Aug 08, 2007 9:13 pm
by jiegao
The file is used in only one job. The file is recreated on every job run. No write cache is used.

ArndW wrote:Also, do you re-create the hashed file each run or append to it? Is the error visible on each run or just occasionally? Do other jobs use the same hashed file? If you turn off buffering on the write d ...

Posted: Wed Aug 08, 2007 9:14 pm
by jiegao
Just occasionally, but it happened today and last week.
ArndW wrote:Also, do you re-create the hashed file each run or append to it? Is the error visible on each run or just occasionally? Do other jobs use the same hashed file? If you turn off buffering on the write d ...

Posted: Wed Aug 08, 2007 9:46 pm
by ArndW
Is it reproducible now? Do other jobs use that hashed file as well, and could they be affecting its contents?

Hash File Problem

Posted: Wed Aug 08, 2007 11:01 pm
by hemachandra.m
Could you check whether you have given the same path on both the Input and Output tabs of the Hash File stage? If you have given one path on the input and a different path on the output, it may be reading from an old hash file.

Re: Hash File Problem

Posted: Thu Aug 09, 2007 10:12 am
by jiegao
I separated the job into two jobs, the second being a simple job loading from the hashed file to the Oracle table. I got the following log entry today:
Run stopped after 100 rows

No warning.

Posted: Thu Aug 09, 2007 4:04 pm
by ray.wurlod
Run stopped after 100 rows can only be caused by one thing: when the job run request is issued, a limit of 100 rows is imposed. Check the Limits tab in your Job Run Options dialog.
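(Editorial aside: the same limit can also be imposed when the run request comes from a script rather than from the Job Run Options dialog. Below is a sketch of a wrapper that starts a job through the dsjob client with an explicit row limit; the project and job names are hypothetical, and the -rows option is used on the assumption that it imposes the same limit as the Limits tab.)

    # Sketch: a wrapper script that starts a DataStage job with a row limit.
    # Assumption: dsjob's "-rows n" option imposes the same limit as the Limits
    # tab of Job Run Options. Project and job names are hypothetical.
    import subprocess

    project = "MyProject"       # hypothetical project name
    job = "LoadFromHashedFile"  # hypothetical job name

    # If a wrapper (or scheduler) passes "-rows 100" like this, the job stops
    # after 100 rows even though no limit is set inside the job design itself.
    subprocess.run(
        ["dsjob", "-run", "-rows", "100", "-wait", "-jobstatus", project, job],
        check=True,
    )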

Posted: Thu Aug 09, 2007 7:16 pm
by jiegao
ray.wurlod wrote:Run stopped after 100 rows can only be caused by one thing; when the job run request is issued, a limit of 100 rows is imposed. Check the Limits tab on your Job Run Options dialog. ...
I will check the Run Options when I get back to the office tomorrow morning. But I run the sequence job from the Director. There are many other jobs in this sequence, and only this job was stopped after 100 rows. Very weird.