writing and reading a hash file in the same stage problem

Post questions here relating to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

jiegao
Premium Member
Posts: 46
Joined: Fri Sep 22, 2006 6:12 pm

writing and reading a hash file in the same stage problem

Post by jiegao »

I have a job like this

Folder stage -> XML stage -> Hash File stage (with input and output on the same file) -> OCI

The reason I use the Hash file stage is to remove duplicate data. Most of the time it works fine, and I have used this pattern a lot in my jobs, with a hash file stage having an input and an output link for the same file. But I ran into a problem today: 10000 records were written to the hash file, yet only 100 records were inserted into the table through the OCI stage. When I viewed the file, it definitely had more than 100 records. The job completed successfully without any error or warning. Has anyone experienced the same problem? Thanks
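
(As a side note for later readers: the reason this pattern removes duplicates is that writes to a hashed file are destructive overwrites by key, so the last row written for a given key is the only one that survives. A minimal DataStage BASIC sketch of that behaviour, using a hypothetical hashed file TEST.DEDUP assumed to already exist in the project account, might look like this.)

Code:

* Sketch only: hashed file writes are destructive overwrites by key,
* which is why loading duplicate keys leaves one row per key.
* TEST.DEDUP is a hypothetical hashed file assumed to exist in the project account.
OPEN "TEST.DEDUP" TO DedupFile ELSE STOP "Cannot open TEST.DEDUP"

WRITE "first row" TO DedupFile, "KEY1"
WRITE "second row" TO DedupFile, "KEY1"    ;* same key - overwrites the first row

READ Rec FROM DedupFile, "KEY1" THEN
   PRINT Rec                               ;* prints "second row"; only one row per key remains
END ELSE
   PRINT "KEY1 not found"
END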
Regards
Jie
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

How many rows went "into" the OCI stage - perhaps you are getting insert errors due to constraints. Add a reject link in the job to ensure that this isn't causing your dropped records.
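
(A rough sketch of one way to check this, using the DataStage BASIC API from a controlling job; the job, stage and link names below are placeholders, not taken from the original job.)

Code:

* Sketch only - job, stage and link names are hypothetical placeholders.
hJob = DSAttachJob("LoadXmlToOracle", DSJ.ERRFATAL)

HashRows = DSGetLinkInfo(hJob, "HashedFileStage", "ToHashedFile", DSJ.LINKROWCOUNT)
OciRows  = DSGetLinkInfo(hJob, "OraOCIStage", "ToOracle", DSJ.LINKROWCOUNT)

Call DSLogInfo("Rows to hashed file: ":HashRows:", rows to OCI: ":OciRows, "RowCountCheck")

ErrCode = DSDetachJob(hJob)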
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Look in the job log for Oracle warnings.
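
(For what it's worth, warnings can also be pulled out of a job's log programmatically rather than scrolled through in Director. A rough DataStage BASIC sketch; the job name is a placeholder, and the empty start/end time arguments are assumed here to mean "no time limit".)

Code:

* Sketch only - the job name is a hypothetical placeholder.
hJob = DSAttachJob("LoadHashedFileToOracle", DSJ.ERRNONE)

* Last 50 warning events; empty start/end times are assumed to mean no time limit.
Warnings = DSGetLogSummary(hJob, DSJ.LOGWARNING, "", "", 50)
PRINT Warnings     ;* an empty result means no warnings were logged

ErrCode = DSDetachJob(hJob)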

It's hashed file, not hash file.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
jiegao
Premium Member
Posts: 46
Joined: Fri Sep 22, 2006 6:12 pm

Post by jiegao »

No error. Only 100 rows went into the OCI stage, but there are 10000 rows in the hashed file.
ArndW wrote:How many rows went "into" the OCI stage - perhaps you are getting insert errors due to constraints. Add a reject link in the job to ensure that this isn't causing your dropped records. ...
Regards
Jie
jiegao
Premium Member
Posts: 46
Joined: Fri Sep 22, 2006 6:12 pm

Post by jiegao »

Thanks Ray. There is no warning.

ray.wurlod wrote:Look in the job log for Oracle warnings.

It's hashed file, not hash file. ...
Regards
Jie
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

Do you have a SELECT on your hashed file read or any warnings in the log file?
jiegao
Premium Member
Posts: 46
Joined: Fri Sep 22, 2006 6:12 pm

Post by jiegao »

No constraints are used. The hashed file stage is used only for removing duplicate records.

ArndW wrote:Do you have a SELECT on your hashed file read or any warnings in the log file? ...
Regards
Jie
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

Also, do you re-create the hashed file each run or append to it? Is the error visible on each run or just occasionally? Do other jobs use the same hashed file? If you turn off buffering on the write, does the error remain?
jiegao
Premium Member
Posts: 46
Joined: Fri Sep 22, 2006 6:12 pm

Post by jiegao »

The file is used in only one job. The file is recreated on every job run. No write cache is used.

ArndW wrote:Also, do you re-create the hashed file each run or append to it? Is the error visible on each run or just occasionally? Do other jobs use the same hashed file? If you turn off buffering on the write d ...
Regards
Jie
jiegao
Premium Member
Posts: 46
Joined: Fri Sep 22, 2006 6:12 pm

Post by jiegao »

Just occasionally. But it happened today and last week.
ArndW wrote:Also, do you re-create the hashed file each run or append to it? Is the error visible on each run or just occasionally? Do other jobs use the same hashed file? If you turn off buffering on the write d ...
Regards
Jie
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

Is it reproducible now? Do other jobs use that hashed file as well, and could they be affecting the contents?
hemachandra.m
Participant
Posts: 27
Joined: Wed Jan 03, 2007 1:29 am

Hashed File Problem

Post by hemachandra.m »

Could you check whether you have given the same path on both the Input and Output tabs of the Hashed File stage? If you have given one path on the input and a different path on the output, the job may be reading from an old hashed file.
Hemachandra
jiegao
Premium Member
Posts: 46
Joined: Fri Sep 22, 2006 6:12 pm

Re: Hashed File Problem

Post by jiegao »

I separated the job into 2 jobs, with a simple job loading from the hashed file to the Oracle table. I got the following log message today:
Run stopped after 100 rows

No warnings.
Regards
Jie
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Run stopped after 100 rows can only be caused by one thing: when the job run request is issued, a limit of 100 rows is imposed. Check the Limits tab on your Job Run Options dialog.
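
(For the record, if the job is started from job control rather than from the Job Run Options dialog, the limit can be set, or cleared, explicitly. A rough DataStage BASIC sketch, with a placeholder job name and assuming a limit value of 0 means "no row limit".)

Code:

* Sketch only - job name is a hypothetical placeholder; 0 is taken to mean "no row limit".
hJob = DSAttachJob("LoadHashedFileToOracle", DSJ.ERRFATAL)

ErrCode = DSSetJobLimit(hJob, DSJ.LIMITROWS, 0)   ;* clear any per-link row limit
ErrCode = DSRunJob(hJob, DSJ.RUNNORMAL)
ErrCode = DSWaitForJob(hJob)

ErrCode = DSDetachJob(hJob)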
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
jiegao
Premium Member
Posts: 46
Joined: Fri Sep 22, 2006 6:12 pm

Post by jiegao »

ray.wurlod wrote:Run stopped after 100 rows can only be caused by one thing; when the job run request is issued, a limit of 100 rows is imposed. Check the Limits tab on your Job Run Options dialog. ...
I will check the Run Options when I get back to the office tomorrow morning. But I run the sequence job from the Director. There are many other jobs in this sequence, and only this job got stopped after 100 rows. Very weird.
Regards
Jie