Dead End
Moderators: chulett, rschirm, roy
-
- Premium Member
- Posts: 503
- Joined: Wed Jun 29, 2005 8:14 am
Dead End
Hi all,
I am facing a strange issue. I designed a job and it is a very normal job, with a source, a target and a few lookups in it. No stages other than DRS, hashed files and IPCs are used in the job.
The job runs fine for some rows and then slowly dies out. What I mean is that the job starts running at a rate of 150 rows/sec, then slowly drops to 1 row/sec, and when I look at the performance stats the job looks like it has died. It doesn't abort and doesn't have any warning messages in the log file.
Can anyone help me with this? What may be the reason?
-
- Premium Member
- Posts: 503
- Joined: Wed Jun 29, 2005 8:14 am
If you don't properly size a hashed file for the number of rows you will be sending it, it starts to 'overflow' and slows down. When it overflows badly, it starts slowing down badly as well - until the job seems to crawl or stand still.
One test would be to replace the hashed file with a Sequential stage and see if the writes stay steady that way.
To size the hashed file, find the sizes of the DATA.30 and OVER.30 in the hashed file directory, add them and then divide by 2048. Take that number and put at least that amount in as the Minimum Modulus of the hashed file.
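To make that arithmetic concrete, here is a rough sketch of the calculation in Python (the directory path is hypothetical, and 2048 is the group size assumed by the advice above):

import os

# Hypothetical path to the hashed file's directory on the server.
hashed_file_dir = "/data/project/MyHashedFile"

# Sizes, in bytes, of the data and overflow portions of the hashed file.
data_bytes = os.path.getsize(os.path.join(hashed_file_dir, "DATA.30"))
over_bytes = os.path.getsize(os.path.join(hashed_file_dir, "OVER.30"))

# Add the two sizes and divide by the 2048-byte group size, per the
# advice above, to get a starting value for the Minimum Modulus.
minimum_modulus = (data_bytes + over_bytes) // 2048

print("Set Minimum Modulus to at least", minimum_modulus)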
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers
-
- Premium Member
- Posts: 503
- Joined: Wed Jun 29, 2005 8:14 am
Re: Dead End
I had a similar issue a couple of times; the reason was that there were some locks on the target table. Once you are able to unlock them, I guess you can get rid of the issue.
-
- Premium Member
- Posts: 503
- Joined: Wed Jun 29, 2005 8:14 am
Re: Dead End
Hey sun rays,
Can you please explain in more detail what you mean by locks on tables?
sun rays wrote: I had a similar issue a couple of times; the reason was that there were some locks on the target table. Once you are able to unlock them, I guess you can get rid of the issue.
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Every database uses locks of some kind to prevent data anomalies (for example lost data caused by two processes trying to change the same data at the same time).
This is a big topic. Ask your DBA to explain.
The DataStage process doing inserts or updates will request locks. If there is some other process in the database already holding those locks, then the DataStage process must wait.
A really bad DataStage design can actually lock itself out, for example by starting two processes trying to update the same table. (What's happening inside the shared container??)
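To see what a blocked writer looks like, here is a minimal sketch using Python's built-in sqlite3 module purely as a stand-in database (the file and table names are made up for the example; the thread's actual target is whatever the DRS stage points at):

import sqlite3

# Two connections to the same database file stand in for two processes.
# isolation_level=None means we manage transactions explicitly.
conn_a = sqlite3.connect("demo.db", isolation_level=None)
conn_b = sqlite3.connect("demo.db", isolation_level=None, timeout=2)

conn_a.execute("CREATE TABLE IF NOT EXISTS target (id INTEGER, val TEXT)")
conn_a.execute("DELETE FROM target")
conn_a.execute("INSERT INTO target VALUES (1, 'original')")

# 'Process' A opens a transaction and updates the row, then holds the
# write lock by not committing.
conn_a.execute("BEGIN IMMEDIATE")
conn_a.execute("UPDATE target SET val = 'changed by A' WHERE id = 1")

# 'Process' B (think: the DataStage job) tries to write the same table.
# It must wait for A's lock; after the 2-second timeout it gives up.
try:
    conn_b.execute("UPDATE target SET val = 'changed by B' WHERE id = 1")
except sqlite3.OperationalError as exc:
    print("Writer blocked:", exc)  # 'database is locked'

conn_a.rollback()  # releasing the lock lets other writers proceed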
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
-
- Premium Member
- Posts: 503
- Joined: Wed Jun 29, 2005 8:14 am
The shared container is selecting data from a table and writing to a hashed file.
It's selecting data from the same table that I am writing to.
ray.wurlod wrote:Every database uses locks of some kind to prevent data anomalies (for example lost data caused by two processes trying to change the same data at the same time).
This is a big topic. Ask your DBA to explain.
The DataStage process doing inserts or updates will request locks. If there is some other process in the database already holding those locks, then the DataStage process must wait.
A really bad DataStage design can actually lock itself out, for example by starting two processes trying to update the same table. (What's happening inside the shared container??)
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
So?
UniVerse (DataStage Engine) is a database, and hashed files are how that particular database implements its tables. Row-level locks are still taken.
If you populate it with a UV stage, the lock is promoted to table-level once a configurable threshold (MAXRLOCK) is reached.
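For the flavour of that promotion, here is an illustrative sketch of the idea in Python; this is not the UniVerse implementation, and the tiny threshold is chosen only so the example promotes quickly:

# Illustrative only: once a writer holds more than a threshold of row
# locks on one table, trade them for a single table-level lock. This is
# the idea behind MAXRLOCK, not the actual engine code.
MAXRLOCK = 4

class TableLocks:
    def __init__(self, maxrlock=MAXRLOCK):
        self.maxrlock = maxrlock
        self.row_locks = set()
        self.table_locked = False

    def lock_row(self, row_id):
        if self.table_locked:
            return  # the table lock already covers every row
        self.row_locks.add(row_id)
        if len(self.row_locks) > self.maxrlock:
            # Promote: drop the individual row locks, take one table lock.
            self.row_locks.clear()
            self.table_locked = True
            print("Row locks exceeded MAXRLOCK; promoted to table-level lock")

locks = TableLocks()
for row_id in range(6):
    locks.lock_row(row_id)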
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.