Dead End

Post questions here related to DataStage Server Edition, in areas such as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

DeepakCorning
Premium Member
Posts: 503
Joined: Wed Jun 29, 2005 8:14 am

Dead End

Post by DeepakCorning »

Hi all,

I am facing a strange issue. I designed a job and it is a very normal job, with a source, a target and a few lookups in it. No stages other than DRS, hashed files and IPCs are used in the job.

The job runs fine for some rows and then slowly dies out. What I mean is that the job starts running at a rate of 150 rows/sec, then slowly drops to 1 row/sec, and when I look at the performance stats the job looks like it has died. It doesn't abort and doesn't have any warning messages in the log file.

Can anyone help me with this? What might be the reason?
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

What stages are you writing to in the job? They would be a typical culprit in a situation like this, especially a hashed file that's not properly sized.
-craig

"You can never have too many knives" -- Logan Nine Fingers
DeepakCorning
Premium Member
Posts: 503
Joined: Wed Jun 29, 2005 8:14 am

Post by DeepakCorning »

I am writing to a target table (DRS) and, yes, I am writing to a hashed file as well.

So is that the problem? But why? What is the reason?
DeepakCorning
Premium Member
Posts: 503
Joined: Wed Jun 29, 2005 8:14 am

Post by DeepakCorning »

Oops, I forgot to mention that I am writing to a shared container as well.
DeepakCorning wrote:I am writing to a target table (DRS) and, yes, I am writing to a hashed file as well.

So is that the problem? But why? What is the reason?
rajkraj
Premium Member
Posts: 98
Joined: Wed Jun 15, 2005 1:41 pm

Post by rajkraj »

I think the problem is not with DS; the problem might be with your target tables... it happens sometimes... when you are loading into the tables, sometimes the tables get locked. It might be because of your user access permissions or a problem with your Oracle server.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

If you don't properly size a hashed file for the number of rows you will be sending it, it starts to 'overflow' and slows down. When it overflows badly, it starts slowing down badly as well - until the job seems to crawl or stand still.

One test would be to replace the hashed file with a Sequential stage and see if the writes stay steady that way.

To size the hashed file, find the sizes of the DATA.30 and OVER.30 in the hashed file directory, add them and then divide by 2048. Take that number and put at least that amount in as the Minimum Modulus of the hashed file.
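
As a rough sketch of that arithmetic only (the directory path below is just an example, and this assumes the 2048-byte group size mentioned above):

import os

# Sketch of the sizing arithmetic described above: add the sizes of DATA.30
# and OVER.30 in the hashed file directory and divide by the 2048-byte group
# size to get a starting value for the Minimum Modulus.
hashed_file_dir = "/path/to/YourHashedFile"  # example path only

data_bytes = os.path.getsize(os.path.join(hashed_file_dir, "DATA.30"))
over_bytes = os.path.getsize(os.path.join(hashed_file_dir, "OVER.30"))

minimum_modulus = (data_bytes + over_bytes) // 2048
print("Set Minimum Modulus to at least:", minimum_modulus)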
-craig

"You can never have too many knives" -- Logan Nine Fingers
DeepakCorning
Premium Member
Posts: 503
Joined: Wed Jun 29, 2005 8:14 am

Post by DeepakCorning »

I don't think it's a hashed file load problem, as I changed the logic and removed the hashed file load from the job. The job still goes into a standstill state, although it fetches a little more data.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Well, then you need to work through the other stages one by one until you find the culprit.
-craig

"You can never have too many knives" -- Logan Nine Fingers
sun rays
Charter Member
Posts: 57
Joined: Wed Jun 08, 2005 3:35 pm
Location: Denver, CO

Re: Dead End

Post by sun rays »

I had a similar issue a couple of times; the reason was that there were some locks on the target table. Once you are able to unlock them, I guess you can get rid of the issue.
DeepakCorning
Premium Member
Posts: 503
Joined: Wed Jun 29, 2005 8:14 am

Re: Dead End

Post by DeepakCorning »

Hey sun rays,

Can you please explain a bit more? What do you mean by locks on tables?


sun rays wrote:I had a similar issue a couple of times; the reason was that there were some locks on the target table. Once you are able to unlock them, I guess you can get rid of the issue.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Every database uses locks of some kind to prevent data anomalies (for example lost data caused by two processes trying to change the same data at the same time).

This is a big topic. Ask your DBA to explain.

The DataStage process doing inserts or updates will request locks. If there is some other process in the database already holding those locks, then the DataStage process must wait.

A really bad DataStage design can actually lock itself out, for example by starting two processes trying to update the same table. (What's happening inside the shared container??)
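
To make the waiting visible, here is a toy sketch in Python using SQLite (not DRS or Oracle, just an illustration of the principle): one session holds an uncommitted update, so a second session trying to write to the same table simply sits there until it gives up, much like a job that crawls or stands still without ever aborting.

import sqlite3

# Toy illustration of write-lock contention (SQLite, not DRS/Oracle).
# Session A updates a row and does not commit; session B then tries to
# update the same table and has to wait until it times out.
db = "locks_demo.db"

setup = sqlite3.connect(db)
setup.execute("CREATE TABLE IF NOT EXISTS target (id INTEGER PRIMARY KEY, val TEXT)")
setup.execute("INSERT OR REPLACE INTO target VALUES (1, 'initial')")
setup.commit()
setup.close()

session_a = sqlite3.connect(db, isolation_level=None)  # manage the transaction manually
session_a.execute("BEGIN")
session_a.execute("UPDATE target SET val = 'from A' WHERE id = 1")  # lock taken, not committed

session_b = sqlite3.connect(db, timeout=2)  # stop waiting after 2 seconds
try:
    session_b.execute("UPDATE target SET val = 'from B' WHERE id = 1")
except sqlite3.OperationalError as err:
    print("Session B is blocked:", err)  # "database is locked"
finally:
    session_a.rollback()  # releasing A's lock would let B proceed
    session_a.close()
    session_b.close()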
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
DeepakCorning
Premium Member
Posts: 503
Joined: Wed Jun 29, 2005 8:14 am

Post by DeepakCorning »

The shared container is selecting data from a table and writing to a hashed file.
It's selecting data from the table that I am writing to.
ray.wurlod wrote:Every database uses locks of some kind to prevent data anomalies (for example lost data caused by two processes trying to change the same data at the same time).

This is a big topic. Ask your DBA to explain.

The DataStage process doing inserts or updates will request locks. If there is some other process in the database already holding those locks, then the DataStage process must wait.

A really bad DataStage design can actually lock itself out, for example by starting two processes trying to update the same table. (What's happening inside the shared container??)
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

So?

UniVerse (DataStage Engine) is a database, and hashed files are how that particular database implements its tables. Row-level locks are still taken.

If you populate it with a UV stage, the lock is promoted to table-level once a configurable threshold (MAXRLOCK) is reached.
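
Purely as a conceptual sketch (this is not UniVerse or DataStage code, and the threshold value is made up; only the promotion idea is taken from the post above): once a single writer holds more row locks than the threshold, it effectively owns the whole table, and every other writer on that hashed file has to wait.

# Conceptual sketch only (not UniVerse/DataStage code). It mimics the idea
# that once one writer holds more row locks than a MAXRLOCK-style threshold,
# those locks are promoted to a single table-level lock.
MAXRLOCK = 1000  # illustrative threshold, not the real uvconfig value

class HashedFileLocks:
    def __init__(self):
        self.row_locks = set()     # row keys locked by the current writer
        self.table_locked = False  # True once the lock has been promoted

    def lock_row(self, key):
        if self.table_locked:
            return                  # whole file already held
        self.row_locks.add(key)
        if len(self.row_locks) > MAXRLOCK:
            self.row_locks.clear()  # promotion: one table lock replaces them all
            self.table_locked = True

    def blocks_other_writer(self, key):
        # a second process is blocked if the file is locked outright,
        # or if it wants a key the first writer already holds
        return self.table_locked or key in self.row_locks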
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.