Job hangs - Hashfiles populated, not Transformations

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

Tobbi
Participant
Posts: 14
Joined: Fri Oct 31, 2003 9:33 am
Location: Sweden, Stockholm

Job hangs - Hashfiles populated, not Transformations

Post by Tobbi »

Hi,

I'm having a problem with a job that seems to hang. Since I'm testing, I'm only running it on approximately 5,000 rows.

When I do a "Monitor" on the job I see that all Hashfiles are populated but all the transformations aren't fetching a single row!!

Background info:
During the past two days it's been running fine, finishing in 2-4 minutes. Then, once most bugs were cleaned out, I ran into a difference in the NLS language settings (the '.' versus ',' issue). For each data source I added "alter session set NLS_NUMERIC_CHARACTERS = '.,'", and everything collapsed. After the job had run for 40 minutes I aborted it, removed the "alter session..." statement and ran it again, yet it is still taking more than 40 minutes.
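For reference, the statement I was adding as "before SQL" on each data source looks like this (the first character sets the decimal separator, the second the group separator):

Code: Select all

    -- Sets '.' as the decimal and ',' as the group separator for this session only.
    ALTER SESSION SET NLS_NUMERIC_CHARACTERS = '.,';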

I'm leaving it running over the weekend now. Has anyone encountered a similar problem?

BR
Tobbi
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

There really isn't enough information for diagnosis here, Tobbi. What does your job do? What are your sources, what are your targets? Does the Transformer stage have constraint expressions? Are the hashed files loaded in the same job that refers to them? I suspect that you've tried something like the following:

Code: Select all

       ----> HashedFile  ---->  Transformer  ---->  Any  ---->
                    ^                                |
                    +--------------------------------+
This style of design will hang (in DS 5.x) because a passive stage (in this case the hashed file stage) cannot open its output until all of its inputs are closed. The solution, if this is the problem, is to use separate hashed file stages, disable all caching, and ensure that "lock for update" is set.
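Something along these lines, as a sketch only (the stage names are illustrative; both hashed file icons point at the same physical file, with all caching disabled and "lock for update" set on the one being read):

Code: Select all

       ----> HashedFile_A  ---->  Transformer  ---->  Any  ---->  HashedFile_B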
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Amos.Rosmarin
Premium Member
Posts: 385
Joined: Tue Oct 07, 2003 4:55 am

Post by Amos.Rosmarin »

I'm not sure it's the case here, but if you have a big hashed file and you enable 'Allow write cache', the job first writes the data to the cache and from there to the disk... so what you'll see in the monitor is that the write phase has ended, but it still takes several minutes (I've seen it take up to half an hour and more) until the job terminates. If you kill the job at this phase you'll get a zombie process.

Do you really need a hashed file? Can it be replaced with a sequential file?


Amos
Tobbi
Participant
Posts: 14
Joined: Fri Oct 31, 2003 9:33 am
Location: Sweden, Stockholm

No hashed files now... still not working

Post by Tobbi »

Thanks for your reply, and I'll try to be a little bit more specific!

What I did this morning was remove all the hashed files. I still have the same problem as before, even without them.

- I'm accessing an Oracle 8.1.7 database
- I'm doing reads and writes to and from the same database (different schemas)
- When monitoring, the first query doesn't execute; there are no errors in the log, and the job just sits there, completely idle.

This is the first query to run:

Code: Select all

    SELECT NORMALISED_EVENT_ID,
           TO_CHAR(LAST_MODIFIED, 'YYYY-MM-DD HH24:MI:SS'),
           REPROCESSED_COUNT,
           A_PARTY_TON_CODE,
           C_PARTY_CELL_ID,
           SUBSTR(C_PARTY_CARRIER_CODE, 1, 3),
           DURATION,
           VOLUME,
           PULSES,
           CHARGE,
           GENERAL_7,
           GENERAL_14
      FROM ADC_NORMALISED_EVENT
     WHERE BATCH_LOAD_ID = #BATCH_LOAD_ID#
       AND LAST_MODIFIED > TO_DATE('2003-09-24 06:00:00', 'YYYY-MM-DD HH24:MI:SS')

There is no problem running it (through DS or Toad).

/Tobbi
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

If there's "no problem" running the query in TOAD and you can do a "View Data" in DataStage (which is what I assume you meant by running it through DS), then something in the job design, perhaps.

You've never said how you are accessing Oracle and what your job looks like. At this point it seems like it would help to know the gory details of your job design. OCI? ODBC? What active stages are in between your source and target?

Also, have you done any tracing, either inside the DataStage job or the Oracle database? For example, in TOAD when connected to the source schema, what kind of activity shows up in the "Kill/Trace Session" browser for this job's connection?
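If you'd rather look outside of TOAD, a query along these lines (assuming you can SELECT from the V$ views) will show any session holding a lock that other sessions are waiting on:

Code: Select all

    -- Sessions holding locks that are blocking other sessions.
    -- Assumes SELECT privilege on the V$ views.
    SELECT s.sid, s.serial#, s.username, s.program
      FROM v$session s, v$lock l
     WHERE s.sid = l.sid
       AND l.block = 1;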
-craig

"You can never have too many knives" -- Logan Nine Fingers
roy
Participant
Posts: 2598
Joined: Wed Jul 30, 2003 2:05 am
Location: Israel

Post by roy »

Hi,
I had something similar with Oracle 8. The cause was locks that had not been cleared after a crash of the job, which left connections holding locks on the table. Please check with your DBA to release all locks on your target table.

Once our DBA cleaned up the connections locking the table, the job worked fine.
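If your DBA wants to see who is holding them, a query along these lines usually does it ('YOUR_TARGET_TABLE' is a placeholder for your own table name):

Code: Select all

    -- Sessions holding DML locks on a given table
    -- ('YOUR_TARGET_TABLE' is a placeholder).
    SELECT s.sid, s.serial#, s.username, s.program
      FROM v$locked_object lo, dba_objects o, v$session s
     WHERE lo.object_id = o.object_id
       AND lo.session_id = s.sid
       AND o.object_name = 'YOUR_TARGET_TABLE';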

IHTH (I Hope This Helps)
Roy R.
Time is money but when you don't have money time is all you can afford.

Search before posting:)

Join the DataStagers team effort at:
http://www.worldcommunitygrid.org
Tobbi
Participant
Posts: 14
Joined: Fri Oct 31, 2003 9:33 am
Location: Sweden, Stockholm

Post by Tobbi »

roy wrote:Hi,
I had something similar with Oracle 8. The cause was locks that had not been cleared after a crash of the job, which left connections holding locks on the table. Please check with your DBA to release all locks on your target table.

Once our DBA cleaned up the connections locking the table, the job worked fine.

IHTH (I Hope This Helps)
I was just about to comment on this. I found out an hour ago that I had "Read Committed" set on two data sources that used the same table, one for reading and the other for writing (stupid, really).

I set the read data source to "Read-only" and the locking problem disappeared. I still have the problem that the job doesn't work, but this time I know why!

Thanks for your replies, all!

Cheers
Tobbi