Job hangs intermittently
Moderators: chulett, rschirm, roy
Hi,
We have a parallel job which reads data from an Oracle table and loads it into a sequential file.
The job was running fine for the past few years.
Now there are around 4 million records in the table.
Usually the job finishes within 1.5 hrs.
But for the past few days, the job processes the first 3 million records within 40 minutes; after that it hangs and doesn't process any more records.
We tried stopping the job manually and rerunning it, but the same thing happens.
How can we avoid this and make the job complete successfully?
Please help me with your valuable suggestions.
Thanks
pandeeswaran
-
- Participant
- Posts: 251
- Joined: Mon Jun 09, 2008 5:52 am
Hi,
My exact job design is as given below:
1)Oracle source stage
2)Lookup stage which has 2 sequential files and 2 lookup filesets as reference
3)copy stage
4)insert or update in target oracle table
Until a few days ago, the job was running fine.
There are around 4 billion records in the source.
There are no issues up to 3 billion records.
After that, the source query becomes inactive.
The job is running on five node configuration.
Recently I have analyzed the target Oracle table,
but that also didn't help.
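The four-stage design above (source read, lookup enrichment, copy, insert-or-update into the target) can be sketched as plain Python over in-memory data. This is a hypothetical illustration of the data flow only, not DataStage code; all names and values are invented:

```python
# Hypothetical sketch of the job's flow: source -> lookup -> copy -> upsert.
def run_pipeline(source_rows, reference, target):
    """Enrich each source row via a lookup, then insert-or-update the target."""
    for row in source_rows:
        key = row["id"]
        # Lookup stage: fetch the reference value; skip rows with no match
        # (a real Lookup stage could also reject them or pass them through).
        ref = reference.get(key)
        if ref is None:
            continue
        enriched = {**row, "ref_value": ref}
        # Upsert: overwrite if the key already exists in the target, insert otherwise.
        target[key] = enriched
    return target

source = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}, {"id": 3, "amount": 30}]
lookup = {1: "A", 2: "B"}  # stands in for the reference filesets
target = {1: {"id": 1, "amount": 5, "ref_value": "A"}}  # pre-existing target row
result = run_pipeline(source, lookup, target)
```

Row 1 is updated, row 2 is inserted, and row 3 is dropped for lack of a lookup match.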
pandeeswaran
Are there indexes on the source table?
But in the recent run, it fetched all the records from the source and the source query session became inactive in Oracle, so that's not the problem.
This time the insert/update query on the target table is very, very slow.
Previously, with more data from the source, the same job finished within 1 hr.
Now the job runs for more than 6 hrs.
What could be the problem and how can we avoid it?
thanks
pandeeswaran
-
- Premium Member
- Posts: 120
- Joined: Thu Oct 28, 2004 4:24 pm
What I have had happen in the past, when a job that is inserting or updating just sits there, is that my transaction log was full and could not grow. After a TLOG backup, the DataStage job continued to process.
We set our TLOG backups to run quite frequently while we are processing the DW so that we don't hit these issues.
Just something else to look at.
"Don't let the bull between you and the fence"
Thanks
Gregg J Knight
"Never Never Never Quit"
Winston Churchill
Try writing to a dataset rather than directly to Oracle to see if the throughput changes. Then take the dataset and write it to Oracle to see if you have throughput issues.
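That two-phase isolation test can be sketched generically: stage the rows to a local file first, then load from the file, timing each phase separately to see which side is slow. A hypothetical stand-in (a JSON-lines file plays the role of the dataset; no Oracle involved):

```python
import json
import os
import tempfile
import time

def stage_rows(rows, path):
    """Phase 1: write rows to a staging file (stands in for a DataStage dataset)."""
    with open(path, "w") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")

def load_rows(path, sink):
    """Phase 2: read the staged rows back and 'load' them into the target."""
    with open(path) as f:
        for line in f:
            sink.append(json.loads(line))

rows = [{"id": i} for i in range(1000)]
path = os.path.join(tempfile.mkdtemp(), "stage.jsonl")

t0 = time.perf_counter()
stage_rows(rows, path)
t1 = time.perf_counter()
sink = []
load_rows(path, sink)
t2 = time.perf_counter()
# Comparing (t1 - t0) against (t2 - t1) shows which phase is the bottleneck.
```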
Is your quantity in the millions or billions of rows? You have mentioned both in this thread.
Has the ratio of inserts to updates changed recently? You should choose the upsert order that matches the data you're processing: Update then Insert, or Insert then Update. Alternately, you could perform only inserts in this job while writing update records to a dataset, then run a second job to apply the updates.
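The split-apply suggestion can be illustrated with a toy in-memory sketch (hypothetical, a dict stands in for the Oracle target and a list for the dataset of deferred updates):

```python
def split_upsert(rows, target):
    """First pass: insert new keys immediately, queue existing keys as updates."""
    updates = []  # stands in for the dataset of update records for the second job
    for row in rows:
        if row["id"] in target:
            updates.append(row)        # defer: existing key -> update record
        else:
            target[row["id"]] = row    # insert-only first pass
    return updates

def apply_updates(updates, target):
    """Second pass (second job): apply the queued update records."""
    for row in updates:
        target[row["id"]].update(row)

target = {1: {"id": 1, "val": "old"}}
rows = [{"id": 1, "val": "new"}, {"id": 2, "val": "x"}]
pending = split_upsert(rows, target)
apply_updates(pending, target)
```

Splitting like this lets the insert pass run without per-row existence checks on the database side, which is the point of the suggestion above.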
Regards,
- james wiles
All generalizations are false, including this one - Mark Twain.