Update table taking too long.
I am trying to update an Oracle 9i table that has 275,000 records. The source and target tables are the same.
When I write the data to a file it takes less than a minute, whereas updating the same data into the table takes far too long.
When I monitor the job in Director it hangs at the target table stage with the message: Dynamic_RDBMS_32,0: Logging delayed metadata.
I haven't changed any properties on the target DRS stage.
Update Action is: Update existing rows only
The Transaction Isolation is default: Read Committed.
Array size is 1
Transaction size is 0
Partitioning is AUTO.
Any suggestions to make it faster?
Thanks in advance.
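For context, a sketch of what those settings imply (the table and column names below are placeholders, not taken from the actual job): with "Update existing rows only" the stage issues a parameterized UPDATE per input row, and with array size 1 each row becomes a separate round trip to Oracle.

Code:
-- Approximate shape of the generated statement (hypothetical names)
UPDATE target_table
   SET column1 = ?,
       column2 = ?
 WHERE key_column = ?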
The job needs to wait until the 275,000-row transaction is assembled and successfully committed. This is your choice, since your job specifies "all rows are one transaction" (transaction size 0).
You also need to check what obstacles exist to update operations (which are slow in any case). What indexes and constraints exist on the table?
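As a quick way to answer that (a sketch; MY_TABLE is a placeholder for the actual table name), the Oracle data dictionary lists both:

Code:
-- Indexes the UPDATE must maintain on every row
SELECT index_name, uniqueness
  FROM user_indexes
 WHERE table_name = 'MY_TABLE';

-- Constraints that are checked on every row
SELECT constraint_name, constraint_type
  FROM user_constraints
 WHERE table_name = 'MY_TABLE';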
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
The transaction size is already "all rows"!
You can demonstrate that the problem is not in DataStage by changing the job to write its records to a file. This job will fly. Then create a second job that does nothing but read that file and update the rows.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Use a user-defined update statement. How many CPUs do you have on the database side?
In Oracle you can use parallel DML to speed up the process, something like:
Code:
-- Updatable join view: requires that b be key-preserved (unique index on b.id2)
UPDATE /*+ PARALLEL(c,4) */
       (SELECT /*+ PARALLEL(a,4) PARALLEL(b,4) */
               a.column1 AS old_val,
               b.column1 AS new_val
          FROM schema.table1 a
          JOIN schema.table2 b
            ON a.id1 = b.id2) c
   SET c.old_val = c.new_val;
In Oracle, USE_HASH and PARALLEL are the hint keywords for tuning the query. With this volume of data you should also consider partitioning the target table accordingly.
It is an Oracle-level issue for sure, so let me know if you need help.
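One caveat to add to the sketch above: the PARALLEL hints only parallelize the UPDATE itself if parallel DML has been enabled for the session first, and Oracle requires the transaction to be committed before the same session touches the table again:

Code:
-- Run before the parallel UPDATE
ALTER SESSION ENABLE PARALLEL DML;

-- ... parallel UPDATE here ...

-- Parallel DML requires an immediate commit afterwards
COMMIT;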
Avishek Mukherjee
Data Integration Architect
Chicago, IL, USA.