Update table taking too long.

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

edwds
Premium Member
Posts: 32
Joined: Mon May 01, 2006 1:06 pm

Update table taking too long.

Post by edwds »

I am trying to update an Oracle 9i table, which has 275,000 records. The source and target tables are the SAME.

When I load the data to a file it takes less than a minute, whereas UPDATING the same data in the table takes far too long.
When I monitor the job in Director it hangs at the Target table stage saying: Dynamic_RDBMS_32,0: Logging delayed metadata.

I haven't changed any properties on the target DRS stage.

Update Action is: Update existing rows only
The Transaction Isolation is default: Read Committed.
Array size is 1
Transaction size is 0
Partitioning is AUTO.

Any suggestions to make it faster?

Thanks in advance.
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

Increase the array size to about 250 and see how that affects the performance.
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

The job needs to wait until the 275,000 row transaction is assembled and successfully committed. This is your choice, since your job specifies "all rows are one transaction" (transaction size 0).

You also need to check what obstacles exist to update operations (which are slow in any case). What indexes and constraints exist on the table?
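For example, something like this should show them (TARGET_TABLE is just a placeholder for your actual table name):

Code: Select all

-- Indexes and their columns on the target table
SELECT i.index_name, i.uniqueness, c.column_name, c.column_position
  FROM user_indexes i
  JOIN user_ind_columns c ON c.index_name = i.index_name
 WHERE i.table_name = 'TARGET_TABLE'
 ORDER BY i.index_name, c.column_position;

-- Constraints (primary key, unique, foreign key, check) on the same table
SELECT constraint_name, constraint_type, status
  FROM user_constraints
 WHERE table_name = 'TARGET_TABLE';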
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
keshav0307
Premium Member
Posts: 783
Joined: Mon Jan 16, 2006 10:17 pm
Location: Sydney, Australia

Post by keshav0307 »

I think it's more related to your SQL query tuning. Do you have proper indexes on the target table, and on the columns in the WHERE clause of the update query?
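For example, if the generated UPDATE is keyed on a column called ID1, an index along these lines (the index, table and column names are only placeholders, not taken from your job) is usually what is missing:

Code: Select all

-- Without an index on the key column, each of the 275,000 single-row updates
-- forces a full table scan on the target.
CREATE INDEX target_table_id1_idx ON target_table (id1);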
Rajesh_kr82
Participant
Posts: 24
Joined: Sat Oct 15, 2005 1:09 pm

Post by Rajesh_kr82 »

I think you are not using the proper keys and indexes to update the table. Try using the correct keys and then increase your transaction size. That should do it.
Regards,
Rajesh
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

The transaction size is already "all rows"!

You can demonstrate that the problem is not in DataStage by changing the job to write its records to a file. This job will fly. Then create a second job that does nothing but read that file and update the rows.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
evans036
Premium Member
Posts: 72
Joined: Tue Jan 31, 2006 11:13 pm

Post by evans036 »

If the ratio of rows being updated (versus rows in the table) is high (e.g. > 20%), try dropping all indexes not required for the update. You may also see a performance boost if you sort the incoming data in the order of the remaining index.

Then recreate the indexes.
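A rough sketch of that approach (the index, table and column names are only placeholders; check what the update's WHERE clause actually needs before dropping anything):

Code: Select all

-- 1. Drop the indexes the update does not need
DROP INDEX target_table_col2_idx;

-- 2. Run the DataStage update job here, with the input sorted on the key column

-- 3. Recreate the dropped indexes afterwards; NOLOGGING/PARALLEL speed up the rebuild
CREATE INDEX target_table_col2_idx ON target_table (col2) NOLOGGING PARALLEL 4;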

good luck,

steve
avi21st
Charter Member
Posts: 135
Joined: Thu May 26, 2005 10:21 am
Location: USA

Post by avi21st »

Use a user-defined update statement. How many CPUs do you have on the database side?

In Oracle you can mainly use parallel DML to speed up the process.

Something like this:

Code: Select all

-- parallel DML must be enabled in the session before the update runs
alter session enable parallel dml;

update /*+ parallel(c,4) */
       (select /*+ parallel(a,4) parallel(b,4) */
               a.old_val, b.new_val
          from schema.table1 a
          join schema.table2 b
            on a.id1 = b.id2) c
   set c.old_val = c.new_val;
-- note: b.id2 must be unique (key-preserved) for the join view to be updatable
In Oracle, USE_HASH and PARALLEL are the hints to look at when tuning the query. Also, since you have a large volume of data, you should partition the target table accordingly.
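For instance, a range-partitioned version of the target might look roughly like this (the columns and the partitioning key are invented for illustration only; partition on whatever column your data naturally splits by):

Code: Select all

CREATE TABLE target_table_part (
    id1      NUMBER,
    old_val  VARCHAR2(50),
    load_dt  DATE
)
PARTITION BY RANGE (load_dt) (
    PARTITION p2005 VALUES LESS THAN (TO_DATE('2006-01-01', 'YYYY-MM-DD')),
    PARTITION p2006 VALUES LESS THAN (TO_DATE('2007-01-01', 'YYYY-MM-DD')),
    PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);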

It is an Oracle-level issue for sure, so let me know if you need help.
Avishek Mukherjee
Data Integration Architect
Chicago, IL, USA.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Doesn't the Oracle Enterprise stage do this automatically? It ought to. IMHO, of course.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.