
DataStage job performance with version 9.1 and Oracle 12c

Posted: Mon Mar 09, 2015 6:42 am
by satheesh_color
Hi All,

We have a DataStage project on version 9.1 with Oracle 12c as the target DB. Most of the jobs run fine, but some of them perform very poorly, under 10 rows/sec, when they do an update and then insert using the Oracle Connector stage. When I remove the stage and write to a Dataset instead, the job runs at 3000 rows/sec.


Really curious to know where we should start looking: DataStage or the database.


Appreciate your response.

Thanks,
Satheesh.R

Posted: Mon Mar 09, 2015 7:26 am
by chulett
Database. Let's start with this: do you have an index over the fields in your update's WHERE clause?
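One quick way to answer that is to query the Oracle data dictionary for the indexes on the target table and compare their columns against the update's WHERE clause. A minimal sketch (TARGET_TABLE is a placeholder; substitute your actual table name):

Code: Select all

-- List every index on the target table and the columns it covers,
-- in column order. Check that the leading columns match the
-- fields in the update's WHERE clause.
SELECT i.index_name,
       i.uniqueness,
       c.column_name,
       c.column_position
  FROM user_indexes     i
  JOIN user_ind_columns c ON c.index_name = i.index_name
 WHERE i.table_name = 'TARGET_TABLE'
 ORDER BY i.index_name, c.column_position;

If the WHERE-clause columns are not the leading columns of any index, the updates will fall back to full table scans, which fits the under-10-rows/sec symptom.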

Posted: Mon Mar 09, 2015 3:34 pm
by ray.wurlod
chulett wrote:Database. Let's start with
Yoda?
:lol:

Posted: Tue Mar 10, 2015 7:20 am
by satheesh_color
Hi Chulett,

Yes. We do have an index on the key column used for the update.

Note: We even tried truncating the table and re-loading by only inserting the records, using the CDC code. There was still no performance improvement.

Thanks,
Satheesh.R

Posted: Tue Mar 10, 2015 7:30 am
by chulett
satheesh_color wrote:re-load by only inserting the records using cdc code.
Can you explain exactly what that means, please? What does 'using the CDC code' have to do with doing pure inserts?

Posted: Wed Mar 11, 2015 12:59 am
by satheesh_color
Hi Chulett,

My bad, sorry for the confusion. I meant the Change Capture stage's change code.


Thanks,
Satheesh.R

Posted: Wed Mar 11, 2015 6:53 am
by qt_ky
I vaguely recall similar issues posted and patches or fix packs solving similar problems on version 9.1. Have you looked on Fix Central and checked with Support?

Posted: Wed Mar 11, 2015 7:46 am
by chulett
I wasn't confused as to what "CDC" stands for and am well aware of the change code of which you speak. I just don't see how you can be using it and allegedly "re-load by only inserting". I'm guessing you meant that you switched from "insert else update" to separate actions of either an insert or an update, driven by the change code. That's fine, as it doesn't potentially burn two actions to perform one. If it's still slow, my money is still on the updates.

What's your ratio of inserts versus updates? If you only process the inserts, is there still a speed issue? Does it start off fast and then slow down? Have your DBA trace the running processes and check the explain plan on the Oracle side, to see the path the optimizer took and whether it is in fact using your index for the updates.
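Getting that explain plan is a one-minute exercise in SQL*Plus or SQL Developer. A sketch using hypothetical table and column names (target_table, some_col, key_col); swap in the actual update statement the Connector issues:

Code: Select all

-- Capture the optimizer's plan for the update without running it.
EXPLAIN PLAN FOR
UPDATE target_table t
   SET t.some_col = :new_val
 WHERE t.key_col = :key_val;

-- Display the plan just captured.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

In the output, an INDEX UNIQUE SCAN or INDEX RANGE SCAN on your key index is what you want to see; a TABLE ACCESS FULL means every update is scanning the whole table.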

Oh.. and check with support. :wink:

Posted: Wed Mar 11, 2015 10:27 am
by rkashyap
This may not be related, but I will pass it on. We faced a similar underperformance issue with some of our jobs on DataStage 9.1.2 (Solaris) while writing to Oracle 11.2.0.3 on a three-node RAC (APT config = 8 nodes) in Update/Insert mode. Matching indexes existed on the target tables. Input was coming from an Oracle table with the same primary keys, so there were no duplicates either. Our support provider could not replicate it in their environment and believed it to be an Oracle database issue.

The workaround for this issue was to run the job on one node, or to run the (target) Oracle Connector in sequential mode.

Another related issue was that at high volumes, the "Insert" of Update/Insert write mode aborted with a "Unique Constraint" error. The above-mentioned workaround worked for this issue as well. Another workaround was to provide the ignore_row_on_dupkey_index hint in custom insert SQL in the Oracle Connector.
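For anyone who wants to try that hint, here is a minimal sketch of what such custom insert SQL could look like. Table, column, and index names (target_table, target_table_pk, key_col, val_col) are placeholders, not the actual job's names:

Code: Select all

-- IGNORE_ROW_ON_DUPKEY_INDEX (Oracle 11g Release 1 and later):
-- rows that would violate the named unique index are silently
-- skipped instead of raising ORA-00001.
INSERT /*+ IGNORE_ROW_ON_DUPKEY_INDEX(target_table, target_table_pk) */
  INTO target_table (key_col, val_col)
VALUES (:1, :2);

Note that skipped rows are dropped silently, so row counts on the DataStage side will no longer match rows actually inserted; that trade-off is worth keeping in mind before using it in production.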

Posted: Wed Mar 11, 2015 2:50 pm
by ray.wurlod
Did you think of running the job on three nodes, to match the number of nodes in the Oracle RAC?

Posted: Wed Mar 25, 2015 2:49 pm
by rkashyap
We did not try running the job on three nodes at that time, but we have tried it now. The results are the same as earlier :(

Posted: Thu Mar 26, 2015 8:32 am
by PaulVL
What did your Oracle DBA say about the matter?

Posted: Mon Mar 30, 2015 9:28 pm
by rkashyap
In our shop, there are multiple applications successfully inserting/updating records in parallel mode, so the DBAs were not convinced that the issue is on the Oracle side. They did review all settings, patches, etc.

IBM was not able to replicate this issue, so they were not ready to engage engineering! IBM did send several suggestions which were time-consuming to implement, but did not work. At one point, IBM also mentioned Oracle server side bug 10178982, but as per technote this only impacts Oracle version 11.2.0.2.0 or lower (so it should not impact Oracle 11.2.0.3 database tables).

At this point, we had spent several weeks pursuing this issue with vendors without any solution in sight, so we decided to load problem tables in sequential mode and move on.

Please note that other posters have also faced a similar issue.

Another interesting thing we observed with the Unique Constraint issue:

Code: Select all

Source Oracle ---> Transformer ---> Target Oracle ----> RejFile
(Same PK as target)              (Update then Insert)

If the source record count is 1000 and 10 records are rejected, then only 990 records should be loaded to the target table. However, all 1000 records were successfully loaded to the target, and the 10 records also showed up in the reject file!

Posted: Tue Mar 31, 2015 1:09 am
by priyadarshikunal
Does the Oracle Enterprise stage help? Try changing to the Enterprise stage, just to test.