DataStage job performance with version 9.1 and Oracle 12c
Moderators: chulett, rschirm, roy
-
- Participant
- Posts: 182
- Joined: Thu Jun 16, 2005 2:05 am
DataStage job performance with version 9.1 and Oracle 12c
Hi All,
We have a DataStage project on version 9.1 with Oracle 12c as the target DB. Most jobs run fine, but some seriously underperform (<10 rows/sec) when they do an update and then an insert using the Oracle Connector stage. When I remove that stage and write to a Data Set instead, the job runs at 3000 rows/sec.
Really curious where we should start looking, DataStage or the database.
Appreciate your response.
Thanks,
Satheesh.R
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
-
- Participant
- Posts: 182
- Joined: Thu Jun 16, 2005 2:05 am
I wasn't confused as to what "CDC" stands for and am well aware of the change code of which you speak. I just don't see how you can be using it and allegedly "re-load by only inserting". I'm guessing you meant that you switched from "insert else update" to separate actions of either an insert or an update, driven by the change code. That's fine, as it doesn't burn two actions to perform one. If that's still slow, my money is still on the updates.
What's your ratio of inserts versus updates? If you only process the inserts, is there still a speed issue? Does it start off fast and then slow down? Have your DBA trace the running processes and check the explain plan on the Oracle side to see the path the optimizer took and whether it is in fact even using your index for the updates.
Oh.. and check with support.
-craig
"You can never have too many knives" -- Logan Nine Fingers
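The explain-plan check suggested above could be run from SQL*Plus roughly as follows; the table, column, and bind names here are hypothetical placeholders, not anything from the job in question:

```sql
-- Trace what the optimizer chooses for the connector's UPDATE statement.
-- TARGET_TAB, COL1, and ID are placeholder names.
EXPLAIN PLAN FOR
  UPDATE target_tab SET col1 = :1 WHERE id = :2;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

In the output, an INDEX UNIQUE SCAN on the primary-key index is what you want to see; a TABLE ACCESS FULL on every update row would explain rates like <10 rows/sec.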
This may not be related, but I will pass it on. We faced a similar underperformance issue with some jobs on DataStage 9.1.2 (Solaris) while writing to Oracle 11.2.0.3 on a three-node RAC (APT config = 8 nodes) in Update/Insert mode. Matching indexes existed on the target tables, and input came from an Oracle table with the same primary keys, so there were no duplicates either. Our support provider could not replicate it in their environment and believed it to be an Oracle database issue.
The workaround for this issue was to run the job on one node, or to run the (target) Oracle Connector in sequential mode.
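For the one-node workaround, a minimal single-node APT configuration file (pointed to by APT_CONFIG_FILE) might look like the sketch below; the hostname and resource paths are hypothetical and would need to match your installation:

```
{
    node "node1" {
        fastname "etlserver1"
        pools ""
        resource disk "/opt/IBM/InformationServer/Server/Datasets" {pools ""}
        resource scratchdisk "/opt/IBM/InformationServer/Server/Scratch" {pools ""}
    }
}
```

With a single node defined, every stage in the job runs on one partition, which avoids concurrent update/insert streams contending on the same target rows.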
Another related issue was that at high volumes, the "Insert" of Update/Insert write mode aborted with a "Unique Constraint" error. The above-mentioned workaround worked for this issue as well. Another workaround was to provide the ignore_row_on_dupkey_index hint in custom insert SQL in the Oracle Connector.
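For reference, the ignore_row_on_dupkey_index hint goes into the connector's user-defined insert statement roughly like this; the table, index, and column names are hypothetical, and ORCHESTRATE.* is the connector's placeholder syntax for input link columns:

```sql
INSERT /*+ IGNORE_ROW_ON_DUPKEY_INDEX(target_tab, target_tab_pk) */
INTO target_tab (id, col1)
VALUES (ORCHESTRATE.id, ORCHESTRATE.col1)
```

With this hint, rows that would violate the named unique index are silently skipped instead of raising ORA-00001 and aborting the job.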
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
In our shop, multiple applications successfully insert/update records in parallel mode, so the DBAs were not convinced that the issue was on the Oracle side. They did review all settings, patches, etc.
IBM was not able to replicate this issue, so they were not ready to engage engineering! IBM did send several suggestions, which were time-consuming to implement but did not work. At one point IBM also mentioned Oracle server-side bug 10178982, but per the technote this only impacts Oracle version 11.2.0.2.0 or lower (so it should not impact Oracle 11.2.0.3 database tables).
At this point, we had spent several weeks pursuing this issue with vendors without any solution in sight, so we decided to load problem tables in sequential mode and move on.
Please note that other posters have also faced similar issues.
Another interesting thing we observed with the Unique Constraint issue: if the source record count is 1000 and 10 records are rejected, then only 990 records should be loaded to the target table. However, all 1000 records were successfully loaded to the target, yet those 10 records also showed up in the reject file! The job design was:
Code:
Source Oracle ---> Transformer ---> Target Oracle ---> RejFile
(Same PK as target)                 (Update then Insert)
-
- Premium Member
- Posts: 1735
- Joined: Thu Mar 01, 2007 5:44 am
- Location: Troy, MI