
Posted: Sun Sep 06, 2009 6:58 am
by chulett
How many nodes are you running the job on? Do you have the same issue if you run on a single node?

Posted: Mon Sep 07, 2009 12:43 am
by Shashwat
chulett wrote:How many nodes are you running the job on? Do you have the same issue if you run on a single node? ...
The job is running on two nodes.

Posted: Mon Sep 07, 2009 2:53 am
by keshav0307
What is the commit size?
Hash partition on the unique key column.

Posted: Mon Sep 07, 2009 8:05 am
by chulett
Please answer both questions.

Posted: Wed Sep 09, 2009 4:56 am
by rajeshgundala
Could you check whether any indexes other than the normal/unique index are present on the table you are loading into? A bitmap index would degrade performance.

We faced a similar kind of issue, and dropping the bitmap index before running the jobs helped us.
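For reference, a quick way to check (a minimal sketch; TARGET_TABLE and the index/column names are placeholders for your own):

Code:
-- List every index on the load target; INDEX_TYPE = 'BITMAP' flags the suspects.
SELECT index_name, index_type, uniqueness
  FROM user_indexes
 WHERE table_name = 'TARGET_TABLE';

-- If a bitmap index turns up, drop it before the load and recreate it afterwards:
-- DROP INDEX target_bmx;
-- CREATE BITMAP INDEX target_bmx ON target_table (low_cardinality_col);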

Regards

Posted: Wed Sep 09, 2009 11:14 pm
by om.ranjan
keshav0307 wrote:What is the commit size?
Hash partition on the unique key column.

I introduced the environment variable $APT_ORAUPSERT_COMMIT_ROW_INTERVAL and set the commit interval to 500. With this value the job initially worked fine, but when I increased the number of input records (up to 10,000) it again ran indefinitely.

When I tried a commit interval of 50 it worked fine, but with 2 million records it again ran indefinitely.

My concern now is that if I specify a commit interval of 50 or less, I/O operations will increase drastically, which in turn may slow down the whole process.
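One way to confirm how often the job is actually committing is to sample Oracle's cumulative "user commits" statistic twice during the run and compare (a minimal sketch):

Code:
-- Cumulative commit count for the instance; the difference between two
-- samples taken while the job runs gives the actual commit rate.
SELECT name, value
  FROM v$sysstat
 WHERE name = 'user commits';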

I didn't specify any partitioning on the unique key; I left it at the DataStage default.

I checked with the DBA, and they confirmed there are no errors on the database side.

Please advise.

Thanks,
Ranjan

Posted: Thu Sep 10, 2009 12:52 am
by ArndW
While the DBA might be correct in stating that there are no errors in the database, he or she should be checking the locks, particularly deadlocks.
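For what it's worth, something like this (a minimal sketch, assuming Oracle 10g or later) lists the sessions currently blocked by another session; genuine deadlocks also show up as ORA-00060 in the alert log and a trace file:

Code:
-- Sessions currently waiting on a lock held by another session.
SELECT sid, serial#, blocking_session, event, seconds_in_wait
  FROM v$session
 WHERE blocking_session IS NOT NULL;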

Posted: Thu Sep 10, 2009 8:01 pm
by mekrreddy
I am also facing the same issue, and our DBA said there is a deadlock when trying to upsert. We are working on this, but no success yet.

Posted: Thu Sep 10, 2009 8:06 pm
by keshav0307
Do you have the same issue if you run on a single node?

For an upsert you must have a unique index on the table; define that unique column (or columns) as a key in the table metadata, and hash partition on that key.
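On the database side that means something like the following (a minimal sketch; the table, index, and column names are placeholders). The key flag and the hash partitioning themselves are set in the DataStage column metadata and the stage's Partitioning tab.

Code:
-- A unique index on the upsert key (placeholder names):
CREATE UNIQUE INDEX target_table_uk ON target_table (key_col);

-- Confirm which columns the existing unique index actually covers:
SELECT index_name, column_name, column_position
  FROM user_ind_columns
 WHERE table_name = 'TARGET_TABLE'
 ORDER BY index_name, column_position;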

Posted: Fri Sep 11, 2009 2:36 pm
by om.ranjan
keshav0307 wrote:Do you have the same issue if you run on a single node?

For an upsert you must have a unique index on the table; define that unique column (or columns) as a key in the table metadata, and hash partition on that key.
I have already specified the unique index column as a key in DataStage; when I tried hash partitioning, it didn't work.

Please suggest...

Thanks
Ranjan

Posted: Thu Oct 01, 2009 11:10 am
by om.ranjan
om.ranjan wrote:
keshav0307 wrote:Do you have the same issue if you run on a single node?

For an upsert you must have a unique index on the table; define that unique column (or columns) as a key in the table metadata, and hash partition on that key.
I have already specified the unique index column as a key in DataStage; when I tried hash partitioning, it didn't work.

Please suggest...

Thanks
Ranjan
......................................................................................................

When I changed the execution mode from Parallel to Sequential in the target Oracle Enterprise stage (Stage -> Advanced -> Execution Mode -> Sequential), it worked fine...

Thanks to all
Ranjan