Jobs keep running without loading a single record into the target table
Could you check whether there are any indexes other than the normal/unique index on the table into which you are loading? If there is a bitmap index, it can cause performance to degrade.
We faced a similar kind of issue, and dropping the bitmap index before running the jobs helped us.
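In case it helps, this is roughly the check a DBA could run against the data dictionary (just a sketch; TARGET_TABLE is a placeholder for the actual table being loaded):

    -- list any bitmap indexes on the load target
    SELECT index_name, index_type, status
      FROM all_indexes
     WHERE table_name = 'TARGET_TABLE'
       AND index_type LIKE '%BITMAP%';

If one shows up and the DBA agrees, it can be dropped before the load and recreated afterwards.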
Regards
Rajesh Gundala
keshav0307 wrote: What is the commit size?
Do a hash partition on the unique key column.
I introduced the environment variable $APT_ORAUPSERT_COMMIT_ROW_INTERVAL and set the commit interval to 500. With this value the job initially worked fine, but
when I increased the number of input records (up to 10,000) it again got stuck in the same endless-running condition.
When I tried a commit interval of 50 it worked fine, but when I tried it with 2 million records it again got stuck.
My concern now is that if I specify a commit interval of 50 or less, the I/O operations will increase drastically (2 million records committed every 50 rows is roughly 40,000 commits), which in turn may slow down the whole process.
I didn't specify any partitioning on the unique key; I left it at the DataStage default.
I checked with the DBA, and they confirmed there are no errors on the database side.
Please advise.
Thanks,
Ranjan
While the DBA might be correct in stating that there are no errors in the database, he or she should be checking the locks, particularly deadlocks.
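For what it's worth, a quick way for the DBA to spot blocking locks while the job appears hung is something along these lines (a sketch only; it needs select privileges on v$session):

    -- sessions currently blocked by another session
    SELECT sid, serial#, blocking_session, event, seconds_in_wait
      FROM v$session
     WHERE blocking_session IS NOT NULL;

Any row returned identifies the session (blocking_session) holding the lock the load is waiting on.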
keshav0307 wrote: Do you have the same issue if you run on a single node? For upsert you must have a unique index on the table, define that unique column(s) as the key in the table metadata, and hash partition on that key.
I have already specified the unique index column as a key in DataStage; when I tried hash partitioning, it didn't work.
Please suggest...
Thanks
Ranjan
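As an aside, the unique key prerequisite described above can also be confirmed from the data dictionary (a sketch; TARGET_TABLE is a placeholder):

    -- primary-key and unique constraints on the target, with their columns
    SELECT c.constraint_name, c.constraint_type, cc.column_name, cc.position
      FROM all_constraints c
      JOIN all_cons_columns cc
        ON cc.owner = c.owner
       AND cc.constraint_name = c.constraint_name
     WHERE c.table_name = 'TARGET_TABLE'
       AND c.constraint_type IN ('P', 'U')
     ORDER BY c.constraint_name, cc.position;

The columns listed are the ones to mark as keys in the table definition and to hash partition on.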
When I changed the execution mode from Parallel to Sequential in the target Oracle Enterprise stage (Stage -> Advanced -> Execution Mode -> Sequential), it started working fine...
Thanks to all
Ranjan