Jobs keep on running without loading a single record

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

Post Reply
chulett
Charter Member
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

How many nodes are you running the job on? Do you have the same issue if you run on a single node?
-craig

"You can never have too many knives" -- Logan Nine Fingers
Shashwat
Participant
Posts: 2
Joined: Mon Jul 27, 2009 11:20 pm
Location: chandigarh

Post by Shashwat »

chulett wrote:How many nodes are you running the job on? Do you have the same issue if you run on a single node? ...
The job is running on two nodes.
SHASHWAT GUPTA
keshav0307
Premium Member
Premium Member
Posts: 783
Joined: Mon Jan 16, 2006 10:17 pm
Location: Sydney, Australia

Post by keshav0307 »

What is the commit size?
Do a hash partition on the unique key column.
chulett
Charter Member
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Please answer both questions.
-craig

"You can never have too many knives" -- Logan Nine Fingers
rajeshgundala
Participant
Posts: 10
Joined: Tue Apr 22, 2008 4:50 am

Post by rajeshgundala »

Could you check whether any indexes other than normal/unique indexes are present on the table into which you are loading? A bitmap index would cause performance degradation.

We faced a similar kind of issue, and dropping the bitmap index before running the jobs helped us.

Regards
Rajesh Gundala
om.ranjan
Participant
Posts: 13
Joined: Mon Jan 09, 2006 4:46 am

Post by om.ranjan »

keshav0307 wrote:What is the commit size?
Do a hash partition on the unique key column.

I have introduced the environment variable $APT_ORAUPSERT_COMMIT_ROW_INTERVAL and set the commit interval to 500. With this value the job initially worked fine, but when I increased the number of input records (up to 10,000) it again ran indefinitely.

When I tried a commit interval of 50 it worked fine, but with 2 million records it again ran indefinitely.

My concern is that if I specify a commit interval of 50 or less, I/O operations will increase drastically, which in turn may slow down the whole process.
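For reference, a minimal sketch of how this variable can be set in the engine environment (assuming a Unix DataStage engine; APT_ORAUPSERT_COMMIT_TIME_INTERVAL is the documented time-based companion variable, and the commit fires on whichever threshold is reached first):

```shell
# Sketch, assuming a standard Unix DataStage engine environment.
# The Oracle Enterprise stage upsert commits when either limit is hit.
export APT_ORAUPSERT_COMMIT_ROW_INTERVAL=500   # commit every 500 rows
export APT_ORAUPSERT_COMMIT_TIME_INTERVAL=2    # or every 2 seconds

echo "row interval: $APT_ORAUPSERT_COMMIT_ROW_INTERVAL"
echo "time interval: $APT_ORAUPSERT_COMMIT_TIME_INTERVAL"
```

The same values can also be set per-job in the job's parameters or at the project level in the Administrator client.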

I didn't specify any partitioning on the unique key; I left it at the DataStage default.

I checked with the DBA, who confirmed there are no errors on the database side.

Please advise.

Thanks,
Ranjan
RANJAN
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

While the DBA might be correct in stating that there are no errors in the database, he or she should be checking the locks, particularly deadlocks.
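A minimal sketch of such a lock check, assuming the DBA can query the standard Oracle dynamic performance view V$SESSION (the blocking_session column is populated on 10g and later):

```sql
-- Sketch: list sessions currently blocked by another session.
-- Two parallel nodes upserting the same rows would typically
-- show up here while the DataStage job appears to hang.
SELECT sid, serial#, blocking_session, event, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;
```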
mekrreddy
Participant
Posts: 88
Joined: Wed Oct 08, 2008 11:12 am

Post by mekrreddy »

I am also facing the same issue, but our DBA said there is a deadlock when trying to upsert. We are working on this; no success yet.
keshav0307
Premium Member
Premium Member
Posts: 783
Joined: Mon Jan 16, 2006 10:17 pm
Location: Sydney, Australia

Post by keshav0307 »

Do you have the same issue if you run on a single node?

For an upsert you must have a unique index on the table; define that unique column(s) as the key in the table metadata and hash partition on that key.
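As a hedged illustration (the table and column names here are hypothetical), the target table carries a unique index on the upsert key, and that same column is used as the hash-partitioning key in the job, so all rows for a given key land on one node and two nodes never contend for the same row:

```sql
-- Hypothetical target table: cust_id is the upsert key.
CREATE TABLE target_customers (
    cust_id   NUMBER        NOT NULL,
    cust_name VARCHAR2(100)
);

-- The unique index the upsert relies on; cust_id would also be
-- marked as the key column in the DataStage table metadata and
-- chosen as the hash-partitioning key on the stage's input link.
CREATE UNIQUE INDEX ux_target_customers ON target_customers (cust_id);
```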
om.ranjan
Participant
Posts: 13
Joined: Mon Jan 09, 2006 4:46 am

Post by om.ranjan »

keshav0307 wrote:Do you have the same issue if you run on a single node?

For an upsert you must have a unique index on the table; define that unique column(s) as the key in the table metadata and hash partition on that key.
I have already specified the unique index column as a key in DataStage; when I tried hash partitioning, it didn't work.

Please suggest...

Thanks
Ranjan
RANJAN
om.ranjan
Participant
Posts: 13
Joined: Mon Jan 09, 2006 4:46 am

Post by om.ranjan »

om.ranjan wrote:
keshav0307 wrote:Do you have the same issue if you run on a single node?

For an upsert you must have a unique index on the table; define that unique column(s) as the key in the table metadata and hash partition on that key.
I have already specified the unique index column as a key in DataStage; when I tried hash partitioning, it didn't work.

Please suggest...

Thanks
Ranjan
......................................................................................................

When I changed the execution mode from Parallel to Sequential in the target Oracle Enterprise stage (Stage -> Advanced -> Execution Mode -> Sequential), it works fine...

Thanks to all
Ranjan
RANJAN
Post Reply