Need Help in designing

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

jagadam
Premium Member
Posts: 107
Joined: Wed Jul 01, 2009 4:55 pm
Location: Phili

Need Help in designing

Post by jagadam »

Hi All,

I have a requirement to insert records into an Oracle table based on two scenarios. The table has 30 fields; 15 of them are defined as unique keys, with a unique index created on them.

Scenario 1: Process records where a specific indicator (the 15th field) is <> 'V'.
Need to upsert these records (the 14th and 15th fields have to be updated too).

Scenario 2: Process records where the indicator (the 15th field) is = 'V'.

Need to upsert these records using all 15 key fields.

I created a DataStage job with a constraint in the Transformer. The first link uses the constraint 15th field <> 'V' and goes to an Oracle EE stage/Connector stage; there I excluded the 14th and 15th fields from the keys in the stage and do upserts, including WHERE 15th field <> 'V' in the update query.
The second link from the Transformer uses the constraint 15th field = 'V' and also passes records to an Oracle EE stage/Connector stage; there I selected all 15 fields as keys and do upserts, including WHERE 15th field = 'V' in the update query.
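For what it's worth, the two-link upsert logic can be sketched outside DataStage. Below is a minimal, hypothetical illustration in Python using SQLite, not the actual Oracle table: `k1`/`k2` stand in for the 13 ordinary key columns, and `f14`/`ind` for the 14th and 15th fields. Scenario 1 matches on the reduced key and also rewrites `f14` and `ind`; scenario 2 matches on the full key. All names are made up for the sketch:

```python
import sqlite3

# Hypothetical, simplified schema: k1/k2 stand in for the 13 plain key
# columns, f14 and ind (the 15th field) for the two special ones.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE t (
    k1 TEXT, k2 TEXT, f14 TEXT, ind TEXT, val INTEGER,
    UNIQUE (k1, k2, f14, ind))""")

def upsert(row):
    k1, k2, f14, ind, val = row
    if ind != 'V':
        # Scenario 1: match on the reduced key only; f14 and ind are
        # treated as updatable data, not as part of the key.
        cur = con.execute(
            "UPDATE t SET f14=?, ind=?, val=? "
            "WHERE k1=? AND k2=? AND ind<>'V'",
            (f14, ind, val, k1, k2))
    else:
        # Scenario 2: match on all key fields; only non-key data changes.
        cur = con.execute(
            "UPDATE t SET val=? WHERE k1=? AND k2=? AND f14=? AND ind='V'",
            (val, k1, k2, f14))
    if cur.rowcount == 0:          # no match -> insert (the upsert fallback)
        con.execute("INSERT INTO t VALUES (?,?,?,?,?)", row)

upsert(("a", "b", "x", "N", 1))    # no match anywhere -> plain insert
upsert(("a", "b", "y", "N", 2))    # scenario 1: rewrites f14 'x' -> 'y'
upsert(("a", "b", "y", "V", 3))    # scenario 2: no full-key match -> insert
```

Note that the scenario 1 branch deliberately updates columns (`f14`, `ind`) that belong to the unique index; whether the target database allows the key values to change in place is one thing worth checking when upserts start rejecting rows.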

The initial load is complete. Some of the tables have 20 million plus rows.

Results for the delta load: with the Oracle Connector stage, all records were rejected for both scenarios with the message "Row cannot be updated".

I am still trying with the Oracle EE stage.

Would I get the desired output with this design?
Could you please share your inputs?

Thanks in Advance
NJ
Manfred Hagedorn
Participant
Posts: 58
Joined: Wed Apr 04, 2007 10:02 am

Post by Manfred Hagedorn »

Hello,
it sounds like you have one job with two Oracle stages updating the same table?
I would do both updates in separate jobs, one after the other.
Best regards
Manfred
jagadam
Premium Member
Posts: 107
Joined: Wed Jul 01, 2009 4:55 pm
Location: Phili

Post by jagadam »

OK, thanks for your suggestion. I will split the job into two: the first scenario's upserts in one job and the second scenario's upserts in another, and I will let you know the results.
NJ
jagadam
Premium Member
Posts: 107
Joined: Wed Jul 01, 2009 4:55 pm
Location: Phili

Post by jagadam »

It worked with the Oracle EE stage, but not with the Connector stage; I'm not able to figure out why the Connector stage isn't working. I tried with sample data (fewer than 10 records) and it worked without any issues. But the actual tables will have millions of records (10 to 50 million), and it's taking a long time to update even a few records against 6 million. What options should I consider in order to increase the performance of the job?

Please suggest.

Thanks in Advance.
SURA
Premium Member
Posts: 1229
Joined: Sat Jul 14, 2007 5:16 am
Location: Sydney

Post by SURA »

1. Updates will take time if you have millions of records. To make them more efficient, ensure the key columns (the ones used in the WHERE clause) are indexed.

2. Write the output to a Dataset file and use a separate job to load from the Dataset into the table.

3. Sit with your DBA and find the best approach.

DS User