HOW TO COMMIT AFTER EVERY 1 MILLION RECORDS



seeta
Participant
Posts: 8
Joined: Mon Nov 07, 2005 8:10 pm


Post by seeta »

Hi,
I want to commit my records after every 1 million rows of data in DataStage. Is there any way to commit in DataStage?
I know how to commit in the database.

Any help would be appreciated.
Thanks
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

Seeta,

the term used in most places in DataStage for this is "transaction size", which equates to the commit frequency. Each database stage has a way for you to set this value. The documentation for whichever stage you are using is a good place to begin if you can't find where to set it.
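
(Outside DataStage, the same idea in plain JDBC would look roughly like the sketch below. It is only an illustration of what a commit frequency means; the connection string, table name and row counts are made up.)

Code: Select all

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class CommitEveryN {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details -- replace with your own database.
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass");
        con.setAutoCommit(false);

        PreparedStatement ps =
                con.prepareStatement("INSERT INTO target_table (id) VALUES (?)");
        final int transactionSize = 1_000_000; // the "transaction size" / commit frequency
        int rowsSinceCommit = 0;

        for (int id = 1; id <= 5_000_000; id++) { // pretend source data
            ps.setInt(1, id);
            ps.addBatch();
            rowsSinceCommit++;
            if (rowsSinceCommit == transactionSize) {
                ps.executeBatch();
                con.commit(); // one transaction per 1,000,000 rows
                rowsSinceCommit = 0;
            }
        }
        ps.executeBatch();
        con.commit(); // commit whatever is left over
        ps.close();
        con.close();
    }
}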
vmcburney
Participant
Posts: 3593
Joined: Thu Jan 23, 2003 5:25 pm
Location: Australia, Melbourne

Post by vmcburney »

In Enterprise database stages you will often find transaction size as an optional property: click on the stage options and look for it in the bottom-right properties window. By default it is set to 2000 (I think!). Be careful with a value of 1,000,000 on a parallel job, though: every partition will try to build and hold 1,000,000 rows in a single transaction, which can lead to rollback space problems on the database. If you have four partitions you may need to commit 250,000 rows at a time per partition, so you get a maximum of 1,000,000 outstanding inserts overall.
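
To make the arithmetic concrete, here is a tiny sketch of how to work out the per-partition setting (the partition count and target are just the numbers from this example):

Code: Select all

public class PerPartitionCommit {
    public static void main(String[] args) {
        int targetOutstandingRows = 1_000_000; // total uncommitted rows you are willing to hold
        int partitions = 4;                    // degree of parallelism of the job

        // Each partition commits independently, so divide the target across them.
        int transactionSize = targetOutstandingRows / partitions;

        System.out.println("Set Transaction Size to " + transactionSize
                + " so that " + partitions + " partitions hold at most "
                + targetOutstandingRows + " uncommitted rows.");
        // Prints: Set Transaction Size to 250000 so that 4 partitions hold at most 1000000 uncommitted rows.
    }
}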