Commit intervals in ODBC enterprise stage

Posted: Wed May 12, 2010 7:32 pm
by anitha_casturi
We are working with Parallel Extender 8.1 in an AIX environment.

We are using the ODBC Enterprise stage to read from and load data into MS SQL Server 2005 tables.

How do we set commit intervals in this stage?

We want the commit to happen only after the entire load is complete. In case the job fails for any reason, no records should be committed to the target table.

I am sure there must be a way to do this, and I would appreciate the experts' help.

Thanks in advance

Posted: Wed May 12, 2010 8:36 pm
by ray.wurlod
Like so many other things in the parallel execution environment, this is managed through an environment variable rather than through stage properties. It's probably a good idea to add the environment variable to the job, or to a parameter set that the job includes.

Posted: Wed May 12, 2010 8:44 pm
by chulett
There seems to be a property in the ODBC Connector stage to control that, but no mention of anything of the sort in the documentation for the Enterprise stage.

Posted: Thu May 13, 2010 12:05 am
by anitha_casturi
ray.wurlod wrote: Like so many other things in the parallel execution environment, this is managed through an environment variable rather than through stage properties. It's probably a good idea to add the environment variable to th ...
Thank you. Could you please give me pointers on the environment variable that has to be set for this?

Posted: Thu May 13, 2010 12:38 am
by ray.wurlod
From memory, it's APT_RDBMS_COMMIT_ROWS.
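
For what it's worth, a minimal sketch of how such a variable is typically supplied, assuming it is exported in $DSHOME/dsenv before the engine starts (adding it to the job as a job parameter works just as well; the value 2000 below is only an illustrative commit interval, not a recommendation):

    # Assumed location: appended to $DSHOME/dsenv, which the engine sources.
    # Commit after every 2000 inserted rows (illustrative value only).
    APT_RDBMS_COMMIT_ROWS=2000
    export APT_RDBMS_COMMIT_ROWS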

Posted: Sun May 16, 2010 8:58 am
by anitha_casturi
Is there any way I can set a value for this environment variable such that the commit happens only after the job completes?

I have been trying to check on this, but all I can find is the following:

if APT_RDBMS_COMMIT_ROWS is set to 0, a negative number, or an invalid value, a warning is issued and each partition commits only once after the last insertion.

But I want the commit to the database to happen only after the whole job is complete.

Posted: Sun May 16, 2010 9:21 am
by ray.wurlod
The separate processing nodes are separate processes, each with its own database connection and therefore its own transaction. The only way you can accomplish what you are asking is to run on one node only. Unless, of course, you can figure out a way to wrap the entire DataStage job in a single transaction.
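
Putting the two hints together: with APT_RDBMS_COMMIT_ROWS set to 0 and a single-node configuration there is only one partition, so there is only one commit, issued after the last insertion. A minimal single-node configuration file sketch, selected through the APT_CONFIG_FILE environment variable, might look like this (the host name and resource paths are placeholders to be replaced with your own):

    {
        node "node1"
        {
            fastname "your_aix_host"
            pools ""
            resource disk "/data/datasets" {pools ""}
            resource scratchdisk "/data/scratch" {pools ""}
        }
    }

Note that this trades away all parallelism in exchange for the single-commit behaviour.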