MS-SQL transactions from ODBC stage

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

Post Reply
Amos.Rosmarin
Premium Member
Posts: 385
Joined: Tue Oct 07, 2003 4:55 am

MS-SQL transactions from ODBC stage

Post by Amos.Rosmarin »

Hi,


I'm working with an MS SQL Server database through an ODBC stage (it's Unix),
and I cannot get a transaction to work. For example:


I insert 50 rows into a table with values from 50 to 100,
then I execute a DS job that inserts 100 rows with values 1-100.

What I was expecting is that the job would abort because of the duplicate keys and a rollback would be done... so no change would be made to the table.
(I use transaction size 0 and array size 1000.)
What I get is the first 50 rows inserted and the rest rejected,
as if the database had no transactions.
I consulted the DBA and he assured me that the DB side is OK and it supports transactions.
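The expected all-or-nothing behaviour can be sketched outside DataStage. This is a minimal illustration of the semantics only, using Python's stdlib sqlite3 as a stand-in for SQL Server (an assumption; the real job goes through the ODBC stage):

```python
import sqlite3

# Sketch of transaction size 0 semantics: either every row inserts,
# or a single rollback leaves the table untouched.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (k INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO t VALUES (?)", [(k,) for k in range(50, 100)])
con.commit()  # 50 pre-existing rows, keys 50-99

try:
    # Attempt 100 inserts with keys 1-100; keys 50-99 collide.
    con.executemany("INSERT INTO t VALUES (?)", [(k,) for k in range(1, 101)])
    con.commit()
except sqlite3.IntegrityError:
    con.rollback()  # the "job aborts": none of the 100 rows survive

count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # still 50 -- the table is unchanged
```

With the symptom described above, the driver behaves instead as if each insert were committed on its own, so the rollback has nothing left to undo.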

Any idea?
How do I set SQL_AUTOCOMMIT?



TNX,
Amos
roy
Participant
Posts: 2598
Joined: Wed Jul 30, 2003 2:05 am
Location: Israel

Post by roy »

Hi,
I don't have the tools to check it yet, but have you checked under the branded_odbc directory for relevant info?

IHTH,
Roy R.
Time is money but when you don't have money time is all you can afford.

Search before posting:)

Join the DataStagers team effort at:
http://www.worldcommunitygrid.org
Amos.Rosmarin
Premium Member
Posts: 385
Joined: Tue Oct 07, 2003 4:55 am

Post by Amos.Rosmarin »

of course .... nice way to gain points :P
shawn_ramsey
Participant
Posts: 145
Joined: Fri May 02, 2003 9:59 am
Location: Seattle, Washington. USA

Re: MS-SQL transactions from ODBC stage

Post by shawn_ramsey »

Amos,

When you set the commit size to zero you are setting the transaction so that either all rows insert without error or all rows are rolled back. If you set the commit size to 1, then each row is committed individually and only the errored rows will be missing.
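The commit-size-1 behaviour Shawn describes can be sketched the same way, again with sqlite3 standing in for SQL Server (an illustration of the semantics, not of the ODBC stage itself):

```python
import sqlite3

# Sketch of commit size 1: each row is committed on its own, so only
# the rows that error are missing afterwards.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (k INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO t VALUES (?)", [(k,) for k in range(50, 100)])
con.commit()  # 50 pre-existing rows, keys 50-99

for k in range(1, 101):
    try:
        con.execute("INSERT INTO t VALUES (?)", (k,))
        con.commit()              # per-row commit: this row is now permanent
    except sqlite3.IntegrityError:
        con.rollback()            # duplicate key: only this row is lost

count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 100: rows 1-49 and 100 were added, duplicates 50-99 skipped
```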
Shawn Ramsey

"It is a mistake to think you can solve any major problems just with potatoes."
-- Douglas Adams
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

I'd suggest leaving your commit size at zero so it works as you expect. The only thing you are missing is to set limits on your job so it aborts after X warnings. It is the fact that the job aborts that causes the rollback.

We keep our limit fairly low, typically 1. :wink:
-craig

"You can never have too many knives" -- Logan Nine Fingers
shawn_ramsey
Participant
Posts: 145
Joined: Fri May 02, 2003 9:59 am
Location: Seattle, Washington. USA

Post by shawn_ramsey »

Craig,

Works as expected? It seems that it is not working as Amos had expected, hence the question. :D
Shawn Ramsey

"It is a mistake to think you can solve any major problems just with potatoes."
-- Douglas Adams
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Yup, exactly! Trying to get it to work as he expects, and it should. :wink: From the original post:
What I was expecting is that the job will abort because of duplicate keys and a rollback will be done... so no change will be done for the table.
(I use transaction size 0 and array size 1000)
As I mentioned, the only reason it didn't roll back after all of his duplicate key errors is that the job was run with 'Unlimited' warnings allowed. When the job Finished (with Warnings), that allowed the 'good' rows to be committed. It actually needs to show Aborted to perform the transactional rollback.
-craig

"You can never have too many knives" -- Logan Nine Fingers
shawn_ramsey
Participant
Posts: 145
Joined: Fri May 02, 2003 9:59 am
Location: Seattle, Washington. USA

Post by shawn_ramsey »

I guess I read too much into it. I was never that good with story problems. :oops:
Shawn Ramsey

"It is a mistake to think you can solve any major problems just with potatoes."
-- Douglas Adams
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

:lol: No problem Shawn.
-craig

"You can never have too many knives" -- Logan Nine Fingers
Amos.Rosmarin
Premium Member
Posts: 385
Joined: Tue Oct 07, 2003 4:55 am

Post by Amos.Rosmarin »

Hi,

Just to put things in order ...

The job has transaction size = 0 but works as if trx size = 1 (commit after each row).

The warning limit is 50 (the default), so the job aborts.

Is anyone familiar with the SQL_AUTOCOMMIT variable?
According to the ODBC manual:
With two-phased commit, SQL Server supports multiple statements per connection. Otherwise, SQL Server supports a single statement per connection if SQL_AUTOCOMMIT is 0 and multiple statements per connection if SQL_AUTOCOMMIT is 1.
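If the driver really is running with SQL_AUTOCOMMIT on, every statement commits the moment it executes and a later rollback undoes nothing, which matches the symptom exactly. A minimal sketch of that failure mode, using sqlite3's autocommit mode (`isolation_level=None`) as a stand-in (an assumption; the real setting is the ODBC connection attribute, not this API):

```python
import sqlite3

# Sketch of the autocommit failure mode: the insert is committed
# immediately, so the subsequent rollback is a no-op.
con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
con.execute("CREATE TABLE t (k INTEGER PRIMARY KEY)")
con.execute("INSERT INTO t VALUES (1)")
con.rollback()  # no open transaction: the insert is already committed

count = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 1 -- the row survived the rollback
```

In ODBC terms the attribute is normally set per connection (SQL_ATTR_AUTOCOMMIT via SQLSetConnectAttr); whether the DataStage ODBC stage exposes it may depend on the branded_odbc driver configuration.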
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Amos.Rosmarin wrote:The job has transaction size = 0 but works as if trx size = 1 (commit after each row)
This makes no sense to me. :?
Amos.Rosmarin also wrote:The warning limit is 50 (the default) so the job aborts
You didn't mention the fact that the job had actually aborted. :?

Perhaps it's related to the fact that you marked this as a Parallel job, yet posted it in the Server forum... something I just noticed. For all I know it works differently in the PX world than when you do something simple like this in Server. :?

Otherwise, a transaction size of zero = only commit once at the end of the job, not on every record - and rollback on abort.

Really don't think your issue has anything to do with two-phased commits, perhaps more of an issue with the wire driver. But hopefully someone will chime in on your SQL_AUTOCOMMIT setting question for ODBC, I've got no clue. :? [wow, much confusion reigns here]

Out of curiosity, what version of DataStage 7.x are you running?
-craig

"You can never have too many knives" -- Logan Nine Fingers
Amos.Rosmarin
Premium Member
Posts: 385
Joined: Tue Oct 07, 2003 4:55 am

Post by Amos.Rosmarin »

It is not a PX job ... just an old-fashioned server job. The server I'm using is 7.5.1 with PX.

I created duplicate keys on purpose and therefore expected a rollback on abort. But the job works as if there is a commit after each row.

Sorry if I was not clear.

Amos
Amos.Rosmarin
Premium Member
Posts: 385
Joined: Tue Oct 07, 2003 4:55 am

Post by Amos.Rosmarin »

The Problem:
===========
A DataStage server job (Solaris) that inserts rows into SQL Server via an ODBC stage does not work as expected: it does not perform a rollback in case of a SQL error (although the job aborts).

The solution:
=============

It's not really a solution, just a workaround:

Transaction size = 0
Array size = 2

For any @INROWNUM > 2 that triggers a SQL error (duplicate key, unique constraint ...) a rollback will be done.

Not much but it works for me :wink:
Amos
Post Reply