
DB2 ordered targets and transactions

Posted: Mon Sep 24, 2012 10:54 am
by PhilHibbs
I'm trying a DB2 load with multiple input links to the stage, one link to do a "delete" and one to do an "insert".

The "Transaction" section has disappeared. It is no longer present in either of the links nor on the stage. The Isolation Level has moved from the Link page to the Stage page. Does anyone know what Transaction Record Count will be used? Or will it just commit after every record ordering key?

*Edit* The documentation says this:
Isolation level
Specify the degree to which the data that is being accessed by the DB2 connector stage is locked or isolated from other concurrently executing transactions, units of work, or processes. (Transactions or units of work are committed in a target stage when the end of wave marker is detected.)

What's an "end of wave marker"?

*Edit* Am I supposed to have a third link that does a User-defined SQL "commit" statement? My job worked fine for a small test but fell over with "unavailable resource" on a larger volume run. Adding a third "commit" link seems to have overcome this.

One neat effect that can be achieved this way is to commit before reaching a certain record limit. If you set the constraint on the "commit" link to "Mod( @INROWNUM, pCommitLevel ) = 0", and set the "commit" link to be the first link in the Link Order, then the commit will be issued on a change of key that would have breached the commit level. So if you have a limit of 1000, and you have got to 997, and the next key inserts 4 rows, the commit will be issued before those 4 rows are inserted.
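The behaviour described above can be sketched in plain Python (this is not DataStage code; the function name, the `(key, payload)` row representation, and the event list are my own illustration of the intent, with `commit_limit` playing the role of pCommitLevel):

```python
from itertools import groupby

def load_with_commits(rows, commit_limit):
    """Simulate commit-before-key-change loading.

    rows: iterable of (key, payload) pairs, assumed already sorted by key.
    Returns a list of ("insert", payload) and ("commit", row_count) events.
    """
    events = []
    uncommitted = 0
    for key, group in groupby(rows, key=lambda r: r[0]):
        batch = list(group)
        # If inserting this whole key group would breach the limit,
        # commit first -- mirroring the "commit" link firing ahead of
        # the inserts in the link order.
        if uncommitted and uncommitted + len(batch) > commit_limit:
            events.append(("commit", uncommitted))
            uncommitted = 0
        for _, payload in batch:
            events.append(("insert", payload))
            uncommitted += 1
    if uncommitted:
        events.append(("commit", uncommitted))
    return events
```

So with a limit of 10, nine rows already in and a four-row key group arriving, the commit fires at nine rows, before the group is inserted; the key group is never split across two transactions.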

Re: DB2 ordered targets and transactions

Posted: Mon Sep 24, 2012 1:34 pm
by kwwilliams
Do you have the auto commit turned on?

Re: DB2 ordered targets and transactions

Posted: Tue Sep 25, 2012 12:05 am
by Kryt0n
This and your previous related post got me intrigued; I didn't even know two input links to a connector were allowed, so I had always split the streams.
PhilHibbs wrote:What's an "end of wave marker"?
Generally it's an indicator in real-time processing that the full data set has been sent, allowing the stages to complete their task and shut down. I started playing with it to see if it could control the DB transaction but hit a flaw... maybe I've done it wrong, so anyone with ideas, feel free to fire.

There is a stage called "Wave Generator" which allows you to insert these waves into your stream based on your conditions, e.g. your primary/foreign key when trying to maintain referential integrity... What I found is that when both my nodes (in a two-node config) had sent an end of wave to the DB, the connection to the DB closed, causing the rest of the process to fail. To me that defeats the purpose of being able to insert an end of wave flag...
PhilHibbs wrote:One neat effect that can be achieved this way is to commit before reaching a certain record limit.
I like this idea, but rather than committing at a certain record limit, I see the benefit in sending the commit at a key change (or after x number of key changes), allowing greater control of when you commit (particularly for referential integrity).
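The commit-after-x-key-changes variant can be sketched the same way (again plain Python rather than DataStage; the function name and the `(key, payload)` representation are my own, and rows are assumed sorted by key):

```python
from itertools import groupby

def commit_every_n_keys(rows, n_keys):
    """Emit a commit after every n_keys distinct keys.

    rows: iterable of (key, payload) pairs, assumed sorted by key.
    Returns a list of ("insert", payload) and ("commit", last_key) events,
    so each transaction always contains whole key groups.
    """
    events = []
    keys_since_commit = 0
    last_key = None
    for key, group in groupby(rows, key=lambda r: r[0]):
        for _, payload in group:
            events.append(("insert", payload))
        keys_since_commit += 1
        last_key = key
        if keys_since_commit == n_keys:
            events.append(("commit", key))
            keys_since_commit = 0
    if keys_since_commit:
        events.append(("commit", last_key))
    return events
```

Because the counter advances per key group rather than per row, no key group can ever straddle a commit, which is the referential-integrity property being aimed for.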

Re: DB2 ordered targets and transactions

Posted: Tue Sep 25, 2012 6:55 am
by chulett
Kryt0n wrote:This and your previous related post got me intrigued; I didn't even know two input links to a connector were allowed, so I had always split the streams.
This has always been allowed in Server jobs, guess it finally came to the Parallel world with the Connectors. 8)

Re: DB2 ordered targets and transactions

Posted: Tue Sep 25, 2012 7:15 am
by BI-RMA
chulett wrote:This has always been allowed in Server jobs, guess it finally came to the Parallel world with the Connectors. 8)
In fact, it has been allowed in the parallel world in some older stages as well, but not in the Enterprise stages. For example, the DB2 API stage supported it and still does.

Re: DB2 ordered targets and transactions

Posted: Tue Sep 25, 2012 7:19 am
by chulett
Ah... thanks for the clarification.

Re: DB2 ordered targets and transactions

Posted: Tue Sep 25, 2012 9:58 am
by PhilHibbs
kwwilliams wrote:Do you have the auto commit turned on?
No, we've always explicitly set the commit frequency. I don't like the word "auto", I like to be in control.

*Edit* I suppose I could set that and also explicitly commit with the third link; that might protect against the possibility that a single foreign key has more records than the database can handle in a single transaction. I don't think we'll ever get there, as the most I've seen per key has been 11 records, but you never know.