Hi,
We have a simple job that calls a stored procedure on SQL Server that deletes around 34 billion rows in chunks (committing on the database every few hundred thousand deletes or so). We tested the job today and received the following from the production DBA:
"We are stuck. The DataStage processing is done as a single transaction, and we ran out of log space.
The log space used is now 274 GB.
You will need to check how to configure the DataStage job so that it will not nest the execution of the stored procedure inside its own transaction.
Processing stopped at 4:07 PM. I think we should kill the job and understand how to reconfigure DataStage before resubmitting."
Has anyone heard of this before?
Thanks -- John
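For reference, the chunked-delete-with-intermediate-commits pattern the procedure uses can be sketched as follows. This is not the actual T-SQL procedure; it is a minimal stand-in using Python's built-in sqlite3 module, with a hypothetical `staging` table and a tiny chunk size (production used a few hundred thousand rows per commit):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO staging (payload) VALUES (?)",
                 [("row",)] * 1000)
conn.commit()

CHUNK = 250  # production: a few hundred thousand per commit
deleted = 0
while True:
    # Delete one chunk at a time instead of everything in one statement.
    cur = conn.execute(
        "DELETE FROM staging WHERE rowid IN "
        "(SELECT rowid FROM staging LIMIT ?)", (CHUNK,))
    if cur.rowcount == 0:
        break
    deleted += cur.rowcount
    conn.commit()  # intermediate commit: lets the server release log space

print(deleted)  # 1000
```

The point of the pattern is that each commit ends a transaction, so the database can truncate or reuse the transaction log for the completed chunk rather than accumulating all 34 billion deletes in one open transaction.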
Stored Procedure Stage question
In all honesty, no. I don't see how DataStage could possibly override what happens inside the stored procedure, and if it is coded to do work in 'chunks' and issue intermediate commits, I don't see how DataStage running it would be any different from you running it. All the stage does is wrap the call in an anonymous block. Unless perhaps SQL Server plays a role here; I don't recall ever seeing that "nested inside its own transaction" behaviour with Oracle procedures.
But then I have no access or ability to test anything and am only working from my fading memories of days gone by. :wink:
Hopefully others with more pertinent replies will chime in as to what might be going on here.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Looks like the DBAs were right. They proved it by showing that when the proc is called directly, a new transaction id is issued with every commit, but when it is called from DataStage the same transaction id is used for the entire 34 billion row delete. I engaged IBM engineering; they were aware of this issue with SQL Server and the Stored Procedure stage, and issued a patch for it, which worked:
patch_JR46989_dsp-STP_windows_9100-9120.zip
Here is some verbiage from the Readme.txt:
PROBLEM: STP-ODBC stage causing transaction logs to be filled on the SQL Server
DETAILS:
STP-ODBC always sets AUTOCOMMIT to off and controls transactions.
Customer wants to skip setting transaction since some drivers and databases
take this differently. In SQL Server transaction logs were growing when
transaction is being handled within STP.
RESOLUTION:
Provided option to skip setting AUTOCOMMIT in STP-ODBC.
Environment variable "DS_STP_DONOT_SET_TRX" needs to be set to 1 for same.
After setting environment variable at Job level recompile STP jobs.
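The AUTOCOMMIT behaviour the readme describes can be illustrated with a small stand-in (again Python's sqlite3, not the ODBC driver itself; the contrast between driver-managed transactions and true autocommit is the point, not the specific API):

```python
import sqlite3

# Driver-managed transactions (AUTOCOMMIT off): the connection opens an
# implicit transaction on the first write and holds it until commit() is
# called.  Commits issued elsewhere never happen, so everything piles up
# in one transaction, which is what filled the SQL Server log.
managed = sqlite3.connect(":memory:")  # default: implicit transactions
managed.execute("CREATE TABLE t (x INTEGER)")
managed.execute("INSERT INTO t VALUES (1)")
print(managed.in_transaction)  # True: work is held in an open transaction

# True autocommit: each statement is its own transaction, so intermediate
# commits inside a procedure actually take effect and release log space.
auto = sqlite3.connect(":memory:", isolation_level=None)
auto.execute("CREATE TABLE t (x INTEGER)")
auto.execute("INSERT INTO t VALUES (1)")
print(auto.in_transaction)  # False: nothing is left open
```

This mirrors what the patch's `DS_STP_DONOT_SET_TRX=1` option does: it stops the stage from turning AUTOCOMMIT off, so the stored procedure's own chunked commits are honoured.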