Record load getting committed even after job abort
Moderators: chulett, rschirm, roy
-
- Premium Member
- Posts: 37
- Joined: Wed Oct 29, 2014 10:48 pm
- Location: Ohio
Hi,
I am loading some data into an Oracle table using the Oracle connector. I want the load to roll back completely from the Oracle table if the job aborts, so I have set the Record count property (under Isolation level) to 0. But I can still see records in the target table even though my job aborted (due to a primary key violation).
Other properties set by me:
Isolation level: Read committed
Array size: 2000
Drop unmatched fields: Yes
Fail on row error: Yes
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Welcome aboard.
This behaviour is governed primarily by the transaction record count specified in your Oracle connector. Each time that many rows are sent to Oracle, DataStage also sends a COMMIT.
You can change the behaviour to "all or nothing" by setting the transaction record count to 0 (all rows become one transaction), but make sure you are in your DBA's good books, as it will be necessary to handle very large transactions (talk to them about rollback or redo segments).
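A minimal sketch of the difference, using SQLite through Python's DB-API purely for illustration (the connector's internals are not public; this just models "commit every N rows" versus "one transaction for the whole load"):

```python
import sqlite3

def load(rows, record_count):
    """Model of the commit behaviour: COMMIT every `record_count`
    rows; 0 means the whole load is a single transaction."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
    try:
        for i, value in enumerate(rows, start=1):
            conn.execute("INSERT INTO t (id) VALUES (?)", (value,))
            if record_count and i % record_count == 0:
                conn.commit()          # periodic COMMIT: these rows now stick
        conn.commit()                  # clean finish: commit the remainder
    except sqlite3.IntegrityError:
        conn.rollback()                # "abort": only uncommitted rows roll back
    return conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]

# A duplicate key aborts the load partway through:
rows = [1, 2, 3, 3, 4]
print(load(rows, 2))   # 2 -- rows committed before the failure survive
print(load(rows, 0))   # 0 -- all or nothing: everything rolls back
```

This is why a nonzero record count leaves rows behind after an abort: everything committed before the failing array stays committed.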
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Sure, related, but it will change the Array Size to be an even multiple of the Record Count if it needs to, not the other way 'round. And that doesn't apply if the Record Count is set to zero: as noted in the docs, if the record count is zero then "the connector processes all records before it commits the transaction".
What about the two APT ORA variables that control commits? Are they set in the environment? I don't have their exact names off the top of my head but one commits based on time and another based on count. Wondering if they are causing what you are seeing.
-craig
"You can never have too many knives" -- Logan Nine Fingers
I don't think we are but if we're talking about an Oracle bulk load then there are two kinds, DIRECT and CONVENTIONAL. The former is the true bulk load where you have DIRECT=TRUE in the control file and there are no commits done as everything is written directly to the table and bypasses the redo mechanism. Conventional (DIRECT=FALSE) is the default and does plain old regular inserts and commits so not really all that... bulky.
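For reference, the distinction shows up directly on a SQL*Loader command line (the userid and control-file names below are placeholders):

```shell
# True direct-path load: writes data blocks straight to the table,
# bypassing conventional INSERTs and most redo generation.
sqlldr userid=scott/tiger control=load.ctl direct=true

# Conventional-path load (the default): ordinary INSERTs with
# periodic commits, governed by the ROWS parameter.
sqlldr userid=scott/tiger control=load.ctl direct=false rows=2000
```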
-craig
"You can never have too many knives" -- Logan Nine Fingers
-
- Premium Member
- Posts: 37
- Joined: Wed Oct 29, 2014 10:48 pm
- Location: Ohio
Hi,
I checked both environment variables, i.e.
$APT_ORAUPSERT_COMMIT_ROW_INTERVAL and
$APT_ORAUPSERT_COMMIT_TIME_INTERVAL
They are set to the default values at project level in the Administrator (5000 and 2 respectively).
I want to override them in my job, so I added these variables as job parameters. I tried setting their values to 0, -1, $UNSET etc., but the records are still getting committed. Any idea how I can actually unset these values so that the project-level values are not assigned to these variables?
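One simple sanity check is to dump what the engine actually sees at run time, for example from a before-job ExecSH routine. The sketch below just simulates the project-level defaults mentioned above and then inspects the environment:

```shell
# Simulate the project-level defaults, then check what a job would inherit.
export APT_ORAUPSERT_COMMIT_ROW_INTERVAL=5000
export APT_ORAUPSERT_COMMIT_TIME_INTERVAL=2
env | grep APT_ORAUPSERT
```

If the job-level override worked, the values printed from inside the job should differ from the project-level ones.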
Technically it would go back to your 'Official Support Provider' which may or may not be IBM... it can depend on whom you got the software from. That vendor may be your official first line of defense for support. Hopefully someone there knows.
-craig
"You can never have too many knives" -- Logan Nine Fingers