DB2 Connector Stage
Moderators: chulett, rschirm, roy
-
- Premium Member
- Posts: 288
- Joined: Tue May 27, 2008 3:42 am
- Location: Luxembourg
DB2 Connector Stage
I am using the DB2 Connector stage in a generic job that loads data from a dataset into a table. This job is called around 100 times to load a hundred tables, with the dataset and table name supplied as parameters each time.
What I need to do is disable auto commit completely so that, for example, if there is an error in inserting even one line in the 99th table, the entire operation can be rolled back and all of the 100 tables return to having their older values.
Can this be done in any way?
Tony
BI Consultant - Datastage
No, it can't be done. When the generic job completes, the connection to DB2 is closed and a commit is executed.
Your best bet might be to add a column with a run-id to each table and upon abort you could have a job traverse all the tables and delete rows where the run-id is equal to that of the aborted batch.
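The run-id approach above can be sketched as follows. This is a minimal illustration using SQLite as a stand-in for DB2; the table names, column name `run_id`, and the `back_out` helper are all hypothetical, not part of DataStage or the DB2 Connector:

```python
import sqlite3

# Stand-in for DB2: each load tags its rows with a run_id so a failed
# batch can be backed out later with a DELETE against every table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, run_id INTEGER)")
conn.execute("CREATE TABLE orders (item TEXT, run_id INTEGER)")

RUN_ID = 42  # in the real job this would arrive as a parameter

# Simulated loads from the generic job, one per table.
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [("alice", RUN_ID), ("bob", RUN_ID)])
conn.executemany("INSERT INTO orders VALUES (?, ?)", [("book", RUN_ID)])
conn.commit()

def back_out(conn, tables, run_id):
    """On abort, traverse all loaded tables and delete the batch's rows."""
    for table in tables:
        conn.execute(f"DELETE FROM {table} WHERE run_id = ?", (run_id,))
    conn.commit()

back_out(conn, ["customers", "orders"], RUN_ID)
```

The delete job only needs the list of target tables and the run-id of the aborted batch, so it can itself be a single generic job driven by the same parameters.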
-
- Participant
- Posts: 246
- Joined: Mon Jun 30, 2008 3:22 am
- Location: New York
- Contact:
As mentioned by ArndW, there will usually be a batch ID loaded to all the tables, unique for every run. And in order to revert, there will be a separate delete job, which will pick up the batch IDs and delete all records for that batch ID.
We also had a situation where there was no batch ID; instead, the tables were stamped with the job start timestamp in a column called 'Load date'. We stored the job start timestamp and the job name for every instance, and whenever the process needed to be reverted, all records carrying the stored timestamps were deleted.
Arun
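The 'Load date' variant works the same way, keyed on the stored job start timestamp instead of a batch ID. A minimal sketch, again using SQLite as a stand-in for DB2 with an illustrative `load_date` column:

```python
import sqlite3
from datetime import datetime

# Stand-in sketch: no batch ID, so each row carries the job start
# timestamp in a 'load_date' column, stored separately per job instance.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (val TEXT, load_date TEXT)")

job_start = datetime(2008, 6, 30, 3, 22, 0).isoformat()  # stored per run
conn.execute("INSERT INTO target VALUES ('row1', ?)", (job_start,))
conn.execute("INSERT INTO target VALUES ('row2', ?)", (job_start,))
conn.commit()

# Revert: delete every record that carries the stored timestamp.
conn.execute("DELETE FROM target WHERE load_date = ?", (job_start,))
conn.commit()
```

Note that this only identifies the batch reliably if the stored timestamp is the single job start time, not the per-row insert time, which would differ from row to row.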
Hmmm... there's no coming back from a truncate. You'll need an alternate approach in that case, perhaps something like a table swap? Rename the original table, create a new empty version and load... on failure swap the old table back.
-craig
"You can never have too many knives" -- Logan Nine Fingers
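The table-swap idea can be sketched like this, once more with SQLite standing in for DB2; the table name `sales` and the `load_with_swap` helper are illustrative only:

```python
import sqlite3

# Swap sketch: rename the original table aside, load into a fresh empty
# copy, and on failure rename the original back so its data survives.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (amount INTEGER)")
conn.execute("INSERT INTO sales VALUES (100)")
conn.commit()

def load_with_swap(conn, rows):
    conn.execute("ALTER TABLE sales RENAME TO sales_old")
    conn.execute("CREATE TABLE sales (amount INTEGER)")
    try:
        for r in rows:
            if r < 0:
                raise ValueError("bad row")  # simulated load failure
            conn.execute("INSERT INTO sales VALUES (?)", (r,))
        conn.execute("DROP TABLE sales_old")  # success: discard old copy
        conn.commit()
    except ValueError:
        # Failure: throw away the partial load and swap the old table back.
        conn.execute("DROP TABLE sales")
        conn.execute("ALTER TABLE sales_old RENAME TO sales")
        conn.commit()

load_with_swap(conn, [200, -1])  # fails mid-load; original row survives
```

As the follow-up post notes, this requires CREATE/DROP/ALTER privileges for the DataStage user, which is exactly what rules it out in some environments.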
-
- Premium Member
- Posts: 288
- Joined: Tue May 27, 2008 3:42 am
- Location: Luxembourg
So you are talking about a CREATE TABLE and subsequent DROP TABLE on the fly. I'm pretty sure the user used by DataStage in our production environment does not have CREATE TABLE privileges. Table creation is to be done only by the administrator at the time of delivery. Daft? Yes...
Tony
BI Consultant - Datastage