
DB2 Connector Stage

Posted: Wed Jul 31, 2013 5:54 am
by TonyInFrance
I am using the DB2 Connector stage in a generic job that loads data from a dataset into a table. This job is called around 100 times to load a hundred tables, with the dataset and table name supplied as parameters each time.

What I need to do is disable auto commit completely so that if, for example, inserting even one row into the 99th table fails, the entire operation can be rolled back and all 100 tables revert to their previous values.

Can this be done in any way?

Posted: Wed Jul 31, 2013 6:33 am
by ArndW
No, it can't be done. When the generic job completes, the connection to DB2 is closed and a commit is issued automatically.

Your best bet might be to add a run-id column to each table; upon an abort, a job could then traverse all the tables and delete the rows whose run-id matches that of the aborted batch.
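
For what it's worth, a rough sketch of such a cleanup, assuming a RUN_ID column on every target table and the ibm_db_dbi DB-API driver; all table names, column names and connection details below are made up:

    # Hypothetical cleanup job: remove the rows written by an aborted run
    # from every target table. Assumes each table carries a RUN_ID column.
    import ibm_db_dbi

    TABLES = ["STAGE.TABLE_001", "STAGE.TABLE_002"]   # the ~100 target tables (illustrative names)
    ABORTED_RUN_ID = 20130731                         # run-id of the failed batch

    conn = ibm_db_dbi.connect(
        "DATABASE=proddb;HOSTNAME=dbhost;PORT=50000;UID=etl;PWD=secret;", "", "")
    cur = conn.cursor()
    for table in TABLES:
        # RUN_ID is the extra column suggested above; adjust to your naming standard.
        cur.execute("DELETE FROM {0} WHERE RUN_ID = ?".format(table), (ABORTED_RUN_ID,))
    conn.commit()
    conn.close()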

Posted: Wed Jul 31, 2013 7:58 am
by arunkumarmm
As ArndW mentioned, there is usually a batch ID loaded into all the tables, unique for every run. To revert, a separate delete job picks up the batch ID and deletes all records loaded under it.

We also had a situation where there was no batch ID and the tables instead carried a 'Load date' column holding the job start timestamp. We stored the job start timestamp and job name for every instance, and whenever the process needed to be reverted, all records with the stored timestamps were deleted.

Posted: Wed Jul 31, 2013 10:32 am
by TonyInFrance
It's not deleting the entries loaded during the erroneous run that worries me. My data is loaded in truncate/insert mode, so if the load fails for even one table I need to revert to the data that existed at the very beginning.

Posted: Wed Jul 31, 2013 10:57 am
by chulett
Hmmm... there's no coming back from a truncate. You'll need an alternate approach in that case, perhaps something like a table swap? Rename the original table, create a new empty version and load... on failure swap the old table back.
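
Roughly, the swap amounts to something like this; a minimal sketch only, with made-up schema and table names, relying on DB2's RENAME TABLE, CREATE TABLE ... LIKE and DROP TABLE (and on the job's DB2 user having those privileges):

    # Illustrative swap protection around a truncate/insert load.
    # Schema, table and backup names are made up; the load itself is passed
    # in as a callable so this sketch stays independent of the actual job.
    import ibm_db_dbi

    def load_with_swap(conn, table, load_fn):
        """Rename the live table aside, load into a fresh empty copy,
        and swap the original back if the load fails."""
        schema, name = table.split(".")
        backup = "{0}.{1}_OLD".format(schema, name)
        cur = conn.cursor()
        cur.execute("RENAME TABLE {0} TO {1}_OLD".format(table, name))  # keep the original safe
        cur.execute("CREATE TABLE {0} LIKE {1}".format(table, backup))  # empty copy to load into
        try:
            load_fn(conn, table)                          # the actual load (e.g. the DataStage job)
            cur.execute("DROP TABLE {0}".format(backup))  # success: backup no longer needed
        except Exception:
            cur.execute("DROP TABLE {0}".format(table))   # failure: discard the partial load
            cur.execute("RENAME TABLE {0} TO {1}".format(backup, name))  # put the original back
            conn.commit()
            raise
        conn.commit()

The point is that the original rows are never touched until the new load has fully succeeded.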

Posted: Wed Jul 31, 2013 3:22 pm
by TonyInFrance
Exactly what I feared... there is effectively no rolling back a truncated table. I guess I'll have to copy each table into a temporary dataset first and, on insertion failure, load the older data from that dataset back into the table.

Posted: Wed Jul 31, 2013 3:59 pm
by chulett
IF you can get the grants / permissions, you really should check out the 'swap' that I mentioned. WAY faster than your dump before and reload after.

Posted: Thu Aug 01, 2013 2:14 am
by TonyInFrance
Definitely. But this would need the creation of a second table, right? And in all I have 108 of them. Extra tables might not be acceptable to the client.

Posted: Thu Aug 01, 2013 6:53 am
by chulett
A 'backup' copy of each table, yes, that exists during the load of that table only. After the load completes - one way or the other - you're back to only one table.

Posted: Thu Aug 01, 2013 7:00 am
by TonyInFrance
So you're talking about a CREATE TABLE and a subsequent DROP TABLE on the fly. I'm pretty sure the user that DataStage uses in our production environment does not have CREATE TABLE privileges; tables are to be created only by the administrator at the time of delivery. Daft? Yes... :-)

Posted: Thu Aug 01, 2013 7:02 am
by chulett
Yup... hence the big "IF". :wink: