Job recovery
Hi,
One of my team members accidentally deleted a job (using Designer). There is no backup of the job. Is there any way I can recover it? Does DS store any information about deleted objects in any kind of buffer?
Thanks in advance!
Nitin Jain | India
If everything seems to be going well, you have obviously overlooked something.
No, once the job is deleted all information about that job is irretrievably gone and no recovery is possible.
Arnd is correct. Job designs are stored as records in tables in a database (the Repository). The deletion having been auto-committed, no recovery is possible.
There's a lesson to be learned about backups here!
I export all my work at the end of each day.
There is a record in the DS_AUDIT table that reports when it was deleted and by whom, but the job itself remains irrecoverable.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
This brings up a topic that has been bugging me recently: backup strategy. There are three approaches.
1. Each person saves a copy of his own work at the end of each day. Before rolling to production the job is then stored in whatever repository you use for source code.
2. The DS admin creates a job that exports all of the jobs each night.
3. Your server backup is responsible for recovery.
We are using both 2 and 3, but 2 assumes there will be a time when there is no activity on your box. On our Solaris system, 3 also requires that there be no activity to get a successful backup. For us this is beginning to cause problems, because a company's requirements won't always leave you the quiet hours needed for a backup.
Are there any options that I am missing? Has anyone done anything to work around the need for a mandatory quiet time?
Thanks,
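For what it's worth, option 2 (a nightly export of every project) can be sketched as a scheduled script along these lines. This is a dry-run sketch: the host, user, project names, and paths are placeholders, and dscmdexport is the command-line export client, whose exact switches should be checked against your release.

```shell
#!/bin/sh
# Sketch of option 2: a scheduled nightly export of every project.
# dscmdexport is the command-line export client; the host, user,
# project names, and paths below are placeholders, not taken from
# any real environment.
DSHOST="dshost"
DSUSER="dsadm"
BACKUP_DIR="/backup/datastage"
STAMP=$(date +%Y%m%d)

for PROJECT in ProjectA ProjectB; do
    OUT="$BACKUP_DIR/${PROJECT}_${STAMP}.dsx"
    # Echo the command instead of running it, so the sketch is safe
    # to try without a DataStage client installed.
    echo dscmdexport "/H=$DSHOST" "/U=$DSUSER" "$PROJECT" "$OUT"
done
```

Schedule it from cron (or the Windows scheduler, on a client machine where dscmdexport is installed) and drop the echo once the command line is verified against your release.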
Keith Williams
keith@peacefieldinc.com
To do it properly there's no way around a mandatory quiet time. This is true of all databases, really, though there are some strategies (like snapshots) that "pretend" it's a quiet time.
There is a backup utility (uvbackup) shipped with DataStage that no-one uses. It can back up a quiesced database. And yes, there is a command (SUSPEND.FILES) to quiesce the "UniVerse" DataStage engine, at least for versions earlier than Hawk.
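A quiesced backup along those lines might look like the sketch below. It only echoes each step; the DSHOME path, the uvsh invocation, and the uvbackup options are assumptions to verify against your engine's documentation before use.

```shell
#!/bin/sh
# Dry-run sketch of a quiesced engine backup using SUSPEND.FILES and
# uvbackup. DSHOME and the exact command invocations are assumptions;
# check them against your engine's documentation.
DSHOME="/opt/dsengine"                        # hypothetical install path
BACKUP="/backup/dsengine_$(date +%Y%m%d).uvb"

# Each step is echoed rather than executed, to keep the sketch safe.
echo "cd $DSHOME"
echo "echo 'SUSPEND.FILES ON' | bin/uvsh"     # pause writes to hashed files
echo "find . -print | bin/uvbackup -f -v > $BACKUP"
echo "echo 'SUSPEND.FILES OFF' | bin/uvsh"    # resume normal operation
```

The point of the SUSPEND.FILES bracket is that uvbackup then reads the hashed files in a consistent state, without needing the whole server shut down.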
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Ray.Wurlod wrote: 'This is true of all databases, really, though there are some strategies (like snapshots) that "pretend" it's a quiet time.'
That is a lie. That is what the transaction logs do for us. They give us incremental recovery ability. In addition, they provide the database with roll-forward and roll-back so that it stays in a consistent transactional state. Anyway, I have heard this product is supposed to be moving to a UDB backend database in Hawk. I suspect that would give you another recovery option: daily backups of the database plus incremental backups during the day.
Ultramundane,
it is not a lie, and that term is a bad one to use; it implies that Ray is intentionally trying to deceive. You might have stated that Ray is incorrect, which would have been more accurate; but I believe even that is false.
The transaction logs are used to get a sequential list of changes or a "delta", but you still need a baseline from which to roll forward/backward and in order to get that baseline image the database still effectively needs to be in a quiescent state at some point in time.
Fine then, but you are incorrect also. You wrote: "The transaction logs are used to get a sequential list of changes or a 'delta', but you still need a baseline from which to roll forward/backward and in order to get that baseline image the database still effectively needs to be in a quiescent state at some point in time."
This is not at all true for Sybase or MSSQL. The database can be up and processing at near or full capacity when the backup is started (it has been that way for over 8 years now in Sybase and MSSQL). The backup utilities track the changes using the database timestamp. After the full backup you can continue with the incremental backups. I would imagine that Oracle has incorporated this functionality with RMAN, and if IBM does not support this in UDB then they are behind in backup strategy.
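The full-plus-log pattern described here could be sketched for SQL Server as below. The server name, database, and paths are placeholders, and the commands are echoed rather than executed; the T-SQL BACKUP statements are standard, but verify the sqlcmd invocation for your environment.

```shell
#!/bin/sh
# Sketch of the hot-backup pattern: one full backup plus periodic
# transaction-log backups, issued via sqlcmd. The server, database,
# and destination paths are placeholders.
SERVER="mysqlserver"
DB="MyDb"
FULL="BACKUP DATABASE $DB TO DISK='D:\\backups\\${DB}_full.bak'"
LOG="BACKUP LOG $DB TO DISK='D:\\backups\\${DB}_log.trn'"

# Echoed rather than executed, so the sketch is safe to run anywhere.
echo sqlcmd -S "$SERVER" -Q "$FULL"   # nightly full backup, database stays online
echo sqlcmd -S "$SERVER" -Q "$LOG"    # incremental log backup during the day
```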
Since job design information is scattered across different rows (records), recovering the repository to a point in time is futile. A file-system-level backup is unaware of the transient state of the data in the file structures, so it could catch an instantaneous image of the DS_JOBOBJECTS file in flux. That could make the backup unusable, because it returns the file in a state where the records' internal links no longer line up. Likewise, since data and overflow are separate files, you're in even more trouble: the file-system backup could capture separate images of the two rather than both together.
Not good. This is why daily exports have been, and always will be, the best recommendation for preserving your work.
As for the tiff over database transaction-log semantics and such, does it really matter? DS doesn't work that way, and unless they've really engineered a true relational solution that makes jobs, routines, metadata, etc. available for selective backup and recovery, we'll still be in the same boat. It's like two bald guys arguing over a hairbrush. Developers will still need to be responsible for not clobbering each other's work. It doesn't matter what language or tool you're using; that's a fact of life we all know.
Kenneth Bland
Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
We are talking about different types of backup here. Every one of the databases we are discussing has the incremental backup functionality you describe. But there is a difference between a table-level and a database-level backup (or schema/tablespace level), and in order to retain consistency the granularity of the internal timestamp would need to be at record level and would have to be processed for each row. No matter what approach is used, a snapshot needs to be taken, be it by temporarily suspending database changes or by processing each row by its internal timestamp. Even with the internal-timestamp method you are talking about, the table needs to be locked for some period to get the information in a consistent manner, and that equates to a "freeze" as well.
I'd also like to mention that when using these utilities to back up the databases, no tables are read! Only the allocated blocks/pages. These blocks/pages contain the timestamp of the database. When your flavour of DBMS does its hot backup (Sybase, MSSQL, Oracle, and UDB), it simply gets the current database timestamp and dumps all blocks/pages that are equal to or less than this value. If it finds that a block/page has changed, it can go to the log and pull out what to dump so that the dump is consistent.
Reading up on the RMAN documentation shows that it works at block level: whenever it is saving a block, all rows in that block are locked, so the "freeze" happens at block level rather than at the table or tablespace level of other backup methods. Although one doesn't see a lengthy freeze, the lock is still present; the total time needed is just broken into smaller slices by RMAN's approach. And if the DURATION option is set high enough for the backup, the database will be effectively frozen.
Not to divert the thread, but back to the DataStage backup. Right now my freeze is an hour long: it starts at 01:00 and runs until 02:00. Are there methods any of you use to cut down on the amount of unavailable time?
Keith Williams
keith@peacefieldinc.com