snapshot too old

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

yiminghu
Participant
Posts: 91
Joined: Fri Nov 14, 2003 10:09 am

snapshot too old

Post by yiminghu »

Hi,

I have this job:

OCI stage --> Transformer --> OCI stage

The first OCI stage reads from a table, and the second one updates the same table. The problem is that when the volume of data is small, everything is OK. Now that the job is trying to update nearly 20 million records, it aborts with the following error: 'snapshot too old: rollback segment number 2 with name "RBS_1" too small'. I asked the DBA, and he told me it is because I tried to select from and update the same table.

I am wondering: if I dump the result of the first query into a sequential file and then update directly from the sequential file, would that solve the problem? Also, what is the best approach for such situations?
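
Just to be concrete about what the two stages are doing (the table and column names below are made up for illustration), the conflict is roughly this:

    -- source OCI stage: one long-running read of the whole table
    SELECT acct_id, balance
    FROM   account_fact;

    -- target OCI stage: a parameterised update fired row by row against the same table
    UPDATE account_fact
    SET    balance = :new_balance
    WHERE  acct_id = :acct_id;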

Thanks,

Carol
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Re: snapshot too old

Post by chulett »

yiminghu wrote:Now that the job is trying to update nearly 20 million records, it aborts with the following error: 'snapshot too old: rollback segment number 2 with name "RBS_1" too small'. I asked the DBA, and he told me it is because I tried to select from and update the same table.
That is contributing to the problem, but it's not the literal meaning of that particular error. :?

Yes, your approach would be a good way to work around the problem. It would also give you the ability to 'parallelize' the work if a single stream isn't fast enough. Ken will be along to counsel you on MOD functions, bulk loads, use of the MERGE command, etc. All of these can be considered.
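
For example (only a sketch, with made-up table and column names), a MOD on a numeric key carves the work into disjoint streams that can run side by side:

    SELECT acct_id, balance
    FROM   account_fact
    WHERE  MOD(acct_id, 4) = 0;   -- stream 1 of 4; the other streams use 1, 2 and 3

and a set-based MERGE from a staging table can replace millions of single-row updates:

    MERGE INTO account_fact t
    USING account_stage s
    ON (t.acct_id = s.acct_id)
    WHEN MATCHED THEN
      UPDATE SET t.balance = s.balance
    WHEN NOT MATCHED THEN
      INSERT (acct_id, balance)
      VALUES (s.acct_id, s.balance);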

You could also just do intermediate commits, assuming you have the Transaction Size set to zero right now. It would help in either approach but 'complicate' your restart/recovery plans.
-craig

"You can never have too many knives" -- Logan Nine Fingers
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL

Post by kcbland »

Craig's suggestions are all good. But you have to realize that your SELECT in the source OCI also depends on the rollback segments, because Oracle has to reconstruct a consistent view of all of that source data as of the moment your query started. With 20+ million rows in play, you run the risk that other activity in the database that needs rollback will fail, or that your query will die because of those ongoing needs.

You need to get that data out of the source database as quickly as possible. You will NOT want to wait for the transformation and loading into the target table. The best choice is to write to a sequential text file. In the next job, read that file and update from there.
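
If you want to see how little work the unload itself is, the same idea outside of DataStage is nothing more than a spool (table and column names here are made up):

    SET HEADING OFF
    SET FEEDBACK OFF
    SET PAGESIZE 0
    SET LINESIZE 200
    SPOOL account_unload.txt
    SELECT acct_id || '|' || balance FROM account_fact;
    SPOOL OFF

The extract is a straight read that finishes fast; the expensive transform and load then run against the flat file instead of against an open cursor on the table you are updating.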
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
rleishman
Premium Member
Posts: 252
Joined: Mon Sep 19, 2005 10:28 pm
Location: Melbourne, Australia

Post by rleishman »

Here is the best link I have ever seen on Snapshot Too Old http://www.ixora.com.au/tips/admin/ora-1555.htm. As you can see, there are a couple of possible causes, but the PROBABLE cause in your case is that the SELECT is trying to perform a consistent point-in-time read on the data, which you are changing as it reads.
chulett wrote:You could also just do intermediate commits, assuming you have the Transaction Size set to zero right now. It would help in either approach but 'complicate' your restart/recovery plans.
There is a first time for everything: I disagree with Craig (here's where I need a "pensive" smiley).

More frequent COMMITs will not help. To get a consistent point-in-time read from your SELECT, Oracle still has to go to the rollback segments to see what the data used to look like. As the rollback segments get recycled, you may not be able to get those before-images (hence the Too Old error). In fact, you WILL NOT get an STO error if the Transaction Size is 0, because the writer OCI will not release the rollback segment, ensuring that it cannot be recycled. Of course, 20M rows in a single uncommitted transaction is a lot of rollback.
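
To make that concrete, here is a single-session analogue of what your two OCI connections are doing - the classic 'fetch across commit' shape of ORA-01555 (names are purely illustrative):

    DECLARE
      CURSOR c IS
        SELECT rowid rid, balance
        FROM   account_fact;              -- consistent as of the moment the cursor opens
      n PLS_INTEGER := 0;
    BEGIN
      FOR r IN c LOOP
        UPDATE account_fact
        SET    balance = r.balance * 1.1  -- some change to the rows being read
        WHERE  rowid = r.rid;
        n := n + 1;
        IF MOD(n, 1000) = 0 THEN
          COMMIT;                         -- each commit frees undo the still-open cursor may need
        END IF;
      END LOOP;
      COMMIT;
    END;
    /

The more often you commit, the sooner that undo becomes fair game for reuse, and the more likely the long-running cursor is to hit the error.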

Since you can get a Snapshot Too Old error when ANY user is changing the data you are reading, Ken's advice is spot-on ("relieved" smiley). Get the data out quick-as-you-can into a sequential file and run from there.
Ross Leishman
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

rleishman wrote:There is a first time for everything: I disagree with Craig (here's where I need a "pensive" smiley).
No problem. Looking back over it, you are right - I was getting my scenarios mixed up. At least I can blame it on it being 2 o'clock in the morning for me. :P

I didn't specifically state it, as it's been covered here before and is a database issue rather than a DataStage issue - it *is* about Oracle attempting to maintain a point-in-time snapshot of the data you need and not being able to do so before your process completes. I didn't go into great detail on the 'get it out as quickly as possible' part as I knew Hurricane Ken would be along shortly to... what is the expression? 'Do the needful'. :lol:

Thanks for picking up on that.
-craig

"You can never have too many knives" -- Logan Nine Fingers