Table getting locked when trying to update a table

Post questions here relating to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.


Rea_Antony
Participant
Posts: 73
Joined: Thu Mar 08, 2007 9:50 pm
Location: Bangalore


Post by Rea_Antony »

Hi Everyone,

I have a question.

I have a table A. It has four columns: a unique id, value1, date, and an indicator.
The default value for the indicator is 'N'.

The entries are as follows:

1 abc 28-Jul-2008 N
2 abc 29-Jul-2008 N

I need to design a job that will set the indicator to 'Y' for each value of column value1, based on the maximum date available. So, for my example, the second row should be updated with indicator 'Y'.

I designed a job that has a lookup. This lookup references a hash file which holds the row with the maximum date for each value1. Based on the unique id, I will update my table.

Problem: since my source and destination are the same table, it gets locked and sometimes the job aborts.

The only solution I can think of is to have two separate jobs: one to populate the hash file and another to update the table.

But if anyone could suggest a better solution, I would be greatly obliged.

I'm using DataStage Server v7.5 with SQL Server 2000 as the database, on a Windows OS.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Post your job design. I don't hear anything (yet) that would lead me to think you would have locking problems.
-craig

"You can never have too many knives" -- Logan Nine Fingers
Rea_Antony
Participant
Posts: 73
Joined: Thu Mar 08, 2007 9:50 pm
Location: Bangalore

Post by Rea_Antony »

Hi,
My problem is that I'm reading from a table and updating the same one. While the read from the source is still going on (say there are 100,000 rows), the job starts writing to the destination as soon as the lookup is matched. Since the read process is not over, a lock is created.

The design is something like this:

Source table A -> Transformer with a lookup (from table A) -> Target table A

Please advise.
kwwilliams
Participant
Posts: 437
Joined: Fri Oct 21, 2005 10:00 pm

Post by kwwilliams »

You could just design your job to look like this:

Source table A --> Transformer --> Target table A

Set the SELECT in your source to:

SELECT value1, MAX(date) AS date FROM table_A GROUP BY value1

That should do it.
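
If the unique id is also needed to drive the update, one option could be to join that aggregate back to the table so each max-date row comes out with its id. This is only a sketch for SQL Server 2000, assuming the column names (id, value1, date) from the first post:

-- Return the full row (including id) holding the max date per value1;
-- each row returned is a candidate for indicator = 'Y'
SELECT a.id, a.value1, a.date
FROM table_A AS a
INNER JOIN (SELECT value1, MAX(date) AS max_date
            FROM table_A
            GROUP BY value1) AS m
    ON m.value1 = a.value1
   AND m.max_date = a.date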
Rea_Antony
Participant
Posts: 73
Joined: Thu Mar 08, 2007 9:50 pm
Location: Bangalore

Post by Rea_Antony »

Hi,

I don't think that will work, as I have to update the table based on the unique id.

So if I do this: SELECT value1, MAX(date) AS date FROM table_A GROUP BY value1
I would get the max date but would not know which row to update.

An example for the data is as follows:

1 abc 28-Jul-2008 N
2 abc 29-Jul-2008 N
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Does your source stage support Transaction Isolation? Try changing it to Read-only to avoid locking the source records.
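
For SQL Server 2000, the same effect can also be had in the source SQL itself via the READ UNCOMMITTED isolation level, for instance with a NOLOCK table hint. A sketch, using the table and column names from the thread (note that this permits dirty reads, so it is only safe where that is acceptable):

-- Read without taking shared locks on the source rows
SELECT id, value1, date, indicator
FROM table_A WITH (NOLOCK)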
-craig

"You can never have too many knives" -- Logan Nine Fingers
DS_SUPPORT
Premium Member
Posts: 232
Joined: Fri Aug 04, 2006 1:20 am
Location: Bangalore

Post by DS_SUPPORT »

Introduce a Sequential File or IPC stage before the Target stage. That might solve the problem.
Rea_Antony
Participant
Posts: 73
Joined: Thu Mar 08, 2007 9:50 pm
Location: Bangalore

Post by Rea_Antony »

Hi,
I'm still drawing a blank. I would really appreciate it if anyone could give me some pointers about how to design this job. Thank you.
Yoga_12
Participant
Posts: 18
Joined: Fri Feb 08, 2008 2:14 am
Location: Bangalore

Post by Yoga_12 »

CREATE TABLE table_A
(
ID NUMBER,
string VARCHAR2(256),
dt DATE,
flg VARCHAR2(1)
);

INSERT INTO table_A VALUES (1, 'abc', '31-Jan-2008', 'n');
INSERT INTO table_A VALUES (2, 'abc', '1-Jan-2008', 'n');

UPDATE table_A SET flg = 'Y'
WHERE dt = (SELECT MAX(dt) FROM table_A);


Since it is a table, why can't you go with After-SQL in the Target stage itself? Just write an update statement like the one I have written above.
Yoga Kathirvelu
Working with interesting people is more interesting than just working
Rea_Antony
Participant
Posts: 73
Joined: Thu Mar 08, 2007 9:50 pm
Location: Bangalore

Post by Rea_Antony »

The type of data I have is:

INSERT INTO table_A VALUES(1,'abc','31-Jan-2008','n');
INSERT INTO table_A VALUES(2,'abc','1-Jan-2008','n');
INSERT INTO table_A VALUES(3,'def','28-Feb-2008','n');
INSERT INTO table_A VALUES(4,'def','2-Jan-2008','n');

So, I need to find the max date based on the value of the second column, i.e., the max date for 'abc', the max date for 'def', etc. Then, based on the row identified by the unique id (column 1), I need to update it.

So, in the above example I need to update rows 1 and 3 only.

Hence, I don't feel the method you suggested would work, as it would update only one row for the entire table.
Rea_Antony
Participant
Posts: 73
Joined: Thu Mar 08, 2007 9:50 pm
Location: Bangalore

Post by Rea_Antony »

Forgot to thank you earlier for responding to my query. Please do let me know if you can think of any other alternative.
Yoga_12
Participant
Posts: 18
Joined: Fri Feb 08, 2008 2:14 am
Location: Bangalore

Post by Yoga_12 »

UPDATE table_A a SET flg = 'Y'
WHERE (a.dt, a.string) IN (SELECT MAX(dt), string FROM table_A GROUP BY string);

I think this query will solve your problem.
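
Note that the (dt, string) IN (...) row-value form is Oracle syntax, which SQL Server 2000 does not accept. A T-SQL sketch of the same per-group update, assuming the same table and column names:

-- Flag the row carrying the max date within each string group
UPDATE a
SET flg = 'Y'
FROM table_A AS a
WHERE a.dt = (SELECT MAX(b.dt)
              FROM table_A AS b
              WHERE b.string = a.string)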

If you want an alternate option, introduce either a Hashed File or Sequential File stage between your source and target, like below:

Source ----> Transformer with lookup ----> Hashed File ----> Target
Yoga Kathirvelu
Working with interesting people is more interesting than just working
damianpaul
Participant
Posts: 1
Joined: Thu Apr 03, 2008 8:46 am

Post by damianpaul »

Add an Aggregator stage which does nothing (i.e., a 1:1 mapping):

Source -> Transformer -> Aggregator -> Target

The Aggregator waits for all rows before proceeding, which prevents the locks from happening.