
DB2/UDB stage causing table deadlock

Posted: Mon Nov 10, 2008 10:19 pm
by my_stm
Dear all,

How is everyone? =)

I'm hoping to get some clarification here, as I'm currently facing a predicament I haven't encountered before.

I have a simple job that does this:

Dataset -> Surrogate Key Generator -> DB2/UDB stage.

In the DB2/UDB stage I have selected 'Write: Append' as the write mode.

My volume of data is just 800,000 records. Normally this would be dealt with swiftly, and it is in the development environment, but in production it is inserting at a crawl... around a few hundred records every 2-3 minutes. That is, if the records are still being inserted at all.

The worst-case scenario is that the table goes into a deadlock after just a few records. <-- this being the frequent outcome.

I checked with the DBA, and he noticed that this job alone, running under its own session ID, is generating a very large number of locks...

Example lock snapshot:

Application handle = 354
Application ID = *N0.db2inst1.081111031741
Sequence number = 0001
Application name = osh
CONNECT Authorization ID = ETLOPER
Application status = UOW Executing
Status change time = Not Collected
Application code page = 874
Locks held = 1024
Total wait time (ms) = Not Collected

List Of Locks
Database partition = 0
Lock Name = 0x003B010701DD72010000000052
Lock Attributes = 0x00000008
Release Flags = 0x40000000
Lock Count = 1
Hold Count = 0
Lock Object Name = 31289857
Object Type = Row
Tablespace Name = STG_DATA1_TS
Table Schema = CTRLSTG
Table Name = FNC_STG_DATA_001
Mode = X

Database partition = 0
Lock Name = 0x003B010701DD72000000000052
Lock Attributes = 0x00000008
Release Flags = 0x40000000
Lock Count = 1
Hold Count = 0
Lock Object Name = 31289856
Object Type = Row
Tablespace Name = STG_DATA1_TS
Table Schema = CTRLSTG
Table Name = FNC_STG_DATA_001
Mode = X
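
In case it helps, a lock snapshot like the one above can be pulled with the DB2 CLP, and the lock-related DB CFG settings are probably worth a look at the same time, since 1024 row locks held by a single application is the sort of count that can push DB2 towards lock escalation (MYDB below is just a placeholder for our real database alias):

  db2 get snapshot for locks on MYDB
  db2 get snapshot for application agentid 354
  db2 get db cfg for MYDB | grep -i -E 'locklist|maxlocks|locktimeout|dlchktime'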

Questions:
1) Is this the cause of the deadlock / the job crawling?
2) Is this normal for the Write: Append option?
3) Any other hints as to where this might have gone wrong?

I've searched the forum and so far no one seems to have encountered this problem. Slow, maybe, but not ground to a halt.
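
If it helps narrow things down, I can also ask the DBA to set up a deadlock event monitor so we can see the exact statements involved when it deadlocks. A rough sketch (the monitor name dlmon and the output directory are placeholders, and the directory has to exist first):

  db2 "create event monitor dlmon for deadlocks with details write to file '/tmp/dlmon'"
  db2 "set event monitor dlmon state 1"
  # re-run the job, then format whatever was captured:
  db2evmon -path /tmp/dlmon > dlmon.txt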

Any ideas, guys? Right now the DBA and the developers are throwing punches at each other. Hahaha. Funny situation, really...

Posted: Mon Nov 10, 2008 10:33 pm
by ray.wurlod
The stage itself will not cause any deadlocks; it is your design that will. Check that the partitioning is key-based so that the same key cannot occur on multiple partitions.

Posted: Mon Nov 10, 2008 11:59 pm
by my_stm
ray.wurlod wrote: The stage itself will not cause any deadlocks; it is your design that will. Check that the partitioning is key-based so that the same key cannot occur on multiple partitions. ...
Dear Ray,

Sorry, I don't quite understand that. What does "the same key occurring on multiple partitions" mean? Do I check this at the job level or the database level?

Thanks and regards,
Mike--

Posted: Tue Nov 11, 2008 12:36 am
by mahadev.v
On the input link to your DB2/UDB stage, Hash partition the data on all key fields.
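
If the target table lives in a partitioned (DPF) database, it is also worth pulling its distribution key from the catalog so the hash keys chosen in the job line up with it. A rough check, using the schema and table names from your snapshot:

  db2 "select colname, partkeyseq from syscat.columns where tabschema = 'CTRLSTG' and tabname = 'FNC_STG_DATA_001' and partkeyseq > 0 order by partkeyseq"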

Posted: Tue Nov 11, 2008 12:46 am
by my_stm
mahadev.v wrote:On the input link to your DB2/UDB stage, Hash partition the data on all key fields.
Dear Mahadev,

I've explored that option, but sadly the problem is still there. =( The job seems to be at a standstill...

Does anyone know about the multiple threads per session ID invoked by the DB2/UDB stage? Could there be something peculiar there?

Regards,
Mike--

Posted: Tue Nov 11, 2008 7:25 am
by ray.wurlod
How many nodes in your DataStage parallel execution configuration file?

Posted: Tue Nov 11, 2008 8:15 pm
by my_stm
ray.wurlod wrote:How many nodes in your DataStage parallel execution configuration file? ...
Dear ray,

2 nodes are specified in the configuration file:

one node on the DataStage server,
and the other on the DB2 server.

This configuration is the same on the production and development servers.
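
For completeness, our file is shaped roughly like the sketch below (the host names and resource paths here are placeholders, not our real values):

  $ cat $APT_CONFIG_FILE
  {
      node "node1"
      {
          fastname "etl_host"
          pools ""
          resource disk "/data/ds/datasets" {pools ""}
          resource scratchdisk "/data/ds/scratch" {pools ""}
      }
      node "node2"
      {
          fastname "db2_host"
          pools ""
          resource disk "/data/ds/datasets" {pools ""}
          resource scratchdisk "/data/ds/scratch" {pools ""}
      }
  }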

Thanks and regards,
Mike--