
Job does not progress after a certain percent...

Posted: Tue Mar 31, 2009 2:45 am
by ds_search2008
I searched this forum for a long time first and tried every suggestion I could find. However, the job stays in the running state for a very long time, and the progress percentage in the log does not move past 40 percent. There is no error or warning message in the log file.

It's a simple job that involves only two stages.
The source is a DB2 stage and the target is also a DB2 stage. If I write the source DB2 stage into a Data Set instead, it works fine, but DB2 to DB2 the data is not getting loaded. The log shows no error message, the job link is blue/green, and I can see the rows passing (initially 102 rows, 198 rows...), yet the target table has no data.

I increased the transaction size and array size gradually from 1 up to 40000. Can you please share your ideas on this issue?

Many Thanks

Re: Job does not progress after a certain percent...

Posted: Tue Mar 31, 2009 3:20 am
by Pagadrai
ds_search2008 wrote: It's a simple job that involves only two stages.
The source is a DB2 stage and the target is also a DB2 stage.
Hi,
I understand that DataStage is not doing any validation or transformation of the records here. You are picking records from one DB2 table and inserting/updating them into another DB2 table.

I think writing a DB procedure will be useful in this case.
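Something along these lines might work; this is only a minimal sketch, and the schema, procedure, table and column names are placeholders, not taken from your job:

Code: Select all

-- Minimal SQL PL sketch: copy rows from a source table into a target table.
-- Create it with a non-default statement terminator, e.g.: db2 -td@ -f copy_rows.sql
CREATE PROCEDURE ETL.COPY_ROWS ()
  LANGUAGE SQL
BEGIN
  INSERT INTO TGT_SCHEMA.CUSTOMERS (CUST_ID, CUST_NAME)
    SELECT CUST_ID, CUST_NAME
    FROM   SRC_SCHEMA.CUSTOMERS;
  COMMIT;
END@

Once it is created you can run it with CALL ETL.COPY_ROWS() and compare the timing against the DataStage job.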

Posted: Tue Mar 31, 2009 3:38 am
by v2kmadhav
Does it work any better using an active stage in between the two read and write processes?

Posted: Tue Mar 31, 2009 4:55 am
by ds_search2008
Thanks to both of you for your replies.

v2kmadhav

v2kmadhav wrote: Does it work any better using an active stage in between the two read and write processes?
I tried this possibility as well. However, the job just stays in the running state. :cry: Please help.

Posted: Tue Mar 31, 2009 5:03 am
by v2kmadhav
Which stage are you using? The API stage? When you set the array size to 1, do you still not see any records in your table? You said it passed 102 records?

Can you try using the EE stage to see if it makes a difference?

How big is that target table?

I hope you are not reading from and writing to the same table :D

All the best...

Posted: Tue Mar 31, 2009 5:18 am
by ds_search2008
I'm using the DB2 API stage. I tried different array sizes starting from 1. The performance statistics show 102 records passing into the target, yet the back-end table has no rows. Source and target are different tables.

Everything seems to be perfect. I wonder where the problem is....

Posted: Tue Mar 31, 2009 6:00 am
by v2kmadhav
The insert/update...
Can you try running that as a script outside DataStage, in your DB2 administration tool or any other tool you have?

Does it work fine there?
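For example, something like this from the DB2 command line (just a sketch, assuming source and target are reachable from the same connection; the database, schema and table names are placeholders):

Code: Select all

-- minimal CLP script sketch, run e.g. with: db2 -tvf test_load.sql
CONNECT TO TARGETDB;
INSERT INTO TGT_SCHEMA.CUSTOMERS SELECT * FROM SRC_SCHEMA.CUSTOMERS;
SELECT COUNT(*) FROM TGT_SCHEMA.CUSTOMERS;
COMMIT;
CONNECT RESET;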

Posted: Wed Apr 01, 2009 12:19 am
by ds_search2008
Thanks v2kmadhav.

Yes, I can insert/update the target table successfully through a DB2 tool, i.e., outside DataStage. This suggests there are no locks on the table. I wonder why we are facing the issue only with DataStage.
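One way to double-check that from the DB2 side while the DataStage job is running (the database name below is just a placeholder):

Code: Select all

-- show locks currently held on the database (the lock monitor switch may need to be on)
GET SNAPSHOT FOR LOCKS ON TARGETDB;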

Posted: Wed Apr 01, 2009 12:25 am
by nani0907
Hello,

Have you checked whether there are any triggers set on that particular table to commit after each row?
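A quick way to check is to query the catalog (the schema and table names below are placeholders for the real target table):

Code: Select all

-- list any triggers defined on the target table
SELECT TRIGNAME, TRIGTIME, TRIGEVENT
FROM   SYSCAT.TRIGGERS
WHERE  TABSCHEMA = 'TGT_SCHEMA'
  AND  TABNAME   = 'CUSTOMERS';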

Posted: Wed Apr 01, 2009 1:01 am
by sima79
What is your transaction isolation setting in the API stage? Can you post your SQL for both source and target stages as well?

Posted: Wed Apr 01, 2009 1:53 am
by ds_search2008
Thanks for the replies.

There is no trigger set on any of the tables. The target was even made very simple, without any keys or NOT NULL columns, for test purposes. We still face this issue.

sima79 wrote: What is your transaction isolation setting in the API stage?
Transaction size = 1; array size = 1.

Sima, sorry if I have misunderstood your question.

Posted: Wed Apr 01, 2009 2:56 am
by v2kmadhav
How many inserts are there, and how many records in your target table are going to be updated?

Did this job ever run faster before? Has there been any considerable change to DataStage, the table, or the database? I'm sure you have looked at indexes too...

Did you try using the EE stage?

Are the target table and source table on the same database? If they are different, how are they cataloged on your machine?
If your target table is on a remote machine, what is the bandwidth of that line?
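If you are not sure, the database and node directories on the DataStage server show how each database is cataloged:

Code: Select all

-- from the DB2 command line on the DataStage server
LIST DATABASE DIRECTORY;
LIST NODE DIRECTORY;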

Posted: Wed Apr 01, 2009 3:32 am
by ds_search2008
Thanks v2kmadhav.


The records are around 2000 (we started with test data to check the load). I tried loading field by field and saw a slight improvement: the progress percentage increased to 90%. However, the data is still not getting loaded at the back end. I changed the transaction size and array size starting from 1. There are no indexes on the target table.

Yes, we have tried the EE stage. We face the same issue.

To work around this, we now use a Sequential File stage as the source instead of the table, so there shouldn't be any issue on the source side. I also tried adding an active stage between source and target. Even that is not working.

:cry: I have no idea about this issue. Nothing is going right with this job. :cry:

Posted: Wed Apr 01, 2009 3:37 am
by v2kmadhav
So what's the situation with the network then?

You have checked that writing the data from
DB2 table ----> dataset or file
is fast enough.

How fast is
dataset or file ----> DB2 target table?

Again... are those two tables on the same database? Did you check the cataloging? Is that fine?

Posted: Wed Apr 01, 2009 4:01 am
by sima79
The transaction isolation setting should be in your source DB2 stage. Try setting it (if it is not already set) to "Uncommitted Read".
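You can also run a dirty read against the target while the job is running; if rows are being inserted but never committed, something like this should still show them (the schema and table names are placeholders):

Code: Select all

-- dirty read: counts rows including uncommitted ones
SELECT COUNT(*)
FROM   TGT_SCHEMA.CUSTOMERS
WITH UR;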