Hi, thanks for your replies. Based on my observation of the statistics, when I significantly increase the number of nodes, the performance decreases; when I run the same job on a single node, it takes much less time. May I know the reason behind this? I think it is because I am hash partitioning in the source stage...
Hi, when I ran the job on a single node it took around 10 minutes. If I run it using 4 nodes, it takes around 30 minutes to process all the records. I am hash partitioning the incoming records based on key fields. Please throw some light on this. I would like to know why the performance is ...
What are the points to be considered to make a job run on multiple nodes? Other than configuration file changes, what changes need to be made at the stage level?
Please provide useful links, if any.
Sorry guys, I posted the question wrongly. The corrected question is as follows: what are the points to be considered in making a job that currently runs on a single node run on multiple nodes? I mean, what properties need to be set in the stages, and what changes are required in the configuration file ...
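For reference, the configuration-file side of this comes down to defining more logical nodes in the parallel configuration file (pointed to by APT_CONFIG_FILE). A minimal illustrative four-node file might look like the sketch below; the server name and disk paths are made-up placeholders, not taken from any real environment:

```
{
    node "node1"
    {
        fastname "etlserver"
        pools ""
        resource disk "/ds/data1" {pools ""}
        resource scratchdisk "/ds/scratch1" {pools ""}
    }
    node "node2"
    {
        fastname "etlserver"
        pools ""
        resource disk "/ds/data2" {pools ""}
        resource scratchdisk "/ds/scratch2" {pools ""}
    }
    node "node3"
    {
        fastname "etlserver"
        pools ""
        resource disk "/ds/data3" {pools ""}
        resource scratchdisk "/ds/scratch3" {pools ""}
    }
    node "node4"
    {
        fastname "etlserver"
        pools ""
        resource disk "/ds/data4" {pools ""}
        resource scratchdisk "/ds/scratch4" {pools ""}
    }
}
```

At the stage level the main things to review are each stage's partitioning setting (Auto, Hash, Round Robin, etc.) and any "Execution mode = Sequential" settings that would pin a stage to one node.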
Thanks again. How can we access that XMETA repository database? Is it not in the Universe database? I am interested to know the details. Please provide me the related links where I can find that info.
May I know the reason? One more doubt: I am able to rename jobs and move jobs from one folder to another in DataStage 7.5, but I am unable to do the same in DataStage 8.1. Are there any changes to the properties in the DS_JOBS table in version 8.1?
Hi, thanks for your reply. Actually, the job is designed to implement SCD Type 2 (using a Y/N flag). In it I used the Oracle stage, where I was allowed to upsert (insert as well as update). I used the same id for referring to the rows. I guess I got the error in that place. Can you please help ...
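For context, the flag-based SCD Type 2 pattern described above boils down to two statements per changed key: expire the current row, then insert the new version. A hedged SQL sketch (the table, columns, and bind names here are invented for illustration, not the actual job's):

```sql
-- Expire the existing current version of this key (illustrative names)
UPDATE customer_dim
   SET current_flag = 'N'
 WHERE customer_id  = :in_customer_id
   AND current_flag = 'Y';

-- Insert the new version as the current row
INSERT INTO customer_dim (customer_id, customer_name, current_flag)
VALUES (:in_customer_id, :in_customer_name, 'Y');
```

If the same id is reused for both the update and the insert in the Oracle stage's upsert mode, a unique-key violation on that id is one plausible cause of the error described, though that would need to be confirmed against the actual constraint definitions.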