problem with job design
Posted: Fri Aug 03, 2007 1:38 pm
Hello All,
I have a relational data model in DB2, and I have to transfer data from 3 DB2 tables into 4 dimensions and 2 facts in my warehouse in UDB.
The source tables have 6, 2.5, and 4 million rows respectively.
There are no validations and very few derivations; almost all of it is straight mappings. After the initial bulk load it will be daily delta processing.
Please suggest whether designing the job in SERVER or PARALLEL is better performance-wise.
In Server:
1) Read the data from the 3 tables at once with a join query.
2) Do the lookups to the dimension tables and write to multiple transformers.
3) Generate the keys for inserts using KeyMgmt.
4) Keep the keys in memory, then insert/update the records in the facts.
In Parallel:
1) Read the data from the 3 tables at once with a join query.
2) Do the lookups to the dimension tables and write to multiple transformers.
3) Generate the keys using Cnt+1 whenever I have inserts (where Cnt is a job parameter holding the max key from the table).
4) Keep the keys in memory, then insert/update the records in the facts.
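To make sure I'm describing steps 3 and 4 clearly, here is a rough sketch (plain Python, not DataStage code; the table and column names are made up) of the logic I have in mind: start from the current max key (the Cnt job parameter), hand out Cnt+1, Cnt+2, ... for new rows, and keep the assigned keys in memory so the fact loads can reuse them:

```python
def assign_keys(rows, existing_keys, max_key):
    """Split delta rows into inserts (get a new surrogate key) and updates.

    rows          -- incoming delta rows, each a dict with a "natural_key"
    existing_keys -- in-memory map natural_key -> surrogate_key (the lookup)
    max_key       -- current max key from the table (the Cnt job parameter)
    """
    inserts, updates = [], []
    for row in rows:
        nk = row["natural_key"]
        if nk in existing_keys:
            # Known dimension member: reuse its key and update.
            row["surrogate_key"] = existing_keys[nk]
            updates.append(row)
        else:
            # New member: next key is Cnt+1; remember it for the fact loads.
            max_key += 1
            existing_keys[nk] = max_key
            row["surrogate_key"] = max_key
            inserts.append(row)
    return inserts, updates, max_key
```

One thing I'm unsure about: in a multi-node PX job a single shared counter like this could hand out duplicate keys across partitions, so I assume the key generation would have to run on one node (or each partition get its own key range).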
I can't decide which option is better; I am new to DS EE development.
Since most of the work is either reading from or writing to the database, I have a feeling that doing it in PX wouldn't help much.
Please advise, and let me know if anything I thought I could do in PX won't actually work.
Thanks
nsm.