Hi,
I need to implement SCD Type 2 in DataStage server jobs.
In parallel jobs there is the Change Capture stage. Do we have any equivalent stage in server jobs for SCD?
My job design is:

Seq file (after data) -> Transformer -> insert and update
                             |
                      Hashed file (before data)

What logic needs to be used in the Transformer stage? Please help me proceed further.
How do I capture the update records?
Thanks in advance
SCD
Re: SCD
rajee1234 wrote: do we have any stage in server for SCD?
No, you need to build your own logic.

rajee1234 wrote: how to capture the update record?
Write a constraint in the Transformer stage: Not(reference_link_name.NOTFOUND)

What stage are you using to insert/update the DB table?
Fetch data from the source and dump it into a sequential file.
Place a lookup against the target table and add two destination stages, one each for updates (update strategy: update rows only) and inserts (insert rows only).
In the Transformer stage:
For the update destination stage, load data with the constraint
NOT(lookuplinkname.NOTFOUND)
For the insert destination stage, load data with the constraint
lookuplinkname.NOTFOUND
Hope this solves your problem. :)
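The lookup-and-split routing described above can be sketched outside DataStage in plain Python. The key column `cust_id` and the record layouts are illustrative assumptions, not from the thread; the two branches mirror the `NOT(lookup.NOTFOUND)` and `lookup.NOTFOUND` constraints.

```python
# Hypothetical sketch: rows whose business key is found in the "before"
# lookup go to the update link, unmatched rows go to the insert link.

def split_insert_update(after_rows, before_lookup):
    """after_rows: list of dicts from the source (after data).
    before_lookup: dict keyed on the business key (before data)."""
    inserts, updates = [], []
    for row in after_rows:
        if row["cust_id"] in before_lookup:   # NOT(lookup.NOTFOUND)
            updates.append(row)
        else:                                 # lookup.NOTFOUND
            inserts.append(row)
    return inserts, updates

before = {1: {"cust_id": 1, "city": "Pune"}}
after = [{"cust_id": 1, "city": "Mumbai"},   # existing key -> update link
         {"cust_id": 2, "city": "Delhi"}]    # new key -> insert link
ins, upd = split_insert_update(after, before)
```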
Might there be duplicates in the source data? In that case there's more to do. Are all columns Type 2 changes, or do you need to consider only some columns' changes?
Map out the logic in a specification/flowchart and the rest will follow quite simply. For detecting changes where there may be duplicates, pre-load (the required rows and columns from the target table) into a hashed file, and maintain that hashed file as the job progresses.
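The advice above, comparing only the Type 2 columns, and maintaining the pre-loaded hashed file as the job progresses so that duplicate source keys see the latest version, can be sketched like this. The column names, the single Type 2 column, and the current-flag/effective-date convention are assumptions for illustration only.

```python
from datetime import date

# Only "city" is treated as a Type 2 column in this hypothetical example.
TYPE2_COLS = ["city"]

def process(rows, hashed, today=date(2024, 1, 1)):
    """Apply SCD-2 logic row by row. `hashed` stands in for the hashed
    file of current target rows, keyed on the business key, and is
    updated in-stream so duplicates in the same run are handled."""
    expires, inserts = [], []
    for row in rows:
        current = hashed.get(row["cust_id"])
        if current is None or any(row[c] != current[c] for c in TYPE2_COLS):
            if current is not None:
                # Close out (expire) the old version.
                expires.append({**current, "end_date": today, "is_current": 0})
            new_ver = {**row, "start_date": today, "end_date": None,
                       "is_current": 1}
            inserts.append(new_ver)
            hashed[row["cust_id"]] = new_ver  # maintain the lookup mid-run
    return expires, inserts

hashed = {1: {"cust_id": 1, "city": "Pune", "start_date": date(2023, 1, 1),
              "end_date": None, "is_current": 1}}
rows = [{"cust_id": 1, "city": "Mumbai"},   # change -> expire old, insert new
        {"cust_id": 1, "city": "Mumbai"},   # duplicate -> no further action
        {"cust_id": 2, "city": "Delhi"}]    # new key -> insert only
exp, ins = process(rows, hashed)
```

Without the in-stream update to `hashed`, the duplicate second row would wrongly generate another version, which is exactly the pitfall of looking up the target table directly.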
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.