Hello Forum,
How does one go about creating their own plug-in in DataStage? Is it feasible? Where is the procedure/interface documented? Does anybody have experience doing this?
Thanks,
Greg
How to create Plugin
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
It's all documented in the DataStage Plug-In Writer's Guide, which should be available for the asking from the vendor.
To create the stage itself you need to be a very competent C programmer while to create a GUI for it you need C++ skills.
In my experience there ought almost never be a need to create your own plug-in stage; most things can be done with the product as it stands. For what functionality do you believe you need to create your own stage type?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Hello Ray,
Thank you for your response.
My client is using Teradata and is discouraged by the performance of the Teradata API stage. They use the API stage in some situations where MultiLoad is ruled out by the additional indexing requirements that stage imposes.
An improvement in non-MultiLoad loading to Teradata is expected in Hawk, but my client wanted to research some alternatives to waiting for that fix:
1. Calling a BTEQ script to load from file via job control, a sequence, or a before-job subroutine.
2. Writing their own plug-in around BTEQ.
3. Using a Wrapper in Parallel Edition.
4. Migrating existing jobs to use MultiLoad instead.
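Option 1 above can be sketched as a small shell wrapper that a before-job subroutine (ExecSH) or a sequence's Execute Command activity could call. This is a minimal example, not a tested solution: the server name, credentials, table, columns, and input file are all hypothetical placeholders, and the delimiter and column list would come from the client's environment.

```shell
# Sketch of option 1: drive a BTEQ load from a shell script.
# All names here (tdserver, dsuser, staging.orders, /tmp/orders.csv)
# are illustrative placeholders, not real objects.
cat > load_orders.bteq <<'EOF'
.LOGON tdserver/dsuser,secret;
.IMPORT VARTEXT ',' FILE = /tmp/orders.csv;
.QUIET ON
.REPEAT *
USING (order_id VARCHAR(10), amount VARCHAR(12))
INSERT INTO staging.orders (order_id, amount)
VALUES (:order_id, :amount);
.LOGOFF;
.QUIT;
EOF

# Run the script and capture a log; BTEQ's exit code signals success.
# (Commented out here because it requires a Teradata client install.)
# bteq < load_orders.bteq > load_orders.log 2>&1 || echo "BTEQ load failed"
```

Checking BTEQ's return code in the wrapper (and aborting the job on failure) is what makes this usable from job control, since DataStage itself only sees the shell command's exit status.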
Ah, one of those clients! OK, request the manual, maybe it will scare them off.
That said, if you have the skills, writing your own plug-in can be useful for functionality that DataStage lacks. For example, Hitachi (in Japan) created plug-in stages that allowed their database (HiRDB) to be accessed by DataStage.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.