Hi
We need to develop a job that does lookups against multiple hashed files (around 20-30) and loads the result into a table.
Currently, developing these jobs in the Designer is taking too much time.
Is there any way, other than using the Designer, to do the same design work faster, such as by writing code?
FYI: all the hashed files have a similar structure but different column names and different file names.
We would like to do this lookup mapping work for a lot of jobs, and want to finish it as fast as we can.
Please suggest a solution.
Thanks
Sai
Automating Design work
No. If you want to "finish as fast as you can" then do not waste time on an effort to make this faster, just do it. Any sort of automated effort would take a fair degree of expertise with the underlying architecture.
Have you considered a shared container if the lookups are the same? Or just copy/paste to get a running start?
-craig
"You can never have too many knives" -- Logan Nine Fingers
Hi chulett
I mean that the hashed files have a similar structure, but not the same one.
What I wanted to explore was:
i. Editing the .dsx file generated by exporting the job...
ii. Doing a direct insert on the UV tables underlying the DataStage Engine.
Any suggestion would help me...
Thanks
Sai
I'm going to stick to my original suggestion. Investigate something like this when you aren't under the gun to finish "as fast as you can". You'll waste more time futzing with this than you'll realize and if you just buckle down and do the monkey work it will be done before you know it. Futz when you have some free time.
saikrishna wrote: i. Editing the dsx file that was generated from the export of the job...
ii. Doing a direct insert on the UV tables underlying the DataStage Engine.
i. Possible. Still I doubt it will be any faster than adding a stage via the Designer and dragging over the saved metadata. Or faster than copy/paste from another open job.
ii. Very Bad Idea. Banish it from your mind.
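For anyone who still wants to experiment with option i. later (when not under the gun), a .dsx export is plain text, so a simple whole-word search/replace over a template job can stamp out variants. This is only a minimal sketch: the job name "TemplateLookupJob" and hashed file name "HF_TEMPLATE" below are hypothetical placeholders, the fragment is not a real export, and the DSX internals vary by release, so always validate the result by re-importing it through the client tools.

```python
import re

def clone_job(dsx_text, name_map):
    """Return a copy of the exported job text with whole-word names swapped.

    name_map maps old identifiers (job name, hashed file name, column
    names) to their replacements. Whole-word matching avoids clobbering
    names that merely contain another name as a substring.
    """
    out = dsx_text
    for old, new in name_map.items():
        out = re.sub(r"\b" + re.escape(old) + r"\b", new, out)
    return out

# Tiny illustrative fragment standing in for a real .dsx export:
template = (
    'BEGIN DSJOB\n'
    '   Identifier "TemplateLookupJob"\n'
    '   Name "HF_TEMPLATE"\n'
    'END DSJOB'
)

variant = clone_job(template, {
    "TemplateLookupJob": "LookupJob_Customer",
    "HF_TEMPLATE": "HF_CUSTOMER",
})
print(variant)
```

In practice you would read the template from the exported file and write each variant to its own .dsx before importing, but as the replies above say, the validation effort may well exceed the time saved over copy/paste in the Designer.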
-craig
"You can never have too many knives" -- Logan Nine Fingers
You say the hashed files are similar. Could making the hashed files identical simplify the process? If a square peg will not fit into a round hole, then make more round pegs.
Chuck Smith
www.anotheritco.com