Shared Container problem

Post questions here related to DataStage Server Edition for areas such as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

reddy
Premium Member
Posts: 168
Joined: Tue Dec 07, 2004 12:54 pm

Shared Container problem

Post by reddy »

Hello Sir,

DataStage gurus, please help me:

I created a hashed file with a stream link in a shared container (it just dumps rows from a database table)

and am using this shared container as a lookup (reference link) in another job.

I am getting a compilation error about a link mismatch (the stream link in the shared container differs from the reference link in the main job).

Please help me out, DataStage gurus.

Thanks in advance

Thanks
Narasa
pnchowdary
Participant
Posts: 232
Joined: Sat May 07, 2005 2:49 pm
Location: USA

Post by pnchowdary »

Hi Reddy,

Didn't we answer the same post on 06/24/05 in a different thread? Is this a different problem?

Here is your old thread viewtopic.php?t=93631
reddy
Premium Member
Posts: 168
Joined: Tue Dec 07, 2004 12:54 pm

Post by reddy »

pnchowdary wrote:Hi Reddy,

Didn't we answer the same post on 06/24/05 in a different thread? Is this a different problem?

Here is your old thread viewtopic.php?t=93631

Hi Chowdary,

Thanks for your replies.

This was your previous response:

ODBC -------------> Tranformer -------------> Hash File ------------>

L1 is the link between ODBC and Tranformer
L2 is the link between Transformer and Hash File
L3 is the link from the Hash file to the output

I believe that you have all the links L1,L2,L3 as stream links (solid arrow)
To get rid of your error, right click on link L3 and from the popup menu, press convert to reference.

This should take care of your problem. Let me know whether it worked for ya.

But in my case I don't have link L3; I only have L1 and L2.

I don't know how to create a reference link (dotted arrow) for link L2.

When I try to convert it to a reference link, I get an error.

Thanks

Narasa
Sainath.Srinivasan
Participant
Posts: 3337
Joined: Mon Jan 17, 2005 4:49 am
Location: United Kingdom

Post by Sainath.Srinivasan »

That is because a transform stage does not support reference links emerging from it.
reddy
Premium Member
Posts: 168
Joined: Tue Dec 07, 2004 12:54 pm

Post by reddy »

Sainath.Srinivasan wrote:That is because a transform stage does not support reference links emerging from it.

Hi Sainath,

Thanks for the reply. My client asked me to create shared containers for all
the hashed file creation jobs and to use these shared containers in the master job.

What I did was create shared containers for all of these hashed file creation jobs. When I used the shared containers in the master job, I got compilation errors.

Can you suggest any other approach?

Otherwise, should I tell the client that we can't use shared containers for hashed file creation?

Please advise.

Thanks
Narasa
Sainath.Srinivasan
Participant
Posts: 3337
Joined: Mon Jan 17, 2005 4:49 am
Location: United Kingdom

Post by Sainath.Srinivasan »

Let us take a step back.

You designed a job to 'create' a hashed file, whereas the other developer is using it to 'reference' a hashed file. So where is the hashed file now?

I see a 'missing link' there.
reddy
Premium Member
Posts: 168
Joined: Tue Dec 07, 2004 12:54 pm

Post by reddy »

Sainath.Srinivasan wrote:Let us take a step back.

You designed a job to 'create' a hashed file, whereas the other developer is using it to 'reference' a hashed file. So where is the hashed file now?

I see a 'missing link' there.

Sainath,

I created a shared container for the job that outputs the hashed file, like this:

a simple input, Transformer, and hashed file output.

ODBC ----------> Transformer -----------------> Hashed File


Other developers using this shared container as a reference
are getting compilation errors, because in the shared container the hashed file has a stream link, whereas in the main job it is used as a reference.

I hope you got my point.

Thanks
Narsa
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

This isn't a "shared container" problem, it's a usage problem. This may not be an appropriate use of a shared container.

If you really wanted to do this, you'd need to include a Container Output stage in the SC and then drag a reference link from the hash file to the output. This would give your developers something to hook onto to do their lookups from the SC Hash in their jobs.

But think about this for a sec... that would also mean that each job that includes this SC would rebuild the hash from the ODBC connection each time it runs just before it starts processing rows. Is that really what you want? :?
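The cost Craig describes can be sketched outside DataStage. The following is a hypothetical Python analogue (not DataStage code; all names are illustrative): a job that embeds the shared container rebuilds the lookup from the source on every run, while a job that only references a prebuilt lookup skips that cost entirely.

```python
# Hypothetical sketch (not DataStage): contrast rebuilding a lookup
# inside every consuming job with referencing one built ahead of time.

# Stands in for the ODBC source table: (key, value) rows.
SOURCE_ROWS = [(i, f"value-{i}") for i in range(100_000)]

def build_lookup(rows):
    """Analogue of the SC's ODBC -> Transformer -> Hashed File path."""
    return {key: val for key, val in rows}

# Shared-container style: every job pays the full build cost
# before it can process its first row.
def job_with_embedded_build(keys):
    lookup = build_lookup(SOURCE_ROWS)   # rebuilt on every single run
    return [lookup.get(k) for k in keys]

# Separate-population style: a dedicated job builds the lookup once;
# consuming jobs only read it.
PREBUILT = build_lookup(SOURCE_ROWS)

def job_with_reference_only(keys):
    return [PREBUILT.get(k) for k in keys]
```

Both functions return the same answers; the difference is that the first pays the extract-and-build cost on every run, which is exactly the hidden overhead of embedding the build inside the shared container.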
-craig

"You can never have too many knives" -- Logan Nine Fingers
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

That's why, in the Best Practices class, the recommendation is to create totally separate jobs to populate hashed files.

The reason for this is obvious when you accept that more than one job at once can be using the hashed file for reference lookups (and, indeed, that the cache of this can be shared - full details in dsdskche.pdf).
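The populate-once, reference-many pattern Ray recommends can be sketched as a hypothetical Python analogue (not DataStage code; the file format and function names are illustrative): one dedicated population step writes the lookup data, then several concurrent "jobs" read a single shared, read-only copy, loosely analogous to multiple jobs sharing the hashed-file read cache.

```python
# Hypothetical sketch (not DataStage): one population job runs first;
# many concurrent lookup jobs then share a single read-only cache.
import json
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def populate(path):
    """The separate population job: runs once, before any lookups."""
    with open(path, "w") as f:
        json.dump({str(i): i * i for i in range(1000)}, f)

def load_shared(path):
    """Load the file once into a cache that all consuming jobs share."""
    with open(path) as f:
        return json.load(f)

fd, path = tempfile.mkstemp(suffix=".json")
os.close(fd)
populate(path)                # the dedicated population job
CACHE = load_shared(path)     # one copy in memory, shared by all readers

def lookup_job(keys):
    """A consuming job: pure reads, so it is safe to run concurrently."""
    return [CACHE.get(str(k)) for k in keys]

# Three "jobs" doing reference lookups at the same time.
with ThreadPoolExecutor(max_workers=3) as ex:
    results = list(ex.map(lookup_job, [[1, 2], [3, 4], [5, 6]]))
```

Because the consuming jobs never write, any number of them can run at once against the same copy, which is the property that makes the separate-population design safe.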
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.