
Shared Container problem

Posted: Wed Jun 29, 2005 6:37 am
by reddy
Hello Sir,

DataStage gurus, please help me:

I created a hashed file with a stream link in a shared container (it just dumps rows from a database table),

and I am using this shared container as a lookup (reference link) in another job.

I am getting a compilation error about a link mismatch (the stream link in the shared container differs from the reference link in the main job).

Please help me out, DataStage gurus.

Thanks in advance

Thanks
Narasa

Posted: Wed Jun 29, 2005 9:20 am
by pnchowdary
Hi Reddy,

Didn't we answer the same post on 06/24/05 in a different thread? Is this problem a different one?

Here is your old thread viewtopic.php?t=93631

Posted: Wed Jun 29, 2005 9:55 am
by reddy
pnchowdary wrote:Hi Reddy,

Didn't we answer the same post on 06/24/05 in a different thread? Is this problem a different one?

Here is your old thread viewtopic.php?t=93631

Hi Chowdary,

Thanks for the replies.

This is your previous response:

ODBC -------------> Tranformer -------------> Hash File ------------>

L1 is the link between ODBC and Tranformer
L2 is the link between Transformer and Hash File
L3 is the link from the Hash file to the output

I believe that you have all the links L1, L2, L3 as stream links (solid arrows).
To get rid of your error, right-click on link L3 and, from the popup menu, choose Convert to Reference.

This should take care of your problem. Let me know whether it worked for ya.

But in my case I don't have link L3; I only have L1 and L2.

I don't know how to make link L2 a reference link (dotted arrow).

When I try to convert it to a reference link I get an error.

Thanks

Narasa

Posted: Wed Jun 29, 2005 10:41 am
by Sainath.Srinivasan
That is because a Transformer stage does not support reference links emerging from it.
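
Put another way, using the link names from earlier in the thread (a sketch, not an exact job layout): L1 and L2 must stay stream links, and only a new link drawn out of the Hashed File stage can be converted to a reference:

ODBC ----L1----> Transformer ----L2----> Hashed File ----L3----> (convert L3 to reference)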

Posted: Wed Jun 29, 2005 10:49 am
by reddy
Sainath.Srinivasan wrote:That is because a Transformer stage does not support reference links emerging from it.

Hi Sainath,

Thanks for the reply. My client asked me to create shared containers for all
hashed file creation jobs and to use these shared containers in the master job.

What I did was create shared containers for all these hashed file creation jobs. When I use the shared containers in the master job, I get compilation errors.

Can you suggest any new ideas?

Otherwise, can I tell the client that we can't use shared containers for hashed file creation?

Please advise.

Thanks
Narasa

Posted: Wed Jun 29, 2005 10:51 am
by Sainath.Srinivasan
Let us take a step back.

You designed a job to 'create' a hashed file, whereas the other developer is using it to 'reference' the hashed file. So where is the hashed file now?

I see a 'missing link' there.

Posted: Wed Jun 29, 2005 11:28 am
by reddy
Sainath.Srinivasan wrote:Let us take a step back.

You designed a job to 'create' a hashed file, whereas the other developer is using it to 'reference' the hashed file. So where is the hashed file now?

I see a 'missing link' there.

Sainath,

I created a shared container for the job that outputs the hashed file, like this:

a simple input, Transformer, and hashed file output.

ODBC ----------> Transform -----------------> Hash file


Some other developers are using this shared container for reference lookups,
and they are getting compilation problems because the hashed file in the shared container has a stream link, whereas in the main job it is used as a reference link.

I hope you got my point.

Thanks
Narsa

Posted: Wed Jun 29, 2005 1:11 pm
by chulett
This isn't a "shared container" problem, it's a usage problem. This may not be an appropriate use of a shared container.

If you really wanted to do this, you'd need to include a Container Output stage in the SC and then drag a reference link from the hash file to the output. This would give your developers something to hook onto to do their lookups from the SC Hash in their jobs.

But think about this for a sec... that would also mean that each job that includes this SC would rebuild the hash from the ODBC connection each time it runs just before it starts processing rows. Is that really what you want? :?
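
One way to picture that suggested layout (stage and link names here are illustrative placeholders, not taken from the original job):

Inside the shared container:
ODBC ----L1----> Transformer ----L2----> Hashed File - - - - > Container Output

The dotted link from the Hashed File stage to the Container Output stage is the reference link; in each job that drops the container onto the canvas, that container output is what the developers wire up as the lookup.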

Posted: Wed Jun 29, 2005 6:48 pm
by ray.wurlod
That's why, in the Best Practices class, the recommendation is to create totally separate jobs to populate hashed files.

The reason for this is obvious when you accept that more than one job at a time can be using the hashed file for reference lookups (and, indeed, that its read cache can be shared - full details in dsdskche.pdf).
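
A sketch of that recommended separation (job and stage names are illustrative only):

Job A (populate):      ODBC ----> Transformer ----> Hashed File

Job B, C, ... (use):   Source ----> Transformer ----> Target
                                         :
                                    Hashed File (reference lookup, dotted link)

The populate job runs first (for example, as the opening step of a sequence), after which any number of jobs can reference the same hashed file without rebuilding it each time.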