Accessing UniVerse/hashed files using the ODBC stage in PX jobs

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

rkacham_DSX
Charter Member
Posts: 27
Joined: Tue Nov 02, 2004 5:34 pm

Accessing UniVerse/hashed files using the ODBC stage in PX jobs

Post by rkacham_DSX »

Hi,

I am trying to access UniVerse hashed files using the ODBC stage in parallel jobs. I am able to connect to UniVerse using the ODBC stage from my server jobs, but when I try the same thing in a parallel job I cannot connect to UniVerse via ODBC.

I was wondering: do we need to change any entries in the .odbc.ini file, or in any other environment file, to access UniVerse via ODBC from parallel jobs?
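
For reference, UniVerse DSNs for DataStage are typically defined in the project's uvodbc.config file rather than only in .odbc.ini. A minimal sketch of an entry (the DSN name, host, and service shown here are placeholders, not values from this thread):

```
[ODBC DATA SOURCES]
<localuv>
DBMSTYPE = UNIVERSE
network = TCP/IP
service = uvserver
host = 127.0.0.1
```

The `<localuv>` entry usually exists by default and points at the local UniVerse instance; a remote UniVerse host would get its own entry with its host name in place of 127.0.0.1.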


Thanks in advance.
Thanks,
Ramesh
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

You should NOT use plain ODBC to connect to UniVerse. If you understand UniVerse technology, you will know that support for multi-valued attributes exists within the UniVerse (UV/ODBC) stage.

Even then, that alone is not sufficient for connecting to UniVerse. You will need to set up clean dictionaries with properly defined multivalue associations and their phrases, so that the multi-valued attributes are properly normalized. You also need to avoid I-descriptors, F and T correlatives that span other files, and the like.
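
Very roughly, a "clean" dictionary here means each multivalued field is a D-type record whose S/M flag names its association, backed by a phrase record grouping the associated fields. A made-up sketch (file name, field names, and attribute positions are illustrative only):

```
ED DICT ORDERS LINE.QTY
0001: D
0002: 7
0003:
0004: Line Qty
0005: 8R
0006: M ORDER.LINES

ED DICT ORDERS ORDER.LINES
0001: PH
0002: LINE.ITEM LINE.QTY LINE.PRICE
```

The ODBC layer can then normalize the ORDER.LINES association into its own logical table; fields computed by I-descriptors or by correlatives reaching into other files tend to defeat that normalization.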
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
rkacham_DSX
Charter Member
Posts: 27
Joined: Tue Nov 02, 2004 5:34 pm

Post by rkacham_DSX »

In the IBM DataStage Advanced Developer class, they said the only way to access hashed files in parallel jobs is via the ODBC stage.

Is there any other way to access hashed files in parallel jobs?

We have a project built in server jobs, and further development of that project is being done in parallel jobs, so we need to access those hashed files from parallel jobs.
Thanks,
Ramesh
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

UniVerse is something else entirely. If you need to reference DataStage hashed files in a PX job, then you should consider using a container with a Hashed File stage reading the hashed file and then moving the data into a dataset of some kind. Using the ODBC stage to read a hashed file will work if you simply need to access the physical file, but that's a messy way to do it: you have to constrain the job to operate on the node that holds the hashed file, often the server node. It's just an awful way to do it.

Use a Hashed File stage to read hashed files; makes sense, doesn't it?
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

You can also use External Source or External Target stages if the number of records is small, using the somewhat less efficient UVread and UVwrite executables. Note that these stages will need to be constrained to execute in a node pool all of whose nodes are on the same machine as the DataStage server.
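
Constraining those stages to the server machine is done with a node pool in the parallel configuration file (pointed to by APT_CONFIG_FILE). A minimal sketch, with the host name, pool name, and paths as placeholders:

```
{
  node "node1"
  {
    fastname "dsserver"
    pools "" "server_only"
    resource disk "/ibm/ds/data" {pools ""}
    resource scratchdisk "/ibm/ds/scratch" {pools ""}
  }
}
```

Setting the stage's node pool constraint to "server_only" then forces it to execute only on nodes in that pool, i.e. on the machine where the DataStage server (and the hashed file) lives.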
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.