
ODBC.ini file Alternatives

Posted: Mon Mar 02, 2015 7:23 pm
by ADIRBK2010
Hi All ,

I have a rather unusual requirement: connect to more than 300 SQL Server databases and run a simple query on each one.

I have created a job that connects to all of these. All the DSN/connection entries were added to the .odbc.ini file, and I am parameterizing the connection name to run the job. The userid and password are the same for all of the databases.

Now the issue is updating the .odbc.ini file at regular intervals; due to the nature of the requirement, we may need to add or remove database connections from the .odbc.ini file weekly.

Is there any way to connect to SQL Server without making an entry in the .odbc.ini file, something like building the DSN at runtime?
I am assuming there might be some system functions that can achieve this.

Any thoughts on this?



Thanks
RPatra

Posted: Mon Mar 02, 2015 11:49 pm
by ray.wurlod
None of which I'm aware, given that you're on UNIX.

Posted: Tue Mar 03, 2015 6:51 am
by qt_ky
I think you would have to very carefully implement some fancy shell scripting to modify the .odbc.ini file.
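For what it's worth, adding an entry is mostly just appending a stanza to the file; deleting one cleanly is where the care comes in. A bare-bones sketch of the delete, assuming the DSN name to drop is passed as the first argument (the awk approach and the file path are just one way to do it, not anything DataStage-specific):

Code:

#!/bin/sh
# Remove one DSN stanza from .odbc.ini; the file path is a placeholder.
ODBCINI_FILE=$HOME/.odbc.ini
OLDDSN=$1

# Drop the "[$OLDDSN]" header and everything up to (not including)
# the next "[...]" section header.
awk -v dsn="[$OLDDSN]" '
    $0 == dsn     { skip = 1; next }
    /^\[/ && skip { skip = 0 }
    skip          { next }
    { print }
' "$ODBCINI_FILE" > "$ODBCINI_FILE.tmp" && mv "$ODBCINI_FILE.tmp" "$ODBCINI_FILE"

If the DSN is also listed under [ODBC Data Sources], that line needs removing as well, which is exactly the kind of detail that makes "carefully" the operative word.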

Posted: Thu Mar 05, 2015 12:13 pm
by MrBlack
I had a similar, though not identical, issue; maybe it'll help.

I had to grab data from various FoxPro files using an ODBC connection. This client was really unusual: instead of storing all of a facility's data in one file, they made a file for each department. So their file structure looked like this:

Code:

/<Facility>
//<Dept>
///<Year>
///<Year>
//<Dept>
///<Year>
///<Year>
//<Dept>
///<Year>
///<Year>
//<Dept>
///<Year>
///<Year>

/Building 1
//Sales
///2010
///2011
///2012
//HR
///2010
///2011
///2012
//Admin
///2010
///2011
///2012

/Building 2
//Sales
///2010
///2011
///2012
//HR
///2010
///2011
///2012
//Admin
///2010
///2011
///2012

So the number of ODBC locations could grow rapidly, because each department gets a new folder every year and departments come and go. What I did was mount this disk on my server (Linux) and create a "Staging Folder". In my DataStage job I would execute a command to look up how many facilities there are today, then in a second loop look up how many department folders there are for that facility and start sub-looping through that list. In the sub-loop I would create a Linux symbolic link from that location into my "Staging Folder". I configured my ODBC entry to point to the staging folder, so when my DataStage job tried to load the data, it would follow the symbolic link back to the actual data.

And that is how I got around having to declare numerous ODBC configurations: I created a staging location and used symbolic links back to the real data files. A rough sketch of the idea is below.
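For anyone trying the same trick, here is a minimal sketch of the kind of shell commands involved, assuming the FoxPro share is mounted at /mnt/foxpro and the staging folder is /data/stage (both paths are made up for this example, and the exact loop logic in the real job may differ):

Code:

#!/bin/sh
# Rebuild the staging folder of symbolic links before the load runs.
# /mnt/foxpro and /data/stage are placeholder paths for this sketch.
SRC=/mnt/foxpro
STAGE=/data/stage

rm -f "$STAGE"/*                                   # drop links from the previous run
ls -d "$SRC"/*/ | while read -r facility; do       # loop 1: today's facility folders
    ls -d "$facility"*/ | while read -r dept; do   # loop 2: that facility's departments
        # Build a unique link name such as "Building 1_Sales" and point it
        # back at the real department folder on the mounted share.
        name=$(echo "$dept" | sed "s|^$SRC/||; s|/$||; s|/|_|g")
        ln -s "${dept%/}" "$STAGE/$name"
    done
done

The ODBC definition itself never changes; it always points at the staging folder, and only the links inside it are refreshed.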

Posted: Tue Aug 25, 2015 11:55 am
by ADIRBK2010
Here is the way that we fixed this.


1. Create a shell script that generates a file containing the ODBC entries. We parameterised the database names and server names to the script.

2. In the parallel job that connects to the database, add the existing environment variable that contains the default odbc.ini path.

3. Change the value of that path to your custom location containing the custom ODBC entries.

That's all. Whenever the job executes, it refers to this custom odbc.ini location and connects only to the required databases. A rough sketch of such a generator script is below.
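To make step 1 concrete, here is a minimal sketch of a generator script, assuming the input list holds one line per DSN in the form "dsnname servername database"; the file locations, driver library path, and stanza keywords are placeholders and depend on the ODBC driver actually installed:

Code:

#!/bin/sh
# Generate a custom odbc.ini from a parameterised list of databases.
# The list file, output path, and driver library below are placeholder names.
DBLIST=/home/dsadm/dblist.txt                 # one line per DSN: dsnname servername database
OUT=/home/dsadm/custom_odbc.ini
DRIVER=/path/to/your/sqlserver_odbc_driver.so # depends on your driver install

{
    echo "[ODBC Data Sources]"
    while read -r dsn server db; do
        echo "$dsn=SQL Server"
    done < "$DBLIST"

    while read -r dsn server db; do
        echo ""
        echo "[$dsn]"
        echo "Driver=$DRIVER"
        echo "HostName=$server"
        echo "Database=$db"
        echo "PortNumber=1433"
    done < "$DBLIST"
} > "$OUT"

Steps 2 and 3 then just point the job at that file: on most UNIX ODBC setups the variable holding the .odbc.ini path is ODBCINI, so overriding it with the generated file's path as a job parameter makes the connector read only the required entries.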

Posted: Tue Aug 25, 2015 12:29 pm
by chulett
8)