
ODBC SQL load question

Posted: Wed Dec 03, 2003 10:22 am
by kwt10
We are having a problem with one of our jobs and need the help of the forum!!!

We have a Job with 15 input files that are writing into one SQL Server table. The data in the separate files comes from Complex Flat Files. We are using the ODBC stage to load the table. When we kick off our job containing the 15 loads we always get errors that state:

[DataStage][SQL Client][ODBC][Microsoft][ODBC SQL Server Driver][SQL Server]The SQL Server cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users or ask the system administrator to check the SQL Server lock and memory configurations.

The only way around this error that we have found is to split the job so that no more than two or three files convert at a time. This is a workaround, but we would like to run all 15 files at the same time to save on the load time.

Any help or thoughts would be helpful.

Re: ODBC SQL load question

Posted: Wed Dec 03, 2003 10:27 am
by JDionne
kwt10 wrote:We are having a problem with one of our jobs and need the help of the forum!!!

We have a Job with 15 input files that are writing into one SQL Server table. The data in the separate files comes from Complex Flat Files. We are using the ODBC stage to load the table. When we kick off our job containing the 15 loads we always get errors that state:

[DataStage][SQL Client][ODBC][Microsoft][ODBC SQL Server Driver][SQL Server]The SQL Server cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users or ask the system administrator to check the SQL Server lock and memory configurations.

The only way around this error that we have found is to split the job so that no more than two or three files convert at a time. This is a workaround, but we would like to run all 15 files at the same time to save on the load time.

Any help or thoughts would be helpful.

What I would do (for tracking reasons as well as resource reasons) would be to combine all the flat files into one and then load the one file. That way you can audit the process and limit the locks you will need to input the data.
Jim

Posted: Wed Dec 03, 2003 10:41 am
by kwt10
The files all come across from different sources and we need to keep them separate. We actually place a key in front of the entire record during the conversion to keep the data separate in the SQL table.

Thanks and please keep the help coming.

Re: ODBC SQL load question

Posted: Wed Dec 03, 2003 10:55 am
by chulett
kwt10 wrote:[DataStage][SQL Client][ODBC][Microsoft][ODBC SQL Server Driver][SQL Server]The SQL Server cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users or ask the system administrator to check the SQL Server lock and memory configurations.
I don't do SQL Server - but I think your answer is pretty well spelled out in the error message. You are hitting a per-user or per-process threshold. If you can't throttle the number of loads back, have an admin adjust (ie, increase) whatever limits you are slamming into.

Re: ODBC SQL load question

Posted: Wed Dec 03, 2003 11:18 am
by JDionne
chulett wrote:
kwt10 wrote:[DataStage][SQL Client][ODBC][Microsoft][ODBC SQL Server Driver][SQL Server]The SQL Server cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users or ask the system administrator to check the SQL Server lock and memory configurations.
I don't do SQL Server - but I think your answer is pretty well spelled out in the error message. You are hitting a per-user or per-process threshold. If you can't throttle the number of loads back, have an admin adjust (ie, increase) whatever limits you are slamming into.
Unfortunately, if the admin set up this server "out of the box" then the locks are being set up dynamically based on resources. There may not be any "throttling" of this db. :(
Jim
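
[Editor's note: whether the lock pool really is dynamic can be checked with sp_configure 'locks' (run_value 0 means dynamic lock management; any other value is a fixed cap, which is also where an admin would raise the limit chulett mentions). The sketch below drives that check over ODBC with Python's pyodbc purely for illustration; the DSN name and credentials are placeholders, and in practice an admin would just run the same sp_configure calls directly in Query Analyzer.]

    # Sketch only, not part of the thread's jobs: inspect the SQL Server 'locks'
    # setting over ODBC. DSN, user and password are placeholders.
    import pyodbc

    conn = pyodbc.connect("DSN=my_sqlserver;UID=etl_user;PWD=secret", autocommit=True)
    cur = conn.cursor()

    # 'locks' is an advanced option, so it has to be exposed first (needs sysadmin rights).
    cur.execute("EXEC sp_configure 'show advanced options', 1")
    cur.execute("RECONFIGURE")

    cur.execute("EXEC sp_configure 'locks'")
    for name, minimum, maximum, config_value, run_value in cur.fetchall():
        # run_value 0 = dynamic lock management; anything else is a hard cap.
        print(name, config_value, run_value)

    conn.close()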

Posted: Wed Dec 03, 2003 11:24 am
by 1stpoint
Consider placing a Hash File stage in between the CFF stages and before the final output to SQL. The Hash File would mimic the SQL target and the process can then feed the data in one push.

Re: ODBC SQL load question

Posted: Wed Dec 03, 2003 12:00 pm
by shawn_ramsey
chulett wrote:I don't do SQL Server - but I think your answer is pretty well spelled out in the error message. You are hitting a per-user or per-process threshold. If you can't throttle the number of loads back, have an admin adjust (ie, increase) whatever limits you are slamming into.
It may also be a license limit on the SQL Server. Depending on how the server is licensed, there may be a cap on the number of concurrent sessions that are allowed.

Re: ODBC SQL load question

Posted: Wed Dec 03, 2003 2:02 pm
by JDionne
shawn_ramsey wrote:
chulett wrote:I don't do SQL Server - but I think your answer is pretty well spelled out in the error message. You are hitting a per-user or per-process threshold. If you can't throttle the number of loads back, have an admin adjust (ie, increase) whatever limits you are slamming into.
It may also be a license limit on the SQL Server. Depending on how the server is licensed, there may be a cap on the number of concurrent sessions that are allowed.
It's more of a lock issue than concurrent users; one user can hold multiple locks, so it boils down to the resources of the machine. The hash file idea is not a bad one.
Jim

Posted: Wed Dec 03, 2003 2:35 pm
by kwt10
Since I am new to the entire ETL structure, can someone explain to me how I would use a hash file? I have 15 files coming in and I need to convert them all into a single SQL Server table with a key added to the front of each record. More details of the problem are in the first post of this thread.

Thanks again

Posted: Wed Dec 03, 2003 2:39 pm
by kcbland
Skip using a hash file. Simply have each job produce a sequential text file as its output. Make sure all jobs produce exactly the same format.

Then, just concatenate them together using a command stage issuing a copy statement (search the forum, this is covered a lot).

Now you have a single load file, have a job read this file and load it.
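
[Editor's note: if the copy-statement route is unfamiliar, here is a rough sketch of that concatenation step outside DataStage, in Python; the load_*.txt file names are invented for the example, and the real command stage would just issue the equivalent DOS copy of the 15 files into one. Either way, the database only ever sees one writer.]

    # Sketch: glue the per-job sequential files (hypothetical names) into one load file.
    import glob

    with open("combined_load.txt", "w") as out:
        for path in sorted(glob.glob("load_*.txt")):   # the 15 same-format job outputs
            with open(path) as part:
                for line in part:
                    out.write(line)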

Posted: Wed Dec 03, 2003 2:45 pm
by 1stpoint
huh?

This type of problem is exactly the kind of thing hash files are good for.

Posted: Wed Dec 03, 2003 2:53 pm
by kcbland
If the hash file is simply being used as a collection point, not for reference purposes, then my suggestion is a good one.

If the multiple independent streams have to reference the hash file to see if the row is already there, that's a different story. But the problem is that multiple processes are hitting the same target table and the database can't sustain that. The solution is probably to pool the data to disk and hit the database once. In that case, sequential files are almost maintenance free, but there is a cost to using the hash file.

The reason is that a hash file is a storage construct that is constantly growing and resizing itself. When you have a single process writing to it, every so often the job will stall while the file dynamically resizes. If you have multiple jobs writing to the same hash file, you'll see that resizing more often. Mitigating this with write-delay caching can make it worse, because when the jobs finish they each purge their cache all at once, which can mean several jobs flushing output i/o to that hash file at the same time.

If the effort is simply to concatenate multiple job stream results into a single file, then the approach I suggested is the easiest. Not only will you be able to visually inspect the data, but you also have the option of loading the individual files in a loop, or concatenating them into a single file and loading that.

Posted: Wed Dec 03, 2003 4:03 pm
by shawn_ramsey
kcbland wrote:Skip using a hash file. Simply have each job produce a sequential text file as its output. Make sure all jobs produce exactly the same format.

Then, just concatenate them together using a command stage issuing a copy statement (search the forum, this is covered a lot).

Now you have a single load file, have a job read this file and load it.
If you are loading all these files into one SQL Server table, you would be better off using a job sequencer to load one file at a time. With few exceptions, trying to load them in parallel into one table is not going to provide any performance benefit, and may well have the opposite effect and slow you down.

Another option that I have not tried but should work is to use the combiner stage to bring the 15 feeds together into one output.
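
[Editor's note: for what the serial approach looks like in practice, here is a minimal sketch of the "one file at a time" idea outside DataStage, again in Python over ODBC. The stage_load table, its two columns, and the file names are assumptions for the example; in DataStage itself this would simply be a job sequencer running one load job per file.]

    # Sketch: load the files serially so only one process holds locks on the table.
    import csv
    import glob
    import pyodbc

    conn = pyodbc.connect("DSN=my_sqlserver;UID=etl_user;PWD=secret")
    cur = conn.cursor()

    for path in sorted(glob.glob("load_*.txt")):       # one file at a time, never in parallel
        with open(path) as f:
            rows = [(rec[0], rec[1]) for rec in csv.reader(f)]   # assumes two delimited columns
        cur.executemany("INSERT INTO stage_load (rec_key, rec_data) VALUES (?, ?)", rows)
        conn.commit()                                  # commit per file to keep lock time short

    conn.close()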

Posted: Wed Dec 03, 2003 4:09 pm
by JDionne
shawn_ramsey wrote:
kcbland wrote:Skip using a hash file. Simply have each job produce a sequential text file as its output. Make sure all jobs produce exactly the same format.

Then, just concatenate them together using a command stage issuing a copy statement (search the forum, this is covered a lot).

Now you have a single load file, have a job read this file and load it.
If you are loading all these files into one SQL Server table, you would be better off using a job sequencer to load one file at a time. With few exceptions, trying to load them in parallel into one table is not going to provide any performance benefit, and may well have the opposite effect and slow you down.

Another option that I have not tried but should work is to use the combiner stage to bring the 15 feeds together into one output.
The combiner could get tricky because the metadata has to match. It should work, though.
Jim