ODBC SQL load question

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

kwt10
Participant
Posts: 17
Joined: Tue Nov 25, 2003 9:33 am

ODBC SQL load question

Post by kwt10 »

We are having a problem with one of our jobs and need the help of the forum!!!

We have a job with 15 input files that all write into one SQL Server table. The data in the separate files comes from Complex Flat Files. We are using the ODBC stage to load the table. When we kick off the job containing the 15 loads, we always get errors that state:

[DataStage][SQL Client][ODBC][Microsoft][ODBC SQL Server Driver][SQL Server]The SQL Server cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users or ask the system administrator to check the SQL Server lock and memory configurations.

The only way around this error that we have found is to take the job and split it so that no more than two or three files convert at a time. This is a workaround, but we would like to run all 15 files at the same time to save on the load time.

Any help or thoughts would be helpful.
JDionne
Participant
Posts: 342
Joined: Wed Aug 27, 2003 1:06 pm

Re: ODBC SQL load question

Post by JDionne »

kwt10 wrote:We are having a problem with one of our jobs and need the help of the forum!!!

We have a job with 15 input files that all write into one SQL Server table. The data in the separate files comes from Complex Flat Files. We are using the ODBC stage to load the table. When we kick off the job containing the 15 loads, we always get errors that state:

[DataStage][SQL Client][ODBC][Microsoft][ODBC SQL Server Driver][SQL Server]The SQL Server cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users or ask the system administrator to check the SQL Server lock and memory configurations.

The only way around this error that we have found is to take the job and split it so that no more than two or three files convert at a time. This is a workaround, but we would like to run all 15 files at the same time to save on the load time.

Any help or thoughts would be helpful.

What I would do (for tracking reasons as well as resource reasons) would be to combine all the flat files into one and then load that one file. That way you can audit the process and limit the locks you will need to insert the data.
Jim
Sure I need help....But who dosent?
kwt10
Participant
Posts: 17
Joined: Tue Nov 25, 2003 9:33 am

Post by kwt10 »

The files all come across from different sources and we need to keep them separate. We actually place a key in front of the entire record during the conversion to keep the data separate in the SQL Server table.

Thanks and please keep the help coming.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Re: ODBC SQL load question

Post by chulett »

kwt10 wrote:[DataStage][SQL Client][ODBC][Microsoft][ODBC SQL Server Driver][SQL Server]The SQL Server cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users or ask the system administrator to check the SQL Server lock and memory configurations.
I don't do SQL Server - but I think your answer is pretty well spelled out in the error message. You are hitting a per-user or per-process threshold. If you can't throttle the number of loads back, have an admin adjust (ie, increase) whatever limits you are slamming into.
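
Purely as an untested sketch lifted from the documentation (remember, I don't do SQL Server), this is the sort of thing I'd expect the admin to look at. By default the 'locks' option is 0, which tells SQL Server to size the lock pool dynamically from available memory:

-- 'locks' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- A run value of 0 means dynamic lock allocation
EXEC sp_configure 'locks'

-- Optionally pin a fixed number of locks instead of letting it float
-- (the number below is only an example - size it to the server's memory,
--  and the change may need a server restart to take effect)
EXEC sp_configure 'locks', 100000
RECONFIGURE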
-craig

"You can never have too many knives" -- Logan Nine Fingers
JDionne
Participant
Posts: 342
Joined: Wed Aug 27, 2003 1:06 pm

Re: ODBC SQL load question

Post by JDionne »

chulett wrote:
kwt10 wrote:[DataStage][SQL Client][ODBC][Microsoft][ODBC SQL Server Driver][SQL Server]The SQL Server cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users or ask the system administrator to check the SQL Server lock and memory configurations.
I don't do SQL Server - but I think your answer is pretty well spelled out in the error message. You are hitting a per-user or per-process threshold. If you can't throttle the number of loads back, have an admin adjust (ie, increase) whatever limits you are slamming into.
Unfortunately, if the admin set up this server "out of the box", then the locks are being set up dynamically based on resources. There may not be any "throttling" of this db. :(
Jim
Sure I need help....But who dosent?
1stpoint
Participant
Posts: 165
Joined: Thu Nov 13, 2003 2:10 pm

Post by 1stpoint »

Consider placing a Hash File stage between the CFF stages and the final output to SQL. The hash file would mimic the SQL target, and the process can then feed the data in one push.
shawn_ramsey
Participant
Posts: 145
Joined: Fri May 02, 2003 9:59 am
Location: Seattle, Washington. USA

Re: ODBC SQL load question

Post by shawn_ramsey »

chulett wrote:I don't do SQL Server - but I think your answer is pretty well spelled out in the error message. You are hitting a per-user or per-process threshold. If you can't throttle the number of loads back, have an admin adjust (ie, increase) whatever limits you are slamming into.
It may also be a license limit on the SQL Server. Depending on how the server is licensed, there may be a cap on the number of concurrent sessions that are allowed.
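
A rough way to check where you stand (sketch only, using the SQL Server 7/2000 system objects, and assuming you can log in with admin rights) is to compare the configured ceiling with the number of sessions actually connected:

-- Hard ceiling on simultaneous connections for this instance
SELECT @@MAX_CONNECTIONS AS max_connections

-- Configured 'user connections' (0 = let SQL Server use the maximum);
-- this is an advanced option, so 'show advanced options' may need to be on
EXEC sp_configure 'user connections'

-- Sessions currently connected
SELECT COUNT(*) AS current_sessions FROM master..sysprocesses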
Shawn Ramsey

"It is a mistake to think you can solve any major problems just with potatoes."
-- Douglas Adams
JDionne
Participant
Posts: 342
Joined: Wed Aug 27, 2003 1:06 pm

Re: ODBC SQL load question

Post by JDionne »

shawn_ramsey wrote:
chulett wrote:I don't do SQL Server - but I think your answer is pretty well spelled out in the error message. You are hitting a per-user or per-process threshold. If you can't throttle the number of loads back, have an admin adjust (ie, increase) whatever limits you are slamming into.
It may also be a license limit on the SQL Server. Depending on how the server is licensed, there may be a cap on the number of concurrent sessions that are allowed.
It's more of a lock issue than a concurrent-user issue - one user can hold multiple locks. It boils down to the resources of the machine. The hash file idea is not a bad one.
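
If you want to see it for yourself while the loads are running, something like this (a rough sketch against the SQL Server 2000 syslockinfo table - double-check the column names on your release) shows how many locks each session is holding. You will typically find each of the 15 load sessions sitting on a pile of row locks against the same table:

-- Locks currently held, grouped by session
SELECT req_spid AS spid, COUNT(*) AS locks_held
FROM master..syslockinfo
GROUP BY req_spid
ORDER BY locks_held DESC

-- Or use the stock procedure, which lists every lock individually
EXEC sp_lock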
Jim
Sure I need help....But who dosent?
kwt10
Participant
Posts: 17
Joined: Tue Nov 25, 2003 9:33 am

Post by kwt10 »

Since I am new to the entire ETL structure, can someone explain how I would use a hash file? I have 15 files coming in and I need to convert them all into a single SQL Server table, with a key added to the front of each record. More details of the problem are in the first post of this thread.

Thanks again
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL

Post by kcbland »

Skip using a hash file. Simply have each job produce a sequential text file as its output. Make sure all jobs produce exactly the same format.

Then, just concatenate them together using a command stage issuing a copy statement (search the forum, this is covered a lot).

Now you have a single load file; have a job read this file and load it.
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
1stpoint
Participant
Posts: 165
Joined: Thu Nov 13, 2003 2:10 pm

Post by 1stpoint »

huh?

Solving this type of problem is exactly one of the major benefits of hash files.
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL

Post by kcbland »

If the hash file is simply being used as a collection point, not for reference purposes, then my suggestion is a good one.

If the multiple independent streams have to reference the hash file to see if a row is already there, that's a different story. But the problem here is that multiple processes are hitting the same target table and the database can't sustain that. The solution is probably to pool the data to disk and hit the database once. For that, sequential files are almost maintenance-free, whereas there is a cost to using a hash file.

The reason is that a hash file is a storage construct that is constantly growing and resizing itself. When you have a single process writing to it, every so often the job will stall while the file dynamically resizes. If you have multiple jobs writing to the same hash file, you'll see that resizing more often. If you mitigate this with write-delay caching, it can be worse, because when the jobs finish they each purge their cache at once, and you can end up with several jobs all flushing their cached output I/O into that hash file at the same time.

If the effort is simply to concatenate the results of multiple job streams into a single file, then the approach I suggested is the easiest. Not only will you be able to visually inspect the data, but you also have the option of loading the individual files in a loop or concatenating them into a single file and loading that.
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
shawn_ramsey
Participant
Posts: 145
Joined: Fri May 02, 2003 9:59 am
Location: Seattle, Washington. USA

Post by shawn_ramsey »

kcbland wrote:Skip using a hash file. Simply have each job produce a sequential text file as its output. Make sure all jobs produce exactly the same format.

Then, just concatenate them together using a command stage issuing a copy statement (search the forum, this is covered a lot).

Now you have a single load file; have a job read this file and load it.
If you are loading all these files into one SQL Server table you would be better off using a job sequencer to load one file at a time. With few exceptions, trying to load them in parallel into one table is not going to provide any performance benefit and may have just the opposite effect and slow you down.
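
If you want evidence of that before restructuring the job, a quick look while the 15 loads are running (again just a sketch, using the SQL Server 2000 sysprocesses table) will usually show the sessions queuing up behind each other on the target table:

-- Sessions that are currently blocked, and which spid is blocking them
SELECT spid, blocked AS blocking_spid, waittime, lastwaittype
FROM master..sysprocesses
WHERE blocked <> 0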

Another option that I have not tried but should work is to use the combiner stage to bring the 15 feeds together into one output.
Shawn Ramsey

"It is a mistake to think you can solve any major problems just with potatoes."
-- Douglas Adams
JDionne
Participant
Posts: 342
Joined: Wed Aug 27, 2003 1:06 pm

Post by JDionne »

shawn_ramsey wrote:
kcbland wrote:Skip using a hash file. Simply have each job produce a sequential text file as its output. Make sure all jobs produce exactly the same format.

Then, just concatenate them together using a command stage issuing a copy statement (search the forum, this is covered a lot).

Now you have a single load file; have a job read this file and load it.
If you are loading all these files into one SQL Server table you would be better off using a job sequencer to load one file at a time. With few exceptions, trying to load them in parallel into one table is not going to provide any performance benefit and may have just the opposite effect and slow you down.

Another option that I have not tried but should work is to use the combiner stage to bring the 15 feeds together into one output.
The combiner could get tricky because the metadata has to match, but it should work.
Jim
Sure I need help....But who dosent?