ODBC Timeout Expired Error
Moderators: chulett, rschirm, roy
-
- Participant
- Posts: 24
- Joined: Fri Oct 01, 2004 1:43 pm
- Location: USA
I have a job that reads a large amount of data from sequential files, aggregates it, and finally writes it directly to an ODBC stage. If the job processes a small amount of data, it works properly. When I process a large amount of data, such that the Aggregator doesn't pass any rows to the ODBC stage for some time, I get the following message:
Occurred: 9:34:28 PM On date: 3/15/2007 Type: Fatal
Event: APT_CombinedOperatorController(1),4: [DataDirect][ODBC SQL Server Driver][libssclient20]Timeout Expired.
I get the error exactly 20 minutes after the job starts. Something is obviously timing it out, but from where? Is this a DataStage parameter? A database parameter? A network parameter? I'm at a loss on how to approach this issue.
Any help/advice would be greatly appreciated.
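For later readers: a suspiciously round 20-minute cutoff usually points at a configured limit rather than a crash, so it helps to know which timeouts an ODBC client even has. A minimal sketch, using pyodbc as a stand-in for the DataDirect driver (the DSN and credentials below are invented for illustration):

```python
# Sketch only: pyodbc stands in for the DataDirect driver, and the
# DSN/credentials below are made up for illustration.

def build_conn_str(dsn, uid, pwd):
    """Assemble a keyword=value ODBC connection string."""
    parts = {"DSN": dsn, "UID": uid, "PWD": pwd}
    return ";".join(f"{k}={v}" for k, v in parts.items())

conn_str = build_conn_str("sqlserver_dsn", "etl_user", "secret")

# With pyodbc, two distinct client-side timeouts exist:
#   pyodbc.connect(conn_str, timeout=30)   # login timeout, in seconds
#   cnxn.timeout = 0                       # per-query timeout; 0 = unlimited
# (Shown in comments because no database is available here.)
```

The key point is that neither client-side timeout is an *idle* timeout; an idle-session limit on the database server, or a firewall dropping quiet connections, can still kill the session even when both of these are unlimited.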
-
- Participant
- Posts: 24
- Joined: Fri Oct 01, 2004 1:43 pm
- Location: USA
Might be a little early to mark it as Resolved, unless your experiments actually solved your problem. If that's the case, please share your findings with the world so that others with the same problem can benefit from your blood, sweat and tears.
Thanks!
-craig
"You can never have too many knives" -- Logan Nine Fingers
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
It's not DataStage. The message shows that the ODBC driver manager was loaded successfully, and that the SQL Server driver was loaded successfully using the libssclient20 library. So the problem is outside the ODBC driver; it may be the network, but I'd check the database server first.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
-
- Premium Member
- Posts: 71
- Joined: Mon Nov 13, 2006 12:40 am
Hi Ray/Craig,
We have been getting lots of posts about timeout issues. My understanding is that when we issue the command to run the job, all the connections to the database are established. In the problem stated above, the job reads data from the source file and then does the aggregation. By the time the aggregation is done, the connection that was established to the target database has been timed out, because if a database connection sits idle for some time, the DB server will time it out automatically based on its settings. So, in my view, the only possible solution is to increase the database timeout.
Please correct me if I am wrong on any point. This is the concept I have in mind, so I just wanted your opinion on it.
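That theory can be made concrete with a toy model. To be clear, this is not how SQL Server implements idle timeouts; the class and the limit value below are invented purely to illustrate why "open the connection early, use it late" fails:

```python
import time

class IdleLimitedConnection:
    """Toy model of a server-side idle timeout (values are made up)."""
    def __init__(self, idle_limit_s):
        self.idle_limit_s = idle_limit_s
        self.last_used = time.monotonic()

    def execute(self, sql):
        # If the connection has sat idle past the limit, the server
        # has already dropped it; the driver reports "Timeout Expired".
        if time.monotonic() - self.last_used > self.idle_limit_s:
            raise TimeoutError("Timeout Expired")
        self.last_used = time.monotonic()
        return "ok"

# The job opens the connection up front...
conn = IdleLimitedConnection(idle_limit_s=0.1)
# ...then spends longer than the idle limit aggregating...
time.sleep(0.2)
# ...so the first INSERT after aggregation fails.
try:
    conn.execute("INSERT INTO target VALUES (1)")
    failed = False
except TimeoutError:
    failed = True
```

With a 20-minute server-side idle limit and an aggregation that holds back rows for longer than that, the first write after aggregation would fail in exactly this way.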
Thanks for all the help you have been providing.
Thanks & Regards,
It could be either the database or the network. To check the database, call a stored procedure that runs for more than 20 minutes (since you specified 20 minutes) through toad or sql plus. If the SP is still running after 20 minutes (you can ask your DBA to check the process), then talk with your network admin. If the SP is not running, talk with your DBA. We had the same issue, and for us it was the network.
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
sqlplus, alas no. However, now that 'TOAD' is actually 'Toad', yes!
It used to be an acronym - Tool for Oracle Application Developers. Now it's just the amphibian, and you really need to spell out which version you have as there are several:
Toad for Oracle
Toad for DB2
Toad for MySQL
Toad for SQL Server
:D
-craig
"You can never have too many knives" -- Logan Nine Fingers
And I agree that there can be multiple places you'd need to check for an "idle timeout" setting. As noted, the first would be your database, as I find that to be fairly common. Some tools can work around issues like that with a kind of 'keep alive' setting, where they ping the database periodically to let it know they are still there. DataStage won't do that.
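For anyone who wants such a keep-alive outside DataStage, here's a minimal sketch in Python. It assumes a `ping` callable that would issue a trivial query (e.g. SELECT 1) against the real connection; a stub list stands in for the database here, and the interval is shortened for the demo:

```python
import threading
import time

def start_keepalive(ping, interval_s=300.0):
    """Call ping() every interval_s seconds on a daemon thread
    until the returned stop event is set."""
    stop = threading.Event()

    def loop():
        # Event.wait returns False on timeout, True once stop is set.
        while not stop.wait(interval_s):
            ping()

    threading.Thread(target=loop, daemon=True).start()
    return stop

# Demo with a stub ping and a short interval (real use: minutes).
calls = []
stop = start_keepalive(lambda: calls.append(time.monotonic()), interval_s=0.05)
time.sleep(0.3)
stop.set()
```

In real use, `ping` would execute a cheap statement on the same connection the job holds open, resetting the server's idle clock.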
-craig
"You can never have too many knives" -- Logan Nine Fingers
-
- Participant
- Posts: 24
- Joined: Fri Oct 01, 2004 1:43 pm
- Location: USA
Still isn't going very well. I've checked with the DBAs, and they say all the connection timeouts are set to unlimited. I would like to note that this connects to MS SQL Server 2005. From reading the documentation, I see this:
SQLServer Wire Protocol (VMmsss20.so)
Supported databases: MS SQL Server 7.0; MS SQL Server 2000 (with service packs 1, 2 and 3)
Platforms: Solaris, AIX, Linux, HP-UX
It doesn't say anything about SQL Server 2005. Is it compatible? I'm still working with my network admins to see if the problem is possibly there. Thanks everyone for your comments. I'll keep you updated.
Methinks you could land to a file and then go from there to SQL Server; that way the ODBC connection doesn't open until the file is ready to be read, which is at the completion of your aggregation. Somewhere around here, someone like Craig or me mutters about milestoning and restartability.
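The land-then-load pattern can be sketched like this. The CSV format, file location, and stub `insert_row` target are illustrative assumptions, not a DataStage design; the point is simply that the database connection is opened only in phase 2, after the expensive aggregation has finished:

```python
import csv
import os
import tempfile

def land_to_file(rows, path):
    """Phase 1: write aggregated rows to disk, so the load phase
    can be restarted without redoing the aggregation."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)

def load_from_file(path, insert_row):
    """Phase 2: open the database connection only now, when the
    file is complete, and stream each row to insert_row()."""
    with open(path, newline="") as f:
        for row in csv.reader(f):
            insert_row(row)

# Demo with a stub insert function in place of a real ODBC target.
aggregated = [["2007-03-15", "42"], ["2007-03-16", "17"]]
path = os.path.join(tempfile.mkdtemp(), "agg.csv")
land_to_file(aggregated, path)

loaded = []
load_from_file(path, loaded.append)
```

Since no connection exists during phase 1, there is nothing for an idle timeout to kill, and a failed load can be rerun from the landed file.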
Kenneth Bland
Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
-
- Participant
- Posts: 24
- Joined: Fri Oct 01, 2004 1:43 pm
- Location: USA
I can land to a file and then load easily, but then I would incur the cost of picking the data back up again and loading it, which would break the pipelining concept. The total job runs in 40 minutes; I know this because, to get my data out there, I did just what you mentioned: dropped the file and then picked it up again and loaded it. I do adopt what you are saying for longer job runs, which for me means anything over an hour. I would rather have one job extract and load everything if it runs in under an hour, but I split longer jobs.
The pipelining concept works great right up to the point the job blows up. I'm just mentioning a work-around in case someone else reading this hasn't rationalized when recoverability warrants landing to a file first. You know someone six months from now will hijack your post saying they have a similar job that runs for 22 hours and dies right when it goes to load.
Kenneth Bland
-
- Participant
- Posts: 24
- Joined: Fri Oct 01, 2004 1:43 pm
- Location: USA
Folks sometimes don't realize that we often "preach" an answer in the hopes that future DSXchange members will read it and learn something. No offense is ever meant to the original poster; it's just us old guys trying to teach the new guys that some discretion may need exercising when designing jobs.
Thanks for taking it in stride.
Kenneth Bland