ODBC Error - Array size change results in suppression of it

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

Post Reply
Kirtikumar
Participant
Posts: 437
Joined: Fri Oct 15, 2004 6:13 am
Location: Pune, India

ODBC Error - Array size change results in suppression of it

Post by Kirtikumar »

Hi,

I have a job which reads a file and loads it into a SQL Server table. An ODBC stage is used to do this.

Whenever I try to load a big file, it gives me the following error.

Code: Select all

djsLoadGECStagingExp.3611_200709.tfbExp.lnkExpWrite: DSD.BCIPut call to SQLExecute failed.
SQL statement:INSERT INTO tblExposure(SiteID, PeriodID, ExposureID, BorrowerID, SupporterID, SupportType, ExposureDesc, EffNumLoan, UCurrency, NetCommitment, ExposureAtDefault, LossGivenDefault, LossGivenDefaultPercent, UsgAmount1, UsgAmount2, StartDate, NetAgreementInd, MaturityDate, RecurFee, UpfrontFee, NonUsgFee, UsgFee1, UsgFee2, FixCoupon, PayFrequency, CouponType, PrincipleType, AmortDate, Period, Entity, CustGroup, CustSubGroup1, CustSubGroup2, SecuritizationFlag, BaselPortfolioFlag) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?) 
SQLSTATE=, DBMS.CODE=0
[DataStage][SQL Client][ODBC][DataDirect][ODBC SQL Serv
When it gives me this error, my array size is 200 and rows per transaction is 0. If I change these parameters to array size 1 and rows per transaction 0, it works fine and the same file gets loaded properly into the table.

I searched the forum a lot, but no one seems to have faced such an issue.

This was working fine earlier with an array size of 200.

Any inputs or suggestions?
Regards,
S. Kirtikumar.
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

Might be a memory issue with the ODBC driver. Any reason you HAVE to use 200?
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
Kirtikumar
Participant
Posts: 437
Joined: Fri Oct 15, 2004 6:13 am
Location: Pune, India

Post by Kirtikumar »

Just to improve performance we are using 200. If we do not, the job takes around 80 minutes; if we set it to 200, it runs in 50 minutes or so.
Regards,
S. Kirtikumar.
Kirtikumar
Participant
Posts: 437
Joined: Fri Oct 15, 2004 6:13 am
Location: Pune, India

Post by Kirtikumar »

When we tested the same job with the same settings on prepared data, it ran fine.

But when we run it on the file extracted from an interface, it aborts.
Regards,
S. Kirtikumar.
Kirtikumar
Participant
Posts: 437
Joined: Fri Oct 15, 2004 6:13 am
Location: Pune, India

With array size 15 it starts working

Post by Kirtikumar »

Hi,

Still not able to get a solution for this, and I have tried various things.

Instead of running the job on thousands of rows, I ran it on 100 rows and it still failed. I then tried the following row counts with the array size at 200:

10 rows - ran fine
11 to 14 rows - ran fine
15 rows - ran fine
16 rows or above - fails

With the array size at 1 it runs fine for any number of rows. I thought it might be a memory problem, so I changed the buffer from the default 128 KB to 500 KB, but there was no change in behaviour.
Regards,
S. Kirtikumar.
chulett
Charter Member
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

It is obviously a memory problem, related to the amount of data you are trying to pull through the ODBC stage at one time. Suggest you stick with the smaller array sizes that work and be happy with it rather than burn out many more brain cells. Couple of questions...

You changed what "buffer", exactly? How did you change it?

How 'fat' are your records? Any clue what your average row length is?
-craig

"You can never have too many knives" -- Logan Nine Fingers
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Please post your uvodbc.config entry for this DSN. In particular, I would like to know whether you have made any attempt to tune the throughput of this (or every) DSN using the MAXFETCHBUFF and/or MAXFETCHCOLS tuneable parameters.
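For reference, a tuned entry might look something like the sketch below; the DSN name GEC_STAGING is taken from the job above, and the values are purely illustrative rather than recommendations:

Code: Select all

<GEC_STAGING>
DBMSTYPE = ODBC
MAXFETCHBUFF = 65536
MAXFETCHCOLS = 50

As I understand it, MAXFETCHBUFF sets the buffer size in bytes used for fetches through the BCI, and MAXFETCHCOLS caps the number of columns fetched per call.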
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Kirtikumar
Participant
Posts: 437
Joined: Fri Oct 15, 2004 6:13 am
Location: Pune, India

Post by Kirtikumar »

Hi,

I am just wondering: if buffer environment parameters were causing this to fail, then it should apply to all files. But it works fine for the cooked data and fails for the live file.

Some info about the job: row size 700 bytes, 35 columns.
The buffer I changed from 128 KB to 500 KB is the link row buffer in the job properties.

These are the uvodbc.config file contents:

Code: Select all

*** Allows access to ODBC sources through BCI ***
***
*** To get to any ODBC source other than UniVerse, you need entries
*** that look as follows (the data source must also be configured
*** via the operating system's own mechanisms):
***
*** <data source name>
*** DBMSTYPE = ODBC
***
***** NOTE THAT DATASTAGE VERSION 2.0 ONWARDS DOES NOT NEED THE ABOVE    *****
***** ENTRIES ON WINDOWS NT SYSTEMS - IT READS THE NT REGISTRY DIRECTLY. *****
***
*** The local DataStage Server Engine is available via the data source name
*** "localuv" as defined below - please do not alter this entry!
***
*** To access a remote UniVerse database, you need another entry similar to
*** that for localuv but with a remote host name in place of "localhost".
***
*** To access a (coresident) UniVerse on the local machine, you need to
*** specify your local machine name or IP address in place of "localhost".
***
*** Note that the spaces around the " = " signs are required, and the
*** data source name must be enclosed in angle brackets "<>".
***
[ODBC DATA SOURCES]
<localuv>
DBMSTYPE = UNIVERSE
network = TCP/IP
service = uvserver
host = 127.0.0.1

<GEC_CORE>
DBMSTYPE = ODBC

<GEC_STAGING>
DBMSTYPE = ODBC

<GECD_REP>
DBMSTYPE = ODBC
<#prmGECDSN#>
DBMSTYPE = ODBC
<#prmRFStagingDSN#>
DBMSTYPE = ODBC
<gecd_dev>
DBMSTYPE = ODBC
<gecd_rep>
DBMSTYPE = ODBC
<GECD_DEV>
DBMSTYPE = ODBC
<GEC_SYS>
DBMSTYPE = ODBC
<honey>
DBMSTYPE = ODBC
Regards,
S. Kirtikumar.
chulett
Charter Member
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

I do believe that Ray actually meant the ".odbc.ini" entry. :?
-craig

"You can never have too many knives" -- Logan Nine Fingers
Kirtikumar
Participant
Posts: 437
Joined: Fri Oct 15, 2004 6:13 am
Location: Pune, India

Post by Kirtikumar »

I think I might have found a workaround for the problem after spending so much time on it.

I compared the cooked data and the live data. In the cooked data, the amount fields had no quotes around them. But the live file is generated through PX jobs, which convert all the numeric and decimal columns to varchar while creating the file. Due to this, the amount fields come with double quotes around them.

I took 100 rows, removed the quotes around the amount fields and ran the job; it went fine. Then with the quotes it failed.

In the original job I then multiplied all the numeric columns by 1 on the ODBC link, and it worked without any warning.
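To make the fix concrete, the derivation in the Transformer feeding the ODBC link looked something like the sketch below; the link name lnkIn is hypothetical, and NetCommitment is just one of the real amount columns, so adapt the names to your own job:

Code: Select all

* Derivation for each affected numeric column on the ODBC link.
* Multiplying by 1 forces DataStage BASIC to coerce the quoted
* string value into a genuine number before the ODBC stage binds it:
lnkIn.NetCommitment * 1

* Stripping the double quotes explicitly should achieve the same:
Ereplace(lnkIn.NetCommitment, '"', '')

I only used the multiply-by-1 form myself; it is less typing when many columns are affected.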

Now I can sleep freely. Just wondering why the quotes cause a problem for the DB?
Regards,
S. Kirtikumar.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

I meant uvodbc.config. That's why I said uvodbc.config.
:roll:
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Ok, sorry. What can that tell you other than the fact that an entry does in fact exist? Seemed to me that any such 'MAXFETCH' tunables would be elsewhere, hence the thought that perhaps you meant 'the other file'.
-craig

"You can never have too many knives" -- Logan Nine Fingers
sanath1234
Participant
Posts: 24
Joined: Wed Oct 17, 2007 1:28 am
Location: hyderabad

Re: ODBC Error - Array size change results in suppression of it

Post by sanath1234 »

Kirtikumar wrote: [quotes the original post in full; see above]
ETL=DS
Kirtikumar
Participant
Posts: 437
Joined: Fri Oct 15, 2004 6:13 am
Location: Pune, India

Post by Kirtikumar »

Guys, thanks for all the help. DSX is like a breath of fresh air for people like me.
Regards,
S. Kirtikumar.
Post Reply