ODBC Connector record count issue

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.


marpadga18
Premium Member
Posts: 96
Joined: Fri Aug 20, 2010 8:51 am

ODBC Connector record count issue

Post by marpadga18 »

Hi,

I am facing a problem while loading data from a SQL Server source to a SQL Server target, using the ODBC Connector stage.
If I set Record Count to a value greater than 1 it does not load the full data, it loads only some records, but if I set Record Count to 1 it loads the full data.
I am not able to understand this issue; I tried to investigate but in vain.

Any help is appreciated. Thanks for your help.
kwwilliams
Participant
Posts: 437
Joined: Fri Oct 21, 2005 10:00 pm

Re: ODBC Connector record count issue

Post by kwwilliams »

Is there a reject link from the target?

SQL Server version? Some versions of SQL Server do not support a reject link when the array size is greater than 1.

Does the table have triggers, and if so, what's in them? I've seen issues where triggers call stored procedures that end up dropping records.
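If you're not sure what is on the table, something along these lines (a sketch for SQL Server 2005 or later; the table name is a placeholder) will list the triggers and their source:

Code: Select all

-- List the triggers on a table and show their definitions.
-- dbo.MyTargetTable is a placeholder; substitute your target table.
SELECT tr.name AS trigger_name,
       tr.is_disabled,
       m.definition
FROM   sys.triggers tr
JOIN   sys.sql_modules m ON m.object_id = tr.object_id
WHERE  tr.parent_id = OBJECT_ID('dbo.MyTargetTable');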
marpadga18
Premium Member
Posts: 96
Joined: Fri Aug 20, 2010 8:51 am

Re: ODBC Connector record count issue

Post by marpadga18 »

I have triggers on the table, but I don't have a reject link. It is dropping records in the target if I give a Record Count > 1.
kwwilliams
Participant
Posts: 437
Joined: Fri Oct 21, 2005 10:00 pm

Re: ODBC Connector record count issue

Post by kwwilliams »

What's in the triggers? (I have seen mixed results with stored procedures executed from triggers.)

And do you have 'AlwaysReportTriggerResults=1' in your DSN?

From the DataDirect documentation:
Attribute: AlwaysReportTriggerResults (ARTR)

Description: Determines how the driver reports results that are generated by database triggers (procedures that are stored in the database and executed, or fired, when a table is modified). For Microsoft SQL Server 2005 and higher, this includes triggers that are fired by Data Definition Language (DDL) events.

Valid Values: 0 | 1

If set to 1 (Enabled), the driver returns all results, including results that are generated by triggers. Multiple trigger results are returned one at a time. You can use the SQLMoreResults function to return individual trigger results. Warnings and errors are reported in the results as they are encountered.

If set to 0 (Disabled):

■ For Microsoft SQL Server 2005 and higher, the driver does not report trigger results if the statement is a single INSERT, UPDATE, DELETE, CREATE, ALTER, DROP, GRANT, REVOKE, or DENY statement.

■ For other Microsoft SQL Server databases, the driver does not report trigger results if the statement is a single INSERT, UPDATE, or DELETE statement.

When set to 0, the only result that is returned is the update count that is generated by the statement that was executed (if no errors occurred). Although trigger results are ignored, any errors and warnings that are generated by the trigger are reported. If errors are reported, the update count is not reported.
In the end I think triggers are bad. You are basically reverting to stored-procedure-based ETL, and triggers are almost always bottlenecks for the entire ETL flow. Why not remove the triggers and perform the work they are doing in a subsequent job?

Try adding the AlwaysReportTriggerResults attribute. I would also encourage you to clone your table without the triggers and run the ETL against the clone. That will let you see whether the processing would be faster without the triggers, and it would isolate the problem as being the triggers (assuming the rows are all inserted correctly).
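For reference, in a UNIX-side .odbc.ini the entry would look something like this (the DSN name, driver path, host, and database below are placeholders, not your values; the only line you actually need to add to your existing DSN is the last one):

Code: Select all

[SQLSERVER_TGT]
Driver=<path to your DataDirect SQL Server Wire Protocol library>
Description=DataDirect SQL Server Wire Protocol
HostName=<your_sqlserver_host>
PortNumber=1433
Database=<your_database>
AlwaysReportTriggerResults=1

If your DSN is defined elsewhere (for example in the Windows ODBC Administrator), set the same attribute there.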
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

I'm sorry... you are using this "record count > 1" where? :?
-craig

"You can never have too many knives" -- Logan Nine Fingers
marpadga18
Premium Member
Posts: 96
Joined: Fri Aug 20, 2010 8:51 am

Post by marpadga18 »

chulett wrote:I'm sorry... you are using this "record count > 1" where? :?
If I give Record Count as 1 it runs fine with the ODBC Connector stage, but if I give 2 or more it loads partial data...
marpadga18
Premium Member
Posts: 96
Joined: Fri Aug 20, 2010 8:51 am

Post by marpadga18 »

Hi Williams, thanks for your reply!

Unfortunately we cannot touch the triggers on that table, but the workaround of keeping Record Count at 1 works... then it loads the full data.

We don't have access to disable the triggers, so for now I am loading with Record Count 1.

I will investigate this, and if I find anything I will post it here.
Anyway, thanks for your help.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

marpadga18 wrote:
chulett wrote:I'm sorry... you are using this "record count > 1" where? :?
If I give Record Count as 1 it runs fine with the ODBC Connector stage, but if I give 2 or more it loads partial data...
Which doesn't come close to answering my question, so I decided to check the ODBC Connector documentation. It's a property of the stage:

Use this property to specify the number of records to process before the connector commits the current transaction or unit of work.

Sheesh, why not call it Transaction Size?

So, how about some specifics on exactly what data is being lost. Have you determined whether what is lost is the records from the last 'transaction group', or records from each group? What is your Array Size set to? If you read through the documentation you'll see there is a relationship between the two: the Record Count must be an even multiple of the Array Size. I suspect you would see your issue when that is not true.
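To make the relationship concrete (my numbers, purely for illustration): with an Array Size of 10 and a Record Count of 100, rows travel to the server in batches of 10 and the connector commits after every 100 rows. 100 is an even multiple of 10, so that combination is valid, whereas a Record Count of 105 would not be.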
-craig

"You can never have too many knives" -- Logan Nine Fingers
kwwilliams
Participant
Posts: 437
Joined: Fri Oct 21, 2005 10:00 pm

Post by kwwilliams »

When they are not lined up, the job fails with an error stating that the transaction size must be a multiple of the array size.

I've seen this behavior before, and it was related to triggers on the table. You are welcome to continue the workaround of array size and commit scope = 1; however, that is not an ideal situation. Ideally your array size would equal the n in this equation:

n * (row size in bytes) = (network packet size) - 1

where n is rounded down.
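For example (made-up sizes, just to show the arithmetic): with a 4096-byte network packet and 200-byte rows, n = floor((4096 - 1) / 200) = floor(20.475) = 20, so an array size of 20 fills each packet without splitting a row across packets.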

Change your DSN to include the attribute from my previous post. Without it you can have an error in the trigger and be unaware of it, because the error is never reported back: the transaction that includes the trigger fails, and then the entire array is omitted from the table.

I recommend that you remove the triggers permanently, because trigger-based processing is almost always poor design. However, I implore you to clone the table without the triggers so you can isolate where the problem is occurring. Whether you choose to properly troubleshoot by isolating the problem is completely up to you.
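A quick way to build that trigger-free copy for testing is something like this sketch (the table name is a placeholder; note that SELECT ... INTO copies the column structure but not the triggers, constraints, or indexes):

Code: Select all

-- Create an empty, trigger-free copy of the target table for testing.
-- dbo.MyTargetTable is a placeholder; substitute your real table name.
SELECT *
INTO   dbo.MyTargetTable_notriggers
FROM   dbo.MyTargetTable
WHERE  1 = 0;

Point the job at the copy, run it with your normal Record Count and Array Size, and compare the row counts.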