
Loading a large record into a Teradata table

Posted: Fri Jun 19, 2015 10:44 pm
by just4u_sharath
Hello,
I am trying to load data into a Teradata table from a file. One field has a size of about 90 KB and is being loaded into a column defined as a CLOB. I have defined it as LongVarChar in DataStage. I am able to load records smaller than 64 KB, but it fails for records larger than 64 KB.
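
For reference, the Teradata side looks roughly like the sketch below (the database, table, and column names are just placeholders, not the real ones):

    CREATE TABLE etl_db.party_rtf
    (
        load_id  INTEGER NOT NULL,
        col      CLOB(200000)   /* comfortably above the ~90 KB values in the file */
    )
    PRIMARY INDEX (load_id);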

The job is aborting with the following error.

The message's length (91669) exceeds the limit (65466) and it can't be broken at newlines; the message is truncated.

thank you

Posted: Sat Jun 20, 2015 7:23 am
by chulett
That just looks like a message about the error, not the error itself.

Posted: Sat Jun 20, 2015 8:20 am
by just4u_sharath
Thank you for the response.

Following is the error:

TD_etl_unq,0: Field col contains a value that has a byte length of 91,456 which exceeds the allocated length of 64,000 (the field value is '{"ptyLocRtf":[{"actnCd":"I","locUsgId":"0","addrSeqNbr":"453564706","callInd":"Y","policy":null,"persId":null,"addrTyp":null,"ptyLocEffDt":"2003-06-01","ptyLocEndDt":"9999-12-31","ptyKeyTyp":"MEMBER","ptySrcKey":"IL1^000055^296099546^....

It prints the entire value of the field and ends with ...[Message split, exceeds 64000 characters]...

Next, the log shows:
TD_etl_rtf_unq,0: RDBMS code 3514: User-generated transaction ABORT. SQL statement: ROLLBACK
TD_etl_rtf_unq,0: Number of rows inserted: 0

buffer(0),0: Fatal Error: APT_BufferOperator::writeAllData() write failed. This is probably due to a downstream operator failure.

Posted: Sat Jun 20, 2015 8:23 am
by just4u_sharath
I am able to load into the Teradata table using a BTEQ script, so there is no issue on the DB side.
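
Roughly, the kind of BTEQ script I mean looks like the sketch below (logon details, the delimiter, and the database/table/column/file names are placeholders, and the exact LOB options depend on the BTEQ version):

    .LOGON tdpid/etl_user,password
    /* each delimited input record carries an id plus the name of a file holding the CLOB value */
    .IMPORT VARTEXT '|' FILE = party_rtf.txt
    .REPEAT *
    USING (in_id VARCHAR(20), in_doc CLOB AS DEFERRED BY NAME)
    INSERT INTO etl_db.party_rtf (load_id, col)
    VALUES (:in_id, :in_doc);
    .LOGOFF
    .QUIT

The point is simply that the database accepts the 90 KB value when it arrives this way, so the 64 KB ceiling is on the DataStage side.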

Posted: Sat Jun 20, 2015 5:43 pm
by chulett
You haven't specified your target stage - is it a Teradata Connector or one of the legacy stages? This may help in either case. If you are going to need more help, you'll need to provide more details about the stage and settings / properties you are using on the target side.

Posted: Sun Jun 21, 2015 8:14 pm
by just4u_sharath
1. I am using the Teradata Connector stage.
2. I am using the Immediate access method with an array size of 1; the Bulk option is not working.
3. Write mode: Insert
4. Table action: Append
5. Record count: 100
6. Enable LOB references: No
7. The target column with the issue is defined as a CLOB. I am able to insert field values smaller than 64 KB but not larger than 64 KB. I can insert them with a BTEQ script, but it fails when inserting through DataStage.

thanks

Posted: Sun Jun 21, 2015 9:31 pm
by PaulVL
What are the memory buffer settings you have on the stage?

Also... not sure if this will help (probably not, but at this point it can't hurt to try)... APT_TERA_64K_BUFFERS? Did you set it?

Another one you may investigate is APT_DEFAULT_TRANSPORT_BLOCK_SIZE.


Honestly... just guessing at this point.

Posted: Mon Jun 22, 2015 3:15 am
by priyadarshikunal
The bulk method only supports up to 64 KB in an array; if a single record exceeds that, you may not be able to use bulk at all.


This link should help you.

Posted: Mon Jun 22, 2015 8:33 am
by just4u_sharath
Thanks for the reply.

1. Yes, I am aware of the limitation of the bulk mode.
2. Yes, I did use the APT_DEFAULT_TRANSPORT_BLOCK_SIZE environment variable with the default value 1280000.
3. I also used APT_TSORT_STRESS_BLOCKSIZE with the default value 3000000, as per the IBM link below.
4. For the buffering, I tried:
a) Default
b) Auto buffer: 6145728, buffer free run 95%, and disk increment of 2 MB
c) Buffer: 6145728, buffer free run 95%, and disk increment of 2 MB

It still fails with the same error.
http://www-01.ibm.com/support/docview.w ... wg21660153

thanks

Posted: Tue Jun 23, 2015 1:58 am
by priyadarshikunal
There should be a "Max buffer size" row in the Teradata Connector properties under the Limits section, which by default is set to 0 to use the default buffer size of 64 KB. You will have to change that to push records larger than 64 KB.

Posted: Wed Jun 24, 2015 9:01 am
by just4u_sharath
I have tried setting max buffer size = 12800, but it is still failing with the same issue.

Thanks