When I try to run the job, it aborts with the error "Consumed more than 100000 bytes looking for record delimiter; aborting".
The source is a Sequential File stage with Final Delimiter: end, Field Delimiter: comma, Null Field Value: " and Quote: double.
I created $APT_MAX_DELIMITED_READ_SIZE as a user-defined environment variable at the job level and set it to 300000, but the job still aborted. Can anyone please suggest how to resolve this issue?
Apparently there are no UNIX newline characters in the first 100000 bytes (or 300000 bytes) of your file. It may, for example, be a fixed-width format file with no record delimiters at all. DataStage can handle that, but you have to set the Record Delimiter property to None.
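As a quick sanity check before changing the job design, you can count the newline characters in the file from the command line. The file path below is just a placeholder for your actual source file:

    tr -cd '\n' < /path/to/source_file.dat | wc -c

If that prints 0, the file contains no LF record delimiters at all, and the fixed-width / Record Delimiter = None approach is the one to pursue.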
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
You need to determine the details of your file so you can read it properly. A hex editor can help, or perhaps "od -h" or some other flavor of a dump so you can see the actual hex/octal/decimal values.
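For example, something along these lines will dump the first few hundred bytes with hex values and printable characters so you can see what, if anything, is being used as a record delimiter (again, the file path is just a placeholder):

    od -A d -t x1z -N 512 /path/to/source_file.dat

Look for 0a (LF) or 0d (CR) bytes; if neither shows up, you are almost certainly dealing with fixed-width records.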
-craig
"You can never have too many knives" -- Logan Nine Fingers