Hi,
I have double quotes inside my data. While reading the file, I get this error:
with 'delim=end' did not consume entire input, at offset: 317
Example record:
"1"|"9999"|"Exbis"|"MC"kh""|""
I searched the forum and couldn't find a resolution for this. I currently have the quote character set to double quote; I changed it to none, but then all the records were rejected.
Any help?
double quotes inside data
dnat - your source file is badly formed; the correct thing to do is to fix the file at its source. If that is not possible, two other options can work, provided you are certain that your column separator, the '|' symbol, never occurs in the data:
- Don't use a quote character at all: read each field in as a plain string, then strip the leading and trailing double quotes from it.
- "Fix" the file in the job: declare it as just one (big) column, then double every embedded quote, i.e. replace each double quote that is neither preceded nor followed by a pipe with two double quotes.
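The second "fix the file" option described above can be sketched outside DataStage as well. This is only an illustration of the rule (double any quote not adjacent to a pipe), written in Python; the function name and the assumption that '|' and already-doubled quotes never occur inside the data are mine, not from the thread.

```python
def fix_line(line: str) -> str:
    """Double every embedded quote that is not a field delimiter.

    Assumptions (from the thread's advice): '|' never occurs inside
    field data, and the input does not already contain doubled quotes.
    A quote is treated as a field delimiter if it sits at the start or
    end of the record or directly next to a pipe; any other quote is
    doubled so a strict CSV-style parser will accept it.
    """
    out = []
    n = len(line)
    for i, ch in enumerate(line):
        if ch != '"':
            out.append(ch)
            continue
        prev_c = line[i - 1] if i > 0 else '|'       # start of record acts like '|'
        next_c = line[i + 1] if i + 1 < n else '|'   # end of record acts like '|'
        if prev_c == '|' or next_c == '|':
            out.append(ch)        # field-delimiting quote: keep as-is
        else:
            out.append('""')      # embedded quote: double it
    return ''.join(out)

# The malformed record from the original post becomes well-formed:
print(fix_line('"1"|"9999"|"Exbis"|"MC"kh""|""'))
# → "1"|"9999"|"Exbis"|"MC""kh"""|""
```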
Re: double quotes inside data
dnat
I have the same problem; in Server jobs this works correctly. Here is the answer from IBM support:
"I confirm that a Parallel job does not handle a quote embedded in a field the way a Server job does when reading a sequential (CSV) file. To read this type of CSV file, you could:
- keep a Server job to read the data, and include that job in a job sequence;
- write a specific buildop to split the record into fields, and use that buildop reading 1 record = 1 field."
I am going to try to get this recognized as a bug.
DST - this is not a bug.
Embedded quotes must be doubled.
Code:
"I am not " a well-formed sentence. But ""I"" am."
<a href=http://www.worldcommunitygrid.org/team/ ... TZ9H4CGVP1 target="WCGWin">
</a>
</a>
is a WFF.
The server job is tolerant of this incorrect data format.
Code:
1,"company name ""super company""","company1"
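The doubled-quote convention shown above is exactly what standard CSV parsers implement. As a quick check (Python's standard-library csv module, nothing DataStage-specific), a strict reader turns the doubled quotes back into single literal quotes:

```python
import csv
import io

# RFC 4180-style parsing: "" inside a quoted field is one literal quote.
record = '1,"company name ""super company""","company1"\n'
row = next(csv.reader(io.StringIO(record)))
print(row)
# → ['1', 'company name "super company"', 'company1']
```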
ArndW wrote:
... is a WFF.
Code:
1,"company name ""super company""","company1"
The server job is tolerant of this incorrect data format. ...
... but the parallel job is not ((( why???
The appropriate question is not why it doesn't work in PX jobs but why it works (when it shouldn't) in Server jobs.
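This strict-versus-tolerant split can be reproduced with any RFC 4180-style parser. As an illustration (again Python's standard-library csv module, with `strict=True`; the exact error message may differ between versions), the malformed record from the original post is rejected much like the PX sequential-file stage rejects it:

```python
import csv
import io

bad = '"1"|"9999"|"Exbis"|"MC"kh""|""\n'

# A strict parser refuses the stray quote after the closing quote of
# the "MC" field, analogous to the PX "did not consume entire input"
# failure in the original post.
try:
    next(csv.reader(io.StringIO(bad), delimiter='|', strict=True))
except csv.Error as exc:
    print("rejected:", exc)
```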