Reading a sequential file containing an XML CLOB > 100,000

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

rcanaran
Premium Member
Posts: 64
Joined: Wed Jun 14, 2006 3:51 pm
Location: CANADA

Reading a sequential file containing an XML CLOB > 100,000

Post by rcanaran »

I've looked at a few posts, especially
viewtopic.php?t=129785&highlight=Consum ... +delimiter

I'm using 7.5.x2 (Windows server).

The input file contains several columns (keys and other data), followed by 2 columns containing very long XML (each defined as LongVarchar 10000000000, i.e., 10 billion).

The field delimiter is hex 001, with no final field delimiter. The record delimiter is DOS CR/LF.
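
To make the layout concrete, here is a minimal Python sketch of how one record is structured (the column names and count are my own illustration; only the 0x01 field delimiter, the CR/LF record delimiter, and the two trailing XML columns come from the description above):

    # Sketch only: splits records on CR/LF and fields on 0x01.
    # Reads the whole file into memory, which would not be practical
    # for multi-gigabyte CLOBs; a streaming reader would be needed.
    def read_records(path):
        with open(path, "rb") as f:
            data = f.read()
        for line in data.split(b"\r\n"):        # DOS record delimiter
            if not line:
                continue
            fields = line.split(b"\x01")        # hex 001 field delimiter, none at end
            *keys, xml1, xml2 = fields          # last two fields are the XML CLOBs
            yield keys, xml1, xml2

Note that any delimited import, like this naive split, has to scan through the long XML fields byte by byte to find the next record delimiter, which is exactly where the parallel import hits its limit.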

The Sequential File stage, of course, aborts with "consumed more than 100000 bytes looking for a record delimiter". I get the same abort using the External Source stage.
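
One thing I noticed: the 100000 bytes in that message matches the documented default of the APT_MAX_DELIMITED_READ_SIZE environment variable, which controls how far the PX import operator reads ahead looking for a delimiter. I assume raising it as a job-level variable, e.g.:

    $APT_MAX_DELIMITED_READ_SIZE = 200000000

might help, but I don't know whether it can be raised far enough for fields approaching 10 billion bytes.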

The Sequential File stage DOES work when the XML is contained in columns that are somewhat smaller (< 100,000 bytes).

I cannot use the XML Input stage to read the XML directly from the file via the "filename / url/path" method, as I also have all the NON-XML columns to move through.
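
The only pre-processing idea I can think of (purely my own sketch, not something from the docs; the file names are made up, and it reuses the read_records parsing from the sketch above) would be to split the file outside DataStage first: write each XML CLOB to its own file and keep a slim delimited file with the keys plus the two file paths, so the filename/url/path method becomes usable downstream:

    # Hypothetical pre-split: extract each XML CLOB to its own file and
    # write a small delimited file with the keys plus the two file paths.
    import os

    def presplit(src, out_dir, keys_file):
        os.makedirs(out_dir, exist_ok=True)
        with open(keys_file, "wb") as out:
            for i, (keys, xml1, xml2) in enumerate(read_records(src)):
                p1 = os.path.join(out_dir, "rec%07d_1.xml" % i)
                p2 = os.path.join(out_dir, "rec%07d_2.xml" % i)
                with open(p1, "wb") as f1:
                    f1.write(xml1)
                with open(p2, "wb") as f2:
                    f2.write(xml2)
                out.write(b"\x01".join(keys + [p1.encode(), p2.encode()]) + b"\r\n")

But that adds a whole staging step, which is what I'm hoping to avoid.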

So far, the only successful workaround seems to be to use a server job.

Is there another way in 7.5.x2?
Is there a better way in 8.1 or 8.5?