Wrong "new lines" when reading/writing cobol files

manuel.gomez
Premium Member
Posts: 291
Joined: Wed Sep 26, 2007 11:23 am
Location: Madrid, Spain

Wrong "new lines" when reading/writing cobol files

Post by manuel.gomez »

Hello everyone:

I am having big trouble writing EBCDIC files. Let me explain the whole problem.

I am reading/writing using the FTP Enterprise stage. Reading is just fine: I can see the data very well with "view data". The problem comes when writing.

To make it simpler and easier, I am reading using just a single column, binary format, length 383 (the record length according to the COBOL copybook). Reading, as said, works fine.

I pass the data through a Copy stage, and then write it back to the host using another FTP Enterprise stage, with the same column definition at the destination.

Both files are configured the same regarding format:

Code:

Record type = implicit
Delimiter = none
Null field value = -
character set = EBCDIC
data format = binary
allow all zeros = yes

Whether I download the file to my local computer or check it through TSO, I can see the data for SOURCE perfectly.

But when checking the destination file in TSO, it seems there is a carriage return somewhere: not all lines start at position 1, which is really strange.
BUT if I click "view data" on an FTP Enterprise source stage reading my destination file, I can view it with the lines perfectly "cut".

If I download both files (source and destination) in BINARY mode, they are exactly equal in size and format; everything is OK.

But if I download the files in TEXT mode, the wrong cut in the destination file shows up on my local PC in exactly the same place as when viewing it in TSO.
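
To illustrate, downloading from my PC with a command-line FTP client goes roughly like this (dataset names made up):

Code:

ftp> binary
ftp> get 'PROD.DEST.EBCDIC' dest.bin   (byte-for-byte identical to the host file)
ftp> ascii
ftp> get 'PROD.DEST.EBCDIC' dest.txt   (host converts EBCDIC to ASCII and adds a line break per record; the breaks land in the wrong places)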

Can anybody help??

Thanks a lot!
FranklinE
Premium Member
Posts: 739
Joined: Tue Nov 25, 2008 2:19 pm
Location: Malvern, PA

Post by FranklinE »

Does the destination dataset exist in the catalog when you try to transfer data to it, or are you "creating" a new dataset each time? In my experience, FTP Enterprise is weak when it comes to new datasets, and my solution is to initialize the destination with the correct attributes -- especially lrecl -- before trying to write to it.
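
A generic one-step job to do that initialization might look something like this (dataset name, unit and space are placeholders; follow your own site standards):

Code:

//* IEFBR14 does nothing; the DD statement performs the allocation
//ALLOC    EXEC PGM=IEFBR14
//NEWFILE  DD  DSN=YOUR.DEST.FILE,
//             DISP=(NEW,CATLG,DELETE),
//             UNIT=SYSDA,
//             SPACE=(TRK,(15,5),RLSE),
//             DCB=(RECFM=FB,LRECL=383,BLKSIZE=0)
//* RECFM=FB,LRECL=383 matches the copybook; BLKSIZE=0 lets the system choose
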
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson

Using mainframe data FAQ: viewtopic.php?t=143596 Using CFF FAQ: viewtopic.php?t=157872
manuel.gomez
Premium Member
Posts: 291
Joined: Wed Sep 26, 2007 11:23 am
Location: Madrid, Spain

Post by manuel.gomez »

Yes, the dataset is created on each transfer.

How can I initialize a dataset with the correct attributes? (I'm a total newbie when it comes to COBOL.)

Thanks a lot
FranklinE
Premium Member
Posts: 739
Joined: Tue Nov 25, 2008 2:19 pm
Location: Malvern, PA

Post by FranklinE »

You're welcome. It's a JCL function, not Cobol. It only needs to be done once -- big assumption: that you will reuse the same dataset name every cycle -- and you will overwrite it every time. The catalog supplies your data control information (DCB or data control block) when the dataset is referenced by the FTP session.
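
If you want to confirm what the catalog holds, a quick check from TSO would be something like this (made-up name again):

Code:

LISTDS 'YOUR.DEST.FILE'

YOUR.DEST.FILE
--RECFM-LRECL-BLKSIZE-DSORG
  FB    383   27576   PS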

Ask your mainframers about this. There's too much I don't know about your processing environment and requirements to suggest anything specific. Besides, I don't think my employer would like me doing free coding for someone outside the company. :oops:
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson

Using mainframe data FAQ: viewtopic.php?t=143596 Using CFF FAQ: viewtopic.php?t=157872
manuel.gomez
Premium Member
Posts: 291
Joined: Wed Sep 26, 2007 11:23 am
Location: Madrid, Spain

Post by manuel.gomez »

Of course not, FranklinE. And your assumption was absolutely right.

Actually, you were totally right (again), and you gave me the lead I needed to find the problem. The source and destination files were defined with different allocations, so there was no way one could fit into the other.

I asked the host team to allocate a new file with exactly the same properties as the source file, and it worked fine.
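
For anyone who hits the same thing: the host team tells me the easy way to clone an allocation is to use the source file as a model, something along these lines (names changed):

Code:

//ALLOC    EXEC PGM=IEFBR14
//NEWFILE  DD  DSN=YOUR.DEST.FILE,
//             DISP=(NEW,CATLG,DELETE),
//             LIKE=YOUR.SOURCE.FILE
//* LIKE= copies RECFM, LRECL, DSORG and the space attributes from the model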

Thanks very much!