Hi All,
As part of my job design, I have created a CFF file, xyz.cff (binary).
Now this file needs to be FTPed over to the mainframe box.
Typically, the FTP job would have a Sequential File stage and then an FTP stage.
In this case, should I
1) Use a Complex Flat File stage to read it and then use the FTP stage,
2) Use a Sequential File stage with the fixed-width and Mainframe options and FTP it, or
3) Use a UNIX script?
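If option 3 ends up being the path of least resistance, a minimal sketch of a UNIX FTP script looks like this. The host, credentials, dataset name and DCB values below are placeholders I've made up, not values from this job:

```shell
#!/bin/sh
# Placeholders -- substitute real values for your shop.
HOST=mvs.example.com
USER=dsuser
PASS=secret
LOCAL=/data/xyz.cff
REMOTE="'HLQ.XYZ.CFF'"
LRECL=80

# Emit the FTP command script; kept in a function so it can be inspected
# before it is piped to the ftp client.
emit_ftp_cmds() {
    printf 'user %s %s\n' "$USER" "$PASS"
    printf 'binary\n'                                  # no ASCII/EBCDIC translation
    printf 'quote site recfm=FB lrecl=%s\n' "$LRECL"   # ask the z/OS FTP server to set the DCB
    printf 'put %s %s\n' "$LOCAL" "$REMOTE"
    printf 'quit\n'
}

# Uncomment to run for real:
# emit_ftp_cmds | ftp -n "$HOST"
emit_ftp_cmds
```

Binary mode matters here: the file is already in mainframe format, so any ASCII/EBCDIC translation by the FTP client would corrupt it.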
With option 2, it fails with the error: "Field "OFFER_RATE" has import error and no default value; data: {f7 f0 f9 f2}, at offset: 51".
I am trying to check the metadata.
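For what it's worth, the rejected bytes look like plain EBCDIC digits, which you can confirm with a quick script (a sketch; cp037 is assumed as the EBCDIC code page, and your shop may use a different variant):

```python
# Decode the bytes from the import error message using an EBCDIC code page.
# cp037 is an assumption -- your mainframe may use another EBCDIC variant.
data = bytes([0xF7, 0xF0, 0xF9, 0xF2])
print(data.decode("cp037"))  # the EBCDIC bytes F7 F0 F9 F2 are the digits "7092"
```

If the digits themselves decode cleanly, the problem is more likely the column offsets or lengths in the table definition than the data itself, so checking the metadata against the copybook is the right next step.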
Thank you
FTP CFF / Mainframe file from DS to Mainframe
The mainframe FAQ at viewtopic.php?t=143596 touches on FTP. If you are creating a file on the local server first, then I see no reason to use anything but sequential file stage for your ftp job.
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson
Using mainframe data FAQ: viewtopic.php?t=143596 Using CFF FAQ: viewtopic.php?t=157872
Thanks for your reply, Franklin.
I am creating a complex flat file (writing to a CFF stage; I got the copybook details from the mainframe team).
I am having trouble in the new job where I am trying to read it using the CFF stage and FTP it.
The link you posted explains how to read the file using the FTP stage from outside the DS box; however, my file is created on the DS box by the DS ETL job.
Can you explain "If you are creating a file on the local server first, then I see no reason to use anything but sequential file stage for your ftp job."?
I'm glad to help. I get plenty of help here in other areas.
Unless CFF puts special attributes on the physical file, using the sequential stage should be enough. You will have to experiment a bit -- I recommend using the copybook schema as the table definition in the ftp stage, at least at first -- but all you should need to know is the record length of the mainframe file you are writing.
Whether the destination file is a flat file or a GDG, and unless DS 8.x has changes in it that I'm not aware of, you need to have a mainframe job initialize the destination file before writing to it. The FAQ covers that. Here is the relevant text:
For writing to the mainframe, you don't really need the individual columns in the table definition. What you must have is the correct row length that matches the lrecl in the DCB.

Mainframe file storage separates the data from its format attributes. For example, in z/OS (and its predecessors like MVS) a catalog stores a "data control block" (DCB) that contains whether a record is fixed or variable, the record length, and other attributes that are not relevant here. A program that executes in z/OS queries the catalog before attempting to open, read from and/or write to the file. DataStage is not so privileged, so the "implicit" setting assumes that the table definition in your job will accommodate the data as it was written.
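To illustrate the row-length point, here is a small sketch. The field names and widths are made up for illustration, not taken from the actual copybook:

```python
# Hypothetical fixed-width layout -- names and widths are illustrative only.
fields = {"ACCT_ID": 10, "OFFER_RATE": 7, "FILLER": 63}

# The lrecl in the DCB must equal the sum of all field widths.
lrecl = sum(fields.values())

# Build one fixed-width record; every row written must be exactly lrecl bytes.
record = "1234567890" + "0007092" + " " * 63
assert len(record) == lrecl
print(lrecl)
```

If any field width in the table definition drifts from the copybook, every row after the bad field shifts, which is exactly the kind of offset mismatch that produces import errors like the one above.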
Q: I can't write data to the mainframe file. Why not?
A: Only your mainframe support will know for sure. The most frequent reason I've found is that your job doesn't have access to the catalog to set the DCB. The simplest solution is to have a mainframe job that initializes your file before you try to write to it. That is similar to the Unix "touch" command, except that the mainframe job specifies the DCB for you.
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson
Using mainframe data FAQ: viewtopic.php?t=143596 Using CFF FAQ: viewtopic.php?t=157872