Skip header record from complex flat file stage
Hi,
I am receiving a zipped, binary (EBCDIC) complex flat file.
It is a fixed-width file and contains a header record.
Could you please suggest how I can filter out the header record in the Complex Flat File stage?
The file is unzipped at the job level, and on running the job it fails with the error below:
Short read encountered on import; this most likely indicates one of the following possibilities:
1) the import schema you specified is incorrect
2) invalid data (the schema is correct, but there is an error in the data).
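One quick way to tell these two causes apart for a fixed-width file: the unzipped file's size should be an exact multiple of the record length, and a leftover remainder usually means truncated data or the wrong record length in the schema. A minimal sketch, where the file name, the record length, and the sample data are assumptions for illustration:

```shell
# "Short read" diagnostic for a fixed-width file: total size should be
# an exact multiple of the record length.
# FILE and REC_LEN are assumptions -- substitute your own values.
FILE=input.ebc
REC_LEN=100
printf '%0100d%0100d' 1 2 > "$FILE"   # sample data: two 100-byte records

SIZE=$(wc -c < "$FILE")
if [ $((SIZE % REC_LEN)) -eq 0 ]; then
    echo "OK: $((SIZE / REC_LEN)) whole records"
else
    echo "Short read likely: $((SIZE % REC_LEN)) trailing bytes"
fi
```

If the remainder is nonzero, recheck the copybook's record length (including any RDW/BDW bytes the transfer may have added or stripped) before blaming the data.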
I understand the issue is with the file definition or with the data.
I was told to remove the header record and continue the load.
Please suggest how I can filter out the header record in the Complex Flat File stage.
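If the header has to be physically removed before the CFF stage reads the file (for example in a before-job command), a fixed-width binary file has no newline delimiters, so line-oriented tools like tail -n won't work; dd with the block size set to the record length can drop the first record instead. This is a sketch only: the record length, file names, and sample data are assumptions.

```shell
# Strip one fixed-length header record from a binary (EBCDIC) file.
# REC_LEN, input.ebc and data_only.ebc are assumptions -- substitute
# your actual record length and file names.
REC_LEN=100                        # bytes per fixed-width record
printf 'HDR%097d' 0  > input.ebc   # sample: one 100-byte header record
printf '%0100d'   1 >> input.ebc   # sample: one 100-byte data record

# bs=REC_LEN makes one dd block equal one record, so skip=1 drops
# exactly the header regardless of its content.
dd if=input.ebc of=data_only.ebc bs="$REC_LEN" skip=1 2>/dev/null
```

Alternatively, as discussed below, you can define the header as its own record type in the CFF stage and simply not give it an output link, which avoids touching the file at all.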
Thanks,
HK
*Go GREEN..Save Earth*
Not sure about the compression aspect. If the data is coming into CFF in the clear, it should not be an issue.
CFF requires you to define every record type on the input, but it does not require you to have an output link for every type. I have many files with header and trailer records, and I never need the trailer, so I don't code an output link for it.
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson
Using mainframe data FAQ: viewtopic.php?t=143596 Using CFF FAQ: viewtopic.php?t=157872
Is this a file generated by COBOL or from IMS? There should be a COBOL File Definition (CFD) or copybook that DataStage can import to load the required metadata for reading the file. This would not just identify the record type, level and position of the header record, but would also handle COBOL keywords like OCCURS and REDEFINES.
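For readers who have not seen one, a copybook is the COBOL record layout DataStage imports to build the column metadata. A hypothetical fragment (all field names here are illustrative, not from the poster's file) showing the constructs mentioned above:

```cobol
      * Hypothetical copybook fragment -- field names are illustrative.
       01  CUSTOMER-REC.
           05  REC-TYPE             PIC X(01).
           05  CUST-ID              PIC 9(08).
           05  CUST-NAME            PIC X(30).
           05  PHONE-NOS OCCURS 3 TIMES.
               10  PHONE-NO         PIC X(10).
           05  CUST-DATE            PIC 9(08).
           05  CUST-DATE-X REDEFINES CUST-DATE
                                    PIC X(08).
```

Importing the real copybook, rather than hand-typing column definitions, is what makes the record length and the header record's position unambiguous.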
Certus Solutions
Blog: Tooling Around in the InfoSphere
Twitter: @vmcburney
LinkedIn:Vincent McBurney LinkedIn
That'll do it every time, and says a great deal about information governance in your organization. You should escalate the existence of this lack of communication through the stewardship community - as far as the CDO if necessary.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
It is often the case that there is a very different "method" of version control between platforms. Cobol systems generally have an embedded (and sometimes proprietary) method. We use Endevor, which both manages code between regions (dev, test, prod) and does automatic compiles. Interfacing that with a DataStage environment, at least here, just doesn't work.
We download copybooks during development, then rely on that same communication that didn't work for HK. We also enjoy strong discipline in our host development, and changes to copybooks are almost always made in the end "filler" of a copybook layout. This means that our code only needs to change if we have data being added in that filler area. If we don't need it, it doesn't actually change for us.
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson
Using mainframe data FAQ: viewtopic.php?t=143596 Using CFF FAQ: viewtopic.php?t=157872