Header on import via FTP enterprise stage
Moderators: chulett, rschirm, roy
-
- Participant
- Posts: 22
- Joined: Tue May 09, 2017 8:46 am
Header on import via FTP enterprise stage
I am trying to import a file via the FTP Enterprise stage. The first column is the report date; I kept its datatype as date rather than varchar(255), and the table definition specifies that the first row is a header.
When the FTP stage runs the import, it throws a record input error because it reads the header row as the first data record.
FTP_Enterprise_0,0: Data string 'REPORT_DT~' does not match format '%yyyy-%mm-%dd': an integer was expected to match tag %yyyy.
I want to be able to read directly from the FTP stage, without transformations, as we are doing simple extracts to HDFS and have to replicate this process for 30+ tables.
Is there a way the FTP Enterprise stage can ignore the header? I see an Open command option.
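Since the stage parses every row against the column metadata, the header token 'REPORT_DT' will always fail the %yyyy-%mm-%dd date format. One generic shell-level workaround, assuming the file can be pre-processed before the job reads it (file name and data below are hypothetical), is to strip the first line with sed:

```shell
# Create a sample extract with a header row (hypothetical data).
printf 'REPORT_DT~AMT\n2017-07-28~100\n2017-07-29~200\n' > sample.dsv

# Drop line 1 (the header) so only data rows remain for the stage to parse.
sed '1d' sample.dsv > sample_nohdr.dsv

cat sample_nohdr.dsv
```

With the header gone, a date-typed first column parses cleanly without touching the table definition.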
Re: Header on import via FTP enterprise stage
Sequential_File_1,0: Fatal Error: waitForWriteSignal(): Premature EOF on node ditdwapp1w204m7 Bad file descriptor
This error occurs whenever all columns are made varchar(255).
What's the actual format of the file you are transferring? What does the header record look like?
Often, the OSH/DataStage column attributes are just off in some way. You are reading external data, and the import is seeing something unexpected. You need to identify that attribute.
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson
Using mainframe data FAQ: viewtopic.php?t=143596 Using CFF FAQ: viewtopic.php?t=157872
Yes, it is seeing something different, because it is reading the header along with the data. The first column is a date, and the metadata defines it as report_dt date. Since the header row is read too, the stage tries to parse the literal string 'REPORT_DT' as a date.
I tried unchecking "first row is header" and setting all data lengths to 255, but then I got the error in my second post.
Does FTP in DataStage not have a simple pick-up-and-drop function, so that it does not go into the file and read the data?
My questions need answers if I'm to help further. I checked the FTP Enterprise stage and didn't see anywhere to set a first-row-is-header option. It looks like the stage is reading the header row as a data row.
Franklin Evans
swathi.raoetr wrote: Does FTP in DataStage not have a simple pick-up-and-drop function? That way it does not go into the file and read the data.
No, and that is why I was never a big fan of it. For that functionality, fall back on scripted transfers to get the file local.
-craig
"You can never have too many knives" -- Logan Nine Fingers
swathi.raoetr wrote: Franklin, the sequential file table definition has a "first row is header" checkbox. I had read those columns into FTP Enterprise. Yes, it reads the header row as a data row.
That should be it, then. You're expecting a Sequential File stage read, and FTP doesn't conform to the same processes.
Looks like your only alternative is to FTP the file to the server, and use a sequential file stage to read it.
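The scripted-transfer approach can be sketched as a classic ftp batch script run before the job. The host name, credentials, and paths below are all hypothetical placeholders:

```shell
# Hypothetical server and paths; substitute real values per table.
HOST=ftp.example.com
REMOTE=/outbound/ABC.dsv
LOCAL=/data/landing/ABC.dsv

# Generate the ftp command file so the transfer can be driven
# non-interactively from a scheduler or job before-routine.
cat > ftp_cmds.txt <<EOF
user myuser mypassword
binary
get $REMOTE $LOCAL
bye
EOF

# The actual transfer would then be: ftp -n "$HOST" < ftp_cmds.txt
cat ftp_cmds.txt
```

Once the file is local, a Sequential File stage can read it with its normal "first line is column names" handling.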
Franklin Evans
Sure... but the Enterprise stage is not "just FTP"; it is built for a different purpose. It's metadata-driven, so it can be used as a source directly into a job, or directly out to an FTP target, so that no intermediate flat file is needed on the DataStage server. Just an FYI.
-craig
So I am writing these files from the FTP location into an HDFS location. By making the metadata varchar(max), I was able to place the files. However, I am having another strange problem: FTP Enterprise is introducing a file descriptor record as the first line.
I do not want the file descriptor. How can we get rid of it? In the HDFS connector, when I selected "first record is header", it added yet another row instead of removing this record.
Code: Select all
ABC.dsv 201707280000 428 records
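If that descriptor line survives into the landed file, one generic way to drop it (file names hypothetical; the same idea applies after pulling the file back from HDFS with hdfs dfs -get) is tail -n +2, which emits everything from line 2 onward:

```shell
# Hypothetical landed file beginning with the unwanted descriptor line.
printf 'ABC.dsv 201707280000 428 records\n2017-07-28~100\n' > landed.dsv

# Keep line 2 onward, discarding the descriptor record.
tail -n +2 landed.dsv > clean.dsv

cat clean.dsv
```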
No clue, sorry.
swathi.raoetr wrote: In the HDFS connector, when I selected first record as header, it added yet another row, instead of getting rid of this record.
Of course. When reading, that option skips the first record, assuming it is a header; when writing, it adds a header record using the column names in your metadata.
-craig