Sequential file Import problem
Moderators: chulett, rschirm, roy
Hi,
HAPPY NEW YEAR
I am facing an import problem when I am trying to read a tab delimited file. When I run the job, the Sequential File stage drops the records with the following warning:
Sequential_File_7,0: Field "SDATE" delimiter not seen, at offset: 27.
I have input data like this in the file
"T#PSR4003 "||Jan 1, 0001||"40"
where the first field is CSE char(11) and the second one is SDATE char(12).
In the Sequential File stage properties I defined:
Record level
Final delimiter = none
Record delimiter = UNIX newline
Field defaults
Delimiter String = ||
quote = double
Please help me solve this problem.
Thanks
Soma Raju
Only single-character delimiters are supported. You will need to change your file, or process it as a single field and decompose it using Field() functions in a Transformer stage. There will be empty fields between adjacent pipe characters that you can ignore.
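A minimal command-line illustration of that decomposition (plain awk here rather than a Transformer stage, using the sample record from the original post): splitting on a single '|' makes the two-character '||' delimiter produce empty fields at the even positions, so the real columns are fields 1, 3 and 5.

```shell
# Split on a single pipe; the '||' delimiter yields an empty field
# between each pair of real columns, so pick fields 1, 3 and 5.
printf '%s\n' '"T#PSR4003 "||Jan 1, 0001||"40"' |
awk -F'|' '{ printf "CSE=%s SDATE=%s THIRD=%s\n", $1, $3, $5 }'
# prints: CSE="T#PSR4003 " SDATE=Jan 1, 0001 THIRD="40"
```

The same idea carries over to a Transformer: read the whole record as one field, then extract Field(rec, "|", 1), Field(rec, "|", 3) and so on, skipping the empty fields.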
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Re: Sequential file Import problem
Clarification first: somu_june wrote, "I am facing an import problem when I am trying to read a tab delimited file." Is your file tab delimited or '||' delimited?

Other options...
If you don't have any pipes in your data (other than the double-pipe delimiter), you could preprocess the file and replace every '||' with '|'.
Another option is to read the file as-is with '|' as the delimiter; then in your job map only the required fields and ignore the rest.
Narasimha Kade
Finding answers is simple, all you need to do is come up with the correct questions.
Re: Sequential file Import problem
narasimha wrote: "Other options... If you don't have any pipes in your data (other than the double pipe delimiter), you could preprocess the file, replace all the '||' with '|'."
By preprocessing, do you mean using awk, or something in DataStage? Just wanted to check that I understood what you meant.
Attitude is everything....
Re: Sequential file Import problem
just4geeks wrote: "By preprocessing, do you mean using awk? Or something in DataStage?"
Not sure how you are related to the OP, but yes, that's what I meant.
You could do it outside DataStage or build it inside DataStage, your choice.
You can also use sed, whichever you are comfortable with.
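For example, the two preprocessing variants mentioned above (shown on a sample line piped through stdin; with a real file you would redirect the output to a new file):

```shell
line='"T#PSR4003 "||Jan 1, 0001||"40"'

# sed: replace every '||' on each line with a single '|'
printf '%s\n' "$line" | sed 's/||/|/g'

# awk: same substitution using gsub
printf '%s\n' "$line" | awk '{ gsub(/\|\|/, "|"); print }'

# both print: "T#PSR4003 "|Jan 1, 0001|"40"
```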
Narasimha Kade
Finding answers is simple, all you need to do is come up with the correct questions.
Re: Sequential file Import problem
To my understanding, CSE char(11) is "T#PSR4003 " and SDATE char(12) is Jan 1, 0001.
If that's the case, try varchar instead of char, add one extra varchar(3) column at the end of the metadata, and let me know whether that solves it.
I think sed should be enough. Something like
Code: Select all
sed 's/||/|/g' myFile.txt
This searches for all occurrences of '||' and replaces each with '|' (the trailing g flag makes sed replace every occurrence on a line, not just the first).
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
What Ray and DSguru2B are suggesting is to run this either in a before-job subroutine or via an Execute Command activity in the job sequence that calls the job, and then read Newfile.txt with your layout:
Code: Select all
sed 's/||/|/g' File.txt > Newfile.txt
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
Re: Sequential file Import problem
Hi,
My problem is solved. In the input data the length of the date field was not constant: some records had 14 characters and others 12. After making the length constant, I can load the data without dropping records.
Thanks,
SomaRaju