Hi,
We are working on a migration project (mainframe to SQL Server).
The source files are manually FTPed from the mainframe to the DataStage server (on Windows), and we use those files to transform the data.
My question is this: when a file is FTPed from the mainframe, an extra new line is generated at the end of the data.
Is there any way in DataStage to remove that newline from the source files?
Right now we deal with the issue by specifying a constraint, which is overhead, since it has to be evaluated for each and every record.
Does anyone have any thoughts?
Thanks
new line generated at the end of the source file
Well, the 'overhead' of a simple constraint like that would be negligible, so if it is working for you I probably wouldn't worry about it. That being said...
There are ways on UNIX where the last line can be stripped in the Filter box of the Sequential File stage, or via some kind of 'pre-processing' step, not sure what the equivalents would be under Windows.
You could also look into the ftp process you are using, there may be an option there to strip (or not generate) the last newline. Curious... is it literally an extra single 'new line' or is it something like an EOF marker (^Z)?
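One common UNIX idiom for the Filter box is a one-line sed command. This is a sketch, assuming the extra newline shows up as an empty final record in the file (the file name is illustrative):

```shell
# Drop the last line of the stream; suitable as a
# Filter command in a Sequential File stage on UNIX.
# Assumes the trailing newline appears as an empty last line.
sed '$d' source_file.txt
```

The `$` address matches only the last line, and `d` deletes it, so every other record passes through untouched.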
-craig
"You can never have too many knives" -- Logan Nine Fingers
The number of lines in the file can be determined using

Code:
wc -l filename

One less than this can be obtained with

Code:
expr `wc -l filename | awk '{print $1}'` - 1

You could create a command or script that captures one less than this value and uses it in the head command. For example:

Code:
lines=`wc -l filename | awk '{print $1}'`
lines=`expr $lines - 1`
head -$lines filename

Or, as a single command (which you could use as a filter in a Sequential File stage):

Code:
lines=`wc -l filename | awk '{print $1}'` && lines=`expr $lines - 1` && head -$lines filename
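On systems with GNU coreutils (not guaranteed on every UNIX, and not part of POSIX), head can count from the end directly, which collapses the whole pipeline into one command:

```shell
# Print all but the last line of the file.
# The leading minus in "-n -1" is a GNU head extension;
# it means "all lines except the final 1".
head -n -1 filename
```

If the DataStage server runs MKS Toolkit on Windows, whether this flag is supported would need to be checked there.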
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
MKS Toolkit spoils one sometimes!
How about something like this before/after subroutine?
Code:
SUBROUTINE RemoveFinalTerminator(InputArg, ErrorCode)
ErrorCode = 0
OpenSeq InputArg To hFile
Then
   * Position to a point two characters short of EOF
   * (Seek takes offset then relto; -2 relative to end of file)
   Seek hFile, -2, 2
   Then
      * Read those two characters to make sure they're CRLF
      ReadBlk TwoChars From hFile, 2
      Then
         If TwoChars = Char(13):Char(10)
         Then
            Seek hFile, -2, 2
            Then
               * Truncate the file at this position
               WeofSeq hFile
            End
         End
      End
   End
   CloseSeq hFile
End
RETURN
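For comparison, the same check-then-truncate logic can be sketched in shell. This is an illustration only, not DataStage code; the function name and file handling are my own, and it assumes `head -c`/`tail -c` are available (they are on most modern systems):

```shell
#!/bin/sh
# Truncate a file by two bytes when it ends in CRLF,
# mirroring the subroutine above: check the last two
# bytes first, and only then shorten the file.
strip_final_crlf() {
    f=$1
    size=$(wc -c < "$f")
    # Nothing to do if the file is shorter than two bytes
    [ "$size" -ge 2 ] || return 0
    # Read the last two bytes as hex (e.g. "0d0a" for CR LF)
    last2=$(tail -c 2 "$f" | od -An -tx1 | tr -d ' \n')
    if [ "$last2" = "0d0a" ]; then
        # Keep everything except the final CRLF
        head -c $((size - 2)) "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    fi
}
```

A file that does not end in CRLF is left untouched, which matches the subroutine's behaviour of truncating only when the terminator is actually present.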
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.