Hello all,
I did a search on maximum file size and also got some useful information.
A small doubt:
Does the record length in a sequential file also matter for performance?
And does the record length matter depending on whether the data is in ASCII or EBCDIC?
Are there any constraints on the record length to be used in a sequential file?
Thanks in advance
Yogesh
Record length in sequential file
Hi,
Not sure about the maximum length in DataStage, but some Unix commands have a limitation of 2048 bytes.
If you are asking generally about performance, the answer is certainly YES: there can be a performance bottleneck with a very large file that has lengthy columns.
If you look into performance tuning, the first point to insist on would be to reduce the number of columns and rows fed into the DataStage grid as much as you can.
-Kumar
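Kumar's 2048-byte figure matches the POSIX LINE_MAX limit, the longest line classic text utilities (sed, grep, etc.) are required to handle. A minimal Python sketch to query it on a Unix system (assuming a POSIX build of Python; DataStage itself is not involved here):

```python
import os

# POSIX defines LINE_MAX as the longest input line the standard text
# utilities must support; the guaranteed minimum is 2048 bytes, which
# matches the 2048-byte limitation often quoted for Unix commands.
line_max = os.sysconf("SC_LINE_MAX")
print(line_max)  # commonly 2048
```

Individual tools and shells may handle longer lines than this, but 2048 bytes is the only portable guarantee.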
DataStage sequential file rows can be quite large; I'm not sure what the limits are, but I know I've used binary records in the MB range. The processing speed is not really dependent upon row width but on the number of columns in a sequential file. Reading 100KB of data that has one column per row is going to be much faster than reading the same 100KB of data split into 50 columns.
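ArndW's point can be illustrated with a small, hypothetical timing sketch in plain Python (not DataStage itself): the same amount of payload takes longer to process when each row must be split into many fields.

```python
import io
import time

N_ROWS = 20_000

# Roughly the same payload per row, two layouts:
# one 100-byte column vs. 50 two-byte comma-separated columns.
one_col = ("x" * 100 + "\n") * N_ROWS
fifty_cols = (",".join(["xx"] * 50) + "\n") * N_ROWS

def parse(text, delim):
    """Read every line and split it into fields; return total field count."""
    total_fields = 0
    for line in io.StringIO(text):
        total_fields += len(line.rstrip("\n").split(delim))
    return total_fields

t0 = time.perf_counter()
parse(one_col, None)
t_one = time.perf_counter() - t0

t0 = time.perf_counter()
parse(fifty_cols, ",")
t_fifty = time.perf_counter() - t0

print(f"1 column  : {t_one:.3f}s")
print(f"50 columns: {t_fifty:.3f}s")  # typically slower: 50x as many fields to split out
```

The extra cost is per-field bookkeeping, not raw I/O, which is why column count matters more than row width.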
yksjosh,
there is no difference between ASCII and EBCDIC; both represent one character in one byte. As mentioned before there is certainly a limitation on record length; but I've done over 1Mb/row so I wouldn't worry about it unless you have longer rows. I suspect that the limitation will be the user process virtual memory space available.
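The one-byte-per-character point is easy to check in Python using cp037, one of the standard EBCDIC code pages: the same text occupies the same number of bytes in either encoding, only the byte values differ.

```python
text = "HELLO"

ascii_bytes = text.encode("ascii")   # b'HELLO'
ebcdic_bytes = text.encode("cp037")  # EBCDIC code page 037

# Both encodings use exactly one byte per character.
print(len(ascii_bytes), len(ebcdic_bytes))  # 5 5
print(ascii_bytes.hex())    # 48454c4c4f
print(ebcdic_bytes.hex())   # c8c5d3d3d6
```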
The heading for your post is Job Type: 390 and OS: Unix. Are you running USS?
If you're NOT developing jobs in DS390, disregard the following.
If you need to carry all of the source data through to the target, there's a 'filler' option in the Flat File stage that lets you minimize the number of columns processed in the job.
As ArndW said, the processing speed is not really dependent upon row width but on the number of columns in a sequential file.