folding a single record into multiple records at fixed width
Moderators: chulett, rschirm, roy
Ray - Naaaah, I'll let you wait. I'll just ignore any posts to that effect
News78 - do you see that your "one line of data" can also be interpreted as "One file with rows of 5-byte fixed length data with no separator"? This is what bcarlson was driving at - you don't need to change your data around, just how you present it to DataStage.
DSguru2B wrote: That's good thinking guys. Split it at the source and then add a line terminator after every record to pivot it.

That's not actually what I was thinking... You won't need to add a line terminator at all:
import -> Oracle write
If the import is done right, you don't have to do anything. Once DataStage imports it into the right format, you can immediately output to your Oracle table. You would only need to add the line terminator if you have to write it out to file.
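To make the idea concrete, here is a minimal Python sketch (my illustration, not DataStage code) of what the import is conceptually doing: one continuous stream is presented as successive 5-byte fixed-length records, each of which can go straight to the table load. No terminators are ever added to the data itself.

```python
def fixed_width_records(data, width=5):
    """Yield successive fixed-width 'records' from one long string."""
    for start in range(0, len(data), width):
        yield data[start:start + width]

stream = "sachinbankarparthkarkahnyusuf"
rows = list(fixed_width_records(stream))
# Each element of rows is one logical record, ready for the
# database write - no line terminator was needed anywhere.
print(rows)
```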
Brad.
Re: folding a single record into multiple records at fixed width
Hello - with so many posts there is a little confusion!
I think you want your input data like
sachinbankarparthkarkahnyusuf
to be like
sachi
nbank
arpar
thkar
kahny
usuf
I have used a routine for this conversion, but it shows the data in the output file like
"sachi
nbank
arpar
thkar
kahny
usuf"
So you need to take care of the " character. Also, if your input string is quite long, I think it will consume your resources.
Simple code:
Arg1 is the input
----------------------------------------------
String1 = Trim(Arg1)
Val1 = Len(String1)
Ans = ''
For Outer = 1 To Val1 Step 5
   String2 = String1[Outer, 5] : Char(13) : Char(10)
   Ans = Ans : String2
Next Outer
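For readers who don't write DataStage BASIC, the same loop can be rendered in Python (a sketch of the routine above, not the routine itself): every 5-character slice gets a CR/LF appended so a downstream stage can split on the terminator. Because the whole result is returned as a single value with embedded newlines, a sequential-file output stage will wrap it in quotes - which is exactly the `"` problem described above.

```python
def fold_with_terminators(arg1, width=5):
    """Append CR/LF after every fixed-width slice of the input."""
    s = arg1.strip()
    ans = ""
    for outer in range(0, len(s), width):
        ans += s[outer:outer + width] + "\r\n"
    return ans

print(fold_with_terminators("sachinbankarparthkarkahnyusuf"))
```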
bcarlson wrote: That's not actually what I was thinking... You won't need to add a line terminator at all.

But the OP wants the data to be pivoted at every 5th position. How will that happen without concatenating the line terminators?
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
The input data is one big long record. The import knows that it should treat it as if it is a file of 5-byte fixed length records. The import will 'fold' the input data after every 5th character and 'store' it internally. No line terminators will be explicitly added. Once the data is imported, DataStage knows the start/end of every record and will supply the necessary information to the downstream operator.
When you import a binary-EBCDIC file, do you add line terminators? I know we don't - we've never needed to. Once you define what the record looks like coming in and how you want it split, DataStage takes care of the rest.
Now, if you need to spit this data out into a file, then yes, you may need to specify line terminators. However, it looks like the question was how to split it up and get the data into Oracle, so that shouldn't be an issue.
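The import/export distinction can be sketched like this (plain Python as an illustration, not the actual DataStage operators): the import only slices the stream into records, and a terminator is concatenated only if and when the records are exported to a file.

```python
def import_fixed(raw, width):
    """Slice one continuous byte stream into fixed-length records."""
    return [raw[i:i + width] for i in range(0, len(raw), width)]

def export_with_terminators(records):
    """Terminators are added only here, at export time."""
    return b"".join(rec + b"\r\n" for rec in records)

recs = import_fixed(b"abcdefghij", 5)
print(recs)                          # records, no terminators
print(export_with_terminators(recs)) # terminators only on export
```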
Brad.
Here's a sample I put together. The program itself is exactly what I mentioned earlier, except that it writes to DB2 instead of Oracle (sorry, we are a Big Blue shop).
* Input file contains the following - 40 chars (data has no line terminator, just one continuous stream to end-of-file):

abcdefghijklmnopqrstuvwxyz1234567890!@#$

* Input schema:
Code: Select all
record
{final_delim=end, record_length=5, delim=none}
(
char5col:string[5];
)
Seq. File Import -> DB2 write
Here is the OSH that gets generated in the DSX file (edited to protect the innocent...):
Code: Select all
#################################################################
#### STAGE: load_target_table
## Operator
db2write
## Operator options
-table 'myschema.long_rec_split_to_db'
-mode replace
-dboptions '{tablespace = \\'mytbspace\\'}'
-dbname '[&db1]'
-server '[&server_instance1]'
-client_instance '[&client_instance1]'
-user '[&dbuser1]'
-password [&dbpasswd1]
## General options
[ident('load_target_table'); jobmon_ident('load_target_table')]
## Inputs
0< 'Sequential_File_4:DSLink2.v'
;
#################################################################
#### STAGE: Sequential_File_4
## Operator
import
## Operator options
-schema record
{final_delim=end, record_length=5, delim=none}
(
char5col:string[5];
)
-file '/u001/data/dsxchange/long_rec.dat'
-rejects continue
-reportProgress yes
## General options
[ident('Sequential_File_4'); jobmon_ident('Sequential_File_4')]
## Outputs
0> [] 'Sequential_File_4:DSLink2.v'
;
Hope this helps.

Brad.