
Posted: Wed Jun 20, 2007 1:45 am
by ArndW
Ray - Naaaah, I'll let you wait. I'll just ignore any posts to that effect :roll:

News78 - do you see that your "one line of data" can also be interpreted as "One file with rows of 5-byte fixed length data with no separator"? This is what bcarlson was driving at - you don't need to change your data around, just how you present it to DataStage.
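To see the idea outside DataStage, here is a hypothetical Python sketch (illustration only, not DataStage code): the same continuous stream, read in fixed-size chunks, yields all the records without any separator ever existing in the data.

```python
# Hypothetical illustration: reinterpret one continuous stream as
# fixed-length records -- no separators needed anywhere.
import io

def read_fixed(stream, width):
    """Yield fixed-width records from a continuous byte stream."""
    while True:
        rec = stream.read(width)
        if not rec:
            break
        yield rec

data = io.BytesIO(b"sachinbankarparthkar")  # "one line of data", no terminators
records = list(read_fixed(data, 5))
print(records)  # [b'sachi', b'nbank', b'arpar', b'thkar']
```

This is conceptually what the import does when the schema declares a fixed record length: you change how the bytes are presented, not the bytes themselves.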

Posted: Wed Jun 20, 2007 7:32 am
by DSguru2B
That's good thinking, guys. Split it at the source and then add a line terminator after every record to pivot it.

Posted: Wed Jun 20, 2007 4:32 pm
by bcarlson
DSguru2B wrote:That's good thinking, guys. Split it at the source and then add a line terminator after every record to pivot it.
That's not actually what I was thinking... You won't need to add a line terminator at all:

import -> Oracle write

If the import is done right, you don't have to do anything. Once DataStage imports it into the right format, you can immediately output to your Oracle table. You would only need to add the line terminator if you have to write it out to file.

Brad.

Re: folding a single record into multiple records at fixed w

Posted: Wed Jun 20, 2007 10:12 pm
by sachin1
Hello, with so many posts there is a little confusion!

I think you want your input data like

sachinbankarparthkarkahnyusuf

to be like

sachi
nbank
arpar
thkar
kahny
usuf

I have used a routine for this conversion, but it shows the data in the output file like
"sachi
nbank
arpar
thkar
kahny
usuf"
so you need to take care of the quote character ("). Also, if your input string is quite long, I think it will use a lot of resources.

Simple code:

Arg1 is input
----------------------------------------------
String1 = Trim(Arg1)

val1 = Len(String1)
Ans = ""
For Outer = 1 To val1 Step 5
   String2 = String1[Outer, 5] : Char(13) : Char(10)
   Ans = Ans : String2
Next Outer
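A hypothetical Python equivalent of the routine above, for illustration (the accumulator starts empty, and the final chunk may be shorter than 5 characters):

```python
def fold_with_crlf(s, width=5):
    """Fold a string into width-sized chunks, appending CR/LF after
    each chunk, mirroring the BASIC routine above."""
    s = s.strip()
    out = ""
    for i in range(0, len(s), width):
        out += s[i:i + width] + "\r\n"  # terminator after every chunk
    return out

print(fold_with_crlf("sachinbankarparthkarkahnyusuf"))
# sachi / nbank / arpar / thkar / kahny / usuf, each followed by CR/LF
```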

Posted: Wed Jun 20, 2007 10:44 pm
by ray.wurlod
Harrumph!! If you're going to use DataStage BASIC, you may as well just use the Fold() function.

But this is the parallel jobs forum...
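For reference, Ray's suggestion could look something like this in DataStage BASIC (a sketch only, assuming the UniVerse-style Fold() function, which divides a string into field-mark-delimited substrings of at most the given length):

```
* Fold() divides the string into @FM-delimited pieces of at most 5 chars;
* Change() then swaps each field mark for CR/LF.
Ans = Change(Fold(Trim(Arg1), 5), @FM, Char(13) : Char(10))
```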

Posted: Thu Jun 21, 2007 7:58 am
by DSguru2B
bcarlson wrote:That's not actually what I was thinking... You won't need to add a line terminator at all:
But the OP wants the data to be pivoted at every 5th position. How will that happen without concatenating the line terminators :?:

Posted: Thu Jun 21, 2007 9:22 am
by bcarlson
The input data is one big long record. The import knows that it should treat it as if it is a file of 5-byte fixed length records. The import will 'fold' the input data after every 5th character and 'store' it internally. No line terminators will be explicitly added. Once the data is imported, DataStage knows the start/end of every record and will supply the necessary information to the downstream operator.

When you import a binary-EBCDIC file, do you add line terminators? I know we don't - we've never needed to. Once you define what the record looks like coming in and how you want it split, DataStage takes care of the rest.

Now, if you need to spit this data out into a file, then yes, you may need to specify line terminators. However, it looks like the question was how to split it up and get the data into Oracle, so that shouldn't be an issue.

Brad.

Posted: Thu Jun 21, 2007 9:28 am
by DSguru2B
Ahh, now I see. For some reason I was stuck on the line terminator. Thanks for the clarification.

Posted: Thu Jun 21, 2007 10:03 am
by bcarlson
Here's a sample I put together:

* Input file contains the following - 40 chars (data has no line terminator, just one continuous stream to end-of-file)
abcdefghijklmnopqrstuvwxyz1234567890!@#$
* Input schema:

Code: Select all

record
  {final_delim=end, record_length=5, delim=none}
(
  char5col:string[5];
)
The job itself is exactly what I mentioned earlier, except that it writes to DB2 instead of Oracle (sorry, we are a Big Blue shop):

Seq. File Import -> DB2 write

Here is the OSH that gets generated in the DSX file (edited to protect the innocent...):

Code: Select all

#################################################################
#### STAGE: load_target_table
## Operator
db2write
## Operator options
-table 'myschema.long_rec_split_to_db'
-mode replace
-dboptions '{tablespace = \\'mytbspace\\'}'
-dbname '[&db1]'
-server '[&server_instance1]'
-client_instance '[&client_instance1]'
-user '[&dbuser1]'
-password [&dbpasswd1]

## General options
[ident('load_target_table'); jobmon_ident('load_target_table')]
## Inputs
0< 'Sequential_File_4:DSLink2.v'
;

#################################################################
#### STAGE: Sequential_File_4
## Operator
import
## Operator options
-schema record
  {final_delim=end, record_length=5, delim=none}
(
  char5col:string[5];
)
-file  '/u001/data/dsxchange/long_rec.dat'
-rejects continue
-reportProgress yes

## General options
[ident('Sequential_File_4'); jobmon_ident('Sequential_File_4')]
## Outputs
0> [] 'Sequential_File_4:DSLink2.v'
;
Hope this helps.

Brad.

Posted: Thu Jun 21, 2007 10:13 am
by DSguru2B
Yup. I got it. Thanks once again.