append column

Posted: Wed Jan 19, 2005 3:13 pm
by DSkkk
Hi all,
I want to append a column from a file that has only one row onto another file that has three columns of its own. The files share no common columns, but every row of the output file should be populated with the same value from the single row in the first file.
For example:
**first file**
date
2004-01-19

**second file**
no code perm
1 A y
2 B n
3 C y

**output file**
date no code perm
2004-01-19 1 A y
2004-01-19 2 B n
2004-01-19 3 C y

I tried using a Merge stage and a Transformer, but did not get the required output.
Can anyone suggest an approach?
Thanks.

Posted: Wed Jan 19, 2005 4:01 pm
by chulett
Put the first file's single record into a hash file with a hard-coded key value - like a '1'. Then stream in your second file, do the lookup (deriving that same hard-coded '1' as the key expression in the Transformer) and add the column to every output row.

Easy Peasy. :wink:

Posted: Wed Jan 19, 2005 4:33 pm
by vmcburney
You could also open file 1 with a routine and retrieve the value. Pass this value into your job as a job parameter, then have the job read file 2 and output the job parameter with each row of file 2.

It is quite easy in a Sequence job to set a job parameter by calling a routine. In your Job Activity stage, set the Value Expression to the name and input arguments of your routine.

Name       Value Expression
INPUTDATE  rGetInputDate(FilePath:InputFileName)

The routine opens the file and returns the date to the Value Expression.
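
For reference, here is a minimal sketch of what the body of such a routine could look like in DataStage BASIC. The rGetInputDate name and its two arguments come from the post above; the header-skipping read is an assumption based on the sample files, so drop it if your file holds the data row only.

* Routine: rGetInputDate(FilePath, InputFileName)
* Reads a one-row sequential file and returns its data value in Ans.
      Ans = ''
      FullPath = FilePath : InputFileName
      OpenSeq FullPath To FileVar Then
         ReadSeq Line From FileVar Then NULL        ;* skip the header row ("date")
         ReadSeq Line From FileVar Then Ans = Line  ;* the actual date value
         CloseSeq FileVar
      End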

Posted: Wed Jan 19, 2005 5:41 pm
by chucksmith
With Craig's hashed file design, also enable the 'Pre-load file to memory' option in the Hashed File stage.

Posted: Wed Jan 19, 2005 6:24 pm
by chulett
True. Shouldn't be any problem loading the entire hash file into memory. :wink:

Posted: Thu Jan 20, 2005 3:48 am
by Sainath.Srinivasan
Try this Unix script:

> /tmp/testfile
for i in `cut -f1 -d" " t2`
do
cat t1 >> /tmp/testfile
done

paste -d" " /tmp/testfile t2

where t1 is the first file holding the date, t2 is the second file with multiple rows, and both files contain data rows only (strip the header lines first, or the line counts will not match). The initial > /tmp/testfile empties the scratch file so a rerun does not append to leftover data, and -d" " makes paste join with a space rather than its default tab.
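
With the sample data from the first post (headers removed), the paste step prints:

2004-01-19 1 A y
2004-01-19 2 B n
2004-01-19 3 C y

If the output needs its header row back, echo "date no code perm" into the target file before appending the paste output.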