Columns to rows - pivot stage ?

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

Xanadu
Participant
Posts: 61
Joined: Thu Jul 22, 2004 9:29 am

Columns to rows - pivot stage ?

Post by Xanadu »

Columns to rows...

I know that with the Pivot stage I can split the columns of one row into multiple rows, but can I do it the other way around too?
Here is my problem:

I have a table like this:

ID  Colname  New_Value
23  colA     100
23  colB     200
23  colC     300
24  colA     400
24  colB     500
24  colC     600

which should be converted to this table:

ID  colA  colB  colC
23  100   200   300
24  400   500   600

Can someone help, please?
Thanks
-Xan
kiran_kom
Participant
Posts: 29
Joined: Mon Jan 12, 2004 10:51 pm

Post by kiran_kom »

Try a self-join... three times...
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Search the forum for "vertical pivot" (for this is what you are seeking to do). You will find a number of techniques described.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
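
To make concrete what a "vertical pivot" looks like, here is a minimal sketch in Python over the sample data from the question. The dictionary-based grouping is only an illustrative assumption, not one of the forum techniques referred to above.

Code: Select all

# Vertical pivot: fold many (ID, Colname, New_Value) rows into one row per ID.
rows = [
    ("23", "colA", "100"), ("23", "colB", "200"), ("23", "colC", "300"),
    ("24", "colA", "400"), ("24", "colB", "500"), ("24", "colC", "600"),
]

pivoted = {}                        # ID -> {Colname: New_Value}
for row_id, colname, value in rows:
    pivoted.setdefault(row_id, {})[colname] = value

for row_id, cols in pivoted.items():
    print(row_id, cols.get("colA"), cols.get("colB"), cols.get("colC"))
# 23 100 200 300
# 24 400 500 600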
changming
Participant
Posts: 68
Joined: Wed Oct 13, 2004 3:35 am

Suggest another way

Post by changming »

Build two jobs.

The first job reads the table or file and loads the column names into a hashed file. After the job finishes you will have a distinct list of column names, because the hashed file overwrites rows that share the same key (here, the key is the column name). This step could also be done in the database.

The second job does a lookup against the hashed file built by the first job and writes the data into a second hashed file. Pre-load the hashed file to memory and enable locking for updates, so that you can read from and write to the same file. The first record written into the second hashed file will look like this:

23, 100, , , , , , , ,

(use a routine to write a number of delimiters equal to the count of column names in the first hashed file, e.g. a SELECT COUNT(*) against it). That is a delimited string: the ID value is always in the first field, and each value is written into the position given by its sequence number in the lookup hashed file (it is not hard to assign a sequence number to every distinct column name).

When you read the next record from the source file and look it up against the first hashed file, you will know the position where its value belongs in the delimited string, and you can insert the value at exactly that position in the second hashed file (reading and writing the same file).

In your case ID is the primary key, so after the records have been loaded, the second hashed file will look like this:

23,100,200,300,somevalue1,somevalue2,...
24,400,500,600,somevalue3,somevalue4,...

Of course, some routines are needed to search the hashed file and to read and write a particular record in it.
Just a rough idea.
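
A minimal sketch of this two-pass idea in Python, with ordinary dictionaries standing in for the two hashed files; the positional, delimited-record layout follows the description above, and the data values are the sample from the question.

Code: Select all

# Pass 1: collect the distinct column names and assign each a position,
# mimicking the first hashed file keyed on Colname.
rows = [
    ("23", "colA", "100"), ("23", "colB", "200"), ("23", "colC", "300"),
    ("24", "colA", "400"), ("24", "colB", "500"), ("24", "colC", "600"),
]

positions = {}                      # Colname -> field position (0-based)
for _id, colname, _value in rows:
    positions.setdefault(colname, len(positions))

# Pass 2: build one delimited record per ID, writing each value into the
# slot given by its column's position (the second hashed file, keyed on ID).
pivoted = {}                        # ID -> list of field values
for _id, colname, value in rows:
    record = pivoted.setdefault(_id, [""] * len(positions))
    record[positions[colname]] = value

for _id, fields in pivoted.items():
    print(",".join([_id] + fields))
# 23,100,200,300
# 24,400,500,600

Because the column positions are discovered from the data itself, the same logic copes with any number of distinct column names.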
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL

Re: Columns to rows - pivot stage ?

Post by kcbland »

Me, I like simple SQL to pivot the data:

Code: Select all

-- Spread New_Value into one column per Colname, then collapse the three
-- sparse rows per ID into a single row; MAX() ignores the NULLs.
select ID, MAX("colA") "colA", MAX("colB") "colB", MAX("colC") "colC"
from
   (select ID,
       case when Colname = 'colA' then New_Value else NULL end "colA",
       case when Colname = 'colB' then New_Value else NULL end "colB",
       case when Colname = 'colC' then New_Value else NULL end "colC"
    from your_table
   ) pvt
group by ID
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
changming
Participant
Posts: 68
Joined: Wed Oct 13, 2004 3:35 am

What if there are 100 columns?

Post by changming »

What if the column list changes dynamically?