columns to rows ..
I know that using the Pivot stage I can split one row into multiple columns - but can I do it the other way around too?
Here is my problem:
I have a table like this
ID Colname New_Value
23 colA 100
23 colB 200
23 colC 300
24 colA 400
24 colB 500
24 colC 600
which should be converted to this table:
ID colA colB colC
23 100 200 300
24 400 500 600
Can someone help please.
Thanks
-Xan
Columns to rows - pivot stage ?
Suggest another way: build two jobs.

The first job reads the table or file and loads Colname into a hash file. After the job finishes you will have the distinct column names, because a hash file overwrites records with the same key (here the key is the column name). This step could also be done in the database.

The second job looks up the hash file built by the first job and writes the data into another hash file. Load the hash file into memory and lock on write, so that you can read from and write to the same file. The first record written into the second hash file will look like this:

23, 100, , , , , , , ,

(use a routine to write a number of delimiters matching the count of column names in hash file one, e.g. select count(*) from hash file one). That is a delimited string: the ID value is always in the first field, and each value is written at the position given by its sequence number in the lookup hash file (it is not hard to assign a sequence number to every distinct column name).

When you read the second record from the source file and look up the first hash file, you will know the position where the value belongs in the delimited string; then insert the value at that exact position in the second hash file (reading from and writing to the same file).

I think in your case ID is the primary key, so after some records are loaded into the second hash file it will look like this:

23,100,200,300,somevalue1,somevalue2,...
24,400,500,600,somevalue3,somevalue4,...

Of course, some routines are needed to search the hash file and to read and write a particular record in it.

Just a rough idea.
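The two-job hash-file idea above can be modeled outside DataStage with ordinary dictionaries. This is only a sketch of the logic (the sample rows mirror the question; none of this is actual DataStage code):

```python
# Pass 1: collect the distinct column names and assign each a position
# (this mimics loading Colname into a hash file keyed on the name).
rows = [
    (23, "colA", 100), (23, "colB", 200), (23, "colC", 300),
    (24, "colA", 400), (24, "colB", 500), (24, "colC", 600),
]

positions = {}                     # "hash file one": Colname -> sequence number
for _id, colname, _value in rows:
    if colname not in positions:   # a hash file would simply overwrite the key
        positions[colname] = len(positions)

# Pass 2: build one delimited record per ID, placing each value at the
# position looked up in pass 1 (this mimics "hash file two").
width = len(positions)             # i.e. select count(*) from hash file one
pivoted = {}                       # "hash file two": ID -> list of values
for _id, colname, value in rows:
    record = pivoted.setdefault(_id, [None] * width)
    record[positions[colname]] = value

for _id in sorted(pivoted):
    print(_id, pivoted[_id])
# 23 [100, 200, 300]
# 24 [400, 500, 600]
```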
Re: Columns to rows - pivot stage ?
Me, I like simple SQL to pivot the data:
Code:

select ID,
       MAX(colA) "colA", MAX(colB) "colB", MAX(colC) "colC"
from
  (select ID,
          case when Colname = 'colA' then New_Value else NULL end "colA",
          case when Colname = 'colB' then New_Value else NULL end "colB",
          case when Colname = 'colC' then New_Value else NULL end "colC"
   from your_table)
group by ID
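That CASE/MAX pivot is easy to try against any SQL engine. For instance, an in-memory SQLite sketch using the sample data from the question:

```python
import sqlite3

# Load the sample rows from the question into a throwaway table.
con = sqlite3.connect(":memory:")
con.execute("create table your_table (ID int, Colname text, New_Value int)")
con.executemany(
    "insert into your_table values (?, ?, ?)",
    [(23, "colA", 100), (23, "colB", 200), (23, "colC", 300),
     (24, "colA", 400), (24, "colB", 500), (24, "colC", 600)],
)

# The CASE/MAX pivot from the post above, verbatim.
result = con.execute("""
    select ID, MAX(colA) "colA", MAX(colB) "colB", MAX(colC) "colC"
    from (select ID,
                 case when Colname = 'colA' then New_Value else NULL end "colA",
                 case when Colname = 'colB' then New_Value else NULL end "colB",
                 case when Colname = 'colC' then New_Value else NULL end "colC"
          from your_table)
    group by ID
""").fetchall()

print(sorted(result))
# [(23, 100, 200, 300), (24, 400, 500, 600)]
```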
Kenneth Bland
Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
What if there are 100 columns?
What if the columns change dynamically?
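With many or changing columns, one common workaround (a sketch under my own assumptions, not something from this thread) is to generate the CASE/MAX expressions from the distinct Colname values at run time:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table your_table (ID int, Colname text, New_Value int)")
con.executemany(
    "insert into your_table values (?, ?, ?)",
    [(23, "colA", 100), (23, "colB", 200),
     (24, "colA", 400), (24, "colD", 700)],   # column set not known up front
)

# Step 1: discover the column names from the data itself.
names = [r[0] for r in
         con.execute("select distinct Colname from your_table order by Colname")]

# Step 2: build one MAX(CASE ...) expression per discovered name.
# Note: splicing names from data into SQL is an injection risk if the
# values are untrusted; here they come from our own controlled table.
exprs = ", ".join(
    'MAX(case when Colname = \'{0}\' then New_Value end) "{0}"'.format(n)
    for n in names
)
sql = "select ID, {} from your_table group by ID".format(exprs)

result = con.execute(sql).fetchall()
print(names)            # ['colA', 'colB', 'colD']
print(sorted(result))   # missing cells come back as NULL/None
```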