same sequence number to be used in loading multiple tables

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

harithay
Participant
Posts: 106
Joined: Tue Dec 14, 2004 10:51 pm

same sequence number to be used in loading multiple tables

Post by harithay »

Hi all,

I am loading from a flat file into three tables. The target database is SQL Server, and I have 20 columns in the source file.

First I load the parent table, mapping 5 columns from the source, with surrogate key generation; the sequence is generated in the SQL Server database.

I need to load the other two tables as well: the second table pivots 7 columns and the third pivots 8 columns (both from the same source). The foreign key for these two tables is the sequence number generated when the first table was loaded in the SQL Server database.

What is the procedure for using that same sequence number to load the other two tables?
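
For concreteness, here is a minimal sketch of the kind of schema I mean, with hypothetical table and column names; the parent's IDENTITY column plays the role of the generated sequence:

Code: Select all

-- hypothetical names; the surrogate "sequence" is an IDENTITY column
CREATE TABLE parent_tbl (
    parent_id INT IDENTITY(1,1) PRIMARY KEY,  -- generated by SQL Server
    nk_col1 VARCHAR(20) NOT NULL,             -- the 5 natural-key columns
    nk_col2 VARCHAR(20) NOT NULL,
    nk_col3 VARCHAR(20) NOT NULL,
    nk_col4 VARCHAR(20) NOT NULL,
    nk_col5 VARCHAR(20) NOT NULL
);

-- the two child tables hold the pivoted columns as rows and must
-- carry the parent's generated key as their foreign key
CREATE TABLE child_tbl1 (
    parent_id  INT NOT NULL REFERENCES parent_tbl (parent_id),
    attr_name  VARCHAR(30) NOT NULL,          -- one of the 7 pivoted columns
    attr_value VARCHAR(50)
);

CREATE TABLE child_tbl2 (
    parent_id  INT NOT NULL REFERENCES parent_tbl (parent_id),
    attr_name  VARCHAR(30) NOT NULL,          -- one of the 8 pivoted columns
    attr_value VARCHAR(50)
);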


Thanks
dsdesigner
Participant
Posts: 34
Joined: Thu Jul 29, 2004 1:03 pm

Post by dsdesigner »

Hi Harithay,

You can do it by creating 4 separate jobs.

Job 1) Load the parent table, mapping 5 columns from the source, with the surrogate key generated in the SQL Server database.

src------>transformer-------->target1

Job 2) Create a hashed file from the first target, with the natural key and the generated surrogate as the columns. Make the natural key the key of the hashed file.

target1-------->hashfile

Jobs 3 and 4) Read from the source, perform the pivoting, look up the surrogate in the hashed file by joining on the natural keys, and populate the targets.

Code: Select all


                         hashfile
                          |
                          |
src2------>pivot------>transformer-------->target
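
For Job 2, the extraction query only needs the natural key and the generated surrogate; a sketch, reusing the hypothetical names from the question above:

Code: Select all

-- Job 2 extraction: these columns become the hashed file,
-- keyed on the 5 natural-key columns
SELECT nk_col1, nk_col2, nk_col3, nk_col4, nk_col5,
       parent_id          -- the surrogate the child rows must inherit
FROM   parent_tbl;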

Thanks,
Shekar
harithay
Participant
Posts: 106
Joined: Tue Dec 14, 2004 10:51 pm

Post by harithay »

Hi Shekar, that is exactly the way I am doing it, but my lookup is not working properly. My source is the same for job 1 (parent table), job 3 (7 columns, pivot) and job 4 (8 columns, pivot).

Yes, I loaded the surrogate key and the 5-column composite (natural) key into the hashed file as the matching key columns. In the transformer I mapped the 5 key columns between the source link and the hashed file link, and I map the surrogate key from the hashed file link to the output link.

hashfile (parent table from job1)
 |
 |
source ------>transformer------>pivot-------->target

There is no single column that matches the hashed file key, so I am using the combination of 5 columns as the matching key.

There are 27142 records in the source, and the same number of records go to the hashed file. But when I run the job I get only 178 records on the hashed file reference link into the transformer. The job finishes successfully but does not load the surrogate key to the output.
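
One thing worth checking on the parent table (a sketch with the same hypothetical names): trailing blanks in the key columns will defeat an exact hashed-file lookup, even though SQL Server's own comparisons ignore them.

Code: Select all

-- in the hashed file, 'ABC' and 'ABC ' are different keys, even though
-- SQL Server's '=' ignores trailing blanks; DATALENGTH exposes them
SELECT COUNT(*) AS rows_with_padded_keys
FROM   parent_tbl
WHERE  DATALENGTH(nk_col1) <> DATALENGTH(RTRIM(nk_col1))
   OR  DATALENGTH(nk_col2) <> DATALENGTH(RTRIM(nk_col2))
   OR  DATALENGTH(nk_col3) <> DATALENGTH(RTRIM(nk_col3))
   OR  DATALENGTH(nk_col4) <> DATALENGTH(RTRIM(nk_col4))
   OR  DATALENGTH(nk_col5) <> DATALENGTH(RTRIM(nk_col5));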

Thanks





dsdesigner wrote:Hi Harithay,

You can do it by creating 4 separate jobs.

Job 1) Load the parent table, mapping 5 columns from the source, with the surrogate key generated in the SQL Server database.

src------>transformer-------->target1

Job 2) Create a hashed file from the first target, with the natural key and the generated surrogate as the columns. Make the natural key the key of the hashed file.

target1-------->hashfile

Jobs 3 and 4) Read from the source, perform the pivoting, look up the surrogate in the hashed file by joining on the natural keys, and populate the targets.

Code: Select all


                         hashfile
                          |
                          |
src2------>pivot------>transformer-------->target

Thanks,
Shekar
Sainath.Srinivasan
Participant
Posts: 3337
Joined: Mon Jan 17, 2005 4:49 am
Location: United Kingdom

Post by Sainath.Srinivasan »

Did you check the format and content of the source and reference data?

You can try setting a default value to test whether the lookup is working.

You can also try deleting and re-creating your hashed file in case of any dictionary problems.
harithay
Participant
Posts: 106
Joined: Tue Dec 14, 2004 10:51 pm

Post by harithay »

Hi Sainath,

I did not get you. The metadata is correct on both the source and the reference link.

Can you please explain in detail?
dsdesigner
Participant
Posts: 34
Joined: Thu Jul 29, 2004 1:03 pm

Post by dsdesigner »

harithay wrote:There are 27142 records in the source, and the same number of records go to the hashed file. But when I run the job I get only 178 records on the hashed file reference link into the transformer. The job finishes successfully but does not load the surrogate key to the output.
Are you getting these statistics from the Designer's "view link statistics"?
Are the five columns unique across all 27142 rows?

Even though you see 27142 records going to the hashed file, they may be replaced/overwritten in the hashed file if the selected key is not unique.

Please let us know whether the combination of these 5 columns is unique across all rows, and if it is, whether you selected those five columns as the key when you created the hashed file.
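
A quick way to check, again with the hypothetical names (any row returned means the composite key is not unique, so later rows overwrite earlier ones in the hashed file):

Code: Select all

-- a hashed file keeps one record per key value, so duplicate
-- natural keys silently collapse to a single row
SELECT nk_col1, nk_col2, nk_col3, nk_col4, nk_col5,
       COUNT(*) AS dup_count
FROM   parent_tbl
GROUP  BY nk_col1, nk_col2, nk_col3, nk_col4, nk_col5
HAVING COUNT(*) > 1;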

Thanks,
Shekar
harithay
Participant
Posts: 106
Joined: Tue Dec 14, 2004 10:51 pm

Post by harithay »

Hi Shekar, yes, the combination of all 5 columns is unique.

What might be the mistake?

dsdesigner wrote:Are you getting these statistics from the Designer's "view link statistics"?
Are the five columns unique across all 27142 rows?

Even though you see 27142 records going to the hashed file, they may be replaced/overwritten in the hashed file if the selected key is not unique.

Please let us know whether the combination of these 5 columns is unique across all rows, and if it is, whether you selected those five columns as the key when you created the hashed file.

Thanks,
Shekar
Last edited by harithay on Wed Jun 08, 2005 5:04 pm, edited 1 time in total.
dsdesigner
Participant
Posts: 34
Joined: Thu Jul 29, 2004 1:03 pm

Post by dsdesigner »

Hi Harithay,
Please try getting the job down to the bare minimum. In jobs 3 and 4, don't use the Pivot stage; just write the output to a temporary sequential file.

Code: Select all

                         hashfile
                          |
                          |
seqfile------------>transformer------>seqfile
See if it is getting the right surrogates this way.
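
Logically, the lookup in that transformer behaves like the left join below (hypothetical names again; staged_source stands in for the file rows). Rows where parent_id comes back NULL are exactly the ones that would show a blank surrogate:

Code: Select all

-- the hashed-file lookup, expressed as a join: an unmatched
-- natural key leaves the surrogate NULL (blank in the output)
SELECT s.*, p.parent_id
FROM   staged_source AS s
LEFT JOIN parent_tbl AS p
       ON  p.nk_col1 = s.nk_col1
       AND p.nk_col2 = s.nk_col2
       AND p.nk_col3 = s.nk_col3
       AND p.nk_col4 = s.nk_col4
       AND p.nk_col5 = s.nk_col5;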

Thanks,
Shekar
Sainath.Srinivasan
Participant
Posts: 3337
Joined: Mon Jan 17, 2005 4:49 am
Location: United Kingdom

Post by Sainath.Srinivasan »

Try writing a simple job that passes a constant and references it against the hashed file. This will confirm whether your lookup works at all.

Next, write a sequential file in the job linked to the central transformer, and carry the source column used for the lookup as a new column in the output. This will verify the value being passed to the reference link.

Check this and let us know.
harithay
Participant
Posts: 106
Joined: Tue Dec 14, 2004 10:51 pm

Post by harithay »

Hi Shekar,

I tried that way as well, connecting a temporary sequential file.

In that case too I get only 178 records on the reference link out of the 27000-odd records, and again the surrogate key is not written to the output link; the output is just blank.

Thanks
dsdesigner wrote:Hi Harithay,
Please try getting the job down to the bare minimum. In jobs 3 and 4, don't use the Pivot stage; just write the output to a temporary sequential file.

Code: Select all

                         hashfile
                          |
                          |
seqfile------------>transformer------>seqfile
See if it is getting the right surrogates this way.

Thanks,
Shekar
dsdesigner
Participant
Posts: 34
Joined: Thu Jul 29, 2004 1:03 pm

Post by dsdesigner »

Then there is definitely something wrong in the way you are creating the hashed file or referencing it. Please write the contents of the hashed file to a sequential file and see how many records the sequential file contains.

Thanks,
Shekar
harithay
Participant
Posts: 106
Joined: Tue Dec 14, 2004 10:51 pm

Post by harithay »

Hi Shekar,

I did write it to a sequential file; the sequential file receives 27142 records, and the 5 key columns look correct in the output of the Hashed File stage (the input to the sequential file).

So I think there is no problem with the hashed file creation, right?

dsdesigner wrote:Then there is definitely something wrong in the way you are creating the hashed file or referencing it. Please write the contents of the hashed file to a sequential file and see how many records the sequential file contains.

Thanks,
Shekar