Datastage Sequencer problem
Moderators: chulett, rschirm, roy
hi,
hi,
I am new to DataStage parallel. My requirement is to generate one surrogate key per job run, at job start; this ID is a MetatagId for each job.
I am using the Surrogate Key Generator stage to create the surrogate key.
When generating the key, I set the number of records option in the Surrogate Key stage to 1, but when the job runs, the stage creates 2 records: 1 and 101. I have chosen the option to generate from the last highest value, but it does not pick up the last value; during the 2nd run the job generates 201 and 301.
Could you kindly let me know why 2 records are getting created instead of one?
Also, is there any inbuilt function (a key management function) to generate the key, like the one used in server jobs?
regards
rumu
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
That's one "solution". Another would be to execute stage(s) in sequential mode. Another would be to execute stage(s) in a single-node node pool within a multi-node configuration. My personal preference for processing a single row is a server job, where the startup and execution overheads are far lower than for a parallel job.
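For the server-job option, a minimal sketch of the usual approach (assuming the SDK key management routines are installed and using "MetatagId" as an illustrative key name) is a Transformer output derivation along these lines:

```
* Server-job Transformer derivation (sketch; "MetatagId" is an example key name)
* KeyMgtGetNextValue returns the next sequential value for the named key,
* persisting the last-used value between runs in the SDK key management file
KeyMgtGetNextValue("MetatagId")
```

Because the routine maintains its own state, a single-row server job using it would hand out exactly one new value per run, which is the behaviour being asked for here.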
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
I tried one option, i.e. running the stage in sequential mode.
I set the Surrogate Key Generator stage to sequential mode; my output is a Sequential File stage.
I got the following error:
'Surrogate_Key_Generator_0: Error when checking operator: Input data set on port 0 has a partition method, but the operator is not parallel.'
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
I'm not asking anything. I am observing that the job design has a partitioning algorithm specified, and that it is that which is triggering the alert message.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Hi Ray,
here is the dump:
OSH script
# OSH / orchestrate script for Job metatagidcreate compiled at 09:28:02 24 Sep 2008
#################################################################
#### STAGE: Surrogate_Key_Generator_0
## Operator
surrogatekey
## Operator options
-sk_id '/transfer01/workdir/Metatagid1.txt'
-output_key 'Metatagid'
-asSequencer
-records 1
## General options
[ident('Surrogate_Key_Generator_0'); jobmon_ident('Surrogate_Key_Generator_0'); seq; nodemap ( node1 ) ]
## Outputs
0> [-pp; modify (
metatag:string[max=30]=Metatagid;
keep
Metatagid;
)] 'Surrogate_Key_Generator_0:DSLink2.v'
;
#################################################################
#### STAGE: Sequential_File_4
## Operator
export
## Operator options
-schema record
{final_delim=end, delim=',', quote=double}
(
metatag:string[max=30];
)
-file '/transfer01/workdir/testmeta'
-overwrite
-rejects continue
## General options
[ident('Sequential_File_4'); jobmon_ident('Sequential_File_4'); nodemap ( node1 ) ]
## Inputs
0< [] 'Surrogate_Key_Generator_0:DSLink2.v'
;
# End of OSH code
regards,
rumu