Error While Using DB2 Sequence for Surrogate Key
Moderators: chulett, rschirm, roy
-
- Participant
- Posts: 7
- Joined: Wed Jan 16, 2008 6:50 pm
Error While Using DB2 Sequence for Surrogate Key
Hello,
I am trying to generate surrogate keys using a DB2 sequence, but keep getting an error. The user-defined SQL in the DB2/UDB Enterprise stage is:
INSERT INTO TEST_ACCOUNT_LIST (TSNO, TS, TSNOTE)
VALUES (NEXTVAL FOR ACC_SEQ, ORCHESTRATE.TS, ORCHESTRATE.TSNOTE)
and the error message is:
test_account_list_db2: When preparing operator: When binding partitioner interface: Could not find input field "tsno".
The job reads from a sequential file and, through a Transformer, inserts the data into the TEST_ACCOUNT_LIST table. TS and TSNOTE come from the sequential file, whereas TSNO comes from a DB2 sequence (ACC_SEQ). As far as DataStage is concerned, I am a complete newbie.
Appreciate any Help !
Cheers,
Shawn
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
If using a sequence, you do NOT supply a key value from DataStage. Therefore the only columns passed from DataStage should be the non-key columns; do not include TSNO in the Columns grid.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
-
- Participant
- Posts: 7
- Joined: Wed Jan 16, 2008 6:50 pm
Ray,
Thanks for the information. I removed the column from the grid, but now get a different error message:
main_program: Missing field: TSNO in the input dataset.
The job structure is:
[sequential-file] ---> [transformer] ---> [db2-table]
Sequential file structure (CSV):
TS, TSNOTE
DB2 table structure:
TSNO
TS
TSNOTE
The Columns grid in the sequential file, Transformer and DB2 stages contains:
ts
tsnote
Cheers,
Shawn
In which case you are using custom SQL that still contains TSNO.
-
- Participant
- Posts: 7
- Joined: Wed Jan 16, 2008 6:50 pm
ArndW,
I am using the following user-defined INSERT statement:
INSERT INTO TEST_ACCOUNT_LIST (TSNO, TS, TSNOTE)
VALUES (NEXTVAL FOR ACC_SEQ, ORCHESTRATE.TS, ORCHESTRATE.TSNOTE)
I know for sure I must be doing something really stupid.
What I am trying to achieve here is:
1) Read a sequential file (two comma-separated fields: "ts" and "tsnote")
2) Trim the fields (leading/trailing) in the Transformer
3) Insert into a DB2 table (TSNO (BIGINT), TS and TSNOTE) using the values from the sequential file and a DB2 sequence (surrogate key for the TSNO column)
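For anyone trying to reproduce the setup in those three steps, it might look roughly like the following DB2 SQL. The column types and sequence options here are assumptions for illustration, not taken from the actual DDL:

```sql
-- Hypothetical DDL for the target table (column types assumed)
CREATE TABLE TEST_ACCOUNT_LIST (
    TSNO   BIGINT NOT NULL,
    TS     VARCHAR(100),
    TSNOTE VARCHAR(500)
);

-- Hypothetical definition of the surrogate-key sequence
CREATE SEQUENCE ACC_SEQ AS BIGINT
    START WITH 1
    INCREMENT BY 1;

-- The user-defined SQL the stage issues; the ORCHESTRATE.* placeholders
-- stand for the columns arriving on the input link
INSERT INTO TEST_ACCOUNT_LIST (TSNO, TS, TSNOTE)
VALUES (NEXTVAL FOR ACC_SEQ, ORCHESTRATE.TS, ORCHESTRATE.TSNOTE);
```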
Thanks for the help !
Shawn
Sorry, I just realized I didn't pay attention to your first post. Are you 100% certain that you aren't using TSNO anywhere else in the stage? Perhaps in custom update code, or before/after SQL? That is the only thing I can think of at the moment.
-
- Participant
- Posts: 7
- Joined: Wed Jan 16, 2008 6:50 pm
ArndW, Thanks for the response.
No, I checked the job thoroughly and I can confirm that I am not using TSNO anywhere else.
But here's an interesting thing:
When I used a DB2/UDB API stage for the table, the job worked fine; I was able to insert records based on the sequence.
(All other jobs at our place use the DB2/UDB Enterprise stage.)
Individually, I can confirm that the sequence works fine, and since other jobs use the Enterprise stage, I assume that works fine too.
The only problem is that the DB2/UDB Enterprise stage is NOT working with the DB2 sequence. I wonder if there is any fundamental difference in the way these two stages handle DB2 sequences.
Cheers,
Shawn
-
- Participant
- Posts: 7
- Joined: Wed Jan 16, 2008 6:50 pm
Got it to work finally, after three cans of Red Bull and a couple of hours of grappling with a slow server...
:D :D :D
I changed the following setting and the job is running like a charm!
In the DB2/UDB Enterprise stage, I changed Stage --> Advanced --> Execution Mode to "Sequential"; the default was "Parallel".
I don't have an explanation for why it worked, but I will experiment with a couple of other settings and do some testing (DB2 "NO ORDER" clause on the sequence, sequence cache value, etc.).
Though it works, it probably negates whatever parallel benefit the job might get by running on appropriate hardware.
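For reference, the sequence experiments mentioned above would be plain DB2 statements along these lines (ACC_SEQ is the sequence from this thread; the cache size is just an illustrative value, not a recommendation):

```sql
-- NO ORDER lets DB2 hand out cached values without enforcing strict
-- generation order across partitions/members, which helps concurrency
ALTER SEQUENCE ACC_SEQ NO ORDER;

-- A larger cache reduces catalog contention when many values are
-- requested concurrently (1000 is an arbitrary example)
ALTER SEQUENCE ACC_SEQ CACHE 1000;
```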
Cheers,
Shawn
Shawn_2008, I have also faced this issue several times. In our project there were a lot of jobs in which we were generating SKs using a DB2 sequence. We also followed a similar approach of switching the execution mode of the stage to Sequential. I am also not fully aware of the reason. If some bright minds can shed some light on this particular workaround, it would be much appreciated by learners like us. :D
-
- Premium Member
- Posts: 99
- Joined: Mon Sep 03, 2007 7:49 am
- Location: Stockholm, Sweden
The error:
test_account_list_db2: When preparing operator: When binding partitioner interface: Could not find input field "tsno".
resides in what has been defined as the partition key in your database table. Your table definition probably has TSNO set as the partition key, and since you're using a parallel database connection, DataStage tries to go directly to the right node. However, when working out where to send each row based on the partition key defined in the target table (fetched from the system tables), DataStage can't find the column (TSNO) to partition on, so it fails and puts that warning in the log.
The solution is to either set something else as the partition key, create the surrogate keys in DataStage and send them to the target table, or use sequential mode at delivery, as you all suggested.
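As a sketch of that scenario (the table and column names follow this thread, but the DISTRIBUTE BY clause is an assumption about how such a table might be defined in a partitioned database; it is not taken from the thread):

```sql
-- If the table is hash-distributed on the surrogate key, the Enterprise
-- stage's partitioner expects TSNO on its input link and fails when the
-- column is absent
CREATE TABLE TEST_ACCOUNT_LIST (
    TSNO   BIGINT NOT NULL,
    TS     VARCHAR(100),
    TSNOTE VARCHAR(500)
)
DISTRIBUTE BY HASH (TSNO);

-- Distributing on a column that DataStage actually supplies would avoid
-- the partitioner error, e.g.:
-- DISTRIBUTE BY HASH (TS)
```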
-------------------------------------
http://it.toolbox.com/blogs/bi-aj
my blog on delivering business intelligence using agile principles