Oracle Upsert errors

Posted: Tue Jun 10, 2008 7:51 am
by dsleo
Hi,

I have a strange problem with a job design.
The job has two ORA stages. The source ORA stage executes a SELECT on the source table and pulls data for a couple of columns. The target ORA stage executes an upsert: if seq_no (the key column) exists, execute an update; otherwise execute an insert.
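
For reference, the upsert write method amounts to an update-then-insert pair along the lines of the sketch below. This is illustrative only; the table name and the bind-style column references are assumptions, not the stage's exact generated SQL.

Code:

-- Update the row if the key already exists (target_table is a placeholder name)
UPDATE target_table
SET    colA = :colA,
       colB = :colB,
       colC = :colC
WHERE  seq_no = :seq_no;

-- Insert the row if the update matched nothing
INSERT INTO target_table (seq_no, colA, colB, colC)
VALUES (:seq_no, :colA, :colB, :colC);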

The problem: when the target table is empty, the insert path is taken and the records pulled from the source table are inserted into the target, with no warning or error message on the job, and the row counts are correct. But the data is inconsistent: for some records, seemingly at random, NULLs are inserted into the target table instead of the column values.

Is this due to partitioning or a flaw in the job design?

To make this clearer, let me explain with an example:

Code:

Source records
----------------------------------------
seq_no    colA      colB      colC
----------------------------------------
1         abc       def       ghi
2         jkl       mno       pqr
3         aaa       bbb       ccc

Target records (after the job is run)
----------------------------------------
seq_no    colA      colB      colC
----------------------------------------
1         abc       def       ghi
2         <NULL>    mno       pqr
3         aaa       <NULL>    <NULL>

Can anyone suggest what to change in this design to make it work correctly?

thanks in advance

Posted: Tue Jun 10, 2008 8:23 am
by OddJob
Does the job literally have two stages in it? ORA --> ORA?

Just a guess and as something to try, add the following environment variable to your job:
APT_DISABLE_COMBINATION=True

Posted: Tue Jun 10, 2008 8:52 am
by dsleo
Yes, the job has only two stages: ORA -> ORA.

I will certainly try that environment variable. Since this behaviour happens randomly to random records, I will do a couple of runs to check whether it works.

thanks

Posted: Tue Jun 10, 2008 9:05 am
by dsleo
Unfortunately, adding that environment variable didn't help; the first two test runs still showed the same behaviour.

Thanks

Posted: Tue Jun 10, 2008 10:55 am
by ArndW
Can you make a copy of the job and the table, make the column non-nullable, then add a reject link on the upsert and see if you get more information explaining this behaviour?
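
If it helps, making the test column mandatory on the copied table could look like the statement below; the copy-table name is a placeholder, and colB is just one candidate column from your example.

Code:

-- Make colB mandatory on the copied target table (table name is a placeholder)
ALTER TABLE target_copy MODIFY (colB NOT NULL);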

Posted: Tue Jun 10, 2008 11:39 am
by dsleo
ArndW,
I did as you suggested: I set one of the columns (colB) to not-nullable and ran the test jobs. Nothing was captured in the rejects, and colB was populated correctly for all records.
However, I still have inconsistent data in the other columns, which are still nullable (colA and colC).

But my question is this: what about columns that are genuinely nullable, where the source legitimately contains NULLs as valid values?

But I'm glad we are making some progress :). Thank you.

Posted: Wed Jun 11, 2008 2:02 am
by ArndW
I expected rejects, not correct records... What about putting a Transformer stage between source and target, with a second output link to a Peek stage whose constraint sends only records where a column value is null? That way you could see whether the problem is happening on the read or on the write.
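
As a complementary check outside the job, direct queries against the source and target can show on which side the NULLs first appear; the table names below are placeholders for your actual source and target tables.

Code:

-- Confirm the source rows themselves contain no NULLs (source_table is a placeholder name)
SELECT seq_no, colA, colB, colC
FROM   source_table
WHERE  colA IS NULL OR colB IS NULL OR colC IS NULL;

-- Run the same check against the target table after the job completes
SELECT seq_no, colA, colB, colC
FROM   target_table
WHERE  colA IS NULL OR colB IS NULL OR colC IS NULL;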