Update into Oracle not working

Post questions here relating to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

ShaneMuir
Premium Member
Posts: 508
Joined: Tue Jun 15, 2004 5:00 am
Location: London

Update into Oracle not working

Post by ShaneMuir »

Hi all

Not sure if anyone can help, but just in case somebody has had a similar experience.

My issue is that the update statement does not seem to be working when loading into an Oracle DB using the ORAOCI8 stage. The perplexing thing is that I have other jobs which do exactly the same thing, and they work fine.

The design of the job is

Code:

                 ---> OracleDB_insert
               /
Seq --->  Tfr <  
           |   \
           |     ---> OracleDB_update
          Hash
I have run tests to check whether the transformer is passing the right information along each stream (by writing the fields to a sequential file), and the information passed is as expected.

The update SQL is as follows:

Code:

UPDATE ODS_CONTROLLED_OBJECT_RESTRICT 
SET RESTRICTION_EXPIRY_DATE=TO_DATE(:4, 'YYYY-MM-DD HH24:MI:SS'),
    RESTRICTION_REASON_CODE=:5,
    ODS_JOB_ID=:6,
    ODS_UPDATE_DATE_ID=:7,
    ODS_UPDATE_TIME_ID=:8 
WHERE CONTROLLED_OBJECT_ID=:1 
  AND RESTRICTION_TYPE_CODE=:2 
  AND RESTRICTION_EFFECTIVE_DATE=TO_DATE(:3, 'YYYY-MM-DD HH24:MI:SS')
According to the data flow, a record is passed along the update stream, but when I check the DB the update has not occurred. Nor is there any error in the job log to say that the row was rejected.
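For what it's worth, one sanity check is to run the same WHERE criteria as a standalone SELECT with the literal values that went down the update link - if it returns no row, the update will silently do nothing. (Key values below are made up for illustration.)

Code:

-- Illustrative key values only - substitute what actually went down the link
SELECT *
  FROM ODS_CONTROLLED_OBJECT_RESTRICT
 WHERE CONTROLLED_OBJECT_ID = 12345
   AND RESTRICTION_TYPE_CODE = 'XX'
   AND RESTRICTION_EFFECTIVE_DATE = TO_DATE('2005-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS');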

Other information about the settings in the ORAOCI8 stages:
  • Insert:
    Array size = 100
    Transaction size = 0
    Update action = Insert rows without clearing
    Transaction Isolation = Read Committed
    SQL = generated
  • Update:
    Array size = 1
    Transaction size = 0
    Update action = Update existing rows only
    Transaction Isolation = Read Committed
    SQL = generated
I have also taken the update query and run it in TOAD, where it updates the record correctly.

Does anybody out there have any ideas about what I may have missed?

Thanks in advance.
Shane
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Keep in mind the fact that Oracle will happily update nothing. In other words, an update will only generate a reject event if the update itself creates one - trying to set a required field to null, creating a unique constraint violation, RI issues, etc. The mere fact that it updated zero records in spite of your best intentions doesn't faze it a bit. :wink:
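To see that behaviour in action (table from the job above, key value made up):

Code:

-- No row has this ID, yet Oracle reports success rather than an error
UPDATE ODS_CONTROLLED_OBJECT_RESTRICT
   SET RESTRICTION_REASON_CODE = 'XX'
 WHERE CONTROLLED_OBJECT_ID = -1;

-- 0 rows updated.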

First question would be: are you updating records you've just inserted, or pre-existing records in the database? I'm guessing you're not updating just-inserted records, or you'd be feeding the inserts back into the hashed file.

I would guess that your update key fields don't have exactly what you think they do in them. All it takes is for one to be wrong - an extra space at the end of a field or a pair accidentally swapped, for instance - for the SQL to update nothing. Running the update query in TOAD just means it is syntactically correct, and I don't see how that could be an issue unless for some reason you were using your own SQL there.
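If you want to hunt for that sort of thing, here are a couple of sketched checks (column names from your update statement, key value made up):

Code:

-- Show the exact bytes Oracle has stored for a suspect key
SELECT DUMP(CONTROLLED_OBJECT_ID), DUMP(RESTRICTION_TYPE_CODE)
  FROM ODS_CONTROLLED_OBJECT_RESTRICT
 WHERE CONTROLLED_OBJECT_ID = 12345;

-- Count rows whose key carries leading or trailing whitespace
SELECT COUNT(*)
  FROM ODS_CONTROLLED_OBJECT_RESTRICT
 WHERE RESTRICTION_TYPE_CODE <> TRIM(RESTRICTION_TYPE_CODE);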

I'd suggest taking another hard look at the fields and the associated data you send down the update link; make no assumptions about their correctness, and see again if you can spot the problem.
-craig

"You can never have too many knives" -- Logan Nine Fingers
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

Update DML doesn't require a row to be present for the statement to succeed. If the WHERE criteria have any trim, date, or scale issue and don't match a corresponding row, you won't see that the row wasn't found.
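For instance, a hidden time component is enough to defeat an exact date match (dates below are illustrative):

Code:

-- Exact match finds nothing if the stored value is 2005-10-01 09:15:00
SELECT COUNT(*)
  FROM ODS_CONTROLLED_OBJECT_RESTRICT
 WHERE RESTRICTION_EFFECTIVE_DATE = TO_DATE('2005-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS');

-- Comparing on the truncated date shows whether a time component is the culprit
SELECT COUNT(*)
  FROM ODS_CONTROLLED_OBJECT_RESTRICT
 WHERE TRUNC(RESTRICTION_EFFECTIVE_DATE) = TO_DATE('2005-10-01', 'YYYY-MM-DD');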
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

Craig, you're too fast. I turned my head to catch a replay on MNF and didn't get my answer in fast enough.
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

That's ok - I guess that would be the short version of what I said. Shane can pick his poison. :lol:
-craig

"You can never have too many knives" -- Logan Nine Fingers
ShaneMuir
Premium Member
Posts: 508
Joined: Tue Jun 15, 2004 5:00 am
Location: London

Post by ShaneMuir »

chulett wrote: First question would be: are you updating records you've just inserted, or pre-existing records in the database? I'm guessing you're not updating just-inserted records, or you'd be feeding the inserts back into the hashed file.
Actually, I am doing an update on just-inserted records, and I am feeding the inserts back into the hashed file - I left that part out, sorry. Just as an aside, the hashed file shows the record as it should appear in the database.

My original thought was that it was some sort of mismatch on the date field, but this process works fine in other jobs.
Shane
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Ok... then let me know what happens when you change your Insert Array Size to 1.
-craig

"You can never have too many knives" -- Logan Nine Fingers
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

And disable read and write caching, and turn OFF row buffering or inter-process. If necessary, deselect 'Use Project Defaults' to force no buffering.
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
ShaneMuir
Premium Member
Posts: 508
Joined: Tue Jun 15, 2004 5:00 am
Location: London

Post by ShaneMuir »

chulett wrote:Ok... then let me know what happens when you change your Insert Array Size to 1.
OMG it works. Actually, if I set the update array size to 100 it works also.
So it seems that the array sizes have to be the same for it to work?
Shane
ShaneMuir
Premium Member
Posts: 508
Joined: Tue Jun 15, 2004 5:00 am
Location: London

Post by ShaneMuir »

kcbland wrote:And disable read and write caching, as well as turn OFF row buffering or inter-process. If necessary, deselect Use Project defaults to force no buffering.
Unfortunately there is also an earlier partitioning stage that requires inter-process to be turned on; otherwise all the sequential files contain only zeros.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

ShaneMuir wrote:OMG it works. Actually, if I set the update array size to 100 it works also. So it seems that the array sizes have to be the same for it to work?
You sound surprised. :lol:

No, they don't need to be the same per se. However, with your previous settings the sending of the Inserts to the database was being deferred until it had 100 to do, while each Update went immediately. If the Update for an Insert went before the Insert itself, it 'failed' silently. So it's all about the timing of inserts versus updates and how they fall within that Insert-to-Update array size ratio.

Setting them both to 1 made sure they all happened as they happened. Setting both to 100 worked out the same: you got 100 inserts before you got your first 100 updates. As long as the Insert array size is equal to or smaller than the Update array size, you should be good.
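A hypothetical trace makes the timing concrete. Suppose rows arrive alternating insert then update of the same key, with Insert array size 100 and Update array size 1:

Code:

row 1    insert(key=1)   -> buffered (stage waits for 100 inserts)
row 2    update(key=1)   -> sent immediately -> matches 0 rows, no error
row 3    insert(key=2)   -> buffered
row 4    update(key=2)   -> sent immediately -> matches 0 rows, no error
...
row 199  insert(key=100) -> buffer full -> all 100 inserts finally sent
         (every update so far already ran against rows that weren't there)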

I'll bet your other jobs, if they have a similar mismatch in Array Sizes, have the same issue - it just may not have been as readily apparent... yet. :shock: :wink:
-craig

"You can never have too many knives" -- Logan Nine Fingers
ShaneMuir
Premium Member
Posts: 508
Joined: Tue Jun 15, 2004 5:00 am
Location: London

Post by ShaneMuir »

chulett wrote: You sound surprised. :lol:
I am constantly surprised :)
chulett wrote: No, they don't need to be the same per se. However, with your previous settings the sending of the Inserts to the database was being deferred until it had 100 to do, while each Update went immediately. If the Update for an Insert went before the Insert itself, it 'failed' silently. So it's all about the timing of inserts versus updates and how they fall within that Insert-to-Update array size ratio.
I often wondered about that - now I know.
chulett wrote: I'll bet your other jobs, if they have a similar mismatch in Array Sizes, have the same issue - it just may not have been as readily apparent... yet. :shock: :wink:
I checked my other jobs and lo and behold they had the same array sizes in each case. :shock: :roll:

Thanks for all your help Craig. You too Ken!
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

Unless you are referencing and writing to the same hashed file in ONE Transformer stage, buffering of any sort between the reference and write will cause inconsistencies in the result.
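In picture form (same notation as the job design above), a sketch of the only shape where the row you just wrote is guaranteed to be visible to the next lookup:

Code:

            Hash
            ^  |
      write |  | reference
            |  v
Seq --->   Tfr  ---> Target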
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
ShaneMuir
Premium Member
Posts: 508
Joined: Tue Jun 15, 2004 5:00 am
Location: London

Post by ShaneMuir »

kcbland wrote:Unless you are referencing and writing to the same hashed file in ONE Transformer stage, buffering of any sort between the reference and write will cause inconsistencies in the result.
Thanks for that information, Ken. Luckily the one transformer is both writing to and referencing the hashed file this time. But that is good to know for the future.
Shane