Hi,
We have a requirement where we need to clean the table prior to insert.
For example, say the source has 100 records.
The idea is, if for some reason the job fails/aborts/hangs (maybe due to a DB problem or any other reason), when we re-run the job it should clear the previously loaded records (if any) and then insert the 100 records.
I tried using two Oracle Enterprise stages (Oracle 10g), one for the Delete and the other for the Insert, with the Delete link from the Transformer ordered as the first one to be executed.
It works fine for a few hundred records, but once we have 1,000 records or more it hangs.
I am executing them sequentially, not in parallel.
Thanks,
NV.
Oracle - Delete & Then Insert
What specifically is happening? I would guess it is the performance of the delete, but you are talking about 1,000 records, which is really low in volume.
A delete is of course slow, as it works row by row. If you have audit columns, I mean a Create Date and a Modified Date in all your target tables, then you can delete based on date so that you don't have to match on any other keys.
People also have tables partitioned on date or month/year, depending on their business. They can truncate the recent partition and load.
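In Oracle, that advice could look something like the following sketch; the table, column, and partition names here are made up for illustration:

```sql
-- Option 1: delete by audit date, no key matching needed.
-- Clears only rows written by today's (failed) load.
DELETE FROM target_table
 WHERE create_date >= TRUNC(SYSDATE);

-- Option 2: if the table is range-partitioned by date,
-- truncating the affected partition avoids row-by-row delete entirely.
ALTER TABLE target_table TRUNCATE PARTITION p_current;
```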
Provide more info
Avishek Mukherjee
Data Integration Architect
Chicago, IL, USA.
Try using a single stage for your two Oracle links. This is a valid solution for a Server job, and I'm wondering if it will solve the same transactional issues in a PX job as well.
If not, you should get your DBA involved and have them trace what is going on during the run of the job.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Yes, even I had implemented this in a Server job.
Here I am using a single Transformer stage with two output links, one for the Delete and the other for the Insert, executed in that order. But no luck.
Anyway, I have now split my job into two parts: one will Delete and then the other will Insert using the Data Set created by the delete job. This ensures two separate transactions and hence no chance of a deadlock.
This is working fine now.
Thanks All.
nvalia wrote: Yes, even I had implemented this in a Server job. Here I am using a single transformer stage having 2 output links, one for delete & other for Insert and executed in that order. But No Luck.

Actually, I meant using one Oracle stage connected to both links from your Transformer. This allows them to run in the same 'transaction' and generally avoids issues like the one you seem to be having. When you say 'hangs', that could also mean 'runs really slow', especially if you are doing singleton deletes on every row. We typically constrain the delete link to only fire on the first row and to do all the deleting it needs to do at once right then, if at all possible.
Of course, breaking the job up into two pieces is a perfectly valid solution as well.
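As a rough sketch of the single-transaction idea (object names here are hypothetical, not from the job above), the point is that the delete and the insert share one session and one commit:

```sql
-- Illustrative only. Because both statements run in the same
-- session/transaction, the insert cannot deadlock against locks
-- held by the delete, and a failure before COMMIT rolls back both.
DELETE FROM target_table
 WHERE create_date >= TRUNC(SYSDATE);

INSERT INTO target_table (id, val, create_date)
SELECT id, val, SYSDATE
  FROM staging_table;

COMMIT;  -- one commit covers the delete and the insert together
```

Splitting the work into two jobs, as you did, achieves the opposite but equally safe arrangement: two fully separate transactions that can never contend with each other.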
-craig
"You can never have too many knives" -- Logan Nine Fingers