I'm facing a mind-boggling situation and I'd appreciate some help resolving it.
Enterprise Edition 7.5.0.1
Oracle 8.1.7
(both on the same machine, SunOS)
I have a table of 5 columns with over 200 million rows.
I built both a server job and a PX job (using the Oracle Enterprise stage in PX) to count it; both report the same 200+ million row count.
The table is split into 29 partitions.
I designed 2 Px jobs:
1. Oracle Enterprise stage > modify > seq file, with a peek to the log for rejects.
2. The same job, but using a DRS stage instead of Oracle Enterprise.
Job 1 writes almost 130 million rows to the seq file (around a 7 GB file).
Job 2 writes all 200+ million rows to the seq file, as expected (around a 12 GB file).
Both jobs finished with no warnings and no rejects.
Both write to the same file system.
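One way to localize a silent shortfall like this might be to count rows per partition (e.g. `SELECT COUNT(*) FROM my_table PARTITION (part_name)` once per partition) and compare the sum with what each job wrote; if the Oracle Enterprise stage is skipping whole partitions, the shortfall should match one or more partition counts exactly. A minimal sketch of that bookkeeping, where every name and count is an invented placeholder, not real data from my table:

```python
# Hypothetical per-partition row counts -- in practice these would come
# from running SELECT COUNT(*) ... PARTITION (...) once per partition
# (the real table has 29 partitions; three placeholders shown here).
partition_counts = {
    "P01": 8_000_000,
    "P02": 7_500_000,
    "P03": 6_000_000,
}

# Placeholder for the row count the PX job reported writing.
rows_written_by_job = 13_500_000

total = sum(partition_counts.values())
shortfall = total - rows_written_by_job

print(f"table total : {total:,}")
print(f"job wrote   : {rows_written_by_job:,}")
print(f"shortfall   : {shortfall:,}")

# If the shortfall equals the count of one (or a sum of several)
# partitions, that points at partitions being skipped entirely rather
# than rows being dropped at random.
skipped_candidates = [
    name for name, count in partition_counts.items() if count == shortfall
]
print("single-partition matches:", skipped_candidates)
```

With the placeholder numbers above, the 8,000,000-row shortfall matches partition P01 exactly, which is the kind of clean signature that would suggest a whole partition is being missed.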
Any info I forgot to supply, I'll post on request.
Unloading it with DRS at 4,500 rows/second, taking over 12 hours, is not an option.
Naturally, a case with support is already open, and a solution, when found, will be posted here
(simply because not resolving this is not an option).
Any ideas, people? :?:
(was hoping for a :P but got a :( )
Thanks in advance,