ArndW wrote: How about partitioning your database according to this key? Then a delete operation could involve just removing one or more database partitions. ...
I have to delete millions of records from a database every quarter (a purge operation). The job reads a file to get the unique IDs of the records that need to be deleted from the Oracle database table.
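For what it is worth, a rough Oracle sketch of the partition-drop approach ArndW describes, assuming the table can be range-partitioned by a quarter column; the table, column, and partition names below are made up for illustration only:

-- Hypothetical table partitioned by the quarter the row was loaded in.
CREATE TABLE purge_demo (
    rec_id    NUMBER        NOT NULL,
    load_qtr  DATE          NOT NULL,
    payload   VARCHAR2(100)
)
PARTITION BY RANGE (load_qtr) (
    PARTITION p_2008_q1 VALUES LESS THAN (DATE '2008-04-01'),
    PARTITION p_2008_q2 VALUES LESS THAN (DATE '2008-07-01'),
    PARTITION p_2008_q3 VALUES LESS THAN (DATE '2008-10-01')
);

-- The quarterly purge then becomes a dictionary operation instead of millions of DELETEs.
ALTER TABLE purge_demo DROP PARTITION p_2008_q1 UPDATE GLOBAL INDEXES;

Whether this applies depends on whether the IDs in your purge file actually line up with a date or quarter column you could partition on; if they do not, a row-by-row delete keyed on the ID is still needed.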
ArndW wrote: I bet your PX job has the datatype "VarChar" for this column. Declare the type as numeric in your job and perform an explicit conversion in PX. ...
Do you have the same issue if you run on a single node? For upsert you must have a unique index on the table, define that unique column(s) as the key in the table metadata, and hash partition on that key. I have already specified the unique index column as a key in DataStage; when I tried Hash partitioning, it...
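For context, what the upsert amounts to on the database side is logically a MERGE keyed on that unique column; the stage actually generates its own update/insert pair, but the sketch below (with hypothetical table and column names) shows why a unique key is needed to match on:

-- Hypothetical target table TARGET_CUST with unique key CUST_ID.
MERGE INTO target_cust t
USING (SELECT :cust_id AS cust_id, :cust_name AS cust_name FROM dual) s
   ON (t.cust_id = s.cust_id)
 WHEN MATCHED     THEN UPDATE SET t.cust_name = s.cust_name
 WHEN NOT MATCHED THEN INSERT (cust_id, cust_name) VALUES (s.cust_id, s.cust_name);

Hash partitioning the input link on the same key keeps all rows for a given key value on one node, so two nodes never try to upsert the same key at the same time.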
Hi, I have created a job in DataStage to load data into a target using the Oracle Enterprise stage (the source table has approx. 200 million records). The job is throwing the error message below: ORA-01653: unable to extend table <schema_name.table_name>BK by 128 in tablespace D_UTMDM_1M_01 SQL*Loader...
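ORA-01653 is a space problem in the target tablespace rather than a job design problem: Oracle could not allocate the next extent for the table. A sketch of how a DBA might confirm and fix it; the datafile path and sizes here are assumptions, not values from your system:

-- How much free space is left in the tablespace named in the error?
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS free_mb
  FROM dba_free_space
 WHERE tablespace_name = 'D_UTMDM_1M_01'
 GROUP BY tablespace_name;

-- Add space, for example with another datafile that can grow as the load proceeds.
ALTER TABLESPACE D_UTMDM_1M_01
  ADD DATAFILE '/u01/oradata/utmdm/d_utmdm_1m_02.dbf' SIZE 4096M
  AUTOEXTEND ON NEXT 512M;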
Are you using an Oracle Enterprise stage? Are you capturing the rejected rows using a reject link? If so, push these rows into some structure (maybe a text file) that you can review with a hex editor ... Yes, Oracle Enterprise stage; I have a reject link on the Transformer, not on the Oracle Enterprise stage...
What is the commit size? Do a hash partition on the unique key column. I have introduced the environment variable $APT_ORAUPSERT_COMMIT_ROW_INTERVAL and set the commit interval to 500; with this value the job initially worked fine, but when I increase the number of input records (up to 10,000) it again gets into ...