Oracle Enterprise Stage silently dropping records.
Posted: Wed Feb 10, 2010 10:35 pm
Hi All,
I have recently discovered an issue with a simple parallel job I am working on.
The job design is as follows:
Oracle Enterprise --> Parallel Transformer --> Oracle Enterprise (Write Method = Load, Write Mode = Truncate)
My source Oracle Enterprise stage is sourcing several million records. Of these, I noticed that two records were not being loaded to the output Oracle table, even though they meet all the required criteria to be loaded (i.e. they don't violate any constraints or business rules). Upon further inspection I found that the two records that did not load both contained non-ASCII characters in one of the VarChar columns. It appears that the output Oracle Enterprise stage is rejecting them quietly (i.e. not writing any warnings to the job log) when using Write Method = "Load".
To work out why the job was behaving this way I tried the following:
TEST 1:
*******
I made a copy of the original job and changed the output Oracle Enterprise stage to use Write Method = "Upsert" and Write Mode = "Insert then Update". When I run this version of the job I receive the fatal error "Unable to insert a record into the table".
TEST 2:
*******
I re-created this job as a server job (OCI --> Transformer --> OCI). This job runs fine (no warnings or fatal errors) and loads the records in question to the target table.
QUESTION 1:
***********
Has anyone else noticed this sort of behaviour when using the Oracle Enterprise stage? From what I can gather, it may not be a problem with the Oracle Enterprise stage itself (when Write Method = "Load"), but rather with SQL*Loader. I'm guessing that when the Oracle Enterprise stage invokes SQL*Loader to load the records, SQL*Loader rejects the rows but does not return an error code back to DataStage?
QUESTION 2:
***********
I have read several posts about handling special characters on dsxchange. It appears you can remove special characters from a string using DataStage functions, as long as you know in advance which characters you are looking for. I have resolved this problem by writing a C++ function to remove all non-ASCII characters from a string, and then creating a parallel routine to call this C++ object. Is this the recommended approach to this sort of generic string problem, or have people resolved it using core DataStage functionality?