Hi all,
In my job I am using DB2 as the source and Oracle as the target. The job truncates the target table first and then loads the data. When I run this job I get an sqlldr error. Here are some entries from the DataStage log:
Message: stg_CUSTOMERS: When checking operator: The -index rebuild option has been included;
since the user has not set the environment variable APT_ORACLE_LOAD_OPTIONS,
Orchestrate by default sets the options DIRECT and PARALLEL to TRUE, and
option SKIP_INDEX_MAINTENANCE to YES.
Message: stg_CUSTOMERS,3: SQL*Loader-926: OCI error while uldlfca:OCIDirPathColArrayLoadStream for table MDSTAGE.CUSTOMERS
Message: stg_CUSTOMERS,3: SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
Message: stg_CUSTOMERS,0: SQL*Loader-925: Error while uldlgs: OCIStmtExecute (ptc_hp)
Message: stg_CUSTOMERS,0: ORA-03114: not connected to ORACLE
This is the kind of sqlldr error I keep getting.
Note: this job ran successfully multiple times before.
Please suggest what the problem might be and how to solve it.
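Since the first log message says the defaults were applied only because APT_ORACLE_LOAD_OPTIONS was never set, one thing worth trying is overriding it to disable the direct/parallel path. This is only a sketch; whether conventional path is acceptable for your volumes, and where you set the variable (project or job level in the Administrator), depends on your setup:

```shell
# Hypothetical override (export before the job runs, or set it as a
# DataStage job/project environment variable): turn off the default
# direct and parallel path so sqlldr falls back to a conventional load.
export APT_ORACLE_LOAD_OPTIONS='OPTIONS(DIRECT=FALSE,PARALLEL=FALSE)'
echo "$APT_ORACLE_LOAD_OPTIONS"
```

A conventional-path load is slower but avoids the direct-path OCI stream calls (`OCIDirPathColArrayLoadStream`) that are failing in the log above, which can help isolate whether direct path itself is the problem.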
SQL Loader problem
You are right, chulett. The DBA said the same thing.
There were actually 18 million records in the source, and we are also using a complex join. When we decrease the number of records, the job runs fine. This is a test environment, and I think it is not capable of handling such a huge volume of records.
Any comments?
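If the test box really cannot absorb 18 million rows in one pass, one workaround consistent with your observation (smaller volumes succeed) is to load in chunks. A rough sketch with assumed file names and row counts; the actual sqlldr call is left as a comment because the connect string and control file are site-specific:

```shell
# Sketch: split the extracted data into smaller files and load them one
# at a time, so each sqlldr run stays within what the test box can handle.
seq 1 100 > customers.dat            # stand-in for the real 18M-row extract
split -l 25 customers.dat chunk_     # 25 rows per chunk -> 4 chunk files
for f in chunk_*; do
    # The real call would be something like (credentials and control
    # file are assumptions):
    # sqlldr userid=mdstage/pw@testdb control=customers.ctl data="$f" direct=false
    echo "loading $f"
done
```

Chunking also narrows the diagnosis: if every chunk loads cleanly, the failure is a resource limit on the test server rather than bad data, which matches what your DBA suggested.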