Sqlldr error ...

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

jseclen
Participant
Posts: 133
Joined: Wed Mar 05, 2003 4:19 pm
Location: Lima - Peru. Sudamerica
Contact:

Sqlldr error ...

Post by jseclen »

Hi forum!!

My job runs on an AIX server with DS EE v7.5.2. I am trying to insert records into Oracle using the Oracle Enterprise stage. In the Sequential File stage I set the attribute Read First Rows = 150000, and the job runs and inserts without problems.

I changed the value to 350000 and my job aborted 30 minutes later. The error shown is the following:

"Oracle_Enterprise_216,0: The call to sqlldr failed; the return code = 589,833; please see the loader logfile: /stag/parallel/node1/Scratch/ora.618582.967574.0.log for details."

This log file is empty. I searched the forum for this problem and applied the environment variable changes suggested there (the Oracle variables had been commented out in the dsenv file), but the error still appears when the value is increased ...

Does anybody have an idea about this problem?

Thanks for your help.
Regards,

Miguel Seclén
Lima - Peru
DS_FocusGroup
Premium Member
Posts: 197
Joined: Sun Jul 15, 2007 11:45 pm
Location: Prague

Post by DS_FocusGroup »

Make sure your Oracle paths are set up properly. DataStage is not able to find the SQL*Loader executable.
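On AIX that usually means the Oracle entries in dsenv must be uncommented and exported. A minimal sketch, assuming a client under /oracle/product (the path below is an example only; use your own ORACLE_HOME, and lib32 instead of lib if your client links 32-bit):

Code: Select all

# example dsenv entries - substitute your site's actual Oracle client path
ORACLE_HOME=/oracle/product/10.2.0; export ORACLE_HOME
PATH=$PATH:$ORACLE_HOME/bin; export PATH
LIBPATH=$LIBPATH:$ORACLE_HOME/lib; export LIBPATH   # LIBPATH on AIX; LD_LIBRARY_PATH on Linux/Solaris

Remember the DataStage server has to be restarted after changing dsenv for the new settings to take effect.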
jseclen
Participant
Posts: 133
Joined: Wed Mar 05, 2003 4:19 pm
Location: Lima - Peru. Sudamerica
Contact:

Post by jseclen »

Hi DS_FocusGroup,

The sqlldr executable is found, because the previous load inserted 150000 records. It is only when this number is increased that the process aborts ...
Regards,

Miguel Seclén
Lima - Peru
priyadarshikunal
Premium Member
Posts: 1735
Joined: Thu Mar 01, 2007 5:44 am
Location: Troy, MI

Post by priyadarshikunal »

jseclen wrote:Hi DS_FocusGroup,

The sqlldr executable is found, because the previous load inserted 150000 records. It is only when this number is increased that the process aborts ...

Have you checked whether your log file is present or not?

Check your configuration file and take the scratch space path from there.

Can you find the log file there?
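Something along these lines, for example (the scratch path below is just the one from your error message; take the real one from your own config file):

Code: Select all

# the configuration file the job ran with is named in APT_CONFIG_FILE
grep -i scratchdisk $APT_CONFIG_FILE

# then look for the loader log in the scratch directory it lists
ls -lt /stag/parallel/node1/Scratch/ora.*.log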

Check whether an Oracle table constraint has been violated.

Just by increasing the number of records, you should not get the SQL*Loader error.

And of course check the tablespace for free space with a query like:


Code: Select all

select sum(bytes)/1024/1024/1024 as free_gb, tablespace_name
from dba_free_space
where tablespace_name = '<tablespace>'
group by tablespace_name;

 
Priyadarshi Kunal

Genius may have its limitations, but stupidity is not thus handicapped.
seell
Participant
Posts: 22
Joined: Fri Nov 07, 2003 9:46 pm

Post by seell »

Hi,
I hit a similar problem; however, I am using the newer version 8.0.1 running on Windows 2003 Server. My sqlldr return code is 7. The input is 9,999,999 records and it failed at about 1.8 million records (trial runs with 800,000 and 1.5 million records were OK).

I have tried using standalone sqlldr, and all the data (9,999,999 records) could be loaded into the same target table. Does this not mean that it is not a space issue on Oracle?
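(The standalone run was just the usual command line, something like the following; the user id and the control, data and log file names are placeholders:)

Code: Select all

sqlldr userid=scott/tiger control=mytable.ctl data=mytable.dat log=mytable.log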
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Have you talked to your DBA about what might be going on?
-craig

"You can never have too many knives" -- Logan Nine Fingers
seell
Participant
Posts: 22
Joined: Fri Nov 07, 2003 9:46 pm

Post by seell »

My DBA also checked the free space as mentioned in the earlier responses, but he thinks the problem is not with the temp tablespace.