2 GB limit on Oracle Bulk Load Stage?
Posted: Wed May 05, 2010 1:01 pm
We are currently running into an issue when the .dat file created by the Oracle Bulk Load Stage exceeds 2 GB.
The design of the job is straightforward:
OCI -----> Transformer ------> Oracle Bulk Load
Error from the log:
WSLdFactUsage_P1..Trf_Sur_Key: Error writing to the file: /data/etldata/Control_Data_Files/DW_Files/DW.F_USAGE_P1.ctl
Attempting to Cleanup after ABORT was raised in stage WSLdFactUsage_P1..Trf_Sur_Key
WSLdFactUsage_P1..Trf_Sur_Key: Error writing to the file: /data/etldata/Control_Data_Files/DW_Files/DW.F_USAGE_P1.dat
Abnormal termination of stage WSLdFactUsage_P1..Trf_Device_Data detected
Job WSLdFactUsage_P1 aborted.
Below is the output of ulimit -a (run from the before-job subroutine):
WSLdFactUsage_P1..BeforeJob (ExecSH): Executed command: ulimit -a
*** Output from command was: ***
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 270336
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 270336
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
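Since ulimit reports file size as unlimited, the 2 GB ceiling likely comes from the stage itself (32-bit file offsets) rather than from the OS. A quick sketch of a check that the target filesystem and shell can write past 2 GB on their own (directory name is an example; substitute the actual load path):

```shell
# Write one 1 MB block at an offset past 3 GB; if the filesystem or shell
# were the culprit, this would fail with "File too large" at the 2 GB mark.
dir=$(mktemp -d)                          # use the real load directory in practice
dd if=/dev/zero of="$dir/lfs_test.dat" bs=1M seek=3072 count=1 2>/dev/null
stat -c %s "$dir/lfs_test.dat"            # sparse file; logical size ~3 GB
rm -rf "$dir"
```

If this succeeds, the limit is inside the process doing the write, which points at the stage binary, not the environment.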
As a workaround we are splitting the records into multiple .dat files.
Has anybody run into this limitation? Is there a solution or patch?
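For reference, the same splitting workaround can be done at the OS level on an already-written file, keeping each chunk under 2 GB without breaking records mid-line (file names here are examples; GNU split is assumed):

```shell
# Split on line boundaries so no record is cut in half; each piece stays
# below the chosen size and can be referenced by its own control file.
split --line-bytes=1900m DW.F_USAGE_P1.dat DW.F_USAGE_P1.part.
ls DW.F_USAGE_P1.part.*
```

Concatenating the pieces back together reproduces the original file byte for byte, so the split is lossless.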