
Linux Red Hat v5 64 bit

Posted: Mon Nov 02, 2009 2:38 pm
by chanaka
Hi Guys,

We have successfully installed and configured DataStage 8.1 on RHEL 5.0 64-bit. Initially we had a compiler issue for PX jobs that use the Transformer stage.

It gave the error below during compilation of PX jobs that use the Transformer:
<main_program> Error when checking composite operator: Output from subprocess:
/bin/ld: skipping incompatible /opt/IBM/InformationServer/Server/PXEngine/lib/liborchi686.so when searching for -lorchi686
/usr/bin/ld: cannot find -lorchi686
collect2: ld returned 1 exit status
We then configured the compiler and linker settings as given below, and that worked:
APT_COMPILEOPT=-O -fPIC -Wno-deprecated -c -m32
APT_COMPILER=g++
APT_LINKER=g++
APT_LINKOPT=-shared -m32 -Wl,-Bsymbolic,--allow-shlib-undefined
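
For reference, a quick way to confirm that the 32-bit toolchain and runtime pieces -m32 needs are actually present; this is only a sanity-check sketch, and the package names are assumptions for a standard RHEL 5 install:
# 32-bit devel packages (names assumed for RHEL 5)
rpm -q --queryformat '%{NAME} %{ARCH}\n' glibc-devel libstdc++-devel
# confirm the PX library really is a 32-bit object
file /opt/IBM/InformationServer/Server/PXEngine/lib/liborchi686.so
# trivial check that g++ can build a 32-bit shared object
echo 'int f() { return 0; }' > /tmp/m32test.cpp
g++ -m32 -fPIC -shared /tmp/m32test.cpp -o /tmp/m32test.so && file /tmp/m32test.so
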
Due to a different requirement, I now need to enable a 64-bit compiler and use it to compile the PX jobs that use the Transformer stage.

It would be great if any of you who have done this before could share your input on how to make it work.

Cheers!

Chanaka

Posted: Mon Nov 02, 2009 2:44 pm
by ray.wurlod
DataStage is still a 32-bit application, so you need the 32-bit switch on the compiler options.

Posted: Tue Nov 03, 2009 5:35 pm
by chanaka
Hi Ray,

Is this going to be supported, at least in a future release? My problem is basically this: some of the jobs that do Oracle bulk loads create .dat files larger than 2 GB. In the environment I described above, the job aborts as soon as the .dat file reaches the 32-bit signed integer maximum (2147483647 bytes).

Any suggestions?

Cheers!

Chanaka

Posted: Tue Nov 03, 2009 8:10 pm
by chulett
That's got nothing to do with 32-bit vs. 64-bit; that's just a limit configured in your O/S. You need to ensure that you have LFS ('Large File Support') capabilities and that there's no ulimit getting in your way.
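
As a rough sketch of what to look at (the /data mount point is an assumption; substitute wherever your .dat files are written):
# file size limit for the current shell; 'unlimited' means no ulimit in the way
ulimit -f
# filesystem type of the target mount; ext3 on RHEL 5 handles files well past 2 GB
df -T /data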

Posted: Wed Nov 04, 2009 5:01 pm
by chanaka
Hi Chulett,

Ulimit appears to be okay.
[user@machine ~]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 200703
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 65536
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 200703
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
And the kernel file limits and sysctl settings are given below.
[user@machine ~]# cat /proc/sys/fs/file-max
14417459
[user@machine ~]# cat /proc/sys/fs/file-nr
5610 0 14417459
[user@machine ~]# cat /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536

# Controls the default maxmimum size of a mesage queue
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296


# sem
kernel.sem = 250 128000 32 1024

fs.file-max = 14417459
I am even able to create files larger than 2 GB from the OS itself via shell commands.
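
For instance, something along these lines (the path and size are only illustrative) completes without error and leaves a 3 GB file behind:
# write a 3 GB file as the same user the jobs run under (illustrative path)
dd if=/dev/zero of=/data/test_lfs.dat bs=1M count=3072
ls -l /data/test_lfs.dat
rm /data/test_lfs.dat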

Any thoughts?

Cheers!

Chanaka

Posted: Wed Nov 04, 2009 6:42 pm
by chulett
Post your complete, unedited abort message(s) from when you hit this limit. And you need to run "ulimit -a" from inside a job (via ExecSH), not from the command line, to check properly.
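
In case it helps, this is the sort of thing meant; in the job properties, set the before-job subroutine along these lines (field labels from memory, so treat them as approximate):
Before-job subroutine:  ExecSH
Input value:            ulimit -a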

Posted: Thu Nov 05, 2009 4:28 am
by Sreenivasulu
To add to what Craig has said: the ulimit that counts is the one in effect for the WebSphere/DataStage engine processes, not the one in your login shell. Hence you need to check it with ExecSH to confirm.
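
If the limits reported from inside a job do turn out to be lower than your shell's, one common approach (just a sketch, not an official procedure, and the values are examples only) is to raise them for the user that starts the engine and then restart the engine so dsrpcd inherits the new settings:
# /etc/security/limits.conf entries for the engine owner (example values)
dsadm   soft   nofile   65536
dsadm   hard   nofile   65536
dsadm   soft   fsize    unlimited
dsadm   hard   fsize    unlimited
# then restart the server engine as dsadm so the new limits take effect
cd $DSHOME && . ./dsenv
bin/uv -admin -stop
bin/uv -admin -start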

Regards
Sreeni

Posted: Thu Nov 05, 2009 5:03 pm
by chanaka
Hi Guys,

This is the ulimit output from the job.

F_ACCOUNT_PMR..BeforeJob (ExecSH): Executed command: ulimit -a
*** Output from command was: ***
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 200703
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 200703
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

Posted: Thu Nov 05, 2009 5:06 pm
by chulett
OK, now how about your actual abort message?

Posted: Thu Nov 05, 2009 5:23 pm
by chanaka
Hi Craig,

Here are the detailed error messages:
TGT_F_ACCOUNT,0: Failure during execution of operator logic.
TGT_F_ACCOUNT,0: Input 0 consumed 7555633 records.
TGT_F_ACCOUNT,0: Fatal Error: Fatal: Error writing to the file: /data/data_mig/pmr/F_ACCOUNT.ctl
node_node2: Player 2 terminated unexpectedly.
main_program: APT_PMsectionLeader(2, node2), player 2 - Unexpected exit status 1.
main_program: Step execution finished with status = FAILED.
Permissions for the .ctl file are given below:
-rw-rw-r-- 1 dsadm dstage 1551 Nov 5 14:21 F_ACCOUNT.ctl
Cheers!

Posted: Thu Nov 05, 2009 5:50 pm
by chulett
Odd... your 'control file' certainly won't be anywhere near 2 GB, and that seems to be the file it is having issues with. Have you involved your official support provider yet?

Posted: Thu Nov 05, 2009 5:53 pm
by chanaka
Hi Craig,

We will be raising a PMR today. I will share whatever solution we get, as it is going to help others.

Cheers!

Chanaka

Posted: Thu Nov 05, 2009 11:34 pm
by chulett
Out of curiosity, what user is the job that failed running under? Something other than dsadm? Have you confirmed permissions not just on the file but on the entire path to it?
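
Something like this would show ownership and mode for every directory on the way down to the file (path taken from the error message above):
ls -ld /data /data/data_mig /data/data_mig/pmr
ls -l /data/data_mig/pmr/F_ACCOUNT.ctl
# or, if util-linux's namei is available, the whole chain in one go
namei -m /data/data_mig/pmr/F_ACCOUNT.ctl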

Posted: Fri Nov 06, 2009 12:57 am
by chanaka
Hi Craig,

It was executed as the dsadm user, which has all the permissions on the directory where the file is being written.
drwxrwxrwx 2 root root 4096 Nov 5 14:21 pmr
Cheers!

Posted: Fri Nov 06, 2009 3:27 am
by Sreenivasulu
Giving 777 permissions is quite risky. You should use 755 at least.
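
For example (an ownership change is implied here, since dropping a root-owned directory from 777 to 755 would stop dsadm writing to it at all):
chown dsadm:dstage /data/data_mig/pmr
chmod 755 /data/data_mig/pmr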

Regards
Sreeni