Linux Red Hat v5 64 bit

A forum for discussing DataStage® basics. If you're not sure where your question goes, start here.

Moderators: chulett, rschirm, roy

chanaka
Premium Member
Posts: 96
Joined: Tue Sep 15, 2009 4:06 am
Location: United States

Linux Red Hat v5 64 bit

Post by chanaka »

Hi Guys,

We have successfully installed and configured DS 8.1 on RHEL 5.0 64-bit. Initially we had a compiler issue with PX jobs that use the Transformer stage.

It gave the error below when compiling PX jobs that use the Transformer:
<main_program> Error when checking composite operator: Output from subprocess:
/bin/ld: skipping incompatible /opt/IBM/InformationServer/Server/PXEngine/lib/liborchi686.so when searching for -lorchi686
/usr/bin/ld: cannot find -lorchi686
collect2: ld returned 1 exit status
Then we configured the compiler and linker settings as shown below, and that worked.
APT_COMPILEOPT=-O -fPIC -Wno-deprecated -c -m32
APT_COMPILER=g++
APT_LINKER=g++
APT_LINKOPT=-shared -m32 -Wl,-Bsymbolic,--allow-shlib-undefined
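For reference, a minimal sketch of how these could be set engine-wide in dsenv (an assumption on my part; the same values can also be set per project through the Administrator client, and your install path may differ):

# Hypothetical additions to /opt/IBM/InformationServer/Server/DSEngine/dsenv
APT_COMPILER=g++
APT_COMPILEOPT="-O -fPIC -Wno-deprecated -c -m32"
APT_LINKER=g++
APT_LINKOPT="-shared -m32 -Wl,-Bsymbolic,--allow-shlib-undefined"
export APT_COMPILER APT_COMPILEOPT APT_LINKER APT_LINKOPT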
Due to a different requirement, I now need to enable a 64-bit compiler and use it to compile the PX jobs that use the Transformer stage.

It would be great if any of you who have done this before could share how you made it work.

Cheers!

Chanaka
Last edited by chanaka on Tue Nov 03, 2009 5:37 pm, edited 1 time in total.
Chanaka Wagoda
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

DataStage is still a 32-bit application, so you need the 32-bit switch on the compiler options.
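One way to confirm this for yourself (a hypothetical check, using the library path from the error message above) is to ask the OS about the engine libraries:

file /opt/IBM/InformationServer/Server/PXEngine/lib/liborchi686.so
# a 32-bit engine build reports something like:
# ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), ...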
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chanaka
Premium Member
Posts: 96
Joined: Tue Sep 15, 2009 4:06 am
Location: United States

Post by chanaka »

Hi Ray,

Is this going to be supported in a future release, at least? My problem is basically this: some of the jobs that do Oracle bulk loads create .dat files larger than 2 GB. In the current environment described above, the jobs abort as soon as the file reaches the 32-bit signed integer maximum (a .dat file size of 2147483647 bytes).

Any suggestions?

Cheers!

Chanaka
Chanaka Wagoda
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

That has nothing to do with 32-bit versus 64-bit; that's just a limit configured in your O/S. You need to ensure that you have LFS ('Large File Support') capability and that there's no ulimit getting in your way.
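As a hedged sketch of what to check (the mount point is illustrative, not from this thread):

ulimit -f                     # per-process file size cap; should report 'unlimited'
getconf FILESIZEBITS /data    # 64 means the filesystem supports files past 2 GiB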
-craig

"You can never have too many knives" -- Logan Nine Fingers
chanaka
Premium Member
Posts: 96
Joined: Tue Sep 15, 2009 4:06 am
Location: United States

Post by chanaka »

Hi Chulett,

Ulimit appears to be okay.
[user@machine ~]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 200703
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 65536
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 200703
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
And the kernel file-handle limits and sysctl settings are given below.
[user@machine ~]# cat /proc/sys/fs/file-max
14417459
[user@machine ~]# cat /proc/sys/fs/file-nr
5610 0 14417459
[user@machine ~]# cat /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536

# Controls the default maximum size of a message queue
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296


# sem
kernel.sem = 250 128000 32 1024

fs.file-max = 14417459
I am even able to create larger files from the OS itself using shell commands.
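For illustration, a large-file test along these lines (file name and location are hypothetical):

dd if=/dev/zero of=/tmp/big.test bs=1M count=3000   # ~3 GB, past the 2147483647-byte mark
ls -l /tmp/big.test && rm -f /tmp/big.test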

Any thoughts?

Cheers!

Chanaka
Chanaka Wagoda
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Post your complete, unedited abort message(s) from when you hit this limit. Also, you need to run "ulimit -a" from inside a job (via ExecSH), not from the command line, to check properly.
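For example, in the job properties (this mirrors the BeforeJob output posted later in this thread; the field names are from memory and may differ slightly by version):

Before-job subroutine: ExecSH
Input value:           ulimit -a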
-craig

"You can never have too many knives" -- Logan Nine Fingers
Sreenivasulu
Premium Member
Posts: 892
Joined: Thu Oct 16, 2003 5:18 am

Post by Sreenivasulu »

To add to what Craig has said: the ulimit that matters is the one in effect when the DataStage (WebSphere) engine processes run, which is why you need to check it with ExecSH to confirm.

Regards
Sreeni
chanaka
Premium Member
Posts: 96
Joined: Tue Sep 15, 2009 4:06 am
Location: United States

Post by chanaka »

Hi Guys,

This is the ulimit output from the job.

F_ACCOUNT_PMR..BeforeJob (ExecSH): Executed command: ulimit -a
*** Output from command was: ***
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 200703
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 200703
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Chanaka Wagoda
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

OK, now how about your actual abort message?
-craig

"You can never have too many knives" -- Logan Nine Fingers
chanaka
Premium Member
Posts: 96
Joined: Tue Sep 15, 2009 4:06 am
Location: United States

Post by chanaka »

Hi Craig,

Here are the detailed error messages.
TGT_F_ACCOUNT,0: Failure during execution of operator logic.
TGT_F_ACCOUNT,0: Input 0 consumed 7555633 records.
TGT_F_ACCOUNT,0: Fatal Error: Fatal: Error writing to the file: /data/data_mig/pmr/F_ACCOUNT.ctl
node_node2: Player 2 terminated unexpectedly.
main_program: APT_PMsectionLeader(2, node2), player 2 - Unexpected exit status 1.
main_program: Step execution finished with status = FAILED.
Permissions for the .ctl file are given below.
-rw-rw-r-- 1 dsadm dstage 1551 Nov 5 14:21 F_ACCOUNT.ctl
Cheers!
Chanaka Wagoda
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Odd... your 'control file' certainly won't be anywhere near 2 GB, and that seems to be the file it's having issues with. Have you involved your official support provider yet? :?
-craig

"You can never have too many knives" -- Logan Nine Fingers
chanaka
Premium Member
Posts: 96
Joined: Tue Sep 15, 2009 4:06 am
Location: United States

Post by chanaka »

Hi Craig,

We will be raising a PMR today. I will share whatever solution we get, as it is going to help others.

Cheers!

Chanaka
Chanaka Wagoda
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Out of curiosity, what user is the failed job running under? Something other than dsadm? And have you confirmed permissions not just on the file but on the entire path to it?
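A quick way to check every directory on the path at once (a sketch; the path is taken from the error message above):

namei -m /data/data_mig/pmr/F_ACCOUNT.ctl
# prints the mode bits for each component from / down to the file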
-craig

"You can never have too many knives" -- Logan Nine Fingers
chanaka
Premium Member
Posts: 96
Joined: Tue Sep 15, 2009 4:06 am
Location: United States

Post by chanaka »

Hi Craig,

It was executed as the dsadm user, which has full permissions on the directory where the file is being written.
drwxrwxrwx 2 root root 4096 Nov 5 14:21 pmr
Cheers!
Chanaka Wagoda
Sreenivasulu
Premium Member
Posts: 892
Joined: Thu Oct 16, 2003 5:18 am

Post by Sreenivasulu »

Giving 777 permissions is quite risky.
Use 755 at least, as sketched below.
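For instance (assuming the pmr directory shown earlier in the thread):

chmod 755 /data/data_mig/pmr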

Regards
Sreeni