ulimit -a shows different values for the same ID


shin0066
Premium Member
Posts: 69
Joined: Tue Jun 12, 2007 8:42 am

ulimit -a shows different values for the same ID

Post by shin0066 »

Hi gurus,
We are getting heap size allocation errors. We have 2 TB of scratch space, but jobs still fail when we run large volumes of data.

I read on this message board that I should check the ulimit for the ID I am using to run the jobs.
On the UNIX command line, ulimit -a shows:

Code:

time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        unlimited
memory(kbytes)       unlimited
coredump(blocks)     unlimited
nofiles(descriptors) 2000 
But when I run a routine in DataStage as
Call DSExecute ("UNIX", "ulimit -a", Output, SystemReturnCode)

it shows different values, as below:

Code:

 time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         786432
stack(kbytes)        unlimited
memory(kbytes)       unlimited
coredump(blocks)     unlimited
nofiles(descriptors) 2000 
The data(kbytes) value is different for the same ID. Why is that?


Any info is appreciated.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

It's not the same user. Look at the environment variables when running a job, to determine the user ID actually running the job. Adjust that user's ulimit.

The ULIMIT configuration parameter in uvconfig may also come into play; that is something else worth checking.

Finally, do you have a ulimit command in the dsenv script? That would be unusual, but it could serve to reduce the process ulimit (a non-superuser process can lower a ulimit, but cannot raise it above its hard limit).
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
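
For reference, a minimal sketch of how to check those last two items from the engine account (the $DSHOME path is an assumption; adjust for your install):

Code:

# ULIMIT parameter in the engine configuration:
grep '^ULIMIT' $DSHOME/uvconfig
# Any ulimit command hiding in the environment script:
grep -i ulimit $DSHOME/dsenv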
shin0066
Premium Member
Posts: 69
Joined: Tue Jun 12, 2007 8:42 am

Post by shin0066 »

Hi Ray,

I used the same user ID to run the command at the UNIX command line, then logged in to DataStage with the same ID and executed the command via a routine.

I did check ULIMIT in the uvconfig file; it is set to 128000, but I am not sure whether it overrides the value when ulimit -a runs via a DataStage routine.

I also checked the dsenv file; there is no ulimit set in it.

Thanks,
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

ULIMIT in uvconfig only comes into play if the UNIX ulimit is lower. Here it isn't. Did you check the job log entry that contains the environment variables, to verify under what user ID the job actually executes?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
shin0066
Premium Member
Posts: 69
Joined: Tue Jun 12, 2007 8:42 am

Post by shin0066 »

Hi Ray,

Here is what I see in the job log for environment variables:

Code:

Environment variable settings:
_=/usr/bin/nohup
LANG=C
LOGIN=dsadm
LDR_CNTRL=MAXDATA=0x30000000
LOGNAME=user123
.
.
.
APT_CONFIG_FILE=/dsadm/Ascential/DataStage/Configurations/Proj.apt
APT_DEFAULT_TRANSPORT_BLOCK_SIZE=32768
APT_MONITOR_MINTIME=10
APT_STRING_PADCHAR= 
.
.
.
.
.

I verified the dsadm ID's ulimit via the UNIX command line, as below; it is still different.

Code:

time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        unlimited
memory(kbytes)       unlimited
coredump(blocks)     unlimited
nofiles(descriptors) 2000
The same ID via a DataStage routine:

Code:

time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         786432
stack(kbytes)        unlimited
memory(kbytes)       unlimited
coredump(blocks)     unlimited
nofiles(descriptors) 2000
Any help is appreciated

Thanks
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Routines run from the test grid are executed by dsapi_slave, and it is that process whose data segment size is limited. You cannot infer anything about jobs from this. You need to execute the ulimit -a command from within a running job (perhaps as a before-job subroutine via ExecSH) to find out what's really happening.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
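
For reference, one way to wire this up (a sketch; ExecSH takes the shell command as the input value of the before-job subroutine, and its output is written to the job log):

Code:

ulimit -a ; id
env | grep LOGNAME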
shin0066
Premium Member
Posts: 69
Joined: Tue Jun 12, 2007 8:42 am

Post by shin0066 »

Hi Ray,

I ran ulimit -a from a before-job subroutine, and it produces the same output as the routine in my previous post:

Code:

time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         786432
stack(kbytes)        unlimited
memory(kbytes)       unlimited
coredump(blocks)     unlimited
nofiles(descriptors) 2000
I asked my UNIX admin to bump everything up to unlimited except nofiles, and he did. When I run ulimit -a from the command line, everything shows as unlimited except nofiles, but running ulimit -a via a shell script still produces the output above. Why the difference between the command line and the shell script?

Thanks
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

OK, try running ulimit -a ; id from the before-job subroutine. The id command may yield a clue.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
shin0066
Premium Member
Posts: 69
Joined: Tue Jun 12, 2007 8:42 am

Post by shin0066 »

Hi Ray,

My UNIX admin set all the ulimit parameters to unlimited as hard limits, but somehow the data segment value is still being overridden via the dsenv profile, even though I am sure nothing sets a ulimit in our dsenv file. The reason I believe it comes from the dsenv profile is that I tested the scenario by removing the dsenv sourcing from my ID's .profile, and then all the ulimit parameters showed as unlimited.

I read a few messages posted by you saying that we can set the ulimit in the dsenv profile to unlimited.

Can you please provide an example command to set the data segment ulimit to unlimited?

Thanks
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

shin0066 wrote: I read a few messages posted by you saying that we can set the ulimit in the dsenv profile to unlimited.

Can you please provide an example command to set the data segment ulimit to unlimited?
Show me where. I doubt that I ever did.

Only the superuser can raise a hard ulimit; an ordinary process can only raise its soft limit up to the hard limit.

What is the ulimit for the dsrpcd process?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
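
One way to answer that last question (a sketch; /proc/<pid>/limits exists on Linux but not on AIX, which this thread appears to be on given LDR_CNTRL, where dsrpcd instead inherits the ulimits of the shell that started the engine):

Code:

# Find the dsrpcd process ID:
ps -ef | grep dsrpcd | grep -v grep
# On Linux, inspect its effective limits directly:
cat /proc/<pid>/limits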
shin0066
Premium Member
Posts: 69
Joined: Tue Jun 12, 2007 8:42 am

Post by shin0066 »

Hi Ray,

Here is the posting: viewtopic.php?t=115283&highlight=data+segment


Currently the job is aborting with the following error:

Code:

 The current soft limit on the data segment (heap) size (805306368) is less than the hard limit (2147483647), consider increasing the heap size limit
I also noticed that in the dsenv file we are setting memory as:
LDR_CNTRL=MAXDATA=0x30000000;export LDR_CNTRL

But 805306368 bytes is exactly 768 MB, which matches the MAXDATA value 0x30000000 (three 256 MB data segments) we set in dsenv, whereas the hard limit, 2147483647, is 2 GB.
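
As a quick check of that arithmetic (a sketch; the hex-to-decimal printf trick assumes a shell such as bash or ksh93):

Code:

# 0x30000000 = 3 x 0x10000000 = 3 x 268435456 bytes = 805306368 bytes = 768 MB
printf "%d\n" 0x30000000    # prints 805306368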

Could this MAXDATA setting be what is preventing the data segment from being unlimited?

Thanks for your time
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Ah. That wasn't me, that was ArndW.

You still haven't reported the ulimit for dsrpcd, nor the value of the USER or LOGNAME environment variable from a job run.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

Have you done both "ulimit -Sa" and "ulimit -Ha"? Also, have you tried "ulimit -Ss 2147483647"?
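
A note on those flags (a sketch: since the failing limit here is the data segment, -d rather than -s may be the flag that matters, and ulimit -d is expressed in kbytes in most shells):

Code:

ulimit -Sa              # soft limits: what processes actually get
ulimit -Ha              # hard limits: the ceiling a non-root user can raise soft limits to
ulimit -Sd unlimited    # raise the soft data limit up to the hard limit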
shin0066
Premium Member
Posts: 69
Joined: Tue Jun 12, 2007 8:42 am

Post by shin0066 »

Hi ArndW,

I haven't tested with the options you suggested. Do I need to set that in my profile or the dsenv profile?

Ray, can you please tell me how to see the ulimit for dsrpcd? I checked dsrpcd using ps -ef | grep dsrpc and it is running as root.

Thanks
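
For readers who land on this thread: a minimal sketch of the kind of fix being discussed (paths and commands are assumptions for a typical server install; raising a soft limit up to the hard limit does not require root):

Code:

# In $DSHOME/dsenv, raise the soft data-segment limit before jobs start:
ulimit -d unlimited
# Restart the engine so dsrpcd and the processes it spawns inherit it:
cd $DSHOME
bin/uv -admin -stop
bin/uv -admin -start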