It's not the same user. Look at the environment variables logged when the job runs to determine the user ID that is actually executing it, then adjust that user's ulimit.
It may also be the case that the ULIMIT configuration parameter in uvconfig comes into play, something else that may be worth checking.
Finally, do you have a ulimit command in the dsenv script? It would be unusual, but it could serve to reduce a process's ulimit (a process other than superuser cannot increase a ulimit).
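For illustration only - if dsenv did contain a line like the following (the value here is hypothetical), every DataStage process that sources dsenv would inherit the reduced limit:

```shell
# Hypothetical example of a limiting line in dsenv: caps the data
# segment (heap) soft limit for every process that sources this file.
# ulimit -d takes the value in kilobytes; 131072 KB = 128 MB.
ulimit -S -d 131072
```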
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
I used the same user ID to run the command at the UNIX command line, then logged in to DataStage with the same ID and ran it via a routine.
I also checked ULIMIT in the uvconfig file; it is set to 128000, but I am not sure whether that value is overriding anything when running ulimit -a via a DataStage routine.
I also checked the dsenv file - there is no ulimit set in it.
ULIMIT in uvconfig only comes into play if the UNIX ulimit is lower. Here it isn't. Did you check the job log entry that contains the environment variables, to verify under what user ID the job actually executes?
Routines run from the test grid are executed by dsapi_slave, and it is that process whose data stack size is limited, so you cannot infer anything from this. You need to execute the ulimit -a command from within a running job (perhaps as a before-job subroutine via ExecSH) to find out what's really happening.
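As a sketch, the command given to ExecSH (or wrapped in a small script it calls) could capture the limits of the actual job process; the output path here is just an example:

```shell
# Run from within a job (e.g. Before-job subroutine = ExecSH) so the
# limits reported are those of the real job process, not dsapi_slave.
# /tmp/job_ulimit.txt is an illustrative path - use any writable file.
ulimit -a > /tmp/job_ulimit.txt 2>&1
```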
I asked my UNIX admin to bump everything up to unlimited except nofiles, and he did. When I run ulimit -a from the command line it shows everything as unlimited except nofiles, but when I run ulimit -a via a shell script as above, the limits still come back restricted. Why the difference between the command line and the shell script?
My UNIX admin set all ulimit parameters to unlimited as hard limits, but somehow the data segment value is being overridden via the dsenv profile, and I am sure there is no ulimit setting in our dsenv file. The reason I believe it is coming from dsenv is that I tested the scenario by removing the dsenv sourcing from my ID's .profile, and then ulimit showed unlimited for all parameters.
I read a few messages you posted saying that we can set the ulimit in the dsenv profile to unlimited.
Can you please provide an example of the command to set the data segment ulimit to unlimited?
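For reference, a line like the following near the top of dsenv is the usual shape of such a setting; this is only a sketch, and it can succeed only up to whatever hard limit the admin has configured (a non-root process cannot raise a hard limit):

```shell
# Attempt to raise the data segment (heap) soft limit to unlimited.
# Redirecting stderr keeps dsenv quiet if the hard limit blocks it.
ulimit -S -d unlimited 2>/dev/null || echo "hard limit blocks unlimited -d"
```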
The current soft limit on the data segment (heap) size (805306368) is less than the hard limit (2147483647), consider increasing the heap size limit
I also noticed that in the dsenv file we are setting memory as
LDR_CNTRL=MAXDATA=0x30000000;export LDR_CNTRL
but 805306368 bytes is 768 MB, i.e. three 256 MB data segments - exactly the 0x30000000 we are setting in dsenv - whereas I thought it should be 2 GB.
Could this setting be what is preventing the data segment from being unlimited?
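The arithmetic can be checked directly: the soft limit in the warning is exactly the MAXDATA value from dsenv, expressed in bytes.

```shell
# Sanity-check the numbers: LDR_CNTRL=MAXDATA=0x30000000 from dsenv
# is the same value the warning reports as the current soft limit.
echo $((0x30000000))             # bytes: 805306368
echo $((0x30000000 / 1048576))   # megabytes: 768
echo $((805306368 / 268435456))  # number of 256 MB segments: 3
```

So the job's heap soft limit is coming straight from the MAXDATA setting, not from any ulimit command.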