Memory problem - Routine value - Abnormal termination

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

nick.bond
Charter Member
Posts: 230
Joined: Thu Jan 15, 2004 12:00 pm
Location: London

Memory problem - Routine value - Abnormal termination

Post by nick.bond »

Hi,

My 3 input columns are:

name; start_ip; end_ip

and in a transformer I have a routine that works out all the valid ip addresses for each of these ranges.

The routine loops through all the possible IPs concatenating them into a string with the 'NAME', a pipe delimiter and Char(10), all into a long string that is written to a sequential file. I then read that sequential file as a pipe delimited file and hence normalise the data.

e.g: vListIPs = vListIPs : aName : '|' : NewIp : Char(10)
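The same loop can be sketched in Python for illustration (the function name and the range shown are mine, not the actual routine; a list-plus-join is used because repeated string concatenation is quadratic in Python, whereas the BASIC routine concatenates directly):

```python
import ipaddress

def generate_ip_list(name, start_ip, end_ip):
    # Illustrative stand-in for the BASIC routine: emit one "NAME|ip" record
    # per address in the inclusive range, each terminated by Char(10),
    # accumulated into one long string.
    start = int(ipaddress.IPv4Address(start_ip))
    end = int(ipaddress.IPv4Address(end_ip))
    parts = [f"{name}|{ipaddress.IPv4Address(n)}\n" for n in range(start, end + 1)]
    return "".join(parts)

print(generate_ip_list("AU", "10.0.0.1", "10.0.0.3"))
```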

This all works fine in the development environment but fails in the production environment when the IP range is large (e.g. 8,323,072 IPs).

I know this seems like a lot to concatenate together into one field, and if it hadn't worked in the dev environment I would have found another way, but it does work, so I am baffled.

Until just now I have always just received:
Abnormal termination of stage TfmAuHlrAU.0000_99.tGenerateIPs detected
But when I hard coded the range into the job I got a bit more info:
DataStage Job 509 Phantom 28595
Program "DSD.GetStatus": Line 25,
Available memory exceeded. Unable to continue processing record.
Program "DSD.Startup": Line 142,
Available memory exceeded. Unable to continue processing record.
I can only think that there is a parameter setting on the Prod server that is lower than the dev server. Does anyone have any clues as to what might control this?

Additional info:
  • The prod box has huge physical memory, much more than dev.
  • Nothing else of significance is running.
  • If I return the result to the transformer but don't use the value, there is no issue.
  • The metadata of the column was VarChar(1000); although this is less than the length of the data, there were no complaints on dev. In prod I have now changed this to LongVarChar(2147483647) (not that I think this will make any difference).
  • The length of the full string is 159,346,176 characters.
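A quick sanity check of those two numbers (the per-record layout is assumed from the routine, not confirmed):

```python
# 8,323,072 addresses producing a 159,346,176-character string works out
# to roughly 19 bytes per "NAME|ip\n" record, which is consistent with a
# short name plus a dotted-quad IP address and the Char(10) terminator.
num_ips = 8_323_072
total_chars = 159_346_176
avg_record = total_chars / num_ips
print(f"{avg_record:.2f} bytes per record")
assert 19 < avg_record < 20

# The whole string is ~152 MiB -- large for one field, but well below a
# multi-gigabyte data ulimit.
print(f"{total_chars / 2**20:.1f} MiB total")
```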
Thanks for any help,

Nick.
Regards,

Nick.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

DataStage server jobs can handle strings of this length without a problem (internally, strings are stored in a neat little form of linked list, so inserting the letter "a" at the beginning of your long string wouldn't mean doing hundreds of thousands of shift-rights on the rest of the contents - neat, eh?)

Anyway, in your case my first guess is that your user limits are set lower on production than in development. Could you issue a "ulimit -a" in both environments?
nick.bond
Charter Member
Posts: 230
Joined: Thu Jan 15, 2004 12:00 pm
Location: London

Post by nick.bond »

These were taken from command line, logged in as the DS user I am running the jobs with. Is that ok or do I need to put it into a job?

I see the PROD data(kbytes) is slightly lower, which may put it just under the requirement. Could this be the one? It looks like a lot.

DEV
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) 4294836160
stack(kbytes) 131072
memory(kbytes) unlimited
coredump(blocks) 4194303

PROD
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) 4294575040
stack(kbytes) 392192
memory(kbytes) unlimited
coredump(blocks) 4194303
Regards,

Nick.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

I would check the values from within a running job to be certain. Those are some odd ulimit upper bounds (I wonder why they aren't just set to "unlimited"? What OS are you on? Perhaps they are auto-calculated).

Although I cannot think of any uvconfig setting that would directly limit memory size in DataStage jobs, perhaps you could do a UNIX command "smat -t | grep \*" to see which uvconfig settings you might have changed. It would be nice to compare prod and dev results as well.

But check the ulimits from a job (before-job unix call is the quickest) just to make sure that you aren't getting a limitation there.
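One way to capture what a job process actually inherits is a small script invoked from the before-job call. A sketch using Python's resource module (the script itself and its output format are my own; a plain "ulimit -a > /tmp/somefile" in the before-job ExecSH works just as well):

```python
import resource

def fmt(val):
    # ulimit -a reports data/stack in kbytes and "unlimited" for no cap.
    return "unlimited" if val == resource.RLIM_INFINITY else str(val // 1024)

# The limits most relevant to an "Available memory exceeded" abort.
for label, rid in [("data(kbytes)", resource.RLIMIT_DATA),
                   ("stack(kbytes)", resource.RLIMIT_STACK),
                   ("nofiles(descriptors)", resource.RLIMIT_NOFILE)]:
    soft, hard = resource.getrlimit(rid)
    if rid == resource.RLIMIT_NOFILE:
        print(f"{label} soft={soft} hard={hard}")
    else:
        print(f"{label} soft={fmt(soft)} hard={fmt(hard)}")
```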
nick.bond
Charter Member
Posts: 230
Joined: Thu Jan 15, 2004 12:00 pm
Location: London

Post by nick.bond »

When run from the jobs the ulimit result is quite different:

DEV
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) 4063168
stack(kbytes) 131072
memory(kbytes) unlimited
coredump(blocks) 4194303
nofiles(descriptors) 4000

PROD
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) 3802048
stack(kbytes) 392192
memory(kbytes) unlimited
coredump(blocks) 4194303
nofiles(descriptors) 4096

I can see that by reducing the range it works in prod so I just need to find out if 'data' is the correct limit and where it is being set.
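Plugging the in-job numbers above into the earlier arithmetic (the interpretation in the comments is my guess, not a measured fact):

```python
# In-job data segment limits, converted from kbytes to bytes.
dev_data = 4_063_168 * 1024      # ~3.87 GiB
prod_data = 3_802_048 * 1024     # ~3.62 GiB
string_len = 159_346_176         # ~152 MiB

# The final string fits comfortably under either limit, so if data(kbytes)
# really is the ceiling being hit, the process must be consuming many times
# the string's size -- e.g. transient copies made during concatenation plus
# whatever heap the job already holds. (Assumption only.)
print(prod_data // string_len, "copies of the string fit under prod's limit")
print((dev_data - prod_data) // 2**20, "MiB difference between dev and prod")
```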

In uvconfig there is a setting for
ULIMIT 128000
But this is the same as the limit on both machines.

I have just done a full compare of dev and prod uvconfig files and they are exactly the same.

Where else may these settings come from?

I didn't understand the unix command you gave me,
"smat -t | grep \*"
I appreciate your help, thanks.
Regards,

Nick.
nick.bond
Charter Member
Posts: 230
Joined: Thu Jan 15, 2004 12:00 pm
Location: London

Post by nick.bond »

We are using HP-UX 11.1
Regards,

Nick.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

The "smat" command can be found in your DataStage $DSHOME/bin directory and has a number of options, the "-t" shows the configuration file values and any lines that are changed from their defaults are prefaced with a "*", thus the grep to show only non-default values.

Have you checked your dsenv file? If that does not limit the settings, then most likely you have inherited the settings from the process that started DataStage (either root or another user, depending on your impersonation mode).
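That inheritance is easy to demonstrate: a limit lowered in one process is passed on to everything it spawns, which is why an engine started under a restricted account hands those limits to every job. A small sketch (nofiles is used here only because it is the safest limit to change in a demo):

```python
import resource
import subprocess
import sys

# Lower this process's soft nofiles limit, then spawn a child and ask it
# what limit it sees: it reports the lowered value, never the system default,
# exactly as jobs inherit the limits of whichever process started the engine.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
new_soft = min(64, soft)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))

child = subprocess.run(
    [sys.executable, "-c",
     "import resource; print(resource.getrlimit(resource.RLIMIT_NOFILE)[0])"],
    capture_output=True, text=True)
print(child.stdout.strip())  # the child reports the inherited (lowered) limit
```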
nick.bond
Charter Member
Posts: 230
Joined: Thu Jan 15, 2004 12:00 pm
Location: London

Post by nick.bond »

Thanks, smat is handy!

* MFILES = 1500
* T30FILE = 2500
* GLTABSZ = 150
* RLTABSZ = 150
* SYNCALOC = 0
* MAXRLOCK = 149
* UDRBLKS = 0
* IMPERSONATION = 1
* INSTANCETAG = ade

We performed a non-root install but seem to be running in Impersonation mode.

I'll try to get the ulimit of root.
Regards,

Nick.
nick.bond
Charter Member
Posts: 230
Joined: Thu Jan 15, 2004 12:00 pm
Location: London

Post by nick.bond »

Yep - the ulimit values are coming from root so I'll have to try and get them changed.

Thanks for all your help.
Regards,

Nick.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

The ULIMIT in uvconfig is ignored if the UNIX ulimit is larger than the value specified in uvconfig.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
nick.bond
Charter Member
Posts: 230
Joined: Thu Jan 15, 2004 12:00 pm
Location: London

Post by nick.bond »

There is only one value for ULIMIT in uvconfig, but a few values for ulimit at the unix/user level. What does the ULIMIT in uvconfig control?

If I set the value in uvconfig higher than the corresponding value of root's ulimit, will that increase the ulimit value at runtime? Or does this only work for non-impersonation?

Thanks, Nick.
Regards,

Nick.