DMEMOFF, PMEMOFF, LDR_CNTRL

my_stm
Premium Member
Posts: 58
Joined: Mon Mar 19, 2007 9:49 pm
Location: MY

DMEMOFF, PMEMOFF, LDR_CNTRL

Post by my_stm »

Previously I had a job that kept aborting with the following error message:

href_StgExtIdSiMap,0: Could not map table file "/home/dsadm/Ascential/DataStage/Datasets/lookuptable.20080109.ymetadc (size 1133689184 bytes)": Not enough space
Error finalizing / saving table /entis/IMS/hash/lkpfsPStgExtIdSiMap_bkp

This job basically consists of DB2 -> Transformer -> Lookup File Set.

It loads more than 10 million rows into the lookup file set.

I contacted IBM and was told that because the lookup file set's partitioning was set to 'Entire', the server ran out of 'contiguous memory blocks'.

Their suggested fix was to change the following (sketched below):

1) Change DMEMOFF to 0x90000000 and PMEMOFF to 0xa0000000 (in the uvconfig file)

2) Set LDR_CNTRL=MAXDATA=0x80000000 (in the dsenv file)
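
The edits would look roughly like this. Treat it as a sketch only: $DSHOME is the usual engine home, the values are the ones IBM quoted, and the regen/restart step is standard housekeeping after touching uvconfig rather than something specific to this case.

    # $DSHOME/uvconfig -- shared memory segment offsets
    DMEMOFF 0x90000000
    PMEMOFF 0xa0000000

    # after editing uvconfig, regenerate the binary config and restart the
    # engine (run from $DSHOME with the engine stopped), e.g.:
    #   bin/uv -admin -regen

    # $DSHOME/dsenv -- AIX loader control: raise the per-process data area
    LDR_CNTRL=MAXDATA=0x80000000
    export LDR_CNTRL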

At the moment my DataStage admin has applied item 1, and the job no longer seems to abort now that I use hash partitioning on the lookup file set.

I was wondering what role the above settings play in resolving the abort issue?
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

You need a lot more knowledge about the way the DataStage engine works in order to make any sense of these. In short, there are up to four separate shared memory segments:

- the "disk" shared memory segment, in which lock tables and other "disk-related" structures are stored;
- the "printer" shared memory segment, best thought of as "user" shared memory;
- the "BASIC Catalog" shared memory segment, which is not used by default by DataStage but is a hangover from when DataStage was a UniVerse application;
- the "NLS" shared memory segment, into which NLS character maps are loaded at startup.
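
To see what is actually allocated on a running host, the generic UNIX ipcs utility lists shared memory segments. This is only an illustrative check, not a DataStage-specific tool; the keys, owners and sizes you see depend on your install.

    # list all shared memory segments (key, id, owner, size) on the host
    ipcs -m
    # the engine's segments appear among these; the xMEMOFF offsets
    # discussed below govern where each is placed and hence how large it can be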

The xMEMOFF configuration parameters determine a starting address in memory for each of these shared memory segments. The distance between one address and the next effectively places an upper limit on how big a particular shared memory segment can be.

For example, with DMEMOFF at 0x90000000 and PMEMOFF at 0xa0000000, the largest size of the "disk" shared memory segment is the difference between these two addresses, or 0x10000000 bytes (256MB).
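
The arithmetic is easy to verify from a bash or ksh93 shell; this is just a back-of-the-envelope check, not a DataStage command.

    # difference between the two offsets = ceiling on the disk segment's size
    printf "%d MB\n" $(( (0xa0000000 - 0x90000000) / 1024 / 1024 ))
    # prints: 256 MB
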
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
hsahay
Premium Member
Posts: 175
Joined: Wed Mar 21, 2007 9:35 am

Post by hsahay »

Okay... I am trying to make sense of what I have read here. Please help me along.

The original values for the four memory segment offsets in uvconfig are:

DMEMOFF = 0x40000000
PMEMOFF = 0x50000000
CMEMOFF = 0x60000000
NMEMOFF = 0x70000000

So Ray, what you are saying is that simply changing DMEMOFF to 0x90000000 and PMEMOFF to 0xa0000000 does not change anything, because the difference between them is still 0x10000000? So the disk segment will still be 256 MB? What is the significance of these segments, though? For example, if the disk segment is 256 MB, what does that mean in terms of space utilization by parallel jobs, or the Lookup stage, or whatever?

Now, what would be a reason to change these values? I tried it and it did not solve the "out of space" problem on the lookup map table. The thing that finally fixed it for us was the solution given by "Fridge", which involves using "ldedit" to change the maxdata value in the osh executable's header.
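
For anyone finding this thread later: that fix patches the MAXDATA field in the XCOFF header of the Parallel Engine's osh binary on AIX. A minimal sketch, assuming osh lives under $APT_ORCHHOME/bin and that 0x80000000 is the value you want; do it with the engine idle and keep a backup, since the exact path and dump flags can vary by install and AIX level.

    cd $APT_ORCHHOME/bin            # placeholder path -- use your install's location
    cp osh osh.orig                 # keep a copy of the original binary
    ldedit -b maxdata:0x80000000 osh
    dump -ov osh | grep -i maxdata  # confirm the new value in the auxiliary header
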
vishal
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

The proper answer to this question would take about five days (the duration of a "UniVerse Internals" class), which I do not propose to do here.

DMEMOFF, PMEMOFF etc relate to shared memory segments used by the DataStage Server Engine. They are set in the uvconfig file.

LDR_CNTRL is an environment variable that controls an entirely different aspect of the operating system's (not purely the DataStage Parallel Engine's) behaviour, namely the maximum size of a process's data area. It is set in an operating system shell (usually the dsenv file for DataStage processes).
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.