One DataStage Server is going to die
Moderators: chulett, rschirm, roy
-
- Participant
- Posts: 142
- Joined: Wed Mar 24, 2004 10:51 am
- Location: Brazil
I have 2 DataStage Servers, Server1 and Server2.
Server1 is being decommissioned, and I need to move all of Server1's projects to Server2 and make them work there.
I'm comparing TNS entries, the DB2 catalog, directory trees, ...
I know that, for the projects to work, the uvconfig file will need to be changed.
Today both work on separate machines, but I'm afraid they won't work together on the same machine, so I'd like to know the best way to go about this.
I know there are a lot of parameters in the uvconfig file, but let's suppose the following:
If Server1's uvconfig has "MFILES=100" and Server2's uvconfig has "MFILES=150", then, since Server1 is going away, do I need to set "MFILES=250" in Server2's uvconfig for everything to work?
Regards,
Fernando Martins
No, you'll be fine. You're worried that having more jobs on one server is going to require higher settings. The MFILES is fine where you have it, just make sure your T30FILES is above 500 to support lots of simultaneous executing jobs.
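For what it's worth, here's a minimal sketch of how you might check the two tunables Ken mentions. The path and values below are assumptions (the real uvconfig sits under $DSHOME and uses `NAME value` lines, one parameter per line); a throwaway sample file is used so the commands run as-is:

```shell
# Invented excerpt of a merged uvconfig; on a real engine the file lives
# under $DSHOME (exact path varies by install). Values are the examples
# from this thread.
cat > /tmp/uvconfig.sample <<'EOF'
MFILES 150
T30FILES 512
EOF

# Pull out the two tunables discussed above.
awk '$1 == "MFILES" || $1 == "T30FILES" { print $1, $2 }' /tmp/uvconfig.sample
```

If I remember correctly, after editing the real uvconfig you also need to regenerate the engine's shared configuration (uvregen under $DSHOME/bin) and restart the engine before changes take effect - check the administration guide before relying on that.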
Kenneth Bland
Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
-
- Participant
- Posts: 142
- Joined: Wed Mar 24, 2004 10:51 am
- Location: Brazil
Sorry!
DSguru2B wrote: Your post heading is so tragic ...
What is the criteria to set DISKCACHE and DCBLOCKSIZE?
# DISKCACHE - Specifies the state of the DISKCACHE subsystem.
# Valid values are the following:
# 0 = REJECT, DISKCACHE is inactive
# and files opened in READONLY or WRITECACHE mode
# will give an error.
# -1 = ALLOW, the default value, DISKCACHE is inactive
# and files opened in READONLY or WRITECACHE mode
# are processed as if opened in READWRITE mode.
# n = DISKCACHE is active, where n is the size of the
# DISKCACHE shared memory in megabytes, and values
# 1-1000 are allowed.
Today I have DISKCACHE = 512! Why?
# DCBLOCKSIZE - Specifies the size of a DISKCACHE buffer
# in 1K units (1024 bytes).
# Valid values are 4, 8, 16, 32, and 64 with a default value of 16.
When does DCBLOCKSIZE=16 work better than DCBLOCKSIZE=64?
This is the first time I'm looking at these parameters, and I have a lot of doubts.
Fernando
Fernando,
The settings are tunables for when you have DataStage disk caching enabled. By setting a DISKCACHE value your system is enabled for public hashed file caching. This does not necessarily mean you are actually using the functionality, as a number of conditions and job settings must also be in place before caching kicks in.
The PDF describing this is on your client PC, in the DataStage directory under \Docs\dsdskche.pdf
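As a rough back-of-envelope (my own sketch, not from the PDF): DISKCACHE gives the cache's shared memory size in megabytes and DCBLOCKSIZE the buffer size in kilobytes, so their ratio is the number of cache buffers you get:

```shell
# Cache geometry implied by the two tunables (values from this thread).
DISKCACHE_MB=512     # active disk cache: 512 MB of shared memory
DCBLOCKSIZE_KB=16    # buffer size in 1K units (valid: 4, 8, 16, 32, 64)

buffers=$(( DISKCACHE_MB * 1024 / DCBLOCKSIZE_KB ))
echo "cache buffers: $buffers"
```

Smaller blocks mean more buffers, which tends to suit workloads with many small hashed-file rows; larger blocks suit fewer, larger records. Treat that as a rule of thumb to benchmark, not gospel.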
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
-
- Participant
- Posts: 3593
- Joined: Thu Jan 23, 2003 5:25 pm
- Location: Australia, Melbourne
- Contact:
Your main consideration is how the system will perform if you try to run the load of both servers on one machine at the same time. For parallel jobs this means a lot of extra disk I/O, especially on the scratch space and temp space: do you have enough disk space allocated? Even if you run both sets of jobs at different times, they may leave behind a lot of datasets that take up disk space.
The other consideration is that if you overload a parallel system with too many jobs you can get unexpected aborts or slow performance due to lack of resources. A performance test will verify whether you are better off running server 1 and server 2 loads separately or whether they are efficient when running at the same time.
You can increase both the RAM and disk space on the machine if you have concerns.
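One simple way to act on the advice above is to watch free space on the relevant filesystems during a test run. The directories below are placeholders; substitute your own dataset and scratch paths:

```shell
# Report free space (in KB, POSIX format) for each directory a test run
# writes to. /tmp and /var/tmp are stand-ins for your real dataset and
# scratch directories.
for dir in /tmp /var/tmp; do
    df -kP "$dir" | tail -1
done
```

Run it before, during, and after a test load and compare the "Available" column to see how much each run actually consumes.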
Certus Solutions
Blog: Tooling Around in the InfoSphere
Twitter: @vmcburney
LinkedIn:Vincent McBurney LinkedIn
-
- Participant
- Posts: 142
- Joined: Wed Mar 24, 2004 10:51 am
- Location: Brazil
vmcburney wrote: Your main consideration is how the system will perform if you try and run the load of both servers on one server at the same time. For parallel jobs this means a lot of extra disk I/O, especially on the scratch space and temp space, do you have enough disk space allocated? Even if you run both sets of jobs at different times they may leave behind a lot of datasets that will take up disk space. The other consideration is that if you overload a parallel system with too many jobs you can get unexpected aborts or slow performance due to lack of resources. A performance test will verify whether you are better off running server 1 and server 2 loads separately or whether they are efficient when running at the same time. You can increase both the RAM and disk space on the machine if you have concerns.
The server that will continue to live is a very powerful machine. RAM and disk space are not the problem. :D
When you said "scratch space and temp space", did you mean the SCRMAX and SCRSIZE parameters in the uvconfig file?
Fernando
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
No - these are for server jobs only. Scratch space is determined by the directories mentioned in the configuration file. Temporary space is determined by the directory mentioned in the TMPDIR environment variable.
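To see concretely what Ray means: a parallel configuration file lists `resource disk` and `resource scratchdisk` entries per node, and you can pull the scratch paths out with a one-liner. The file below is an invented sample in the usual *.apt syntax; the paths are placeholders:

```shell
# Invented sample parallel configuration file (*.apt syntax).
cat > /tmp/sample.apt <<'EOF'
{
  node "node1" {
    fastname "etlhost"
    pools ""
    resource disk "/data/ds/datasets" { pools "" }
    resource scratchdisk "/data/ds/scratch" { pools "" }
  }
}
EOF

# Extract the scratchdisk directories; feed each to df to size scratch space.
grep scratchdisk /tmp/sample.apt | awk '{ gsub(/"/, "", $3); print $3 }'
```

Each printed path can then be checked with `df -kP <path>` to see how much scratch space your jobs really have.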
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
-
- Participant
- Posts: 3593
- Joined: Thu Jan 23, 2003 5:25 pm
- Location: Australia, Melbourne
- Contact:
Some sorting and aggregation may go into your temp directory. Most datasets will be saved onto your node directories as defined in your configuration file. Your machine may have a lot of disk space but you need to make sure it is allocated correctly. Disk space monitoring during a test run will tell you what you need to know.
Certus Solutions
Blog: Tooling Around in the InfoSphere
Twitter: @vmcburney
LinkedIn:Vincent McBurney LinkedIn
-
- Participant
- Posts: 142
- Joined: Wed Mar 24, 2004 10:51 am
- Location: Brazil
Ray, sorry, but:
ray.wurlod wrote: No - these are for server jobs only. Scratch space is determined by the directories mentioned in the configuration file. Temporary space is determined by the directory mentioned in the TMPDIR enviro ...
Is the "configuration file" the same as uvconfig?
Where can I check how big my scratch space is?
My TMPDIR environment variable is blank.
Fernando
PX sizing = look at your apt_config file for paths used
Server sizing = as mentioned earlier, look at the paths in uvconfig
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Configuration file is a file, probably in $DSHOME/../Configurations, whose name ends in ".apt" - for example default.apt
The current value of the APT_CONFIG_FILE environment variable determines which configuration file is in use.
Every parallel job that runs logs a message indicating which configuration file was used, and its contents. From this you can determine the paths of the directories used for disk and scratchdisk resource. From those you can determine the size of your scratch space.
If TMPDIR is empty, /tmp is used (or \tmp, if it exists, on the current drive on Windows).
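A quick sanity check along the lines Ray describes (the variable names are the ones mentioned in this thread; the fallback mirrors the Unix behaviour he describes):

```shell
# Which parallel configuration file is in use, and where will temp files go?
# APT_CONFIG_FILE selects the *.apt file; TMPDIR defaults to /tmp on Unix.
echo "config file: ${APT_CONFIG_FILE:-<not set>}"
echo "temp space : ${TMPDIR:-/tmp}"
```

If the first line prints "&lt;not set&gt;", check the job log instead: as noted above, every parallel job logs which configuration file it used and its contents.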
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.