Node configuration

Post questions here related to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

rao_2004
Charter Member
Posts: 7
Joined: Wed Nov 10, 2004 3:38 pm

Node configuration

Post by rao_2004 »

Hi,
We are trying to add more space to one of the projects on an AIX server. Our main goal is to use the new space (a new mount point) for this specific project from a performance point of view.
Do I have to edit the configuration file, or just change the dataset/text file paths at the environment level?
Is there any better way of utilising the space for this specific project?
Thanks,
rao..
Kirtikumar
Participant
Posts: 437
Joined: Fri Oct 15, 2004 6:13 am
Location: Pune, India

Post by Kirtikumar »

There can be many ways to utilize it. A few that I can think of:

If you are creating temp files, you can create them in this space.

If the AIX box is an SMP machine, then just change the resource disk path for one of the nodes in the configuration file and the new space will be used; a rough example follows below.
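For instance (only a sketch; the node name and the /newspace mount point are placeholders, not your actual paths), the entry for that node in the configuration file might become:

node "node2"
{
fastname "dev"
pools ""
resource disk "/newspace/ABC/datasets" {pools ""}
resource scratchdisk "/newspace/ABC/scratch" {pools ""}
}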
Regards,
S. Kirtikumar.
rao_2004
Charter Member
Posts: 7
Joined: Wed Nov 10, 2004 3:38 pm

Post by rao_2004 »

You mean I have to pass the $APT_CONFIG_FILE parameter to all jobs developed for the project and point it at the new disk space?
kwwilliams
Participant
Posts: 437
Joined: Fri Oct 21, 2005 10:00 pm

Post by kwwilliams »

Only if you want them to be able to run with a configuration file other than the default one. (Which you will at some point, so that you can dial a job up or down without affecting every job in your project.)
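As a rough illustration only (this assumes $APT_CONFIG_FILE has been added as a job parameter in the job; the project, job and config file names here are placeholders), you can then pick the configuration file per run, for example from the command line:

dsjob -run -param '$APT_CONFIG_FILE=/apps/ascential/tst/dspara/ABC/abc_4node.apt' ABC LoadJob

If the parameter is left at its default (e.g. $PROJDEF), the job picks up the project-level configuration file instead.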
rao_2004
Charter Member
Posts: 7
Joined: Wed Nov 10, 2004 3:38 pm

Post by rao_2004 »

Say, for example, I want to use the 100 GB for a specific project ABC, and I want to run all jobs in project ABC on 4 or 6 nodes for better performance.
This is the way I'm following; please correct me if I'm doing something wrong or if there is a better way:
1. Create a config file for project ABC and use the $APT_CONFIG_FILE parameter in all of project ABC's jobs.
2.Node file:
node "node1"
{
fastname "dev"
pools ""
resource disk "/apps/ascential/tst/dspara/ABC/node1/datasets" {pools ""}
resource scratchdisk "/apps/ascential/tst/dspara/ABC/node1/scratch" {pools ""}
}
I added another 3 nodes like this to the config file (a sketch of the full file is below).
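For reference, roughly how the complete file could look (a sketch only: the outer braces around the whole file are required, only the first two nodes are shown, node3 and node4 would repeat the same pattern with their own datasets and scratch directories, and the fastname and paths are simply carried over from the fragment above):

{
node "node1"
{
fastname "dev"
pools ""
resource disk "/apps/ascential/tst/dspara/ABC/node1/datasets" {pools ""}
resource scratchdisk "/apps/ascential/tst/dspara/ABC/node1/scratch" {pools ""}
}
node "node2"
{
fastname "dev"
pools ""
resource disk "/apps/ascential/tst/dspara/ABC/node2/datasets" {pools ""}
resource scratchdisk "/apps/ascential/tst/dspara/ABC/node2/scratch" {pools ""}
}
}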
Raghavendra
Participant
Posts: 147
Joined: Sat Apr 30, 2005 1:23 am
Location: Bangalore,India

Post by Raghavendra »

I believe it should be okay to proceed. Let's see what our DS gurus comment on this.
Raghavendra
Dare to dream and care to achieve ...
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

Each and every job has a node configuration at which it will run best, and this is seldom the maximum number of nodes. By "best" I don't necessarily mean fastest; it could mean 'fastest speed with the smallest footprint'.

You can use two basic methods to control your runtime node allocations:

a) different config files, as you have already mentioned
b) one configuration file, but define node pools within it and constrain the jobs (or individual stages) to those pools; a sketch follows below.
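A rough sketch of option (b), reusing the paths from earlier in the thread (the pool name "abc" and all names and paths here are placeholders): node1 belongs to both the default pool "" and the "abc" pool, while node2 belongs only to "abc". A stage or job left unconstrained runs in the default pool (node1 only); one constrained to the "abc" pool (for example via the stage's Advanced tab node pool constraint) runs on both nodes.

{
node "node1"
{
fastname "dev"
pools "" "abc"
resource disk "/apps/ascential/tst/dspara/ABC/node1/datasets" {pools ""}
resource scratchdisk "/apps/ascential/tst/dspara/ABC/node1/scratch" {pools ""}
}
node "node2"
{
fastname "dev"
pools "abc"
resource disk "/apps/ascential/tst/dspara/ABC/node2/datasets" {pools ""}
resource scratchdisk "/apps/ascential/tst/dspara/ABC/node2/scratch" {pools ""}
}
}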