Regarding Grid Configuration File issues

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

moalik
Participant
Posts: 39
Joined: Thu Sep 15, 2011 8:15 am
Location: Melbourne

Regarding Grid Configuration File issues

Post by moalik »

Hi All,

We have created the grid configuration on the DataStage engine. I have configured the master configuration file as well as the project-level configuration file.

[abc@XXXXX etc]# cat master_config.apt

Code:

{
        // Conductor node entry
        node "CONDUCTOR"
        {
                fastname "ABC Server"
                pools "conductor"
                resource disk "/opt/IBM/InformationServer/Server/Datasets" {pools ""}
                resource scratchdisk "/opt/IBM/InformationServer/Server/Scratch" {pools ""}
        }

        node "COMPUTE_default"
        {
                fastname "ABC Server"
                pools "default"
                resource disk "/opt/IBM/InformationServer/Server/Datasets" {pools ""}
                resource scratchdisk "/opt/IBM/InformationServer/Server/Scratch" {pools ""}
        }

        // Compute node entry
        node "node01"
        {
                fastname "ABC Server"
                pools "node01"
                resource disk "/opt/IBM/InformationServer/Server/Datasets" {pools ""}
                resource scratchdisk "/opt/IBM/InformationServer/Server/Scratch" {pools ""}
        }
        node "node02"
        {
                fastname "GHI Server"
                pools "node02"
                resource disk "/opt/IBM/InformationServer/Server/Datasets" {pools ""}
                resource scratchdisk "/opt/IBM/InformationServer/Server/Scratch" {pools ""}

}
My project-level default configuration file is:

Code:

{
	node "COMPUTE_01"
	{
		fastname "ABC Server"
		pools "node01"
		resource disk "/opt/IBM/InformationServer/Server/Datasets" {}
		resource scratchdisk "/opt/IBM/InformationServer/Server/Scratch" {}
	}
	node "COMPUTE_02"
	{
		fastname "GHI Server"
		pools "node02"
		resource disk "/opt/IBM/InformationServer/Server/Datasets" {}
		resource scratchdisk "/opt/IBM/InformationServer/Server/Scratch" {}
	}
}
When I try to run a parallel job, I get a compilation error with the following details:

The grid resource node pools are no longer available for this project, and will be removed: """
Please go to the designer job properties grid tab to check the current resources for this job.


I have tried every configuration change I can think of but am still getting the above error. Could you please help me resolve the issue?

Thanks,
Mohsin Khan
Datastage Consultant
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

You seem to be missing the closing } for 'node02'.
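For reference, that node02 entry would look something like this (same fastname and paths from your master_config.apt, just with its own closing brace added before the outer one):

Code:

        node "node02"
        {
                fastname "GHI Server"
                pools "node02"
                resource disk "/opt/IBM/InformationServer/Server/Datasets" {pools ""}
                resource scratchdisk "/opt/IBM/InformationServer/Server/Scratch" {pools ""}
        }
}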
-craig

"You can never have too many knives" -- Logan Nine Fingers
PaulVL
Premium Member
Posts: 1315
Joined: Fri Dec 17, 2010 4:36 pm

Post by PaulVL »

Also, I believe you mean CLUSTER and not GRID. Nothing in your APT file indicates that it is dynamically generated (a GRID deployment). A cluster is a static setup across multiple servers.
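One quick sanity check on the static side: APT_CONFIG_FILE is the environment variable the parallel engine reads to locate the configuration file, so you can confirm which file a job is actually picking up, e.g.:

Code:

	# Which configuration file will parallel jobs use?
	echo $APT_CONFIG_FILE
	# In a static cluster setup this points at one fixed .apt file;
	# a genuine grid deployment regenerates the file for each job run.
	cat $APT_CONFIG_FILE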
moalik
Participant
Posts: 39
Joined: Thu Sep 15, 2011 8:15 am
Location: Melbourne

Post by moalik »

Hi All,

Thanks a lot for the information.

Actually, our Linux servers are in a cluster, so we just disabled the Grid tab, added the second server's node details to the configuration file, and it worked fine.

Thanks,
Mohsin Khan
Datastage Consultant
PaulVL
Premium Member
Posts: 1315
Joined: Fri Dec 17, 2010 4:36 pm

Post by PaulVL »

BTW: I recommend changing your resource and scratch disk locations.

Your scratch is on an NFS or CFS mount, which will hurt your job performance quite a bit. You will want something local to each compute node/server.

Your datasets are being saved to the same mount your engine binaries are on, so you run the risk of filling up that mount and crashing the entire DataStage engine.
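As an illustration only (the paths below are placeholders, substitute whatever local filesystems you actually have on each server), a compute node entry would then look more like:

Code:

	node "COMPUTE_01"
	{
		fastname "ABC Server"
		pools "node01"
		// datasets on a mount separate from the engine install
		resource disk "/data/datasets" {pools ""}
		// scratch on storage local to this compute server
		resource scratchdisk "/local/scratch" {pools ""}
	}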