APT_CONFIG_FILE

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

rajeev_prabhuat
Participant
Posts: 136
Joined: Wed Sep 29, 2004 5:56 am
Location: Chennai

APT_CONFIG_FILE

Post by rajeev_prabhuat »

Hi,

We are using a 2-CPU HP-UX machine and we have made an 8-node configuration in the default.apt APT_CONFIG_FILE and ran the jobs. We built 2 jobs, one with a Transformer and one without, each of which loads data from a source table to a staging table. The job which has the Transformer gets aborted, reporting the following error eight times with a different PID inside dspipe_init():
BASIC_Transformer_1,2: dspipe_init(21108): open(/tmp/ODH.ORA_TO_ORA.#2.BASIC_Transformer_1.DSLink4-Output) - No such file or directory
But the job without the Transformer completed successfully without any error. So what we did later is reduce the number of nodes (to 2 nodes) in the config file and ran the job again, and then it ran successfully. Can anyone tell me the significance of increasing and decreasing the nodes in the APT_CONFIG_FILE? One thing I could notice is that if the number of nodes is more, the performance is great. So can anyone give a brief overview of the APT_CONFIG_FILE and how it is related to jobs?

Regards,
Rajeev Prabhu
richdhan
Premium Member
Posts: 364
Joined: Thu Feb 12, 2004 12:24 am

Post by richdhan »

Hi Rajeev,

Don't make any changes to the default configuration file; rather, create new configuration files which use 2 nodes, 4 nodes and 8 nodes respectively. Test your job against each of these configuration files and benchmark the performance.

Sometimes increasing the number of nodes might degrade the performance, and I have seen jobs failing with broken pipe errors. As a general rule the number of nodes can be twice the number of CPUs. For a 2-CPU system a 4-node configuration should be ideal.
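For reference, a minimal 4-node configuration file might look something like the sketch below. The fastname and the disk/scratchdisk paths are just placeholders (I'm assuming a single server here), so replace them with whatever is right for your box, and then point APT_CONFIG_FILE at whichever file you want the job to run with:

    {
        node "node1" {
            fastname "your_server_name"
            pools ""
            resource disk "/data/datasets" {pools ""}
            resource scratchdisk "/data/scratch" {pools ""}
        }
        node "node2" {
            fastname "your_server_name"
            pools ""
            resource disk "/data/datasets" {pools ""}
            resource scratchdisk "/data/scratch" {pools ""}
        }
        node "node3" {
            fastname "your_server_name"
            pools ""
            resource disk "/data/datasets" {pools ""}
            resource scratchdisk "/data/scratch" {pools ""}
        }
        node "node4" {
            fastname "your_server_name"
            pools ""
            resource disk "/data/datasets" {pools ""}
            resource scratchdisk "/data/scratch" {pools ""}
        }
    }

If you have more than one filesystem available, giving the nodes different scratchdisk paths helps spread the I/O.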

HTH
--Rich

Pleasure in job brings perfection in work -- Aristotle
T42
Participant
Posts: 499
Joined: Thu Nov 11, 2004 6:45 pm

Post by T42 »

The more you place demands on your computer, the worse it'll perform.

Plus your system may have some internal restriction on the number of processes and on memory usage. The more nodes you use, the more processes you spawn and the more memory you suck up. Check with your SysAdmin.
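For example, something like this gives a quick picture of the limits (I'm assuming HP-UX 11.x here; kmtune is its kernel tuning utility and maxuprc is the per-user process cap):

    ulimit -a              # shell/user resource limits: processes, memory, file size
    kmtune -q maxuprc      # HP-UX kernel parameter: max processes per user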