job running on single node
Moderators: chulett, rschirm, roy
Code:
main_program: This step has 4 datasets:
ds0: {op0[1p] (parallel delete data files in delete D:/test/sas.ds)
->eCollectAny
op1[1p] (sequential delete descriptor file in delete D:/test/sas.ds)}
ds1: {op2[1p] (parallel APT_CombinedOperatorController(1):APT_LUTCreateOp in Lookup_2)
eEntire->eCollectAny
op3[1p] (parallel APT_CombinedOperatorController(0):APT_LUTProcessOp in Lookup_2)}
ds2: {op2[1p] (parallel APT_CombinedOperatorController(1):APT_LUTCreateOp in Lookup_2)
eAny->eCollectAny
op3[1p] (parallel APT_CombinedOperatorController(0):APT_LUTProcessOp in Lookup_2)}
ds3: {op3[1p] (parallel APT_CombinedOperatorController(0):APT_LUTProcessOp in Lookup_2)
[pp] ->
D:/test/sas.ds}
It has 4 operators:
op0[1p] {(parallel delete data files in delete D:/test/sas.ds)
on nodes (
node1[op0,p0]
)}
op1[1p] {(sequential delete descriptor file in delete D:/test/sas.ds)
on nodes (
node1[op1,p0]
)}
op2[1p] {(parallel APT_CombinedOperatorController(1):
(Row_Generator_1)
(APT_LUTCreateOp in Lookup_2)
) on nodes (
node1[op2,p0]
)}
op3[1p] {(parallel APT_CombinedOperatorController(0):
(Row_Generator_0)
(Sort_11)
(APT_LUTProcessOp in Lookup_2)
) on nodes (
node1[op3,p0]
)}
It runs 4 processes on 1 node.
vin...
Code:
main_program: This step has 7 datasets:
ds0: {op0[1p] (parallel Row_Generator_0)
eOther(APT_HashPartitioner { key={ value=empid,
subArgs={ asc }
}
})->eCollectAny
op2[1p] (parallel Sort_11)}
ds1: {op1[1p] (parallel Row_Generator_1)
eEntire->eCollectAny
op3[1p] (parallel APT_LUTCreateOp in Lookup_2)}
ds2: {op2[1p] (parallel Sort_11)
[pp] eSame->eCollectAny
op4[1p] (parallel APT_LUTProcessOp in Lookup_2)}
ds3: {op3[1p] (parallel APT_LUTCreateOp in Lookup_2)
eEntire->eCollectAny
op4[1p] (parallel APT_LUTProcessOp in Lookup_2)}
ds4: {op3[1p] (parallel APT_LUTCreateOp in Lookup_2)
eAny->eCollectAny
op4[1p] (parallel APT_LUTProcessOp in Lookup_2)}
ds5: {op5[1p] (parallel delete data files in delete D:/test/sas.ds)
->eCollectAny
op6[1p] (sequential delete descriptor file in delete D:/test/sas.ds)}
ds6: {op4[1p] (parallel APT_LUTProcessOp in Lookup_2)
[pp] ->
D:/test/sas.ds}
It has 7 operators:
op0[1p] {(parallel Row_Generator_0)
on nodes (
node1[op0,p0]
)}
op1[1p] {(parallel Row_Generator_1)
on nodes (
node1[op1,p0]
)}
op2[1p] {(parallel Sort_11)
on nodes (
node1[op2,p0]
)}
op3[1p] {(parallel APT_LUTCreateOp in Lookup_2)
on nodes (
node1[op3,p0]
)}
op4[1p] {(parallel APT_LUTProcessOp in Lookup_2)
on nodes (
node1[op4,p0]
)}
op5[1p] {(parallel delete data files in delete D:/test/sas.ds)
on nodes (
node1[op5,p0]
)}
op6[1p] {(sequential delete descriptor file in delete D:/test/sas.ds)
on nodes (
node1[op6,p0]
)}
It runs 7 processes on 1 node.
I have changed the execution mode for the two Row Generators, and I didn't get any warnings.
Thanks,
vin...
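For anyone trying to reproduce these dumps: the score shown above is written to the job log when the parallel engine's score-dump environment variable is enabled, either in the Administrator client or as a job parameter:

```
APT_DUMP_SCORE=True
```

With it set, each run logs the datasets, operators, partitioning/collection methods, and the process/node summary exactly as pasted here.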
Hi,
I am getting the same problem for other jobs as well. Please check the score of one of those jobs below.
Thanks,
vin...
Code:
main_program: This step has 10 datasets:
ds0: {op0[1p] (parallel Row_Generator_0)
eAny->eCollectAny
op1[1p] (parallel APT_TransformOperatorImplV0S2_test001_Transformer_2 in Transformer_2)}
ds1: {op1[1p] (parallel APT_TransformOperatorImplV0S2_test001_Transformer_2 in Transformer_2)
eOther(APT_HashPartitioner { key={ value=ename },
key={ value=mid }
})->eCollectAny
op2[1p] (parallel APT_HashedGroup2Operator in Aggregator_6)}
ds2: {op1[1p] (parallel APT_TransformOperatorImplV0S2_test001_Transformer_2 in Transformer_2)
eOther(APT_HashPartitioner { key={ value=ename }
})->eCollectAny
op4[1p] (parallel inserted tsort operator {key={value=ename, subArgs={asc, nulls={value=first}, cs}}}(1) in Join_9)}
ds3: {op2[1p] (parallel APT_HashedGroup2Operator in Aggregator_6)
[pp] eSame->eCollectAny
op3[1p] (parallel inserted tsort operator {key={value=ename, subArgs={asc, cs}}}(0) in Join_9)}
ds4: {op3[1p] (parallel inserted tsort operator {key={value=ename, subArgs={asc, cs}}}(0) in Join_9)
[pp] eSame->eCollectAny
op5[1p] (parallel buffer(0))}
ds5: {op4[1p] (parallel inserted tsort operator {key={value=ename, subArgs={asc, nulls={value=first}, cs}}}(1) in Join_9)
[pp] eSame->eCollectAny
op6[1p] (parallel buffer(1))}
ds6: {op5[1p] (parallel buffer(0))
[pp] eSame->eCollectAny
op7[1p] (parallel APT_JoinSubOperatorNC in Join_9)}
ds7: {op6[1p] (parallel buffer(1))
[pp] eSame->eCollectAny
op7[1p] (parallel APT_JoinSubOperatorNC in Join_9)}
ds8: {op7[1p] (parallel APT_JoinSubOperatorNC in Join_9)
[pp] eSame->eCollectAny
op8[1p] (parallel APT_TransformOperatorImplV0S12_test001_Transformer_12 in Transformer_12)}
ds9: {op8[1p] (parallel APT_TransformOperatorImplV0S12_test001_Transformer_12 in Transformer_12)
[pp] ->eCollectAny
op9[1p] (sequential APT_RealFileExportOperator in Sequential_File_1)}
It has 10 operators:
op0[1p] {(parallel Row_Generator_0)
on nodes (
node1[op0,p0]
)}
op1[1p] {(parallel APT_TransformOperatorImplV0S2_test001_Transformer_2 in Transformer_2)
on nodes (
node1[op1,p0]
)}
op2[1p] {(parallel APT_HashedGroup2Operator in Aggregator_6)
on nodes (
node1[op2,p0]
)}
op3[1p] {(parallel inserted tsort operator {key={value=ename, subArgs={asc, cs}}}(0) in Join_9)
on nodes (
node1[op3,p0]
)}
op4[1p] {(parallel inserted tsort operator {key={value=ename, subArgs={asc, nulls={value=first}, cs}}}(1) in Join_9)
on nodes (
node1[op4,p0]
)}
op5[1p] {(parallel buffer(0))
on nodes (
node1[op5,p0]
)}
op6[1p] {(parallel buffer(1))
on nodes (
node1[op6,p0]
)}
op7[1p] {(parallel APT_JoinSubOperatorNC in Join_9)
on nodes (
node1[op7,p0]
)}
op8[1p] {(parallel APT_TransformOperatorImplV0S12_test001_Transformer_12 in Transformer_12)
on nodes (
node1[op8,p0]
)}
op9[1p] {(sequential APT_RealFileExportOperator in Sequential_File_1)
on nodes (
node1[op9,p0]
)}
It runs 10 processes on 1 node.
vin...
-
- Premium Member
- Posts: 34
- Joined: Fri May 16, 2008 6:24 am
-
- Premium Member
- Posts: 12
- Joined: Wed Jul 29, 2009 2:47 am
- Location: Germany
Please try the config file below. I think you placed a bracket at the wrong location, so the engine is reading only up to the first node and ignoring node 2.
Changed Configuration File
{
node "node1"
{
fastname "TOMEDW"
pools ""
resource disk "C:/IBM/InformationServer/Server/Datasets" {pools ""}
resource scratchdisk "C:/IBM/InformationServer/Server/Scratch" {pools ""}
}
node "node2"
{
fastname "TOMEDW"
pools ""
resource disk "E:/datastage/Datasets" {pools ""}
resource scratchdisk "C:/IBM/InformationServer/Server/Scratch" {pools ""}
}
}
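For comparison, the failure mode described above would look something like this (a hypothetical sketch, not the poster's actual file): if the outer closing brace is placed directly after the node1 block, the parser treats the file as complete at that point and node2 is never read.

```
{
  node "node1"
  {
    fastname "TOMEDW"
    pools ""
    resource disk "C:/IBM/InformationServer/Server/Datasets" {pools ""}
    resource scratchdisk "C:/IBM/InformationServer/Server/Scratch" {pools ""}
  }
}
  node "node2"     <-- outside the outer braces, so it is ignored
  ...
```

In the corrected file above, the outer brace closes only after the last node block, so both nodes are parsed.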