Hi,
I am trying to load a Teradata table from a DataStage PX job (Unix platform).
The job reads data from a Data Set (I am able to view data in the Data Set).
A Copy stage drops some columns, renames others, and sends the records to a Teradata Enterprise stage.
The Teradata Enterprise stage loads the Teradata table in 'Append' mode.
I am using the default configuration file with 2 nodes (no nodes defined for the database).
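For reference, my configuration file looks roughly like this (the fastname and resource paths below are placeholders, not my actual values):

```
{
	node "node1"
	{
		fastname "etlserver"
		pools ""
		resource disk "/opt/ds/Datasets" {pools ""}
		resource scratchdisk "/opt/ds/Scratch" {pools ""}
	}
	node "node2"
	{
		fastname "etlserver"
		pools ""
		resource disk "/opt/ds/Datasets" {pools ""}
		resource scratchdisk "/opt/ds/Scratch" {pools ""}
	}
}
```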
The Teradata table is not a partitioned table.
When I execute the job with the above configuration, I get the following error:
main_program: Fatal Error: There are irreconcilable constraints on the number of
partitions of an operator: parallel Teradata_StrTransfer.
The number of partitions is already constrained to 10,
but an eSame partitioned input virtual dataset produced by
parallel Copy_45 has 2.
This step has 3 datasets:
ds0: {/home/vvxb1/Transfer_Details.ds
[pp] eSame=>eCollectAny
op0[2p] (parallel Data_Set_44)}
ds1: {op0[2p] (parallel Data_Set_44)
[pp] eSame#>eCollectAny
op1[2p] (parallel Copy_45)}
ds2: {op1[2p] (parallel Copy_45)
[pp] eSame#>eCollectAny
op2[10p] (parallel Teradata_StrTransfer)}
It has 3 operators:
op0[2p] {(parallel Data_Set_44)
}
op1[2p] {(parallel Copy_45)
}
op2[10p] {(parallel Teradata_StrTransfer)
}
Kindly let me know if you guys have any solution for this problem.
Thanks.
Error when loading Teradata Table
Moderators: chulett, rschirm, roy
-
- Charter Member
- Posts: 88
- Joined: Tue Jan 13, 2004 3:07 pm
Hi,
What partitioning type are you using on your Data Set, Copy, and Teradata Enterprise stages? The problem seems to be somewhere here:
main_program: Fatal Error: There are irreconcilable constraints on the number of partitions of an operator: parallel Teradata_StrTransfer.
The number of partitions is already constrained to 10,
but an eSame partitioned input virtual dataset produced by
parallel Copy_45 has 2.
It has 3 operators:
op0[2p] {(parallel Data_Set_44)
}
op1[2p] {(parallel Copy_45)
}
op2[10p] {(parallel Teradata_StrTransfer)
}
Please also test with the Teradata API stage and let us know what happens.
HTH
--Rich
Well, the dataset was already partitioned in a previous job. I am keeping 'Same' partitioning on the Data Set and Copy stages. The Teradata Enterprise stage uses Auto partitioning.
My configuration file has 2 nodes, but error below says
It has 3 operators:
op0[2p] {(parallel Data_Set_44)
}
op1[2p] {(parallel Copy_45)
}
op2[10p] {(parallel Teradata_StrTransfer)
}
Why is it [10p] for the Teradata stage?
main_program: Fatal Error: There are irreconcilable constraints on the number of
partitions of an operator: parallel Teradata_StrTransfer.
The number of partitions is already constrained to 10,
-Kumar
Inquisitive wrote: On the Teradata Enterprise stage I have Auto partitioning.
The configuration file does not have any node defined for the database; in this scenario, how can the Teradata load happen on multiple nodes? How is the data partitioned (if on different nodes) when DataStage loads it to Teradata?
In Teradata, data is partitioned by AMP, and each node is configured with multiple AMPs. Rows are distributed to the various AMPs by applying hash logic to the Primary Index value.
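As a rough illustration of the idea (Teradata's actual row-hash algorithm is proprietary; a generic hash and the AMP count below are stand-ins):

```python
# Illustrative sketch only: map rows to AMPs by hashing the
# primary-index value. Teradata's real row-hash algorithm differs.
import hashlib

NUM_AMPS = 10  # hypothetical system with 10 AMPs spread over the nodes

def amp_for_row(primary_index_value: str) -> int:
    """Map a primary-index value to an AMP number via a hash."""
    row_hash = int.from_bytes(
        hashlib.md5(primary_index_value.encode()).digest()[:4], "big")
    return row_hash % NUM_AMPS

rows = ["CUST1001", "CUST1002", "CUST1003", "CUST1004"]
placement = {r: amp_for_row(r) for r in rows}
print(placement)  # each row lands deterministically on one AMP
```

The same primary-index value always hashes to the same AMP, which is what makes lookups by Primary Index a single-AMP operation.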
Problem solved
Hi All,
We were able to solve this problem, and here is the solution:
When I manually set the node map constraints in the Teradata Enterprise stage (Stage -> Advanced -> Node map constraints) to the available nodes (i.e. node1,node2), the job ran fine and inserted records into the table.
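For reference, a similar constraint can also be expressed as a node pool in the configuration file, which the stage can then be constrained to from the same Advanced tab (the pool name "tdpool", fastname, and paths below are just examples):

```
node "node1"
{
	fastname "etlserver"
	pools "" "tdpool"
	resource disk "/opt/ds/Datasets" {pools ""}
	resource scratchdisk "/opt/ds/Scratch" {pools ""}
}
```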
Does anybody know exactly how these node map constraints play a role in the communication between DataStage (Orchestrate) and Teradata?
From the error log, it looked like when I executed the job with the default node constraints (leaving them blank), there were 10 players [10p] for the Teradata stage but only 2 players [2p] for the Copy and Data Set stages.
Can anybody explain the background process?
Thanks all for your inputs.