I am getting the following warning from one of my jobs:
<Stage name>,1: Insufficient blocks for partition 1. Reading suppressed.
The stage is a Teradata Enterprise stage reading only one row (max aggregate) from a table. The data (one row) is used in a lookup stage.
The data (one row) seems to be read successfully, and the job result is as expected. I just need to get rid of the warning.
Any ideas on what is causing this warning?
How many nodes does the job use? Assuming the 'partition 1' problem is a DataStage issue rather than a Teradata issue, perhaps running on a single node would solve it? Or running the stage in sequential mode, if applicable? Or using 'Entire' as the lookup partitioning method?
Guessing here.
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers
The (single) row only has two columns, both of which are Integer of length 10, which should result in only a few bytes.
As for the number of nodes, I am not sure (I am new to parallel jobs and, at the same time, new at this site). The APT config file in use specifies two nodes, so I guess two is the answer. I believe these settings are set globally for the project and are not meant to be adjusted for individual jobs.
Running the stage in sequential mode does not fix the problem (and causes other warnings, stating that sequential mode is not recommended for production use).
Well, there is a "default" configuration, but it is very much able (and meant) to be overridden for any given job. It's as simple as defining different configurations for different numbers / flavors of nodes and then adding the $APT_CONFIG_FILE environment variable to a job to allow it to use something other than the default.
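For reference, a single-node configuration file might look something like the sketch below (the fastname and resource paths are placeholders here — substitute your server's hostname and whatever dataset and scratch directories already exist at your site):

```
{
  node "node1"
  {
    fastname "your_server_hostname"
    pools ""
    resource disk "/opt/IBM/InformationServer/Server/Datasets" {pools ""}
    resource scratchdisk "/opt/IBM/InformationServer/Server/Scratch" {pools ""}
  }
}
```

Save it as, say, oneNode.apt alongside the default file, then set $APT_CONFIG_FILE to that path as a job parameter so only the jobs that need it pick it up.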
More of an FYI than anything.
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers