Insufficient blocks for partition. Reading suppressed.

Post questions here related to DataStage Enterprise/PX Edition topics such as parallel job design, parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

vpauls
Premium Member
Posts: 37
Joined: Mon May 09, 2005 2:26 am
Location: Oslo

Insufficient blocks for partition. Reading suppressed.

Post by vpauls »

I am getting the following warning from one of my jobs:

<Stage name>,1: Insufficient blocks for partition 1. Reading suppressed.

The stage is a Teradata Enterprise stage reading only one row (max aggregate) from a table. The data (one row) is used in a lookup stage.

The data (one row) seems to be read successfully, and the job result is as expected. I just need to get rid of the warning.

Any ideas on what is causing this warning?
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

How many nodes does the job use? Assuming the 'partition 1' problem is a DataStage issue rather than a Teradata issue, perhaps running on a single node would solve this? Or running the stage in sequential mode, if applicable? Or using 'Entire' for the lookup partitioning methodology? :?

Guessing here.
-craig

"You can never have too many knives" -- Logan Nine Fingers
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

How many bytes are expected per row? I can only imagine this type of error if you exceed the APT config setting $APT_MAX_TRANSPORT_BLOCK_SIZE, which defaults to 1048576. Or perhaps your job's ulimit settings are particularly restrictive.
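As an illustration only (the value 2097152 is an arbitrary example, not a recommendation), both of these suspects can be checked from the shell before the job runs:

```shell
# Raise the parallel-engine transport block size for this session.
# 1048576 bytes is the default; 2 MB here is just an example value.
export APT_MAX_TRANSPORT_BLOCK_SIZE=2097152

# Inspect the process resource limits mentioned above --
# a restrictive data- or file-size ulimit can also starve buffering.
ulimit -a
```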
vpauls
Premium Member
Posts: 37
Joined: Mon May 09, 2005 2:26 am
Location: Oslo

Post by vpauls »

The (single) row only has two columns, both of which are Integer of length 10, which should result in only a few bytes.

As for the number of nodes, I am not sure (I am new to parallel jobs and also new at this site). The APT config file in use specifies two nodes, so I guess two is the answer. I believe these settings are set globally for the project and are not meant to be adjusted for individual jobs.

Running the stage in sequential mode does not fix the problem (and causes other warnings, stating that sequential mode is not recommended for production use).
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Well, there is a "default" configuration but it is very much able (and meant) to be overridden for any given job. It's as simple as defining different configurations for different numbers / flavors of nodes and then adding the $APT_CONFIG_FILE environment variable to a job to allow it to take something other than the default.
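For reference, a minimal single-node configuration file looks something like the sketch below (the fastname and resource paths are placeholders you would replace with your server's values):

```
{
  node "node1"
  {
    fastname "your_server_name"
    pools ""
    resource disk "/path/to/datasets" {pools ""}
    resource scratchdisk "/path/to/scratch" {pools ""}
  }
}
```

Save it alongside the default (e.g. in the same Configurations directory), then point $APT_CONFIG_FILE at it in the job's parameters to test whether the warning is partition-count related.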

More of an FYI than anything.
-craig

"You can never have too many knives" -- Logan Nine Fingers
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

I just re-read the initial post and saw "max aggregate" and now I wonder if the SQL in the query might be causing problems. Can you post the SQL or write a test job to change it around to see if that changes your error or makes it go away?
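For example (table and column names here are purely hypothetical, since the actual SQL hasn't been posted), a test along these lines would separate the two possibilities:

```sql
-- Original form: the aggregate is computed in the source SQL.
SELECT MAX(load_seq) AS max_seq
FROM   staging_table;

-- Test variant: read the raw rows and let a downstream Aggregator
-- stage compute the maximum instead, to see whether the warning
-- follows the aggregate SQL or the Teradata Enterprise read itself.
SELECT load_seq
FROM   staging_table;
```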