Loading a Dataset (Append Mode) and Partitioning

Post questions here related to DataStage Enterprise/PX Edition, covering areas such as parallel job design, parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

ganive
Participant
Posts: 18
Joined: Wed Sep 28, 2005 7:06 am

Loading a Dataset (Append Mode) and Partitioning

Post by ganive »

Hi All,

I just ran into a few errors with a corrupt dataset, and it seems to have been caused by a partitioning "error" while loading the dataset.

Here is the deal:
I have a dataset that is loaded by several different jobs before being used to create a log file and then purged.
The job where I create the flat file had several problems, such as a "corrupted dataset" error (sorry, I don't remember the exact message).
The dataset was there, and it appeared in the Dataset Management tool, but the data wasn't accessible.

It turned out that the problem was due to an "Auto" partition left in place ("forgotten") on one link, whereas we use Round Robin every other time we append to the dataset.

Can someone explain this phenomenon to me (it seems that Auto uses only one node, or something like that)? :?:
Moreover, do we really have to use the same partitioning method when appending to an already partitioned dataset? Or can you append in Round Robin, then Hash, etc., knowing that the last partitioning method used is the effective one for your data? :?:

At the moment, I think it is just a problem with Auto using only one node and discarding the other partitioning choices (which use multiple nodes)...
But I'd like to be sure!! :?
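To illustrate what I mean about the mismatch, here is a rough conceptual sketch (plain Python, not DataStage code, and purely my assumption about the mechanism): the dataset is modelled as one partition per node, and an append that degenerates to a single node leaves the partitions unbalanced compared to the Round Robin appends.

```python
# Conceptual sketch only: a parallel "dataset" modelled as one list per node.
# This is NOT how DataStage stores datasets internally; it just shows how
# mixing partitioning schemes on append skews the per-node contents.
NODES = 4

def round_robin(rows, nodes=NODES):
    """Spread rows evenly across nodes, like a Round Robin partitioner."""
    parts = [[] for _ in range(nodes)]
    for i, row in enumerate(rows):
        parts[i % nodes].append(row)
    return parts

def single_node(rows, nodes=NODES):
    """What a degenerate (e.g. sequential) insert could do: everything on node 0."""
    parts = [[] for _ in range(nodes)]
    parts[0].extend(rows)
    return parts

def append(dataset, new_parts):
    """Append new partitioned rows onto the existing per-node partitions."""
    for ds_part, new_part in zip(dataset, new_parts):
        ds_part.extend(new_part)

dataset = round_robin(range(8))              # initial load: 2 rows per node
append(dataset, round_robin(range(8, 16)))   # same scheme: still balanced
append(dataset, single_node(range(16, 24)))  # mismatched scheme: node 0 bloats
print([len(p) for p in dataset])             # prints [12, 4, 4, 4]
```

Of course this only shows data skew, not actual corruption, so I may be wrong about what Auto does under the hood, hence my question.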

ThanX a Lot !!
--------
GaNoU
--------