CFF

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

Post Reply
srekant
Premium Member
Posts: 85
Joined: Wed Jan 19, 2005 6:52 am
Location: Detroit

CFF

Post by srekant »

Hi,
I am getting the following error:

"sample_CFF,1: Cannot use multinode: The file length (1146600) is not a multiple of the record length (716)"

When do we get this error? I am using the Complex Flat File stage in DataStage EE 7.5.
Sree
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

You get this error because DataStage attempts to partition fixed-width files by dividing the size of the file by the number of partitions (N) to get the subset of rows each partition processes. As a sanity check, DataStage verifies that 1/N of the file represents a whole number of rows; that check is the source of this error.

Your metadata specifies a record length of 716 bytes. This, together with whatever record delimiter you've specified, does not divide the physical file size (1146600 bytes) evenly.

Therefore, the file must be read in sequential mode.
Perhaps you've specified a record delimiter when there should be none? Or perhaps your metadata (column widths) is not correct.
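The divisibility test described above can be sketched in a few lines (a hypothetical helper for illustration, not actual DataStage code):

```python
def can_multinode(file_size, record_len):
    # DataStage's precondition for multinode reads of a fixed-width file:
    # splitting the file into N chunks only yields whole rows in every
    # chunk if the file size is an exact multiple of the record length.
    return file_size % record_len == 0

# The case from the error message: 1146600 = 1601 * 716 + 284
can_multinode(1146600, 716)     # False -> "Cannot use multinode"
can_multinode(1601 * 716, 716)  # True  -> multinode read is possible
```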
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
srekant
Premium Member
Posts: 85
Joined: Wed Jan 19, 2005 6:52 am
Location: Detroit

CFF

Post by srekant »

Hi Ray,
Thanks for your prompt response. How can I find the number of partitions in my job, since I am using Auto partitioning? Also, since it is a fixed-width CFF, I didn't use any delimiter, but I did use the "read from multiple nodes" option.
I used the same file at a different size and it works fine.
Sree
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

Srekant,

Think about what Ray and the DS job are telling you:

The file has 1146600 bytes and a fixed record width of 716 bytes. Something is not correct, since the file size is not a multiple of 716. Because of this, DS cannot partition the reading of this file across processes.

Look at the same file format which works. Is the file size a multiple of 716?
srekant
Premium Member
Posts: 85
Joined: Wed Jan 19, 2005 6:52 am
Location: Detroit

Post by srekant »

ArndW wrote: Look at the same file format which works. Is the file size a multiple of 716?
Yeah, it is a multiple of 716.
Sree
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

The number of partitions is exactly the number of processing nodes specified in your configuration file (the file specified by the APT_CONFIG_FILE environment variable), unless this is overridden on the Advanced properties to limit execution to a node pool that is a subset of that group of nodes.
Last edited by ray.wurlod on Wed Aug 24, 2005 1:00 am, edited 1 time in total.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

1146600 / 716 ≈ 1601.4 (1601 whole records plus 284 leftover bytes), so 1146600 is NOT a whole multiple of 716.

Not only can DataStage not partition this file, it cannot read it without encountering at least one incomplete row.

You must get your metadata right, or include handling of incomplete rows, or check that you've been provided with a complete, undamaged source file. If you don't have truly fixed width data, you must specify sequential mode execution.
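One quick way to narrow down such a mismatch is to try the record length with and without common delimiter widths (a hypothetical diagnostic sketch, not part of DataStage):

```python
def diagnose(file_size, record_len):
    # Try the bare record length first, then allow for a 1-byte (LF)
    # or 2-byte (CRLF) record delimiter the metadata may have omitted.
    for extra, label in [(0, "no delimiter"), (1, "LF"), (2, "CRLF")]:
        width = record_len + extra
        if file_size % width == 0:
            return f"{file_size // width} whole rows assuming {label}"
    return f"no exact fit; {file_size % record_len} bytes left over at width {record_len}"

diagnose(1146600, 716)  # "no exact fit; 284 bytes left over at width 716"
```

If none of the candidate widths fit, the likely culprits are incorrect column widths in the metadata or a truncated/damaged source file, as noted above.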
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Post Reply