Dataset Data not written in all the nodes?

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

SURA
Premium Member
Posts: 1229
Joined: Sat Jul 14, 2007 5:16 am
Location: Sydney

Dataset Data not written in all the nodes?

Post by SURA »

Hello All

Code: Select all

                           SQLSrvr1                 SQLSrvr2        
                              |                             |
                              |                             |   
SRC_SQLSrvr-->TFM-->Join1------------------->Join2--------------->Dataset

SRC_SQLSrvr -- Auto Partition
TFM -- Auto
Join1 & SQLSrvr1 -- Hash , BK sort
Join2 & SQLSrvr2 -- SAME , BK sort
Dataset  -- SAME, BK Sort (Which is going to be the INPUT KEY for next job)
I used 4 node Dataset file.

When I ran this job, it completed smoothly, but when I checked the Dataset in Dataset Management I could see data in only one node; the rest of the nodes did not have any data.

Does anyone have an idea what could cause this?
Thanks
Ram
----------------------------------
Revealing your ignorance is fine, because you get a chance to learn.
SURA
Premium Member
Posts: 1229
Joined: Sat Jul 14, 2007 5:16 am
Location: Sydney

Re: Dataset Data not written in all the nodes?

Post by SURA »

Code: Select all

##I IIS-DSEE-TFCN-00001 13:05:41(000) <main_program> 
IBM WebSphere DataStage Enterprise Edition 8.5.0.6152 
Copyright (c) 2001, 2005-2008 IBM Corporation. All rights reserved
 


##I IIS-DSEE-TFCN-00006 13:05:41(001) <main_program> conductor uname: -s=Windows_NT; -r=1; -v=6; -n=SERVER; -m=Pentium
##I IIS-DSEE-TFSC-00001 13:05:41(002) <main_program> APT configuration file: /tmp/aptoa79446c1ded2e
##I IIS-DSEE-TOIX-00059 13:05:43(000) <APT_RealFileExportOperator in APT_FileExportOperator,0> Export complete; 58 records exported successfully, 0 rejected.
                 Name:  F:/ETL/Dataset/FILE_tst.ds
              Version:  ORCHESTRATE V8.5.0 DM Block Format 6.
     Time of Creation:  08/30/2012 12:55:19
 Number of Partitions:  4
   Number of Segments:  1
       Valid Segments:  1
Preserve Partitioning:  true
  Partitioning Method:  APT_HashPartitioner
Segment Creation Time:
            0:  08/30/2012 12:55:19

Partition 0
  node   : node1
  records: 9716
  blocks : 37
  bytes  : 4741408
  files  :
    Segment 0 :
           /E=/IBM/Datasets/FILE_tst.ds.SERVER.0000.0000.0000.1ef8.d2081307.0000.d34caebf  4849664 bytes
  total   : 4849664 bytes
Partition 1
  node   : node2
  records: 412646
  blocks : 1540
  bytes  : 201371248
  files  :
    Segment 0 :
           /E=/IBM/Datasets/FILE_tst.ds.SERVER.0000.0001.0000.1ef8.d2081307.0001.9439a8ec  201850880 bytes
  total   : 201850880 bytes
Partition 2
  node   : node3
  records: 0
  blocks : 0
  bytes  : 0
  files  :
    Segment 0 :
           /E=/IBM/Datasets/FILE_tst.ds.SERVER.0000.0002.0000.1ef8.d2081307.0002.02fd46e8  0 bytes
  total   : 0 bytes
Partition 3
  node   : node4
  records: 2481638
  blocks : 9260
  bytes  : 1211039344
  files  :
    Segment 0 :
           /E=/IBM/Datasets/FILE_tst.ds.SERVER.0000.0003.0000.1ef8.d2081307.0003.9c5f3a3c  1213726720 bytes
  total   : 1213726720 bytes

Totals:
  records : 2904000
  blocks  : 10837
  bytes   : 1417152000
  filesize: 1420427264
  min part: 0
  max part: 1213726720

Schema:
record
( 
xxxxxxx
)
##I IIS-DSEE-TFSC-00010 13:05:44(000) <main_program> Step execution finished with status = OK.
When I executed this job again, it loaded the data as shown above.
Thanks
Ram
----------------------------------
Revealing your ignorance is fine, because you get a chance to learn.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Tell us about the partitioning algorithms you've been using.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
SURA
Premium Member
Posts: 1229
Joined: Sat Jul 14, 2007 5:16 am
Location: Sydney

Post by SURA »

Hi Ray

Code: Select all

                           SQLSrvr1                 SQLSrvr2        
                              |                             | 
                              |                             |    
SRC_SQLSrvr-->TFM-->Join1------------------->Join2--------------->Dataset 

SRC_SQLSrvr -- Auto Partition 
TFM -- Auto 
Join1 & SQLSrvr1 -- Hash , BK sort 
Join2 & SQLSrvr2 -- SAME , BK sort 
Dataset  -- SAME, BK Sort (Which is going to be the INPUT KEY for next job) 

Code: Select all

Source is SQL Server  - AUTO
TFM                   - AUTO
First Join            - HASH, SORT on BK for that JOIN
First REF SQL SRVR    - HASH, SORT on BK for that JOIN
2nd Join              - SAME, SORT on BK for that JOIN
2nd SQL Srvr          - HASH, SORT on BK for that JOIN
TARGET DATASET        - SAME, SORT on BK for NEXT JOB
Here, "BK for that JOIN" means each Join stage has different BKs.
Thanks
Ram
----------------------------------
Revealing your ignorance is fine, because you get a chance to learn.
ShaneMuir
Premium Member
Posts: 508
Joined: Tue Jun 15, 2004 5:00 am
Location: London

Post by ShaneMuir »

Possibly a silly question but is the value that you are partitioning on the same for every record at the first join?
SURA
Premium Member
Posts: 1229
Joined: Sat Jul 14, 2007 5:16 am
Location: Sydney

Post by SURA »

The KEY columns differ for each Join. I am not re-partitioning; instead, in the SORT option I am using different columns.

Say for example, in JOIN1 my KEY column is A, whereas in JOIN2 my key column will be B.

For JOIN1, I am using HASH Partition, SORT by column A and in JOIN2 I am using the SAME Partition and SORT by COLUMN B.
Thanks
Ram
----------------------------------
Revealing your ignorance is fine, because you get a chance to learn.
jwiles
Premium Member
Posts: 1274
Joined: Sun Nov 14, 2004 8:50 pm
Contact:

Post by jwiles »

A key-based partitioning algorithm doesn't guarantee that data will be evenly distributed among the partitions or that data will be sent to all partitions. It merely guarantees that all records containing the same (identical) value for a key will be sent to the same partition.
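To illustrate the point with a toy Python sketch (not DataStage code; the key values and hash function are invented for illustration, while the record counts are borrowed from the dataset dump above):

```python
from collections import Counter

NPART = 4

def partition(key, npart=NPART):
    # Toy deterministic hash: sum of the key's bytes modulo the
    # partition count (a stand-in for a real hash partitioner).
    return sum(key.encode()) % npart

# Only a few distinct key values, heavily skewed. A key-based
# partitioner must keep each value in a single partition, so with
# 3 distinct keys at least one of the 4 partitions stays empty.
key_counts = {"AU": 9716, "NZ": 412646, "US": 2481638}

counts = Counter()
for key, n in key_counts.items():
    counts[partition(key)] += n

for p in range(NPART):
    print(f"partition {p}: {counts.get(p, 0):>9} records")
```

With these invented keys, two values even happen to hash to the same partition, so two of the four partitions receive zero rows while still holding every record correctly co-located by key.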

Do you have large numbers of records which have the same value for the key?

Is there a reason you are NOT partitioning by Column B for Join 2? It is your primary key for that join, after all :|

Regards,
- james wiles


All generalizations are false, including this one - Mark Twain.
ulab
Participant
Posts: 56
Joined: Mon Mar 16, 2009 4:58 am
Location: bangalore
Contact:

Post by ulab »

Right Said James,

One more option could be round-robin partitioning. Of course, it degrades performance in this scenario, but it makes sure that all the rows are distributed evenly across all nodes. Try this once.
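For comparison, round-robin ignores key values entirely and just deals rows out to partitions in turn (a Python sketch; the total row count is taken from the dataset dump above):

```python
from itertools import cycle

NPART = 4
TOTAL = 2904000  # total records reported in the dataset dump above

# Deal each row to the next partition in rotation, regardless of key.
targets = cycle(range(NPART))
counts = [0] * NPART
for _ in range(TOTAL):
    counts[next(targets)] += 1

print(counts)  # every partition receives TOTAL / NPART rows
```

This guarantees an even spread, but at the cost of losing key co-location, so it is only suitable where downstream stages do not depend on key-based partitioning.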

IMP NOTE: if the issue is resolved, please mark it as resolved; this helps others and saves time.
Ulab----------------------------------------------------
help, it helps you today or Tomorrow
jwiles
Premium Member
Posts: 1274
Joined: Sun Nov 14, 2004 8:50 pm
Contact:

Post by jwiles »

If a downstream job needs the data to be key-partitioned, then having 0 rows in one or more partitions of a dataset is not necessarily a bad situation. It's sometimes the price to pay at a point in the process to meet the business rule/logic requirements. If a downstream job doesn't require a particular partitioning scheme, then you can look at round-robin or random partitioning prior to writing out the dataset. Don't round-robin for the joins (I hope that wasn't your suggestion)--that would go against the requirements for the join stage.

Still, I question the use of SAME partitioning prior to the second join...WHY?? The framework could still be inserting a hash partition on column B at runtime to meet the requirement of the join key, when the correct partitioning should really be reflected in the job design instead.
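A toy sketch of why SAME alone wouldn't be enough here (plain Python, not DataStage; columns A and B, the two-partition layout, and the values are all invented):

```python
NPART = 2

# Rows as (A, B) pairs, partitioned by A as the first join would
# leave them (here simply A modulo the partition count).
rows = [(1, 10), (2, 10), (3, 20), (4, 20)]
parts = {p: [] for p in range(NPART)}
for a, b in rows:
    parts[a % NPART].append((a, b))

# SAME partitioning hands these partitions to the second join
# unchanged. Records sharing B=10 now sit in different partitions,
# so a join keyed on B would miss matches unless the framework
# re-hashes the data on B behind the scenes.
parts_with_b10 = {p for p, recs in parts.items() for a, b in recs if b == 10}
print(parts_with_b10)  # B=10 spans more than one partition
```

This is exactly the situation where runtime partition insertion quietly rescues the job, which is why making the hash on column B explicit in the design is clearer.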

Regards,
- james wiles


All generalizations are false, including this one - Mark Twain.
SURA
Premium Member
Posts: 1229
Joined: Sat Jul 14, 2007 5:16 am
Location: Sydney

Post by SURA »

jwiles wrote:Still, I question the use of SAME partitioning prior to the second join...WHY?? The framework could still be inserting a hash partition on column B at runtime to meet the requirement of the join key, when the correct partitioning should really be reflected in the job design instead.
Regards,
Thanks for the reply.

Do you mean I need to use HASH partitioning explicitly rather than SAME in JOIN2?

When I check the DUMP_SCORE, it shows that a HASH partitioner is being inserted internally.

What is the difference between using SAME vs. explicitly choosing the partitioning?
Thanks
Ram
----------------------------------
Revealing your ignorance is fine, because you get a chance to learn.
jwiles
Premium Member
Posts: 1274
Joined: Sun Nov 14, 2004 8:50 pm
Contact:

Post by jwiles »

Read this documentation about Same partitioning. You should then understand what Same partitioning does and how it differs from other partitioning.

Had partition insertion been disabled in your job or project, your second join would not have produced the results you required because your data would not have been partitioned correctly for that join operation.

Regards,
- james wiles


All generalizations are false, including this one - Mark Twain.
jwiles
Premium Member
Posts: 1274
Joined: Sun Nov 14, 2004 8:50 pm
Contact:

Post by jwiles »

You CAN use same partitioning, but you don't have to. You could leave it as Auto instead and DataStage would more than likely not repartition the data because it already meets the needs of the downstream stage.

Regards,
- james wiles


All generalizations are false, including this one - Mark Twain.
SURA
Premium Member
Posts: 1229
Joined: Sat Jul 14, 2007 5:16 am
Location: Sydney

Post by SURA »

Thanks James. Now I understand.

But the thread was initiated to find out why data was not written to all the nodes.
Hash partitioning does not necessarily result in an even distribution of data between partitions.
Does it mean it may not write even a single row to one or more nodes?
Thanks
Ram
----------------------------------
Revealing your ignorance is fine, because you get a chance to learn.
jwiles
Premium Member
Posts: 1274
Joined: Sun Nov 14, 2004 8:50 pm
Contact:

Post by jwiles »

This question was already answered earlier in the thread:
Posted: Thu Aug 30, 2012 7:03 am
A key-based partitioning algorithm doesn't guarantee that data will be evenly distributed among the partitions or that data will be sent to all partitions. It merely guarantees that all records containing the same (identical) value for a key will be sent to the same partition.
Regards,
- james wiles


All generalizations are false, including this one - Mark Twain.
SURA
Premium Member
Posts: 1229
Joined: Sat Jul 14, 2007 5:16 am
Location: Sydney

Post by SURA »

Thanks James
Thanks
Ram
----------------------------------
Revealing your ignorance is fine, because you get a chance to learn.