About Auto partition

jack_dcy
Participant
Posts: 18
Joined: Wed Jun 29, 2005 9:53 pm


Post by jack_dcy »

Hi all,
By default, the partition type on the Input page of each stage in a job is 'Auto'. Can we tell which partition type the OSH script actually uses when the job is running?

Thanks.
elavenil
Premium Member
Posts: 467
Joined: Thu Jan 31, 2002 10:20 pm
Location: Singapore

Post by elavenil »

When 'Auto' is used as the partition type, the PX engine chooses an appropriate partitioning method based on the input and the operator defined in the job. You can look at the generated OSH to find out which partitioning is used in the job.
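
For comparison, when a method is requested explicitly, the partitioner appears as its own operator in the osh flow. A minimal hand-written sketch (not taken from a generated job; the schema and file name are made up for illustration):

import
-schema record
{record_delim='\n', delim=none}
(
C_FUNDCODE:string[6] {width=6};
)
-file '/tmp/fund.txt'
| hash -key 'C_FUNDCODE'
| tsort -key 'C_FUNDCODE' -asc
| peek

If the generated OSH for your job contains no such partitioner operator, the engine is choosing the method at run time rather than writing it into the script.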

HTWH.

Regards
Saravanan
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

If you haven't already, you will need to enable viewing of the generated OSH in the Administrator client (a check box on the Parallel tab of the project properties). You can then view the generated OSH on the Generated OSH tab of the job properties, once the job has been compiled.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
jack_dcy
Participant
Posts: 18
Joined: Wed Jun 29, 2005 9:53 pm

Post by jack_dcy »

Hi ray and elavenil,

Thanks for your help. We have already set that up for the project and can see the Generated OSH in the job properties, but with every partition type set to 'Auto' we cannot find the partition type anywhere in the OSH script.

The generated OSH looks like this, and there is no information about the partition type in it.

#################################################################
#### STAGE: S1_tfundday_aaass_in
## Operator
import
## Operator options
-schema record
{record_delim='\n', delim=none}
(
MK_DATE:string[8] {width=8};
C_FUNDCODE:string[6] {width=6};
F_NETVALUE:nullable decimal[9,4] {width=11, null_field=' '};
F_SUBSCRIBERATIO:nullable decimal[9,8] {width=11, null_field=' '};
C_STATUS:string[1] {width=1};
)
-file '[&S1_tfundday_aaass_in]'
-rejects continue
-reportProgress yes

## General options
[ident('S1_tfundday_aaass_in'); jobmon_ident('S1_tfundday_aaass_in')]
## Outputs
0> [] 'S1_tfundday_aaass_in:Join_11_left.v'
;

#################################################################
#### STAGE: T2_aaasecuprice_ocrm_in
## Operator
export
## Operator options
-schema record
{final_delim=end, delim=',', quote=double}
(
MK_DATE:string[8] {width=8};
C_FUNDCODE:string[6] {width=6};
F_NETVALUE:nullable decimal[9,4] {width=11};
F_LASTASSET:nullable decimal[16,2] {width=18};
F_LASTSHARES:nullable decimal[16,2] {width=18};
C_TACODE:string[3] {width=3};
)
-file '/tmp/zhy1'
-overwrite
-rejects continue

## General options
[ident('T2_aaasecuprice_ocrm_in'); jobmon_ident('T2_aaasecuprice_ocrm_in')]
## Inputs
0< 'Join_61:DSLink63.v'
;

#################################################################
#### STAGE: T1_tfundinfo_in
## Operator
import
## Operator options
-schema record
{record_delim='\n', record_length=fixed, delim=none}
(
C_FUNDCODE:string[6] {width=6};
C_FUNDNAME:string[10] {width=10};
C_MONEYTYPE:string[3] {width=3};
C_FORCEREDEEM:string[1] {width=1};
C_INTERESTDEALTYPE:string[1] {width=1};
)
-file '[&T1_tfundinfo_aaass_in]'
-rejects continue
-reportProgress yes

## General options
[ident('T1_tfundinfo_in'); jobmon_ident('T1_tfundinfo_in')]
## Outputs
0> [] 'T1_tfundinfo_in:DSLink62.v'
;

#################################################################
#### STAGE: Join_61
## Operator
innerjoin
## Operator options
-key 'C_FUNDCODE'

## General options
[ident('Join_61'); jobmon_ident('Join_61')]
## Inputs
0< 'Remove_Duplicates_69:DSLink33.v'
1< 'T1_tfundinfo_in:DSLink62.v'
## Outputs
0> [modify (
keep
MK_DATE,C_FUNDCODE,F_NETVALUE,F_LASTASSET,
F_LASTSHARES,C_TACODE;
)] 'Join_61:DSLink63.v'
;

#################################################################
#### STAGE: Sort_68
## Operator
tsort
## Operator options
-stable
-key 'C_FUNDCODE'
-asc
-key 'MK_DATE'
-asc

## General options
[ident('Sort_68'); jobmon_ident('Sort_68')]
## Inputs
0< 'S1_tfundday_aaass_in:Join_11_left.v'
## Outputs
0> [modify (
keep
MK_DATE,C_FUNDCODE,F_NETVALUE,F_LASTSHARES,
F_LASTASSET;
)] 'Sort_68:DSLink59.v'
;

#################################################################
#### STAGE: Remove_Duplicates_69
## Operator
remdup
## Operator options
-keep first
-key 'C_FUNDCODE'

## General options
[ident('Remove_Duplicates_69'); jobmon_ident('Remove_Duplicates_69')]
## Inputs
0< 'Sort_68:DSLink59.v'
## Outputs
0> [modify (
keep
MK_DATE,C_FUNDCODE,F_NETVALUE,F_LASTASSET,
F_LASTSHARES;
)] 'Remove_Duplicates_69:DSLink33.v'
;
elavenil
Premium Member
Posts: 467
Joined: Thu Jan 31, 2002 10:20 pm
Location: Singapore

Post by elavenil »

Enable 'APT_DUMP_SCORE' (set it to True) in the Administrator client, and the job log will show the partitioners and the operators used while the job is running.
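
The score appears in the job log as an entry starting with "main_program: This step has n datasets". A rough illustration of the kind of fragment to look for (the exact layout varies by release; the operator and node names here are invented, loosely based on your job):

main_program: This step has 2 datasets:
ds0: {op0[1p] (sequential import)
      eOther(APT_HashPartitioner { key={ value=C_FUNDCODE } })#>eCollectAny
      op1[2p] (parallel tsort)}
ds1: {op1[2p] (parallel tsort)
      eSame#>eCollectAny
      op2[2p] (parallel remdup)}
It has 3 operators:
op0[1p] {(sequential import) on nodes ( node1[op0,p0] )}
op1[2p] {(parallel tsort) on nodes ( node1[op1,p0] node2[op1,p1] )}
op2[2p] {(parallel remdup) on nodes ( node1[op2,p0] node2[op2,p1] )}
It runs 5 processes on 2 nodes.

The partitioner the engine inserted on each link is named in the dataset entries (APT_HashPartitioner in this sketch); that is where an 'Auto' setting is finally resolved.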

HTWH.

Regards
Saravanan
richdhan
Premium Member
Posts: 364
Joined: Thu Feb 12, 2004 12:24 am

Post by richdhan »

Hi Jack,

Instead of setting it from the Administrator client, add the $APT_DUMP_SCORE environment variable to your job parameters and set its default value to 1. That will give you the required information.
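
Once $APT_DUMP_SCORE is exposed as a job parameter, you can also switch it on for a single run from the command line. A hedged sketch, assuming a project called MyProject and a job called MyJob (both made-up names):

dsjob -run -param '$APT_DUMP_SCORE=1' MyProject MyJob

The score is then written to that run's job log only, which is convenient if you don't want it enabled project-wide.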

HTH
Rich