Search found 99 matches
- Thu Dec 27, 2012 2:05 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Job completes successfully, but is showing ABORTED
- Replies: 5
- Views: 2914
Job completes successfully, but is showing ABORTED
We have migrated a couple of jobs from our DEV environment to a NEW STAGE environment. The hardware configurations of these two environments are identical. Post deployment, we are facing the following problems: a) Jobs which used to complete within an hour are taking close to 10-12 hours to comple...
- Thu Dec 20, 2012 2:49 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Oracle connector stage Pre-SQL execution anomaly
- Replies: 2
- Views: 2014
- Thu Dec 20, 2012 1:02 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Oracle connector stage Pre-SQL execution anomaly
- Replies: 2
- Views: 2014
Oracle connector stage Pre-SQL execution anomaly
I have created a job which joins two data sources, namely a) the MEMBER table and b) the VOUCHER table, based on the MEMBER_NUMBER column. Post join, I am fetching data from TWO columns, MEMBER_NAME (from the MEMBER table) and VOUCHER_TYPE (from the VOUCHER table), and I am UPDATING TWO columns (MEM_NAME and TYPE) of THE SAME VO...
- Tue Dec 18, 2012 3:11 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Issues with MODIFY STAGE
- Replies: 1
- Views: 1016
Issues with MODIFY STAGE
I am referring to a topic marked RESOLVED where a few specification expressions are mentioned, like: UPC:string[20] = string_from_decimal [suppress_zero](UPC) where the name of the column in the output link is the SAME as the name of the input column whose DATATYPE is being altered by the mod...
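For context, a modify specification of that shape converts a column's datatype and can also rename it on the output link; the two hypothetical specifications below (column names are illustrative, following the quoted pattern) show both forms:

```
UPC:string[20] = string_from_decimal[suppress_zero](UPC)
UPC_STR:string[20] = string_from_decimal[suppress_zero](UPC)
```

The first form keeps the input column name, so the converted value replaces the column in place on the output link; the second writes the converted value under a new name, which avoids ambiguity if the original decimal column is still needed downstream.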
- Fri Dec 14, 2012 2:56 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Problems with STAGE VARIABLE DERIVATION
- Replies: 3
- Views: 2296
Problems with STAGE VARIABLE DERIVATION
I have a source table named PRODUCT. Its schema is given below. PROD_ID VARCHAR2(11 CHAR) NAME VARCHAR2(50 CHAR) PRICE VARCHAR2(5 CHAR) GROUP_CODE VARCHAR2(4 CHAR) Examples of data in the PRICE column are 80, 70, 170, 90, 70. My intention is to use the looping feature of the transformer stage and create...
- Fri Dec 07, 2012 5:30 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: TWO DIFFERENT DATA FLOWS IN A SINGLE JOB
- Replies: 13
- Views: 4909
Let's consider the following job scenario:
Oci_Src --> Tfm1 --> Dataset1
Dataset2 --> Tfm2 --> Oci_Tgt
In the above situation:
> The first flow uses Dataset1 to write to a file f1.ds
> Dataset2 is used to read the data written in the file f1.ds and perform any subsequent functions as part of the second flow
> ...
- Thu Dec 06, 2012 3:55 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: TWO DIFFERENT DATA FLOWS IN A SINGLE JOB
- Replies: 13
- Views: 4909
- Tue Nov 27, 2012 1:09 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: TWO DIFFERENT DATA FLOWS IN A SINGLE JOB
- Replies: 13
- Views: 4909
TWO DIFFERENT DATA FLOWS IN A SINGLE JOB
I have been longing for quite some time to ask this question here, but was too tied up in other issues lately. In our DataStage repository there are many jobs in any one of which, when I open the job in the Designer canvas, I see TWO OR MORE INDEPENDENT data flows within the same job. Something li...
- Tue Nov 20, 2012 12:22 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Datastage job ran for 11 hours... please help
- Replies: 12
- Views: 8247
I debugged the job and used a PEEK stage in place of the target Oci, and the job wrapped up in close to 10 mins, but the problem shows up again when the Oci is in place. The target table that I want my data to go into has multiple (close to 20) indexes declared that I dare not touch. So is there a...
- Mon Nov 19, 2012 8:01 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Datastage job ran for 11 hours... please help
- Replies: 12
- Views: 8247
- Mon Nov 19, 2012 7:57 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Datastage job ran for 11 hours... please help
- Replies: 12
- Views: 8247
- Mon Nov 19, 2012 6:17 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Datastage job ran for 11 hours... please help
- Replies: 12
- Views: 8247
Oci_Src --Sort1(key=Member ID)--> Join1 --Sort2(key=address_id)--> Join2 --> Oci_Tgt
                                    |                                |
                           Oci_Member_master              Sort3(key=ADDRESS_ID)
                                                                     |
                                                           Oci_Address_master
@BI-RMA: It's a plain INSERT, and there are many other jobs handling similar volumes of data with a plain INSERT into the target table, but they are ...
- Mon Nov 19, 2012 5:16 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Datastage job ran for 11 hours... please help
- Replies: 12
- Views: 8247
Datastage job ran for 11 hours... please help
My DataStage parallel job is expected to handle more than 10 million records as input. The input data is basically a combination of MEMBER_IDs and their relevant addresses indicated by ADDRESS_ID. The requirement of this job is to validate the MEMBER_ID and ADDRESS_ID values of a single record of the ...
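The join described in this thread behaves like a sorted merge on the key: both inputs must arrive partitioned and sorted identically on the join key so the Join stage can stream matching rows. A minimal Python sketch of that idea (the column names and sample rows here are purely illustrative, not the actual job's data):

```python
# Sketch of a sorted merge join, the technique a parallel Join stage relies on
# when both input links are partitioned and sorted identically on the key.
def merge_join(left, right, key):
    left = sorted(left, key=key)
    right = sorted(right, key=key)
    out, j = [], 0
    for l in left:
        # Advance the right side until its key catches up with the left key.
        while j < len(right) and key(right[j]) < key(l):
            j += 1
        k = j
        # Emit one output row per matching right row (inner join semantics).
        while k < len(right) and key(right[k]) == key(l):
            out.append({**l, **right[k]})
            k += 1
    return out

members = [{"ADDRESS_ID": 1, "MEMBER_ID": "M1"}, {"ADDRESS_ID": 2, "MEMBER_ID": "M2"}]
addresses = [{"ADDRESS_ID": 1, "CITY": "Pune"}, {"ADDRESS_ID": 3, "CITY": "Delhi"}]
rows = merge_join(members, addresses, key=lambda r: r["ADDRESS_ID"])
```

Because each input is consumed in a single forward pass, neither side has to be held in memory in full, which is why identical sorting on both links matters so much at this volume.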
- Tue Nov 13, 2012 12:15 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: JOIN STAGE takes hell lot of time
- Replies: 6
- Views: 4584
I already have the partitioning and sorting done IDENTICALLY on both links to the join stage, on the ADDRESS_ID column, and still the job runs long enough to raise my eyebrows... As Andrew said, can I have any of the algorithms to compact my ADDRESS_ID field size? I just need to give it a try...
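One common way to compact a wide character key for joining (an illustrative sketch, not a built-in DataStage feature) is to hash it to a fixed-width integer. Hashes can collide, so the original key should still be carried along and compared to confirm a match:

```python
import zlib

def compact_key(address_id: str) -> int:
    """Map an arbitrary-length key string to a fixed 32-bit integer.

    Collisions are possible with any 32-bit hash, so keep the original
    key column and compare it after the join to confirm each match.
    """
    return zlib.crc32(address_id.encode("utf-8"))

# The key values below are made up for illustration.
k1 = compact_key("ADDR-0000012345")
k2 = compact_key("ADDR-0000012345")  # same input, same hash (deterministic)
k3 = compact_key("ADDR-0000099999")  # different key, almost certainly a different hash
```

Joining on a 4-byte integer instead of a long character column shrinks the sort keys and comparison cost, at the price of the extra collision check.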
- Mon Nov 12, 2012 1:13 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: JOIN STAGE takes hell lot of time
- Replies: 6
- Views: 4584
Thanks J for your response. But the business logic demands that a join must be made on ADDRESS_ID (basically to look up the ADDRESS_ID of the input against reference data), so I can't escape the join. What I am looking for is this: if there is a way that we can encode the ADDRESS_ID column in s...