Search found 72 matches
- Fri Jul 15, 2011 3:18 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Job hangs at the join stage
- Replies: 2
- Views: 3744
I had also faced the same issue. The design was Seq File ----> Transformer ---> Join ---> Dataset, with a second Seq File feeding the Join. After some time of investigating, we found that the data was getting truncated: a decimal(10) was mapped to a decimal(6), resulting in data loss and leading to the job hanging at the join stage. We ...
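The kind of silent precision loss described above can be reproduced in miniature with Python's decimal module (a hedged sketch, not DataStage behavior; the value and precision are invented for illustration):

```python
from decimal import Decimal, Context

src = Decimal("1234567890")   # a 10-digit value: fits decimal(10)
ctx = Context(prec=6)         # emulate forcing it through a decimal(6) field
out = ctx.plus(src)           # silently rounded down to 6 significant digits

print(out)                    # 1.23457E+9 -- four digits of data lost
assert out != src
```

A narrower target precision does not raise an error here; the value is just quietly rounded, which is why a decimal(10)-to-decimal(6) mapping can go unnoticed until a downstream stage misbehaves.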
- Mon Jun 06, 2011 11:36 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Surrogate key using only one instance.
- Replies: 4
- Views: 4303
DataStage works within nodes. Hence your surrogate key generator logic would work if either it is run on one instance or the rows are equally distributed across all the partitions. To generate unique consecutive values in DataStage: a> we used to use a Transformer (with partition number and number of ...
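The partition-number technique alluded to can be sketched as follows (plain Python, not DataStage code; the function name and 1-based start are illustrative assumptions): each partition starts at its own partition number and increments by the total number of partitions, so the streams never collide.

```python
def surrogate_keys(partition_number, num_partitions, rows):
    """Yield one unique surrogate key per row on this partition.

    partition_number is 0-based; keys start at 1 (a hypothetical choice)
    and step by num_partitions so no two partitions ever emit the same key.
    """
    key = partition_number + 1
    for _ in range(rows):
        yield key
        key += num_partitions

# Two partitions, three rows each: disjoint, interleaved sequences.
print(list(surrogate_keys(0, 2, 3)))  # [1, 3, 5]
print(list(surrogate_keys(1, 2, 3)))  # [2, 4, 6]
```

Note this guarantees uniqueness regardless of data distribution, but the combined sequence is only gap-free and consecutive when the rows are spread evenly across partitions, which matches the caveat in the post.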
- Thu Jun 02, 2011 3:53 am
- Forum: Enhancement Wish List
- Topic: propagate option and view data in excel or other formats
- Replies: 5
- Views: 22298
propagate option and view data in excel or other formats
I would like to see the following enhancements (not sure if they are already in place). a> Like in Informatica, if we add some columns or change the existing metadata in the preceding stages of a job, there should be an option to propagate the same changes to the subsequent stages. So that wh...
- Wed Jun 01, 2011 5:59 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Facing problem with compilation
- Replies: 3
- Views: 5002
Are you using a CFF stage, or a source/target schema with level numbers? If yes, then check the level numbers of the fields; they must be the same. As indicated in the error, there seems to be a difference in the level numbers of the columns lnk_ppsubject_history_to_join.HIST_ACTION_IN and lnk_ppsubject_history_to_join.HIS...
- Wed Jun 01, 2011 5:47 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: DSGetStageInfo for Datasets
- Replies: 9
- Views: 6050
- Mon May 16, 2011 10:49 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Datastage Director Hung
- Replies: 1
- Views: 2299
- Mon Dec 27, 2010 6:37 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Tsort insertion
- Replies: 2
- Views: 3240
Tsort insertion
In our job we have a source, a join on 2 inputs, and a target. This job ran for 1 million records, but when we tried 5 million, it failed with 'tsort -- unable to write to the file....'. As a solution we inserted explicit Sort stages on the input links to the Join and re-executed the job. The job r...
- Thu Dec 23, 2010 6:24 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Datastage strange behaviors
- Replies: 2
- Views: 2367
- Fri Dec 17, 2010 7:59 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Datastage strange behaviors
- Replies: 2
- Views: 2367
Datastage strange behaviors
Hi, I have been observing some weird behaviors; can someone explain these for me? 1> I have 2 jobs, a and b. In job a, 2 datasets are created, partitioned on key X and sorted on key X and key Y. (The relation between key X and Y is one to many, and between Y and X is one to many, e.g. branch and acc...
- Tue Aug 24, 2010 6:58 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Writing 64 Target files
- Replies: 3
- Views: 2266
Writing 64 Target files
We have a requirement to write data into 64 target files. The data is written into one of the 64 files based on the value of a column. The target file is an EBCDIC file. The design is the same for 6 of the jobs. This leads to a bottleneck, with most of the execution time spent in write operations. The...
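The routing rule described (one of 64 files chosen by a column value) can be sketched like this in Python; the file-name pattern, the `route_key` column, and the modulo mapping are all invented for illustration, not taken from the original job:

```python
NUM_FILES = 64

def target_file(route_key):
    """Map a routing column value onto one of the 64 target file names."""
    return f"target_{route_key % NUM_FILES:02d}.dat"

rows = [{"route_key": 5,  "payload": "a"},
        {"route_key": 69, "payload": "b"}]   # 69 % 64 == 5: same file as key 5

for row in rows:
    print(row["payload"], "->", target_file(row["route_key"]))
```

With a fan-out like this, every output file is a separate stream, which is consistent with the write-side bottleneck the post describes.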
- Fri Aug 20, 2010 1:56 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Warning while capturing the rejected record
- Replies: 0
- Views: 1536
Warning while capturing the rejected record
Hi, I am trying to capture the rejected records from the CFF stage. The output of the reject link is a sequential file. It is giving the warning below: record type = implicit cannot save rejected records. I have tried using record type = implicit as well as the normal setting with quotes etc., but I am getting the sa...
- Wed May 19, 2010 11:13 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Data inconsistency with Transformer in parallel
- Replies: 3
- Views: 3174
Data inconsistency with Transformer in parallel
Hi, I am generating a unique identification number (UID) in a Transformer. The job design is Dataset ----> Sort ---> Transformer ---> Target. I am sorting on columns A, B and generating a key-change indicator. In the Transformer, when the key-change indicator is 1, the UID is incremented; else the previous va...
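The key-change logic described can be sketched as follows (a single-stream Python sketch of the stage-variable pattern, with invented names; not the actual Transformer derivation):

```python
def assign_uids(rows):
    """rows: a list of (A, B) key tuples, already sorted on (A, B).

    Returns a parallel list of UIDs: the UID is incremented when the
    key changes (key-change indicator = 1), otherwise the previous
    UID is reused -- so all rows of one key group share one UID.
    """
    uids, uid, prev_key = [], 0, None
    for key in rows:
        key_change = 1 if key != prev_key else 0
        if key_change == 1:
            uid += 1
        uids.append(uid)
        prev_key = key
    return uids

print(assign_uids([("a", 1), ("a", 1), ("b", 2)]))  # [1, 1, 2]
```

In a parallel run this counter lives per partition, so unless the data is partitioned on the same key it is sorted on, different partitions can emit overlapping UIDs; that is one plausible source of the inconsistency in the topic title.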
- Sun Apr 25, 2010 10:59 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Reading EBCDIC data thru CFF stage
- Replies: 0
- Views: 1497
Reading EBCDIC data thru CFF stage
Hi, we are reading EBCDIC data through the CFF stage. The following are the metadata definitions: 1> a single fixed array, 2> multiple fixed arrays with no ODO (OCCURS DEPENDING ON clause), 3> multiple fixed arrays with ODO (OCCURS DEPENDING ON clause). We would like to generate an output which gives the...
- Sat Mar 20, 2010 12:02 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Vector
- Replies: 5
- Views: 3291
- Mon Feb 22, 2010 10:57 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Working of the Partitioning
- Replies: 2
- Views: 2558
Working of the Partitioning
Hi, I just want to understand the partitioning concepts in PX. For example, I have a configuration file with 2 nodes. The source file is partitioned on custno; e.g. the custno values are 1, 2, 3, 4. So, data-wise there are 4 partitions but only 2 logical nodes. 1> How will the data be allocated to the 2 logical nodes? 2> Also...
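The allocation in question can be illustrated with a toy hash partitioner (a hedged sketch: Python's built-in `hash` stands in for the engine's internal hash function, which it is not): each custno is hashed and reduced modulo the node count, so the 4 key values collapse deterministically onto the 2 logical nodes, and all rows with the same custno always land on the same node.

```python
NUM_NODES = 2

def node_for(custno):
    """Pick a logical node for a custno via hash-mod (stand-in hash)."""
    return hash(custno) % NUM_NODES

for custno in (1, 2, 3, 4):
    print("custno", custno, "-> node", node_for(custno))
```

Which node a given key lands on depends on the hash function, and nothing forces the 4 keys to split 2-and-2; an uneven split is the data-skew case.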