Search found 165 matches
- Tue Jun 12, 2007 4:02 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Passing partly sorted data into the Aggregator
- Replies: 6
- Views: 1669
Passing partly sorted data into the Aggregator
Hi, We have a question about the way the Aggregator works. We need to aggregate incoming records on 5 columns. Of these 5 columns, the incoming records are already sorted on 2. So, can we assert on the Input tab of the Aggregator that the incoming data is sorted on 2 of the colu...
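The idea behind asserting a partial sort can be sketched in plain Python. This is an illustrative sketch of the concept, not the DataStage Aggregator's actual implementation: because the input is sorted on the first 2 grouping columns, all rows sharing that 2-column prefix arrive together, so groups only need to be held in memory until the prefix changes. The record layout and values below are invented.

```python
from collections import defaultdict

def aggregate_partly_sorted(records, sorted_keys=2):
    """Sum (k1..k5, value) rows grouped on all 5 key columns, where the
    input is sorted on only the first `sorted_keys` columns. Groups are
    buffered only until the sorted prefix changes, then flushed."""
    current_prefix = None
    buckets = defaultdict(int)          # full 5-column key -> running sum
    for *keys, value in records:
        prefix = tuple(keys[:sorted_keys])
        if current_prefix is not None and prefix != current_prefix:
            yield from buckets.items()  # safe: this prefix cannot reappear
            buckets.clear()
        current_prefix = prefix
        buckets[tuple(keys)] += value
    yield from buckets.items()

rows = [
    ("A", 1, "x", "p", "m", 10),
    ("A", 1, "y", "p", "m", 5),
    ("A", 1, "x", "p", "m", 2),
    ("B", 2, "x", "q", "n", 7),
]
result = dict(aggregate_partly_sorted(rows))
# result[("A", 1, "x", "p", "m")] -> 12
```

The payoff is memory, not correctness: only the groups belonging to the current sorted prefix are buffered at any time, instead of every group in the stream.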
- Thu Jun 07, 2007 3:37 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: A challenging logic
- Replies: 8
- Views: 3201
- Thu Jun 07, 2007 7:25 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: A challenging logic
- Replies: 8
- Views: 3201
Minhajuddin, Yes, this was the same idea we had. But what we were hoping to do was avoid explicitly checking the lookup value and passing that source column to the output. We were thinking of passing the input column dynamically using some kind of expression and/or any available BAS...
- Thu Jun 07, 2007 7:16 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: A challenging logic
- Replies: 8
- Views: 3201
- Wed Jun 06, 2007 3:44 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: A challenging logic
- Replies: 8
- Views: 3201
A challenging logic
Hi, We have a tricky requirement to handle, as described below. The incoming source link has columns such as Id, Name, Stat_Cd, Brok_Cd, Fin_Cd, Ln_Cd. The lookup table has columns (Id, Ele_Nm) with data like this: Id -----> Ele_Nm --------------------- 10 -----> Stat_Cd 20 -----> Brok...
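The "pick a source column named by the lookup" idea can be sketched as follows. This is a hypothetical Python illustration, not DataStage Transformer code: the lookup maps each Id to a column name, and the value is fetched by that name at runtime instead of through a hard-coded branch per column. The lookup entries beyond those quoted in the post, and the sample row, are invented.

```python
# Lookup table from the post: Id -> name of the source column to emit.
lookup = {10: "Stat_Cd", 20: "Brok_Cd", 30: "Fin_Cd", 40: "Ln_Cd"}

def resolve_element(row):
    """Return the value of whichever source column the lookup names
    for this row's Id -- dynamic access, no If/Else chain."""
    column_name = lookup[row["Id"]]     # e.g. 20 -> "Brok_Cd"
    return row[column_name]

row = {"Id": 20, "Name": "n1", "Stat_Cd": "S9",
       "Brok_Cd": "B7", "Fin_Cd": "F1", "Ln_Cd": "L4"}
value = resolve_element(row)
# value -> "B7"
```

In a language with column-name indirection this collapses the per-column conditional entirely; in a Transformer, which lacks it, the equivalent is usually one explicit If/Then/Else per column, which is what the thread is trying to avoid.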
- Wed May 30, 2007 7:16 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: List of Plug-in stages
- Replies: 2
- Views: 1251
- Tue May 29, 2007 11:49 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: List of Plug-in stages
- Replies: 2
- Views: 1251
List of Plug-in stages
Hi, We are currently on the Server Edition and will be moving to the Enterprise Edition shortly. We would like to decide which plug-in stages we might need before the server installation phase. Where can we find a complete list of plug-in stages? In our current Server Edition i...
- Tue May 15, 2007 7:44 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Calling user-defined Oracle function
- Replies: 4
- Views: 1130
Thanks Craig. So, you are saying the function needs to be called from SQL. In our case, we would have to call an Oracle function somewhere in the middle of a job and route the output based on the function's return value. What could be the best possible solution for this? Is it that we would nee...
- Mon May 14, 2007 2:28 pm
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: Calling user-defined Oracle function
- Replies: 4
- Views: 1130
Calling user-defined Oracle function
Hi, We have a requirement to call a user-defined Oracle function to perform some logic. We hope we can call such a user-defined function through an OCI stage. Based on the return value of the function, we would need to route the record for further downstream processing. For example, if the functi...
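The pattern being asked about, calling a user-defined function from SQL and routing each record on its return value, can be sketched with SQLite's UDF mechanism standing in for an Oracle function invoked through an OCI stage. The function name, table, and thresholds below are all invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def classify(amount):
    """Plays the role of the user-defined Oracle function."""
    return "HIGH" if amount > 100 else "LOW"

# Register the Python function so SQL can call it, the way a SELECT
# against Oracle could call a PL/SQL function per row.
conn.create_function("classify", 1, classify)
conn.execute("CREATE TABLE txn (id INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO txn VALUES (?, ?)", [(1, 50), (2, 150)])

# Each row now carries the function's return value; a downstream stage
# can route on that column (e.g. one Transformer constraint per link).
rows = conn.execute(
    "SELECT id, classify(amount) FROM txn ORDER BY id").fetchall()
# rows -> [(1, 'LOW'), (2, 'HIGH')]
```

The key point is that the function is evaluated inside the SELECT, so the routing column arrives with the record rather than requiring a separate call per row from the job.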
- Sun May 13, 2007 12:08 am
- Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
- Topic: A Tricky and a Challenging logic to be implemented
- Replies: 2
- Views: 1314
A Tricky and a Challenging logic to be implemented
Hi, We have a very tricky and challenging piece of logic to implement. We need to build some filter conditions dynamically. I will explain this with an example. We have to apply some constraints on the records coming from the input, and we have to build those constraints based on the data fetched from...
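One common shape for "filter conditions built from data" is to store each rule as a (column, operator, value) row and compile the set into a predicate at runtime. This is a hedged Python sketch of that idea; the rule rows and record layout below are invented, since the post's example is truncated. In the scenario described, the rules would be fetched from a reference table.

```python
import operator

# Map operator symbols (as they might be stored in a rules table)
# to actual comparison functions.
OPS = {"=": operator.eq, ">": operator.gt, "<": operator.lt}

# Pretend these rows were fetched from the database at job start.
rules = [("Stat_Cd", "=", "A"), ("Amt", ">", 100)]

def passes(row, rules):
    """True only if the row satisfies every dynamically built rule."""
    return all(OPS[op](row[col], val) for col, op, val in rules)

rows = [{"Stat_Cd": "A", "Amt": 150}, {"Stat_Cd": "B", "Amt": 200}]
kept = [r for r in rows if passes(r, rules)]
# kept -> [{"Stat_Cd": "A", "Amt": 150}]
```

Because the rules live in data rather than in code, adding or changing a constraint is a table update, not a job redesign.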
- Wed May 09, 2007 6:57 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Hardware Requirements for setting up the Server Environment
- Replies: 4
- Views: 1219
Ray, We would have 10 million records at the source. We would apply some filter conditions immediately after reading from the source, which may bring the record count down to around 1 or 2 million. This would be the number of records processed in all the other jobs that are downstre...
- Tue May 08, 2007 7:21 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Hardware Requirements for setting up the Server Environment
- Replies: 4
- Views: 1219
Ok, I will give you the details based on the information and ideas we currently have. - We would like the processing of 10 million records to be completed in under 1 hour (1 hour being the maximum). - The source is going to be a file on the mainframe system. We plan to make use of the FTP stage to fetch t...
- Mon May 07, 2007 3:14 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Hardware Requirements for setting up the Server Environment
- Replies: 4
- Views: 1219
Hardware Requirements for setting up the Server Environment
Hi, We have planned to move our platform from the Server Edition to the Enterprise Edition, and we are working out the requirements for setting up the infrastructure for PX 7.5 on UNIX. We would like to hear some suggestions on what the necessary parameter requirements could be for establ...
- Thu May 03, 2007 7:13 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Using Server routine in Parallel Job
- Replies: 10
- Views: 4163
- Wed May 02, 2007 3:12 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Using Server routine in Parallel Job
- Replies: 10
- Views: 4163
The reason we didn't want to do this at the database level is that we want this logic applied to each incoming record, and we didn't want to establish a database connection for every incoming record. That is why we thought of loading the records into a Hashed File once and then making use...
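The trade-off described here, one bulk load into a local keyed store versus a database round trip per record, can be sketched in a few lines. The dictionary plays the role the Hashed File plays in a Server job; the reference data and field names are invented for illustration.

```python
def load_reference(fetch_all):
    """One bulk read at job start; fetch_all stands in for a single
    database query that returns (Id, Ele_Nm) pairs."""
    return {key: value for key, value in fetch_all()}

def enrich(records, ref):
    """Per-record enrichment via an O(1) local lookup --
    no connection or round trip per record."""
    for rec in records:
        rec["Ele_Nm"] = ref.get(rec["Id"], "UNKNOWN")
        yield rec

ref = load_reference(lambda: [(10, "Stat_Cd"), (20, "Brok_Cd")])
out = list(enrich([{"Id": 10}, {"Id": 99}], ref))
# out -> [{'Id': 10, 'Ele_Nm': 'Stat_Cd'}, {'Id': 99, 'Ele_Nm': 'UNKNOWN'}]
```

With millions of input records, amortizing one query across the whole run instead of paying connection latency per row is usually the difference between minutes and hours, which matches the reasoning in the post.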