Search found 233 matches
- Tue May 20, 2008 5:41 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: kill process ids
- Replies: 10
- Views: 4222
Any best way to kill a PID?
Thanks Ray. Is there any other way to kill DataStage-related PIDs, if it becomes necessary to kill them?
- Thu May 15, 2008 3:50 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Broken pipe
- Replies: 2
- Views: 1317
Broken pipe
When do we get this kind of error?
ZUNIQID_lkp,1: sendWriteSignal() failed on node iocl21p1 ds = 21 conspart = 0 Broken pipe
- Mon May 05, 2008 4:57 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: buildop stage transfer error
- Replies: 2
- Views: 1165
buildop stage transfer error
I have built a Build-op stage that has one output link and a reject link. To transfer records to the reject link I used the dstransfer() macro in my code, and so set the Auto Transfer option to False under the Transfer tab. But when I did this I got the error below. Operator Generation Failed buildop -f -BC /...
- Thu May 01, 2008 9:44 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: column import
- Replies: 4
- Views: 1616
:!: Modify stage uses zero-based counting. Nothing wrong with Transformer stages (these days) either. I can get this functionality using a Transformer, but I want to know the functionality of Column Import. Will it split the data if the data doesn't have any delimiters? Just learning the functionality ...
- Thu May 01, 2008 5:15 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: column import
- Replies: 4
- Views: 1616
column import
I have data like Field1 aaaaaaa bbbbbbb cccccccc ddddddd eeeeeee. Now I have to split this Field1 into Field2 and Field3, where Field2 should contain the first 3 chars and Field3 should contain the last 4 chars. I can do this using a Transformer, but I wish to use Column Import. Can I split the rec...
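The split being asked about can be sketched in plain Python (hypothetical function and field names; in DataStage the Column Import stage would do this via fixed-width column definitions rather than code):

```python
def split_fixed_width(field1: str) -> tuple[str, str]:
    """Split a fixed-width value: first 3 characters and last 4 characters."""
    field2 = field1[:3]   # first 3 characters
    field3 = field1[-4:]  # last 4 characters
    return field2, field3

print(split_fixed_width("aaabbbb"))  # ('aaa', 'bbbb')
```

This works precisely because the data has no delimiters: the positions themselves carry the structure, which is the same idea a fixed-width import relies on.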
- Thu May 01, 2008 3:20 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: fatal errors
- Replies: 1
- Views: 1011
fatal errors
I got an error I don't have any idea about; please help me out. These are the fatal errors thrown: APS_DB_REMAINING_JOIN_INDICATORADD_tras,10: dspipe_wait(3019222): Writer timed out waiting for Reader to connect. APS_DB_REMAINING_JOIN_INDICATORADD_tras,10: The runLocally() of the operator failed. [api/operator_rep.C:40...
- Wed Apr 30, 2008 5:56 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: write range map
- Replies: 7
- Views: 2845
I don't know that there's an advantage one over the other. It's really driven by the business requirement - do you need to keep ranges of keys contiguous? If not, prefer hash or modulus for a key-based partitioning algorithm; there's no pre-processing required for either of these. Thanks a lot. I g...
- Wed Apr 30, 2008 5:35 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: write range map
- Replies: 7
- Views: 2845
- Wed Apr 30, 2008 5:01 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: write range map
- Replies: 7
- Views: 2845
Why do we need to use this stage (range partitioning)? We can get the same functionality using hash partitioning. Also, can't we force the job to put specified key values in one partition and the rest in another? When we say hash partitioning, related data will stay on the same partition, but what I want is...
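The difference between the two schemes can be sketched in Python (a simplified model, not DataStage's actual implementation): hash partitioning keeps equal keys together but scatters adjacent keys, while range partitioning uses a sorted list of boundary keys (which is essentially what a range map holds) to keep contiguous key ranges on the same partition.

```python
def hash_partition(key: str, n_parts: int) -> int:
    # Equal keys always land on the same partition, but
    # neighbouring keys are scattered across partitions.
    return hash(key) % n_parts

def range_partition(key: str, boundaries: list[str]) -> int:
    # boundaries is a sorted list of upper-boundary keys, e.g. ["f", "p"]
    # splits keys into three partitions: <= "f", <= "p", and the rest.
    # Contiguous key ranges stay together on one partition.
    for i, upper in enumerate(boundaries):
        if key <= upper:
            return i
    return len(boundaries)  # keys above the last boundary

print(range_partition("banana", ["f", "p"]))  # 0
print(range_partition("mango", ["f", "p"]))   # 1
print(range_partition("zebra", ["f", "p"]))   # 2
```

The catch is that good boundaries require knowing the key distribution in advance, which is why a pre-processing step (the Write Range Map stage) exists for range partitioning while hash and modulus need none.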
- Wed Apr 30, 2008 3:41 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: write range map
- Replies: 7
- Views: 2845
write range map
I have searched about this stage and, after going through the explanations, I still feel I am not getting used to it. What actually is a range map, and where should we use range partitioning? Can't we use range partitioning on data without this range map stage? Is this stage used in real ...
- Mon Apr 28, 2008 7:05 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Writing to Fixed width file
- Replies: 5
- Views: 2788
- Mon Apr 28, 2008 6:42 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Writing to Fixed width file
- Replies: 5
- Views: 2788
trim(col_name,' ','A')
try this
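For readers unfamiliar with the 'A' option: in DataStage's Trim function it removes all occurrences of the given character, not just leading and trailing ones. A rough Python equivalent of that behaviour (an illustrative sketch, not DataStage code):

```python
def trim_all(value: str, ch: str = " ") -> str:
    # Mimics Trim(value, ch, 'A'): strip every occurrence of ch,
    # including ones embedded in the middle of the string.
    return value.replace(ch, "")

print(trim_all("  padded   value  "))  # 'paddedvalue'
```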
- Mon Apr 28, 2008 6:13 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: routines compile
- Replies: 2
- Views: 1828
routines compile
How do I compile routines manually through Manager? We have moved the routines folder from DEVELOPMENT to QA, and the routines are in a locked state. I didn't find any manual option to compile routines through Manager.
- Fri Apr 18, 2008 3:20 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: reject records
- Replies: 1
- Views: 681
reject records
I am reading fixed-width records from a UNIX box through a Sequential File stage, with the Reject Mode option set to 'Fail'. Does DataStage specify the line number of the rejected record if there are any rejections, or do we have to explicitly add a reject link and handle it ourselves?
- Wed Apr 16, 2008 4:17 pm
- Forum: IBM® InfoSphere DataStage Server Edition
- Topic: row splitter
- Replies: 2
- Views: 1496
row splitter
I have data like 110456789234567 in Column1. Now my output should look like 110 456 789 234 567, all in Column1: the first 3 characters into the 1st row, the next 3 into the 2nd row, etc. How can I split the data? Is this possible using a Row Splitter? What delimiter can I give for the input data?
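The transformation being asked for is a fixed-size chunking, one chunk per output row; there is no delimiter in the data, only positions. A minimal Python sketch of the logic (hypothetical function name, illustrative only):

```python
def split_rows(value: str, size: int = 3) -> list[str]:
    # Break the input into fixed-size chunks; each chunk
    # becomes one output row in the same column.
    return [value[i:i + size] for i in range(0, len(value), size)]

print(split_rows("110456789234567"))  # ['110', '456', '789', '234', '567']
```

Because the input has no delimiter, a stage that splits on delimiters cannot do this directly; the split has to be driven by character positions, as above.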