Search found 39 matches
- Mon Apr 22, 2019 11:53 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Need to publish message to Kafka Cluster
- Replies: 3
- Views: 4709
Need to publish message to Kafka Cluster
Hi, I am using DataStage v11.5. I have to publish messages to a Kafka cluster using the Kafka connector (SSL authentication), but I am not sure what configuration and user access are required. Could you please help if you know how to set up the secure connection (SSL) and any other configuration requi...
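For background, the Kafka connector ultimately relies on the standard Kafka client SSL settings, so a secure connection generally needs a truststore (and, for two-way SSL, a keystore) readable by the DataStage engine user. A minimal sketch of the client-side properties, with placeholder paths and passwords:

```properties
# Placeholder values; the actual stores must be built from the
# cluster's CA / client certificates.
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=<truststore-password>
# Needed only when the brokers require client (mutual) authentication:
ssl.keystore.location=/path/to/client.keystore.jks
ssl.keystore.password=<keystore-password>
ssl.key.password=<key-password>
```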
- Thu Jan 31, 2019 7:58 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: How to retain ONE record after comparing 2 different columns
- Replies: 5
- Views: 4464
After some testing, I got the solution. I compared 'Col1' and 'Col2' (whether number or string) and created a new KEY column: if Col1 <= Col2 then Key = Col1:Col2, else Key = Col2:Col1. This gives the same key for both records, and I then de-duplicated the records based on this 'Key' column. Than...
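The order-independent key described above can be sketched in Python (hypothetical data; the post builds the key inside a DataStage job, this only illustrates the logic, comparing values as strings for simplicity):

```python
def symmetric_key(col1, col2):
    """Return the same key regardless of which column holds which value."""
    a, b = str(col1), str(col2)
    return f"{a}:{b}" if a <= b else f"{b}:{a}"

def dedupe(records):
    """records: list of (party1, party2) pairs; keep one record per pair."""
    seen = set()
    kept = []
    for p1, p2 in records:
        key = symmetric_key(p1, p2)
        if key not in seen:          # first record with this key wins
            seen.add(key)
            kept.append((p1, p2))
    return kept
```

Because (100, 200) and (200, 100) map to the same key, only the first of the two records survives the de-duplication.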
- Thu Jan 31, 2019 3:34 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: How to retain ONE record after comparing 2 different columns
- Replies: 5
- Views: 4464
- Wed Jan 30, 2019 7:55 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: How to retain ONE record after comparing 2 different columns
- Replies: 5
- Views: 4464
How to retain ONE record after comparing 2 different columns
Hi, I have a file where I need to retain only one record out of each pair of records where 'Col1/Party1 of record1' equals 'Col2/Party2 of the other record'. These records may not come in sequence. Sample: ----------- Column Party1 Party2 -------------------------------------- Record1 --> 100 200 Record2 --> 20...
- Sun May 20, 2018 10:57 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Incomplete record schema when using OSH_PRINT_SCHEMAS
- Replies: 1
- Views: 2092
Incomplete record schema when using OSH_PRINT_SCHEMAS
Hi, I need to print the record schema for each operator in the job, mainly the record schema of the final target (a sequential file). Job Design: CFF Stage (Source) --> Transformer --> Seq File (Target). But the source file record has more than 10,000 columns, and when I use OSH_PRINT_SCHEMAS in the job, I ...
- Sun Mar 04, 2018 8:25 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Issue while reading sequential file
- Replies: 5
- Views: 4234
To answer the queries: 1. We have a mechanism to create the schema file based on the file's metadata. Since the metadata says this is a DOS-format file, the schema file automatically takes '\r\n' as the record delimiter string. Other files from the same source work fine. 2. I already checked the file in UNIX and it ha...
- Fri Mar 02, 2018 3:24 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: How to prevent the CHECKSUM stage from re-arranging column names
- Replies: 7
- Views: 7268
To answer everyone's query: we have resolved this issue, and it may be a good use case for the future. I am aware that DataStage puts a '|' after each field passed to the 'checksum' operator, and that it uses an MD5 hash. I tried two solutions and both worked: 1. In a generic stage, I wrote a small piece of code for the 'transform' o...
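Going by the detail in the post (a '|' appended after each field, then an MD5 hash), such a checksum could be reproduced outside DataStage roughly like this; the trailing-delimiter layout is an assumption taken from the post's wording, not confirmed behaviour:

```python
import hashlib

def ds_style_checksum(fields):
    """MD5 over the field values with '|' appended after each one,
    in the exact column order given (order matters)."""
    payload = "".join(str(f) + "|" for f in fields)
    return hashlib.md5(payload.encode("utf-8")).hexdigest()
```

Re-ordering the input fields changes the digest, which is exactly why the stage's column re-arrangement broke the match against the existing application's hashes.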
- Fri Mar 02, 2018 3:07 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: How to prevent the CHECKSUM stage from re-arranging column names
- Replies: 7
- Views: 7268
Best to open a support case then; the documentation doesn't seem to show that as an option. Out of curiosity, what 'existing application' was used to generate the checksums, and what did it use to generate them? I've had 'issues' trying to match checksums from different systems, hence the question. E...
- Fri Mar 02, 2018 3:01 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Issue while reading sequential file
- Replies: 5
- Views: 4234
Issue while reading sequential file
Hi, this is a very common error, but I did not get a suitable answer from other entries, so I am posting this as a new query. I am trying to read a CSV file which has '\r\n' as its record delimiter. It is a Windows file; I can see <CR><LF> in editors after each record. The schema file has been defined as below: record {...
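For reference, a minimal schema file that declares the DOS-style record delimiter explicitly might look like the sketch below; the field names and lengths are placeholders, not the poster's actual schema:

```
record {final_delim=end, record_delim_string='\r\n', delim=','} (
    field1: string[max=20];
    field2: string[max=20];
)
```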
- Mon Jan 15, 2018 10:10 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: How to prevent the CHECKSUM stage from re-arranging column names
- Replies: 7
- Views: 7268
Thanks Craig. I need to generate the CHECKSUM based on the column order, as there is a requirement to have the keys in a specific order. The existing application is running in production, where the hash was generated based on the order of the columns, and to match the existing hash values we need to follow the same order of keys/col...
- Mon Jan 15, 2018 4:46 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: SORT:Restrict Memory Usage
- Replies: 5
- Views: 8169
- Mon Jan 15, 2018 4:37 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: How to prevent the CHECKSUM stage from re-arranging column names
- Replies: 7
- Views: 7268
How to prevent the CHECKSUM stage from re-arranging column names
Hi everyone, I have an issue while computing a hash value using the 'CHECKSUM' stage. It seems that the CHECKSUM stage re-arranges the columns by name while computing the hash value. Example: ---------- Source --> Col1, Col2, Test_Val Case1: If I generate the checksum keeping the order as (Col1, Col2, Test_Va...
- Thu Jan 11, 2018 10:31 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: How to stop default type conversion in MODIFY stage
- Replies: 3
- Views: 2879
- Thu Jan 11, 2018 10:08 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: How to stop default type conversion in MODIFY stage
- Replies: 3
- Views: 2879
- Thu Jan 11, 2018 8:15 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: How to stop default type conversion in MODIFY stage
- Replies: 3
- Views: 2879
How to stop default type conversion in MODIFY stage
Hi, I have a generic job where I am handling NULLs and performing a TRIM for string-type columns. If I do not assign a data length to the output column, then by default it increases the length of the column. Example: CASE1: --------- Output_Col:string=handle_null(in_col,' ') Output_Col:st...
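A common way to stop the default widening is to give the output column an explicit bounded length in the specification; a sketch based on the post's own syntax, with an arbitrary example length of 20 (the real length should match the input column):

```
Output_Col:string[max=20] = handle_null(in_col, ' ')
```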