Search found 48 matches
- Wed Jun 11, 2014 3:48 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Limit to Multiple instance of sequences and D's parallel job
- Replies: 5
- Views: 3096
- Mon Jun 09, 2014 11:11 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Limit to Multiple instance of sequences and D's parallel job
- Replies: 5
- Views: 3096
Limit to Multiple instance of sequences and D's parallel job
Hi, DS V9.1, AIX V7.0. We have designed a common, generic, RCP-driven DataStage parallel job to read multiple types of source files (EBCDIC and ASCII formats with multiple file layouts) using a set of Filter, Sequential File, and Generic stages, loading the data into a table in a Teradata database. I wanted to know is there ...
- Fri Mar 01, 2013 8:25 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Compilation Issue
- Replies: 7
- Views: 5067
- Wed Feb 27, 2013 11:42 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Compilation Issue
- Replies: 7
- Views: 5067
- Tue Feb 26, 2013 10:46 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Compilation Issue
- Replies: 7
- Views: 5067
- Tue Feb 26, 2013 6:16 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Compilation Issue
- Replies: 7
- Views: 5067
Compilation Issue
Hi, I am getting a compilation error. The job reads data from a sequential flat file and performs data validation (data-type checks for Date, Decimal, and Integer, plus Null/Empty checks) on the incoming fields in the source file. There are approximately 150+ fields where the data validation is being done. On...
- Wed Apr 18, 2012 3:55 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: ANSI to UTF-8 Conversion
- Replies: 0
- Views: 2255
ANSI to UTF-8 Conversion
Hi All, I want to convert a sequential file from ANSI to UTF-8 format. I have tried setting the NLS map to UTF-8 at the project level, and the NLS map at the stage level is also set to UTF-8 just to make sure. The record delimiter is set to UNIX Newline. Still the file is not getting created in the ...
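Independent of the DataStage NLS settings, the underlying conversion the poster wants can be sketched in plain Python. This is only an illustration of the ANSI-to-UTF-8 step, assuming cp1252 as the "ANSI" code page (the actual code page depends on the source system):

```python
def ansi_to_utf8(src_path: str, dst_path: str, src_encoding: str = "cp1252") -> None:
    """Rewrite a text file from an assumed ANSI code page to UTF-8,
    normalizing record delimiters to UNIX newlines."""
    # newline=None enables universal-newline reading (\r\n and \r become \n);
    # newline="\n" on the output keeps UNIX newlines untranslated.
    with open(src_path, "r", encoding=src_encoding, newline=None) as src, \
         open(dst_path, "w", encoding="utf-8", newline="\n") as dst:
        for line in src:
            dst.write(line)
```

The function name and the cp1252 default are assumptions for the sketch; substitute the real source code page if it is known.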
- Fri Feb 25, 2011 2:38 am
- Forum: General
- Topic: Problem with Email Notification Stage
- Replies: 5
- Views: 2975
- Fri Feb 25, 2011 1:59 am
- Forum: General
- Topic: Problem with Email Notification Stage
- Replies: 5
- Views: 2975
- Fri Feb 25, 2011 1:48 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Passing Dataset value to Oracle Database stage
- Replies: 4
- Views: 2509
- Thu Feb 17, 2011 4:56 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Load same teradata table 2 times in a single job
- Replies: 21
- Views: 10737
Hi Vidyut, I knew that the first option would result in a deadlock; that's why I had asked you to handle it :wink: Another option is: load all the unique records using the FastLoad method and write all the duplicate records to a file, then write a BTEQ script to load the duplicate records into the sam...
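The BTEQ step described above can be sketched as a small Python helper that composes the import script for the duplicates file. The table name, column list, file path, and VARCHAR width here are all hypothetical, and the BTEQ syntax should be verified against your Teradata release:

```python
def build_bteq_insert_script(table: str, columns: list, data_file: str) -> str:
    """Compose a minimal BTEQ import script that inserts each row of a
    comma-delimited duplicates file into the target table.
    All names are placeholders for illustration."""
    # VARTEXT import fields must be read as VARCHAR; 100 is an assumed width.
    using = ", ".join(f"{c} (VARCHAR(100))" for c in columns)
    cols = ", ".join(columns)
    vals = ", ".join(f":{c}" for c in columns)
    return "\n".join([
        f".IMPORT VARTEXT ',' FILE = {data_file};",
        ".REPEAT *",
        f"USING ({using})",
        f"INSERT INTO {table} ({cols})",
        f"VALUES ({vals});",
    ])
```

In practice the .LOGON/.LOGOFF commands and error handling would wrap this script before it is submitted to BTEQ.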
- Thu Feb 17, 2011 3:25 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Load last 2 Entries
- Replies: 14
- Views: 6137
- Thu Feb 17, 2011 2:33 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Load same teradata table 2 times in a single job
- Replies: 21
- Views: 10737
As you said, the duplicate records are very minimal (in the hundreds). Identify these duplicate records and separate them from the main stream. Using link ordering, execute the link that has only the duplicate records and then the other one. Something like below. Source ------> Identify Unique/Duplicates ...
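The unique/duplicate split described above can be sketched generically in Python: the first occurrence of each key goes to the main (unique) stream, and every repeat goes to the duplicates stream. The function name and key callable are illustrative, not part of any DataStage API:

```python
def split_unique_duplicates(records, key):
    """Route the first occurrence of each key to 'unique' and every
    subsequent occurrence to 'dups', mirroring the two-link split."""
    seen = set()
    unique, dups = [], []
    for rec in records:
        k = key(rec)
        if k in seen:
            dups.append(rec)
        else:
            seen.add(k)
            unique.append(rec)
    return unique, dups
```

In the job itself the two output streams would feed separate links, with link ordering controlling which load runs first.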
- Tue Feb 15, 2011 10:26 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: How to get the last record in a sur key state file
- Replies: 2
- Views: 2202
- Fri Feb 11, 2011 3:20 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: CSV file issue
- Replies: 4
- Views: 3422
If you are sure that you will get values separated by a comma in the field Company Name, then read the entire record as a single column and then split based on the field length. Remember, while splitting, the starting position of the second column will be n+1, as there is a comma separating the first...
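The positional split described above can be sketched in Python, assuming the first column has a known fixed length n so the comma sits at index n and the second column starts at n+1. The function name is illustrative:

```python
def split_fixed_then_rest(record: str, first_len: int):
    """Split a record read as a single column: the first field occupies
    positions 0..first_len-1, the comma sits at index first_len, and the
    remainder (which may itself contain commas) starts at first_len + 1."""
    return record[:first_len], record[first_len + 1:]
```

Because everything after the separating comma is kept as one value, an embedded comma in the Company Name no longer breaks the parse.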