Search found 200 matches
- Fri Apr 06, 2018 1:28 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Range Lookup question
- Replies: 2
- Views: 2357
Range Lookup question
Hi, I have data like this: key, start, end 1,1,100 1,80,140 2,200,300 4,250,300 Here I am looking to reject/capture the first two records, where there is an overlap, i.e. for the same key, the end should not fall between the start and end of the other record (with the same key), and the start should no...
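A quick sketch of the overlap check being asked for, in plain Python rather than DataStage (in a job this would typically be stage variables in a Transformer after sorting on key/start). Column names are assumed from the excerpt; both members of an overlapping pair are rejected:

```python
rows = [
    {"key": 1, "start": 1,   "end": 100},
    {"key": 1, "start": 80,  "end": 140},
    {"key": 2, "start": 200, "end": 300},
    {"key": 4, "start": 250, "end": 300},
]

def overlaps(a, b):
    # Two closed ranges overlap when each starts no later than the other ends.
    return a["start"] <= b["end"] and b["start"] <= a["end"]

# Reject every record that overlaps another record with the same key.
rejected = [r for i, r in enumerate(rows)
            if any(r["key"] == o["key"] and overlaps(r, o)
                   for j, o in enumerate(rows) if i != j)]
accepted = [r for r in rows if r not in rejected]
```

This is O(n²) for clarity; sorting by key and start and comparing each row to the running maximum end of its key group does the same check in one pass.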
- Mon Mar 12, 2018 12:06 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: double quotes in data
- Replies: 3
- Views: 3498
- Mon Mar 12, 2018 11:37 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: double quotes in data
- Replies: 3
- Views: 3498
double quotes in data
Hi, I have double quotes in the data and DataStage is not recognizing the field. Data looks like this: xxx,yyy,"last,first", "yyyy""",jjj This is a .csv file and I have given the quote character as double quote. This record is not being read in the Sequential File stage at all, so if I could ...
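For reference, the standard CSV escape for a literal double quote inside a quoted field is to double it, so `"yyyy"""` means the value `yyyy"`. A minimal check with Python's csv module (note: the space before the quote in the sample line would break quote recognition in most parsers, so it is removed here; in a Sequential File stage the quote option expects the quote to start immediately after the delimiter):

```python
import csv, io

line = 'xxx,yyy,"last,first","yyyy""",jjj'
fields = next(csv.reader(io.StringIO(line)))
print(fields)  # ['xxx', 'yyy', 'last,first', 'yyyy"', 'jjj']
```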
- Mon Mar 12, 2018 10:28 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Record by record comparison
- Replies: 1
- Views: 1636
Record by record comparison
Hi, I have some data like this: KeyID,Data1,Data2 1,test1,test2 1,mmm,nnn 2,ttt,bbb 2,444,888 1,vvv,bbb 1,sss,fff And I want the data like this: KeyID,Data1,Data2,Data3,Data4 1,test1,test2,mmm,nnn 2,ttt,bbb,444,888 1,vvv,bbb,sss,fff So, when there is a key, I would like to take all corresponding data ...
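What is being asked for is a pairing of consecutive rows that share a key (in DataStage this would usually be stage variables in a Transformer, or a horizontal Pivot). A plain-Python sketch of the consecutive-pair merge, using the sample data from the post:

```python
rows = [
    ("1", "test1", "test2"), ("1", "mmm", "nnn"),
    ("2", "ttt", "bbb"),     ("2", "444", "888"),
    ("1", "vvv", "bbb"),     ("1", "sss", "fff"),
]

merged, pending = [], None
for row in rows:
    if pending is not None and pending[0] == row[0]:
        # Same KeyID as the buffered row: emit KeyID,Data1..Data4.
        merged.append(pending + row[1:])
        pending = None
    else:
        if pending is not None:
            merged.append(pending)  # unpaired row passes through as-is
        pending = row
if pending is not None:
    merged.append(pending)
```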
- Tue Jan 30, 2018 12:32 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Delete then insert option
- Replies: 5
- Views: 5677
Thanks Rick and Craig!! And it looks like the array size is the one responsible for the job working well when the record count is low. Since the array size was set to 2000, it didn't cause any problem when it processed 1000 records or so. I tested it by increasing it to 5000 and it worked fine if the record c...
- Mon Jan 29, 2018 6:10 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Delete then insert option
- Replies: 5
- Views: 5677
My data is like this: ID value 1 abcd 1 defg 1 rtrt 1 6767 1 7887 As such there is no key in the table. I am specifying the ID column as the key column in DataStage for the delete-then-insert option. My assumption was that when I give "Delete then insert" based on the key (ID), it deletes all the records with ...
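The intended end state of that assumption can be sketched with SQLite as a stand-in for Oracle: delete every existing row for the ID, then insert the new rows, inside one transaction. Table and column names are illustrative only; note that DataStage's "Delete then insert" actually issues the keyed delete/insert per input row, which is why array size can interact with it:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER, value TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, "abcd"), (1, "defg"), (2, "keep")])

new_rows = [(1, "rtrt"), (1, "6767")]
with con:  # one transaction: clear out the ID, then load its replacement rows
    con.execute("DELETE FROM t WHERE id = ?", (1,))
    con.executemany("INSERT INTO t VALUES (?, ?)", new_rows)

print(con.execute("SELECT * FROM t ORDER BY id, value").fetchall())
# [(1, '6767'), (1, 'rtrt'), (2, 'keep')]
```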
- Wed Jan 24, 2018 11:19 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Delete then insert option
- Replies: 5
- Views: 5677
Delete then insert option
Hi, I am using the "Delete then insert" option to load data into a target Oracle table. The source is a .csv file. Every file is identified with an id, and data needs to be replaced if we get the same file again, even with different data. I have defined the key as the file id and the load option as "De...
- Tue Nov 21, 2017 12:20 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: insert/update with no key
- Replies: 4
- Views: 3457
insert/update with no key
Hi, I have to load an Oracle table from a file. There are no keys in the table. In this situation, what would be the best approach to load the table with restartability in place? If the job aborts in the middle with some rows inserted, how do I make sure that the same records don't insert into the tabl...
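One common answer when the target has no natural key is to make the load all-or-nothing: insert the whole file in a single transaction, so an abort commits nothing and a rerun cannot duplicate rows. A sketch with SQLite standing in for Oracle (table and column names are made up; in DataStage the analogous knob is the transaction/commit size, set so the whole load is one unit, or a staging-table swap):

```python
import sqlite3, csv, io

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE target (col1 TEXT, col2 TEXT)")

data = io.StringIO("a,1\nb,2\nc,3\n")
try:
    with con:  # commit only if every row inserts cleanly
        con.executemany("INSERT INTO target VALUES (?, ?)", csv.reader(data))
except sqlite3.Error:
    pass  # nothing was committed; safe to rerun from the top

print(con.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # 3
```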
- Mon Nov 13, 2017 12:13 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Date format issues
- Replies: 2
- Views: 2601
Date format issues
Hi, I have a date in the file as 10/11/17 and I am trying to load it into the table. When I use this function StringToDate(Trim(DSLink2.File_Date),'%(d,s)/%(m,s)/%yy'), I tried to output to a sequential file and its output is "1917-11-10". Why is it giving the output as 1917 instead of 2017? Is t...
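The 1917 result suggests the two-digit `%yy` is being resolved against a 1900 base; two-digit years are ambiguous and every parser applies some pivot rule (DataStage's format string has a century/cutoff mechanism for this, which is worth checking in the stage documentation). For contrast, Python's `%y` pivots at 69, mapping 00-68 to 2000-2068, so the same string yields 2017:

```python
from datetime import datetime

# Python's %y rule: 00-68 -> 20xx, 69-99 -> 19xx, so "17" becomes 2017.
d = datetime.strptime("10/11/17", "%d/%m/%y")
print(d.date())  # 2017-11-10
```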
- Mon Nov 13, 2017 12:05 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Problem in reading .csv file
- Replies: 5
- Views: 4888
- Thu Nov 09, 2017 5:28 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Problem in reading .csv file
- Replies: 5
- Views: 4888
- Thu Nov 09, 2017 4:25 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Problem in reading .csv file
- Replies: 5
- Views: 4888
Problem in reading .csv file
Hi, I have a .csv format file and it comes in the following way. For example, it has 4 fields: a,345,cd,678 b,"6,878",cd,"9,870" c,"7,989",fg,"7,880" d,567,cd,989 The second and the fourth fields are numeric, and if it is more than a 3-digit number then it comes wit...
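With the quote character set, a CSV parser will return `"6,878"` as the single field `6,878`; the remaining step is stripping the embedded thousands separators before converting to a number. A sketch on the sample rows (the choice of which columns are numeric is taken from the post):

```python
import csv, io

text = '''a,345,cd,678
b,"6,878",cd,"9,870"
c,"7,989",fg,"7,880"
d,567,cd,989
'''

cleaned = []
for row in csv.reader(io.StringIO(text)):
    # Fields 2 and 4 are numeric; remove the embedded thousands separators.
    row[1] = int(row[1].replace(",", ""))
    row[3] = int(row[3].replace(",", ""))
    cleaned.append(row)

print(cleaned[1])  # ['b', 6878, 'cd', 9870]
```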
- Mon Aug 13, 2012 5:06 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Hash value in output
- Replies: 1
- Views: 1560
Hash value in output
Hi, I would like to output a key value (like a hash key) for the input which I send. For the same values I would need to get the same key values all the time. For example, for the value "xyz, bg road", if a key is returned, something like 9389494949, then every time I give the input the same...
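Any deterministic hash gives this "same input, same key" property (DataStage also has a Checksum stage for the same purpose). A generic sketch using SHA-256 over a lightly normalized string; the normalization and the 12-character truncation are illustrative choices, and a truncated digest can in principle collide, so it should not serve as a guaranteed-unique key:

```python
import hashlib

def stable_key(value: str) -> str:
    # SHA-256 of the normalized input: same string in, same digest out,
    # across runs and machines. Truncated for readability.
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()[:12]

print(stable_key("xyz, bg road") == stable_key("xyz, bg road"))  # True
```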
- Thu Jun 23, 2011 12:20 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: update very slow
- Replies: 4
- Views: 3775
update very slow
Hi, I have a job which reads a file and updates a table (Oracle) based on the unique key of the table. The file has 100 million records. The job has been running for more than 30 hours now and the log still shows Progress: 20 percent. When I checked for some of the rows in the file in the table, the r...
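A frequent cause of slow keyed updates at this volume is one round trip and one commit per row; batching many updates per statement execution and per commit removes most of that overhead (in DataStage the analogous knobs are the array size and the transaction/commit size on the target stage). A small illustration of the batched pattern with SQLite as a stand-in:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (k INTEGER PRIMARY KEY, v TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(i, "old") for i in range(10000)])

updates = [("new", i) for i in range(10000)]
with con:  # a single commit for the whole batch instead of one per row
    con.executemany("UPDATE t SET v = ? WHERE k = ?", updates)

print(con.execute("SELECT COUNT(*) FROM t WHERE v = 'new'").fetchone()[0])  # 10000
```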
- Mon Jun 20, 2011 12:32 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: sparse lookup-join
- Replies: 4
- Views: 3301