Search found 233 matches
- Fri Jun 19, 2015 10:44 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: loading large records into a Teradata table
- Replies: 10
- Views: 12584
loading large records into a Teradata table
Hello, I am trying to load data into a Teradata table from a file. One field has a size of 90k and is being loaded into a column defined as CLOB. I have defined it as long varchar in DataStage and am trying to load it. I am able to load records smaller than 64k, but it fails for records greate...
- Fri May 15, 2015 10:26 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: inserting CLOB data into a Teradata table
- Replies: 0
- Views: 1432
inserting CLOB data into a Teradata table
Hello, I am trying to insert CLOB data into Teradata tables. 1. When I use the Bulk access method it fails with "A column or character expression is larger than the max size" error. 2. When I use the Immediate option, I had to set the array size to 1 to complete it successfully, but this is very slow. It took ...
- Thu May 14, 2015 10:38 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Processing datasets
- Replies: 1
- Views: 1086
Processing datasets
Hello guys, I have a requirement. 1. The first job creates data sets and places them into a folder. 2. The second job loads these data sets into a table. 3. When the next data set arrives, it should be processed only after the current load completes. 4. Once a data set has finished loading it should be moved to arc...
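The sequential load-then-archive workflow described in that requirement can be sketched in Python (a hypothetical illustration, not DataStage code; the `load` callback stands in for whatever the second job actually does):

```python
import shutil
from pathlib import Path

def process_landing(landing: Path, archive: Path, load) -> list:
    """Load each dataset in arrival order, then move it to the archive folder."""
    archive.mkdir(parents=True, exist_ok=True)
    processed = []
    # Oldest-first ordering: a newer arrival waits until the current load is done.
    for ds in sorted(landing.glob("*.ds"), key=lambda p: p.stat().st_mtime):
        load(ds)  # blocks until this dataset's load completes
        shutil.move(str(ds), str(archive / ds.name))
        processed.append(ds.name)
    return processed
```

Because the loop is strictly serial, each dataset is archived before the next one is touched, which matches steps 3 and 4.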
- Sun Mar 22, 2015 1:12 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: delete on table which has simultaneous inserts
- Replies: 2
- Views: 1770
delete on table which has simultaneous inserts
Hello, I am creating a job that deletes records from a table X. I am using the DB2 connector stage. There is another process that inserts data into table X. Will my delete statement lock the table? I am using all the default settings in the connector stage. If both inserts and deletes happen simultaneously ...
- Tue Jan 06, 2015 2:59 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Vertical pivot
- Replies: 1
- Views: 1208
Vertical pivot
Hello, I have a scenario where the number of rows for each key column differs. I need to convert these rows into columns. The maximum number of rows I get for each key is 3. Example: TCN, SEQ_NUM, VALUE 0321 001 BK 0321 002 BF My output should be TCN, SEQ_NUM_1, SEQ_NUM_2, SEQ_NUM_3, VALUE_1,...
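The vertical pivot asked about there can be sketched in plain Python (an illustration of the transformation, not the DataStage Pivot Enterprise stage; `vertical_pivot` and its padding convention are hypothetical choices):

```python
from collections import defaultdict

def vertical_pivot(rows, width=3):
    """Turn (key, seq, value) rows into one row per key, with up to `width`
    SEQ_NUM_n / VALUE_n column pairs; missing slots are padded with ''."""
    grouped = defaultdict(list)
    for key, seq, value in rows:
        grouped[key].append((seq, value))
    out = []
    for key, pairs in grouped.items():
        # Pad out to a fixed width so every output row has the same columns.
        pairs = pairs[:width] + [("", "")] * (width - len(pairs))
        seqs = [p[0] for p in pairs]
        vals = [p[1] for p in pairs]
        out.append((key, *seqs, *vals))
    return out

# The sample data from the question:
rows = [("0321", "001", "BK"), ("0321", "002", "BF")]
print(vertical_pivot(rows))
# → [('0321', '001', '002', '', 'BK', 'BF', '')]
```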
- Tue May 08, 2012 8:29 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: write-through cache datasets
- Replies: 3
- Views: 2203
Following is the link to the PDF:
http://www.redbooks.ibm.com/redbooks/pdfs/sg247830.pdf
chapter2: topic 2.2.1
- Mon May 07, 2012 4:17 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: write-through cache datasets
- Replies: 3
- Views: 2203
write-through cache datasets
What does "write-through cache dataset" mean? I came across the term in an IBM book.
- Thu Feb 16, 2012 9:04 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: dumping dataset to a file
- Replies: 6
- Views: 2916
dumping dataset to a file
Hello,
I want to move data from a dataset to a sequential file using a command. I tried the following command and got an error:
orchadmin dump updateims21.ds update.txt
-ksh: orchadmin: not found [No such file or directory]
Thanks
- Tue Jan 24, 2012 10:49 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Job failed due to connection closed error
- Replies: 3
- Views: 3720
Job failed due to connection closed error
Hello, we have a job that loads 30-40 million records into a hash file. The job is getting aborted after 2 hours with the following error. I checked with the DataStage admin team and they said there is no issue on their end. There is no setting on the DataStage side which defines the timeout for the DB2 connect...
- Tue Oct 25, 2011 9:41 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: TRIM function
- Replies: 3
- Views: 1927
TRIM function
Hello, I want to trim the leading and trailing occurrences of *. Input field = ***abc***; the output should be abc. If I use TRIM('***abc***','*','B') I am getting the output as abc***. In the help section I found that 'B' should remove all the leading and trailing characters. Just Trim('***abc***','*') gave...
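For comparison, here is the both-ends trimming behavior the question expects, shown with Python's `str.strip` (only an analogue for illustration, not DataStage's `Trim` function):

```python
s = "***abc***"

# strip() removes ALL leading and trailing occurrences of the given character:
print(s.strip("*"))   # abc     (both ends, the behavior the 'B' option describes)
print(s.lstrip("*"))  # abc***  (leading only)
print(s.rstrip("*"))  # ***abc  (trailing only)
```

The reported output abc*** matches leading-only trimming, which is worth checking against the exact `Trim` arguments used.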
- Sat Oct 22, 2011 1:21 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: field function
- Replies: 4
- Views: 1361
field function
Hello,
I have used the Field function like this:
field("88/For the Home/Bathroom Furniture/Wall % Floor Cabinets/" ,"&",1)
The output I got is 88/For the Home/Bathroom Furniture/Wall % Floor Cabinets/
Per the manual it should have returned "".
Thanks
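Many field-extraction functions return the entire input as field 1 when the delimiter never occurs, which would explain the observed output; Python's `split` behaves that way (shown here purely as an analogue, it does not prove what DataStage's Field documents):

```python
s = "88/For the Home/Bathroom Furniture/Wall % Floor Cabinets/"

# "&" never occurs in s, so the first "&"-delimited field is the whole string:
first_field = s.split("&")[0]
print(first_field == s)  # True
```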
- Mon Oct 17, 2011 9:53 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: sort funnel and apt_grid_partition
- Replies: 2
- Views: 1708
sort funnel and apt_grid_partition
Hi, I am trying to create a sample job to test the sort funnel property in the Funnel stage. 1st source file column1: 2 6 4. 2nd file column1: 2 3 7. I am expecting the data to be sorted in ascending order, so my final file should be 2 2 3 4 6 7. But I got a different order. I added an environment variable $ap...
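One common pitfall with sort-funnel-style merges is that they only interleave inputs that are each already sorted; they do not perform a full sort. The expected result in the question can be reproduced in Python with `heapq.merge` (an analogue for illustration, not the Funnel stage itself):

```python
import heapq

file1 = [2, 6, 4]
file2 = [2, 3, 7]

# A sorted merge assumes each input is ALREADY sorted; feeding it unsorted
# inputs gives no global-order guarantee, so sort each input first.
merged = list(heapq.merge(sorted(file1), sorted(file2)))
print(merged)  # [2, 2, 3, 4, 6, 7]
```

Note that `file1` (2 6 4) is not sorted as given, which by itself would prevent a sort funnel from producing globally ordered output.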
- Tue Jun 21, 2011 3:37 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Job completed in Designer but still running in Director
- Replies: 1
- Views: 1249
Job completed in Designer but still running in Director
Hi,
I have an issue with a job. It shows as completed in Designer (green), but Director still shows it running. The after-job subroutine also executed and the file got zipped. Ideally the job finishes in 4 hours, but it has now been running for 11 hours since yesterday.
- Thu Dec 23, 2010 5:12 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: handling duplicate records
- Replies: 2
- Views: 2425
handling duplicate records
Guys, I normally use the Remove Duplicates stage to handle duplicates. There are many other options such as the Sort stage, Transformer stage, Aggregator, and hash file (Server edition). My questions are: 1. Which is the best way to handle duplicates if the volume of data is huge? 2. Is it better to ...
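The keep-first semantics that a Remove Duplicates stage typically applies can be sketched in Python (a hypothetical illustration; `dedupe_keep_first` is not a DataStage API):

```python
def dedupe_keep_first(rows, key):
    """Remove duplicates, keeping the first row seen for each key.
    Assumes all keys fit in memory; for huge volumes a sort-based,
    streaming approach (compare each row's key to the previous row's)
    avoids holding the key set in memory, which is why sorted dedup
    is usually preferred at scale."""
    seen = set()
    out = []
    for row in rows:
        k = key(row)
        if k not in seen:
            seen.add(k)
            out.append(row)
    return out

rows = [(1, "a"), (1, "b"), (2, "c")]
print(dedupe_keep_first(rows, key=lambda r: r[0]))  # [(1, 'a'), (2, 'c')]
```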
- Mon Sep 20, 2010 1:01 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Trim
- Replies: 4
- Views: 2007