Am I looking at the correct variable?
APT_DUMP_SCORE=1
Thanks,
Search found 109 matches
- Fri Jun 28, 2013 9:52 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Aggregator stage error
- Replies: 17
- Views: 8081
- Fri Jun 28, 2013 9:37 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Aggregator stage error
- Replies: 17
- Views: 8081
- Fri Jun 28, 2013 9:32 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Aggregator stage error
- Replies: 17
- Views: 8081
Yes ArndW, I am calculating the total sum in an Aggregator stage, then using a Sort stage, a Transformer, and then a Join, and finally loading into an Oracle table. I am capturing key change in the Sort stage, and after the Transformer stage I dropped those key_change columns. Apart from that I am using everythin...
- Fri Jun 28, 2013 9:17 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Aggregator stage error
- Replies: 17
- Views: 8081
- Fri Jun 28, 2013 8:59 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Bulk load
- Replies: 9
- Views: 2428
- Fri Jun 28, 2013 8:58 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Bulk load
- Replies: 9
- Views: 2428
- Fri Jun 28, 2013 8:52 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Aggregator stage error
- Replies: 17
- Views: 8081
Aggregator stage error
Hi, I am getting an error on the Aggregator stage: "Aggregator_43: Error when checking operator: Could not find input field". The job was running fine until 3 days ago; it suddenly started throwing this error when I tried to run it today. I checked the input columns and the data, everything seems fine, and stil...
- Thu Jun 27, 2013 7:19 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Bulk load
- Replies: 9
- Views: 2428
Re: Bulk load
Do you mean first write the entire data to a text file and then load it into the Oracle table? SURA wrote: If you are doing any transformation, then write the output into a text file and find a way to load that file using a bulk load command.
I did the same approach in SQL Server.
Thank you,
- Thu Jun 27, 2013 7:17 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Bulk load
- Replies: 9
- Views: 2428
If this is Oracle to Oracle you may find that doing the whole thing within Oracle using Oracle utilities is faster. You may still choose to initiate/control that from DataStage. If you prefer to ... Hi Ray, could you please guide me in that direction: how can I initiate/control that from DataStage? Than...
- Wed Jun 26, 2013 2:24 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Delete and Insert
- Replies: 4
- Views: 1485
- Wed Jun 26, 2013 1:33 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Delete and Insert
- Replies: 4
- Views: 1485
- Wed Jun 26, 2013 11:46 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Delete and Insert
- Replies: 4
- Views: 1485
Delete and Insert
Hi, please suggest the best way. First run: Source: 30 million rows, Target: 30 million rows. After the first run, from the second run onwards I have to delete the 2013 records on the target and insert the new 2013 records. I designed the job like: Oracle connector -----> Oracle connector. Source side: select * from ***** where...
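The delete-then-insert pattern described above (remove the target's 2013 rows, then reload them from the source) can be sketched outside DataStage. A minimal illustration using Python's sqlite3 standard library as a stand-in for the Oracle target; the table and column names are hypothetical, not from the post:

```python
import sqlite3

# In-memory database standing in for the Oracle target (schema is made up).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER, yr INTEGER, amount REAL)")

# Existing target data: rows from 2012 and 2013.
conn.executemany("INSERT INTO target VALUES (?, ?, ?)",
                 [(1, 2012, 10.0), (2, 2013, 20.0), (3, 2013, 30.0)])

# Fresh 2013 rows arriving from the source on a subsequent run.
source_2013 = [(2, 2013, 25.0), (3, 2013, 35.0), (4, 2013, 40.0)]

# Step 1 (e.g. Before SQL in the Oracle connector): delete the target's 2013 rows.
conn.execute("DELETE FROM target WHERE yr = 2013")

# Step 2 (the main insert): load the incoming 2013 rows.
conn.executemany("INSERT INTO target VALUES (?, ?, ?)", source_2013)
conn.commit()

rows = conn.execute("SELECT id, yr, amount FROM target ORDER BY id").fetchall()
print(rows)  # the 2012 row is untouched; the 2013 rows are replaced
```

In the DataStage job itself, the DELETE would typically go in the connector's Before SQL statement so it runs once, before the bulk insert of the new rows.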
- Wed Jun 26, 2013 11:18 am
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Bulk load
- Replies: 9
- Views: 2428
Bulk load
Hi,
I am using the bulk load option in the Oracle connector on the target side, with partitioning enabled on the source side. I am getting around 30 million records from the source. I am just curious to know whether there is a better, more efficient way to do the transfer.
Thank you,
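The suggestion elsewhere in this thread, doing the load with native Oracle utilities instead of the connector, usually means SQL*Loader with a direct-path load. A minimal control file for that approach might look like the following; the file names, table name, and columns are assumptions for illustration, not from the thread:

```
-- load_target.ctl: SQL*Loader control file (hypothetical names)
OPTIONS (DIRECT=TRUE)
LOAD DATA
INFILE 'extract.dat'
APPEND
INTO TABLE target_table
FIELDS TERMINATED BY '|'
(id, yr, amount)
```

This would be invoked as `sqlldr userid=... control=load_target.ctl`, which DataStage can initiate from an after-job subroutine (ExecSH) or an External Target stage, keeping job control in DataStage while the heavy lifting happens inside Oracle.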
- Tue Jun 25, 2013 3:30 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Filter on Max Date ?
- Replies: 3
- Views: 1623
- Tue Jun 25, 2013 3:27 pm
- Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Concatenation in transformer ?
- Replies: 9
- Views: 5132
If you want to pursue the first method, show us the syntax you used and we'll go from there. ... Stage: DSLink2.KEY : '01' --- DATE (varchar); StringToDecimal(DATE) --- Output (decimal). Output: Output --- Date_output (decimal). But I am getting the same output as the input. So I modified the logic like this: Stage...
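The transformer logic in that excerpt, concatenating a key with the literal '01' and then converting the resulting string to a decimal, can be mimicked outside DataStage. A small Python sketch of the two derivations; the field name and sample value are illustrative, not from the post:

```python
from decimal import Decimal

# DataStage uses ':' for string concatenation; Python uses '+'.
def derive(key: str) -> Decimal:
    date_varchar = key + "01"      # DATE = DSLink2.KEY : '01'   (varchar)
    return Decimal(date_varchar)   # Date_output = StringToDecimal(DATE)

print(derive("201306"))  # Decimal value 20130601
```

The point of the two-step derivation is that the concatenation produces a varchar, and only the explicit StringToDecimal (here `Decimal(...)`) changes the type; if the output column's metadata is still varchar, the conversion appears to have no effect, which matches the "output same as input" symptom described above.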