Hi,
I need to pull some data from Sybase database and also write some data back.
I can see three different Sybase stages in DataStage: Sybase Enterprise, Sybase IQ, and Sybase OC.
Since I do not see any Sybase Connector stage, I am confused about which stage to use and what scenarios decide the choice.
Search found 38 matches
- Tue May 03, 2016 2:13 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Best stage for Sybase database
- Replies: 1
- Views: 2240
- Thu Mar 17, 2016 1:27 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Datastage to Oracle connectivity
- Replies: 10
- Views: 6304
OK, so I re-ran the DS engine and the ASB agent, but I still couldn't connect. My question now is: DataStage runs the dsenv file, but when I log in to the DataStage UNIX server with my own ID, I do not see PATH and LD_LIBRARY_PATH resolved to the values provided in the dsenv file. Even t...
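One likely explanation for the gap described above: dsenv is sourced by the DataStage engine processes at startup, not by an interactive login shell, so its exports will not appear in your own session unless you source the file yourself. A minimal sketch of that check (the file path and library path below are placeholders, not a real install):

```shell
# Sketch: create a dsenv-style file and source it into the current shell.
# /tmp/dsenv and /opt/oracle/client/lib are placeholders for illustration.
cat > /tmp/dsenv <<'EOF'
LD_LIBRARY_PATH=/opt/oracle/client/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
EOF

# Source it the way the engine would (the real file usually lives under
# the DataStage engine home, e.g. $DSHOME/dsenv) and confirm the variable:
. /tmp/dsenv
echo "$LD_LIBRARY_PATH"
```

After sourcing, the variable is visible in that shell only; a fresh login starts without it, which matches the behavior described in the post.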
- Wed Mar 16, 2016 11:05 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Datastage to Oracle connectivity
- Replies: 10
- Views: 6304
Thanks, Chulett, for the reply. I found a lot of documents, including the configuration guide, and it has so many steps that look contradictory in many places. I am not even able to figure out what is in scope in my case and what is not, so please bear with me on this. Also, one more thing: once I add the ...
- Wed Mar 16, 2016 9:39 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Datastage to Oracle connectivity
- Replies: 10
- Views: 6304
So I have a 64-bit client machine; will the Oracle client I need to install be 32-bit or 64-bit? I heard somewhere that DataStage is a 32-bit system. Also, after installing the client, what changes do I need to make on the server side? I am specifically looking for the following: 1. D...
- Tue Mar 15, 2016 11:51 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Datastage to Oracle connectivity
- Replies: 10
- Views: 6304
Datastage to Oracle connectivity
Hi, I am working on a project where the DataStage administrator doesn't know much. I am facing issues connecting to an Oracle database through DataStage (via the Oracle Connector). 1. I have the DataStage client installed on my Windows machine and even have administrator access to the DataStage server on UNIX....
- Tue Mar 08, 2016 11:16 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Using the BDFS stage to connect to Hadoop server
- Replies: 6
- Views: 4405
- Mon Mar 07, 2016 1:54 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Using the BDFS stage to connect to Hadoop server
- Replies: 6
- Views: 4405
- Mon Mar 07, 2016 12:09 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Using the BDFS stage to connect to Hadoop server
- Replies: 6
- Views: 4405
- Thu Mar 03, 2016 12:27 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Using the BDFS stage to connect to Hadoop server
- Replies: 6
- Views: 4405
Using the BDFS stage to connect to Hadoop server
Hi, I want to extract some files from a Hadoop server to my DataStage server using the BDFS stage. I am able to ping the Hadoop server from the UNIX server, but when I use the BDFS stage, it is unable to connect to the specified server and port. I went through the links provided at the IBM Knowledge Center: htt...
- Wed Nov 04, 2015 11:38 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Restart job from the position it failed
- Replies: 5
- Views: 3674
Restart job from the position it failed
Hi,
Can I restart a job from the point where it failed? For example, if 1,000 records have already been inserted and the job aborts, I want to rerun the job starting after those 1,000 records. Is there any such DataStage functionality?
- Sun Nov 01, 2015 9:01 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Why not hash partitioning for lookup stage
- Replies: 7
- Views: 8620
Thanks, Ray. Sorry for asking so many questions, but I still have a slight doubt. So the lookup stage works faster because no sorting is required. By the same logic, if I use a join stage with entire partitioning and disable the APT_NO_SORT_INSERTION environment variable, will join and lookup g...
- Wed Oct 28, 2015 7:58 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Why not hash partitioning for lookup stage
- Replies: 7
- Views: 8620
- Sun Oct 25, 2015 6:11 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Why not hash partitioning for lookup stage
- Replies: 7
- Views: 8620
As far as I know, the lookup stage (in auto mode) partitions the master data with any partitioning method (except entire) and the reference data with entire partitioning. Is my understanding correct? Because if lookup used entire partitioning on both links (master and reference), then the output would have duplicate data. In the sam...
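The duplication concern in the post above can be made concrete with a small simulation. This is an illustrative sketch in Python, not DataStage itself: each "node" joins its slice of the master data against its copy of the reference data. With the reference on entire and the master split across nodes, each master row is emitted once; if the master is also replicated on entire, every node emits every row and the output is duplicated.

```python
# Illustrative simulation of link partitioning in a lookup-style join.
# Names and data here are made up for the example.

def lookup(master_parts, reference_parts):
    """Each node joins its master partition against its reference partition."""
    out = []
    for m_part, r_part in zip(master_parts, reference_parts):
        ref = dict(r_part)  # build an in-memory lookup table per node
        out.extend((k, v, ref.get(k)) for k, v in m_part)
    return out

master = [(1, "a"), (2, "b")]
reference = [(1, "X"), (2, "Y")]
nodes = 2

# Reference on "entire": every node holds the full copy (correct for lookup).
entire_ref = [list(reference)] * nodes
# Master split across nodes: each master row lives on exactly one node.
split_master = [master[i::nodes] for i in range(nodes)]
print(len(lookup(split_master, entire_ref)))   # 2 rows, no duplicates

# Master ALSO on "entire": every node emits every row -> duplicated output.
entire_master = [list(master)] * nodes
print(len(lookup(entire_master, entire_ref)))  # 4 rows, duplicated
```

With hash partitioning on both links instead, matching keys land on the same node, which is why hash works for join; the simulation only shows why entire on the master side cannot be right.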
- Fri Oct 23, 2015 5:33 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Why not hash partitioning for lookup stage
- Replies: 7
- Views: 8620
Why not hash partitioning for lookup stage
Why is it that we can use a hash partition for join but not for lookup?
Also, why can we use entire partitioning for the join stage? I mean, hash partition the master data and entire partition the reference data for a join stage.
- Mon Oct 05, 2015 10:56 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: 9 digit precision in timestamp in datafiles
- Replies: 3
- Views: 2707
9 digit precision in timestamp in datafiles
I have some data files in which the timestamp is like yyyy-mm-dd hh:mm:ss.000000000
So after the seconds field there are 9 digits (I think they denote nanoseconds). I need to load this data into Db2, but I don't think DataStage handles anything more than microseconds.
How can I load such data into Db2?
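One common workaround for the situation above, sketched here in plain Python rather than a DataStage transform: truncate the fractional seconds to 6 digits before parsing, since microsecond precision is the most the timestamp type can carry. The sample value is made up to match the format in the post.

```python
# Sketch: drop nanosecond digits so a 9-digit fractional timestamp
# fits a microsecond-precision target. Truncation loses the last
# 3 digits by design; round first if that matters for your data.
from datetime import datetime

def to_microseconds(ts: str) -> datetime:
    head, _, frac = ts.partition(".")
    # Keep at most 6 fractional digits, right-padding with zeros if fewer.
    return datetime.strptime(f"{head}.{frac[:6]:0<6}", "%Y-%m-%d %H:%M:%S.%f")

print(to_microseconds("2015-10-05 22:56:01.123456789"))
# 2015-10-05 22:56:01.123456
```

The same idea can be applied as a string operation in the job itself (read the field as a string, trim it to 6 fractional digits, then convert), so the database only ever sees a microsecond value.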