Search found 38 matches
- Thu May 12, 2016 11:01 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Best way for comparison of multiple columns
- Replies: 4
- Views: 4601
Surely, I will try that as soon as I get DS access. The problem is that right now I am just at the design phase, with hardly any access to DataStage, but your point is taken. But could you please throw some light on the Checksum stage? Is a checksum always unique for a different set of values? How much are t...
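On the uniqueness question: a checksum is a hash, so it is not guaranteed unique; collisions are theoretically possible but vanishingly unlikely in practice. A rough sketch of the idea outside DataStage (plain Python `hashlib`, not the Checksum stage itself; the row values are made up):

```python
import hashlib

def row_checksum(values):
    # Join the column values with a separator that should not occur in the
    # data, then hash; any changed value changes the digest.
    joined = "\x1f".join(str(v) for v in values)
    return hashlib.md5(joined.encode("utf-8")).hexdigest()

before = ["K1", "Alice", "2016-05-12", "100.00"]
after  = ["K1", "Alice", "2016-05-12", "250.00"]

# Comparing one digest per row replaces comparing every column pair.
assert row_checksum(before) != row_checksum(after)
```

The separator matters: without it, ("ab", "c") and ("a", "bc") would hash identically.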
- Thu May 12, 2016 9:49 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Best way for comparison of multiple columns
- Replies: 4
- Views: 4601
Best way for comparison of multiple columns
Hi, I have a before and an after dataset and I want to identify all the updated records in the after dataset. The problem is that I need to compare records based on all the columns (1 key column and all the others as change values). I think doing it through a CDC or SCD stage will take a long time as the job would be compar...
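The comparison being described (look up each after-image row by its key and flag it when any other column differs) can be sketched in plain Python; the column names below are invented for illustration:

```python
def changed_records(before_rows, after_rows, key="id"):
    """Return after-image rows whose columns differ from the before image."""
    before_by_key = {row[key]: row for row in before_rows}
    return [row for row in after_rows
            if row[key] in before_by_key and before_by_key[row[key]] != row]

before = [{"id": 1, "name": "A", "amt": 10}, {"id": 2, "name": "B", "amt": 20}]
after  = [{"id": 1, "name": "A", "amt": 10}, {"id": 2, "name": "B", "amt": 25}]
print(changed_records(before, after))  # only the id 2 row differs
```

Comparing a per-row checksum instead of the whole row dict is the same pattern with less data movement.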
- Mon Apr 11, 2016 10:42 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Job failing due to (code 139)
- Replies: 3
- Views: 3518
- Mon Apr 11, 2016 10:19 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Job failing due to (code 139)
- Replies: 3
- Views: 3518
Job failing due to (code 139)
Hi, One of my jobs is failing with the following error: Parallel job reports failure (code 139) Surprisingly, the same job ran fine in the dev environment but is failing in the QA environment. If I create a copy of the job and run the copy, it runs fine. Earlier we also faced the same issue with one of the ...
- Wed Mar 16, 2016 11:03 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Error while fetching records through Oracle connector stage
- Replies: 2
- Views: 2582
Error while fetching records through Oracle connector stage
Hi, I am trying to connect to Oracle through the Connector stage. I am able to view data through the 'view data' option, but when I run the job to extract data, it gives me the following error: Oracle_Connector_16: The OCI function OraOCIEnvNlsCreate:NLS_LANG returned status -1. Error code: NULL, Error ...
- Thu Mar 10, 2016 10:19 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Bulk insert option in Teradata connector
- Replies: 6
- Views: 6310
As far as I know, Teradata tables generally have a primary index. I think only from version 13 and above has this requirement been made optional; before that it was mandatory to have a primary index on Teradata tables (else Teradata creates one itself). So if I want to use FastLoad in versions before 13, wil...
- Thu Mar 03, 2016 12:02 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: File FTP in Datastage vs UNIX
- Replies: 8
- Views: 4999
- Thu Mar 03, 2016 11:51 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: File FTP in Datastage vs UNIX
- Replies: 8
- Views: 4999
- Thu Mar 03, 2016 10:04 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: File FTP in Datastage vs UNIX
- Replies: 8
- Views: 4999
File FTP in Datastage vs UNIX
Hi,
I want to FTP a file from the DataStage UNIX server to a different server.
Will the DataStage FTP stage be better than the UNIX FTP command?
Also, what kind of configuration is needed between the two servers to enable FTP?
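Whichever route is chosen, the target server needs a running FTP service reachable from the DataStage host, plus a valid login for the transfer user. As a neutral point of comparison, the transfer itself is only this much logic; the host, credentials, and paths below are placeholders:

```python
from ftplib import FTP

def ftp_put(host, user, password, local_path, remote_name):
    # Connect, log in, and upload the file in binary mode.
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        with open(local_path, "rb") as f:
            ftp.storbinary(f"STOR {remote_name}", f)

# Example call (placeholder values):
# ftp_put("target.example.com", "dsadm", "secret", "/data/out.dat", "out.dat")
```

Note that plain FTP sends credentials in clear text; where both servers support it, SFTP or scp is often preferred.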
- Wed Nov 04, 2015 11:40 am
- Forum: General
- Topic: Version control in Datastage
- Replies: 2
- Views: 2395
Version control in Datastage
Hi,
Is there a way to do version control in DataStage?
If not, do tools like SVN and other third-party tools support DataStage versioning as well?
- Fri Oct 09, 2015 11:21 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Required logic to perform
- Replies: 5
- Views: 4313
- Fri Oct 09, 2015 5:52 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Handling time with variable length
- Replies: 3
- Views: 3830
- Wed Oct 07, 2015 12:29 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Handling time with variable length
- Replies: 3
- Views: 3830
Handling time with variable length
Hi,
I have a time field which can be either hhmmss or hmmss; the hour can be 1-2 digits without a '0' being prefixed. How can I convert such a string to a time? I used %(h,s)nnss but that didn't work.
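One way around a variable-width hour is to left-pad the string to six digits first, so a fixed-width hhmmss format always applies. A Python sketch of the padding idea (in DataStage the analogous move would be padding the string before the time conversion):

```python
from datetime import datetime

def parse_time(s):
    # "95959" -> "095959": left-pad to six digits, then parse fixed-width.
    return datetime.strptime(s.zfill(6), "%H%M%S").time()

print(parse_time("95959"))   # hour given as a single digit
print(parse_time("235959"))  # already six digits; zfill leaves it alone
```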
- Mon Oct 05, 2015 11:01 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Auto partition in sort stage with a key
- Replies: 2
- Views: 2840
Auto partition in sort stage with a key
Hi, I am sorting some data in the Sort stage and creating a key change column. My doubt is: does DataStage partition the data as well when it has to sort it? I mean, if I sort the data on a primary key and set the partition mode to 'Auto', will DataStage automatically hash partition the data...
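Setting aside what 'Auto' actually chooses, the operation being asked about (group rows by the sort key, then flag the first row of each group) is what a key change column computes. A plain-Python sketch of that logic, with made-up rows:

```python
from itertools import groupby

rows = [("B", 2), ("A", 1), ("A", 3), ("B", 4)]

# Sort on the key, then mark the first row of each key group with 1
# (the equivalent of the Sort stage's key change column).
flagged = []
for _, group in groupby(sorted(rows), key=lambda r: r[0]):
    for i, row in enumerate(group):
        flagged.append(row + (1 if i == 0 else 0,))

print(flagged)  # first row of each key group carries the flag 1
```

The sketch also shows why partitioning matters: the grouping is only correct if all rows with the same key end up together.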
- Mon Oct 05, 2015 8:14 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Parameterized primary key for sorting and partitioning
- Replies: 0
- Views: 1464
Parameterized primary key for sorting and partitioning
Hi, I have a generic job to load data from a sequential file into a dataset with the data sorted and partitioned (and duplicates removed). I always have a single-column primary key, never a composite one. Can I use a parameterized primary key for loading the data using RCP (since the schema is being pass...