Search found 456 matches

by elavenil
Tue Dec 26, 2006 5:45 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: SQL Server Enterprise
Replies: 2
Views: 886

Thanks Ray for your input.

I will try using ODBC Enterprise stage then.

Regards
Elavenil
by elavenil
Tue Dec 26, 2006 1:25 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: SQL Server Enterprise
Replies: 2
Views: 886

SQL Server Enterprise

Hi, We are in the process of setting up the development environment for our data warehouse project. DataStage 7.5.1A EE has been installed on the server; we have configured the Oracle database on the ETL server, and we need to connect to a SQL Server database as well. I could not find any tips to configure...
by elavenil
Sat Dec 16, 2006 1:52 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Link Rowcount
Replies: 3
Views: 2508

Thanks for your response.

Regards
Elavenil
by elavenil
Fri Dec 15, 2006 12:32 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Link Rowcount
Replies: 3
Views: 2508

Link Rowcount

Hi, We are in the process of defining the process required for a DW implementation. For reconciliation purposes, we read link information using DSGetLinkInfo to get the row counts of the links in the extraction and transformation jobs. The load count is extracted from the target database. I h...
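For reference, a link's row count can be picked up in an after-job subroutine with the DataStage BASIC routine DSGetLinkInfo. The following is only a minimal sketch; the stage name "xfmTransform" and link name "lnkOutput" are hypothetical placeholders, so substitute the names from your own job design.

* After-job subroutine sketch (DataStage BASIC). The stage and link
* names below are placeholders; use the ones from your own job.
Subroutine LogLinkRowCount(InputArg, ErrorCode)
$INCLUDE DSINCLUDE JOBCONTROL.H
   RowCount = DSGetLinkInfo(DSJ.ME, "xfmTransform", "lnkOutput", DSJ.LINKROWCOUNT)
   Call DSLogInfo("Row count on lnkOutput: " : RowCount, "LogLinkRowCount")
   ErrorCode = 0   ;* 0 tells DataStage the routine completed successfully
Return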
by elavenil
Tue Nov 21, 2006 7:39 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Windows Vs UNIX
Replies: 8
Views: 2804

Windows Vs UNIX

We are in the process of starting the DW project for one of the banks. A DataStage EE licence has been purchased, and we are planning to use Windows OS for development and AIX for production. I would like to have the following information when the jobs are migrated to production from the development environment. 1. Parall...
by elavenil
Wed Oct 25, 2006 4:18 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Parallel job is being Aborted -- showing Not enough space
Replies: 14
Views: 8255

We had a similar problem in my earlier project, so check the file size limit for the user that is used to execute the job; the lookup table size might have crossed that limit. The solution would be to increase the limit for that user and rerun the job. A sketch of how to check the limit from within a job is below.
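One rough way to see the limit from inside DataStage is to shell out from a before-job subroutine and write the result to the job log. This is only a sketch: it assumes a UNIX engine host, and the units reported by ulimit -f depend on the shell.

* Before-job subroutine sketch (DataStage BASIC): log the file size limit
* of the account that runs the job. Assumes a UNIX engine host.
Subroutine LogFileSizeLimit(InputArg, ErrorCode)
   Call DSExecute("UNIX", "ulimit -f", Output, SysRet)
   Call DSLogInfo("ulimit -f reports: " : Output<1>, "LogFileSizeLimit")
   ErrorCode = 0
Return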

HTWH.

Regards
Saravanan
by elavenil
Mon Aug 21, 2006 4:40 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: Write failed record ID
Replies: 10
Views: 3099

Thanks Anupam for the response.

I thought deletion of the hashed file is faster than clearing a file. Can you please share your experience on why hashed files cannot be deleted?

Regards
Elavenil
by elavenil
Mon Aug 21, 2006 3:44 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: Write failed record ID
Replies: 10
Views: 3099

Thanks Arnd for your response.

Yes. The hashed file is deleted and recreated on every run.

I checked the data and no control characters were found. Is there any way to handle this situation?

Regards
Elavenil
by elavenil
Mon Aug 21, 2006 3:17 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: Write failed record ID
Replies: 10
Views: 3099

Thanks Kumar for your response.

Both are run with the same user ID. They are not accessed in parallel.

Regards
by elavenil
Mon Aug 21, 2006 1:50 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: Write failed record ID
Replies: 10
Views: 3099

Thanks Ray for your response. I checked those two points and did not find anything wrong in the value of the key field. At the same time, if there were special characters in the key value, then writing that record should fail consistently, but it does not. When we reran the job, it completed successfully. Can you ...
by elavenil
Mon Aug 21, 2006 1:30 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: Write failed record ID
Replies: 10
Views: 3099

Write failed record ID

Hi, We encountered "Write failed for record id <Key Column Value>" while writing records into a hashed file. The hashed file is a dynamic hashed file, and the other parameters are at their defaults. The same job, without any modification to the data or the job, ran fine. We checked the file size and it is not more than 2 GB and...
by elavenil
Thu Mar 02, 2006 4:01 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: How can i remove duplicate rows
Replies: 9
Views: 3052

There are a few ways to eliminate duplicates in a server job. Load the data into a hashed file (define the key) and then load it into the target; a hashed file will not allow duplicate keys, so later rows simply overwrite earlier ones. Stage variables or the Aggregator stage can be used to eliminate duplicates as well; a stage-variable sketch is below.
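A minimal sketch of the stage-variable approach, assuming the input link (here called lnkIn) is sorted on the key column (here called KeyCol); both names are placeholders for your own design.

Stage variables, evaluated top to bottom on every row:
   svIsDup    derivation: If lnkIn.KeyCol = svPrevKey Then @TRUE Else @FALSE
   svPrevKey  derivation: lnkIn.KeyCol
Output link constraint, so only the first row of each key group passes:
   svIsDup = @FALSE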

HTWH.

Regards
Elavenil
by elavenil
Thu Mar 02, 2006 3:41 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: Group By vs Aggrigator stage
Replies: 7
Views: 2224

Database aggregation should perform better than DataStage aggregation when the data volume is very high, so pushing the aggregation down to the database is recommended in that case.

HTWH.

Regards
Elavenil
by elavenil
Tue Feb 28, 2006 9:40 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Regarding Join Condition
Replies: 4
Views: 2096

Do an outer (left/right) join of the two tables and check the values coming from the other table to identify matching and unmatched records, then populate the column that you want to derive. A sketch is below.
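A minimal sketch, assuming a Left Outer Join followed by a Transformer; the link name lnkJoined and the column names RIGHT_KEY and MATCH_FLAG are placeholders, and the right-side key must be defined as nullable for the null test to work.

Derivation of the new column MATCH_FLAG in the Transformer:
   If IsNull(lnkJoined.RIGHT_KEY) Then 'UNMATCHED' Else 'MATCHED'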

HTWH.

Regards
Elavenil
by elavenil
Wed Feb 22, 2006 9:37 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Performance Tuning
Replies: 3
Views: 1451

Thanks for your responses. I will try measuring/analysing the performance factors as you suggested.

Regards
Elavenil