Search found 62 matches

by sureshreddy2009
Wed Aug 21, 2013 8:45 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Warning, Insert had a high number of retries
Replies: 6
Views: 4228

Commit limit

This kind of Oracle message always depends on API versus direct load.
Which kind of load are you doing?
If you are not committing the records, the transaction stays open forever, which causes this kind of issue.
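As a sketch of one way to keep commits flowing (the environment variable names below are my assumption from memory of the Oracle Enterprise stage's upsert mode; verify them against your DataStage version before relying on them), you can set commit intervals at the job level:

    # Assumed variables: commit every 5000 rows or every 2 seconds, whichever comes first
    APT_ORAUPSERT_COMMIT_ROW_INTERVAL=5000
    APT_ORAUPSERT_COMMIT_TIME_INTERVAL=2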
by sureshreddy2009
Tue May 24, 2011 11:54 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Production Dataset Issue
Replies: 6
Views: 4438

Issue resolved.
Moreover, this was never a DataStage problem;
it always comes down to the developer's work. I found out yesterday that the sequence had failed. When I triggered that event, it restarted from that point, and that dataset contained yesterday's records.

Thanks,
by sureshreddy2009
Tue May 24, 2011 11:32 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Production Dataset Issue
Replies: 6
Views: 4438

Hi greggknight,

I know the meaning and the definition; you copied and pasted them from a dictionary site :o

I meant: please elaborate on some more causes other than synchronization.
If you know the reasons, post them here.
by sureshreddy2009
Tue May 24, 2011 9:34 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Production Dataset Issue
Replies: 6
Views: 4438

Can you please elaborate on the cause?
by sureshreddy2009
Tue May 24, 2011 8:22 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Production Dataset Issue
Replies: 6
Views: 4438

Production Dataset Issue

Hi, my production DataStage flow is: we have extract jobs loading from flat files into a staging Oracle database. From the staging database, a data quality DataStage job runs and loads into a dataset. From there, a data transformation job runs; this extracts the data from the previous dataset and loads it into in...
by sureshreddy2009
Tue Feb 08, 2011 3:01 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Too Many Aggregators
Replies: 4
Views: 3138

Thanks jwiles

I thought we could not repeat a column for calculation multiple times, but this is possible. My job is done now. Thanks a lot
by sureshreddy2009
Tue Feb 08, 2011 1:03 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Too Many Aggregators
Replies: 4
Views: 3138

The function I need to perform in the Aggregator is the non-missing values count. It does not allow multiple columns: even though the grouping columns are the same, the non-missing count has to be calculated on each column separately. Could anyone help?
by sureshreddy2009
Tue Feb 08, 2011 12:45 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Too Many Aggregators
Replies: 4
Views: 3138

Too Many Aggregators

Hi viewers, :D Please give me your thoughts on this requirement. I have one input dataset, and from it I have to create one sequential file report that will contain 30 aggregate count columns. There are 4 key columns and 30 additional columns in the source file, and I have to calculate non-missing values on ever...
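For what it's worth, the SQL equivalent of this requirement (all table and column names here are invented for illustration) is a single grouped query, since COUNT(col) counts only the non-null values of col:

    SELECT key1, key2, key3, key4,
           COUNT(col1)  AS col1_nonmissing,
           COUNT(col2)  AS col2_nonmissing,
           -- ...one COUNT per column, through col30
           COUNT(col30) AS col30_nonmissing
    FROM   source_data
    GROUP  BY key1, key2, key3, key4;

The Aggregator equivalent, per the replies above, is one stage with the same grouping keys and the non-missing count repeated once per column.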
by sureshreddy2009
Sun Sep 05, 2010 5:11 am
Forum: General
Topic: dsjob command syntax
Replies: 8
Views: 45266

Go to the UNIX server where DataStage is installed and type man dsjob; it will display all the options. Ultimately, though, you have to understand how to use them, e.g. for running a job: dsjob -file <parameter file contains domain name and server name> domainname servername -run -warn 100 projectname jobname. For seeing the ...
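As a minimal sketch (the host, credential, project, and job names are placeholders, and the exact flag set varies by DataStage version, so treat this as an assumption to check against man dsjob), running a job and then pulling its log summary might look like:

    # Run the job, abort above 100 warnings, and wait for the exit status
    dsjob -domain dshost:9080 -server DSENGINE -user dsadm -password secret \
          -run -warn 100 -jobstatus MyProject MyJob

    # Summarize the log entries for the same job
    dsjob -domain dshost:9080 -server DSENGINE -user dsadm -password secret \
          -logsum MyProject MyJob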
by sureshreddy2009
Tue Jun 22, 2010 2:57 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to specify the key in schema file
Replies: 15
Views: 9135

A generic job is something for which you have to use parameters.
I don't think there is another way without parameters.
by sureshreddy2009
Tue Jun 22, 2010 2:50 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to specify the key in schema file
Replies: 15
Views: 9135

Yes, of course,
it can be done.
Use the upsert method in the Oracle stage and use parameterized SQL:
Update #table_name# #sql#

Then pass the rest of the query, like set col1=orchestrate.col1, col2=orchestrate.col2 ... where col5=orchestrate.col5, in #sql#
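To make that concrete (the table name below is hypothetical, chosen only to show how the parameters expand), with #table_name# = CUSTOMER and #sql# carrying the clause above, the statement the stage executes would resolve to something like:

    UPDATE CUSTOMER
       SET col1 = ORCHESTRATE.col1,
           col2 = ORCHESTRATE.col2
     WHERE col5 = ORCHESTRATE.col5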
by sureshreddy2009
Tue Jun 22, 2010 1:32 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to specify the key in schema file
Replies: 15
Views: 9135

Basically, if the columns present in the source and in Oracle are the same, then you don't need to mention any schema file anywhere while loading. Enable RCP; it will take care of it. Otherwise, use a Column Export stage after the Sequential File stage to convert into a string, and a Column Import stage before the Oracle stage to convert it into the desired st...
by sureshreddy2009
Tue Jun 22, 2010 1:17 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to specify the key in schema file
Replies: 15
Views: 9135

On the target side, also use the same type of schema file, but you have to change the format (the first line in the schema file) and the column names according to the target database table. You don't need to mention any key here either, and we loaded using the load method, not the upsert option. Try with the load option; there is nothing wrong us...
by sureshreddy2009
Tue Jun 22, 2010 1:03 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to specify the key in schema file
Replies: 15
Views: 9135

Hi, we implemented the same job (reading from a sequential file and loading into an Oracle table) with a generic approach; for this we used schema files. Here is the procedure. The source schema file format will be like record {final_delim=none,delim='|',null_field='',quote=none,record_delim_string='{BEG}' { AL...
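For context, a complete schema file of this general shape (an illustrative sketch of OSH schema syntax from memory, with made-up field names, not the poster's actual file) would look like:

    record {final_delim=end, delim='|', null_field='', quote=none}
    (
      cust_id: int32;
      cust_name: string[max=50];
      created_dt: nullable date;
    )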