Search found 62 matches

by sureshreddy2009
Mon Jun 21, 2010 12:00 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Error in Reject file
Replies: 4
Views: 2523

Welcome to the DSXchange forum... :D There is no setting needed when you move the jobs to a different server. Basically, test the job first without the reject file; if it works, then attach the reject link to the source. I guess the source file itself does not have that field. Also check the reject file mode ...
by sureshreddy2009
Tue Jun 15, 2010 10:37 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Job does not see the change in Shared container
Replies: 8
Views: 2854

This topic is marked as resolved. What steps did you take to resolve this issue?
by sureshreddy2009
Tue Jun 15, 2010 11:48 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Job does not see the change in Shared container
Replies: 8
Views: 2854

Having seen all the posts above, I have a suggestion:
Create a copy of that job and convert the shared container into ordinary stages in the job, then compile and run. It is easy to debug once that is done; based on the result you can make the change either in the job or in the shared container. :idea: Hope this helps. Thanks.
by sureshreddy2009
Wed Jun 09, 2010 5:59 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Load into Fixed length file
Replies: 9
Views: 4618

But the Null Field Value () option takes a single value common to all columns. Here I have different lengths for different columns, so how can I specify the null field value per field? I am still working on it to get the desired output. Basically my need is: I am creating a file using DataStage, and that fil...
by sureshreddy2009
Wed Jun 09, 2010 5:46 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: remove all duplicate records
Replies: 6
Views: 2321

If your requirement is to remove all records which are repeated more than once, then this is the logic: step 1: read all the records; step 2: pass them to an Aggregator and count on the particular key column; step 3: use a Filter to pass only the records where count = 1. If you use an Aggregator, basically not all columns can come as ou...
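The Aggregator-then-Filter logic described in this post can be sketched outside DataStage. This is a minimal Python illustration (the record layout and key name are hypothetical, not from the original job): count occurrences per key, then keep only rows whose key appears exactly once, so every repeated record is dropped entirely.

```python
from collections import Counter

def remove_all_duplicates(records, key):
    """Keep only records whose key value occurs exactly once.

    Mirrors the forum logic: Aggregator-style count per key,
    then a Filter-style pass of rows where count == 1.
    """
    counts = Counter(r[key] for r in records)          # step 2: count per key
    return [r for r in records if counts[r[key]] == 1]  # step 3: count = 1 only

rows = [
    {"id": 1, "name": "a"},
    {"id": 2, "name": "b"},
    {"id": 1, "name": "c"},  # id 1 repeats, so both copies are removed
]
print(remove_all_duplicates(rows, "id"))  # [{'id': 2, 'name': 'b'}]
```

Note this differs from ordinary de-duplication (which keeps one copy per key); here a repeated key removes all of its rows, matching "remove all records which are repeated more than once".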
by sureshreddy2009
Wed Jun 09, 2010 5:39 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Key column in schema file
Replies: 12
Views: 9375

Basically we are also using schema files to read the data through a Sequential File stage, and we specify the key both at job level and at stage level. In the schema file we use entries like the following. If the datatype is string and nullable: colum_name:nullable string[max=10] {quote=none}; If the datatype is string a...
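For context, the field definition quoted in this post would sit inside a schema file's record definition. A minimal sketch, assuming the syntax from the snippet (the second column name and its width are illustrative, not from the original post):

```
// Hypothetical schema file for a Sequential File stage:
// one nullable string column (as quoted above) and one
// non-nullable string column for contrast.
record
(
  colum_name:nullable string[max=10] {quote=none};
  other_col:string[max=5] {quote=none};
)
```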
by sureshreddy2009
Wed Jun 09, 2010 2:54 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Load into Fixed length file
Replies: 9
Views: 4618

My expected file is like the following.
I am pasting it again because the format in the previous post was not proper.
Even if a null comes in any value, it has to produce spaces equal to the column length.
by sureshreddy2009
Wed Jun 09, 2010 2:47 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Load into Fixed length file
Replies: 9
Views: 4618

The job runs successfully and the data is loaded into the file, but not in the format I expected. As you said, there is no option directly called fixed length. There are 8 columns in my output, and the sum of the lengths of all columns is 112, so I entered 112 as the value for the option record lengt...
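The fixed-length idea in this thread (record length = sum of column widths, nulls rendered as spaces equal to the column length) can be sketched outside DataStage. This is an illustrative Python example only; the column names and widths are hypothetical, not the 8-column/112-byte layout from the job:

```python
# Pad each column to its declared width; a null becomes that many spaces,
# so every record comes out the same length (here 10 + 20 + 15 = 45).
widths = {"cust_id": 10, "name": 20, "city": 15}

def to_fixed_width(row, widths):
    parts = []
    for col, width in widths.items():
        value = row.get(col)
        text = "" if value is None else str(value)
        parts.append(text.ljust(width)[:width])  # pad, truncate if too long
    return "".join(parts)

rec = to_fixed_width({"cust_id": 7, "name": None, "city": "Austin"}, widths)
print(len(rec))  # 45 -- the null "name" contributed 20 spaces
```

The key point matches the thread: the record length is fixed by construction, because every column, null or not, always occupies exactly its declared width.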
by sureshreddy2009
Wed Jun 09, 2010 2:30 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Load into Fixed length file
Replies: 9
Views: 4618

Load into Fixed length file

Hi, I have a delimited file which I am reading through a Sequential File stage. After that I am using an Aggregator, a Lookup and a Transformer as part of the transformation process. Finally I want to load into a fixed-length file. How can I load into this file, and what are the options to select in the Sequential ...
by sureshreddy2009
Thu Jun 03, 2010 6:27 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to generate row numbers for each file-Job Design
Replies: 9
Views: 2332

Next time I will take care with the forum selection :? Thanks a ton :)
by sureshreddy2009
Thu Jun 03, 2010 6:12 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to generate row numbers for each file-Job Design
Replies: 9
Views: 2332

But my problem is related to parallel jobs; that is why I posted in the parallel jobs forum.
Anyhow, I got it. Thanks.
by sureshreddy2009
Thu Jun 03, 2010 6:04 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to generate row numbers for each file-Job Design
Replies: 9
Views: 2332

If your post is saying that this logic can be implemented only in server jobs and not in parallel jobs:

I can say that I implemented it in parallel jobs on 8.0.1,
and I tested the scenario as well.
by sureshreddy2009
Thu Jun 03, 2010 5:01 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: How to generate row numbers for each file-Job Design
Replies: 9
Views: 2332

Got the answer now

Hi, I got the answer for my question/scenario. Here is the solution: I read all the files through a Sequential File stage using a file pattern, and I also enabled the file name column option. After the Sequential File stage I used a Sort stage, sorting on the file name column with hash partitioning on ...
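The per-file row-numbering idea in this solution, tag each record with its source file name, sort on that name, then restart the counter whenever the file name changes, can be sketched outside DataStage. A minimal Python illustration (the file names and record values are hypothetical):

```python
# Number rows within each source file: sort by file name, then reset
# the counter each time the file name changes, as a sorted stream would.
records = [
    ("b.txt", "row-x"),
    ("a.txt", "row-y"),
    ("a.txt", "row-z"),
]

numbered = []
prev_file = None
counter = 0
for fname, data in sorted(records, key=lambda r: r[0]):
    counter = counter + 1 if fname == prev_file else 1  # restart per file
    prev_file = fname
    numbered.append((fname, counter, data))

print(numbered)
# [('a.txt', 1, 'row-y'), ('a.txt', 2, 'row-z'), ('b.txt', 1, 'row-x')]
```

The sort guarantees all rows from one file are contiguous, which is what makes the simple "reset on change" counter correct, the same reason the original job sorts (and hash-partitions) on the file name column first.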