Search found 22 matches

by ds_user78
Fri Dec 21, 2007 9:38 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Getting a "No space left on device" error
Replies: 15
Views: 7993

Even in the grid environment, you should be able to see the resource disk and resource scratchdisk in the dynamic configuration file from the Director log. Ensure that space is available on those paths.
by ds_user78
Fri Dec 21, 2007 9:30 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Getting a "No space left on device" error
Replies: 15
Views: 7993

Check the scratch disk and resource disk space; you should find these paths in the configuration file. Ensure that enough space is available on those paths.
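As a sketch of that check, assuming you have pulled the resource disk and scratchdisk paths from the configuration file (the /tmp path below is only a hypothetical stand-in):

```shell
# check_space: report available space on the filesystem holding each path.
check_space() {
    for p in "$@"; do
        # df -P (POSIX output): field 4 of line 2 is available 1K-blocks.
        avail=$(df -P "$p" | awk 'NR==2 {print $4}')
        echo "$p: ${avail} KB free"
    done
}

# Substitute the resource disk and scratchdisk paths from your
# configuration file; /tmp here is a hypothetical example.
check_space /tmp
```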
by ds_user78
Wed Dec 19, 2007 10:49 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: DataStage Schedule - Oracle_Home not set when using 'every'
Replies: 2
Views: 1992

ORACLE_HOME is set in dsenv. It is also set in the .profile file of the UNIX user.
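For reference, the relevant dsenv (or .profile) lines look roughly like this; the install path below is a hypothetical example, so substitute your actual Oracle client location:

```shell
# Hypothetical Oracle client location -- adjust to your install.
export ORACLE_HOME=/opt/oracle/product/10.2.0
export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:${LD_LIBRARY_PATH:-}
```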
by ds_user78
Tue Dec 18, 2007 4:44 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: DataStage Schedule - Oracle_Home not set when using 'every'
Replies: 2
Views: 1992

DataStage Schedule - Oracle_Home not set when using 'every'

Hi, I am facing an error when scheduling jobs through the DataStage scheduler. When I schedule 'Today' and set a time, the ORACLE_HOME environment variable is set in the "Environment variable settings" when the job runs. If I use 'Every', select some days, and set a time, the ORACLE_HOME envir...
by ds_user78
Fri Nov 02, 2007 8:27 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: final delimiter for header record.
Replies: 3
Views: 2494

Thanks, everyone, for the replies.

Just wanted to post the solution so that it could be useful to someone. I used record delimiter string='|\n' and it worked for both the header record and the data records.
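To make that concrete, here is a tiny sample of the target format (the file path is a hypothetical example): every record, header included, ends with '|' immediately before the newline, which is what record delimiter string='|\n' produces.

```shell
# Each record, including the header, ends with "|" then the newline.
printf 'c1|c2|\nv1|v2|\n' > /tmp/pipe_final.txt
cat /tmp/pipe_final.txt
```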
by ds_user78
Fri Oct 26, 2007 9:03 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: final delimiter for header record.
Replies: 3
Views: 2494

final delimiter for header record.

Hi, I have a requirement to generate a |-delimited file. The final delimiter of each record should also be |. I set 'First line is column header' to true, the record-level final delimiter to |, and the field-level delimiter to |. I am getting everything properly except the final delimi...
by ds_user78
Mon Oct 15, 2007 4:00 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: DS Job stops reading around 1 million records (Grid)
Replies: 1
Views: 1094

DS Job stops reading around 1 million records (Grid)

We have DataStage set up on the Sun grid. We have a job that reads from a sequential file and loads into a database table. The input sequential file contains around 5 million records. We are observing a pattern in which the DS job stops reading after 1 million records. It does not abort or give any m...
by ds_user78
Mon Aug 13, 2007 2:15 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Fatal Error: Fork faile - Previous posts didn't help much
Replies: 8
Views: 6076

Does the same job run with a single-node configuration file?

We also had a similar error, and when we tried a configuration file with fewer nodes it started working. Something to do with the number of UNIX processes, I think.
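For reference, a minimal single-node configuration file looks roughly like this; the host name and both resource paths are hypothetical examples, so substitute your own:

```
{
    node "node1"
    {
        fastname "server1"
        pools ""
        resource disk "/opt/ds/data" {pools ""}
        resource scratchdisk "/opt/ds/scratch" {pools ""}
    }
}
```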
by ds_user78
Thu Mar 02, 2006 3:59 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: How can i remove duplicate rows
Replies: 9
Views: 3062

Use a hashed file, selecting as keys the columns on which you want to eliminate the duplicates.
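As an aside, outside DataStage the same key-based de-duplication can be sketched at the shell; the file paths and sample data below are hypothetical, and which of the duplicate rows survives is unspecified:

```shell
# Build a small comma-delimited sample with duplicate keys in field 1.
printf '100,a\n100,b\n200,c\n' > /tmp/dups.csv

# -t, : comma delimiter; -k1,1 : compare on field 1 only;
# -u  : keep one line per distinct key.
sort -t, -u -k1,1 /tmp/dups.csv > /tmp/nodups.csv
```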
by ds_user78
Thu Jan 05, 2006 6:21 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Use key management routines in parallel jobs??
Replies: 8
Views: 4671

Use key management routines in parallel jobs??

Hi, we have been asked to use the DataStage key management routines (KeyMgtGetNextValue, KeyMgtGetNextValueConcurrent) for surrogate key generation in preference to Oracle sequences, and we are using parallel jobs. Is the BASIC Transformer the only way to call these routines? How much of an overhead will it be to use B...
by ds_user78
Tue Aug 16, 2005 9:38 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: ds_ipcopen() The system cannot find the file specified
Replies: 1
Views: 1728

I am not sure if this will help you, but try to have two passive stages on either side of the Link Collector. For example, join from two sequential files and write to a sequential file before going to a Transformer.
by ds_user78
Mon Aug 15, 2005 8:56 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: Kanji Language
Replies: 3
Views: 1914

I think the conversion of # to ? has something to do with the default padding character in the file.
by ds_user78
Wed Jul 06, 2005 2:28 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: Merge 240 flat-files into one and then process using DataSta
Replies: 10
Views: 4429

I believe you can do something like type c:\merge\*.txt > c:\merge\outfile.txt. You have to execute this at the command prompt.
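On UNIX the equivalent is cat; here is a sketch with hypothetical paths. Note one caution about the cmd version above: writing the output into the same directory matched by the *.txt glob risks re-reading the output file on a second run, so the sketch writes the result elsewhere.

```shell
# Create a couple of sample input files (hypothetical paths).
mkdir -p /tmp/merge
printf 'a\n' > /tmp/merge/f1.txt
printf 'b\n' > /tmp/merge/f2.txt

# Concatenate every input file; the glob expands in alphabetical order.
# The output deliberately lives outside the globbed directory.
cat /tmp/merge/*.txt > /tmp/merge_outfile.txt
```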
by ds_user78
Mon Feb 28, 2005 9:21 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: How to read this file?
Replies: 6
Views: 2358

How to read this file?

Hi, I have a data file which has the following layout. How do I read this in DS? 10|ASBT115072 25C4|05|50001670|01|05192003|1|4|EA 40|ASBT115072 25C4|05|50001670|01|SINGLE PHASE PAD TYPE 2 (line 1) 20|ASBT115072 25C4|05|50001670|01|0002||02005|CP05|5RTST|Ratio Test|1.000|EA|X|1|1|1.500|H|5.650|H|3.3...
by ds_user78
Wed Feb 09, 2005 8:41 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: merge data from two files
Replies: 1
Views: 864

merge data from two files

Hi, I have two files, f1 and f2, which look like this. f1 c1,c2, .. 100,1,.. 100,2,.. 100,3,.. 200,1,.. 200,2,.. 300,1,.. and f2 c1,c3,.. 100,1000,.. 100,2000,.. 300,1000,.. 300,2000,.. The output file should be c1,c2,c3, 100,1,1000,.. 100,2,2000,.. 100,3,NULL,.. 200,1,NULL,.. 200,2,NULL,.. 300,1,100...