Search found 91 matches

by saikir
Mon Sep 24, 2007 6:02 am
Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
Topic: Type 30, DATA30 and OVER30 Files
Replies: 3
Views: 2015

Hi Ray,

Thanks a lot for the answer. I have read somewhere on the forum that by resizing the VOC file to TYPE30 you can increase the performance of Server jobs. Is this true? Also, I believe there is a command to add entries to the VOC file. Can you please help me with the command?


Sai
by saikir
Mon Sep 24, 2007 5:21 am
Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
Topic: Type 30, DATA30 and OVER30 Files
Replies: 3
Views: 2015

Type 30, DATA30 and OVER30 Files

Hi All,

Can anyone please let me know what Type 30, DATA30 and OVER30 files are? Also, what are the differences between these files, and in what way does DataStage use them?

Regards,
Sai
by saikir
Sun Sep 16, 2007 11:02 pm
Forum: General
Topic: Cannot open Executable Job file RT_CONFIG319
Replies: 9
Views: 17243

Hi, We have faced similar problems in our project due to some corrupt files. One thing you can always try is to export the job and re-import it. When you re-import the job, a fresh set of config and status files gets created. Note: when you import, the design and executable are linked to each ...
by saikir
Fri Aug 24, 2007 3:41 am
Forum: IBM<sup>®</sup> DataStage TX
Topic: difference between datastage server edition and TX
Replies: 4
Views: 7020

Hi, Some information on the different versions of DataStage: DataStage Server Edition (server jobs): DataStage Server Edition is a bit slower and a bit simpler, but capable of some heavy slugging. DataStage Enterprise Edition (parallel jobs and server jobs): with its parallel architecture, massively scalable, ...
by saikir
Tue Jul 10, 2007 2:57 am
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Table count to a Parameter
Replies: 4
Views: 1821

Hi,

Another possible way would be to read the record count from the DB2 table, pass the output to a transformer stage variable, and call the job with the stage variable as a parameter.
by saikir
Thu Mar 08, 2007 3:00 am
Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
Topic: ORA OCI stage and Oracle sequence
Replies: 10
Views: 5339

Hi,

If you have enabled the 'Cycle' option during sequence creation, then the sequence numbers will repeat. Say the max value is 100: on reaching 100, the sequence would again start from 1... 100. In such a case, you would definitely get the unique constraint violation.

Sai
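To make the cycling behaviour concrete, here is a minimal Python sketch (not DataStage or Oracle code) simulating a sequence created with the CYCLE option, assuming a minimum value of 1. Once the generator wraps, the same key is emitted again, which is exactly what triggers the unique constraint violation on load:

```python
def cycling_sequence(max_value):
    """Simulates an Oracle sequence defined with CYCLE (assumed MINVALUE 1)."""
    while True:
        # After reaching max_value the sequence starts over from 1.
        for n in range(1, max_value + 1):
            yield n

seq = cycling_sequence(3)
values = [next(seq) for _ in range(7)]
# values is [1, 2, 3, 1, 2, 3, 1] -- the repeated 1 would violate a
# unique constraint on the target column.
```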
by saikir
Wed Feb 14, 2007 6:01 am
Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
Topic: How to Abort the job based on some logic?
Replies: 9
Views: 4344

Hi,

I am not totally clear on your requirement. However, if you want to abort a job you can use UtilityAbortToLog. You can find more information in the DataStage documentation by searching with the keyword Utility Transforms.

Sai
by saikir
Tue Jan 16, 2007 1:55 am
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Changing file format from UNIX to Windows
Replies: 6
Views: 1343

Hi Kumar, Thanks for the reply. The problem with changing the line termination to UNIX is that the same job should handle feeds from both the UNIX server and Windows. In other words, some of the feeds will be in UNIX format and some in Windows format, and the same job has to handle both. So I was wondering whether there is...
by saikir
Tue Jan 16, 2007 1:24 am
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Changing file format from UNIX to Windows
Replies: 6
Views: 1343

Changing file format from UNIX to Windows

Hi All,

We have some feed files coming from SAP in UNIX format. Is there any way in DataStage we can change the file format from UNIX to Windows? The existing job accepts only Windows line termination.

Sai.
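One option outside DataStage is a small pre-processing script that normalizes every feed to Windows (CRLF) line endings before the job reads it, so it no longer matters whether a given feed arrives in UNIX or Windows format. A minimal Python sketch (file paths are placeholders):

```python
def to_windows_endings(path_in, path_out):
    """Rewrite a text file with CRLF line endings, whatever it started with."""
    with open(path_in, "rb") as f:
        data = f.read()
    # Normalize any existing CRLF to LF first so nothing is double-converted,
    # then expand every LF to CRLF.
    data = data.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")
    with open(path_out, "wb") as f:
        f.write(data)
```

Because the conversion is idempotent, the script can safely be run on all feeds, mixed or not.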
by saikir
Fri Jan 12, 2007 7:37 am
Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
Topic: Accessing Datastage Universe tables via Oracle
Replies: 5
Views: 3392

Hi Craig, Thanks a lot for your prompt reply. Sorry for phrasing my question badly. In truth I was keen on accessing the Universe tables. We develop a lot of server jobs here, and I would say I am fairly familiar with them; I read DSXchange whenever I have time. In some posts I have seen peop...
by saikir
Fri Jan 12, 2007 7:07 am
Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
Topic: Accessing Datastage Universe tables via Oracle
Replies: 5
Views: 3392

Accessing Datastage Universe tables via Oracle

Hi All,

I am new to DataStage and have never tried to access the DataStage Universe tables via Oracle. Can anyone provide me with some helpful steps or literature on how to do this?

Sai
by saikir
Fri Dec 01, 2006 2:33 am
Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
Topic: Lookup_IF condition
Replies: 12
Views: 4763

Hey,

If my understanding of your problem is correct, you do not need to use an IF condition.

You can put a constraint in the transformer so that when currency_type = USD, no calculation is done.

If currency_type <> USD, then currency * 45, or whatever the dollar rate is.
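The constraint logic above can be sketched in plain Python (not DataStage transformer syntax; the rate of 45 is just the example value from the post):

```python
USD_RATE = 45  # example conversion rate, as in the post

def convert(amount, currency_type):
    """Pass USD amounts through untouched; convert everything else."""
    if currency_type == "USD":
        return amount          # constraint: no calculation for USD
    return amount * USD_RATE   # all other currencies get the conversion
```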
by saikir
Wed Nov 29, 2006 4:36 am
Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
Topic: Loading Huge Data
Replies: 11
Views: 4161

Hi,

You can also try using the link partitioner and link collector stages. The link partitioner will split the load across multiple links, and at the other end you can gather all the links using the link collector.
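Conceptually, the partitioner/collector pair works like this minimal Python sketch (round-robin is one of several partitioning algorithms the stages support; this is an illustration, not DataStage code):

```python
def partition(rows, n_links):
    """Round-robin split, like the link partitioner stage fanning out rows."""
    links = [[] for _ in range(n_links)]
    for i, row in enumerate(rows):
        links[i % n_links].append(row)
    return links

def collect(links):
    """Round-robin merge, like the link collector stage gathering the links
    back into a single stream in the original order."""
    out = []
    for i in range(max(len(link) for link in links)):
        for link in links:
            if i < len(link):
                out.append(link[i])
    return out
```

Collecting immediately after partitioning reproduces the input, which is the point: the heavy per-row work happens in parallel on the individual links in between.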
by saikir
Fri Nov 10, 2006 5:24 am
Forum: IBM<sup>®</sup> Infosphere DataStage Server Edition
Topic: DS Hashed File
Replies: 13
Views: 6250

Hey, Hash files can be used for various purposes. If you are a beginner: one primary purpose of a hash file is to use it as a lookup table. Consider the following scenario: you have to load data into a fact table with valid customers only. All the valid customers are present in some table. So every time y...
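The lookup scenario described above can be sketched with a plain Python dict standing in for the hashed file (the customer ids and field names here are made up for illustration):

```python
# The "hashed file": valid customers keyed by customer id, loaded once
# up front, just as a hashed file is populated before the main job runs.
valid_customers = {"C001": "Alice", "C002": "Bob"}

def load_fact(rows):
    """Keep rows whose key hits the lookup; reject the rest."""
    loaded, rejected = [], []
    for row in rows:
        # Keyed lookup, like a hashed-file reference link in a transformer.
        if row["cust_id"] in valid_customers:
            loaded.append(row)
        else:
            rejected.append(row)
    return loaded, rejected
```

The key point is the same as with a hashed file: each probe is a constant-time keyed read, so validating every incoming row stays cheap.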
by saikir
Thu Nov 09, 2006 12:58 am
Forum:
Topic: Taking metadata from a file
Replies: 18
Views: 5990

Hey,

If you have this data in a file on a remote machine, you can use the FTP stage to pull the data column-wise directly into the ETL server.