Search found 238 matches

by dodda
Mon Jun 22, 2009 10:15 am
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Field function
Replies: 6
Views: 2159

Hello nani, I will be getting a string, i.e. A%%%AB%%%DF%%%CE%%%XE%%%AF, in one of the fields from the input sequential file. I need to read that string and get the values which are delimited by %%%, i.e. InputString=A%%%AB%%%DF%%%CE%%%XE%%%AF outstring1=A outstring2=AB outstring3=DF outstring4=CE outstri...
by dodda
Mon Jun 22, 2009 9:50 am
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Field function
Replies: 6
Views: 2159

Field function

Hello, I have a requirement where I have a string like A%%%AB%%%DF%%%CE%%%XE%%%AF and I want to get each substring which is delimited by %%%. I used the Field function but it is not giving me the required result. Below are the expected results: Field (InputString,'%%%',1)=A Field (InputString,'%%%',2)=AB Fie...
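For comparison, here is a plain-Python sketch of the behaviour the post expects from Field() with a multi-character delimiter. This is an illustrative analogue, not DataStage transformer code; the function name and variable are made up for the example:

```python
def field(text, delimiter, occurrence):
    # Return the occurrence-th delimited substring (1-based), or '' when
    # the occurrence is out of range, mimicking the expected results above.
    parts = text.split(delimiter)
    return parts[occurrence - 1] if 1 <= occurrence <= len(parts) else ""

input_string = "A%%%AB%%%DF%%%CE%%%XE%%%AF"
print(field(input_string, "%%%", 1))  # A
print(field(input_string, "%%%", 2))  # AB
print(field(input_string, "%%%", 6))  # AF
```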
by dodda
Tue Jun 16, 2009 6:52 pm
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Converting String to Month
Replies: 1
Views: 1049

Converting String to Month

Hello

I have a requirement where I am getting a string like 01-JAN-2009 from a file. I have to use some logic such that I extract JAN and convert it to 01; if I get 01-FEB-2009 then I need to get 02, and so on. Is there a function to get that value?

Thanks
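The mapping the post asks for (month abbreviation to two-digit month number) can be sketched in plain Python; this assumes English month abbreviations and is an illustration, not DataStage transformer code:

```python
from datetime import datetime

def month_number(date_str):
    # Parse a DD-MON-YYYY string such as 01-JAN-2009 and return the
    # two-digit month number; strptime matches month names case-insensitively.
    return datetime.strptime(date_str, "%d-%b-%Y").strftime("%m")

print(month_number("01-JAN-2009"))  # 01
print(month_number("01-FEB-2009"))  # 02
```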
by dodda
Mon Jun 01, 2009 1:42 pm
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: count the number of inserts
Replies: 4
Views: 1789

Hello Guru,

Is there a way I can get the job start time, end time, status, and number of records inserted, and write them to a file?

Thanks
by dodda
Mon Jun 01, 2009 12:32 pm
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: count the number of inserts
Replies: 4
Views: 1789

Hello DSGuru

The number of inserts varies, as we do some constraint checks just before we insert into the database. I want to write the number of records that got inserted to a file, and in the next job I will read that file to get that number.

thanks
by dodda
Mon Jun 01, 2009 12:04 pm
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: count the number of inserts
Replies: 4
Views: 1789

count the number of inserts

Hi, I have a requirement where I need to read a fixed-width flat file and insert the rows into an Oracle DB. My input file has detail records and a footer record. Records will be inserted into the database based on some constraints. I need to capture the number of records that got inserted and that number has...
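The hand-off described in this thread — one job writing its insert count to a file, and the next job reading it back — can be sketched outside DataStage like this (the file name is hypothetical; inside DataStage the count itself would come from the insert link's row count):

```python
def write_insert_count(count, path):
    # First job: persist the number of rows actually inserted.
    with open(path, "w") as f:
        f.write(str(count))

def read_insert_count(path):
    # Next job: read that number back for downstream use.
    with open(path) as f:
        return int(f.read().strip())

write_insert_count(1234, "insert_count.txt")
print(read_insert_count("insert_count.txt"))  # 1234
```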
by dodda
Thu May 14, 2009 10:11 am
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: job id & sequence id
Replies: 5
Views: 1738

Yes, job names are unique, but the requirement is to have the job id and sequence id.

Please help me in getting those values.

Thanks
by dodda
Thu May 14, 2009 9:14 am
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: job id & sequence id
Replies: 5
Views: 1738

job id & sequence id

Hello

I have a requirement where I need to get the unique job id and unique sequence id, which are unique to every job and every sequence, and insert those values into the database.

I have searched through the forums and didn't find a proper answer.

I'd appreciate it if you can help.
by dodda
Wed May 13, 2009 8:23 am
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: capture rejected records and reason for rejection
Replies: 3
Views: 1289

capture rejected records and reason for rejection

Hello

I have a requirement where I am inserting data into a database (Oracle) from a flat file. In my scenario I need to capture the rejects in a sequential file, with the reason for rejection appended to every rejected row in the reject file.

I'd appreciate it if you can help me.

thanks
by dodda
Tue May 05, 2009 9:05 am
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: reading multiple files
Replies: 6
Views: 2784

chulett wrote:There is, and a search for "file pattern" turned up a number of conversations, including this one which looks like it should help.

Hi

Thanks for the reply. I want to track the filename of each file being processed in DataStage Director.
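As an illustration of the idea (tagging each record with the file it came from when reading a file pattern), here is a Python analogue; this is not DataStage code, and in DataStage itself this is typically handled by having the Sequential File stage add the source file name as a column, if the installed version supports that option:

```python
import glob
import os

def read_with_filenames(pattern):
    # Read every file matching the pattern and pair each line with the
    # name of the file it came from, so the source can be tracked.
    rows = []
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            for line in f:
                rows.append((os.path.basename(path), line.rstrip("\n")))
    return rows
```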
by dodda
Mon May 04, 2009 3:16 pm
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: reading multiple files
Replies: 6
Views: 2784

reading multiple files

Hello, I am reading multiple files with the Sequential File stage using the read method File Pattern. My job is SeqFileStage --> Transformer --> SeqFileStage. I have 3 files in the input directory with a similar file name pattern (*.dat). The job is running fine and I am getting the output which has the conte...
by dodda
Thu Apr 30, 2009 1:50 pm
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Decimal issue
Replies: 1
Views: 818

Decimal issue

Hello, I have a requirement where I need to read a file and produce XML. I have a Sequential File stage as source, a Transformer in between, and an XML Output stage as target. I am trying to read the data out of the file for a field called A as decimal(7,2), and in the Transformer I am mapping it to a field ca...
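To illustrate what a decimal(7,2) definition implies for the data, here is a sketch using Python's standard decimal module — an analogue for reasoning about the type, not DataStage metadata; the function name is made up for the example:

```python
from decimal import Decimal, ROUND_HALF_UP

def to_decimal_7_2(value):
    # Coerce a raw field to DECIMAL(7,2): scale of 2 (two digits after the
    # point) and at most 7 significant digits in total.
    d = Decimal(str(value)).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    if len(d.as_tuple().digits) > 7:
        raise ValueError("value does not fit in DECIMAL(7,2)")
    return d

print(to_decimal_7_2("123.456"))  # 123.46
print(to_decimal_7_2("12345.6"))  # 12345.60
```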
by dodda
Fri Apr 24, 2009 8:06 am
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: the record is too big to fit in a block
Replies: 12
Views: 15357

Hello Ray,

When I tried to add the APT_DEFAULT_TRANSPORT_BLOCK_SIZE environment variable through the Administrator, it said the variable already exists. But when I looked in the list of variables, it was not there. Is there a way that this variable can be configured?

Thanks
dodda
by dodda
Thu Apr 23, 2009 3:32 pm
Forum: IBM<sup>®</sup> DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: the record is too big to fit in a block
Replies: 12
Views: 15357

the record is too big to fit in a block

Hello, I have a job design which reads a sequential file where we read each record as a single line and break those records into multiple columns with Column Import stages; we build the XML chunks for every record and finally join all those chunks to produce a single big...