Regarding interprocess stage

A forum for discussing DataStage<sup>®</sup> basics. If you're not sure where your question goes, start here.

Moderators: chulett, rschirm, roy

manojbh31
Premium Member
Posts: 83
Joined: Thu Jun 21, 2007 6:41 am

Regarding interprocess stage

Post by manojbh31 »

Hi,

I am using DataStage Server Edition.
In one of my jobs I am planning to use the InterProcess (IPC) stage in place of a sequential file that was being written and then read back. This is a multiple-instance job, and it was taking too long to process.
I implemented the IPC stage and it works fine and reduces the run time, but the number of rows coming out is lower than in the previous version of the job that used the file.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

Hello manojbh31 and welcome to DSXChange.

In order for anyone here to help with this problem you will need to supply some additional information. I assume that you are using the "Interprocess Stage" in your server job. The number of rows that go into this stage will be the same as the number coming out. How are you determining that this is not the case? By the row count in the job monitor, or by comparing the number of rows read from the source with the number written to the target?
manojbh31
Premium Member
Posts: 83
Joined: Thu Jun 21, 2007 6:41 am

Post by manojbh31 »

ArndW wrote:Hello manojbh31 and welcome to DSXChange.

In order for anyone here to help with this problem you will need to supply some additional information. I assume that you are using the "Interprocess Stage" in your server job. The number of rows that go into this stage will be the same as the number coming out. How are you determining that this is not the case? By the row count in the job monitor, or by comparing the number of rows read from the source with the number written to the target?


Thanks.

The job in which I have used the interprocess stage is in development.
The job that uses the file runs daily in production, so I take the source file from that production path and use the same file in development for the job with the interprocess stage. Even so, the counts do not match.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

I do not understand; are you saying that the number of rows coming into the IPC stage and going out is not the same in a job run or are you stating that your number of rows is different between development and production?
manojbh31
Premium Member
Posts: 83
Joined: Thu Jun 21, 2007 6:41 am

Post by manojbh31 »

Hi

What I am saying is that this job runs daily in production.
I have added the interprocess stage in development, using the same feed that comes into production.

There are two Transformers in this job:
first a source file, then a Transformer, then a file, then another Transformer, then the target file.
I am using the interprocess stage after the first Transformer stage,
so the intermediate file is written and read simultaneously.

After running the job, I compare the number of records in the development target file against the output of the production job, which does not have the interprocess stage.
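Since the comparison here is just a record count between two flat files, a quick sanity check can be done from the shell before digging into the job design. This is a minimal sketch; the file names are stand-ins for the actual development and production target paths.

```shell
# Stand-ins for the real development and production target files.
printf 'a\nb\nc\n' > dev_target.txt
printf 'a\nb\nc\n' > prod_target.txt

# Count the rows in each file and compare.
DEV=$(wc -l < dev_target.txt)
PROD=$(wc -l < prod_target.txt)

if [ "$DEV" -eq "$PROD" ]; then
  echo "counts match: $DEV"
else
  echo "mismatch: dev=$DEV prod=$PROD"
fi

rm -f dev_target.txt prod_target.txt
```

If the counts differ, diffing sorted copies of the two files will show which rows are missing, which is usually more informative than the count alone.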
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

If you enable inter-process row buffering in the job properties you can dispense with the IPC stage in this job. The only way that a sequential file can be written to and read from simultaneously is if you declare it as a named pipe (create it with mkfifo). Please check your sequential file settings on both the write and the read to make sure the delimiters and other attributes are the same.
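The named-pipe behaviour described above can be sketched at the shell level. The path below is illustrative; a writer and a reader attach to the same FIFO, so rows stream through without a full intermediate file ever landing on disk.

```shell
# Create a named pipe (FIFO) at an illustrative path.
PIPE=/tmp/ds_demo_pipe
rm -f "$PIPE"
mkfifo "$PIPE"

# The writer runs in the background; it blocks until a reader opens the pipe.
printf 'row1\nrow2\nrow3\n' > "$PIPE" &

# The reader consumes rows as they arrive; it sees EOF when the writer closes.
ROWS=$(wc -l < "$PIPE")
wait

echo "$ROWS"   # → 3
rm -f "$PIPE"
```

This is the same pattern a DataStage job uses when a Sequential File stage points at a FIFO: the downstream read starts as soon as the upstream write begins, rather than after the whole file is written.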