Hi,
I am using DataStage Server Edition.
In one of my jobs I am planning to use an interprocess (IPC) stage in place of a file that was written and then read back. This is a multiple-instance job, and it was taking too long to process.
So I implemented the interprocess stage. It works fine and reduces the run time, but the problem is that the number of rows is lower than in the previous version of the job that used the file.
Regarding interprocess stage
Hello manojbh31 and welcome to DSXChange.
In order for anyone here to help with this problem you will need to supply some additional information. I assume that you are using the "Interprocess Stage" in your server job. The number of rows that go into this stage will be the same as the number coming out. How are you determining that this is not the case? By the row count in the job monitor, or by comparing the number of rows read from source and written to target?
ArndW wrote: How are you determining that this is not the case? By the row count in the job monitor, or by comparing the number of rows read from source and written to target?
Thanks
The job in which I have used the interprocess stage is in development.
The job that uses the file runs daily in production, so I am taking the source file from the production path and using that same file in development for the job where I am using the interprocess stage. Even so, the counts do not match.
Hi
What I am trying to say is that this job runs daily in production.
I have added the interprocess stage in development, using the same feed that comes in production.
There are two transformers in this job: first a source file, then a transformer, then a file, then another transformer, then the target file.
I am using the interprocess stage after the first transformer, so it writes the file and reads it simultaneously.
After running the job I compare the number of records in the development target file with that of the production job, which does not have the interprocess stage.
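When comparing the dev and prod target files, it helps to go beyond raw counts and see which rows actually differ. A minimal sketch (the `compare_targets` helper and the file paths are made up for illustration, not anything DataStage provides):

```shell
#!/bin/bash
# Compare row counts of two target files and list rows unique to either one.
compare_targets() {
    dev="$1"; prod="$2"
    dev_n=$(wc -l < "$dev")
    prod_n=$(wc -l < "$prod")
    echo "dev rows: $((dev_n))"
    echo "prod rows: $((prod_n))"
    # Rows present in one file but not the other (order-independent):
    comm -3 <(sort "$dev") <(sort "$prod")
}

# Example with throwaway sample files standing in for the real targets:
printf 'a|1\nb|2\nc|3\n' > /tmp/dev_target.txt
printf 'a|1\nb|2\n'      > /tmp/prod_target.txt
result=$(compare_targets /tmp/dev_target.txt /tmp/prod_target.txt)
echo "$result"
```

If the missing rows share a pattern (e.g. all have an empty field or an unusual delimiter), that points at a settings mismatch rather than lost data.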
If you enable inter-process row buffering you can dispense with the IPC stage in this job. The only way that the sequential file can be written to and read from simultaneously is if you declare it as a named pipe (create it with mkfifo). Please check your sequential file settings on write and read to make sure the delimiters and other attributes are the same.
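The named-pipe behaviour ArndW describes can be demonstrated outside DataStage. A sketch (all paths are illustrative) in which a writer and a reader run concurrently against the same FIFO, with the pipe doing the buffering instead of an intermediate file on disk:

```shell
#!/bin/bash
# Create a FIFO in a fresh temp directory so mkfifo cannot collide.
dir=$(mktemp -d)
mkfifo "$dir/stage_link.fifo"

# Writer process (plays the role of the upstream transformer's output link):
printf 'row1\nrow2\nrow3\n' > "$dir/stage_link.fifo" &

# Reader process (plays the role of the downstream transformer's input link);
# it receives rows as the writer emits them, simultaneously:
cat "$dir/stage_link.fifo" > "$dir/target.txt"
wait

result=$(cat "$dir/target.txt")
rm -r "$dir"
```

With a regular file instead of a FIFO, the reader could start before the writer finishes and see a short file, which is one way row counts can silently drop.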