Job Performance
Posted: Tue Sep 14, 2004 9:33 am
Hi,
I have a pretty straightforward job that reads data out of a source DRS stage (SQL Server) based on some criteria and inserts records into two target DRS stages (SQL Server) at the same time. As an error handler, I have a reject sequential file attached to each target DRS stage. If, for example, there are duplicate records that can't be inserted, they are written to this reject file.
I have been looking for a way to make this job run faster. I increased the array and transaction sizes on the target DRS stages. This dramatically decreased my processing time (which is what I was looking for), but it stopped the rejected records from being written to the reject files. I can see in DataStage Director that records were rejected, but they are not written anywhere.
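(In generic database terms, outside DataStage, the tradeoff looks like this. A minimal sqlite3 sketch, assuming a primary-key duplicate as the reject cause; the table and row values are made up for illustration:)

```python
import sqlite3

# With array size 1, each row is its own INSERT, so a failure
# pinpoints exactly which row to send to the reject file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")
rows = [(1, "a"), (2, "b"), (1, "dup"), (3, "c")]  # one duplicate key

rejects = []
for row in rows:
    try:
        conn.execute("INSERT INTO target VALUES (?, ?)", row)
    except sqlite3.IntegrityError:
        rejects.append(row)  # row-level reject, like the reject link
conn.commit()
print(rejects)  # [(1, 'dup')]

# With array size > 1, the rows go over as one batch; the driver
# reports a single error for the batch and cannot say which row
# failed, so there is nothing row-level to divert to a reject file.
conn2 = sqlite3.connect(":memory:")
conn2.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")
try:
    conn2.executemany("INSERT INTO target VALUES (?, ?)", rows)
except sqlite3.IntegrityError as e:
    print("batch failed:", e)  # no per-row reject information
```

This is why raising the array size speeds things up (fewer round trips) but breaks per-row reject capture.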
I called tech support, and they said that I need to keep the array and transaction sizes at 1 if I want the reject file stage to work.
Is there any way to decrease my processing time and still have exception handling?
Thanks,
Juls.