The loading strategy for my job is truncate and then load. The job runs fast up to about 300,000 rows (above 1,000 rows/sec), but then the performance drops to 300 rows/sec. The job is the same, the server is the same, and the data volume is the same, so why is there a sudden drop in processing speed? Any suggestions?
Is your database on the same machine as the DataStage server? Does your table have indexes? I would have your DBA monitor the database activity, since the likelihood is very high that the cause is there.
I am monitoring the DB server and the DS server. The DB server does not show any heavy activity, and the DS server does not show any big activity either. They are on different boxes. We have not seen any network drop either.
The job has not died; it is still running, but now at a much lower rows/second rate.
Maybe your DS server is swapping/paging? You can do a "truss -p {pid}" on your DS process to see if there are any system calls or sleeps visible that might help you. Does your job read from a sequential source, and does it have lookups?
When the job starts the table is empty, which is why performance is better at first; maybe you can use a bulk load to improve performance. You also need to check the transaction size and the rows per transaction; these two values impact performance.
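The rows-per-transaction point can be sketched with a quick experiment. This is only an illustration using SQLite, not DataStage itself; the table name `t`, the row count, and the batch size of 5,000 are arbitrary choices for the demo:

```python
import sqlite3
import time

# Hypothetical illustration, not DataStage code: the cost of committing
# after every row versus committing in batches. An in-memory SQLite
# table stands in for the target; on a real database, where each commit
# must be made durable, the gap is far larger.
ROWS = [("val%d" % i,) for i in range(20000)]

def load(commit_every):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (c TEXT)")
    start = time.time()
    for i, row in enumerate(ROWS, 1):
        conn.execute("INSERT INTO t VALUES (?)", row)
        if i % commit_every == 0:
            conn.commit()  # every commit pays transaction overhead
    conn.commit()
    count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    conn.close()
    return time.time() - start, count

t_row, n_row = load(1)         # commit after every row
t_batch, n_batch = load(5000)  # commit every 5,000 rows
print("per-row: %.2fs, batched: %.2fs (%d rows each)" % (t_row, t_batch, n_row))
```

Both runs load the same data; only the commit frequency differs, which is the knob the transaction size / rows-per-transaction settings control in the job.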