Hi DS Gurus,
I am running many DataStage jobs that all have the same structure. Each job processes incoming pipe-delimited flat file data and does lookups against the primary key column of a very large table (around 500,000,000 records). When I started all the DataStage jobs, each referring to a different table, the processing speed was 500 records per second, but when I checked the jobs after two days the speed was down to only 50 records per second. All the jobs are still running.
Could anybody explain to me why this is happening?
I want to speed up the record processing, but I don't want to restart the DataStage jobs.
Please give me some useful advice to achieve a good result.
Speed of records processing is decreasing with the time
Regards,
Deepak Singhal
Everything is okay in the end. If it's not okay, then it's not the end.
Talk to your DBA; this symptom is common when you have indexes on the target table. Have the DBA monitor what is going on inside the database. If you make a copy of your job and write to a sequential file instead of to the table(s) (make the file /dev/null for real speed), you will see that the job will most likely run much faster, and also that it will run at a relatively constant speed; this shows that the cause is not in the DS job itself but in the database.
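The same isolation test can be sketched outside DataStage: time an identical row stream against a null sink and see whether throughput stays constant. This is only an illustrative Python sketch; `generate_rows` and `time_to_null_sink` are made-up names standing in for your job's input and target, not any DataStage API.

```python
import os
import time

def generate_rows(n):
    """Stand-in for the pipe-delimited flat-file input stream."""
    for i in range(n):
        yield f"{i}|name_{i}|value_{i}\n"

def time_to_null_sink(n_rows):
    """Write rows to /dev/null and report throughput in rows/second.

    If this stays fast and roughly constant while the real job degrades
    over days, the bottleneck is the database target, not the job itself.
    """
    start = time.time()
    with open(os.devnull, "w") as sink:
        for row in generate_rows(n_rows):
            sink.write(row)
    elapsed = time.time() - start
    return n_rows / elapsed if elapsed > 0 else float("inf")

print(f"{time_to_null_sink(100_000):,.0f} rows/sec to the null sink")
```

Run the real copy-job the same way at intervals; a flat line here versus a falling line against the table points squarely at the database side.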
Hi Ramesh,
I am looking up the database (Informix) directly, based only on the primary key field.
I have also tried this with a hashed-file lookup, but creating the hashed file is not easy, because each of my tables has a huge number of records (around 500,000,000).
Would it be a feasible solution to create a hashed-file lookup instead of looking up the table directly?
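A rough back-of-envelope check helps answer the feasibility question: what does holding ~500 million keys in a lookup structure cost in memory? The sketch below uses a plain Python dict as a stand-in; real DataStage hashed files are disk-backed with their own overheads, and the 100-byte per-entry overhead is an assumption for illustration only.

```python
import sys

def estimate_lookup_memory(n_keys, sample_key="500000000", sample_value="x"):
    """Rough order-of-magnitude memory estimate for an in-memory lookup.

    This only shows the scale involved at hundreds of millions of keys;
    actual hashed-file sizes on disk will differ.
    """
    per_entry = (
        sys.getsizeof(sample_key)
        + sys.getsizeof(sample_value)
        + 100  # assumed per-entry slot/pointer overhead, illustrative only
    )
    return n_keys * per_entry

gib = estimate_lookup_memory(500_000_000) / 2**30
print(f"~{gib:.0f} GiB for 500 million keys")
```

The result lands well beyond what one lookup structure can comfortably hold in memory, which is why partitioning the data, or loading only the keys actually referenced by the incoming files, is usually considered before building a single huge hashed file.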
Regards,
Sid