Filter in sequential file.
Moderators: chulett, rschirm, roy
My source file contains 24 months of data. I have to extract the first 12 months of data from that file using the filter command in the Sequential File stage. Which command do I have to use to achieve this requirement?
What makes you think you have to use the filter command to achieve this?
That sounds more like a job for a Transformer constraint. How will you recognize these 'first 12 months' of data? What is the business rule you are attempting to implement? The answer to that question will help determine the appropriate place to do the dirty deed.
-craig
"You can never have too many knives" -- Logan Nine Fingers
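For what it's worth, the Filter option on a Sequential File stage simply pipes the rows through a Unix command, so if that route were taken it could look something like this awk sketch. The delimiter, the date's column position, its YYYY-MM-DD format, and the cutoff date are all assumptions; substitute your own.

```shell
# Hedged sketch of a Sequential File "Filter" command: keep only rows whose
# date (assumed to be comma-delimited column 3, formatted YYYY-MM-DD) falls
# before an assumed cutoff marking the start of month 13. Because the format
# is zero-padded ISO dates, a plain string comparison orders correctly.
awk -F',' '$3 < "2023-01-01"' source.txt > first_12_months.txt
```

In the stage itself only the `awk -F',' '$3 < "..."'` part would go in the Filter property; the stage supplies the input stream.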
hi
We already have a job for this with Transformer constraints. Since the data volume is high, it is taking more time, so we are trying some methods to improve the performance. Is there any other way to achieve this, other than Transformer constraints, that will reduce the time taken?
thanks in advance.
chulett wrote: How will you recognize these 'first 12 months' of data? What is the business rule you are attempting to implement? The answer to that question will help determine the appropriate place to do the dirty deed.
p.s. Unless your constraints are horribly complex, I don't see how they could be your 'taking more time' culprit. We'd need a better idea of your job design in order to provide any specific help there.
-craig
"You can never have too many knives" -- Logan Nine Fingers
The constraints are not that complex. We pass in the first month's value, compare it with the date value in the source file, and find the month difference. Based on this month difference we split the 24 months of data into two files of 12 months each. We have nearly 100 million records, and it is taking 4 to 5 hours to load these files.
Can we reduce this time?
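The month-difference split described above can be sketched outside the job as a one-pass awk script. This is only an illustration under assumptions: the date is in comma-delimited column 3 as YYYY-MM-DD, the first month is passed in as a year and month number, and the output file names are invented.

```shell
# Hedged sketch of the described split: compute each row's month difference
# from a passed-in first month (here assumed to be January 2022) and route
# months 0-11 to one file and everything else to a second file, in one pass.
awk -F',' -v fyear=2022 -v fmonth=1 '{
    y = substr($3, 1, 4) + 0          # year portion of the date column
    m = substr($3, 6, 2) + 0          # month portion of the date column
    diff = (y - fyear) * 12 + (m - fmonth)
    if (diff < 12) print > "months_01_12.txt"
    else           print > "months_13_24.txt"
}' source.txt
```

A single sequential pass like this avoids reading the 100-million-row file twice, which is one place a Transformer-based split can lose time.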
Make your job multiple-instance and have each instance read a subset of the data. Depending on the resources available, run as many instances simultaneously as you can.
A much faster approach: bulk load the file into a work table and write an SQL query to extract the data, again as two sets of 12 months each.
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
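The bulk-load-then-query suggestion above could be sketched as follows, using sqlite3 purely as a stand-in for whatever database and bulk loader the site actually has; the table name, column layout, and cutoff date are all assumptions.

```shell
# Toy sketch of "bulk load into a work table, then extract with SQL".
# sqlite3, the work_tbl schema, and the cutoff are stand-in assumptions;
# a real site would use its warehouse's bulk loader instead.
sqlite3 work.db <<'SQL'
CREATE TABLE work_tbl (id TEXT, amt INTEGER, txn_date TEXT);
.mode csv
.import source.txt work_tbl
SQL

# Extract the first 12 months (rows dated before the assumed cutoff).
sqlite3 -csv work.db \
  "SELECT * FROM work_tbl WHERE txn_date < '2023-01-01';" > first_12_months.txt
```

The win here is that a database bulk loader and an indexed or partitioned query are typically much faster at a 100-million-row split than row-by-row processing in the job itself.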