Output needs to move in different jobs
Moderators: chulett, rschirm, roy
Hi,
I have a table with 1000 records. But my output needs to go to different jobs based on conditions.
i. all the header records need to go to Job1
ii. all the tail records need to go to Job2
iii. Remaining records need to go to Job3.
How can we achieve this? What functions do we need to apply in DataStage?
Depending on how difficult it is to identify the header and trailer records in your input table, I would just have the three jobs required, each one filtering the input from the source for the appropriate records.
eg
i. Job reads the source table for header records
ii. Job reads the source table for trailer records
iii. Job reads the source table for records which are not header or trailer.
The 'code' that you require will depend on a number of factors, which will in the end affect the job design that you decide upon.
Factors that will influence this can include:
Input type: eg DB or File
Operating Systems,
environment performance,
data size etc
eg if the input is an Oracle table, then you would use an input stage of an Oracle Connector, a DRS stage, or an ODBC Connector. You would enter, in whichever stage you choose, the criteria for your selection. In this instance, given your simple request, there would be no need for a Transformer; you can just write the data directly to wherever you require it (eg a DB, a Data Set or a file).
Equally, you could read all the records from the table and filter those records in a Transformer stage. But usually a DB will have better performance, provided it is correctly set up.
If the input was a sequential file, you can apply filters to the data as the file is read. Or you can read all the data and apply a filter in a transformer stage based on the criteria you require.
What I have discussed here is merely a fraction of the things that can be considered even on the most simple jobs.
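To make the three-job design concrete, here is a rough sketch of the idea in plain Python. The `type` field is purely hypothetical; in DataStage each job would apply the equivalent filter in its source stage or a Filter/Transformer stage rather than in code like this:

```python
# Hypothetical source records, each carrying a record-type marker.
records = [
    {"type": "H", "data": "header row"},
    {"type": "D", "data": "detail row 1"},
    {"type": "D", "data": "detail row 2"},
    {"type": "T", "data": "trailer row"},
]

# Each "job" reads the full source and keeps only its own records,
# just as three DataStage jobs would each filter the same table.
job1_input = [r for r in records if r["type"] == "H"]            # headers
job2_input = [r for r in records if r["type"] == "T"]            # trailers
job3_input = [r for r in records if r["type"] not in ("H", "T")]  # the rest
```

The point is simply that all three jobs share one source and differ only in their filter condition.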
@shaneMuir
Thank you for the information.
I am using the Sequential File stage. The data is in a flat file with 15 rows, a basic file -- employe.txt. No database connections.
employee table (eno, ename, salary)
But my confusion here is: you just map all the records to the target. I don't know how to separate the rows to different targets, and I have no idea about the filter condition.
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
The filter condition exactly reflects the mechanism used in the file to identify header, detail and trailer records. Ordinarily you would wish to separate these into separate outputs in your Transformer stage, therefore the "filter" conditions become the output link constraint expressions in the Transformer stage.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Don't you have anyone you work with that can help mentor you? Or are you truly doing this all on your lonesome somehow?
A constraint is an expression that evaluates to TRUE or FALSE. When it evaluates to true for a given row, that row is allowed down that link. When false, it is not.
So... how do you know it is a Header record? Trailer record? Whatever determines that would dictate what your expression looks like. Let's take a hypothetical... you have a field called RECORD_TYPE and it is either an H, a T or a D for a detail record. Three output links from a transformer could in that case have expressions like:
Link to the header target: RECORD_TYPE = 'H'
Link to the trailer target: RECORD_TYPE = 'T'
Link to the detail target: RECORD_TYPE = 'D'
Note that constraints support an 'otherwise' type so the third one could leverage that. Or you could build three jobs, all of which read the same file and then have a Filter or Transformer constraint that matches their target.
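Craig's three constraints can be mimicked in Python to show how a Transformer routes each row down exactly one output link, with the last link acting as the 'otherwise' (the `RECORD_TYPE` field and link names are taken from his hypothetical example, not from your actual data):

```python
def route(row):
    """Mimics Transformer output-link constraints: each constraint is a
    boolean expression, and the row goes down the link whose constraint
    evaluates to TRUE; the final return acts as the 'otherwise' link."""
    if row["RECORD_TYPE"] == "H":
        return "header_link"
    if row["RECORD_TYPE"] == "T":
        return "trailer_link"
    return "detail_link"  # the 'otherwise' link catches everything else
```

So `route({"RECORD_TYPE": "H"})` goes to the header link, and a row with any other type code falls through to the detail link.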
Not really here to teach a DataStage class in a post or two, but hopefully that helps.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Craig has guessed what your data look like. You have to create the rules (in English, if you like) about what in the data defines the record type, then how to extract it if it's not in a field by itself - for example it might be the first character of each line. Then you can convert your specifications to pseudo code and then, finally, to DataStage expressions.
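For example, if the rule turned out to be "the first character of each line is the type code" (an assumption, since we haven't seen your file), the pseudo-code step would look something like this before you translate it into a DataStage expression such as `Left(line, 1) = 'H'`:

```python
def record_type(line):
    # Hypothetical rule: the first character of each line is the type code.
    return line[:1]

# Made-up sample lines in the assumed layout.
lines = ["H|file header", "D|1|Smith|50000", "D|2|Jones|60000", "T|2 records"]

headers  = [l for l in lines if record_type(l) == "H"]
trailers = [l for l in lines if record_type(l) == "T"]
details  = [l for l in lines if record_type(l) not in ("H", "T")]
```

Your real rule might be completely different (a row number, a count of fields, a keyword), which is why you have to pin the rule down in English first.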
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.