How to pass header and trailer to the output

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

kirankota79
Premium Member
Posts: 315
Joined: Tue Oct 31, 2006 3:38 pm

How to pass header and trailer to the output

Post by kirankota79 »

I am trying to process a file which contains header, detail and trailer records. The header and trailer are not the same length, and my job is

seq file ... transformer ... seq file

The link before the transformer shows the number of rows processed, but nothing is passing through the transformer and I am getting zero rows in the output. I don't have any constraints in the transformer, and I am getting this error:

NullTransformer,0: Caught unknown exception from runLocally().

Some input files have delimiters and some don't.

I want to pass the header and trailer through to the output, with transformations applied only to the detail records. I don't want to touch the header and trailer.

Any help?

Thanks
csri
Participant
Posts: 99
Joined: Wed Jul 11, 2007 8:02 am

Re: How to pass header and trailer to the output

Post by csri »

kirankota79 wrote: I am trying to process a file which contains header, detail and trailer records. The header and trailer are not the same length, and my job is

seq file ... transformer ... seq file

The link before the transformer shows the number of rows processed, but nothing is passing through the transformer and I am getting zero rows in the output. I don't have any constraints in the transformer, and I am getting this error:

NullTransformer,0: Caught unknown exception from runLocally().

Some input files have delimiters and some don't.

I want to pass the header and trailer through to the output, with transformations applied only to the detail records. I don't want to touch the header and trailer.

Any help?

Thanks
What is the format of the header, trailer and your data records? Does the header contain the total number of records in the file? Please provide more information on the field layout.
kirankota79
Premium Member
Posts: 315
Joined: Tue Oct 31, 2006 3:38 pm

Post by kirankota79 »

The file format is as below:

HDRBAE001I 200808141008000989 CREDITCARDDATA.TXT 00086


10A20080813240893 0058940918190296242291182201108GAMBUCCI, JESSE J PUS
10A20080813544909 9687940918190338143607381201108JULIANO, LILY PUS


TRL200808141008000000029


I have given only a couple of detail records as a sample.






HEADER RECORD TYPE From Standard File 3, 1, 3
HEADER PROCESS ID New Value 8, 4, 11
HEADER CREATION-DATE From Standard File 12, 12, 23
HEADER SEQUENCE New Value 6, 24, 29
HEADER PAYROLL RUN ID New Value 10, 30, 39
HEADER FILENAME New Value 30, 40, 69
HEADER LARGEST-RECORD-SIZE New Value 6, 70, 75
HEADER RESERVED-FOR-FUTURE-USE New Value 11, 76, 86
DETAIL DETAIL-REC-TYP From Standard File 2, 1, 2
DETAIL STATUS-CODE From Standard File 1, 3, 3
DETAIL EFF-DATE From Standard File 8, 4, 11
DETAIL EMPLOYER From Standard File 15, 12, 26
DETAIL ACCT-NO From Standard File 16, 27, 42
DETAIL SSN From Standard File 9, 43, 51
DETAIL EXP-DATE From Standard File 6, 52, 57
DETAIL NAME-LINE1 From Standard File 25, 58, 82
DETAIL BANK-NBR From Standard File 4, 83, 86
TRAILER RECORD TYPE New Value 3, 1, 3
TRAILER CREATION-DATE New Value 12, 4, 15
TRAILER TOTAL-NUMBER-OF-RECORDS New Value 9, 16, 24
TRAILER TOTAL-DOLLAR-AMT New Value 15, 25, 39
TRAILER RESERVED-FOR-FUTURE-USE New Value 47, 40, 86

The last three numbers on each line represent (Length, Start, Stop).

This file doesn't have any delimiters. I have some other files that are pipe-delimited.

Thanks
csri
Participant
Posts: 99
Joined: Wed Jul 11, 2007 8:02 am

Post by csri »

There may be many approaches to solve this. Here is what I can think of:

1. Define the input Sequential File stage as fixed width so that each record comes in as one field.
2. Pass it through a Transformer and constrain it to exclude the header and trailer rows, e.g. Left(field,3) <> 'HDR' And Left(field,3) <> 'TRL'. Pass the header and trailer to the output as a separate stream (sample expressions below).
3. For the detail records, since you know the offset and length of each field, you can split them into different fields or process them however you like.
4. If you have to write back to a fixed-width file after processing, you can concatenate all the fields and use a Funnel stage to collect the header, trailer and data records before writing to the file.
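
To spell steps 2 and 3 out a little: writing "field" for the single column from step 1 (in the real expression editor it will be qualified with your input link name), the Transformer expressions could look something like this.

Constraint on the detail output link (step 2):
Left(field, 3) <> 'HDR' And Left(field, 3) <> 'TRL'

Constraint on the header/trailer bypass link:
Left(field, 3) = 'HDR' Or Left(field, 3) = 'TRL'

Sample derivations that split a detail record with the substring operator, using the (Length, Start, Stop) positions you posted (step 3):
field[1,2] for DETAIL-REC-TYP (start 1, length 2)
field[27,16] for ACCT-NO (start 27, length 16)
field[58,25] for NAME-LINE1 (start 58, length 25)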

Hope this helps.
kirankota79
Premium Member
Posts: 315
Joined: Tue Oct 31, 2006 3:38 pm

Post by kirankota79 »

Thanks for the inputs; my jobs are working for that type of file.
In the above example the header, detail and trailer records are all the same length (86), so it was easy to design and run the job.

Now I have some files whose columns are delimited by a pipe ("|"), and the header, detail and trailer are of different lengths. Can you tell me how I can do this?

Thanks
Kiran
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Different job, same logic but different Format in the Sequential File stage that reads the file. Delimiter can not be a job parameter. Rather than parsing the fields out of a single string, they will now be parsed automatically for you.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
csri
Participant
Posts: 99
Joined: Wed Jul 11, 2007 8:02 am

Post by csri »

kirankota79 wrote: Thanks for the inputs; my jobs are working for that type of file.
In the above example the header, detail and trailer records are all the same length (86), so it was easy to design and run the job.

Now I have some files whose columns are delimited by a pipe ("|"), and the header, detail and trailer are of different lengths. Can you tell me how I can do this?

Thanks
Kiran
Kiran, it's the same logic. Read the file as fixed width, as a single field, and exclude the header and trailer. Then use a function that does a substring based on the pipe delimiter to get the fields out and process them (see the sketch below). Finally, concatenate the fields, collect them with a Funnel stage and write them to the file.
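
As a rough sketch only (I am calling the single input column "rec" here just to avoid confusing it with the Field() function; use your real column name), the derivations could be:

Field(rec, '|', 1) for the first pipe-separated value
Field(rec, '|', 2) for the second
Field(rec, '|', 3) for the third, and so on

The header and trailer rows still bypass the Transformer on their own link, exactly as before.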

Hope this helps.
kirankota79
Premium Member
Posts: 315
Joined: Tue Oct 31, 2006 3:38 pm

Post by kirankota79 »

Hi, thanks again for the inputs; it is working for the different lengths.

Now I have a different scenario. Apart from the header and trailer, there are some files whose detail records have different lengths. This makes it difficult to split the single column into multiple columns in a particular format. Is there a way that DataStage can understand the varying lengths of the detail records?

Thanks
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

You need to make it understand. Not sure why everything starts over with a new question as all of these are variations on the same theme and basically all have the same answer.

Define the record as a single string field using the length of the longest record. Parse the string into individual fields based on your knowledge of the internal format - be it fixed sizes or delimited. You can also check the length of the string (or perhaps an initial field) to know which metadata to use to parse that particular record into individual columns and assign each type of record to its own output link. Assuming, of course, that multiple formats (like your header/detail/trailer records) exist. If there's only a single format then it becomes even easier.
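
As a sketch only (the column name "rec" is made up, and 86 is just the record length from the earlier fixed-width example; plug in whatever lengths your files actually use), the output link constraints might be:

Header link: Left(rec, 3) = 'HDR'
Trailer link: Left(rec, 3) = 'TRL'
Detail layout A: Left(rec, 3) <> 'HDR' And Left(rec, 3) <> 'TRL' And Len(rec) = 86
Detail layout B: Left(rec, 3) <> 'HDR' And Left(rec, 3) <> 'TRL' And Len(rec) <> 86

Each detail link then gets its own substring or Field() derivations for that particular layout.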

So... what exactly is your difficulty in this 'different' scenario of yours? :?
-craig

"You can never have too many knives" -- Logan Nine Fingers