SEPDI File Format Import.

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

Nisusmage
Premium Member
Posts: 103
Joined: Mon May 07, 2007 1:57 am

SEPDI File Format Import.

Post by Nisusmage »

Hello.

I have a problem importing a flat file (SEPDI) that has a header and a trailer around the data. File definition: http://www.qlink.co.za/QLHome/documenta ... neral.html

I've looked at the CFF stage; however, I'm not convinced it would work.

The layout of my file is as follows:

Code:

SEPDI Header Record
    TRANSACTION Records
    TRANSACTION Records
    TRANSACTION Records
SEPDI Trailer Record
I need to somehow strip out the header and trailer for use in the job for validation, and import the transaction data into SQL Server.

Can anyone help?
~The simplest solutions are always the best~
~The trick is to understand the complexity to implement simplicity~
battaliou
Participant
Posts: 155
Joined: Mon Feb 24, 2003 7:28 am
Location: London

Post by battaliou »

You can simply use a Sequential File stage in combination with a Transformer. I'm assuming that this is a CSV file rather than fixed width. Either way, you can use your Transformer to constrain out any rows that begin with the string SEPDI, e.g. DSLink.Type[1,5] <> 'SEPDI'
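
A minimal sketch of how that might sit on the Transformer's output links, assuming the input link is called DSLink with a single column Type (those names are placeholders, not anything from your actual job):

Code:

* data link constraint - pass only rows that do not start with the tag
DSLink.Type[1,5] <> 'SEPDI'

* optional second link to catch the header/trailer rows for validation
DSLink.Type[1,5] = 'SEPDI'
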
3NF: Every non-key attribute must provide a fact about the key, the whole key, and nothing but the key. So help me Codd.
Nisusmage
Premium Member
Posts: 103
Joined: Mon May 07, 2007 1:57 am

Post by Nisusmage »

battaliou wrote:You can simply use a Sequential File stage in combination with a Transformer. I'm assuming that this is a CSV file rather than fixed width. Either way, you can use your Transformer to constrain out any rows that begin with the string SEPDI, e.g. DSLink.Type[1,5] <> 'SEPDI'
The file is fixed width; the first row has a different definition from the last row, and the main data has a completely different definition again.

Perhaps I should give you an example of the data.

Code:

QTOP00000000004354306543605071003200710
QDEL00104580       Beginsel                 SC      000000709300010               
QDEL00180864       Bates                    EF      0000001409300010
QUPD00976943       Jooste                   PA     00000000000000010
QUPD00316897       Molema                   N       000071001000000000010                             
QUPD00408032       An Nakene                AN     000001310010000000 
QUPD00408032       An Nakene                AN      00001456000000 
QUPD00408032       An Nakene                AN      000003561000000010                             
QUPD00408032       An Nakene                AG      0000000000000010   
QEND00000000565600000000000356543607           
This isn't real data, but it would be very close to this format.

I don't think a Sequential File stage would work here because the rows have different schemas.

Any ideas?
~The simplest solutions are always the best~
~The trick is to understand the complexity to implement simplicity~
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

A Sequential File stage will work perfectly well. Each line is read as a single VarChar field (I'm assuming there are line terminators) then parsed in a downstream Transformer stage onto three output links (header, transaction and trailer) based on the first four characters (QTOP, QEND or Qxxx). Parsing can use substring techniques since the columns are fixed width.
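
For example (link and column names, and the field start positions, are only guesses from the sample data above, not from the SEPDI definition):

Code:

* input link 'In' carries each line as a single VarChar column 'Line'

* header output link constraint
In.Line[1,4] = 'QTOP'

* trailer output link constraint
In.Line[1,4] = 'QEND'

* transaction output link constraint (QDEL / QUPD rows)
In.Line[1,4] <> 'QTOP' And In.Line[1,4] <> 'QEND'

* example output column derivations on the transaction link
* (column names and positions are guesses from the sample data)
RecordType = In.Line[1,4]
AccountNo  = In.Line[5,8]
Surname    = Trim(In.Line[20,25])
Initials   = Trim(In.Line[45,8])

The header and trailer links can then feed whatever validation you need before the transaction link is loaded into SQL Server.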
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Nisusmage
Premium Member
Posts: 103
Joined: Mon May 07, 2007 1:57 am

Post by Nisusmage »

ray.wurlod wrote:A Sequential File stage will work perfectly well. Each line is read as a single VarChar field (I'm assuming there are line terminators) then parsed in a downstream Transformer stage onto three output ...
Thanks Ray, I'm going to give that a try after I've finished trying something else. That approach did dawn on me while I was working on this other idea (which is probably more complex). I found a nice free command-line program called SFK.exe (I think there's a Unix version as well), and I found a way to filter the data and push it to three different files based on the prefix of each line. This should work and I'm almost done, so I'm going to finish it for archive purposes.
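
If the command-line route ever falls over, roughly the same split could be done inside DataStage with a small BASIC before-job subroutine. This is only a sketch, not the command I'm actually using: the routine name, the output file names and the use of InputArg as the source path are all assumptions.

Code:

* Sketch only: splits a SEPDI file into header, transaction and trailer
* files by line prefix. Routine name, output file names and the use of
* InputArg as the source path are assumptions.
SUBROUTINE SplitSEPDI(InputArg, ErrorCode)
   ErrorCode = 0
   SrcPath = InputArg

   OpenSeq SrcPath To SrcFile Else
      Call DSLogWarn('Cannot open ' : SrcPath, 'SplitSEPDI')
      ErrorCode = 1
      Return
   End

   * open the three output files, creating them if they do not exist
   * (the sketch assumes they are not left over from an earlier run)
   OpenSeq SrcPath : '.header'  To HdrFile Else Create HdrFile Else ErrorCode = 2
   OpenSeq SrcPath : '.data'    To DatFile Else Create DatFile Else ErrorCode = 2
   OpenSeq SrcPath : '.trailer' To TrlFile Else Create TrlFile Else ErrorCode = 2
   If ErrorCode <> 0 Then Return

   * route each line on its first four characters
   Loop
      ReadSeq Line From SrcFile Else Exit
      Begin Case
         Case Line[1,4] = 'QTOP'
            WriteSeq Line To HdrFile Else Null
         Case Line[1,4] = 'QEND'
            WriteSeq Line To TrlFile Else Null
         Case @True
            WriteSeq Line To DatFile Else Null
      End Case
   Repeat

   CloseSeq SrcFile
   CloseSeq HdrFile
   CloseSeq DatFile
   CloseSeq TrlFile
RETURN

Attached as a before-job subroutine on the load job, it would leave three flat files ready for the Sequential File stages, with the source path passed in as the input value.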

Thanks for the help, I now have 2 solutions.
I'll let you know and update this post if I have any problems.
~The simplest solutions are always the best~
~The trick is to understand the complexity to implement simplicity~