How do I copy a hashed file to a sequential file?

wahil
Participant
Posts: 23
Joined: Tue Oct 25, 2005 11:14 am


Post by wahil »

Hi all.

I need to create a generic job that copies a hashed file to a sequential file, because at my company it is not possible to view data in the production environment.
[b]Important: the hashed files have different layouts.[/b]
How can I do this? :roll:

Excuse me, but I don't have much experience with DataStage...
:wink:
Thanks
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

You need two stages in your job design for this:
a Hashed File stage pointing at the hashed file you want copied over, and the second stage is of course a Sequential File stage.
Have a link coming out of the Hashed File stage into the Sequential File stage.
Compile the job,
run it,
right-click on the Sequential File stage and click on View Data.
Voila :P
You can now see the data.
Isn't DS just a smooth ride :wink:
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
wahil
Participant
Posts: 23
Joined: Tue Oct 25, 2005 11:14 am

Post by wahil »

OK, DSguru2B.

Thanks a lot, but the job needs to be generic and read different layouts of hashed files... :(
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

DSguru2B - if only it were that easy. Note that the original poster stated that the hashed files have different layouts and that he/she wishes to make a generic job.

What can be done is to use the fact that hashed files are not directly tied to their DDL, as is the case with normal SQL databases. Hashed files have a key and data, and both are represented as strings. The data portion of the record is one long string in which field marks separate the columns.

One approach is to write a generic job that declares n columns for the source hashed file (e.g. Col01 through Col49 as VarChar(32)). Even if the physical hashed file only has 2 columns defined, you will not get an error for columns 3 through 49; their values will default to the empty string. You can then output all of the columns to a flat file and just ignore the 47 unused trailing columns. This is pretty easy and quick to do.
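For illustration (the record contents here are made up): if a record's data portion is "SMITH" : @FM : "1943-02-04", the generic 49-column definition reads it back as

   Col01 = "SMITH"
   Col02 = "1943-02-04"
   Col03 through Col49 = "" (the empty-string default)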

The other approaches that I can think of right now are more complex, using either a modified DDL (DICTionary) for the hashed file(s) or a simple bit of DS/BASIC code in a routine to return the complete record and write it out. Either would have the advantage of not leaving a lot of extra empty trailing columns at the end.
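To give an idea of the routine flavour, here is a minimal sketch (mine, not a tested implementation): the routine name, the comma-separated InputArg convention, and the assumption that the hashed file can be opened by name (a project-level file or one with a VOC pointer) are all illustrative. It uses the standard before/after subroutine signature and converts field marks to tabs.

   * Sketch only - names and the argument convention are illustrative.
   * Dumps any hashed file to a tab-delimited sequential file.
   * InputArg carries "HashedFileName,OutputPath".
   SUBROUTINE DumpHashedFile(InputArg, ErrorCode)
   ErrorCode = 0
   FileName = FIELD(InputArg, ',', 1)
   OutputPath = FIELD(InputArg, ',', 2)
   OPEN FileName TO HashFile ELSE
      ErrorCode = 1
      RETURN
   END
   OPENSEQ OutputPath TO OutFile ELSE
      * OPENSEQ takes the ELSE branch when the file does not exist yet;
      * CREATE makes it.
      CREATE OutFile ELSE
         ErrorCode = 2
         RETURN
      END
   END
   SELECT HashFile
   LOOP
      READNEXT Key ELSE EXIT
      READ Rec FROM HashFile, Key THEN
         * Key first, then the data with field marks turned into tabs.
         WRITESEQ Key : CHAR(9) : CONVERT(@FM, CHAR(9), Rec) TO OutFile ELSE NULL
      END
   REPEAT
   * Truncate any leftover content if the output file already existed.
   WEOFSEQ OutFile
   CLOSESEQ OutFile
   CLOSE HashFile
   RETURN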
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

:oops:
I wonder why I do that; it happens to me all the time. I always miss the buzzword. I completely overlooked "generic job" - the only thing that registered in my head was "job".
double :oops:
I stand corrected. Thanks for mentoring us, ArndW.
I need coffee :?
Regards
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
wahil
Participant
Posts: 23
Joined: Tue Oct 25, 2005 11:14 am

Post by wahil »

Arnd,

Could you send your post to my e-mail, please?
I don't have access to the "premium content..." I'm very poor!!! :wink:

Mail: wahil@ig.com.br



DSguru, don't worry!
Enjoy your coffee!!! :wink:

Thanks a lot!
Wagner
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

Wagner, there is a reason behind the premium posting. I am sure someone else will paraphrase what I said or come up with a better alternative. I think that for people working with DataStage the premium membership is well worth it. Let's assume that as a computer professional you are getting paid $50 a day; the annual membership costs about a day's worth of work. Do you think you'll get that much benefit in a year? This question alone might take you 4 hours or more of your own time to solve without DSXchange, so if you do get help, that is already half a year's dues recovered on just one question.

To put it another way, if you had a charter membership the first post would also have included a short routine to solve your problem...
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Curiously, this requirement CAN be fulfilled with a variable record layout.

For this example I am assuming that there are no multi-valued fields in the hashed file, but it's an easy adjustment if there are. With hashed files the entire data record is accessible via the @RECORD system variable.

Therefore, to read the entire record as a delimited string, you can use EVAL "@RECORD" as a derivation in a UV stage.

But why not have the engine also convert the delimiters? Set up a UV stage with two columns, the key (derivation @ID) and the data record (derivation EVAL "CONVERT(@FM,Char(09),@RECORD)").
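For illustration, the two columns might be defined like this (the column names are made up):

   Column   Key?   Derivation
   KEY      Yes    @ID
   RECORD   No     EVAL "CONVERT(@FM,Char(09),@RECORD)"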

The DSN is localuv, and the table name is a job parameter. Note that you will need to have created a VOC entry (search for SETFILE) for each hashed file to be processed by this means.
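For example, the VOC entry can be created from the engine command line (TCL); the path here is made up:

   SETFILE /data/project/hashed/MyHashedFile MyHashedFile OVERWRITING

MyHashedFile can then be used as the table name through the localuv DSN.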

The sequential file should have the same delimiter character as in the Convert() function; in this case tab. Presumably the sequential file name is also a job parameter, or you set overwrite/append according to your need.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

I thought of using @RECORD, but realized that if there are a lot of files, and a mix of files without VOC pointers, then that approach is more work. A simple hashed file record lookup routine (using COMMON for the file pointer, passing in the record key for the READ, and then converting @FM to ',') would be less work overall and somewhat easier to maintain. Just goes to show how many different approaches there really are.
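Such a routine might look like the following minimal sketch; the function name, the COMMON block name, and the single-file assumption are mine (a real version would cache one handle per file name):

   * Sketch only - names here are illustrative.
   * Returns the whole record for a key, with field marks converted to
   * commas. The file handle is kept in COMMON so the OPEN happens only
   * once per job run.
   FUNCTION GetHashedRecord(FileName, RecordKey)
   COMMON /HashRead/ Initialised, HashFile
   IF UNASSIGNED(Initialised) THEN Initialised = @FALSE
   IF NOT(Initialised) THEN
      OPEN FileName TO HashFile ELSE CALL DSLogFatal('Cannot open ' : FileName, 'GetHashedRecord')
      Initialised = @TRUE
   END
   READ Rec FROM HashFile, RecordKey THEN
      Ans = CONVERT(@FM, ',', Rec)
   END ELSE
      Ans = ''
   END
   RETURN(Ans)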