About what hash files are, and other things

Post questions here relating to DataStage Server Edition, covering such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

gpbarsky
Participant
Posts: 160
Joined: Tue May 06, 2003 8:20 pm
Location: Argentina

About what hash files are, and other things

Post by gpbarsky »

Hi, my friends. I'm happy to be in touch with you.

I would like to know:

1) Are hashed files backed by UniVerse tables in the DataStage engine?

2) Are hashed files accessible through a UniVerse stage?

3) Is there a way to generate a temporary file without having to delete it yourself? I mean something like "&&filename" on the mainframe.

4) When you change a routine, MUST you re-compile all the jobs that use that routine? In other words, is the routine's code included in a job's code when the job is compiled?

5) How can you control the number of jobs executing simultaneously? Can you do this by a search criterion, something like "I want to run up to 3 jobs like ES*, and up to 5 jobs like ED*"?

I really appreciate your comments.

Bye.

Guillermo P. Barsky
Buenos Aires - Argentina
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX

Post by kduke »

Guillermo

1,2,3) Most of us consider hashed files to be temp files, but that does not mean they delete themselves. A UV stage and a Hashed File stage can read and write the same hashed file. They work a little differently, and you can do some extra things with a UV stage if you know UniVerse. Most of the time I use a Hashed File stage. (See the first sketch after these answers.)

4) Yes

5) There used to be a really cool batch job routine that Ascential gave out which would keep 5 jobs running, or whatever number you wanted; I think Ken wrote it. If you have a slower server this is important, because you can overload it. I always thought DataStage should wait until less than 100% of the CPU was in use, but some servers never seem to get under 100%. (The second sketch below shows the general idea.)
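
To make 1-3 concrete, here is a minimal DataStage BASIC sketch of working with a hashed file. The file name TEMP.LOOKUP, the key, and the CREATE.FILE options are assumptions and vary by site; the point is that a hashed file is an ordinary UniVerse file that persists until someone deletes it.

Code:

    * A minimal sketch; TEMP.LOOKUP is a hypothetical hashed file name.
    OPEN 'TEMP.LOOKUP' TO F.LOOKUP ELSE
       * Create it as a dynamic (type 30) file if it does not exist yet.
       EXECUTE 'CREATE.FILE TEMP.LOOKUP 30'
       OPEN 'TEMP.LOOKUP' TO F.LOOKUP ELSE ABORT
    END

    * Write and read a record, exactly as any UniVerse program would.
    Rec = 'some value'
    WRITE Rec ON F.LOOKUP, 'KEY1'
    READ Rec FROM F.LOOKUP, 'KEY1' THEN
       CRT 'Read back: ' : Rec
    END ELSE
       CRT 'Record not found'
    END
    CLOSE F.LOOKUP

    * There is no mainframe-style &&temp behaviour: the file stays on
    * disk until someone removes it explicitly.
    EXECUTE 'DELETE.FILE TEMP.LOOKUP'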
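
For 5), the usual approach is a batch/control routine written against the DataStage BASIC job-control API (DSAttachJob, DSRunJob, DSGetJobInfo and friends). Below is a minimal, hypothetical sketch that keeps at most MaxRunning jobs from a list going at once. The job names and the limit are made up, and this is not the old Ascential routine itself, just the general shape of one. Per-pattern limits (up to 3 ES* jobs, up to 5 ED* jobs) would just mean keeping one counter per pattern instead of a single one.

Code:

    * A minimal throttling sketch; the job list and MaxRunning are assumptions.
    $INCLUDE DSINCLUDE JOBCONTROL.H

    JobList = 'ES_LOAD1' : @FM : 'ES_LOAD2' : @FM : 'ED_LOAD1' : @FM : 'ED_LOAD2'
    MaxRunning = 3
    JobCount = DCOUNT(JobList, @FM)

    Handles = ''   ;* handles of the jobs currently running
    NextJob = 1

    LOOP
       * Keep only the jobs that are still running; detach finished ones.
       StillRunning = ''
       FOR I = 1 TO DCOUNT(Handles, @FM)
          hJob = Handles<I>
          IF DSGetJobInfo(hJob, DSJ.JOBSTATUS) = DSJS.RUNNING THEN
             StillRunning<-1> = hJob
          END ELSE
             Dummy = DSDetachJob(hJob)
          END
       NEXT I
       Handles = StillRunning

       * Fill any free slots with the next jobs in the list.
       LOOP WHILE DCOUNT(Handles, @FM) < MaxRunning AND NextJob <= JobCount DO
          hJob = DSAttachJob(JobList<NextJob>, DSJ.ERRFATAL)
          ErrCode = DSRunJob(hJob, DSJ.RUNNORMAL)
          Handles<-1> = hJob
          NextJob = NextJob + 1
       REPEAT

    UNTIL NextJob > JobCount AND Handles = '' DO
       SLEEP 10   ;* poll every 10 seconds
    REPEAT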

Kim.

Kim Duke
DsWebMon - Monitor over the web
www.Duke-Consulting.com
msigal
Participant
Posts: 31
Joined: Tue Nov 26, 2002 3:19 pm
Location: Denver Metro

Post by msigal »

Regarding #4: It's been my experience that you do not have to recompile a job if you change a routine's code. Yes, you have to recompile the routine, but not the job that calls it. This is one feature we enjoy. We can change a routine in one place and not have to change any jobs.
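
As a concrete (hypothetical) illustration: a server routine is a separately compiled piece of BASIC whose body sets Ans, which suggests jobs resolve it by name at run time rather than embedding its code. For a transform function named, say, MyTrim with a single argument Arg1, the entire body you write in the routine editor might be:

Code:

    * The editor supplies the function header around this body; the body
    * just has to set Ans before it returns.
    Ans = TRIM(Arg1)   ;* drop leading/trailing spaces, squeeze internal runs

Recompile just this routine, and every job that calls MyTrim picks up the new behaviour on its next run.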

Shared container changes do require a recompile of the referencing job.

Myles

Myles Sigal
Technical Analyst
Thomson - Medstat
777 E. Eisenhower - 435B
Ann Arbor, MI 48108

myles.sigal@medstat.com
734-913-3466
debajitp
Participant
Posts: 7
Joined: Wed Jun 11, 2003 6:15 am
Location: India

Post by debajitp »

Yes, that is true. One scenario to share with you:

When I imported my DS jobs with their executables into my pre-production environment, some of them failed because the common routines had not been separately compiled there.

I had to either compile the routines once or compile the jobs that use them. Once that is done, further changes to a routine do not require recompiling the jobs.

Regards,
Debajit Paul
Bangalore, India