Do we need to run all the hash jobs (PeopleSoft)

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

spalsikar
Participant
Posts: 10
Joined: Mon Aug 02, 2004 8:08 am

Do we need to run all the hash jobs (PeopleSoft)

Post by spalsikar »

There is a lookup in one of our jobs against a hash file, but we do not use or load the particular table on which that hash file is built. Do we still need to execute the hash job, even though we are not using the underlying table it is built on?

Thanks in advance.

Shashi
sumitgulati
Participant
Posts: 197
Joined: Mon Feb 17, 2003 11:20 pm
Location: India

Re: Do we need to run all the hash jobs (PeopleSoft)

Post by sumitgulati »

I did not quite understand your question, but here is what I have to say.
If you have a lookup to the hash file in your job, then you need to execute the hash job at least once in order to create the hash file. The correct way to deal with it is to remove the lookup if you are not using the underlying table.
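If you do keep the lookup, running the hash job a single time from job control is enough to get the hash file created. Just as a rough DS BASIC sketch (the job name here is made up for illustration):

      $INCLUDE DSINCLUDE JOBCONTROL.H
      * Attach the delivered hash-build job and run it once so the hash file exists
      hJob = DSAttachJob("J_Hash_Build_Lookup", DSJ.ERRFATAL)
      ErrCode = DSRunJob(hJob, DSJ.RUNNORMAL)
      ErrCode = DSWaitForJob(hJob)
      Status = DSGetJobInfo(hJob, DSJ.JOBSTATUS)   ;* check it finished OK
      ErrCode = DSDetachJob(hJob)

That only needs to happen once per project (or again if the hash file is ever deleted).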

Regards,
-Sumit
vmcburney
Participant
Posts: 3593
Joined: Thu Jan 23, 2003 5:25 pm
Location: Australia, Melbourne

Post by vmcburney »

If the table is completely empty, and the hash file is empty, then removing the lookup will make your job run faster and should not impact what the job does. The only reason for leaving it in is if you want to keep it as a standard DataStage PeopleSoft job without any on-site customisation, so it doesn't need any attention in future PeopleSoft upgrades.

If the table is not being used but it contains some factory-setting rows, then you need to determine whether any of those values are making it into the EPM loads; in that case, removing the hash file lookup would replace those factory values with NULLs.
spalsikar
Participant
Posts: 10
Joined: Mon Aug 02, 2004 8:08 am

Post by spalsikar »

Thank you for your suggestion/information, Sumit.

You are exactly right, Vincent; we don't want to touch the delivered DataStage jobs and want to keep them as close to delivered as we can. Maybe I did not put my question the right way. All I want to know is whether it is mandatory to run a hash job that feeds a hash file used in an existing job, even though the hash file as well as the underlying source table are empty.


~Shashi
vmcburney
Participant
Posts: 3593
Joined: Thu Jan 23, 2003 5:25 pm
Location: Australia, Melbourne

Post by vmcburney »

You don't have to run the job that populates the hash file if the table it is loaded from is completely empty or static (because you are not using that functionality). You can remove the hash load job from the overall sequence.

You need to run the job once when you create a new project, such as a new test or UAT project, or when you move your project to a new location. The hash file does not need to be refreshed, but it does need to be present for the jobs that use it.
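As a rough illustration of that last point, a before-job subroutine along these lines could verify the hash file is present before the jobs that reference it run. The routine and hash file names are made up for illustration, and it assumes the hash file lives in the project account:

      * Before-job subroutine sketch: check that the hashed file can be opened.
      * A non-zero ErrorCode stops the job, prompting you to run the delivered
      * hash job once in this project.
      SUBROUTINE CheckHashFileExists(InputArg, ErrorCode)
      ErrorCode = 0
      HashName = "HASH_PS_LOOKUP"   ;* hypothetical hashed file name
      OPEN HashName TO HashFile THEN
         CLOSE HashFile             ;* present in the project account, OK to proceed
      END ELSE
         CALL DSLogWarn("Hashed file " : HashName : " not found; run the hash job once", "CheckHashFileExists")
         ErrorCode = 1
      END
      RETURN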