Can't run any job

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

SonShe
Premium Member
Posts: 65
Joined: Mon Aug 09, 2004 1:48 pm

Can't run any job

Post by SonShe »

Can anyone please help me here? A few hours ago a long-running job of mine aborted with a ds_uvput() - failed to write to hash file error message. I searched the forum for possible causes of the ds_uvput() message, and sure enough we had filled the hashed file with more than 2 GB of data. I then cleared the hashed file from the DataStage Administrator command window using CLEAR.FILE.
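[Editor's note: a 32-bit dynamic (type 30) hashed file fails once one of its segment files nears 2 GB, which is what the ds_uvput() error above reflects. A quick way to spot other hashed files approaching the limit is to scan the project directory for large segment files. A minimal sketch, assuming the standard type-30 layout in which each hashed file is a directory containing DATA.30 and OVER.30; the function name and thresholds are illustrative, not from the thread:]

```shell
# Sketch: list dynamic hashed-file segments larger than a byte threshold.
# Assumes the standard type-30 layout (each hashed file is a directory
# holding DATA.30 and OVER.30 segment files); a 32-bit hashed file breaks
# as a segment approaches 2 GB (2147483648 bytes).
find_big_segments() {
    # $1 = directory to scan, $2 = size threshold in bytes
    find "$1" -type f \( -name DATA.30 -o -name OVER.30 \) -size +"$2"c -print
}

# Example: flag segments over ~1.9 GB under the current project directory.
find_big_segments . 1900000000
```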

We are on a Unix server running DataStage Server version 6, with Oracle 8i as the database.

Now when I try to run any job, nothing happens. The jobs just sit there without doing anything. I do not know whether the earlier problem is connected to this.

I would appreciate any help.

Thanks.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

It may help to delete the hashed file completely and then to re-create it.
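[Editor's note: a minimal sketch of the delete-and-recreate step, run from the Administrator client's Command window. MyHash is a placeholder name; DELETE.FILE removes both the data and dictionary portions, and CREATE.FILE with the DYNAMIC keyword recreates it as a type-30 dynamic file.]

```
DELETE.FILE MyHash
CREATE.FILE MyHash DYNAMIC
```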

The aborted process may still be holding locks.

Possibly the quickest "fix" is to recycle the DataStage services (shut down and re-start). Obviously no jobs can be running when you do this, but if you can't start any, that's not very likely!
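[Editor's note: on Unix the recycle is typically done as root from the engine directory. A sketch, assuming a default install where DSHOME points at the DataStage engine home; the path may differ on your system:]

```
# run as root; assumes DSHOME points at the DataStage engine directory
cd $DSHOME
bin/uv -admin -stop     # stop the DataStage services
bin/uv -admin -start    # start them again
```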

Did you also get "disk full"? In that case there may be other corrupted hashed files, including some in the Repository. If so, there may be even more recovery work required. :cry:
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
SonShe
Premium Member
Posts: 65
Joined: Mon Aug 09, 2004 1:48 pm

Post by SonShe »

ray.wurlod wrote:It may help to delete the hashed file completely and then to re-create it.

The aborted process may still be holding locks.

Possibly the quickest "fix" is to recycle the DataStage services (shut down and re-start). Obviously no jobs can be running when you do this, but if you can't start any, that's not very likely!

Did you also get "disk full"? In that case there may be other corrupted hashed files, including some in the Repository. If so, there may be even more recovery work required. :cry:
Thanks, Ray, for the reply. Our disk is only 75% full. I will try deleting and re-creating the hashed file until we can have the DataStage server recycled. Hopefully no other hashed files are corrupted. How would I know whether any other hashed file has been corrupted?

Thanks.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

They generate error messages!

If no-one else is using the project, you can use uvbackup. Log in as superuser.

Code: Select all

cd projectname
find . -print | uvbackup -f - -V -s /tmp/summary.txt > /dev/null
Look in /tmp/summary.txt to get a count of corrupted files. At the end you should see something like:

Code: Select all

Total files: 2466  Total bytes : 21654129  Elapsed Time: 00:01:37
2466 operating system files processed, 0 broken, totalling 21654129 data bytes.
0 DataStage files processed, 0 corrupted.
0 DataStage records processed, 0 corrupted, totalling 0 data bytes.
0 extended keys processed, 0 not supported at specified revision level.

EndOfUvbackup
"0 broken" is what you want to see. :D
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.