
Can't run any job

Posted: Tue Feb 22, 2005 6:28 pm
by SonShe
Can anyone please help me here? A few hours ago a long-running job of mine aborted with a ds_uvput() - failed to write to hashed file error message. I searched the forum for the possible causes of the ds_uvput() message, and sure enough we had filled our hashed file with more than 2 GB of data. I then cleared the hashed file from DataStage Administrator using the CLEAR.FILE command.
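
For reference, this is roughly what I ran from the DataStage Administrator command window (MyHashedFile stands in for our real hashed file name):

Code:

CLEAR.FILE MyHashedFile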

We are on a Unix server running DataStage Server version 6, with Oracle 8i as the database.

Now when I try to run any job, nothing happens. The jobs just sit there without doing anything. I do not know whether the earlier problem is connected to this one.

I would appreciate any help.

Thanks.

Posted: Tue Feb 22, 2005 7:21 pm
by ray.wurlod
It may help to delete the hashed file completely and then to re-create it.
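
For example, from the Administrator command window (MyHashedFile is just a placeholder, and the DYNAMIC create type is an assumption; substitute your own file name and create parameters):

Code:

DELETE.FILE MyHashedFile
CREATE.FILE MyHashedFile DYNAMIC

If your engine release supports 64-bit hashed files, re-creating as 64-bit (or resizing with RESIZE MyHashedFile * * * 64BIT) avoids hitting the 2 GB limit again.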

The aborted process may still be holding locks.
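
You can check from the Administrator command window: LIST.READU EVERY shows the record, file and group locks, and UNLOCK can release those left by the dead process (the user number 1234 below is only a placeholder):

Code:

LIST.READU EVERY
UNLOCK USER 1234 ALL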

Possibly the quickest "fix" is to recycle the DataStage services (shut down and re-start). Obviously no jobs can be running when you do this, but if you can't start any, that's not very likely!
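
On Unix this is usually done as root from $DSHOME; something like the following (treat the paths as a sketch and check your own install):

Code:

cd $DSHOME
bin/uv -admin -stop
bin/uv -admin -start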

Did you also get "disk full"? In that case there may be other corrupted hashed files, including some in the Repository. If so, there may be even more recovery work required. :cry:

Posted: Tue Feb 22, 2005 10:47 pm
by SonShe
Thanks Ray for the reply. Our disk is only 75% full, so I don't think we hit "disk full". I will try deleting and re-creating the hashed file until we can have the DataStage server recycled. Hopefully we don't have any other hashed files corrupted. How would I know if any other hashed file has been corrupted?

Thanks.

Posted: Tue Feb 22, 2005 11:56 pm
by ray.wurlod
They generate error messages!

If no one else is using the project, you can use uvbackup. Log in as superuser.

Code:

cd projectname
find . -print | uvbackup -f - -V -s /tmp/summary.txt > /dev/null

Look in /tmp/summary.txt to get a count of corrupted files. At the end you should see something like:

Code:

Total files: 2466  Total bytes : 21654129  Elapsed Time: 00:01:37
2466 operating system files processed, 0 broken, totalling 21654129 data bytes.
0 DataStage files processed, 0 corrupted.
0 DataStage records processed, 0 corrupted, totalling 0 data bytes.
0 extended keys processed, 0 not supported at specified revision level.

EndOfUvbackup
"0 broken" is what you want to see. :D