failed to open RT_STATUS42file

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

Post Reply
pongal
Participant
Posts: 77
Joined: Thu Mar 04, 2004 4:46 am

failed to open RT_STATUS42file

Post by pongal »

Hi
I am not able to see the log for one of my parallel jobs. When I open the job in DataStage Director, it gives the error "Failed to open RT_STATUS42file". I then closed Designer and Director and reopened the same job in Designer; when I open Director now, it gives the error "Failed to open CONFIG32file".
What do these errors mean, and why do they appear every time?
Thanks
Sainath.Srinivasan
Participant
Posts: 3337
Joined: Mon Jan 17, 2005 4:49 am
Location: United Kingdom

Post by Sainath.Srinivasan »

That is because your DS indexes are corrupted. Run DS.REINDEX ALL against the project via DS Admin or uvsh.
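If it helps, the uvsh route looks roughly like the session below; the project path and the location of the dsenv environment file are assumptions and will vary by install:

Code: Select all

cd /path/to/project
. $DSHOME/dsenv
$DSHOME/bin/uvsh
>DS.REINDEX ALL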
pongal
Participant
Posts: 77
Joined: Thu Mar 04, 2004 4:46 am

Post by pongal »

Hi Sainath,
I executed DS.REINDEX ALL on the current project (edw) via DS Admin, but the problem is still not solved. I am getting the error "cannot open the execute file CONFIG52", and the log is not showing for a particular category of my existing jobs.
Is there anything else I need to take care of?
Thanks
elavenil
Premium Member
Posts: 467
Joined: Thu Jan 31, 2002 10:20 pm
Location: Singapore

Post by elavenil »

There is a possibility that those files were deleted manually, which could cause this issue. One ugly way to resolve it is to copy the RT_STATUS file of some other job and rename it with this job's number.

You may need to clear the status file once you get rid of the problem.

HTWH.

Regards
Saravanan
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX
Contact:

Post by kduke »

You probably ran out of disk space. RT_STATUS42 is associated with job number 42. You need to find out which job that is.

Code: Select all

SELECT * FROM DS_JOBS WHERE JOBNO = '42';
Run this SQL in the Administrator in the project that has the issue. It will tell you which job is bad. You can import that job from a backup, which will give it a new job number.

When a job gets created, it gets the next job number from the DS_JOBS table. Next it creates records in DS_JOBOBJECTS. Then it creates several associated hash files to store the logs, run times and several other critical pieces needed to make jobs work as designed by DataStage. All of these files have the job number as part of the name. If nnn represents the job number, here are a few of the files:

RT_LOGnnn - stores the log records of job runs
RT_STATUSnnn - stores instance ids and run times
RT_BPnnn - BASIC code generated by compiling jobs
RT_BPnnn.O - BASIC object code
RT_CONFIGnnn - stores the order stages are run
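As a sketch of the naming convention just listed (this helper function is my own, not part of DataStage):

```python
# Build the runtime hash-file names DataStage creates for one job,
# following the RT_XXXnnn convention described above.
def runtime_file_names(jobno: int) -> list[str]:
    n = jobno
    return [f"RT_LOG{n}",      # log records of job runs
            f"RT_STATUS{n}",   # instance ids and run times
            f"RT_BP{n}",       # generated BASIC source
            f"RT_BP{n}.O",     # compiled BASIC object code
            f"RT_CONFIG{n}"]   # order the stages are run

print(runtime_file_names(42))  # the job from this thread
```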

If any of these gets corrupted, the job will not run properly. Some of them can be fixed outside of DataStage if you know enough about Universe.

When a hash file gets created in Universe, it writes a VOC record. If you look at the VOC record for RT_STATUS42, it should exist:

ED VOC RT_STATUS42
1: F
2: RT_STATUS42
3: D_RT_STATUS

This is called an F pointer, or file pointer, because "F" is on line 1. The DATA portion is defined on line 2. These are DYNAMIC files, so at the UNIX or DOS level you should see a directory below the project named RT_STATUS42. If not, this is where you had a problem when you ran out of disk space. In this directory there should be 2 files, DATA.30 and OVER.30. The same is true for RT_CONFIG42. The VOC pointer needs to exist, as well as the files in the project directory.
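A minimal sketch of that check, assuming the layout just described (each dynamic hash file is a directory under the project holding DATA.30 and OVER.30); the function name and messages are my own, not a DataStage tool:

```python
# Report what is missing for one dynamic hash file (e.g. RT_STATUS42)
# below a project directory: the directory itself, or its DATA.30 /
# OVER.30 parts.
import os

def check_dynamic_file(project_dir: str, name: str) -> list[str]:
    """Return a list of problems found for one dynamic hash file."""
    path = os.path.join(project_dir, name)
    if not os.path.isdir(path):
        return [f"{name}: directory missing from project"]
    return [f"{name}: {part} missing"
            for part in ("DATA.30", "OVER.30")
            if not os.path.isfile(os.path.join(path, part))]
```

An empty list means the on-disk pieces are present (the VOC pointer still has to be checked separately, inside Universe).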

Line 3 of the F pointer defines the dictionary file for this hash file. This is a shared file. The normal dictionary would be D_RT_STATUS42, but all these files share the same dictionary file because they all have the same fields. The dictionary file describes the columns, which can be used to report on the data in this hash file or used in SQL statements.
Mamu Kim
pongal
Participant
Posts: 77
Joined: Thu Mar 04, 2004 4:46 am

Post by pongal »

Hi,
I ran DS.REINDEX ALL again in DS Admin. After that the jobs are working fine, but I still cannot see that job's log file in Director.
What needs to be done to get the job's log back?
When I open the same job in Designer, it says "Failed to open RT_LOG29".
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX
Contact:

Post by kduke »

You have a second job messed up: RT_LOG29. Look for job number 29. You can restore this job from a backup, or save it to a new name, delete the old job name, then save it back under the original name. Either way will give the job a new job number and create a new log file.

Unless you know how to repair RT_LOG29, this is your only solution.

Make sure you are not out of disk space when you try to fix these problems; otherwise you are just creating more problems.
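A quick generic sketch of that pre-check before attempting a repair; the 100 MB threshold is an arbitrary assumption, pick whatever margin your project needs:

```python
# Check free space on the filesystem that holds the project directory.
import shutil

def enough_disk_space(path: str, min_free_mb: int = 100) -> bool:
    """True if the filesystem holding `path` has at least min_free_mb free."""
    free_bytes = shutil.disk_usage(path).free
    return free_bytes >= min_free_mb * 1024 * 1024
```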

You may have other jobs with bad log files. There are no tools to check for these issues; you may have to try to view every job's log and see whether you get an error.
Mamu Kim
pongal
Participant
Posts: 77
Joined: Thu Mar 04, 2004 4:46 am

Post by pongal »

Hi kduke,

I got another error when I try to open the job in Designer:
Error calling subroutine: *DataStage*DSR.PLADMIN [Action = 4]; check DataStage is set up correctly in project edw
[Subroutine failed to complete successfully [30107]]
What does it mean?
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

It means you've hijacked this thread with an unrelated question.
Please post a new thread if, after searching for 30107 on the forum (it IS there), you can't find an answer.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Post Reply