Error "Failed to open RT_CONFIG***" in the Director

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

s_rkhan
Participant
Posts: 20
Joined: Thu Mar 03, 2005 6:26 am

Error "Failed to open RT_CONFIG***" in the Director

Post by s_rkhan »

Hi All,

In some of our projects we get the following errors when we try to open the Director:

"Failed to open RT_CONFIG***"
"Failed to open RT_STATUS***"

Most of the jobs in these projects are also aborting.

If anyone has faced the same kind of problem, please suggest a resolution.

Thanks & Regards
Salman
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

Salman,

something bad has happened to your DataStage account(s). After taking a backup, you could go into the command line or the Administrator client and issue the re-index from the DS.TOOLS menu; this might possibly correct the problem.
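For anyone finding this thread later, the re-index lives in the DS.TOOLS menu of the engine shell. A rough sketch of reaching it from UNIX follows; the paths assume a standard $DSHOME setup, and MYPROJECT is a placeholder project name, so adjust both for your install:

```
cd $DSHOME          # DataStage engine directory
. ./dsenv           # source the engine environment
bin/uvsh            # start the engine shell
# At the engine prompt:
# >LOGTO MYPROJECT  (placeholder: your project name)
# >DS.TOOLS         (then choose the option to rebuild the repository indices)
```

Take the backup first, as Arnd says, before touching the repository files.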

Can you say what might have happened to trigger this state? A system crash, or a disk filling up while DataStage was in use?
s_rkhan
Participant
Posts: 20
Joined: Thu Mar 03, 2005 6:26 am

Failed to open RT_CONFIG

Post by s_rkhan »

Hi,

Thanks for the response, but the problem rectified itself.

I tried the fixtool option and cleared the status files and RT_LOG files, but I was only able to rectify some of the files.

Can you please shed some light on what is actually stored in the RT_STATUS and RT_CONFIG files?

Thanks & Regards
Salman
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

Salman,

I'm curious as to how the problem got solved in the end; I'm not quite sure what you mean by "rectified automatically".

The RT_STATUSnnn file contains information about the run state and history of job executions and the RT_CONFIGnnn file contains job design and configuration information.

Both of these files are visible per job in the project directory, and if someone were to delete them from UNIX it would generate the types of errors you described.
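A quick way to confirm this from UNIX (assuming you know the job number; 123 below is a placeholder) is to list the files in the project directory:

```
cd /path/to/MYPROJECT                        # the project directory on the server
ls -ld RT_CONFIG123 RT_STATUS123 RT_LOG123   # 123 is a placeholder job number
# "No such file or directory" means the files are missing rather than corrupt.
```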
s_rkhan
Participant
Posts: 20
Joined: Thu Mar 03, 2005 6:26 am

Post by s_rkhan »

Hi,

I found that some of these files were full, so I tried clearing them using CLEAR.FILE. The command worked for some files, but for others it threw an error.
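For reference, CLEAR.FILE is issued at the engine (TCL) prompt while attached to the project; the job number below is a placeholder, and since CLEAR.FILE removes all records from the file, take a backup first:

```
# From uvsh or the Administrator command window, attached to the project:
>CLEAR.FILE RT_LOG123   # empties the job's log hashed file (123 is a placeholder)
```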

This problem occurred because of a disk space issue. What I came to know from Ascential is that "sometimes, because of a disk space issue, some locks are left behind, and these locks are released automatically after some time". So this may be one of the reasons the problem got solved by itself.

There was also one file that was corrupted, so I used the fixtool command on it.

Thanks
Salman
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX
Contact:

Post by kduke »

You usually do not get corrupted files from anything other than a full filesystem.
"Failed to open RT_CONFIG***"
"Failed to open RT_STATUS***"
This can sometimes fix itself because some temp file gets removed. What usually happens is that the VOC entry gets created but the underlying UNIX or DOS file does not. If the hashed file never got created, it is difficult to fix without re-importing the job once the filesystem is no longer 100% full. You can also save the job with a new name, delete the old job, and save it back under the original name. This gives the job a new job number, so RT_CONFIG123 becomes RT_CONFIG321 or whatever; in most cases it will be RT_CONFIG124.
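Since a full filesystem is the usual trigger, a generic check like the following can rule it in or out before you start repairing files. This is not DataStage-specific; PROJECT_DIR is an assumed placeholder for the path holding your project, and the 95% threshold is arbitrary:

```shell
# Warn when the filesystem holding the project directory is close to full.
PROJECT_DIR="${1:-.}"   # placeholder: pass your project path as the first argument
USED=$(df -kP "$PROJECT_DIR" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
if [ "$USED" -ge 95 ]; then
    echo "WARNING: filesystem for $PROJECT_DIR is ${USED}% used"
else
    echo "OK: filesystem for $PROJECT_DIR is ${USED}% used"
fi
```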

Sometimes these files are simply corrupt. UVFIXFILE can fix most hashed files, and a recompile afterwards will usually finish the repair.
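The exact UVFIXFILE invocation varies by release, so treat the following as a sketch and check your engine documentation or support provider before running it on a live project. The 123 is a placeholder job number, and a backup of the file beforehand is essential:

```
cd /path/to/MYPROJECT                 # project directory holding the RT_ files
$DSHOME/bin/uvfixfile RT_CONFIG123    # sketch only: verify the syntax for your release
# Then recompile the affected job from the Designer or Director.
```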

DS.TOOLS can fix the indexes, but that is about all. If these hashed files could not be created because the filesystem was full, then the indexes were also corrupted and you do need DS.TOOLS. I wish people would stop treating DS.TOOLS as the fix-all for these problems. If you are corrupting indexes all the time and you are not running out of disk space, then you have some other serious problem. You should never have corrupt indexes unless you are running out of disk space, someone is issuing UNLOCK ALL commands, or processes are being killed with kill -9.

These questions come up far too often on this site. You should never see these error messages; if you do, something is wrong with your setup, and it should not be happening on a regular basis.
Mamu Kim