
Job Failed due to strange reason

Posted: Fri Mar 19, 2004 3:04 pm
by yiminghu
Hi,

I have an extract job, which extracts data from the source system and dumps it into a hash file. The job is coded to pull data for a specific site, specified by a parameter. I also have a main job, in which the above extract job is called several times in a loop (depending on how many sites I want to process). It runs fine for several rounds, but then it fails, and the error message is very strange; please see the following.

Code: Select all

DataStage Job 476 Phantom 16675
Unable to open "/tmp/capture49324aa" file.
Attempting to Cleanup after ABORT raised in stage CrrExDyTableActual..SRC_CRS
DataStage Phantom Aborting with @ABORT.CODE = 3

I reset the job and re-ran it with the exact same parameter value that caused the failure, and it ran successfully.

What's the problem? Does this mean I cannot run this job several times in a row?

Thanks in advance.

Yiming

Re: Job Failed due to strange reason

Posted: Fri Mar 19, 2004 3:09 pm
by ogmios
You would have to supply some more information.

The reason for the abort is simple, "/tmp/capture49324aa" could not be opened.

But what I think is happening is a timing problem: one run of the job has not completely stopped yet when you already start the next run, which uses the same files, or something like that.
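One way to rule that out, if you drive the loop from a shell script rather than a controlling job, is to wait for each run to finish before launching the next. A rough sketch using the dsjob command line (the project name, site codes, and parameter name below are made up; substitute your own):

Code: Select all

#!/bin/sh
# Run the extract job once per site; -wait blocks until each
# run completes before the loop starts the next one.
for SITE in 101 102 103
do
    dsjob -run -wait -param SiteCode=$SITE MyProject CrrExDyTableActual
done
In a controlling job, the equivalent is calling DSWaitForJob on the handle returned by DSRunJob before attaching and running the job again.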

Ogmios

Posted: Sat Mar 20, 2004 3:58 pm
by ray.wurlod
It may simply mean that the /tmp file system became full.

Use the UVTEMP configuration parameter to configure scratch space somewhere with more space than /tmp, and where there is less likely to be contention with other UNIX processes.
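You can check whether /tmp is running out of room while the loop is running; this is a standard UNIX check, nothing DataStage-specific:

Code: Select all

# Show free space, in KB, for the file system holding /tmp
df -k /tmp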

How to configure UVTEMP

Posted: Mon Mar 22, 2004 7:55 am
by yiminghu
Hi Ray,

Could you provide more details about how to configure the UVTEMP parameter? Is there a configuration file under each project?

Thanks,

Carol
ray.wurlod wrote: It may simply mean that the /tmp file system became full.

Use the UVTEMP configuration parameter to configure scratch space somewhere with more space than /tmp, and where there is less likely to be contention with other UNIX processes.

Posted: Mon Mar 22, 2004 8:00 am
by kcbland
Search the forum for uvregen. After changes to uvconfig, you must execute uvregen. This is covered a lot on the forum.

Posted: Mon Mar 22, 2004 3:16 pm
by ray.wurlod
UVTEMP is a configuration parameter in the uvconfig file in your DataStage engine directory.
You need to have Administrator privileges for this task.
  • Change directory (cd) to the DataStage Engine directory.

  • Edit the uvconfig file with any text editor, to change the pathname specified by UVTEMP.

  • Execute the shell script dsenv if you have not done so already. This sets necessary environment variables.

  • Back up the .uvconfig file (this is the hidden file with a dot as the first character of its name), but don't use .uvconfig.bak for its name. For example: cp .uvconfig .uvconfig.yyyymmdd

  • Execute the command bin/uvregen while still attached to the DataStage Engine directory. This will issue a message about the size of the shared memory segment.

Changes will not take effect until DataStage is restarted.
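Putting that together, a rough sketch of the whole sequence (the engine path, scratch directory, and backup suffix below are only examples; substitute your own):

Code: Select all

cd /u1/dsadm/Ascential/DataStage/DSEngine   # your DataStage Engine directory
. ./dsenv                                   # set the engine environment variables
vi uvconfig                                 # change the line:  UVTEMP /u1/scratch
cp .uvconfig .uvconfig.20040322             # back up (avoid the name .uvconfig.bak)
bin/uvregen                                 # regenerate; reports shared memory segment size
bin/uv -admin -stop                         # restart DataStage so the change takes effect
bin/uv -admin -start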

Posted: Mon Mar 22, 2004 3:39 pm
by chulett
ray.wurlod wrote: Execute the command bin/uv while still attached to the DataStage Engine directory. This will issue a message about the size of the shared memory segment.
Ray, don't you always have to regen after making changes to uvconfig, as Ken mentions? Shouldn't the command be:

Code: Select all

bin/uv -admin -regen
Or is that only for 'certain' changes? :?

Posted: Mon Mar 22, 2004 7:39 pm
by ray.wurlod
I seem to have wiped out the "regen" when I applied the bold! I've edited the reply to make it right. bin/uv doesn't issue a message about the shared memory segment after all. Apologies if that confused anyone, or if this confuses anyone now that the original error has been edited (if, indeed, it ever existed! - must've been a Type 1 slowly changing dimension!) :twisted: