Unable to open "/tmp/dstemp/xxxxx"

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

rmrama
Participant
Posts: 26
Joined: Wed Oct 15, 2003 1:39 am

Unable to open "/tmp/dstemp/xxxxx"

Post by rmrama »

Hello.

I get the following error messages quite often in Production:

DataStage Job 160 Phantom 16617
Unable to open "/tmp/dstemp/capture49282aa" file.
Attempting to Cleanup after ABORT raised in stage ALS31VDDSAMM05..LOAD_RJCT_LOG.Read_Hashed
DataStage Phantom Aborting with @ABORT.CODE = 3

The occurrences are random. We had a lot of these error messages when our temp space was around 5GB. The number decreased after we increased the temp space to 10GB.

Is there any way I can completely stop such aborts from happening? I don't see the point in increasing the temp space further because the % used never seems to rise above 4%!

Regards,
M. Rama
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

The only other message on the Forum like yours I could find was this one. It mentions a couple of things to check. A more generic search for UVTEMP might help as well.

If you are sure you don't have a space issue, then perhaps a permissions problem? :?
-craig

"You can never have too many knives" -- Logan Nine Fingers
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

rmrama,

A sporadic problem such as you describe, whose behaviour changed after you modified the available space, does seem to be related to disk space. Are you 100% certain that you are measuring your disk space on the correct mount point / partition? If you attach to the temp directory while the job is running, do you see large files being created and subsequently deleted? Or you could do a "COUNT <filename>" whilst the job is running.
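That watching could be sketched as a small shell sampler (an illustration only: the /tmp/dstemp path is the UVTEMP setting from this thread, `sample_tmp` is a made-up name, and the 5-second interval is arbitrary):

```shell
# One-shot sampler for the DataStage temp directory: free space on the
# filesystem that actually backs it, plus the current file count.
sample_tmp() {
    dir=${1:-/tmp/dstemp}
    df -k "$dir" | tail -1          # free blocks on the backing filesystem
    ls "$dir" 2>/dev/null | wc -l   # number of temp files at this instant
}

# While the job is active, sample every few seconds, e.g.:
#   while true; do date '+%H:%M:%S'; sample_tmp; sleep 5; done
```

If the free-block figure dips sharply just before the job aborts, that points squarely at transient temp files exhausting the space.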
rmrama
Participant
Posts: 26
Joined: Wed Oct 15, 2003 1:39 am

Post by rmrama »

Craig and Arnd, thanks for your inputs.

" then perhaps a permissions problem? "

Well, the mount point is owned by another user id, but the access rights assigned are: drwxrwsr-x. Besides, I can see other temp files created by the user id we use to run ETL jobs.
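One quick way to settle the permissions question is a write probe run as the ETL user id (a sketch; the probe filename is made up, and /tmp/dstemp is the UVTEMP path from this thread):

```shell
# Try to create and remove a scratch file in the DataStage temp directory
# as the user that runs the ETL jobs. If this fails while df shows free
# space, the problem is permissions (or a quota), not disk space.
probe="/tmp/dstemp/perm_test.$$"
if touch "$probe" 2>/dev/null; then
    rm -f "$probe"
    echo "write OK"
else
    echo "write FAILED"
fi
```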

"A more generic search for UVTEMP might help"

Ok, will do a search on this. Thanks.

"you 100% certain that you are measuring your disk space on the correct mount point / partition?"

My uvconfig file has the following value in it: UVTEMP /tmp/dstemp. The path is correct; is there any way to see whether this path is indeed in effect?

"If you attach to the temp directory while the job is running do you see large files being created and subsequently deleted? Or you could do a "COUNT <filename>" whilst the job is running"

I have not been staying back to monitor such changes, but it would be worth the while if there's a good reason for doing so. I'd like to know what the point of observing such changes is.

Thanks again.
- M. Rama
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

If you attach to /tmp/dstemp and do a "df -k ." it will show you not only the available space but also the mount point, which might be different from that of /tmp. I am still fairly certain you are running out of space...
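A minimal version of that check (a sketch; `mount_of` is a made-up helper name):

```shell
# Print the mount point that actually backs a directory: the last column
# of "df -k" output. /tmp/dstemp can sit on its own filesystem even
# though it lives under /tmp.
mount_of() {
    df -k "$1" | awk 'NR==2 { print $NF }'
}

if [ -d /tmp/dstemp ]; then
    mount_of /tmp/dstemp    # in this thread: /tmp/dstemp (/dev/dslv01)
fi
mount_of /tmp               # in this thread: /tmp (/dev/hd3)
```

If the two commands print different mount points, measuring free space on /tmp tells you nothing about /tmp/dstemp.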
rmrama
Participant
Posts: 26
Joined: Wed Oct 15, 2003 1:39 am

Post by rmrama »

Arnd, yes you are right. The mount points for /tmp and /tmp/dstemp are different: An excerpt of the df -k results:

Filesystem   1024-blocks      Free %Used  Iused %Iused  Mounted on
/dev/hd3         2097152    985496   54%   8072     4%  /tmp
/dev/dslv01     10485760  10156492    4%     26     1%  /tmp/dstemp


And the uvconfig file values are indeed in memory:

dssprod1:/home/dsadm/Ascential/DataStage/DSEngine#bin/uvregen -t
Current tunable parameter settings:
MFILES = 256
T30FILE = 2048
OPENCHK = 1
WIDE0 = 0x3dc00000
UVSPOOL = /tmp
UVTEMP = /tmp/dstemp
SCRMIN = 3
SCRMAX = 5
SCRSIZE = 512
QDEPTH = 16
HISTSTK = 99
QSRUNSZ = 2000
QSBRNCH = 4
QSDEPTH = 8
QSMXKEY = 32
TXMODE = 0
LOGBLSZ = 512
LOGBLNUM = 8
LOGSYCNT = 0
LOGSYINT = 0
TXMEM = 32
OPTMEM = 64
SELBUF = 4
Press any key to continue...


Is UVSPOOL a problem here, since it is pointing to /tmp?

- M. Rama
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

rmrama,

DS doesn't use the spooler, so this won't cause problems.

BTW, here's a computer trivia question - do you know what SPOOLer actually stands for? Common usage is to have it denote a printer or printer operations but it is actually an acronym...
rmrama
Participant
Posts: 26
Joined: Wed Oct 15, 2003 1:39 am

Post by rmrama »

Yeah, I found out what that parameter meant after posting. Thanks anyway.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Simultaneous Peripheral Operations OnLine.

Not that I actually knew that or anything. :wink:
-craig

"You can never have too many knives" -- Logan Nine Fingers
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

Damn you are good.

I usually use this piece of trivia at bars to enchant and pick up women; which is why I'm still single...

I didn't think that anyone would read this thread through to the bottom, either.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Google is your friend. :wink:

And some of us actually try to read everything, all the way to the bottom. There's plenty I don't respond to, especially when someone else has the problem well in hand. But I do try to read everything and find a wet grey corner to file it all away in.
-craig

"You can never have too many knives" -- Logan Nine Fingers
gpatton
Premium Member
Posts: 47
Joined: Mon Jan 05, 2004 8:21 am

Post by gpatton »

Does the user id you are running the jobs with have any file system limits? I have seen some installations where the default amount of disk space a user can use is limited to 1 GB. It may be worth checking.
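A sketch of that check from the ETL user's shell (note the file-size limit is reported in 512-byte blocks by most shells):

```shell
# Show every per-process resource limit for the current user, then the
# file-size limit on its own. A finite file-size value, or a filesystem
# quota where quotas are enabled, can abort large temp-file writes even
# when "df" shows plenty of free space.
ulimit -a
ulimit -f
```

If `ulimit -f` prints anything other than `unlimited`, compare it against the size the temp files reach just before the job aborts.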