ds_seqput: error in 'write()' - Error 0

Posted: Fri Dec 23, 2005 10:15 am
by maheshsada
Hi,

I am getting the following error:

ds_seqput: error in 'write()' - Error 0

I have checked the forum, but the explanations given were for DataStage versions 6 and 7. Is there any restriction on .dat file generation?

In my job I am selecting data from a table and generating a .dat file.

I have even checked using "ulimit -a" in the before-job subroutine; the output is:

Testjob..BeforeJob (ExecSH): Executed command: "ulimit -a"
*** Output from command was: ***
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 8192
coredump(blocks) unlimited
nofiles(descriptors) 1024
memory(kbytes) unlimited

I get the same values at the OS level as well. Can you please tell me whether the restriction is due to DS or the OS file system?

regards
Magesh S

Posted: Fri Dec 23, 2005 10:20 am
by chulett
Version shouldn't matter; the error is the same for 5, 6, 7... It should mean you ran out of space where you were writing the file. Is that not the case?

Posted: Fri Dec 23, 2005 10:21 am
by ArndW
There can be a number of reasons for a sequential write to fail, but I don't think the error code of "0" is correct or is going to help.

Can you please try redirecting the output .dat file to /tmp. Does it work? If yes, then most likely you have a permissions or filespace issue in the original location.

Does this error occur on writing the first line, or at some point later on? If later on, how large is the file? It could be hitting a 2GB limit imposed by the file system.
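A quick way to check is to compare the file's byte count against the 32-bit ceiling of 2^31 - 1 bytes. This is only a sketch; /tmp/output.dat is a placeholder path, so substitute your real .dat target:

```shell
# Placeholder path - replace with the .dat file your job writes.
FILE=/tmp/output.dat
LIMIT=2147483647          # 2^31 - 1 bytes: the classic 32-bit file size ceiling

# Create a small dummy file so the check below has something to measure.
printf 'row1\nrow2\n' > "$FILE"

SIZE=$(wc -c < "$FILE")   # portable byte count (parsing 'ls -l' is fragile)
if [ "$SIZE" -gt "$LIMIT" ]; then
    echo "over 2GB limit"
else
    echo "within 2GB limit"
fi
```

If the file stops growing at exactly 2147483647 bytes, that points to a large-file restriction rather than free space.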

Posted: Fri Dec 23, 2005 12:47 pm
by maheshsada
Hi,

The error occurs when the file size reaches 2GB. I have checked the directory; it has enough free space (around 5GB), and the job has been running successfully for months.

I have also checked the file sizes generated in previous runs, which were well below 2GB.

So this may be a restriction imposed by the OS on file size.

Is my understanding right?

regards

magesh S

Posted: Fri Dec 23, 2005 2:10 pm
by ray.wurlod
Check with your UNIX administrator. Some UNIX operating systems must have large file support explicitly switched on. This may not have been done, which would explain the 2GB limit.
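One hedged way to probe this yourself (the exact behaviour is filesystem- and OS-dependent, and /tmp/largefile_test.dat is just an illustrative path) is to try creating a sparse file just past the 2GB boundary; without large file support the write fails with EFBIG:

```shell
# Seek one byte past the 2GB boundary and write a single byte there.
# On a filesystem without large file support (or with a restrictive
# 'ulimit -f'), dd will fail instead of producing a ~2GB sparse file.
TEST_FILE=/tmp/largefile_test.dat
if dd if=/dev/zero of="$TEST_FILE" bs=1 count=1 seek=2147483647 2>/dev/null; then
    echo "large files supported"
else
    echo "large files NOT supported"
fi
rm -f "$TEST_FILE"
```

A sparse file consumes almost no disk space, so this test is safe even on a nearly full filesystem.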

Posted: Thu Dec 29, 2005 9:37 am
by maheshsada
Hi,

Thank you all. The error was due to the file size limit in the OS. Once the Unix admin changed the setting, the job created a file that is more than 2GB.

regards

Magesh S