
Random "ds_seqput error File Too Large" errors

Posted: Wed Dec 17, 2003 9:36 am
by AGStafford
I was wondering if anyone running DataStage 6 Server on HP-UX 11 has ever encountered a problem where the message is:

Error:ds_seqput: error in 'write()' - File too large


This occurs as the file crosses the 2 GB threshold. The job, when it runs successfully, creates a file about 4 GB in size.

This is not a file system limitation, as files much larger than 2 GB can be created (currently the high-water mark is 11 GB).
This is not a space issue either: when the job fails there is tons of DASD left.
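
For what it's worth, here is a rough sketch (Python, not the actual DataStage job; the path and sizes are made up) of the kind of append loop that trips this boundary. With large-file support in place it runs to the 4 GB target; without it, write() fails with EFBIG right at the 2 GB mark, which is exactly the "File too large" text in the log:

    import errno

    # Hypothetical reproduction (made-up path and sizes, not the real job):
    # append 1 MB chunks until we reach ~4 GB or the OS refuses the write.
    CHUNK = b"x" * (1024 * 1024)      # 1 MB of dummy data per write()
    TARGET = 4 * 1024 ** 3            # ~4 GB, the size of a successful run

    written = 0
    # buffering=0 so every f.write() is a real write() system call
    with open("/tmp/bigfile.dat", "wb", buffering=0) as f:
        try:
            while written < TARGET:
                written += f.write(CHUNK)
            print(f"wrote {written} bytes without hitting any limit")
        except OSError as e:
            if e.errno == errno.EFBIG:
                # Same "File too large" error the DataStage log shows; without
                # large-file support it appears right at 2**31 - 1 bytes.
                print(f"EFBIG after {written} bytes (~{written / 2**30:.2f} GB)")
            else:
                raise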

This problem is driving us nuts since we cannot identify a cause.

Any ideas would be appreciated.

Andrew

Posted: Wed Dec 17, 2003 10:00 am
by kcbland
Server jobs have a 32-bit address limitation, so you cannot write files larger than about 2.2 gigabytes.

Second, you have the slowest design possible, because you obviously have a single job writing this file, which means a single CPU at work. You're on a Unix box, so I hope you have multiple CPUs. You should partition your source data so that N instantiated jobs can work on different subsets of it. You will cut each target file to 1/N of the size, and probably squeeze under the 2.2 gigabyte limitation. You will also probably reduce your overall runtime by a factor of N. The resulting files will be easier to manage as well, and if you need to combine them, a simple Unix cat statement does the trick.
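
To make the idea concrete, here is a rough sketch (Python; the file names, partition count, and round-robin split are illustrative assumptions, since in practice the split would be N instances of your DataStage job, each reading its own subset):

    # Illustration only: deal the source rows into N partition files, process
    # each partition with its own job instance, then glue the outputs together.
    N = 4  # number of instantiated jobs / CPUs you want to keep busy

    def split_source(source_path):
        """Round-robin the source rows into N partition files."""
        part_paths = ["source.part%d" % i for i in range(N)]
        parts = [open(p, "w") for p in part_paths]
        with open(source_path) as src:
            for row_no, line in enumerate(src):
                parts[row_no % N].write(line)
        for p in parts:
            p.close()
        return part_paths

    def combine(target_path, partial_paths):
        """The 'simple unix cat' step: concatenate the N output files."""
        with open(target_path, "w") as out:
            for p in partial_paths:
                with open(p) as part:
                    out.writelines(part)

    # Each partition's output is roughly 1/N of the full file, so it stays
    # well under the 2 GB mark, and a failed partition can be re-run alone.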

Posted: Wed Dec 17, 2003 10:24 am
by AGStafford
As I indicated in the original post, the same job will sometimes run successfully. So if it is a 32-bit addressing problem, then it is an intermittent one. We also have files (created by DataStage) of 11 GB.

We are currently trying to get a 2-CPU upgrade (we currently have 2), and if that doesn't help, a significantly larger Unix box may address our performance issues.

If there were any consistency in when it fails, that would be great; however, there isn't. Sometimes it is a problem, sometimes not.

However, I will point out to the developer that they should perhaps try the split method.

Andrew

Posted: Wed Dec 17, 2003 10:38 am
by kcbland
Wow, I missed that one. :oops: They used to have the limitation, I guess I need to get on the later versions. Sorry for the misinformation. :oops: :oops: :oops:

However, you need to think about how long a single job spends spewing a file. If you're talking about a 4-hour runtime to produce the single file, multiple instantiations allow you at least to break the work down into smaller units, which means that if 1 of 5 breaks, that's the only piece that needs to re-run. In your case, there's probably no resurrection capability: you simply start it over, re-incurring the total runtime. This can be frustrating, especially when it breaks at the end.

In addition, the instantiation approach positions you for more CPUs. With a 2-CPU box, you could potentially halve your runtime using 2 instances. Your job design, which I don't know, will of course influence this. If you are querying a table and spewing the results, you could probably run a heck of a lot more instances if the database is remote, because in that case the resources on the DS server are probably abundant.

Re: Random "ds_seqput error File Too Large" errors

Posted: Wed Dec 17, 2003 11:32 am
by neena
Hi Stafford,
This error message comes from the file system: you are trying to create a file larger than 2 GB on it. Send a request to the Unix admin to change the system from 32-bit to 64-bit.
Hope this works
Tnks,
Neena