We just migrated a batch of jobs to our new 11.3 environment. One sequence job has a Routine Activity stage that calls a .bat script through the built-in ExecDOS routine. The script invokes the MKS Toolkit zip command and moves some files around; eight files are zipped, totaling about 225 MB. The script runs fine from a command prompt, and the job runs fine when launched directly from the Director or Designer client. However, when the job is scheduled through the Director and runs automatically, the zip command fails with "zip error: Out of memory (allocating temp filename)" and the sequence aborts.
Is there some way to adjust the memory available to automated jobs?
Memory error when job scheduled
The difference between your run methods is most likely down to the user ID, permissions, and environment in effect when the job is launched by the scheduler, versus those of your interactive sessions, and not to any actual memory constraint on the zip command.
It might be file access permissions in the directories that are missing for the batch user.
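One way to confirm this is to have the script log who it is running as and what environment it sees, once from an interactive run and once from a scheduled run, and then compare the two captures. A minimal sketch (assuming MKS Toolkit's sh is available; the log path and variable names checked are illustrative, not from the original post):

```shell
#!/bin/sh
# Diagnostic sketch: capture the effective user and environment at run time,
# so an interactive run can be diffed against a scheduled run.
# The log location below is a hypothetical choice; pick any writable path.
LOG=/tmp/zipjob_diag.log

echo "=== run at $(date) ===" >> "$LOG"

# Who is this process actually running as?
id >> "$LOG" 2>&1

# Full environment, sorted so two captures diff cleanly.
env | sort >> "$LOG"

# Temp-directory variables are a common culprit for zip's temp-file errors:
# if they are unset or point somewhere the scheduler's account cannot write,
# zip can fail even though disk and memory are fine.
echo "TMPDIR=${TMPDIR:-unset} TMP=${TMP:-unset} TEMP=${TEMP:-unset}" >> "$LOG"
```

Diffing the interactive capture against the scheduled one usually shows immediately which user, PATH, or temp-directory setting differs between the two run methods.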