Folder Stage -Large file- 50 mb can't process
Moderators: chulett, rschirm, roy
Okay, just in case anyone else has run across this issue, I'm posting my problem on this forum.
Job information:
Folder stage to Transformer to Folder stage. I'm reading all files in one directory and writing them to another directory. Initially the job contained lookups and sequential error outputs, but I've trimmed it down to just Folder to Transformer to Folder to keep it simple.
Problem:
If any file in the source directory is over 49 MB, the job aborts with the error "Abnormal Termination of Stage". I've tried resetting the job, recompiling the job, and rewriting the job; same error. I can process many files as long as no single file exceeds the 49 MB limit.
Solution in the works:
Initially we checked the ulimit settings, but it made no difference whether or not they were all set to unlimited. I've opened an ecase with IBM on this issue. They are researching.
Have a nice day.
TAZ
jatayl wrote:
Actually, with my original design, I'm using the data in each file to lookup to a table and a hash file.

The suggestion ("Why use a job just to move the files from one location to another? Can your job not just do whatever filename processing you need and then use another mechanism (a script, perhaps) to perform the move?") was based on your earlier post. Now that you've clarified you are using the filename and not actually any 'data in each file' for that, the suggestion can be ignored.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Yes, I was mistaken. I was thinking of another job that used the data within the XML to conduct lookups.
Yes, I could just read the file names, parse the data I need from the filenames, and output the results to a file. Then use that file as a driver file to move the original source data to the destination directory. I believe that would work.
TAZ
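The driver-file approach above can be sketched as a small script. Everything here is illustrative: the directory names, the sample filename, and driver.txt are demo values, not taken from the actual job, which would produce the driver file itself.

```shell
# Minimal sketch of the driver-file move, with demo paths and data
# standing in for whatever the DataStage job actually writes.
SRC_DIR=demo_src
DEST_DIR=demo_dest
mkdir -p "$SRC_DIR" "$DEST_DIR"

# Pretend the job wrote a source file and a driver file listing it
# (one filename per line).
echo "payload" > "$SRC_DIR/file_20240101.xml"
echo "file_20240101.xml" > driver.txt

# Move each file named in the driver file to the destination directory.
while IFS= read -r fname; do
    mv "$SRC_DIR/$fname" "$DEST_DIR/$fname"
done < driver.txt
```

Since the move happens outside DataStage, the job itself never has to stream the large file contents, which sidesteps the Folder stage size limit entirely.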
Thanks - by 'reset' I assume you mean 'increase', yes? Any chance you could specify what values you increased them to and what value LDR_CNTRL needed to be set to?
Edited to add: seems like from a quick Google that LDR_CNTRL is an AIX-specific environment variable. If that's the case, then this solution is AIX-specific as well.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Yes, I meant increase. This is what was suggested:
"The default hardcoded segment value of 0x40000000 for DMEMOFF should be moved to 0x90000000.
The default hardcoded segment value of 0x50000000 for PMEMOFF should be moved to 0xA0000000.
The following line should also be added to the dsenv file in the $DSHOME directory:
LDR_CNTRL=MAXDATA=0x30000000;export LDR_CNTRL"
CAUTION!!!!!
This is an AIX 5.3-specific solution and should only be applied with a full understanding of what could happen to your server. If you make these changes without that knowledge, the server could become unusable.
Thanks,
Jason
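Pulling the pieces of the suggested fix together as a single annotated sketch (values are the ones IBM suggested in this thread, not verified as safe for any other server, and this applies to AIX 5.3 only, with the caution above in mind):

```shell
# AIX 5.3 ONLY -- see the caution above; wrong values here can leave
# the server unusable. Values are from IBM's suggestion in this thread.

# 1. In $DSHOME/uvconfig, raise the two segment offsets:
#      DMEMOFF  0x90000000   (default was 0x40000000)
#      PMEMOFF  0xA0000000   (default was 0x50000000)
#    uvconfig changes normally require regenerating the configuration
#    and restarting the DataStage server before they take effect.

# 2. Add this line to $DSHOME/dsenv so DataStage processes inherit it:
LDR_CNTRL=MAXDATA=0x30000000;export LDR_CNTRL
```

MAXDATA enlarges the process data segment available to the loader on AIX, which is consistent with the job failing only once a single file passed roughly 49 MB.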