Hi
we are in the process of converting jobs from DataStage 8.1 to DataStage 8.7.
I am getting the following errors when writing into a Sequential File stage:
F_1,0: Export failed.
F_1,0: Output file full
F_1,0: Input 0 consumed 2995232 records.
F_1,0: Fatal Error: No file descriptor was supplied.
Trans_List,0: Fatal Error: Unable to allocate communication resources
The total number of rows is 3301996, but the job aborts with the above errors after "Input 0 consumed 2995088 records".
The same job ran fine in the DEV and IT DataStage environments. All the environments are the same: DataStage 8.7 with FP1, on the same AIX OS V 6.1.
Is there anywhere I can change the file limit, at the stage or DataStage level?
Appreciate the help.
Thanks
Man
Sequential File Export failed with Output file full
There is no DataStage limit on sequential file size; any limitation would be imposed by the OS and would cause a call to write() to fail.
What kind of a stage is "F_1" - is it a parallel job sequential file stage?
How is the path to the file specified and what are the runtime values?
How long does it take to abort?
Is there a disk file afterward and, if so, how big is that file?
Could the disk be full where the file is?
Is the file system on that drive limited to 2Gb or do you have a system-wide 2Gb limit in place?
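Most of those questions can be answered quickly from the shell before touching the job. A minimal sketch, assuming the stage writes somewhere under /tmp (substitute the actual target path from your stage's runtime values):

```shell
#!/bin/sh
# TARGET_DIR is a hypothetical stand-in; substitute the directory the
# Sequential File stage actually writes to.
TARGET_DIR=/tmp

# Free space (in KB) on the filesystem holding the target file.
df -k "$TARGET_DIR"

# Any partially written output file left behind after the abort, and its size.
ls -l "$TARGET_DIR"

# Per-process maximum file size; "unlimited" means no 2GB ulimit cap.
filelimit=$(ulimit -f)
echo "ulimit -f: $filelimit"
```

If df shows the filesystem nearly full, or the leftover file stops at a suspiciously round size near 2 GB, that points at the disk or a file-size limit rather than DataStage itself.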
F_1 is a Sequential_File stage.
The job reads data from a FileSet -> Transformer -> F_1 Sequential_File stage.
I have checked with ulimit -a.
Executed command: ulimit -a
*** Output from command was: ***
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 4194304
memory(kbytes) unlimited
coredump(blocks) unlimited
nofiles(descriptors) 8192
threads(per process) unlimited
processes(per user) unlimited
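One caveat with that output: ulimit values are per-shell, so the limits shown in your login session may not be the ones the engine actually runs under. A small sketch of the check that matters here ("dsadm" is an assumed engine username; substitute yours):

```shell
#!/bin/sh
# "file(blocks)" in ulimit -a corresponds to ulimit -f: the maximum file
# size this process may write, in 512-byte blocks ("unlimited" = no cap).
fsize=$(ulimit -f)
echo "max writable file size for this shell: $fsize"

# If the DataStage engine runs as a different user, check under that
# account instead, e.g.:
#   su - dsadm -c 'ulimit -f'
```

Since your output shows file(blocks) unlimited, the per-process limit is ruled out and the cap must come from the filesystem or from disk space.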
The limit might be set in the filesystem of your target drive.
For example, copy a file into your directory, call it "x", then "cat x >> y" then "cat y >> x" until you go over the 2Gb limit or get an error message. I know there are many other ways of doing this, but I'm tired right now and can't recall the commands.
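A faster variant of the same check uses dd to seek a sparse file past the 2 GB mark instead of repeatedly doubling with cat. A sketch, with /tmp standing in for the target filesystem:

```shell
#!/bin/sh
# Write a single byte at the 2 GB offset; this fails with "File too large"
# if the filesystem or a process limit caps files at 2 GB.
F=/tmp/largefile_test.$$
if dd if=/dev/zero of="$F" bs=1 count=1 seek=2147483648 2>/dev/null; then
    RESULT="large files OK"
else
    RESULT="2GB limit hit"
fi
echo "$RESULT"
rm -f "$F"
```

Because the file is sparse, this costs almost no disk space or time, unlike the cat-doubling approach.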
Thanks a lot.
I have fixed this issue. The problem was the filesystem and its size in GB blocks.
df -g /opt/IBM/IIS/Staging
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/stg 24.00 3.12 88% 5329 1% /opt/IBM/IIS/Staging
I asked the UNIX admin to increase the filesystem from 24.00 GB blocks to 400.00 GB, and then the job ran fine.
So the issue was space on /opt/IBM/IIS/Staging.
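For future runs, a pre-flight check along these lines can catch a nearly full staging filesystem before the job starts. A sketch only; the directory and the minimum-free threshold are assumed values, not anything DataStage-specific:

```shell
#!/bin/sh
# Abort early when the staging filesystem is low on space.
STAGING=/tmp                # stand-in for /opt/IBM/IIS/Staging
MIN_FREE_KB=1024            # assumed threshold: require at least 1 MB free

# With df -P the available-KB figure is column 4; on plain AIX "df -k"
# output the "Free" column is column 3, so adjust the awk field there.
free_kb=$(df -Pk "$STAGING" | awk 'NR==2 {print $4}')

if [ "$free_kb" -lt "$MIN_FREE_KB" ]; then
    echo "only ${free_kb} KB free on ${STAGING}; aborting" >&2
    exit 1
fi
echo "space OK: ${free_kb} KB free on ${STAGING}"
```

Wiring something like this into a before-job routine or the sequence that launches the job would turn the "Output file full" abort into a clear up-front failure.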
Thanks
Man