Sequential File Export failed with Output file full

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

mandyli
Premium Member
Posts: 898
Joined: Wed May 26, 2004 10:45 pm
Location: Chicago

Sequential File Export failed with Output file full

Post by mandyli »

Hi

We are in the process of converting jobs from DataStage 8.1 to DataStage 8.7.

I am getting the following errors when writing to a Sequential File stage.

F_1,0: Export failed.
F_1,0: Output file full
F_1,0: Input 0 consumed 2995232 records.
F_1,0: Fatal Error: No file descriptor was supplied.
Trans_List,0: Fatal Error: Unable to allocate communication resources

The total number of rows is 3301996, but the job aborts with the above errors after "Input 0 consumed 2995088 records".

But the same job ran fine in the DEV and IT DataStage environments. All environments are the same: DS 8.7 with FP1 and the same AIX OS v6.1.

Is there anywhere I can change the file size limit, at the stage or DataStage level?

Appreciate any help.
Thanks
Man
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

There is no DataStage limit to sequential file size; any limitation would be imposed by the OS and would cause a call to write() to fail.

What kind of a stage is "F_1" - is it a parallel job sequential file stage?
How is the path to the file specified and what are the runtime values?
How long does it take to abort?
Is there a disk file afterward and, if so, how big is that file?
Could the disk be full where the file is?
Is the file system on that drive limited to 2Gb or do you have a system-wide 2Gb limit in place?
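
For reference, a rough AIX sketch of gathering those answers from the command line; the path below is only a placeholder for whatever the stage resolves to at runtime:

TARGET=/your/target/dir     # placeholder: the directory the Sequential File stage writes to
ls -l $TARGET               # is there a partial output file, and how big did it get?
df -g $TARGET               # free space on the filesystem holding that directory (AIX df)
ulimit -f                   # per-process file size limit (512-byte blocks in AIX ksh)
lsfs -q /your/mountpoint    # placeholder mount point; on older JFS filesystems, "bf: false" means a 2Gb per-file cap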
mandyli
Premium Member
Posts: 898
Joined: Wed May 26, 2004 10:45 pm
Location: Chicago

Post by mandyli »

F_1 is a Sequential File stage.

The job reads data from a File Set -> Transformer -> F_1 Sequential File stage.

I have checked with ulimit -a.

Executed command: ulimit -a
*** Output from command was: ***
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 4194304
memory(kbytes) unlimited
coredump(blocks) unlimited
nofiles(descriptors) 8192
threads(per process) unlimited
processes(per user) unlimited
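
Since file(blocks) is unlimited there, the per-process soft limit is not the ceiling. As a quick sketch, the hard limits and the per-user setting on AIX could also be ruled out; the userid below is only a placeholder for whatever user the engine runs under:

ulimit -Ha              # hard limits for the current user
lsuser -a fsize dsadm   # fsize from /etc/security/limits, in 512-byte blocks; "dsadm" is a placeholder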
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

Do you see any files larger than 2Gb in your target directory? Or just attempt to manually create a file larger than 2Gb there.
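
One way to run that manual test, as a sketch (AIX, placeholder path; it writes about 3Gb of zeros, so make sure there is room and remove the file afterwards):

cd /your/target/dir                                  # placeholder: the stage's output directory
dd if=/dev/zero of=bigfile.test bs=1024k count=3000  # ~3Gb; fails early if a 2Gb cap is in place
ls -l bigfile.test                                   # did it get past 2147483647 bytes?
rm bigfile.test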
mandyli
Premium Member
Posts: 898
Joined: Wed May 26, 2004 10:45 pm
Location: Chicago

Post by mandyli »

I have not tested that yet.

Maybe I will test with a sample job that writes a 2GB file to the target directory.

But I would like to know where, at the OS level, this limit would be set.

Thanks
Man
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

The limit might be set in the filesystem of your target drive, for example.

Just copy a file into your directory, call it "x", then "cat x >> y" and "cat y >> x" repeatedly until you go over the 2Gb limit or get an error message. I know there are many other ways of doing this, but I'm tired right now and can't recall the commands.
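
A rough ksh sketch of that doubling trick, with placeholder paths; it loops until the first write error (2Gb cap or disk full), so clean up the files afterwards:

cd /your/target/dir    # placeholder
cp /etc/hosts x        # any small seed file will do
while cat x >> y && cat y >> x
do
    ls -l x y          # watch the two files grow past the 2Gb mark, or stop on the error
done
rm x y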
mandyli
Premium Member
Posts: 898
Joined: Wed May 26, 2004 10:45 pm
Location: Chicago

Post by mandyli »

Thanks a lot.

I have fixed this issue. I found the problem was with the filesystem and its free GB blocks.

df -g /opt/IBM/IIS/Staging

Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/stg 24.00 3.12 88% 5329 1% /opt/IBM/IIS/Staging

I asked the UNIX admin to increase the GB blocks from 24.00 to 400.00 GB, and then the job ran fine.

So the issue was with space on /opt/IBM/IIS/Staging.
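
For what it's worth, a small before-job sketch that would catch this earlier; the path is from the thread, but the 50 GB threshold and the script itself are only examples:

#!/bin/ksh
# Abort before the run if the staging filesystem is low on space.
STAGING=/opt/IBM/IIS/Staging
NEED_GB=50

# AIX "df -g": on the data line, column 3 is free space in GB
FREE_GB=$(df -g $STAGING | awk 'NR==2 {print $3}')

if awk -v f="$FREE_GB" -v n="$NEED_GB" 'BEGIN { exit (f + 0 >= n) ? 0 : 1 }'
then
    echo "OK: ${FREE_GB} GB free on ${STAGING}"
else
    echo "Only ${FREE_GB} GB free on ${STAGING}; aborting" >&2
    exit 1
fi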


Thanks
Man