Processing Error found :Not enough space in /node1 directory

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Ananda
Participant
Posts: 29
Joined: Mon Sep 20, 2004 12:05 am


Post by Ananda »

hi,

I am running a parallel job in which data is extracted from a Teradata table and moved through a Copy stage into a lookup file set and dataset files. This job had been running successfully for many cycles.


It is now aborting with the following errors in the Director log:
------------------------------------------------------------------------------
lu_TT_TD,0: Could not map table file "/etld/datastage/data/datasets/node01/lookuptable.20060222.zasjbyb (size 961826400 bytes)": Not enough space

lu_TT_TD,0: Error finalizing / saving table /etld/datastage/data/crd_dev/DataStage/DDW_TEST/CRD/Xref/lu_T_REF_TT_Standard_td.fs

lu_TT_TD,0: Operator's runLocally() failed

-----------------------------------------------------------------------------

The configuration file used is bidev1x4.apt running on 1 node.

Assuming that the directory /etld/datastage/data/datasets/node01 was out of space, I deleted all the large files created in this path, including the file mentioned in the logs, i.e. lookuptable.20060222.zasjbyb.

But after rerunning the job, it still aborts with the same errors.
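Since deleting files from the dataset directory did not clear the error, it may help to check whether the shortfall is really disk space at all: the failing call in the log is a memory map of the lookup table ("Could not map table file"), and a per-process memory or swap limit can raise the same "Not enough space" message. A minimal sketch of the disk-side check, using /tmp as a stand-in for the actual dataset directory (paths will differ per site):

```shell
#!/bin/sh
# Compare the table size reported in the job log against free space
# in the dataset directory. TABLE_BYTES is the size from the log;
# DIR is a stand-in for /etld/datastage/data/datasets/node01.
TABLE_BYTES=961826400
DIR=/tmp
NEED_KB=$(( (TABLE_BYTES + 1023) / 1024 ))        # round up to whole KB
AVAIL_KB=$(df -kP "$DIR" | awk 'NR==2 {print $4}')
echo "need ${NEED_KB} KB, have ${AVAIL_KB} KB in ${DIR}"
if [ "$AVAIL_KB" -lt "$NEED_KB" ]; then
    echo "disk really is short: free space or move the resource disk"
else
    echo "disk is fine: check swap and per-process limits (ulimit -a) instead"
fi
```

If the disk check passes but the job still fails, the "Not enough space" is likely coming from the memory map running out of virtual memory, which is governed by swap and kernel/process limits rather than by the filesystem.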

Please let me know if there are any pointers as to what is causing this issue.

Thanks
Anand
rasi
Participant
Posts: 464
Joined: Fri Oct 25, 2002 1:33 am
Location: Australia, Sydney

Post by rasi »

What is the volume of data you are getting out of Teradata? How much space is available on your server?
Regards
Siva

Listening to the Learned

"The most precious wealth is the wealth acquired by the ear. Indeed, of all wealth, that wealth is the crown." - Thirukural, by Thiruvalluvar
Nageshsunkoji
Participant
Posts: 222
Joined: Tue Aug 30, 2005 2:07 am
Location: pune

Post by Nageshsunkoji »

Hi Ananda,

Which Force option have you selected in the Copy stage after the Teradata stage?

If Force=False, change it to Force=True; the Copy stage is then kept in the flow rather than optimised out. I think that will solve your problem.
If you are already doing that, then look at the Lookup: the lookup reference file may be unable to hold that much data. Check the volume of data on the reference link; if it is huge, use a Join stage instead.

Regards
Nagesh.
NageshSunkoji

If you know anything SHARE it.............
If you Don't know anything LEARN it...............
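Nagesh's point about switching from a Lookup to a Join when the reference data is large has a classic Unix analogue: a Lookup holds the whole reference table in memory, while a Join streams two sorted inputs and needs almost none. A tiny illustration with the standard sort/join utilities (the sample files here are made up for the example, not the actual job data):

```shell
#!/bin/sh
# Two small comma-separated files standing in for the stream link and
# the (potentially huge) reference link.
printf '1,apple\n2,banana\n3,cherry\n' > /tmp/stream.csv
printf '2,yellow\n3,red\n'             > /tmp/ref.csv
# Sort both inputs on the key, then merge-join them: neither input is
# ever loaded into memory in full.
sort -t, -k1,1 /tmp/stream.csv > /tmp/stream.sorted
sort -t, -k1,1 /tmp/ref.csv    > /tmp/ref.sorted
join -t, -1 1 -2 1 /tmp/stream.sorted /tmp/ref.sorted
# prints:
#   2,banana,yellow
#   3,cherry,red
```

This is why a Join stage can scale where a Lookup aborts: the trade-off is that both inputs must be sorted (and, in a parallel job, partitioned) on the join key.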
Ananda
Participant
Posts: 29
Joined: Mon Sep 20, 2004 12:05 am


Post by Ananda »

rasi wrote: What is the volume of data you are getting out of Teradata? How much space is available on your server?

Hi Rasi,

The server space does not seem to be an issue.

bidev1:/etld/datastage/data/datasets/node01 > bdf .
Filesystem                   kbytes      used     avail %used Mounted on
/dev/vx/dsk/apps/appsdsadm
                          330301440 263955329  62234608   81% /apps/dsadm

It's only 81% used, with 62,234,608 KB (roughly 59 GB) still available.

The volume of data coming out of Teradata is 3,974,480 records.

In my process, data is extracted from the tables and the latest source files, the two are compared, and the target table is then updated with the new data.

If I keep the target table empty, I am able to load the data from the source files into it. But with the target table initially loaded, the job aborts with the errors mentioned below:


------------------------------------------------------------------------------
lu_TT_TD,0: Could not map table file "/etld/datastage/data/datasets/node01/lookuptable.20060222.zasjbyb (size 961826400 bytes)": Not enough space

lu_TT_TD,0: Error finalizing / saving table /etld/datastage/data/crd_dev/DataStage/DDW_TEST/CRD/Xref/lu_T_REF_TT_Standard_td.fs

lu_TT_TD,0: Operator's runLocally() failed

-----------------------------------------------------------------------------

Let me know if there are any pointers.
Thanks
anand
Ananda
Participant
Posts: 29
Joined: Mon Sep 20, 2004 12:05 am

Post by Ananda »

Nageshsunkoji wrote:Hi Ananda,

Which Force option have you selected in the Copy stage after the Teradata stage?

If Force=False, change it to Force=True; the Copy stage is then kept in the flow rather than optimised out. I think that will solve your problem.
If you are already doing that, then look at the Lookup: the lookup reference file may be unable to hold that much data. Check the volume of data on the reference link; if it is huge, use a Join stage instead.

Regards
Nagesh.

Hi Nagesh,

How can a Join stage replace a Copy stage? I tried Force=False in the Copy stage, but it didn't work with the amount of data being processed.
I reduced the data volume in the table from which data is extracted. After bringing the volume down by half, the process ran fine with no other changes.
If the earlier run failed with 'Not enough space in /node1 directory', where exactly should I increase the space?

Note: the server filesystem is currently 81% used.
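A back-of-the-envelope check with the figures quoted earlier in the thread (a 961,826,400-byte lookup table built from 3,974,480 rows) may explain why halving the volume helped: the table would shrink to roughly 460 MB, which suggests the failing runs were hitting a per-process memory or swap ceiling somewhere below ~920 MB rather than running out of disk.

```shell
#!/bin/sh
# Figures taken from the job log and the row count quoted above.
ROWS=3974480
BYTES=961826400
echo "$(( BYTES / ROWS )) bytes per row"                  # prints: 242 bytes per row
echo "$(( BYTES / 2 / 1024 / 1024 )) MB at half the rows" # prints: 458 MB at half the rows
```

If that reading is right, the space to increase is swap (or the per-process data/virtual-memory limits), not the dataset filesystem.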

Please let me know in case of any suggestions.
Thanks and regards
Anand