No such file or directory error

Christina Lim
Participant
Posts: 74
Joined: Tue Sep 30, 2003 4:25 am
Location: Malaysia

No such file or directory error

Post by Christina Lim »

Hallo all,

The scheduled job run on the production machine aborted with the error below.
TWN_D_CHNL_TableLoad_Daily.#0.xfmAddDefaults.lnkXfm-Input.lnkXfm: ds_ipcopen() - Error in open(/tmp/MIFB_PROD.TWN_D_CHNL_TableLoad_Daily.#0.xfmAddDefaults.lnkXfm-Input) - No such file or directory

TWN_D_CHNL_TableLoad_Daily(xfmAddDefaults).#0.xfmAddDefaults: |Error 11 in GCI Link initialisation.|
Attempting to Cleanup after ABORT raised in stage TWN_D_CHNL_TableLoad_Daily(xfmAddDefaults).#0.xfmAddDefaults
Job TWN_D_CHNL_TableLoad_Daily(xfmAddDefaults).#0 aborted
Does anyone know why I am getting this error? This job had been running fine for months until recently, when we started getting this error.

Thanx
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

Hi,
Can you touch the file, and make sure the permissions have not changed on the file or on the directory that holds it?
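For example, something along these lines from the shell (just a sketch; the path comes from the error message above):

# check ownership/permissions on /tmp, then try to create and remove a dummy file
ls -ld /tmp
touch /tmp/ds_perm_test && echo "write OK" && rm /tmp/ds_perm_test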

regards
kumar
Christina Lim
Participant
Posts: 74
Joined: Tue Sep 30, 2003 4:25 am
Location: Malaysia

Post by Christina Lim »

Hallo Kumar,

Yes. I tried touching a dummy file and there was no permission denial.
I am able to create a file in that directory.

Appreciate any insight to my problem.

Thank you
rhys.jones@target.com
Participant
Posts: 24
Joined: Mon Mar 14, 2005 6:42 pm
Location: Minneapolis, Minnesota

capacity

Post by rhys.jones@target.com »

Is this file created by this job, or is it created upstream by a different job? Maybe the upstream job had problems, and that is why the file isn't there? Or perhaps your /tmp is full and/or doesn't have enough available space to write said file? If /tmp is mounted on a SAN disk, you could run "df -k" to find out how full that directory is.
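For example (just a sketch; on most UNIX flavours "df -i" also reports inode usage, which can run out even when free space looks fine):

# free space and inode usage on the filesystem that holds /tmp
df -k /tmp
df -i /tmp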
Christina Lim
Participant
Posts: 74
Joined: Tue Sep 30, 2003 4:25 am
Location: Malaysia

Post by Christina Lim »

Hi,

Sorry for the silence. We've been getting the same error lately.

As for the question above, there was no disk space shortage at the time of the abort.

This file "/tmp/MIFB_PROD.TWN_D_CHNL_TableLoad_Daily.#0.xfmAddDefaults.lnkXfm-Input" is a datastage created file. Therefore, not created by upstream jobs.

It's a simple job that reads from a dataset, uses a Transformer to set some columns to "Undefined", then uses a Join stage to join the results with data read from a table and writes the output to two Sequential File stages.

The Join stage has 26 join keys and outputs two sequential files: one with the rows that satisfy the join condition and the other with the rejected data.

Please advise..
Kirtikumar
Participant
Posts: 437
Joined: Fri Oct 15, 2004 6:13 am
Location: Pune, India

Post by Kirtikumar »

How many rows are processed by this job?

Try executing it with a smaller number of rows, say 100. If it works, then it may be that the job is not getting enough space for the temp file created for the Transformer stage.

But does it create temp files for Transformers? As per my understanding, only the Sort, Aggregator and Lookup stages use scratch space.
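If you want to confirm which directories the parallel engine is configured to use for disk and scratch space, you can peek at the configuration file the job runs with (a sketch; $APT_CONFIG_FILE points to a different path on every install):

# list the resource disk / scratchdisk entries in the parallel configuration file
grep -iE 'scratchdisk|resource disk' $APT_CONFIG_FILE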
Regards,
S. Kirtikumar.
Christina Lim
Participant
Posts: 74
Joined: Tue Sep 30, 2003 4:25 am
Location: Malaysia

Post by Christina Lim »

Hi,

I reran the job later and it completed successfully.
Below are the statistics: only 18 rows were processed by this Transformer.
TWN_D_CHNL_TableLoad_Daily(xfmAddDefaults).#0.xfmAddDefaults: DSD.StageRun Active stage finishing.
18 rows read from lnkXfm
18 rows written to lnkIncoming
0.010 CPU seconds used, 0.146 seconds elapsed.
I suppose the Enterprise Edition Transformer does use temp space.
Please clarify...

I suspect it's because of the Transformer. The Transformer basically sets some new columns to "~Undefined" and trims those same columns.
Is there any other way to achieve the same results without using a Transformer?

Appreciate any insight..
Kirtikumar
Participant
Posts: 437
Joined: Fri Oct 15, 2004 6:13 am
Location: Pune, India

Post by Kirtikumar »

I am not sure whether the Transformer uses temp space or not. I only made a guess from the error message, which may or may not be true.

If you have a Join immediately after the Transformer, then -
It may be that the Join stage is using temp space for its partitioned data: while doing the join, if the available memory is not sufficient to hold all the data, the Join stage creates a file to hold the overflow. The file name may contain the Transformer name because the link feeding the Join comes from the Transformer.

One more thought: create a new copy of the job and remove the Join for testing purposes. Run the complete job again and see whether it gives the same error. If there is no error, then the Join stage is creating the file and the Transformer is not the problem. By the way, why are you using a Transformer?

How to solve the problem - increase the temp space available in the /tmp directory?
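If /tmp is the suspect, one rough way to see what is filling it is something like the following (only a sketch - the MIFB_PROD.* pattern is taken from the error message, and check with your admin before deleting anything):

# 20 largest files under /tmp, plus any day-old DataStage temp files left behind
du -ak /tmp | sort -nr | head -20
find /tmp -name 'MIFB_PROD.*' -mtime +1 -ls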
Regards,
S. Kirtikumar.
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

What was the load on the server when you got this error? Try to monitor the CPU and memory usage while you run this job.
Also try changing the temp directory from /tmp to some other directory. /tmp may need to be allotted more space or moved to a separate mount; I/O congestion there might be the root cause.
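For example, something like this could be left running while the job executes (just a sketch, assuming vmstat and sar are available on your platform; the TMPDIR line is a hypothetical illustration - check your install's dsenv/uvconfig before changing anything):

# sample CPU and memory every 10 seconds while the job runs
vmstat 10 > ~/vmstat_during_job.log &
sar -u 10 360 > ~/sar_cpu_during_job.log &
# hypothetical: point engine temp files at a roomier filesystem (e.g. in dsenv)
# export TMPDIR=/bigdisk/dstmp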
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

One more thing to check - it may sound ridiculous, but I faced one such problem in an old version of the Server edition: the display length being greater than the data length in the metadata declaration.
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
Christina Lim
Participant
Posts: 74
Joined: Tue Sep 30, 2003 4:25 am
Location: Malaysia

Post by Christina Lim »

Hi,

Thanx for the response...

The job is scheduled daily, but the abort only happens occasionally, maybe twice a month. Therefore the Transformer shouldn't be causing the problem.

I found that one of the join inputs, which reads from a table, does not have a consistent data type. The actual table column is varchar(100), but the ETL metadata sets varchar(20) for the table read.

There is a warning in the log stating
oraD_CHNL_Ref: When checking operator: When binding output interface field "CHNL_SLSFRCLEADERGROUP" to field "CHNL_SLSFRCLEADERGROUP": Implicit conversion; from source type "ustring[max=100]" to result type "ustring[max=20]": Possible truncation of variable length string
The above warning appears before the Transformer's active stage is processed.

Could the Transformer abort be due to this?
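To see whether the varchar(20) definition would actually truncate anything, something like this could be run from the shell (a rough sketch only - the table name D_CHNL_REF is just a guess based on the stage name, and the login details are placeholders):

# longest value actually stored in the column flagged by the warning
sqlplus -s user/password@PRODDB <<'EOF'
SELECT MAX(LENGTH(CHNL_SLSFRCLEADERGROUP)) FROM D_CHNL_REF;
EOF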