The scheduled job run on the production machine aborted with the error below.
TWN_D_CHNL_TableLoad_Daily.#0.xfmAddDefaults.lnkXfm-Input.lnkXfm: ds_ipcopen() - Error in open(/tmp/MIFB_PROD.TWN_D_CHNL_TableLoad_Daily.#0.xfmAddDefaults.lnkXfm-Input) - No such file or directory
TWN_D_CHNL_TableLoad_Daily(xfmAddDefaults).#0.xfmAddDefaults: |Error 11 in GCI Link initialisation.|
Attempting to Cleanup after ABORT raised in stage TWN_D_CHNL_TableLoad_Daily(xfmAddDefaults).#0.xfmAddDefaults
Job TWN_D_CHNL_TableLoad_Daily(xfmAddDefaults).#0 aborted
Does anyone know why I am getting this error? This job had been running fine for months until recently, when we started getting this error.
Is this file created by this job, or is it created upstream by a different job? Maybe the upstream job had problems, which is why the file isn't there? Or perhaps your /tmp is full, or doesn't have enough available space to write the file? If /tmp is mounted on a SAN disk, you can run "df -k" to find out how full that filesystem is.
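The free-space check suggested above can be scripted. A minimal sketch, assuming the /tmp path from the error message; the warning threshold is an arbitrary example value, not something DataStage prescribes:

```shell
# Report free space in /tmp and flag it when it drops below a threshold.
# -P forces POSIX single-line output so the awk field positions are stable.
df -kP /tmp
avail_kb=$(df -kP /tmp | awk 'NR==2 {print $4}')
if [ "$avail_kb" -lt 100000 ]; then    # 100000 KB is an arbitrary example
    echo "WARNING: only ${avail_kb} KB free in /tmp"
fi
```

Running this just before the job's scheduled start would show whether the space problem is transient.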
Sorry for the silence. We've been getting the same error lately.
As for the question above, there was no disk-space shortage at the time of the abort.
The file "/tmp/MIFB_PROD.TWN_D_CHNL_TableLoad_Daily.#0.xfmAddDefaults.lnkXfm-Input" is created by DataStage itself, so it is not created by an upstream job.
It's a simple job that reads from a dataset, uses a transformer to set some columns to "Undefined", then joins the result (via a Join stage) with data read from a table, and writes the output to two Sequential File stages.
The Join stage has 26 join keys and outputs two sequential files: one with the rows that satisfy the join condition, the other with the rejected rows.
Try executing it with a smaller number of rows, say 100. If it works, it may be that the job is not getting enough space for the temp file created for the transformer stage.
But does it create temp files for transformers? As per my understanding, only the Sort, Aggregator, and Lookup stages use the scratch space.
I reran the job later and it completed successfully.
Below are the statistics: only 18 rows were processed by this transformer.
TWN_D_CHNL_TableLoad_Daily(xfmAddDefaults).#0.xfmAddDefaults: DSD.StageRun Active stage finishing.
18 rows read from lnkXfm
18 rows written to lnkIncoming
0.010 CPU seconds used, 0.146 seconds elapsed.
I suppose the Enterprise Edition transformer does use temp space. Please clarify...
I suspect it's because of the transformer. The transformer basically sets some new columns to "~Undefined" and trims those same columns.
Is there any other way to achieve the same result without using a transformer?
I am not sure whether the transformer uses temp space or not. I just made a guess from the error message, which may or may not be true.
If you have a join immediately after the transformer, then it may be that the Join stage is using the temp space for its partitioned data: while performing the join, if the available memory is not sufficient to hold all the data, the Join stage creates a file on disk as swap space. The file name may contain the transformer name because the link feeding the join comes from the transformer.
One more thought: create a copy of the job and remove the join for testing purposes. Run the complete job again and see whether it gives the same error. If there is no error, then the Join stage is creating the file and the transformer is not the problem. By the way, why are you using a transformer?
How to solve the problem: increase the temp space available in the /tmp directory?
What was the load on the server when you got this error? Try to monitor CPU and memory usage while you run this job.
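A lightweight way to capture the load while the job runs, sketched here for a Linux host (the sample count and interval are arbitrary assumptions; on other Unixes you would use `sar` or `vmstat` instead):

```shell
# Sample load average and free memory a few times while the job is running.
# /proc/loadavg and /proc/meminfo are Linux-specific.
for i in 1 2 3; do
    echo "$(date '+%H:%M:%S') load: $(cut -d' ' -f1-3 /proc/loadavg)"
    grep -E 'MemFree|SwapFree' /proc/meminfo
    sleep 1
done
```

Redirecting this into a timestamped log file alongside the scheduled run would let you correlate the occasional aborts with load spikes.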
Try changing the temp directory from /tmp to some other directory. Ideally /tmp would be allotted more space, or moved onto a separate mount; the current setup might be the root cause of the I/O congestion.
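Redirecting the temp directory can be sketched as below. DataStage honours the TMPDIR environment variable for many of its temporary files (check your version's documentation); the path used here is a placeholder assumption:

```shell
# Hypothetical sketch: point temporary files at a roomier filesystem by
# exporting TMPDIR before the job starts. The directory is a placeholder.
export TMPDIR="${HOME:-/tmp}/ds_temp"   # pick a mount with plenty of space
mkdir -p "$TMPDIR"
df -kP "$TMPDIR"                        # confirm the new location has headroom
```

In practice you would set TMPDIR in the project's environment variables or in dsenv rather than an ad-hoc shell export, so that scheduled runs pick it up too.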
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
One more option to check. It may sound ridiculous, but I faced such a problem in an old version of Server Edition: the display length was greater than the data length in the metadata declaration.
The job is scheduled daily, but the abort only happens occasionally, maybe twice a month. Therefore, the transformer shouldn't be causing the problem.
I found that one of the join inputs, which reads from a table, has an inconsistent data type: the actual table column is varchar(100), but the ETL table-read metadata declares it as varchar(20).
There is a warning in the log stating:
oraD_CHNL_Ref: When checking operator: When binding output interface field "CHNL_SLSFRCLEADERGROUP" to field "CHNL_SLSFRCLEADERGROUP": Implicit conversion; from source type "ustring[max=100]" to result type "ustring[max=20]": Possible truncation of variable length string
The above warning appears before the transformer's active stage is processed.
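To illustrate why that implicit conversion warning matters: any value longer than 20 characters is silently cut when forced into the max=20 field. A minimal sketch (the sample value is made up, not real data from the job):

```shell
# Illustration of the truncation risk flagged by the warning: a value that
# fits in varchar(100) is silently shortened when stored in varchar(20).
value="SALESFORCE_LEADER_GROUP_EMEA_REGION_1"   # 37 chars, fits varchar(100)
truncated=$(printf '%.20s' "$value")            # what a max=20 field keeps
echo "original : $value"
echo "truncated: $truncated"                    # prints SALESFORCE_LEADER_GR
```

Widening the table-read metadata to varchar(100) to match the actual column removes both the warning and the truncation risk.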