My parallel job has a Data Set stage followed by a Remove Duplicates stage and then a DB2 Connector stage.
The job is getting aborted with the following error:
dotcom_migration_location_temp,0: Fatal Error: I/O subsystem: Open of /opt/IBM/pr/IS/Datasets/location.ds.dsadmp.prdil.0001.0000.0000.39e6.552c779a.0000.fef416d7 failed: No such file or directory
My analysis:
I am able to view the data in the Designer client, so I assume the dataset itself is not corrupt.
The permissions on the dataset are also correct (no issues with permissions).
In my job, the Remove Duplicates stage raises a warning:
rm_duplicateaccounts.lnk_dotcom_migration_location_temp_Sort: When checking operator: Operator of type "APT_TSortOperator": will partition despite the
preserve-partitioning flag on the data set on input port 0.
That warning has nothing to do with this error; it just states that the partitioning of the dataset is not compatible with the keys defined in the Remove Duplicates stage.
Try doing a cat on the location.ds file at the location specified in the Data Set stage. It should contain the paths to all the data files. Check that all the data files are present and that you have the right permissions on each of them.
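That check can be scripted. A minimal sketch, assuming a POSIX shell on the engine tier: the descriptor file is partly binary, but the segment (data) file paths are visible inside it as plain text, so we can extract anything that looks like an absolute path and test it for existence and read permission. The helper name and descriptor path below are illustrative, not part of any DataStage tooling.

```shell
#!/bin/sh
# Sketch: list every data-file path named in a dataset descriptor and
# report whether each one exists and is readable on this node.
check_dataset() {
    ds="$1"
    missing=0
    # -a treats the partly-binary descriptor as text; the pattern grabs
    # anything that looks like an absolute path.
    for f in $(grep -ao '/[^ "]*' "$ds"); do
        if [ -r "$f" ]; then
            echo "OK: $f"
        else
            echo "MISSING: $f"
            missing=1
        fi
    done
    return $missing
}

# Example (illustrative path):
# check_dataset /opt/IBM/pr/IS/Datasets/location.ds
```

Run this on each compute node: in a grid/NAS setup a file can be visible from one node and not another, so a per-node `MISSING:` line points straight at the broken segment.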
Priyadarshi Kunal
Genius may have its limitations, but stupidity is not thus handicapped.
Paul,
It's a grid environment. The dataset is in a location that can be accessed by the compute nodes; it's a shared NAS path.
The job is failing because it is searching for the wrong segment file for the dataset (as mentioned in my post).
For this dataset I can see a segment file with a different name.
I don't know why the job is trying to access a segment file that does not exist.
Since you are able to see data from the Designer, I believe at least one of your data files has been moved or deleted. That is why I asked you to check all the data files mentioned in the descriptor.
If the file names listed in the descriptor are not there and the files have been deleted, I am afraid you will have to do a cleanup and create the dataset again.
If the files are present, check the permissions on the NAS and whether it is accessible from all nodes.
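For the cleanup, a dataset should normally be removed with the orchadmin utility rather than a plain rm, so that the descriptor and its surviving segment files are deleted together. A hedged sketch only: the subcommand names are as I recall them (check `orchadmin help` on your install), orchadmin lives under the PX engine's bin directory, and all paths below are illustrative.

```shell
# Point orchadmin at the same configuration file the job uses
# (path is illustrative -- substitute your own APT config file).
export APT_CONFIG_FILE=/opt/IBM/InformationServer/Server/Configurations/default.apt

# Inspect what the descriptor thinks the dataset contains.
orchadmin describe /opt/IBM/pr/IS/Datasets/location.ds

# Remove the descriptor and all surviving segment files in one go.
orchadmin rm /opt/IBM/pr/IS/Datasets/location.ds
```

After the cleanup, re-run the upstream job to recreate the dataset from scratch.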
Priyadarshi Kunal
Yes, I checked the dataset descriptor for the segment files: it has three segment files mentioned, but one of them does not exist. I guess I have to do a cleanup and recreate the dataset, as you already mentioned.
But is there a particular reason for such a case happening, where a segment file goes missing by itself? Your views on this?