When I replace it with a Sequential File stage, the job runs successfully. Any clue as to what is happening?
Placing a dataset on any output link in the job gives the same result.
Can you create another job with a Data Set output as a test, to see whether the problem is limited to that one job and its data? Does the job abort at or before row one, or does it happen downstream? If you use a different APT_CONFIG file (with a different number of nodes), does it work?
I deleted 7-8 stages, backtracking all the way to the first stage, and the issue still persisted. The job aborts immediately, without processing any rows.
Then I wrote the output in sequential mode instead, and the job ran successfully!
All the other jobs in the same project run fine; only this one is affected. I wonder what the problem is.
The problem disappeared mysteriously... but we realised the other day that there had been a change in the default configuration file used by multiple projects. Most probably that was the culprit.
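That would fit the symptoms: the Data Set stage writes its data files to the `resource disk` paths named in the active configuration file, whereas a Sequential File stage writes to an ordinary path and never consults `resource disk`. So a configuration change that points `resource disk` (or `resource scratchdisk`) at a missing or unwritable directory can abort any job writing a Data Set while leaving sequential-file jobs untouched. For reference, a minimal two-node configuration file looks like the sketch below; the hostname and directory paths are placeholders, not values from this thread:

```
{
        node "node1"
        {
                fastname "etl_server"
                pools ""
                resource disk "/data/datasets" {pools ""}
                resource scratchdisk "/data/scratch" {pools ""}
        }
        node "node2"
        {
                fastname "etl_server"
                pools ""
                resource disk "/data/datasets" {pools ""}
                resource scratchdisk "/data/scratch" {pools ""}
        }
}
```

Worth checking that every `resource disk` and `resource scratchdisk` directory in the file actually exists and is writable by the DataStage user on each node listed.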