Hi,
I have a job that sources from a database table with an ORDER BY on the key column, then a Transformer, then a Remove Duplicates stage (keeping the last record), and finally writes to a Data Set. This job used to run successfully, but now I am getting the following errors in the log:
Type: Fatal
Event: APT_CombinedOperatorController,0: Failure during execution of operator logic.
Type: Fatal
Event: APT_CombinedOperatorController,0: Fatal Error: Tsort merger aborting: Scratch space full
Type: Fatal
Event: Remove_Duplicates_33.DSLink6_Sort,0: Failure during execution of operator logic.
Type: Info
Event: Remove_Duplicates_33.DSLink6_Sort,0: Input 0 consumed 170017 records.
Type: Info
Event: Remove_Duplicates_33.DSLink6_Sort,0: Output 0 produced 0 records.
Type: Fatal
Event: APT_CombinedOperatorController,0: Fatal Error: Pipe read failed: short read
Type: Fatal
Event: node_node1: Player 4 terminated unexpectedly.
Type: Fatal
Event: main_program: APT_PMsectionLeader(1, node1), player 4 - Unexpected exit status 1.
Type: Fatal
Event: GROUP_KEYWORDS,0: Failure during execution of operator logic.
Type: Info
Event: GROUP_KEYWORDS,0: Output 0 produced 173880 records.
Type: Fatal
Event: GROUP_KEYWORDS,0: Fatal Error: Unable to allocate communication resources
Type: Fatal
Event: Remove_Duplicates_33,0: Failure during execution of operator logic.
Type: Info
Event: Remove_Duplicates_33,0: Input 0 consumed 0 records.
Type: Info
Event: Remove_Duplicates_33,0: Output 0 produced 0 records.
Type: Fatal
Event: Remove_Duplicates_33,0: Fatal Error: waitForWriteSignal(): Premature EOF on node dev-dwbi-app2 No such file or directory
Type: Fatal
Event: Peek_30,0: Failure during execution of operator logic.
Type: Info
Event: Peek_30,0: Input 0 consumed 0 records.
Type: Fatal
Event: Peek_30,0: Fatal Error: waitForWriteSignal(): Premature EOF on node dev-dwbi-app2 No such file or directory
Type: Fatal
Event: node_node1: Player 1 terminated unexpectedly.
Type: Fatal
Event: main_program: APT_PMsectionLeader(1, node1), player 1 - Unexpected exit status 1.
Type: Fatal
Event: node_node1: Player 2 terminated unexpectedly.
Type: Fatal
Event: main_program: APT_PMsectionLeader(1, node1), player 2 - Unexpected exit status 1.
Type: Fatal
Event: node_node1: Player 3 terminated unexpectedly.
Type: Fatal
Event: main_program: APT_PMsectionLeader(1, node1), player 3 - Unexpected exit status 1.
Type: Fatal
Event: main_program: Step execution finished with status = FAILED.
Type: Info
Event: main_program: Startup time, 0:06; production run time, 0:30.
Type: Control
Event: Job FTP_MF_samplecff aborted.
Please let me know if any of you have come across the same issue.
Thanks.
Every person you meet knows something you don't, Learn from them.
-- H. Jackson Brown
Hi Chulett,
The scratch and resource disks I am writing to are only 1% full, but the disk holding the DataStage installation is 99% full. Could this be causing the issue?
Please advise...
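A quick way to see which file systems are actually filling up is to check the scratch and resource disk paths alongside the engine install path. A minimal sketch; the directories listed are placeholders, so substitute the resource disk, scratchdisk, and DataStage home paths from your own environment:

```shell
#!/bin/sh
# Report how full each relevant file system is. The paths below are
# placeholders; replace them with your configured resource disk,
# scratchdisk, and DataStage install directories.
for dir in /tmp /var/tmp; do
    # df -P guarantees one-line-per-filesystem POSIX output;
    # column 5 is the "Use%" figure.
    pct=$(df -P "$dir" | awk 'NR==2 { gsub("%", "", $5); print $5 }')
    echo "$dir: ${pct}% full"
done
```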
ray.wurlod wrote: Why not disable operator combination and find out precisely where the error is occurring?
Thanks, Ray! With operator combination disabled, the error showed up at every stage. We cleaned up the DataStage home directory and now the job runs fine.
chulett wrote: Could you clarify what "cleaned up the DataStage home directory" means, please?
Hi Craig,
While we were running jobs that create Data Sets on the dev box, I noticed that failed jobs left virtual data set files behind in the Datasets directory under the DataStage home, and these were not cleaned up on the next run, which eventually filled the disk. So we had to clean up the Datasets directory under DS home.
I had assumed that the virtual data sets created by a failed job would be cleared after the next compilation, but I later realized that resetting the job is the only way.
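For anyone hitting the same problem, here is a cautious way to find the leftovers before deleting anything. This is only a sketch under assumptions: the Datasets path below is a placeholder for your install, and persistent Data Sets should be removed through the engine (e.g. with orchadmin) rather than a plain rm so the segment files on every node are cleaned up too.

```shell
#!/bin/sh
# List (do not delete) likely orphans from aborted runs. DATASETS_DIR
# is a placeholder; point it at the Datasets directory under your
# own DataStage home.
DATASETS_DIR="${DSHOME:-/opt/IBM/InformationServer/Server}/Datasets"

if [ -d "$DATASETS_DIR" ]; then
    # Files untouched for more than a day, checked while no jobs are
    # running, are likely left over from aborted jobs.
    find "$DATASETS_DIR" -type f -mtime +1 -print
fi

# Persistent Data Sets should be deleted through the engine so every
# node's segment files go too, e.g.:
#   orchadmin rm /path/to/stale_dataset.ds
```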
Thanks,
That's probably because you never changed the supplied configuration file, which puts both the disk and scratchdisk resources in the DataStage Engine file system.
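For readers who have never edited it, a one-node parallel configuration file looks roughly like the sketch below (the resource paths are illustrative; the fastname is taken from the log above). Pointing resource disk and scratchdisk at a dedicated file system keeps job spill space away from the engine install disk:

```
{
  node "node1"
  {
    fastname "dev-dwbi-app2"
    pools ""
    resource disk "/data/ds/datasets" {pools ""}
    resource scratchdisk "/data/ds/scratch" {pools ""}
  }
}
```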
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Virtual data sets are created in memory and their control files (xyz.v) are created in the project directory, being automatically deleted when the job ends.
So if no configuration file is supplied, it uses the default space. But it should have cleared that, right? So he was actually referring to normal (persistent) data sets rather than virtual data sets. Am I correct?
No, you're not right. Persistent Data Sets (the non-virtual kind) are not cleared. However, the original question relates to scratch disk, which might be used for any kind of temporary file, including "paging space" for virtual Data Sets. Scratch space *should* be cleared automatically but may not be if the job aborts. And the available space in the scratch disk when the job is not running is no good guide to how much is used when the job is running.
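That last point can be checked empirically: sample scratchdisk usage while the job is running and keep the peak, since the idle figure tells you nothing. A hedged sketch, where the path, interval, and sample count are all illustrative:

```shell
#!/bin/sh
# Poll a scratchdisk path while a job runs and report the peak usage.
# The path, polling interval, and sample count are illustrative.
SCRATCH="${1:-/tmp}"
peak=0
for i in 1 2 3; do
    # Column 5 of POSIX df output is "Use%".
    pct=$(df -P "$SCRATCH" | awk 'NR==2 { gsub("%", "", $5); print $5 }')
    if [ "$pct" -gt "$peak" ]; then
        peak=$pct
    fi
    sleep 1
done
echo "peak scratch usage: ${peak}%"
```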