I don't know what could be causing that offhand. Does it abort at exactly the same row each time (even if the data sort order is different)?
"600 Go" does that mean Gigabyte?
What is your output stage after the merge stage?
One of the files was 1 Go and is now 600 Mo (sorry about that).
The job aborts at the same row, but I didn't try another sort (I'll try and keep you posted).
The first stage after the merge is a Transformer that selects the rows to send to a hashed file (a rough sketch of this step is below).
I tried with a Sequential File stage after the merge and the job still aborts.
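The Merge, Transformer, and Hashed File stages are all configured in the Designer rather than written as code, but as a rough illustration of what that transform step does, here is a minimal Python sketch: filter the merged rows and load the selected ones into a hash-keyed structure (the loose analogue of a hashed file). The column names and the selection constraint are hypothetical, not taken from the actual job.

    # Minimal sketch, assuming hypothetical column names and constraint.
    merged_rows = [
        {"key": 1, "status": "ACTIVE", "amount": 100},
        {"key": 2, "status": "CLOSED", "amount": 250},
    ]

    # The transform decides which rows go on to the hashed file;
    # a dict keyed on the lookup key plays that role here.
    hash_file = {
        row["key"]: row
        for row in merged_rows
        if row["status"] == "ACTIVE"  # hypothetical selection constraint
    }

    print(hash_file)  # {1: {'key': 1, 'status': 'ACTIVE', 'amount': 100}}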
Stick with a sequential file output for the moment to simplify the problem. Do "Go" and "Mo" mean "gigabyte" and "megabyte" or something else?
How big is the output file when the job aborts?
Hi All
First of all, I'm sorry for not answering earlier.
My job works fine now; here are the modifications:
I double-checked the format of the data -> two columns were wrong in the merge stage.
I reduced the size of the rows (and of the files) by selecting only the useful columns when I build the files. (A sketch of both fixes is below.)
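Neither fix is code inside DataStage itself, but a minimal pandas sketch of the same two checks may make them concrete: build the files with only the useful columns, and verify that the column formats of the two inputs agree before merging. The file names, column names, and merge key are hypothetical, not from the actual job.

    # Minimal sketch, assuming hypothetical file, column, and key names.
    import pandas as pd

    USEFUL_COLUMNS = ["customer_id", "order_date", "amount"]  # hypothetical

    # Fix 2: keep only the useful columns when building the files,
    # so the rows (and the files) stay small.
    left = pd.read_csv("file_a.csv", usecols=USEFUL_COLUMNS)
    right = pd.read_csv("file_b.csv", usecols=USEFUL_COLUMNS)

    # Fix 1: double-check that the column formats agree before the merge;
    # two mismatched columns were the cause of the problem here.
    for col in USEFUL_COLUMNS:
        if left[col].dtype != right[col].dtype:
            raise TypeError(f"{col}: {left[col].dtype} vs {right[col].dtype}")

    merged = left.merge(right, on="customer_id", suffixes=("_a", "_b"))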