ds_seqgetnext: error in 'read()' - Interrupted system call
This does not happen on every run, only now and then. Has anybody encountered this error before?
Any input would be greatly appreciated,
Thanks much,
Whale.
Anything that won't sell, I don't want to invent. Its sale is proof of utility, and utility is success.
Author: Thomas A. Edison 1847-1931, American Inventor, Entrepreneur, Founder of GE
Thanks for your reply Craig and I'm sorry that I didn't provide sufficient information before.
We are loading a hashed file into an Oracle table using the ORABLK stage. We also have a shared container that is used for 'balancing' purposes.
Looking at the log, the data did get loaded into the table but the job failed when it was going through the shared container.
If I accidentally missed any info, please let me know. Many thanks,
Naveen.
naveendronavalli wrote:We are loading a hashed file into an Oracle table using ORABLK stage. And we also have a shared container which is used for 'balancing' purposes.
So... where exactly is this process going wrong? In the Shared Container? What exactly is in it?
And you are spooling a hashed file directly into the ORABLK stage? Interesting...
-craig
"You can never have too many knives" -- Logan Nine Fingers
And you are spooling a hashed file directly into the ORABLK stage? Interesting...
Why? Is it wrong to spool a hashed file into an ORABLK stage? Please let me know why it sounds interesting.
As far as the shared container is concerned, it compares the row counts from this load job to the row counts in the extract/transform job to see if they match.
Again, Thanks very much for your input,
Naveen.
chulett wrote:Not all that rare, I've seen it before.
Want to share some details as to exactly what you are doing - file type, anything funky? Are you using the Filter option of the stage?
Hi Craig,
I am getting the same error, and my jobs are using a filter command (grep). Is this a known issue that I should avoid?
I am processing 16 files with identical layouts. At first I used multiple-instance jobs, but after I changed that the error still occurs. Now I have 4 sequences, each running 4 identical jobs, and it fails while executing the 3rd or 4th sequence. Usually one or two restarts get the job to complete all right.
This is old enough I don't really remember what I was thinking of when I said I've "seen this".
Regardless, please start your own post. This lets us know your version of the product and your O/S and gives you the ability to proclaim the issue 'Resolved' if we get that far. When you do, please provide details of your job design and post the complete, unedited error message from your job, even if it is "the same". In your new post. Please.
-craig
"You can never have too many knives" -- Logan Nine Fingers