
Rare Error

Posted: Tue Jul 11, 2006 8:08 am
by I_Server_Whale
Hi all,

One of our load jobs fails, spitting out this error message:

Code:

ds_seqgetnext: error in 'read()' - Interrupted system call

This does not happen every run, just now and then. Has anybody encountered this error before?
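
For reference, "Interrupted system call" is the POSIX EINTR condition: a blocking read() was cut short by a signal before it returned any data. A caller that simply retries the call carries on; a caller that treats the -1 return as fatal reports an error like the one above. A minimal C sketch of the retry pattern, purely illustrative and not DataStage's actual code:

Code:

#include <errno.h>
#include <unistd.h>

/* Keep retrying the read whenever it is interrupted by a signal (EINTR). */
ssize_t read_retry(int fd, void *buf, size_t count)
{
    ssize_t n;
    do {
        n = read(fd, buf, count);
    } while (n == -1 && errno == EINTR);
    return n;
}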

Any input would be greatly appreciated,

Thanks much,
Whale.

Posted: Tue Jul 11, 2006 8:25 am
by chulett
Not all that rare, I've seen it before. :wink:

Want to share some details as to exactly what you are doing - file type, anything funky? Are you using the Filter option of the stage?

Posted: Tue Jul 11, 2006 8:45 am
by I_Server_Whale
Thanks for your reply, Craig, and I'm sorry that I didn't provide sufficient information before.

We are loading a hashed file into an Oracle table using the ORABLK stage. We also have a shared container that is used for 'balancing' purposes.

Looking at the log, the data did get loaded into the table but the job failed when it was going through the shared container.

If I accidentally missed any info, please let me know. Many thanks,

Naveen.

Posted: Tue Jul 11, 2006 9:02 am
by chulett
naveendronavalli wrote:We are loading a hashed file into an Oracle table using ORABLK stage. And we also have a shared container which is used for 'balancing' purposes.
So... where exactly is this process going wrong? In the Shared Container? What exactly is in it?

And you are spooling a hashed file directly into the ORABLK stage? Interesting...

Posted: Tue Jul 11, 2006 10:52 am
by I_Server_Whale
chulett wrote:And you are spooling a hashed file directly into the ORABLK stage? Interesting...
Why? Is it wrong to spool a hashed file into an ORABLK stage? Please let me know why it sounded interesting.

As far as the shared container is concerned, it compares the row counts from this load job to the row counts in the extract/transform job to see if they match.

Again, Thanks very much for your input,
Naveen.

same error

Posted: Fri Mar 26, 2010 1:12 pm
by basiltarun
chulett wrote:Not all that rare, I've seen it before. :wink:

Want to share some details as to exactly what you are doing - file type, anything funky? Are you using the Filter option of the stage?
Hi Craig,

I am getting the same error and my jobs are using the Filter command (grep). Is this something known that I should avoid?

I am processing 16 files with identical layouts. At first I used multi-instance jobs, but after I changed that the error still occurs. Now I have 4 sequences, each running 4 identical jobs, and it fails when executing the 3rd or 4th sequence. Usually one or two restarts will get the job to complete all right.
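
For what it's worth, the Filter option generally pipes the file through the command you give it (grep here), so the stage ends up reading its input from a pipe, and a read() on a pipe can come back with EINTR ("Interrupted system call") if a signal arrives mid-read. A hedged C sketch of that pipe-plus-grep pattern with an EINTR retry, just to illustrate the mechanism (the file name and pattern are placeholders, not anything from these jobs):

Code:

#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Placeholder filter command and file name -- not taken from the jobs above. */
    FILE *fp = popen("grep 'ERROR' input.dat", "r");
    if (fp == NULL) {
        perror("popen");
        return 1;
    }

    char buf[4096];
    for (;;) {
        ssize_t n = read(fileno(fp), buf, sizeof buf);
        if (n == -1 && errno == EINTR)
            continue;               /* interrupted by a signal: just retry */
        if (n == -1) {
            perror("read");         /* a non-retrying reader gives up here */
            break;
        }
        if (n == 0)
            break;                  /* EOF: the filter has finished */
        fwrite(buf, 1, (size_t)n, stdout);
    }

    pclose(fp);
    return 0;
}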

Thanks very much!

Posted: Fri Mar 26, 2010 5:42 pm
by chulett
This is old enough that I don't really remember what I was thinking of when I said I've "seen this". :?

Regardless, please start your own post. This lets us know your version of the product and your O/S and gives you the ability to proclaim the issue 'Resolved' if we get that far. When you do, please provide details of your job design and post the complete, unedited error message from your job, even if it is "the same". In your new post. Please. :wink: