Every weekend my job runs a full load. The full load generates 60K records in a flat file and feeds them to Transformer_1 for further lookup and processing.
Intermittently in the recent past, my job has been aborting with this error:
No, 60K shouldn't cause the transformer to hiccup. You can try to trace the core file with dbx, but that takes a bit of skill and practice. If we approach the error from the other end - what are you doing in your transform stage that is not merely passing through values and could cause problems? External or user routine calls? Stage variable manipulations?
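Before reaching for dbx at all, a quick sanity check is to ask the OS which program actually dumped. A minimal sketch, assuming the core file was written to the job's project directory (the path is a placeholder):

    # 'file' reports the executable that produced the core on most
    # UNIX flavors (AIX, Solaris, HP-UX).
    cd /path/to/ds_project      # placeholder: wherever the core landed
    file core
    # Typical output names the originating binary, e.g.
    # core: ... core file ... from 'some_executable'

That tells you which binary to hand to dbx later.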
What version of Oracle are you loading into? I'm curious whether your client version matches your server version and whether there is an opportunity here to 'upgrade' the client to a slightly higher version.
-craig
"You can never have too many knives" -- Logan Nine Fingers
@ArndW
The job has no routine calls and doesn't use any stage variables. There are 3 constraints in the transformer which redirect input from the file to 3 other transformers for separate file creation.
@chullet
The job doesn't load to any database. It has multiple transformers and outputs multiple files by doing lookups.
Is the Oracle upgrade really required? It's not in my hands, and I would need a strong reason to justify an upgrade.
Ok... wrong choice of words. What version of Oracle are you connecting to? I can't answer your question until you answer mine. And just to double-check, your 'lookups' are all from hashed files, yes?
-craig
"You can never have too many knives" -- Logan Nine Fingers
Well, at least they match which is a Good Thing but not necessarily always the case, hence the question. And the only reason I brought it up is because I personally have seen intermittent core dump issues like this myself caused by a buggy Oracle client.
So as you're trying to solve this, keep that in the back of your mind. We're not talking anything major here, just (perhaps) a maintenance release up to 9.2.0.5 or 9.2.0.6 - ones that "we" found to be more stable. Another query - what flavor of UNIX are you running - HP-UX by any chance? If so, then I'm going to double-down on my bet.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Make sure that all your lookups and the source have identical array sizes set. I know this might sound odd, but I have had "Abnormal termination" errors before, and more than once the array size fix has worked for me. Then again, my issues were with the target. Since your target is a flat file, I cannot assure you that this will work, but it is worth a try.
If that fails, change all your array sizes to 1 and see if that helps.
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
Are there any bugs in Oracle 9.2.0.4 related to this issue that were patched in the 9.2.0.5/6 releases?
That would help me explain to my DBA why I need an upgrade.
Accordingly, I'll ask the DBA for an Oracle maintenance upgrade to 9.2.0.5/6 and will see if the issue occurs again.
The issue is intermittent, so finding a permanent fix may take some time.
In order to analyze the core you need to understand how an executable program is put together and how runtime memory allocation and processing work. The dbx program is usually used to do this (it comes free with UNIX but is not always installed), along with the actual executable that caused the core. While you can do a stack trace and see what is at the top of the stack, it takes some practice (and patience) to track down core dumps.
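To make that concrete, here is a minimal dbx session sketch; the executable path is a placeholder (run 'file core' first to see which binary actually dumped):

    # Load the core against the executable that produced it.
    dbx /path/to/the_executable core

    # At the dbx prompt, 'where' prints the call stack; the topmost
    # frames show the function in which the abort happened.
    (dbx) where

    # Leave dbx when done.
    (dbx) quit

Even without full symbols, the function names on the stack will often tell you whether the crash happened inside an Oracle client library or in the engine itself.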