We are using the IBM DB2 UDB Bulk Loader stage and it mostly works, but messages in the load log that should be treated as warnings or fatal errors are not handled correctly.
Has anyone written code to inspect the logs? Or is there a better way to handle this?
Here is my latest example: we are loading to a partitioned table (I know we should use the Enterprise stage, but we don't have it working yet), and every single row is rejected.
The DataStage log shows only this warning:
HEALTH_PLAN_PAYMENT_FACT,0: Warning: Rebate_HlthPlnPymnt_ds_to_HPP_FACT_bulk_ins.HEALTH_PLAN_PAYMENT_FACT: SQL3107W There is at least one warning message in the message file.
but SQL3107W is raised here only because of another, unimportant warning in the message file.
Here is the log:
SQL3501W The table space(s) in which the table resides will not be placed in
backup pending state since forward recovery is disabled for the database.
SQL3039W The memory available to LOAD for DATA BUFFER prohibits full LOAD
parallelism. Load parallelism of "1" will be used
SQL1128W SAVECOUNT is ignored. Reason code = "1".
SQL3109N The utility is beginning to load data from file
"/tmp/udbpipe06Ij7a".
SQL3500W The utility is beginning the "LOAD" phase at time "05/15/2008
19:22:26.044906".
SQL3535W The LOAD command parameter "USING" is no longer supported; its value
will be ignored by the LOAD utility.
SQL3519W Begin Load Consistency Point. Input record count = "0".
SQL3520W Load Consistency Point was successful.
SQL0327N The row cannot be inserted into table "DBO.HEALTH_PLAN_PAYMENT_FACT"
because it is outside the bounds of the defined data partition ranges.
SQLSTATE=22525
*** this is the INFORMATIONAL message that should have been FATAL
SQL3110N The utility has completed processing. "28984227" rows were read
from the input file.
SQL3519W Begin Load Consistency Point. Input record count = "28984227".
SQL3520W Load Consistency Point was successful.
SQL3515W The utility has finished the "LOAD" phase at time "05/15/2008
21:58:58.765546".
SQL3500W The utility is beginning the "BUILD" phase at time "05/15/2008
21:58:58.768645".
SQL3213I The indexing mode is "INCREMENTAL".
SQL3515W The utility has finished the "BUILD" phase at time "05/15/2008
21:59:00.507812".
SQL3107W There is at least one warning message in the message file.
Number of rows read = 28984227
Number of rows skipped = 0
Number of rows loaded = 0
Number of rows rejected = 28984227
Number of rows deleted = 0
Number of rows committed = 28984227
** The rejected-row count above is the other big indication of a problem!
DB2 Bulk Load - log messages - warnings not processed
Michael Favero
2852 Humboldt Ave So
Minneapolis, MN 55408
Has a flag been set in DB2 or in DataStage to demote warning message severities to informational?
You could, as a workaround, post-process the log using an after-job subroutine to search out particular SQLSTATE or DBMSCODE values. Setting ErrorCode to a non-zero value will abort the job.
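A minimal sketch of that workaround, assuming the DB2 load message file (the one SQL3107W points at) has been captured to a known path. The SQLSTATE classification and the rejected-row check are illustrative assumptions, not DataStage or DB2 APIs; in an after-job subroutine you would map a non-empty result to a non-zero ErrorCode to abort the job.

```python
import re

SQLSTATE = re.compile(r"SQLSTATE=(\w{5})")
REJECTED = re.compile(r"Number of rows rejected\s*=\s*(\d+)")

def load_log_problems(text):
    """Return descriptions of log conditions that should abort the job."""
    problems = []
    for state in SQLSTATE.findall(text):
        # SQLSTATE classes 00/01/02 are success/warning/no-data;
        # anything else (e.g. 22525, row outside the defined data
        # partition ranges) is a real error.
        if state[:2] not in ("00", "01", "02"):
            problems.append("error SQLSTATE=" + state)
    m = REJECTED.search(text)
    if m and int(m.group(1)) > 0:
        problems.append(m.group(1) + " rows rejected")
    return problems

# Demo on an excerpt of the log quoted above.
sample = (
    "SQL0327N The row cannot be inserted. SQLSTATE=22525\n"
    "Number of rows rejected = 28984227\n"
)
print(load_log_problems(sample))
```

The same idea works for DBMSCODE values: add another regex for the specific codes you consider fatal and append to the same problem list.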
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Sorry, don't know anything about DB2 but I have similar fun with Oracle bulk loads on the Server side. Nothing about sqlldr having problems is considered fatal it seems, except perhaps if it can't find the control or data files it is looking for. Once it starts loading, it is perfectly happy to crap out and leave it up to you to determine if indeed there was a problem or not.
For example, I have a routine to both check for the presence of a 'bad' file and to grep the log for ORA errors. Either one of them indicates a 'fatal' problem for me, and so *I* then fail the job at that point.
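A rough sketch of that check, under the assumption that the sqlldr log text and the bad-file path are passed in by the caller; only the ORA-grep and bad-file ideas come from the post, the rest is illustrative.

```python
import os
import re

# sqlldr error messages look like ORA-01400, ORA-12899, etc.
ORA_ERROR = re.compile(r"\bORA-\d{5}\b")

def sqlldr_failed(bad_file, log_text):
    """True when sqlldr left a bad file behind or logged any ORA- error."""
    return os.path.exists(bad_file) or bool(ORA_ERROR.search(log_text))

# Demo: no bad file, but an ORA- error in the log text.
print(sqlldr_failed("/tmp/no_such_file.bad", "ORA-01400: cannot insert NULL"))
```

Either condition returning True is the signal to abort the job, just as in the DB2 case.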
-craig
"You can never have too many knives" -- Logan Nine Fingers