Unable to load data to a DB2 Table

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

chpraveen.msc
Participant
Posts: 26
Joined: Tue Nov 08, 2005 5:36 am

Unable to load data to a DB2 Table

Post by chpraveen.msc »

Hi all,
I have a very simple job which loads data to a target DB2 table, reading from a Dataset. The job was running fine until some time back; all of a sudden it is getting aborted with the fatal error below.


main_program: Internal Error: (k == numNodes): db2partutils.C: 773
Traceback: pureAssertion__13APT_FatalPathFPCcT1i() at 0x900000003f9a7f8
groupNodes__11APT_DB2InfoFPCUsT1P12APT_DB2UtilsP12APT_ErrorLogT1() at 0x90000000a39d89c
getTableDB2NodeMap__12APT_DB2UtilsFPCUsPP8APT_NodePiPP14APT_DB2NodeSetP12APT_ErrorLogT1() at 0x90000000a39e1c0
describeOperator__19APT_DB2LoadOperatorFv() at 0x90000000a3c2e3c
wrapDescribeOperator__15APT_OperatorRepFv() at 0x9000000044611d4
check1a__15APT_OperatorRepFv() at 0x9000000044628a4
sequenceAndCheck1Operators__11APT_StepRepFR12APT_ErrorLog() at 0x9000000046c5948
check__11APT_StepRepFv() at 0x9000000046c5470
check__8APT_StepFv() at 0x9000000046bc568
createAndCheckStep__7APT_OSLFP20APT_OSL_SIL_StepSpecR12APT_ErrorLog() at 0x900000009cc99c4
.() at 0x100007368
APT_PMconductorMain__FPFiPPc_i() at 0x9000000049e7e20
APT_SharedMain__FiPPcPFiPPc_i() at 0x9000000049e65fc
.() at 0x10000f484


We are using the DB2 Enterprise stage with the write method set to "Load". I have checked the connectivity from the DS Server and it looks good; I am able to query the table from the DS Server. I am not sure what could be the reason for this issue.
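By "checking connectivity" I mean roughly the following, done with the DB2 command line client on the DS server (the database, schema, table and user names here are placeholders, not the real ones):

# Run from the DataStage server under the DB2 client instance.
# SAMPLEDB, MYSCHEMA.MYTABLE and db2user are placeholders.
db2 connect to SAMPLEDB user db2user using db2password
db2 "SELECT COUNT(*) FROM MYSCHEMA.MYTABLE"
db2 connect reset
db2 terminate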

Thank you so much in advance for your valuable inputs.
chpraveen.msc
Participant
Posts: 26
Joined: Tue Nov 08, 2005 5:36 am

Post by chpraveen.msc »

I tried running the job with a Peek stage in place of the target and it runs fine, but when I swap it back to the DB2 Enterprise stage it gives the above fatal error. I have pasted the OSH dump below for reference. Please help me resolve this issue.

OSH script
# OSH / orchestrate script for Job CopyOfLoad_MEDIA_BEHAVIOR_POSTAL_DATA1 compiled at 18:45:56 20 AUG 2010
#################################################################
#### STAGE: Copy_of_DB2_MEDIA_BEHV_PSTL_DATA
## Operator
db2load
## Operator options
-db_cs [&DSProjectMapName]
-table '[&"$pDb2SchemaName"].[&"$pDB2TableName"]'
-mode append
-nonrecoverable
-dbname '[&"$pDb2DatabaseName"]'
-server '[&"$pDb2InstanceName"]'
-client_instance '[&"$pDb2ClientInst"]'
-user '[&"$pDb2UserName"]'
-password '[&"$pDb2Password"]'
## General options
[ident('Copy_of_DB2_MEDIA_BEHV_PSTL_DATA'); jobmon_ident('Copy_of_DB2_MEDIA_BEHV_PSTL_DATA')]
## Inputs
0< [] 'Data_Set_10:Lnk_MEDIA_BEHV_PSTL_DATA.v'
;
#################################################################
#### STAGE: Data_Set_10
## Operator
copy
## General options
[ident('Data_Set_10')]
## Inputs
0< [ds] '[&"$pIntermediateDatasetPath"]/abc.ds'
## Outputs
0> [modify (
POST_CD:not_nullable ustring[15]=POST_CD;
MEDIA_CTGY_ID:not_nullable ustring[3]=MEDIA_CTGY_ID;
MEDIA_PREF_BEHV_ID:not_nullable ustring[5]=MEDIA_PREF_BEHV_ID;
YR_NO:not_nullable ustring[4]=YR_NO;
TOT_HOUSE_CT:not_nullable int32=TOT_HOUSE_CT;
TOT_USER_CT:not_nullable int32=TOT_USER_CT;
keep
POST_CD,MEDIA_CTGY_ID,MEDIA_PREF_BEHV_ID,YR_NO,
TOT_HOUSE_CT,TOT_USER_CT;
)] 'Data_Set_10:Lnk_MEDIA_BEHV_PSTL_DATA.v'
;
# End of OSH code
mhester
Participant
Posts: 622
Joined: Tue Mar 04, 2003 5:26 am
Location: Phoenix, AZ

Post by mhester »

Since you indicate that this job has run before and I presume there have been no changes, I would focus my attention on:

1) The data has changed.
2) The DPF environment has somehow changed.
3) A patch has been installed on the ETL server.

If #1 and #3 above can be ruled out, then I would focus on #2. Some things I would look at:

1) Did a node (server) fail over? Is the job using a valid db2nodes.cfg?
2) Did anything about the table change?
3) Is the table in a backup-pending or load-pending state? I see that you chose non-recoverable, so even if the job failed it should not leave the table in an unusable state, but it is worth asking the DBAs. A quick way to check is sketched just after this list.
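For the pending-state question in #3, a quick check the DBAs (or you, with enough authority) can run from the DB2 command line looks roughly like this (database, schema and table names are placeholders):

# Run on the DB2 server, or from a client with a connection to the database.
# MYDB and MYSCHEMA.MYTABLE are placeholders.
db2 connect to MYDB
db2 load query table MYSCHEMA.MYTABLE
db2 list tablespaces show detail
db2 connect reset

The load query output reports table states such as Load Pending or Load in Progress, and the tablespace detail shows states such as Backup Pending.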

Not sure, but I believe that when you choose DB2 partitioning the operator reads db2nodes.cfg to get the partition map. If that file has changed, or you are pointing to a stale local (ETL) copy, it can cause problems, although I would expect a more descriptive error message than an internal assertion.
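If you want to eyeball that yourself, db2nodes.cfg lives under the instance owner's sqllib directory, so something like the following lets you compare what the DB2 server uses with any local copy the ETL server points to (the instance owner db2inst1 and the host names are placeholders):

# On the DB2 server, and on the ETL server if a local copy is referenced.
# db2inst1 is a placeholder for the actual instance owner.
cat ~db2inst1/sqllib/db2nodes.cfg
# Typical format, one line per database partition:
#   <partition-number> <hostname> <logical-port> [netname]
#   0 db2host01 0
#   1 db2host01 1
#   2 db2host02 0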

Lastly, if it is possible and none of my suggestions pan out, would you be able to stop and start the database? Maybe something is hung up.
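If you do get a window for that, the usual sequence on the DB2 server is roughly the following, run as the instance owner (it disconnects everyone, so coordinate with the DBAs first):

# Run as the DB2 instance owner during an agreed outage window.
db2 force application all
db2stop force
db2start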

Load was an option we never used, so I am not very knowledgeable about its failure modes. We used inserts, were happy with the 200k - 300k rows/sec, and never really needed what the load had to offer.

Let us know!
chpraveen.msc
Participant
Posts: 26
Joined: Tue Nov 08, 2005 5:36 am

Post by chpraveen.msc »

Hi Michael,
Thank you so much for your valuable inputs, and apologies for the delay in my response. I checked with the Admin team and there was a fix pack upgrade on one of the DS servers. I am assuming the issue was because of the upgrade.

DSX rocks :D !!!