
Attempting to Cleanup after ABORT raised in stage

Posted: Thu Jul 26, 2007 10:45 pm
by Seya
I have a job that loads two columns from an OCI stage, without any transformation, directly into a hash file. The source table has around 2.8 million records. The job runs fine without any warnings in the DEV environment, but it fails in UAT with the warning below:

Attempting to Cleanup after ABORT raised in stage HL_Tbdwhapp20_Prop_H..Src_Tbdwhapp20_Prop

The source table is from the same database for both DEV and UAT.

Posted: Thu Jul 26, 2007 10:56 pm
by ArndW
Reset the job in the Director and look at the entry labelled "From previous run..."; if the problem is still unclear, please post that message's contents here.

Does the job abort right away or after running for a while, and do you have different ulimit or uvconfig settings between DEV and UAT?
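
To see the limits that the job process actually inherits (which can differ from those of your login shell, since jobs are spawned by the engine), a before-job subroutine can capture them into the job log. A minimal sketch, assuming a UNIX engine; the routine name is hypothetical:

* Hypothetical before-job subroutine: logs the ulimit settings that the
* job's own process inherits, which may differ from the login shell's.
Subroutine LogUlimits(InputArg, ErrorCode)
* A non-zero ErrorCode would abort the job; leave it at 0 on success.
ErrorCode = 0
* DSExecute runs a command under the named shell and captures its output.
Call DSExecute("UNIX", "ulimit -a", Output, SystemReturnCode)
* DSLogInfo writes an informational entry to the job log.
Call DSLogInfo("ulimit -a as seen by this job:" : @FM : Output, "LogUlimits")
Return
End

Attach it as the job's before-job subroutine in DEV and UAT and compare the two log entries.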

Posted: Thu Jul 26, 2007 10:57 pm
by Akumar1
Hi,
request you to send some more logs for the same.

Regards,
Akumar1

Posted: Thu Jul 26, 2007 11:06 pm
by Seya
Yes, I have tried resetting the job and running it again, but I still get the same warning. The job log is as follows:
-Starting Job HL_Tbdwhapp20_Prop_H.
raDbOraDSN = SFWHSE
raDbOraUsername = sfdcetluser
raDbOraPassword = ********
HshFilePath = /Data/SFDCUAT/hash/

-Environment variable settings:

-HL_Tbdwhapp20_Prop_H..Xfn: DSD.StageRun Active stage starting, tracemode

-HL_Tbdwhapp20_Prop_H..Src_Tbdwhapp20_Prop: SELECT APP_ID, APP_CD_PIGGY_IND FROM ALSADMIN.TBDWHAPP20

-HL_Tbdwhapp20_Prop_H..Hash_Tbdwhapp20_Prop.Out_Tbdwhapp20: Write caching disabled

-Attempting to Cleanup after ABORT raised in stage HL_Tbdwhapp20_Prop_H..Src_Tbdwhapp20_Prop





Posted: Thu Jul 26, 2007 11:08 pm
by ArndW
Seya - please re-read what I requested.

Posted: Thu Jul 26, 2007 11:14 pm
by Seya
After resetting the job, the "From previous run" entry shows the following messages:

DataStage Job 45 Phantom 958

Program "DSD.StageRun": Line 657, COMMON size mismatch in subroutine "DSP.Close".

Program "DSD.StageRun": Line 657, Unable to load file "DSP.Close".

Program "DSD.StageRun": Line 657, Unable to load subroutine.

Attempting to Cleanup after ABORT raised in stage HL_Tbdwhapp20_Prop_H..Src_Tbdwhapp20_Prop

Program "DSD.OnAbort": Line 164, COMMON size mismatch in subroutine "DSP.Close".

Program "DSD.OnAbort": Line 164, Unable to load file "DSP.Close".

Program "DSD.OnAbort": Line 164, Unable to load subroutine.



Posted: Thu Jul 26, 2007 11:20 pm
by ArndW
I think you might have imported the job and its compiled binary from your development machine, and that machine is at a different release level than UAT. Recompile the job in UAT, and consider upgrading whichever machine is at the lower version.

Posted: Fri Jul 27, 2007 12:49 am
by Seya
I have tried that too, but I get the same warning as before. I also tried running this job with a sequential file as the target for the 2.8 million records, and I still get the same warning.

When I reduce the number of records by applying a filter in the source, the job seems to work fine. I am assuming the problem might be because of the large number of records.

Posted: Fri Jul 27, 2007 1:33 am
by ray.wurlod
DSP.Close is an internal DataStage function. If there's a COMMON size mismatch, either one of your own routines has happened upon one of the names used for a named COMMON area in DSP.Close or, more likely, there's a bug that you need to report to IBM through your support provider.
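
For illustration, this is how such a clash arises in DataStage/UniVerse BASIC: two programs loaded into the same session declare a named COMMON block with the same name but different numbers of variables. The area name /MYAREA/ below is hypothetical; the same mismatch would occur if a user routine reused a name that DSP.Close declares internally.

* Routine A declares a named COMMON area holding two variables.
COMMON /MYAREA/ VarA, VarB

* Routine B, loaded into the same session, declares the same named
* area with three variables. When the loader reconciles the two, it
* raises "COMMON size mismatch in subroutine ...", as seen in the
* log messages above.
COMMON /MYAREA/ VarX, VarY, VarZ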