Hi experts,
Has anybody encountered this type of error when reading from an XML file?
"DSP.ActiveRun": Line 51, Exception raised in GCI subroutine:
Access violation.
TIA
"DSP.ActiveRun": Line 51, Exception raised in GCI
Moderators: chulett, rschirm, roy
ray.wurlod wrote: How much physical memory is in your DataStage server machine? ...

We are using 4 GB RAM for our local development server.
I'm not sure if the error is due to the way I am reading the XML:
Folder stage -> XML Input stage -> Transformer -> flat file. Is this the correct way of reading XML?
Thanks
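As a side check of the design above (Folder -> XML Input -> Transformer -> flat file), it can help to confirm the XML itself is well formed outside of DataStage. A minimal Python sketch of the same logic, with made-up element names (`row`, `id`, `name`) purely for illustration:

```python
# Hypothetical sanity check, NOT DataStage code: parse the XML and
# emit one delimited flat-file line per <row>. ET.fromstring raises
# ParseError if the file is malformed, which would rule the XML in
# or out as the cause of the GCI access violation.
import xml.etree.ElementTree as ET

SAMPLE = """<rows>
  <row><id>1</id><name>alpha</name></row>
  <row><id>2</id><name>beta</name></row>
</rows>"""

def xml_to_flat(xml_text, delimiter="|"):
    """Return flat-file lines, one per <row> element."""
    root = ET.fromstring(xml_text)
    lines = []
    for row in root.iter("row"):
        fields = [child.text or "" for child in row]
        lines.append(delimiter.join(fields))
    return lines

print(xml_to_flat(SAMPLE))
```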
Hi All,
I am stuck with the same error.
The log after resetting the job is:
DataStage Job 297 Phantom 6020
Unhandled exception raised at address 0x1005AAD6 : Access violation
Attempted to write to address 0x00000000
Aborting DataStage...
Could someone tell me what the problem could be? The job design reads from a table, aggregates the data, and writes to a sequential file. The volume of data is approximately 131,000,000 rows.
Thanks,
Sharad
Hi,
The error is thrown only in the Aggregator stage.
The extract happens from a DB through OCI, and then a lookup of the data is done. The processed data are passed to an Aggregator, after which we get the error.
The job runs for an hour and a half and aborts in the Aggregator stage.
We do not have an XML file in the design. The source is an OCI stage.
That's too much data for the Aggregator to hold in memory. You'll need to sort your data first on the grouping keys and then assert that sort order in the Aggregator stage. Do that properly and it should be able to handle pretty much any volume.
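To see why the sort matters: an unsorted aggregator must keep every group in memory at once, while sorted input lets each group be aggregated and flushed as soon as its key changes, so memory use stays constant no matter the row count. A small Python illustration of that streaming pattern (not DataStage code, and the data is invented):

```python
# Streaming aggregation over input ALREADY sorted by key: groupby
# sees each key's rows as one contiguous run, so only the current
# group is ever held in memory. With unsorted input the same totals
# would require a dictionary holding all 131M rows' groups at once.
from itertools import groupby
from operator import itemgetter

def streaming_sum(rows):
    """rows: iterable of (key, value) pairs, pre-sorted by key."""
    for key, group in groupby(rows, key=itemgetter(0)):
        yield key, sum(value for _, value in group)

rows = [("A", 1), ("A", 2), ("B", 5), ("C", 3), ("C", 4)]
print(list(streaming_sum(rows)))
```

This is the same reason the Aggregator stage's "input is sorted" assertion works: it permits the flush-on-key-change strategy instead of accumulating every group.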
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers