Hi experts,
Has anybody encountered this type of error when reading from an XML file?
"DSP.ActiveRun": Line 51, Exception raised in GCI subroutine:
Access violation.
TIA
"DSP.ActiveRun": Line 51, Exception raised in GCI
Moderators: chulett, rschirm, roy
ray.wurlod wrote: How much physical memory is in your DataStage server machine? ...
We are using 4 GB of RAM for our local development server.
I'm not sure if the error is due to the way I am reading the XML:
Folder stage -> XML Input stage -> Transformer -> flat file. Is this the correct way of reading XML?
Thanks
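For what it's worth, before digging into the job itself you can rule out a malformed source file by streaming it through a parser outside DataStage. A minimal Python sketch (the file path and record tag are placeholders, not anything from your job):

```python
# Sanity-check a large XML file by streaming it, so memory stays flat.
# The path and the record tag name are placeholders for your own file.
import xml.etree.ElementTree as ET

def count_records(path, tag):
    """Parse incrementally with iterparse and count elements named `tag`.
    Raises ParseError if the XML is malformed anywhere in the file."""
    count = 0
    for _, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == tag:
            count += 1
        elem.clear()  # release the finished subtree to keep memory low
    return count
```

If this blows up with a parse error, the problem is the file rather than the job design.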
Doing a reset and looking at the log entries as already recommended by Craig will help most at present.
Hi All,
I am stuck with the same error.
The log after resetting the job shows:
DataStage Job 297 Phantom 6020
Unhandled exception raised at address 0x1005AAD6 : Access violation
Attempted to write to address 0x00000000
Aborting DataStage...
Could someone tell me what the problem could be? The job design reads from a table, aggregates the data, and writes to a sequential file. The volume is approximately 131,000,000 rows.
Thanks,
Sharad
The error is a null pointer somewhere in the program. That makes it more difficult to analyze. Does the error occur immediately or after running for some time? Does it make a difference if you split the large XML into separate files?
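Along the lines of that suggestion, splitting a big XML file into smaller ones can be done outside DataStage with a short script. A hedged sketch, where the record tag, output naming, and wrapper element are all assumptions about your file's structure:

```python
# Split a large XML file into smaller files of `chunk_size` records each.
# "rec" as the record tag and the <root> wrapper are assumptions; adapt
# them to the actual structure of your file.
import xml.etree.ElementTree as ET

def split_xml(path, tag, chunk_size, out_prefix):
    """Stream `path`, writing every `chunk_size` records named `tag`
    to a new file. Returns the number of files written."""
    chunk, part = [], 0
    for _, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == tag:
            chunk.append(ET.tostring(elem, encoding="unicode"))
            elem.clear()
            if len(chunk) == chunk_size:
                _write_part(out_prefix, part, chunk)
                part += 1
                chunk = []
    if chunk:  # flush the final, possibly short, chunk
        _write_part(out_prefix, part, chunk)
        part += 1
    return part

def _write_part(prefix, part, records):
    with open(f"{prefix}{part:03d}.xml", "w") as f:
        f.write("<root>" + "".join(records) + "</root>")
```

Running the job against one of the smaller files would tell you whether the failure is volume-related or tied to a specific record.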
Hi,
The error is thrown only in the Aggregator stage.
The extract happens from a DB through OCI, and then a lookup of the data is done. The processed data are passed to an Aggregator, after which we get the error.
The job runs for an hour and a half and aborts in the Aggregator stage.
We do not have an XML file in the design; the source is an OCI stage.
Next time, start your own topic please.
That's too much data for the Aggregator to hold in memory; you'll need to sort your data first on the grouping keys and then assert that sort order in the Aggregator stage. Do that properly and it should be able to handle pretty much any volume.
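The reason sorted input helps: with unsorted input the stage must keep every group's running totals in memory at once, whereas with input sorted on the grouping keys it can emit each group the moment the key changes, holding only one group at a time. A language-neutral illustration in Python (the column layout is made up for the example):

```python
# Streaming aggregation over input already sorted by the grouping key:
# only one group is held in memory at a time, so row volume barely matters.
# The two-column (key, value) row shape here is just for illustration.
from itertools import groupby
from operator import itemgetter

def aggregate_sorted(rows, key_col, value_col):
    """rows must already be sorted on key_col, analogous to asserting
    sort order in the Aggregator stage. Yields (key, sum) pairs."""
    for key, group in groupby(rows, key=itemgetter(key_col)):
        yield key, sum(r[value_col] for r in group)

rows = [("a", 1), ("a", 2), ("b", 5)]      # already sorted on column 0
print(list(aggregate_sorted(rows, 0, 1)))  # [('a', 3), ('b', 5)]
```

If the rows arrived unsorted, the "a" group could reappear after "b", so every group's total would have to stay resident until end of input, which is exactly what exhausts memory at 131 million rows.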
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers