Hi,
We are using DataStage 7.5 and are working on a PeopleSoft upgrade project.
When we have more than 500,000 records, the job aborts without showing any error,
or sometimes the error looks like the one given below:
Call to output link returned numeric error code: -100
Please help me.
If we limit the number of rows to 500,000 while running the job and load the data incrementally, we do not face the issue.
But sometimes we have a requirement to load the complete data set from source to target.
Do we have to change any settings to accommodate more rows at a time?
Suggestions are welcome.
Issues while loading more number of rows to target
Thanks,
Madhavi
Methinks you are bumping up against some limit, such as a 32-bit hash file being stuffed too full, database rollback exceeded, snapshot too old, etc. Can you describe the nature of the job that blows up and paste all yellow and red error messages?
Kenneth Bland
Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
-
- Premium Member
- Posts: 503
- Joined: Wed Jun 29, 2005 8:14 am
Re: Issues while loading more number of rows to target
If you are trying to use the delivered jobs for loading, then the most important thing to take care of is the IPC buffer size. The jobs have complex logic and many stages in between to process the data, so the IPCs time out many times, and without any errors in the log.
Try running the job with a constraint like inrownum > 500000 and see whether it is a data problem or not.
Try replacing the target table (DRS) and the IPC with a flat file and run the job.
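To make the first suggestion concrete, here is a minimal Python sketch of the row-range test: skip everything up to a cutoff and process only the tail, so you can see whether the abort follows particular records rather than sheer volume. The cutoff value and the idea of iterating over a flat extract are assumptions for illustration, not part of the delivered jobs.

```python
CUTOFF = 500_000  # the row count where the job reportedly dies

def rows_after_cutoff(lines, cutoff=CUTOFF):
    """Yield (row_number, row) pairs for rows beyond the cutoff,
    mimicking a transformer constraint like inrownum > 500000."""
    for n, row in enumerate(lines, start=1):
        if n > cutoff:
            yield n, row

# Feed only the tail of the extract into the load: if the abort still
# happens at the same records, it points at a data problem.
```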
Deepak - the IPC buffer size really should make no difference; if the machine is quite slow, then the IPC timeouts should be raised. The default buffers are just fine in 99.9% of jobs, and increasing them is most often a waste of precious process virtual memory. Usually either the read or the write is going to be slower, so the buffer will be either empty or full almost all of the time, and its actual size is irrelevant. But perhaps I'm misunderstanding how buffers work and you have a different take.
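The point about the buffer sitting empty or full can be shown with a deterministic little simulation (a sketch only; the rates and capacity are made-up numbers, not DataStage internals): whichever side is slower, the fill level quickly pins near one end, so capacity barely matters.

```python
def simulate_buffer(capacity, producer_rate, consumer_rate, steps):
    """Toy model of an inter-process buffer: each step the producer
    writes up to `producer_rate` rows (bounded by capacity) and the
    consumer then drains up to `consumer_rate`. Returns the fill
    level after each step."""
    fill = 0
    levels = []
    for _ in range(steps):
        fill = min(capacity, fill + producer_rate)  # producer blocks when full
        fill = max(0, fill - consumer_rate)         # consumer idles when empty
        levels.append(fill)
    return levels

# Fast producer, slow consumer: the buffer settles near capacity.
# Slow producer, fast consumer: the buffer sits at empty.
```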
pmadhavi - as Ken and Deepak have already noted, it is important to know what sort of a stage you are writing to in order to locate the cause.
The project in Production is running on DS 7.1, whereas we are doing the upgrade in 7.5.
So there is no IPC stage defined in the customised maps.
Is there anywhere else I can change the timeout option if we don't have an IPC stage?
Please find the description of the job below:
Source --> Aggregator --> Transformer --> Sort Stage --> Aggregator --> Target
We are getting the error when data is being loaded from the Sort stage to the Aggregator.
In the properties of the Sort stage:
Max rows in Virtual Memory: 10000
Temporary Directory: (blank)
Do the above-mentioned two parameters have anything to do with the issue?
Please let me know if you require any other information.
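Those two Sort stage settings map onto the classic external-sort pattern: rows are sorted in memory in chunks of at most "Max rows in Virtual Memory", each chunk is spilled to a file in the temporary directory, and the spilled runs are merged back. Here is a minimal Python sketch of that idea, purely illustrative of the mechanism, not DataStage's actual implementation or on-disk format:

```python
import heapq
import os
import tempfile

def _spill(sorted_chunk, tmp_dir):
    """Write one sorted run to a temp file and return its path."""
    fd, path = tempfile.mkstemp(dir=tmp_dir)
    with os.fdopen(fd, "w") as f:
        f.writelines(row + "\n" for row in sorted_chunk)
    return path

def external_sort(rows, max_rows_in_memory=10_000, tmp_dir=None):
    """Sort an arbitrarily large stream of string rows using bounded
    memory: sort chunks of `max_rows_in_memory`, spill them to
    `tmp_dir`, then k-way merge the runs."""
    runs, chunk = [], []
    for row in rows:
        chunk.append(row)
        if len(chunk) >= max_rows_in_memory:
            runs.append(_spill(sorted(chunk), tmp_dir))
            chunk = []
    if chunk:
        runs.append(_spill(sorted(chunk), tmp_dir))
    files = [open(p) for p in runs]
    try:
        for line in heapq.merge(*files):  # streaming k-way merge
            yield line.rstrip("\n")
    finally:
        for f in files:
            f.close()
        for p in runs:
            os.remove(p)
```

With a blank temporary directory the spill files go to the system default, so a small or full temp filesystem is one plausible way a large sort could die without a clear error in the log.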
Thanks,
Madhavi