Issues while loading a large number of rows to target

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

pmadhavi
Charter Member
Posts: 92
Joined: Fri Jan 27, 2006 2:54 pm

Issues while loading a large number of rows to target

Post by pmadhavi »

Hi,
We are using DS 7.5 and working on a PeopleSoft upgrade project.
If we have more than 500,000 records, the job aborts without showing any error,
or sometimes the error looks like the one below:

Call to output link returned numeric error code:-100

Please help me.
If we limit the number of rows to 500,000 while running the job and load the data incrementally, we do not face the issue.

But sometimes we have a requirement to load the complete data from source to target.

Do we have to change any settings to accommodate more rows at a time?
Suggestions are welcome.
Thanks,
Madhavi
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

Methinks you are bumping up against some limit, such as a 32-bit hashed file being stuffed too full, database rollback exceeded, snapshot too old, etc. Can you describe the nature of the job that blows up and paste all yellow and red error messages?
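
For reference only, and assuming the culprit really does turn out to be a 32-bit hashed file hitting its 2 GB limit (MyHashedFile below is just a placeholder name, not a file from your job), the check and the fix can be sketched from the Administrator client's Command window roughly like this:

ANALYZE.FILE MyHashedFile
(reports the hashed file's type, modulus and sizing information)

RESIZE MyHashedFile * * * 64BIT
(converts the hashed file to 64-bit addressing so it can grow beyond 2 GB)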
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
DeepakCorning
Premium Member
Posts: 503
Joined: Wed Jun 29, 2005 8:14 am

Re: Issues while loading a large number of rows to target

Post by DeepakCorning »

If you are trying to use the delivered jobs for loading, then the most important thing to take care of is the IPC buffer size. Because the jobs have complex logic and many stages processing the data, the IPCs time out many times without any errors in the log.
Try running the job with a constraint like @INROWNUM > 500000 to see whether or not it is a data problem.
Also try replacing the target table (DRS) and the IPC with a flat file and running the job.
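
As an illustration only (the expressions use the DataStage system variable @INROWNUM and the 500,000 figure from the original post), the test constraint in the Transformer could be written as:

@INROWNUM <= 500000
(passes only the first 500,000 input rows)

@INROWNUM > 500000
(passes only the rows after 500,000, which helps show whether the later data itself is the problem)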
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

Deepak - the IPC buffer size really should make no difference; if the machine is quite slow then the IPC timeouts should be raised. The default buffers are just fine in 99.9% of jobs, and increasing them is most often a waste of precious process virtual memory. Usually either the read or the write is going to be slower, so the buffer will be either empty or full almost all of the time and its actual size is irrelevant. But perhaps I'm misunderstanding how buffers work and you have a different take.

pmadhavi - as Ken and Deepak have already noted, it is important to know what sort of stage you are writing to in order to locate the cause.
DeepakCorning
Premium Member
Posts: 503
Joined: Wed Jun 29, 2005 8:14 am

Post by DeepakCorning »

I am sorry, you are correct. I wrote 'increase the buffer size' in a hurry; what I actually wanted to write was 'time out'. Take care of the timeouts, as the processing of the as-delivered jobs is really slow.

Thanks for correcting :-)
pmadhavi
Charter Member
Posts: 92
Joined: Fri Jan 27, 2006 2:54 pm

Post by pmadhavi »

The project in production is running on DS 7.1, whereas we are doing the upgrade in 7.5.
So there is no IPC stage defined in the customised maps.

Is there anywhere else I can change the timeout option if we don't have an IPC stage?

Please find the description of the job below:

Source --> Aggregator --> Transformer --> Sort Stage --> Aggregator --> Target

We are getting the error when data is being loaded from the Sort stage to the Aggregator.

In the properties of the Sort stage:
Max Rows in Virtual Memory: 10000
Temporary Directory: (blank)

Do the above-mentioned two parameters have anything to do with the issue?

Please let me know if you require any other info.
Thanks,
Madhavi