
Buffer Operator Warning (1st encounter)

Posted: Thu May 21, 2009 9:39 pm
by parag.s.27
Hi All,

I am getting the following warning in one of my jobs, which loads 10-million-plus records into a table in an Oracle 10g database.

buffer(10),0: APT_BufferOperator warning: newTime < startTime, 1242920537.63075 1242920538.07821

I tried searching the forum for anything matching but could not find a direct hit...

Can anyone help me understand this warning and what is causing it?

This warning is causing the entire master sequence to abort after loading 5 million records. :x
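
For reference, the two numbers in the message appear to be Unix epoch timestamps (they decode to 21 May 2009, the same day as this post). Assuming they are printed in the order newTime then startTime, the small C sketch below just subtracts them to show that the "new" reading is roughly 0.45 seconds earlier than the "start" reading, which is exactly the inversion the warning complains about.

Code:
#include <stdio.h>

/* The two values from the warning, read as seconds since the Unix epoch
 * (an assumption, but they match the date of this thread: 21 May 2009). */
int main(void)
{
    double newTime   = 1242920537.63075;  /* first value in the message  */
    double startTime = 1242920538.07821;  /* second value in the message */

    /* A negative result means the "new" reading is earlier than the
     * "start" reading, i.e. the clock appears to have gone backwards. */
    printf("newTime - startTime = %.5f seconds\n", newTime - startTime);
    return 0;
}

This prints -0.44746, so the later sample really is earlier on the wall clock.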

Posted: Thu May 21, 2009 9:50 pm
by chulett
Search just for "newTime". That and give us some idea of your job design.

Posted: Thu May 21, 2009 10:19 pm
by parag.s.27
chulett wrote:Search just for "newTime". That and give us some idea of your job design. ...

Hi Chulett,

Thanks for the suggestion, and apologies for not trying the search with all the different permutations and combinations first.

Posted: Fri May 22, 2009 6:52 am
by chulett
No worries... sometimes people search for "too much", i.e. too specific a search string. It can help to cut it back and use a more generic portion of the message. Sure, it gives you more to weed thru, but your fish is in the net. Somewhere. :wink:

Do you still need help? If so, please do give us some details of your job design, stages used, settings, etc.

Posted: Sat May 23, 2009 4:13 am
by parag.s.27
chulett wrote:No worries... sometimes people search for "too much", i.e. too specific a search string. It can help to cut it back and use a more generic portion of the message. ...
Hi,

Thanks a lot for the help. What we have figured out is that the job where this problem occurs has a CDC stage and a few Join stages, and the data is loaded into an Oracle table. The data volume is very high, almost 20 million records.

We were checking the performance of our jobs at the project level and had enabled the $APT_PM_PLAYER_TIMING parameter. With that enabled, DataStage takes per-record timings via the gettimeofday() call on AIX, and that turned out to be the prime cause: the timestamp taken for a later record occasionally came back about 2 milliseconds earlier than the previous one. One of our team members, who has good knowledge of AIX, set an AIX parameter that resolved the issue for us.
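
For anyone who hits the same thing: the pattern described above (per-player timing reading the wall clock, and the wall clock occasionally stepping backwards, for example under time synchronisation) can be illustrated with a small C sketch. This is not DataStage code, just a rough illustration, assuming a POSIX system where CLOCK_MONOTONIC is available (it is on current AIX and Linux), of why a gettimeofday()-based elapsed-time check can see newTime < startTime and why a monotonic clock cannot. The helper names tv_seconds/ts_seconds are mine, and whether the backwards step on our system came from time synchronisation or something else is not something this thread confirms.

Code:
#include <stdio.h>
#include <sys/time.h>   /* gettimeofday()                   */
#include <time.h>       /* clock_gettime(), CLOCK_MONOTONIC */

/* Convert a struct timeval (wall clock) to seconds as a double. */
static double tv_seconds(const struct timeval *tv)
{
    return (double)tv->tv_sec + (double)tv->tv_usec / 1e6;
}

/* Convert a struct timespec (monotonic clock) to seconds as a double. */
static double ts_seconds(const struct timespec *ts)
{
    return (double)ts->tv_sec + (double)ts->tv_nsec / 1e9;
}

int main(void)
{
    struct timeval  wall_start, wall_new;
    struct timespec mono_start, mono_new;

    /* Wall-clock timing: if something steps the system clock backwards
     * between the two calls, wall_new can be earlier than wall_start,
     * which is the "newTime < startTime" situation in the warning. */
    gettimeofday(&wall_start, NULL);
    gettimeofday(&wall_new, NULL);
    if (tv_seconds(&wall_new) < tv_seconds(&wall_start))
        printf("wall clock went backwards: newTime < startTime\n");

    /* Monotonic timing: CLOCK_MONOTONIC never goes backwards, so the
     * second reading is always >= the first, regardless of clock steps. */
    clock_gettime(CLOCK_MONOTONIC, &mono_start);
    clock_gettime(CLOCK_MONOTONIC, &mono_new);
    printf("monotonic elapsed: %.9f seconds\n",
           ts_seconds(&mono_new) - ts_seconds(&mono_start));

    return 0;
}

In a normal run the backwards-clock branch will not fire, of course; the point is only that elapsed-time measurements taken from the wall clock are at the mercy of clock adjustments, while a monotonic clock is not.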

Posted: Sat May 23, 2009 4:36 am
by sanjay
Please let us know which AIX parameter was changed.

Thanks
Sanjay
parag.s.27 wrote:Thanks a lot for the help. What we have figured out is that the job where this problem occurs has a CDC stage and a few Join stages, and the data is loaded into an Oracle table. ...