Hi,

On 2019-03-30 13:52, Marcel Flores wrote:
> Hi All,
>
> Noticed a funny little swing in probe responsiveness. It seems to have returned to normal, but I was curious if anyone else had seen it. We run one-off measurements where we select all probes; usually the responsiveness rates look something like:
Prelude: the reports on the number of participating probes are populated by counting how many distinct probes we actually received data from. If everything goes as expected, this should eventually be consistent with the actual measurement results collected, with some delay.
> 12743 Requested / 10147 Actually Participated (taken on 18 March)
>
> But on 19 March we started seeing rates more like:
>
> 12914 Requested / 7852 Actually Participated
Due to an internal issue, result delivery was delayed around 18-19 March: results were buffered in the infrastructure and only made it to storage later than usual. I strongly suspect this is the reason for the numbers you saw. If you sum up all the results you have actually received by now, you will likely see a much higher "actually participated" rate. (This assumes that these numbers come from the UI, not from your own counting in the first place.)
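For scale: the drop you quote is participation falling from roughly 80% (10147/12743) to about 61% (7852/12914). A minimal Python sketch of such a re-count, assuming the results are fetched from the public RIPE Atlas v2 API and using a placeholder measurement ID:

import requests

# Placeholder: substitute the ID of your one-off measurement.
MSM_ID = 12345678

# Fetch every result delivered so far for this measurement; results
# that arrived late are included once they reach storage.
url = "https://atlas.ripe.net/api/v2/measurements/%d/results/" % MSM_ID
results = requests.get(url, timeout=60).json()

# Each result carries the ID of the probe that produced it in "prb_id";
# counting the distinct IDs gives the "actually participated" figure.
participated = {r["prb_id"] for r in results}
print("%d probes actually participated" % len(participated))

The set of distinct "prb_id" values only grows as late-arriving results reach storage, which is why a re-count now should land much closer to your usual ~80%.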
> Which continued until about 28 March, when it returned to about the previous levels:
>
> 12916 Requested / 9943 Actually Participated
>
> Did something funny happen with the dispatching? Did anybody else see this?
Due to the nature of the beast, real-time data delivery is hard, and delays like the above can occur. These are relatively rare, and even more rarely noticeable to users. I hope these fluctuations don't cause you any real harm.

Regards,
Robert