Sisi, <candid> RIPE Atlas was built for monitoring the infrastructure. The direction you are suggesting bears a significant risk of destroying RIPE Atlas, because we will be regarded as breaking our agreement with probe hosts, and probe hosts will disconnect their probes. I am opposed to risking RIPE Atlas for the sake of the HTTP measurements; specifically, measurements that are bound to make RIPE Atlas a target. I am all for doing the work you suggest, but I have to suggest that you find people willing to cooperate while knowing the consequences. If we proceed as you suggest, we may very well end up with no useful platform for any purpose. Daniel ---------- Sent from a hand held device.
On 13.01.2015, at 18:44, Sisi Wei <Sisi.Wei@propublica.org> wrote:
Hi Robert and everyone,
I’m excited to see RIPE take this step in allowing more HTTP measurements across the system. I certainly support this proposal.
In addition, along similar lines to what Brian Trammell has brought up, I'm particularly interested in detecting any type of impairment at the HTTP level. Therefore, I would love to see HTTP measurements performed not only with RIPE Atlas anchors as targets (though that is a great start), but with the rest of the web as well.
Along with Jared's idea of expanding it to allow testing of your own website, perhaps we could even extend this to the ability to test any website that gives you permission (proof of which we could provide to RIPE).
Of course all such tests should be limited in both size and frequency, as appropriate with the current test restrictions such as the credit system and the 4 KB limit.
I know, however, that there are privacy and security issues on the probe hosts' side that must be considered. I would certainly appreciate suggestions on how we could allow HTTP measurements against the general web while protecting probe hosts from being held liable by their local governments for their probes' test histories.
At ProPublica, our research interest lies in detecting whether large news sites are available all over the world, as well as how they come to be unavailable or censored. One of our recent projects did not use RIPE Atlas, but instead relied on other existing testing mechanisms in China to detect the availability of 18 international news homepages: https://projects.propublica.org/firewall/. We would love to run our own tests on the RIPE network, in many more countries than just China, but complete results would not be possible without HTTP measurements.
Sincerely, Sisi
---
Sisi Wei News Apps Developer ProPublica
On Jan 9, 2015, at 4:44 AM, Robert Kisteleki <robert@ripe.net> wrote:
Dear All,
I'd like to provide some background information mostly about bandwidth usage that can help this discussion.
For RIPE Atlas anchors, we always ask the host to be prepared for traffic of up to 10 Mbit/sec. However, in reality we're very far from using this much at the moment: anchors that are involved in DNSMON measurements use about 256 kbit/sec, whereas anchors that don't do DNSMON measurements use about 50 kbit/sec.
About probes: an IPv4-only probe uses ~4 kbit/sec, and an IPv4+IPv6 probe ~7 kbit/sec. See also: https://atlas.ripe.net/about/faq/#how-much-bandwidth-will-the-probe-consume
All of these numbers are averages, based on spot-checking some random anchors/probes. Surely there are probes that use more (or less) average bandwidth than the figures I mentioned.
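[Editor's note: as a back-of-the-envelope check (my own arithmetic, not a figure from Robert's mail or the FAQ), the sustained rates above translate into fairly modest monthly volumes:]

```python
def monthly_gigabytes(kbit_per_sec, days=30):
    """Convert a sustained rate in kbit/sec into GB per month.

    Assumes 1 kbit = 1000 bits and a 30-day month; illustrative only.
    """
    bytes_per_sec = kbit_per_sec * 1000 / 8
    return bytes_per_sec * 86400 * days / 1e9

# A v4-only probe at 4 kbit/sec moves roughly 1.3 GB/month;
# a DNSMON anchor at 256 kbit/sec roughly 83 GB/month.
print(round(monthly_gigabytes(4), 1))    # ~1.3
print(round(monthly_gigabytes(256), 1))  # ~82.9
```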
The HTTP service provided by the anchors limits the response size to about 4 KB, so even with all the protocol overhead it fits in a few TCP packets. That puts it in practically the same ballpark as the measurements we already allow (or even below it, if one thinks about traceroutes...)
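[Editor's note: to illustrate the "few TCP packets" claim, here is a sketch assuming a typical 1460-byte MSS (1500-byte MTU minus 40 bytes of IPv4+TCP headers); this is not part of the anchor implementation:]

```python
import math

def tcp_segments(payload_bytes, mss=1460):
    """Number of TCP segments needed to carry a payload of the
    given size, assuming full-MSS segments; illustrative only."""
    return math.ceil(payload_bytes / mss)

# A response capped at 4 KB (4096 bytes) fits in 3 segments.
print(tcp_segments(4096))  # 3
```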
We'll also enforce the usual limits like maximum number of probes per measurement and minimum measurement frequency.
Regards, Robert
On 2015-01-07 21:34, Bryan Socha wrote: I love the idea, but unless you can profile what capacity the probe/anchor has available, I don't think the resulting measurements will be usable. There is no way to know whether your HTTP request was slow because someone at the endpoint is a hard-core torrent user maxing their connection out. The same applies to areas where you have a hard bandwidth cap and minimal headroom. Ping/traceroute, while not always a good test, does squeeze through with minimal variance in results when the site's bandwidth is congested.
Also, as an anchor host, can I limit the maximum bps? Some locations are not low-cost if everyone decides to run HTTP tests against some speed-test file. Our Singapore anchor, for example, would cost more per month than we spent on the hardware to host the anchor in the first place. I suspect probe/anchor hosts in other areas such as Africa, Australia, New Zealand, and South America would get even larger monthly bills.
Bryan Socha Network Engineer DigitalOcean