Proposal for public HTTP measurements
Dear RIPE Atlas users,

The topic of publicly available HTTP measurements in RIPE Atlas comes up from time to time. There were a number of discussions about the pros and cons of this feature over the years (including exposing probe hosts to unnecessary risks of other users "measuring" just about any kind of HTTP content out there), with no firm outcome.

While we understand that this feature would come in handy for some of our users, it does not benefit everyone. Therefore our proposal is the following:

1. We'll enable HTTP measurements to be performed by all Atlas users, from any probes.

2. The targets of such measurements can only be RIPE Atlas anchors (these already run HTTP servers, see https://atlas.ripe.net/docs/anchors/).

3. Parameters like costs, minimum frequency, maximum number of probes involved, etc. will be set by the development team, just as with the other measurements.

4. The RIPE NCC will still be able to support other, vetted HTTP measurements as long as they benefit the community, as well as other HTTP measurements that we deem operationally useful. These will be evaluated on a case-by-case basis.

Please speak up on the MAT working group list (mat-wg@ripe.net) if you support or don't support this proposal, or if you have any other opinion about it.

Regards,
Robert Kisteleki
RIPE NCC R&D manager, for the RIPE Atlas team
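For concreteness, a minimal sketch of what scheduling such a measurement could look like through the RIPE Atlas measurement API is shown below, assuming the proposal is implemented as described. The API key, the anchor hostname, the probe selection and the exact HTTP-specific fields are illustrative assumptions, not part of the proposal.

```python
# Sketch only: create a one-off HTTP measurement against a RIPE Atlas anchor.
# The endpoint follows the v2 measurement API; the key, anchor name and the
# precise set of accepted "http" fields are placeholders, not confirmed values.
import requests

API_URL = "https://atlas.ripe.net/api/v2/measurements/"
API_KEY = "YOUR_ATLAS_API_KEY"                       # hypothetical API key
ANCHOR = "nl-ams-as3333.anchors.atlas.ripe.net"      # hypothetical anchor hostname

spec = {
    "definitions": [{
        "type": "http",                              # the measurement type proposed here
        "af": 4,
        "target": ANCHOR,                            # must be an anchor under this proposal
        "description": "HTTP fetch from an Atlas anchor",
    }],
    "probes": [{
        "requested": 10,                             # subject to the usual per-measurement limits
        "type": "area",
        "value": "WW",
    }],
    "is_oneoff": True,
}

resp = requests.post(API_URL, params={"key": API_KEY}, json=spec, timeout=30)
resp.raise_for_status()
print(resp.json())                                   # the created measurement ID(s)
```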
Hi Robert, all,

This seems, in general, like a good balance between utility and security with respect to research questions.
On 05 Jan 2015, at 13:59, Robert Kisteleki <robert@ripe.net> wrote:
Dear RIPE Atlas users,
The topic of publicly available HTTP measurements in RIPE Atlas comes up from time to time. There were a number of discussions about the pros and cons of this feature over the years (including exposing probe hosts to unnecessary risks of other users "measuring" just about any kind of HTTP content out there), with no firm outcome.
While we understand that this feature would come in handy for some of our users, it does not benefit everyone. Therefore our proposal is the following:
1. We'll enable HTTP measurements to be performed by all Atlas users, from any probes.
2. The targets of such measurements can only be RIPE Atlas anchors (these already run HTTP servers, see https://atlas.ripe.net/docs/anchors/).
Since the target will be an anchor which has known expected content, will these measurements include some indication of whether application-layer rewriting was detected? Thanks, cheers, Brian
3. Parameters like costs, minimum frequency, maximum number of probes involved, etc. will be set by the development team, just as with the other measurements.
4. The RIPE NCC will still be able to support other, vetted HTTP measurements as long as they benefit the community, as well as other HTTP measurements that we deem operationally useful. These will be evaluated on a case-by-case basis.
Please speak up on the MAT working group list (mat-wg@ripe.net) if you support or don't support this proposal, or if you have any other opinion about it.
Regards, Robert Kisteleki RIPE NCC R&D manager, for the RIPE Atlas team
Hi Robert and the MAT-WG,

This is an interesting proposal, and I'm happy to see RIPE open up HTTP requests a bit more. I hope that, as you do, the WG can share thoughts on how to balance privacy and security with measurement objectives, since that will better inform concurrent efforts.

On Mon, Jan 5, 2015 at 8:35 AM, Brian Trammell <trammell@tik.ee.ethz.ch> wrote:
Since the target will be an anchor which has known expected content, will these measurements include some indication of whether application-layer rewriting was detected?
Case in point: there are interesting parallels between this proposal and OONI's Header Field Manipulation Test [1]. RIPE might be interested in exploring similar services that return headers in order to detect in-flight modification, or better yet in whitelisting OONI's helper services.

Cordially,
Collin

[1] https://github.com/TheTorProject/ooni-spec/blob/master/test-specs/ts-006-hea...

--
Collin David Anderson
averysmallbird.com | @cda | Washington, D.C.
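As a rough illustration of the kind of check Brian and Collin are describing, a sketch that compares what was actually received against the content and headers the anchor is known to serve might look like the following. This is not OONI's actual test (which has a helper echo the request headers back); the expected values and the URL are placeholders.

```python
# Sketch: flag possible in-flight HTTP rewriting by comparing a fetched
# response against the anchor's known, expected response.
# EXPECTED_* values and the example URL are illustrative placeholders.
import hashlib
import requests

EXPECTED_BODY_SHA256 = None                                # hash of the content the anchor serves (placeholder)
EXPECTED_HEADERS = {"Content-Type": "application/json"}    # assumed values, not the anchors' real headers

def check_for_rewriting(url):
    """Return a list of human-readable anomalies; an empty list means none detected."""
    anomalies = []
    resp = requests.get(url, timeout=10)

    # Body check: a middlebox that injects or strips content changes the hash.
    body_hash = hashlib.sha256(resp.content).hexdigest()
    if EXPECTED_BODY_SHA256 and body_hash != EXPECTED_BODY_SHA256:
        anomalies.append("body hash mismatch: " + body_hash)

    # Header check, in the spirit of OONI's header field manipulation test:
    # look for headers that were altered or removed along the way.
    for name, expected in EXPECTED_HEADERS.items():
        got = resp.headers.get(name)
        if got != expected:
            anomalies.append(f"header {name!r}: expected {expected!r}, got {got!r}")

    return anomalies

# anomalies = check_for_rewriting("http://nl-ams-as3333.anchors.atlas.ripe.net/")  # hypothetical URL
```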
I would be interested in letting us start measurements against our own infrastructure from probes globally. One could publish a DNS record to authorize the traffic, similar to how Google validates domain ownership.

Jared Mauch
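A minimal sketch of the opt-in Jared describes is shown below: a target domain publishes a TXT record declaring that it consents to being measured, and the scheduler checks for it before accepting the target. The record name and token format are made up for illustration; this is not an existing RIPE Atlas (or Google) mechanism.

```python
# Sketch: DNS-based authorization check for a measurement target.
# The label and token are hypothetical; requires dnspython >= 2.0.
import dns.resolver

AUTH_LABEL = "_ripe-atlas-measurements"            # hypothetical record name
EXPECTED_PREFIX = "ripe-atlas-verification="       # hypothetical token format

def target_has_opted_in(domain):
    """Return True if the domain publishes an authorization TXT record."""
    try:
        answers = dns.resolver.resolve(f"{AUTH_LABEL}.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
    for rdata in answers:
        txt = b"".join(rdata.strings).decode("ascii", errors="replace")
        if txt.startswith(EXPECTED_PREFIX):
            return True
    return False

# Only accept the target if its operator has published the record, e.g.:
# if target_has_opted_in("example.net"):
#     ...  # proceed to schedule the HTTP measurement
```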
I love the idea, but unless you can profile what capacity the probe/anchor has available, I don't think the resulting measurements will be usable. There is no way to know whether your HTTP request was slow because someone at the endpoint is a hard-core torrent user maxing out their connection. The same applies in areas where hosts have a hard bandwidth cap and minimal spare capacity. Ping/traceroute, while not always a good test, does squeeze through with minimal variance in the results when the site's bandwidth is congested.

Also, as an anchor host, can I limit the maximum bps? Bandwidth is not cheap in some locations if everyone decides to run HTTP tests against some speed-test file. Our Singapore anchor, for example, would cost more per month than we spent on the hardware to host an anchor in the first place. I suspect the probe/anchor hosts in other areas like Africa, Australia, New Zealand, and South America would get even larger monthly bills.

Bryan Socha
Network Engineer
DigitalOcean
On 07 Jan 2015, at 21:34, Bryan Socha <bryan@digitalocean.com> wrote:
I love the idea, but unless you can profile what capacity the probe/anchor has available, I don't think the resulting measurements will be usable. There is no way to know whether your HTTP request was slow because someone at the endpoint is a hard-core torrent user maxing out their connection. The same applies in areas where hosts have a hard bandwidth cap and minimal spare capacity. Ping/traceroute, while not always a good test, does squeeze through with minimal variance in the results when the site's bandwidth is congested.
Also, as an anchor host, can I limit the maximum bps? Bandwidth is not cheap in some locations if everyone decides to run HTTP tests against some speed-test file. Our Singapore anchor, for example, would cost more per month than we spent on the hardware to host an anchor in the first place. I suspect the probe/anchor hosts in other areas like Africa, Australia, New Zealand, and South America would get even larger monthly bills.
So the proposal as I understand it has very low limits on the amount of payload that will be sent in the response (4 kB, i.e. four packets), which I presume will reduce the temptation to misuse this measurement for bulk transfer capacity estimation (...and please don't get me started on how utterly pointless it is to use a single TCP flow to estimate bulk transfer capacity in the first place :) ).

In the aggregate, though, you're right: this could lead to significant bandwidth usage, which I presume could be capped by the controller...?

Cheers,
Brian
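The packet-count figure is easy to sanity-check with back-of-the-envelope arithmetic (illustrative only, assuming a typical 1460-byte MSS):

```python
# How many full-size TCP segments does a ~4 kB response span?
import math

response_bytes = 4 * 1024     # the ~4 kB cap on the anchor's HTTP response
mss = 1460                    # typical MSS on a 1500-byte MTU path

segments = math.ceil(response_bytes / mss)
print(f"{response_bytes} B / {mss} B MSS -> {segments} full-size data segments")
# -> 3 data segments; with the request, handshake and ACKs the whole exchange
#    is still only a handful of packets, comparable to a long traceroute.
```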
On 01/07/2015 05:34 PM, Bryan Socha wrote:
Also, as an anchor host, can I limit the maximum bps? Bandwidth is not cheap in some locations if everyone decides to run HTTP tests against some speed-test file. Our Singapore anchor, for example, would cost more per month than we spent on the hardware to host an anchor in the first place. I suspect the probe/anchor hosts in other areas like Africa, Australia, New Zealand, and South America would get even larger monthly bills.
That's right. As an ambassador here in Chile, the first question I get when recruiting hosts is how much bandwidth a probe uses. This will be a major concern, and it will mean reviewing all the probes already installed; I think it will be a showstopper for future ones.

Hugo
Dear All,

I'd like to provide some background information, mostly about bandwidth usage, that may help this discussion.

For RIPE Atlas anchors, we always ask the host to be prepared for traffic of up to 10 Mb/sec. However, in reality we're very far from using this much at the moment: anchors that are involved in DNSMON measurements use about 256 kb/sec, whereas anchors that don't do DNSMON measurements use about 50 kb/sec.

About probes: a v4-only probe uses ~4 kb/sec, a v4+v6 probe uses ~7 kb/sec. See also: https://atlas.ripe.net/about/faq/#how-much-bandwidth-will-the-probe-consume

All of these numbers are averages, based on checking some random anchors/probes. Surely there are probes that use more (or less) bandwidth than the numbers I mentioned.

The HTTP service provided by the anchors limits the response size to about 4 KB, so even with all the overhead it fits in a few TCP packets, which is practically in the same ballpark as the other measurements we already allow (or even less, if one thinks about traceroutes...).

We'll also enforce the usual limits, like the maximum number of probes per measurement and the minimum measurement frequency.

Regards,
Robert
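To put these numbers together, here is a rough estimate of the extra per-anchor load public HTTP measurements could add. The measurement count, probe count and frequency are assumptions chosen purely for the sake of the arithmetic, not proposed limits.

```python
# Illustrative estimate of extra response traffic at one anchor.
response_bits = 4 * 1024 * 8            # ~4 KB HTTP response, in bits
concurrent_measurements = 50            # assumed measurements targeting this anchor
probes_per_measurement = 100            # assumed probes per measurement
interval_s = 900                        # assumed frequency: one request per 15 minutes per probe

requests_per_second = concurrent_measurements * probes_per_measurement / interval_s
extra_kbps = requests_per_second * response_bits / 1000

print(f"{requests_per_second:.1f} req/s -> ~{extra_kbps:.0f} kb/s of response traffic")
# ~5.6 req/s -> ~182 kb/s: the same order as DNSMON traffic (~256 kb/sec) and
# well below the 10 Mb/sec anchors are asked to provision for.
```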
Hi Robert and everyone,

I'm excited to see RIPE take this step in allowing more HTTP measurements across the system. I certainly support this proposal.

In addition, along similar lines to what Brian Trammell has brought up, I'm particularly interested in detecting any type of impairment at the HTTP level. Therefore, I would love to see HTTP measurements performed not only with RIPE Atlas anchors as targets (though that is a great start), but with the rest of the web as well.

Along with Jared's idea of expanding this to allow testing of your own website, perhaps we could even expand it to the ability to test any website that gives you permission (which we could prove to RIPE we obtained).

Of course, all such tests should be limited in both size and frequency, in line with the current restrictions such as the credit system and the 4 KB limit.

I know, however, that there are privacy and security issues that would come into play on the side of probe hosts, which must be considered. I would certainly appreciate suggestions on how we could allow HTTP measurements against the general web while protecting probe hosts from being held liable by their local governments for their probes' test histories.

At ProPublica, our research interest lies in detecting whether large news sites are available all over the world, as well as how they come to be unavailable or censored. One of our recent projects did not use RIPE Atlas, but instead relied on other existing testing mechanisms in China, to detect the availability of 18 international news homepages: https://projects.propublica.org/firewall/. We would love to do our own tests using the RIPE network, testing in many more countries than just China, but complete test results could not be created without being able to use HTTP measurements.

Sincerely,
Sisi

---
Sisi Wei
News Apps Developer
ProPublica
Sisi,

<candid>

RIPE Atlas was built for monitoring the infrastructure. The direction you are suggesting bears a significant risk of destroying RIPE Atlas, because we would be regarded as breaking our agreement with probe hosts, and probe hosts would disconnect their probes. I am opposed to risking RIPE Atlas for the sake of HTTP measurements; specifically, measurements that are bound to make RIPE Atlas a target. I am all for doing the work you suggest, but I have to suggest that you find people willing to cooperate while knowing the consequences. If we proceed as you suggest, we may very well end up with no useful platform for any purpose.

Daniel

----------
Sent from a handheld device.
On Jan 13, 2015, at 5:28 PM, Daniel Karrenberg <daniel.karrenberg@ripe.net> wrote:
Sisi,

<candid>

RIPE Atlas was built for monitoring the infrastructure. The direction you are suggesting bears a significant risk of destroying RIPE Atlas, because we would be regarded as breaking our agreement with probe hosts, and probe hosts would disconnect their probes. I am opposed to risking RIPE Atlas for the sake of HTTP measurements; specifically, measurements that are bound to make RIPE Atlas a target. I am all for doing the work you suggest, but I have to suggest that you find people willing to cooperate while knowing the consequences. If we proceed as you suggest, we may very well end up with no useful platform for any purpose.
Perhaps this is a good use case for a 'hosted VM' type of probe? The NLNOG RING is an example of something that is a pure full-mesh community, but without credits.

If there were something like a software probe, I could ask our systems team to deploy one VM per site we have for these purposes, and it could provide this capability with reasonable constraints.

- Jared
Jared, I'd love to know more about this.

Daniel: let me be clear that I have no intention of endangering RIPE Atlas, and your concerns about the system, as well as about all of the probe hosts, are the same as mine. That's why we specifically haven't used RIPE to power any projects. Hence, I'm extremely interested in whether there's any way that limited HTTP measurements could exist without being a threat.

Sisi
My only request in this discussion: if the decision is to allow HTTP measurements, please allow me to select whether or not I want to make the probe I host available for public HTTP measurements. A checkbox on the website for the probe (similar to the "allow for public use" selection) would be fine. Thanks.
On 2015-01-14 16:32, Mike. wrote:
My only request in this discussion:
If the decision is to allow HTTP measurements, please allow me to select whether or not I want to make the probe I host available for public HTTP measurements.
A checkbox on the website for the probe (similar to the "allow for public use" selection) would be fine.
Thanks.
Hi,

Our proposal restricts the targets to safe sites with low bandwidth requirements -- this addresses most of the concerns of probe hosts. Of course we can look into an opt-out from these measurements if there's consensus that we should.

Regards,
Robert
On 1/21/2015 at 2:28 PM, Robert Kisteleki wrote:

Our proposal restricts the targets to safe sites with low bandwidth requirements -- this addresses most of the concerns of probe hosts. Of course we can look into an opt-out from these measurements if there's consensus that we should.

=============

Part of my concern is that the definition of a "safe site" can be country-dependent, and is continually changing. What may be a "safe site" this month may not be a "safe site" next month, due to some over-eager political cause somewhere.
On 21.1.2015 at 17:51, Mike. wrote:
Part of my concern is that the definition of a "safe site" can be country-dependent, and is continually changing.
What may be a "safe site" this month may not be a "safe site" next month, due to some over-eager political cause somewhere.
This is probably not the case for RIPE Atlas anchors. I can't imagine how the JSON returned by the anchors could possibly become unsafe in any country :)

--
Regards,
Ondřej Caletka
participants (10)

- Brian Trammell
- Bryan Socha
- Collin Anderson
- Daniel Karrenberg
- Hugo Salgado
- Jared Mauch
- Mike.
- Ondřej Caletka
- Robert Kisteleki
- Sisi Wei