Proposal: Generic HTTP measurements [GENERIC-HTTP]
Dear RIPE Atlas users,

We recently published a RIPE Labs article containing a few proposals: https://labs.ripe.net/author/kistel/five-proposals-for-a-better-ripe-atlas/. We'd like to encourage you to express your comments about this proposal (if you'd like to share them) here.

Regards,
Robert Kisteleki
For the RIPE Atlas team
Dear Robert,

As I have said before, I think we have to be very careful about enabling HTTP measurements, because of the extra traffic they generate.

I have personally delivered quite a few probes to remote locations in regions with very poor connectivity, and the host's first concern is always how much bandwidth the measurements will take up. I have always reassured them that these are only network-level connectivity measurements, not web traffic. From a first-world perspective it may be hard to imagine places where a few megabytes of traffic per month matter, but that is the reality in remote regions, and I think the benefit of an Atlas network with worldwide coverage outweighs the ability to perform higher-layer measurements or monitoring.

Making this "opt-in" via a tag on the probe seems like the right approach to me, even if it makes deployment and usability slow at first. I sincerely believe we have a responsibility to the hosts who are currently helping us in difficult regions.

Thanks and regards,
Hugo Salgado
NIC Chile - .CL
Hello,

On Fri, 16 Dec 2022 at 14:15, Hugo Salgado <hsalgado@nic.cl> wrote:

> Dear Robert,
>
> As I have said before, I think we have to be very careful about enabling HTTP measurements, because of the extra traffic they generate.
>
> I have personally delivered quite a few probes to remote locations in regions with very poor connectivity, and the host's first concern is always how much bandwidth the measurements will take up. I have always reassured them that these are only network-level connectivity measurements, not web traffic. From a first-world perspective it may be hard to imagine places where a few megabytes of traffic per month matter, but that is the reality in remote regions, and I think the benefit of an Atlas network with worldwide coverage outweighs the ability to perform higher-layer measurements or monitoring.
I doubt that the average HTTP HEAD request consumes more traffic than a traceroute, which is why I compared the traffic of a 13-hop traceroute with an HTTPS (not HTTP) HEAD request. Here are my results:

traceroute to api.backblazeb2.com (13th hop):
  Upload:   4134 bytes across 39 packets
  Download: 5138 bytes across 37 packets
  Total:    9272 bytes across 76 packets

HTTPS (not HTTP) HEAD request against api.backblazeb2.com:
  Upload:   1048 bytes across 11 packets
  Download: 5743 bytes across 11 packets
  Total:    6791 bytes across 22 packets

Even if we did a GET instead of a HEAD and received a 2 KB response payload, the HTTPS request would still fall 433 bytes short of the traffic consumed by the traceroute in this comparison (6791 + 2048 = 8839 bytes, versus 9272 bytes). I can only disagree with the assumption that a request will consume more traffic than other measurement types just because it is HTTP or HTTPS.
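(For anyone who wants to reproduce this kind of comparison: the Python sketch below is one way to count the TCP payload bytes of an HTTPS HEAD request. It is not the tooling behind the numbers above; it counts TLS records as they cross a MemoryBIO, so IP/TCP headers and the TCP handshake itself are not included, and its totals will come out somewhat lower than a packet capture would show. The host name is taken from the example; everything else is illustrative.)

import socket
import ssl

HOST = "api.backblazeb2.com"   # endpoint used in the comparison above

ctx = ssl.create_default_context()
incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
tls = ctx.wrap_bio(incoming, outgoing, server_hostname=HOST)
sock = socket.create_connection((HOST, 443))
sent = received = 0

def flush():
    # push TLS records produced by the SSL object onto the wire, counting bytes
    global sent
    data = outgoing.read()
    if data:
        sock.sendall(data)
        sent += len(data)

def feed():
    # pull TLS records from the wire into the SSL object, counting bytes
    global received
    data = sock.recv(4096)
    if data:
        received += len(data)
        incoming.write(data)
    else:
        incoming.write_eof()

while True:                    # TLS handshake
    try:
        tls.do_handshake()
        break
    except ssl.SSLWantReadError:
        flush()
        feed()
flush()

request = f"HEAD / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
tls.write(request.encode())
flush()

response = b""
while True:                    # read headers until the server closes the connection
    try:
        chunk = tls.read(4096)
        if not chunk:
            break
        response += chunk
    except ssl.SSLWantReadError:
        feed()
    except ssl.SSLError:
        break                  # connection closed without a clean close_notify
flush()
sock.close()

print(response.split(b"\r\n", 1)[0].decode())          # HTTP status line
print(f"sent {sent} bytes, received {received} bytes")  # TCP payload in each direction

(The traceroute side of such a comparison still has to come from a packet capture, for example by summing per-packet lengths, since traceroute probes are individual packets rather than a single TCP connection.)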
> Making this "opt-in" via a tag on the probe seems like the right approach to me, even if it makes deployment and usability slow at first. I sincerely believe we have a responsibility to the hosts who are currently helping us in difficult regions.
I believe so too, but I disagree with the "opt-in" approach and with artificially restricting some probes to certain measurement types. Instead, if traffic increases as Atlas becomes more useful (and popular) over time, perhaps new features are needed that let probe owners limit the amount of activity somehow.

We already have a credit system that should be representative of the work a probe is doing, so one idea would be to allow probe owners to set a credit ceiling, as in "my probe should not generate more than X credits per month", letting each owner define their own limit. Maybe the credit system is not the right tool for the job; maybe it should simply be the number of measurements. The point is: when the amount of activity exceeds what is acceptable to the probe owner, all activity should be restricted, as opposed to blacklisting certain measurement types unconditionally and still exceeding the owner's acceptable limits.

I'm sure a solution can be found if this becomes a real problem as Atlas grows in users and features. But unconditionally restricting Atlas features and blocking its growth does not seem like the correct answer, imho.

Thanks,
Lukas
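(To make the ceiling idea concrete, here is a minimal sketch of the kind of check described above. None of these names exist in RIPE Atlas; whether the budget is counted in credits or simply in number of measurements, and when it resets, are open questions. The point is only that once the owner-defined limit is reached, all further measurements on that probe are refused, instead of unconditionally blacklisting particular measurement types.)

from dataclasses import dataclass

@dataclass
class ProbeBudget:
    # hypothetical owner-defined ceiling, e.g. credits (or measurements) per month
    monthly_ceiling: int
    consumed_this_month: int = 0   # running total, reset at the start of each month

    def allows(self, cost: int) -> bool:
        # True if running a measurement of the given cost stays within the ceiling
        return self.consumed_this_month + cost <= self.monthly_ceiling


def schedule_on_probe(budget: ProbeBudget, cost: int) -> bool:
    # Accept any measurement type while the probe is under its owner's ceiling;
    # refuse everything once the ceiling is reached.
    if not budget.allows(cost):
        return False
    budget.consumed_this_month += cost
    return True


budget = ProbeBudget(monthly_ceiling=10_000)   # "no more than 10k credits per month"
print(schedule_on_probe(budget, cost=60))      # True while under the ceiling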
participants (3)
- Hugo Salgado
- Lukas Tribus
- Robert Kisteleki