Hi,
We are interested (like many others, I guess) in the ability to perform HTTP measurements against our own, non-anchored network. Understanding the potential for abuse, I would like to suggest the following authentication protocol, which is based on best practices used by other services that carry similar abuse or privacy implications.
1. Confirm control of the domain registration:
* This is usually done by mailing a confirmation email containing a unique hash to the technical contact listed in the relevant WHOIS entry, thus validating ownership of the registration.
2. Confirm control of the DNS servers:
* This is usually done by publishing a TXT record containing a unique hash at the zone apex, or by publishing a CNAME record pointing at a uniquely-hashed name.
3. Confirm control of the Web servers:
* This is usually done by placing a file with a unique hash in its name or contents in the web server's root directory, embedding a unique hash in a meta tag of the index HTML file, or publishing a unique value in a file such as robots.txt. (A sketch of automated checks along these lines follows this list.)
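
For concreteness, here is a minimal sketch (in Python) of what the automated checks for steps 2 and 3 could look like; step 1 is an out-of-band email exchange, so it is only stubbed. The library choices (dnspython, requests) and the file path /atlas-verify.txt are my own assumptions for illustration, not an agreed convention:

    # Minimal sketch of the automated checks, assuming a token has
    # already been generated and handed to the site owner. Step 1 is
    # an out-of-band mail to the WHOIS technical contact, so only the
    # token generation is shown for it.
    import secrets

    import dns.exception
    import dns.resolver   # pip install dnspython
    import requests       # pip install requests

    def issue_token():
        # The unique hash mailed to the site owner (step 1).
        return secrets.token_hex(32)

    def check_dns_txt(domain, token):
        # Step 2: look for the token in a TXT record at the zone apex.
        try:
            answers = dns.resolver.resolve(domain, "TXT")
        except dns.exception.DNSException:
            return False
        return any(token in rdata.to_text() for rdata in answers)

    def check_http_file(domain, token):
        # Step 3: fetch a single static file from the web server root
        # and compare its contents. The path /atlas-verify.txt is a
        # placeholder, not an agreed-upon name.
        try:
            resp = requests.get("http://%s/atlas-verify.txt" % domain,
                                timeout=5)
        except requests.RequestException:
            return False
        return resp.status_code == 200 and token in resp.text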
I believe this protocol is sufficient to ensure that a web site owner agrees to the implications of allowing free HTTP measurements against their servers, and that no unwilling server will ever be probed. During the protocol itself, the only resource that can be hit is a single static file (or robots.txt specifically), which has very little capability to overwhelm a web server, especially if negative responses are cached for a considerable amount of time and validation is performed by a few nodes whose results are propagated across the network.
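
A sketch of the negative-response caching mentioned above, assuming a simple in-process TTL cache; in practice this state would need to be shared or propagated across the validating nodes, and the TTL value here is illustrative:

    # Cache failed validation attempts so repeated requests for the
    # same domain do not hit its web server again within the TTL.
    import time

    NEGATIVE_TTL = 6 * 3600     # cache failures for six hours (illustrative)
    _negative_cache = {}        # domain -> expiry timestamp

    def recently_failed(domain):
        # True if this domain failed validation within the TTL window,
        # meaning we should not probe it again yet.
        expiry = _negative_cache.get(domain)
        return expiry is not None and expiry > time.time()

    def record_failure(domain):
        _negative_cache[domain] = time.time() + NEGATIVE_TTL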
Thoughts/ideas welcome.
Regards,
Gil Bahat,
DevOps Engineer,
Magisto Ltd.