Feature request for Validated Timestamps
I was starting to work on a mapping measurement project where I wanted to delegate measurements to remote machines. There are a number of schemes that have been used where a not-fully-trusted node can come away from a measurement with a record that can be used to verify that it actually made the measurement. Many of these make use of a timestamp signed by the "server" as a way to prevent replaying old measurements.

Things that have been done with this feature include tlsnotary[1], which works with web servers where the "server random" bytes of the TLS handshake include the server timestamp (the default for OpenSSL). The SSL implementation currently used by probes does not appear to include the timestamp in the ServerHello. Another approach would be to add a "roughtime" service on probes.

Interested to hear if there would be willingness to have the probe software support something in this direction.

[1] https://tlsnotary.org/

--Will
would you care to share reasons for suspicion which would warrant raising the level of authenticity?

randy

---
randy@psg.com
`gpg --locate-external-keys --auto-key-locate wkd randy@psg.com`
signatures are back, thanks to dmarc header butchery
To expand on the immediate area of research, I'd like to have a distributed system where nodes self-identify what location they're in. There are a number of situations - be it operator misconfiguration or active attempts to misreport - where it is more concrete to instead have nodes report their location based on time-bounding their distance to known locations. Atlas looks like a great existing set of known anchor points to base such measurements on, but that attestation of location is very difficult to construct without additional authenticity.

(Nodes may be behind NATs, which makes it difficult to measure outbound from Atlas probes to them.)

--Will

On Thu, Apr 1, 2021 at 1:57 PM Randy Bush <randy@psg.com> wrote:
> would you care to share reasons for suspicion which would warrant raising the level of authenticity?
>
> randy
> To expand on the immediate area of research, I'd like to have a distributed system where nodes self-identify what location they're in. There are a number of situations - be it operator misconfiguration or active attempts to misreport - where it is more concrete to instead have nodes report their location based on time-bounding their distance to known locations. Atlas looks like a great existing set of known anchor points to base such measurements on, but that attestation of location is very difficult to construct without additional authenticity.
>
> (Nodes may be behind NATs, which makes it difficult to measure outbound from Atlas probes to them.)
Candela, M., Gregori, E., Luconi, V., Vecchio, A. (2019). Using RIPE Atlas for Geolocating IP Infrastructure.
https://www.semanticscholar.org/paper/Using-RIPE-Atlas-for-Geolocating-IP-In...

randy
Agree that there's already a bunch of geolocation-based measurement that uses Atlas, which increases the value of having a consistent set of anchor points in subsequent measurement work.

I'm not sure if there's a process for this sort of feature request beyond this mailing list. Would it help if I proposed a more concrete PR against https://github.com/RIPE-NCC/ripe-atlas-software-probe?

--Will

On Thu, Apr 1, 2021 at 2:23 PM Randy Bush <randy@psg.com> wrote:
>> To expand on the immediate area of research, I'd like to have a distributed system where nodes self-identify what location they're in. There are a number of situations - be it operator misconfiguration or active attempts to misreport - where it is more concrete to instead have nodes report their location based on time-bounding their distance to known locations. Atlas looks like a great existing set of known anchor points to base such measurements on, but that attestation of location is very difficult to construct without additional authenticity.
>>
>> (Nodes may be behind NATs, which makes it difficult to measure outbound from Atlas probes to them.)
>
> Candela, M., Gregori, E., Luconi, V., Vecchio, A. (2019). Using RIPE Atlas for Geolocating IP Infrastructure.
> https://www.semanticscholar.org/paper/Using-RIPE-Atlas-for-Geolocating-IP-In...
>
> randy
On 2021/04/06 17:04, Will wrote:
> I'm not sure if there's a process for this sort of feature request beyond this mailing list. Would it help if I proposed a more concrete PR against https://github.com/RIPE-NCC/ripe-atlas-software-probe?
To the extent that you would like secure geolocation in the presence of a malicious probe, it would make sense to me to start with documenting the protocol you would like to use.

The current way of geolocating probes works by having the probe report round-trip times to a number of locations. Obviously, a malicious probe could report any RTT value.

I'm sure we can come up with a protocol if we have a sufficient number of trusted servers. However, such a protocol would need to be documented and deployed on those servers.

Philip
Couldn't this functionality be extended to the Atlas Anchors?

Rick

-----Original Message-----
From: ripe-atlas <ripe-atlas-bounces@ripe.net> On Behalf Of Philip Homburg
Sent: 07 April 2021 10:27
To: ripe-atlas@ripe.net
Subject: Re: [atlas] Feature request for Validated Timestamps

On 2021/04/06 17:04, Will wrote:
> I'm not sure if there's a process for this sort of feature request beyond this mailing list. Would it help if I proposed a more concrete PR against https://github.com/RIPE-NCC/ripe-atlas-software-probe?

To the extent that you would like secure geolocation in the presence of a malicious probe, it would make sense to me to start with documenting the protocol you would like to use. The current way of geolocating probes works by having the probe report round-trip times to a number of locations. Obviously, a malicious probe could report any RTT value. I'm sure we can come up with a protocol if we have a sufficient number of trusted servers. However, such a protocol would need to be documented and deployed on those servers.

Philip
To Rick's point: providing a signed timestamp could be done on Atlas Anchors, and perhaps that's a better place to start than the software probes.

To Philip's point, the basic protocol I am hoping to enable looks as follows (I'll describe the roughtime variant):

* A machine wants to attest that it is in, e.g., Seattle. It picks a set of machines that are believed to be in Seattle (perhaps Atlas machines, or more generally machines whose locations the measurement system has agreed on in advance).
* It generates a statement saying "I am doing a measurement", and signs it.
* It requests the time with its own signature as a nonce and receives a (timestamp, uncertainty, signature) from the server, per https://blog.cloudflare.com/roughtime/
* It then immediately re-requests the time, using the signature it received as the nonce this time, and receives a second (timestamp, uncertainty, signature).

The combination of this data can then be provided to any other machine to place a bound on the RTT between the machine and the chosen measurement machine (the difference between the two timestamps). This can be validated without trusting the machine claiming its latency bound, since the results are signed by the anchor.

There are some complexities - e.g., could the machine attesting its location delegate the request to a different machine closer to the anchor? Depending on the situation there are mitigations for this, like asking that some piece of data from the machine that's attesting its location be hashed into the nonce, in a way that's difficult for the attesting machine to predict ahead of time (so that it would need to move all of its data to the delegate machine, at which point it is in a sense already in that secondary location at that time). But the ability to locate a client software deployment via latency, with some guarantee that someone running it isn't spoofing their location, is useful.
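To make the two-request exchange and its verification concrete, here is a rough sketch (not Atlas code, and not a real roughtime client: the HMAC merely stands in for the server's Ed25519 signatures, and all names such as `ANCHOR_KEY` are made up for illustration):

```python
# Sketch of the two-request RTT-bounding exchange described above.
# For illustration the anchor "signs" with an HMAC over a key the verifier
# also knows; a real deployment would use roughtime's Ed25519 signatures.
import hashlib
import hmac
import time

ANCHOR_KEY = b"demo-anchor-key"  # hypothetical key, for the sketch only

def anchor_timestamped_reply(nonce: bytes) -> dict:
    """What the anchor returns: (timestamp, uncertainty, signature) bound to the nonce."""
    ts = time.time_ns() // 1_000  # microseconds
    unc = 1_000                   # claimed uncertainty, microseconds
    msg = nonce + ts.to_bytes(8, "big") + unc.to_bytes(4, "big")
    return {"ts": ts, "unc": unc,
            "sig": hmac.new(ANCHOR_KEY, msg, hashlib.sha256).digest()}

def client_attest(statement: bytes) -> tuple[dict, dict, bytes]:
    """Client side: first nonce stands in for the client's signed statement;
    the second request uses the first reply's signature as its nonce."""
    nonce1 = hashlib.sha256(statement).digest()
    r1 = anchor_timestamped_reply(nonce1)
    r2 = anchor_timestamped_reply(r1["sig"])  # chaining proves r2 came after r1
    return r1, r2, nonce1

def verify(r1: dict, r2: dict, nonce1: bytes) -> int:
    """Any third party can bound the client<->anchor RTT without trusting the client."""
    for nonce, r in ((nonce1, r1), (r1["sig"], r2)):
        msg = nonce + r["ts"].to_bytes(8, "big") + r["unc"].to_bytes(4, "big")
        expected = hmac.new(ANCHOR_KEY, msg, hashlib.sha256).digest()
        assert hmac.compare_digest(expected, r["sig"]), "bad anchor signature"
    # Upper bound on one round trip, in microseconds.
    return r2["ts"] - r1["ts"] + r1["unc"] + r2["unc"]
```

The key property is that the second nonce cannot be known before the first reply arrives, so the timestamp difference genuinely brackets at least one round trip to the anchor.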
An equivalent to this protocol can be used against TLS 1.2 servers that include the timestamp in their server randomness, with the caveat that those timestamps are only at second granularity. To compensate, the client will repeat the hashing/RTT process for one second, so that the validatable data that is presented shows that the client was able to do n RTTs with the server within one second.

--Will

On Wed, Apr 7, 2021 at 4:12 AM Rick Havern <richard.havern@geant.org> wrote:
> Couldn't this functionality be extended to the Atlas Anchors?
>
> Rick
> -----Original Message-----
> From: ripe-atlas <ripe-atlas-bounces@ripe.net> On Behalf Of Philip Homburg
> Sent: 07 April 2021 10:27
> To: ripe-atlas@ripe.net
> Subject: Re: [atlas] Feature request for Validated Timestamps
> On 2021/04/06 17:04, Will wrote:
>> I'm not sure if there's a process for this sort of feature request beyond this mailing list. Would it help if I proposed a more concrete PR against https://github.com/RIPE-NCC/ripe-atlas-software-probe?
> To the extent that you would like secure geolocation in the presence of a malicious probe, it would make sense to me to start with documenting the protocol you would like to use.
>
> The current way of geolocating probes works by having the probe report round-trip times to a number of locations. Obviously, a malicious probe could report any RTT value.
>
> I'm sure we can come up with a protocol if we have a sufficient number of trusted servers. However, such a protocol would need to be documented and deployed on those servers.
>
> Philip
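Will's one-second counting variant against a TLS 1.2 server could look roughly like the sketch below. This is only an illustration under stated assumptions: the server side is simulated in-process, whereas in reality each round would be a fresh handshake whose `gmt_unix_time` field in the server random carries the second-granularity timestamp.

```python
# Sketch of the second-granularity variant: chain as many request/response
# rounds as possible while the server's timestamp stays on the same second;
# the count n then witnesses that the client completed n RTTs within one second.
import hashlib
import time

def server_hello(nonce: bytes) -> tuple[int, bytes]:
    """Simulated server: a second-granularity timestamp mixed into its reply,
    like the gmt_unix_time bytes of a TLS 1.2 server random."""
    ts = int(time.time())
    reply = hashlib.sha256(nonce + ts.to_bytes(8, "big")).digest()
    return ts, reply

def count_rtts_within_one_second(first_nonce: bytes, max_rounds: int = 100_000) -> int:
    """Hash each reply into the next nonce (so rounds cannot be precomputed)
    until the server's second ticks over, then report the round count."""
    ts0, reply = server_hello(first_nonce)
    for n in range(max_rounds):
        ts, reply = server_hello(hashlib.sha256(reply).digest())
        if ts != ts0:
            return n
    return max_rounds
```

Because each nonce depends on the previous reply, a verifier replaying the chain can conclude the client really performed n sequential round trips between two timestamps one second apart, bounding the per-round RTT by 1/n seconds.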
On 2021/04/14 6:00, Will wrote:
> To Philip's point, the basic protocol I am hoping to enable looks as follows (I'll describe the roughtime variant):
>
> * A machine wants to attest that it is in, e.g., Seattle. It picks a set of machines that are believed to be in Seattle (perhaps Atlas machines, or more generally machines whose locations the measurement system has agreed on in advance).
> * It generates a statement saying "I am doing a measurement", and signs it.
> * It requests the time with its own signature as a nonce and receives a (timestamp, uncertainty, signature) from the server, per https://blog.cloudflare.com/roughtime/
> * It then immediately re-requests the time, using the signature it received as the nonce this time, and receives a second (timestamp, uncertainty, signature).
It seems to me that this can be simplified a bit: the first step can be just a nonce. Basically:

- The client generates a random nonce.
- The client uses the nonce to obtain the time from a server using the roughtime protocol.
- The client generates a new nonce by hashing the reply.
- The client uses the second nonce to once again obtain the time.
- The client packs the two nonces and the two replies and reports them as the measurement result.

Later, a verifier can check the signatures in the two replies to verify that the responses came from the right server. Then a check is made that the second nonce is derived from the first reply. Finally, a check is made that the replies correspond to the nonces.

Note that the first nonce serves no value for verification, so it could be omitted from the measurement result.

Open issue: key management. How do we store all public keys of all servers forever?

This seems fine, however...
> There are some complexities - e.g., could the machine attesting its location delegate the request to a different machine closer to the anchor? Depending on the situation there are mitigations for this, like asking that some piece of data from the machine that's attesting its location be hashed into the nonce, in a way that's difficult for the attesting machine to predict ahead of time (so that it would need to move all of its data to the delegate machine, at which point it is in a sense already in that secondary location at that time). But the ability to locate a client software deployment via latency, with some guarantee that someone running it isn't spoofing their location, is useful.
If we assume that the client is malicious, what prevents the client from using a collaborating machine to execute the protocol, proving the location of the collaborating machine, and then just submitting the results as its own?

Philip
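Philip's simplified protocol and its three verification checks could be sketched as follows (illustrative only: the HMAC `sign` stands in for the roughtime server's signature, and `SERVER_KEY` is a hypothetical name; a real verifier would check Ed25519 signatures against the server's public key):

```python
# Sketch of the simplified nonce-chaining protocol's verifier.
import hashlib
import hmac

SERVER_KEY = b"demo-server-key"  # hypothetical; stands in for the server's keypair

def sign(msg: bytes) -> bytes:
    # Stand-in for the server signing (nonce, timestamp).
    return hmac.new(SERVER_KEY, msg, hashlib.sha256).digest()

def server_reply(nonce: bytes, ts: int) -> dict:
    """Simulated server response: timestamp plus a signature binding it to the nonce."""
    return {"ts": ts, "sig": sign(nonce + ts.to_bytes(8, "big"))}

def nonce_from(reply: dict) -> bytes:
    """Second nonce is derived by hashing the first reply."""
    return hashlib.sha256(reply["ts"].to_bytes(8, "big") + reply["sig"]).digest()

def verify_result(nonce1: bytes, reply1: dict, reply2: dict) -> bool:
    """Philip's checks: (1) both replies are signed by the right server,
    (2) the second nonce is derived from the first reply, and
    (3) each reply answers the corresponding nonce. As noted, nonce1 itself
    carries no information for the verifier beyond linking to reply1."""
    nonce2 = nonce_from(reply1)
    ok1 = hmac.compare_digest(sign(nonce1 + reply1["ts"].to_bytes(8, "big")), reply1["sig"])
    ok2 = hmac.compare_digest(sign(nonce2 + reply2["ts"].to_bytes(8, "big")), reply2["sig"])
    return ok1 and ok2
```

Note that this verifier does nothing about Philip's collusion concern: a collaborating machine could have run the exchange, which is exactly the open problem raised above.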
On 15 Apr 2021, at 12:16, Philip Homburg wrote:
> … How to store all public keys of all servers forever. …
Suggestion: embed them somehow with the data sets generated by those servers, e.g. make them queryable just like normal measurement data.

Daniel
participants (5)

- Daniel Karrenberg
- Philip Homburg
- Randy Bush
- Rick Havern
- Will