Folks,

This page at Atlas provides a great visualization of the RTT of DNS queries over TCP/UDP to the root DNS servers: https://atlas.ripe.net/results/maps/root-server-performance/

We can see that the RTT of TCP queries is at least twice as long as that of UDP ones (which matches the page's note that "TCP is expected to be 2-3 times" longer).

I cannot replicate these results in my setup, even at large scale from various vantage points: I always get very similar results for either TCP or UDP. Example:

UDP:
 $ dig example.nl @ns1.dns.nl
 Query time: 8 msec

TCP:
 $ dig +tcp example.nl @ns1.dns.nl
 Query time: 9 msec

I think there might be something with the definition of RTT. My hypothesis is that the *rtt* field of DNS measurements on Atlas [1], for TCP, is in fact measuring 2 RTTs: the RTT of the TCP handshake plus the RTT of the query/response itself. By definition, however, RTT is "the length of time it takes for a signal to be sent plus the length of time it takes for an acknowledgement of that signal to be received" [2]. So if DNS TCP measurements start the clock at the SYN packet, that would, in fact, cover two RTTs.

Is this the case with Atlas? I know it may sound a bit like nitpicking, but I just want to be sure of what exactly is being measured.

thanks a lot,
/giovane

[1] https://atlas.ripe.net/docs/data_struct/#v4750_dns
[2] https://en.wikipedia.org/wiki/Round-trip_delay_time
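
P.S. To make the hypothesis concrete, here is a rough Python sketch (standard library only; ns1.dns.nl and example.nl as in the dig examples above, everything else is just for illustration) that times the TCP handshake and the query/response exchange separately. If Atlas starts its timer at the SYN, the reported rtt should come out close to the sum of these two values, i.e. roughly double the UDP figure.

import socket
import struct
import time

def build_query(qname, qtype=1):
    # Minimal DNS query message (RFC 1035): header + one question, QCLASS=IN
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    labels = b"".join(bytes([len(l)]) + l.encode() for l in qname.split("."))
    return header + labels + b"\x00" + struct.pack("!HH", qtype, 1)

def tcp_dns_timings(server_ip, qname):
    msg = build_query(qname)
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    t0 = time.monotonic()
    sock.connect((server_ip, 53))   # SYN -> SYN/ACK -> ACK: ~1 RTT
    t1 = time.monotonic()

    # DNS over TCP prefixes the message with its 2-byte length (RFC 1035, 4.2.2)
    sock.sendall(struct.pack("!H", len(msg)) + msg)
    sock.recv(4096)                 # first bytes of the answer: ~1 more RTT
    t2 = time.monotonic()

    sock.close()
    return (t1 - t0) * 1000.0, (t2 - t1) * 1000.0

handshake_ms, query_ms = tcp_dns_timings(socket.gethostbyname("ns1.dns.nl"),
                                         "example.nl")
print("handshake: %.1f ms, query/response: %.1f ms, sum: %.1f ms"
      % (handshake_ms, query_ms, handshake_ms + query_ms))

In my runs the two phases come out nearly equal, which is what you would expect if each phase is one RTT to the same server.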