
Hi Max,
do you know if this only affects ntp0.testdns.nl or could it be a problem with NTP packets in general?
AFAIK it's only with this (odd) server from Marco, which runs his own code: https://github.com/mdavids/ntptools/tree/main/go/fake-ntp-server. I've done many NTP measurements in the past and have never seen this level of missing responses. I thought it could be some sanitization on the Atlas side, i.e. results being dropped if some values are too odd, like ref_timestamp > transmit_ts (an RFC violation). I only found two probes with a strange processing time on the client (how long it took to capture the packet from when it was created): one is -50s, the other 100s. But I could not find any other, more severe issues for the other 40 missing probes (I pushed an update on the data analysis at [0]).
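For reference, this is roughly the kind of sanity check I run over the per-probe JSON. The field names ("ref-ts", "transmit-ts", "final-ts", "prb_id") and the input filename are assumptions modelled on the RIPE Atlas NTP result format, so adjust them to whatever the data in [0] actually uses:

import json

def sanity_check(result):
    """Flag a per-probe result whose timestamps look implausible.
    Field names are assumptions based on the Atlas NTP result format."""
    issues = []
    ref_ts = result.get("ref-ts")
    for reply in result.get("result", []):
        transmit = reply.get("transmit-ts")
        final = reply.get("final-ts")  # when the probe captured the reply
        if ref_ts is not None and transmit is not None and ref_ts > transmit:
            issues.append("ref_timestamp > transmit_ts (RFC violation)")
        if transmit is not None and final is not None:
            processing = final - transmit  # client-side "processing time"
            if abs(processing) > 10:  # arbitrary threshold for "odd"
                issues.append("odd processing time: %.1fs" % processing)
    return issues

with open("ntp_results.json") as fh:  # hypothetical filename
    for res in json.load(fh):
        problems = sanity_check(res)
        if problems:
            print(res.get("prb_id"), problems)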
Also, did you do traceroutes/MTRs to the affected probes to see if there are similarities in routing, like the same transit carrier? In the past, some carriers (CenturyLink, for example) had traffic filter rules in place on their edges to prevent NTP reflection/amplification attacks, and those killed regular NTP traffic as well. Something similar could be the case here, not necessarily with CenturyLink/Colt as the transit.
No clue, it could be.
You could try doing a traceroute towards these probes with UDP source port 123 (mimicking an NTP reply, which is what such filters match on) to see if the packets get eaten at some network demarcation:
traceroute -U --sport 123 <ip-of-the-probe>
I guess I need to update my traceroute for it... the JSON in my repository [0] has the probe IPs in case you wanna give it a shot.
If it helps, you can try with my server ntp2.301-moved.de as well – I have access to sflow data from that host, so I can precisely tell if there have been answer packets coming from that system.
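If you want to poke at it from a probe-like vantage point, a minimal sketch like the one below (plain Python, no dependencies) sends a single NTPv4 client packet and reports whether a reply made it back, which you can then line up with the sflow view on the server side. The src_port parameter is only there to mimic whatever source port the probe uses; src_port=0 lets the OS pick an ephemeral port, and binding to 123 needs root.

import socket

def ntp_probe(host, src_port=0, timeout=5.0):
    """Send one NTPv4 mode-3 (client) packet and wait for a reply."""
    pkt = bytearray(48)
    pkt[0] = 0x23  # LI=0, VN=4, Mode=3 (client)
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    s.bind(("", src_port))
    s.sendto(bytes(pkt), (host, 123))
    try:
        data, addr = s.recvfrom(512)
        return "reply: %d bytes from %s" % (len(data), addr[0])
    except socket.timeout:
        return "no reply within %.1fs" % timeout
    finally:
        s.close()

print(ntp_probe("ntp2.301-moved.de"))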
Let's see if the Atlas folks can confirm something on their side, like sanitization or something. Thanks a lot for your help!