Looking for a tool to find the fastest DNS server
I'm looking for a small tool (running on Unix) that can take a list of DNS servers and a query to ask (such as "A ns1.nic.fr.") and print the fastest server to reply.

Something like "netselect", which unfortunately only exercises the kernel, not the actual servers (and which can be blocked by filters):

~ % netselect {a,b,c,d,e,f,g,h,i,j,k,l,m}.root-servers.net
123 i.root-servers.net

BIND9's lib/dns/resolver.c has code to do this, and I wonder if someone has extracted it into a simple tool?
I am sure Eric Wassenaar's host (ftp://ftp.ripe.net/tools/dns/host.tar.Z) can easily be adapted to do that, if it does not do it already.

daniel

On 04.08 13:46, Stephane Bortzmeyer wrote:
I'm looking for a small tool (running on Unix) that can take a list of DNS servers, a query to ask (such as "A ns1.nic.fr.") and which can print the fastest server to reply.
Something like "netselect", which unfortunately only exercises the kernel, not the actual servers (and which can be blocked by filters):
~ % netselect {a,b,c,d,e,f,g,h,i,j,k,l,m}.root-servers.net
123 i.root-servers.net
BIND9's lib/dns/resolver.c has code to do so and I wonder if someone extracted it in a simple tool?
On Mon, Aug 04, 2003 at 02:04:27PM +0200, Daniel Karrenberg <daniel.karrenberg@ripe.net> wrote a message of 20 lines which said:
if it does not do it already.
No, it does not.
On Monday, 4 August 2003, at 07:46AM, Stephane Bortzmeyer wrote:
I'm looking for a small tool (running on Unix) that can take a list of DNS servers, a query to ask (such as "A ns1.nic.fr.") and which can print the fastest server to reply.
Something like "netselect", which unfortunately only exercises the kernel, not the actual servers (and which can be blocked by filters):
~ % netselect {a,b,c,d,e,f,g,h,i,j,k,l,m}.root-servers.net
123 i.root-servers.net
BIND9's lib/dns/resolver.c has code to do so and I wonder if someone extracted it in a simple tool?
#!/bin/sh
#
q=$1; shift
#
[ -z "$*" ] && echo "Syntax: $0 query server..." && exit 1
#
for m in 0 1 2 3; do
  for n in $*; do
    dig @${n} ${q}
  done
done | \
awk '/^;; Query time:/ { qt = $4; } \
     /^;; SERVER: / { sum[$3] += qt; n[$3]++; } \
     END { for (s in sum) { print int(sum[s]/n[s]), s; } }' | \
sort -n | head -1

[jabley@snowfall]% ./qtest.sh ". NS" a.root-servers.net b.root-servers.net
c.root-servers.net d.root-servers.net e.root-servers.net f.root-servers.net
g.root-servers.net h.root-servers.net i.root-servers.net j.root-servers.net
k.root-servers.net l.root-servers.net m.root-servers.net
40 192.58.128.30#53(j.root-servers.net)
[jabley@snowfall]%
On Mon, Aug 04, 2003 at 08:17:34AM -0400, Joe Abley <jabley@isc.org> wrote a message of 39 lines which said:
#!/bin/sh
Works fine. Thanks for the fast coding.

Suggestions if someone wants to improve it:

1) Parallelize requests (the tool can take a long time if some servers are slow).

2) Better handling of errors, instead of printing:

dig: Couldn't find server 'Truncated,': Name or service not known
dig: Couldn't find server 'retrying': Name or service not known
dig: Couldn't find server 'TCP': Name or service not known
dig: Couldn't find server 'mode.': Name or service not known
dig: Couldn't find server ';;': Name or service not known
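For what it's worth, one plain-sh way to parallelize is one background job per server plus `wait`, collecting per-server output files. A minimal sketch of the pattern follows; the `probe` function and the `*.example` server names are placeholders standing in for the four dig rounds of Joe Abley's script, not a tested replacement for it:

```shell
#!/bin/sh
# One background job per server, output collected in per-server files.
# "probe" is a stand-in; in the real tool it would run something like:
#   for m in 0 1 2 3; do dig @"$1" ${q}; done
probe() {
  echo "$1 probed"
}

tmp=$(mktemp -d) || exit 1
for n in a.example b.example c.example; do
  probe "$n" > "$tmp/$n" &   # fire off each probe in the background
done
wait                         # block until every probe has finished
results=$(cat "$tmp"/*)      # this output would feed the awk pipeline
echo "$results"
rm -rf "$tmp"
```

The per-server files matter: letting parallel jobs write to one shared pipe can interleave dig's multi-line output and confuse the awk pass.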
At 2:37 PM +0200 2003/08/04, Stephane Bortzmeyer wrote:
Suggestions if someone wants to improve it:
1) Parallelize requests (the tool can take a long time if some servers are slow).
Speaking as the current maintainer of `doc`, parallelizing shell code is a bitch.

Moreover, if you were going to do that, you should really do a statistically useful number of queries for each target, so that you have a more reasonable conclusion: as of time T, server X is Y% faster than the second-fastest server Z, with a minimum response time of M1, an average response time of A, a maximum response time of M2, and a sample standard deviation of S.

This is the sort of thing I tried to do in my root/gTLD/ccTLD survey for my presentation at RIPE44. I still need to make all those tools publicly available, as promised. If you want them, periodically bug me about it and I'll try to remember to put them up.
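That statistics pass fits in a few lines of awk, assuming the per-query times (in ms) have already been scraped out of the "Query time:" lines of repeated dig runs. A sketch, with invented sample values:

```shell
#!/bin/sh
# Compute min / average / max / sample standard deviation over a set of
# query times, one per line. In a real tool the input would come from
# the ";; Query time:" lines of repeated dig runs; these values are
# made up for illustration.
stats=$(printf '12\n40\n28\n20\n' | awk '
  { n++; sum += $1; sumsq += $1 * $1
    if (n == 1 || $1 < min) min = $1
    if ($1 > max) max = $1 }
  END {
    avg = sum / n
    # sample standard deviation: divide by n-1, not n
    sd = sqrt((sumsq - n * avg * avg) / (n - 1))
    printf "min=%d avg=%d max=%d sd=%.1f", min, avg, max, sd
  }')
echo "$stats"
```

With the sample input above this prints "min=12 avg=25 max=40 sd=11.9"; keyed on the server name, the same arrays generalize to one line of statistics per server.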
2) Better handling of errors instead of printing:
dig: Couldn't find server 'Truncated,': Name or service not known
dig: Couldn't find server 'retrying': Name or service not known
dig: Couldn't find server 'TCP': Name or service not known
dig: Couldn't find server 'mode.': Name or service not known
dig: Couldn't find server ';;': Name or service not known
Yup. I have that problem with `doc`, too. I keep saying that I'm going to import all this functionality into `dnswalk` and get rid of `doc`, but I still haven't found time to do it. Maybe one of these days.

--
Brad Knowles, <brad.knowles@skynet.be>

"They that can give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety."
    -Benjamin Franklin, Historical Review of Pennsylvania.

GCS/IT d+(-) s:+(++)>: a C++(+++)$ UMBSHI++++$ P+>++ L+ !E-(---) W+++(--) N+ !w--- O- M++ V PS++(+++) PE- Y+(++) PGP>+++ t+(+++) 5++(+++) X++(+++) R+(+++) tv+(+++) b+(++++) DI+(++++) D+(++) G+(++++) e++>++++ h--- r---(+++)* z(+++)
On Mon, Aug 04, 2003 at 02:48:54PM +0200, Brad Knowles <brad.knowles@skynet.be> wrote a message of 43 lines which said:
Speaking as the current maintainer of `doc`, parallelizing shell code is a bitch.
Yes, I was not seriously suggesting that.
Moreover, if you were going to do that, you should really do a statistically useful number of queries for each target,
Right, that's the difference between Joe Abley's tool (simple, effective, delivered on time) and a proper, professional tool, which only has the problem of not existing yet.
participants (4)
- Brad Knowles
- Daniel Karrenberg
- Joe Abley
- Stephane Bortzmeyer