[after finishing I see that Berthold already sent out info on an important part of my text below...]
Interesting, but: if it is only used for the name server, 16M of memory might do; however, BIND tends to get big, with lots of cache. Maybe it will do for a machine that runs *only* a root server, but not if it has to run primary/secondary for a bunch of other domains too. By comparison: ns.EU.net is an ELC; it runs BIND 4.8.3; it runs primary/secondary for some 1700 domains; SIZE and RSS are around 10M on average but can go well above 15M; queries/day average between 200,000 and 1,200,000. The crucial point is swapping, which has to be avoided at all cost. That means 16M of memory just isn't enough, especially when you take into consideration that things like zone transfers by secondary servers off ns.EU.net cause the named process to fork off *the complete core image*. Even the current 24M in ns.EU.net isn't enough to cope with that, so we're about to upgrade it to 40M of memory.
deins.Informatik.Uni-Dortmund.DE is just a 3/60 with 16 MB of physical memory. It carries some 500 zones, and the query rate varies between 0.5 and some 10 per second in peaks, with an average of 1 query/sec. Virtual memory of the named process is just below 10 MB with only the zones loaded and is usually about 12.5 MB (with very rare peaks into the 14 MB range); the RSS is usually in the 3 MB range. Low horsepower is no problem: the named process, including children, consumes less than 5% of the CPU cycles. We do NOT get paging problems (though overall we are close to one disk I/O per query handled). We did experience severe memory problems on outgoing zone transfers - but they were actually NOT paging but running out of swap space: after updating one of our primary zones we saw a lot of requests to pull that zone, and each child process allocates swap space for the complete activated virtual address space while of course needing almost none of it. With a bit of hacking we eliminated that problem.

Our BIND version has a number of home-grown patches, the most important in this context:
- log some statistics on resources used and traffic supported
- identify (in a rather crude way) the most frequently asked queries and most frequently asking clients [points out a lot of actual or latent problems without extensive logging of full queries]
- log which zone transfers are done (we want to see about unpublished secondaries)
- handle outgoing zone transfers NOT by a forked copy of the regular named but by an extra program pulling the zone data from disk (this also eliminates bad extra glue as accumulated in the cache)

So, DE-NIC experience is a bit different from the problems Piet sees, though the DE-NIC server falls into the same class of (the few) servers that carry a lot of zones and accumulate considerable cache data for a large set of clients. My conclusion is that the requirements are indeed not too high.
However, I think that Piet's and DE-NIC's configurations should not be the model for a root name server. Root name servers are not required to carry a lot of zones. They don't need to acquire a huge cache (at least not if you choose to reject recursive queries). In fact I think a root server had better be a small system, configured so that the small system is sufficient - and completely dedicated (with only very few, very seriously considered exceptions) to being a root server (i.e. hardware cost for a complete server could be below 10 K$). If you want to go for being everybody's secondary, use a separate system for that. If you want to be everybody's forwarder, use a separate system to support the appropriate cache. In particular, given the current bad state of the name server software with regard to treating only authoritative data as authoritative, this kind of separation matters: one should avoid having the root servers answer much that they are not really authoritative for.

The most important question regarding root NS experience has not been addressed so far: what are the statistics for the traffic generated by BIND's regular queries to all name servers (to learn which one is closest), and what levels of traffic are generated by the usual kinds of misconfigurations? There was a report - I think from ISI - with an in-depth analysis of root name server traffic (and conclusions like: we are seeing about 20 times the traffic that is really necessary, and negative caching would NOT improve things significantly from a global point of view). In my opinion it would be best to base considerations about how many root servers should be placed where mainly on the data and arguments provided by that report, plus some check of how that relates to experience with NORDUnet's server.

Ruediger

Ruediger Volk
Universitaet Dortmund, Informatik IRB
DE-NIC
Postfach 500 500
D-W-4600 Dortmund 50
Germany

E-Mail: rv@Informatik.Uni-Dortmund.DE
Phone:  +49 231 755 4760
Fax:    +49 231 755 2386