Gert Doering wrote:
> Hi,
>
> On Tue, May 27, 2003 at 07:56:15PM +0200, Andre Oppermann wrote:
> > > > Also the Internet doesn't get more instable with more prefixes.
> > >
> > > Can you back that claim by some facts or research studies?
> >
> > No, but by mathematical logic. Can you claim otherwise?
>
> I know that people at a couple of universities are researching into this
> right now, and currently the primary assumption is "yes, it does" (because
> the sheer size of the lists is longer, routers *need* more time to process
> them, delaying convergence, plus every now and then you hit a boundary that
> causes BGP flaps due to out-of-memory and/or necessary router upgrades).
The "assumption" that it hurts does not count because I "assume" that it does not hurt. Please provide scientific facts as you have asked me to. Longer lists are fine. You just need more processing power. That's why we have Moore's law. If I look at the advantages (see IEEE and ACM magazines) forwarding table implementations in soft- as well as hardware which have been made because of higher PPS needs/wants I don't have any worries that all these well-paid bright people come up with some optimisations which will take us well beyond one million active prefixes and triple the path. Simple engineering again. If they can do it with PPS and wirespeed they should be able to do it with BGP processing as well. A router that has a bug or runs out of memory will always happen. There is no silver bullet against that. It's the percentage of the entire prefix/path base that matters. If you are looking at absolute number you will definatly see an increase, no doubt. If I remember correctly a Juniper has got a Mobile-PII 450MHz CPU as control processor. Hardly on par with today's available processing power. You could have said your same stuff five years ago when we had 70'000 prefixes. But now we have 127'000 prefixes it still works fine as before. My "assumptions" are based on the mathematical properties of the BGP distance vector routing protocol. Of course the larger the mean AS distance grows, the more "instable" in itself it gets. But that is normal Internet growth. Or do you want stop the Internet to grow? Is it large enough? Do we need more ISPs? We see a very common occurence here. The moment an open and very competitive market has matured, the (remaining) players start (whether implicit or explicit) to hinder new entrancies into the market. This is either done by denial of (direct) access or policy barriers. Do we have enough ISPs? Do we have enough parties with routable IP address space? All the "technical" arguments I've heard so far simply define the status quo (currently available and deployed routers developed three to five years ago) as end of all means. From which the rational of no more space for significant growth (prefixes/ASs) is derived from. This is wrong. If we would have applied the same rational at a time when a T3 backbone was gigantic, then we'd never been here to have this conversation today. Do we? -- Andre Oppermann