Re: (IPng 4984) Re: Last Call: IP Version 6 Addressing Architecture to Proposed Standard

Daniel said: I have seen some of this discussion. I am afraid I have seen no documented discussion revealing the reasoning behind fixing the TLA length, and fixing it at 13 bits. Frankly, I have been surprised by the sudden speed of the provider-based addressing standardisation.

Last year, in regard to what is now RFC 2050, I asked the lawyers who do work for the IESG what restrictions in flexibility we (the IETF) have in the area of defining rules and technology that restrict ISP practices. I was told that the only time we can be restrictive is when there is no other technically reasonable option. I support Daniel here: if this field is to be restricted to a specific length, then we must have very good technical reasons for doing so.

Scott

Try "The length of the TLA field is fixed at a relatively small size so as to guarantee that the default-free routing table is certain not to exceed a size known to be technically feasible." If that is untrue, then we can't justify the fixed size. Brian Carpenter

brian's reason is exactly the goal which was in mind: to bound the maximum complexity of the default-free region at values believed to be viable, with some margin. the margin is important because even routers normally thought to be "default-free" will probably carry a significant number of more-specific prefixes for optimizing paths, both internal to a TLA and between TLAs.

and note, once again, the issue is not the size of the default-free region, but the complexity of the topology, which determines how many copies of the full default-free region one must examine before arriving at the forwarding table with one entry per TLA. it is now routine to see an announced prefix 15 times via different paths, only one of which must be selected for use. the complexity of the topology is only expected to increase, both internally and externally, so it is not unreasonable to attempt to bound the size of the set, as that is the only parameter which is in any sense "tunable".

as for 13 - anything smaller was felt to be clearly too small, and it becomes harder and harder to argue for bigger numbers in light of the complexity management which is mandatory. if anyone expects a magic formula which says "13" and not something else, you won't get it. what is very clear is that it is pretty easy for it to be "too big", and then it eats into the other topology bits, which have their own set of long arguments. would 14 work - certainly. Like everything else, 13 is an engineering compromise - chosen to balance one set of considerations against a bunch of others, and after ruminating over it for a long time, the consensus was that 13 was the best choice. and as someone else pointed out, the TLA space can be expanded laterally into other reserved areas, so there are more available.

now, to look at things from a different vantage point.... I think one deep issue here is that the IPv6 address design, in some sense, appears to threaten the existing registries. the design assumes that each TLA act as a registry for its region of the address space, leaving the registries to allocate only TLAs, which will be infrequent. just so. but given the work required to run a registry, existing registries with good track records would be in an excellent position to compete for the right to provide registration services for *any* TLA or delegation within a TLA, not just ones they assigned. this is a natural fall-out of attempting to provide an addressing structure which can pave the way for making the network self-organizing. if that were really the case, unlike today, there would be little need for registry organizations (little - not "no", but little) which use human clerical workers to execute a simple resource allocation algorithm for the world's largest distributed computer.

-mo
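A rough model of the workload Mike describes, as a sketch (the 8192 bound follows from the 13-bit field; the paths-per-prefix figure is his "15 times" observation, used here purely for illustration):

    # rough model of the route-selection workload: every default-free
    # router hears the full prefix set via many alternate paths and
    # must run best-path selection over all of them.
    MAX_TLAS = 2 ** 13        # 8192, the bound set by the 13-bit field
    PATHS_PER_PREFIX = 15     # "routine to see an announced prefix 15 times"
    candidates = MAX_TLAS * PATHS_PER_PREFIX
    print(f"candidate routes to examine: {candidates}")            # 122880
    print(f"entries surviving into the forwarding table: {MAX_TLAS}")

The work scales with the complexity of the topology (how many copies of the region are heard), not with the size of the final table - which is the point being made.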

Before we wander too far down this path, there was a very interesting paper at the last SIGCOMM in which the authors managed to fit the then-current routing table (~40k routes) into less than 200 KB of memory. In short, I differ from Mike in that my values for "believed to be viable" differ, apparently wildly, from his and Brian's. I'm unconvinced that this will remain true as a long-term technological argument. I'd like to see something besides "too hard with 1990's technologies". "A long time" ought to have a better spec than, say, Internet dog years?
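The per-route arithmetic behind that claim, as a quick sketch (using the round numbers above, not the paper's exact figures):

    # back-of-envelope on the result cited above: ~40k routes
    # compressed into under 200 KB of forwarding-table memory.
    routes = 40_000
    table_bytes = 200 * 1024
    print(f"~{table_bytes / routes:.1f} bytes per route")  # ~5.1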
--
--bill

bill,

i'm surprised by your remark. i thought you had been around long enough to understand this. we've been over all this before.

the problem is not now, nor has it ever been, the size of the forwarding table measured in any unit - routes, bytes, feet, or kilograms. (yes, there have been a few episodes where gross inadequacies of popular hardware created *serious* tactical headaches, but don't confuse that with the underlying problem.)

the fundamental problem is the complexity of the computation which produces the forwarding table from many, many copies of various subsets of the global routing information. the complexity of that computation is driven by two things - the cardinality of the set of visible nodes in the global topology graph, and the complexity of the topology connecting those nodes. (note that "node" here does not imply a router but a whole network.)

of these two things, we cannot readily control the edge topology of the graph, so we are only left with controlling the cardinality of the node set of the graph if we wish to influence that complexity.

of course, you can argue that we don't need to care about the problem - that somehow processors will keep getting faster fast enough to retain reasonable convergence times. but then you are betting on a race between two exponentials - and are making the bet that the smaller exponent will win. i know *i* don't want to wager the future on that bet.

-mo
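The "race between two exponentials" can be made concrete with a toy model (the growth factors below are hypothetical, chosen only to illustrate the shape of the bet):

    # toy model of the race: routing workload grows by a factor of
    # G_WORK per year vs. processor speed growing by G_CPU. if
    # G_WORK > G_CPU, convergence time diverges no matter how small
    # the gap is - the smaller exponent loses.
    G_WORK, G_CPU = 1.6, 1.5   # hypothetical annual growth factors
    for year in (0, 5, 10, 15):
        slowdown = (G_WORK / G_CPU) ** year
        print(f"year {year:2d}: relative convergence time x{slowdown:.2f}")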

My argument is that the assumptions wrt the cardinality of the set of visible nodes in the topology graph -may- be wrong. Other than that, I agree with you. Chalk it up to a fit of Noel-Stev syndrome. With a bit of rest, I'm sure it will pass. (And I do recommend "Small Forwarding Tables for Fast Routing Lookups" from SIGCOMM '97, which is one more reason why I think these arguments just might be off.)

--
--bill

i've read Pink et al. it has to do with quick forwarding, not the computation which decides what goes in the forwarding table given all the possibilities to choose from (longest-match covering, AS path selection, etc, etc, etc). and i would love to be wrong - but even being wrong by a lot doesn't change the number of bits by a large amount, given the way the size participates in the computations.

-mo
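The distinction being drawn - fast lookup versus the decision computation - in miniature (a simplified sketch; real BGP best-path selection has many more tie-breakers than this):

    # the lookup that Pink et al. make fast is one half; the
    # computation meant here is choosing, per prefix, one best route
    # out of every path by which that prefix is heard.
    def best_route(candidates):
        # simplified selection: shortest AS path, then lowest router
        # id as a tie-breaker (real BGP has many more decision steps)
        return min(candidates, key=lambda r: (len(r["as_path"]), r["router_id"]))

    heard = [
        {"as_path": [701, 1239, 3561], "router_id": 3},
        {"as_path": [701, 3561],       "router_id": 7},   # shortest: wins
        {"as_path": [1239, 174, 3561], "router_id": 1},
    ]
    print(best_route(heard))
    # this selection runs per prefix, over every copy of the
    # default-free region a router hears - the cost the TLA bound caps.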