RE: [ppml] [address-policy-wg] Those pesky ULAs again
Consolidated response to this barrage: Randy Bush wrote:
ok, i give. if ula address space is assigned/managed by registries, how is it actually different from pi space?
Policy expectations. If I get PI space from a registry, the ISP I call on knows I had to do some level of justification to get that. If I show up trying to get ULA space routed, they know that took no justification, making it easier to refuse.
if ipv6 space is effectively infinite (and we once thought ipv4 space was), then what is the use of ula address space? why not just assign vanilla ipv6 space?
If people could get PI everywhere, without expectation that it would ever be routed, then your argument about equating them would hold for some uses. PI is not globally available, and even when it is the justification is based on need to route that much space.
i am very confused by all the smoke and am trying to find the core of this stuff.
Stirring the smoke around is not helping ... There is a very basic policy issue here: can an organization get space without being tied to a service provider? In some areas that is possible, if one is willing to subject oneself to scrutiny. ULA-L allows that organization to get space with a probability of uniqueness, while ULA-C provides its management 'more assurance' that there will not be a cost down the road related to partnerships or M&A.

The other issue being mixed in is how and where filtering is done. Yes, an organization with PI space could firewall off a portion of that and have a similar effect for some purposes. At that point you are trusting that there are no operational errors in the firewall config over time, and that it can keep up with the attacks. Using the ULA approach, -there is no route- so there is less work for the firewall to do, and a very simple filter that is implemented by both the organization and the ISP. If the need to firewall off machines does not map to subnets, though, the complexity of the firewall rules may exceed the ability of the products on the market. This is a trade-off issue where use of ULA space alongside either PA or PI allows the complexity to be spread out over device management, with the devices that don't need external access being accessible only via the ULA prefix.

Jeroen Massar wrote:
ULA space should be !A'ed out by routers per default and have a special switch to enable forwarding for them.
Randy Bush wrote:
you are asking that routers hard code the association between routability and address space. and next you only want this at site border routers and not 'internal' routers. this was called site-local, and was soundly rejected as a disaster in the making.
There is a big difference between hard-coding and default configurations. Strict RPF is another thing that should be 'on by default', because in both cases the people that need it turned on are not aware they need it, while those that need it turned off are smart enough to turn the knob (and if they are not smart enough, they probably really need it turned on). SL was killed off by fear mongering, not because it was a disaster. There were no rational arguments, just a mob of chicken-littles screaming. There was nothing in the spec that said the bits had to be zero, but there was also nothing that said they should not be. Rather than fixing it by telling people to use non-zero values, the mob said 'kill the monster because we are afraid...'.

Iljitsch van Beijnum wrote:
It troubles me that so many people are willing to deprive others of something that those others consider useful just because they themselves don't find that thing useful.
Get used to it because some of the people on these lists are control freaks that just want to deprive others. Shane Kerr wrote:
But I do not think ULA central is useful to anyone.
You are entitled to your opinion.
Even if ULA central is useful, I don't think it is something the RIRs need to be involved in.
To avoid the perpetual arguments about ULA-C vs. PI, it would be best if both were handled by the same organization, to avoid the additional nonsense about an end run around the process. There is also the case that only organizations that really care would even be asking for ULA-C, and if they care enough they would be willing to become RIR members if need be. Additional recurring revenue for what is essentially a one-time effort should be enough of a reason for the RIRs to be involved.

Rich Emmings wrote:
As I mentioned earlier, one of the barriers to getting management buy-in on IPv6 is the fact that the standards keep changing, and this is a good example. To use an analogy, the financial boys won't sign off on starting the building until they get a final floor plan. Keep rewriting the spec to try to get it 'perfect' instead of 'good enough' and it'll still be in redesign as the last IPv4 address goes out the door.
What problem are we trying to solve here? Is it a valid concern, or are we fighting the last war and blithely ignoring what will be the real problems with IPv6? (Hint: you don't know what they are either. If it's the same problem, it's solved. If it's something you can think of, it's probably being solved. Otherwise, it's novel.)
The standards are not being changed here; it is policy, and policy changes all the time. The problem in this thread is that people keep mixing various policy arguments to justify their position on why this space is needed or not. There are several problems at hand, and various people don't want some of them solved. This results in the confused discussion that keeps happening, and will keep happening until the lack of IPv4 space forces a resolution. Unfortunately, crisis-based resolutions are not well thought out, and frequently have unintended long-term consequences.

Stephen Sprunk wrote:
You have the flawed assumption that everyone who uses RFC1918 space today will want/need ULA-C in the future. The vast majority of folks will be fine with ULA-L (or PA) space, and the target market for ULA-C is identical to the target market for PIv6. It will be the same number of orgs regardless of which type of space they request, so the debate comes down to why we want to put orgs on ULA-C space instead of just giving them PI space. If they're truly going to use it privately, they won't consume routing slots in the DFZ, and if they aren't they'll be using PIv6 anyways and won't have a need for ULA-C.
I agree that the ULA-C need will map to the PI need, though ULA-C may be a subset. The last line, though, is IPv4 thinking, in that it assumes people will only use one address range at a time. People may well use PI for their nodes that have public access, while at the same time using ULA-C for the ones that don't, to minimize the firewall rules. The only 'value' of ULA-C over ULA-L is the assurance to management that 'this has been registered, so the risk of a required renumbering event down the road is reduced'. It really doesn't matter if the chance of human error is greater than the probability of collision in ULA-L; this is not a technical issue, it is strictly a feel-good that people are willing to pay for. The RIRs should be all over customers like this, because their demands are low and their willingness to pay is high.

Leo Bicknell wrote:
- IPv6 space is not infinite. It's a 64-72 bit address space. That's right, subnets with > 256 hosts are very uncommon today, so we've wasted 64 bits to number 256 things. That makes the space effectively on the long end 72 bits.
This goes back to Iljitsch's comment: just because you don't see the need, don't deny others the opportunity to innovate... The original proposal for 64 bits met the design goals for IPv6 by 3 orders of magnitude, but the routing world was concerned that there would not be enough space for the hierarchy of providers, so the entire 64 bits was given to them and the debate raged about how many more bits to add for the hosts. For operational simplicity in auto-configuring hosts it was decided that EUI-64 would be the best choice, because the IEEE would run out of EUI-48's within the life expectancy of IPv6.

Now we find the routing world arguing about 'waste' because they are just a greedy bunch that really don't want others to have something that they think they control, even if they can't use them. To prove the point, line this up with the FUD about not being able to route /48's as PI space because there are just too many of them. If there are too many /48's, what in the world would the routing system do with more bits? This occurs in the /48-vs-/56 discussion as well: if there are too many /48's, why do we need to make sure end sites only get /56's? This is not about 'waste'; it is all about 'control'. The policy realm is dominated by routing types, and they want to make sure they have control over the use of the bit space, even when they know they couldn't possibly use it.

If you really want to see waste, allocate all the bits to routing. The world will move on to some other technology long before we run out of even /48's, because this is not the be-all-end-all of protocols. If the allocations allow innovation, there will be new approaches that might even minimize the workload of the routing world.

Paul Vixie wrote:
the real problems with IPv6 are those it shares with IPv4, so let's just call it "the real problems with IP". they've been argued forever and go by many names. from ppml's point of view, the right name of the biggest problem is "lack of EID/RID split". since we're using one address for both identity and location, it actually matters whether that address is universal or private, PI or PA, etc. as tli pointed out fairly early on, a solution to this problem would have added a lot more to the IP address system lifetime than adding more bits has added or will add. so, the problem isn't novel, but general recognition of the problem would certainly be novel.
The EID/RID split is a red herring used to confuse those that don't understand that ISPs operate private networks, and they want absolute control over their networks. That is fine; they should have control over their infrastructure. The point is that IP is an inter-network technology, something the telco world doesn't really get (and the traditional ISPs have forgotten during the assimilation). It has been argued that the reason the Internet won out over the traditional telco world is exactly due to the identity and locator being merged. This allowed a chain of organizations to have a clean-slate view of what this packet is about, where header rewriting techniques only provide the upstream organization's interpretation. We have the solution to the perceived problem of each ISP wanting to write its own interpretation of what to do with a packet: it is called MPLS. No matter what you call it, every attempt to overwrite the end-to-end semantics of the IPv6 header with local semantics is nothing more than a label operation. Since it is being done just for the local network, it doesn't belong in the inter-network header; it belongs in a lower layer that already carries local-only network semantics. The problem is not that the EID/RID are merged; it is that the destination network is trying to effect policy over network providers, and the network providers are trying to prevent them from doing that so they can control their own network. Again, I have no problem with ISPs wanting to control their own networks, but they should be using the tool that was designed for that rather than breaking the merged semantics that the entire existing security model is built on.

David Williamson wrote:
Uh, neither of those reasons undermines the solution others have proposed: use PI space. You can always just not announce some part (or all) of your space. That would make it private.
So you have a printer on the same subnet as a laptop, and the laptop needs Internet access while the printer does not. Do you explicitly list every device that is to be allowed or blocked in the firewall, or do you overlay two prefix ranges from the public space and try to keep straight which one has public access, to make sure the printer is in the right one? It would be much simpler operationally to have a very different range for the local-only devices. ULA provides that. ULA-C provides an assurance to management that there is a human maintaining a list that says 'this space is ours, so we will not have to renumber for partnerships or M&A'.

Owen DeLong wrote:
Um, no. It's like saying that counterfeit money is bad and we'd rather not create a sponsored system for printing it.
The thing that distinguishes counterfeit from 'real' is who prints it. If the same organization does the printing, it is all 'real', unless they intentionally create bogus records (errors happen even in real things). ULA-C has value to some organizations. If you really don't want that to be 'counterfeit' space, then take it, manage it, and make it real. Otherwise don't be surprised when addresses that are not managed by the RIRs start showing up in real networks.

David Williamson wrote:
My argument, however, is that there's no problem solved by ULA-C that can't be solved by PI space, and the creation of ULA-C would entirely undermine the RIR-based PI system.
Exactly how does ULA-C undermine RIR PI? RIR PI space is managed with the expectation that it would be publicly routable if the recipient wanted it to be. ULA-C space would be managed with the expectation that it would never be publicly routed. As long as those are managed by the same organization, there would be no cross-purpose in the allocation, and as long as being an RIR member was the prerequisite for either, no organization would be motivated to use one for functions where the other was expected. If the RIRs choose not to manage ULA-C, something will be created to do that, so the assurance against mixed purposes goes away and it becomes a bidding war for the cheapest registry. The fees are for membership, not address space, so every RIR member should be allocated a PI & ULA-C block, even if they never use them. This would take all the FUD about misuse off the table.

Edward Lewis wrote:
What I *sense* is happening here is that the RIRs are being used to do an end-run around the IETF process. This 'sense' is based on reading the draft (and seeing that this is along the lines of site local), looking at the mail list archives (which lacks overwhelming support to promote this), and the hint that the IETF is failing to promote ULA (perhaps that is just from the choice of words).
Despite what the message from the chairs says, the reason it was dropped was that the RIRs perceived ULA-C as an end-run around their lack of policy on PI (really an unstated policy of anti-PI at the time). I guess it was politically nice to leave the inter-org dirty laundry out of it ... ULA-C will only ever see the light of day as PI in the routing system if the remaining RIRs refuse to create real PI. PI is required legally in some areas, and will exist despite policy in others. ULA-C will be required by the management of many of those same organizations because it offers 'fat finger' protection through no routing entry in the public network. The RIRs need to recognize reality and create the policies that manage the space. The IETF could take up the draft and publish an RFC, but if the RIRs don't want to manage the space then an alternate registry gets set up. At that point there will be a bidding war over which registry provides the most space for the least cost, and everything about the status quo will end. If the RIRs want to avoid that situation then they need to establish a policy that RIR members get PI & ULA-C space, even if they never intend to use it. Tony
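The 'fat finger' protection being described (no route in the public network, plus one trivial border filter) is simple enough to sketch. A minimal illustration using Python's ipaddress module; the function and the example addresses are mine, not any router's actual configuration:

```python
# Sketch of a ULA border filter: all ULA space falls under fc00::/7
# (RFC 4193), so one prefix test covers every locally numbered device,
# regardless of which internal subnet it sits on.
import ipaddress

ULA = ipaddress.ip_network("fc00::/7")

def crosses_border(src: str, dst: str) -> bool:
    """True if a packet with these addresses may leave or enter the site."""
    return (ipaddress.ip_address(src) not in ULA
            and ipaddress.ip_address(dst) not in ULA)

# A ULA-numbered host stays unreachable from outside by this one rule,
# even if a per-host firewall rule is fat-fingered elsewhere:
assert not crosses_border("2001:db8::1", "fd00:1234::10")
# Global-to-global traffic passes:
assert crosses_border("2001:db8::1", "2001:db8:1::2")
```

Because the filter needs no per-subnet or per-host knowledge, both the organization and the ISP can apply it independently, which is the operational point made above.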
In a message written on Tue, May 29, 2007 at 05:05:56PM -0700, Tony Hain wrote:
Even if ULA central is useful, I don't think it is something the RIRs need to be involved in.
To avoid the perpetual arguments about ULA-C vs. PI, it would be best if both were handled by the same organization, to avoid the additional nonsense about an end run around the process. There is also the case that only organizations that really care would even be asking for ULA-C, and if they care enough they would be willing to become RIR members if need be. Additional recurring revenue for what is essentially a one-time effort should be enough of a reason for the RIRs to be involved.
I'm not sure I've seen the argument made this way before, but to me it's new, and a good argument. Specifically: "RIRs should administer ULA-C so they can help direct people to ULA-C or PI as appropriate." I think that's an accurate morph of your statement. While I'm still not sure about ULA-C, thinking about it that way makes me think that if it is implemented, the RIRs are the right place.
Now we find the routing world arguing about 'waste' because they are just a greedy bunch that really don't want others to have something that they think they control, even if they can't use them. To prove the point, line this up with the FUD about not being able to route /48's as PI space because there are just too many of them. If there are too many /48's, what in the world would the routing system do with more bits? This occurs in the /48-/56
This is distorting a near-term problem. If in 1993 I had told you that your AGS+ with 8M of RAM would need to route 200,000 prefixes from 200 different BGP sessions, you would have laughed me out of the room. Today we think of a 5,000,000 prefix Internet as an impossibility. No hardware could ever do that. However, 20 years on I'm not sure a 5 million route Internet will be surprising to anyone. The fact that we might not be able to route a /48 for everyone who wants PI with today's hardware, or even in 5 years' time, does not mean we should flush the possibility down the drain by giving those who are worthy of space today so much space that there will be none left tomorrow.

At the time of the AGS+, giving HP a /8 and DEC a /8 "made sense". Today, having one company that doesn't even provide any Internet services directly hold a /7 (15/8 and 16/8) seems absolutely absurd. At the time, having them consume a single route in your AGS seemed like a great idea. Today, having them consume 4 /19's (as an example) might seem like a better trade-off: 4 routing slots in exchange for using 10 bits less of address space. The thing of it is, as we're seeing with IPv4, taking space back is really hard.

Now, some people think that a 25 year lifespan for IPv6 is "doing good". I think if by allocating addresses more prudently we could make it 50 years, that's billions of future dollars and effort saved, and more important to me, perhaps avoiding the next version of this transition in my lifetime!

I believe the problem when we get into these arguments, though, is that it seems like there are only two options for the pendulum: the far left and the far right. I would hope as a community we could do a better job of finding the middle, but no one on either side of the argument seems interested in understanding where the middle might actually be located. "You're a big company, and you have to use Class C subnets, so you should have 65536 of them; here's a Class A."
"You're a big company, you have to give out /48 subnets, so you should have 65536 of them, here's a /32." Seems like we could have at least been creative this time around and picked a different number. Why do big companies need 65536 subnets? The number must be magical. -- Leo Bicknell - bicknell@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ Read TMBG List - tmbg-list-request@tmbg.org, www.tmbg.org
Leo Bicknell <bicknell@ufp.org> writes:
Today we think of a 5,000,000 prefix Internet as an impossibility. No hardware could ever do that. However, 20 years on I'm not sure a 5 million route Internet will be surprising to anyone.
Who is the "we" you refer to above? Actually, quite a few people are worried that a 5M prefix Internet is a possibility. There are also debates (i.e., no consensus) about whether, when that happens, routers will actually be able to cope with the load in practice. See draft-iab-raws-report-02.txt and the efforts going on in the IETF, e.g.:
http://www1.ietf.org/mail-archive/web/int-area/current/msg00763.html
http://www1.ietf.org/mail-archive/web/int-area/current/msg00783.html
Just to give one data point, router vendors are saying that today they have routers that can support 1M routes. Operators (at least some of them) are skeptical, because that hasn't been proven operationally in the field. So there is at least some uncertainty as to what can be supported in practice.
The thing of it is, as we're seeing with IPv4, taking space back is really hard. Now, some people think that a 25 year lifespan for IPv6 is "doing good". I think if by allocating addresses more prudently we could make it 50 years, that's billions of future dollars and effort saved, and more important to me, perhaps avoiding the next version of this transition in my lifetime!
Glad to see you are thinking of a lifespan of more than just a few years. Indeed, I think many are thinking of even longer time frames. For example, policy proposal "2005-8: Proposal to amend ARIN IPv6 assignment and utilisation requirement" (http://www.arin.net/policy/proposals/2005_8.html), which ARIN has adopted, was motivated very much by looking at timelines of 100 years and longer... Thomas
On Wed, May 30, 2007 at 11:03:52AM -0400, Thomas Narten wrote:
Leo Bicknell <bicknell@ufp.org> writes:
Today we think of a 5,000,000 prefix Internet as an impossibility. No hardware could ever do that. However, 20 years on I'm not sure a 5 million route Internet will be surprising to anyone.
Who is the "we" you refer to above?
Actually, quite a few people are worried that a 5M prefix Internet is a possibility. There are also debates (i.e., no consensus) about whether, when that happens, routers will actually be able to cope with the load in practice.
hum... given that w/ a /32 "boundary" - there exists the possibility of 2^32 routing table entries... clearly the /32 boundary is not to preserve routing table slots. if one is seriously considering a 1-5m entry routing table then it becomes important to (proxy) aggregate to the /8 or /9 level to keep within the 1 to 5m entries. --bill
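bill's numbers made explicit (my arithmetic, not from the thread):

```python
# An unaggregated table of /32 allocations could in principle reach
# 2^32 entries, so the /32 boundary by itself preserves nothing.
worst_case_at_slash32 = 2 ** 32
assert worst_case_at_slash32 == 4_294_967_296

# Proxy-aggregation to /8 or /9 collapses that to a few hundred routes:
assert 2 ** 8 == 256 and 2 ** 9 == 512

# A 1-5M entry budget corresponds to aggregating no finer than about /22:
assert 2 ** 19 < 1_000_000 < 2 ** 20
assert 2 ** 22 < 5_000_000 < 2 ** 23
```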
In a message written on Wed, May 30, 2007 at 11:03:52AM -0400, Thomas Narten wrote:
Today we think of a 5,000,000 prefix Internet as an impossibility. No hardware could ever do that. However, 20 years on I'm not sure a 5 million route Internet will be surprising to anyone.
Who is the "we" you refer to above?
A number of operators keep standing up at ARIN meetings and telling us that if the IPv6 Internet had the same number of routes as the IPv4 Internet (e.g. ~200k) that the world would end. While I'm making a bit of an assumption, if we can't support that rate, how would we ever get to 5M in 20 years?
Actually, quite a few people are worried that a 5M prefix Internet is a possibility. There are also debates (i.e., no consensus) that when that happens, routers will actually be able to cope with the load in practice.
I have no worry. 1 order of magnitude growth in 20 years is way below the rate computing power and bandwidth are increasing. Heck, I think a 50 million entry table in 20 years is well within the advances in hardware we will see in that time.
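The growth-rate claim is easy to check. A quick computation (my arithmetic, assuming roughly 200k routes today and hardware capacity doubling about every two years, a Moore's-law-style assumption):

```python
# Annual growth needed to reach a 5M-entry table in 20 years, versus
# hardware capacity doubling roughly every two years.
routes_today, routes_future, years = 200_000, 5_000_000, 20

table_cagr = (routes_future / routes_today) ** (1 / years) - 1
hardware_cagr = 2 ** (1 / 2) - 1         # doubling every ~2 years

assert 0.17 < table_cagr < 0.18          # table needs ~17.5% per year
assert hardware_cagr > 0.41              # hardware grows ~41% per year
assert hardware_cagr > 2 * table_cagr    # hardware comfortably outpaces it
```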
Glad to see you are thinking of an lifespan of more than just a few years. Indeed, I think many are thinking of even longer time frames.
Which is excellent. I think a 50-100 year planning window may be pushing the limits of what we can achieve, but I'm all for trying!
People either trying to achieve massive expansion of the IPv4 and IPv6 "global routing tables" - or trying to figure out a way of limiting their growth and devising new protocols to achieve multihoming and traffic engineering without relying on BGP - may wish to become involved in two lists.

The RAM list, arising from the IAB's RAWS workshop last year:
http://www1.ietf.org/mailman/listinfo/ram
http://tools.ietf.org/html/draft-iab-raws-report
http://www.iab.org/about/workshops/routingandaddressing/

RRG - the IRTF's Routing Research Group:
http://www.irtf.org/charter?gtype=rg&group=rrg
http://psg.com/lists/rrg/2007/

There is also a closed list which is worth watching, the IETF Routing and Addressing Directorate:
http://www.ietf.org/IESG/content/radir.html
http://www1.ietf.org/mail-archive/web/radir/current/

Guidance on which list is best for which topic is at:
http://www1.ietf.org/mail-archive/web/ram/current/msg01428.html

RADIR is working on a Problem Statement I-D and the RRG is working on a Design Goals I-D.

With IPv4's /24 limit on BGP advertisements it is relatively easy and very fast to do the FIB with a single lookup into RAM - provided you have enough fast RAM. Some high-end routers such as the CRS-1, M120 and MX960 probably have enough RAM to do it already. Beyond some number of prefixes (wild guess: 500,000) it would be more memory-efficient to do a direct lookup into RAM with 24 bits of address than with Tree-Bitmap or similar ASIC + RAM based algorithms, which use a single memory access for every 3 or 4 bits of address that needs to be classified.

My proposal for direct RAM lookups is:
http://www.firstpr.com.au/ip/sram-ip-forwarding/
but Reduced Latency DRAM would be fine too. The idea was first proposed in 1998 by Nick McKeown, Pankaj Gupta and Steven Lin: "Routing Lookups in Hardware at Memory Access Speeds" (IEEE INFOCOM, April 1998, Vol 3, pp. 1240-1247):
http://tiny-tera.stanford.edu/%7Enickm/papers/Infocom98_lookup.pdf

My proposal extends the idea to IPv6, but with a significant restriction in the address range; there would still be plenty of space. The proposal would enable IPv4 or IPv6 space to be assigned without any concern about route aggregation. This would mean that address space could be assigned in smaller chunks for shorter-term demand, which means it could be used much more efficiently.

An optimised FIB proposal such as mine doesn't make much sense unless BGP can cope with millions of prefixes. On the RAM list earlier this year, it seemed no-one had much hope of achieving this - so most of the discussion was about LISP, a proposed "Locator/ID Separation Protocol" to achieve TE and multihoming without BGP and without changing host software. But LISP has its difficulties and is at a very early stage of development. Now there is some discussion on the RRG list about significant improvements to BGP. Here are some URLs relating to improving BGP:
http://tools.ietf.org/html/draft-li-bgp-stability
http://www.sigcomm.org/sigcomm2005/paper-SubCae.pdf
http://www.ieee-icnp.org/2006/papers/s4a4.pdf
http://www.ieee-icnp.org/2006/papers/s8a2.pdf
http://www.beyondbgp.net/pubs/2005/bbgp_comnet05.pdf

I have a list of such documents at:
http://www.firstpr.com.au/ip/sram-ip-forwarding/#BGP_improvements
Please suggest additions to this list. - Robin
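The direct-lookup idea Robin describes can be sketched in a few lines. An illustrative toy, scaled down to /16 granularity so the flat table stays small (his proposal uses 24 address bits, i.e. a ~16M-entry RAM); the function names are mine:

```python
# Sketch of the direct-lookup FIB idea: the top bits of the destination
# address index a flat array, so forwarding takes one memory read instead
# of one read per few address bits as in Tree-Bitmap-style schemes.
import ipaddress

GRAN = 16                                  # lookup granularity in bits
fib = [None] * (2 ** GRAN)                 # one slot per /16 (RAM in hardware)

def install(prefix, next_hop):
    """Expand a prefix (no longer than /GRAN) into the flat table."""
    net = ipaddress.ip_network(prefix)
    first = int(net.network_address) >> (32 - GRAN)
    for slot in range(first, first + 2 ** (GRAN - net.prefixlen)):
        fib[slot] = next_hop

def lookup(dst):
    """Classify a destination with a single table read."""
    return fib[int(ipaddress.ip_address(dst)) >> (32 - GRAN)]

install("10.0.0.0/8", "next-hop-A")        # fills 256 slots
install("198.51.0.0/16", "next-hop-B")     # fills 1 slot
assert lookup("10.1.2.3") == "next-hop-A"
assert lookup("198.51.100.1") == "next-hop-B"
assert lookup("192.0.2.1") is None         # no route installed
```

Prefixes longer than the table granularity would need a second lookup stage; the trade is memory for a bounded, aggregation-independent lookup time, which is why such a FIB removes any concern about route aggregation.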
Hi, On Tue, May 29, 2007 at 09:01:10PM -0400, Leo Bicknell wrote:
"You're a big company, you have to give out /48 subnets, so you should have 65536 of them, here's a /32."
Seems like we could have at least been creative this time around and picked a different number. Why do big companies need 65536 subnets? The number must be magical.
It nicely fits on a nibble (even octet) boundary, which is good for DNS :) ... and it's a workable compromise between ISPs that want "more space" to handle numbering their customers, and others that want(ed) to be ultra-conservative about IPv6 distribution... Gert Doering -- APWG chair -- Total number of prefixes smaller than registry allocations: 113403 SpaceNet AG Vorstand: Sebastian v. Bomhard Joseph-Dollinger-Bogen 14 Aufsichtsratsvors.: A. Grundner-Culemann D-80807 Muenchen HRB: 136055 (AG Muenchen) Tel: +49 (89) 32356-444 USt-IdNr.: DE813185279
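Gert's nibble-boundary point comes from ip6.arpa delegation: IPv6 reverse DNS uses one label per 4-bit nibble, so a prefix length divisible by 4 maps to a whole zone at a single cut. A small illustration (the helper function is mine):

```python
# Why nibble (4-bit) boundaries are good for DNS: ip6.arpa zones are
# built from one label per nibble, so a /32 or /48 is exactly one
# delegatable zone, while e.g. a /47 would straddle two.
import ipaddress

def reverse_zone(prefix: str) -> str:
    net = ipaddress.ip_network(prefix)
    assert net.prefixlen % 4 == 0, "clean delegation needs a nibble boundary"
    nibbles = net.network_address.exploded.replace(":", "")[: net.prefixlen // 4]
    return ".".join(reversed(nibbles)) + ".ip6.arpa"

# A /32 allocation is one zone, with 65536 /48 child zones below it:
assert reverse_zone("2001:db8::/32") == "8.b.d.0.1.0.0.2.ip6.arpa"
assert reverse_zone("2001:db8:1::/48") == "1.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa"
```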
participants (6)
- bmanning@karoshi.com
- Gert Doering
- Leo Bicknell
- Robin Whittle
- Thomas Narten
- Tony Hain