(Re-sent due to an issue with the list, with some rephrasing.)

I agree with Tim here and do not support any such restrictions. I also think that the service should be available to anyone, not just LIRs.

As Tim and others have mentioned, it doesn't make sense to compare RPKI and IRR, because RPKI doesn't rely on where the information is published. If there are any arguments for restricting this that are not based on comparisons with IRR, then I would very much like to hear them.

-Cynthia

On Fri, Sep 30, 2022 at 10:00 AM Tim Bruijnzeels <tim@nlnetlabs.nl> wrote:
Hi,
On 29 Sep 2022, at 16:15, Felipe Victolla Silveira <fvictolla@ripe.net> wrote:
The service was originally designed to allow all objects to be published in our repositories, regardless of whether the associated resources are part of the RIPE NCC or another RIR, and this is how we would like to proceed. However, it has been argued that there should be a restriction in this service so that it allows only RIPE NCC resources to be published and not resources belonging to a different RIR.
I disagree with introducing restrictions on principle. This creates a barrier to entry that does not help reduce the number of RPKI repositories - and the number and reliability of these repositories are already a mess and a concern. Furthermore, if I am a member under multiple RIRs, I would like to use the repository with the best service level and availability. And yes, if $cdn offered a higher-quality service, then I might use that.
I disagree that having out-of-region objects in the RIPE NCC RPKI repository creates a mess. The comparison with IRR is wrong for a number of reasons.
Most importantly, IRR objects are human-readable text files that are trusted because of where they are found. This is not true for RPKI. RPKI objects are intended for machine (relying party/validator) parsing; they are signed and they are validated. It does not matter where they are found: rsync, RRDP, another continent, printed on a t-shirt... They can always be validated downwards from a Trust Anchor, and the chain of trust will be known and verifiable. We call this "object security".
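To illustrate with a toy Python sketch (this is not real RPKI CMS/X.509 handling; the key, the ROA payload and the "transports" below are made up): the transport used to fetch an object makes no difference to whether it validates.

    # Toy model only: trust comes from the signature chain back to the
    # Trust Anchor, not from where the object was fetched.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    ta_key = Ed25519PrivateKey.generate()                  # stand-in for the Trust Anchor
    payload = b"ROA: AS64500 may originate 192.0.2.0/24"   # made-up object
    signature = ta_key.sign(payload)

    def validate(obj: bytes, sig: bytes, fetched_via: str) -> bool:
        # 'fetched_via' is deliberately ignored: only the cryptographic check matters.
        try:
            ta_key.public_key().verify(sig, obj)
            return True
        except InvalidSignature:
            return False

    for transport in ("rsync", "rrdp", "t-shirt"):
        print(transport, validate(payload, signature, transport))   # True every time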
This separation of concerns between RPKI CAs and Publication Servers was very much in the minds of the people designing the relevant protocols (RFC 8181, RFC 8183). The freedom to publish everything in a repository separate from the parent's (preferably the repository with the best quality) was treated as a requirement rather than a bug.
As a result, the standards have no support for expressing restrictions, neither at setup time nor at run time. There is nothing in the RFC 8183 "Repository Response" that conveys any restrictions (as mentioned above, by design). So, if non-standard restrictions are added, there is nothing the CA software can do to prevent the wrong repository from being associated with a given parent. In fact, if the publication server just checks resources, then publication of a manifest and CRL under the certificate received from another parent will still be accepted (manifests just say 'inherit resources'). But when a ROA is then published, this would result in a runtime error, and the error codes (section 2.5, RFC 8181) carry no useful signal for this case.
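As a concrete illustration (the handle, URIs and BPKI TA below are invented placeholders, and the XML is abbreviated), this is roughly all the information an RFC 8183 repository response carries - there is simply no field in which a restriction could be expressed:

    # Rough sketch of an RFC 8183 <repository_response/>; all values are placeholders.
    import xml.etree.ElementTree as ET

    response_xml = """
    <repository_response xmlns="http://www.hactrn.net/uris/rpki/rpki-setup/"
        version="1"
        publisher_handle="example-lir"
        sia_base="rsync://repo.example.net/repo/example-lir/"
        rrdp_notification_uri="https://repo.example.net/rrdp/notification.xml">
      <repository_bpki_ta>...base64 BPKI TA certificate...</repository_bpki_ta>
    </repository_response>
    """

    root = ET.fromstring(response_xml)
    print(root.attrib)                    # handle, sia_base, RRDP URI, version - nothing about allowed resources
    print([child.tag for child in root])  # only the repository's BPKI TA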
This issue becomes even harder if there are sub-delegated "grandchildren" under the LIR. There was a request to support this model and allow them to publish in the RIPE NCC repository. This model can be quite useful, as it would allow delegating control of specific resources to dedicated business units and/or customers. Requiring them to run their own publication server is a huge barrier. But how would they know what can be published? They don't even know (based on the current standards) who their grandparent is, so matching parent to repository is not trivial.
Furthermore, while it may be tempting to add validation concerns to the publication server, this is not without risks either. What happens if a publication server cannot parse an object? Perhaps because it is a new object type, or perhaps because of a bug. Would the server now start to reject content for a CA? How would the CA know, and what could it do? This introduces more fragility.
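As a purely hypothetical sketch of that failure mode (the object types, file extensions and checks below are invented for illustration): a publication server that insists on understanding every object before accepting it quietly turns "unknown to the server" into "rejected for the CA".

    # Hypothetical publication-server acceptance check, for illustration only.
    KNOWN_TYPES = {".cer", ".crl", ".mft", ".roa"}    # what this server can parse today

    def accept(uri: str, der_object: bytes) -> bool:
        suffix = uri[uri.rfind("."):]
        if suffix not in KNOWN_TYPES:
            # A newer object type (or a parser bug) means the CA did nothing
            # wrong, yet its publish request fails anyway.
            return False
        # The resource check argued against above would have to go here,
        # even though the protocol gives the CA no way to learn about it.
        return True

    print(accept("rsync://repo.example.net/lir/example.roa", b"..."))   # True
    print(accept("rsync://repo.example.net/lir/example.asa", b"..."))   # False: newer type, rejected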
So, to summarise for now: these restrictions are, in my view, not needed, and they do more harm than good. They are also not implementable under the current standards. If restrictions such as these are to exist, then the standards need to be updated, i.e. a discussion in the IETF would be required.
Tim