2016-02 New Policy Proposal (Resource Authentication Key (RAK) code for third party authentication)
Dear colleagues,

A new RIPE Policy Proposal, 2016-02, "Resource Authentication Key (RAK) code for third party authentication", has been made and is now available for discussion. The proposal aims to allow all number resources, as exact matches and more specifics, to be authenticated via a date-expiring API key.

You can find the full proposal at: https://www.ripe.net/participate/policies/proposals/2016-02

We encourage you to review this proposal and send your comments to <ncc-services-wg@ripe.net> before 31 May 2016.

Regards,
Marco Schmidt
Policy Development Officer
RIPE NCC
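For readers trying to picture how such a key might be used by a third-party IRR operator, here is a minimal Python sketch. The endpoint URL, parameter names and response fields are illustrative assumptions only - the proposal, not this sketch, defines the actual interface.

import requests  # third-party HTTP library, assumed to be available

# Hypothetical endpoint; not an existing RIPE NCC API.
RAK_CHECK_URL = "https://rest.db.ripe.net/rak/check"

def rak_authorised(prefix, origin_as, rak):
    """Ask the (hypothetical) RIPE DB endpoint whether this RAK authorises
    the given (prefix, origin AS) combination and has not expired."""
    resp = requests.get(
        RAK_CHECK_URL,
        params={"prefix": prefix, "origin": origin_as, "key": rak},
        timeout=10,
    )
    if resp.status_code != 200:
        return False
    data = resp.json()
    # Assumed response fields: 'authorised' and 'expired' booleans.
    return bool(data.get("authorised")) and not data.get("expired", False)

# Example with placeholder values:
# rak_authorised("192.0.2.0/24", "AS64500", "RAK-EXAMPLE-KEY")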
Hi, On 02.05.2016 14:12, Marco Schmidt wrote:
We encourage you to review this proposal and send your comments to <ncc-services-wg@ripe.net> before 31 May 2016.
Isn't this something that is already possible using RPKI ROAs?

Regards,
André
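For comparison, this is roughly what the existing RPKI-based check looks like: RFC 6811 origin validation of a (prefix, origin AS) pair against published ROAs. A minimal sketch; the ROA entry is a placeholder, and a real deployment would load ROAs from a validator export.

import ipaddress

# Placeholder ROA data: (ROA prefix, maxLength, authorised origin ASN).
ROAS = [
    ("192.0.2.0/24", 24, 64500),
]

def rpki_state(prefix, origin_asn):
    """Classify a (prefix, origin AS) pair as valid / invalid / not-found,
    following the RFC 6811 origin validation rules."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, roa_asn in ROAS:
        roa_net = ipaddress.ip_network(roa_prefix)
        if net.version != roa_net.version or not net.subnet_of(roa_net):
            continue
        covered = True
        if roa_asn == origin_asn and net.prefixlen <= max_len:
            return "valid"
    return "invalid" if covered else "not-found"

print(rpki_state("192.0.2.0/24", 64500))   # valid
print(rpki_state("192.0.2.0/24", 64501))   # invalid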
Hi, On Mon, May 02, 2016 at 02:12:40PM +0200, Marco Schmidt wrote:
You can find the full proposal at:
https://www.ripe.net/participate/policies/proposals/2016-02
We encourage you to review this proposal and send your comments to <ncc-services-wg@ripe.net> before 31 May 2016.
I might be a bit old and stupid, but let me paraphrase to check whether I understand this right:

- people put crap into the RADB all day long
- so we add an API for the RIPE DB so that the RADB operators can auto-check whether a given (prefix, AS) tuple has been authorized by the owner in their corresponding registry (here: RIPE)

Correct?

If yes, I don't think this is a good approach - because if the RADB and other operators actually were *interested* in reducing the amount of crap in their database, they could cross-check RIPE route:/route6: objects already today, without any new API needed. Evidence shows that they are not interested, even when presented with "hey, there is garbage in your database, look at the RIPE DB for the correct route: object", nothing happens.

Ceterum censeo: RADB must die, and as this proposal will not speed up that process, it's not helping.

(NTT, on the other side, is already cross-checking - so I'm not sure I see the benefit for them. But if Job convinces me that it makes life easier for them, I stand corrected)

Gert Doering -- router operator, and victim of RADB garbage -> hijacks
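For illustration, the cross-check described above can be done today with a plain whois query against the RIPE database: collect the origin ASNs of route:/route6: objects covering a prefix and compare them with what a third-party database is about to accept. A minimal sketch; the prefix and AS in the usage comment are placeholders.

import socket

def ripe_route_origins(prefix):
    """Return the set of origin ASNs found in RIPE DB route/route6 objects
    that exactly or less specifically match the given prefix."""
    query = "-T route,route6 -L {}\r\n".format(prefix)
    data = b""
    with socket.create_connection(("whois.ripe.net", 43), timeout=10) as s:
        s.sendall(query.encode())
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
    origins = set()
    for line in data.decode(errors="replace").splitlines():
        if line.lower().startswith("origin:"):
            origins.add(line.split(":", 1)[1].strip().upper())
    return origins

# A third-party operator could refuse a candidate route object whose origin
# does not appear here, e.g. (placeholder values):
# "AS64500" in ripe_route_origins("193.0.20.0/23")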
Hi Gert, On Mon, May 02, 2016 at 02:31:46PM +0200, Gert Doering wrote:
On Mon, May 02, 2016 at 02:12:40PM +0200, Marco Schmidt wrote:
You can find the full proposal at:
https://www.ripe.net/participate/policies/proposals/2016-02
We encourage you to review this proposal and send your comments to <ncc-services-wg@ripe.net> before 31 May 2016.
If yes, I don't think this is a good approach - because if the RADB and other operators actually were *interested* in reducing the amount of crap in their database, they could cross-check RIPE route:/route6: objects already today, without any new API needed.
Evidence shows that they are not interested, even when presented with "hey, there is garbage in your database, look at the RIPE DB for the correct route: object", nothing happens.
It is my experience with both RADB and NTTCOM that when you email the database operator and present them with evidence, it gets cleaned up (though it might take up to three weeks). Admittedly, I've not been successful with all database operators out there.
Ceterum censeo: RADB must die, and as this proposal will not speed up that process, it's not helping.
We can put the third-party databases to rest once we have 100% feature parity in their respective replacements; those successors need to be accessible, for both reading and writing, to all relevant stakeholders.
(NTT, on the other side, is already cross-checking - so I'm not sure I see the benefit for them. But if Job convinces me that it makes life easier for them, I stand corrected)
Just like RADB, NTTCOM (the IRR Registry by NTT) is _not_ doing any cross-checking at this moment. It's unfortunate, but any NTTCOM mntner can create garbage objects. When NTT staff come across garbage (or are made aware), the garbage is mopped up.

NTTCOM & RADB use the same IRRd software. The differences are the operating company & staff, the original database content and the mirror selection criteria. As a third-party IRR database operator, I have a strong interest in anything that can help improve the quality of the data. I can't imagine it's any different for Merit.

I think you are referring to "irrlockdown", which is a slightly different approach to route-filter generation. IRRLockdown promotes the idea of outright ignoring route objects which are covered by RIPE NCC managed IP space, from all IRR databases except RIPE itself. Today "irrlockdown" has not been deployed in NTT due to certain as yet unresolved software & communication challenges.

I guess irrlockdown and proposal 2016-02 aim for the same result, but come from very different directions. One approach hinges on "don't publish or consume unverifiable data"; the other is, I guess, "make it possible to verify data prior to publishing and consumption".

As to the policy proposal itself: I would probably benefit from a (high-level) diagram displaying which interactions on which pieces of the data happen where, and how that results in something useful.

Kind regards,
Job
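To make the irrlockdown rule concrete: a minimal sketch of the filtering decision, assuming a list of RIPE NCC managed address blocks is available. The single block below is a placeholder; a real list would be built from the RIR delegated statistics.

import ipaddress

# Placeholder: one address block administered by the RIPE NCC.
RIPE_MANAGED = [ipaddress.ip_network("193.0.0.0/8")]

def keep_route_object(prefix, source):
    """Drop route objects that sit inside RIPE-managed space unless they
    come from the RIPE database itself."""
    net = ipaddress.ip_network(prefix)
    in_ripe_space = any(
        net.version == block.version and net.subnet_of(block)
        for block in RIPE_MANAGED
    )
    return (not in_ripe_space) or source.upper() == "RIPE"

print(keep_route_object("193.0.2.0/24", "RADB"))  # False: ignored
print(keep_route_object("193.0.2.0/24", "RIPE"))  # True: kept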
Hi, On Mon, May 02, 2016 at 03:30:22PM +0200, Job Snijders wrote:
I think you are referring to "irrlockdown", which is a slightly different approach to route-filter generation. IRRLockdown promotes the idea of outright ignoring route objects which are covered by RIPE NCC managed IP space, from all IRR databases except RIPE itself. Today "irrlockdown" has not been deployed in NTT due to certain as yet unresolved software & communication challenges.
Oh, indeed. I assumed that this has been implemented and is live already (and I applaud you for the effort, even if it's still being stalled).
I guess irrlockdown and proposal 2016-02 aim for the same result, but come from very different directions. One approach hinges on "don't publish or consume unverifiable data"; the other is, I guess, "make it possible to verify data prior to publishing and consumption".
As to the policy proposal itself: I would probably benefit from a (high-level) diagram displaying which interactions on which pieces of the data happen where, and how that results in something useful.
... and some argument why it cannot be done with an RPKI ROA lookup, or a plain whois lookup.

Gert Doering -- NetMaster
On 05/02/2016 09:30 AM, job@instituut.net (Job Snijders) wrote:
Hi Gert,
On Mon, May 02, 2016 at 02:31:46PM +0200, Gert Doering wrote:
On Mon, May 02, 2016 at 02:12:40PM +0200, Marco Schmidt wrote:
You can find the full proposal at:
https://www.ripe.net/participate/policies/proposals/2016-02
We encourage you to review this proposal and send your comments to <ncc-services-wg@ripe.net> before 31 May 2016.
If yes, I don't think this is a good approach - because if the RADB and other operators actually were *interested* in reducing the amount of crap in their database, they could cross-check RIPE route:/route6: objects already today, without any new API needed.
Evidence shows that they are not interested, even when presented with "hey, there is garbage in your database, look at the RIPE DB for the correct route: object", nothing happens.
It is my experience with both RADB and NTTCOM that when you email the database operator and present them with evidence, it gets cleaned up (though it might take up to three weeks). Admittedly, I've not been successful with all database operators out there.
(Apologies for the belated reply -- I'm on the Database WG list, but only recently joined the NCC Services WG list.)

Just to follow up on Job's comment: we (Merit RADB) have spent the last couple of years training our 24x7 NOC team to provide RADB support, with integration into their existing ticketing system, and we believe ourselves to be responsive to requests for clean-ups. We've also been more proactive about monitoring new account creation requests and looking for certain suspicious indications.
Ceterum censeo: RADB must die, and as this proposal will not speed up that process, it's not helping.
We can put the third-party databases to rest once we have 100% feature parity in their respective replacements; those successors need to be accessible, for both reading and writing, to all relevant stakeholders.
(NTT, on the other side, is already cross-checking - so I'm not sure I see the benefit for them. But if Job convinces me that it makes life easier for them, I stand corrected)
Just like RADB, NTTCOM (the IRR Registry by NTT) is _not_ doing any cross-checking at this moment. It's unfortunate, but any NTTCOM mntner can create garbage objects. When NTT staff come across garbage (or are made aware), the garbage is mopped up.
NTTCOM & RADB use the same IRRd software. The differences are the operating company & staff, the original database content and the mirror selection criteria. As a third-party IRR database operator, I have a strong interest in anything that can help improve the quality of the data. I can't imagine it's any different for Merit.
We certainly agree with this statement and have resources we can allocate to making improvements.
I think you are referring to "irrlockdown", which is a slightly different approach to route-filter generation. IRRLockdown promotes the idea of outright ignoring route objects which are covered by RIPE NCC managed IP space, from all IRR databases except RIPE itself. Today "irrlockdown" has not been deployed in NTT due to certain as yet unresolved software & communication challenges.
I guess irrlockdown and proposal 2016-02 aim for the same result, but come from very different directions. One approach hinges on "don't publish or consume unverifiable data"; the other is, I guess, "make it possible to verify data prior to publishing and consumption".
As to the policy proposal itself: I would probably benefit from a (high-level) diagram displaying which interactions on which pieces of the data happen where, and how that results in something useful.
I can see value in this proposal and would also welcome a high-level diagram with some additional details of the interactions.

Regards,
Larry Blunk
Merit Network
Hi Gert,
I might be a bit old and stupid, but let me paraphrase to check whether I understand this right:
- people put crap into the RADB all day long
- so we add an API for the RIPE DB so that the RADB operators can auto-check whether a given (prefix, AS) tuple has been authorized by the owner in their corresponding registry (here: RIPE)
correct?
That is the initially described intention .. but it could be used in the future for other things as well, like a digital LOA .. or apps ..
If yes, I don't think this is a good approach - because if the RADB and other operators actually were *interested* in reducing the amount of crap in their database, they could cross-check RIPE route:/route6: objects already today, without any new API needed.
If there were a route object in the RIPE DB, the problem wouldn't exist, would it .. ? The issue is specifically for NON-RIPE AS numbers with RIPE IP resources, for which no route objects are maintained in the RIPE DB ...
Evidence shows that they are not interested, even when presented with "hey, there is garbage in your database, look at the RIPE DB for the correct route: object", nothing happens.
I don't share the same experience that you have on this with RADB; I do see it with Savvy, for instance ..... Level3 just takes a month .. but it will get picked up, in my current experience ..
Ceterum censeo: RADB must die, and as this proposal will not speed up that process, it's not helping.

That is a bit harsh ...
(NTT, on the other side, is already cross-checking - so I'm not sure I see the benefit for them. But if Job convinces me that it makes life easier for them, I stand corrected)
I'm sure that NTT could provide insight into how they are currently doing it.
Gert Doering -- router operator, and victim of RADB garbage -> hijacks
The goal is to limit the options so that spammers can't initiate hijacks ... So there is a common goal ..

Regards,
Erik Bais
Hi, On Mon, May 02, 2016 at 02:02:39PM +0000, Erik Bais wrote:
If yes, I don't think this is a good approach - because if the RADB and other operators actually were *interested* in reducing the amount of crap in their database, they could cross-check RIPE route:/route6: objects already today, without any new API needed.
If there were a route object in the RIPE DB, the problem wouldn't exist, would it .. ? The issue is specifically for NON-RIPE AS numbers with RIPE IP resources, for which no route objects are maintained in the RIPE DB ...
Well, RPKI ROAs would certainly work for arbitrary ASes - so "RIPE IP resources with non-RIPE AS numbers" have a working technical mechanism today. I'm not exactly sure about non-RIPE AS numbers, but that *should* be doable for route:/route6: objects as well (there's always the "must be authorized by the AS holder" catch, which might get in the way here).
Evidence shows that they are not interested, even when presented with "hey, there is garbage in your database, look at the RIPE DB for the correct route: object", nothing happens.
I don't share the same experience that you have on this with RADB; I do see it with Savvy, for instance ..... Level3 just takes a month .. but it will get picked up, in my current experience ..
"A month" is a very very long time in Internet standards.
Ceterum censeo: RADB must die, and as this proposal will not speed up that process, it's not helping.

That is a bit harsh ...
Yeah, it's a bit unfair to single out RADB. "Any IRR DB that is used by people to build BGP filters and at the same time permits arbitrary people to put in arbitrary routing resource records, without verification with their home RIR, must die." 20 years ago, when the Internet was a place full of friendly people who worked together for the common good, such open databases had their place and were a good thing. Today, they are highly problematic. People have good intentions and *do* build BGP filters from these heaps of crap, which effectively do nothing to stop hijackers and intentional abusers. [..]
The goal is to limit the options so that spammers can't initiate hijacks ... So there is a common goal ..
I still fail to see why this new technical mechanism is better than the existing mechanism (RPKI, at least, and possibly route/route6 objects), *and* why these database operators would actually *use* them, if they fail to do verification today for the existing and easy cases.

OTOH, some evidence of buy-in from L3, RADB, etc. that this is something they are just waiting for, and that things will get implemented quickly afterwards, would convince me that it's a good thing :-)

Gert Doering
On Mon, May 02, 2016 at 02:26:46PM +0200, Gert Doering wrote:
[..] On Mon, May 02, 2016 at 02:02:39PM +0000, Erik Bais wrote:
The goal is to limit the options so that spammers can't initiate hijacks ... So there is a common goal ..
I still fail to see why this new technical mechanism is better than the existing mechanism (RPKI, at least, and possibly route/route6 objects), *and* why these database operators would actually *use* them, if they fail to do verification today for the existing and easy cases.
The issue with RPKI .. is that not everything is signed ... Another issue is that, due to the distributed options in RPKI, one can run their own certification environment ..

The intention is to use a system that would fix it for all IP resources within a single region (RIPE in this example), in order to shut out idiotic input into those DBs ....

Yes, there is currently garbage in some of those databases .. I'm personally still waiting on a reply from the Savvy DB maintainers to a ticket update sent 2 months ago .. perhaps longer ..

If the third-party databases don't move to the new way of authorizing prefixes against the originating RIR, and don't do any housekeeping on expired objects, their usefulness is close to zero for future use and for inclusion in any future prefix-filter development.

I would personally like to see future developments of bgpq3 or the IRRToolSet limit their default inclusion to those DBs that actually validate input ...
OTOH, some evidence of buy-in from L3, RADB, etc. that this is something they are just waiting for, and that things will get implemented quickly afterwards, would convince me that it's a good thing :-)
Agree, I think that this would indeed be a good thing ...

Regards,
Erik Bais
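On the bgpq3 point above: bgpq3 can already be told which IRR sources to consult via its -S option, so a conservative source selection can be enforced today when generating filters. A minimal sketch wrapping such a call from Python; the as-set name, prefix-list name and source selection are placeholders.

import subprocess

def build_prefix_list(as_set, list_name, sources="RIPE"):
    """Run bgpq3 restricted to the given IRR sources and return the
    generated IPv4 prefix-list configuration as text."""
    cmd = [
        "bgpq3",
        "-4",              # build an IPv4 prefix list
        "-S", sources,     # consult only these IRR sources
        "-l", list_name,   # name of the generated prefix list
        as_set,
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# Example with placeholder names:
# print(build_prefix_list("AS-EXAMPLE", "AS-EXAMPLE-IN", sources="RIPE,ARIN"))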
participants (7)

- André Keller
- Erik Bais
- Erik Bais
- Gert Doering
- Job Snijders
- Larry Blunk
- Marco Schmidt