New on RIPE Labs: RIPE NCC and the Cloud - Let’s Start Again
I have seen the discussion about "RIPE NCC and the cloud" in the archives, and I believe one reason there seems to be confusion is that many issues are mixed together in the same...hmm...whatever.

I personally think deployment "in the cloud" implies an ability to deploy services in such a way (orchestration) that resources are dynamically allocated depending on, for example, load. This implies automatic scaling and dynamic allocation of storage, memory and CPU. Such deployment can to some degree be efficient independently of who provides "the cloud". Quite often it is most efficient if, under normal load, services are deployed "on prem", while dynamic allocation during extreme load goes to an external provider. This gets the most efficient use out of the hardware one has invested in. For this to work, it is important NOT to use the provider-specific services that cloud providers often also offer, and this regardless of whether the deployment is on prem or external.

My suggested request to the RIPE NCC is therefore to implement orchestration of services in such a way that the mechanism used is as provider-neutral as possible. A second request could be to remind the RIPE NCC to do the actual deployment as efficiently as possible (whether on prem or not), but to be honest, this is what I always expect the Managing Director to do, and I do not think we members should micromanage.

On top of this we have policy-related requirements, which in turn can be divided in two: 1. that the RIPE NCC does not violate any legal requirements on data, data transfer, data storage and such things; and 2. that the policy expressed by the RIPE community, on top of the regulatory requirements, is implemented. Regarding these, I sure hope the RIPE NCC is not violating any regulation. If they are, we have other issues. The second set of requirements has to do with the quite complicated situation the RIPE NCC is in: it has customers in many different jurisdictions, and the union of these legislations can be implemented in multiple ways. When selecting how to do "the right thing", the RIPE community might have some interest in influencing that choice. I believe we see this, for example, in the DB WG on how to implement the database, and here in this WG.

Separate from this, I see questions where RIPE members ask the RIPE NCC to be more transparent about how (for example) legislation in the Netherlands is implemented. What I think we in the community should do is separate the cases where we want transparency from the cases where we believe we really have input to give on issues, without micromanaging the work the Managing Director does.

Personally, I think the Managing Director and his staff are doing good work and I see no reason whatsoever to question the decisions they have made. That said, in some cases I am curious to know what decisions they have made, as I may have the same or similar deliberations to make at Netnod and might want to know more. But I do not question them. So, many things are intertwined, and what is specifically mixed up is "us members being curious" versus "us members actually wanting to provide input".

From a technical standpoint, I think the most important thing for "cloud" is to choose interoperable solutions, so that migration from one cloud to another is possible, or at least as easy as possible. Including on-prem clouds.

Best, Patrik
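As one illustration of what "provider neutral" can mean in practice, here is a minimal sketch. It is entirely hypothetical: the endpoint, bucket and credential values are invented, and nothing here describes the RIPE NCC's actual setup. Object storage exposed through the widely implemented S3-style API can be driven by the same client code whether the backend is a large external provider or an on-prem store such as MinIO or Ceph; only the endpoint and credentials change, which is what keeps migration between clouds, including on-prem clouds, cheap.

    # Hypothetical sketch (invented endpoint/bucket/credentials): the same
    # S3-style client code targets different backends; only the endpoint
    # and credentials differ, so the deployment stays provider neutral.
    import boto3

    def storage_client(endpoint_url, access_key, secret_key):
        # endpoint_url=None selects the default (AWS) endpoint; an explicit
        # URL points the same code at any S3-compatible store instead.
        return boto3.client(
            "s3",
            endpoint_url=endpoint_url,
            aws_access_key_id=access_key,
            aws_secret_access_key=secret_key,
        )

    # On-prem store (e.g. MinIO or Ceph) -- illustrative values only.
    on_prem = storage_client("https://storage.internal.example:9000",
                             "ACCESS-KEY", "SECRET-KEY")
    # External provider: same code path, different endpoint.
    external = storage_client(None, "ACCESS-KEY", "SECRET-KEY")

    for s3 in (on_prem, external):
        s3.put_object(Bucket="example-bucket", Key="hello.txt",
                      Body=b"same code, different cloud")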
Patrik Fältström wrote on 22/06/2021 21:23:
I personally think deployment "in the cloud" implies an ability to deploy services in such a way (orchestration) that resources are dynamically allocated depending on, for example, load. This implies automatic scaling and dynamic allocation of storage, memory and CPU.
"the cloud" just means someone else's computer. There are plenty of reasons to use this as a procurement model, e.g. outsourcing of specific types of headaches, ease of scale, connectivity, insta-cdn functionality, etc. But in the end it's rental of infrastructure and specific infrastructure-related competencies, and it's a legitimate business decision to deploy infrastructure in this way.
but to be honest, this is what I always expect the Managing Director to do, and I do not think we members should micromanage.
Pretty much this. We don't tell the RIPE NCC legal or finance or outreach or any other department how to do their jobs. The NCC is an executive and it needs the freedom to make its own operational decisions. Just because something happens to be within our area of expertise as stakeholders doesn't mean we should start back-seat driving because they made a decision to go one way where someone else might have gone another. With regard to the RIPE Database and the RPKI repo, it doesn't look like there are any specific legal issues that haven't been considered. All of this information is publicly accessible anyway. There may well be a different set of considerations for other types of data.
Personally, I think the Managing Director and his staff are doing good work and I see no reason whatsoever to question the decisions they have made. That said, in some cases I am curious to know what decisions they have made, as I may have the same or similar deliberations to make at Netnod and might want to know more. But I do not question them.
So, many things are intertwined, and what is specifically mixed up is "us members being curious" versus "us members actually wanting to provide input".
From a technical standpoint, I think the most important thing for "cloud" is to choose interoperable solutions, so that migration from one cloud to another is possible, or at least as easy as possible. Including on-prem clouds.
yep all this, in spades ^^^

There are two important takeaways here: 1. ensuring service stability, and 2. ensuring that long-term business continuity isn't compromised (e.g. by vendor lock-in). Once these requirements are fulfilled, it's great to get an inside view of what the NCC's plans are.

Nick
From a technical standpoint, I think the most important thing for "cloud" is to choose interoperable solutions, so that migration from one cloud to another is possible, or at least as easy as possible. Including on-prem clouds.
After reading the new article and the comments, I find it disturbing that the area of cyber-security is almost totally ignored. The original RIPE Labs article stated "Organisations like AWS offer infrastructure at a global scale, along with mature security,..." as well as "The primary goal of our cloud strategy is to ensure availability, security,..."

The RIPE NCC has done an admirable job of protecting its assets - specifically the DB. Moving to the cloud means the NCC will now allow the cloud provider to do that function for it. But in an era where more and more organizations are placing their assets in the cloud, if I were a state-sponsored cyber attack team, where would I invest my time and effort in trying to breach security? Individual companies one by one, or the three big global cloud providers? If you think I am making this up, see my references below. Yes, cloud providers invest heavily in security, but the breaches that worry me are the ones they don't know about yet and that don't appear in the press.

When bank robber Willie Sutton was asked "why do you rob banks?" he answered "because that is where the money is". If we move RIPE functions to the cloud, don't be surprised 3-4 years from now when objects suddenly get changed and no one can understand just how that happened.

-Hank

References:
https://techmonitor.ai/techonology/cloud/aws-servers-hacked-rootkit-in-the-c...
https://www.bankinfosecurity.com/crypto-keys-stolen-from-amazon-cloud-a-8581
https://www.washingtonpost.com/national-security/russia-hack-microsoft-cloud...
https://securityaffairs.co/wordpress/101015/data-breach/cloud-db-exposes-200...
Before making a decision, worth the read:

https://a16z.com/2021/05/27/cost-of-cloud-paradox-market-cap-cloud-lifecycle...

-Hank
Hank Nussbacher wrote on 30/06/2021 09:17:
Before making a decision, worth the read: https://a16z.com/2021/05/27/cost-of-cloud-paradox-market-cap-cloud-lifecycle...
that makes sense in the context of the companies that the article examined, which is to say companies where the annual spend on cloud services runs into tens of millions of dollars. At that level, there's a clear argument that economy of scale can be achieved by in-housing. If the RIPE NCC had the same level of compute requirements as those companies, e.g. Dropbox, Palantir or Zoom, I'd totally be solutioneering an in-house option from my comfy armchair.

Nick
On 30 Jun 2021, at 11:08, Nick Hilliard wrote:
Hank Nussbacher wrote on 30/06/2021 09:17:
Before making a decision, worth the read: https://a16z.com/2021/05/27/cost-of-cloud-paradox-market-cap-cloud-lifecycle...
that makes sense in the context of the companies that the article examined, which is to say companies where the annual spend on cloud services runs into tens of millions of dollars. At that level, there's a clear argument that economy of scale can be achieved by in-housing. If the RIPE NCC had the same level of compute requirements as those companies, e.g. Dropbox, Palantir or Zoom, I'd totally be solutioneering an in-house option from my comfy armchair.
Exactly my first impression. On second thought, I read this article very much as a strong argument to both recognise and avoid vendor lock-in, for other reasons besides cost. Quite a few non-cost-related reasons have been discussed here already. This holds true also for companies of smaller size than the examples in the article. Of course, one should probably not buy kit and rent colo when all one currently has is the proverbial garage and an idea. The RIPE NCC is a little further along after 30 years.

Have a nice weekend and stay healthy,

Daniel
On 23/06/2021 07:30, Hank Nussbacher wrote:
When bank robber Willie Sutton was asked "why do you rob banks?" he answered "because that is where the money is". If we move RIPE functions to the cloud, don't be surprised 3-4 years from now when objects suddenly get changed and no one can understand just how that happened.
https://www.techradar.com/news/this-hyper-v-vulnerability-could-plague-micro...
https://www.securityweek.com/researchers-publish-details-recent-critical-hyp...

"The security issue that Guardicore Labs (in collaboration with SafeBreach Labs) discovered was likely in production for more than a year, as it first appeared in a vmswitch build in August 2019. The vulnerability affects Windows 7, 8.1 and 10 and Windows Server 2008, 2012, 2016 and 2019. An attacker with an Azure virtual machine could exploit the security bug by sending a crafted packet to the Hyper-V host. This could have resulted in the attacker running code on the Hyper-V host and potentially taking down entire regions of the cloud."

-Hank
On 22 Jun 2021, at 23:50, Nick Hilliard wrote:
Patrik Fältström wrote on 22/06/2021 21:23:
I personally think deployment "in the cloud" implies an ability to deploy services in such a way (orchestration) that resources are dynamically allocated depending on, for example, load. This implies automatic scaling and dynamic allocation of storage, memory and CPU.
"the cloud" just means someone else's computer. There are plenty of reasons to use this as a procurement model, e.g. outsourcing of specific types of headaches, ease of scale, connectivity, insta-cdn functionality, etc. But in the end it's rental of infrastructure and specific infrastructure-related competencies, and it's a legitimate business decision to deploy infrastructure in this way.
My point is exactly the opposite: that "in the cloud" implies dynamic scaling of resources in the most optimal way possible. That one uses some orchestration (quite often based on Kubernetes today) to manage the matching between available resources and need. For economic reasons, the most effective way is quite often to orchestrate things primarily on your own silicon, but to scale out when needed into resources someone else has invested in; you "just" rent the resources when you need them, quite often at a higher cost than paying off your own silicon. So I specifically disagree with the statement that "the cloud is someone else's computer". It is a modern, very efficient technology for managing compute resources.
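To make the economics Patrik describes concrete, here is a purely illustrative sketch. The capacities and prices are invented, and a real orchestrator (e.g. Kubernetes with a cluster autoscaler) applies a far more refined version of this rule, but the principle is "fill the silicon you own first, rent only the overflow":

    # Illustrative only (invented numbers): place load on owned capacity
    # first, and burst the overflow to rented cloud capacity.
    OWNED_CAPACITY = 100     # compute units already paid for
    OWNED_UNIT_COST = 0.2    # marginal cost per unit on owned hardware
    RENTED_UNIT_COST = 1.0   # per-unit price of rented burst capacity

    def place_load(demand):
        on_prem = min(demand, OWNED_CAPACITY)
        burst = max(0, demand - OWNED_CAPACITY)
        cost = on_prem * OWNED_UNIT_COST + burst * RENTED_UNIT_COST
        return on_prem, burst, cost

    for demand in (60, 100, 160):
        on_prem, burst, cost = place_load(demand)
        print(f"demand={demand:3}: on-prem={on_prem}, burst={burst}, cost={cost:.1f}")

Under these assumed prices, owned capacity stays the cheap baseline and rented capacity is only worth paying for during the extreme-load peaks, which is exactly why the orchestration mechanism, not the provider, is the thing to keep neutral.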
but to be honest, this is what I always expect the Managing Director to do, and I do not think we members should micromanage.
Pretty much this. We don't tell the RIPE NCC legal or finance or outreach or any other departments how to do their jobs. The NCC is an executive and it needs the freedom to be able to make its own operational decisions. Just because something happens to be within our area of expertise as stakeholders, that doesn't mean that we should start back-seat driving because they made a decision to go one way that someone else might have made to go another.
Well written!
With regard to the RIPE Database and the RPKI repo, it doesn't look like there are any specific legal issues that haven't been considered. All of this information is publicly accessible anyway. There may well be a different set of considerations for other types of data.
Agree.
Personally, I think the Managing Director and his staff are doing good work and I see no reason whatsoever to question the decisions they have made. That said, in some cases I am curious to know what decisions they have made, as I may have the same or similar deliberations to make at Netnod and might want to know more. But I do not question them.
So, many things are intertwined, and what is specifically mixed up is "us members being curious" versus "us members actually wanting to provide input".
From a technical standpoint, I think the most important thing for "cloud" is to choose interoperable solutions, so that migration from one cloud to another is possible, or at least as easy as possible. Including on-prem clouds.
yep all this, in spades ^^^
Good!
There are two important takeaways here: 1. ensuring service stability, and 2. ensuring that long-term business continuity isn't compromised (e.g. by vendor lock-in). Once these requirements are fulfilled, it's great to get an inside view of what the NCC's plans are.
+1 Patrik
Nick,

On 22/06/2021 23.50, Nick Hilliard wrote:
Patrik Fältström wrote on 22/06/2021 21:23:
With regard to the RIPE Database and the RPKI repo, it doesn't look like there are any specific legal issues that haven't been considered. All of this information is publicly accessible anyway. There may well be a different set of considerations for other types of data.
I don't think that it is okay to say "this information is publicly accessible anyway". On a RIPE Database or RPKI server there is metadata about *who* is accessing the database, including timestamps, source addresses, and possibly other data. There is also metadata about *what queries* are made to the database. There are also things to be learned about replication delays between servers, and surely a lot more that might be of interest to creative folks.

I don't know about now, but at one point there were firewalls and/or intrusion-detection systems that would query the RIPE Database to give the admin information about the source of suspicious traffic. An attacker trying to penetrate a network might be able to identify which security products were in use if given unrestricted access to WHOIS query logs. I'm not saying this is a likely scenario; I'm saying we should be cautious about declaring access to data safe. Humans (and increasingly AI) are ingenious about ways to use systems in unintended ways.

As a thought experiment to demonstrate the idea: how would you feel about a proposal to provide public access to the complete system logs of all RIPE Database servers? If that makes you nervous in any way - and I think it should! - then this is exactly why we should consider the operators hosting RIPE Database (and RPKI) resources important.

Cheers,

--
Shane
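To illustrate Shane's fingerprinting scenario, here is a minimal sketch. The log format and all addresses below are invented for illustration; they are not the actual RIPE Database logs. The point is that even a bare (timestamp, source, query) record lets an attacker who controls a probe address spot which networks look that address up automatically, and hence which of them run such security tooling.

    # Hypothetical: this log format and these addresses are invented for
    # illustration; they are NOT the real RIPE Database query logs.
    from datetime import datetime, timedelta

    # (timestamp, source address of the querier, address being looked up)
    query_log = [
        (datetime(2021, 6, 23, 10, 0, 1), "192.0.2.10",   "203.0.113.77"),
        (datetime(2021, 6, 23, 10, 0, 2), "192.0.2.10",   "203.0.113.77"),
        (datetime(2021, 6, 23, 11, 30, 0), "198.51.100.5", "203.0.113.77"),
    ]

    # The attacker probed targets from 203.0.113.77 at 10:00:00. Any source
    # that looks that address up within seconds is likely an automated
    # IDS/firewall doing reputation lookups -- and is now fingerprinted.
    probe_time = datetime(2021, 6, 23, 10, 0, 0)
    probe_addr = "203.0.113.77"

    automated = {src for ts, src, q in query_log
                 if q == probe_addr
                 and timedelta(0) <= ts - probe_time <= timedelta(seconds=10)}
    print("likely automated responders:", automated)  # {'192.0.2.10'}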
FYI, APNIC left a dump from its Whois SQL database in a public Google Cloud bucket:

https://www.theregister.com/2021/06/22/apnic_whois_data_exposed/

"The Asia Pacific Network Information Centre (APNIC), the internet registry for the region, has admitted it left at least a portion of its Whois SQL database, which contains sensitive information, facing the public internet for three months."

Jome

----
jorma@jmellin.net

Quoting Shane Kerr <shane@time-travellers.org>:
Nick,
On 22/06/2021 23.50, Nick Hilliard wrote:
Patrik Fältström wrote on 22/06/2021 21:23:
With regard to the RIPE Database and the RPKI repo, it doesn't look like there are any specific legal issues that haven't been considered. All of this information is publicly accessible anyway. There may well be a different set of considerations for other types of data.
I don't think that it is okay to say "this information is publicly accessible anyway". On a RIPE Database or RPKI server there is metadata about *who* is accessing the database, including timestamps, source addresses, and possibly other data. There is also metadata about *what queries* are made to the database. There are also things to be learned about replication delays between servers, and surely a lot more that might be of interest to creative folks.
I don't know about now, but at one point there were firewalls and/or intrusion-detection systems that would query the RIPE Database to give the admin information about the source of suspicious traffic. An attacker trying to penetrate a network might be able to identify which security products were in use if given unrestricted access to WHOIS query logs. I'm not saying this is a likely scenario; I'm saying we should be cautious about declaring access to data safe. Humans (and increasingly AI) are ingenious about ways to use systems in unintended ways.
As a thought experiment to demonstrate the idea: how would you feel about a proposal to provide public access to the complete system logs of all RIPE Database servers? If that makes you nervous in any way - and I think it should! - then this is exactly why we should consider the operators hosting RIPE Database (and RPKI) resources important.
Cheers,
-- Shane
Colleagues,

You beat me to it, Shane. I was thinking the same (and more) last night. There are several issues to consider here that have not yet been mentioned, and Shane pointed some of them out.

First of all, let me suggest that we drop this thought that this is simply a matter of moving the public RIPE Database to a different platform. It is much more than that. The RIPE Database is a complex system. It is not just the data the public sees. So what else is it?

Shane mentioned the log files. Every operation on the RIPE Database is logged in fine detail. So for an update, details are stored of who did what to which bit of data, at what time, from where, using which authorisation method and whose specific authentication tokens, and the outcome of that update request. I am not sure if the clear-text password is still logged for email updates containing one, or if the passwords are stripped out. Who is notified of these changes is also logged. None of this data is public, and a court order is needed to access it. For queries, details of who queries for what and when are also logged. This is also not public information.

The RIPE NCC's proposals and impact analyses make no mention of these log files. Will they all be stored in the cloud in this future scenario? Will updates still be sent to the RIPE NCC for logging and pre-processing, with only the database changes sent to the cloud? Even if logs are downloaded by the NCC daily and deleted from the cloud, they may still exist in cloud backups. Any kind of disassociation between the object data and these log files would be complex.

Then there is data history. This is built into the fundamental database design and architecture. Every version of every object ever created in the last 20 years is an integral part of the database. Historical queries only allow public access to limited amounts of operational data. The full history of all personal data, organisational data, security management, even forward domains, still exists as an integral part of the database. To separate this out would require a significant and major re-design of the database structure and operation.

The RIPE Database never forgets anything or anyone. Many domain registries used the database as their primary domain registry in the past. They may think all that data has long since gone. But the database never forgets. I have had domains for 20+ years. If that data was in the database, it is still there and still correct.

Any new features or purposes added to the RIPE Database in the future would also have to take into account the legal jurisdiction of the data. Moving the 'RIPE Database' into a legal jurisdiction outside of the EU has many consequences if foreign governments have powers to access this data.

So this is not just a matter of moving public data to 'someone else's computer'. It is not a matter of trying to micro-manage the technical operations of the RIPE NCC. This proposal has significant legal, political and policy consequences.

cheers
denis
co-chair DB-WG

On Wed, 23 Jun 2021 at 09:19, Shane Kerr <shane@time-travellers.org> wrote:
Nick,
On 22/06/2021 23.50, Nick Hilliard wrote:
Patrik Fältström wrote on 22/06/2021 21:23:
With regard to the RIPE Database and the RPKI repo, it doesn't look like there are any specific legal issues that haven't been considered. All of this information is publicly accessible anyway. There may well be a different set of considerations for other types of data.
I don't think that it is okay to say "this information is publicly accessible anyway". On a RIPE Database or RPKI server there is metadata about *who* is accessing the database, including timestamps, source addresses, and possibly other data. There is also metadata about *what queries* are made to the database. There are also things to be learned about replication delays between servers, and surely a lot more that might be of interest to creative folks.
I don't know about now, but at one point there were firewalls and/or intrusion-detection systems that would query the RIPE Database to give the admin information about the source of suspicious traffic. An attacker trying to penetrate a network might be able to identify which security products were in use if given unrestricted access to WHOIS query logs. I'm not saying this is a likely scenario; I'm saying we should be cautious about declaring access to data safe. Humans (and increasingly AI) are ingenious about ways to use systems in unintended ways.
As a thought experiment to demonstrate the idea: how would you feel about a proposal to provide public access to the complete system logs of all RIPE Database servers? If that makes you nervous in any way - and I think it should! - then this is exactly why we should consider the operators hosting RIPE Database (and RPKI) resources important.
Cheers,
-- Shane
-----Original message-----
There are two important takeaways here: 1. ensuring service stability, and 2. ensuring that long-term business continuity isn't compromised (e.g. by vendor lock-in). Once these requirements are fulfilled, it's great to get an inside view of what the NCC's plans are.
During the previous discussion thread on this mailing list, I mentioned the risk of loss of skill/proficiency; I haven't seen any statements about that in the newly published Labs article. This could have a direct impact on business continuity.

The original article mentioned that the engineers already have worries about maintaining the current infrastructure. Instead of helping them to the next level, the NCC is planning to take away their whole playground and hand it over to some cloud provider who will now do that job. Over time, as more services are migrated to the cloud, the skill-set required for self-hosted services will fade, and engineers might even get bored and leave for other employers where they can build and maintain their own infra again. In the end, finding the competence needed for the NCC to take everything back in-house, when said cloud provider can no longer deliver, will be a PITA.

Once more, I would like to point to a very interesting and eye-opening presentation by Bert Hubert about this very topic: https://www.youtube.com/watch?v=PQccNdwm8Tw (transcript: https://berthub.eu/articles/posts/how-tech-loses-out/ )
Once more, I would like to point to a very interesting and eye-opening presentation by Bert Hubert about this very topic: https://www.youtube.com/watch?v=PQccNdwm8Tw (transcript: https://berthub.eu/articles/posts/how-tech-loses-out/ )
brilliant piece. very strongly recommend reading, separate from this particular discussion.

randy

---
randy@psg.com
`gpg --locate-external-keys --auto-key-locate wkd randy@psg.com`
signatures are back, thanks to dmarc header butchery
participants (9)
- Daniel Karrenberg
- denis walker
- Hank Nussbacher
- jorma@jmellin.net
- Michiel Klaver
- Nick Hilliard
- Patrik Fältström
- Randy Bush
- Shane Kerr