Extend KBS to provide the resources required to create an encrypted overlay network #396
Comments
Thanks for the writeup. The plugin architecture sounds intriguing and like something that could evolve into a notion of a "virtual resource" as an abstraction over all sorts of confidential resources. At the moment, pragmatically, such a resource simply maps 1:1 to a file on a local fs, without much regard to (distributed) state management yet. Re 3): since this is a rather invasive change and probably not something that could be introduced in the short term, have you considered implementing a specific API service to generate those ad-hoc secrets? It could be enlightened to support KBS attestation tokens (by using KBS code as a library) and thus remote attestation via RCAR.
This is an interesting point. Supporting a Nebula CA seems to represent a broader need for KBS: after a successful RCAR protocol, the trustee needs to return a certificate to the client, and the endorsement of this certificate comes from the TCB of the user/trustee. Nebula could then use the certificate as a key-exchange/TLS cert to encrypt the data flow between pods. In other scenarios, this cert could be used as a "client identity" to access resources.
It seems likely that we'll need to support more than just the localfs backend in the long term. Currently we allow people to swap out localfs for something else at build time, but nothing else is supported and you can only have one backend. We will probably want some framework for supporting different types of resources at the same time. As @cclaudio suggests, we could also extend the KBS protocol to cover things other than resources, but I think it's probably easier and more scalable to treat everything as a resource. If we can settle on a framework, we could use that as a basis for building PKI. I'm not sure if Nebula will be able to piggy-back on a generic PKI implementation or if it would still need its own plugin.
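As a rough sketch of what such a framework might look like (all names here are hypothetical, not the current KBS API), each backend could implement a common trait and a registry could select one per request, e.g. by the repository component of the URI:

```rust
use std::collections::HashMap;

use anyhow::Result;
use async_trait::async_trait;

/// Parsed form of /kbs/v0/resource/<repository>/<type>/<tag>.
pub struct ResourceDesc {
    pub repository: String,
    pub resource_type: String,
    pub tag: String,
}

#[async_trait]
pub trait ResourceBackend: Send + Sync {
    /// Return the resource bytes; a dynamic backend may generate them on the fly.
    async fn read_resource(&self, desc: &ResourceDesc) -> Result<Vec<u8>>;
}

/// One possible framework: several backends registered at the same time,
/// selected per request by the repository component of the URI.
pub struct BackendRegistry {
    backends: HashMap<String, Box<dyn ResourceBackend>>,
    default_backend: Box<dyn ResourceBackend>, // e.g. the existing local-fs store
}

impl BackendRegistry {
    pub async fn get(&self, desc: &ResourceDesc) -> Result<Vec<u8>> {
        let backend = self
            .backends
            .get(&desc.repository)
            .unwrap_or(&self.default_backend);
        backend.read_resource(desc).await
    }
}
```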
(We had a similar train of thought when looking at this; could be appealing, but we've tabled it for now.)
Can you clarify a bit here? This looks non-trivial to me, at least compared with cclaudio's extension of the KBS resource structs and functions. At first I imagined you meant that the same token from the KBS could be reused to fetch more resources from this new nebula service; but maybe you mean that this new nebula service would just have similar RCAR facilities (and a new session and token) when a workload wants to connect to it and join the secure mesh; or maybe it's something else you have in mind?
There's a curiosity in the code related to this exact comment: "repository" takes on two different meanings. There's "Repository", which is a trait/interface here; but there's also the "repository" component passed as part of the resource URI.
@portersrc really good point about the repository. Maybe we should just use the repository field of the resource URI to select which repository backend you want to use; we could assign a reserved repository name for that. If we don't do that, we should rename the Repository trait because it is potentially confusing.
Sounds good. Maybe we could restart this thread. Logically, once initdata is implemented, we could have a way to inject something into the guest to work as a client id for registering a certificate. Hopefully we can converge on a design for this :)
I agree.
Ah, I meant that there could be a lightweight nebula-specific kbs that re-uses kbs_protocol and has its own notion of resources. However, I like the idea of making "/$resources" abstract instead of prescribing a hierarchy that attempts to map to all potential use cases. That said, I don't think it's a good idea to map resource URIs to implementation details like the storage backend; that should be of no concern to the API client (unless picking storage tiers is a feature of the API, but I wouldn't recommend going there).
Got it. Yes, this could work. Given that KBS already has the notion of multiple backing stores, that it only supports local-fs right now, and that the integration between nebula resources and KBS' get-resource endpoint is relatively straightforward, I feel there's a cleaner path right now that doesn't involve splitting out a separate nebula-specific KBS.
Good point. Does this mean that you prefer the idea of using a ...?
If we want to include something like Nebula, we would need to cover the use case in the API at least. Currently, the path triplet /repo/name/tag is passed to rego policy evaluation in KBS, so we would have to consider dynamic parts (?ip=1.2.3.4) too, or generalize the resource descriptor somehow. Also, if Nebula resources need a TTL, we need a component managing the lifecycle of resources, ...
Yes, I think both, maybe: pull the extra params and put them into the resource descriptor, and make sure those get passed to the policy engine. Then it's up to the user whether they want a rego policy that restricts those params in some way.
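For illustration, a minimal sketch (hypothetical function and field names, not the actual KBS policy-engine interface) of folding the extra query parameters into the JSON input that the rego policy evaluates:

```rust
use std::collections::HashMap;

use serde_json::{json, Value};

/// Fold the resource path and any extra query parameters into the JSON input
/// that is handed to the rego policy for evaluation.
fn build_policy_input(resource_path: &str, query_params: &HashMap<String, String>) -> Value {
    json!({
        "resource-path": resource_path, // e.g. "plugin/nebula/credential"
        "query-params": query_params,   // e.g. {"ip": "10.11.12.13", "name": "pod1"}
    })
}
```

A rego policy could then allow or deny based on those query-params values, entirely at the user's discretion.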
I think that's right, but I didn't imagine that as (necessarily) being part of KBS or its ... @mkulke, just on the question of a more generic interface that "solves" the backing store question, do you have any strong opinions there? Setting aside nebula, how would you prefer to support multiple backends/plugins: as plugins that could be added in addition to the repository backend (presumably triggered by a reserved word somewhere in the resource URI), or as different backends (where some part of the resource URI would always correspond to the backend)?
I think we should distinguish between the API, that is conceptual considerations (e.g. kbs-prescribed hierarchies/ontologies), and storage implementation details (where/how the data is stored). I'd ignore the latter for the time being. E.g. on s3 buckets there is a flexible notion of "path": it's just part of an object's key that happens to contain "/"s, so a user can organize their secrets in a conventional hierarchy, but we don't have to reflect that in the storage. I would also not recommend ignoring HTTP semantics; if we want to do arbitrary RPC we should use gRPC. For the suggestion at hand, I could imagine a specific handler per store (a rough sketch follows below).
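A rough sketch of the per-store-handler idea, with purely illustrative names (this is not an existing KBS interface):

```rust
use std::collections::HashMap;

use anyhow::Result;
use async_trait::async_trait;

/// Parameters in a normalized form that can also be handed to the policy engine.
pub type NormalizedParams = HashMap<String, String>;

#[async_trait]
pub trait StoreHandler: Send + Sync {
    /// The route served by this store, e.g. "/kbs/v0/resource" for the file-backed
    /// store or a dedicated path for a credential-generating store.
    fn route(&self) -> &'static str;

    /// Store-specific extraction of path segments and query-string parameters.
    fn extract_params(&self, path: &str, query: &str) -> Result<NormalizedParams>;

    /// Produce the response body once attestation and policy checks have passed.
    async fn handle(&self, params: NormalizedParams) -> Result<Vec<u8>>;
}
```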
In this model each store would implement its own parameter extractor and provide the parameters in some normalized form to the policy engine. The other option would be to generalize the current scheme (sketched below).
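A sketch of that generalized scheme, again with made-up names:

```rust
use std::collections::HashMap;

use anyhow::{bail, Result};
use async_trait::async_trait;

pub enum Verb {
    Get,
    Post,
}

/// Generalized request descriptor, extracted once and also given to the policy engine.
pub struct GenericRequest {
    pub verb: Verb,
    pub path: Vec<String>,                     // e.g. ["nebula", "credential"]
    pub query_params: HashMap<String, String>, // e.g. {"name": "pod1"}
    pub body: Option<Vec<u8>>,
}

#[async_trait]
pub trait Store: Send + Sync {
    async fn handle(&self, req: GenericRequest) -> Result<Vec<u8>>;
}

/// A store that only generates credentials needs GET and can reject other verbs.
pub struct CredentialStore;

#[async_trait]
impl Store for CredentialStore {
    async fn handle(&self, req: GenericRequest) -> Result<Vec<u8>> {
        match req.verb {
            Verb::Get => Ok(Vec::new()), // placeholder: build and return the credential
            Verb::Post => bail!("this store does not support POST"),
        }
    }
}
```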
We'd extract the path and prescribe some semantics for query_params and body, which we can then provide to the policy engine. Not every store would need to implement all verbs. I think I favor the first option, with specialized handlers for individual stores. Custom stores we could include at build time behind a feature flag from a ...
I am not a Nebula expert, but I think the key generation functionality that the plugin in question would expose is not stateful. It simply requires more input parameters than would fit in the resource URI, hence the query string. Nebula has another component, the lighthouse, that keeps track of connections, but this plugin is just for generating certs. (There is state involved in keeping track of the private key and such, but it is not updated by requests from guests.) Overall, this plugin or virtual-resource idea would still need to conform to the semantics of get_resource, meaning that it would only be used for getting resources (although some would be dynamically generated). I don't think this proposal is trying to support stateful plugins. Keep in mind that today there is no support for POSTing resources to any of the backends from the guest; this sort of configuration can only be done by the KBS operator.
I'm not sure I understand the details here. Are you suggesting adding an additional element to the path (after resource, but before the resource_uri) that would point to a different "store"? It seems like this would make things more complicated in the guest. How would the guest know which store contains which resources? The resource URI is supposed to be a universal locator of a resource, so it seems like the plugin/store information should be included in it.
Integration with the policy engine seems trivial for any of the approaches suggested.
Ohh, thanks for clearing this up! It's actually mentioned in the original description; I somehow skipped over that. If we don't have to think about state in this context, it should be relatively straightforward: we just need a facility to parameterize a resource that's more flexible than the /repo/name/tag triplet.
The idea was to turn the ...
The nebula plugin can be used to deliver credentials for nodes (confidential PODs or VMs) to join a Nebula overlay network. Within the nebula network, the communication between nodes is automatically encrypted by Nebula.

A nebula credential can be requested using the kbs-client:

kbs-client --url http://127.0.0.1:8080 \
  get-resource \
  --path 'plugin/nebula/credential?ip[ip]=10.11.12.13&ip[netbits]=21&name=pod1'

At least the IPv4 address (in CIDR notation) and the name of the node must be provided in the query string. The other supported parameters can be found in the struct NebulaCredentialParams.

After receiving a credential request, the nebula plugin will call the nebula-cert binary to create a key pair and sign a certificate using the Nebula CA. The generated node.crt and node.key, as well as the ca.crt, are then returned to the caller.

During the nebula-plugin initialization, a self-signed Nebula CA can be created if 'ca_generation_policy = 1' in nebula-config.toml; the file contains all supported parameters. Another option is to pre-install a ca.key and ca.crt and set 'ca_generation_policy = 2'.

The nebula-plugin cargo feature is set by default; however, the plugin itself is not initialized by default. In order to initialize it, you need to add 'nebula' to 'manager_plugin_config.enabled_plugins' in kbs-config.toml.

Closes confidential-containers#396

Signed-off-by: Claudio Carvalho <[email protected]>
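For reference, a simplified sketch of what the plugin's call into the nebula-cert binary could look like (paths are placeholders and error handling is trimmed; the flag names follow upstream nebula-cert sign usage and should be checked against the shipped version):

```rust
// Simplified sketch: shell out to nebula-cert to sign a node certificate.
// Paths are placeholders; verify the flag names against the installed nebula-cert.
use std::process::Command;

use anyhow::{bail, Result};

fn sign_node_cert(name: &str, ip_cidr: &str, work_dir: &str) -> Result<()> {
    let status = Command::new("nebula-cert")
        .current_dir(work_dir)
        .args([
            "sign",
            "-ca-crt", "ca.crt",
            "-ca-key", "ca.key",
            "-name", name,      // e.g. "pod1"
            "-ip", ip_cidr,     // e.g. "10.11.12.13/21"
            "-out-crt", "node.crt",
            "-out-key", "node.key",
        ])
        .status()?;
    if !status.success() {
        bail!("nebula-cert sign failed with {status}");
    }
    Ok(())
}
```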
Sorry to share some late ideas. I think this would be a practical choice, and we could follow it with some changes that would help the architecture of KBS. Like PR #451, we could have a ... and the plugin ...
This is a follow-up on the encrypted overlay network support we proposed in the May 9, 2024 meeting to secure data in transit between CoCo PODs. We'd like to use this issue to discuss the plugin interface design and which implementation option seems more appropriate.
Background
The overlay network would be created using Nebula.
In Nebula, a node can join the overlay network only if it provides a valid credential generated by the Nebula Certificate Authority (CA); the node can request a credential from the Nebula CA by providing the IP address and the name it wants to have in the Nebula overlay network. The node can later use the returned certificate and key to join the overlay network.
For example, in a deployment with 10 CoCo PODs, the Nebula overlay network would have 10 CoCo PODs and one Nebula Lighthouse (assumed to already exist). Each POD would have two network interfaces: one created by Kubernetes and another created by Nebula. The traffic sent through the Nebula interface is automatically encrypted by Nebula.
Proposed design related to KBS
Plugin interface requirements
Implementation options for the plugin interface
1. Re-use the existing KBS resource URI
/kbs/v0/resource/<repository>/<type>/<tag>
Pros: May have less duplicated code. Re-use of the KBS resource URI
Cons: It may look like a hack. The plugin repository may need to be reserved.
The get-resource() function would do the RCAR dance as necessary and then, if repository=plugin, the other two fields would actually be interpreted as /kbs/v0/resource/plugin/<plugin_name>/<plugin_action>. The <plugin_action> would be forwarded to the appropriate <plugin_name>.

The plugin interface could be implemented as RepositoryConfig::Plugin, but for that we would need to properly select between Plugin and LocalFs when a KBS get-resource call is received. As a RepositoryConfig::Plugin, it could be initialized in the same function as LocalFs.

If the <plugin_action> requires multiple parameters: /kbs/v0/resource/plugin/<plugin_name>/<plugin_action>?ip=10.10.10.10&name=pod1. If that is not supported by the KBS protocol v0, we could try to replace the ? with something non-standard such as ;.
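A sketch of the repository=plugin dispatch described above, with hypothetical function names (the current code has no such branch):

```rust
// Hypothetical sketch: inside get-resource, after RCAR succeeds, interpret the
// "plugin" repository specially and route to the named plugin.
use anyhow::{bail, Result};

async fn read_resource(
    repository: &str,
    resource_type: &str, // for repository == "plugin", this is the <plugin_name>
    tag: &str,           // ... and this is the <plugin_action> (plus any query string)
) -> Result<Vec<u8>> {
    if repository == "plugin" {
        dispatch_to_plugin(resource_type, tag).await
    } else {
        read_from_local_fs(repository, resource_type, tag).await
    }
}

async fn dispatch_to_plugin(plugin_name: &str, plugin_action: &str) -> Result<Vec<u8>> {
    match plugin_name {
        // Placeholder: hand <plugin_action> to the nebula plugin and return the
        // generated credential bundle.
        "nebula" => Ok(format!("nebula would handle: {plugin_action}").into_bytes()),
        _ => bail!("unknown plugin: {plugin_name}"),
    }
}

async fn read_from_local_fs(_repo: &str, _type: &str, _tag: &str) -> Result<Vec<u8>> {
    bail!("placeholder: existing LocalFs lookup")
}
```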
2. Add a new URI for plugins, e.g.
/kbs/v0.1/plugin/<plugin_name>/<plugin_action>
Pros: Have a proper URI for plugins. Plugin workflow is better isolated from get-resource.
Cons: The KBS protocol version may need to be increased to something like v0.1. May have some duplicated code compared to get-resource.
The plugin URI is not supposed to change the KBS state, so it would be accessible through the HTTP GET method.

The new plugin-handler() function would do the RCAR as needed (similar to get-resource) and then forward the <plugin_action> to the appropriate <plugin_name>. The plugin interface would have its own PluginConfig enum for plugin initialization rather than a new RepositoryConfig enum value.

Additional parameters for the <plugin_action> could be provided using an HTTP query string, as in the sketch below.
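For illustration, a sketch of the separate configuration and handler path, again with hypothetical names:

```rust
use std::collections::HashMap;

use anyhow::{bail, Result};

/// One variant per supported plugin; deserialized from the KBS config file.
pub enum PluginConfig {
    Nebula { nebula_config_path: String },
}

/// Handler for /kbs/v0.1/plugin/<plugin_name>/<plugin_action>?<query_string>.
/// RCAR / token validation is assumed to have happened already, as in get-resource.
pub async fn plugin_handler(
    plugin_name: &str,
    plugin_action: &str,
    query_params: HashMap<String, String>,
) -> Result<Vec<u8>> {
    match (plugin_name, plugin_action) {
        ("nebula", "credential") => {
            let _ip = match query_params.get("ip") {
                Some(ip) => ip,
                None => bail!("missing 'ip' parameter"),
            };
            // Placeholder: generate and return the Nebula credential bundle.
            Ok(Vec::new())
        }
        _ => bail!("unknown plugin or action: {plugin_name}/{plugin_action}"),
    }
}
```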
3. Other. Any thoughts?