add user stories for disk support #1681
Conversation
> #### Hypershift / Hosted Control Planes
>
> This proposal does not affect HyperShift.
> HyperShift does not leverage Machine API.
Hosted Control Planes do support the ability to inject a MachineConfig into the NodePool definition on the management cluster. For the use cases described above, it seems like we should be able to provide the same capability for either form factor, and customers (particularly in the swap scenario) would benefit from ensuring we have coverage for both form factors.
@derekwaynecarr what would be the motivation for HCP swap? We were thinking that swap should only be enabled on the worker nodes. I don't see a benefit of swap on control plane nodes at the moment.
> Hosted Control Planes do support the ability to inject a MachineConfig into the NodePool definition on the management cluster. For the use cases described above, it seems like we should be able to provide the same capability for either form factor, and customers (particularly the swap scenario) would benefit ensuring we have coverage for both form factors.
I should look more into this. Do you have a link or a design doc handy for this?
The motivation is to ensure that you can enable the desired disk layout for swap on the worker nodes that join a Hosted Control Plane. Those worker nodes are configured via the NodePool abstraction on the management cluster which supports the ability to inject a MachineConfig.
For customers exploring OpenShift Virtualization to support their virtualization workloads, we see a lot of interest in running OpenShift Virtualization on a cluster that uses the HCP form factor with bare-metal workers, in order to reduce the number of physical nodes needed to host the control planes. This matters if you have a large number of virtual machines and therefore need multiple clusters to support virtualization in a given data center.
See the example documented here:
https://hypershift-docs.netlify.app/how-to/automated-machine-management/configure-machines/
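Per the linked HyperShift docs, MachineConfig manifests are supplied to worker nodes by wrapping them in a ConfigMap on the management cluster and referencing that ConfigMap from the NodePool. A minimal sketch of the pattern follows; all names, the namespace, and the file contents here are illustrative assumptions, not values from this PR:

```yaml
# Hypothetical ConfigMap wrapping a MachineConfig; the manifest goes
# under the "config" key and lives in the same namespace as the NodePool.
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-machine-config      # illustrative name
  namespace: clusters              # illustrative namespace
data:
  config: |
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 99-worker-example
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
          - path: /etc/example.conf   # illustrative file
            mode: 0644
            contents:
              source: data:,example
---
# The NodePool then references the ConfigMap by name in spec.config.
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: example-nodepool           # illustrative name
  namespace: clusters
spec:
  config:
  - name: custom-machine-config
```

The key point for this design is that whatever disk-layout configuration is produced for standalone clusters could, in principle, be delivered to HCP workers through this same ConfigMap-plus-NodePool channel.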
Thank you for the link. We will make sure to consider NodePool in this design.
> ## Motivation
>
> Custormers request the ability to add disks for day 0 and day 1 operations. Some of the common areas include designed disk for etcd, dedicated disk for swap partitions, container runtime filesystem, and a separate filesystem for container images.
> Custormers request the ability to add disks for day 0 and day 1 operations. Some of the common areas include designed disk for etcd, dedicated disk for swap partitions, container runtime filesystem, and a separate filesystem for container images.

Suggested change: "Customers request the ability to add disks for day 0 and day 1 operations. Some of the common areas include designed disk for etcd, dedicated disk for swap partitions, container runtime filesystem, and a separate filesystem for container images."
Is it worth defining days 0, 1, and 2?
> ### Goals
>
> TBD
- Define a common interface for infrastructure platforms to implement to use additional disks for a defined set of specific uses
- Implement common behaviour to safely use the above disks when they have been presented by the infrastructure platform
> ### Non-Goals
>
> - Adding disk support in CAPI providers where it is not supported upstream
- Adding generic support for mounting arbitrary additional disks
|
||
Custormers request the ability to add disks for day 0 and day 1 operations. Some of the common areas include designed disk for etcd, dedicated disk for swap partitions, container runtime filesystem, and a separate filesystem for container images. | ||
|
||
All of these features are possible to support through a combination of machine configs and machine API changes. |
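As an illustration of the machine-config side of that combination, a dedicated swap disk can be activated today with a MachineConfig carrying a systemd swap unit. This is a hedged sketch only: the device path, unit name, and MachineConfig name below are illustrative assumptions, and enabling swap for workloads additionally requires kubelet-level configuration not shown here:

```yaml
# Hypothetical MachineConfig enabling swap on a dedicated disk.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-swap             # illustrative name
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      # systemd swap units must be named after the device path
      # (/dev/nvme1n1 -> dev-nvme1n1.swap); the device is illustrative.
      - name: dev-nvme1n1.swap
        enabled: true
        contents: |
          [Unit]
          Description=Swap on dedicated disk

          [Swap]
          What=/dev/nvme1n1

          [Install]
          WantedBy=multi-user.target
```

The gap this proposal targets is the machine API half: ensuring the infrastructure platform attaches such a disk in the first place, so that configuration like the above has a device to point at.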
Speaking of definitions, I wonder if it's worth defining 'infrastructure platform' (or some better term for the same thing). Something like: "A platform-specific combination of machine config and machine API configuration"?
@kannon92: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Inactive enhancement proposals go stale after 28d of inactivity. See https://github.com/openshift/enhancements#life-cycle for details. Mark the proposal as fresh by commenting /remove-lifecycle stale. If this proposal is safe to close now please do so with /close. /lifecycle stale
No description provided.