Want mechanism to forcibly remove an instance's active VMMs irrespective of instance state #4004
Related: #3209
In #4194, sled agent's "instance unregister" API assumes that it can produce the correct posterior VMM and instance states by emulating a "Propolis destroyed" state transition. (That is, sled agent's "rudely terminate this instance" function computes the next state by pretending that it immediately got a message from the current VMM that says "I am destroyed and my current migration has failed.") This is fine today because the unregister API is only used when unwinding start and migrate sagas, where the VMMs that are subject to unregistration have by definition not gotten to do anything interesting yet. It's less fine if Propolis has already begun to run, and especially not fine if we're force-quitting an instance that's a migration target:
This seems dangerous. We probably want to adjust the synchronization here to be something more like the following:
This would need to be done in a saga to ensure the whole process runs to completion. More design work is needed here. We just need to do that work before we hook up any external APIs to the existing unregister path.
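As a rough illustration of the "emulate a Propolis destroyed transition" behavior described in the comment above, the rude-termination path might look something like the following. All names here are hypothetical stand-ins, not sled-agent's actual code; the sketch only shows why the computed posterior state can be wrong when the VMM was doing real work (for example, serving as a migration target):

```rust
// Hypothetical sketch of the "emulate a Propolis destroyed transition"
// approach; not sled-agent's actual implementation.

#[derive(Clone, Copy, Debug, PartialEq)]
enum VmmState {
    Starting,
    Running,
    Migrating,
    Stopping,
    Destroyed,
}

#[derive(Clone, Copy, Debug, PartialEq)]
enum MigrationState {
    InProgress,
    Completed,
    Failed,
}

#[derive(Clone, Debug)]
struct InstanceStates {
    vmm: VmmState,
    migration: Option<MigrationState>,
    generation: u64,
}

impl InstanceStates {
    /// Rude termination: compute the posterior state as if Propolis had
    /// just reported "I am destroyed and my current migration has
    /// failed", without asking the actual VMM anything. This is safe
    /// when the VMM never got to do real work (start/migrate saga
    /// unwind), but not necessarily once Propolis has been running, and
    /// especially not when this VMM is an active migration target.
    fn force_terminate(&mut self) {
        self.vmm = VmmState::Destroyed;
        if self.migration == Some(MigrationState::InProgress) {
            self.migration = Some(MigrationState::Failed);
        }
        self.generation += 1;
    }
}

fn main() {
    // Force-quitting a migration target: the emulated transition marks
    // the migration failed even though the source may believe it is
    // still proceeding (or has already succeeded).
    let mut target = InstanceStates {
        vmm: VmmState::Migrating,
        migration: Some(MigrationState::InProgress),
        generation: 1,
    };
    target.force_terminate();
    println!("posterior state: {:?}", target);
}
```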
Also related: #4872
A number of bugs relating to guest instance lifecycle management have been observed. These include:

- Instances getting "stuck" in a transient state, such as `Starting` or `Stopping`, with no way to forcibly terminate them (#4004)
- Race conditions between instances starting and receiving state updates, which cause provisioning counters to underflow (#5042)
- Instances entering and exiting the `Failed` state when nothing is actually wrong with them, potentially leaking virtual resources (#4226)

These typically require support intervention to resolve. Broadly, these issues exist because the control plane's current mechanisms for understanding and managing an instance's lifecycle state machine are "kind of a mess". In particular:

- **(Conceptual) ownership of the CRDB `instance` record is currently split between Nexus and sled-agent(s).** Although Nexus is the only entity that actually reads or writes to the database, the instance's runtime state is also modified by the sled-agents that manage its active Propolis (and, if it's migrating, its target Propolis), and written to CRDB on their behalf by Nexus. This means that there are multiple copies of the instance's state in different places at the same time, which can potentially get out of sync. When an instance is migrating, its state is updated by two different sled-agents, and they may potentially generate state updates that conflict with each other. And, splitting the responsibility between Nexus and sled-agent makes the code more complex and harder to understand: there is no one place where all instance state machine transitions are performed.
- **Nexus doesn't ensure that instance state updates are processed reliably.** Instance state transitions triggered by user actions, such as `instance-start` and `instance-delete`, are performed by distributed sagas, ensuring that they run to completion even if the Nexus instance executing them comes to an untimely end. This is *not* the case for operations that result from instance state transitions reported by sled-agents, which just happen in the HTTP APIs for reporting instance states. If the Nexus processing such a transition crashes, is partitioned from the network, or encounters a transient error, the instance is left in an incomplete state and the remainder of the operation will not be performed.

This branch rewrites much of the control plane's instance state management subsystem to resolve these issues. At a high level, it makes the following changes:

- **Nexus is now the sole owner of the `instance` record.** Sled-agents no longer have their own copies of an instance's `InstanceRuntimeState`, and do not generate changes to that state when reporting instance observations to Nexus. Instead, the sled-agent only publishes updates to the `vmm` and `migration` records (which are never modified by Nexus directly), and Nexus is the only entity responsible for determining how an instance's state should change in response to a VMM or migration state update.
- **When an instance has an active VMM, its effective external state is determined primarily by the active `vmm` record**, so that fewer state transitions *require* changes to the `instance` record. PR #5854 laid the groundwork for this change, but it's relevant here as well. (A minimal sketch of this derivation appears after this comment.)
- **All updates to an `instance` record (and resources conceptually owned by that instance) are performed by a distributed saga.** I've introduced a new `instance-update` saga, which is responsible for performing all changes to the `instance` record, virtual provisioning resources, and instance network config that are performed as part of a state transition. Moving this to a saga helps us to ensure that these operations are always run to completion, even in the event of a sudden Nexus death.
- **Consistency of instance state changes is ensured by distributed locking.** State changes may be published by multiple sled-agents to different Nexus replicas. If one Nexus replica is processing a state change received from a sled-agent, and then the instance's state changes again, and the sled-agent publishes that state change to a *different* Nexus...lots of bad things can happen, since the second state change may be performed from the previous initial state, when it *should* have a "happens-after" relationship with the other state transition. And, some operations may contradict each other when performed concurrently. To prevent these race conditions, this PR has the dubious honor of using the first _distributed lock_ in the Oxide control plane, the "instance updater lock". I introduced the locking primitives in PR #5831 --- see that branch for more discussion of locking.
- **Background tasks are added to prevent missed updates.** To ensure we cannot accidentally miss an instance update even if a Nexus dies, hits a network partition, or just chooses to eat the state update accidentally, we add a new `instance-updater` background task, which queries the database for instances that are in states that require an update saga without such a saga running, and starts the requisite sagas.

Currently, the instance update saga runs in the following cases:

- An instance's active VMM transitions to `Destroyed`, in which case the instance's virtual resources are cleaned up and the active VMM is unlinked.
- Either side of an instance's live migration reports that the migration has completed successfully.
- Either side of an instance's live migration reports that the migration has failed.

The inner workings of the instance-update saga itself are fairly complex, and have some fairly interesting idiosyncrasies relative to the existing sagas. I've written up a [lengthy comment] that provides an overview of the theory behind the design of the saga and its principles of operation, so I won't reproduce that in this commit message.

[lengthy comment]: https://github.com/oxidecomputer/omicron/blob/357f29c8b532fef5d05ed8cbfa1e64a07e0953a5/nexus/src/app/sagas/instance_update/mod.rs#L5-L254
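As a rough illustration of the change above (the instance's effective external state being determined primarily by the active `vmm` record), here is a minimal sketch in Rust. All type and variant names are hypothetical stand-ins rather than omicron's actual model types; the point is only that the derivation can be written as a pure read-side function over the Nexus-owned `instance` record and the sled-agent-published `vmm` record.

```rust
// Hypothetical sketch; names are stand-ins for omicron's actual
// `instance` and `vmm` model types.

/// State recorded on the `instance` row, owned by Nexus.
#[derive(Clone, Copy, Debug)]
enum InstanceRecordState {
    /// The instance exists but has no active VMM.
    NoVmm,
    /// The instance has been deleted.
    Destroyed,
}

/// State of the active VMM, as published by the sled-agent.
#[derive(Clone, Copy, Debug)]
enum VmmRecordState {
    Starting,
    Running,
    Stopping,
    Stopped,
    Destroyed,
}

/// The state surfaced to external API clients.
#[derive(Clone, Copy, Debug, PartialEq)]
enum ExternalInstanceState {
    Stopped,
    Starting,
    Running,
    Stopping,
    Destroyed,
}

/// When an active VMM exists, its record drives the externally visible
/// state; otherwise the `instance` record alone determines it. Because
/// this is a pure read-side derivation, most VMM state changes need no
/// write to the `instance` record at all.
fn effective_state(
    instance: InstanceRecordState,
    active_vmm: Option<VmmRecordState>,
) -> ExternalInstanceState {
    match (instance, active_vmm) {
        (InstanceRecordState::Destroyed, _) => ExternalInstanceState::Destroyed,
        (_, Some(VmmRecordState::Starting)) => ExternalInstanceState::Starting,
        (_, Some(VmmRecordState::Running)) => ExternalInstanceState::Running,
        (_, Some(VmmRecordState::Stopping)) => ExternalInstanceState::Stopping,
        // A stopped or destroyed VMM is about to be unlinked by the
        // `instance-update` saga; keep reporting "stopping" until then.
        (_, Some(VmmRecordState::Stopped | VmmRecordState::Destroyed)) => {
            ExternalInstanceState::Stopping
        }
        (InstanceRecordState::NoVmm, None) => ExternalInstanceState::Stopped,
    }
}

fn main() {
    let s = effective_state(InstanceRecordState::NoVmm, Some(VmmRecordState::Running));
    assert_eq!(s, ExternalInstanceState::Running);
    println!("effective external state: {:?}", s);
}
```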
Do we expect the interface to the forcible-terminate operation to be exposed in the external API, and if so, would we want to make it a more privileged operation than the normal instance-stop and instance-delete APIs?
I think the answers are "yes" and "no." The idea of the API is to give users a crowbar that they can use to unstick an instance in the unlikely event that it gets stuck in a transitional Propolis state like Starting or Stopping. I would give the API an appropriately forceful name ("force-quit"? "force-terminate"?) to try to emphasize that this just blows away the entire VM process and doesn't give anything in it the chance to run any cleanup logic, and I'm not sure I'd add a console option for it (right away, anyway), but I do think it should be available to regular users.
Yup, that makes sense.
We recently discussed this, and came to the conclusion that it seems unfortunate to present the user with two different ways to stop an instance, one of which has a big warning label on it that says "only do this in case of emergencies" but no difference in observable effects from the guest's perspective. This forces the user to decide which of two ways of essentially just pulling the virtual power cord out of their VM to use. Instead, we might consider just making the system resilient to Propolis getting stuck or misbehaving whilst shutting down --- a normal instance-stop request could cause sled-agent to set a timeout, after which the Propolis zone is forcefully deleted if Propolis doesn't report in to say it's exited normally. Furthermore, we would like to eventually make Propolis attempt to shut guests down more gracefully, but that's out of scope for this issue. See oxidecomputer/propolis#784.
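A minimal sketch of the timeout idea described above, assuming a tokio runtime and hypothetical helper methods (this is not sled-agent's actual stop path): the normal stop request gets a grace period, and the zone is torn down forcibly only if Propolis never reports a clean exit.

```rust
// Hypothetical sketch of a stop path with a forced-teardown fallback;
// not sled-agent's actual code. Assumes a tokio runtime.

use std::time::Duration;
use tokio::time::timeout;

/// Stand-in for a handle to an instance's Propolis zone.
struct PropolisZone;

impl PropolisZone {
    /// Ask Propolis to stop the guest and wait until it reports that it
    /// has exited (stand-in for the real client call and state watch).
    async fn request_stop_and_wait(&self) -> Result<(), String> {
        Ok(())
    }

    /// Tear the zone down without Propolis's cooperation.
    async fn force_destroy(&self) {
        // halt and delete the zone, release its resources
    }
}

/// Stop an instance, falling back to forceful zone teardown if Propolis
/// does not report a clean exit within the grace period.
async fn stop_instance(zone: &PropolisZone, grace: Duration) {
    match timeout(grace, zone.request_stop_and_wait()).await {
        // Propolis exited cleanly; the normal cleanup path proceeds.
        Ok(Ok(())) => {}
        // Propolis errored, wedged, or never answered: pull the virtual
        // power cord ourselves.
        Ok(Err(_)) | Err(_) => zone.force_destroy().await,
    }
}

#[tokio::main]
async fn main() {
    stop_instance(&PropolisZone, Duration::from_secs(60)).await;
}
```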
Today, requests to stop a running instance must by necessity involve the instance's active Propolis (sled agent sends the stop request to Propolis; the instance is stopped when Propolis says so, at which point sled agent cleans up the Propolis zone and all related objects). If an instance's Propolis is not responding, or there is no active Propolis, there is no obvious way to clean up the instance and recover.
A short-term workaround is to grant some form of API access to sled agent's "unregister instance" API, which forcibly executes the termination path (tearing down the Propolis zone and removing the instance from the sled's instance table) and can force an instance into a stopped state.
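For illustration, a minimal sketch of what such a Nexus-side "force stop" might look like, assuming a tokio runtime; the client type, endpoint name, and database update here are hypothetical stand-ins, not omicron's real APIs:

```rust
// Hypothetical sketch of a Nexus-side "force stop" built on the
// sled-agent unregister path; names and types are stand-ins, not
// omicron's generated clients. Assumes a tokio runtime.

type InstanceId = u64; // stand-in for the real typed instance UUID

/// Stand-in for a sled-agent API client.
struct SledAgentClient;

impl SledAgentClient {
    /// Tear down the instance's Propolis zone and remove the instance
    /// from the sled's instance table, without consulting Propolis.
    async fn instance_unregister(&self, _id: InstanceId) -> Result<(), String> {
        Ok(())
    }
}

/// The crowbar: bypass the unresponsive (or absent) Propolis and force
/// the instance into a stopped state.
async fn force_stop_instance(
    sled_agent: &SledAgentClient,
    id: InstanceId,
) -> Result<(), String> {
    // 1. Forcibly unregister the instance on the sled owning its active
    //    VMM, executing the termination path described above.
    sled_agent.instance_unregister(id).await?;
    // 2. Mark the instance stopped and release its virtual provisioning
    //    resources in the database (elided here).
    Ok(())
}

#[tokio::main]
async fn main() {
    force_stop_instance(&SledAgentClient, 1).await.unwrap();
}
```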
In the long run, instance lifecycle management needs to be made more robust to Propolis failure and/or non-responsiveness.