We updated our Elasticsearch instances to version 8.15.0 using the rolling update feature of the collection, which works well. The cluster was available the whole time.
With the update we also changed some parameters in the Elasticsearch config. The collection performs this parameter change as part of the "normal" installation process, after all nodes have been updated. This means the config is changed on all nodes and all nodes are then restarted at once via a handler. This full cluster restart makes the cluster unavailable for some time.
It makes no sense to perform a rolling update with many tasks to ensure the cluster stays available the whole time, only to follow it with a full cluster restart that achieves the opposite.
Please implement a "graceful" cluster restart (with rolling restarts and cluster health checks) after a change to the Elasticsearch config.
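To illustrate the kind of health gate meant here, a sketch of a task that blocks until the cluster reports green before the next node is restarted. This is illustrative only and assumes the cluster health API is reachable on localhost with these credentials/URL conventions; the actual collection may use different variables.

```yaml
# Sketch: wait for green cluster status between node restarts.
# URL, port, and retry counts are assumptions, not the collection's real values.
- name: Wait for cluster to become green
  ansible.builtin.uri:
    url: "https://localhost:9200/_cluster/health?wait_for_status=green&timeout=30s"
    method: GET
  register: cluster_health
  until: cluster_health.status == 200
  retries: 20
  delay: 15
```

The `wait_for_status` parameter makes Elasticsearch itself hold the request until the status is reached (or return a timeout status, which fails the `until` check and triggers a retry).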
I'd say this is a bug and not a feature. It should be easy enough to create a handler for a rolling cluster restart (all the code is in the repo already), but I'm not sure how to best handle it without duplicating most of the code from elasticsearch-rolling-upgrade.yml in a handler.
Is there a way to inject tasks into a handler? Have two task files, one with all the tasks to gracefully stop a node, and one with everything to bring it back online and wait for the cluster to become green. They could then be included in both a cluster-restart handler and the rolling-upgrade file.
So the handler would look something like this:
- name: Gracefully stop node
  ansible.builtin.include_tasks:
    file: cluster_restart_stop_node.yaml

- name: Start node and wait for green cluster
  ansible.builtin.include_tasks:
    file: cluster_restart_start_node.yaml
And the "Be careful about upgrade when Elasticsearch is running" block in elasticsearch-rolling-upgrade.yml would be reduced to something like this:
- name: Gracefully stop node
  ansible.builtin.include_tasks:
    file: cluster_restart_stop_node.yaml

# Tasks to upgrade packages

- name: Start node and wait for green cluster
  ansible.builtin.include_tasks:
    file: cluster_restart_start_node.yaml
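For context, a hypothetical sketch of what cluster_restart_stop_node.yaml could contain, following the documented Elasticsearch rolling-restart procedure (disable shard allocation, flush, then stop the service). The URLs and service name are assumptions for illustration, not the collection's actual variables:

```yaml
# Hypothetical cluster_restart_stop_node.yaml sketch.
# Endpoint URL and service name are assumed, not taken from the collection.
- name: Disable shard allocation before stopping the node
  ansible.builtin.uri:
    url: "https://localhost:9200/_cluster/settings"
    method: PUT
    body_format: json
    body:
      persistent:
        cluster.routing.allocation.enable: primaries

- name: Flush to speed up shard recovery after restart
  ansible.builtin.uri:
    url: "https://localhost:9200/_flush"
    method: POST

- name: Stop the Elasticsearch service
  ansible.builtin.service:
    name: elasticsearch
    state: stopped
```

The matching start file would then restart the service, re-enable allocation (setting `cluster.routing.allocation.enable` back to `null`), and wait for green status.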