From 6690fbc66af6d71623e3463f559ed4de40bafcb4 Mon Sep 17 00:00:00 2001 From: PaulRMellor <47596553+PaulRMellor@users.noreply.github.com> Date: Fri, 25 Oct 2024 05:35:51 -0400 Subject: [PATCH] docs(storage): updates the storage content for KRaft and node pools (#10731) Signed-off-by: prmellor --- .../configuring/assembly-config.adoc | 8 +- .../configuring/assembly-storage.adoc | 65 ++--- .../deploying/assembly-kraft-mode.adoc | 3 +- .../configuring/con-config-storage-kraft.adoc | 250 ++++++++++++++++++ .../con-config-storage-zookeeper.adoc | 175 ++++++++++++ .../con-considerations-for-data-storage.adoc | 24 +- .../proc-adding-volumes-to-jbod-storage.adoc | 62 ----- .../proc-managing-storage-node-pools.adoc | 2 + ...oc-removing-volumes-from-jbod-storage.adoc | 55 ---- .../proc-resizing-persistent-volumes.adoc | 82 ------ .../configuring/ref-storage-ephemeral.adoc | 53 ---- .../modules/configuring/ref-storage-jbod.adoc | 111 -------- .../configuring/ref-storage-persistent.adoc | 228 ---------------- .../configuring/ref-storage-tiered.adoc | 2 +- .../proc-cluster-recovery-volume.adoc | 2 +- documentation/shared/attributes.adoc | 1 - 16 files changed, 471 insertions(+), 652 deletions(-) create mode 100644 documentation/modules/configuring/con-config-storage-kraft.adoc create mode 100644 documentation/modules/configuring/con-config-storage-zookeeper.adoc delete mode 100644 documentation/modules/configuring/proc-adding-volumes-to-jbod-storage.adoc delete mode 100644 documentation/modules/configuring/proc-removing-volumes-from-jbod-storage.adoc delete mode 100644 documentation/modules/configuring/proc-resizing-persistent-volumes.adoc delete mode 100644 documentation/modules/configuring/ref-storage-ephemeral.adoc delete mode 100644 documentation/modules/configuring/ref-storage-jbod.adoc delete mode 100644 documentation/modules/configuring/ref-storage-persistent.adoc diff --git a/documentation/assemblies/configuring/assembly-config.adoc b/documentation/assemblies/configuring/assembly-config.adoc index 410d35998dd..5496095304e 100644 --- a/documentation/assemblies/configuring/assembly-config.adoc +++ b/documentation/assemblies/configuring/assembly-config.adoc @@ -98,10 +98,11 @@ include::../../modules/configuring/proc-moving-node-pools.adoc[leveloffset=+2] include::../../modules/configuring/con-config-node-pool-roles.adoc[leveloffset=+2] include::../../modules/configuring/proc-splitting-node-pool-roles.adoc[leveloffset=+2] include::../../modules/configuring/proc-joining-node-pool-roles.adoc[leveloffset=+2] -include::../../modules/configuring/proc-managing-storage-node-pools.adoc[leveloffset=+2] -include::../../modules/configuring/proc-managing-storage-affinity-node-pools.adoc[leveloffset=+2] include::../../modules/configuring/proc-migrating-clusters-node-pools.adoc[leveloffset=+2] +//configuring storage +include::assembly-storage.adoc[leveloffset=+1] + //`Kafka` config for operators include::../../modules/configuring/ref-kafka-entity-operator.adoc[leveloffset=+1] //topic operator config @@ -171,9 +172,6 @@ include::../../modules/configuring/con-config-mirrormaker.adoc[leveloffset=+1] //`KafkaBridge` resource config include::../../modules/configuring/con-config-kafka-bridge.adoc[leveloffset=+1] -//configuring Kafka and ZooKeeper storage -include::../../assemblies/configuring/assembly-storage.adoc[leveloffset=+1] - //configuring CPU and memory resources and limits include::../../modules/configuring/con-config-resources.adoc[leveloffset=+1] diff --git 
a/documentation/assemblies/configuring/assembly-storage.adoc b/documentation/assemblies/configuring/assembly-storage.adoc
index 30b456da44b..ed7c322f40a 100644
--- a/documentation/assemblies/configuring/assembly-storage.adoc
+++ b/documentation/assemblies/configuring/assembly-storage.adoc
@@ -3,54 +3,43 @@
 // assembly-config.adoc
 
 [id='assembly-storage-{context}']
-= Configuring Kafka and ZooKeeper storage
+= Configuring Kafka storage
 
 [role="_abstract"]
-Strimzi provides flexibility in configuring the data storage options of Kafka and ZooKeeper.
+Strimzi supports different Kafka storage options.
+You can choose between the following basic types:
 
-The supported storage types are:
+Ephemeral storage:: Ephemeral storage is temporary and only persists while a pod is running.
+When a pod is deleted, the data is lost, though it can be recovered from replicas on other nodes in a highly available environment.
+Due to its transient nature, ephemeral storage is only recommended for development and testing environments.
 
-* Ephemeral (Recommended for development only)
-* Persistent
-* JBOD (Kafka only; not available for ZooKeeper)
-* Tiered storage (Early access)
+Persistent storage:: Persistent storage retains data across pod restarts and system disruptions, making it ideal for production environments.
 
-To configure storage, you specify `storage` properties in the custom resource of the component.
-The storage type is set using the `storage.type` property.
-When using node pools, you can specify storage configuration unique to each node pool used in a Kafka cluster.
-The same storage properties available to the `Kafka` resource are also available to the `KafkaNodePool` pool resource.
+JBOD (Just a Bunch of Disks) storage allows you to configure your Kafka cluster to use multiple disks or volumes as ephemeral or persistent storage.
 
-Tiered storage provides more flexibility for data management by leveraging the parallel use of storage types with different characteristics.
-For example, tiered storage might include the following:
+.JBOD storage (multiple volumes)
+When specifying JBOD storage, you must still decide between using ephemeral or persistent volumes for each disk.
+Even if you start with only one volume, using JBOD allows for future scaling by adding more volumes as needed, which is why it is generally recommended.
 
-* Higher performance and higher cost block storage
-* Lower performance and lower cost object storage
+NOTE: Persistent, ephemeral, and JBOD storage types cannot be changed after a Kafka cluster is deployed.
+However, you can add or remove volumes of different types from the JBOD storage.
+You can also create and migrate to node pools with new storage specifications.
 
-Tiered storage is an early access feature in Kafka.
-To configure tiered storage, you specify `tieredStorage` properties.
-Tiered storage is configured only at the cluster level using the `Kafka` custom resource.
+.Tiered storage (advanced)
 
-The storage-related schema references provide more information on the storage configuration properties:
+Tiered storage, currently available as an early access feature, provides additional flexibility for managing Kafka data by combining different storage types with varying performance and cost characteristics.
+It allows Kafka to offload older data to cheaper, long-term storage (such as object storage) while keeping recent, frequently accessed data on faster, more expensive storage (such as block storage).
-* link:{BookURLConfiguring}#type-EphemeralStorage-reference[`EphemeralStorage` schema reference^] -* link:{BookURLConfiguring}#type-PersistentClaimStorage-reference[`PersistentClaimStorage` schema reference^] -* link:{BookURLConfiguring}#type-JbodStorage-reference[`JbodStorage` schema reference^] -* link:{BookURLConfiguring}#type-TieredStorageCustom-reference[`TieredStorageCustom` schema reference^] - -WARNING: The storage type cannot be changed after a Kafka cluster is deployed. +Tiered storage is an add-on capability. +After configuring storage (ephemeral, persistent, or JBOD) for Kafka nodes, you can configure tiered storage at the cluster level and enable it for specific topics using the `remote.storage.enable` topic-level configuration. include::../../modules/configuring/con-considerations-for-data-storage.adoc[leveloffset=+1] -include::../../modules/configuring/ref-storage-ephemeral.adoc[leveloffset=+1] - -include::../../modules/configuring/ref-storage-persistent.adoc[leveloffset=+1] - -include::../../modules/configuring/proc-resizing-persistent-volumes.adoc[leveloffset=+1] - -include::../../modules/configuring/ref-storage-jbod.adoc[leveloffset=+1] - -include::../../modules/configuring/proc-adding-volumes-to-jbod-storage.adoc[leveloffset=+1] - -include::../../modules/configuring/proc-removing-volumes-from-jbod-storage.adoc[leveloffset=+1] - -include::../../modules/configuring/ref-storage-tiered.adoc[leveloffset=+1] +//KRaft storage +include::../../modules/configuring/con-config-storage-kraft.adoc[leveloffset=+1] +include::../../modules/configuring/proc-managing-storage-node-pools.adoc[leveloffset=+2] +include::../../modules/configuring/proc-managing-storage-affinity-node-pools.adoc[leveloffset=+2] +//ZooKeeper storage +include::../../modules/configuring/con-config-storage-zookeeper.adoc[leveloffset=+1] +//tiered storage +include::../../modules/configuring/ref-storage-tiered.adoc[leveloffset=+1] \ No newline at end of file diff --git a/documentation/assemblies/deploying/assembly-kraft-mode.adoc b/documentation/assemblies/deploying/assembly-kraft-mode.adoc index 324dde05138..fd1056c0b13 100644 --- a/documentation/assemblies/deploying/assembly-kraft-mode.adoc +++ b/documentation/assemblies/deploying/assembly-kraft-mode.adoc @@ -30,6 +30,7 @@ Kafka uses this metadata to coordinate changes and manage the cluster effectivel Broker nodes act as observers, storing the metadata log passively to stay up-to-date with the cluster's state. Each node fetches updates to the log independently. +If you are using JBOD storage, you can xref:con-storing-metadata-log-{context}[change the volume that stores the metadata log]. NOTE: The KRaft metadata version used in the Kafka cluster must be supported by the Kafka version in use. Both versions are managed through the `Kafka` resource configuration. @@ -55,7 +56,5 @@ Currently, the KRaft mode in Strimzi has the following major limitations: * Scaling of KRaft controller nodes up or down is not supported. -NOTE: If you are using JBOD storage, you can xref:ref-jbod-storage-str[change the volume that stores the metadata log]. 
-
 //migrating to KRaft
 include::../../modules/deploying/proc-deploy-migrate-kraft.adoc[leveloffset=+1]
\ No newline at end of file
diff --git a/documentation/modules/configuring/con-config-storage-kraft.adoc b/documentation/modules/configuring/con-config-storage-kraft.adoc
new file mode 100644
index 00000000000..4aebc00f454
--- /dev/null
+++ b/documentation/modules/configuring/con-config-storage-kraft.adoc
@@ -0,0 +1,250 @@
+// Module included in the following assemblies:
+//
+// assembly-storage.adoc
+
+[id='con-config-storage-kraft-{context}']
+= Configuring Kafka storage in KRaft mode
+
+[role="_abstract"]
+Use the `storage` properties of the `KafkaNodePool` custom resource to configure storage for a deployment of Kafka in KRaft mode.
+
+== Configuring ephemeral storage
+
+To use ephemeral storage, specify `ephemeral` as the storage type.
+
+.Example configuration for ephemeral storage
+[source,yaml,subs="+attributes"]
+----
+apiVersion: {KafkaNodePoolApiVersion}
+kind: KafkaNodePool
+metadata:
+  name: my-node-pool
+  labels:
+    strimzi.io/cluster: my-cluster
+spec:
+  replicas: 3
+  roles:
+    - broker
+  storage:
+    type: ephemeral
+  # ...
+----
+
+Ephemeral storage uses {K8sEmptyDir} volumes, which are created when a pod is assigned to a node.
+You can limit the size of the `emptyDir` volume with the `sizeLimit` property.
+
+The ephemeral volume used by Kafka brokers for log directories is mounted at `/var/lib/kafka/data/kafka-log<pod_id>`.
+
+IMPORTANT: Ephemeral storage is not suitable for Kafka topics with a replication factor of 1.
+
+For more information on ephemeral storage configuration options, see the link:{BookURLConfiguring}#type-EphemeralStorage-reference[`EphemeralStorage` schema reference^].
+
+== Configuring persistent storage
+
+To use persistent storage, specify one of the following as the storage type:
+
+* `persistent-claim` for a single persistent volume
+* `jbod` for multiple persistent volumes in a Kafka cluster (Recommended for Kafka in a production environment)
+
+.Example configuration for persistent storage
+[source,yaml,subs="+attributes"]
+----
+apiVersion: {KafkaNodePoolApiVersion}
+kind: KafkaNodePool
+metadata:
+  name: my-node-pool
+  labels:
+    strimzi.io/cluster: my-cluster
+spec:
+  replicas: 3
+  roles:
+    - broker
+  storage:
+    type: persistent-claim
+    size: 500Gi
+    deleteClaim: true
+  # ...
+----
+
+Strimzi uses {K8sPersistentVolumeClaims} (PVCs) to request storage on persistent volumes (PVs).
+The PVC binds to a PV that meets the requested storage criteria, without needing to know the underlying storage infrastructure.
+
+PVCs created for Kafka pods follow the naming convention `data-<cluster_name>-<pool_name>-<pod_id>`, and the persistent volumes for Kafka logs are mounted at `/var/lib/kafka/data/kafka-log<pod_id>`.
+
+You can also specify custom storage classes ({K8SStorageClass}) and volume selectors in the storage configuration.
+
+.Example class and selector configuration
+[source,yaml,subs="attributes+"]
+----
+# ...
+  storage:
+    type: persistent-claim
+    size: 500Gi
+    class: my-storage-class
+    selector:
+      hdd-type: ssd
+    deleteClaim: true
+# ...
+----
+
+Storage classes define storage profiles and dynamically provision persistent volumes (PVs) based on those profiles.
+This is useful, for example, when storage classes are restricted to different availability zones or data centers.
+If a storage class is not specified, the default storage class in the Kubernetes cluster is used.
+Selectors specify persistent volumes that offer specific features, such as solid-state drive (SSD) volumes.
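+
+As a quick check after deployment, you can list the PVCs created for the node pool along with their bound storage classes and capacities (a verification sketch; it assumes `kubectl` access and the example `my-cluster` name used above):
+
+[source,shell,subs=+quotes]
+----
+kubectl get pvc -l strimzi.io/cluster=my-cluster \
+  -o custom-columns=NAME:.metadata.name,CLASS:.spec.storageClassName,CAPACITY:.status.capacity.storage
+----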
+
+For more information on persistent storage configuration options, see the link:{BookURLConfiguring}#type-PersistentClaimStorage-reference[`PersistentClaimStorage` schema reference^].
+
+[id='proc-resizing-persistent-volumes-{context}']
+== Resizing persistent volumes
+
+Persistent volumes can be resized by changing the `size` storage property without any risk of data loss, as long as the storage infrastructure supports it.
+Following a configuration update to change the size of the storage, Strimzi instructs the storage infrastructure to make the change.
+
+Storage expansion is supported in Strimzi clusters that use persistent-claim volumes.
+Decreasing the size of persistent volumes is not supported in Kubernetes.
+For more information about resizing persistent volumes in Kubernetes, see {K8sResizingPersistentVolumesUsingKubernetes}.
+
+After increasing the value of the `size` property, Kubernetes increases the capacity of the selected persistent volumes in response to a request from the Cluster Operator.
+When the resizing is complete, the Cluster Operator restarts all pods that use the resized persistent volumes.
+This happens automatically.
+
+In this example, the volumes are increased to 2000Gi.
+
+.Node pool configuration to increase volume size to `2000Gi`
+[source,yaml,subs=attributes+]
+----
+apiVersion: {KafkaNodePoolApiVersion}
+kind: KafkaNodePool
+metadata:
+  name: my-node-pool
+  labels:
+    strimzi.io/cluster: my-cluster
+spec:
+  replicas: 3
+  roles:
+    - broker
+  storage:
+    type: jbod
+    volumes:
+      - id: 0
+        type: persistent-claim
+        size: 2000Gi
+        deleteClaim: false
+      - id: 1
+        type: persistent-claim
+        size: 2000Gi
+        deleteClaim: false
+      - id: 2
+        type: persistent-claim
+        size: 2000Gi
+        deleteClaim: false
+  # ...
+----
+
+Listing the PVs verifies the changes:
+
+[source,shell,subs=+quotes]
+----
+kubectl get pv
+----
+
+.Storage capacity of PVs
+[source,shell,subs="+quotes,attributes"]
+----
+NAME              CAPACITY   CLAIM
+pvc-0ca459ce-...  2000Gi     my-project/data-my-cluster-my-node-pool-2
+pvc-6e1810be-...  2000Gi     my-project/data-my-cluster-my-node-pool-0
+pvc-82dc78c9-...  2000Gi     my-project/data-my-cluster-my-node-pool-1
+----
+
+The output shows the names of each PVC associated with a broker pod.
+
+NOTE: Storage _reduction_ is only possible when using multiple disks per broker.
+You can remove a disk after moving all partitions on the disk to other volumes within the same broker (intra-broker) or to other brokers within the same cluster (intra-cluster).
+
+
+== Configuring JBOD storage
+
+To use JBOD storage, specify `jbod` as the storage type and add configuration for the JBOD volumes.
+JBOD volumes can be persistent or ephemeral, with the configuration options and constraints applicable to each type.
+
+.Example configuration for JBOD storage
+[source,yaml,subs="+attributes"]
+----
+apiVersion: {KafkaNodePoolApiVersion}
+kind: KafkaNodePool
+metadata:
+  name: my-node-pool
+  labels:
+    strimzi.io/cluster: my-cluster
+spec:
+  replicas: 3
+  roles:
+    - broker
+  storage:
+    type: jbod
+    volumes:
+      - id: 0
+        type: persistent-claim
+        size: 100Gi
+        deleteClaim: false
+      - id: 1
+        type: persistent-claim
+        size: 100Gi
+        deleteClaim: false
+      - id: 2
+        type: persistent-claim
+        size: 100Gi
+        deleteClaim: false
+  # ...
+----
+
+PVCs are created for the JBOD volumes using the naming convention `data-<volume_id>-<cluster_name>-<pool_name>-<pod_id>`, and the JBOD volumes used for log directories are mounted at `/var/lib/kafka/data-<volume_id>/kafka-log<pod_id>`.
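+
+Because each JBOD volume is declared as either ephemeral or persistent, the two types can also be mixed in one `volumes` array (a minimal sketch, not a recommendation; it assumes that losing the data on the ephemeral volume when its pod is deleted is acceptable):
+
+[source,yaml,subs="+attributes"]
+----
+# ...
+  storage:
+    type: jbod
+    volumes:
+      - id: 0
+        type: persistent-claim
+        size: 100Gi
+        deleteClaim: false
+      - id: 1
+        type: ephemeral
+        sizeLimit: 100Gi
+  # ...
+----
+
+For more information on JBOD storage configuration options, see the link:{BookURLConfiguring}#type-JbodStorage-reference[`JbodStorage` schema reference^].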
+
+[id='proc-adding-removing-volumes-{context}']
+== Adding or removing volumes from JBOD storage
+
+Volume IDs cannot be changed once JBOD volumes are created, though you can add or remove volumes.
+When adding a new volume to the `volumes` array under an `id` that was previously used and removed, make sure that the previously used `PersistentVolumeClaims` have been deleted.
+
+Use Cruise Control to reassign partitions when adding or removing volumes.
+For information on intra-broker disk balancing, see xref:con-rebalance-{context}[].
+
+[id='con-storing-metadata-log-{context}']
+== Configuring KRaft metadata log storage
+
+In KRaft mode, each node (including brokers and controllers) stores a copy of the Kafka cluster's metadata log on one of its data volumes.
+By default, the log is stored on the volume with the lowest ID, but you can specify a different volume using the `kraftMetadata` property.
+
+For controller-only nodes, storage is exclusively for the metadata log.
+Since the log is always stored on a single volume, using JBOD storage with multiple volumes does not improve performance or increase available disk space.
+
+In contrast, broker nodes or nodes that combine broker and controller roles can share the same volume for both the metadata log and partition replica data, optimizing disk utilization.
+They can also use JBOD storage, where one volume is shared for the metadata log and partition replica data, while additional volumes are used solely for partition replica data.
+
+Changing the volume that stores the metadata log triggers a rolling update of the cluster nodes, involving the deletion of the old log and the creation of a new one in the specified location.
+If `kraftMetadata` isn't specified, adding a new volume with a lower ID also prompts an update and relocation of the metadata log.
+
+.Example JBOD storage configuration using volume with ID 1 to store the KRaft metadata
+[source,yaml,subs="attributes+"]
+----
+apiVersion: {KafkaNodePoolApiVersion}
+kind: KafkaNodePool
+metadata:
+  name: pool-a
+  # ...
+spec:
+  storage:
+    type: jbod
+    volumes:
+      - id: 0
+        type: persistent-claim
+        size: 100Gi
+        deleteClaim: false
+      - id: 1
+        type: persistent-claim
+        size: 100Gi
+        kraftMetadata: shared
+        deleteClaim: false
+  # ...
+----
\ No newline at end of file
diff --git a/documentation/modules/configuring/con-config-storage-zookeeper.adoc b/documentation/modules/configuring/con-config-storage-zookeeper.adoc
new file mode 100644
index 00000000000..624241bc384
--- /dev/null
+++ b/documentation/modules/configuring/con-config-storage-zookeeper.adoc
@@ -0,0 +1,175 @@
+// Module included in the following assemblies:
+//
+// assembly-storage.adoc
+
+[id='con-config-storage-zookeeper-{context}']
+= Configuring Kafka storage with ZooKeeper
+
+[role="_abstract"]
+If you are using ZooKeeper, configure its storage in the `Kafka` resource.
+Depending on whether the deployment uses node pools, configure storage for the Kafka cluster in `Kafka` or `KafkaNodePool` resources.
+
+This section focuses only on ZooKeeper storage and Kafka storage configuration in the `Kafka` resource.
+For detailed information on Kafka storage, refer to the section describing xref:con-config-storage-kraft-{context}[storage configuration using node pools].
+The same configuration options for storage are available in the `Kafka` resource.
+
+NOTE: Replicated storage is not required for ZooKeeper, as it has built-in data replication.
+
+== Configuring ephemeral storage
+
+To use ephemeral storage, specify `ephemeral` as the storage type.
+
+.Example configuration for ephemeral storage
+[source,yaml,subs="attributes+"]
+----
+apiVersion: {KafkaApiVersion}
+kind: Kafka
+metadata:
+  name: my-cluster
+spec:
+  kafka:
+    storage:
+      type: ephemeral
+  zookeeper:
+    storage:
+      type: ephemeral
+  # ...
+----
+
+The ephemeral volume used by Kafka brokers for log directories is mounted at `/var/lib/kafka/data/kafka-log<pod_id>`.
+
+IMPORTANT: Ephemeral storage is unsuitable for single-node ZooKeeper clusters or Kafka topics with a replication factor of 1.
+
+== Configuring persistent storage
+
+The same persistent storage configuration options available for node pools can also be specified for Kafka in the `Kafka` resource.
+For more information, see the section on xref:con-config-storage-kraft-{context}[configuring Kafka storage using node pools].
+The `size` property can also be adjusted to xref:proc-resizing-persistent-volumes-{context}[resize persistent volumes].
+
+The storage type must always be `persistent-claim` for ZooKeeper, as it does not support JBOD storage.
+
+.Example configuration for persistent storage
+[source,yaml,subs="attributes+"]
+----
+apiVersion: {KafkaApiVersion}
+kind: Kafka
+metadata:
+  name: my-cluster
+spec:
+  kafka:
+    storage:
+      type: persistent-claim
+      size: 500Gi
+      deleteClaim: true
+    # ...
+  zookeeper:
+    storage:
+      type: persistent-claim
+      size: 1000Gi
+----
+
+PVCs created for Kafka pods when storage is configured in the `Kafka` resource use the naming convention `data-<cluster_name>-kafka-<pod_id>`, and the persistent volumes for Kafka logs are mounted at `/var/lib/kafka/data/kafka-log<pod_id>`.
+
+PVCs created for ZooKeeper follow the naming convention `data-<cluster_name>-zookeeper-<pod_id>`.
+
+NOTE: As in KRaft mode, you can also specify custom storage classes and volume selectors.
+
+== Configuring JBOD storage
+
+ZooKeeper does not support JBOD storage, but Kafka nodes in a ZooKeeper-based cluster can still be configured to use JBOD storage.
+The same JBOD configuration options available for node pools can also be specified for Kafka in the `Kafka` resource.
+For more information, see the section on xref:con-config-storage-kraft-{context}[configuring Kafka storage using node pools].
+The `volumes` array can also be adjusted to xref:proc-adding-removing-volumes-{context}[add or remove volumes].
+
+.Example configuration for JBOD storage
+[source,yaml,subs="attributes+"]
+----
+apiVersion: {KafkaApiVersion}
+kind: Kafka
+metadata:
+  name: my-cluster
+spec:
+  kafka:
+    storage:
+      type: jbod
+      volumes:
+        - id: 0
+          type: persistent-claim
+          size: 100Gi
+          deleteClaim: false
+        - id: 1
+          type: persistent-claim
+          size: 100Gi
+          deleteClaim: false
+        - id: 2
+          type: persistent-claim
+          size: 100Gi
+          deleteClaim: false
+    # ...
+  zookeeper:
+    storage:
+      type: persistent-claim
+      size: 1000Gi
+----
+
+== Migrating from storage class overrides (deprecated)
+
+The use of node pools to change the storage classes used by volumes replaces the deprecated `overrides` properties previously used for Kafka and ZooKeeper in the `Kafka` resource.
+
+.Example storage configuration with class overrides
+[source,yaml,subs="attributes+"]
+----
+apiVersion: {KafkaApiVersion}
+kind: Kafka
+metadata:
+  labels:
+    app: my-cluster
+  name: my-cluster
+  namespace: myproject
+spec:
+  # ...
+  kafka:
+    replicas: 3
+    storage:
+      type: jbod
+      volumes:
+        - id: 0
+          type: persistent-claim
+          size: 100Gi
+          deleteClaim: false
+          class: my-storage-class
+          overrides:
+            - broker: 0
+              class: my-storage-class-zone-1a
+            - broker: 1
+              class: my-storage-class-zone-1b
+            - broker: 2
+              class: my-storage-class-zone-1c
+      # ...
+  # ...
+  zookeeper:
+    replicas: 3
+    storage:
+      deleteClaim: true
+      size: 100Gi
+      type: persistent-claim
+      class: my-storage-class
+      overrides:
+        - broker: 0
+          class: my-storage-class-zone-1a
+        - broker: 1
+          class: my-storage-class-zone-1b
+        - broker: 2
+          class: my-storage-class-zone-1c
+  # ...
+----
+
+If you are using storage class overrides for Kafka, we encourage you to transition to using node pools instead.
+To migrate the existing configuration, follow these steps:
+
+1. Make sure you already use node pool resources.
+   If not, you should xref:proc-migrating-clusters-node-pools-str[migrate the cluster to use node pools] first.
+2. Create new xref:config-node-pools-str[node pools] with storage configuration using the desired storage class without using the overrides.
+3. Move all partition replicas from the old brokers that use the storage class overrides.
+   You can do this using xref:cruise-control-concepts-str[Cruise Control] or xref:assembly-reassign-tool-str[using the partition reassignment tool].
+4. Delete the old node pool containing the old brokers that use the storage class overrides.
\ No newline at end of file
diff --git a/documentation/modules/configuring/con-considerations-for-data-storage.adoc b/documentation/modules/configuring/con-considerations-for-data-storage.adoc
index d5e56cea396..9eecdf94e59 100644
--- a/documentation/modules/configuring/con-considerations-for-data-storage.adoc
+++ b/documentation/modules/configuring/con-considerations-for-data-storage.adoc
@@ -3,19 +3,20 @@
 // assembly-storage.adoc
 
 [id='considerations-for-data-storage-{context}']
-= Data storage considerations
+= Storage considerations
 
 [role="_abstract"]
-For Strimzi to work well, an efficient data storage infrastructure is essential.
-We strongly recommend using block storage.
-Strimzi is only tested for use with block storage.
-File storage, such as NFS, is not tested and there is no guarantee it will work.
+Efficient data storage is essential for Strimzi to operate effectively, and block storage is strongly recommended.
+Strimzi has been tested only with block storage, and file storage solutions like NFS are not guaranteed to work.
 
-Choose one of the following options for your block storage:
+Common block storage types supported by Kubernetes include:
 
-* A cloud-based block storage solution, such as {aws-ebs}
-* Persistent storage using {K8sLocalPersistentVolumes}
-* Storage Area Network (SAN) volumes accessed by a protocol such as _Fibre Channel_ or _iSCSI_
+* Cloud-based block storage solutions:
+** Amazon EBS (for AWS)
+** Azure Disk Storage (for Microsoft Azure)
+** Persistent Disk (for Google Cloud)
+* Persistent storage (for bare metal deployments) using {K8sLocalPersistentVolumes}
+* Storage Area Network (SAN) volumes accessed by protocols like Fibre Channel or iSCSI
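+
+For example, a cloud block storage service is typically consumed through a Kubernetes storage class that you reference from the Strimzi storage configuration (a minimal sketch; the `gp3-csi` class name is illustrative and must exist in your cluster):
+
+[source,yaml,subs="attributes+"]
+----
+# ...
+  storage:
+    type: persistent-claim
+    size: 500Gi
+    class: gp3-csi # storage class backed by a block storage provisioner, such as the EBS CSI driver
+----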
- Solid-state drives (SSDs), though not essential, can improve the performance of Kafka in large clusters where data is sent to and received from multiple topics asynchronously. -SSDs are particularly effective with ZooKeeper, which requires fast, low latency data access. -NOTE: You do not need to provision replicated storage because Kafka and ZooKeeper both have built-in data replication. +NOTE: Replicated storage is not required, as Kafka provides built-in data replication. diff --git a/documentation/modules/configuring/proc-adding-volumes-to-jbod-storage.adoc b/documentation/modules/configuring/proc-adding-volumes-to-jbod-storage.adoc deleted file mode 100644 index 6c4369f3941..00000000000 --- a/documentation/modules/configuring/proc-adding-volumes-to-jbod-storage.adoc +++ /dev/null @@ -1,62 +0,0 @@ -// Module included in the following assemblies: -// -// assembly-storage.adoc - -[id='proc-adding-volumes-to-jbod-storage-{context}'] -= Adding volumes to JBOD storage - -This procedure describes how to add volumes to a Kafka cluster configured to use JBOD storage. -It cannot be applied to Kafka clusters configured to use any other storage type. - -NOTE: When adding a new volume under an `id` which was already used in the past and removed, you have to make sure that the previously used `PersistentVolumeClaims` have been deleted. - -.Prerequisites - -* A Kubernetes cluster -* A running Cluster Operator -* A Kafka cluster with JBOD storage - -.Procedure - -. Edit the `spec.kafka.storage.volumes` property in the `Kafka` resource. -Add the new volumes to the `volumes` array. -For example, add the new volume with id `2`: -+ -[source,yaml,subs=attributes+] ----- -apiVersion: {KafkaApiVersion} -kind: Kafka -metadata: - name: my-cluster -spec: - kafka: - # ... - storage: - type: jbod - volumes: - - id: 0 - type: persistent-claim - size: 100Gi - deleteClaim: false - - id: 1 - type: persistent-claim - size: 100Gi - deleteClaim: false - - id: 2 - type: persistent-claim - size: 100Gi - deleteClaim: false - # ... - zookeeper: - # ... ----- - -. Create or update the resource: -+ -[source,shell,subs=+quotes] -kubectl apply -f __ - -. Create new topics or reassign existing partitions to the new disks. -+ -TIP: Cruise Control is an effective tool for reassigning partitions. -To perform an intra-broker disk balance, you set `rebalanceDisk` to `true` under the `KafkaRebalance.spec`. diff --git a/documentation/modules/configuring/proc-managing-storage-node-pools.adoc b/documentation/modules/configuring/proc-managing-storage-node-pools.adoc index 6f9b4abe350..024b5f3648d 100644 --- a/documentation/modules/configuring/proc-managing-storage-node-pools.adoc +++ b/documentation/modules/configuring/proc-managing-storage-node-pools.adoc @@ -12,6 +12,8 @@ Node pools simplify this process, because you can set up separate node pools tha In this procedure we create and manage storage for a node pool called `pool-a` containing three nodes. We show how to change the storage class (`volumes.class`) that defines the type of persistent storage it uses. You can use the same steps to change the storage size (`volumes.size`). +This approach is particularly useful if you want to reduce disk sizes. +When increasing disk sizes, you have the option to xref:proc-resizing-persistent-volumes-{context}[dynamically resize persistent volumes]. NOTE: We strongly recommend using block storage. Strimzi is only tested for use with block storage. 
diff --git a/documentation/modules/configuring/proc-removing-volumes-from-jbod-storage.adoc b/documentation/modules/configuring/proc-removing-volumes-from-jbod-storage.adoc deleted file mode 100644 index 0f142ae7779..00000000000 --- a/documentation/modules/configuring/proc-removing-volumes-from-jbod-storage.adoc +++ /dev/null @@ -1,55 +0,0 @@ -// Module included in the following assemblies: -// -// assembly-storage.adoc - -[id='proc-removing-volumes-from-jbod-storage-{context}'] -= Removing volumes from JBOD storage - -This procedure describes how to remove volumes from a Kafka cluster configured to use JBOD storage. -It cannot be applied to Kafka clusters configured to use any other storage type. -The JBOD storage always has to contain at least one volume. - -IMPORTANT: To avoid data loss, you have to move all partitions before removing the volumes. - -.Prerequisites - -* A Kubernetes cluster -* A running Cluster Operator -* A Kafka cluster with JBOD storage with two or more volumes - -.Procedure - -. Reassign all partitions from the disks which are you going to remove. -Any data in partitions still assigned to the disks which are going to be removed might be lost. -+ -TIP: You can use the `kafka-reassign-partitions.sh` tool to reassign the partitions. - -. Edit the `spec.kafka.storage.volumes` property in the `Kafka` resource. -Remove one or more volumes from the `volumes` array. -For example, remove the volumes with ids `1` and `2`: -+ -[source,yaml,subs=attributes+] ----- -apiVersion: {KafkaApiVersion} -kind: Kafka -metadata: - name: my-cluster -spec: - kafka: - # ... - storage: - type: jbod - volumes: - - id: 0 - type: persistent-claim - size: 100Gi - deleteClaim: false - # ... - zookeeper: - # ... ----- - -. Create or update the resource: -+ -[source,shell,subs=+quotes] -kubectl apply -f __ diff --git a/documentation/modules/configuring/proc-resizing-persistent-volumes.adoc b/documentation/modules/configuring/proc-resizing-persistent-volumes.adoc deleted file mode 100644 index 4fdd8a041c3..00000000000 --- a/documentation/modules/configuring/proc-resizing-persistent-volumes.adoc +++ /dev/null @@ -1,82 +0,0 @@ -// Module included in the following assemblies: -// -// assembly-storage.adoc - -[id='proc-resizing-persistent-volumes-{context}'] -= Resizing persistent volumes - -[role="_abstract"] -Persistent volumes used by a cluster can be resized without any risk of data loss, as long as the storage infrastructure supports it. -Following a configuration update to change the size of the storage, Strimzi instructs the storage infrastructure to make the change. -Storage expansion is supported in Strimzi clusters that use persistent-claim volumes. - -Storage reduction is only possible when using multiple disks per broker. -You can remove a disk after moving all partitions on the disk to other volumes within the same broker (intra-broker) or to other brokers within the same cluster (intra-cluster). - -IMPORTANT: You cannot decrease the size of persistent volumes because it is not currently supported in Kubernetes. - -.Prerequisites - -* A Kubernetes cluster with support for volume resizing. -* The Cluster Operator is running. -* A Kafka cluster using persistent volumes created using a storage class that supports volume expansion. - -.Procedure - -. Edit the `Kafka` resource for your cluster. -+ -Change the `size` property to increase the size of the persistent volume allocated to a Kafka cluster, a ZooKeeper cluster, or both. 
-+ --- -* For Kafka clusters, update the `size` property under `spec.kafka.storage`. -* For ZooKeeper clusters, update the `size` property under `spec.zookeeper.storage`. --- -+ -.Kafka configuration to increase the volume size to `2000Gi` -[source,yaml,subs=attributes+] ----- -apiVersion: {KafkaApiVersion} -kind: Kafka -metadata: - name: my-cluster -spec: - kafka: - # ... - storage: - type: persistent-claim - size: 2000Gi - class: my-storage-class - # ... - zookeeper: - # ... ----- - -. Create or update the resource: -+ -[source,shell,subs=+quotes] -kubectl apply -f __ -+ -Kubernetes increases the capacity of the selected persistent volumes in response to a request from the Cluster Operator. -When the resizing is complete, the Cluster Operator restarts all pods that use the resized persistent volumes. -This happens automatically. - -. Verify that the storage capacity has increased for the relevant pods on the cluster: -+ -[source,shell,subs=+quotes] -kubectl get pv -+ -.Kafka broker pods with increased storage -[source,shell,subs="+quotes,attributes"] ----- -NAME CAPACITY CLAIM -pvc-0ca459ce-... 2000Gi my-project/data-my-cluster-kafka-2 -pvc-6e1810be-... 2000Gi my-project/data-my-cluster-kafka-0 -pvc-82dc78c9-... 2000Gi my-project/data-my-cluster-kafka-1 ----- -+ -The output shows the names of each PVC associated with a broker pod. - -[role="_additional-resources"] -.Additional resources - -* For more information about resizing persistent volumes in Kubernetes, see {K8sResizingPersistentVolumesUsingKubernetes}. diff --git a/documentation/modules/configuring/ref-storage-ephemeral.adoc b/documentation/modules/configuring/ref-storage-ephemeral.adoc deleted file mode 100644 index e98cbc9539b..00000000000 --- a/documentation/modules/configuring/ref-storage-ephemeral.adoc +++ /dev/null @@ -1,53 +0,0 @@ -// Module included in the following assemblies: -// -// assembly-storage.adoc - -[id='ref-ephemeral-storage-{context}'] -= Ephemeral storage - -[role="_abstract"] -Ephemeral data storage is transient. -All pods on a node share a local ephemeral storage space. -Data is retained for as long as the pod that uses it is running. -The data is lost when a pod is deleted. -Although a pod can recover data in a highly available environment. - -Because of its transient nature, ephemeral storage is only recommended for development and testing. - -Ephemeral storage uses `{K8sEmptyDir}` volumes to store data. -An `emptyDir` volume is created when a pod is assigned to a node. -You can set the total amount of storage for the `emptyDir` using the `sizeLimit` property . - -IMPORTANT: Ephemeral storage is not suitable for single-node ZooKeeper clusters or Kafka topics with a replication factor of 1. - -To use ephemeral storage, you set the storage type configuration in the `Kafka` or `ZooKeeper` resource to `ephemeral`. -If you are using node pools, you can also specify `ephemeral` in the storage configuration of individual node pools. - -.Example ephemeral storage configuration -[source,yaml,subs="attributes+"] ----- -apiVersion: {KafkaApiVersion} -kind: Kafka -metadata: - name: my-cluster -spec: - kafka: - storage: - type: ephemeral - # ... - zookeeper: - storage: - type: ephemeral - # ... ----- - -== Mount path of Kafka log directories - -The ephemeral volume is used by Kafka brokers as log directories mounted into the following path: - -[source,shell,subs="+quotes,attributes"] ----- -/var/lib/kafka/data/kafka-log__IDX__ ----- - -Where `_IDX_` is the Kafka broker pod index. 
For example `/var/lib/kafka/data/kafka-log0`. diff --git a/documentation/modules/configuring/ref-storage-jbod.adoc b/documentation/modules/configuring/ref-storage-jbod.adoc deleted file mode 100644 index 50c683d0a1b..00000000000 --- a/documentation/modules/configuring/ref-storage-jbod.adoc +++ /dev/null @@ -1,111 +0,0 @@ -// Module included in the following assemblies: -// -// assembly-storage.adoc - -[id='ref-jbod-storage-{context}'] -= JBOD storage - -[role="_abstract"] -JBOD storage allows you to configure your Kafka cluster to use multiple disks or volumes. -This approach provides increased data storage capacity for Kafka nodes, and can lead to performance improvements. -A JBOD configuration is defined by one or more volumes, each of which can be either xref:ref-ephemeral-storage-{context}[ephemeral] or xref:ref-persistent-storage-{context}[persistent]. -The rules and constraints for JBOD volume declarations are the same as those for ephemeral and persistent storage. -For example, you cannot decrease the size of a persistent storage volume after it has been provisioned, nor can you change the value of `sizeLimit` when the type is `ephemeral`. - -NOTE: JBOD storage is supported for *Kafka only*, not for ZooKeeper. - -To use JBOD storage, you set the storage type configuration in the `Kafka` resource to `jbod`. -If you are using node pools, you can also specify `jbod` in the storage configuration for nodes belonging to a specific node pool. - -The `volumes` property allows you to describe the disks that make up your JBOD storage array or configuration. - -.Example JBOD storage configuration -[source,yaml,subs="attributes+"] ----- -apiVersion: {KafkaApiVersion} -kind: Kafka -metadata: - name: my-cluster -spec: - kafka: - storage: - type: jbod - volumes: - - id: 0 - type: persistent-claim - size: 100Gi - deleteClaim: false - - id: 1 - type: persistent-claim - size: 100Gi - deleteClaim: false - # ... ----- - -The IDs cannot be changed once the JBOD volumes are created. -You can add or remove volumes from the JBOD configuration. - -[id='ref-jbod-storage-pvc-{context}'] -== PVC resource for JBOD storage - -When persistent storage is used to declare JBOD volumes, it creates a PVC with the following name: - -`data-_id_-_cluster-name_-kafka-_idx_`:: - -PVC for the volume used for storing data for the Kafka broker pod `_idx_`. -The `_id_` is the ID of the volume used for storing data for Kafka broker pod. - -== Mount path of Kafka log directories - -The JBOD volumes are used by Kafka brokers as log directories mounted into the following path: - -[source,shell,subs="+quotes,attributes"] ----- -/var/lib/kafka/data-_id_/kafka-log__idx__ ----- - -Where `_id_` is the ID of the volume used for storing data for Kafka broker pod `_idx_`. For example `/var/lib/kafka/data-0/kafka-log0`. - -== Configuring the storage volume used to store the KRaft metadata log - -In KRaft mode, a copy of the Kafka cluster's metadata log is stored on every node, including brokers and controllers. -Each node uses one of its data volumes for the KRaft metadata log. -By default, the log is stored on the volume with the lowest ID. -However, you can specify another volume using the `kraftMetadata` property. - -For controller-only nodes, which don't handle data, storage is used only used for the metadata log. -The metadata log is always stored only on one volume, so using JBOD storage with multiple volumes does not improve the performance or increase the available disk space. 
- -Meanwhile, broker nodes or nodes combining broker and controller roles share the same volume for storing both the metadata log and partition replica data. -This sharing optimizes disk utilization. -They can also utilize JBOD storage with multiple volumes so that one of the volumes is shared by the metadata log and partition replica data and any additional volumes are used for partition replica data only. - -Changing the volume that stores the metadata log triggers a rolling update of nodes in the cluster. -This process involves deleting the old metadata log and creating a new one in the new location. -If `kraftMetadata` isn't specified on any volume, adding a new volume with a lower ID also triggers an update and relocation of the metadata log. - -NOTE: JBOD storage in KRaft mode is considered early-access in Apache Kafka 3.7.x. - -.Example JBOD storage configuration using volume with ID 1 to store the KRaft metadata -[source,yaml,subs="attributes+"] ----- -apiVersion: {KafkaApiVersion} -kind: KafkaNodePool -metadata: - name: pool-a - # ... -spec: - storage: - type: jbod - volumes: - - id: 0 - type: persistent-claim - size: 100Gi - deleteClaim: false - - id: 1 - type: persistent-claim - size: 100Gi - kraftMetadata: shared - deleteClaim: false - # ... ----- diff --git a/documentation/modules/configuring/ref-storage-persistent.adoc b/documentation/modules/configuring/ref-storage-persistent.adoc deleted file mode 100644 index e07f0a10d04..00000000000 --- a/documentation/modules/configuring/ref-storage-persistent.adoc +++ /dev/null @@ -1,228 +0,0 @@ -// Module included in the following assemblies: -// -// assembly-storage.adoc - -[id='ref-persistent-storage-{context}'] -= Persistent storage - -[role="_abstract"] -Persistent data storage retains data in the event of system disruption. -For pods that use persistent data storage, data is persisted across pod failures and restarts. -Because of its permanent nature, persistent storage is recommended for production environments. - -The following examples show common types of persistent volumes supported by Kubernetes: - -* If your Kubernetes cluster runs on Amazon AWS, Kubernetes can provision Amazon EBS volumes -* If your Kubernetes cluster runs on Microsoft Azure, Kubernetes can provision Azure Disk Storage volumes -* If your Kubernetes cluster runs on Google Cloud, Kubernetes can provision Persistent Disk volumes -* If your Kubernetes cluster runs on bare metal, Kubernetes can provision local persistent volumes - -To use persistent storage in Strimzi, you specify `persistent-claim` in the storage configuration of the `Kafka` or `ZooKeeper` resources. -If you are using node pools, you can also specify `persistent-claim` in the storage configuration of individual node pools. - -You configure the resource so that pods use {K8sPersistentVolumeClaims} (PVCs) to make storage requests on persistent volumes (PVs). -PVs represent storage volumes that are created on demand and are independent of the pods that use them. -The PVC requests the amount of storage required when a pod is being created. -The underlying storage infrastructure of the PV does not need to be understood. -If a PV matches the storage criteria, the PVC is bound to the PV. - -You have two options for specifying the storage type: - -`storage.type: persistent-claim`:: If you choose `persistent-claim` as the storage type, a single persistent storage volume is defined. 
- -`storage.type: jbod`:: When you select `jbod` as the storage type, you have the flexibility to define an array of persistent storage volumes using unique IDs. - -In a production environment, it is recommended to configure the following: - -* For Kafka or node pools, set `storage.type` to `jbod` with one or more persistent volumes. -* For ZooKeeper, set `storage.type` as `persistent-claim` for a single persistent volume. - -Persistent storage also has the following configuration options: - -`id` (optional):: -A storage identification number. This option is mandatory for storage volumes defined in a JBOD storage declaration. -Default is `0`. - -`size` (required):: -The size of the persistent volume claim, for example, "1000Gi". - -`class` (optional):: -PVCs can request different types of persistent storage by specifying a {K8SStorageClass}. -Storage classes define storage profiles and dynamically provision PVs based on that profile. -If a storage class is not specified, the storage class marked as default in the Kubernetes cluster is used. -Persistent storage options might include SAN storage types or {K8sLocalPersistentVolumes}. - -`selector` (optional):: -Configuration to specify a specific PV. -Provides key:value pairs representing the labels of the volume selected. - -`deleteClaim` (optional):: -Boolean value to specify whether the PVC is deleted when the cluster is uninstalled. -Default is `false`. - -WARNING: Increasing the size of persistent volumes in an existing Strimzi cluster is only supported in Kubernetes versions that support persistent volume resizing. The persistent volume to be resized must use a storage class that supports volume expansion. -For other versions of Kubernetes and storage classes that do not support volume expansion, you must decide the necessary storage size before deploying the cluster. -Decreasing the size of existing persistent volumes is not possible. - -.Example persistent storage configuration for Kafka and ZooKeeper -[source,yaml,subs="attributes+"] ----- -apiVersion: {KafkaApiVersion} -kind: Kafka -metadata: - name: my-cluster -spec: - kafka: - storage: - type: jbod - volumes: - - id: 0 - type: persistent-claim - size: 100Gi - deleteClaim: false - - id: 1 - type: persistent-claim - size: 100Gi - deleteClaim: false - - id: 2 - type: persistent-claim - size: 100Gi - deleteClaim: false - # ... - zookeeper: - storage: - type: persistent-claim - size: 1000Gi - # ... ----- - -.Example persistent storage configuration with specific storage class -[source,yaml,subs="attributes+"] ----- -# ... -storage: - type: persistent-claim - size: 500Gi - class: my-storage-class -# ... ----- - -Use a `selector` to specify a labeled persistent volume that provides certain features, such as an SSD. - -.Example persistent storage configuration with selector -[source,yaml,subs="attributes+"] ----- -# ... -storage: - type: persistent-claim - size: 1Gi - selector: - hdd-type: ssd - deleteClaim: true -# ... ----- - -== Storage class overrides - -WARNING: Storage class overrides are deprecated and will be removed in the future. As a replacement, use `KafkaNodePool` resources instead. - -Instead of using the default storage class, you can specify a different storage class for one or more Kafka or ZooKeeper nodes. -This is useful, for example, when storage classes are restricted to different availability zones or data centers. -You can use the `overrides` field for this purpose. 
- -In this example, the default storage class is named `my-storage-class`: - -.Example storage configuration with class overrides -[source,yaml,subs="attributes+"] ----- -apiVersion: {KafkaApiVersion} -kind: Kafka -metadata: - labels: - app: my-cluster - name: my-cluster - namespace: myproject -spec: - # ... - kafka: - replicas: 3 - storage: - type: jbod - volumes: - - id: 0 - type: persistent-claim - size: 100Gi - deleteClaim: false - class: my-storage-class - overrides: - - broker: 0 - class: my-storage-class-zone-1a - - broker: 1 - class: my-storage-class-zone-1b - - broker: 2 - class: my-storage-class-zone-1c - # ... - # ... - zookeeper: - replicas: 3 - storage: - deleteClaim: true - size: 100Gi - type: persistent-claim - class: my-storage-class - overrides: - - broker: 0 - class: my-storage-class-zone-1a - - broker: 1 - class: my-storage-class-zone-1b - - broker: 2 - class: my-storage-class-zone-1c - # ... ----- - -As a result of the configured `overrides` property, the volumes use the following storage classes: - -* The persistent volumes of ZooKeeper node 0 use `my-storage-class-zone-1a`. -* The persistent volumes of ZooKeeper node 1 use `my-storage-class-zone-1b`. -* The persistent volumes of ZooKeeper node 2 use `my-storage-class-zone-1c`. -* The persistent volumes of Kafka broker 0 use `my-storage-class-zone-1a`. -* The persistent volumes of Kafka broker 1 use `my-storage-class-zone-1b`. -* The persistent volumes of Kafka broker 2 use `my-storage-class-zone-1c`. - -The `overrides` property is currently used only to override the storage `class`. -Overrides for other storage configuration properties is not currently supported. - -=== Migrating from storage class overrides to node pools - -Storage class overrides are deprecated and will be removed in the future. -If you are using storage class overrides, we encourage you to transition to using node pools instead. -To migrate the existing configuration, follow these steps: - -1. Make sure you already use node pools resources. - If not, you should xref:proc-migrating-clusters-node-pools-str[migrate the cluster to use node pools] first. -2. Create new xref:config-node-pools-str[node pools] with storage configuration using the desired storage class without using the overrides. -3. Move all partition replicas from the old broker using the storage class overrides. - You can do this using xref:cruise-control-concepts-str[Cruise Control] or xref:assembly-reassign-tool-str[using the partition reassignment tool]. -4. Delete the old node pool with the old brokers using the storage class overrides. - -[id='ref-persistent-storage-pvc-{context}'] -== PVC resources for persistent storage - -When persistent storage is used, it creates PVCs with the following names: - -`data-_cluster-name_-kafka-_idx_`:: -PVC for the volume used for storing data for the Kafka broker pod `_idx_`. - -`data-_cluster-name_-zookeeper-_idx_`:: -PVC for the volume used for storing data for the ZooKeeper node pod `_idx_`. - -== Mount path of Kafka log directories - -The persistent volume is used by the Kafka brokers as log directories mounted into the following path: - -[source,shell,subs="+quotes,attributes"] ----- -/var/lib/kafka/data/kafka-log__IDX__ ----- - -Where `_IDX_` is the Kafka broker pod index. For example `/var/lib/kafka/data/kafka-log0`. 
diff --git a/documentation/modules/configuring/ref-storage-tiered.adoc b/documentation/modules/configuring/ref-storage-tiered.adoc index 9d90e4473ec..f4364626665 100644 --- a/documentation/modules/configuring/ref-storage-tiered.adoc +++ b/documentation/modules/configuring/ref-storage-tiered.adoc @@ -10,7 +10,7 @@ Due to its https://kafka.apache.org/documentation/#tiered_storage_limitation[cur Tiered storage requires an implementation of Kafka's `RemoteStorageManager` interface to handle communication between Kafka and the remote storage system, which is enabled through configuration of the `Kafka` resource. Strimzi uses Kafka's https://github.com/apache/kafka/blob/trunk/storage/src/main/java/org/apache/kafka/server/log/remote/metadata/storage/TopicBasedRemoteLogMetadataManager.java[`TopicBasedRemoteLogMetadataManager`^] for Remote Log Metadata Management (RLMM) when custom tiered storage is enabled. -The RLMM manages the metadata related to remote storage. +The RLMM manages the metadata related to remote storage. To use custom tiered storage, do the following: diff --git a/documentation/modules/managing/proc-cluster-recovery-volume.adoc b/documentation/modules/managing/proc-cluster-recovery-volume.adoc index 3eefb9baa89..1372a6351d9 100644 --- a/documentation/modules/managing/proc-cluster-recovery-volume.adoc +++ b/documentation/modules/managing/proc-cluster-recovery-volume.adoc @@ -19,7 +19,7 @@ WARNING: If the User Operator is enabled and Kafka users are not recreated, user In this procedure, it is essential that PVs are mounted into the correct PVC to avoid data corruption. A `volumeName` is specified for the PVC and this must match the name of the PV. -For more information, see xref:ref-persistent-storage-{context}[Persistent storage]. +For more information, see xref:assembly-storage-{context}[]. .Procedure diff --git a/documentation/shared/attributes.adoc b/documentation/shared/attributes.adoc index 9b2fe0c7daf..daeaabb874a 100644 --- a/documentation/shared/attributes.adoc +++ b/documentation/shared/attributes.adoc @@ -64,7 +64,6 @@ :oauth-demo-hydra: link:https://github.com/strimzi/strimzi-kafka-oauth/tree/{OAuthVersion}/examples/docker#running-with-hydra-using-ssl-and-opaque-tokens[Using Hydra as the OAuth 2.0 authorization server^] // External links -:aws-ebs: link:https://aws.amazon.com/ebs/[Amazon Elastic Block Store (EBS)^] :JavaServiceProvider: link:https://www.baeldung.com/java-spi[Java Service Provider Interface^] :JQTool: link:https://github.com/jqlang/jq[command line JSON parser tool^] :kubernetes-docs: link:https://kubernetes.io/docs/home/