Releases: percona/percona-server-mongodb-operator
v1.19.0
Release Highlights
Using remote file server for backups (tech preview)
The new filesystem backup storage type was added in this release, in addition to the already existing s3 and azure types.
It allows users to mount a remote file server to a local directory, so that Percona Backup for MongoDB can use this directory as a storage for backups. The approach is based on the common Network File System (NFS) protocol, and should be useful in network-restricted environments without S3-compatible storage, or with a non-standard storage service that supports NFS access.
To use an NFS-capable remote file server as a backup storage, users need to mount the remote storage as a sidecar volume in the replsets section of the Custom Resource (and also in configsvrReplSet in the case of a sharded cluster):

```yaml
replsets:
  ...
  sidecarVolumes:
  - name: backup-nfs
    nfs:
      server: "nfs-service.storage.svc.cluster.local"
      path: "/psmdb-some-name-rs0"
  ...
```
Finally, this new storage needs to be configured in the same Custom Resource as a normal backup storage:

```yaml
backup:
  ...
  storages:
    backup-nfs:
      filesystem:
        path: /mnt/nfs/
      type: filesystem
  ...
  volumeMounts:
  - mountPath: /mnt/nfs/
    name: backup-nfs
```
See more in our documentation about this storage type.
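With the storage configured, an on-demand backup can target it by name through the storageName field. A minimal sketch of a PerconaServerMongoDBBackup manifest (the backup and cluster names below are placeholders):

```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBBackup
metadata:
  name: backup1                    # placeholder backup name
spec:
  clusterName: my-cluster-name     # placeholder: name of your PerconaServerMongoDB cluster
  storageName: backup-nfs          # the filesystem storage defined in the backup.storages section
```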
Generated passwords for custom MongoDB users
A new improvement to the declarative management of custom MongoDB users brings automatic generation of user passwords. When you specify a new user in the deploy/cr.yaml configuration file, you can omit the reference to an existing Secret with the user’s password, and the Operator will generate it automatically:
```yaml
...
users:
- name: my-user
  db: admin
  roles:
  - name: clusterAdmin
    db: admin
  - name: userAdminAnyDatabase
    db: admin
```
Find more details on this automatically created Secret in our documentation.
Percona Server for MongoDB 8.0 support
Percona Server for MongoDB 8.0 is now supported by the Operator in addition to the 6.0 and 7.0 versions. The appropriate images are now included in the list of Percona-certified images. See this blog post for details about the latest MongoDB 8.0 features with the added reliability and performance improvements.
New Features
- K8SPSMDB-1109: Backups can now be stored on a remote file server
- K8SPSMDB-921: IAM Roles for Service Accounts (IRSA) allow automating access to AWS S3 buckets based on Identity Access Management with no need to specify the S3 credentials explicitly
- K8SPSMDB-1133: Manual change of Replica Set Member Priority in Percona Server for MongoDB Operator is now possible with the new replsetOverrides.MEMBER-NAME.priority Custom Resource option
- K8SPSMDB-1164: Add the possibility to create users in the $external database for external authentication purposes
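For instance, a member priority override might look like the following sketch in deploy/cr.yaml (the member name below is a placeholder; actual member names follow the Pod naming pattern of the cluster):

```yaml
replsets:
- name: rs0
  size: 3
  replsetOverrides:
    my-cluster-name-rs0-0:   # placeholder: name of the replica set member (Pod)
      priority: 3            # a higher priority makes this member preferred as Primary
```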
Improvements
- K8SPSMDB-1123: Percona Server for MongoDB 8.0 is now supported
- K8SPSMDB-1171: The declarative user management was enhanced with the possibility to automatically generate passwords
- K8SPSMDB-1174: Telemetry was improved to track whether the custom users and roles management, automatic volume expansion, and multi-cluster services features are enabled
- K8SPSMDB-1179: It is now possible to configure externalTrafficPolicy for mongod, configsvr and mongos instances
- K8SPSMDB-1205: Backups in unmanaged clusters are now supported, removing a long-standing limitation of cross-site replication that didn’t allow backups on replica clusters
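As a sketch, the externalTrafficPolicy mentioned in K8SPSMDB-1179 is set under the expose subsection, with the same Local and Cluster values as in standard Kubernetes Services (the replica set name is a placeholder):

```yaml
replsets:
- name: rs0
  expose:
    enabled: true
    type: LoadBalancer
    externalTrafficPolicy: Local   # preserves client source IPs; Cluster is the alternative
```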
Bugs Fixed
- K8SPSMDB-1215: Fix a bug where ExternalTrafficPolicy was incorrectly set for LoadBalancer and NodePort services (Thanks to Anton Averianov for contributing)
- K8SPSMDB-675: Fix a bug where disabling sharding failed on a running cluster with enabled backups
- K8SPSMDB-754: Fix a bug where some error messages had “INFO” log level and therefore were not seen in logs with the “ERROR” log level turned on
- K8SPSMDB-1088: Fix a bug which caused the Operator to start two backup operations if the user patched the backup object while its state was empty or Waiting
- K8SPSMDB-1156: Fix a bug that prevented the Operator with enabled backups from recovering from invalid TLS configurations (Thanks to KOS for reporting)
- K8SPSMDB-1172: Fix a bug where a backup user’s password containing special characters caused Percona Backup for MongoDB to fail
- K8SPSMDB-1212: Stop disabling balancer during restores, because it is not required for Percona Backup for MongoDB 2.x
Deprecation, Rename and Removal
- The psmdbCluster option from the deploy/backup/backup.yaml manifest used for on-demand backups, which was deprecated since the Operator version 1.12.0 in favor of the clusterName option, has been removed and is no longer supported
- Percona Server for MongoDB 5.0 has reached its end of life and is no longer supported by the Operator
Supported Platforms
The Operator was developed and tested with Percona Server for MongoDB 6.0.19-16, 7.0.15-9, and 8.0.4-1. Other options may also work but have not been tested. The Operator also uses Percona Backup for MongoDB 2.8.0.
Percona Operators are designed for compatibility with all CNCF-certified
Kubernetes distributions. Our release process includes targeted testing and validation on major cloud provider platforms and OpenShift, as detailed below for Operator version 1.19.0:
- Google Kubernetes Engine (GKE) 1.28-1.30
- Amazon Elastic Container Service for Kubernetes (EKS) 1.29-1.31
- OpenShift Container Platform 4.14.44 - 4.17.11
- Azure Kubernetes Service (AKS) 1.28-1.31
- Minikube 1.34.0 based on Kubernetes 1.31.0
This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.
v1.18.0
Release Highlights
Enhancements of the declarative user management
The declarative management of custom MongoDB users was improved compared to its initial implementation in the previous release, where the Operator did not track and sync user-related changes in the Custom Resource and the database. Also, starting from now you can create custom MongoDB roles on various databases just like users in the deploy/cr.yaml manifest:
```yaml
...
roles:
- name: clusterAdmin
  db: admin
- name: userAdminAnyDatabase
  db: admin
```
See the documentation to find more details about this feature.
Support for selective restores
Percona Backup for MongoDB 2.0.0 has introduced a new functionality that allows partial restores, which means selectively restoring only the desired subset of data. Now the Operator also supports this feature, allowing you to restore a specific database or a collection from a backup. You can achieve this by using an additional selective section in the PerconaServerMongoDBRestore Custom Resource:
```yaml
spec:
  selective:
    withUsersAndRoles: true
    namespaces:
    - "db.collection"
```
You can find more on selective restores and their limitations in our documentation.
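Put together, a complete restore manifest might look like the following sketch (the restore, cluster, and backup names are placeholders):

```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: restore1                  # placeholder restore name
spec:
  clusterName: my-cluster-name    # placeholder: target PerconaServerMongoDB cluster
  backupName: backup1             # placeholder: existing PerconaServerMongoDBBackup object
  selective:
    withUsersAndRoles: true       # also restore users and roles relevant to the namespace
    namespaces:
    - "db.collection"             # restore only this database.collection
```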
Splitting the replica set of the database cluster over multiple Kubernetes clusters
Recent improvements in cross-site replication made it possible to keep the replica set of the database cluster in different data centers. The Operator itself cannot deploy MongoDB replicas to other data centers, but this still can be achieved with a number of Operator deployments, equal to the size of your replica set: one Operator to control the replica set via cross-site replication, and at least two Operators to bootstrap the unmanaged clusters with other MongoDB replica set instances. Splitting the replica set of the database cluster over multiple Kubernetes clusters can be useful to get a fault-tolerant system in which all replicas are in different data centers.
You can find more about configuring such a multi-datacenter MongoDB cluster and the limitations of this solution on the dedicated documentation page.
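On the managed site, the replicas running in other data centers are listed under the replsets.externalNodes subsection; a minimal sketch (host names are placeholders):

```yaml
replsets:
- name: rs0
  externalNodes:
  - host: rs0-0.mongo.dc2.example.com   # placeholder: replica in another data center
    port: 27017
    votes: 1
    priority: 1
```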
New Features
K8SPSMDB-894: It is now possible to restore a subset of data (a specific database or a collection) from a backup, which is useful to reduce time on restore operations when fixing a corrupted data fragment
K8SPSMDB-1113: The new percona.com/delete-pitr-chunks finalizer allows the deletion of PITR log files from the backup storage when deleting a cluster so that leftover data does not continue to take up space in the cloud
K8SPSMDB-1124 and K8SPSMDB-1146: Declarative user management now covers creating and managing user roles, and syncs user-related changes between the Custom Resource and the database
K8SPSMDB-1140 and K8SPSMDB-1141: Multi-datacenter cluster deployment is now possible
Improvements
K8SPSMDB-739: A number of Service exposure options in the replsets, sharding.configsvrReplSet, and sharding.mongos subsections were renamed for unification with other Percona Operators
K8SPSMDB-1002: New Custom Resource options under the replsets.primaryPreferTagSelector subsection allow providing Primary instance selection preferences based on a specific zone and region, which may be especially useful within the planned zone switchover process (Thanks to sergelogvinov for contributing)
K8SPSMDB-1096: Restore logs were improved to contain pbm-agent logs in mongod containers, useful to debug failures in the backup restoration process
K8SPSMDB-1135: Split-horizon DNS for external (unmanaged) nodes is now configurable via the replsets.externalNodes subsection in Custom Resource
K8SPSMDB-1152: Starting from now, the Operator uses multi-architecture images of Percona Server for MongoDB and Percona Backup for MongoDB, making it easier to deploy a cluster on ARM
K8SPSMDB-1160: The PVC resize feature introduced in the previous release can now be enabled or disabled via the enableVolumeExpansion Custom Resource option (false by default), which protects the cluster from a storage resize triggered by mistake
K8SPSMDB-1132: A new secrets.keyFile Custom Resource option allows configuring a custom name for the Secret with the MongoDB internal auth key file
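As a sketch, opting in to volume expansion and requesting a larger volume could look like this in deploy/cr.yaml (the replica set name and storage size are placeholders):

```yaml
enableVolumeExpansion: true   # opt in; the option is false by default
replsets:
- name: rs0
  volumeSpec:
    persistentVolumeClaim:
      resources:
        requests:
          storage: 5Gi        # placeholder: raising this value triggers a PVC resize
```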
Bugs Fixed
K8SPSMDB-912: Fix a bug where the full backup connection string including the password was visible in logs in case of the Percona Backup for MongoDB errors
K8SPSMDB-1047: Fix a bug where the Operator was changing writeConcernMajorityJournalDefault to “true” during the replica set reconfiguring, ignoring the value set by the user
K8SPSMDB-1168: Fix a bug where successful backups could obtain a failed state in case of the Operator configured with watchAllNamespaces: true and having the same name for MongoDB clusters across multiple namespaces (Thanks to Markus Küffner for contribution)
K8SPSMDB-1170: Fix a bug that prevented deletion of a cluster with the active percona.com/delete-psmdb-pods-in-order finalizer in case of the cluster error state (e.g. when mongo replset failed to reconcile)
K8SPSMDB-1184: Fix a bug where the Operator failed to reconcile when using the container security context with readOnlyRootFilesystem set to true (Thanks to applejag for contribution)
Deprecation, Rename and Removal
- The new enableVolumeExpansion Custom Resource option allows users to disable the automated storage scaling with the Volume Expansion capability. The default value of this option is false, which means that the automated scaling is turned off by default.
- A number of Service exposure Custom Resource options in the replsets, sharding.configsvrReplSet, and sharding.mongos subsections were renamed to provide a unified experience with other Percona Operators:
  - expose.serviceAnnotations option renamed to expose.annotations
  - expose.serviceLabels option renamed to expose.labels
  - expose.exposeType option renamed to expose.type
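After the rename, an expose configuration would look like the following sketch (the replica set name and the annotation and label values are placeholders):

```yaml
replsets:
- name: rs0
  expose:
    enabled: true
    type: ClusterIP            # formerly expose.exposeType
    annotations:               # formerly expose.serviceAnnotations
      my-annotation: "value"   # placeholder
    labels:                    # formerly expose.serviceLabels
      my-label: "value"        # placeholder
```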
Supported Platforms
The Operator was developed and tested with Percona Server for MongoDB 5.0.29-25,
6.0.18-15, and 7.0.14-8. Other options may also work but have not been tested. The
Operator also uses Percona Backup for MongoDB 2.7.0.
The following platforms were tested and are officially supported by the Operator
1.18.0:
- Google Kubernetes Engine (GKE) 1.28-1.30
- Amazon Elastic Container Service for Kubernetes (EKS) 1.28-1.31
- OpenShift Container Platform 4.13.52 - 4.17.3
- Azure Kubernetes Service (AKS) 1.28-1.31
- Minikube 1.34.0 based on Kubernetes 1.31.0
This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.
v1.17.0
Release Highlights
Declarative user management (technical preview)
Before the Operator version 1.17.0, custom MongoDB users had to be created manually. Now the declarative creation of custom MongoDB users is supported via the users subsection in the Custom Resource. You can specify a new user in the deploy/cr.yaml manifest, setting the user’s login name and database, passwordSecretRef (a reference to a key in a Secret resource containing the user’s password), as well as MongoDB roles on various databases which should be assigned to this user:
```yaml
...
users:
- name: my-user
  db: admin
  passwordSecretRef:
    name: my-user-password
    key: my-user-password-key
  roles:
  - name: clusterAdmin
    db: admin
  - name: userAdminAnyDatabase
    db: admin
```
See documentation to find more details about this feature with additional explanations and the list of current limitations.
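The referenced Secret can be created beforehand; a minimal sketch (the password value is a placeholder):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-user-password
type: Opaque
stringData:
  my-user-password-key: "my-secure-password"   # placeholder password
```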
Liveness check improvements
Several logging improvements were made related to the liveness checks, to provide more information for debugging and to persist the logs on failures for further examination.
Liveness check logs are stored in the /data/db/mongod-data/logs/mongodb-healthcheck.log file, which can be accessed in the corresponding Pod if needed. Starting from now, the liveness check generates more log messages, and the default log level is set to DEBUG.
Each time the health check fails, the current log is saved to a gzip-compressed file named mongodb-healthcheck-<timestamp>.log.gz, and the mongodb-healthcheck.log file is reset. Logs older than 24 hours are automatically deleted.
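The rotation scheme described above can be illustrated with a minimal Python sketch (the Operator itself is written in Go; the function below is an assumption-laden illustration, not the actual implementation): on each health-check failure the live log is compressed into a timestamped archive, the live log is reset, and archives older than 24 hours are pruned.

```python
import gzip
import time
from pathlib import Path

LIVE_LOG = "mongodb-healthcheck.log"       # file name from the release notes
RETENTION_SECONDS = 24 * 3600              # logs older than 24 hours are deleted


def rotate_on_failure(log_dir: Path) -> Path:
    """Archive the live log with a timestamp, reset it, and prune old archives."""
    live = log_dir / LIVE_LOG
    stamp = time.strftime("%Y-%m-%dT%H-%M-%S")
    archive = log_dir / f"mongodb-healthcheck-{stamp}.log.gz"
    # Compress the current log into a timestamped .gz archive
    with live.open("rb") as src, gzip.open(archive, "wb") as dst:
        dst.write(src.read())
    # Reset the live log
    live.write_bytes(b"")
    # Delete archives past the retention window
    cutoff = time.time() - RETENTION_SECONDS
    for old in log_dir.glob("mongodb-healthcheck-*.log.gz"):
        if old.stat().st_mtime < cutoff:
            old.unlink()
    return archive
```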
New Features
- K8SPSMDB-253: It is now possible to create and manage users via the Custom Resource
Improvements
- K8SPSMDB-899: Add Labels for all Kubernetes objects created by Operator (backups/restores, Secrets, Volumes, etc.) to make them clearly distinguishable
- K8SPSMDB-919: The Operator now checks if the needed Secrets exist and connects to the storage to check the validity of credentials and the existence of a backup before starting the restore process
- K8SPSMDB-934: Liveness checks are providing more debug information and keeping separate log archives for each failure with the 24 hours retention
- K8SPSMDB-1057: Finalizers were renamed to contain fully qualified domain names (FQDNs), avoiding potential conflicts with other finalizer names in the same Kubernetes environment
- K8SPSMDB-1108: The new Custom Resource option allows setting custom containerSecurityContext for PMM containers
- K8SPSMDB-994: Remove a limitation where it wasn’t possible to create a new cluster with splitHorizon enabled, so it could only be enabled later on the running cluster
Bugs Fixed
- K8SPSMDB-925: Fix a bug where the Operator generated “failed to start balancer” and “failed to get mongos connection” log messages when using Mongos with servicePerPod and LoadBalancer services, while the cluster was operating properly
- K8SPSMDB-1105: The memory requests and limits for backups were increased in the deploy/cr.yaml configuration file example to reflect the minimal pbm-agent requirement of 1 GB RAM needed for stable operation of Percona Backup for MongoDB
- K8SPSMDB-1074: Fix a bug where MongoDB Cluster could not failover in case of all Pods downtime and exposeType Custom Resource option set to either NodePort or LoadBalancer
- K8SPSMDB-1089: Fix a bug where it was impossible to delete a cluster in error state with finalizers present
- K8SPSMDB-1092: Fix a bug where Percona Backup for MongoDB log messages during physical restore were not accessible with the kubectl logs command
- K8SPSMDB-1094: Fix a bug where it wasn’t possible to create a new cluster with upgradeOptions.setFCV Custom Resource option set to true
- K8SPSMDB-1110: Fix a bug where nil Custom Resource annotations were causing the Operator panic
Deprecation, Rename and Removal
Finalizers were renamed to contain fully qualified domain names to comply with the Kubernetes standards.
PerconaServerMongoDB Custom Resource:
- delete-psmdb-pods-in-order finalizer renamed to percona.com/delete-psmdb-pods-in-order
- delete-psmdb-pvc finalizer renamed to percona.com/delete-psmdb-pvc
PerconaServerMongoDBBackup Custom Resource:
- delete-backup finalizer renamed to percona.com/delete-backup
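In the cluster manifest, the renamed finalizers appear under metadata; a sketch (the cluster name is a placeholder):

```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: my-cluster-name                      # placeholder
  finalizers:
  - percona.com/delete-psmdb-pods-in-order   # delete Pods in a defined order on cluster deletion
  - percona.com/delete-psmdb-pvc             # also delete Persistent Volume Claims
```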
Supported Platforms
The Operator was developed and tested with Percona Server for MongoDB 5.0.28-24,
6.0.16-13, and 7.0.12-7. Other options may also work but have not been tested. The
Operator also uses Percona Backup for MongoDB 2.5.0.
The following platforms were tested and are officially supported by the Operator
1.17.0:
- Google Kubernetes Engine (GKE) 1.27-1.30
- Amazon Elastic Container Service for Kubernetes (EKS) 1.28-1.30
- OpenShift Container Platform 4.13.48 - 4.16.9
- Azure Kubernetes Service (AKS) 1.28-1.30
- Minikube 1.33.1
This list only includes the platforms that the Percona Operators are specifically tested on as part of the release process. Other Kubernetes flavors and versions depend on the backward compatibility offered by Kubernetes itself.