Merge branch 'main' into array-version-update
donatwork authored Oct 30, 2024
2 parents 037b4fc + 3e02001 commit 79bc6c3
Showing 5 changed files with 22 additions and 12 deletions.
1 change: 1 addition & 0 deletions content/docs/csidriver/release/powermax.md
@@ -45,6 +45,7 @@ description: Release notes for PowerMax CSI driver
| Automatic SRDF group creation is failing with "Unable to get Remote Port on SAN for Auto SRDF" for PowerMaxOS 10.1 arrays | Create the SRDF Group and add it to the storage class |
| [Node stage is failing with error "wwn for FC device not found"](https://github.com/dell/csm/issues/1070)| This is an intermittent issue; rebooting the node resolves it. |
| When the driver is installed using CSM Operator, pods created with block volumes occasionally get stuck in the ContainerCreating/Terminating state, or the devices are not available inside the pod. | Update the daemonset with the parameter `mountPropagation: "Bidirectional"` for volumedevices-path under the volumeMounts section, as shown in the sketch after this table. |
| When running CSI-PowerMax with Replication in a multi-cluster configuration, the driver on the target cluster fails and the following error is seen in logs: `error="CSI reverseproxy service host or port not found, CSI reverseproxy not installed properly"` | The reverseproxy service needs to be created manually on the target cluster. Follow [the instructions here](../../../deployment/csmoperator/modules/replication#configuration-steps) to create it.|
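
The daemonset change above can be pictured as follows. This is a minimal, hypothetical excerpt rather than the full daemonset spec: the volume mount name `volumedevices-path` comes from the workaround text, while the `mountPath` shown is only an assumption for illustration; keep whatever path your daemonset already defines.

```
# Hypothetical excerpt of the node daemonset container spec after the workaround.
# Only mountPropagation is added; the mountPath shown is illustrative.
volumeMounts:
  - name: volumedevices-path
    mountPath: /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices
    mountPropagation: "Bidirectional"
```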
### Note:

- Support for Kubernetes alpha features like Volume Health Monitoring is not available in OpenShift environments, as OpenShift does not support enabling alpha features on production-grade clusters.
1 change: 1 addition & 0 deletions content/docs/csidriver/troubleshooting/powermax.md
@@ -20,3 +20,4 @@ description: Troubleshooting PowerMax Driver
| Node stage is failing with the error `Error invalid IQN Target iqn.EMC.0648.SE1F` | 1. Update the initiator name to the full default name, e.g. iqn.1993-08.org.debian:01:e9afae962192 (see the sketch after this table). <br> 2. Ensure that the iSCSI initiators are available on all the nodes where the driver node plugin will be installed and that they use the full default name. |
| Volume mount is failing on some operating systems (e.g. VMware Virtual Platform) during node publish with the error `wrong fs type, bad option, bad superblock` | 1. Check the multipath configuration (if enabled). <br> 2. Edit the VM's Advanced settings -> hardware, add the parameter `disk.enableUUID=true`, and reboot the node. |
| Standby controller pod is in CrashLoopBackOff state | Scale down the replica count of the controller pod's deployment to 1 using ```kubectl scale deployment <deployment_name> --replicas=1 -n <driver_namespace>``` |
| When running CSI-PowerMax with Replication in a multi-cluster configuration, the driver on the target cluster fails and the following error is seen in logs: `error="CSI reverseproxy service host or port not found, CSI reverseproxy not installed properly"` | The reverseproxy service needs to be created manually on the target cluster. Follow [the instructions here](../../../deployment/csmoperator/modules/replication#configuration-steps) to create it.|
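
To check the initiator name referenced in the first row above, a minimal sketch, assuming a standard Linux open-iscsi setup; the example value is illustrative:

```
# Print the node's iSCSI initiator name; it should use the full default form
cat /etc/iscsi/initiatorname.iscsi
# Example of the expected format:
# InitiatorName=iqn.1993-08.org.debian:01:e9afae962192
```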
16 changes: 16 additions & 0 deletions content/docs/deployment/csmoperator/modules/replication.md
@@ -80,3 +80,19 @@ To configure Replication perform the following steps:
kubectl patch deployment -n dell-replication-controller dell-replication-controller-manager \
-p '{"spec":{"template":{"spec":{"hostAliases":[{"hostnames":["<remote-FQDN>"],"ip":"<remote-IP>"}]}}}}'
```
9. **If installing replication via the operator with the PowerMax driver on two clusters:** you will need to create a Kubernetes Service for the reverseproxy on the target cluster. Insert values from your deployment into this service.yaml file, then create it on the target cluster using `kubectl create -f service.yaml`; a verification sketch follows the manifest below.
```
apiVersion: v1
kind: Service
metadata:
name: csipowermax-reverseproxy
namespace: <INSERT DRIVER NAMESPACE>
spec:
ports:
- port: <INSERT X_CSI_REVPROXY_PORT FROM DRIVER SAMPLE FILE>
protocol: TCP
targetPort: 2222
selector:
app: <INSERT DRIVER DEPLOYMENT NAME>-controller
type: ClusterIP
```
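
As a quick check that the Service was created as intended, a minimal sketch, assuming the Service name `csipowermax-reverseproxy` from the manifest above; `<driver-namespace>` is a placeholder for the namespace you inserted:

```
# Create the Service on the target cluster from the filled-in manifest
kubectl create -f service.yaml

# Confirm the Service exists in the driver namespace and exposes the reverseproxy port
kubectl get service csipowermax-reverseproxy -n <driver-namespace>

# Confirm the selector actually matches the running controller pod
kubectl get endpoints csipowermax-reverseproxy -n <driver-namespace>
```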
15 changes: 3 additions & 12 deletions content/docs/replication/release/_index.md
@@ -5,20 +5,8 @@ weight: 9
Description: >
Dell Container Storage Modules (CSM) release notes for replication
---

## Release Notes - CSM Replication 1.10.0

### New Features/Changes

- [#1359 - [FEATURE]: Add Support for OpenShift Container Platform (OCP) 4.16 ](https://github.com/dell/csm/issues/1359)
@@ -30,3 +18,6 @@ Description: >
- [#1385 - [BUG]: Enable static build of repctl](https://github.com/dell/csm/issues/1385)

### Known Issues
| Symptoms | Prevention, Resolution or Workaround |
| --- | --- |
| When running CSI-PowerMax with Replication in a multi-cluster configuration, the driver on the target cluster fails and the following error is seen in logs: `error="CSI reverseproxy service host or port not found, CSI reverseproxy not installed properly"` | The reverseproxy service needs to be created manually on the target cluster. Follow [the instructions here](../../deployment/csmoperator/modules/replication#configuration-steps) to create it.|
1 change: 1 addition & 0 deletions content/docs/replication/troubleshooting.md
@@ -18,3 +18,4 @@ description: >
| After upgrading to Replication v1.4.0, if `kubectl get rg` returns the error `Unable to list "replication.storage.dell.com/v1alpha1, Resource=dellcsireplicationgroups"` | This means `kubectl` still doesn't recognize the new version of the CRD `dellcsireplicationgroups.replication.storage.dell.com` after the upgrade. Running the command `kubectl get DellCSIReplicationGroup.v1.replication.storage.dell.com/<rg-id> -o yaml` will resolve the issue. |
| To add or delete PVs in an existing SYNC Replication Group in PowerStore, you may encounter the error `The operation is restricted as sync replication session for resource <Replication Group Name> is not paused` | To resolve this, pause the replication group (RG), add or delete the PV, and then resume the RG. The commands for the pause and resume operations are `repctl --rg <rg-id> exec -a suspend` and `repctl --rg <rg-id> exec -a resume`; see the sketch after this table. |
| To delete the last volume from an existing SYNC Replication Group in PowerStore, you may encounter the error `failed to remove volume from volume group: The operation cannot be completed on metro or replicated volume group because volume group will become empty after last members are removed` | To resolve this, unassign the protection policy from the corresponding volume group in the PowerStore Manager UI. After that, you can successfully delete the last volume in that SYNC Replication Group. |
| When running CSI-PowerMax with Replication in a multi-cluster configuration, the driver on the target cluster fails and the following error is seen in logs: `error="CSI reverseproxy service host or port not found, CSI reverseproxy not installed properly"` | The reverseproxy service needs to be created manually on the target cluster. Follow [the instructions here](../../deployment/csmoperator/modules/replication#configuration-steps) to create it.|
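
A minimal sketch of the pause/resume workaround from the table above, using the `repctl` commands given there; `<rg-id>` is a placeholder for your replication group:

```
# Pause the SYNC replication group before adding or deleting PVs
repctl --rg <rg-id> exec -a suspend

# ... add or delete the PVs in the replication group here ...

# Resume replication once the changes are complete
repctl --rg <rg-id> exec -a resume
```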
