diff --git a/content/docs/csidriver/release/powermax.md b/content/docs/csidriver/release/powermax.md
index 0a38803f15..e40971302a 100644
--- a/content/docs/csidriver/release/powermax.md
+++ b/content/docs/csidriver/release/powermax.md
@@ -45,6 +45,7 @@ description: Release notes for PowerMax CSI driver
| Automatic SRDF group creation is failing with "Unable to get Remote Port on SAN for Auto SRDF" for PowerMaxOS 10.1 arrays | Create the SRDF Group and add it to the storage class |
| [Node stage is failing with error "wwn for FC device not found"](https://github.com/dell/csm/issues/1070)| This is an intermittent issue, rebooting the node will resolve this issue |
| When the driver is installed using CSM Operator , few times, pods created using block volume are getting stuck in containercreating/terminating state or devices are not available inside the pod. | Update the daemonset with parameter `mountPropagation: "Bidirectional"` for volumedevices-path under volumeMounts section.|
+| When running CSI PowerMax with Replication in a multi-cluster configuration, the driver on the target cluster fails, and the following error appears in its logs: `error="CSI reverseproxy service host or port not found, CSI reverseproxy not installed properly"` | The reverseproxy service must be created manually on the target cluster. Follow [the instructions here](../../../deployment/csmoperator/modules/replication#configuration-steps) to create it. |
### Note:
- Support for Kubernetes alpha features like Volume Health Monitoring will not be available in Openshift environment as Openshift doesn't support enabling of alpha features for Production Grade clusters.
diff --git a/content/docs/csidriver/troubleshooting/powermax.md b/content/docs/csidriver/troubleshooting/powermax.md
index 66a3026544..27af1ef2c4 100644
--- a/content/docs/csidriver/troubleshooting/powermax.md
+++ b/content/docs/csidriver/troubleshooting/powermax.md
@@ -20,3 +20,4 @@ description: Troubleshooting PowerMax Driver
| nodestage is failing with error `Error invalid IQN Target iqn.EMC.0648.SE1F` | 1. Update initiator name to full default name , ex: iqn.1993-08.org.debian:01:e9afae962192
2.Ensure that the iSCSI initiators are available on all the nodes where the driver node plugin will be installed and it should be full default name. |
| Volume mount is failing on few OS(ex:VMware Virtual Platform) during node publish with error `wrong fs type, bad option, bad superblock` | 1. Check the multipath configuration(if enabled) 2. Edit Vm Advanced settings->hardware and add the param `disk.enableUUID=true` and reboot the node |
| Standby controller pod is in crashloopbackoff state | Scale down the replica count of the controller pod's deployment to 1 using ```kubectl scale deployment --replicas=1 -n ``` |
+| When running CSI PowerMax with Replication in a multi-cluster configuration, the driver on the target cluster fails, and the following error appears in its logs: `error="CSI reverseproxy service host or port not found, CSI reverseproxy not installed properly"` | The reverseproxy service must be created manually on the target cluster. Follow [the instructions here](../../../deployment/csmoperator/modules/replication#configuration-steps) to create it. |
diff --git a/content/docs/deployment/csmoperator/modules/replication.md b/content/docs/deployment/csmoperator/modules/replication.md
index dd0d761961..7cac9add29 100644
--- a/content/docs/deployment/csmoperator/modules/replication.md
+++ b/content/docs/deployment/csmoperator/modules/replication.md
@@ -80,3 +80,24 @@ To configure Replication perform the following steps:
kubectl patch deployment -n dell-replication-controller dell-replication-controller-manager \
-p '{"spec":{"template":{"spec":{"hostAliases":[{"hostnames":[""],"ip":""}]}}}}'
```
+9. **If you are installing replication through the operator with the PowerMax driver on two clusters:** create a Kubernetes Service for the reverseproxy on the target cluster. Replace the placeholders in the service.yaml manifest below with the values from your deployment, create it on the target cluster with `kubectl create -f service.yaml`, and then verify it as shown after the manifest.
+   ```yaml
+   apiVersion: v1
+   kind: Service
+   metadata:
+     name: csipowermax-reverseproxy
+     namespace: <namespace in which the PowerMax driver is installed>
+   spec:
+     ports:
+       - port: <port on which the reverseproxy service is exposed>
+         protocol: TCP
+         targetPort: 2222
+     selector:
+       app: <driver-name>-controller
+     type: ClusterIP
+   ```
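+   After the service is created, a quick sanity check (a minimal example, reusing the placeholder namespace from the manifest above) is to confirm that it exists in the driver namespace:
+   ```
+   kubectl get service csipowermax-reverseproxy -n <namespace in which the PowerMax driver is installed>
+   ```
+   If the driver pods on the target cluster are still reporting the reverseproxy error after the service exists, restarting them (for example, by deleting the pods so they are recreated) should allow them to pick it up.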
diff --git a/content/docs/replication/release/_index.md b/content/docs/replication/release/_index.md
index 9987923954..49cf6fd78c 100644
--- a/content/docs/replication/release/_index.md
+++ b/content/docs/replication/release/_index.md
@@ -5,20 +5,8 @@ weight: 9
Description: >
Dell Container Storage Modules (CSM) release notes for replication
---
-
## Release Notes - CSM Replication 1.10.0
-
-
-
-
-
-
-
-
-
-
-
### New Features/Changes
- [#1359 - [FEATURE]: Add Support for OpenShift Container Platform (OCP) 4.16 ](https://github.com/dell/csm/issues/1359)
@@ -30,3 +18,6 @@ Description: >
- [#1385 - [BUG]: Enable static build of repctl](https://github.com/dell/csm/issues/1385)
### Known Issues
+| Symptoms | Prevention, Resolution or Workaround |
+| --- | --- |
+| When running CSI PowerMax with Replication in a multi-cluster configuration, the driver on the target cluster fails, and the following error appears in its logs: `error="CSI reverseproxy service host or port not found, CSI reverseproxy not installed properly"` | The reverseproxy service must be created manually on the target cluster. Follow [the instructions here](../../deployment/csmoperator/modules/replication#configuration-steps) to create it. |
diff --git a/content/docs/replication/troubleshooting.md b/content/docs/replication/troubleshooting.md
index a8ee179fa7..325e9459c5 100644
--- a/content/docs/replication/troubleshooting.md
+++ b/content/docs/replication/troubleshooting.md
@@ -18,3 +18,4 @@ description: >
| After upgrading to Replication v1.4.0, if `kubectl get rg` returns an error `Unable to list "replication.storage.dell.com/v1alpha1, Resource=dellcsireplicationgroups"`| This means `kubectl` still doesn't recognize the new version of CRD `dellcsireplicationgroups.replication.storage.dell.com` after upgrade. Running the command `kubectl get DellCSIReplicationGroup.v1.replication.storage.dell.com/ -o yaml` will resolve the issue. |
| To add or delete PV s in the existing SYNC Replication Group in PowerStore, you may encounter the error `The operation is restricted as sync replication session for resource is not paused` | To resolve this, you need to pause the replication group, add the PV, and then resume the replication group (RG). The commands for the pause and resume operations are: `repctl --rg exec -a suspend` `repctl --rg exec -a resume` |
| To delete the last volume from the existing SYNC Replication Group in Powerstore, you may encounter the error 'failed to remove volume from volume group: The operation cannot be completed on metro or replicated volume group because volume group will become empty after last members are removed' | To resolve this, unassign the protection policy from the corresponding volume group on the PowerStore Manager UI. After that, you can successfully delete the last volume in that SYNC Replication Group.|
+| When running CSI PowerMax with Replication in a multi-cluster configuration, the driver on the target cluster fails, and the following error appears in its logs: `error="CSI reverseproxy service host or port not found, CSI reverseproxy not installed properly"` | The reverseproxy service must be created manually on the target cluster. Follow [the instructions here](../../deployment/csmoperator/modules/replication#configuration-steps) to create it. |