got an error when applying envoyfilter-redis-proxy.yaml #1

Open
drmy opened this issue Dec 31, 2020 · 14 comments

@drmy

drmy commented Dec 31, 2020

Hi Huabing,

Happy New Year!
I really appreciate your work, because I am trying to implement something similar for my own work. I ran into an issue while following the instructions here. I have not had time to figure it out yet, but I wanted to report it to you first.

So, I have installed the latest Istio 1.8.1 on my k8s cluster. I know your REPLACE PR has been merged into this release, but I am not sure whether this post still applies to 1.8.1.

And then my issue is that, when I was running the following command:
kubectl apply -f istio/envoyfilter-redis-proxy.yaml

I got the following error:
Error from server: error when creating "istio/envoyfilter-redis-proxy.yaml": admission webhook "validation.istio.io" denied the request: error decoding configuration: unknown value "REPLACE" for enum istio.networking.v1alpha3.EnvoyFilter_Patch_Operation

I am wondering if this resource has already been integrated into 1.8.1 or if I need to update the yaml file so as to apply it successfully?
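
(A minimal diagnostic sketch for this kind of validation error, assuming istiod runs in the usual istio-system namespace: the REPLACE value is rejected by the validating webhook, which is served by istiod, so it is the control-plane version that matters, not just the istioctl client.)

```bash
# Show both the istioctl client version and the running control-plane version;
# the EnvoyFilter REPLACE operation needs a 1.8+ control plane.
istioctl version

# Assumed extra check: look at the istiod image actually deployed, in case
# istioctl and the installed control plane are out of sync.
kubectl -n istio-system get deploy istiod \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```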

BTW, the reason I am trying your solution is that I found the bitnami/redis and bitnami/redis-cluster Helm charts don't work well when Istio is enabled, so I am looking at your Redis solution for my Istio environment.

Thank you very much!

@zhaohuabing
Owner

@drmy I just tried it on Istio 1.8.0 and it works. What's the output of `istioctl version` in your Istio installation?

@drmy
Author

drmy commented Jan 4, 2021

> @drmy I just tried it on Istio 1.8.0 and it works. What's the output of `istioctl version` in your Istio installation?

Hi @zhaohuabing ,

Thank you for your help.

I tested it on Istio 1.8.1 and got the following errors.
Maybe I haven't understood the theory. My redis namespace is not configured with automatic istio-proxy sidecar injection; do we need that? Actually, I tested it that way and it failed when creating the cluster...

And with Istio 1.8.1, do we still need to apply istio/envoyfilter-crd.yaml as described in the Envoy Redis Proxy instructions?

```
[root@kubemaster221 istio-redis-culster]# sed -i .bak "s/\${REDIS_VIP}/`kubectl get svc redis-cluster -n redis -o=jsonpath='{.spec.clusterIP}'`/" istio/envoyfilter-redis-proxy.yaml
sed: -e expression #1, char 1: unknown command: `.'
[root@kubemaster221 istio-redis-culster]# sed -i "s/\${REDIS_VIP}/`kubectl get svc redis-cluster -n redis -o=jsonpath='{.spec.clusterIP}'`/" istio/envoyfilter-redis-proxy.yaml
[root@kubemaster221 istio-redis-culster]# kubectl apply -f istio/envoyfilter-redis-proxy.yaml
envoyfilter.networking.istio.io/add-redis-proxy created
[root@kubemaster221 istio-redis-culster]# kubectl exec -it `kubectl get pod -l app=redis-client -n redis -o jsonpath="{.items[0].metadata.name}"` -c redis-client -n redis -- redis-cli -h redis-cluster
redis-cluster:6379> set a a
(error) MOVED 15495 10.244.7.61:6379
redis-cluster:6379> set b b
(error) MOVED 3300 10.244.5.56:6379
redis-cluster:6379> set c c
(error) MOVED 7365 10.244.1.49:6379
```
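
(A side note on the sed error above: `sed -i .bak` is BSD/macOS syntax, while GNU sed on Linux expects the suffix attached as `-i.bak`, which is why it reports "unknown command". A hedged sketch of an equivalent substitution that is also easy to verify, assuming the YAML still contains the literal `${REDIS_VIP}` placeholder:)

```bash
# Capture the ClusterIP of the redis-cluster Service, then substitute the
# ${REDIS_VIP} placeholder; GNU sed wants -i.bak with no space for a backup.
REDIS_VIP=$(kubectl get svc redis-cluster -n redis -o=jsonpath='{.spec.clusterIP}')
sed -i.bak "s/\${REDIS_VIP}/${REDIS_VIP}/" istio/envoyfilter-redis-proxy.yaml

# Sanity check: the VIP should now appear in the EnvoyFilter manifest.
grep -n "${REDIS_VIP}" istio/envoyfilter-redis-proxy.yaml
```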

@zhaohuabing
Owner

> > @drmy I just tried it on Istio 1.8.0 and it works. What's the output of `istioctl version` in your Istio installation?

> Hi @zhaohuabing ,

> Thank you for your help.

> I tested it on Istio 1.8.1 and got the following errors.
> Maybe I haven't understood the theory. My redis namespace is not configured with automatic istio-proxy sidecar injection; do we need that? Actually, I tested it that way and it failed when creating the cluster...

No, we don't need to inject a sidecar on the Redis server side, because the EnvoyFilter only adds a Redis proxy on the client side.

> And with Istio 1.8.1, do we still need to apply istio/envoyfilter-crd.yaml as described in the Envoy Redis Proxy instructions?

No, we don't need to apply the CRD, because the REPLACE operation has been supported since Istio 1.8.

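(To double-check the two answers above in a cluster, a hedged sketch; the app=redis-client label and the redis namespace are taken from the commands shown earlier in the thread:)

```bash
# Assumed check: the redis-client pod should include the istio-proxy sidecar,
# because the EnvoyFilter replaces the tcp_proxy filter on the client's sidecar.
kubectl get pod -l app=redis-client -n redis \
  -o jsonpath='{.items[0].spec.containers[*].name}'

# Quick overview: pods showing 2/2 ready typically run an app container plus
# the istio-proxy sidecar; the redis-cluster server pods do not need one.
kubectl get pods -n redis
```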

@drmy
Author

drmy commented Jan 5, 2021

Hi @zhaohuabing ,

I followed your guide and made more attempts in my other labs, but verification still fails with the errors below.
Maybe something is wrong with my environment; I will double-check. Thank you anyway. If you have any other hints, please let me know. Thank you again.

```
redis-cluster:6379> set c c
(error) MOVED 7365 10.244.5.18:6379

[root@kubemaster231 istio-redis-culster]# kubectl exec -it redis-cluster-0 -c redis -n redis -- redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:507
cluster_stats_messages_pong_sent:529
cluster_stats_messages_sent:1036
cluster_stats_messages_ping_received:524
cluster_stats_messages_pong_received:507
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:1036
```

@zhaohuabing
Owner

It can be tricky to configure Istio and Envoy manually to support Redis. I have created the Aeraki project to support layer-7 protocols in an Istio service mesh. Thrift, Dubbo, Kafka, and ZooKeeper have been implemented; Redis is on the way.
https://github.com/aeraki-framework/aeraki

@drmy
Author

drmy commented Jan 6, 2021

> It can be tricky to configure Istio and Envoy manually to support Redis. I have created the Aeraki project to support layer-7 protocols in an Istio service mesh. Thrift, Dubbo, Kafka, and ZooKeeper have been implemented; Redis is on the way.
> https://github.com/aeraki-framework/aeraki

Great! I will take a look at this project and learn from it.
Thank you so much for the info, it is very helpful, Huabing.

@rhzs

rhzs commented Jan 11, 2021

Hi @zhaohuabing, I got a similar error with Istio 1.8.1. I have not applied the CRD for the REPLACE op.

```
❯ kubectl exec -it `kubectl get pod -l app=redis-client -n redis -o jsonpath="{.items[0].metadata.name}"` -c redis-client -n redis -- redis-cli -h redis-cluster
redis-cluster:6379> set a a
OK
redis-cluster:6379> set b c
(error) MOVED 3300 10.56.1.108:6379
redis-cluster:6379> set b c
(error) MOVED 3300 10.56.1.108:6379
redis-cluster:6379> set b d
(error) MOVED 3300 10.56.1.108:6379
redis-cluster:6379> set b e
(error) MOVED 3300 10.56.1.108:6379
redis-cluster:6379> set b h
(error) MOVED 3300 10.56.1.108:6379
redis-cluster:6379> exit
```

(Screenshot attached: Screen Shot 2021-01-11 at 21 44 47)

@zhaohuabing
Owner

It seems that the default TCP filter has not been replaced by a Redis proxy. You can check the config dump of the client-side proxy.
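
(One possible way to do that, sketched under the assumption that the client pod label and namespace match the commands earlier in the thread: dump the client sidecar's listener configuration and check which network filter it carries.)

```bash
# Assumed example: the outbound listener for the Redis VIP should carry
# envoy.filters.network.redis_proxy instead of envoy.filters.network.tcp_proxy.
CLIENT_POD=$(kubectl get pod -l app=redis-client -n redis -o jsonpath="{.items[0].metadata.name}")
istioctl proxy-config listeners "${CLIENT_POD}" -n redis -o json \
  | grep -E 'envoy\.filters\.network\.(redis_proxy|tcp_proxy)'
```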

@rhzs

rhzs commented Jan 12, 2021

> It seems that the default TCP filter has not been replaced by a Redis proxy. You can check the config dump of the client-side proxy.

How do you check it?

@zhaohuabing
Owner

@drmy
Author

drmy commented Feb 27, 2021

Hi @zhaohuabing ,
Thank you very much for the update. I am going to give it a try in our deployment and send you my feedback.

@santinoncs

Hi,

Any ideas on how to get the REDIS_VIP without using kubectl?
I mean, the IP is dynamic, so isn't that risky?

@zhaohuabing
Owner

@santinoncs

Aeraki (https://github.com/aeraki-framework/aeraki) can help with that. It's a control-plane component that works alongside Istio, automatically generating the configuration for Redis proxies in the data plane. With Aeraki, you don't have to worry about keeping the VIP up to date in the Envoy configuration, because Aeraki takes care of that for you. Aeraki also provides user-friendly Kubernetes CRDs to help you manage Redis configuration such as routing and auth.

An example can be found here: https://github.com/aeraki-framework/aeraki/tree/master/test/e2e/redis

@jdafda

jdafda commented Aug 11, 2023

> It seems that the default TCP filter has not been replaced by a Redis proxy. You can check the config dump of the client-side proxy.

I am getting the same error. I also checked the config dump, and I can see the TCP proxy has been replaced.
