Optimize LB use #318
Hello, any feedback on this? Thanks.
We investigated the most efficient way to use load balancers in AWS. The idea was to use two load balancers, one internal and one external; for this, the decision was made to use 2 Network Load Balancers (NLB) instead of 4 Classic Load Balancers. Using NLBs requires installing the AWS Load Balancer Controller as a prerequisite: https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html

We installed the controller (a minimal installation sketch follows) and then modified the services, creating 2 new load balancers.
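For reference, a typical Helm-based installation following the AWS documentation linked above; the cluster name `my-eks-cluster` and the pre-created IAM service account are assumptions:

```bash
# Add the EKS charts repository and install the AWS Load Balancer Controller.
# Assumes an IAM role and service account for the controller already exist
# (e.g. created with eksctl) and that "my-eks-cluster" is the EKS cluster name.
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=my-eks-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
```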
External LB:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wazuh-external-lb
  namespace: wazuh
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-name: wazuh-external-lb
spec:
  type: LoadBalancer
  ports:
    - name: manager-worker-agents-events
      port: 1514
      targetPort: agents-events
    - name: manager-cluster
      port: 1516
      targetPort: cluster-port
    - name: manager-master-registration
      port: 1515
      targetPort: registration
    - name: manager-master-api
      port: 55000
      targetPort: api-port
    - name: dashboard
      port: 443
      targetPort: dashboard-port
  selector:
    lbtype: external
```

Internal LB:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wazuh-internal-lb
  namespace: wazuh
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: internal
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-name: wazuh-internal-lb
spec:
  type: LoadBalancer
  ports:
    - name: indexer-rest
      port: 9200
      targetPort: indexer-rest
    - name: indexer-nodes
      port: 9300
      targetPort: indexer-nodes
  selector:
    lbtype: internal
    app: wazuh-indexer
```
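After applying both Services, the controller should provision the two NLBs. A quick check (a sketch; the manifest file names are assumptions):

```bash
# Apply both Service manifests and wait for the controller to provision the NLBs:
kubectl apply -f wazuh-external-lb.yaml -f wazuh-internal-lb.yaml
# Each Service should show its NLB DNS name under EXTERNAL-IP once ready:
kubectl get svc -n wazuh wazuh-external-lb wazuh-internal-lb
```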
echo "create: node-key-temp.pem"
openssl genrsa -out node-key-temp.pem 2048
echo "create: node-key.pem"
openssl pkcs8 -inform PEM -outform PEM -in node-key-temp.pem -topk8 -nocrypt -v1 PBE-SHA1-3DES -out node-key.pem
echo "create: node.csr"
openssl req -days 3650 -new -key node-key.pem -out node.csr -subj "/C=US/L=California/O=Company/CN=wazuh-internal-lb"
echo "create: node.pem"
openssl x509 -req -days 3650 -in node.csr -CA root-ca.pem -CAkey root-ca-key.pem -CAcreateserial -sha256 -out node.pem
echo "* dashboard cert"
echo "create: dashboard-key-temp.pem"
openssl genrsa -out dashboard-key-temp.pem 2048
echo "create: dashboard-key.pem"
openssl pkcs8 -inform PEM -outform PEM -in dashboard-key-temp.pem -topk8 -nocrypt -v1 PBE-SHA1-3DES -out dashboard-key.pem
echo "create: dashboard.csr"
openssl req -days 3650 -new -key dashboard-key.pem -out dashboard.csr -subj "/C=US/L=California/O=Company/CN=wazuh-external-lb"
echo "create: dashboard.pem"
openssl x509 -req -days 3650 -in dashboard.csr -CA root-ca.pem -CAkey root-ca-key.pem -CAcreateserial -sha256 -out dashboard.pem
echo "* Filebeat cert"
echo "create: filebeat-key-temp.pem"
openssl genrsa -out filebeat-key-temp.pem 2048
echo "create: filebeat-key.pem"
openssl pkcs8 -inform PEM -outform PEM -in filebeat-key-temp.pem -topk8 -nocrypt -v1 PBE-SHA1-3DES -out filebeat-key.pem
echo "create: filebeat.csr"
openssl req -days 3650 -new -key filebeat-key.pem -out filebeat.csr -subj "/C=US/L=California/O=Company/CN=wazuh-external-lb"
echo "create: filebeat.pem"
openssl x509 -req -days 3650 -in filebeat.csr -CA root-ca.pem -CAkey root-ca-key.pem -CAcreateserial -sha256 -out filebeat.pem
```
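To confirm that each generated certificate carries the intended CN, the subjects can be printed (a quick check, not part of the original script):

```bash
# Print the subject of each generated certificate to verify its CN
# matches the load balancer the corresponding service is reached through:
openssl x509 -in node.pem -noout -subject       # expect CN=wazuh-internal-lb
openssl x509 -in dashboard.pem -noout -subject  # expect CN=wazuh-external-lb
openssl x509 -in filebeat.pem -noout -subject   # expect CN=wazuh-external-lb
```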
Additionally, matching labels were added to the Deployments and StatefulSets used for the stack deployment, so that each load balancer selects the right Pods:

```yaml
spec:
  selector:
    matchLabels:
      lbtype: external   # or: internal
```

```yaml
spec:
  template:
    metadata:
      labels:
        lbtype: external
```
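With the labels in place, it is easy to check which Pods each Service will target (an illustrative check using label selectors):

```bash
# List the Pods each load balancer Service will route traffic to:
kubectl get pods -n wazuh -l lbtype=external
kubectl get pods -n wazuh -l lbtype=internal
```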
All variables that pointed to the old services or pods were also updated, since those services are no longer available:

```yaml
env:
  - name: INDEXER_URL
    value: 'https://wazuh-internal-lb:9200'
  - name: WAZUH_API_URL
    value: https://wazuh-external-lb
```

and in the dashboard configuration:

```yaml
server.host: 0.0.0.0
server.port: 5601
opensearch.hosts: https://wazuh-internal-lb:9200
```

A separate version of this deployment is being analyzed, since these changes are not compatible with an on-premise deployment.
An attempt was made to connect an agent to the Wazuh stack. It registered correctly, but then it did not send any events; the agent log showed problems connecting to port 1514.

We then tried adding the Wazuh manager master node to the port 1514 target group. When the LB matched the agent with the master node it could communicate, but when it matched with any worker node the connection was lost and it failed again.
We verified that there was no connection between the worker nodes and the master; only the master appeared in the cluster node list:

```
root@wazuh-manager-master-0:/var/ossec/bin# ./cluster_control -l
NAME                  TYPE    VERSION  ADDRESS
wazuh-manager-master  master  4.7.2    wazuh-manager-master-0.wazuh-cluster.wazuh
```
Since the worker nodes could not connect correctly to the master, the headless ClusterIP service that previously existed on port 1516 was created again, so that the Wazuh manager nodes can communicate with each other:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wazuh-cluster
  namespace: wazuh
  labels:
    app: wazuh-manager
spec:
  selector:
    app: wazuh-manager
  ports:
    - name: cluster
      port: 1516
      targetPort: 1516
  clusterIP: None
```
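With this Service applied, the same cluster check as above can be re-run from outside the container; the worker nodes should now appear alongside the master (a sketch, assuming kubectl access to the wazuh namespace):

```bash
# List the Wazuh cluster nodes from the master pod; workers should now be listed:
kubectl exec -n wazuh wazuh-manager-master-0 -- /var/ossec/bin/cluster_control -l
```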
After deploying this service, the connection between the Wazuh manager cluster nodes began to work correctly and the agent was able to send events, as confirmed in both the Wazuh manager master and Wazuh agent logs.
Once the correct functioning of the Wazuh stack was verified, the deployment files were reorganized: the local-env directory keeps pointing to the manifests in the wazuh directory as they are after these changes; an eks directory also points to the wazuh directory, in case you want to keep the old version of the deployment; and a new eks-nlb directory points to a new wazuh-eks directory, which contains all of the changes described above.
@vcerenu Hi! Great job, thanks! Have you also considered using an Ingress or a similar way of publishing Wazuh? I'll explain my use case.

So I really wonder why we need dedicated LBs for Wazuh. Any considerations or ideas?
Sorry to hijack this PR, but wouldn't it be more efficient to create a (base/production) overlay free of any cloud-provider-specific features that would serve as a base, and build the eks/gke overlays from it?
@bmm-alc Hi! Totally agree with you. Thanks for your idea!
The Wazuh deployment on Kubernetes currently uses 4 load balancers, one for each deployed service. We need to investigate how to use fewer load balancer resources in the Wazuh deployment.