
no ['master_node'] value - script error #21

Open
Tradeforlife opened this issue Mar 1, 2024 · 4 comments

Comments

@Tradeforlife

As of 1/03/2024: Proxmox hosts fully patched, running in a Python venv inside an Ubuntu 23.x LXC, with the current build of Proxmox-load-balancer (1/03/2024). When I run the program after updating the config file with the correct info, I get:

(.env) root@prox-lb:/opt/Proxmox-load-balancer# python plb.py 
INFO | START ***Load-balancer!***
Traceback (most recent call last):
  File "/opt/Proxmox-load-balancer/plb.py", line 496, in <module>
    main()
  File "/opt/Proxmox-load-balancer/plb.py", line 466, in main
    cluster = Cluster(server_url)
              ^^^^^^^^^^^^^^^^^^^
  File "/opt/Proxmox-load-balancer/plb.py", line 96, in __init__
    self.cl_nodes: dict = self.cluster_hosts()  # All cluster nodes
                          ^^^^^^^^^^^^^^^^^^^^
  File "/opt/Proxmox-load-balancer/plb.py", line 169, in cluster_hosts
    self.master_node = rr.json()['data']['manager_status']['master_node']
                       ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
KeyError: 'master_node'

I edited plb.py to print the output of rr.json() inside the cluster_hosts(self) function, and it showed the output below. There is no master_node key. I'm not sure whether it's no longer required or something has changed in my cluster; either way, it's not working.

{'data': {'manager_status': {'node_status': {}}, 'quorum': {'quorate': '1', 'node': 'pve-quorum'}}}
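One defensive way to handle this (a hypothetical sketch, not the project's actual fix) is to read the key with chained dict.get() calls, so a cluster without HA yields None instead of raising KeyError. The function name get_master_node and the payload variable are illustrative only:

```python
# Hypothetical sketch: tolerate a missing 'master_node' key in the
# /cluster/status manager_status payload. The key is only present
# when an HA master has been elected.
def get_master_node(payload: dict):
    """Return the HA master node name, or None when no master is reported."""
    return payload.get("data", {}).get("manager_status", {}).get("master_node")

# The payload reported in this issue (cluster without HA):
payload = {"data": {"manager_status": {"node_status": {}},
                    "quorum": {"quorate": "1", "node": "pve-quorum"}}}

print(get_master_node(payload))  # prints: None
```

The caller can then decide what a None master means, instead of crashing in __init__.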

Cluster Information

root@pve-quorum:~# pvecm status
Cluster information
-------------------
Name:             cluster1
Config Version:   12
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Fri Mar  1 13:31:53 2024
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          0x00000002
Ring ID:          1.848
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   4
Highest expected: 4
Total votes:      4
Quorum:           3  
Flags:            Quorate 

Membership information
----------------------
    Nodeid      Votes    Qdevice Name
0x00000001          1  NA,NV,NMW 192.168.10.11
0x00000002          1         NR 192.168.10.30 (local)
0x00000003          1         NR 192.168.10.10
0x00000004          1         NR 192.168.10.12
0x00000000          0            Qdevice (votes 0)
root@pve-quorum:~# 
@Tradeforlife
Author

NOTE: I just commented out

self.master_node = rr.json()['data']['manager_status']['master_node']

and it seems to be running; I'm not sure whether this is required anywhere else.

@cvk98
Owner

cvk98 commented Mar 1, 2024

This is required if the script is installed on all nodes: in that case, only one instance should decide how to balance, and for that the HA master is selected. If you run a single instance, this mechanism is not needed at all.
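The mechanism cvk98 describes could be sketched as a simple guard (hypothetical code, not taken from plb.py; the function name should_balance is an assumption): each instance compares its own hostname against the reported HA master and only the matching one proceeds.

```python
# Hypothetical sketch: when the script runs on every node, only the
# instance on the elected HA master performs balancing.
import socket

def should_balance(master_node):
    """Balance only on the HA master node.

    master_node is the name reported by the cluster API, or None when
    HA is not configured (single-instance deployment).
    """
    if master_node is None:
        return True  # no election possible/needed with one instance
    return socket.gethostname() == master_node
```

With HA enabled, every node runs the script but only the master's instance acts; without HA there is no master to compare against, so a lone instance just proceeds.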

@cvk98
Owner

cvk98 commented Mar 1, 2024

Without HA, there is no master node in the cluster.

@Tradeforlife
Author

I uncommented the line and enabled HA and now it's working as expected, thanks.
