
[Help] Uploading a system image fails: save_fail: save image to storage s3 #18458

Closed
chenjacken opened this issue Oct 26, 2023 · 23 comments
Labels: question (Further information is requested)

Comments

@chenjacken

1. Version: v3.10.6, three control nodes deployed in HA, with Ceph block storage as the backend.
2. Uploading an image from the web UI fails to save several times:

saving=>save_fail: save image to storage s3: cloudprovider.UploadObject: bucket.UploadPart: UploadPart: Put "http://minio.onecloud-minio:9000/onecloud-images/6b3ae333-3562-4846-8859-03a3158b0ce6.qcow2?partNumber=104&uploadId=777bf813-87ea-41fd-ae98-996f78fb6dc7": readfrom tcp 10.40.136.20:40054->10.97.107.152:9000: write tcp 10.40.136.20:40054->10.97.107.152:9000: write: connection reset by peer

How would one generally troubleshoot this kind of sudden connection reset?
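As a first step (a minimal sketch, not specific to Cloudpods), it helps to pull the failing peer out of such error strings so you know which Service IP and pod to inspect. Here the log line is pasted inline; in practice you would pipe `kubectl logs` output through the same filter, and the kubectl commands in the comments are the obvious next checks:

```shell
# Extract the peer that reset the connection from the upload error above.
err='readfrom tcp 10.40.136.20:40054->10.97.107.152:9000: write: connection reset by peer'
peer=$(printf '%s\n' "$err" | grep -oE '[0-9.]+:9000' | tail -n1)
echo "$peer"    # the MinIO Service endpoint: 10.97.107.152:9000
# Then inspect that endpoint from inside the cluster, e.g.:
#   kubectl -n onecloud-minio get svc,endpoints,pods -o wide
#   kubectl -n onecloud-minio logs <minio-pod>   # look for restarts/OOM around the failure time
```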

@chenjacken added the question (Further information is requested) label on Oct 26, 2023
@chenjacken
Author

This error also shows up:

{
    "copy_from": "http://10.0.1.2:8000/download/HD.qcow2",
    "reason": {
        "__reason__": "saveImageFromStream: unexpected EOF",
        "__stage__": "OnImageImportComplete",
        "__status__": "ERROR"
    }
}

@chenjacken
Author

Deleting a disk from the recycle bin also errors out:

{
    "__reason__": "rbd 2023-10-27 01:21:48.864700 7f4278333700  0 monclient: hunting for new mon\n2023-10-27 01:21:48.878565 7f42939f7d80 -1 librbd: cannot obtain exclusive lock - not removing\n\rRemoving image: 0% complete...failed.\nrbd: error: image still has watchers\nThis means the image is still open or the client using it crashed. Try again after closing/unmapping it or waiting 30s for the crashed client to timeout.\n: exit status 16",
    "__stage__": "OnGuestDiskDeleteComplete",
    "__status__": "error"
}
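"image still has watchers" means some RBD client still holds the image open. A sketch of what to look for (the pool/image names and the sample watcher line below are hypothetical; `rbd status` would be run from the rook-ceph toolbox pod):

```shell
# Sample `rbd status` output pasted inline; in practice:
#   kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rbd status <pool>/<image>
status='Watchers:
        watcher=10.40.136.21:0/3075966579 client.14236 cookie=140446370329088'
printf '%s\n' "$status" | grep -oE 'watcher=[0-9.:/]+'
# A watcher left by a crashed client times out after ~30s; if it never goes
# away, the owning process (or node) is still alive and must be found first.
```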

@zexi
Member

zexi commented Oct 27, 2023

@chenjacken This looks like a problem inside your k8s cluster network; by default both MinIO and rook-ceph communicate over the k8s cluster-internal network.

@chenjacken
Author

> @chenjacken This looks like a problem inside your k8s cluster network; by default both MinIO and rook-ceph communicate over the k8s cluster-internal network.

Right, but I don't know how to troubleshoot it.

@zexi
Member

zexi commented Oct 27, 2023

> Right, but I don't know how to troubleshoot it.

@chenjacken There are some docs on troubleshooting network problems here: https://www.cloudpods.org/zh/docs/ops/k8s/dnserror/
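Alongside those docs, a quick way to test raw TCP reachability between the pieces in the error path (CoreDNS, keystone, the MinIO Service IP) is bash's built-in /dev/tcp, which needs no extra tools on the nodes. The IPs below are the ones quoted in this thread and stand in for whatever endpoint you are checking:

```shell
# Return 0 if a TCP connect to HOST PORT succeeds within 3 seconds.
tcp_ok() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Probe the endpoints that appear in the errors, from each control node:
for ep in "10.96.0.10 53" "10.97.107.152 9000"; do
  set -- $ep
  if tcp_ok "$1" "$2"; then echo "$1:$2 reachable"; else echo "$1:$2 UNREACHABLE"; fi
done
```

If one node reports UNREACHABLE while the others succeed, the problem is that node's routing/IPVS state rather than the service itself.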

@chenjacken
Author

chenjacken commented Oct 27, 2023

@zexi Thanks, I'll study that.

My environment is an HA deployment: the DB is dual-master HA with VIP 172.16.1.99, and the three control nodes share VIP 172.16.1.100. I then run into this network problem:

[root@master2 ~]# climc image-upload --format qcow2 --os-type Linux --os-arch x86_64 --standard CentOS7-Nginx ./Nginx.qcow2
1.12 GiB / 75.38 GiB [->_______________________________________________________________________________________________] 1.49% 6.36 MiB p/s 3m1s
Post "https://172.16.1.100:30292/v1/images": write tcp 172.16.1.100:54778->172.16.1.100:30292: write: connection timed out
{"error":{"class":"ClientError","code":499,"details":"Post \"https://172.16.1.100:30292/v1/images\": write tcp 172.16.1.100:54778->172.16.1.100:30292: write: connection timed out","request":{"headers":{"Content-Length":"80933879808","Content-Type":"application/octet-stream","User-Agent":"yunioncloud-go/201708","X-Auth-Token":"*","X-Image-Meta-Disk-Format":"qcow2","X-Image-Meta-Is_standard":"true","X-Image-Meta-Name":"CentOS7-Nginx","X-Image-Meta-Os_arch":"x86_64","X-Image-Meta-Property-Os_arch":"x86_64","X-Image-Meta-Property-Os_type":"Linux"},"method":"POST","url":"https://172.16.1.100:30292/v1/images"}}}
[root@master2 ~]# 

@chenjacken
Author

chenjacken commented Oct 28, 2023

Also, after rebooting control node 1, node 1 can no longer ping the control VIP (172.16.1.100) or the DB VIP (172.16.1.99), while the other two control nodes still can.
Network on control node 1:

[root@master1 ~]# ip addr show eno1
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 6c:0b:84:81:d2:ec brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.8/24 brd 172.16.1.255 scope global noprefixroute eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::5a47:be9a:91f5:757f/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

[root@master1 ~]# kubectl get pods -n onecloud
Unable to connect to the server: dial tcp 172.16.1.100:6443: connect: no route to host

[root@master1 ~]# ping 172.16.1.100
PING 172.16.1.100 (172.16.1.100) 56(84) bytes of data.
From 172.16.1.8 icmp_seq=1 Destination Host Unreachable
From 172.16.1.8 icmp_seq=2 Destination Host Unreachable
From 172.16.1.8 icmp_seq=3 Destination Host Unreachable
From 172.16.1.8 icmp_seq=4 Destination Host Unreachable
From 172.16.1.8 icmp_seq=5 Destination Host Unreachable
From 172.16.1.8 icmp_seq=6 Destination Host Unreachable
^C
--- 172.16.1.100 ping statistics ---
6 packets transmitted, 0 received, +6 errors, 100% packet loss, time 5093ms

Network on control node 2:

[root@master2 ~]# ip addr show eno1
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether 6c:0b:84:10:ff:08 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.99/32 scope global eno1
       valid_lft forever preferred_lft forever
    inet 172.16.1.100/32 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::16d6:c1b0:f2:aa4a/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

[root@master2 ~]# ping 172.16.1.99
PING 172.16.1.99 (172.16.1.99) 56(84) bytes of data.
64 bytes from 172.16.1.99: icmp_seq=1 ttl=64 time=0.059 ms
64 bytes from 172.16.1.99: icmp_seq=2 ttl=64 time=0.050 ms
^C
--- 172.16.1.99 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.050/0.054/0.059/0.008 ms

[root@master2 ~]# ping 172.16.1.100
PING 172.16.1.100 (172.16.1.100) 56(84) bytes of data.
64 bytes from 172.16.1.100: icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from 172.16.1.100: icmp_seq=2 ttl=64 time=0.032 ms
^C
--- 172.16.1.100 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.032/0.045/0.058/0.013 ms
[root@master2 ~]# 

@chenjacken
Author

[root@master1 ~]# kubectl logs default-region-7d6cc966bd-8tb5d -n onecloud --tail 1000 -f
[info 231029 01:24:02 loader.init.0(loader.go:45)] Loading cloud providers ...
[info 231029 01:24:02 options.ParseOptions(options.go:318)] Use configuration file: /etc/yunion/region.conf
[info 231029 01:24:02 options.ParseOptions(options.go:340)] Set log level to "info"
[info 2023-10-29 01:24:02 service.StartService(service.go:63)] Port V2 30888 is specified, use v2 port
[error 2023-10-29 01:26:53 auth.(*authManager).authAdmin(auth.go:254)] Admin auth failed: {"error":{"class":"ClientError","code":499,"details":"Fail to read body: read tcp 10.40.180.33:38968->10.96.234.0:30357: read: connection reset by peer","request":{"body":"{\"auth\":{\"context\":{\"source\":\"srv\"},\"identity\":{\"methods\":[\"password\"],\"password\":{\"user\":{\"domain\":...ult\"},\"name\":\"system\"}}}}","headers":{"Content-Length":"240","Content-Type":"application/json","User-Agent":"yunioncloud-go/201708"},"method":"POST","url":"https://default-keystone:30357/v3/auth/tokens"}}}
[fatal 2023-10-29 01:26:53 auth.AsyncInit(auth.go:433)] Auth manager init err: authAdmin: {"error":{"class":"ClientError","code":499,"details":"Fail to read body: read tcp 10.40.180.33:38968->10.96.234.0:30357: read: connection reset by peer","request":{"body":"{\"auth\":{\"context\":{\"source\":\"srv\"},\"identity\":{\"methods\":[\"password\"],\"password\":{\"user\":{\"domain\":...ult\"},\"name\":\"system\"}}}}","headers":{"Content-Length":"240","Content-Type":"application/json","User-Agent":"yunioncloud-go/201708"},"method":"POST","url":"https://default-keystone:30357/v3/auth/tokens"}}}
[root@master1 ~]# ipvsadm -Ln | grep -A 3 10.96.0.10
TCP  10.96.0.10:53 rr
  -> 10.40.137.85:53              Masq    1      0          0         
  -> 10.40.180.47:53              Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.40.137.85:9153            Masq    1      0          0         
  -> 10.40.180.47:9153            Masq    1      0          0         
TCP  10.96.2.188:30891 rr
--
UDP  10.96.0.10:53 rr
  -> 10.40.137.85:53              Masq    1      0          41        
  -> 10.40.180.47:53              Masq    1      0          41        
[root@master1 ~]# ip route | grep 10.40.137
blackhole 10.40.137.64/26 proto bird 
10.40.137.78 dev calib25ded1e444 scope link 
10.40.137.81 dev califadb39a69e4 scope link 
10.40.137.82 dev calic201a021d02 scope link 
10.40.137.83 dev calibbefd5685f0 scope link 
10.40.137.85 dev cali61762828116 scope link 
10.40.137.86 dev cali0f6c631a73d scope link 
10.40.137.90 dev cali1fdbb5c637a scope link 
[root@master1 ~]# ip route | grep 10.40.180
10.40.180.0/26 via 10.0.1.9 dev tunl0 proto bird onlink 
10.40.180.64/26 via 10.0.1.9 dev tunl0 proto bird onlink 
[root@master1 ~]# 

I don't know how to solve this kind of problem.

@chenjacken
Author

chenjacken commented Oct 29, 2023

When the upload error above occurs, the glance pod logs show:

[root@master1 ~]# kubectl -n onecloud logs default-glance-6f778dfdd-6n6lk --tail 1000 -f
[info 2023-10-29 13:45:58 appsrv.(*Application).ServeHTTP(appsrv.go:288)] xJmWJN1YMwyR5jd34S4dnvCufjk= 200 29ec76 GET /v1/scope-resources (172.16.1.99:53366:identity/cron-service) 0.74ms
[warning 2023-10-29 13:46:13 appsrv.do_worker_watchdog(workers_watchdog.go:64)] WorkerManager image_streaming_worker has been busy for 2 cycles...
[warning 2023-10-29 13:46:43 appsrv.do_worker_watchdog(workers_watchdog.go:64)] WorkerManager image_streaming_worker has been busy for 3 cycles...
[warning 2023-10-29 13:47:13 appsrv.do_worker_watchdog(workers_watchdog.go:64)] WorkerManager image_streaming_worker has been busy for 4 cycles...
[warning 2023-10-29 13:47:43 appsrv.do_worker_watchdog(workers_watchdog.go:64)] WorkerManager image_streaming_worker has been busy for 5 cycles...
[warning 2023-10-29 13:48:13 appsrv.do_worker_watchdog(workers_watchdog.go:64)] WorkerManager image_streaming_worker has been busy for 6 cycles...
[warning 2023-10-29 13:48:43 appsrv.do_worker_watchdog(workers_watchdog.go:64)] WorkerManager image_streaming_worker has been busy for 7 cycles...
[warning 2023-10-29 13:49:13 appsrv.do_worker_watchdog(workers_watchdog.go:64)] WorkerManager image_streaming_worker has been busy for 8 cycles...
[warning 2023-10-29 13:49:43 appsrv.do_worker_watchdog(workers_watchdog.go:64)] WorkerManager image_streaming_worker has been busy for 9 cycles...
[warning 2023-10-29 13:50:13 appsrv.do_worker_watchdog(workers_watchdog.go:64)] WorkerManager image_streaming_worker has been busy for 10 cycles...
[warning 2023-10-29 13:50:43 appsrv.do_worker_watchdog(workers_watchdog.go:64)] WorkerManager image_streaming_worker has been busy for 11 cycles...
[warning 2023-10-29 13:51:13 appsrv.do_worker_watchdog(workers_watchdog.go:64)] WorkerManager image_streaming_worker has been busy for 12 cycles...
[warning 2023-10-29 13:51:43 appsrv.do_worker_watchdog(workers_watchdog.go:64)] WorkerManager image_streaming_worker has been busy for 13 cycles...
[warning 2023-10-29 13:52:13 appsrv.do_worker_watchdog(workers_watchdog.go:64)] WorkerManager image_streaming_worker has been busy for 14 cycles...
[warning 2023-10-29 13:52:43 appsrv.do_worker_watchdog(workers_watchdog.go:64)] WorkerManager image_streaming_worker has been busy for 15 cycles...
[warning 2023-10-29 13:53:13 appsrv.do_worker_watchdog(workers_watchdog.go:64)] WorkerManager image_streaming_worker has been busy for 16 cycles...
error: http2: server sent GOAWAY and closed the connection; LastStreamID=87, ErrCode=NO_ERROR, debug=""

Then this error appears:

[info 2023-10-29 14:14:45 appsrv.(*Application).ServeHTTP(appsrv.go:288)] xJmWJN1YMwyR5jd34S4dnvCufjk= 200 0997ad-1c7534 GET /v1/images?is_guest_image=false&limit=50&scope=system&show_fail_reason=true (172.16.1.99:37504:apigateway) 31.63ms
[info 2023-10-29 14:14:46 appsrv.(*Application).ServeHTTP(appsrv.go:288)] xJmWJN1YMwyR5jd34S4dnvCufjk= 200 56725d-2c563d GET /v1/images/detail?is_guest_image=false&limit=50&scope=system&show_fail_reason=true (172.16.1.99:37504:apigateway) 18.88ms
[info 2023-10-29 14:14:48 appsrv.(*Application).ServeHTTP(appsrv.go:288)] xJmWJN1YMwyR5jd34S4dnvCufjk= 200 d70aea-b65ead GET /v1/images/statistics?details=true&is_guest_image=false&limit=50&scope=system&show_fail_reason=true (172.16.1.99:37504:apigateway) 1.34ms
[info 2023-10-29 14:15:44 appsrv.(*Application).ServeHTTP(appsrv.go:288)] xJmWJN1YMwyR5jd34S4dnvCufjk= 200 ad0b4e-ad87a9 GET /v1/images?id=d85c91d7-065a-4428-8865-be178383996b&scope=system (172.16.1.99:38492:apigateway) 20.18ms
Get "https://default-keystone:30357/v3/auth/tokens/invalid": dial tcp 10.96.234.0:30357: connect: connection refused
[error 2023-10-29 14:18:11 auth.(*authManager).startRefreshRevokeTokens(auth.go:182)] client.FetchInvalidTokens: jsonRequest: {"error":{"class":"ConnectRefusedError","code":499,"details":"Get \"https://default-keystone:30357/v3/auth/tokens/invalid\": dial tcp 10.96.234.0:30357: connect: connection refused","request":{"headers":{"User-Agent":"yunioncloud-go/201708","X-Auth-Token":"*"},"method":"GET","url":"https://default-keystone:30357/v3/auth/tokens/invalid"}}}
[info 2023-10-29 14:20:54 appsrv.(*Application).registerCleanShutdown.func1(appsrv.go:511)] Quit signal received!!! do cleanup...
[info 2023-10-29 14:20:54 appsrv.(*Application).waitCleanShutdown(appsrv.go:535)] Service stopped.
rpc error: code = DeadlineExceeded desc = context deadline exceeded

@chenjacken
Author

chenjacken commented Oct 29, 2023

The upload speed starts at several hundred MiB/s, slowly decays, and ends up at a dozen or so KiB/s.
172.16.1.100 is the control-plane VIP; could keepalived's handling of the VIP be the cause?

[root@master2 ~]# climc image-upload --format qcow2 --os-type Linux --os-arch x86_64 --standard JS-Nginx ./Nginx.qcow2 
5.32 GiB / 75.38 GiB [------>_________________________________________________________________________________________] 7.06% 531.65 MiB p/s ETA 2m14s
8.35 GiB / 75.38 GiB [---------->_____________________________________________________________________________________] 11.08% 532.59 MiB p/s ETA 2m8s
10.41 GiB / 75.38 GiB [------------>_________________________________________________________________________________] 13.81% 443.43 MiB p/s ETA 2m30s
12.80 GiB / 75.38 GiB [--------------->______________________________________________________________________________] 16.98% 402.79 MiB p/s ETA 2m39s
16.01 GiB / 75.38 GiB [------------------->__________________________________________________________________________] 21.24% 405.73 MiB p/s ETA 2m29s
19.16 GiB / 75.38 GiB [------------------------>______________________________________________________________________] 25.42% 466.32 MiB p/s ETA 2m3s
21.40 GiB / 75.38 GiB [-------------------------->___________________________________________________________________] 28.39% 314.04 MiB p/s ETA 2m55s
22.31 GiB / 75.38 GiB [--------------------------->__________________________________________________________________] 29.59% 124.05 MiB p/s ETA 7m18s
22.35 GiB / 75.38 GiB [--------------------------->__________________________________________________________________] 29.65% 26.73 MiB p/s ETA 33m51s
22.37 GiB / 75.38 GiB [--------------------------->_________________________________________________________________] 29.67% 8.38 MiB p/s ETA 1h47m54s
22.37 GiB / 75.38 GiB [-------------------------->_______________________________________________________________] 29.68% 981.98 KiB p/s ETA 15h43m16s
22.37 GiB / 75.38 GiB [----------------------------->____________________________________________________________________] 29.68% 148.11 MiB p/s 2m35s
Post "https://172.16.1.100:30292/v1/images": write tcp 172.16.1.100:57140->172.16.1.100:30292: write: connection timed out
{"error":{"class":"ClientError","code":499,"details":"Post \"https://172.16.1.100:30292/v1/images\": write tcp 172.16.1.100:57140->172.16.1.100:30292: write: connection timed out","request":{"headers":{"Content-Length":"80933879808","Content-Type":"application/octet-stream","User-Agent":"yunioncloud-go/201708","X-Auth-Token":"*","X-Image-Meta-Disk-Format":"qcow2","X-Image-Meta-Is_standard":"true","X-Image-Meta-Name":"JS-Nginx","X-Image-Meta-Os_arch":"x86_64","X-Image-Meta-Property-Os_arch":"x86_64","X-Image-Meta-Property-Os_type":"Linux"},"method":"POST","url":"https://172.16.1.100:30292/v1/images"}}}

The glance pod logs this error:

[info 2023-10-29 15:06:34 appsrv.(*Application).ServeHTTP(appsrv.go:288)] UdXm3TO7H48jkD_TczwEg1xL9Os= 200 c6ef27-eb4b95 HEAD /v1/images/3428ffc9-b4e3-4941-860f-d911032d8e5e?details=true&is_guest_image=false&scope=system&show_fail_reason=true (10.0.1.9:41422:apigateway) 5.96ms
[warning 2023-10-29 15:06:41 appsrv.do_worker_watchdog(workers_watchdog.go:64)] WorkerManager image_streaming_worker has been busy for 8 cycles...
[info 2023-10-29 15:06:45 appsrv.(*Application).ServeHTTP(appsrv.go:288)] UdXm3TO7H48jkD_TczwEg1xL9Os= 200 4e1716-0a4a15 HEAD /v1/images/3428ffc9-b4e3-4941-860f-d911032d8e5e?details=true&is_guest_image=false&scope=system&show_fail_reason=true (10.0.1.9:41630:apigateway) 5.23ms
[info 2023-10-29 15:06:56 appsrv.(*Application).ServeHTTP(appsrv.go:288)] UdXm3TO7H48jkD_TczwEg1xL9Os= 200 836104-5f3b9e HEAD /v1/images/3428ffc9-b4e3-4941-860f-d911032d8e5e?details=true&is_guest_image=false&scope=system&show_fail_reason=true (10.0.1.9:41808:apigateway) 18.52ms
[info 2023-10-29 15:07:07 appsrv.(*Application).ServeHTTP(appsrv.go:288)] UdXm3TO7H48jkD_TczwEg1xL9Os= 200 d84617-bb24b4 HEAD /v1/images/3428ffc9-b4e3-4941-860f-d911032d8e5e?details=true&is_guest_image=false&scope=system&show_fail_reason=true (10.0.1.9:41944:apigateway) 6.40ms
[warning 2023-10-29 15:07:11 appsrv.do_worker_watchdog(workers_watchdog.go:64)] WorkerManager image_streaming_worker has been busy for 9 cycles...
[info 2023-10-29 15:07:18 appsrv.(*Application).ServeHTTP(appsrv.go:288)] UdXm3TO7H48jkD_TczwEg1xL9Os= 200 c3bf06-70f881 HEAD /v1/images/3428ffc9-b4e3-4941-860f-d911032d8e5e?details=true&is_guest_image=false&scope=system&show_fail_reason=true (10.0.1.9:42156:apigateway) 6.35ms

error: read tcp 172.16.1.100:55460->172.16.1.100:6443: read: connection timed out

Errors from calico-kube-controllers:

[root@master1 ~]# kubectl -n kube-system logs calico-kube-controllers-696f98578f-p5r59 --tail 100 -f 
2023-10-29 15:28:27.624 [INFO][1] main.go 88: Loaded configuration from environment config=&config.Config{LogLevel:"info", ReconcilerPeriod:"5m", CompactionPeriod:"10m", EnabledControllers:"node", WorkloadEndpointWorkers:1, ProfileWorkers:1, PolicyWorkers:1, NodeWorkers:1, Kubeconfig:"", HealthEnabled:true, SyncNodeLabels:true, DatastoreType:"kubernetes"}
2023-10-29 15:28:27.625 [INFO][1] k8s.go 228: Using Calico IPAM
W1029 15:28:27.625819       1 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
2023-10-29 15:28:27.627 [INFO][1] main.go 109: Ensuring Calico datastore is initialized
2023-10-29 15:28:29.162 [INFO][1] main.go 183: Starting status report routine
2023-10-29 15:28:29.162 [INFO][1] main.go 368: Starting controller ControllerType="Node"
2023-10-29 15:28:29.162 [INFO][1] node_controller.go 133: Starting Node controller
2023-10-29 15:28:29.262 [INFO][1] node_controller.go 146: Node controller is now running
2023-10-29 15:28:42.552 [INFO][1] kdd.go 167: Node and IPAM data is in sync
2023-10-29 15:31:03.394 [ERROR][1] main.go 234: Failed to reach apiserver error=<nil>
2023-10-29 15:49:49.165 [ERROR][1] main.go 234: Failed to reach apiserver error=<nil>
2023-10-29 15:50:21.098 [ERROR][1] client.go 255: Error getting cluster information config ClusterInformation="default" error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
2023-10-29 15:50:21.098 [ERROR][1] main.go 203: Failed to verify datastore error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
2023-10-29 15:50:23.100 [ERROR][1] main.go 234: Failed to reach apiserver error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
2023-10-29 15:50:43.101 [ERROR][1] client.go 255: Error getting cluster information config ClusterInformation="default" error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
2023-10-29 15:50:43.101 [ERROR][1] main.go 203: Failed to verify datastore error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
2023-10-29 15:50:45.102 [ERROR][1] main.go 234: Failed to reach apiserver error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
2023-10-29 15:50:56.422 [ERROR][1] client.go 255: Error getting cluster information config ClusterInformation="default" error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: http2: server sent GOAWAY and closed the connection; LastStreamID=539, ErrCode=NO_ERROR, debug=""
2023-10-29 15:50:56.422 [ERROR][1] main.go 203: Failed to verify datastore error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: http2: server sent GOAWAY and closed the connection; LastStreamID=539, ErrCode=NO_ERROR, debug=""
E1029 15:50:56.423075       1 reflector.go:280] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:96: Failed to watch *v1.Node: Get https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=2098452&timeoutSeconds=431&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
2023-10-29 15:52:36.717 [ERROR][1] client.go 255: Error getting cluster information config ClusterInformation="default" error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
2023-10-29 15:52:36.717 [ERROR][1] main.go 203: Failed to verify datastore error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
2023-10-29 15:52:38.720 [ERROR][1] main.go 234: Failed to reach apiserver error=Get https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default: context deadline exceeded
2023-10-29 17:29:36.572 [ERROR][1] main.go 234: Failed to reach apiserver error=<nil>
2023-10-29 17:29:49.978 [ERROR][1] main.go 234: Failed to reach apiserver error=<nil>
2023-10-29 17:30:21.996 [ERROR][1] main.go 234: Failed to reach apiserver error=<nil>
2023-10-29 17:30:50.929 [ERROR][1] main.go 234: Failed to reach apiserver error=<nil>
2023-10-29 17:31:20.020 [ERROR][1] main.go 234: Failed to reach apiserver error=<nil>

2023-10-29 17:32:14.492 [ERROR][1] main.go 234: Failed to reach apiserver error=<nil>

@zexi
Member

zexi commented Oct 30, 2023

@chenjacken First check with docker ps | grep apiserver whether kube-apiserver has restarted. If it has not, this is most likely the keepalived VIP floating between nodes; use docker ps | grep keepalive to find the container id and then look at the keepalived logs.
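Keepalived logs a line at every VRRP state transition, so counting those lines in the container logs is a quick flap detector. The sample lines below are illustrative; in practice you would pipe the real logs through the same grep:

```shell
# In practice: docker logs <keepalived-container-id> 2>&1 | grep -cE 'Entering (MASTER|BACKUP|FAULT) STATE'
log='Oct 30 10:19:18 master1 Keepalived_vrrp: (VI_1) Entering FAULT STATE
Oct 30 10:19:25 master1 Keepalived_vrrp: (VI_1) Entering BACKUP STATE
Oct 30 10:19:30 master1 Keepalived_vrrp: (VI_1) Entering MASTER STATE'
printf '%s\n' "$log" | grep -cE 'Entering (MASTER|BACKUP|FAULT) STATE'
# More than a handful of transitions per day means the VIP is flapping.
```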

@zexi
Member

zexi commented Oct 30, 2023

@chenjacken Also check whether the image being uploaded is so large that free disk space drops below 5%; if it gets too low, it triggers k8s's eviction mechanism, which can also cause this.
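That check can be done per node with plain df (the exact eviction thresholds are kubelet settings; adjust the path to your docker/kubelet data directory):

```shell
# Print the free-space percentage of the filesystem the image upload lands on.
df -P /var/lib | awk 'NR==2 { gsub(/%/,"",$5); printf "free: %d%%\n", 100 - $5 }'
# kubectl also reports the eviction check result directly:
#   kubectl describe node <node> | grep -A1 DiskPressure
```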

@chenjacken
Author

chenjacken commented Oct 30, 2023

> @chenjacken Also check whether the image being uploaded is so large that free disk space drops below 5%; if it gets too low, it triggers k8s's eviction mechanism, which can also cause this.

The three control nodes are at about 40% disk usage, with roughly 200 GB still free. The image being uploaded is 76 GB.

@chenjacken
Author

The earlier docs may have been wrong: a bad keepalived configuration was causing the VIP to flap frequently.

Following the latest content at https://www.cloudpods.org/zh/docs/setup/db-ha/ I modified the keepalived config on the backup node and restarted keepalived; now testing and observing. I'll report back with results.

Thanks! @qiu

@chenjacken
Author

The keepalived configuration for control-plane VIP failover lives in /etc/kubernetes/manifests/keepalived.yaml:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: keepalived
    tier: control-plane
  name: keepalived
  namespace: kube-system
spec:
  containers:
  - command:
    - /container/tool/run
    env:
    - name: KEEPALIVED_PRIORITY
      value: "90"
    - name: KEEPALIVED_VIRTUAL_IPS
      value: '#PYTHON2BASH:[''172.16.1.100'']'
    - name: KEEPALIVED_STATE
      value: BACKUP
    - name: KEEPALIVED_PASSWORD
      value: 80ef5502
    - name: KEEPALIVED_ROUTER_ID
      value: "145"
    - name: KEEPALIVED_NODE_IP
      value: 172.16.1.10
    - name: KEEPALIVED_INTERFACE
      value: eno1
    image: registry.cn-beijing.aliyuncs.com/yunionio/keepalived:v2.0.25
    imagePullPolicy: IfNotPresent
    name: keepalived
    resources: {}
    securityContext:
      capabilities:
        add:
        - SYS_NICE
        - NET_ADMIN
        - NET_BROADCAST
        - NET_RAW
      privileged: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
status: {}

In particular:

- name: KEEPALIVED_INTERFACE
      value: eno1

eno1 has been bridged into br0; should KEEPALIVED_INTERFACE be configured as br0 instead?

2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether 6c:0b:84:bd:ec:cb brd ff:ff:ff:ff:ff:ff
    inet6 fe80::980:4901:f9d7:7f5/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 6c:0b:84:bd:ec:cc brd ff:ff:ff:ff:ff:ff
4: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether 00:1b:21:bc:71:6e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::2bbc:40a2:9419:fb6f/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
12: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 6c:0b:84:bd:ec:cb brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.10/24 brd 172.16.1.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::6e0b:84ff:febd:eccb/64 scope link 
       valid_lft forever preferred_lft forever
16: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 00:1b:21:bc:71:6e brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.10/24 brd 10.0.1.255 scope global br1
       valid_lft forever preferred_lft forever
    inet6 fe80::21b:21ff:febc:716e/64 scope link 
       valid_lft forever preferred_lft forever
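
For reference, keepalived advertises VRRP on KEEPALIVED_INTERFACE, and the node IP it announces has to live on that interface. The ip addr output above shows 172.16.1.10 now on br0 while eno1 carries no IPv4 address, so a sketch of the changed env entries would be (an assumption based on that output, not a confirmed answer from the maintainers):

```yaml
# Sketch only: put the VIP on the interface that owns the node IP.
- name: KEEPALIVED_NODE_IP
  value: 172.16.1.10
- name: KEEPALIVED_INTERFACE
  value: br0        # was eno1, which no longer carries an IPv4 address
```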

keepalived-master3 in the kube-system namespace crashes with CreateContainerError; its logs show:

[root@master1 ~]# kubectl -n kube-system logs keepalived-master3 --tail 100 -f 
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-10-30 10:19:09] https://172.16.1.10:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-10-30 10:19:12] https://172.16.1.10:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-10-30 10:19:15] https://172.16.1.10:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-10-30 10:19:18] https://172.16.1.10:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
Mon Oct 30 10:19:18 2023: VRRP_Script(check_kube) failed (exited with status 1)
Mon Oct 30 10:19:18 2023: (VI_1) Entering FAULT STATE
Fault, what ?
Mon Oct 30 10:19:18 2023: Stopping
Unknown state
Mon Oct 30 10:19:19 2023: Stopped - used 0.091127 user time, 0.288940 system time
Mon Oct 30 10:19:19 2023: Stopped Keepalived v2.0.20 (01/22,2020)
***  INFO   | 2023-10-30 10:19:19 | /container/run/process/keepalived/run exited with status 0
***  INFO   | 2023-10-30 10:19:19 | Running /container/run/process/keepalived/finish...
***  INFO   | 2023-10-30 10:19:19 | Killing all processes...
[root@master1 ~]# 

When the error occurs, glance becomes unreachable:

[root@master1 ~]# kubectl -n onecloud logs default-glance-6f778dfdd-j5zp2 --tail 100 -f 

Unable to retrieve container logs for docker://af5c9024433314d71a5af70a6a37fdbf83898ca1fe8010b129a83056d740768a
and after that it cannot be reached again.

This glance pod runs on the master3 node, the same node as keepalived-master3; `docker ps` shows the glance container is no longer up.

@chenjacken
Copy link
Author

The earlier documentation may have been at fault: a bad keepalived configuration caused the VIP to drift frequently, which triggered these problems.

Following the latest content at https://www.cloudpods.org/zh/docs/setup/db-ha/, I modified the keepalived configuration on the backup node and restarted keepalived. I am testing and observing now, and will report back with results.

Thank you! @qiu

This method has resolved the instability where, in the dual-master database setup, the database VIP managed by keepalived would drift. Note that the keepalived configurations on the master and backup nodes are different.
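I do not have the exact snippet from that doc at hand; as I understand it, the key point is an asymmetric pair of configurations like the sketch below. The interface name, router ID, and VIP are placeholders, not values from the doc:

```
# Sketch only. Both nodes start as BACKUP with nopreempt so the VIP
# does not flap back when a recovered node rejoins; the two nodes
# differ only in priority. (keepalived requires state BACKUP for
# nopreempt to take effect.)
vrrp_instance DB_VIP {
    state BACKUP          # on both nodes
    nopreempt
    interface eno1        # placeholder; see the br0 discussion below
    virtual_router_id 51
    priority 150          # use a lower value, e.g. 100, on the other node
    advert_int 1
    virtual_ipaddress {
        172.16.1.101
    }
}
```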

@swordqiu
Copy link
Member

The keepalived configuration for control-plane VIP failover is the static-pod manifest at /etc/kubernetes/manifests/keepalived.yaml:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: keepalived
    tier: control-plane
  name: keepalived
  namespace: kube-system
spec:
  containers:
  - command:
    - /container/tool/run
    env:
    - name: KEEPALIVED_PRIORITY
      value: "90"
    - name: KEEPALIVED_VIRTUAL_IPS
      value: '#PYTHON2BASH:[''172.16.1.100'']'
    - name: KEEPALIVED_STATE
      value: BACKUP
    - name: KEEPALIVED_PASSWORD
      value: 80ef5502
    - name: KEEPALIVED_ROUTER_ID
      value: "145"
    - name: KEEPALIVED_NODE_IP
      value: 172.16.1.10
    - name: KEEPALIVED_INTERFACE
      value: eno1
    image: registry.cn-beijing.aliyuncs.com/yunionio/keepalived:v2.0.25
    imagePullPolicy: IfNotPresent
    name: keepalived
    resources: {}
    securityContext:
      capabilities:
        add:
        - SYS_NICE
        - NET_ADMIN
        - NET_BROADCAST
        - NET_RAW
      privileged: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
status: {}

In it:

- name: KEEPALIVED_INTERFACE
      value: eno1

eno1 has been bridged into br0; should KEEPALIVED_INTERFACE be set to br0?

2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether 6c:0b:84:bd:ec:cb brd ff:ff:ff:ff:ff:ff
    inet6 fe80::980:4901:f9d7:7f5/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 6c:0b:84:bd:ec:cc brd ff:ff:ff:ff:ff:ff
4: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP group default qlen 1000
    link/ether 00:1b:21:bc:71:6e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::2bbc:40a2:9419:fb6f/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
12: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 6c:0b:84:bd:ec:cb brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.10/24 brd 172.16.1.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::6e0b:84ff:febd:eccb/64 scope link 
       valid_lft forever preferred_lft forever
16: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 00:1b:21:bc:71:6e brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.10/24 brd 10.0.1.255 scope global br1
       valid_lft forever preferred_lft forever
    inet6 fe80::21b:21ff:febc:716e/64 scope link 
       valid_lft forever preferred_lft forever

The keepalived-master3 pod in kube-system crashes with CreateContainerError; its log and the resulting glance outage are the same as quoted above.

The content of /etc/kubernetes/manifests/keepalived.yaml is correct; a script inside the keepalived container automatically detects that eno1 has been replaced by br0.

From the logs this looks like VIP drift on master3. Check whether the kube-apiserver logs on master3 are normal.

@chenjacken
Copy link
Author

chenjacken commented Oct 31, 2023

The database keepalived configuration at https://www.cloudpods.org/zh/docs/setup/db-ha/ is still not rigorous enough: setting 'interface $DB_NETIF' to either 'eno1' or 'br0' causes problems:
1. With 'br0': right after the server boots, br0 does not exist yet, so keepalived errors out.
2. With 'eno1': once the server is running and br0 exists, restarting keepalived also errors out.

Would adding another script to detect the change help?
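One way to sidestep the boot-time race would be to resolve the interface at startup instead of hard-coding it in keepalived.conf. A minimal sketch; the helper name and the br0/eno1 preference order are my assumptions, not anything from the doc:

```shell
#!/bin/sh
# Hypothetical startup helper: pick the first interface that actually
# exists, preferring the bridge. Checking /sys/class/net avoids
# depending on the iproute2 tools being installed.
pick_iface() {
    for i in "$@"; do
        [ -e "/sys/class/net/$i" ] && { echo "$i"; return 0; }
    done
    return 1
}

# Prefer the bridge once it exists, fall back to the physical NIC,
# and finally to loopback so the demo also runs where br0/eno1 are absent.
DB_NETIF=$(pick_iface br0 eno1 lo)
echo "using interface: $DB_NETIF"
```

The resolved value could then be substituted into the 'interface' line before keepalived starts, so the same config works both at boot (eno1 only) and after the bridge comes up (br0).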


As for whether the kube-apiserver logs on master3 are normal: I will run more tests today and report back. Thanks.

@chenjacken
Copy link
Author

chenjacken commented Nov 1, 2023

While running climc image-upload, I monitored all three control nodes. I am pasting the logs from node 1 and node 2 below; please take a look.
@zexi @swordqiu
Thanks.

1. Control node 1: excerpt of the kube-apiserver log

I1101 15:16:48.381817       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.381705       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.381782       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:48.381855       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:48.381875       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.381856       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.381781       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.381819       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1101 15:16:48.381825       1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
W1101 15:16:48.381894       1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
I1101 15:16:48.381795       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.381943       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.381943       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.381878       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.381896       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.381981       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1101 15:16:48.381928       1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
I1101 15:16:48.381960       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.381934       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.381963       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.382017       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
W1101 15:16:48.382017       1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
I1101 15:16:48.382610       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1101 15:16:48.382634       1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
I1101 15:16:48.382637       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1101 15:16:48.382676       1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
W1101 15:16:48.382717       1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
W1101 15:16:48.382786       1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
I1101 15:16:48.389545       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.390020       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.390053       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.390072       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.390087       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.390117       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.390603       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.391339       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:48.392825       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
E1101 15:16:49.566818       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:49.567023       1 trace.go:81] Trace[1803922500]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/master1" (started: 2023-11-01 15:16:45.566919765 +0000 UTC m=+143633.597398428) (total time: 4.000076825s):
Trace[1803922500]: [4.000076825s] [4.000036198s] END
E1101 15:16:49.629904       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:49.630218       1 trace.go:81] Trace[1086833991]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node5" (started: 2023-11-01 15:16:45.629930157 +0000 UTC m=+143633.660408677) (total time: 4.000254749s):
Trace[1086833991]: [4.000254749s] [4.000228622s] END
E1101 15:16:49.670499       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:49.670662       1 trace.go:81] Trace[288931599]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node9" (started: 2023-11-01 15:16:45.670706795 +0000 UTC m=+143633.701185259) (total time: 3.999934706s):
Trace[288931599]: [3.999934706s] [3.999888279s] END
E1101 15:16:49.732118       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:49.732394       1 trace.go:81] Trace[1657248077]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node8" (started: 2023-11-01 15:16:45.732304319 +0000 UTC m=+143633.762783092) (total time: 4.000066842s):
Trace[1657248077]: [4.000066842s] [4.00003139s] END
I1101 15:16:49.956059       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:49.956095       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:49.956144       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:49.956166       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:49.963398       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
E1101 15:16:50.199475       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:50.199771       1 trace.go:81] Trace[168685972]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node6" (started: 2023-11-01 15:16:46.199411815 +0000 UTC m=+143634.229890279) (total time: 4.000334081s):
Trace[168685972]: [4.000334081s] [4.000304892s] END
I1101 15:16:50.256802       1 trace.go:81] Trace[1090463084]: "GuaranteedUpdate etcd3: *coordination.Lease" (started: 2023-11-01 15:16:46.256941436 +0000 UTC m=+143634.287419907) (total time: 3.999837187s):
Trace[1090463084]: [3.999837187s] [3.999656834s] END
E1101 15:16:50.256834       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:50.257024       1 trace.go:81] Trace[1482572531]: "Update /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/master3" (started: 2023-11-01 15:16:46.256833514 +0000 UTC m=+143634.287312027) (total time: 4.000173321s):
Trace[1482572531]: [4.000173321s] [4.000101434s] END
E1101 15:16:50.257177       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:50.258098       1 trace.go:81] Trace[1500453492]: "Create /api/v1/namespaces/kube-system/events" (started: 2023-11-01 15:16:48.260431562 +0000 UTC m=+143636.290910369) (total time: 1.997647188s):
Trace[1500453492]: [1.997647188s] [1.997470661s] END
E1101 15:16:50.531652       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:50.531817       1 trace.go:81] Trace[955465454]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node4" (started: 2023-11-01 15:16:46.531780435 +0000 UTC m=+143634.562259118) (total time: 4.000019945s):
Trace[955465454]: [4.000019945s] [3.999993264s] END
E1101 15:16:52.085929       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:52.086161       1 trace.go:81] Trace[1211665504]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/master2" (started: 2023-11-01 15:16:48.086045686 +0000 UTC m=+143636.116524167) (total time: 4.000087168s):
Trace[1211665504]: [4.000087168s] [4.000046162s] END
E1101 15:16:52.677435       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:52.677711       1 trace.go:81] Trace[62772104]: "Get /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2023-11-01 15:16:42.677728162 +0000 UTC m=+143630.708206780) (total time: 9.999950714s):
Trace[62772104]: [9.999950714s] [9.99991056s] END
I1101 15:16:52.979266       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:52.979302       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:52.979374       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:52.979395       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:52.987950       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
E1101 15:16:53.600620       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:53.600800       1 trace.go:81] Trace[2023595142]: "Get /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2023-11-01 15:16:43.600847568 +0000 UTC m=+143631.631326101) (total time: 9.999932496s):
Trace[2023595142]: [9.999932496s] [9.999898077s] END
E1101 15:16:53.692974       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:53.693126       1 trace.go:81] Trace[1002135711]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2023-11-01 15:16:43.69315653 +0000 UTC m=+143631.723634994) (total time: 9.999954548s):
Trace[1002135711]: [9.999954548s] [9.999922874s] END
E1101 15:16:53.767255       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:53.767422       1 trace.go:81] Trace[1338814913]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/master1" (started: 2023-11-01 15:16:49.767466926 +0000 UTC m=+143637.797945610) (total time: 3.99993765s):
Trace[1338814913]: [3.99993765s] [3.999908543s] END
E1101 15:16:53.800928       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:53.801071       1 trace.go:81] Trace[1968777419]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2023-11-01 15:16:43.801143919 +0000 UTC m=+143631.831622527) (total time: 9.999909372s):
Trace[1968777419]: [9.999909372s] [9.999876222s] END
I1101 15:16:53.821708       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:53.821757       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:53.821837       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:53.821859       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:53.821869       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:53.828826       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
E1101 15:16:53.830350       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:53.830489       1 trace.go:81] Trace[1618450203]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node5" (started: 2023-11-01 15:16:49.830659883 +0000 UTC m=+143637.861138569) (total time: 3.999812427s):
Trace[1618450203]: [3.999812427s] [3.999768736s] END
I1101 15:16:53.864492       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:53.864522       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:53.864553       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:53.864573       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:53.864586       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:53.871463       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
E1101 15:16:53.873252       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:53.873410       1 trace.go:81] Trace[1080500551]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node9" (started: 2023-11-01 15:16:49.873461233 +0000 UTC m=+143637.903939953) (total time: 3.999931925s):
Trace[1080500551]: [3.999931925s] [3.999885825s] END
I1101 15:16:53.875506       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:53.875522       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:53.875533       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:53.875543       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:53.875561       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:53.875578       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:53.875589       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:53.883878       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:53.885390       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
E1101 15:16:53.932505       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:53.932645       1 trace.go:81] Trace[748142239]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node8" (started: 2023-11-01 15:16:49.93299211 +0000 UTC m=+143637.963470754) (total time: 3.999636666s):
Trace[748142239]: [3.999636666s] [3.99959068s] END
I1101 15:16:54.007540       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:54.007587       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:54.007661       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:54.012192       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:54.012229       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:54.012248       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:54.017632       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:54.019138       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:54.048947       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:54.048973       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:54.049009       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:54.049053       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:54.049079       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:54.055921       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
E1101 15:16:54.067200       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:54.067355       1 trace.go:81] Trace[1430368795]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2023-11-01 15:16:44.067865943 +0000 UTC m=+143632.098344534) (total time: 9.999461972s):
Trace[1430368795]: [9.999461972s] [9.999397897s] END
I1101 15:16:54.138601       1 trace.go:81] Trace[1135512896]: "List etcd3: key=/minions, resourceVersion=, limit: 0, continue: " (started: 2023-11-01 15:16:42.475625299 +0000 UTC m=+143630.506103767) (total time: 11.662957828s):
Trace[1135512896]: [11.662957828s] [11.662957828s] END
E1101 15:16:54.138621       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
E1101 15:16:54.138705       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:54.138745       1 trace.go:81] Trace[380115484]: "List etcd3: key=/jobs, resourceVersion=, limit: 500, continue: " (started: 2023-11-01 15:16:49.315976721 +0000 UTC m=+143637.346455187) (total time: 4.822745505s):
Trace[380115484]: [4.822745505s] [4.822745505s] END
E1101 15:16:54.138768       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:54.138773       1 trace.go:81] Trace[785233298]: "List /api/v1/nodes" (started: 2023-11-01 15:16:42.475606912 +0000 UTC m=+143630.506085376) (total time: 11.66314833s):
Trace[785233298]: [11.66314833s] [11.663138397s] END
E1101 15:16:54.138775       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:54.139891       1 trace.go:81] Trace[115282746]: "Get /api/v1/namespaces/local-path-storage/endpoints/rancher.io-local-path" (started: 2023-11-01 15:16:44.134363495 +0000 UTC m=+143632.164842334) (total time: 10.005508238s):
Trace[115282746]: [10.005508238s] [10.005492462s] END
I1101 15:16:54.140993       1 trace.go:81] Trace[776412740]: "List /apis/batch/v1/jobs" (started: 2023-11-01 15:16:49.315903679 +0000 UTC m=+143637.346382143) (total time: 4.825073611s):
Trace[776412740]: [4.825073611s] [4.825009814s] END
I1101 15:16:54.257503       1 trace.go:81] Trace[350469701]: "GuaranteedUpdate etcd3: *coordination.Lease" (started: 2023-11-01 15:16:50.264547005 +0000 UTC m=+143638.295025472) (total time: 3.992936365s):
Trace[350469701]: [3.992936365s] [3.992912625s] END
E1101 15:16:54.257527       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:54.257740       1 trace.go:81] Trace[284730297]: "Update /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/master3" (started: 2023-11-01 15:16:50.264437021 +0000 UTC m=+143638.294915561) (total time: 3.993280556s):
Trace[284730297]: [3.993280556s] [3.993207642s] END
I1101 15:16:54.322666       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:54.322695       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:54.322744       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:54.329741       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
E1101 15:16:54.401563       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:54.401731       1 trace.go:81] Trace[390027903]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node6" (started: 2023-11-01 15:16:50.401598221 +0000 UTC m=+143638.432076853) (total time: 4.0001149s):
Trace[390027903]: [4.0001149s] [4.000092211s] END
I1101 15:16:54.765131       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:54.765191       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:54.765245       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:54.765284       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:54.772510       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
E1101 15:16:54.932398       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:54.932690       1 trace.go:81] Trace[1964898314]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node4" (started: 2023-11-01 15:16:50.932556424 +0000 UTC m=+143638.963035092) (total time: 4.0001084s):
Trace[1964898314]: [4.0001084s] [4.000063639s] END
E1101 15:16:54.964316       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:54.964563       1 trace.go:81] Trace[479061486]: "Get /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2023-11-01 15:16:44.964381182 +0000 UTC m=+143632.994859732) (total time: 10.000154997s):
Trace[479061486]: [10.000154997s] [10.000125645s] END
I1101 15:16:55.111411       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:55.111538       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:55.111632       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:55.111652       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:55.111663       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:55.118892       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:55.304567       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:55.304683       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:55.304726       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:55.304748       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:55.311937       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
E1101 15:16:55.318688       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:16:55.318941       1 trace.go:81] Trace[1884173981]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/rook-ceph-cephfs-csi-ceph-com" (started: 2023-11-01 15:16:45.318221022 +0000 UTC m=+143633.348699616) (total time: 10.000690108s):
Trace[1884173981]: [10.000690108s] [10.000682976s] END
I1101 15:16:55.381511       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:55.381540       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:55.381611       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:55.381638       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:55.381765       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:55.381802       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:55.381845       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:55.382916       1 trace.go:81] Trace[1345701325]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-cephfs-csi-ceph-com" (started: 2023-11-01 15:16:46.338325877 +0000 UTC m=+143634.368804517) (total time: 9.04456373s):
Trace[1345701325]: [9.044465423s] [9.044456047s] About to write a response
I1101 15:16:55.383152       1 trace.go:81] Trace[491728791]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/rook-ceph-cephfs-csi-ceph-com" (started: 2023-11-01 15:16:45.241061149 +0000 UTC m=+143633.271539759) (total time: 10.142048166s):
Trace[491728791]: [10.141940717s] [10.141932941s] About to write a response
I1101 15:16:55.383153       1 trace.go:81] Trace[1318111093]: "Get /api/v1/namespaces/kube-system/pods/kube-scheduler-master1" (started: 2023-11-01 15:16:52.860904899 +0000 UTC m=+143640.891383504) (total time: 2.522219423s):
Trace[1318111093]: [2.522093793s] [2.522084829s] About to write a response
I1101 15:16:55.383273       1 trace.go:81] Trace[1704216263]: "GuaranteedUpdate etcd3: *core.Event" (started: 2023-11-01 15:16:45.113398175 +0000 UTC m=+143633.143876639) (total time: 10.269844604s):
Trace[1704216263]: [10.269829058s] [10.269829058s] initial value restored
I1101 15:16:55.383385       1 trace.go:81] Trace[1483848562]: "Patch /api/v1/namespaces/kube-system/events/kube-apiserver-master1.1793113e9fa3e8f9" (started: 2023-11-01 15:16:45.113308996 +0000 UTC m=+143633.143787543) (total time: 10.270032309s):
Trace[1483848562]: [10.269922332s] [10.269893248s] About to apply patch
I1101 15:16:55.383422       1 trace.go:81] Trace[1353115182]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/master2" (started: 2023-11-01 15:16:52.486749191 +0000 UTC m=+143640.517227655) (total time: 2.896640904s):
Trace[1353115182]: [2.896552891s] [2.896502638s] About to write a response
I1101 15:16:55.383585       1 trace.go:81] Trace[564829811]: "Get /apis/crd.projectcalico.org/v1/clusterinformations/default" (started: 2023-11-01 15:16:48.694199557 +0000 UTC m=+143636.724678148) (total time: 6.689363515s):
Trace[564829811]: [6.689231321s] [6.689212716s] About to write a response
I1101 15:16:55.383678       1 trace.go:81] Trace[2114031748]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node9" (started: 2023-11-01 15:16:54.273933421 +0000 UTC m=+143642.304412052) (total time: 1.109710326s):
Trace[2114031748]: [1.109663055s] [1.109635591s] About to write a response
I1101 15:16:55.383711       1 trace.go:81] Trace[1340830750]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node6" (started: 2023-11-01 15:16:54.802251687 +0000 UTC m=+143642.832730405) (total time: 581.431492ms):
Trace[1340830750]: [581.380824ms] [581.350013ms] About to write a response
I1101 15:16:55.383760       1 trace.go:81] Trace[773460989]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:16:45.241792184 +0000 UTC m=+143633.272270787) (total time: 10.141925408s):
Trace[773460989]: [10.141865607s] [10.14185747s] About to write a response
I1101 15:16:55.383768       1 trace.go:81] Trace[2032217017]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node5" (started: 2023-11-01 15:16:54.231068095 +0000 UTC m=+143642.261546558) (total time: 1.152652052s):
Trace[2032217017]: [1.152608149s] [1.152581613s] About to write a response
I1101 15:16:55.383713       1 trace.go:81] Trace[359565127]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node8" (started: 2023-11-01 15:16:54.333140265 +0000 UTC m=+143642.363618898) (total time: 1.050540228s):
Trace[359565127]: [1.050492593s] [1.050465871s] About to write a response
I1101 15:16:55.383838       1 trace.go:81] Trace[378549365]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/master1" (started: 2023-11-01 15:16:54.167933124 +0000 UTC m=+143642.198411672) (total time: 1.215872163s):
Trace[378549365]: [1.215831242s] [1.215799471s] About to write a response
I1101 15:16:55.383957       1 trace.go:81] Trace[526141007]: "GuaranteedUpdate etcd3: *coordination.Lease" (started: 2023-11-01 15:16:54.264073381 +0000 UTC m=+143642.294551867) (total time: 1.119857309s):
Trace[526141007]: [1.119857309s] [1.119821608s] END
I1101 15:16:55.383996       1 trace.go:81] Trace[1315873277]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-cephfs-csi-ceph-com" (started: 2023-11-01 15:16:46.127178149 +0000 UTC m=+143634.157656853) (total time: 9.256793374s):
Trace[1315873277]: [9.256721432s] [9.25670737s] About to write a response
I1101 15:16:55.384033       1 trace.go:81] Trace[808509936]: "Update /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/master3" (started: 2023-11-01 15:16:54.263834973 +0000 UTC m=+143642.294313461) (total time: 1.120175913s):
Trace[808509936]: [1.120175913s] [1.120001611s] END
I1101 15:16:55.384086       1 trace.go:81] Trace[1987763367]: "Get /api/v1/namespaces/default" (started: 2023-11-01 15:16:49.204925252 +0000 UTC m=+143637.235403716) (total time: 6.179140817s):
Trace[1987763367]: [6.179105424s] [6.179095674s] About to write a response
I1101 15:16:55.392855       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:55.392882       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:55.397102       1 trace.go:81] Trace[1731010830]: "List etcd3: key=/pods/onecloud, resourceVersion=, limit: 0, continue: " (started: 2023-11-01 15:16:46.605047496 +0000 UTC m=+143634.635525959) (total time: 8.79203287s):
Trace[1731010830]: [8.79203287s] [8.79203287s] END
I1101 15:16:55.397631       1 trace.go:81] Trace[994987652]: "List /api/v1/namespaces/onecloud/pods" (started: 2023-11-01 15:16:46.604977358 +0000 UTC m=+143634.635455937) (total time: 8.792634252s):
Trace[994987652]: [8.792138681s] [8.792075911s] Listing from storage done
I1101 15:16:55.415137       1 trace.go:81] Trace[1890958081]: "GuaranteedUpdate etcd3: *core.Event" (started: 2023-11-01 15:16:44.909861058 +0000 UTC m=+143632.940339524) (total time: 10.505246749s):
Trace[1890958081]: [10.473820396s] [10.473820396s] initial value restored
I1101 15:16:55.415242       1 trace.go:81] Trace[399581218]: "Patch /api/v1/namespaces/kube-system/events/kube-apiserver-master2.1793063aff5863b1" (started: 2023-11-01 15:16:44.909791382 +0000 UTC m=+143632.940269977) (total time: 10.505423679s):
Trace[399581218]: [10.473893274s] [10.473865195s] About to apply patch
I1101 15:16:55.415523       1 trace.go:81] Trace[2041273436]: "Get /apis/apps/v1/namespaces/onecloud/deployments/default-webconsole" (started: 2023-11-01 15:16:42.283189856 +0000 UTC m=+143630.313668462) (total time: 13.132308504s):
Trace[2041273436]: [13.131918645s] [13.13190947s] About to write a response
I1101 15:17:03.973545       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:17:03.973748       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:03.973844       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:03.981067       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:13.205025       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
I1101 15:17:43.973882       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:43.973967       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:43.981216       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1101 15:17:49.307728       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [172.16.1.10 172.16.1.8 172.16.1.9]
E1101 15:17:49.370417       1 controller.go:218] unable to sync kubernetes service: Operation cannot be fulfilled on endpoints "kubernetes": the object has been modified; please apply your changes to the latest version and try again
I1101 15:17:54.278118       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:17:54.278259       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:54.278330       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:54.278359       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:54.278371       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:54.285406       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:54.323053       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:17:54.323148       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:54.323233       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:54.323258       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:54.330450       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:54.765560       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:17:54.765668       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:54.765739       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:54.765757       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:54.765782       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:54.772647       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:55.304985       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:17:55.305202       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:55.305272       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:55.312540       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:03.973895       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:03.974082       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:03.974114       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:03.981202       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:13.208203       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
I1101 15:18:14.278270       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:14.278435       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:14.278522       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:14.278546       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:14.278593       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:14.285885       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:14.323176       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:14.323291       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:14.323375       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:14.330667       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:14.765726       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:14.765851       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:14.765888       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:14.765922       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:14.765932       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:14.773033       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:15.305141       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:15.305289       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:15.305405       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:15.312671       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:23.527908       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:23.527968       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:23.528006       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:23.535435       1 trace.go:81] Trace[1082105129]: "GuaranteedUpdate etcd3: *core.Endpoints" (started: 2023-11-01 15:18:21.211215907 +0000 UTC m=+143729.241694383) (total time: 2.324184139s):
Trace[1082105129]: [2.324142606s] [2.32392543s] Transaction committed
I1101 15:18:23.535546       1 trace.go:81] Trace[1682209964]: "Update /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2023-11-01 15:18:21.211104297 +0000 UTC m=+143729.241582761) (total time: 2.324423742s):
Trace[1682209964]: [2.32435641s] [2.324283196s] Object stored in database
I1101 15:18:23.535980       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:23.974056       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:23.974271       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:23.974388       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:23.981615       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:25.111994       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:25.112030       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:25.112073       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:25.112093       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:25.119427       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:25.955196       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:25.955338       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:25.955397       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:25.962826       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
E1101 15:18:26.295481       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:18:26.295664       1 trace.go:81] Trace[1475935685]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node6" (started: 2023-11-01 15:18:22.29551217 +0000 UTC m=+143730.325990794) (total time: 4.000128203s):
Trace[1475935685]: [4.000128203s] [4.000098627s] END
E1101 15:18:26.306846       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:18:26.307030       1 trace.go:81] Trace[1184478592]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/master2" (started: 2023-11-01 15:18:22.306928526 +0000 UTC m=+143730.337407131) (total time: 4.000077852s):
Trace[1184478592]: [4.000077852s] [4.000054823s] END
E1101 15:18:26.307091       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
E1101 15:18:26.307184       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
E1101 15:18:26.307215       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
E1101 15:18:26.307274       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
E1101 15:18:26.307550       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:18:26.308105       1 trace.go:81] Trace[741155508]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/master1" (started: 2023-11-01 15:18:22.307157222 +0000 UTC m=+143730.337635930) (total time: 4.000928437s):
Trace[741155508]: [4.000928437s] [4.000900199s] END
I1101 15:18:26.309199       1 trace.go:81] Trace[2103340014]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node9" (started: 2023-11-01 15:18:22.307315597 +0000 UTC m=+143730.337794305) (total time: 4.001863931s):
Trace[2103340014]: [4.001863931s] [4.001840884s] END
I1101 15:18:26.310392       1 trace.go:81] Trace[692271989]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node5" (started: 2023-11-01 15:18:22.30736885 +0000 UTC m=+143730.337847312) (total time: 4.002998264s):
Trace[692271989]: [4.002998264s] [4.002966125s] END
I1101 15:18:26.311532       1 trace.go:81] Trace[570742331]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/master3" (started: 2023-11-01 15:18:22.307430853 +0000 UTC m=+143730.337909324) (total time: 4.00407073s):
Trace[570742331]: [4.00407073s] [4.004033102s] END
I1101 15:18:26.312630       1 trace.go:81] Trace[1933138365]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node8" (started: 2023-11-01 15:18:22.307648611 +0000 UTC m=+143730.338127214) (total time: 4.004954278s):
Trace[1933138365]: [4.004954278s] [4.004932609s] END
E1101 15:18:26.772047       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:18:26.772298       1 trace.go:81] Trace[1919921483]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node4" (started: 2023-11-01 15:18:22.772105451 +0000 UTC m=+143730.802584251) (total time: 4.000171371s):
Trace[1919921483]: [4.000171371s] [4.000139444s] END
I1101 15:18:28.955370       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:28.955541       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:28.955629       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:28.955651       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:28.955663       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:28.963111       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
E1101 15:18:30.495962       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:18:30.496183       1 trace.go:81] Trace[189455044]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node6" (started: 2023-11-01 15:18:26.496116844 +0000 UTC m=+143734.526595661) (total time: 4.0000337s):
Trace[189455044]: [4.0000337s] [3.999997862s] END
E1101 15:18:30.507426       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
E1101 15:18:30.507594       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:18:30.507644       1 trace.go:81] Trace[5228041]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/master2" (started: 2023-11-01 15:18:26.507537854 +0000 UTC m=+143734.538016447) (total time: 4.000079468s):
Trace[5228041]: [4.000079468s] [4.00004698s] END
E1101 15:18:30.507757       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
E1101 15:18:30.507867       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
E1101 15:18:30.507869       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
E1101 15:18:30.508151       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:18:30.508690       1 trace.go:81] Trace[1282479670]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/master1" (started: 2023-11-01 15:18:26.507723378 +0000 UTC m=+143734.538202007) (total time: 4.000945438s):
Trace[1282479670]: [4.000945438s] [4.000914567s] END
I1101 15:18:30.509762       1 trace.go:81] Trace[1292861625]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node9" (started: 2023-11-01 15:18:26.50789834 +0000 UTC m=+143734.538376823) (total time: 4.001844931s):
Trace[1292861625]: [4.001844931s] [4.00180058s] END
I1101 15:18:30.510848       1 trace.go:81] Trace[1633548222]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node5" (started: 2023-11-01 15:18:26.508014381 +0000 UTC m=+143734.538493153) (total time: 4.002815834s):
Trace[1633548222]: [4.002815834s] [4.002782337s] END
I1101 15:18:30.511940       1 trace.go:81] Trace[1598312695]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/master3" (started: 2023-11-01 15:18:26.508041412 +0000 UTC m=+143734.538520200) (total time: 4.003884019s):
Trace[1598312695]: [4.003884019s] [4.003851732s] END
I1101 15:18:30.513134       1 trace.go:81] Trace[1447567640]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node8" (started: 2023-11-01 15:18:26.508251189 +0000 UTC m=+143734.538729764) (total time: 4.004864894s):
Trace[1447567640]: [4.004864894s] [4.004832039s] END
I1101 15:18:30.528000       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:30.528009       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:30.528041       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:30.528071       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:30.528099       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:30.528110       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:30.528124       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:30.528133       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:30.528144       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:30.528169       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:30.528138       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:30.528213       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:30.528229       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:30.528250       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:30.528274       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:30.528292       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W1101 15:18:30.528443       1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
W1101 15:18:30.528506       1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
I1101 15:18:30.528656       1 trace.go:81] Trace[130733300]: "Get /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node4" (started: 2023-11-01 15:18:26.972908454 +0000 UTC m=+143735.003386922) (total time: 3.555715603s):
Trace[130733300]: [3.555650215s] [3.555618464s] About to write a response
I1101 15:18:30.528752       1 trace.go:81] Trace[1763887538]: "Get /api/v1/namespaces/default" (started: 2023-11-01 15:18:29.206265824 +0000 UTC m=+143737.236744420) (total time: 1.322461756s):
Trace[1763887538]: [1.322415579s] [1.322405574s] About to write a response
I1101 15:18:30.528998       1 trace.go:81] Trace[933874258]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-cephfs-csi-ceph-com" (started: 2023-11-01 15:18:23.575699301 +0000 UTC m=+143731.606177765) (total time: 6.953265143s):
Trace[933874258]: [6.953149684s] [6.953139385s] About to write a response
I1101 15:18:30.529101       1 trace.go:81] Trace[698719125]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2023-11-01 15:18:23.682877018 +0000 UTC m=+143731.713355605) (total time: 6.846199826s):
Trace[698719125]: [6.846140128s] [6.846096337s] About to write a response
I1101 15:18:30.529310       1 trace.go:81] Trace[1880489279]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:18:28.587666954 +0000 UTC m=+143736.618145759) (total time: 1.941619334s):
Trace[1880489279]: [1.941552861s] [1.941541021s] About to write a response
I1101 15:18:30.529541       1 trace.go:81] Trace[1539631945]: "Get /apis/crd.projectcalico.org/v1/clusterinformations/default" (started: 2023-11-01 15:18:25.642036347 +0000 UTC m=+143733.672514809) (total time: 4.887480323s):
Trace[1539631945]: [4.887394986s] [4.887378122s] About to write a response
I1101 15:18:30.529693       1 trace.go:81] Trace[1178291245]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/rook-ceph-cephfs-csi-ceph-com" (started: 2023-11-01 15:18:28.587950386 +0000 UTC m=+143736.618429124) (total time: 1.941713653s):
Trace[1178291245]: [1.941634133s] [1.941622546s] About to write a response
I1101 15:18:30.529861       1 trace.go:81] Trace[1306623707]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2023-11-01 15:18:25.537395277 +0000 UTC m=+143733.567873925) (total time: 4.992432037s):
Trace[1306623707]: [4.992389935s] [4.992325294s] About to write a response
I1101 15:18:30.529958       1 trace.go:81] Trace[435621152]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-cephfs-csi-ceph-com" (started: 2023-11-01 15:18:23.583834359 +0000 UTC m=+143731.614313358) (total time: 6.946093059s):
Trace[435621152]: [6.945985793s] [6.945974544s] About to write a response
I1101 15:18:30.529984       1 trace.go:81] Trace[1843898610]: "List etcd3: key=/jobs, resourceVersion=, limit: 500, continue: " (started: 2023-11-01 15:18:26.31418586 +0000 UTC m=+143734.344664324) (total time: 4.215765891s):
Trace[1843898610]: [4.215765891s] [4.215765891s] END
I1101 15:18:30.530153       1 trace.go:81] Trace[2033902272]: "Get /apis/ceph.rook.io/v1/namespaces/rook-ceph/cephclusters/rook-ceph" (started: 2023-11-01 15:18:21.263491314 +0000 UTC m=+143729.293969792) (total time: 9.266639995s):
Trace[2033902272]: [9.266009784s] [9.265994006s] About to write a response
I1101 15:18:30.530450       1 trace.go:81] Trace[367910637]: "List etcd3: key=/minions, resourceVersion=, limit: 0, continue: " (started: 2023-11-01 15:18:29.738173312 +0000 UTC m=+143737.768651776) (total time: 792.258994ms):
Trace[367910637]: [792.258994ms] [792.258994ms] END
I1101 15:18:30.530752       1 trace.go:81] Trace[190418557]: "List /apis/batch/v1/jobs" (started: 2023-11-01 15:18:26.314121846 +0000 UTC m=+143734.344600412) (total time: 4.216610148s):
Trace[190418557]: [4.215877835s] [4.215822224s] Listing from storage done
I1101 15:18:30.531404       1 trace.go:81] Trace[473688063]: "List /api/v1/nodes" (started: 2023-11-01 15:18:29.738150895 +0000 UTC m=+143737.768629359) (total time: 793.230152ms):
Trace[473688063]: [792.31361ms] [792.301308ms] Listing from storage done
I1101 15:18:30.533695       1 trace.go:81] Trace[1839599246]: "List etcd3: key=/deployments/rook-ceph, resourceVersion=, limit: 0, continue: " (started: 2023-11-01 15:18:28.339234883 +0000 UTC m=+143736.369713348) (total time: 2.194435312s):
Trace[1839599246]: [2.194435312s] [2.194435312s] END
I1101 15:18:30.533759       1 trace.go:81] Trace[885452319]: "List /apis/apps/v1/namespaces/rook-ceph/deployments" (started: 2023-11-01 15:18:28.339148148 +0000 UTC m=+143736.369626771) (total time: 2.194598058s):
Trace[885452319]: [2.194558333s] [2.1944821s] Listing from storage done
I1101 15:18:30.535602       1 trace.go:81] Trace[1324526574]: "List etcd3: key=/pods/onecloud, resourceVersion=, limit: 0, continue: " (started: 2023-11-01 15:18:24.497926523 +0000 UTC m=+143732.528404988) (total time: 6.037653294s):
Trace[1324526574]: [6.037653294s] [6.037653294s] END
I1101 15:18:30.535823       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:30.536114       1 trace.go:81] Trace[1422972507]: "List /api/v1/namespaces/onecloud/pods" (started: 2023-11-01 15:18:24.497833282 +0000 UTC m=+143732.528311746) (total time: 6.03825197s):
Trace[1422972507]: [6.037782203s] [6.037698854s] Listing from storage done
I1101 15:18:30.536474       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:30.536517       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:30.559295       1 trace.go:81] Trace[1114255220]: "GuaranteedUpdate etcd3: *core.Event" (started: 2023-11-01 15:18:21.764790761 +0000 UTC m=+143729.795269230) (total time: 8.794486918s):
Trace[1114255220]: [8.794479162s] [8.794479162s] initial value restored
I1101 15:18:30.559318       1 trace.go:81] Trace[748229474]: "Get /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2023-11-01 15:18:21.351009968 +0000 UTC m=+143729.381488544) (total time: 9.20828804s):
Trace[748229474]: [9.208257943s] [9.20821828s] About to write a response
I1101 15:18:30.559333       1 trace.go:81] Trace[322876615]: "Get /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2023-11-01 15:18:21.307376345 +0000 UTC m=+143729.337854896) (total time: 9.251937506s):
Trace[322876615]: [9.251908064s] [9.251876737s] About to write a response
I1101 15:18:30.559363       1 trace.go:81] Trace[1156015787]: "Get /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2023-11-01 15:18:22.021873821 +0000 UTC m=+143730.052352389) (total time: 8.537449745s):
Trace[1156015787]: [8.537406218s] [8.537378115s] About to write a response
I1101 15:18:30.559378       1 trace.go:81] Trace[725484967]: "Patch /api/v1/namespaces/kube-system/events/kube-apiserver-master3.17930bebc89a8331" (started: 2023-11-01 15:18:21.764698145 +0000 UTC m=+143729.795176703) (total time: 8.794642788s):
Trace[725484967]: [8.794573501s] [8.794542001s] About to apply patch
I1101 15:18:30.582481       1 trace.go:81] Trace[248869568]: "GuaranteedUpdate etcd3: *core.Event" (started: 2023-11-01 15:18:24.910043255 +0000 UTC m=+143732.940521725) (total time: 5.67241879s):
Trace[248869568]: [5.619674701s] [5.619674701s] initial value restored
I1101 15:18:30.582530       1 trace.go:81] Trace[1242859951]: "GuaranteedUpdate etcd3: *core.Event" (started: 2023-11-01 15:18:25.113392239 +0000 UTC m=+143733.143870701) (total time: 5.469118072s):
Trace[1242859951]: [5.416228299s] [5.416228299s] initial value restored
I1101 15:18:30.582552       1 trace.go:81] Trace[5560823]: "Patch /api/v1/namespaces/kube-system/events/kube-apiserver-master2.1793063aff5863b1" (started: 2023-11-01 15:18:24.909950199 +0000 UTC m=+143732.940428845) (total time: 5.672585861s):
Trace[5560823]: [5.619769707s] [5.619737155s] About to apply patch
I1101 15:18:30.582587       1 trace.go:81] Trace[1942213644]: "Patch /api/v1/namespaces/kube-system/events/kube-apiserver-master1.1793113e9fa3e8f9" (started: 2023-11-01 15:18:25.113312635 +0000 UTC m=+143733.143791250) (total time: 5.469259168s):
Trace[1942213644]: [5.416312566s] [5.416270396s] About to apply patch
I1101 15:18:33.822380       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:33.822582       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:33.822686       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:33.829872       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:33.865149       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:33.865261       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:33.865365       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:33.872339       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]

2. Control node 1 -- keepalived-master1 log excerpt

[2023-11-01 15:21:17] https://172.16.1.8:6443/healthz ok
[2023-11-01 15:21:22] got router interface: br0
[2023-11-01 15:21:22] interface br0 OK
[2023-11-01 15:21:22] https://172.16.1.8:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
Wed Nov  1 15:21:22 2023: Script `check_kube` now returning 1
[2023-11-01 15:21:25] https://172.16.1.8:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-11-01 15:21:28] https://172.16.1.8:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-11-01 15:21:31] https://172.16.1.8:6443/healthz ok
Wed Nov  1 15:21:31 2023: Script `check_kube` now returning 0
[2023-11-01 15:21:32] https://172.16.1.8:6443/healthz ok
[2023-11-01 15:21:36] https://172.16.1.8:6443/healthz ok
[2023-11-01 15:21:37] got router interface: br0
[2023-11-01 15:21:37] interface br0 OK
[2023-11-01 15:21:38] https://172.16.1.8:6443/healthz ok
[2023-11-01 15:21:41] https://172.16.1.8:6443/healthz ok
[2023-11-01 15:21:45] https://172.16.1.8:6443/healthz ok
[2023-11-01 15:21:47] https://172.16.1.8:6443/healthz ok
[2023-11-01 15:21:50] https://172.16.1.8:6443/healthz ok
[2023-11-01 15:21:52] got router interface: br0
[2023-11-01 15:21:52] interface br0 OK
[2023-11-01 15:21:53] https://172.16.1.8:6443/healthz ok
[2023-11-01 15:21:56] https://172.16.1.8:6443/healthz ok
[2023-11-01 15:21:59] https://172.16.1.8:6443/healthz ok
[2023-11-01 15:22:03] https://172.16.1.8:6443/healthz ok
[2023-11-01 15:22:06] https://172.16.1.8:6443/healthz ok
[2023-11-01 15:22:07] got router interface: br0
[2023-11-01 15:22:07] interface br0 OK
[2023-11-01 15:22:09] https://172.16.1.8:6443/healthz ok
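The keepalived check script only reports pass/fail, but the verbose healthz output it prints already names the failing check (`[-]etcd failed`). A minimal sketch for extracting just the failing checks from that output; the `kubectl exec` / `etcdctl` command in the trailing comment assumes a kubeadm-style etcd static pod and standard cert paths, so adjust to your deployment:

```shell
#!/bin/sh
# Extract only the failing checks from kube-apiserver's verbose healthz text,
# e.g. the output of:  curl -k https://172.16.1.8:6443/healthz?verbose
failed_checks() {
  # verbose healthz prints one check per line, prefixed [+] (ok) or [-] (failed)
  grep '^\[-\]' | sed 's/^\[-\]//'
}

printf '[+]ping ok\n[-]etcd failed: reason withheld\n[+]log ok\n' | failed_checks
# -> etcd failed: reason withheld

# Once etcd is the named failure, query the members directly (pod name and
# cert flags below are assumptions for a kubeadm-style static-pod etcd):
#   kubectl -n kube-system exec etcd-master1 -- etcdctl \
#     --cacert /etc/kubernetes/pki/etcd/ca.crt \
#     --cert /etc/kubernetes/pki/etcd/server.crt \
#     --key /etc/kubernetes/pki/etcd/server.key endpoint health
```

The "reason withheld" in the log is expected: healthz hides the detail unless queried with `?verbose` from an authorized client, which is why checking etcd directly is the next step.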

@chenjacken

Control node 3 -- kube-apiserver log excerpt

I1101 15:13:17.320470       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:13:17.320545       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:13:17.320639       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:13:17.320674       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:13:17.327583       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:13:18.075705       1 trace.go:81] Trace[1500675076]: "Get /api/v1/namespaces/rook-ceph/services/rook-ceph-mgr" (started: 2023-11-01 15:13:13.143364778 +0000 UTC m=+137136.263703818) (total time: 4.93230451s):
Trace[1500675076]: [4.932156545s] [4.932128691s] About to write a response
I1101 15:13:18.075998       1 trace.go:81] Trace[1804269931]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:13:14.610786562 +0000 UTC m=+137137.731125759) (total time: 3.465187859s):
Trace[1804269931]: [3.465107113s] [3.465094428s] About to write a response
I1101 15:13:18.076053       1 trace.go:81] Trace[1535047480]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:13:14.778616033 +0000 UTC m=+137137.898955238) (total time: 3.297412271s):
Trace[1535047480]: [3.297356542s] [3.297348007s] About to write a response
I1101 15:13:18.076061       1 trace.go:81] Trace[1480872066]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:13:14.81730794 +0000 UTC m=+137137.937647172) (total time: 3.25873062s):
Trace[1480872066]: [3.258666062s] [3.258649209s] About to write a response
I1101 15:13:18.076150       1 trace.go:81] Trace[839780947]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-cephfs-csi-ceph-com" (started: 2023-11-01 15:13:15.440796191 +0000 UTC m=+137138.561135094) (total time: 2.635285731s):
Trace[839780947]: [2.635171891s] [2.635160756s] About to write a response
I1101 15:13:18.076184       1 trace.go:81] Trace[511651170]: "Get /api/v1/namespaces/rook-ceph/services/rook-ceph-mgr" (started: 2023-11-01 15:13:13.64642479 +0000 UTC m=+137136.766763794) (total time: 4.429729106s):
Trace[511651170]: [4.42960064s] [4.429586453s] About to write a response
I1101 15:13:18.076230       1 trace.go:81] Trace[602343793]: "Get /api/v1/namespaces/default" (started: 2023-11-01 15:13:15.865259627 +0000 UTC m=+137138.985598694) (total time: 2.210945709s):
Trace[602343793]: [2.210893308s] [2.210878575s] About to write a response
I1101 15:13:18.076369       1 trace.go:81] Trace[2039106890]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-cephfs-csi-ceph-com" (started: 2023-11-01 15:13:16.469212942 +0000 UTC m=+137139.589552043) (total time: 1.607125334s):
Trace[2039106890]: [1.607068529s] [1.607054871s] About to write a response
I1101 15:13:19.785364       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:13:19.785534       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:13:19.785630       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:13:19.785653       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:13:19.785694       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:13:19.792180       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:13:39.785529       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:13:39.785597       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:13:39.785694       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:13:39.792741       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]


W1101 15:13:46.055326       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [172.16.1.10 172.16.1.8]



I1101 15:13:56.371863       1 trace.go:81] Trace[145184416]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2023-11-01 15:13:55.871551654 +0000 UTC m=+137178.991890554) (total time: 500.258272ms):
Trace[145184416]: [356.272796ms] [353.981469ms] Transaction prepared
Trace[145184416]: [500.219547ms] [143.946751ms] Transaction committed
I1101 15:14:02.642632       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io





W1101 15:14:16.272187       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [172.16.1.10 172.16.1.8]
W1101 15:14:36.386827       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [172.16.1.10 172.16.1.8]
I1101 15:15:02.644195       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
W1101 15:15:16.052482       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [172.16.1.10 172.16.1.8 172.16.1.9]
W1101 15:15:45.983343       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [172.16.1.10 172.16.1.8]
I1101 15:16:02.645473       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
I1101 15:16:04.199027       1 trace.go:81] Trace[1589278015]: "GuaranteedUpdate etcd3: *coordination.Lease" (started: 2023-11-01 15:16:03.623397346 +0000 UTC m=+137306.743736248) (total time: 575.601894ms):
Trace[1589278015]: [575.583392ms] [575.349371ms] Transaction committed
I1101 15:16:04.199145       1 trace.go:81] Trace[713006158]: "Update /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:16:03.623187033 +0000 UTC m=+137306.743526282) (total time: 575.942372ms):
Trace[713006158]: [575.876682ms] [575.714312ms] Object stored in database
I1101 15:16:04.199828       1 trace.go:81] Trace[2002752178]: "Get /api/v1/namespaces/rook-ceph/services/rook-ceph-mgr" (started: 2023-11-01 15:16:03.642490755 +0000 UTC m=+137306.762829647) (total time: 557.318192ms):
Trace[2002752178]: [557.239956ms] [557.229488ms] About to write a response
I1101 15:16:04.200398       1 trace.go:81] Trace[1780543949]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:16:03.647136583 +0000 UTC m=+137306.767475648) (total time: 553.242057ms):
Trace[1780543949]: [553.199368ms] [553.188997ms] About to write a response
W1101 15:16:06.143062       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [172.16.1.10 172.16.1.8]




I1101 15:16:44.325357       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:44.325535       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:44.325646       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:44.332474       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:47.323620       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []

Control node 3 -- keepalived-master3 log excerpt

[2023-11-01 15:21:09] https://172.16.1.10:6443/healthz ok
[2023-11-01 15:21:12] https://172.16.1.10:6443/healthz ok
[2023-11-01 15:21:13] got router interface: br0
[2023-11-01 15:21:13] interface br0 OK
[2023-11-01 15:21:15] https://172.16.1.10:6443/healthz ok
[2023-11-01 15:21:18] https://172.16.1.10:6443/healthz ok
[2023-11-01 15:21:23] https://172.16.1.10:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
Wed Nov  1 15:21:23 2023: Script `check_kube` now returning 1
[2023-11-01 15:21:24] https://172.16.1.10:6443/healthz ok
Wed Nov  1 15:21:24 2023: Script `check_kube` now returning 0
[2023-11-01 15:21:27] https://172.16.1.10:6443/healthz ok
[2023-11-01 15:21:28] got router interface: br0
[2023-11-01 15:21:28] interface br0 OK
[2023-11-01 15:21:30] https://172.16.1.10:6443/healthz ok
[2023-11-01 15:21:33] https://172.16.1.10:6443/healthz ok
[2023-11-01 15:21:36] https://172.16.1.10:6443/healthz ok
[2023-11-01 15:21:39] https://172.16.1.10:6443/healthz ok
[2023-11-01 15:21:42] https://172.16.1.10:6443/healthz ok
[2023-11-01 15:21:43] got router interface: br0

@chenjacken

chenjacken commented Nov 1, 2023

3. Control node 2 -- kube-apiserver log excerpt

I1101 15:14:08.993825       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:14:08.993890       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:08.993964       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:09.001610       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:10.440006       1 trace.go:81] Trace[1578235393]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:14:02.478762625 +0000 UTC m=+131238.080542905) (total time: 7.96120212s):
Trace[1578235393]: [7.961067025s] [7.961047076s] About to write a response
I1101 15:14:10.440078       1 trace.go:81] Trace[2084954579]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:14:02.478803112 +0000 UTC m=+131238.080583411) (total time: 7.961234248s):
Trace[2084954579]: [7.961117083s] [7.961099442s] About to write a response
I1101 15:14:11.995517       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:14:11.995677       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:11.995805       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:12.004071       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:14.725825       1 trace.go:81] Trace[1646897068]: "Get /api/v1/namespaces/default/endpoints/kubernetes" (started: 2023-11-01 15:14:03.268328603 +0000 UTC m=+131238.870108560) (total time: 11.457455414s):
Trace[1646897068]: [11.457384199s] [11.457371874s] About to write a response
I1101 15:14:14.910424       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:14:14.910459       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:14.910502       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:14.910524       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:14.917517       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:14.995554       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:14:14.995700       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:14.995760       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:15.003453       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]


I1101 15:14:17.936220       1 trace.go:81] Trace[433395527]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:14:15.701286643 +0000 UTC m=+131251.303066709) (total time: 2.234903375s):
Trace[433395527]: [2.234818873s] [2.234805563s] About to write a response
I1101 15:14:17.936220       1 trace.go:81] Trace[2101078210]: "List etcd3: key=/masterleases/, resourceVersion=0, limit: 0, continue: " (started: 2023-11-01 15:14:14.726188601 +0000 UTC m=+131250.327968556) (total time: 3.210007136s):
Trace[2101078210]: [3.210007136s] [3.210007136s] END
I1101 15:14:17.936279       1 trace.go:81] Trace[1663642752]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:14:15.701511418 +0000 UTC m=+131251.303291420) (total time: 2.234733821s):
Trace[1663642752]: [2.234625417s] [2.234616924s] About to write a response
W1101 15:14:17.936351       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [172.16.1.10 172.16.1.8]
I1101 15:14:17.937037       1 trace.go:81] Trace[1497358909]: "List etcd3: key=/services/specs, resourceVersion=, limit: 0, continue: " (started: 2023-11-01 15:14:14.726864735 +0000 UTC m=+131250.328644690) (total time: 3.21015424s):
Trace[1497358909]: [3.21015424s] [3.21015424s] END
I1101 15:14:17.937271       1 trace.go:81] Trace[1096625169]: "List etcd3: key=/services/specs, resourceVersion=, limit: 0, continue: " (started: 2023-11-01 15:14:14.726105157 +0000 UTC m=+131250.327885112) (total time: 3.2111427s):
Trace[1096625169]: [3.2111427s] [3.2111427s] END
I1101 15:14:17.937710       1 trace.go:81] Trace[1938219750]: "List /api/v1/services" (started: 2023-11-01 15:14:14.726851089 +0000 UTC m=+131250.328631146) (total time: 3.210839967s):
Trace[1938219750]: [3.210196749s] [3.210188178s] Listing from storage done
I1101 15:14:17.938078       1 trace.go:81] Trace[977517178]: "List /api/v1/services" (started: 2023-11-01 15:14:14.726084912 +0000 UTC m=+131250.327864978) (total time: 3.21197478s):
Trace[977517178]: [3.211199806s] [3.21118983s] Listing from storage done
E1101 15:14:17.939341       1 controller.go:218] unable to sync kubernetes service: Operation cannot be fulfilled on endpoints "kubernetes": the object has been modified; please apply your changes to the latest version and try again


I1101 15:14:19.860465       1 trace.go:81] Trace[351280407]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2023-11-01 15:14:17.944634851 +0000 UTC m=+131253.546414808) (total time: 1.915797981s):
Trace[351280407]: [729.26068ms] [727.409285ms] Transaction prepared
Trace[351280407]: [1.915770016s] [1.186509336s] Transaction committed
W1101 15:14:19.868226       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [172.16.1.10 172.16.1.8 172.16.1.9]
I1101 15:14:20.477830       1 trace.go:81] Trace[2119154356]: "GuaranteedUpdate etcd3: *core.Endpoints" (started: 2023-11-01 15:14:19.868587273 +0000 UTC m=+131255.470367230) (total time: 609.191194ms):
Trace[2119154356]: [609.149044ms] [608.922294ms] Transaction committed
I1101 15:14:20.477938       1 trace.go:81] Trace[648740648]: "Update /api/v1/namespaces/default/endpoints/kubernetes" (started: 2023-11-01 15:14:19.868503118 +0000 UTC m=+131255.470283073) (total time: 609.404671ms):
Trace[648740648]: [609.349983ms] [609.304597ms] Object stored in database
I1101 15:14:23.994361       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:14:23.994401       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:23.994439       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:23.994458       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:24.001602       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:24.910263       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:14:29.995705       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:29.995848       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:30.005726       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:32.950196       1 trace.go:81] Trace[371083734]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:14:26.893190293 +0000 UTC m=+131262.494970357) (total time: 6.056981035s):
Trace[371083734]: [6.056925002s] [6.056912083s] About to write a response
I1101 15:14:32.950196       1 trace.go:81] Trace[300350844]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:14:26.893106036 +0000 UTC m=+131262.494886100) (total time: 6.057064792s):
Trace[300350844]: [6.056989662s] [6.056976597s] About to write a response
I1101 15:14:32.994492       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:14:32.994523       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:32.994565       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:33.001602       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:34.703277       1 trace.go:81] Trace[1644849603]: "Get /api/v1/namespaces/default" (started: 2023-11-01 15:14:27.940151216 +0000 UTC m=+131263.541931174) (total time: 6.763091813s):
Trace[1644849603]: [6.763031911s] [6.763015281s] About to write a response
I1101 15:14:35.993364       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:14:35.993527       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:35.993565       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:36.000746       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]



I1101 15:14:38.994142       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:14:38.994281       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:38.994348       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:38.994364       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:38.994375       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:39.001452       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:39.409835       1 trace.go:81] Trace[1583383815]: "Get /api/v1/namespaces/default/services/kubernetes" (started: 2023-11-01 15:14:34.703980513 +0000 UTC m=+131270.305760579) (total time: 4.705819088s):
Trace[1583383815]: [4.705756683s] [4.705744264s] About to write a response
I1101 15:14:41.993834       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:14:41.993873       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:41.993938       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:42.001757       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:42.237177       1 trace.go:81] Trace[1031913002]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:14:40.041829489 +0000 UTC m=+131275.643609443) (total time: 2.195318256s):
Trace[1031913002]: [2.195246459s] [2.195235778s] About to write a response
I1101 15:14:42.237177       1 trace.go:81] Trace[768737043]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:14:40.041853635 +0000 UTC m=+131275.643633656) (total time: 2.195296123s):
Trace[768737043]: [2.195220055s] [2.19521176s] About to write a response
I1101 15:14:42.865714       1 trace.go:81] Trace[310901254]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2023-11-01 15:14:39.410162203 +0000 UTC m=+131275.011942167) (total time: 3.455515461s):
Trace[310901254]: [2.826987276s] [2.826987276s] initial value restored
Trace[310901254]: [3.133178452s] [306.191176ms] Transaction prepared
Trace[310901254]: [3.455494231s] [322.315779ms] Transaction committed
W1101 15:14:43.230328       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [172.16.1.10 172.16.1.8 172.16.1.9]
I1101 15:14:43.866135       1 trace.go:81] Trace[1078002122]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2023-11-01 15:14:43.315504949 +0000 UTC m=+131278.917284904) (total time: 550.590649ms):
Trace[1078002122]: [453.861237ms] [452.16235ms] Transaction prepared
I1101 15:14:46.656433       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:14:46.656560       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:46.656627       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:46.656643       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:14:46.663724       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:06.656688       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:06.656775       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:06.656798       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:06.656830       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:06.664014       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:08.262704       1 trace.go:81] Trace[1589436928]: "Get /api/v1/namespaces/default" (started: 2023-11-01 15:15:03.311745781 +0000 UTC m=+131298.913525832) (total time: 4.950916075s):
Trace[1589436928]: [4.950842053s] [4.950828353s] About to write a response
I1101 15:15:08.262757       1 trace.go:81] Trace[1952931239]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:15:00.310364554 +0000 UTC m=+131295.912144635) (total time: 7.952353425s):
Trace[1952931239]: [7.952244201s] [7.952231372s] About to write a response
I1101 15:15:08.262828       1 trace.go:81] Trace[1167503010]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:15:00.310364502 +0000 UTC m=+131295.912144458) (total time: 7.952438177s):
Trace[1167503010]: [7.952381075s] [7.952368847s] About to write a response
I1101 15:15:08.994005       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:15:08.994134       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:08.994207       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:08.994243       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:08.994259       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:09.001500       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:11.994015       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:15:11.994202       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:11.994257       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:12.001514       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:13.014745       1 trace.go:81] Trace[1337460274]: "Get /api/v1/namespaces/default/services/kubernetes" (started: 2023-11-01 15:15:08.263576464 +0000 UTC m=+131303.865356573) (total time: 4.751132039s):
Trace[1337460274]: [4.751055533s] [4.751043684s] About to write a response
I1101 15:15:14.920684       1 trace.go:81] Trace[648937508]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:15:13.403802141 +0000 UTC m=+131309.005582150) (total time: 1.516854699s):
Trace[648937508]: [1.516792564s] [1.516780771s] About to write a response
I1101 15:15:14.920684       1 trace.go:81] Trace[392155311]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:15:13.403830251 +0000 UTC m=+131309.005610264) (total time: 1.516826061s):
Trace[392155311]: [1.516754086s] [1.516741614s] About to write a response
I1101 15:15:15.691487       1 trace.go:81] Trace[1843228790]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2023-11-01 15:15:13.015124163 +0000 UTC m=+131308.616904123) (total time: 2.676319455s):
Trace[1843228790]: [1.905372923s] [1.905372923s] initial value restored
Trace[1843228790]: [2.330145961s] [424.773038ms] Transaction prepared
Trace[1843228790]: [2.676288975s] [346.143014ms] Transaction committed
I1101 15:15:16.326055       1 trace.go:81] Trace[1877311496]: "Get /api/v1/namespaces/default/endpoints/kubernetes" (started: 2023-11-01 15:15:15.691964232 +0000 UTC m=+131311.293744244) (total time: 634.059033ms):
Trace[1877311496]: [634.020198ms] [634.00642ms] About to write a response
I1101 15:15:17.170328       1 trace.go:81] Trace[1218842334]: "List etcd3: key=/masterleases/, resourceVersion=0, limit: 0, continue: " (started: 2023-11-01 15:15:16.326335809 +0000 UTC m=+131311.928115772) (total time: 843.955916ms):
Trace[1218842334]: [843.955916ms] [843.955916ms] END
I1101 15:15:26.656678       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:15:26.656811       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:26.656857       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:26.664085       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:32.994484       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:15:44.996606       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:44.996666       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:45.004050       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:46.075251       1 trace.go:81] Trace[228028141]: "Get /api/v1/namespaces/default" (started: 2023-11-01 15:15:37.171110894 +0000 UTC m=+131332.772890968) (total time: 8.904061106s):
Trace[228028141]: [8.903971062s] [8.903958474s] About to write a response
I1101 15:15:46.075282       1 trace.go:81] Trace[1501902518]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:15:37.491333187 +0000 UTC m=+131333.093113209) (total time: 8.5839039s):
Trace[1501902518]: [8.583759485s] [8.583747636s] About to write a response
I1101 15:15:46.075538       1 trace.go:81] Trace[1432803320]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:15:37.491328539 +0000 UTC m=+131333.093108561) (total time: 8.584147597s):
Trace[1432803320]: [8.584072691s] [8.584056724s] About to write a response
I1101 15:15:46.226867       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:15:46.227063       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:46.227152       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:46.227176       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:46.227207       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:46.234426       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:46.656794       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:15:46.656841       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:46.656885       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:46.656918       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:46.664102       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:46.972320       1 trace.go:81] Trace[2025638539]: "Get /api/v1/namespaces/default/services/kubernetes" (started: 2023-11-01 15:15:46.076543191 +0000 UTC m=+131341.678323167) (total time: 895.741375ms):
Trace[2025638539]: [895.668464ms] [895.650805ms] About to write a response
W1101 15:15:47.367764       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [172.16.1.10 172.16.1.8 172.16.1.9]
I1101 15:15:48.516190       1 trace.go:81] Trace[38440018]: "Get /api/v1/namespaces/default" (started: 2023-11-01 15:15:47.820257458 +0000 UTC m=+131343.422037413) (total time: 695.899191ms):
Trace[38440018]: [695.856016ms] [695.842847ms] About to write a response
I1101 15:15:53.994676       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:15:53.994729       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:53.994770       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:53.994791       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:54.002817       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:57.010593       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:57.010702       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:57.017758       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:57.734375       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
I1101 15:15:59.909854       1 trace.go:81] Trace[2012896034]: "Get /api/v1/namespaces/kube-system" (started: 2023-11-01 15:15:52.35276318 +0000 UTC m=+131347.954543240) (total time: 7.557053279s):
Trace[2012896034]: [7.556998653s] [7.556986506s] About to write a response
I1101 15:15:59.909882       1 trace.go:81] Trace[1291583662]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:15:52.966388136 +0000 UTC m=+131348.568168225) (total time: 6.943438775s):
Trace[1291583662]: [6.943339192s] [6.943324372s] About to write a response
I1101 15:15:59.910117       1 trace.go:81] Trace[425217238]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:15:52.966551341 +0000 UTC m=+131348.568331301) (total time: 6.943545632s):
Trace[425217238]: [6.943502835s] [6.943486996s] About to write a response
I1101 15:15:59.994819       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:15:59.994958       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:15:59.995011       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:00.002151       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:02.994495       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:06.657102       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:06.657188       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:06.657212       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:06.657224       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:06.664308       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:08.250285       1 trace.go:81] Trace[127545505]: "Get /api/v1/namespaces/default" (started: 2023-11-01 15:15:57.820409057 +0000 UTC m=+131353.422189012) (total time: 10.429825075s):
Trace[127545505]: [10.429743275s] [10.429731517s] About to write a response
I1101 15:16:08.994688       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:08.994735       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:08.994826       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:09.002484       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:09.883644       1 trace.go:81] Trace[1311323426]: "Get /api/v1/namespaces/kube-public" (started: 2023-11-01 15:15:59.910664758 +0000 UTC m=+131355.512444713) (total time: 9.972946714s):
Trace[1311323426]: [9.972896176s] [9.972886989s] About to write a response
I1101 15:16:09.883722       1 trace.go:81] Trace[395288861]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:16:04.960132712 +0000 UTC m=+131360.561912687) (total time: 4.923562361s):
Trace[395288861]: [4.923478109s] [4.923465516s] About to write a response
I1101 15:16:09.883745       1 trace.go:81] Trace[154597705]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:16:04.959884571 +0000 UTC m=+131360.561664677) (total time: 4.92383935s):
Trace[154597705]: [4.923791301s] [4.923779138s] About to write a response
I1101 15:16:09.884058       1 trace.go:81] Trace[1877861723]: "Get /api/v1/namespaces/default/services/kubernetes" (started: 2023-11-01 15:16:08.251249792 +0000 UTC m=+131363.853029940) (total time: 1.63276951s):
Trace[1877861723]: [1.632708459s] [1.632697005s] About to write a response
I1101 15:16:10.866403       1 trace.go:81] Trace[1265611010]: "Get /api/v1/namespaces/default/endpoints/kubernetes" (started: 2023-11-01 15:16:10.31863502 +0000 UTC m=+131365.920415060) (total time: 547.730605ms):
Trace[1265611010]: [547.666955ms] [547.654334ms] About to write a response
W1101 15:16:10.868655       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [172.16.1.10 172.16.1.8 172.16.1.9]
I1101 15:16:11.451809       1 trace.go:81] Trace[196617823]: "GuaranteedUpdate etcd3: *core.Endpoints" (started: 2023-11-01 15:16:10.86918057 +0000 UTC m=+131366.470960526) (total time: 582.589583ms):
Trace[196617823]: [582.570391ms] [582.337349ms] Transaction committed
I1101 15:16:11.451886       1 trace.go:81] Trace[358052927]: "Update /api/v1/namespaces/default/endpoints/kubernetes" (started: 2023-11-01 15:16:10.869084408 +0000 UTC m=+131366.470864449) (total time: 582.786976ms):
Trace[358052927]: [582.746795ms] [582.688973ms] Object stored in database
I1101 15:16:12.208222       1 trace.go:81] Trace[2030783293]: "Get /api/v1/namespaces/default/services/kubernetes" (started: 2023-11-01 15:16:11.483156 +0000 UTC m=+131367.084936050) (total time: 725.02557ms):
Trace[2030783293]: [724.958013ms] [724.946909ms] About to write a response
I1101 15:16:12.740893       1 trace.go:81] Trace[582975484]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2023-11-01 15:16:12.2085004 +0000 UTC m=+131367.810280389) (total time: 532.339436ms):
Trace[582975484]: [212.78369ms] [211.529348ms] Transaction prepared
Trace[582975484]: [532.318765ms] [319.535075ms] Transaction committed
I1101 15:16:17.994772       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:17.994871       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:17.994899       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:18.002159       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:20.994720       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:20.994757       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:20.994831       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:21.002058       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:23.689860       1 trace.go:81] Trace[366186475]: "Get /api/v1/namespaces/default" (started: 2023-11-01 15:16:21.452843993 +0000 UTC m=+131377.054624017) (total time: 2.236972692s):
Trace[366186475]: [2.23692253s] [2.236908807s] About to write a response
I1101 15:16:23.690051       1 trace.go:81] Trace[1933080669]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:16:20.289499435 +0000 UTC m=+131375.891279419) (total time: 3.400530273s):
Trace[1933080669]: [3.400455755s] [3.400443039s] About to write a response
I1101 15:16:23.690051       1 trace.go:81] Trace[1617462205]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:16:20.28785749 +0000 UTC m=+131375.889637610) (total time: 3.402170811s):
Trace[1617462205]: [3.402090708s] [3.40207783s] About to write a response
I1101 15:16:23.995422       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:23.995456       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:23.995559       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:24.003885       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:24.910072       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:26.995066       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:26.995150       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:27.002191       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:28.966662       1 trace.go:81] Trace[542178693]: "Get /api/v1/namespaces/default/services/kubernetes" (started: 2023-11-01 15:16:23.690588993 +0000 UTC m=+131379.292369080) (total time: 5.276032676s):
Trace[542178693]: [5.275972302s] [5.275962167s] About to write a response
I1101 15:16:31.896596       1 trace.go:81] Trace[543025291]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2023-11-01 15:16:28.967066358 +0000 UTC m=+131384.568846321) (total time: 2.929488708s):
Trace[543025291]: [1.003251004s] [1.001607113s] Transaction prepared
Trace[543025291]: [2.929459599s] [1.926208595s] Transaction committed
I1101 15:16:32.376014       1 trace.go:81] Trace[96471273]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:16:30.916980715 +0000 UTC m=+131386.518760744) (total time: 1.459002132s):
Trace[96471273]: [1.458916142s] [1.45890468s] About to write a response
I1101 15:16:32.376149       1 trace.go:81] Trace[2065409607]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:16:30.916977296 +0000 UTC m=+131386.518757365) (total time: 1.459122605s):
Trace[2065409607]: [1.459026618s] [1.459013889s] About to write a response
W1101 15:16:32.377561       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [172.16.1.10 172.16.1.8 172.16.1.9]
I1101 15:16:33.516951       1 trace.go:81] Trace[1468118496]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2023-11-01 15:16:32.725804065 +0000 UTC m=+131388.327584020) (total time: 791.078902ms):
Trace[1468118496]: [648.383676ms] [646.628472ms] Transaction prepared
Trace[1468118496]: [791.057105ms] [142.673429ms] Transaction committed
I1101 15:16:44.910127       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:44.910159       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:44.910223       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:44.910240       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:44.910261       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:44.917583       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:54.910370       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:54.910497       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:54.917782       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:55.514852       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:55.514895       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:55.514945       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:55.514963       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:55.522279       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:56.997688       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:56.997817       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:56.997922       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:57.005207       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:57.736781       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
I1101 15:16:58.875681       1 trace.go:81] Trace[1005266963]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:16:46.843322572 +0000 UTC m=+131402.445102591) (total time: 12.032328687s):
Trace[1005266963]: [12.03224616s] [12.032234254s] About to write a response
I1101 15:16:58.875763       1 trace.go:81] Trace[490757766]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:16:46.843689346 +0000 UTC m=+131402.445469702) (total time: 12.032048404s):
Trace[490757766]: [12.031994072s] [12.03198482s] About to write a response
I1101 15:16:58.875824       1 trace.go:81] Trace[1508522725]: "Get /api/v1/namespaces/default" (started: 2023-11-01 15:16:42.721678815 +0000 UTC m=+131398.323458774) (total time: 16.154111289s):
Trace[1508522725]: [16.154070582s] [16.154054319s] About to write a response
I1101 15:16:59.995687       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:16:59.995946       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:59.996015       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:59.996060       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:16:59.996090       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:00.003286       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:01.761858       1 trace.go:81] Trace[801507814]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/rook-ceph-cephfs-csi-ceph-com" (started: 2023-11-01 15:16:55.953776918 +0000 UTC m=+131411.555556945) (total time: 5.808042056s):
Trace[801507814]: [5.807931893s] [5.807920109s] About to write a response
I1101 15:17:01.761875       1 trace.go:81] Trace[1869638031]: "Get /api/v1/namespaces/default/services/kubernetes" (started: 2023-11-01 15:16:58.876583472 +0000 UTC m=+131414.478363427) (total time: 2.885269914s):
Trace[1869638031]: [2.885210424s] [2.885198765s] About to write a response
I1101 15:17:01.767425       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1101 15:17:02.584256       1 trace.go:81] Trace[394586654]: "GuaranteedUpdate etcd3: *coordination.Lease" (started: 2023-11-01 15:17:01.767084601 +0000 UTC m=+131417.368864565) (total time: 817.132753ms):
Trace[394586654]: [817.110908ms] [816.742392ms] Transaction committed
I1101 15:17:02.584426       1 trace.go:81] Trace[2127054890]: "Update /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/rook-ceph-cephfs-csi-ceph-com" (started: 2023-11-01 15:17:01.765072517 +0000 UTC m=+131417.366852552) (total time: 819.317336ms):
Trace[2127054890]: [819.211071ms] [817.243489ms] Object stored in database
I1101 15:17:04.909762       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:17:04.909827       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:04.909919       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:04.909946       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:04.917525       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:11.997436       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:11.997516       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:12.004936       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
E1101 15:17:12.586708       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:17:12.586984       1 trace.go:81] Trace[338405994]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/rook-ceph-cephfs-csi-ceph-com" (started: 2023-11-01 15:17:02.585854518 +0000 UTC m=+131418.187634617) (total time: 10.001093281s):
Trace[338405994]: [10.001093281s] [10.001082854s] END
E1101 15:17:12.607400       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
I1101 15:17:12.607696       1 trace.go:81] Trace[2112766898]: "Create /api/v1/namespaces/rook-ceph/events" (started: 2023-11-01 15:17:02.586286188 +0000 UTC m=+131418.188066269) (total time: 10.021370648s):
Trace[2112766898]: [10.021370648s] [10.021114028s] END
I1101 15:17:12.907627       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:17:12.907742       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:12.907801       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:12.907830       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:12.907823       1 trace.go:81] Trace[1498397616]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2023-11-01 15:17:01.762199868 +0000 UTC m=+131417.363979829) (total time: 11.145589706s):
Trace[1498397616]: [821.90598ms] [821.90598ms] initial value restored
Trace[1498397616]: [4.143265944s] [3.321359964s] Transaction prepared
Trace[1498397616]: [11.145589706s] [7.002323762s] END
E1101 15:17:12.907857       1 controller.go:218] unable to sync kubernetes service: etcdserver: request timed out
I1101 15:17:12.915166       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:20.997494       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:20.997571       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:21.004829       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:23.288246       1 trace.go:81] Trace[441677164]: "Get /api/v1/namespaces/kube-system" (started: 2023-11-01 15:17:09.887213766 +0000 UTC m=+131425.488993790) (total time: 13.400993303s):
Trace[441677164]: [13.400937183s] [13.400924487s] About to write a response
I1101 15:17:23.288305       1 trace.go:81] Trace[1018680536]: "Get /api/v1/namespaces/default" (started: 2023-11-01 15:17:12.90832299 +0000 UTC m=+131428.510103084) (total time: 10.379932469s):
Trace[1018680536]: [10.379880681s] [10.379867103s] About to write a response
I1101 15:17:23.289211       1 trace.go:81] Trace[1256144494]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:17:08.498229032 +0000 UTC m=+131424.100009133) (total time: 14.790962698s):
Trace[1256144494]: [14.790897124s] [14.790880967s] About to write a response
I1101 15:17:23.289220       1 trace.go:81] Trace[440119816]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:17:08.498430313 +0000 UTC m=+131424.100210370) (total time: 14.790768929s):
Trace[440119816]: [14.790722248s] [14.790706204s] About to write a response
I1101 15:17:24.957729       1 trace.go:81] Trace[496549705]: "Get /api/v1/namespaces/kube-node-lease" (started: 2023-11-01 15:17:23.294198221 +0000 UTC m=+131438.895978245) (total time: 1.663493711s):
Trace[496549705]: [1.663420598s] [1.663411519s] About to write a response
I1101 15:17:24.957729       1 trace.go:81] Trace[59345672]: "GuaranteedUpdate etcd3: *core.RangeAllocation" (started: 2023-11-01 15:17:23.295964867 +0000 UTC m=+131438.897744823) (total time: 1.661711579s):
Trace[59345672]: [1.661655236s] [1.661655236s] initial value restored
I1101 15:17:24.958662       1 trace.go:81] Trace[353533537]: "GuaranteedUpdate etcd3: *core.RangeAllocation" (started: 2023-11-01 15:17:23.296147059 +0000 UTC m=+131438.897927023) (total time: 1.662471491s):
Trace[353533537]: [1.662189979s] [1.662189979s] initial value restored
I1101 15:17:25.674190       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:17:25.674249       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:25.674339       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:25.674373       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:41.998145       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:41.998245       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:42.005585       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:42.687341       1 trace.go:81] Trace[1281842187]: "Get /api/v1/namespaces/default" (started: 2023-11-01 15:17:31.959428625 +0000 UTC m=+131447.561208579) (total time: 10.727877634s):
Trace[1281842187]: [10.727818877s] [10.727805623s] About to write a response
I1101 15:17:42.687348       1 trace.go:81] Trace[1869748746]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:17:33.03820424 +0000 UTC m=+131448.639984376) (total time: 9.649104931s):
Trace[1869748746]: [9.648992002s] [9.648979249s] About to write a response
I1101 15:17:42.687596       1 trace.go:81] Trace[1468558400]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:17:33.038124588 +0000 UTC m=+131448.639904617) (total time: 9.649434663s):
Trace[1468558400]: [9.649320394s] [9.649307671s] About to write a response
I1101 15:17:44.909696       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:17:44.909733       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:44.909835       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:44.917185       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:44.996045       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:17:47.996009       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:47.996112       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:47.996124       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:48.003145       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:48.866803       1 trace.go:81] Trace[1728623886]: "Get /api/v1/namespaces/default/services/kubernetes" (started: 2023-11-01 15:17:42.688370134 +0000 UTC m=+131458.290150097) (total time: 6.178402803s):
Trace[1728623886]: [6.178353875s] [6.17833756s] About to write a response
W1101 15:17:49.309146       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [172.16.1.10 172.16.1.8 172.16.1.9]
I1101 15:17:52.575045       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:17:52.575177       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:52.575256       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:52.575286       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:52.575300       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:17:52.584081       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:14.910463       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:14.910562       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:14.918210       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:15.000563       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:15.000728       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:15.000794       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:15.008407       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:17.997993       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:17.998044       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:17.998088       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:17.998115       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:18.005493       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:18.499267       1 trace.go:81] Trace[1521114499]: "Get /api/v1/namespaces/default/services/kubernetes" (started: 2023-11-01 15:18:07.640231253 +0000 UTC m=+131483.242011358) (total time: 10.858988008s):
Trace[1521114499]: [10.858911424s] [10.858894818s] About to write a response
I1101 15:18:19.602093       1 trace.go:81] Trace[496814951]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:18:02.686667338 +0000 UTC m=+131478.288447365) (total time: 16.915389119s):
Trace[496814951]: [16.915302827s] [16.915292939s] About to write a response
I1101 15:18:19.602596       1 trace.go:81] Trace[2037962904]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:18:02.687684879 +0000 UTC m=+131478.289465046) (total time: 16.914878078s):
Trace[2037962904]: [16.914801239s] [16.91478876s] About to write a response
I1101 15:18:19.753157       1 trace.go:81] Trace[1919431575]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2023-11-01 15:18:18.499687733 +0000 UTC m=+131494.101467689) (total time: 1.253429574s):
Trace[1919431575]: [1.102438095s] [1.102438095s] initial value restored
Trace[1919431575]: [1.228847831s] [126.409736ms] Transaction prepared
W1101 15:18:19.758238       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [172.16.1.10 172.16.1.8 172.16.1.9]
I1101 15:18:23.996840       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:23.996930       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:23.997020       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:24.004629       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:24.909796       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:38.999722       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:38.999865       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:38.999906       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:39.007415       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:39.674732       1 trace.go:81] Trace[1448124584]: "Get /api/v1/namespaces/default" (started: 2023-11-01 15:18:29.811375851 +0000 UTC m=+131505.413155911) (total time: 9.863317031s):
Trace[1448124584]: [9.863244635s] [9.863228658s] About to write a response
I1101 15:18:39.675006       1 trace.go:81] Trace[847392406]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:18:29.539020766 +0000 UTC m=+131505.140800841) (total time: 10.135957981s):
Trace[847392406]: [10.135858241s] [10.13584036s] About to write a response
I1101 15:18:39.675149       1 trace.go:81] Trace[889747252]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:18:29.539203079 +0000 UTC m=+131505.140983133) (total time: 10.135912542s):
Trace[889747252]: [10.135841105s] [10.135830975s] About to write a response
I1101 15:18:41.518662       1 trace.go:81] Trace[854396675]: "Get /api/v1/namespaces/kube-system" (started: 2023-11-01 15:18:24.958949283 +0000 UTC m=+131500.560729330) (total time: 16.559677918s):
Trace[854396675]: [16.559633112s] [16.559618829s] About to write a response
I1101 15:18:41.519848       1 trace.go:81] Trace[576221541]: "Get /api/v1/namespaces/default/services/kubernetes" (started: 2023-11-01 15:18:39.675720507 +0000 UTC m=+131515.277500613) (total time: 1.844108832s):
Trace[576221541]: [1.844063236s] [1.844045432s] About to write a response
I1101 15:18:44.909927       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:44.910008       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:44.910090       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:44.910122       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:44.917529       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:44.996905       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:56.997981       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:56.998060       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:56.998088       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:56.998161       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:57.006100       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:57.558479       1 trace.go:81] Trace[162668349]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:18:46.78663651 +0000 UTC m=+131522.388416572) (total time: 10.771801515s):
Trace[162668349]: [10.77171526s] [10.771702791s] About to write a response
I1101 15:18:57.558596       1 trace.go:81] Trace[1777171442]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:18:46.784319367 +0000 UTC m=+131522.386099465) (total time: 10.774229498s):
Trace[1777171442]: [10.774141669s] [10.774128524s] About to write a response
I1101 15:18:57.754765       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
I1101 15:18:59.999169       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:18:59.999225       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:18:59.999306       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:00.007034       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:02.318261       1 trace.go:81] Trace[781684173]: "Get /api/v1/namespaces/default" (started: 2023-11-01 15:18:51.982333716 +0000 UTC m=+131527.584113670) (total time: 10.335884224s):
Trace[781684173]: [10.335835428s] [10.335821501s] About to write a response
I1101 15:19:02.999111       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:19:02.999171       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:02.999272       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:03.006658       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:14.910178       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:14.910206       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:14.918074       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:15.073940       1 trace.go:81] Trace[42561512]: "Get /api/v1/namespaces/default/services/kubernetes" (started: 2023-11-01 15:19:02.318984715 +0000 UTC m=+131537.920764747) (total time: 12.75490073s):
Trace[42561512]: [12.754821671s] [12.754812254s] About to write a response
I1101 15:19:15.074165       1 trace.go:81] Trace[1714003616]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-resizer-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:19:06.158559124 +0000 UTC m=+131541.760339184) (total time: 8.915574194s):
Trace[1714003616]: [8.915489833s] [8.915481051s] About to write a response
I1101 15:19:15.074259       1 trace.go:81] Trace[2088085986]: "Get /apis/coordination.k8s.io/v1/namespaces/rook-ceph/leases/external-attacher-leader-rook-ceph-rbd-csi-ceph-com" (started: 2023-11-01 15:19:06.15766608 +0000 UTC m=+131541.759446306) (total time: 8.916546673s):
Trace[2088085986]: [8.916428129s] [8.91641469s] About to write a response
I1101 15:19:19.085944       1 controller.go:176] Shutting down kubernetes service endpoint reconciler
I1101 15:19:19.086043       1 controller.go:87] Shutting down OpenAPI AggregationController
I1101 15:19:19.086057       1 crd_finalizer.go:267] Shutting down CRDFinalizer
I1101 15:19:19.086078       1 available_controller.go:388] Shutting down AvailableConditionController
I1101 15:19:19.086086       1 apiservice_controller.go:106] Shutting down APIServiceRegistrationController
I1101 15:19:19.086095       1 controller.go:120] Shutting down OpenAPI controller
I1101 15:19:19.086103       1 nonstructuralschema_controller.go:203] Shutting down NonStructuralSchemaConditionController
I1101 15:19:19.086106       1 naming_controller.go:299] Shutting down NamingConditionController
I1101 15:19:19.086124       1 autoregister_controller.go:164] Shutting down autoregister controller
I1101 15:19:19.086112       1 crdregistration_controller.go:143] Shutting down crd-autoregister controller
I1101 15:19:19.086150       1 establishing_controller.go:84] Shutting down EstablishingController
I1101 15:19:19.086124       1 customresource_discovery_controller.go:219] Shutting down DiscoveryController
I1101 15:19:19.086568       1 secure_serving.go:160] Stopped listening on [::]:6443
I1101 15:19:25.675048       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:19:25.675086       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:25.675136       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:25.679461       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:19:25.679648       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:25.679713       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:25.682481       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:25.686845       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:25.730021       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:19:25.730222       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:25.730312       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:25.730337       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:25.730348       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:25.733261       1 trace.go:81] Trace[511356949]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2023-11-01 15:19:15.074326547 +0000 UTC m=+131550.676106502) (total time: 10.658905967s):
Trace[511356949]: [10.382170231s] [10.382170231s] initial value restored
Trace[511356949]: [10.607818329s] [225.648098ms] Transaction prepared
E1101 15:19:25.733551       1 controller.go:218] unable to sync kubernetes service: Get https://[::1]:6443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp [::1]:6443: connect: connection refused
I1101 15:19:25.737119       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:25.924474       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
I1101 15:19:25.924585       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I1101 15:19:25.924656       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
E1101 15:19:25.926379       1 controller.go:179] Get https://[::1]:6443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp [::1]:6443: connect: connection refused
rpc error: code = DeadlineExceeded desc = context deadline exceeded

4. Control node 2 -- keepalived-master2 log excerpt

[2023-11-01 15:18:15] https://172.16.1.9:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-11-01 15:18:18] https://172.16.1.9:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-11-01 15:18:18] got router interface: br0
[2023-11-01 15:18:18] interface br0 OK
[2023-11-01 15:18:19] https://172.16.1.9:6443/healthz ok
Wed Nov  1 15:18:19 2023: Script `check_kube` now returning 0
[2023-11-01 15:18:24] https://172.16.1.9:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
Wed Nov  1 15:18:24 2023: Script `check_kube` now returning 1
[2023-11-01 15:18:27] https://172.16.1.9:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-11-01 15:18:30] https://172.16.1.9:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-11-01 15:18:33] https://172.16.1.9:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-11-01 15:18:33] got router interface: br0
[2023-11-01 15:18:33] interface br0 OK
[2023-11-01 15:18:36] https://172.16.1.9:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-11-01 15:18:39] https://172.16.1.9:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-11-01 15:18:41] https://172.16.1.9:6443/healthz ok
Wed Nov  1 15:18:41 2023: Script `check_kube` now returning 0
[2023-11-01 15:18:45] https://172.16.1.9:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
Wed Nov  1 15:18:45 2023: Script `check_kube` now returning 1
[2023-11-01 15:18:48] https://172.16.1.9:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-11-01 15:18:48] got router interface: br0
[2023-11-01 15:18:48] interface br0 OK
[2023-11-01 15:18:51] https://172.16.1.9:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-11-01 15:18:54] https://172.16.1.9:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-11-01 15:18:57] https://172.16.1.9:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-11-01 15:19:00] https://172.16.1.9:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-11-01 15:19:03] https://172.16.1.9:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-11-01 15:19:03] got router interface: br0
[2023-11-01 15:19:03] interface br0 OK
[2023-11-01 15:19:06] https://172.16.1.9:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-11-01 15:19:09] https://172.16.1.9:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
[2023-11-01 15:19:12] https://172.16.1.9:6443/healthz [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
healthz check failed
Wed Nov  1 15:19:12 2023: VRRP_Script(check_kube) failed (exited with status 1)
Wed Nov  1 15:19:12 2023: (VI_1) Entering FAULT STATE
Fault, what ?
Wed Nov  1 15:19:12 2023: Stopping
Unknown state
Wed Nov  1 15:19:13 2023: Stopped - used 1.193087 user time, 4.274274 system time
Wed Nov  1 15:19:13 2023: Stopped Keepalived v2.0.20 (01/22,2020)
***  INFO   | 2023-11-01 15:19:13 | /container/run/process/keepalived/run exited with status 0
***  INFO   | 2023-11-01 15:19:13 | Running /container/run/process/keepalived/finish...
***  INFO   | 2023-11-01 15:19:13 | Killing all processes...
rpc error: code = DeadlineExceeded desc = context deadline exceeded

keepalived-master2 on node 2 dropped straight out of the log follow; it looks like keepalived itself exited.
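For context on the FAULT transition in the log above: keepalived runs the `check_kube` script periodically and moves the VRRP instance to FAULT once the script keeps exiting non-zero. A minimal hypothetical sketch of such a probe (the real script presumably curls `https://172.16.1.9:6443/healthz`; the function below takes the response body as an argument so it can be shown offline):

```shell
# Hypothetical sketch of a check_kube-style health probe (assumption: the
# real script curls the apiserver /healthz endpoint and treats anything
# other than a plain "ok" body as failure). keepalived turns a non-zero
# exit status into the "Entering FAULT STATE" transition seen in the log.
check_healthz() {
  body="$1"            # real script: body=$(curl -sk https://172.16.1.9:6443/healthz)
  [ "$body" = "ok" ]   # exit 0 only when the apiserver reports fully healthy
}

check_healthz "ok" && echo "healthy (exit 0)"
check_healthz "[-]etcd failed: reason withheld" || echo "unhealthy (exit 1) -> VRRP FAULT"
```

This matches the log pattern above: `/healthz` bodies containing `[-]etcd failed` make the script return 1, and repeated failures trip the VRRP FAULT state.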

没看出引发问题的点在那里:etcdserver ——> apiservice ——> keepalived ——> glance ?

@chenjacken
Author

Logs from etcd-master2 on control node 2:

2023-11-01 15:55:39.977681 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (800.448455ms) to execute
2023-11-01 15:55:40.631710 W | etcdserver: request "header:<ID:18142060068240046386 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/master3\" mod_revision:3547432 > success:<request_put:<key:\"/registry/leases/kube-node-lease/master3\" value_size:221 >> failure:<>>" with result "size:20" took too long (100.360985ms) to execute
2023-11-01 15:55:40.673549 W | etcdserver: read-only range request "key:\"foo\" " with result "range_response_count:0 size:7" took too long (1.072823496s) to execute
2023-11-01 15:55:40.674152 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:252" took too long (695.137274ms) to execute
2023-11-01 15:55:41.361863 W | etcdserver: request "header:<ID:18142060068240046406 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-scheduler\" mod_revision:3547448 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-scheduler\" value_size:352 >> failure:<>>" with result "size:20" took too long (109.20726ms) to execute
2023-11-01 15:55:42.125556 W | etcdserver: request "header:<ID:18142060068240046415 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/rook-ceph/rook-ceph-cephfs-csi-ceph-com\" mod_revision:3547442 > success:<request_put:<key:\"/registry/leases/rook-ceph/rook-ceph-cephfs-csi-ceph-com\" value_size:232 >> failure:<>>" with result "size:20" took too long (139.415838ms) to execute
2023-11-01 15:55:43.849973 W | etcdserver: request "header:<ID:18142060068240046454 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\" mod_revision:3547465 > success:<request_put:<key:\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\" value_size:366 >> failure:<>>" with result "size:20" took too long (454.55041ms) to execute
2023-11-01 15:55:44.425416 W | etcdserver: request "header:<ID:18142060068240046461 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:7bc58b82e53f397c>" with result "size:44" took too long (458.847461ms) to execute
2023-11-01 15:55:45.495346 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (316.570483ms) to execute
2023-11-01 15:55:46.670411 W | etcdserver: read-only range request "key:\"/registry/replicasets\" range_end:\"/registry/replicasett\" count_only:true " with result "range_response_count:0 size:10" took too long (407.360502ms) to execute
2023-11-01 15:55:48.118189 W | etcdserver: request "header:<ID:18142060068240046520 > lease_revoke:<id:3ce38b82e57bea0b>" with result "size:32" took too long (252.748246ms) to execute
2023-11-01 15:55:49.287457 W | etcdserver: request "header:<ID:18142060068240046531 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/master1\" mod_revision:3547476 > success:<request_put:<key:\"/registry/leases/kube-node-lease/master1\" value_size:221 >> failure:<>>" with result "size:20" took too long (370.468074ms) to execute
2023-11-01 15:55:49.754475 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (1.575968281s) to execute
2023-11-01 15:55:49.754662 W | etcdserver: request "header:<ID:18142060068240046551 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\" mod_revision:3547493 > success:<request_put:<key:\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\" value_size:366 >> failure:<>>" with result "size:20" took too long (229.634541ms) to execute
2023-11-01 15:55:50.322534 W | etcdserver: read-only range request "key:\"foo\" " with result "range_response_count:0 size:7" took too long (721.971927ms) to execute
2023-11-01 15:55:50.322576 W | etcdserver: read-only range request "key:\"/registry/leases\" range_end:\"/registry/leaset\" count_only:true " with result "range_response_count:0 size:9" took too long (951.12928ms) to execute
2023-11-01 15:55:50.322611 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:174" took too long (1.723489653s) to execute
2023-11-01 15:55:52.453063 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:295" took too long (2.128628569s) to execute
2023-11-01 15:55:52.453240 W | etcdserver: request "header:<ID:18142060068240046573 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/rook-ceph/external-resizer-rook-ceph-cephfs-csi-ceph-com\" mod_revision:3547489 > success:<request_put:<key:\"/registry/leases/rook-ceph/external-resizer-rook-ceph-cephfs-csi-ceph-com\" value_size:249 >> failure:<>>" with result "size:20" took too long (347.174053ms) to execute
2023-11-01 15:55:53.136583 W | etcdserver: request "header:<ID:18142060068240046583 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/master1\" mod_revision:3547497 > success:<request_put:<key:\"/registry/leases/kube-node-lease/master1\" value_size:221 >> failure:<>>" with result "size:20" took too long (453.884537ms) to execute
2023-11-01 15:55:53.177027 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context deadline exceeded" took too long (2.000036272s) to execute
WARNING: 2023/11/01 15:55:53 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2023-11-01 15:55:53.606277 W | etcdserver: read-only range request "key:\"/registry/priorityclasses\" range_end:\"/registry/priorityclasset\" count_only:true " with result "range_response_count:0 size:9" took too long (1.211835649s) to execute
2023-11-01 15:55:53.606296 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (695.79307ms) to execute
2023-11-01 15:55:53.606354 W | etcdserver: read-only range request "key:\"/registry/statefulsets\" range_end:\"/registry/statefulsett\" count_only:true " with result "range_response_count:0 size:9" took too long (2.52928378s) to execute
2023-11-01 15:55:53.606374 W | etcdserver: read-only range request "key:\"/registry/secrets\" range_end:\"/registry/secrett\" count_only:true " with result "range_response_count:0 size:9" took too long (2.41890577s) to execute
2023-11-01 15:55:53.606401 W | etcdserver: read-only range request "key:\"/registry/controllerrevisions\" range_end:\"/registry/controllerrevisiont\" count_only:true " with result "range_response_count:0 size:9" took too long (2.557026072s) to execute
2023-11-01 15:55:53.606484 W | etcdserver: read-only range request "key:\"/registry/csinodes\" range_end:\"/registry/csinodet\" count_only:true " with result "range_response_count:0 size:9" took too long (1.566082915s) to execute
2023-11-01 15:55:53.606533 W | etcdserver: read-only range request "key:\"/registry/masterleases/172.16.1.9\" " with result "range_response_count:1 size:135" took too long (1.152295989s) to execute
2023-11-01 15:55:54.136241 W | etcdserver: read-only range request "key:\"/registry/podsecuritypolicy\" range_end:\"/registry/podsecuritypolicz\" count_only:true " with result "range_response_count:0 size:9" took too long (549.107992ms) to execute
2023-11-01 15:55:54.136420 W | etcdserver: request "header:<ID:4387503856301107761 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:3ce38b82e57bea30>" with result "size:44" took too long (154.931889ms) to execute
2023-11-01 15:55:55.036479 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (857.426133ms) to execute
2023-11-01 15:55:55.641387 W | etcdserver: request "header:<ID:4387503856301107766 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:3ce38b82e57bea35>" with result "size:44" took too long (123.781981ms) to execute
2023-11-01 15:55:56.492818 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses\" range_end:\"/registry/runtimeclasset\" count_only:true " with result "range_response_count:0 size:7" took too long (1.097875847s) to execute
2023-11-01 15:55:56.493000 W | etcdserver: request "header:<ID:18142060068240046630 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/node5\" mod_revision:3547515 > success:<request_put:<key:\"/registry/leases/kube-node-lease/node5\" value_size:215 >> failure:<>>" with result "size:20" took too long (364.597318ms) to execute
2023-11-01 15:55:57.447933 W | etcdserver: read-only range request "key:\"/registry/certificatesigningrequests\" range_end:\"/registry/certificatesigningrequestt\" count_only:true " with result "range_response_count:0 size:7" took too long (1.356512382s) to execute
2023-11-01 15:55:57.929195 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:236" took too long (1.434223158s) to execute
2023-11-01 15:55:57.929225 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (750.252199ms) to execute
2023-11-01 15:55:57.929259 W | etcdserver: read-only range request "key:\"/registry/services/specs\" range_end:\"/registry/services/spect\" count_only:true " with result "range_response_count:0 size:9" took too long (726.995671ms) to execute
2023-11-01 15:55:57.929292 W | etcdserver: read-only range request "key:\"/registry/controllerrevisions\" range_end:\"/registry/controllerrevisiont\" count_only:true " with result "range_response_count:0 size:9" took too long (727.784849ms) to execute
2023-11-01 15:55:58.410556 W | etcdserver: read-only range request "key:\"/registry/volumeattachments\" range_end:\"/registry/volumeattachmentt\" count_only:true " with result "range_response_count:0 size:7" took too long (381.815899ms) to execute
2023-11-01 15:56:00.138245 W | wal: sync duration of 1.137110292s, expected less than 1s
2023-11-01 15:56:00.138431 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:174" took too long (1.539277577s) to execute
2023-11-01 15:56:01.264373 W | etcdserver: read-only range request "key:\"/registry/rolebindings\" range_end:\"/registry/rolebindingt\" count_only:true " with result "range_response_count:0 size:9" took too long (1.955998628s) to execute
2023-11-01 15:56:02.017380 W | etcdserver: read-only range request "key:\"/registry/daemonsets\" range_end:\"/registry/daemonsett\" count_only:true " with result "range_response_count:0 size:9" took too long (2.211666349s) to execute
2023-11-01 15:56:02.017440 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:295" took too long (1.877363956s) to execute
2023-11-01 15:56:02.017588 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (1.840838239s) to execute
2023-11-01 15:56:02.018333 W | etcdserver: read-only range request "key:\"foo\" " with result "range_response_count:0 size:7" took too long (2.410183465s) to execute
2023-11-01 15:56:02.692262 W | etcdserver: request "header:<ID:4387503856301107780 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.16.1.9\" mod_revision:3547541 > success:<request_put:<key:\"/registry/masterleases/172.16.1.9\" value_size:65 lease:4387503856301107778 >> failure:<request_range:<key:\"/registry/masterleases/172.16.1.9\" > >>" with result "size:20" took too long (203.522217ms) to execute
2023-11-01 15:56:03.513172 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (602.608161ms) to execute
2023-11-01 15:56:03.513418 W | etcdserver: read-only range request "key:\"/registry/pods\" range_end:\"/registry/podt\" count_only:true " with result "range_response_count:0 size:10" took too long (563.442731ms) to execute
2023-11-01 15:56:04.275758 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (1.097149171s) to execute
2023-11-01 15:56:04.275875 W | etcdserver: request "header:<ID:18142060068240046740 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/node5\" mod_revision:3547557 > success:<request_put:<key:\"/registry/leases/kube-node-lease/node5\" value_size:215 >> failure:<>>" with result "size:20" took too long (127.376442ms) to execute
2023-11-01 15:56:05.298024 W | etcdserver: read-only range request "key:\"/registry/csidrivers\" range_end:\"/registry/csidrivert\" count_only:true " with result "range_response_count:0 size:9" took too long (823.086346ms) to execute
2023-11-01 15:56:05.298633 W | etcdserver: read-only range request "key:\"/registry/events\" range_end:\"/registry/eventt\" count_only:true " with result "range_response_count:0 size:10" took too long (2.030182181s) to execute
2023-11-01 15:56:05.561269 W | etcdserver: request "header:<ID:18142060068240046764 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-scheduler\" mod_revision:3547577 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-scheduler\" value_size:352 >> failure:<>>" with result "size:20" took too long (131.408572ms) to execute
2023-11-01 15:56:06.195310 W | etcdserver: request "header:<ID:16219304503945605799 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/rook-ceph/external-resizer-rook-ceph-rbd-csi-ceph-com\" mod_revision:3547566 > success:<request_put:<key:\"/registry/leases/rook-ceph/external-resizer-rook-ceph-rbd-csi-ceph-com\" value_size:243 >> failure:<>>" with result "size:20" took too long (256.266924ms) to execute
2023-11-01 15:56:08.942108 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitiont\" count_only:true " with result "range_response_count:0 size:9" took too long (1.729728045s) to execute
2023-11-01 15:56:08.942167 W | etcdserver: request "header:<ID:18142060068240046792 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/node5\" mod_revision:3547581 > success:<request_put:<key:\"/registry/leases/kube-node-lease/node5\" value_size:215 >> failure:<>>" with result "size:20" took too long (167.288755ms) to execute
2023-11-01 15:56:09.445421 W | etcdserver: request "header:<ID:18142060068240046813 > lease_revoke:<id:3ce38b82e57bea30>" with result "size:32" took too long (263.800552ms) to execute
2023-11-01 15:56:09.446042 W | etcdserver: read-only range request "key:\"/registry/crd.projectcalico.org/ipamhandles\" range_end:\"/registry/crd.projectcalico.org/ipamhandlet\" count_only:true " with result "range_response_count:0 size:9" took too long (1.334922857s) to execute
2023-11-01 15:56:09.694637 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (516.031417ms) to execute
2023-11-01 15:56:09.694670 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:174" took too long (1.095125362s) to execute
2023-11-01 15:56:09.694761 W | etcdserver: read-only range request "key:\"/registry/configmaps\" range_end:\"/registry/configmapt\" count_only:true " with result "range_response_count:0 size:9" took too long (334.570128ms) to execute
2023-11-01 15:56:09.695345 W | etcdserver: read-only range request "key:\"/registry/deployments\" range_end:\"/registry/deploymentt\" count_only:true " with result "range_response_count:0 size:9" took too long (101.878524ms) to execute
2023-11-01 15:56:10.777151 W | etcdserver: request "header:<ID:4387503856301107797 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:3ce38b82e57bea54>" with result "size:44" took too long (719.713497ms) to execute
2023-11-01 15:56:10.777309 W | etcdserver: read-only range request "key:\"/registry/priorityclasses\" range_end:\"/registry/priorityclasset\" count_only:true " with result "range_response_count:0 size:9" took too long (357.826339ms) to execute
2023-11-01 15:56:11.412939 W | etcdserver: request "header:<ID:16219304503945605829 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/rook-ceph/external-resizer-rook-ceph-rbd-csi-ceph-com\" mod_revision:3547591 > success:<request_put:<key:\"/registry/leases/rook-ceph/external-resizer-rook-ceph-rbd-csi-ceph-com\" value_size:243 >> failure:<>>" with result "size:20" took too long (256.042512ms) to execute
2023-11-01 15:56:11.798632 W | etcdserver: request "header:<ID:4387503856301107799 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.16.1.9\" mod_revision:3547575 > success:<request_put:<key:\"/registry/masterleases/172.16.1.9\" value_size:65 lease:4387503856301107796 >> failure:<request_range:<key:\"/registry/masterleases/172.16.1.9\" > >>" with result "size:20" took too long (129.163513ms) to execute
2023-11-01 15:56:12.310082 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices\" range_end:\"/registry/apiregistration.k8s.io/apiservicet\" count_only:true " with result "range_response_count:0 size:9" took too long (1.008953905s) to execute
2023-11-01 15:56:13.205551 W | etcdserver: request "header:<ID:18142060068240046860 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/node9\" mod_revision:3547606 > success:<request_put:<key:\"/registry/leases/kube-node-lease/node9\" value_size:216 >> failure:<>>" with result "size:20" took too long (767.332581ms) to execute
2023-11-01 15:56:13.861284 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (951.202843ms) to execute
2023-11-01 15:56:13.861324 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (1.682299343s) to execute
2023-11-01 15:56:13.861348 W | etcdserver: read-only range request "key:\"/registry/rolebindings\" range_end:\"/registry/rolebindingt\" count_only:true " with result "range_response_count:0 size:9" took too long (2.373990477s) to execute
2023-11-01 15:56:13.861401 W | etcdserver: read-only range request "key:\"/registry/services/endpoints\" range_end:\"/registry/services/endpointt\" count_only:true " with result "range_response_count:0 size:9" took too long (1.277313153s) to execute
2023-11-01 15:56:13.861430 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:252" took too long (2.061496924s) to execute
2023-11-01 15:56:14.840661 W | etcdserver: read-only range request "key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" " with result "range_response_count:3 size:394" took too long (977.928063ms) to execute
2023-11-01 15:56:15.972653 W | etcdserver: request "header:<ID:16219304503945605853 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/rook-ceph/external-resizer-rook-ceph-rbd-csi-ceph-com\" mod_revision:3547613 > success:<request_put:<key:\"/registry/leases/rook-ceph/external-resizer-rook-ceph-rbd-csi-ceph-com\" value_size:243 >> failure:<>>" with result "size:20" took too long (401.281069ms) to execute
2023-11-01 15:56:15.973441 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (794.051575ms) to execute
2023-11-01 15:56:17.430898 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers\" range_end:\"/registry/horizontalpodautoscalert\" count_only:true " with result "range_response_count:0 size:7" took too long (1.660422808s) to execute
2023-11-01 15:56:18.195429 W | etcdserver: request "header:<ID:18142060068240046946 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-scheduler\" mod_revision:3547636 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-scheduler\" value_size:352 >> failure:<>>" with result "size:20" took too long (632.498177ms) to execute
2023-11-01 15:56:18.195743 W | etcdserver: read-only range request "key:\"/registry/persistentvolumeclaims\" range_end:\"/registry/persistentvolumeclaimt\" count_only:true " with result "range_response_count:0 size:9" took too long (1.172158981s) to execute
2023-11-01 15:56:18.991569 W | etcdserver: read-only range request "key:\"/registry/crd.projectcalico.org/ipamconfigs\" range_end:\"/registry/crd.projectcalico.org/ipamconfigt\" count_only:true " with result "range_response_count:0 size:7" took too long (729.778685ms) to execute
2023-11-01 15:56:18.991617 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (812.299824ms) to execute
2023-11-01 15:56:18.991639 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:174" took too long (392.069323ms) to execute
2023-11-01 15:56:19.781300 W | etcdserver: request "header:<ID:18142060068240046972 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" mod_revision:3547654 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" value_size:361 >> failure:<>>" with result "size:20" took too long (261.724671ms) to execute
2023-11-01 15:56:20.461869 W | etcdserver: read-only range request "key:\"/registry/deployments\" range_end:\"/registry/deploymentt\" count_only:true " with result "range_response_count:0 size:9" took too long (1.438632511s) to execute
2023-11-01 15:56:20.461930 W | etcdserver: request "header:<ID:18142060068240046982 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\" mod_revision:3547656 > success:<request_put:<key:\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\" value_size:366 >> failure:<>>" with result "size:20" took too long (420.914341ms) to execute
2023-11-01 15:56:21.721864 W | etcdserver: read-only range request "key:\"foo\" " with result "range_response_count:0 size:7" took too long (2.128978396s) to execute
2023-11-01 15:56:21.721926 W | etcdserver: read-only range request "key:\"/registry/controllerrevisions\" range_end:\"/registry/controllerrevisiont\" count_only:true " with result "range_response_count:0 size:9" took too long (1.964654015s) to execute
2023-11-01 15:56:21.722579 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:252" took too long (1.259258268s) to execute
2023-11-01 15:56:21.722609 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (545.260307ms) to execute
2023-11-01 15:56:21.722643 W | etcdserver: read-only range request "key:\"/registry/poddisruptionbudgets\" range_end:\"/registry/poddisruptionbudgett\" count_only:true " with result "range_response_count:0 size:9" took too long (949.407835ms) to execute
2023-11-01 15:56:24.466133 W | etcdserver: request "header:<ID:18142060068240047050 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/node5\" mod_revision:3547662 > success:<request_put:<key:\"/registry/leases/kube-node-lease/node5\" value_size:215 >> failure:<>>" with result "size:20" took too long (236.576647ms) to execute
2023-11-01 15:56:24.954047 W | etcdserver: request "header:<ID:18142060068240047056 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/master1\" mod_revision:3547665 > success:<request_put:<key:\"/registry/leases/kube-node-lease/master1\" value_size:222 >> failure:<>>" with result "size:20" took too long (248.371893ms) to execute
2023-11-01 15:56:25.338786 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (1.160119688s) to execute
2023-11-01 15:56:25.338955 W | etcdserver: request "header:<ID:18142060068240047070 > lease_revoke:<id:3ce38b82e57bea54>" with result "size:32" took too long (132.895009ms) to execute
2023-11-01 15:56:25.873042 W | etcdserver: request "header:<ID:18142060068240047075 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-scheduler\" mod_revision:3547681 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-scheduler\" value_size:352 >> failure:<>>" with result "size:20" took too long (284.087494ms) to execute
2023-11-01 15:56:27.788832 W | etcdserver: request "header:<ID:18142060068240047088 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" mod_revision:3547693 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" value_size:361 >> failure:<>>" with result "size:20" took too long (261.480859ms) to execute
2023-11-01 15:56:28.048717 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (869.233804ms) to execute
2023-11-01 15:56:28.048757 W | etcdserver: read-only range request "key:\"/registry/controllers\" range_end:\"/registry/controllert\" count_only:true " with result "range_response_count:0 size:7" took too long (894.665858ms) to execute
2023-11-01 15:56:30.159627 W | wal: sync duration of 1.149544405s, expected less than 1s
2023-11-01 15:56:30.971494 W | etcdserver: read-only range request "key:\"/registry/ingress\" range_end:\"/registry/ingrest\" count_only:true " with result "range_response_count:0 size:9" took too long (2.601216166s) to execute
2023-11-01 15:56:30.971558 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:174" took too long (2.371844891s) to execute
2023-11-01 15:56:30.971622 W | etcdserver: request "header:<ID:18142060068240047118 > lease_revoke:<id:7bc58b82e53f3b20>" with result "size:32" took too long (811.82881ms) to execute
2023-11-01 15:56:31.628176 W | wal: sync duration of 1.46841036s, expected less than 1s
2023-11-01 15:56:32.021122 W | etcdserver: read-only range request "key:\"foo\" " with result "range_response_count:0 size:7" took too long (2.343517398s) to execute
2023-11-01 15:56:32.179552 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "error:context canceled" took too long (1.999857702s) to execute
WARNING: 2023/11/01 15:56:32 grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
2023-11-01 15:56:32.285879 W | etcdserver: request "header:<ID:18142060068240047150 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/rook-ceph/rook-ceph-cephfs-csi-ceph-com\" mod_revision:3547702 > success:<request_put:<key:\"/registry/leases/rook-ceph/rook-ceph-cephfs-csi-ceph-com\" value_size:231 >> failure:<>>" with result "size:20" took too long (132.090401ms) to execute
2023-11-01 15:56:32.912935 W | etcdserver: request "header:<ID:18142060068240047152 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/node5\" mod_revision:3547706 > success:<request_put:<key:\"/registry/leases/kube-node-lease/node5\" value_size:215 >> failure:<>>" with result "size:20" took too long (255.139473ms) to execute
2023-11-01 15:56:32.914134 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers\" range_end:\"/registry/horizontalpodautoscalert\" count_only:true " with result "range_response_count:0 size:7" took too long (2.316685596s) to execute
2023-11-01 15:56:32.914225 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:295" took too long (1.940618764s) to execute
2023-11-01 15:56:33.290981 W | etcdserver: request "header:<ID:4387503856301107832 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:3ce38b82e57bea77>" with result "size:44" took too long (128.250694ms) to execute
2023-11-01 15:56:33.291145 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers\" range_end:\"/registry/horizontalpodautoscalert\" count_only:true " with result "range_response_count:0 size:7" took too long (214.671344ms) to execute
2023-11-01 15:56:34.908245 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (1.729466082s) to execute
2023-11-01 15:56:34.908297 W | etcdserver: request "header:<ID:4387503856301107834 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.16.1.9\" mod_revision:3547661 > success:<request_put:<key:\"/registry/masterleases/172.16.1.9\" value_size:65 lease:4387503856301107831 >> failure:<request_range:<key:\"/registry/masterleases/172.16.1.9\" > >>" with result "size:18" took too long (240.574568ms) to execute
2023-11-01 15:56:34.908451 W | etcdserver: read-only range request "key:\"/registry/validatingwebhookconfigurations\" range_end:\"/registry/validatingwebhookconfigurationt\" count_only:true " with result "range_response_count:0 size:7" took too long (583.167448ms) to execute
2023-11-01 15:56:34.908581 W | etcdserver: read-only range request "key:\"/registry/crd.projectcalico.org/ipamblocks\" range_end:\"/registry/crd.projectcalico.org/ipamblockt\" count_only:true " with result "range_response_count:0 size:9" took too long (468.689457ms) to execute
2023-11-01 15:56:35.676094 W | etcdserver: request "header:<ID:4387503856301107839 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.16.1.9\" mod_revision:0 > success:<request_put:<key:\"/registry/masterleases/172.16.1.9\" value_size:65 lease:4387503856301107837 >> failure:<request_range:<key:\"/registry/masterleases/172.16.1.9\" > >>" with result "size:20" took too long (263.168926ms) to execute
2023-11-01 15:56:36.675058 W | etcdserver: request "header:<ID:18142060068240047205 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\" mod_revision:3547737 > success:<request_put:<key:\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\" value_size:366 >> failure:<>>" with result "size:20" took too long (175.408531ms) to execute
2023-11-01 15:56:38.098431 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:252" took too long (2.420536481s) to execute
2023-11-01 15:56:38.098568 W | etcdserver: request "header:<ID:18142060068240047239 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" mod_revision:3547741 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" value_size:361 >> failure:<>>" with result "size:20" took too long (133.645908ms) to execute
2023-11-01 15:56:38.099445 W | etcdserver: read-only range request "key:\"/registry/networkpolicies\" range_end:\"/registry/networkpoliciet\" count_only:true " with result "range_response_count:0 size:7" took too long (229.78646ms) to execute
2023-11-01 15:56:38.099514 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:180" took too long (1.958805547s) to execute
2023-11-01 15:56:38.099530 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (1.920092149s) to execute
2023-11-01 15:56:39.130087 W | etcdserver: request "header:<ID:4387503856301107851 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.16.1.9\" mod_revision:3547740 > success:<request_put:<key:\"/registry/masterleases/172.16.1.9\" value_size:65 lease:4387503856301107849 >> failure:<request_range:<key:\"/registry/masterleases/172.16.1.9\" > >>" with result "size:20" took too long (129.5282ms) to execute
2023-11-01 15:56:39.130221 W | etcdserver: read-only range request "key:\"/registry/volumeattachments\" range_end:\"/registry/volumeattachmentt\" count_only:true " with result "range_response_count:0 size:7" took too long (167.727697ms) to execute
2023-11-01 15:56:39.525560 W | etcdserver: request "header:<ID:18142060068240047271 > lease_revoke:<id:7bc58b82e53f3bc2>" with result "size:32" took too long (264.905887ms) to execute
2023-11-01 15:56:39.525798 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:252" took too long (394.82711ms) to execute
2023-11-01 15:56:39.526560 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (349.317695ms) to execute
2023-11-01 15:56:40.051756 W | etcdserver: request "header:<ID:18142060068240047281 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-scheduler\" mod_revision:3547760 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-scheduler\" value_size:352 >> failure:<>>" with result "size:20" took too long (132.751433ms) to execute
2023-11-01 15:56:41.195838 W | etcdserver: read-only range request "key:\"foo\" " with result "range_response_count:0 size:7" took too long (1.60393401s) to execute
2023-11-01 15:56:41.877661 W | etcdserver: request "header:<ID:18142060068240047320 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" mod_revision:3547763 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" value_size:361 >> failure:<>>" with result "size:20" took too long (286.626054ms) to execute
2023-11-01 15:56:42.349335 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings\" range_end:\"/registry/clusterrolebindingt\" count_only:true " with result "range_response_count:0 size:9" took too long (1.861626284s) to execute
2023-11-01 15:56:42.889103 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:7" took too long (709.711766ms) to execute

@chenjacken commented Nov 6, 2023

After further testing, the problem is resolved. The likely causes were the following:

1. Database primary/standby setup: a keepalived misconfiguration caused the database to fail over repeatedly, which made image uploads fail. Check the VIP configuration for the primary/standby pair against the latest docs: https://www.cloudpods.org/zh/docs/setup/db-ha/

2. MinIO was configured in standalone mode, so only a single pod was running minio. I changed it by editing the OnecloudCluster resource:

kubectl edit oc -n onecloud

Valid values for `mode` are `standalone` and `distributed`; switch it to `distributed`. The relevant fields:

  minio:
    enable: true
    mode: standalone

    minio:
      accessKey: monitor-admin
      mode: standalone
      secretKey: Mfsk5wsAk8C9q6mq

After switching to `distributed`, several minio pods are running:
(screenshot: multiple minio pods in the onecloud-minio namespace)
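For reference, this is a sketch of how the `mode` field looks after the change (based on the fields shown above; all other fields of the OnecloudCluster spec omitted):

```yaml
minio:
  enable: true
  mode: distributed
```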

3. Check disk space on the hosts. On the control-plane hosts, inspect PVC usage:

du -smh /opt/local-path-provisioner/pvc-*
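The check in step 3 can be wrapped in a small script that flags oversized PVC directories. This is a sketch: the per-PVC threshold (~10 GiB) is an illustrative assumption, not a Cloudpods default.

```shell
# Sketch: flag pvc-* directories whose usage exceeds a threshold.
# The root path matches the `du` command above; the threshold is illustrative.
check_pvc_usage() {
  root="$1"
  threshold_mb="$2"
  for d in "$root"/pvc-*; do
    [ -d "$d" ] || continue
    # du -sm prints the size in MB followed by the path
    used_mb=$(du -sm "$d" | awk '{print $1}')
    if [ "$used_mb" -gt "$threshold_mb" ]; then
      echo "WARN: $d uses ${used_mb}MB (over ${threshold_mb}MB)"
    else
      echo "OK: $d uses ${used_mb}MB"
    fi
  done
}

# Example: warn on any PVC over ~10 GiB (threshold is an assumption)
check_pvc_usage /opt/local-path-provisioner 10240
```

If a PVC backing minio or glance is near full, uploads can fail partway through, which matches the truncated-write symptoms above.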
