80/TCP 98s
-
-$ curl 10.103.122.157
-Welcome to nginx!
-If you see this page, the nginx web server is successfully installed and
-working. Further configuration is required.
-
-For online documentation and support please refer to nginx.org.
-Commercial support is available at nginx.com.
-
-Thank you for using nginx.
-   ```
-
-## Service as NodePort
-1. Edit the service.yaml file
-   ```yaml
-   apiVersion: v1
-   kind: Service
-   metadata:
-     creationTimestamp: null
-     labels:
-       app: web
-     name: web
-   spec:
-     type: NodePort
-     ports:
-     - port: 80
-       protocol: TCP
-       targetPort: 80
-     selector:
-       app: web
-   status:
-     loadBalancer: {}
-   ```
-2. Redeploy
-   ```bash
-   $ kubectl apply -f service.yaml
-
-   $ kubectl get pod,svc
-   NAME                       READY   STATUS    RESTARTS   AGE
-   pod/web-5dcb957ccc-c4fjh   1/1     Running   0          6m26s
-
-   NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
-   service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        8d
-   service/web          NodePort    10.101.97.233   <none>        80:32283/TCP   3m56s
-   ```
-3. Access
-   `service/web` now exposes NodePort 32283 on every node; open `<NodeIP>:32283` in a browser and the Nginx welcome page appears, or check it from the command line as sketched below.
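-   A minimal check with curl (`<NodeIP>` is a placeholder for the IP of any node in the cluster):
-   ```bash
-   $ curl http://<NodeIP>:32283
-   ```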
-
-## Service as LoadBalancer
-1. Edit the service.yaml file
-   ```yaml
-   apiVersion: v1
-   kind: Service
-   metadata:
-     creationTimestamp: null
-     labels:
-       app: web
-     name: web
-   spec:
-     type: LoadBalancer
-     ports:
-     - port: 80
-       protocol: TCP
-       targetPort: 80
-     selector:
-       app: web
-   status:
-     loadBalancer: {}
-   ```
-
-2. Redeploy
-   ```bash
-   $ kubectl apply -f service.yaml
-
-   $ kubectl get pod,svc
-   NAME                       READY   STATUS    RESTARTS   AGE
-   pod/web-5dcb957ccc-c4fjh   1/1     Running   0          6m26s
-
-   NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
-   service/kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        8d
-   service/web          LoadBalancer   10.101.97.233   <pending>     80:32283/TCP   7m4s
-   ```
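-3. Access
-   On a bare-metal kubeadm cluster there is no cloud controller to provision an external load balancer, so EXTERNAL-IP stays `<pending>`; the Service is still reachable through the node port it was assigned (a minimal check, `<NodeIP>` being any node's IP):
-   ```bash
-   $ curl http://<NodeIP>:32283
-   ```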
\ No newline at end of file
+
diff --git a/note/Kubernetes/k8s-pod.md b/note/Kubernetes/k8s-pod.md
index 1aa9ffb..5a3e60a 100644
--- a/note/Kubernetes/k8s-pod.md
+++ b/note/Kubernetes/k8s-pod.md
@@ -240,18 +240,22 @@ spec:
In this YAML, after the container starts, echo prints a message (postStart); before the container is killed, the nginx quit command is run (preStop) and only then is the container stopped, giving the container a "graceful shutdown".
# 8. Pod health checks (probes)
-Kubernetes has 3 kinds of probes that implement health checking for a Pod
-- startupProbe: checks whether the container has started (start)
-- LivenessProbe: checks whether the container is alive (running); if the probe fails, the container is treated as failed
-- ReadinessProbe: checks whether the container has finished starting up (ready)
+Kubernetes has 3 kinds of probes that implement health checking for a Pod:
+
+- Startup probe (startupProbe): checks whether the container has started; readiness and liveness checks are held back until it succeeds. **Once it has succeeded, the startup probe never runs again.**
+- Liveness probe (livenessProbe): checks whether the container is still alive (for example, to catch a deadlock); when it fails, the container is restarted.
+- Readiness probe (readinessProbe): checks whether the container is ready to receive request traffic; while it is failing, the Pod is removed from the Service's load-balancer endpoints, but the container is **not** restarted.
+
+**Only a failing startup or liveness probe causes the container to be restarted; a failing readiness probe only takes the Pod out of service.**
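+A minimal sketch of combining the two restart-triggering probes, so that liveness checks only start once the container has come up (the image, path, port and thresholds below are illustrative assumptions, not taken from the examples that follow):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: probe-demo
+spec:
+  containers:
+  - name: app
+    image: nginx
+    ports:
+    - containerPort: 80
+    # gives the app up to 30 * 5s = 150s to start before liveness checks begin
+    startupProbe:
+      httpGet:
+        path: /
+        port: 80
+      failureThreshold: 30
+      periodSeconds: 5
+    # once the startup probe has succeeded, this runs every 10s
+    livenessProbe:
+      httpGet:
+        path: /
+        port: 80
+      periodSeconds: 10
+```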
Probes support three check mechanisms:
- httpGet: sends an HTTP request; a status code in the 200-399 range counts as success
- exec: runs a shell command inside the container; an exit code of 0 counts as success
- tcpSocket: tries to open a TCP socket; establishing the connection counts as success
-The httpGet method:
+## 8.1 The httpGet method
+
```yaml
$ vim livenessProbe-httpGet.yaml
apiVersion: v1
@@ -281,7 +285,8 @@ spec:
restartPolicy: Always
```
-The exec method:
+## 8.2 The exec method
+
```yaml
$ vim livenessProbe-exec.yaml
apiVersion: v1
@@ -315,7 +320,8 @@ spec:
restartPolicy: Always
```
-The tcpSocket method:
+## 8.3 The tcpSocket method
+
```yaml
$ vim livenessProbe-tcp.yaml
apiVersion: v1
@@ -373,6 +379,26 @@ spec:
In this example, both nginx-container and debian-container declare a mount of the shared-data volume. shared-data is a hostPath volume, so it corresponds to the /data directory on the host, and that directory is mounted into both containers at the same time; as a result, nginx-container can read the index.html generated by debian-container from /usr/share/nginx/html.
+
+For HTTP or TCP liveness checks you can use a named [ContainerPort](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#containerport-v1-core):
+
+```yaml
+ports:
+- name: liveness-port
+  containerPort: 8080
+  hostPort: 8080
+
+livenessProbe:
+  httpGet:
+    path: /healthz
+    port: liveness-port
+```
+
# 9. Pod image pull policy
```yaml
$ vim pod-imagePollPolicy.yaml
diff --git a/note/Kubernetes/volume.md b/note/Kubernetes/volume.md
index a939406..39f8642 100644
--- a/note/Kubernetes/volume.md
+++ b/note/Kubernetes/volume.md
@@ -653,4 +653,386 @@ mysql> show variables like '%max_connections%';
| mysqlx_max_connections | 100 |
+------------------------+-------+
2 rows in set (0.00 sec)
-```
\ No newline at end of file
+```
+
+
+# 5. NFS
+[Installing and deploying NFS on CentOS 7](../../note/软件部署/centos/nfs.md)
+
+## 5.1 Mounting an NFS volume
+```bash
+vim nginx-nfs.yaml
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nginx
+  namespace: storage
+  labels:
+    app: nginx
+spec:
+  selector:
+    matchLabels:
+      app: nginx
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        app: nginx
+    spec:
+      containers:
+      - name: nginx
+        image: nginx
+        resources:
+          requests:
+            cpu: 100m
+            memory: 100Mi
+          limits:
+            cpu: 100m
+            memory: 100Mi
+        ports:
+        - containerPort: 80
+          name: nginx
+        volumeMounts:
+        - name: nfs-volume
+          mountPath: /usr/share/nginx/html/
+      volumes:
+      - name: nfs-volume
+        nfs:
+          server: 192.168.156.60
+          path: /app/nfs
+      restartPolicy: Always
+
+kubectl apply -f nginx-nfs.yaml
+
+# create an index.html file in the /app/nfs directory (on the NFS server)
+echo "hello nfs volume" > index.html
+
+# access nginx
+kubectl get pod -n storage -o wide
+pod/nginx-5d59d8d7fd-v744j   1/1     Running   0          3m44s   10.244.2.11   flink03
+
+curl 10.244.2.11
+hello nfs volume
+```
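+
+Note that the NFS volume is mounted by the kubelet on whichever node runs the Pod, so every node needs the NFS client utilities installed (a minimal sketch for CentOS, matching the NFS setup linked above):
+
+```bash
+yum install -y nfs-utils
+```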
+
+
+# 6. PV and PVC
+[Official PV/PVC walkthrough](https://kubernetes.io/zh/docs/tasks/configure-pod-container/configure-persistent-volume-storage/)
+
+[Official PV/PVC documentation](https://kubernetes.io/zh/docs/concepts/storage/persistent-volumes/#access-modes)
+
+## 6.1 Persistent volumes (PersistentVolume, PV)
+
+- A persistent volume (PersistentVolume, PV) is a piece of storage in the cluster, provisioned in advance by an administrator or dynamically provisioned through a [storage class (Storage Class)](https://kubernetes.io/zh/docs/concepts/storage/storage-classes/).
+- Persistent volumes are cluster resources, just like nodes are. Like ordinary Volumes, PVs are implemented with volume plugins, but they have a lifecycle that is independent of the Pods that use them.
+- The PV API object records the details of the storage implementation, whether that is NFS, iSCSI, or a cloud-provider-specific storage system.
+
+### 6.1.1 Example
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: pv-0001
+  namespace: storage
+spec:
+  capacity:
+    storage: 5Gi
+  volumeMode: Filesystem
+  accessModes:
+  - ReadWriteOnce
+  persistentVolumeReclaimPolicy: Recycle
+  storageClassName: slow
+  nfs:
+    path: /app/nfs
+    server: 192.168.156.60
+```
+Field descriptions:
+1. capacity.storage: [the capacity of this PV](https://kubernetes.io/zh/docs/concepts/storage/persistent-volumes/#capacity)
+2. volumeMode: [volume mode, defaults to Filesystem](https://kubernetes.io/zh/docs/concepts/storage/persistent-volumes/#volume-mode)
+3. accessModes: [access modes](https://kubernetes.io/zh/docs/concepts/storage/persistent-volumes/#access-modes)
+   - ReadWriteOnce: the volume can be mounted read-write by a single node;
+   - ReadOnlyMany: the volume can be mounted read-only by many nodes;
+   - ReadWriteMany: the volume can be mounted read-write by many nodes;
+   - ReadWriteOncePod: the volume can be mounted read-write by a single Pod. Use this mode when you need to guarantee that only one Pod in the whole cluster can read and write the PVC.
+4. persistentVolumeReclaimPolicy: [reclaim policy](https://kubernetes.io/zh/docs/concepts/storage/persistent-volumes/#reclaim-policy)
+5. storageClassName: [used to match the PV with a PVC](https://kubernetes.io/zh/docs/concepts/storage/persistent-volumes/#class)
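+
+After the PV is created it can be checked like this (a minimal sketch; the file name pv.yaml is an assumption):
+
+```bash
+kubectl apply -f pv.yaml
+# an unclaimed PV reports STATUS "Available"
+kubectl get pv pv-0001
+```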
+
+
+## 6.2 Persistent volume claims (PersistentVolumeClaim, PVC)
+
+- Expresses a user's request for storage.
+- Conceptually similar to a Pod: a Pod consumes node resources, while a PVC consumes PV resources.
+- Just as a Pod can request specific amounts of resources (CPU and memory), a PVC can request a specific size and specific access modes (for example, it can ask for a volume that is mountable as ReadWriteOnce, ReadOnlyMany or ReadWriteMany; see [access modes](https://kubernetes.io/zh/docs/concepts/storage/persistent-volumes/#access-modes)).
+
+### 6.2.1 Example
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: myclaim
+  namespace: storage
+spec:
+  accessModes:
+  - ReadWriteOnce
+  volumeMode: Filesystem
+  resources:
+    requests:
+      storage: 8Gi
+  storageClassName: slow
+```
+
+## 6.3 PV/PVC hands-on
+Deploy a standalone MySQL instance using a PV, a PVC and a ConfigMap
+```bash
+# 1. Create my.cnf and load it into a ConfigMap
+cat >> my.cnf << EOF
+[mysql]
+default-character-set=utf8
+
+[client]
+default-character-set=utf8
+
+[mysqld]
+# character set (note: default-character-set is client-only; mysqld needs character_set_server)
+character_set_server=utf8
+init_connect='SET NAMES utf8'
+# maximum number of connections
+max_connections=1000
+# binlog
+log-bin=mysql-bin
+binlog-format=ROW
+# case-insensitive table names
+lower_case_table_names=2
+EOF
+
+kubectl create configmap mysql-config --from-file=my.cnf -n storage
+
+# 2. Create the PV
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: pv-mysql-1
+  namespace: storage
+spec:
+  capacity:
+    storage: 50Gi
+  volumeMode: Filesystem
+  accessModes:
+  - ReadWriteOnce
+  persistentVolumeReclaimPolicy: Recycle
+  storageClassName: my-nfs
+  nfs:
+    path: /app/nfs
+    server: 192.168.156.60
+
+# 3. Create the PVC
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: pvc-mysql-1
+  namespace: storage
+spec:
+  accessModes:
+  - ReadWriteOnce
+  volumeMode: Filesystem
+  resources:
+    requests:
+      storage: 50Gi
+  storageClassName: my-nfs
+
+# 4. Create the Deployment
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: mysql-pv
+  namespace: storage
+  labels:
+    app: mysql-pv
+spec:
+  selector:
+    matchLabels:
+      app: mysql-pv
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        app: mysql-pv
+    spec:
+      containers:
+      - name: mysql-pv
+        image: mysql:8
+        env:
+        - name: MYSQL_ROOT_PASSWORD
+          value: "123456"
+        ports:
+        - containerPort: 3306
+          name: mysql-pv
+        volumeMounts:
+        - name: mysql-storage
+          mountPath: /var/lib/mysql
+        - name: my-config
+          mountPath: /etc/mysql/conf.d
+        - name: time-zone
+          mountPath: /etc/localtime
+      volumes:
+      - name: mysql-storage
+        persistentVolumeClaim:
+          claimName: pvc-mysql-1
+      - name: my-config
+        configMap:
+          name: mysql-config
+      - name: time-zone
+        hostPath:
+          path: /etc/localtime
+      restartPolicy: Always
+
+```
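+
+A minimal way to verify the result (the pod name below is a placeholder; take the real one from `kubectl get pod`):
+
+```bash
+kubectl get pvc,pod -n storage
+# log in with the password set via MYSQL_ROOT_PASSWORD
+kubectl exec -it <mysql-pod-name> -n storage -- mysql -uroot -p123456
+```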
+
+## 6.4 Storage classes (Storage Class)
+
+- Although PersistentVolumeClaims let users consume abstract storage resources, it is common for users to need PersistentVolumes with different properties (such as performance) for different problems.
+- Cluster administrators need to be able to offer PersistentVolumes that differ in more than just size and access mode, without exposing the details of how those volumes are implemented to users.
+- The *StorageClass* resource exists to meet this need.
+
+[Official deployment guide](https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner)
+
+### 6.4.1 Deployment
+```yaml
+# create the StorageClass
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: managed-nfs-storage
+# provisioner: the name of the provisioner
+provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME
+parameters:
+  archiveOnDelete: "false"
+
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: nfs-client-provisioner
+  labels:
+    app: nfs-client-provisioner
+  # replace with namespace where provisioner is deployed
+  namespace: default
+spec:
+  replicas: 1
+  strategy:
+    type: Recreate
+  selector:
+    matchLabels:
+      app: nfs-client-provisioner
+  template:
+    metadata:
+      labels:
+        app: nfs-client-provisioner
+    spec:
+      serviceAccountName: nfs-client-provisioner
+      containers:
+      - name: nfs-client-provisioner
+        image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
+        volumeMounts:
+        - name: nfs-client-root
+          mountPath: /persistentvolumes
+        env:
+        - name: PROVISIONER_NAME
+          value: k8s-sigs.io/nfs-subdir-external-provisioner
+        # change to your own NFS server address
+        - name: NFS_SERVER
+          value: 192.168.156.60
+        - name: NFS_PATH
+          value: /app/nfs
+      volumes:
+      - name: nfs-client-root
+        nfs:
+          server: 192.168.156.60
+          path: /app/nfs
+
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: nfs-client-provisioner
+  # replace with namespace where provisioner is deployed
+  namespace: default
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: nfs-client-provisioner-runner
+rules:
+  - apiGroups: [""]
+    resources: ["nodes"]
+    verbs: ["get", "list", "watch"]
+  - apiGroups: [""]
+    resources: ["persistentvolumes"]
+    verbs: ["get", "list", "watch", "create", "delete"]
+  - apiGroups: [""]
+    resources: ["persistentvolumeclaims"]
+    verbs: ["get", "list", "watch", "update"]
+  - apiGroups: ["storage.k8s.io"]
+    resources: ["storageclasses"]
+    verbs: ["get", "list", "watch"]
+  - apiGroups: [""]
+    resources: ["events"]
+    verbs: ["create", "update", "patch"]
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: run-nfs-client-provisioner
+subjects:
+  - kind: ServiceAccount
+    name: nfs-client-provisioner
+    # replace with namespace where provisioner is deployed
+    namespace: default
+roleRef:
+  kind: ClusterRole
+  name: nfs-client-provisioner-runner
+  apiGroup: rbac.authorization.k8s.io
+---
+kind: Role
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: leader-locking-nfs-client-provisioner
+  # replace with namespace where provisioner is deployed
+  namespace: default
+rules:
+  - apiGroups: [""]
+    resources: ["endpoints"]
+    verbs: ["get", "list", "watch", "create", "update", "patch"]
+---
+kind: RoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+metadata:
+  name: leader-locking-nfs-client-provisioner
+  # replace with namespace where provisioner is deployed
+  namespace: default
+subjects:
+  - kind: ServiceAccount
+    name: nfs-client-provisioner
+    # replace with namespace where provisioner is deployed
+    namespace: default
+roleRef:
+  kind: Role
+  name: leader-locking-nfs-client-provisioner
+  apiGroup: rbac.authorization.k8s.io
+```
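+
+Once the provisioner is running, a PVC that references the storage class is enough to get a PV provisioned automatically; a minimal sketch (the claim name and size are assumptions):
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: test-claim
+spec:
+  storageClassName: managed-nfs-storage
+  accessModes:
+  - ReadWriteMany
+  resources:
+    requests:
+      storage: 1Gi
+```
+
+After `kubectl apply`, `kubectl get pvc test-claim` should show the claim Bound to a dynamically created PV backed by a subdirectory of /app/nfs.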
+
+
diff --git "a/note/Kubernetes/\344\275\277\347\224\250kubeadm\351\203\250\347\275\262k8s\351\233\206\347\276\244.md" "b/note/Kubernetes/\344\275\277\347\224\250kubeadm\351\203\250\347\275\262k8s\351\233\206\347\276\244.md"
index 598e7d5..0d4dbe0 100644
--- "a/note/Kubernetes/\344\275\277\347\224\250kubeadm\351\203\250\347\275\262k8s\351\233\206\347\276\244.md"
+++ "b/note/Kubernetes/\344\275\277\347\224\250kubeadm\351\203\250\347\275\262k8s\351\233\206\347\276\244.md"
@@ -431,3 +431,4 @@ $ kubectl get pod,svc
+https://blog.csdn.net/a749227859/article/details/118732605
\ No newline at end of file
diff --git "a/note/Kubernetes/\345\270\270\347\224\250\346\234\215\345\212\241\351\203\250\347\275\262.md" "b/note/Kubernetes/\345\270\270\347\224\250\346\234\215\345\212\241\351\203\250\347\275\262.md"
new file mode 100644
index 0000000..15977db
--- /dev/null
+++ "b/note/Kubernetes/\345\270\270\347\224\250\346\234\215\345\212\241\351\203\250\347\275\262.md"
@@ -0,0 +1,133 @@
+
+
+
+
+
+---
+# 1. Standalone MySQL 8
+```yaml
+---
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: mysql8-pv
+  namespace: database
+spec:
+  capacity:
+    storage: 50Gi
+  volumeMode: Filesystem
+  accessModes:
+  - ReadWriteOnce
+  - ReadWriteMany
+  storageClassName: nfs-client
+  persistentVolumeReclaimPolicy: Retain
+  nfs:
+    path: /app/nfs/
+    server: 192.168.1.122
+
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: mysql8-pvc
+  namespace: database
+spec:
+  accessModes:
+  - ReadWriteOnce
+  - ReadWriteMany
+  storageClassName: nfs-client
+  resources:
+    requests:
+      storage: 50Gi
+
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: mysql8-config
+  namespace: database
+data:
+  my.cnf: |-
+    [mysql]
+    default-character-set=utf8
+
+    [client]
+    default-character-set=utf8
+
+    [mysqld]
+    # character set
+    character_set_server=utf8
+    init_connect='SET NAMES utf8'
+    # maximum number of connections
+    max_connections=1000
+    # binlog
+    log-bin=mysql-bin
+    binlog-format=ROW
+    # case-insensitive table names
+    lower_case_table_names=2
+
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: mysql8
+  namespace: database
+spec:
+  selector:
+    matchLabels:
+      app: mysql8
+  strategy:
+    type: Recreate
+  template:
+    metadata:
+      labels:
+        app: mysql8
+    spec:
+      containers:
+      - image: mysql:8
+        name: mysql8
+        volumeMounts:
+        - name: time-zone
+          mountPath: /etc/localtime
+        - name: mysql8-persistent-storage
+          mountPath: /var/lib/mysql
+        - name: config
+          mountPath: /etc/mysql/conf.d
+        env:
+        - name: MYSQL_ROOT_PASSWORD
+          value: "123456" # change: Ioubuy123
+        ports:
+        - containerPort: 3306
+          name: mysql8
+      volumes:
+      - name: mysql8-persistent-storage
+        persistentVolumeClaim:
+          claimName: mysql8-pvc
+      - name: time-zone
+        hostPath:
+          path: /etc/localtime
+      - name: config
+        configMap:
+          name: mysql8-config
+
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: mysql8-svc
+  namespace: database
+spec:
+  selector:
+    app: mysql8
+  type: NodePort
+  ports:
+  - name: mysql8-svc
+    protocol: TCP
+    port: 3306
+    targetPort: 3306
+    nodePort: 31236
+
+```
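+
+A minimal connection test from outside the cluster (the node IP is a placeholder; any node's IP works because the Service is a NodePort):
+
+```bash
+mysql -h <NodeIP> -P 31236 -uroot -p
+```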
+
+
diff --git "a/note/MySQL/MySQL-\345\270\270\347\224\250\351\205\215\347\275\256.md" "b/note/MySQL/MySQL-\345\270\270\347\224\250\351\205\215\347\275\256.md"
index 7fbd02a..4c54a6f 100644
--- "a/note/MySQL/MySQL-\345\270\270\347\224\250\351\205\215\347\275\256.md"
+++ "b/note/MySQL/MySQL-\345\270\270\347\224\250\351\205\215\347\275\256.md"
@@ -69,4 +69,18 @@ show variables like '%max_connections%';
# check the current number of connections
show status like 'Threads%';
-```
\ No newline at end of file
+```
+
+
+
+# 4. Change the password
+
+```mysql
+SET PASSWORD FOR 'root'@'localhost'= "Kino123.";
+```
+
+# 5. Force-reset a forgotten password
+
+```bash
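+# A sketch of the usual skip-grant-tables recovery procedure (service name and exact commands
+# are assumptions and depend on how MySQL was installed); use only when the root password is lost.
+
+# 1. stop the server, then start it without privilege checks
+systemctl stop mysqld
+mysqld --skip-grant-tables --skip-networking &
+
+# 2. log in without a password and reset root's password
+mysql -uroot
+# inside the mysql client:
+#   FLUSH PRIVILEGES;
+#   ALTER USER 'root'@'localhost' IDENTIFIED BY 'Kino123.';
+
+# 3. restart the server normally
+systemctl restart mysqld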
+```
+