
How to re-add a control plane node to a highly available Kubernetes cluster created with kubeadm

By 老余


Scenario

For certain reasons the k8s-001 node was removed from the cluster. It now needs to rejoin as a control plane node, but the join fails partway through.

Cluster information

Cluster version: 1.13.1

3 control plane nodes, 2 worker nodes

  • k8s-001:10.0.3.4 control plane
  • k8s-002:10.0.3.5 control plane
  • k8s-003:10.0.3.6 control plane
  • k8s-004:10.0.3.7 worker
  • k8s-005:10.0.3.8 worker
  • vip: 10.0.3.9

Solution

Fixing the etcd health check failure when joining the cluster with kubeadm

Rejoining the cluster directly will usually fail with the following error:

[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Checking Etcd cluster health
error syncing endpoints with etc: dial tcp 10.0.3.4:2379: connect: connection refused

This happens because the control plane 10.0.3.4 (k8s-001) has already been removed, but a stale entry for it is still present in the kubeadm-config ConfigMap:

# kubectl get configmaps -n kube-system kubeadm-config -oyaml

...
  ClusterStatus: |
    apiEndpoints:
      k8s-001:
        advertiseAddress: 10.0.3.4
        bindPort: 6443
      k8s-002:
        advertiseAddress: 10.0.3.5
        bindPort: 6443
      k8s-003:
        advertiseAddress: 10.0.3.6
        bindPort: 6443
    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterStatus
...

As you can see, k8s-001 is still listed in the cluster status. When kubeadm rejoins the cluster, it checks etcd health on every node recorded here.

So k8s-001 has to be removed from this ConfigMap:

# kubectl edit configmaps -n kube-system kubeadm-config

Delete the following k8s-001 entry and save:

      k8s-001:
        advertiseAddress: 10.0.3.4
        bindPort: 6443
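If you prefer not to edit the ConfigMap interactively, the same deletion can be scripted. A minimal sketch, assuming GNU sed and the six-space indentation of the apiEndpoints entries shown above; `drop_endpoint` is a hypothetical helper, and the piped `kubectl apply` should only be run after reviewing the output:

```shell
# Hypothetical helper: delete one apiEndpoints entry (the node-name line plus
# the two indented lines under it) from kubeadm-config YAML read on stdin.
# Relies on GNU sed's /addr/,+N range syntax.
drop_endpoint() {
  sed "/^      $1:\$/,+2d"
}

# Usage against the live cluster (review the output before applying!):
#   kubectl -n kube-system get configmap kubeadm-config -o yaml \
#     | drop_endpoint k8s-001 | kubectl apply -f -
```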

Removing the stale etcd cluster member

In a cluster built with kubeadm, if etcd was not deployed manually (i.e. kubeadm set it up automatically), an etcd instance runs on every control plane node. When the k8s-001 node was deleted, the etcd cluster did not automatically remove the member on that node, so it has to be removed by hand.

First, inspect the etcd cluster membership.

Set up a shortcut first:

# export ETCDCTL_API=3
# alias etcdctl='etcdctl --endpoints=https://10.0.3.5:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key'

List the etcd cluster members:

# etcdctl member list

57b3a6dc282908df, started, k8s-003, https://10.0.3.6:2380, https://10.0.3.6:2379
58bfa292d53697d0, started, k8s-001, https://10.0.3.4:2380, https://10.0.3.4:2379
f38fd5735de92e88, started, k8s-002, https://10.0.3.5:2380, https://10.0.3.5:2379

Although the cluster looks healthy, k8s-001 no longer actually exists. If you try to join the cluster at this point, it fails with:

[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Checking Etcd cluster health
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-001" as an annotation
error creating local etcd static pod manifest file: etcdserver: unhealthy cluster

Remove the stale member (k8s-001):

# etcdctl member remove 58bfa292d53697d0
Member 58bfa292d53697d0 removed from cluster f06e01da83f7000d
# etcdctl member list
57b3a6dc282908df, started, k8s-003, https://10.0.3.6:2380, https://10.0.3.6:2379
f38fd5735de92e88, started, k8s-002, https://10.0.3.5:2380, https://10.0.3.5:2379
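Copying the member ID by hand is error-prone, so the lookup can also be scripted. A minimal sketch with a hypothetical `stale_member_id` helper, assuming the comma-separated `etcdctl member list` output format shown above:

```shell
# Hypothetical helper: given `etcdctl member list` output on stdin, print the
# member ID for the node name passed as $1.
# Each line looks like: <id>, <status>, <name>, <peer URL>, <client URL>
stale_member_id() {
  awk -F', ' -v node="$1" '$3 == node { print $1 }'
}

# Usage against the live cluster:
#   etcdctl member remove "$(etcdctl member list | stale_member_id k8s-001)"
```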

Joining the control plane with kubeadm again

Everything works:

# kubectl get pod --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   calico-node-4956t                 1/1     Running   0          128m
kube-system   calico-node-hkcmq                 1/1     Running   0          5h58m
kube-system   calico-node-lsqsg                 1/1     Running   0          5h58m
kube-system   calico-node-q2zpt                 1/1     Running   0          5h58m
kube-system   calico-node-qdg49                 1/1     Running   0          5h58m
kube-system   coredns-89cc84847-sl2s5           1/1     Running   0          6h3m
kube-system   coredns-89cc84847-x57kv           1/1     Running   0          6h3m
kube-system   etcd-k8s-001                      1/1     Running   0          39m
kube-system   etcd-k8s-002                      1/1     Running   1          3h8m
kube-system   etcd-k8s-003                      1/1     Running   0          3h7m
kube-system   kube-apiserver-k8s-001            1/1     Running   0          128m
kube-system   kube-apiserver-k8s-002            1/1     Running   1          6h1m
kube-system   kube-apiserver-k8s-003            1/1     Running   2          6h
kube-system   kube-controller-manager-k8s-001   1/1     Running   0          128m
kube-system   kube-controller-manager-k8s-002   1/1     Running   1          6h1m
kube-system   kube-controller-manager-k8s-003   1/1     Running   0          6h
kube-system   kube-proxy-5stnn                  1/1     Running   0          5h59m
kube-system   kube-proxy-92vtd                  1/1     Running   0          6h1m
kube-system   kube-proxy-sz998                  1/1     Running   0          5h59m
kube-system   kube-proxy-wp2jx                  1/1     Running   0          6h
kube-system   kube-proxy-xl5nn                  1/1     Running   0          128m
kube-system   kube-scheduler-k8s-001            1/1     Running   0          128m
kube-system   kube-scheduler-k8s-002            1/1     Running   0          6h1m
kube-system   kube-scheduler-k8s-003            1/1     Running   1          6h
# etcdctl member list
57b3a6dc282908df, started, k8s-003, https://10.0.3.6:2380, https://10.0.3.6:2379
f38fd5735de92e88, started, k8s-002, https://10.0.3.5:2380, https://10.0.3.5:2379
fc790bd58a364c97, started, k8s-001, https://10.0.3.4:2380, https://10.0.3.4:2379

Notes

Each time kubeadm join fails on k8s-001, run kubeadm reset to reset the node's state. After resetting, if the node is to rejoin as a control plane, copy the following certificates from the /etc/kubernetes/pki directory of another healthy control plane node to k8s-001:

  • ca.crt
  • ca.key
  • sa.pub
  • sa.key
  • front-proxy-ca.crt
  • front-proxy-ca.key
  • etcd/ca.crt
  • etcd/ca.key
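Copying the files one by one is tedious, so the list above can be turned into a small loop. A minimal sketch that only prints the scp commands for review rather than running them (10.0.3.4 is k8s-001's address per the cluster info; drop the echo once the commands look right):

```shell
# Print (rather than run) the copy commands for the shared certificates.
# Run this on a healthy control plane node as root.
certs="ca.crt ca.key sa.pub sa.key front-proxy-ca.crt front-proxy-ca.key etcd/ca.crt etcd/ca.key"
for f in $certs; do
  echo "scp /etc/kubernetes/pki/$f root@10.0.3.4:/etc/kubernetes/pki/$f"
done
```

Note that the etcd/ subdirectory may need to be created on k8s-001 first (mkdir -p /etc/kubernetes/pki/etcd), since kubeadm reset cleans out that directory.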

Printing the kubeadm join command for the cluster

# kubeadm token create --print-join-command
kubeadm join your.k8s.domain:6443 --token xxxxxx.xxxxxxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Joining the cluster as a worker node:

kubeadm join your.k8s.domain:6443 --token xxxxxx.xxxxxxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Joining the cluster as a control plane:

kubeadm join your.k8s.domain:6443 --token xxxxxx.xxxxxxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --experimental-control-plane

Note: on version 1.15+, the --experimental-control-plane flag is replaced by --control-plane.

When reposting, please credit: 老余博客 » How to re-add a control plane node to a highly available Kubernetes cluster created with kubeadm
