
How to Run kube-proxy in IPVS Mode (a Production-Grade Deployment with kubeadm)


Overview

kubeadm now supports full cluster deployment and went GA in version 1.13, including multi-master and multi-etcd clustered setups. It is also the deployment method most recommended upstream: for one, it is driven by its own SIG; for another, kubeadm genuinely makes good use of many Kubernetes features. Over the next few posts we will put it into practice and see what it can do.

Goals

1. Build a highly available Kubernetes cluster with kubeadm and create a new admin user

2. Set up a later version-upgrade demo: we use v1.13.1 here and upgrade to v1.14 in the next post

3. Walk through how kubeadm works

This post focuses on deploying a highly available cluster with kubeadm.

Deploying a Highly Available k8s v1.13 Cluster with kubeadm

There are two topologies:

  • Stacked etcd topology

(diagram: stacked etcd topology)

  • Each etcd member runs co-located on one of the 3 masters, so the control plane and etcd share machines; the upside is simplicity, the downside is weaker etcd redundancy, since losing a node takes out both a control-plane instance and an etcd member
  • Needs at least 4 machines (3 master-plus-etcd nodes and 1 worker)
  • External etcd topology

(diagram: external etcd topology)

  • Uses an etcd cluster that sits outside the control plane; redundancy is better, but it needs at least 7 machines (3 masters, 3 etcd nodes, 1 worker)
  • Recommended for production environments
  • This post uses this topology
Steps
  • Environment preparation
  • Install the components: docker, kubelet, kubeadm (all nodes)
  • Deploy the HA etcd cluster with those components
  • Deploy the masters
  • Join the worker node
  • Install the network
  • Validate
  • Summarize

Environment preparation

  • System environment

# OS version (not required; just what this walkthrough uses)
$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)

# Kernel version (not required; just what this walkthrough uses)
$ uname -r
4.17.8-1.el7.elrepo.x86_64

# Enable ftype on the data disk (run on every node)
umount /data
mkfs.xfs -n ftype=1 -f /dev/vdb

# Disable swap
swapoff -a
sed -i "s#^/swapfile#\#/swapfile#g" /etc/fstab
mount -a

Installing docker, kubelet, and kubeadm (all nodes)

Installing the runtime (docker)

  • Per the official guidance for k8s 1.13, we skip the newest Docker 18.09 and use 18.06; the version must be pinned at install time
  • Source: kubeadm now properly recognizes Docker 18.09.0 and newer but still treats 18.06 as the default supported version.
  • The install script is sketched below (run on every node):

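A minimal reconstruction of the install script (the original was an image that did not survive extraction); the repo URL and the exact 18.06 package string are assumptions, so adjust them to your mirror:

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# pin the 18.06 version as discussed above (exact build string is an assumption)
yum install -y docker-ce-18.06.1.ce-3.el7
systemctl enable --now docker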

Installing kubeadm, kubelet, and kubectl

  • The official Google yum repo cannot be reached directly from servers inside mainland China, so download the packages through another channel first and then upload them to the servers

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

# exclude=kube* above blocks the kube packages, so excludes must be disabled here
$ yum -y install --downloadonly --downloaddir=k8s --disableexcludes=kubernetes kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1
$ ls k8s/
25cd948f63fea40e81e43fbe2e5b635227cc5bbda6d5e15d42ab52decf09a5ac-kubelet-1.13.1-0.x86_64.rpm
53edc739a0e51a4c17794de26b13ee5df939bd3161b37f503fe2af8980b41a89-cri-tools-1.12.0-0.x86_64.rpm
5af5ecd0bc46fca6c51cc23280f0c0b1522719c282e23a2b1c39b8e720195763-kubeadm-1.13.1-0.x86_64.rpm
7855313ff2b42ebcf499bc195f51d56b8372abee1a19bbf15bb4165941c0229d-kubectl-1.13.1-0.x86_64.rpm
fe33057ffe95bfae65e2f269e1b05e99308853176e24a4d027bc082b471a07c0-kubernetes-cni-0.6.0-0.x86_64.rpm
socat-1.7.3.2-2.el7.x86_64.rpm

  • Local install

# Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

# Install from the local rpms
yum localinstall -y k8s/*.rpm
systemctl enable --now kubelet

  • Networking fix: CentOS 7 is known to mis-route traffic when bridged packets bypass iptables, so make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl config

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
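Since the point of this post is to run kube-proxy in ipvs mode, the ipvs kernel modules should also be loadable on every node; if they are missing at startup, kube-proxy silently falls back to iptables mode. A sketch of the usual preparation (the module names fit the 4.x kernel used here; on kernels 4.19 and later nf_conntrack_ipv4 is replaced by nf_conntrack):

cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4

# user-space tools for inspecting ipvs rules later
yum install -y ipset ipvsadm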

Deploying the HA etcd cluster with the components above

1. On the etcd nodes, configure the etcd service so that it is started and managed by the kubelet

cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true
Restart=always
EOF
systemctl daemon-reload
systemctl restart kubelet

2. Generate a kubeadm config file for every etcd host, so that each host runs exactly one etcd instance. Run the commands below on etcd1 (HOST0 above); afterwards you will find one directory per host under /tmp.

# Update HOST0, HOST1 and HOST2 with the IPs or resolvable names of your hosts
export HOST0=10.10.184.226
export HOST1=10.10.213.222
export HOST2=10.10.239.108

# Create temp directories to store files that will end up on other hosts.
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/

ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2})
NAMES=("infra0" "infra1" "infra2")

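The loop that actually writes each host's kubeadmcfg.yaml was an image that did not survive; a sketch following the official setup-ha-etcd-with-kubeadm guide for this kubeadm generation (v1beta1 API):

for i in "${!ETCDHOSTS[@]}"; do
HOST=${ETCDHOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - "${HOST}"
        peerCertSANs:
        - "${HOST}"
        extraArgs:
            initial-cluster: ${NAMES[0]}=https://${ETCDHOSTS[0]}:2380,${NAMES[1]}=https://${ETCDHOSTS[1]}:2380,${NAMES[2]}=https://${ETCDHOSTS[2]}:2380
            initial-cluster-state: new
            name: ${NAME}
            listen-peer-urls: https://${HOST}:2380
            listen-client-urls: https://${HOST}:2379
            advertise-client-urls: https://${HOST}:2379
            initial-advertise-peer-urls: https://${HOST}:2380
EOF
done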

3. Create the CA: run the command on HOST0 to generate the certificate authority. It creates two files: /etc/kubernetes/pki/etcd/ca.crt and /etc/kubernetes/pki/etcd/ca.key. (This step needs a way past the firewall.)

[root@10-10-184-226 ~]# kubeadm init phase certs etcd-ca
[certs] Generating "etcd/ca" certificate and key

4. On HOST0, generate the certificates for each etcd node:

export HOST0=10.10.184.226
export HOST1=10.10.213.222
export HOST2=10.10.239.108

kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST2}/
# cleanup non-reusable certificates
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST1}/
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
# No need to move the certs because they are for HOST0

# clean up certs that should not be copied off this host
find /tmp/${HOST2} -name ca.key -type f -delete
find /tmp/${HOST1} -name ca.key -type f -delete

  • Push the certificates and kubeadmcfg.yaml out to each etcd node; the resulting layout is sketched below

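The screenshot of the copy step and resulting tree is gone; per the official guide the copy looks like this (the USER and target paths are assumptions), shown for HOST1 and repeated for HOST2:

USER=root
scp -r /tmp/${HOST1}/* ${USER}@${HOST1}:
ssh ${USER}@${HOST1} "chown -R root:root pki && mv pki /etc/kubernetes/"

On each non-HOST0 node the result should be:

/root/kubeadmcfg.yaml
/etc/kubernetes/pki
├── apiserver-etcd-client.crt
├── apiserver-etcd-client.key
└── etcd
    ├── ca.crt
    ├── healthcheck-client.crt
    ├── healthcheck-client.key
    ├── peer.crt
    ├── peer.key
    ├── server.crt
    └── server.key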

5. Generate the static pod manifests. Run on each of the 3 etcd nodes (needs a way past the firewall to pull images):

$ kubeadm init phase etcd local --config=/root/kubeadmcfg.yaml
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"

6. Check the etcd cluster health. With this, the etcd cluster is up:

docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes k8s.gcr.io/etcd:3.2.24 etcdctl \
    --cert-file /etc/kubernetes/pki/etcd/peer.crt \
    --key-file /etc/kubernetes/pki/etcd/peer.key \
    --ca-file /etc/kubernetes/pki/etcd/ca.crt \
    --endpoints https://${HOST0}:2379 cluster-health

member 9969ee7ea515cbd2 is healthy: got healthy result from https://10.10.213.222:2379
member cad4b939d8dfb250 is healthy: got healthy result from https://10.10.239.108:2379
member e6e86b3b5b495dfb is healthy: got healthy result from https://10.10.184.226:2379
cluster is healthy

Deploying the masters with kubeadm

  • Copy the certificates from any one of the etcd nodes to the master1 node

export CONTROL_PLANE="ubuntu@10.0.0.7"
scp /etc/kubernetes/pki/etcd/ca.crt "${CONTROL_PLANE}":
scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${CONTROL_PLANE}":
scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}":

  • On the first master, write the kubeadm-config.yaml config file (sketched below) and initialize

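The config file itself was an image that did not survive. Here is a sketch consistent with v1.13 and with this post's goal of running kube-proxy in ipvs mode; the control-plane endpoint, etcd endpoints, and pod subnet are taken from elsewhere in this walkthrough, so treat the exact values as assumptions:

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
controlPlaneEndpoint: "k8s.paas.test:6443"
networking:
  podSubnet: "10.244.0.0/16"
etcd:
  external:
    endpoints:
    - https://10.10.184.226:2379
    - https://10.10.213.222:2379
    - https://10.10.239.108:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF

The KubeProxyConfiguration block at the end is what switches kube-proxy from the default iptables mode to ipvs.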

Note: k8s.paas.test:6443 here is a load balancer; if you do not have one, an available virtual IP can be used to play the same role.

  • Using a private registry (custom images): kubeadm can flexibly customize cluster initialization through parameters in the config file. For example, imageRepository sets the image prefix; after pushing the images to an internal registry, edit that parameter in kubeadm-config.yaml and then run init.
  • On master1, run: kubeadm init --config kubeadm-config.yaml

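The screenshot of the init output is also gone. On success, kubeadm prints the admin kubeconfig setup steps and a join command; the kubeconfig steps (standard kubeadm output, reproduced here from memory) are needed for the kubectl commands used below:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config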

Installing the other two masters

  • Copy the admin.conf config file and the pki certificates from master1 into the same directories on the other two masters, i.e.:

/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/sa.key
/etc/kubernetes/pki/sa.pub
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-ca.key
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd/ca.key   (the official docs say to copy this one, but in practice it is not needed)
/etc/kubernetes/admin.conf

Note: the official docs omit two files, /etc/kubernetes/pki/apiserver-etcd-client.crt and /etc/kubernetes/pki/apiserver-etcd-client.key; without them the apiserver fails to start with this error:

Unable to create storage backend: config (&{ /registry [] /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/etcd/ca.crt true 0xc000133c20 <nil> 5m0s 1m0s})
err (open /etc/kubernetes/pki/apiserver-etcd-client.crt: no such file or directory)

  • On masters 2 and 3, run the join (sketched below):

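The exact command was an image; in v1.13, joining an additional control-plane node reuses the join command printed by init plus the --experimental-control-plane flag (renamed to --control-plane in later releases). The token and hash below are placeholders:

kubeadm join k8s.paas.test:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --experimental-control-plane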

Joining the worker node

[root@k8s-n1 ~]$ kubeadm join k8s.paas.test:6443 --token f1oygc.3zlc31yjcut46prf --discovery-token-ca-cert-hash sha256:078b63e29378fb6dcbedd80dd830b83e37521f294b4e3416cd77e854041d912f
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "k8s.paas.test:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://k8s.paas.test:6443"
[discovery] Requesting info from "https://k8s.paas.test:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "k8s.paas.test:6443"
[discovery] Successfully established connection with API Server "k8s.paas.test:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-n1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Installing the network

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE     IP              NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-dc4t2         1/1     Running   0          14m     172.17.0.3      k8s-m1   <none>           <none>
kube-system   coredns-86c58d9df4-jxv6v         1/1     Running   0          14m     172.17.0.2      k8s-m1   <none>           <none>
kube-system   kube-apiserver-k8s-m1            1/1     Running   0          13m     10.10.119.128   k8s-m1   <none>           <none>
kube-system   kube-apiserver-k8s-m2            1/1     Running   0          5m      10.10.76.80     k8s-m2   <none>           <none>
kube-system   kube-apiserver-k8s-m3            1/1     Running   0          4m58s   10.10.56.27     k8s-m3   <none>           <none>
kube-system   kube-controller-manager-k8s-m1   1/1     Running   0          13m     10.10.119.128   k8s-m1   <none>           <none>
kube-system   kube-controller-manager-k8s-m2   1/1     Running   0          5m      10.10.76.80     k8s-m2   <none>           <none>
kube-system   kube-controller-manager-k8s-m3   1/1     Running   0          4m58s   10.10.56.27     k8s-m3   <none>           <none>
kube-system   kube-flannel-ds-amd64-nvmtk      1/1     Running   0          44s     10.10.56.27     k8s-m3   <none>           <none>
kube-system   kube-flannel-ds-amd64-pct2g      1/1     Running   0          44s     10.10.76.80     k8s-m2   <none>           <none>
kube-system   kube-flannel-ds-amd64-ptv9z      1/1     Running   0          44s     10.10.119.128   k8s-m1   <none>           <none>
kube-system   kube-flannel-ds-amd64-zcv49      1/1     Running   0          44s     10.10.175.146   k8s-n1   <none>           <none>
kube-system   kube-proxy-9cmg2                 1/1     Running   0          2m34s   10.10.175.146   k8s-n1   <none>           <none>
kube-system   kube-proxy-krlkf                 1/1     Running   0          4m58s   10.10.56.27     k8s-m3   <none>           <none>
kube-system   kube-proxy-p9v66                 1/1     Running   0          14m     10.10.119.128   k8s-m1   <none>           <none>
kube-system   kube-proxy-wcgg6                 1/1     Running   0          5m      10.10.76.80     k8s-m2   <none>           <none>
kube-system   kube-scheduler-k8s-m1            1/1     Running   0          13m     10.10.119.128   k8s-m1   <none>           <none>
kube-system   kube-scheduler-k8s-m2            1/1     Running   0          5m      10.10.76.80     k8s-m2   <none>           <none>
kube-system   kube-scheduler-k8s-m3            1/1     Running   0          4m58s   10.10.56.27     k8s-m3   <none>           <none>

Installation complete.

Validation

  • First, verify that kube-apiserver, kube-controller-manager, kube-scheduler, and the pod network are healthy:

$ kubectl create deployment nginx --image=nginx:alpine
$ kubectl get pods -l app=nginx -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nginx-54458cd494-r6hqm   1/1     Running   0          5m24s   10.244.4.2   k8s-n1   <none>           <none>

  • kube-proxy validation

$ kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

[root@k8s-m1 ~]$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        122m
nginx        NodePort    10.108.192.221   <none>        80:30992/TCP   4s

[root@k8s-m1 ~]$ kubectl get pods -l app=nginx -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nginx-54458cd494-r6hqm   1/1     Running   0          6m53s   10.244.4.2   k8s-n1   <none>           <none>

$ curl -I k8s-n1:30992
HTTP/1.1 200 OK
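Since the whole point is kube-proxy in ipvs mode, it is also worth confirming the mode actually took effect. A sketch, assuming ipvsadm was installed during environment preparation (the pod name comes from the listing above; yours will differ):

# the nginx ClusterIP and NodePort should appear as ipvs virtual servers
ipvsadm -Ln | grep -A 2 10.108.192.221

# kube-proxy should log that the ipvs proxier is in use
kubectl -n kube-system logs kube-proxy-p9v66 | grep -i ipvs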

  • Verify DNS and the pod network:

$ kubectl run --generator=run-pod/v1 -it curl --image=radial/busyboxplus:curl
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$ nslookup nginx
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx
Address 1: 10.108.192.221 nginx.default.svc.cluster.local

  • High availability

Shut down master1 and access Nginx from a random pod:

while true; do curl -I nginx && sleep 1; done

Summary

On versions

  • Kernel 4.19 is more stable; the 4.17 used here is not recommended (anything newer is the 5.x line)
  • The latest stable docker is 17.12 while this walkthrough used 18.06; although upstream k8s has since confirmed compatibility with 18.09, 17.12 is still the safer choice for production

On networking: every shop chooses differently, and flannel is fairly common in small and mid-size companies. Pick the network plugin before deploying and set it in the config file up front (the official blog's initial kubeadm config leaves it out, only to require it later in the network setup step).

Troubleshooting

  • If you want to reset the environment, kubeadm reset is a good tool, but it does not reset everything: some data in etcd (such as ConfigMaps and Secrets) is not cleared. So if you need to wipe the whole environment, remember to reset etcd as well after running reset; a sketch follows this list.
  • To reset etcd, clear /var/lib/etcd on the etcd nodes and restart the docker service.
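A sketch of a full wipe under the layout used in this post (etcd runs as kubelet-managed static pods); this is destructive, so adapt the paths to your setup:

# on masters and workers
kubeadm reset

# on each etcd node: stop the kubelet that supervises the static pod,
# clear the data directory, then bring docker and kubelet back
systemctl stop kubelet
rm -rf /var/lib/etcd/*
systemctl restart docker
systemctl start kubelet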

Getting past the firewall

  • Images: kubeadm supports a custom image prefix; just set imageRepository in kubeadm-config.yaml
  • yum: packages can be downloaded and imported ahead of time, or fetched by setting http_proxy
  • init: issuing certificates and running init need to reach Google, which can also be handled by setting http_proxy; see the sketch below.
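For example (the proxy address is hypothetical; no_proxy keeps cluster, etcd, and LB traffic off the proxy):

export http_proxy=http://127.0.0.1:8118    # hypothetical local proxy
export https_proxy=http://127.0.0.1:8118
export no_proxy=10.10.0.0/16,10.96.0.0/12,10.244.0.0/16,k8s.paas.test,localhost,127.0.0.1
kubeadm init --config kubeadm-config.yaml
unset http_proxy https_proxy no_proxy

Note that image pulls go through the docker daemon, which reads its proxy settings from its own systemd drop-in rather than the shell environment.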

More

  • Certificates and upgrades will be covered in the next post.

References:

  • https://kubernetes.io/docs/setup/independent/setup-ha-etcd-with-kubeadm/

