1. Basic Environment Configuration
K8s Service CIDR: 10.96.0.0/12
K8s Pod CIDR: 172.16.0.0/12
System environment:
root@k8s-master01 ~:# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
Set the hostname on all nodes:
hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-master02
hostnamectl set-hostname k8s-master03
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
Configure the hosts file on all nodes:
root@k8s-master01 ~:# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.20.103.181 k8s-master01
172.20.103.182 k8s-master02
172.20.103.183 k8s-master03
172.20.103.180 k8s-master-lb # If this is not a highly available cluster, use Master01's IP here
172.20.103.184 k8s-node01
172.20.103.185 k8s-node02
Replace the yum repositories on CentOS 7 as follows:
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
Install the required tools:
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
Disable firewalld, dnsmasq, and SELinux on all nodes (CentOS 7 also needs NetworkManager disabled; CentOS 8 does not):
systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
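Optionally, verify that the changes took effect before moving on (a quick sanity check, not part of the original steps):
getenforce                        # should print Permissive now, and Disabled after the next reboot
systemctl is-enabled firewalld    # should print disabled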
Disable the swap partition on all nodes and comment out the swap entry in fstab:
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
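To confirm swap is really off (optional check; the fstab edit only affects future reboots):
free -h           # the Swap line should show 0B total
cat /proc/swaps   # should list no swap devices below the header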
Synchronize time on all nodes:
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
# Add to crontab
crontab -e
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
Configure resource limits on all nodes:
ulimit -SHn 65535
cat >> /etc/security/limits.conf << EOF
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF
Set up passwordless SSH login from Master01 to the other nodes. During the installation, all configuration files and certificates are generated on Master01, and the cluster is also managed from Master01; on Alibaba Cloud or AWS a separate kubectl server is required. Configure the key as follows:
[root@k8s-master01 ~]# ssh-keygen -t rsa
Copy the key from Master01 to the other nodes for passwordless login:
[root@k8s-master01 ~]# for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
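A quick way to confirm passwordless login works everywhere (optional; each node should print its own hostname without asking for a password):
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ssh $i hostname; done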
Download the installation files on Master01:
[root@k8s-master01 ~]# cd /root/ ; git clone https://github.com/dotbalo/k8s-ha-install.git
Cloning into 'k8s-ha-install'...
remote: Enumerating objects: 12, done.
remote: Counting objects: 100% (12/12), done.
remote: Compressing objects: 100% (11/11), done.
remote: Total 461 (delta 2), reused 5 (delta 1), pack-reused 449
Receiving objects: 100% (461/461), 19.52 MiB | 4.04 MiB/s, done.
Resolving deltas: 100% (163/163), done.
Update the system on all nodes and reboot. This update does not include the kernel; the kernel is upgraded separately in the next section:
yum update -y --exclude=kernel* && reboot   # required on CentOS 7; on CentOS 8 update as needed
2. Kernel Upgrade
CentOS 7 needs its kernel upgraded to 4.18+; this guide upgrades to 4.19.
Download the kernel packages on the master01 node:
cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
Copy the packages from master01 to the other nodes:
for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done
Install the kernel on all nodes:
cd /root && yum localinstall -y kernel-ml*
Change the kernel boot order on all nodes:
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
Check that the default kernel is 4.19:
[root@k8s-master02 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
Reboot all nodes, then check that the running kernel is 4.19:
[root@k8s-master02 ~]# uname -a
Linux k8s-master02 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
Install ipvsadm on all nodes:
yum install ipvsadm ipset sysstat conntrack libseccomp -y
Configure the IPVS modules on all nodes. On kernel 4.19+ the nf_conntrack_ipv4 module has been renamed to nf_conntrack; on kernels below 4.18 use nf_conntrack_ipv4:
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
vim /etc/modules-load.d/ipvs.conf
# Add the following content:
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip

Then run:
systemctl enable --now systemd-modules-load.service
Check that the modules are loaded:
[root@k8s-master01 ~]# lsmod | grep -e ip_vs -e nf_conntrack
nf_conntrack_ipv4      16384  23
nf_defrag_ipv4         16384  1 nf_conntrack_ipv4
nf_conntrack          135168  10 xt_conntrack,nf_conntrack_ipv6,nf_conntrack_ipv4,nf_nat,nf_nat_ipv6,ipt_MASQUERADE,nf_nat_ipv4,xt_nat,nf_conntrack_netlink,ip_vs
Enable the kernel parameters required by a Kubernetes cluster. Configure them on all nodes:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
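To spot-check that the values were applied (optional; pick any parameters from the file above):
sysctl net.ipv4.ip_forward net.netfilter.nf_conntrack_max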
After configuring the kernel parameters on all nodes, reboot the servers and make sure the modules are still loaded after the reboot:
reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
3. Basic Component Installation
This section installs the components used by the cluster, such as Docker-ce and the Kubernetes components.
3.1 Docker Installation
Install Docker-ce 19.03 on all nodes:
yum install docker-ce-19.03.* -y
Tip:
Because newer versions of kubelet recommend systemd as the cgroup driver, change Docker's CgroupDriver to systemd:
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Enable Docker to start on boot on all nodes:
systemctl daemon-reload && systemctl enable --now docker
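Optionally, confirm that Docker is running and picked up the systemd cgroup driver:
systemctl is-active docker                         # should print active
docker info 2>/dev/null | grep -i 'cgroup driver'  # should show: Cgroup Driver: systemd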
3.2 Kubernetes and etcd Installation
Download the Kubernetes installation package on Master01:
[root@k8s-master01 ~]# wget https://dl.k8s.io/v1.20.0/kubernetes-server-linux-amd64.tar.gz
Note: the version used here is 1.20.0; when installing, download the latest 1.20.x release from: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md
After opening the page, click the Server Binaries download link (kubernetes-server-linux-amd64.tar.gz).
The following operations are all performed on master01.
Download the etcd installation package:
[root@k8s-master01 ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
Extract the Kubernetes installation files:
[root@k8s-master01 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
Extract the etcd installation files:
[root@k8s-master01 ~]# tar -zxvf etcd-v3.4.13-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.4.13-linux-amd64/etcd{,ctl}
Check the versions:
[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.20.0
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.4.13
API version: 3.4
Send the components to the other nodes:
MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
for NODE in $WorkNodes; do scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done
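A quick check that the binaries landed on every node (optional; the full path is used because /usr/local/bin may not be in the non-interactive SSH PATH):
for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do echo $NODE; ssh $NODE "/usr/local/bin/kubelet --version"; done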
Kubernetes on GitHub: https://github.com/kubernetes/kubernetes/
Create the /opt/cni/bin directory on all nodes:
mkdir -p /opt/cni/bin
Switch to the 1.20.x branch (for other versions, switch to the corresponding branch):
cd k8s-ha-install && git checkout manual-installation-v1.20.x
4. Generating Certificates
This is the most critical part of a binary installation: a single wrong step ruins everything, so make sure every step is correct.
Download the certificate generation tools on Master01 (if the download fails, they can be obtained from the Baidu netdisk share):
wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
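To confirm the tools are executable before generating anything (optional):
cfssl version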
4.1 etcd Certificates
Create the etcd certificate directory on all Master nodes:
mkdir -p /etc/etcd/ssl
Create the Kubernetes-related directories on all nodes:
mkdir -p /etc/kubernetes/pki
Generate the etcd certificates on the Master01 node. The CSR files are certificate signing requests that contain the domain names, company, and organizational unit:
[root@k8s-master01 pki]# cd /root/k8s-ha-install/pki
# Generate the etcd CA certificate and its key
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

cfssl gencert \
  -ca=/etc/etcd/ssl/etcd-ca.pem \
  -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
  -config=ca-config.json \
  -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,172.20.103.181,172.20.103.182,172.20.103.183 \
  -profile=kubernetes \
  etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd

# Output
2021/03/25 15:37:35 [INFO] generating a new CA key and certificate from CSR
2021/03/25 15:37:35 [INFO] generate received request
2021/03/25 15:37:35 [INFO] received CSR
2021/03/25 15:37:35 [INFO] generating key: rsa-2048
2021/03/25 15:37:36 [INFO] encoded CSR
2021/03/25 15:37:36 [INFO] signed certificate with serial number 531361379310927773836849333894840754500131829602
2021/03/25 15:38:06 [INFO] generate received request
2021/03/25 15:38:06 [INFO] received CSR
2021/03/25 15:38:06 [INFO] generating key: rsa-2048
2021/03/25 15:38:06 [INFO] encoded CSR
2021/03/25 15:38:06 [INFO] signed certificate with serial number 237443018015069176569281183324409939856062503697
Copy the certificates to the other nodes:
MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do
  ssh $NODE "mkdir -p /etc/etcd/ssl"
  for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
    scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
  done
done
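Optionally, verify that the etcd certificate contains all Master hostnames and IPs before continuing (a missing SAN here will surface later as TLS errors):
openssl x509 -in /etc/etcd/ssl/etcd.pem -noout -text | grep -A1 'Subject Alternative Name'
# expect k8s-master01/02/03, 127.0.0.1 and 172.20.103.181/182/183 in the output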
4.2 Kubernetes Component Certificates
Generate the Kubernetes CA on Master01:
[root@k8s-master01 pki]# cd /root/k8s-ha-install/pki
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
# 10.96.0.1 is the first address of the Kubernetes Service CIDR; if you change the Service CIDR, change 10.96.0.1 accordingly.
# If this is not a highly available cluster, 172.20.103.180 is Master01's IP.
cfssl gencert -ca=/etc/kubernetes/pki/ca.pem -ca-key=/etc/kubernetes/pki/ca-key.pem -config=ca-config.json -hostname=10.96.0.1,172.20.103.180,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,172.20.103.181,172.20.103.182,172.20.103.183 -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

# Generate the apiserver aggregation (front-proxy) certificates: requestheader-client-xxx, requestheader-allowed-xxx: aggregator
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

# Output (the warning can be ignored)
2021/03/25 15:46:58 [INFO] generate received request
2021/03/25 15:46:58 [INFO] received CSR
2021/03/25 15:46:58 [INFO] generating key: rsa-2048
2021/03/25 15:46:58 [INFO] encoded CSR
2021/03/25 15:46:58 [INFO] signed certificate with serial number 540745177069209309072809901900604794276852591352
2021/03/25 15:46:58 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
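Optionally, confirm the apiserver certificate includes the Service IP and the load-balancer IP (this catches a wrong -hostname list early):
openssl x509 -in /etc/kubernetes/pki/apiserver.pem -noout -text | grep -A1 'Subject Alternative Name'
# expect kubernetes.default.svc.cluster.local, 10.96.0.1 and 172.20.103.180 in the output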
Generate the controller-manager certificate and kubeconfig (the same block also generates the scheduler and admin certificates and the ServiceAccount key):
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# Note: if this is not a highly available cluster, change 172.20.103.180:8443 to Master01's address and 8443 to the apiserver port (6443 by default).
# set-cluster: define a cluster entry
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://172.20.103.180:8443 \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# set-context: define a context entry
kubectl config set-context system:kube-controller-manager@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# set-credentials: define a user entry
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
  --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# use-context: make this context the default
kubectl config use-context system:kube-controller-manager@kubernetes \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

# Note: if this is not a highly available cluster, change 172.20.103.180:8443 to Master01's address and 8443 to the apiserver port (6443 by default).
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://172.20.103.180:8443 \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/pki/scheduler.pem \
  --client-key=/etc/kubernetes/pki/scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config use-context system:kube-scheduler@kubernetes \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

# Note: if this is not a highly available cluster, change 172.20.103.180:8443 to Master01's address and 8443 to the apiserver port (6443 by default).
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://172.20.103.180:8443 --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig

# Create the ServiceAccount key (ServiceAccount Key -> secret)
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048

# Output
Generating RSA private key, 2048 bit long modulus (2 primes)
...................................................................................+++++
...............+++++
e is 65537 (0x010001)

openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

# Send the certificates to the other Master nodes
for NODE in k8s-master02 k8s-master03; do
  for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
    scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}
  done
  for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
    scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}
  done
done
View the certificate files:
[root@k8s-master01 pki]# ls /etc/kubernetes/pki/
admin.csr      apiserver.csr      ca.csr      controller-manager.csr      front-proxy-ca.csr      front-proxy-client.csr      sa.key   scheduler-key.pem
admin-key.pem  apiserver-key.pem  ca-key.pem  controller-manager-key.pem  front-proxy-ca-key.pem  front-proxy-client-key.pem  sa.pub   scheduler.pem
admin.pem      apiserver.pem      ca.pem      controller-manager.pem      front-proxy-ca.pem      front-proxy-client.pem      scheduler.csr
[root@k8s-master01 pki]# ls /etc/kubernetes/pki/ | wc -l
23
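Optionally, check the expiration date of each certificate (the exact dates depend on the ca-config.json shipped in the k8s-ha-install repository):
for CERT in $(ls /etc/kubernetes/pki/*.pem | grep -v -- -key); do
  echo -n "$CERT: "
  openssl x509 -in "$CERT" -noout -enddate
done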
5. Kubernetes System Component Configuration
5.1 Etcd Configuration
The etcd configuration is largely identical on every node; just change the hostname and IP addresses in each Master node's etcd configuration.
5.1.1 Master01
vim /etc/etcd/etcd.config.yml
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://172.20.103.181:2380'
listen-client-urls: 'https://172.20.103.181:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://172.20.103.181:2380'
advertise-client-urls: 'https://172.20.103.181:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://172.20.103.181:2380,k8s-master02=https://172.20.103.182:2380,k8s-master03=https://172.20.103.183:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
5.1.2 Master02
vim /etc/etcd/etcd.config.yml
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://172.20.103.182:2380'
listen-client-urls: 'https://172.20.103.182:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://172.20.103.182:2380'
advertise-client-urls: 'https://172.20.103.182:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://172.20.103.181:2380,k8s-master02=https://172.20.103.182:2380,k8s-master03=https://172.20.103.183:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
5.1.3 Master03
vim /etc/etcd/etcd.config.yml
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://172.20.103.183:2380'
listen-client-urls: 'https://172.20.103.183:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://172.20.103.183:2380'
advertise-client-urls: 'https://172.20.103.183:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://172.20.103.181:2380,k8s-master02=https://172.20.103.182:2380,k8s-master03=https://172.20.103.183:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
5.1.4 Creating the Service
Create the etcd service on all Master nodes and start it:
vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
Create the etcd certificate directory on all Master nodes, then enable and start etcd:
mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd
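If etcd does not come up on all three nodes, check the service status and recent logs first (optional troubleshooting step):
systemctl status etcd --no-pager
journalctl -u etcd --no-pager | tail -n 20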
Check the etcd status:
export ETCDCTL_API=3
etcdctl --endpoints="172.20.103.181:2379,172.20.103.182:2379,172.20.103.183:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint status --write-out=table
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|      ENDPOINT       |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 172.20.103.181:2379 | e5b14038459a91bb |  3.4.13 |   20 kB |     false |      false |        28 |         19 |                 19 |        |
| 172.20.103.182:2379 | 31f5c99f69828895 |  3.4.13 |   20 kB |     false |      false |        28 |         19 |                 19 |        |
| 172.20.103.183:2379 | 2668af6475671e28 |  3.4.13 |   20 kB |      true |      false |        28 |         19 |                 19 |        |
+---------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
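An additional quick health check across all members (optional; each endpoint should report "is healthy: successfully committed proposal"):
etcdctl --endpoints="172.20.103.181:2379,172.20.103.182:2379,172.20.103.183:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health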
To be continued.
Reference link:
http://www.kubeasy.com/