Deploying Kubernetes 1.18.5 with kubeadm

Prerequisites: the CentOS 7 hosts have already been initialized for running Kubernetes, and the etcd cluster has already been deployed.

Test environment: 192.168.81.136 (k8s-master), 192.168.81.137 (k8s-node1), 192.168.81.138 (k8s-node2)

  1. Install Docker (run on all of 136, 137, and 138)
cd /etc/yum.repos.d/  &&  sudo wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum install docker-ce -y
sudo cp /usr/share/bash-completion/completions/docker /etc/bash_completion.d/

sudo mkdir -p /etc/docker/

sudo tee /etc/docker/daemon.json <<EOF
{
  "log-driver": "json-file",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  },
  "live-restore": true,
  "max-concurrent-downloads": 10,
  "max-concurrent-uploads": 10,
  "registry-mirrors": ["https://2lefsjdg.mirror.aliyuncs.com"],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

sudo systemctl enable --now docker
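
Optional sanity check (not part of the original steps): confirm that Docker picked up the systemd cgroup driver from daemon.json, since kubelet is expected to use the same driver.

# Should print "Cgroup Driver: systemd"
sudo docker info | grep -i 'cgroup driver'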
  2. Install kubeadm

Run on nodes 136, 137, and 138.

sudo vim /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
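
Optional check: confirm that the 1.18.5 packages are actually available through the mirror before installing (the grep filter is just an example).

# List available kubelet versions and filter for 1.18.5
yum list kubelet --showduplicates --disableexcludes=kubernetes | grep 1.18.5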

Master node (run on 136):

sudo yum install kubelet-1.18.5 kubeadm-1.18.5 kubectl-1.18.5 --disableexcludes=kubernetes

Worker nodes (run on 137 and 138):

sudo yum install kubelet-1.18.5 kubeadm-1.18.5  --disableexcludes=kubernetes

Enable kubelet on boot (run on 136, 137, and 138):

sudo systemctl enable kubelet
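
Note: kubelet will keep restarting until kubeadm init (or kubeadm join) writes its configuration; that is expected at this stage. To watch it:

# kubelet crash-loops until the node is initialized or joined
sudo systemctl status kubelet
sudo journalctl -u kubelet -f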
  3. Install the cluster with kubeadm

Create the apiserver-to-etcd client certificate for the control-plane node. Do this on the machine that holds the root CA certificate (the host used for the earlier etcd deployment, i.e. 136 here).

sudo vim /zhanghao/data/certs/apiserver-etcd-client-ca-csr.json 

{
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "O": "zh",
      "OU": "zh",
      "L": "bj",
      "ST": "bj",
      "C": "china"
    }
  ],
  "CN": "apiserver-etcd-client"
}

sudo /zhanghao/soft/cfssl/cfssl gencert \
--ca /zhanghao/data/certs/etcd-root-ca.pem \
--ca-key /zhanghao/data/certs/etcd-root-ca-key.pem \
--config /zhanghao/data/certs/etcd-gencert.json \
/zhanghao/data/certs/apiserver-etcd-client-ca-csr.json | sudo /zhanghao/soft/cfssl/cfssljson --bare /zhanghao/data/certs/apiserver-etcd-client
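
Optional verification (assuming the paths above): inspect the generated client certificate and check that it chains to the etcd root CA.

# Show subject and validity period of the new client certificate
openssl x509 -in /zhanghao/data/certs/apiserver-etcd-client.pem -noout -subject -dates
# Verify the certificate against the etcd root CA
openssl verify -CAfile /zhanghao/data/certs/etcd-root-ca.pem /zhanghao/data/certs/apiserver-etcd-client.pem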

Copy the certificates to the worker nodes, i.e. nodes 137 and 138:

sudo scp /zhanghao/data/certs/{etcd-root-ca.pem,apiserver-etcd-client-key.pem,apiserver-etcd-client.pem} 192.168.81.137:/zhanghao/data/certs/

sudo scp /zhanghao/data/certs/{etcd-root-ca.pem,apiserver-etcd-client-key.pem,apiserver-etcd-client.pem} 192.168.81.138:/zhanghao/data/certs/
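
These scp commands assume /zhanghao/data/certs already exists on 137 and 138 (it should, from the earlier etcd deployment). If it does not, create it first, for example:

ssh root@192.168.81.137 "mkdir -p /zhanghao/data/certs"
ssh root@192.168.81.138 "mkdir -p /zhanghao/data/certs"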

Write the kubeadm init configuration file (node 136):

vim kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: 1.18.5
apiServer:
  certSANs:
  - "192.168.81.136"
controlPlaneEndpoint: "192.168.81.136:6443"
etcd:
  external:
    endpoints:
    - https://192.168.81.136:2379
    - https://192.168.81.137:2379
    - https://192.168.81.138:2379
    caFile: /etc/kubernetes/pki/etcd-root-ca.pem
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.pem
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client-key.pem
networking:
  podSubnet: "10.100.0.0/16"
  serviceSubnet: "10.99.0.0/16"
certificatesDir: /zhanghao/data/certs
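
Optional step before initializing: pre-pull the control-plane images with the same config file so that kubeadm init does not stall on downloads. This assumes the node can reach k8s.gcr.io.

# Pull the images referenced by the config ahead of time
sudo kubeadm config images pull --config kubeadm-config.yaml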

Create the certificate directory (node 136):

sudo mkdir /etc/kubernetes/pki

Copy the certificates into that directory (node 136):

sudo cp /zhanghao/data/certs/{etcd-root-ca.pem,apiserver-etcd-client-key.pem,apiserver-etcd-client.pem} /etc/kubernetes/pki/

Add hosts entries (136, 137, and 138):

sudo vim /etc/hosts

192.168.81.136 k8s-master
192.168.81.137 k8s-node1
192.168.81.138 k8s-node2
151.101.76.133 raw.githubusercontent.com

Run the initialization (node 136):

sudo kubeadm init --config=kubeadm-config.yaml

Output like the following indicates that the initialization succeeded:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

kubeadm join 192.168.81.136:6443 --token fh1n70.tlfg9hh62gnafrr8 \
--discovery-token-ca-cert-hash sha256:e1f53680668636cb6cd17c9b511169349aa09a6ac6f5c1e1ab50f783c6b05252 \
--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.81.136:6443 --token fh1n70.tlfg9hh62gnafrr8 \
--discovery-token-ca-cert-hash sha256:e1f53680668636cb6cd17c9b511169349aa09a6ac6f5c1e1ab50f783c6b05252

kubeadm init generates the following certificates and keys under /zhanghao/data/certs (node 136):

apiserver.crt
apiserver.key
apiserver-kubelet-client.crt
apiserver-kubelet-client.key
ca.crt
ca.key
front-proxy-ca.crt
front-proxy-ca.key
front-proxy-client.crt
front-proxy-client.key
sa.key
sa.pub

Check the Docker images (node 136):

sudo docker images
REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.18.5   a1daed4e2b60   4 months ago    117MB
k8s.gcr.io/kube-apiserver            v1.18.5   08ca24f16874   4 months ago    173MB
k8s.gcr.io/kube-controller-manager   v1.18.5   8d69eaf196dc   4 months ago    162MB
k8s.gcr.io/kube-scheduler            v1.18.5   39d887c6621d   4 months ago    95.3MB
k8s.gcr.io/pause                     3.2       80d28bedfe5d   8 months ago    683kB
k8s.gcr.io/coredns                   1.6.7     67da37a9a360   9 months ago    43.8MB
k8s.gcr.io/etcd                      3.4.3-0   303ce5db0e90   12 months ago   288MB

Back up the images for the other nodes: export them on the master node so they can later be transferred to the worker nodes (node 136):

sudo docker save k8s.gcr.io/kube-proxy:v1.18.5 \
k8s.gcr.io/kube-apiserver:v1.18.5 \
k8s.gcr.io/kube-controller-manager:v1.18.5 \
k8s.gcr.io/kube-scheduler:v1.18.5 \
k8s.gcr.io/pause:3.2 \
k8s.gcr.io/coredns:1.6.7 \
k8s.gcr.io/etcd:3.4.3-0 > k8s-imagesV1.18.5.tar

Run the following on the master node (136):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
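
Quick check that the kubeconfig works (a verification, not a required step):

kubectl cluster-info
kubectl get nodes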

Install the flannel network add-on (node 136). Note that the manifest's default pod network in net-conf.json is 10.244.0.0/16, which does not match the podSubnet configured above (10.100.0.0/16); edit kube-flannel.yml accordingly before applying it.

sudo wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sudo kubectl apply -f kube-flannel.yml
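
To watch flannel and CoreDNS come up (pod names will differ on your cluster):

sudo kubectl -n kube-system get pods -o wide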

Add environment variables (nodes 136, 137, and 138):

sudo vim /etc/profile

# Kubernetes

export KUBECONFIG=/etc/kubernetes/admin.conf

source /etc/profile

vim .bash_profile

export KUBECONFIG=$HOME/.kube/config

source .bash_profile
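
To confirm which kubeconfig the current shell is using:

echo $KUBECONFIG
kubectl config current-context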

Check the cluster pod status (node 136):

sudo kubectl -n kube-system get pod
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-j622s             1/1     Running   1          42h
coredns-66bff467f8-n6bqt             1/1     Running   1          42h
kube-apiserver-k8s-master            1/1     Running   2          42h
kube-controller-manager-k8s-master   1/1     Running   4          42h
kube-flannel-ds-gl4fm                1/1     Running   2          20h
kube-flannel-ds-tlfpj                1/1     Running   1          19h
kube-flannel-ds-zmhvn                1/1     Running   1          19h
kube-proxy-788tj                     1/1     Running   1          19h
kube-proxy-gffbb                     1/1     Running   1          19h
kube-proxy-trnxc                     1/1     Running   2          42h
kube-scheduler-k8s-master            1/1     Running   3          42h
  4. Join the worker nodes to the cluster

Pull the flannel image (nodes 137 and 138):

sudo docker pull quay.io/coreos/flannel:v0.13.0

Copy the Kubernetes images to the worker nodes (node 136):

sudo scp k8s-imagesV1.18.5.tar root@192.168.81.137:/root
sudo scp k8s-imagesV1.18.5.tar root@192.168.81.138:/root

Load the Kubernetes images (nodes 137 and 138):

sudo docker load < k8s-imagesV1.18.5.tar
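
Optional check that the images were imported:

sudo docker images | grep k8s.gcr.io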

Create the certificate directory on nodes 137 and 138:

sudo mkdir -p /etc/kubernetes/pki

Copy the config file, certificates, and private keys to the worker nodes (node 136):

cd /zhanghao/data/certs/
sudo scp sa* front-proxy-ca* apiserver-etcd-client* root@192.168.81.137:/etc/kubernetes/pki/
sudo scp /etc/kubernetes/admin.conf root@192.168.81.137:/etc/kubernetes/
sudo scp sa* front-proxy-ca* apiserver-etcd-client* root@192.168.81.138:/etc/kubernetes/pki/
sudo scp /etc/kubernetes/admin.conf root@192.168.81.138:/etc/kubernetes/

Add the nodes to the cluster with the kubeadm join command printed during the master node's initialization (nodes 137 and 138):

sudo kubeadm join 192.168.81.136:6443 --token fh1n70.tlfg9hh62gnafrr8 \
--discovery-token-ca-cert-hash sha256:e1f53680668636cb6cd17c9b511169349aa09a6ac6f5c1e1ab50f783c6b05252
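
The bootstrap token printed by kubeadm init expires after 24 hours by default. If the token above has expired by the time a node joins, generate a fresh join command on the master (136):

# Prints a complete kubeadm join command with a new token
sudo kubeadm token create --print-join-command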

Check the cluster node status (node 136):

sudo kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   45h   v1.18.5
k8s-node1    Ready    <none>   22h   v1.18.5
k8s-node2    Ready    <none>   22h   v1.18.5
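
As a final smoke test (optional; the deployment name is just an example), create a test workload and confirm its pod gets an address from the 10.100.0.0/16 pod subnet:

kubectl create deployment nginx --image=nginx
kubectl get pods -o wide
# Clean up afterwards
kubectl delete deployment nginx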