- 1. Prerequisites
- 1.1 Two ways to deploy a K8s cluster in production
- 1.2 Environment preparation
- 1.3 OS initialization
- 2. Deploy the Etcd Cluster
- 2.1 Prepare the cfssl certificate tools
- 2.2 Generate Etcd certificates
- 1. Self-signed certificate authority (CA)
- 2. Issue the Etcd HTTPS certificate with the self-signed CA
- 2.3 Download the binaries from GitHub
- 2.4 Deploy the Etcd cluster
- 1. Create the working directory and unpack the binaries
- 2. Create the etcd configuration file
- 3. Manage etcd with systemd
- 4. Copy the certificates generated earlier
- 5. Start and enable on boot
- 6. Copy all files generated on node 1 to node 2 and node 3
- 7. Check the cluster status
- 3. Install Docker
- 3.1 Unpack the binaries
- 3.2 Manage docker with systemd
- 3.3 Create the configuration file
- 3.4 Start and enable on boot
- 4. Deploy the Master Node
- 4.1 Generate the kube-apiserver certificate
- 1. Self-signed certificate authority (CA)
- 2. Issue the kube-apiserver HTTPS certificate with the self-signed CA
- 4.2 Download the binaries from GitHub
- 4.3 Unpack the binaries
- 4.4 Deploy kube-apiserver
- 1. Create the configuration file
- 2. Copy the certificates generated earlier
- 3. Enable the TLS Bootstrapping mechanism
- 4. Manage apiserver with systemd
- 5. Start and enable on boot
- 4.5 Deploy kube-controller-manager
- 1. Create the configuration file
- 2. Generate the kubeconfig file
- 3. Manage controller-manager with systemd
- 4. Start and enable on boot
- 4.6 Deploy kube-scheduler
- 1. Create the configuration file
- 2. Generate the kubeconfig file
- 3. Manage scheduler with systemd
- 4. Start and enable on boot
- 5. Check the cluster status
- 6. Authorize the kubelet-bootstrap user to request certificates
- 5. Deploy the Worker Node
- 5.1 Create the working directory and copy the binaries
- 5.2 Deploy kubelet
- 1. Create the configuration file
- 2. Create the parameter file
- 3. Generate the bootstrap kubeconfig for the kubelet's first join
- 4. Manage kubelet with systemd
- 5. Start and enable on boot
- 5.3 Approve the kubelet certificate request and join the cluster
- 5.4 Deploy kube-proxy
- 1. Create the configuration file
- 2. Create the parameter file
- 3. Generate the kube-proxy.kubeconfig file
- 4. Manage kube-proxy with systemd
- 5. Start and enable on boot
- 5.5 Deploy the CNI network
- 5.6 Authorize apiserver access to kubelet
- 5.7 Add a new Worker Node
- 1. Copy the deployed Node files to the new node
- 2. Delete the kubelet certificate and kubeconfig file
- 3. Change the hostname
- 4. Start and enable on boot
- 5. Approve the new Node's kubelet certificate request on the Master
- 6. Check the Node status
- 7. Enable kubectl command auto-completion
- 6. Deploy CoreDNS
- 7. Kubernetes service exposure plugin -- ingress-nginx
- 8. Kubernetes service exposure plugin -- traefik
- 9. Scale Out to Multiple Masters (HA Architecture)
- 9.1 Deploy the Master2 Node
- 1. Install docker
- 2. Create the etcd certificate directory
- 3. Copy files (on Master1)
- 4. Delete certificate files (on Master2)
- 5. Change the IP and hostname in the configuration files
- 6. Start and enable on boot
- 7. Check the cluster status
- 9.2 Deploy the Nginx+Keepalived HA load balancer
- 1. Install packages (primary/backup)
- 2. Nginx configuration file (same on primary and backup)
- 3. keepalived configuration file (Nginx Master)
- 4. keepalived configuration file (Nginx Backup)
- 5. Start and enable on boot
- 6. Check the keepalived status
- 7. Nginx+Keepalived failover test
- 8. Access the load balancer test
- 9.3 Point all Worker Nodes at the LB VIP
- 10. Deploy the Dashboard
- 11. Deploy Metrics-server
- 1. Core metrics monitoring
- 2. Download the yaml file
- 3. Modify the yaml file
- 4. Deploy metrics-server
- 5. Check which node metrics-server runs on
- 6. List the Kubernetes cluster's API groups
- 7. Check resource usage
1. Prerequisites
1.1 Two ways to deploy a K8s cluster in production
- kubeadm
kubeadm is a K8s deployment tool that provides kubeadm init and kubeadm join for quickly standing up a Kubernetes cluster.
- Binary packages
Download the release binaries from GitHub and deploy each component by hand to assemble the Kubernetes cluster.
- Summary:
kubeadm lowers the barrier to entry, but it hides many details, which makes troubleshooting harder. If you want more control, deploying the Kubernetes cluster from binaries is recommended: it is more work, but you learn a lot about how the components fit together along the way, and it pays off for later maintenance.
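Purely for comparison, the kubeadm path mentioned above boils down to roughly two commands. This is only an illustrative sketch with placeholder values (they are not used anywhere else in this guide); the rest of this article follows the binary approach.
# On the first master
kubeadm init --pod-network-cidr=10.244.0.0/16
# On each worker, with the token and hash printed by kubeadm init
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>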
1.2 Environment preparation
- Server requirements:
• Recommended minimum hardware: 2 CPU cores, 2 GB RAM, 30 GB disk.
• The servers should ideally have Internet access, since images are pulled from the network; if they cannot reach the Internet, download the required images in advance and import them onto the nodes.
- Software environment:
Software | Version |
---|---|
OS | CentOS 7.x_x64 |
Container engine | Docker CE 19 |
Kubernetes | v1.20 |
- Server plan:
Role | IP | Components |
---|---|---|
k8s-master1 | 192.168.31.71 | kube-apiserver,kube-controller-manager,kube-scheduler,docker,etcd |
k8s-master2 | 192.168.31.74 | kube-apiserver,kube-controller-manager,kube-scheduler,docker |
k8s-node1 | 192.168.31.72 | kubelet,kube-proxy,docker,etcd |
k8s-node2 | 192.168.31.73 | kubelet,kube-proxy,docker,etcd |
Load balancer | 192.168.31.88 | nginx,keepalived |
Note: since some readers' machines cannot run four VMs at once, this K8s HA cluster is built in two stages: first a single-Master architecture (3 machines), then a scale-out to a multi-Master architecture (4 or 6 machines), which also walks through the Master scale-out procedure.
Single-Master architecture diagram:
Single-Master server plan:
Role | IP | Components |
---|---|---|
k8s-master1 | 192.168.31.71 | kube-apiserver,kube-controller-manager,kube-scheduler,etcd |
k8s-node1 | 192.168.31.72 | kubelet,kube-proxy,docker,etcd |
k8s-node2 | 192.168.31.73 | kubelet,kube-proxy,docker,etcd |
1.3 OS initialization
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent
setenforce 0 # temporary
# Disable swap
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
# Set the hostname according to the plan
hostnamectl set-hostname <hostname>
# Add hosts entries on every node
cat >> /etc/hosts << EOF
192.168.31.71 k8s-master1
192.168.31.72 k8s-node1
192.168.31.73 k8s-node2
EOF
# Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system # apply
# Time synchronization
yum install ntpdate -y
echo '*/5 * * * * /usr/sbin/ntpdate time1.aliyun.com >/dev/null 2>&1' >>/var/spool/cron/root
# File descriptor limits
ulimit -SHn 65535
echo "* - nofile 65535" >>/etc/security/limits.conf
# Install the EPEL repository
yum install epel-release -y
# Load the ipvs kernel modules
yum install -y ipvsadm
cat /root/ip_vs.sh
#!/bin/bash
ipvs_mods_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"
for i in $(ls $ipvs_mods_dir|grep -o "^[^.]*")
do
/sbin/modinfo -F filename $i &>/dev/null
if [ $? -eq 0 ];then
/sbin/modprobe $i
fi
done
sh /root/ip_vs.sh
[root@k8s-node2 ~]# lsmod |grep ip_vs
ip_vs_wrr 16384 0
ip_vs_wlc 16384 0
ip_vs_sh 16384 0
ip_vs_sed 16384 0
ip_vs_rr 16384 0
ip_vs_pe_sip 16384 0
nf_conntrack_sip 32768 1 ip_vs_pe_sip
ip_vs_ovf 16384 0
ip_vs_nq 16384 0
ip_vs_lc 16384 0
ip_vs_lblcr 16384 0
ip_vs_lblc 16384 0
ip_vs_ftp 16384 0
nf_nat 36864 1 ip_vs_ftp
ip_vs_fo 16384 0
ip_vs_dh 16384 0
ip_vs 176128 28 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_ovf,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_pe_sip,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack 163840 3 nf_nat,nf_conntrack_sip,ip_vs
nf_defrag_ipv6 20480 2 nf_conntrack,ip_vs
libcrc32c 16384 4 nf_conntrack,nf_nat,xfs,ip_vs
# For convenience, set up passwordless SSH from k8s-master1 to the other nodes
ssh-keygen -t rsa
ssh-copy-id k8s-node1
ssh-copy-id k8s-node2
2. Deploy the Etcd Cluster
Etcd is a distributed key-value store; Kubernetes uses it for all of its cluster data, so an Etcd database must be prepared first. To avoid a single point of failure, deploy it as a cluster: a 3-node cluster tolerates 1 machine failure, and a 5-node cluster tolerates 2.
Node name | IP |
---|---|
etcd-1 | 192.168.31.71 |
etcd-2 | 192.168.31.72 |
etcd-3 | 192.168.31.73 |
Note: to save machines, etcd is co-located with the K8s nodes here. It can also be deployed outside the K8s cluster, as long as the apiserver can reach it.
2.1 Prepare the cfssl certificate tools
cfssl is an open-source certificate management tool that generates certificates from JSON files and is easier to use than openssl.
Run this on any one server; the Master node is used here.
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
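A quick sanity check that the three tools are installed and on the PATH (not part of the original steps):
which cfssl cfssljson cfssl-certinfo
cfssl version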
2.2 Generate Etcd certificates
1. Self-signed certificate authority (CA)
- Create the working directories:
mkdir -p ~/TLS/{etcd,k8s}
cd ~/TLS/etcd
- Self-signed CA:
[root@k8s-master1 src]# cd ~/TLS/etcd/
[root@k8s-master1 etcd]# cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "175200h"
},
"profiles": {
"www": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
[root@k8s-master1 etcd]# cat > ca-csr.json << EOF
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
- Generate the certificate:
[root@k8s-master1 etcd]# cfssl gencert -initca ca-csr.json |cfssljson -bare ca -
[root@k8s-master1 etcd]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
[root@k8s-master1 etcd]# ls ca*pem
ca-key.pem ca.pem
[root@k8s-master1 etcd]#
This produces ca.pem and ca-key.pem.
2. Issue the Etcd HTTPS certificate with the self-signed CA
- Create the certificate signing request file:
[root@k8s-master1 etcd]# cat > server-csr.json << EOF
{
"CN": "etcd",
"hosts": [
"192.168.31.71",
"192.168.31.72",
"192.168.31.73"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
Note: the IPs in the hosts field above are the internal communication IPs of all etcd nodes; not a single one may be missing! A few spare IPs can be added to make later expansion easier.
- Generate the certificate:
[root@k8s-master1 etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
[root@k8s-master1 etcd]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem server.csr server-csr.json server-key.pem server.pem
[root@k8s-master1 etcd]# ls server*pem
server-key.pem server.pem
[root@k8s-master1 etcd]#
This produces server.pem and server-key.pem.
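Optionally, inspect the issued certificate and confirm the three etcd IPs really ended up in the SAN list; either of the commands below works (a sanity check, not required by the procedure):
cfssl-certinfo -cert server.pem
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"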
2.3 Download the binaries from GitHub
Download URL: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
[root@k8s-master1 ~]# cd /usr/local/src/
[root@k8s-master1 src]# ls
[root@k8s-master1 src]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
2.4 Deploy the Etcd cluster
The following is done on node 1; to keep things simple, all files generated on node 1 will later be copied to node 2 and node 3.
1. Create the working directory and unpack the binaries
[root@k8s-master1 src]# tar xf etcd-v3.4.9-linux-amd64.tar.gz
[root@k8s-master1 src]# mkdir /opt/etcd/{bin,cfg,ssl} -p
[root@k8s-master1 src]# mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
2. Create the etcd configuration file
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.71:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.71:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.71:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.71:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.71:2380,etcd-2=https://192.168.31.72:2380,etcd-3=https://192.168.31.73:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: cluster (peer) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS: client address advertised to clients
ETCD_INITIAL_CLUSTER: addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; new for a new cluster, existing to join an existing one
3. Manage etcd with systemd
cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
4. Copy the certificates generated earlier
Copy the certificates generated above into the paths referenced by the configuration file:
[root@k8s-master1 ~]# cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/
[root@k8s-master1 ~]# ls /opt/etcd/ssl/
ca-key.pem ca.pem server-key.pem server.pem
[root@k8s-master1 ~]#
5. Start and enable on boot
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
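Note that on the first node systemctl start etcd may appear to hang (or eventually time out) while etcd waits for the other members of the initial cluster; this is expected and resolves once etcd is started on node 2 and node 3. In the meantime you can watch what it is doing:
systemctl status etcd --no-pager
journalctl -u etcd -f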
6. Copy all files generated on node 1 to node 2 and node 3
scp -r /opt/etcd/ root@k8s-node1:/opt/
scp /usr/lib/systemd/system/etcd.service root@k8s-node1:/usr/lib/systemd/system/
scp -r /opt/etcd/ root@k8s-node2:/opt/
scp /usr/lib/systemd/system/etcd.service root@k8s-node2:/usr/lib/systemd/system/
- Then, on node 2 and node 3, edit etcd.conf and change the node name and the server's own IP:
vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1" # change this: etcd-2 on node 2, etcd-3 on node 3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.71:2380" # change to this server's IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.71:2379" # change to this server's IP
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.71:2380" # change to this server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.71:2379" # change to this server's IP
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.71:2380,etcd-2=https://192.168.31.72:2380,etcd-3=https://192.168.31.73:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Finally start etcd and enable it on boot, the same way as above (a small automation sketch follows).
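If you prefer not to edit the file by hand, the per-node changes can be scripted. A small sketch for node 2, assuming the files were copied with scp as above (use etcd-3 and 192.168.31.73 on node 3):
# Rewrite the member name, then replace node 1's IP with this node's IP everywhere except the member list
sed -i '/^ETCD_NAME=/s/etcd-1/etcd-2/' /opt/etcd/cfg/etcd.conf
sed -i '/ETCD_INITIAL_CLUSTER=/!s/192.168.31.71/192.168.31.72/g' /opt/etcd/cfg/etcd.conf
systemctl daemon-reload && systemctl start etcd && systemctl enable etcd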
7. Check the cluster status
[root@k8s-node1 ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.31.71:2379,https://192.168.31.72:2379,https://192.168.31.73:2379" endpoint health --write-out=table
+----------------------------+--------+-------------+-------+
| ENDPOINT | HEALTH | TOOK | ERROR |
+----------------------------+--------+-------------+-------+
| https://192.168.31.71:2379 | true | 11.973732ms | |
| https://192.168.31.72:2379 | true | 12.155194ms | |
| https://192.168.31.73:2379 | true | 58.487706ms | |
+----------------------------+--------+-------------+-------+
[root@k8s-node1 ~]#
If you see output like the above, the cluster has been deployed successfully.
If something goes wrong, check the logs first: /var/log/messages or journalctl -u etcd.
3. Install Docker
Docker is used as the container engine here; it can be replaced with another one, e.g. containerd.
Download URL: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
Run the following on all nodes. The binary install is used here; installing with yum works just as well.
3.1 Unpack the binaries
tar zxvf docker-19.03.9.tgz
rsync -av docker/* /usr/bin/
## Sync to the worker nodes
rsync -av docker/* k8s-node1:/usr/bin/
rsync -av docker/* k8s-node2:/usr/bin/
3.2 Manage docker with systemd
cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
## Sync to the worker nodes
rsync -av /usr/lib/systemd/system/docker.service k8s-node1:/usr/lib/systemd/system/docker.service
rsync -av /usr/lib/systemd/system/docker.service k8s-node2:/usr/lib/systemd/system/docker.service
3.3 Create the configuration file
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
"insecure-registries": ["registry.access.redhat.com","quay.io","harbor.od.com"],
"registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"]
}
EOF
## Sync to the worker nodes
rsync -av /etc/docker k8s-node1:/etc/
rsync -av /etc/docker k8s-node2:/etc/
3.4 Start and enable on boot
systemctl daemon-reload
systemctl start docker
systemctl enable docker
## Verify that docker is running
docker info
4. Deploy the Master Node
4.1 Generate the kube-apiserver certificate
1. Self-signed certificate authority (CA)
cd ~/TLS/k8s
[root@k8s-master1 k8s]# cat > ca-config.json << EOF
{
"signing": {
"default": {
"expiry": "175200h"
},
"profiles": {
"kubernetes": {
"expiry": "175200h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
[root@k8s-master1 k8s]# cat > ca-csr.json << EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
Generate the certificate:
[root@k8s-master1 k8s]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
[root@k8s-master1 k8s]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
[root@k8s-master1 k8s]# ls ca*pem
ca-key.pem ca.pem
[root@k8s-master1 k8s]#
# This produces ca.pem and ca-key.pem.
2. Issue the kube-apiserver HTTPS certificate with the self-signed CA
- Create the certificate signing request file:
[root@k8s-master1 k8s]# cat > server-csr.json << EOF
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"192.168.31.71",
"192.168.31.72",
"192.168.31.73",
"192.168.31.74",
"192.168.31.75",
"192.168.31.76",
"192.168.31.77",
"192.168.31.88",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
Note: the IPs in the hosts field above are all of the Master/LB/VIP IPs; not one may be missing! A few spare IPs can be added to make later expansion easier. 10.0.0.1 is the first address of the Service network; the Service CIDR used in this guide is 10.0.0.0/24 (matching --service-cluster-ip-range below), so fill in the first IP of whichever Service network you use.
- Generate the certificate:
[root@k8s-master1 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
[root@k8s-master1 k8s]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem server.csr server-csr.json server-key.pem server.pem
[root@k8s-master1 k8s]# ll server*pem
-rw------- 1 root root 1679 Apr 12 23:01 server-key.pem
-rw-r--r-- 1 root root 1667 Apr 12 23:01 server.pem
[root@k8s-master1 k8s]#
This produces server.pem and server-key.pem.
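Because a missing hosts entry cannot be fixed without re-issuing the certificate, it is worth confirming that every Master/LB/VIP IP really made it into the SAN list (a sanity check using openssl, which ships with CentOS):
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"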
4.2 Download the binaries from GitHub
Download URL: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md
Note: the page lists many packages; the server package alone is enough, since it contains the binaries for both the Master and the Worker Node.
[root@k8s-master1 ~]# cd /usr/local/src/
[root@k8s-master1 src]# wget https://dl.k8s.io/v1.20.5/kubernetes-server-linux-amd64.tar.gz
4.3 Unpack the binaries
[root@k8s-master1 src]# mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
[root@k8s-master1 src]# tar xf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master1 src]# cd kubernetes/server/bin/
[root@k8s-master1 bin]# cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin/
[root@k8s-master1 bin]# cp kubectl /usr/bin/
[root@k8s-master1 bin]#
4.4 Deploy kube-apiserver
1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--etcd-servers=https://192.168.31.71:2379,https://192.168.31.72:2379,https://192.168.31.73:2379 \\
--bind-address=192.168.31.71 \\
--secure-port=6443 \\
--advertise-address=192.168.31.71 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-issuer=api \\
--service-account-signing-key-file=/opt/kubernetes/ssl/server-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem \\
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem \\
--requestheader-allowed-names=kubernetes \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
EOF
Parameter notes:
• --logtostderr: enable logging to stderr (false here, so logs go to --log-dir)
• --v: log level
• --log-dir: log directory
• --etcd-servers: etcd cluster endpoints
• --bind-address: listen address
• --secure-port: https secure port
• --advertise-address: address advertised to the cluster
• --allow-privileged: allow privileged containers
• --service-cluster-ip-range: Service virtual IP range
• --enable-admission-plugins: admission control plugins
• --authorization-mode: authentication/authorization; enables RBAC authorization and Node self-management
• --enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
• --token-auth-file: bootstrap token file
• --service-node-port-range: default port range for NodePort Services
• --kubelet-client-xxx: client certificates the apiserver uses to reach the kubelet
• --tls-xxx-file: apiserver https certificates
• Parameters required since v1.20: --service-account-issuer, --service-account-signing-key-file
• --etcd-xxxfile: certificates for connecting to the Etcd cluster
• --audit-log-xxx: audit logging
• Aggregation layer settings: --requestheader-client-ca-file, --proxy-client-cert-file, --proxy-client-key-file, --requestheader-allowed-names, --requestheader-extra-headers-prefix, --requestheader-group-headers, --requestheader-username-headers, --enable-aggregator-routing
2. Copy the certificates generated earlier
Copy the certificates generated above into the paths referenced by the configuration file:
[root@k8s-master1 ~]# cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/
[root@k8s-master1 ~]# ls /opt/kubernetes/ssl/
ca-key.pem ca.pem server-key.pem server.pem
[root@k8s-master1 ~]#
3. Enable the TLS Bootstrapping mechanism
TLS Bootstrapping: once the Master apiserver enables TLS authentication, the kubelet and kube-proxy on every Node must present valid CA-signed certificates to communicate with kube-apiserver. With many Nodes, issuing these client certificates by hand is a lot of work and complicates scaling the cluster. To simplify this, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet requests a certificate from the apiserver as a low-privileged user, and the apiserver signs the kubelet's certificate dynamically. This approach is strongly recommended on Nodes; at present it is used mainly for the kubelet, while kube-proxy still gets a single certificate that we issue ourselves.
TLS bootstrapping workflow:
- Create the token file referenced in the configuration above:
cat > /opt/kubernetes/cfg/token.csv << EOF
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
[root@k8s-master1 ~]# more /opt/kubernetes/cfg/token.csv
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"
[root@k8s-master1 ~]#
Format: token,username,UID,user group
The token can also be generated yourself and substituted in (see the sketch below):
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
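If you generate a fresh token, write it into token.csv and remember to use the same value later in the kubelet bootstrap kubeconfig (section 5.2). A small sketch, assuming the paths used in this guide:
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv << EOF
${TOKEN},kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
echo "bootstrap token: ${TOKEN}"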
4. Manage apiserver with systemd
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
5. Start and enable on boot
systemctl daemon-reload
systemctl start kube-apiserver
systemctl enable kube-apiserver
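Before moving on, it is worth confirming the apiserver actually came up and is listening on 6443; a quick check (not part of the original steps):
systemctl is-active kube-apiserver
ss -lntp | grep 6443
# If it failed, the recent logs usually explain why
journalctl -u kube-apiserver --no-pager | tail -n 30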
4.5 Deploy kube-controller-manager
1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect=true \\
--kubeconfig=/opt/kubernetes/cfg/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr 10.244.0.0/16 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--cluster-signing-duration=87600h0m0s"
EOF
--kubeconfig: kubeconfig used to connect to the apiserver
--leader-elect: automatic leader election when several replicas of this component run (HA)
--cluster-signing-cert-file/--cluster-signing-key-file: CA that automatically issues certificates for kubelets; must match the apiserver's CA
--cluster-cidr: Pod address range
2. Generate the kubeconfig file
- Generate the kube-controller-manager certificate:
# Switch to the working directory
cd ~/TLS/k8s
# Create the certificate signing request file
cat > kube-controller-manager-csr.json << EOF
{
"CN": "system:kube-controller-manager",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
[root@k8s-master1 k8s]# ll kube-controller-manager*pem
-rw------- 1 root root 1679 Apr 13 10:43 kube-controller-manager-key.pem
-rw-r--r-- 1 root root 1436 Apr 13 10:43 kube-controller-manager.pem
[root@k8s-master1 k8s]#
- Generate the kubeconfig file (the following are shell commands, run directly in the terminal):
[root@k8s-master1 k8s]#
[root@k8s-master1 k8s]# pwd
/root/TLS/k8s
KUBE_CONFIG="/opt/kubernetes/cfg/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://192.168.31.71:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-controller-manager \
--client-certificate=./kube-controller-manager.pem \
--client-key=./kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-controller-manager \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
3. Manage controller-manager with systemd
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
4. Start and enable on boot
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
4.6 Deploy kube-scheduler
1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--leader-elect \\
--kubeconfig=/opt/kubernetes/cfg/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"
EOF
--kubeconfig: kubeconfig used to connect to the apiserver
--leader-elect: automatic leader election when several replicas of this component run (HA)
2. Generate the kubeconfig file
- Generate the kube-scheduler certificate:
# Switch to the working directory
cd ~/TLS/k8s
# Create the certificate signing request file
cat > kube-scheduler-csr.json << EOF
{
"CN": "system:kube-scheduler",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
[root@k8s-master1 k8s]# ll kube-scheduler*pem
-rw------- 1 root root 1675 Apr 13 10:59 kube-scheduler-key.pem
-rw-r--r-- 1 root root 1424 Apr 13 10:59 kube-scheduler.pem
[root@k8s-master1 k8s]#
- Generate the kubeconfig file (the following are shell commands, run directly in the terminal):
[root@k8s-master1 ~]# cd /root/TLS/k8s/
[root@k8s-master1 k8s]#
KUBE_CONFIG="/opt/kubernetes/cfg/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://192.168.31.71:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-scheduler \
--client-certificate=./kube-scheduler.pem \
--client-key=./kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-scheduler \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
3. Manage scheduler with systemd
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
4. Start and enable on boot
systemctl daemon-reload
systemctl start kube-scheduler
systemctl enable kube-scheduler
5. Check the cluster status
- Generate the certificate kubectl uses to connect to the cluster:
[root@k8s-master1 ~]# cd /root/TLS/k8s/
[root@k8s-master1 k8s]#
cat > admin-csr.json <<EOF
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
[root@k8s-master1 k8s]# ll admin*pem
-rw------- 1 root root 1675 Apr 13 11:06 admin-key.pem
-rw-r--r-- 1 root root 1399 Apr 13 11:06 admin.pem
[root@k8s-master1 k8s]#
- Generate the kubeconfig file:
[root@k8s-master1 k8s]# mkdir /root/.kube
[root@k8s-master1 k8s]# pwd
/root/TLS/k8s
[root@k8s-master1 k8s]#
KUBE_CONFIG="/root/.kube/config"
KUBE_APISERVER="https://192.168.31.71:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials cluster-admin \
--client-certificate=./admin.pem \
--client-key=./admin-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=cluster-admin \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
- Check the current cluster component status with kubectl:
[root@k8s-master1 k8s]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
[root@k8s-master1 k8s]#
Output like the above means the Master components are running normally.
6. Authorize the kubelet-bootstrap user to request certificates
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
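The binding can be verified immediately (a quick check, not part of the original steps):
kubectl describe clusterrolebinding kubelet-bootstrap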
5. Deploy the Worker Node
The worker components can also be deployed on the Master Node (the procedure is identical); here they are kept separate and deployed directly on the Worker Nodes.
5.1 Create the working directory and copy the binaries
- Create the working directories on all worker nodes:
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
- Download and unpack the binary package, then copy the files
[root@k8s-node1 ~]# cd /usr/local/src/
# Unpack the binary package (the same kubernetes-server tarball as in 4.2)
[root@k8s-node1 src]# tar xf kubernetes-server-linux-amd64.tar.gz
[root@k8s-node1 src]# cd kubernetes/server/bin/
[root@k8s-node1 bin]# cp kubelet kube-proxy /opt/kubernetes/bin/
[root@k8s-node1 bin]# cp kubectl /usr/bin/
# Copy the CA certificate from the master node
[root@k8s-node1 ~]# scp root@192.168.31.71:/opt/kubernetes/ssl/ca.pem /opt/kubernetes/ssl/
[root@k8s-node1 ~]# ls /opt/kubernetes/ssl/
ca.pem
[root@k8s-node1 ~]#
5.2 Deploy kubelet
1. Create the configuration file
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-node1 \\
--network-plugin=cni \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=hebye/pause-amd64:3.0"
EOF
--hostname-override: display name, unique within the cluster
--network-plugin: enable CNI
--kubeconfig: empty path; the file is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: parameter configuration file
--cert-dir: directory where the kubelet certificates are generated
--pod-infra-container-image: image of the container that manages the Pod network (pause)
2. Create the parameter file
cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
anonymous:
enabled: false
webhook:
cacheTTL: 2m0s
enabled: true
x509:
clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
mode: Webhook
webhook:
cacheAuthorizedTTL: 5m0s
cacheUnauthorizedTTL: 30s
evictionHard:
imagefs.available: 15%
memory.available: 100Mi
nodefs.available: 10%
nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
3. Generate the bootstrap kubeconfig for the kubelet's first join
# Run on the k8s-node1 node
KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://192.168.31.71:6443" # apiserver IP:PORT
TOKEN="c47ffb939f5ca36231d9e3121a252940" # must match token.csv
# Generate the kubelet bootstrap kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials "kubelet-bootstrap" \
--token=${TOKEN} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
4. Manage kubelet with systemd
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
5. Start and enable on boot
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
5.3 Approve the kubelet certificate request and join the cluster
# Check the kubelet certificate requests (on the master node)
[root@k8s-master1 ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-zSj57IbZ-O5qqoAv0LixydsBziNdXqQj96w3cXrsJhA 2m24s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
[root@k8s-master1 ~]#
# Approve the request
[root@k8s-master1 ~]# kubectl certificate approve node-csr-zSj57IbZ-O5qqoAv0LixydsBziNdXqQj96w3cXrsJhA
certificatesigningrequest.certificates.k8s.io/node-csr-zSj57IbZ-O5qqoAv0LixydsBziNdXqQj96w3cXrsJhA approved
[root@k8s-master1 ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-zSj57IbZ-O5qqoAv0LixydsBziNdXqQj96w3cXrsJhA 3m24s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued
[root@k8s-master1 ~]#
# Check the nodes
[root@k8s-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node1 NotReady <none> 3m36s v1.20.5
[root@k8s-master1 ~]#
Note: the node shows NotReady because the network plugin has not been deployed yet.
5.4 Deploy kube-proxy
1. Create the configuration file
cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF
2. Create the parameter file
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
iptables:
masqueradeAll: true
masqueradeBit: null
minSyncPeriod: 0s
syncPeriod: 0s
ipvs:
masqueradeAll: true
excludeCIDRs: null
minSyncPeriod: 0s
scheduler: "rr"
strictARP: false
syncPeriod: 0s
tcpFinTimeout: 0s
tcpTimeout: 0s
udpTimeout: 0s
mode: "ipvs"
clientConnection:
kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-node1
clusterCIDR: 10.0.0.0/24
EOF
Note: set hostnameOverride to the node's hostname.
clusterCIDR: kube-proxy uses --cluster-cidr to distinguish cluster-internal traffic from external traffic; only when --cluster-cidr or --masquerade-all is set does kube-proxy SNAT requests to Service IPs.
clusterCIDR: 10.0.0.0/24 here is the cluster's Service CIDR and must match --service-cluster-ip-range=10.0.0.0/24 in kube-apiserver.conf and kube-controller-manager.conf.
3. Generate the kube-proxy.kubeconfig file
- Generate the kube-proxy certificate:
# Generate the certificate on the master node
# Switch to the working directory
cd ~/TLS/k8s
# Create the certificate signing request file
cat > kube-proxy-csr.json << EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing",
"O": "k8s",
"OU": "System"
}
]
}
EOF
# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@k8s-master1 k8s]# ll kube-proxy*pem
-rw------- 1 root root 1675 Apr 13 15:29 kube-proxy-key.pem
-rw-r--r-- 1 root root 1403 Apr 13 15:29 kube-proxy.pem
[root@k8s-master1 k8s]#
# Copy the certificates to the k8s-node1 node
[root@k8s-master1 k8s]# rsync -av kube-proxy*pem k8s-node1:/opt/kubernetes/ssl/
# On k8s-node1, generate the kubeconfig file:
[root@k8s-node1 ssl]# pwd
/opt/kubernetes/ssl
[root@k8s-node1 ssl]#
KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://192.168.31.71:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
4. Manage kube-proxy with systemd
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
5. Start and enable on boot
systemctl daemon-reload
systemctl start kube-proxy
systemctl enable kube-proxy
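Since mode: "ipvs" was set in kube-proxy-config.yml, it is worth confirming that kube-proxy really started in ipvs mode; it silently falls back to iptables if the ipvs kernel modules are missing. A quick check against the metrics port configured above (10249); the /proxyMode endpoint is assumed to be available in this kube-proxy version:
curl -s 127.0.0.1:10249/proxyMode   # expected output: ipvs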
5.5 Deploy the CNI network
First prepare the CNI binaries. Download URL: https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
- Unpack the binaries into the default working directory:
[root@k8s-master1 ~]# mkdir /opt/cni/bin/ -p
[root@k8s-master1 src]# wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
[root@k8s-master1 src]# tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
# Copy to the compute node
[root@k8s-master1 src]# scp -r /opt/cni/ root@192.168.31.72:/opt/
- Deploy the CNI network:
[root@k8s-master1 src]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# The default network in the manifest is 10.244.0.0/16; change it to match your own Pod network.
[root@k8s-master1 src]# kubectl apply -f kube-flannel.yml
- Check the network pods
[root@k8s-master1 src]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-6hzgv 1/1 Running 0 16m
[root@k8s-master1 src]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready <none> 89m v1.20.5
[root@k8s-master1 src]#
- Check ipvs mode
[root@k8s-node1 ~]# yum install ipvsadm -y
[root@k8s-node1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.0.0.1:443 rr
-> 192.168.31.71:6443 Masq 1 0 0
[root@k8s-node1 ~]#
Once the network plugin is deployed, the Node becomes Ready.
5.6 Authorize apiserver access to kubelet
Use case: commands such as kubectl logs
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
- ""
resources:
- nodes/proxy
- nodes/stats
- nodes/log
- nodes/spec
- nodes/metrics
- pods/log
verbs:
- "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: system:kube-apiserver
namespace: ""
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: kubernetes
EOF
[root@k8s-master1 src]# kubectl apply -f apiserver-to-kubelet-rbac.yaml
[root@k8s-master1 ~]# kubectl logs kube-flannel-ds-6hzgv -n kube-system
5.7 Add a new Worker Node
1. Copy the deployed Node files to the new node
On k8s-node1, copy the files the Worker Node needs to the new node, 192.168.31.73.
[root@k8s-node1 ~]# scp -r /opt/kubernetes root@k8s-node2:/opt/
[root@k8s-node1 ~]# scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@k8s-node2:/usr/lib/systemd/system
[root@k8s-node1 ~]# scp -r /usr/bin/kubectl root@k8s-node2:/usr/bin/
[root@k8s-node1 ~]# scp /opt/kubernetes/ssl/ca.pem root@k8s-node2:/opt/kubernetes/ssl
[root@k8s-node1 ~]# scp -r /opt/cni/ root@k8s-node2:/opt/
2. Delete the kubelet certificate and kubeconfig file
[root@k8s-node2 ~]# rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
[root@k8s-node2 ~]# rm -f /opt/kubernetes/ssl/kubelet*
[root@k8s-node2 ~]# rm -rf /opt/kubernetes/logs/*
Note: these files are generated automatically when the certificate request is approved and are different on every Node, so they must be deleted.
3. Change the hostname
vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node2
vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node2
4. Start and enable on boot
[root@k8s-node2 ~]# systemctl daemon-reload
[root@k8s-node2 ~]# systemctl start kubelet kube-proxy
[root@k8s-node2 ~]# systemctl enable kubelet kube-proxy
5. Approve the new Node's kubelet certificate request on the Master
# Check the certificate requests
[root@k8s-master1 ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-OSXO6bkhsJS8Da1RnmerXLrvINqYcKsQWyslsrJSdoU 2m35s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
# Approve the request
[root@k8s-master1 ~]# kubectl certificate approve node-csr-OSXO6bkhsJS8Da1RnmerXLrvINqYcKsQWyslsrJSdoU
certificatesigningrequest.certificates.k8s.io/node-csr-OSXO6bkhsJS8Da1RnmerXLrvINqYcKsQWyslsrJSdoU approved
[root@k8s-master1 ~]#
6. Check the Node status
[root@k8s-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready <none> 120m v1.20.5
k8s-node2 Ready <none> 56s v1.20.5
[root@k8s-master1 ~]#
7. Enable kubectl command auto-completion
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
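On CentOS 7 the completion script only works if the bash-completion package is present, so install it first if it is missing:
yum install bash-completion -y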
6. Deploy CoreDNS
# Prepare the resource manifests
[root@k8s-master1 coredns]# cat cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
log
health
ready
kubernetes cluster.local 10.0.0.0/16
forward . 202.106.0.20
cache 30
loop
reload
loadbalance
}
[root@k8s-master1 coredns]#
[root@k8s-master1 coredns]# cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: Reconcile
name: system:coredns
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: EnsureExists
name: system:coredns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
- kind: ServiceAccount
name: coredns
namespace: kube-system
[root@k8s-master1 coredns]#
[root@k8s-master1 coredns]# cat dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: coredns
kubernetes.io/name: "CoreDNS"
spec:
replicas: 1
selector:
matchLabels:
k8s-app: coredns
template:
metadata:
labels:
k8s-app: coredns
spec:
priorityClassName: system-cluster-critical
serviceAccountName: coredns
containers:
- name: coredns
image: coredns/coredns:1.8.3
args:
- -conf
- /etc/coredns/Corefile
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
[root@k8s-master1 coredns]#
[root@k8s-master1 coredns]# cat svc.yaml
apiVersion: v1
kind: Service
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: coredns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: coredns
clusterIP: 10.0.0.2
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
- name: metrics
port: 9153
[root@k8s-master1 coredns]#
# Apply the resource manifests
kubectl apply -f cm.yaml
kubectl apply -f rbac.yaml
kubectl apply -f dp.yaml
kubectl apply -f svc.yaml
- Verify
[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-658c9cd845-n9lct 1/1 Running 0 10m
[root@k8s-master1 ~]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coredns ClusterIP 10.0.0.2 <none> 53/UDP,53/TCP,9153/TCP 10m
[root@k8s-master1 ~]#
- Test resolution with dig
CoreDNS can already resolve external domains, because its Corefile forwards to the upstream DNS server 202.106.0.20; anything CoreDNS cannot answer itself is resolved recursively through that upstream.
[root@k8s-node1 ~]# dig -t A www.baidu.com @10.0.0.2 +short
www.a.shifen.com.
110.242.68.3
110.242.68.4
[root@k8s-node1 ~]#
But CoreDNS is not deployed here for external resolution; its job is resolving Service names to Service IPs.
- Create a Service resource to verify this
# 1 Create a deployment
[root@k8s-master1 ~]# kubectl create deployment nginx-dp --image=nginx -n kube-public
deployment.apps/nginx-dp created
[root@k8s-master1 ~]#
[root@k8s-master1 ~]# kubectl get deployment -n kube-public
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-dp 1/1 1 1 3m19s
[root@k8s-master1 ~]# kubectl get pods -n kube-public
NAME READY STATUS RESTARTS AGE
nginx-dp-5fcfbd685-svbhg 1/1 Running 0 3m35s
[root@k8s-master1 ~]#
# 2 Create the service
[root@k8s-master1 ~]# kubectl expose deployment nginx-dp --port=80 -n kube-public
service/nginx-dp exposed
[root@k8s-master1 ~]# kubectl get service -n kube-public
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-dp ClusterIP 10.0.0.104 <none> 80/TCP 11s
[root@k8s-master1 ~]#
# 3 Verify resolution
[root@k8s-node1 ~]# dig -t A nginx-dp.kube-public.svc.cluster.local @10.0.0.2 +short
10.0.0.104
[root@k8s-node1 ~]#
# Full domain format: <service>.<namespace>.svc.cluster.local
# 4 Verify again from inside a pod
[root@k8s-master1 ~]# kubectl exec -it nginx-dp-5fcfbd685-svbhg /bin/bash -n kube-public
root@nginx-dp-5fcfbd685-svbhg:/# ping nginx-dp.kube-public.svc.cluster.local
PING nginx-dp.kube-public.svc.cluster.local (10.0.0.104): 48 data bytes
56 bytes from 10.0.0.104: icmp_seq=0 ttl=64 time=0.046 ms
root@nginx-dp-5fcfbd685-svbhg:/# ping nginx-dp
PING nginx-dp.kube-public.svc.cluster.local (10.0.0.104): 48 data bytes
56 bytes from 10.0.0.104: icmp_seq=0 ttl=64 time=0.046 ms
# Both the short name and the FQDN resolve; the short name works because of the search domains configured inside the container (see resolv.conf below)
root@nginx-dp-5fcfbd685-svbhg:/# more /etc/resolv.conf
nameserver 10.0.0.2
search kube-public.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
root@nginx-dp-5fcfbd685-svbhg:/#
Note: adapt the CoreDNS yaml to your environment; in this lab the CoreDNS Service IP is 10.0.0.2.
7. Kubernetes service exposure plugin -- ingress-nginx
K8s DNS gives services automatic discovery inside the cluster, but how are services consumed and accessed from outside the K8s cluster?
- Use a Service of type NodePort
Note: this cannot use kube-proxy's ipvs model, only the iptables model.
- Use an Ingress resource
Note: Ingress can only expose layer-7 applications, specifically the http and https protocols.
An Ingress is one of the standard K8s API resource types and a core resource: essentially a set of rules, keyed on host names and URL paths, that forward user requests to specific Service resources.
It forwards request traffic from outside the cluster to the inside, which is what "service exposure" means.
An ingress controller is the component that listens on a socket on behalf of Ingress resources and routes traffic according to the Ingress rule-matching mechanism.
Plainly put, there is nothing mysterious about ingress: it is just a stripped-down nginx plus a bit of Go code.
Commonly used ingress controller implementations:
- ingress-nginx
- HAProxy
- Traefik
- ...
- Create a deployment
kubectl create deployment web --image=nginx
- Create a service
kubectl expose deployment web --port=80
- Deploy the ingress controller
[root@k8s-master1 ingress]# cat ingress-controller.yaml
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: nginx-ingress-role
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrole-nisa-binding
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-ingress-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
hostNetwork: true
# wait up to five minutes for the drain of connections
terminationGracePeriodSeconds: 300
serviceAccountName: nginx-ingress-serviceaccount
nodeSelector:
kubernetes.io/os: linux
containers:
- name: nginx-ingress-controller
image: lizhenliang/nginx-ingress-controller:0.30.0
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 101
runAsUser: 101
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
---
apiVersion: v1
kind: LimitRange
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
limits:
- min:
memory: 90Mi
cpu: 100m
type: Container
[root@k8s-master1 ingress]#
[root@k8s-master1 ingress]# kubectl apply -f ingress-controller.yaml
# Check the ingress controller status
[root@k8s-master1 ~]# kubectl get pods -n ingress-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ingress-controller-5dc64b58f-qzddp 1/1 Running 0 71s 192.168.31.7 k8s-node1 <none> <none>
[root@k8s-master1 ~]#
# k8s-node1 now listens on 80 and 443
[root@k8s-node1 ~]# netstat -lntup|grep 80
tcp 0 0 192.168.31.7:2380 0.0.0.0:* LISTEN 1409/etcd
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 12022/nginx: master
tcp6 0 0 :::80 :::* LISTEN 12022/nginx: master
[root@k8s-node1 ~]# netstat -lntup|grep 443
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 12022/nginx: master
tcp6 0 0 :::443 :::* LISTEN 12022/nginx: master
[root@k8s-node1 ~]#
- Create an ingress rule
HTTP-based domain: a.ttibo.com
[root@k8s-master1 ingress]# cat ingress-web.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ttibo-ingress
spec:
rules:
- host: a.ttibo.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: web
port:
number: 80
[root@k8s-master1 ingress]#
[root@k8s-master1 ingress]# kubectl apply -f ingress-web.yaml
ingress.networking.k8s.io/ttibo-ingress created
[root@k8s-master1 ingress]#
[root@k8s-master1 ingress]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ttibo-ingress <none> a.ttibo.com 80 50s
[root@k8s-master1 ingress]#
Update the DNS record
Point a.ttibo.com at k8s-node1's public address (106.75.209.85).
Open http://a.ttibo.com/ in a browser.
HTTPS-based domain: ttibo.com
Prepare the certificate files ttibo.com.key and ttibo.com.crt.
- Create the secret
[root@k8s-master1 ingress]# kubectl create secret tls ttibo.com --key ttibo.com.key --cert ttibo.com.crt
secret/ttibo.com created
[root@k8s-master1 ingress]#
- Create the ingress rule
[root@k8s-master1 ingress]# cat ingress-https.yaml
# https
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: tls-example-ingress
spec:
tls:
- hosts:
- ttibo.com
secretName: ttibo.com
rules:
- host: ttibo.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web
port:
number: 80
[root@k8s-master1 ingress]#
[root@k8s-master1 ingress]# kubectl apply -f ingress-https.yaml
ingress.networking.k8s.io/tls-example-ingress created
[root@k8s-master1 ingress]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
tls-example-ingress <none> ttibo.com 80, 443 33s
[root@k8s-master1 ingress]#
DNS: point ttibo.com at k8s-node1's public address (106.75.209.85).
Open https://ttibo.com/ in a browser.
8. Kubernetes service exposure plugin -- traefik
[root@k8s-master1 traefik]# cat rbac.yaml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses/status
verbs:
- update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system
[root@k8s-master1 traefik]#
[root@k8s-master1 traefik]# cat ds.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
selector:
matchLabels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik:v1.7
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
hostPort: 80
- name: admin
containerPort: 8080
hostPort: 8080
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
args:
- --api
- --kubernetes
- --logLevel=INFO
- --kubernetes.endpoint=https://192.168.31.71:6443
- --accesslog
- --accesslog.filepath=/var/log/traefik_access.log
- --traefiklog
- --traefiklog.filepath=/var/log/traefik.log
- --metrics.prometheus
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
[root@k8s-master1 traefik]#
# Ingress for the traefik dashboard
[root@k8s-master1 traefik]# cat ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: traefik-web-ui
namespace: kube-system
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: traefik.od.com
http:
paths:
- path: /
backend:
serviceName: traefik-ingress-service
servicePort: 8080
[root@k8s-master1 traefik]#
Note: this ingress controller is deployed as a DaemonSet and listens on port 80 of every node; adjust as needed.
Check the related ingress information
[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-658c9cd845-bkt5x 1/1 Running 0 10d
kube-flannel-ds-85lzq 1/1 Running 0 11d
kube-flannel-ds-rgrm9 1/1 Running 0 11d
traefik-ingress-controller-2v6lw 1/1 Running 0 3h57m
traefik-ingress-controller-nprpx 1/1 Running 0 2d5h
[root@k8s-master1 ~]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coredns ClusterIP 192.168.0.2 <none> 53/UDP,53/TCP,9153/TCP 10d
traefik-ingress-service ClusterIP 192.168.178.213 <none> 80/TCP,8080/TCP 2d5h
[root@k8s-master1 ~]# kubectl get ingress -n kube-system
NAME CLASS HOSTS ADDRESS PORTS AGE
traefik-web-ui <none> traefik.od.com 80 2d5h
[root@k8s-master1 ~]#
- Configure the reverse proxy
nginx is used here as the proxy software, forwarding to the ingress controllers at the back end.
[root@k8s-master1 ~]# more /etc/nginx/conf.d/od.com.conf
upstream default_backend_traefik {
server 192.168.31.72:80 max_fails=3 fail_timeout=10s;
server 192.168.31.73:80 max_fails=3 fail_timeout=10s;
}
server {
server_name *.od.com;
location / {
proxy_pass http://default_backend_traefik;
proxy_set_header Host $http_host;
proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
}
}
[root@k8s-master1 ~]#
[root@k8s-master1 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@k8s-master1 ~]# nginx -s reload
# Point all the DNS records at this k8s-master server.
- Open it in a browser
9. Scale Out to Multiple Masters (HA Architecture)
As a container cluster system, Kubernetes already provides application-level high availability: health checks plus restart policies let Pods heal themselves, the scheduler spreads Pods across Nodes while keeping the desired replica count, and Pods are brought up on other Nodes when a Node fails.
For the cluster itself, high availability involves two more layers: the Etcd database and the Kubernetes Master components. Etcd is already highly available as a three-node cluster, so this chapter describes and implements high availability for the Master nodes.
The Master node is the control center of the cluster: it keeps the whole cluster healthy by continuously talking to the kubelet and kube-proxy on the worker nodes. If the Master fails, no cluster management is possible through kubectl or the API.
A Master runs three services: kube-apiserver, kube-controller-manager and kube-scheduler. The latter two already achieve high availability through their leader-election mechanism, so Master HA mainly concerns kube-apiserver. Since that component serves an HTTP API, making it highly available is like any web server: put a load balancer in front of it and scale it horizontally.
Multi-Master architecture diagram:
9.1 Deploy the Master2 Node
A new server is now added as the Master2 Node, with IP 192.168.31.74.
Master2 is configured exactly like the already-deployed Master1, so it is enough to copy all of Master1's K8s files over, change the server IP and hostname, and start the services.
1. Install docker
[root@k8s-master2 ~]# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
[root@k8s-master2 ~]# mkdir /etc/docker
[root@k8s-master2 ~]# mkdir -p /data/docker
[root@k8s-master2 ~]# vim /etc/docker/daemon.json
[root@k8s-master2 ~]# cat /etc/docker/daemon.json
{
"graph": "/data/docker",
"storage-driver": "overlay2",
"insecure-registries": ["registry.access.redhat.com","quay.io","harbor.od.com"],
"registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
"bip": "172.7.74.1/24",
"exec-opts": ["native.cgroupdriver=cgroupfs"],
"live-restore": true
}
# Start Docker on Master2
systemctl daemon-reload
systemctl start docker
systemctl enable docker
2. Create the etcd certificate directory
Create the etcd certificate directory on Master2:
[root@k8s-master2 ~]# mkdir -p /opt/etcd/ssl
3. Copy files (on Master1)
Copy all of Master1's K8s files and the etcd certificates to Master2:
scp -r /opt/kubernetes root@192.168.31.74:/opt
scp -r /opt/etcd/ssl root@192.168.31.74:/opt/etcd
scp /usr/lib/systemd/system/kube* root@192.168.31.74:/usr/lib/systemd/system
scp /usr/bin/kubectl root@192.168.31.74:/usr/bin
scp -r ~/.kube root@192.168.31.74:~
4. Delete certificate files (on Master2)
Delete the kubelet certificate and kubeconfig file:
[root@k8s-master2 ~]# rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
[root@k8s-master2 ~]# rm -f /opt/kubernetes/ssl/kubelet*
5. Change the IP and hostname in the configuration files
Change the apiserver configuration to the local IP:
vi /opt/kubernetes/cfg/kube-apiserver.conf
...
--bind-address=192.168.31.74 \
--advertise-address=192.168.31.74 \
...
6. Start and enable on boot
systemctl daemon-reload
systemctl start kube-apiserver kube-controller-manager kube-scheduler
systemctl enable kube-apiserver kube-controller-manager kube-scheduler
7. Check the cluster status
# Point the kubeconfig at this machine's IP
vi ~/.kube/config
...
server: https://192.168.31.74:6443
[root@k8s-master2 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
[root@k8s-master2 ~]#
[root@k8s-master2 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready <none> 3h20m v1.20.5
k8s-node2 Ready <none> 80m v1.20.5
[root@k8s-master2 ~]#
6.2 Deploy the Nginx+Keepalived high-availability load balancer
kube-apiserver high-availability architecture diagram:
- Nginx is a mainstream web server and reverse proxy; here it is used in layer-4 (TCP) mode to load balance the apiservers.
- Keepalived is a mainstream high-availability tool that provides active/standby failover by binding a VIP. In the topology above, Keepalived decides whether to fail over (move the VIP) based on the state of Nginx: if the Nginx master node dies, the VIP is automatically bound on the Nginx backup node, so the VIP stays reachable and Nginx remains highly available.
Note 1: To save machines, the load balancers here are co-located with the K8s Master nodes. They can also be deployed outside the k8s cluster, as long as nginx can reach the apiservers.
Note 2: On public clouds, keepalived (a floating VIP) is generally not supported; use the provider's load balancer product instead to balance the Master kube-apiservers directly. The architecture is otherwise the same as above.
Perform the following on both Master nodes.
1. Install packages (master/backup)
Configure a layer-4 reverse proxy in front of kube-apiserver
yum install epel-release -y
yum install nginx keepalived -y
2. Nginx configuration file (identical on master and backup)
cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
# Layer-4 load balancing for the apiserver components on the two Masters
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
server 192.168.31.71:6443 max_fails=3 fail_timeout=30s; # Master1 APISERVER IP:PORT
server 192.168.31.74:6443 max_fails=3 fail_timeout=30s; # Master2 APISERVER IP:PORT
}
server {
listen 16443; # nginx is co-located with the master nodes, so it cannot listen on 6443 or it would conflict with the apiserver
proxy_connect_timeout 2s;
proxy_timeout 900s;
proxy_pass k8s-apiserver;
}
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
server {
listen 80 default_server;
server_name _;
location / {
}
}
}
EOF
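Before starting nginx, it is worth confirming that the stream module is available, since the stream {} block above depends on it. On CentOS 7 the EPEL nginx package typically ships it as a dynamic module loaded by the include /usr/share/nginx/modules/*.conf line (the package name below is an assumption worth verifying):
# If nginx -t complains that the "stream" directive is unknown, install the dynamic stream module first
yum install -y nginx-mod-stream
nginx -t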
3. keepalived configuration file (Nginx Master)
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_MASTER
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state MASTER
interface eth0 # change to the actual NIC name
virtual_router_id 51 # VRRP router ID; each VRRP instance has its own unique ID
priority 100 # priority; set 90 on the backup server
advert_int 1 # VRRP advertisement (heartbeat) interval, default 1 second
authentication {
auth_type PASS
auth_pass 1111
}
# Virtual IP (VIP)
virtual_ipaddress {
192.168.31.88/24
}
track_script {
check_nginx
}
}
EOF
vrrp_script: specifies the script that checks nginx's state (keepalived decides whether to fail over based on its result)
virtual_ipaddress: the virtual IP (VIP)
Prepare the nginx health-check script referenced in the configuration above:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
exit 1
else
exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
4. keepalived configuration file (Nginx Backup)
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_BACKUP
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51 # VRRP router ID; each VRRP instance has its own unique ID
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.31.88/24
}
track_script {
check_nginx
}
}
EOF
Prepare the nginx health-check script referenced in the configuration above:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
exit 1
else
exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
Note: keepalived decides whether to fail over based on the script's exit code (0 = healthy, non-zero = unhealthy).
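The script can be exercised by hand before relying on it for failover (run it on a node where nginx is listening on 16443):
/etc/keepalived/check_nginx.sh; echo "exit code: $?"   # 0 = port 16443 up; 1 = down, keepalived enters FAULT and releases the VIP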
5. Start and enable on boot
systemctl daemon-reload
systemctl start nginx keepalived
systemctl enable nginx keepalived
6. Check keepalived status
Check on the k8s-master1 server:
[root@k8s-master1 ~]# ip add|grep 192.168.31.88
inet 192.168.31.88/24 scope global secondary eth0
[root@k8s-master1 ~]#
Check on the k8s-master2 server:
[root@k8s-master2 ~]# ip add|grep 192.168.31.88
[root@k8s-master2 ~]#
As shown, the 192.168.31.88 virtual IP is bound to the eth0 NIC on master1 (and absent on master2), so keepalived is working correctly.
7. Nginx+Keepalived high-availability test
Stop Nginx on the master node and verify that the VIP fails over to the backup server. On the Nginx Master, stop nginx (systemctl stop nginx, or pkill nginx) and confirm the VIP has been released:
[root@k8s-master1 ~]# systemctl stop nginx
[root@k8s-master1 ~]# ip add|grep 192.168.31.88
[root@k8s-master1 ~]#
On the Nginx Backup, ip addr shows the VIP has been bound successfully:
[root@k8s-master2 ~]# ip add|grep 192.168.31.88
inet 192.168.31.88/24 scope global secondary eth0
[root@k8s-master2 ~]#
8. Test access through the load balancer
From any node in the K8s cluster, use curl to query the K8s version through the VIP:
[root@k8s-node2 ~]# curl -k https://192.168.31.88:16443/version
{
"major": "1",
"minor": "20",
"gitVersion": "v1.20.5",
"gitCommit": "6b1d87acf3c8253c123756b9e61dac642678305f",
"gitTreeState": "clean",
"buildDate": "2021-03-18T01:02:01Z",
"goVersion": "go1.15.8",
"compiler": "gc",
"platform": "linux/amd64"
}[root@k8s-node2 ~]#
The K8s version information is returned correctly, so the load balancer is working. Request flow: curl -> VIP (nginx) -> apiserver
The nginx log also shows which apiserver each request was forwarded to:
[root@k8s-master1 ~]# tail -f /var/log/nginx/k8s-access.log
192.168.31.73 192.168.31.71:6443 - [13/Apr/2021:20:31:54 +0800] 200 421
192.168.31.73 192.168.31.74:6443 - [13/Apr/2021:20:32:13 +0800] 200 421
192.168.31.73 192.168.31.71:6443 - [13/Apr/2021:20:32:13 +0800] 200 421
192.168.31.73 192.168.31.74:6443 - [13/Apr/2021:20:32:14 +0800] 200 421
192.168.31.73 192.168.31.71:6443 - [13/Apr/2021:20:32:14 +0800] 200 421
6.3 Switch all Worker Nodes to the LB VIP
Think about it: although Master2 Node and a load balancer have been added, the cluster was scaled out from a single-Master architecture, so all Worker Node components still connect to Master1. If they are not switched to the VIP on the load balancer, the Master is still a single point of failure.
So the next step is to change the component configuration files on all Worker Nodes (the nodes listed by kubectl get node) from 192.168.31.71:6443 to 192.168.31.88:16443 (the VIP and the nginx listen port).
- Run on all Worker Nodes:
[root@k8s-node1 ~]# sed -i 's#192.168.31.71:6443#192.168.31.88:16443#' /opt/kubernetes/cfg/*
[root@k8s-node2 ~]# sed -i 's#192.168.31.71:6443#192.168.31.88:16443#' /opt/kubernetes/cfg/*
[root@k8s-node1 ~]# systemctl restart kubelet kube-proxy
[root@k8s-node2 ~]# systemctl restart kubelet kube-proxy
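A quick sanity check on each Worker Node that nothing still points at the old single-Master address (assumes the only remaining occurrences would be under /opt/kubernetes/cfg):
grep -rn '192.168.31.71' /opt/kubernetes/cfg/ || echo "all configs now use the VIP"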
- Check node status:
[root@k8s-master2 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready <none> 5h22m v1.20.5
k8s-node2 Ready <none> 3h22m v1.20.5
[root@k8s-master2 ~]#
At this point, a complete highly available Kubernetes cluster has been deployed!
十、Deploying the Dashboard
Kubernetes Dashboard is a popular community project that provides a visual web UI for viewing information about the current cluster. It can be used to deploy containerized applications, monitor application state, troubleshoot, and manage Kubernetes resources.
Official reference:
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
Expose the dashboard service outside the cluster via a NodePort, using port 30443.
# Download the yaml file
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
vim recommended.yaml
## Modify the Service section
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443
  selector:
    k8s-app: kubernetes-dashboard
kubectl apply -f recommended.yaml
# Check the deployment
kubectl get pods,svc -n kubernetes-dashboard
Access URL: https://NodeIP:30443
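Reachability can be verified from outside the cluster before logging in (the Dashboard serves a self-signed certificate, hence -k; 192.168.31.72 is used here only as an example NodeIP):
curl -k -I https://192.168.31.72:30443/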
Create a service account and bind it to the default cluster-admin administrator cluster role:
cat > dashboard-adminuser.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
EOF
Explanation: the manifest above creates a service account named admin-user in the kubernetes-dashboard namespace and binds the cluster-admin role to it, so the admin-user account gets administrator privileges. The cluster-admin ClusterRole is created by default when the cluster is bootstrapped, so we can bind to it directly.
Apply the manifest:
kubectl apply -f dashboard-adminuser.yaml
View the admin-user account's token:
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Log in to the Dashboard with the token from the output.
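An alternative that prints only the token (a sketch; it relies on the ServiceAccount's auto-created secret, which exists on v1.20):
kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d; echo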
十一、Deploying Metrics-server
1、Core metrics monitoring
Heapster used to fill this role, but it was deprecated after 1.12; its replacement is metrics-server. metrics-server is an add-on API server, exposed through API aggregation, that serves resource metrics.
# Check whether it is already installed
[root@k8s-master1 ~]# kubectl api-versions |grep metrics
[root@k8s-master1 ~]#
2、Download the yaml file
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.4/components.yaml
(The image can be pulled on each node in advance, or pushed to a private registry.)
docker pull k8s.gcr.io/metrics-server/metrics-server:v0.4.4
(Alternatively, upload a pre-built image archive to the server and load it:)
docker load < metrics-server.tar
docker tag 181172b235b2 k8s.gcr.io/metrics-server/metrics-server:v0.4.4
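If the image was loaded on only one machine, one way to copy it to the other nodes is over SSH (a sketch; assumes root SSH access from this host to the workers):
for node in 192.168.31.72 192.168.31.73; do
  docker save k8s.gcr.io/metrics-server/metrics-server:v0.4.4 | ssh root@$node 'docker load'
done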
3、Modify the yaml file
# The changes are as follows
vim components.yaml
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - /metrics-server                 # added
        - --kubelet-insecure-tls          # added: skip verification of the kubelets' self-signed serving certificates
        image: k8s.gcr.io/metrics-server/metrics-server:v0.4.4
4、Deploy metrics-server
kubectl apply -f components.yaml
5、Check which node metrics-server is running on
[root@k8s-master1 ~]# kubectl get pods -n kube-system -o wide|grep metrics-server
metrics-server-57dc7c748-6psv4 1/1 Running 0 30s 10.244.0.7 k8s-node1 <none> <none>
[root@k8s-master1 ~]#
6、View the Kubernetes cluster's API group list
[root@k8s-master1 ~]# kubectl api-versions |grep metrics
metrics.k8s.io/v1beta1
[root@k8s-master1 ~]#
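The aggregated API can also be queried directly, which confirms that the apiserver -> metrics-server path works end to end:
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes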
7、View resource usage
[root@k8s-master1 ~]# kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master1 139m 13% 1169Mi 67%
k8s-node1 109m 5% 911Mi 24%
k8s-node2 102m 5% 990Mi 26%
[root@k8s-master1 ~]# kubectl top pod -n kube-system
NAME CPU(cores) MEMORY(bytes)
coredns-746fcb4bc5-vvxzv 2m 17Mi
kube-flannel-ds-lf6pd 4m 47Mi
kube-flannel-ds-p5wsq 3m 19Mi
kube-flannel-ds-ppcd4 4m 47Mi
metrics-server-bd7f64558-9pbqx 5m 21Mi
[root@k8s-master1 ~]#