Kubernetes v1.24 Installation and Deployment: Node Components

K8S Node Overview

  • A k8s node needs three components installed: containerd / kubelet / kube-proxy.
  • A pod is a wrapper that holds containers, and the container runtime is not limited to Docker.
  • CRI: Container Runtime Interface, the API kubelet uses to talk to any container runtime.
  • kubelet: drives the containerd runtime over CRI and maintains the pod lifecycle (a quick CRI sanity check is shown below).
  • kube-proxy: implements Service load balancing; pod-to-pod communication and load balancing across pods go through it. By default it programs iptables rules to map Service traffic to pods.
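
As a quick sanity check of the kubelet-to-runtime path, you can talk to containerd over the same CRI socket that kubelet will use. A minimal sketch, assuming crictl is installed alongside containerd:

# Ask containerd for its runtime status over the CRI socket
crictl --runtime-endpoint unix:///run/containerd/containerd.sock info
# Or just check the runtime version responds
crictl --runtime-endpoint unix:///run/containerd/containerd.sock version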

Installing kubelet

Create the working directory

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
cd /server/tools
tar xf kubernetes-v1.24.0.tar.gz
cd /server/tools/kubernetes/server/bin
cp kubelet /opt/kubernetes/bin
## Pull the CA certificate from the cert host (run on every node)
scp 10.1.1.11:/opt/certs/ca.pem /opt/kubernetes/ssl/
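
If SSH keys are already set up between the hosts, a small helper loop (a sketch, using the hostnames from later in this article) saves repeating the copy step on every node:

## Hypothetical helper: push the binary and CA to both workers from the master
for n in k8s-node1 k8s-node2; do
  ssh ${n} "mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}"
  scp /opt/kubernetes/bin/kubelet ${n}:/opt/kubernetes/bin/
  scp /opt/kubernetes/ssl/ca.pem ${n}:/opt/kubernetes/ssl/
done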

Write the configuration file

## On k8s-master1:
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=4 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-master1 \\
--cgroup-driver=systemd \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--container-runtime=remote \\
--runtime-request-timeout=15m \\
--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice \\
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=kubernetes/pause"
EOF
## On k8s-node1:
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=4 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-node1 \\
--cgroup-driver=systemd \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--container-runtime=remote \\
--runtime-request-timeout=15m \\
--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice \\
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=kubernetes/pause"
EOF
## On k8s-node2:
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=4 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=k8s-node2 \\
--cgroup-driver=systemd \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--container-runtime=remote \\
--runtime-request-timeout=15m \\
--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice \\
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=kubernetes/pause"
EOF
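
The three files above differ only in --hostname-override. As an alternative, here is a hypothetical one-shot generator you could run on each machine instead, assuming every machine's hostname is already set to its node name (k8s-master1 / k8s-node1 / k8s-node2):

## Hypothetical generator: derive --hostname-override from the machine's hostname
NODE_NAME=$(hostname)
cat > /opt/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=4 \\
--log-dir=/opt/kubernetes/logs \\
--hostname-override=${NODE_NAME} \\
--cgroup-driver=systemd \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--container-runtime=remote \\
--runtime-request-timeout=15m \\
--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice \\
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \\
--config=/opt/kubernetes/cfg/kubelet-config.yml \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=kubernetes/pause"
EOF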

Configure the parameter file

cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
# keep in line with --cgroup-driver=systemd above and containerd's cgroup driver
cgroupDriver: systemd
clusterDNS:
- 192.168.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
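
The cgroupDriver here must agree with containerd's configuration; a quick check of the containerd side (assuming the default config path):

# containerd should be using the systemd cgroup driver as well
grep SystemdCgroup /etc/containerd/config.toml
# expected: SystemdCgroup = true (set under the runc options of the CRI plugin;
# restart containerd after changing it: systemctl restart containerd)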

Generate the bootstrap kubeconfig for kubelet's initial cluster join

# Set environment variables
KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
KUBE_APISERVER="https://10.1.1.100:6443" # apiserver IP:PORT
TOKEN="bc43e407e311d78b60da186fdd347fc8" # must match the token in token.csv

# Generate the kubelet bootstrap kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials "kubelet-bootstrap" \
--token=${TOKEN} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}


## Distribute to the worker nodes
scp /opt/kubernetes/cfg/bootstrap.kubeconfig k8s-node1:/opt/kubernetes/cfg/bootstrap.kubeconfig
scp /opt/kubernetes/cfg/bootstrap.kubeconfig k8s-node2:/opt/kubernetes/cfg/bootstrap.kubeconfig
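
Before distributing, you can sanity-check the generated file; kubectl redacts the embedded certificate data by default:

# Inspect the bootstrap kubeconfig (cluster server, user, and context)
kubectl config view --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig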

kubelet systemd unit

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=containerd.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Start kubelet

systemctl daemon-reload
systemctl enable --now kubelet
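
It helps to watch the kubelet while it performs TLS bootstrapping; a pending CSR on the master is the expected state until it is approved:

# Follow the kubelet logs during bootstrap
journalctl -u kubelet -f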

Approve the kubelet certificate requests and join the cluster

# List the kubelet certificate requests
# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           REQUESTEDDURATION   CONDITION
node-csr-XaZydv0EWtYv8U1N-AApIQ7vDxGqXq7UhKKEbSxBX1M   11m     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
node-csr-rXbrF-8XLWZdd0p2SQtMPDRPZpaQBAm-4Kq2vztXP-I   10m     kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
node-csr-zx87_5I9BK91wf-yI7eAl_2iew7u_3pFcEFYO9mfi_4   9m34s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending

# Approve the requests (your CSR names will differ)
kubectl certificate approve node-csr-XaZydv0EWtYv8U1N-AApIQ7vDxGqXq7UhKKEbSxBX1M
kubectl certificate approve node-csr-rXbrF-8XLWZdd0p2SQtMPDRPZpaQBAm-4Kq2vztXP-I
kubectl certificate approve node-csr-zx87_5I9BK91wf-yI7eAl_2iew7u_3pFcEFYO9mfi_4

# List the nodes (NotReady is expected until the network plugin is installed)
[root@k8s-master1 ~]# kubectl get node
NAME          STATUS     ROLES    AGE   VERSION
k8s-master1   NotReady   <none>   18s   v1.24.0
k8s-node1     NotReady   <none>   15s   v1.24.0
k8s-node2     NotReady   <none>   6s    v1.24.0
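
Since the CSR names differ on every install, here is an optional one-liner (a sketch, not part of the original procedure) that approves everything currently pending:

# Approve all outstanding kubelet CSRs in one pass
kubectl get csr -o name | xargs -r kubectl certificate approve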

Troubleshooting

Wrong node name

E0517 13:18:19.033666    7310 kubelet.go:2419] "Error getting node" err="node \"k8s-node2\" not found"

Fix:

On node2, delete the certificates that were auto-issued when the master approved its join request:

  • The auto-issued certificates live on the node under /opt/kubernetes/ssl/
  • Only the auto-issued kubelet certificates need to be deleted
rm -f /opt/kubernetes/ssl/kubelet*

# Restart kubelet
systemctl restart kubelet
  • Now run kubectl get csr on the master: the node can be seen re-requesting to join the cluster.

Installing kube-proxy

Create the certificate signing request

cat > /opt/certs/kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# Copy the certificates to all nodes
scp kube-proxy*.pem 10.1.1.100:/opt/kubernetes/ssl/
scp kube-proxy*.pem 10.1.1.120:/opt/kubernetes/ssl/
scp kube-proxy*.pem 10.1.1.130:/opt/kubernetes/ssl/
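
The CN matters here: kube-proxy's RBAC identity comes from the certificate subject, so it is worth confirming before distribution (assuming openssl is available):

# Subject must contain CN=system:kube-proxy
openssl x509 -in kube-proxy.pem -noout -subject -dates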

Copy the binary and certificates

cd /server/tools/kubernetes/server/bin
cp kube-proxy /opt/kubernetes/bin
# Pull the certificates from the cert host (run on every node)
scp 10.1.1.11:/opt/certs/kube-proxy*.pem /opt/kubernetes/ssl/

Generate the kubeconfig file

KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
KUBE_APISERVER="https://10.1.1.100:6443"

kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
--client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}

## Distribute to the worker nodes
scp /opt/kubernetes/cfg/kube-proxy.kubeconfig k8s-node1:/opt/kubernetes/cfg/kube-proxy.kubeconfig
scp /opt/kubernetes/cfg/kube-proxy.kubeconfig k8s-node2:/opt/kubernetes/cfg/kube-proxy.kubeconfig

Create the configuration file

cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/opt/kubernetes/logs \\
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
EOF

Configure the parameter file

## On k8s-master1:
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 10.1.1.100:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master1
clusterCIDR: 172.7.0.0/16
# Optional settings below
#configSyncPeriod: 15m0s
#conntrack:
#  max: null
#  maxPerCore: 32768
#  min: 131072
#  tcpCloseWaitTimeout: 1h0m0s
#  tcpEstablishedTimeout: 24h0m0s
#enableProfiling: false
#healthzBindAddress: 0.0.0.0:10256
#hostnameOverride: ""
#iptables:
#  masqueradeAll: false
#  masqueradeBit: 14
#  minSyncPeriod: 0s
#  syncPeriod: 30s
#ipvs:
#  masqueradeAll: true
#  minSyncPeriod: 5s
#  scheduler: "rr"
#  syncPeriod: 30s
#mode: "ipvs"
#nodePortAddresses: null
#oomScoreAdj: -999
#portRange: ""
#udpIdleTimeout: 250ms
EOF
## On k8s-node1:
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 10.1.1.120:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-node1
clusterCIDR: 172.7.0.0/16
EOF
## On k8s-node2:
cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 10.1.1.130:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-node2
clusterCIDR: 172.7.0.0/16
EOF
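
The commented-out block in the master's file shows that kube-proxy can also run in IPVS mode instead of the default iptables mode. A sketch of what enabling it would involve (the module names are the usual ones; verify against your kernel):

# Load the kernel modules IPVS mode requires
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
  modprobe ${mod}
done
yum -y install ipvsadm

# Then uncomment in kube-proxy-config.yml and restart kube-proxy:
#   mode: "ipvs"
#   ipvs:
#     scheduler: "rr"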

kube-proxy systemd unit

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Start kube-proxy

systemctl daemon-reload
systemctl enable --now kube-proxy

# Check
[root@k8s-node2 ~]# netstat -lnpt|grep kube-proxy
tcp        0      0 10.1.1.130:10249        0.0.0.0:*               LISTEN      2030/kube-proxy
tcp6       0      0 :::10256                :::*                    LISTEN      2030/kube-proxy
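
The metrics port also reports the active proxy mode, which is a quick confirmation of which backend kube-proxy selected:

# Ask kube-proxy which mode it is running in (node2's metrics address)
curl -s 10.1.1.130:10249/proxyMode
# expected: iptables (or ipvs if enabled above)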

Installing the network component (flannel)

You may notice that even with all the k8s components installed, the cluster nodes are still NotReady. The kubelet logs point to a missing network plugin. I won't cover the details here; see my other article.

flannel is used as the network component here; other plugins are deployed in much the same way.

Install flannel

flannel download address

### Enter the upload directory
cd /server/tools

### Create the required directories
mkdir -p /opt/flannel-v0.17.0-linux-amd64/ssl/

### Extract and create a symlink
tar xf flannel-v0.17.0-linux-amd64.tar.gz -C /opt/flannel-v0.17.0-linux-amd64/
ln -sf /opt/flannel-v0.17.0-linux-amd64/ /opt/flannel

## Copy the etcd certificates
cp /opt/etcd/ssl/ca.pem /opt/flannel/ssl/
cp /opt/etcd/ssl/etcd*.pem /opt/flannel/ssl/

Create the subnet configuration files

The flannel configuration differs slightly on each host, so adjust it when deploying the other nodes. Giving each host its own /24 subnet also makes it easy to tell which physical host a pod runs on from its IP address.

# Write the pod network config into etcd (host-gw backend)
/opt/etcd/etcdctl set /coreos.com/network/config '{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}'

## On k8s-master1:
mkdir /run/flannel
cat > /run/flannel/subnet.env << EOF
FLANNEL_NETWORK=172.7.0.0/16
FLANNEL_SUBNET=172.7.100.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false
EOF
## On k8s-node1:
mkdir /run/flannel
cat > /run/flannel/subnet.env << EOF
FLANNEL_NETWORK=172.7.0.0/16
FLANNEL_SUBNET=172.7.120.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false
EOF
## On k8s-node2:
mkdir /run/flannel
cat > /run/flannel/subnet.env << EOF
FLANNEL_NETWORK=172.7.0.0/16
FLANNEL_SUBNET=172.7.130.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false
EOF
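
flanneld reads the network config key at startup, so it is worth confirming it is readable (same etcdctl v2-style syntax as the set command above):

# Confirm the network config landed in etcd
/opt/etcd/etcdctl get /coreos.com/network/config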

Edit the systemd unit

cat > /usr/lib/systemd/system/flanneld.service <<EOF           
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=containerd.service
[Service]
Type=notify
ExecStart=/opt/flannel/flanneld \\
-etcd-cafile=/opt/flannel/ssl/ca.pem \\
-etcd-certfile=/opt/flannel/ssl/etcd.pem \\
-etcd-keyfile=/opt/flannel/ssl/etcd-key.pem \\
-etcd-endpoints=https://10.1.1.100:2379,https://10.1.1.130:2379,https://10.1.1.120:2379 \\
-etcd-prefix=/coreos.com/network \\
-subnet-file=/run/flannel/subnet.env \\
-iface=eth0 \\
-healthz-port=2401
Restart=always
RestartSec=5
StartLimitInterval=0
[Install]
WantedBy=multi-user.target
RequiredBy=containerd.service
EOF

Start flannel

systemctl daemon-reload
systemctl enable --now flanneld.service

# Verify flannel
netstat -lnpt|grep flanneld
tcp6       0      0 :::2401                 :::*                    LISTEN      22665/flanneld

## All nodes are now Ready
kubectl get node
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    <none>   4h13m   v1.24.0
k8s-node1     Ready    <none>   4h12m   v1.24.0
k8s-node2     Ready    <none>   3h57m   v1.24.0
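
With the host-gw backend, flanneld programs plain kernel routes to the other nodes' pod subnets via their host IPs; you can see them directly (interface name may differ):

# Each remote pod subnet should be routed via the owning node's IP, e.g. on k8s-master1:
ip route | grep 172.7.
# 172.7.120.0/24 via 10.1.1.120 dev eth0
# 172.7.130.0/24 via 10.1.1.130 dev eth0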

Optimizing SNAT

Why optimize SNAT?

flannel SNAT optimization addresses the following: when a container on one host accesses a container on another host, the destination's logs record the source as the host's IP rather than the container's own IP, because cross-host traffic passes through an SNAT (MASQUERADE) rule on the way out. For container-to-container traffic between hosts we want transparent access: the logs should record the real container IP, not the host IP.

Without the optimization, any access to a container on another node, whether issued from the host or from inside one of its containers, is logged with the host's IP, as shown below.

(Figure: packet capture before optimization)

SNAT optimization for the Docker network

yum -y install iptables-services
## Start iptables
systemctl start iptables.service
systemctl enable iptables.service

## Delete these two default REJECT rules
iptables -t filter -D INPUT -j REJECT --reject-with icmp-host-prohibited
iptables -t filter -D FORWARD -j REJECT --reject-with icmp-host-prohibited

## Optimize the SNAT rule: only masquerade traffic leaving the pod network
iptables -t nat -D POSTROUTING -s 172.7.21.0/24 ! -o docker0 -j MASQUERADE
iptables -t nat -I POSTROUTING -s 172.7.21.0/24 ! -d 172.7.0.0/16 ! -o docker0 -j MASQUERADE
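
Since iptables-services is managing the firewall now, the adjusted rules can be persisted so they survive a reboot:

# Save the current rules to /etc/sysconfig/iptables
service iptables save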

SNAT optimization for the CNI network

yum -y install iptables-services
## Start iptables
systemctl start iptables.service
systemctl enable iptables.service

## Delete these two default REJECT rules
iptables -t filter -D INPUT -j REJECT --reject-with icmp-host-prohibited
iptables -t filter -D FORWARD -j REJECT --reject-with icmp-host-prohibited

Optimization steps

  1. Inspect the iptables CNI rules

    [root@k8s-master1 ~]# iptables-save |grep -i cni
    -A CNI-24765be1eb930181d4596108 -d 172.7.100.0/24 -m comment --comment "name: \"flannel\" id: \"7827fb4e07150d739ebe225e63cb7224e1567640ecd34bb31fc895eedd1e181d\"" -j ACCEPT
    -A CNI-24765be1eb930181d4596108 ! -d 224.0.0.0/4 -m comment --comment "name: \"flannel\" id: \"7827fb4e07150d739ebe225e63cb7224e1567640ecd34bb31fc895eedd1e181d\"" -j MASQUERADE
    -A CNI-29992bd4932ac4a521568c23 -d 172.7.100.0/24 -m comment --comment "name: \"flannel\" id: \"bfe43a2a876b07d82505e5d19eb4f70b3ae3073fd970e71411fa137a668e374e\"" -j ACCEPT
    -A CNI-29992bd4932ac4a521568c23 ! -d 224.0.0.0/4 -m comment --comment "name: \"flannel\" id: \"bfe43a2a876b07d82505e5d19eb4f70b3ae3073fd970e71411fa137a668e374e\"" -j MASQUERADE
    -A CNI-2d0cf8cf6f5d817c605f5d00 -d 172.7.100.0/24 -m comment --comment "name: \"flannel\" id: \"44d0c2234bc82140650b7a72dd81f95c295b1ecfe510bab0845ed00adc8b55f1\"" -j ACCEPT
    -A CNI-2d0cf8cf6f5d817c605f5d00 ! -d 224.0.0.0/4 -m comment --comment "name: \"flannel\" id: \"44d0c2234bc82140650b7a72dd81f95c295b1ecfe510bab0845ed00adc8b55f1\"" -j MASQUERADE
  2. Modify the iptables rules

    iptables -S -t nat
    iptables -t nat -I POSTROUTING -s 172.7.100.0/24 -d 172.7.0.0/16 -j RETURN
    iptables -t nat -I POSTROUTING -s 172.7.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
    iptables -t nat -I POSTROUTING ! -s 172.7.0.0/16 -d 172.7.100.0/24 -j RETURN
    iptables -t nat -I POSTROUTING ! -s 172.7.0.0/16 -d 172.7.0.0/16 -j MASQUERADE
    ## The four rules above, in the order listed:
    # traffic between pods inside the cluster: no NAT
    # pods reaching outside the cluster (non-multicast, non-pod destinations): SNAT on the way out
    # external traffic to pods on this node: no NAT
    # external traffic to pods elsewhere in the cluster: SNAT
    ## Note: -I prepends, so verify the final order with iptables -S -t nat;
    ## the RETURN rules must end up above the MASQUERADE rules.

If anything here is wrong, please point it out!

Changing the network segment

How to fix things when the flannel interface did not come up with the expected subnet:
# Stop the flannel service
systemctl stop flanneld.service

# Tear down the virtual interface
ifconfig cni0 down
ip link delete cni0

# Clear the CNI cache
rm -rf /var/lib/cni/flannel/* && rm -rf /var/lib/cni/networks/flannel/* && rm -rf /var/lib/cni/network/cni0/*

# Fix the flannel subnet file
cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.7.0.0/16
FLANNEL_SUBNET=172.7.130.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false

# Remove the stale subnet lease from etcd
/opt/etcd/etcdctl rm /coreos.com/network/subnets/172.7.57.0-24

# Restart flannel
systemctl start flanneld.service