Cloud Computing Operations: Building and Installing Kubernetes Step by Step, Add-on Installation

Introduction to Flannel

Flannel is an overlay network tool designed by the CoreOS team for Kubernetes. Its goal is to give every CoreOS host that runs Kubernetes a complete subnet of its own. This article covers the tool from three angles: what Flannel is, how it works, and how to install and configure it.

Flannel provides a virtual network for containers by assigning each host its own subnet. It is built on Linux TUN/TAP, encapsulates IP packets in UDP to create the overlay network, and relies on etcd to track how the network is allocated.

Flannel is a simple and easy way to configure a layer 3 network fabric designed for Kubernetes.

Flannel working modes

Mode characteristics:
host-gw: direct routing; flannel simply programs kernel routes (roughly equivalent to route add). See the route sketch after this list.
vxlan: the backend flannel recommends; the network devices that need to communicate must support the VXLAN protocol (conceptually similar to a VPN tunnel).
udp: very similar to vxlan; it wraps packets at the IP layer. Usually used for debugging, or in networks that do not support VXLAN.
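To make host-gw concrete, the sketch below shows the kind of kernel route flannel programs on each node in this mode, using the two nodes and container subnets from the example cluster in this guide (172.7.21.0/24 on 10.1.1.100, 172.7.22.0/24 on 10.1.1.110). It is only an illustration of what flannel does for you automatically; there is nothing to run manually here.

### What flannel host-gw effectively does on k8s-node01 (10.1.1.100):
ip route add 172.7.22.0/24 via 10.1.1.110 dev eth0
### ...and the mirror route on k8s-node02 (10.1.1.110):
ip route add 172.7.21.0/24 via 10.1.1.100 dev eth0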

Flannel download address

Hostname               Role      IP
k8s-node01.boysec.cn   flannel   10.1.1.100
k8s-node02.boysec.cn   flannel   10.1.1.110

Note: this guide uses k8s-node01.boysec.cn as the example host; the other compute node is installed and deployed the same way.

Download the software, unpack it, and create the symlink

On k8s-node01

### Change into the upload directory
cd /server/tools
### Create the required directory
mkdir -p /opt/flannel-v0.11.0-linux-amd64/cert
### Unpack and create the symlink
tar xf flannel-v0.11.0-linux-amd64.tar.gz -C /opt/flannel-v0.11.0-linux-amd64/
ln -s /opt/flannel-v0.11.0-linux-amd64 /opt/flannel
ls -l /opt|grep flannel
### Copy the certificates (run on the k8s-dns host)
[root@k8s-dns certs]# scp ca.pem client.pem client-key.pem k8s-node01:/opt/flannel-v0.11.0-linux-amd64/cert

## k8s-node01
[root@k8s-node01 flannel]# tree
.
├── cert
│   ├── ca.pem
│   ├── client-key.pem
│   └── client.pem

Configure etcd: add the host-gw backend

### Find out which etcd member is the leader
/opt/etcd/etcdctl member list
391901db80420245: name=etcd-server-110 peerURLs=https://10.1.1.110:2380 clientURLs=http://127.0.0.1:2379,https://10.1.1.110:2379 isLeader=false
b0e9893d2afd604d: name=etcd-server-130 peerURLs=https://10.1.1.130:2380 clientURLs=http://127.0.0.1:2379,https://10.1.1.130:2379 isLeader=false
cf6b4f78d74de8cf: name=etcd-server-100 peerURLs=https://10.1.1.100:2380 clientURLs=http://127.0.0.1:2379,https://10.1.1.100:2379 isLeader=true
### On k8s-node01
/opt/etcd/etcdctl set /coreos.com/network/config '{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}'

{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}

Create the configuration

On k8s-node01:

vi /opt/flannel/subnet.env

FLANNEL_NETWORK=172.7.0.0/16
FLANNEL_SUBNET=172.7.21.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false

Note: the Flannel configuration differs slightly from host to host; adjust it accordingly when deploying the other nodes.
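For reference, this is a sketch of what the same file would look like on k8s-node02 (10.1.1.110), assuming it owns the 172.7.22.0/24 subnet, which matches the routing table shown further down:

### /opt/flannel/subnet.env on k8s-node02 (assumed subnet 172.7.22.0/24)
FLANNEL_NETWORK=172.7.0.0/16
FLANNEL_SUBNET=172.7.22.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false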

Create the startup script

On k8s-node01:

vi /opt/flannel/flanneld.sh

#!/bin/sh
./flanneld \
--public-ip=10.1.1.100 \
--etcd-endpoints=https://10.1.1.100:2379,https://10.1.1.110:2379,https://10.1.1.130:2379 \
--etcd-keyfile=./cert/client-key.pem \
--etcd-certfile=./cert/client.pem \
--etcd-cafile=./cert/ca.pem \
--iface=eth0 \
--subnet-file=./subnet.env \
--healthz-port=2401

### Check the configuration, set execute permission, and create the log directory
chmod +x /opt/flannel/flanneld.sh
mkdir -p /data/logs/flanneld

创建supervisor配置

vim /etc/supervisord.d/flanneld.ini

[program:flanneld-100]
command=/opt/flannel/flanneld.sh ; the program (relative uses PATH, can take args)
numprocs=1 ; number of processes copies to start (def 1)
directory=/opt/flannel ; directory to cwd to before exec (def no cwd)
autostart=true ; start at supervisord start (default: true)
autorestart=true ; restart at unexpected quit (default: true)
startsecs=30 ; number of secs prog must stay running (def. 1)
startretries=3 ; max # of serial start failures (default 3)
exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
user=root ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/flanneld/flanneld.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)

Start the service and check it

supervisorctl update
supervisorctl status
netstat -lnpt|grep 2401
[root@k8s-node01 flannel]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.1.1.2 0.0.0.0 UG 100 0 0 eth0
10.1.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
172.7.21.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
172.7.22.0 10.1.1.110 255.255.255.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-ff445de2ba86
### Success
[root@k8s-node01 flannel]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-ds-76457c4c9d-knbts 1/1 Running 0 48m 172.7.21.3 10.1.1.100 <none> <none>
nginx-ds-76457c4c9d-xhrrh 1/1 Running 0 27h 172.7.22.2 10.1.1.110 <none> <none>
[root@k8s-node01 flannel]# ping 172.7.22.2
PING 172.7.22.2 (172.7.22.2) 56(84) bytes of data.
64 bytes from 172.7.22.2: icmp_seq=1 ttl=63 time=0.707 ms
64 bytes from 172.7.22.2: icmp_seq=2 ttl=63 time=0.571 ms
64 bytes from 172.7.22.2: icmp_seq=3 ttl=63 time=0.846 ms

Optimize the SNAT rules

Why optimize Flannel's SNAT rules: by default, when a container in Kubernetes accesses a container on a different host, the target's log files record the host's IP address instead of the container's own IP, because the cross-host request goes through one SNAT translation on the way out. For example, a curl from a host, or from a container on that host, to a container on another node is logged with the node's IP. What we want instead is for container-to-container traffic between hosts to be logged with the real container IP rather than the host IP.

The goal is transparent access between containers on two different hosts; without this optimization, such access is logged with the host's IP address.

yum -y install iptables-services
## Start iptables
systemctl start iptables.service
systemctl enable iptables.service

## Delete these two rules
iptables -t filter -D INPUT -j REJECT --reject-with icmp-host-prohibited
iptables -t filter -D FORWARD -j REJECT --reject-with icmp-host-prohibited

## Optimize the SNAT rule
iptables -t nat -D POSTROUTING -s 172.7.21.0/24 ! -o docker0 -j MASQUERADE
iptables -t nat -I POSTROUTING -s 172.7.21.0/24 ! -d 172.7.0.0/16 ! -o docker0 -j MASQUERADE

On host 10.1.1.100 this means: only traffic whose source is 172.7.21.0/24, whose destination is not in the 172.7.0.0/16 range, and which does not leave through the docker0 device gets SNATed.
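Note that these rules live only in the running kernel and are lost on reboot. Since iptables-services is already installed, one way to keep them (a small sketch; double-check the rule set before saving) is:

### Confirm the adjusted MASQUERADE rule is in place
iptables-save -t nat | grep MASQUERADE
### Persist the current rules to /etc/sysconfig/iptables
service iptables save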

Access logs before and after the optimization

kubectl logs nginx-ds-85fc6f4dff-6g6tb
10.1.1.100 - - [06/Aug/2020:10:43:23 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.52.1" "-"

172.7.22.2 - - [06/Aug/2020:10:46:23 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.52.1" "-"

Deploy kube-dns (CoreDNS)

Deploy an internal HTTP server for the Kubernetes resource manifests

On k8s-dns

vim /etc/nginx/conf.d/k8s-yaml.od.com.conf
server {
    listen       80;
    server_name  k8s-yaml.od.com;

    location / {
        autoindex on;
        default_type text/plain;
        root /var/k8s-yaml;
    }
}

### Create the directory
mkdir /var/k8s-yaml/coredns -p

Prepare the CoreDNS image

On the ops host k8s-dns:

docker pull coredns/coredns:1.6.5
docker tag coredns/coredns:1.6.5 harbor.od.com/public/coredns:v1.6.5
docker push harbor.od.com/public/coredns:v1.6.5

Prepare the resource manifests

On the ops host k8s-dns:

vim /var/k8s-yaml/coredns/rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system

vim /var/k8s-yaml/coredns/configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        kubernetes cluster.local 192.168.0.0/16
        forward . 10.1.1.254
        cache 30
        loop
        reload
        loadbalance
    }

Key parameters explained:

kubernetes cluster.local 192.168.0.0/16: the cluster network (this range is the Service IP address range).

forward . 10.1.1.254: specifies the upstream DNS server address.

vim /var/k8s-yaml/coredns/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      serviceAccountName: coredns
      containers:
      - name: coredns
        image: harbor.od.com/public/coredns:v1.6.5
        args:
        - -conf
        - /etc/coredns/Corefile
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      imagePullSecrets:
      - name: harbor
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile

vim /var/k8s-yaml/coredns/svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 192.168.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53

Apply the manifests in order

Open http://k8s-yaml.od.com/coredns in a browser to check that the resource manifest files were created correctly.

kubectl apply -f http://k8s-yaml.od.com/coredns/rbac.yaml
#serviceaccount/coredns created
#clusterrole.rbac.authorization.k8s.io/system:coredns created
#clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
kubectl apply -f http://k8s-yaml.od.com/coredns/configmap.yaml
#configmap/coredns created
kubectl apply -f http://k8s-yaml.od.com/coredns/deployment.yaml
#deployment.apps/coredns created
kubectl apply -f http://k8s-yaml.od.com/coredns/svc.yaml
#service/coredns created

Check and verify

[root@k8s-node01 ~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-97c47b95c-krc7n 1/1 Running 0 6m33s 172.7.21.3 k8s-node01.boysec.cn <none> <none>
[root@k8s-node02 ~]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coredns ClusterIP 192.168.0.2 <none> 53/UDP,53/TCP 6m41s
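A quick functional check that CoreDNS actually resolves cluster names, run from any node (this assumes dig is available; kubernetes.default is the API server Service that always exists):

### Query CoreDNS (ClusterIP 192.168.0.2) for the kubernetes API Service
dig -t A kubernetes.default.svc.cluster.local @192.168.0.2 +short
### Expected: the kubernetes Service ClusterIP, typically 192.168.0.1 with this Service CIDR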

Deploy Traefik (Ingress)

Prepare the Traefik image

On the ops host k8s-dns:

docker pull traefik:v1.7.26
docker tag traefik:v1.7.26 harbor.od.com/public/traefik:v1.7.26
docker push harbor.od.com/public/traefik:v1.7.26

Prepare the resource manifests

mkdir -p /var/k8s-yaml/traefik && cd /var/k8s-yaml/traefik

vim /var/k8s-yaml/traefik/rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system

vim /var/k8s-yaml/traefik/daemonset.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: harbor.od.com/public/traefik:v1.7.26
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 81
        - name: admin
          containerPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
        - --insecureskipverify=true
        - --kubernetes.endpoint=https://10.1.1.50:7443
        - --accesslog
        - --accesslog.filepath=/var/log/traefik_access.log
        - --traefiklog
        - --traefiklog.filepath=/var/log/traefik.log
        - --metrics.prometheus
      imagePullSecrets:
      - name: harbor

vim /var/k8s-yaml/traefik/svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin

vim /var/k8s-yaml/traefik/ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefik.od.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-ingress-service
          servicePort: 8080

Apply the manifests in order

Open http://k8s-yaml.od.com/traefik in a browser to check that the resource manifest files were created correctly.

kubectl apply -f http://k8s-yaml.od.com/traefik/rbac.yaml 
kubectl apply -f http://k8s-yaml.od.com/traefik/daemonset.yaml
kubectl apply -f http://k8s-yaml.od.com/traefik/svc.yaml
kubectl apply -f http://k8s-yaml.od.com/traefik/ingress.yaml
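After applying, it is worth confirming that the DaemonSet pod is Running on both nodes and that hostPort 81 is published (a small verification sketch):

### One traefik-ingress-lb pod should be Running per node
kubectl get pods -n kube-system -l k8s-app=traefik-ingress-lb -o wide
### hostPort 81 should be listening on each node
netstat -lnpt | grep :81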

Configure DNS resolution

k8s-dns

vim /var/named/chroot/etc/od.com.zone
...
traefik A 10.1.1.50
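After editing the zone (remember to bump the zone's serial number), reload named and check that the record resolves. This is only a sketch; it assumes the bind server on k8s-dns answers on 10.1.1.254, the same address CoreDNS forwards to.

### Reload the DNS server and test the new record
systemctl restart named
dig traefik.od.com @10.1.1.254 +short
### Expected: 10.1.1.50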

Configure the reverse proxy

Nginx on both k8s-master.boysec.cn and k8s-slave.boysec.cn needs this configuration:

cat /etc/nginx/conf.d/od.com.conf
upstream default_backend_traefik {
    server 10.1.1.100:81 max_fails=3 fail_timeout=10s;
    server 10.1.1.110:81 max_fails=3 fail_timeout=10s;
}
server {
    server_name *.od.com;

    location / {
        proxy_pass http://default_backend_traefik;
        proxy_set_header Host $http_host;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    }
}
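After placing the file on both hosts, validate the syntax and reload nginx:

### Check the configuration and reload without downtime
nginx -t
nginx -s reload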

Access in a browser

http://traefik.od.com

Deploy the Dashboard

Prepare the Dashboard image

k8s-dns

docker pull kubernetesui/dashboard:v2.0.1
docker tag kubernetesui/dashboard:v2.0.1 harbor.od.com/public/dashboard:v2.0
docker push harbor.od.com/public/dashboard:v2.0

Prepare the resource manifests

mkdir -p /var/k8s-yaml/dashboard && cd /var/k8s-yaml/dashboard

vim /var/k8s-yaml/dashboard/rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

vim /var/k8s-yaml/dashboard/secret.yaml

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kube-system
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kube-system
type: Opaque

vim /var/k8s-yaml/dashboard/configmap.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kube-system

vim /var/k8s-yaml/dashboard/svc.yaml

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
  - port: 443
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

vim /var/k8s-yaml/dashboard/ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: dashboard.od.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443

vim /var/k8s-yaml/dashboard/deployment.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: harbor.od.com/public/dashboard:v2.0
        imagePullPolicy: Always
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
        - --auto-generate-certificates
        - --namespace=kube-system
        # Uncomment the following line to manually specify Kubernetes API server Host
        # If not specified, Dashboard will attempt to auto discover the API server and connect
        # to it. Uncomment only if the default does not work.
        # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: admin-user
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

Configure DNS resolution

k8s-dns

vim /var/named/chroot/etc/od.com.zone
...
dashboard A 10.1.1.50

Apply the manifests in order

kubectl apply -f http://k8s-yaml.od.com/dashboard/rbac.yaml 
kubectl apply -f http://k8s-yaml.od.com/dashboard/secret.yaml
kubectl apply -f http://k8s-yaml.od.com/dashboard/configmap.yaml
kubectl apply -f http://k8s-yaml.od.com/dashboard/svc.yaml
kubectl apply -f http://k8s-yaml.od.com/dashboard/ingress.yaml
kubectl apply -f http://k8s-yaml.od.com/dashboard/deployment.yaml
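A quick check that the Dashboard came up correctly:

### The Dashboard pod, Service and Ingress should all exist in kube-system
kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard -o wide
kubectl get svc,ingress -n kube-system | grep kubernetes-dashboard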

Sign the certificate

Create the certificate on the ops host k8s-dns.boysec.cn

# Create the certificate with openssl
(umask 077;openssl genrsa -out dashboard.od.com.key 2048)

openssl req -new -key dashboard.od.com.key -out dashboard.od.com.csr -subj "/CN=dashboard.od.com/C=CN/ST=BJ/L=Beijing/O=od/OU=Linuxboy"

openssl x509 -req -in dashboard.od.com.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out dashboard.od.com.crt -days 3560
scp dashboard.od.com.crt dashboard.od.com.key k8s-master:/etc/nginx/certs
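To double-check the signed certificate, inspect it in the directory where it was created:

### Confirm the subject and validity period of the new certificate
openssl x509 -in dashboard.od.com.crt -noout -subject -dates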

Access in a browser

Since version 1.8 the Dashboard no longer lets you skip the login, and it must be served over HTTPS, so configure nginx on k8s-master.boysec.cn and k8s-slave.boysec.cn:

vim /etc/nginx/conf.d/dashboard.od.com.conf

server {
    listen       80;
    server_name  dashboard.od.com;

    rewrite ^(.*)$ https://${server_name}$1 permanent;
}
server {
    listen       443 ssl;
    server_name  dashboard.od.com;

    ssl_certificate "certs/dashboard.od.com.crt";
    ssl_certificate_key "certs/dashboard.od.com.key";
    ssl_session_cache shared:SSL:1m;
    ssl_session_timeout 10m;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://default_backend_traefik;
        proxy_set_header Host $http_host;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    }
}

Get the token

kubectl describe secret admin-user -n kube-system
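The token is printed near the bottom of the describe output. If you only want the raw bearer token for the login page, a one-liner sketch (this assumes the admin-user ServiceAccount created by rbac.yaml still has an auto-generated token secret):

### Print just the decoded token of the admin-user ServiceAccount
kubectl -n kube-system get secret $(kubectl -n kube-system get sa admin-user -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d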

Log in