Kubernetes Certification: Setting Up a CKA/CKS Practice Environment

Environment Preparation

Prepare three Linux machines (this article uses Ubuntu 23.10) that can reach one another over the network.

The three Ubuntu 23.10 machines used in this article:

hostname     IP          memory
k8s-master   10.1.1.20   4GB
k8s-node1    10.1.1.30   2GB
k8s-node2    10.1.1.40   2GB

System Initialization

Run the following on each of k8s-master, k8s-node1, and k8s-node2; operating as the root user is recommended.

Grant the regular user (work) passwordless sudo

visudo
# Allow members of group sudo to execute any command
# Add the following line:
work ALL=(ALL) NOPASSWD:ALL

Set the timezone to Asia/Shanghai

timedatectl set-timezone Asia/Shanghai
apt-get install -y ntpdate >/dev/null 2>&1
ntpdate ntp.aliyun.com

Disable swap

sed -i '/swap/d' /etc/fstab
swapoff -a
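A quick way to confirm that swap is really off: after swapoff -a, /proc/swaps should contain nothing but its header line.

```shell
# After swapoff -a, /proc/swaps lists no active swap areas;
# only the header line (Filename  Type  Size  Used  Priority) remains
cat /proc/swaps
```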

Disable the firewall

systemctl disable --now ufw >/dev/null 2>&1

Load kernel modules and enable traffic forwarding

cat >>/etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter

cat >>/etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system >/dev/null 2>&1
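To verify the settings took effect, the values can be read back directly from /proc; each should print 1 (the bridge entries only appear once the br_netfilter module has been loaded).

```shell
# Should print 1 once sysctl --system has applied kubernetes.conf
cat /proc/sys/net/ipv4/ip_forward
# These entries exist only after modprobe br_netfilter has run
cat /proc/sys/net/bridge/bridge-nf-call-iptables
cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
```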

Install containerd, kubeadm, kubelet, and kubectl

Install containerd

mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

apt-get -qq update >/dev/null 2>&1
apt-get install -qq -y containerd.io >/dev/null 2>&1
containerd config default >/etc/containerd/config.toml
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
sed -i 's#sandbox_image = "registry.k8s.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml

systemctl enable containerd
systemctl restart containerd
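To double-check the containerd setup before moving on, confirm the service is active and that the sed edit actually switched the cgroup driver (the grep should show SystemdCgroup = true):

```shell
# Prints "active" if containerd is running
systemctl is-active containerd
# Verify the cgroup-driver flip performed by the sed command above
grep 'SystemdCgroup' /etc/containerd/config.toml
```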

Install kubeadm, kubelet, kubectl

curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add > /dev/null 2>&1
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list > /dev/null 2>&1
apt-get -qq update >/dev/null 2>&1

apt-get install -y kubeadm=1.28.0-00 kubelet=1.28.0-00 kubectl=1.28.0-00
# Recommended: pin the versions so a later apt upgrade does not move them
apt-mark hold kubeadm kubelet kubectl

Verify the installation of kubeadm, kubelet, and kubectl; if each command prints a version number, the installation succeeded.

kubeadm version
kubelet --version
kubectl version --client

Initialize the master node

All of the following commands are run on the master node.

Pull the images the cluster needs from the Aliyun mirror

sudo kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers

If the pull succeeds, you will see output similar to:

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.4
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1

Initialize kubeadm

  • --apiserver-advertise-address: the local IP address this node uses to communicate with the other nodes
  • --pod-network-cidr: the pod network address space
sudo kubeadm init --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address=10.1.1.20  --pod-network-cidr=10.244.0.0/16

Save the last part of the output; it tells you which follow-up configuration to perform.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:
# Prepare the .kube directory
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

# Deploy a pod network add-on
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

# Join worker nodes
kubeadm join 10.1.1.20:6443 --token hd3cjk.sk5co35ml64kw2wo \
--discovery-token-ca-cert-hash sha256:05b42f0a81350227d45f7005c6f2dc664f75d70e0b5e5e8dbfb65705425a859c

Shell autocompletion (Bash)

More information can be found at https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-autocomplete

source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
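The same cheat sheet page also documents a k alias that reuses kubectl's bash completion; a small optional convenience:

```shell
# Alias kubectl to k and attach the same bash completion function to the alias
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
source ~/.bashrc
```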

Deploy a pod network add-on

Pick a network add-on from https://kubernetes.io/docs/concepts/cluster-administration/addons/ and follow the linked instructions to deploy it.

Here we choose an overlay-based option, Calico; deploy it as follows:

$ kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/tigera-operator.yaml

$ curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.4/manifests/custom-resources.yaml -o calico-custom-resources.yaml

$ cat calico-custom-resources.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16 # change to the pod network address space passed to kubeadm init
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

$ kubectl apply -f calico-custom-resources.yaml

Join worker nodes

Joining a worker node is straightforward: run the kubeadm join command printed by kubeadm init on each worker node, taking care to use the correct --token.

kubeadm join 10.1.1.20:6443 --token hd3cjk.sk5co35ml64kw2wo \
--discovery-token-ca-cert-hash sha256:05b42f0a81350227d45f7005c6f2dc664f75d70e0b5e5e8dbfb65705425a859c

Note: what if you lose the join token or the discovery-token-ca-cert-hash?

The token can be retrieved with `kubeadm token list`, e.g. `0pdoeh.wrqchegv3xm3k1ow`:

$ kubeadm token list
TOKEN                     TTL   EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
hd3cjk.sk5co35ml64kw2wo   23h   2023-12-19T03:41:11Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

The discovery-token-ca-cert-hash can be recomputed from the cluster CA certificate:

openssl x509 -in /etc/kubernetes/pki/ca.crt -pubkey -noout |
openssl pkey -pubin -outform DER |
openssl dgst -sha256

The output looks like SHA2-256(stdin)= 05b42f0a81350227d45f7005c6f2dc664f75d70e0b5e5e8dbfb65705425a859c
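kubeadm join expects the hash in sha256:&lt;hex&gt; form; a small sketch that rewrites the openssl output accordingly with sed (alternatively, kubeadm token create --print-join-command prints a complete, ready-to-use join command):

```shell
# Reformat "SHA2-256(stdin)= <hex>" (or "(stdin)= <hex>" on older
# openssl versions) into the sha256:<hex> form that kubeadm join expects
openssl x509 -in /etc/kubernetes/pki/ca.crt -pubkey -noout \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 \
  | sed 's/^.*= /sha256:/'
```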

Finally, check the node list on the master node (in this example there are two worker nodes):

$ kubectl get nodes
NAME         STATUS   ROLES           AGE   VERSION
k8s-master   Ready    control-plane   7m    v1.28.0
k8s-node1    Ready    <none>          50s   v1.28.0
k8s-node2    Ready    <none>          21s   v1.28.0

Cluster Verification

Create a pod

Create an nginx pod and verify that it reaches the Running state.

$ kubectl run web --image nginx
pod/web created
$ kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          5s

Create a service

Create a service for the nginx pod and verify that its cluster IP is reachable with curl.

$ kubectl expose pod web  --port=80 --name=web-service
service/web-service exposed
$ kubectl get service
NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes    ClusterIP   10.96.0.1      <none>        443/TCP   65m
web-service   ClusterIP   10.96.95.185   <none>        80/TCP    4s
$ curl 10.96.95.185
...
<title>Welcome to nginx!</title>
...

Clean Up the Environment

$ kubectl delete service web-service
$ kubectl delete pod web

Need the CKA/CKS-related YAML files?

Follow the official account and reply with cka-yaml or cks-yaml.