Installing Kubernetes 1.22.2 on CentOS 8

I. Environment

Three CentOS 8.3 machines:

Hostname    CPU  Memory  Disk
k8s-master 4 4G 600G
k8s-node1 4 4G 600G
k8s-node2 4 4G 600G

II. Setup

Run the following commands on all three nodes.

1. Switch to the Aliyun repository and update the system

curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo
yum clean all
yum update -y

2. Disable the firewall and SELinux

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

3. Disable swap and comment out the swap entry in /etc/fstab

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
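
To confirm swap is fully off, both of the following should report no swap in use:

```shell
# Verify that no swap devices remain active
swapon --show           # should print nothing
free -h | grep -i swap  # the Swap line should show 0B total
```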

4. Load the bridge netfilter module

modprobe br_netfilter
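
modprobe only loads the module for the current boot. To reload it automatically after a reboot, a common approach is a modules-load.d entry (the file name k8s.conf is just a convention):

```shell
# Load br_netfilter on every boot via systemd-modules-load
cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF
```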

5. Configure kernel parameters

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Apply the configuration:

sysctl --system
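
You can verify that the bridge settings took effect:

```shell
# Both should print "... = 1"
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
```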

6. Install supporting packages

yum install -y vim bash-completion net-tools gcc yum-utils device-mapper-persistent-data lvm2

7. Add the Aliyun repository and install docker-ce

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce

8. Configure a Docker registry mirror

mkdir -p /etc/docker                   # create the config directory
tee /etc/docker/daemon.json <<-'EOF'   # configure the registry mirror
{
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload                # reload systemd units
systemctl restart docker               # restart docker
systemctl enable docker
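
To see which cgroup driver Docker ended up with (relevant to step 10 below), you can query it directly:

```shell
# Prints the active cgroup driver, e.g. "cgroupfs"
docker info --format '{{.CgroupDriver}}'
```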

9. Install kubectl, kubelet, and kubeadm

Add the Aliyun Kubernetes repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install the packages and enable kubelet:

yum install kubectl kubelet kubeadm -y
systemctl enable kubelet
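
Note that this installs the latest packages in the repository, which may be newer than the 1.22.2 that kubeadm init targets below. If you want the package versions to match exactly, they can be pinned (assuming the Aliyun repo still carries these versions):

```shell
# Pin the tooling to the same version used by kubeadm init
yum install -y kubectl-1.22.2 kubelet-1.22.2 kubeadm-1.22.2
systemctl enable kubelet
```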

10. Configure the cgroup driver

Docker's cgroup driver and the kubelet's cgroup driver must match; if they differ, the kubeadm init in the next step will fail.

vi /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf

Add the following line at the end of the file:

Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=cgroupfs"

Restart kubelet:

systemctl daemon-reload
systemctl restart kubelet

Run the following commands on the master node only.

11. Initialize the Kubernetes cluster

The Pod network CIDR is 10.122.0.0/16, and the API server address is the master's own IP.

kubeadm init --kubernetes-version=1.22.2 \
--apiserver-advertise-address=192.168.1.109 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16

On success, output like the following is returned. (In the transcript below, the first attempt was interrupted with Ctrl-C and cleaned up with kubeadm reset before the second, successful run.)

kubeadm init --kubernetes-version=1.22.2  \
> --apiserver-advertise-address=192.168.1.109   \
> --image-repository registry.aliyuncs.com/google_containers 
[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    [WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
^C
[root@k8s-master ~]# ^C
[root@k8s-master ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1020 23:32:20.248271    4600 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W1020 23:32:20.259867    4600 cleanupnode.go:109] [reset] Failed to evaluate the "/var/lib/kubelet" directory. Skipping its unmount and cleanup: lstat /var/lib/kubelet: no such file or directory
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[root@k8s-master ~]# kubeadm init --kubernetes-version=1.22.2  \
> --apiserver-advertise-address=192.168.1.109   \
> --image-repository registry.aliyuncs.com/google_containers 
[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    [WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master.lzcwy.cn] and IPs [10.96.0.1 192.168.1.109]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master.lzcwy.cn] and IPs [192.168.1.109 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master.lzcwy.cn] and IPs [192.168.1.109 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.538533 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master.lzcwy.cn as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master.lzcwy.cn as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 8t88ff.mydpxdpcuv5cf8jm
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.109:6443 --token 8t88ff.mydpxdpcuv5cf8jm \
    --discovery-token-ca-cert-hash sha256:834d854781e28928c7ac6d670f05d66be4dcc5fe2b93521eaa69562c6edde5c7 

Configure kubectl access:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Run the following to enable kubectl command completion:

source <(kubectl completion bash)
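
That only lasts for the current shell session. To make completion permanent, append it to your shell profile:

```shell
# Load kubectl completion in every future bash session
echo 'source <(kubectl completion bash)' >> ~/.bashrc
```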

Run the following on each worker node:

kubeadm join 192.168.1.109:6443 --token 8t88ff.mydpxdpcuv5cf8jm \
    --discovery-token-ca-cert-hash sha256:834d854781e28928c7ac6d670f05d66be4dcc5fe2b93521eaa69562c6edde5c7 
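
The bootstrap token expires after 24 hours by default. If a node is added later, a fresh join command can be generated on the master:

```shell
# Prints a complete "kubeadm join ..." command with a new token
kubeadm token create --print-join-command
```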

Run the following on the control-plane node.

12. Install the Calico network plugin

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Wait for the pods to come up (this can take up to about 20 minutes), then check:

kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-75f8f6cc59-lstp4   1/1     Running   0          9m30s
kube-system   calico-node-2wj4z                          1/1     Running   0          9m31s
kube-system   calico-node-4ftfp                          1/1     Running   0          9m31s
kube-system   calico-node-r5nm7                          1/1     Running   0          9m31s
kube-system   coredns-7f6cbbb7b8-pkrlb                   1/1     Running   0          13m
kube-system   coredns-7f6cbbb7b8-rpr45                   1/1     Running   0          13m
kube-system   etcd-master.lzcwy.cn                       1/1     Running   0          13m
kube-system   kube-apiserver-master.lzcwy.cn             1/1     Running   0          13m
kube-system   kube-controller-manager-master.lzcwy.cn    1/1     Running   0          13m
kube-system   kube-proxy-dh7z5                           1/1     Running   0          11m
kube-system   kube-proxy-dq2gt                           1/1     Running   0          13m
kube-system   kube-proxy-grtfg                           1/1     Running   0          11m
kube-system   kube-scheduler-master.lzcwy.cn             1/1     Running   0          13m

Check the node status:

kubectl get nodes
NAME              STATUS   ROLES                  AGE   VERSION
master.lzcwy.cn   Ready    control-plane,master   13m   v1.22.2
node1.lzcwy.cn    Ready    <none>                 11m   v1.22.2
node2.lzcwy.cn    Ready    <none>                 11m   v1.22.2

13. Install kubernetes-dashboard

The official dashboard manifest does not expose the service via NodePort. Download the yaml file and add a nodePort to the Service section:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
[root@master01 ~]# kubectl create -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
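
The dashboard login screen (https://&lt;any-node-ip&gt;:30000) asks for a token. One common approach, sketched here with an assumed account name admin-user (not part of the manifest above), is to create a service account with cluster-admin rights and read its token:

```shell
# Create a service account for dashboard login (name is arbitrary)
kubectl create serviceaccount admin-user -n kubernetes-dashboard
kubectl create clusterrolebinding admin-user \
  --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:admin-user
# On 1.22 a token secret is still auto-created for the service account
kubectl -n kubernetes-dashboard describe secret \
  "$(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')"
```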
