May is here with spring in full bloom, and the epidemic is largely behind us, which is worth celebrating. Today we get hands-on and build a highly available, load-balanced Kubernetes cluster.

May 7, 2020 - First draft - 左程立

Original post - https://blog.zuolinux.com/2020/05/07/k8s-cluster-on-centos7.html
Software

CentOS 7, Docker CE 18.09.9, Kubernetes 1.18.2 (kubeadm / kubelet / kubectl), HAProxy 1.5.18, Keepalived, Calico v3.8
Hardware
| Hostname | IP |
|---|---|
| master01 | 192.168.10.12 |
| master02 | 192.168.10.13 |
| master03 | 192.168.10.14 |
| work01 | 192.168.10.15 |
| work02 | 192.168.10.16 |
| work03 | 192.168.10.17 |
| VIP | 192.168.10.19 |
Run on all master and worker nodes
```bash
cat >> /etc/hosts << EOF
192.168.10.12 master01
192.168.10.13 master02
192.168.10.14 master03
192.168.10.15 work01
192.168.10.16 work02
192.168.10.17 work03
EOF
```
```bash
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak | grep -v swap > /etc/fstab

# Install wget
yum install wget -y

# Back up the existing repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

# Fetch the Aliyun yum repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

# Fetch the Aliyun EPEL repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

# Clear the cache and rebuild it
yum clean all && yum makecache

# Update packages
yum update -y

# Sync the time
timedatectl
timedatectl set-ntp true
```
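A quick sanity check before moving on (read-only commands, safe to run anywhere):

```bash
# Should print Permissive (Disabled after a reboot)
getenforce
# The Swap line should be all zeros
free -m | grep -i swap
# Should print inactive
systemctl is-active firewalld
```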
Install on all master and worker nodes
```bash
# Install Docker CE

# Install required packages
yum install -y yum-utils device-mapper-persistent-data lvm2

# Add the Docker repo
yum-config-manager \
  --add-repo \
  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install Docker CE
yum install -y containerd.io \
  docker-ce-18.09.9 \
  docker-ce-cli-18.09.9

# Start Docker and enable it at boot
systemctl start docker
systemctl enable docker

# Change Docker's cgroup driver to systemd and switch to domestic registry mirrors
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "registry-mirrors": [
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://registry.docker-cn.com"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl restart docker
```
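To confirm the daemon actually picked up the settings from daemon.json, a quick check:

```bash
# Should print: systemd
docker info --format '{{.CgroupDriver}}'
# Should list the mirrors configured above
docker info | grep -i -A 3 'registry mirrors'
```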
Run on all master and worker nodes
```bash
# Configure the Kubernetes yum repo (use the official repo if it is reachable)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Add kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Apply them
sysctl --system

# Install the latest stable release as of this writing (v1.18.2): kubelet, kubeadm, kubectl
yum install -y kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2 --disableexcludes=kubernetes

# Start kubelet and enable it at boot
systemctl start kubelet
systemctl enable kubelet
```
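One caveat: the two bridge-nf-call sysctls above only exist while the br_netfilter kernel module is loaded, and on some CentOS 7 installs it is not loaded by default. A small sketch to load it now and at every boot:

```bash
# Load the module immediately
modprobe br_netfilter
# Load it automatically at boot (the file name under modules-load.d is arbitrary)
echo br_netfilter > /etc/modules-load.d/k8s.conf
# Re-apply the sysctl settings now that the module is present
sysctl --system
```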
Run on all master nodes
```bash
yum install haproxy-1.5.18 -y

cat > /etc/haproxy/haproxy.cfg <<EOF
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend k8s-api
    mode tcp
    option tcplog
    bind *:16443
    default_backend k8s-api

backend k8s-api
    mode tcp
    balance roundrobin
    server master01 192.168.10.12:6443 check
    server master02 192.168.10.13:6443 check
    server master03 192.168.10.14:6443 check
EOF
```
Start HAProxy on all master nodes
```bash
systemctl start haproxy
systemctl enable haproxy
```
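HAProxy now forwards TCP 16443 to port 6443 on the three masters; the `check` keyword makes it health-check each backend, and `balance roundrobin` spreads connections across them. The apiservers do not exist yet, so the backends are DOWN for now, but you can already confirm the frontend is listening:

```bash
# Every master should show haproxy listening on 16443
ss -lntp | grep 16443
```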
Run on all master nodes
```bash
# psmisc provides killall, which the keepalived health check below relies on
yum -y install keepalived psmisc
```
keepalived configuration on master01:
```bash
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id master01
    script_user root
    enable_script_security
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 10
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.19
    }
    track_script {
        check_haproxy
    }
}
EOF
```
keepalived configuration on master02:
```bash
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id master02
    script_user root
    enable_script_security
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 10
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 50
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.19
    }
    track_script {
        check_haproxy
    }
}
EOF
```
keepalived configuration on master03:
```bash
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id master03
    script_user root
    enable_script_security
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 10
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 50
    priority 96
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.19
    }
    track_script {
        check_haproxy
    }
}
EOF
```
Run on all master nodes
```bash
systemctl start keepalived
systemctl enable keepalived
```
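master01 should now hold the VIP: all three nodes run the same `check_haproxy` script (`killall -0 haproxy` exits 0 as long as an haproxy process exists), keepalived adds `weight 10` to the priority while the check passes, and master01 starts with the highest base priority (100 vs 98 vs 96). If HAProxy dies on the VIP holder, its effective priority drops below a backup's and the VIP moves. A quick way to see who currently holds it:

```bash
# Run on each master; only the current VIP holder prints a line
ip addr show ens192 | grep -w 192.168.10.19
```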
Note
Before initializing, set up hosts resolution first.
MASTER_IP is the VIP address.
APISERVER_NAME is the domain name for the VIP.
```bash
export MASTER_IP=192.168.10.19
export APISERVER_NAME=k8s.api
echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
```
Run kubeadm init on master01 to initialize the cluster
```bash
kubeadm init \
  --apiserver-advertise-address 0.0.0.0 \
  --apiserver-bind-port 6443 \
  --cert-dir /etc/kubernetes/pki \
  --control-plane-endpoint k8s.api \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version 1.18.2 \
  --pod-network-cidr 192.10.0.0/16 \
  --service-cidr 192.20.0.0/16 \
  --service-dns-domain cluster.local \
  --upload-certs
```
Run on master01, so you can manage the cluster with kubectl
If you are running as root:
```bash
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
```
If you are running as a non-root user:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
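With kubectl configured, verify that the control plane answers through the VIP endpoint (the node will report NotReady until the Pod network is deployed in the next step):

```bash
kubectl cluster-info
kubectl get nodes -o wide
```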
Run on master01
```bash
# Fetch the manifest
mkdir calico && cd calico
wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml

# Edit the manifest
vi calico.yaml
# Find 192.168.0.0/16 and change it to 192.10.0.0/16

# Deploy the Pod network add-on
kubectl apply -f calico.yaml
```
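If you prefer not to edit the file by hand, a sed one-liner does the same swap (assuming the default CIDR appears in the manifest exactly as written, as it does in the v3.8 manifest):

```bash
# Replace Calico's default Pod CIDR with the --pod-network-cidr used above
sed -i 's|192.168.0.0/16|192.10.0.0/16|g' calico.yaml
```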
Watch the Pod status in real time:
```bash
watch kubectl get pods --all-namespaces -o wide
```
Run on the other master nodes

Use the join command printed in the kubeadm init output on master01.
Change the port from 6443 to 16443.
```bash
export MASTER_IP=192.168.10.19
export APISERVER_NAME=k8s.api
echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
```
```bash
kubeadm join k8s.api:16443 --token ztjux9.2tau56zck212j9ra \
    --discovery-token-ca-cert-hash sha256:a2b552266902fb5f6620330fc1a6638a9cdd6fec3408edba1082e6c8389ac517 \
    --control-plane --certificate-key 961494e7d0a9de0219e2b0dc8bdaa9ca334ecf093a6c5f648aa34040ad39b61a
```
```bash
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
```
Run on the worker nodes

Use the join command printed in the kubeadm init output on master01.
Change the port from 6443 to 16443.
```bash
export MASTER_IP=192.168.10.19
export APISERVER_NAME=k8s.api
echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
```
```bash
kubeadm join k8s.api:16443 --token ztjux9.2tau56zck212j9ra \
    --discovery-token-ca-cert-hash sha256:a2b552266902fb5f6620330fc1a6638a9cdd6fec3408edba1082e6c8389ac517
```
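Note that the join credentials are short-lived: by default the token expires after 24 hours and the uploaded control-plane certificates after 2 hours. If they have expired before you join a node, regenerate them on master01, for example:

```bash
# Print a fresh worker join command (includes a new token);
# remember to change the port from 6443 to 16443 as above
kubeadm token create --print-join-command
# Re-upload the control-plane certs and print a new --certificate-key
kubeadm init phase upload-certs --upload-certs
```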
```bash
watch kubectl get nodes -o wide
```
When all nodes show Ready, the cluster has been installed successfully.
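To test failover, stop keepalived on master01 (stopping HAProxy works too, since that fails the `check_haproxy` script) and watch where the VIP goes, for example:

```bash
# On master01: give up the VIP
systemctl stop keepalived
# On master02: the VIP should appear within a few seconds
ip addr show ens192 | grep -w 192.168.10.19
# The cluster keeps answering through the VIP
kubectl get nodes
# On master01: restore the original state
systemctl start keepalived
```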
You can see that the VIP fails over to master02.
Then repeat the same operation on master02 and watch whether the VIP moves on to master03.
Today we built a highly available, load-balanced Kubernetes cluster; this post is a record of the steps I actually ran.
I have not covered the overall architecture, what each component does, or how data flows through the cluster; I will write about that later.
That's it for now; see you next time.