Kubernetes Enterprise Cluster Deployment

1. Introduction to Kubernetes and Its Features

1.1 What is Kubernetes

Official website: http://www.kubernetes.io

• Kubernetes is a container cluster management system open-sourced by Google in 2014, commonly abbreviated as K8S.
• K8S is used to deploy, scale, and manage containerized applications.
• K8S provides container orchestration, resource scheduling, elastic scaling, deployment management, service discovery, and more.
• The goal of Kubernetes is to make deploying containerized applications simple and efficient.

1.2 What Kubernetes provides

A container platform
A microservice platform
A portable cloud platform

1.3 Kubernetes Features

- Self-healing
On node failure, restarts failed containers and replaces or redeploys them to keep the expected number of replicas; kills containers that fail health checks and withholds client requests until they are ready, so online services are not interrupted.
- Elastic scaling
Scales application instances up or down quickly via commands, the UI, or automatically based on CPU usage, keeping the application highly available during traffic peaks; reclaims resources during off-peak hours to run services at minimal cost (see the example commands after this list).
- Automated rollout and rollback
K8S updates applications with a rolling-update strategy, one Pod at a time instead of deleting all Pods at once; if a problem appears during the update, the change is rolled back so the upgrade does not affect the business.
- Service discovery and load balancing
K8S gives a group of containers a single access point (an internal IP address and a DNS name) and load-balances across all associated containers, so users never have to deal with container IPs.
- Secret and configuration management
Manages secrets and application configuration without exposing sensitive data in images, improving security; frequently used configuration can also be stored in K8S for applications to consume.
- Storage orchestration
Mounts external storage systems as part of the cluster's resources, whether local storage, public cloud (e.g. AWS), or network storage (e.g. NFS, GlusterFS, Ceph), greatly improving storage flexibility.
- Batch processing
Provides one-off and scheduled tasks for batch data processing and analytics scenarios.
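Several of these features map directly onto kubectl commands. A minimal sketch, assuming a Deployment named nginx already exists in the cluster:

# Scale to 5 replicas manually
kubectl scale deployment nginx --replicas=5
# Or autoscale between 2 and 10 replicas based on CPU usage
kubectl autoscale deployment nginx --min=2 --max=10 --cpu-percent=80
# Trigger a rolling update by changing the image, then roll it back
kubectl set image deployment/nginx nginx=nginx:1.16.1
kubectl rollout undo deployment/nginx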

2. Kubernetes Architecture

2.1 Overall Architecture and Components

1. As shown in the figure, there are three nodes: one Master node and two Node (worker) nodes.
2. The Master runs three components:
    - API server: the single entry point to K8S, exposing RESTful API interface services.
      - Auth: authentication and authorization, deciding whether a request is allowed.
      - Etcd: the backing database, storing credentials, K8S state, node information, and so on.
    - scheduler: the cluster scheduler, deciding which node each workload is placed on.
    - controller manager: the controllers that carry out cluster tasks, managing the Pod, Service, and other controllers.
  - Kubectl: the management tool; it talks directly to the API Server, with authentication and authorization along the way.
3. Each Node runs two components:
    - kubelet: receives tasks from K8S, manages container creation and lifecycle, and turns a Pod into a set of containers.
    - kube-proxy: the Pod network proxy, layer-4 load balancing for external access.
      -  user -> firewall -> kube-proxy -> service
    Pod: the smallest unit in K8S.
      - Container: the environment the containers run in (the container engine).
        - Docker

2.2 Cluster Management Flow and Core Concepts

1. Cluster management flow

2. Kubernetes core concepts

 

Pod
  • The smallest deployment unit
  • A group of one or more containers
  • Containers in a Pod share a network namespace
  • Pods are ephemeral
Controllers
  • ReplicaSet : maintains the expected number of Pod replicas
  • Deployment : stateless application deployment
  • StatefulSet : stateful application deployment
  • DaemonSet : runs a copy of a Pod on every Node
  • Job : one-off tasks
  • Cronjob : scheduled tasks
  Note: these are higher-level objects that deploy and manage Pods; see the sketch below.
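As an illustration of these higher-level objects, a minimal Deployment that keeps three nginx replicas running might look like the following (the name and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1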

Service
  • Provides a stable endpoint so Pods are not lost as they are replaced
  • Defines an access policy for a group of Pods

Label : a key/value tag attached to a resource, used to associate, query, and filter objects (examples below)

Namespaces : namespaces, isolating objects logically

Annotations : annotations
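A few illustrative kubectl commands for these concepts (mypod is a placeholder name):

# Namespaces
kubectl create namespace dev
kubectl get pods -n dev
# Labels: attach one, then query and filter by it
kubectl label pod mypod env=prod
kubectl get pods -l env=prod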

 

3. Kubernetes Deployment

3.1 Versions and Architecture

Component versions

  • centos:7.4
  • etcd-v3.3.10
  • flannel-v0.10.0
  • kubernetes-1.12.1
  • nginx-1.16.1
  • keepalived-1.3.5
  • docker-19.03.1

Single-Master architecture

  • k8s Master:172.16.105.220
  • k8s Node:172.16.105.230、172.16.105.213
  • etcd:172.16.105.220、172.16.105.230、172.16.105.213

Dual-Master + Nginx + Keepalived architecture

  • k8s Master1:192.168.1.108
  • k8s Master2:192.168.1.109
  • k8s Node3:192.168.1.110
  • k8s Node4:192.168.1.111
  • etcd: 192.168.1.108, 192.168.1.109, 192.168.1.110, 192.168.1.111
  • Nginx+keepalived1:192.168.1.112
  • Nginx+keepalived2:192.168.1.113
  • vip:192.168.1.100

 

3.2 Preparing to Deploy Kubernetes

1. Stop the firewall

systemctl stop firewalld.service

2. Disable SELinux (temporarily)

setenforce 0

3. Set the hostname

vim /etc/hostname
hostname ****

4. Sync the time

ntpdate time.windows.com

5. Environment variables

Note: on every machine used for k8s below, it is best to add the binary directories to the PATH, for example as shown next.
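An illustrative snippet (the paths match the directories created later in this guide):

echo 'export PATH=$PATH:/opt/etcd/bin:/opt/kubernetes/bin' >> /etc/profile
source /etc/profile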

3.3 Deploying the Etcd Cluster

1. Generate self-signed certificates for Etcd

1. Create the k8s and certificate directories

mkdir ~/k8s && cd ~/k8s
mkdir k8s-cert
mkdir etcd-cert
cd etcd-cert

2. Install the cfssl certificate tools

# cfssl: generates certificates from command-line options
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
# cfssljson: generates certificates from JSON
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
# cfssl-certinfo: displays certificate information
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
# Make them executable
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo

3. Create ca-config.json:

vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}

4. Create ca-csr.json:

vim ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}

5. Generate the CA root certificate from the JSON files; ca.pem and ca-key.pem are created in the current directory:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

6. Create server-csr.json for the Etcd server certificate:

vim server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "172.16.105.220",
    "172.16.105.230",
    "172.16.105.213"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}

Note: hosts lists the IPs of the machines that run etcd.

7. Issue the Etcd server certificate; server.pem and server-key.pem are created in the current directory:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

8. List the generated certificates

ls *pem
ca-key.pem  ca.pem  server-key.pem  server.pem

2. Deploy the Etcd cluster

  • etcd version used: etcd-v3.3.10-linux-amd64.tar.gz
  • Binary package download: https://github.com/coreos/etcd/releases/tag/v3.2.12 (pick the v3.3.10 release from the releases page)

1. Download, unpack, and enter the directory:

tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64

2. Create a few directories for easier etcd management and move the binaries:

mkdir /opt/etcd/{cfg,bin,ssl} -p
mv etcd etcdctl /opt/etcd/bin/

3. Create the etcd configuration file:

vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.105.220:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.105.220:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.105.220:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.105.220:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.105.220:2380,etcd02=https://172.16.105.230:2380,etcd03=https://172.16.105.213:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Parameter meanings:
· ETCD_NAME — node name
· ETCD_DATA_DIR — data directory
· ETCD_LISTEN_PEER_URLS — peer (cluster) listen address
· ETCD_LISTEN_CLIENT_URLS — client listen address
· ETCD_INITIAL_ADVERTISE_PEER_URLS — peer advertise address
· ETCD_ADVERTISE_CLIENT_URLS — client advertise address
· ETCD_INITIAL_CLUSTER — addresses of all cluster members
· ETCD_INITIAL_CLUSTER_TOKEN — cluster token
· ETCD_INITIAL_CLUSTER_STATE — state when joining: new for a new cluster, existing to join an existing one

4. Create a systemd unit for etcd:

vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
 
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target

5. Copy the certificate files into place:

cp /root/k8s/etcd-cert/{ca,ca-key,server-key,server}.pem /opt/etcd/ssl/

6. Start etcd and enable it at boot:

systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service

7. After starting, etcd may block waiting for the other members, so copy the configuration to the other two nodes and start etcd there as well.

# 1. Copy the etcd directory to the two other nodes
scp -r /opt/etcd/ root@172.16.105.230:/opt/
scp -r /opt/etcd/ root@172.16.105.213:/opt/
# 2. Copy the systemd unit to the two other nodes
scp -r /usr/lib/systemd/system/etcd.service root@172.16.105.230:/usr/lib/systemd/system/
scp -r /usr/lib/systemd/system/etcd.service root@172.16.105.213:/usr/lib/systemd/system/

8. Edit the /opt/etcd/cfg/etcd configuration file on the two other nodes.

Node 172.16.105.230 configuration:

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.105.230:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.105.230:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.105.230:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.105.230:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.105.220:2380,etcd02=https://172.16.105.230:2380,etcd03=https://172.16.105.213:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Node 172.16.105.213 configuration:

#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.105.213:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.105.213:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.105.213:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.105.213:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.105.220:2380,etcd02=https://172.16.105.230:2380,etcd03=https://172.16.105.213:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

9. Start the service on both nodes and enable it at boot:

systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service

10. Check the etcd log on the first node:

tail -f /var/log/messages
Aug  6 11:13:54 izbp14x4an2p4z7awyek7mz etcd: updating the cluster version from 3.0 to 3.3
Aug  6 11:13:54 izbp14x4an2p4z7awyek7mz etcd: updated the cluster version from 3.0 to 3.3
Aug  6 11:13:54 izbp14x4an2p4z7awyek7mz etcd: enabled capabilities for version 3.3

11. Check the listening ports:

netstat -lnpt
tcp        0      0 172.16.105.220:2379     0.0.0.0:*               LISTEN      13021/etcd          
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      13021/etcd          
tcp        0      0 172.16.105.220:2380     0.0.0.0:*               LISTEN      13021/etcd 

12. Check the process:

ps aux | grep etcd
root     13021  1.1  1.4 10541908 28052 ?      Ssl  11:13   0:02 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://172.16.105.220:2380 --listen-client-urls=https://172.16.105.220:2379,http://127.0.0.1:2379 --advertise-client-urls=https://172.16.105.220:2379 --initial-advertise-peer-urls=https://172.16.105.220:2380 --initial-cluster=etcd01=https://172.16.105.220:2380,etcd02=https://172.16.105.230:2380,etcd03=https://172.16.105.213:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem

13. Verify the cluster with etcdctl:

# Pass the certificate paths and the etcd cluster endpoints
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.105.220:2379,https://172.16.105.230:2379,https://172.16.105.213:2379" cluster-health

Output like the following means the cluster is healthy:
member 1d5fcc16a8c9361e is healthy: got healthy result from https://172.16.105.220:2379
member 7b28469233594fbd is healthy: got healthy result from https://172.16.105.230:2379
member b2e216e703023e21 is healthy: got healthy result from https://172.16.105.213:2379
cluster is healthy

Troubleshooting:

If etcd reports "request cluster ID mismatch", delete the data directory on every node and restart:

rm -rf /var/lib/etcd/default.etcd

3.4 Installing Docker on the Nodes

1. Install dependencies

yum install -y yum-utils device-mapper-persistent-data lvm2

2. Configure the repository

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3. Install the latest Docker

yum -y install docker-ce

4. Configure a registry mirror (accelerator)

Site: https://www.daocloud.io/mirror
One-line setup: curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
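The script writes the mirror address into the Docker daemon configuration; assuming the standard daemon.json mechanism, the equivalent manual change would be:

vim /etc/docker/daemon.json
{
    "registry-mirrors": ["http://f1361db2.m.daocloud.io"]
}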

5. Restart docker

systemctl restart docker

6. Check the docker version:

docker version
Version: 19.03.1

3.5 Deploying the Flannel Network on the Nodes

  • 二進制包:https://github.com/coreos/flannel/releases

1. Write the allocated subnet into etcd for flanneld to use:

/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.105.220:2379,https://172.16.105.230:2379,https://172.16.105.213:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

2. Check the network configuration just written:

/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.105.220:2379,https://172.16.105.230:2379,https://172.16.105.213:2379" get /coreos.com/network/config

3. Unpack the downloaded flannel package:

tar -xvzf flannel-v0.10.0-linux-amd64.tar.gz

4. Create directories and move the binaries into place:

mkdir -p /opt/kubernetes/{bin,cfg,ssl}
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/

5. Create the flanneld configuration file:

vim /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://172.16.105.220:2379,https://172.16.105.230:2379,https://172.16.105.213:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"

6. Create a systemd unit for flannel:

vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

7. Configure Docker to start on the flannel-assigned subnet:

vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

8. Start flannel and docker, and enable them at boot:

systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl restart docker

9. Confirm that docker0 and flannel.1 are on the same subnet:

ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.26.1 netmask 255.255.255.0 broadcast 172.17.26.255
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.26.0 netmask 255.255.255.255 broadcast 0.0.0.0

10. Inspect the routing information in etcd:

# 1. List the generated subnet keys
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.105.220:2379,https://172.16.105.230:2379,https://172.16.105.213:2379" ls /coreos.com/network/subnets/
Output:
/coreos.com/network/subnets/172.17.59.0-24
/coreos.com/network/subnets/172.17.23.0-24
/coreos.com/network/subnets/172.17.26.0-24
# 2. Show a specific subnet record (the subnet-to-node mapping)
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.105.220:2379,https://172.16.105.230:2379,https://172.16.105.213:2379" get /coreos.com/network/subnets/172.17.59.0-24
Output:
{"PublicIP":"172.16.105.220","BackendType":"vxlan","BackendData":{"VtepMAC":"ae:6b:20:4a:bd:ed"}}
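To verify that containers on different nodes can reach each other across the flannel overlay, a quick test can be run (the container names and target IP below are illustrative):

# On node 1: start a test container and note its IP
docker run -d --name web nginx
docker inspect -f '{{.NetworkSettings.IPAddress}}' web
# On node 2: ping that IP from a throwaway container
docker run --rm busybox ping -c 3 172.17.26.2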

3.6 Deploying a Single-Master Kubernetes Cluster

 

  • Download the binary package: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md
  • Downloading kubernetes-server-linux-amd64.tar.gz is enough; it contains all required components.

1. Generate certificates

1.1 Create ca-config.json:

vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}

1.2 Create ca-csr.json:

vim ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

1.3 Generate the CA certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

1.4 Create server-csr.json. Note: hosts must include every IP used to reach the apiserver (master IPs, load balancer, VIP); Node IPs do not need to be added.

vim server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "172.16.105.220",
      "172.16.105.210",
      "多選添加IP,Node節點不用添加",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

1.5 Generate the apiserver certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

1.6 Create kube-proxy-csr.json for the kube-proxy certificate:

vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

1.7 Generate the kube-proxy certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

1.8 List all generated certificates:

ls *pem
ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem

2. Deploy the Master apiserver component

1. Unpack the package in the k8s directory and enter the bin directory:

tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/

2. Create directories:

mkdir /opt/kubernetes/{bin,cfg,ssl,logs} -p

3. Copy the binaries into place:

cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin

4. Copy the generated certificates into place:

cp ca.pem ca-key.pem server.pem server-key.pem /opt/kubernetes/ssl/

5. Create the token file:

vim /opt/kubernetes/cfg/token.csv
674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

Columns: token (a random string you can generate yourself), user name, UID, user group.
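For example, a fresh token string can be generated like this:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '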

6. Create the apiserver configuration file; make sure the certificate paths are correct and etcd is reachable:

vim /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=false \
--log-dir=/opt/kubernetes/logs \
--v=4 \
--etcd-servers=https://172.16.105.220:2379,https://172.16.105.230:2379,https://172.16.105.213:2379 \
--bind-address=172.16.105.220 \
--secure-port=6443 \
--advertise-address=172.16.105.220 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--service-node-port-range=30000-50000 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
Parameter notes:
· --logtostderr — enable logging to stderr
· --v — log level
· --etcd-servers — etcd cluster addresses
· --bind-address — listen address
· --secure-port — https secure port
· --advertise-address — cluster advertise address
· --allow-privileged — allow privileged containers
· --service-cluster-ip-range — Service virtual IP range
· --enable-admission-plugins — admission control plugins
· --authorization-mode — authorization; enables RBAC and Node self-management
· --enable-bootstrap-token-auth — enable TLS bootstrapping (covered later)
· --token-auth-file — token file
· --service-node-port-range — default NodePort range for Services

Logging:
# with true, logs go to /var/log/messages by default
--logtostderr=true
# with false, logs can be directed to a directory
--logtostderr=false
--log-dir=/opt/kubernetes/logs

7. Create a systemd unit for the apiserver:

vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

8. Start the service and enable it at boot:

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

9. Check the listening ports:

netstat -lnpt | grep 8080
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      5431/kube-apiserver 
netstat -lnpt | grep 6443
tcp        0      0 172.16.105.220:6443     0.0.0.0:*               LISTEN      5431/kube-apiserver 

3. Deploy the Master scheduler component

1. Create the scheduler configuration file:

vim /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"

Parameter notes:
· --master — connect to the local apiserver
· --leader-elect — elect a leader automatically when multiple instances of this component run (HA)

2. Create a systemd unit for the scheduler:

vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

3. Start the service and enable it at boot:

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

4. Check the process:

ps aux | grep kube-scheduler
root 8393 0.5 1.1 45360 21356 ? Ssl 11:23 0:00 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

4. Deploy the Master controller-manager component

1. Create the controller-manager configuration file:

vim /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"

2. Create a systemd unit for the controller-manager:

vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

3. Start the service and enable it at boot:

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

4. Check the process:

ps aux | grep kube-controller-manager

5. Check the status of all components with kubectl:

/opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}

5. Create the kubeconfig files

Master node configuration

1. Bind the kubelet-bootstrap user (the role defined in the token file) to the system cluster role:

# Grants the minimal permissions needed to issue certificates to kubelets
/opt/kubernetes/bin/kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

2. Create the kubeconfig files. In the directory holding the kubernetes certificates, create and run the following script:

vim kubeconfig.sh

# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=674c457d4dcf2eefe4920d7dbb6b0ddc
KUBE_APISERVER="https://172.16.105.220:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/root/k8s/k8s-cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig

kubectl config set-cluster kubernetes \
  --certificate-authority=/root/k8s/k8s-cert/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/root/k8s/k8s-cert/kube-proxy.pem \
  --client-key=/root/k8s/k8s-cert/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

3. Run the script:

bash kubeconfig.sh

4. Copy the generated kube-proxy.kubeconfig and bootstrap.kubeconfig to the Node machines:

scp bootstrap.kubeconfig kube-proxy.kubeconfig root@172.16.105.230:/opt/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@172.16.105.213:/opt/kubernetes/cfg/

6. Deploy the Node kubelet component

1. Create directories on the Node:

mkdir -p /opt/kubernetes/{cfg,bin,logs,ssl}

2. Copy the following files into place:

  • /kubernetes/server/bin/kubelet
  • /kubernetes/server/bin/kube-proxy
  • Copy these two files into /opt/kubernetes/bin/ on each Node.

3. Create the kubelet configuration file:

vim /opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=false \
--log-dir=/opt/kubernetes/logs/ \
--v=4 \
--hostname-override=172.16.105.213 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
Parameter notes:
· --hostname-override — the host name shown in the cluster
· --kubeconfig — path of the kubeconfig file (generated automatically)
· --bootstrap-kubeconfig — the bootstrap.kubeconfig generated earlier
· --cert-dir — where issued certificates are stored
· --pod-infra-container-image — the image that manages the Pod network

4. Create the kubelet.config file:

vim /opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 172.16.105.213
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

5. Create a systemd unit for kubelet:

vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

6. Start kubelet and enable it at boot:

systemctl daemon-reload
systemctl enable kubelet.service
systemctl start kubelet.service

7. Check the process:

ps aux | grep kubelet
root     24607  0.8  1.7 626848 69140 ?        Ssl  16:03   0:05 /opt/kubernetes/bin/kubelet --logtostderr=false --log-dir=/opt/kubernetes/logs/ --v=4 --hostname-override=172.16.105.213 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0

8. Approve the Nodes joining the cluster from the Master:

  • After starting, the kubelet has not yet joined the cluster; the node must be approved manually.
  • On the Master, check the Nodes requesting certificate signing, as shown below.

9. View the Nodes requesting to join:

kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-7ZHhg19mVh1w2gfJOh55eaBsRisA_wT8EHZQfqCLPLE   21s   kubelet-bootstrap   Pending
node-csr-weeFsR6VVUNIHyohOgaGvy2Hr6M9qSUIkoGjQ_mUyOo   28s   kubelet-bootstrap   Pending

10. Approve the requests so the Nodes can join:

kubectl certificate approve node-csr-7ZHhg19mVh1w2gfJOh55eaBsRisA_wT8EHZQfqCLPLE
kubectl certificate approve node-csr-weeFsR6VVUNIHyohOgaGvy2Hr6M9qSUIkoGjQ_mUyOo

11. View the joined nodes:

kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
172.16.105.213   Ready    <none>   42s   v1.12.1
172.16.105.230   Ready    <none>   57s   v1.12.1
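At this point a quick smoke test can confirm that scheduling works (on this kubernetes version, kubectl run creates a Deployment; this assumes the nodes can pull the nginx image):

kubectl run nginx --image=nginx
kubectl get pods -o wide
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx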

7. Deploy the Node kube-proxy component

1. Create the kube-proxy configuration file:

vim /opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=172.16.105.213 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

2. Create a systemd unit for kube-proxy:

vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

3. Start the service and enable it at boot:

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy

4. Check the process:

ps aux | grep kube-proxy
root     27166  0.3  0.5  41588 21332 ?        Ssl  16:16   0:00 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=172.16.105.213 --cluster-cidr=10.0.0.0/24 --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig

8. Other settings

1. Bind the anonymous user to the cluster-admin role (resolves anonymous-access errors; note that this grants broad permissions and is only appropriate for testing):

kubectl create clusterrolebinding system:anonymous   --clusterrole=cluster-admin   --user=system:anonymous

3.7 Deploying a Multi-Master Kubernetes Cluster

1. Configure and deploy Master2

  • Note: Master2's configuration is the same as the single-Master setup; identical steps are skipped below.
  • Note: copying configuration files verbatim can cause etcd connection problems.
  • Note: it is best to run the etcd endpoints on the master nodes.

1. In the Master02 configuration file, change the IPs to Master02's own IP:

vim kube-apiserver
--bind-address=172.16.105.212
--advertise-address=172.16.105.212

2. Start the Master02 components:

systemctl start kube-apiserver
systemctl start kube-scheduler
systemctl start kube-controller-manager

3. Check the cluster status:

kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

4. Check that the nodes are connected:

kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
172.16.105.213   Ready    <none>   41h   v1.12.1
172.16.105.230   Ready    <none>   41h   v1.12.1

2. Deploy the Nginx load balancer

  • Note: keep system clocks in sync so the certificates remain valid.
  • nginx website: http://www.nginx.org
  • documentation --> Installing nginx --> packages

1. Write the official nginx repo into /etc/yum.repos.d/nginx.repo, adjusting the CentOS version if needed:

vim /etc/yum.repos.d/nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key

[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/7/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key

2. Refresh the yum cache:

yum clean all
yum makecache

3. Install nginx:

yum install nginx -y

4. Edit the configuration file, adding a stream block at the same level as events:

vim /etc/nginx/nginx.conf
events {
    worker_connections  1024;
} 

stream {
    log_format main "$remote_addr $upstream_addr - $time_local $status";
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
         server 172.16.105.220:6443;
         server 172.16.105.210:6443;
    }    
    server {
       listen 172.16.105.231:6443;
       proxy_pass k8s-apiserver;
    }  
}
Annotated version of the stream block:

# Layer-4 (TCP) load balancing
stream {
    # Log format
    log_format main "$remote_addr $upstream_addr - $time_local $status";
    # Log file path
    access_log /var/log/nginx/k8s-access.log main;
    # Upstream pool; k8s-apiserver is the upstream name
    upstream k8s-apiserver {
         server 172.16.105.220:6443;
         server 172.16.105.210:6443;
    }    
    # Listening server
    server {
       # Local IP and port to listen on
       listen 172.16.105.231:6443;
       # Proxy to the upstream by name; this is layer 4, so no http context is used
       proxy_pass k8s-apiserver;
    }  
}

5. Start nginx to apply the configuration:

systemctl start nginx

6. Check the listening port:

netstat -lnpt | grep 6443
tcp 0 0 172.16.105.231:6443 0.0.0.0:* LISTEN 19067/nginx: master
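A quick way to confirm the proxy forwards to the apiservers (the /version endpoint is served over TLS; -k skips certificate verification):

curl -k https://172.16.105.231:6443/version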

7. On every Node, change the server address in the kubeconfig files to point at the load balancer:

vim bootstrap.kubeconfig
server: https://172.16.105.231:6443

vim kubelet.kubeconfig
server: https://172.16.105.231:6443

vim kube-proxy.kubeconfig
server: https://172.16.105.231:6443

8. Restart the Node clients:

systemctl restart kubelet
systemctl restart kube-proxy

9. Check the Node processes:

ps aux | grep kube
root 23226 0.0 0.4 300552 16460 ? Ssl Aug08 0:25 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://172.16.105.220:2379,https://172.16.105.230:2379,https://172.16.105.213:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root 26986 1.5 1.5 632676 60740 ? Ssl 11:30 0:01 /opt/kubernetes/bin/kubelet --logtostderr=false --log-dir=/opt/kubernetes/logs/ --v=4 --hostname-override=172.16.105.213 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root 27584 0.7 0.5 41588 19896 ? Ssl 11:32 0:00 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=172.16.105.213 --cluster-cidr=10.0.0.0/24 --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig

10. Restart the Master kube-apiserver:

systemctl restart kube-apiserver

11. Watch the Nginx log:

tail -f /var/log/nginx/k8s-access.log
172.16.105.213 172.16.105.220:6443 09/Aug/2019:13:34:59 +0800 200
172.16.105.230 172.16.105.220:6443 09/Aug/2019:13:34:59 +0800 200
172.16.105.213 172.16.105.220:6443 09/Aug/2019:13:34:59 +0800 200
172.16.105.230 172.16.105.220:6443 09/Aug/2019:13:34:59 +0800 200
172.16.105.230 172.16.105.220:6443 09/Aug/2019:13:35:00 +0800 200

3. Deploying Nginx2 + Keepalived for High Availability

  • Note: the VIP must be one of the IPs authorized in the certificate, otherwise external access will fail.
  • Note: installing Nginx2 is the same as the single-Nginx installation above; only the key differences are covered here.

1. Install keepalived on both Nginx1 and Nginx2:

yum -y install keepalived

2. Edit the Nginx1 (master) configuration file:

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   # Notification recipients
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # Sender address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

# Check via VRRP whether the local nginx service is healthy
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51 # VRRP router ID; unique per instance
    priority 100    # priority; set 90 on the backup server
    advert_int 1    # VRRP heartbeat advertisement interval, default 1 second

    # Password authentication
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # VIP
    virtual_ipaddress {
        192.168.1.100/24
    }

    # Run the check script
    track_script {
        check_nginx
    }
}

3. Edit the Nginx2 (backup) configuration file:

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   # Notification recipients
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # Sender address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

# Check via VRRP whether the local nginx service is healthy
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 51 # VRRP router ID; unique per instance
    priority 90    # priority; the backup server uses 90
    advert_int 1    # VRRP heartbeat advertisement interval, default 1 second

    # Password authentication
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # VIP
    virtual_ipaddress {
        192.168.1.100/24
    }

    # Run the check script
    track_script {
        check_nginx
    }
}

4. Create the check script on both Nginx1 and Nginx2:

vim /etc/keepalived/check_nginx.sh
#!/bin/bash
# Count nginx processes; if none are running, stop keepalived so the VIP fails over
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

5. Make the script executable:

chmod +x /etc/keepalived/check_nginx.sh

6. Start keepalived on both Nginx1 and Nginx2:

systemctl start keepalived

7. Check the processes:

ps aux | grep keepalived
root 1969 0.0 0.1 118608 1396 ? Ss 09:41 0:00 /usr/sbin/keepalived -D
root 1970 0.0 0.2 120732 2832 ? S 09:41 0:00 /usr/sbin/keepalived -D
root 1971 0.0 0.2 120732 2380 ? S 09:41 0:00 /usr/sbin/keepalived -D

8. Check the VIP on the master:

ip addr
ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:3d:1c:d0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.115/24 brd 192.168.1.255 scope global dynamic ens32
       valid_lft 5015sec preferred_lft 5015sec
    inet 192.168.1.100/24 scope global secondary ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::4db8:8591:9f94:8837/64 scope link 
       valid_lft forever preferred_lft forever

9. Check the VIP on the backup (its absence is normal):

ip addr
ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:09:b3:c4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.112/24 brd 192.168.1.255 scope global dynamic ens32
       valid_lft 7200sec preferred_lft 7200sec
    inet6 fe80::1dbe:11ff:f093:ef49/64 scope link 
       valid_lft forever preferred_lft forever

10. Test VIP failover

1. Stop nginx on the master (Nginx1):
pkill nginx
2. Check whether the VIP has moved to the backup (Nginx2):
ip addr
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:09:b3:c4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.112/24 brd 192.168.1.255 scope global dynamic ens32
       valid_lft 4387sec preferred_lft 4387sec
    inet 192.168.1.100/24 scope global secondary ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::1dbe:11ff:f093:ef49/64 scope link 
       valid_lft forever preferred_lft forever
3. Start nginx and keepalived on Nginx1 again and check that the VIP moves back:
systemctl start nginx
systemctl start keepalived
4. Check the VIP on Nginx1:
ip addr
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:3d:1c:d0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.115/24 brd 192.168.1.255 scope global dynamic ens32
       valid_lft 7010sec preferred_lft 7010sec
    inet 192.168.1.100/24 scope global secondary ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::4db8:8591:9f94:8837/64 scope link 
       valid_lft forever preferred_lft forever

11. Change the proxy listener on Nginx1 and Nginx2 to listen on all addresses and point the upstream at both masters:

vim /etc/nginx/nginx.conf
stream {
    log_format main "$remote_addr $upstream_addr - $time_local $status";
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
         server 192.168.1.108:6443;
         server 192.168.1.109:6443;
    }
    server {
       listen 0.0.0.0:6443;
       proxy_pass k8s-apiserver;
    }  
}

12. Restart nginx:

systemctl restart nginx

13. Connect K8S: on every Node, change the server address in the kubeconfig files to the VIP

1. Edit the files:

vim bootstrap.kubeconfig 
server: https://192.168.1.100:6443
vim kube-proxy.kubeconfig
server: https://192.168.1.100:6443

2. Restart the Node components:

systemctl restart kubelet
systemctl restart kube-proxy
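
A quick check that the VIP answers before inspecting the logs (-k skips certificate verification):

curl -k https://192.168.1.100:6443/version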

3. Watch the Nginx1 log on the master:

tail -f /var/log/nginx/k8s-access.log
192.168.1.111 192.168.1.108:6443 - 22/Aug/2019:11:02:36 +0800 200
192.168.1.111 192.168.1.109:6443 - 22/Aug/2019:11:02:36 +0800 200
192.168.1.110 192.168.1.108:6443 - 22/Aug/2019:11:02:36 +0800 200
192.168.1.110 192.168.1.109:6443 - 22/Aug/2019:11:02:36 +0800 200
192.168.1.111 192.168.1.108:6443 - 22/Aug/2019:11:02:37 +0800 200