Kubernetes 1.7.6 HA deployment

Preface:

1. This guide deploys everything from binaries.

2. Versions: Kubernetes 1.7.6, etcd 3.2.9.

3. High availability: etcd runs as an HA cluster; kube-apiserver is stateless and sits behind an haproxy load balancer; kube-controller-manager and kube-scheduler rely on their built-in leader election, so they need no extra HA handling.
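The apiserver load-balancing layer is not spelled out in this section, so here is a hedged sketch of what the haproxy frontend on master1/master2 might look like. The VIP 192.168.1.24 and port 6444 match the --server addresses used in the kubeconfig steps below; the backend port 6443 is an assumption (the kube-apiserver default secure port, which the unit files in this document do not override).

```
# /etc/haproxy/haproxy.cfg (sketch, not from the original document)
frontend k8s-api
    bind 192.168.1.24:6444
    mode tcp
    default_backend k8s-api-backend

backend k8s-api-backend
    mode tcp
    balance roundrobin
    server master1 192.168.1.18:6443 check
    server master2 192.168.1.19:6443 check
    server master3 192.168.1.20:6443 check
```

keepalived then floats 192.168.1.24 between master1 and master2 so that the frontend survives the loss of either haproxy node.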

 

Environment:

Network layout: the host/node subnet is 192.168.1.x/24, the service cluster CIDR is 172.16.0.0/16, and the pod CIDR is 172.17.0.0/16. These values are used throughout.

Hostname  IP address  Services  Notes
master1 192.168.1.18 etcd flanneld kube-apiserver kube-controller-manager kube-scheduler haproxy keepalived  VIP 192.168.1.24 is the floating IP for the apiserver
master2 192.168.1.19 etcd flanneld kube-apiserver kube-controller-manager kube-scheduler haproxy keepalived
master3 192.168.1.20 etcd flanneld kube-apiserver kube-controller-manager kube-scheduler
node1 192.168.1.21 flanneld docker kube-proxy kubelet harbor
node2 192.168.1.22 flanneld docker kube-proxy kubelet harbor
node3 192.168.1.23 flanneld docker kube-proxy kubelet harbor

Steps:

1. Generate certificates and kubeconfig files (run these steps on any one master)

The Kubernetes components encrypt their communication with TLS certificates. This document uses CloudFlare's PKI toolkit, cfssl, to generate the Certificate Authority (CA) and all other certificates.

 

The generated CA certificate and key files are:

  • ca-key.pem
  • ca.pem
  • kubernetes-key.pem
  • kubernetes.pem
  • kube-proxy.pem
  • kube-proxy-key.pem
  • admin.pem
  • admin-key.pem

The components that use these certificates:

  • etcd: uses ca.pem, kubernetes-key.pem, kubernetes.pem
  • kube-apiserver: uses ca.pem, kubernetes-key.pem, kubernetes.pem
  • kubelet: uses ca.pem
  • kube-proxy: uses ca.pem, kube-proxy-key.pem, kube-proxy.pem
  • kubectl: uses ca.pem, admin-key.pem, admin.pem

Generating the certificates requires cfssl; install it first:

[root@k8s-master01 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master01 ~]# chmod +x cfssl_linux-amd64
[root@k8s-master01 ~]# mv cfssl_linux-amd64 /usr/bin/cfssl
[root@k8s-master01 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master01 ~]# chmod +x cfssljson_linux-amd64
[root@k8s-master01 ~]# mv cfssljson_linux-amd64 /usr/bin/cfssljson
[root@k8s-master01 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master01 ~]# chmod +x cfssl-certinfo_linux-amd64
[root@k8s-master01 ~]# mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

Create the CA (Certificate Authority)

Create the CA config file (the print-defaults templates below are starting points; edit them into ca-config.json):

[root@k8s-master01 ~]# mkdir /opt/ssl
[root@k8s-master01 ~]# cd /opt/ssl
[root@k8s-master01 ~]# cfssl print-defaults config > config.json
[root@k8s-master01 ~]# cfssl print-defaults csr > csr.json
[root@k8s-master01 ~]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}

 

Create the CA certificate signing request

[root@k8s-master01 ~]# cat ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

 

Generate the CA certificate and private key

[root@k8s-master01 ~]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@k8s-master01 ~]# ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
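As a quick sanity check, the generated files can be inspected with openssl (or cfssl-certinfo). The sketch below is not part of the original procedure: it builds a throwaway self-signed CA in /tmp so the example runs anywhere, then inspects and verifies it. Against the real files you would instead run `openssl verify -CAfile ca.pem kubernetes.pem` once the kubernetes cert exists.

```shell
# Throwaway self-signed demo CA (hypothetical, not the cluster CA)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo-ca-key.pem -out /tmp/demo-ca.pem \
  -days 1 -subj "/CN=kubernetes" 2>/dev/null

# Show the subject and validity window
openssl x509 -in /tmp/demo-ca.pem -noout -subject -dates

# A self-signed certificate verifies against itself
openssl verify -CAfile /tmp/demo-ca.pem /tmp/demo-ca.pem
```

The same two openssl invocations are useful later for checking expiry dates and that node certificates chain back to ca.pem.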

 

Create the kubernetes certificate

Create the kubernetes certificate signing request

[root@k8s-master01 ~]# cat kubernetes-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "192.168.1.18",
      "192.168.1.19",
      "192.168.1.20",
      "192.168.1.21",
      "192.168.1.22",
      "192.168.1.23",
      "172.16.0.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
  • If the hosts field is non-empty, it must list the IPs or domain names authorized to use the certificate. Because this certificate is used by both the etcd cluster and the Kubernetes masters, the request above lists the etcd and master host IPs as well as the IP of the kubernetes service (normally the first IP of the --service-cluster-ip-range passed to kube-apiserver, 172.16.0.1 in this environment).
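The note above says the service IP is the first address of the --service-cluster-ip-range. For a /16 such as this environment's 172.16.0.0/16, that is the network address plus one; a small illustrative shell sketch (variable names are my own):

```shell
# Derive the first usable IP of the /16 service CIDR (illustrative helper)
CIDR="172.16.0.0/16"
NET=${CIDR%/*}                               # strip the prefix length -> 172.16.0.0
FIRST_IP="${NET%.*}.$(( ${NET##*.} + 1 ))"   # bump the last octet   -> 172.16.0.1
echo "$FIRST_IP"
```

This prints 172.16.0.1, matching the entry in the hosts list above.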

Generate the kubernetes certificate and private key

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
$ ls kubernetes*
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem

 

Create the admin certificate

Create the admin certificate signing request

$ cat admin-csr.json
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

 

Generate the admin certificate and private key

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
$ ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem

 

Create the kube-proxy certificate

Create the kube-proxy certificate signing request

$ cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

 

Generate the kube-proxy client certificate and private key

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy
$ ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem

 

Distribute the certificates

Copy the generated certificate and key files (*.pem) to /etc/kubernetes/ssl on every machine:

[root@k8s-master01 ~]# cd /opt/ssl/
[root@k8s-master01 ssl]# mkdir -p /etc/kubernetes/ssl/
[root@k8s-master01 ssl]# cp * /etc/kubernetes/ssl/
[root@k8s-master01 ssl]# for i in `seq 19 23`; do  scp -r /etc/kubernetes/ 192.168.1.$i:/etc/;done
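Before copying, it can be worth confirming that every expected key pair exists. A hedged sketch (assumes you are still in /opt/ssl; prints "missing" for any absent pair):

```shell
# Check that each cert/key pair from the steps above is present (sketch)
for name in ca kubernetes admin kube-proxy; do
  if [ -f "$name.pem" ] && [ -f "$name-key.pem" ]; then
    echo "$name: ok"
  else
    echo "$name: missing"
  fi
done
```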

 

Create the kubeconfig files

 

Configure kubectl's kubeconfig file

The file is written to /root/.kube/config.

# Configure the kubernetes cluster
[root@k8s-master01 ~]# kubectl config set-cluster kubernetes \
>   --certificate-authority=/etc/kubernetes/ssl/ca.pem \
>   --embed-certs=true \
>   --server=https://192.168.1.24:6444
Cluster "kubernetes" set.

# Configure client credentials
[root@k8s-master01 ~]# kubectl config set-credentials admin \
>   --client-certificate=/etc/kubernetes/ssl/admin.pem \
>   --embed-certs=true \
>   --client-key=/etc/kubernetes/ssl/admin-key.pem
User "admin" set.
[root@k8s-master01 ~]# kubectl config set-context kubernetes \
>   --cluster=kubernetes \
>   --user=admin
Context "kubernetes" created.
[root@k8s-master01 ~]# kubectl config use-context kubernetes
Switched to context "kubernetes".

# Distribute the file
[root@k8s-master01 ~]# for i in `seq 19 23`;do scp -r /root/.kube 192.168.1.$i:/root/;done
config                                                                                              100% 6260     6.1KB/s   00:00    
config                                                                                              100% 6260     6.1KB/s   00:00    
config                                                                                              100% 6260     6.1KB/s   00:00    
config                                                                                              100% 6260     6.1KB/s   00:00    
config                                                                                              100% 6260     6.1KB/s   00:00    
[root@k8s-master01 ~]#

Processes on the Node machines, such as kubelet and kube-proxy, must authenticate and be authorized when they talk to the kube-apiserver on the masters.

Since version 1.4, Kubernetes supports TLS Bootstrapping, in which the kube-apiserver issues TLS certificates for clients, so you no longer need to generate a certificate for each client by hand. Currently this works only for kubelet certificates.

Create the TLS Bootstrapping token

Token auth file

The token can be any string containing 128 bits of entropy; generate it with a cryptographically secure random number generator.

[root@k8s-master01 ssl]# cd /etc/kubernetes/
[root@k8s-master01 kubernetes]# export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
[root@k8s-master01 kubernetes]# cat > token.csv <<EOF
> ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
> EOF
[root@k8s-master01 kubernetes]# ls
ssl  token.csv
[root@k8s-master01 kubernetes]# cat token.csv 
bd962dfaa4b87d896c4e944f113428d3,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@k8s-master01 kubernetes]# 
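The pipeline above reads 16 random bytes (128 bits) and renders them as hex: `od -An -t x` prints the bytes as hex words and `tr -d ' '` strips the spacing, leaving exactly 32 hex characters. A standalone check of that property:

```shell
# Same generator as above; the result is always 32 lowercase hex chars (128 bits)
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "token: $BOOTSTRAP_TOKEN"
echo "length: ${#BOOTSTRAP_TOKEN}"
```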

 

Distribute token.csv to /etc/kubernetes/ on every machine (masters and nodes):

[root@k8s-master01 kubernetes]# for i in `seq 19 23`; do scp token.csv 192.168.1.$i:/etc/kubernetes/;done
token.csv                                                                                           100%   84     0.1KB/s   00:00    
token.csv                                                                                           100%   84     0.1KB/s   00:00    
token.csv                                                                                           100%   84     0.1KB/s   00:00    
token.csv                                                                                           100%   84     0.1KB/s   00:00    
token.csv                                                                                           100%   84     0.1KB/s   00:00    
[root@k8s-master01 kubernetes]#

 

Create the kubelet bootstrapping kubeconfig file

When the kubelet starts, it sends a TLS bootstrapping request to the kube-apiserver. The kubelet-bootstrap user from the token file must first be bound to the system:node-bootstrapper role; only then does the kubelet have permission to create certificate signing requests (certificatesigningrequests).

 

Create the role binding for the user configured in the master's token.csv. This only needs to be done once.
Run on a master:

[root@k8s-master01 bin]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

 

Create the kubelet kubeconfig file

Copy the kubectl binary:
[root@k8s-master01 bin]# cp kubectl /usr/bin/

[root@k8s-master01 bin]# cd /etc/kubernetes/

# Configure the cluster
[root@k8s-master01 kubernetes]# kubectl config set-cluster kubernetes \
>   --certificate-authority=/etc/kubernetes/ssl/ca.pem \
>   --embed-certs=true \
>   --server=https://192.168.1.24:6444 \
>   --kubeconfig=bootstrap.kubeconfig
Cluster "kubernetes" set.

# Configure client credentials
[root@k8s-master01 kubernetes]# kubectl config set-credentials kubelet-bootstrap \
>   --token=bd962dfaa4b87d896c4e944f113428d3 \
>   --kubeconfig=bootstrap.kubeconfig
User "kubelet-bootstrap" set.

# Configure the context
[root@k8s-master01 kubernetes]# kubectl config set-context default \
>   --cluster=kubernetes \
>   --user=kubelet-bootstrap \
>   --kubeconfig=bootstrap.kubeconfig
Context "default" created.

# Set the default context
[root@k8s-master01 kubernetes]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Switched to context "default".
[root@k8s-master01 kubernetes]# ls
bootstrap.kubeconfig  ssl  token.csv

# Distribute the file
[root@k8s-master01 kubernetes]# for i in `seq 19 23`; do scp bootstrap.kubeconfig 192.168.1.$i:/etc/kubernetes/;done
bootstrap.kubeconfig 100% 2166 2.1KB/s 00:00
bootstrap.kubeconfig 100% 2166 2.1KB/s 00:00
bootstrap.kubeconfig 100% 2166 2.1KB/s 00:00
bootstrap.kubeconfig 100% 2166 2.1KB/s 00:00
bootstrap.kubeconfig 100% 2166 2.1KB/s 00:00
[root@k8s-master01 kubernetes]#

 

Create the kube-proxy kubeconfig file

[root@k8s-master01 ~]# cd /etc/kubernetes/

# Configure the cluster
[root@k8s-master01 kubernetes]# kubectl config set-cluster kubernetes \
>   --certificate-authority=/etc/kubernetes/ssl/ca.pem \
>   --embed-certs=true \
>   --server=https://192.168.1.24:6444 \
>   --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.

# Configure client credentials
[root@k8s-master01 kubernetes]# kubectl config set-credentials kube-proxy \
>   --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
>   --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
>   --embed-certs=true \
>   --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.

# Configure the context
[root@k8s-master01 kubernetes]# kubectl config set-context default \
>   --cluster=kubernetes \
>   --user=kube-proxy \
>   --kubeconfig=kube-proxy.kubeconfig
Context "default" created.

# Set the default context
[root@k8s-master01 kubernetes]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".
[root@k8s-master01 kubernetes]# ls
bootstrap.kubeconfig  kube-proxy.kubeconfig  ssl  token.csv
[root@k8s-master01 kubernetes]# 

# Distribute the file to all node machines
[root@k8s-master01 kubernetes]# for i in `seq 19 23`; do scp kube-proxy.kubeconfig 192.168.1.$i:/etc/kubernetes/;done

kube-proxy.kubeconfig 100% 6272 6.1KB/s 00:00
kube-proxy.kubeconfig 100% 6272 6.1KB/s 00:00
kube-proxy.kubeconfig 100% 6272 6.1KB/s 00:00
kube-proxy.kubeconfig 100% 6272 6.1KB/s 00:00
kube-proxy.kubeconfig 100% 6272 6.1KB/s 00:00

 

2. etcd HA deployment

3. Master node configuration

Install flannel

[root@k8s-master01 src]# tar zxvf flannel-v0.9.0-linux-amd64.tar.gz 
flanneld
mk-docker-opts.sh
README.md
[root@k8s-master01 src]# mv flanneld /usr/bin/
[root@k8s-master01 src]# mv mk-docker-opts.sh /usr/bin/
[root@k8s-master01 src]# for i in `seq 19 23`;do scp /usr/bin/flanneld /usr/bin/mk-docker-opts.sh 192.168.1.$i:/usr/bin/ ;done
flanneld                                                                                            100%   33MB  32.9MB/s   00:00    
mk-docker-opts.sh                                                                                   100% 2139     2.1KB/s   00:00    
flanneld                                                                                            100%   33MB  32.9MB/s   00:00    
mk-docker-opts.sh                                                                                   100% 2139     2.1KB/s   00:00    
flanneld                                                                                            100%   33MB  32.9MB/s   00:01    
mk-docker-opts.sh                                                                                   100% 2139     2.1KB/s   00:00    
flanneld                                                                                            100%   33MB  32.9MB/s   00:00    
mk-docker-opts.sh                                                                                   100% 2139     2.1KB/s   00:00    
flanneld                                                                                            100%   33MB  32.9MB/s   00:01    
mk-docker-opts.sh                                                                                   100% 2139     2.1KB/s   00:00    
[root@k8s-master01 src]# 

 

Distribute the binaries to all master nodes:

[root@k8s-master01 bin]# for i in `seq 18 20`;do scp kube-apiserver kube-controller-manager kube-scheduler 192.168.1.$i:/usr/bin/;done
kube-apiserver                                                                                      100%  176MB  88.2MB/s   00:02    
kube-controller-manager                                                                             100%  131MB  65.3MB/s   00:02    
kube-scheduler                                                                                      100%   73MB  72.6MB/s   00:01    
kube-apiserver                                                                                      100%  176MB  58.8MB/s   00:03    
kube-controller-manager                                                                             100%  131MB  65.3MB/s   00:02    
kube-scheduler                                                                                      100%   73MB  72.6MB/s   00:01    
kube-apiserver                                                                                      100%  176MB  58.8MB/s   00:03    
kube-controller-manager                                                                             100%  131MB  65.3MB/s   00:02    
kube-scheduler                                                                                      100%   73MB  72.6MB/s   00:01

 

Add the CA certificate to the system trust store

Enable the dynamic CA configuration:

update-ca-trust force-enable

Copy the CA root certificate into place:

cp /etc/kubernetes/ssl/ca.pem /etc/pki/ca-trust/source/anchors/

Apply:

update-ca-trust extract

 

Configure the flannel IP range

Run on an etcd node; this network is the pod CIDR mentioned above.

[root@k8s-master01 src]# etcdctl --endpoint https://192.168.1.18:2379 set /flannel/network/config '{"Network":"172.17.0.0/16"}'
{"Network":"172.17.0.0/16"}
[root@k8s-master01 src]#

Configure flannel

Set up flanneld.service:

[root@k8s-master01 system]# cat /usr/lib/systemd/system/flanneld.service 
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
    
[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
ExecStartPost=/usr/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
[root@k8s-master01 system]#

 

Config files:

[root@k8s-master01 system]# cat  /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="https://192.168.1.18:2379,https://192.168.1.19:2379,https://192.168.1.20:2379"
FLANNEL_ETCD_PREFIX="/flannel/network"
FLANNEL_OPTIONS="--iface=eth0"

# iface is the name of the physical NIC

[root@k8s-master01 system]# cat /etc/sysconfig/docker-network
DOCKER_NETWORK_OPTIONS=

# May be empty

[root@k8s-master01 system]# cat /usr/bin/flanneld-start
#!/bin/sh

exec /usr/bin/flanneld \
-etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS:-${FLANNEL_ETCD}} \
-etcd-prefix=${FLANNEL_ETCD_PREFIX:-${FLANNEL_ETCD_KEY}} \
"$@"

 

[root@k8s-master01 system]# chmod +x /usr/bin/flanneld-start
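The wrapper above relies on shell default expansion: `${FLANNEL_ETCD_ENDPOINTS:-${FLANNEL_ETCD}}` uses FLANNEL_ETCD_ENDPOINTS when it is set and non-empty, and otherwise falls back to the older FLANNEL_ETCD / FLANNEL_ETCD_KEY variable names. A minimal demonstration (the values here are illustrative):

```shell
# ${VAR:-fallback}: use VAR if set and non-empty, otherwise the fallback
unset FLANNEL_ETCD_PREFIX
FLANNEL_ETCD_KEY="/flannel/network"                   # legacy variable name
echo "${FLANNEL_ETCD_PREFIX:-${FLANNEL_ETCD_KEY}}"    # falls back to /flannel/network

FLANNEL_ETCD_PREFIX="/coreos.com/network"
echo "${FLANNEL_ETCD_PREFIX:-${FLANNEL_ETCD_KEY}}"    # now prints /coreos.com/network
```

This is why the unit file only needs to export one set of variables in /etc/sysconfig/flanneld.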

 

 

Make sure docker is stopped:

systemctl stop docker

Start the flanneld service:

systemctl daemon-reload 
systemctl enable flanneld
systemctl start flanneld

 

Verify:

[root@k8s-master01 system]# ifconfig flannel0
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 172.17.2.0  netmask 255.255.0.0  destination 172.17.2.0
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 

 

Configure kube-apiserver

Create the log directory:

[root@k8s-master01 ~]# mkdir /var/log/kubernetes

 

Configure the service unit file:

[root@k8s-master01 system]# cat /usr/lib/systemd/system/kube-apiserver.service 
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
User=root
ExecStart=/usr/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --advertise-address=192.168.1.18 \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/lib/audit.log \
  --authorization-mode=RBAC \
  --bind-address=192.168.1.18 \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --enable-swagger-ui=true \
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://192.168.1.18:2379,https://192.168.1.19:2379,https://192.168.1.20:2379 \
  --event-ttl=1h \
  --kubelet-https=true \
  --insecure-bind-address=192.168.1.18 \
  --runtime-config=rbac.authorization.k8s.io/v1alpha1 \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-cluster-ip-range=172.16.0.0/16 \
  --service-node-port-range=30000-32000 \
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --experimental-bootstrap-token-auth \
  --token-auth-file=/etc/kubernetes/token.csv \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@k8s-master01 system]# 
# --service-cluster-ip-range=172.16.0.0/16 is the service CIDR described above. Note --service-node-port-range=30000-32000:
# this is the port range used when exposing services externally; randomly assigned NodePorts come from this range, and explicitly requested ports must fall within it too.

# Start the service
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
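The range check the apiserver applies to an explicitly requested NodePort can be illustrated with a small shell sketch (the port values are examples, not from the original text):

```shell
# Accept only NodePorts inside --service-node-port-range=30000-32000 (illustrative)
MIN=30000; MAX=32000
for PORT in 31000 8080; do
  if [ "$PORT" -ge "$MIN" ] && [ "$PORT" -le "$MAX" ]; then
    echo "$PORT: allowed"
  else
    echo "$PORT: out of range"
  fi
done
```

So a Service manifest asking for nodePort 8080 would be rejected in this environment, while 31000 would be accepted.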

 

Configure kube-controller-manager

Configure the service unit file:

[root@k8s-master01 kubernetes]# cat  /usr/lib/systemd/system/kube-controller-manager.service 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/bin/kube-controller-manager \
  --address=0.0.0.0 \
  --master=http://192.168.1.24:8081 \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=172.16.0.0/16 \
  --cluster-cidr=172.17.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \
  --leader-elect=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
[root@k8s-master01 kubernetes]#


# Start the service
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

 

Configure kube-scheduler

4. Node configuration

 

5. Install the private registry Harbor with HA

 

6. Install the DNS add-on

 

7. Install the Dashboard

 

8. Install the monitoring add-ons
