I'm using five CentOS 7.7 virtual machines here; the details are listed in the table below:
OS Version | IP Address | Role | CPU | Memory | Hostname |
---|---|---|---|---|---|
CentOS-7.7 | 192.168.243.143 | master | >=2 | >=2G | m1 |
CentOS-7.7 | 192.168.243.144 | master | >=2 | >=2G | m2 |
CentOS-7.7 | 192.168.243.145 | master | >=2 | >=2G | m3 |
CentOS-7.7 | 192.168.243.146 | worker | >=2 | >=2G | n1 |
CentOS-7.7 | 192.168.243.147 | worker | >=2 | >=2G | n2 |
Docker must be installed on all five machines beforehand. Since the installation is straightforward it is not covered here; refer to the official documentation:
Software version notes:
The following describes the IPs, ports, and other network-related settings used throughout the cluster setup; they will not be explained again later:
# IPs of the 3 master nodes
192.168.243.143
192.168.243.144
192.168.243.145
# IPs of the 2 worker nodes
192.168.243.146
192.168.243.147
# hostnames of the 3 master nodes
m1, m2, m3
# high-availability virtual IP for the api-server (used by keepalived, customizable)
192.168.243.101
# network interface used by keepalived, usually eth0; check with the `ip a` command
ens32
# Kubernetes Service IP range (customizable)
10.255.0.0/16
# IP of the kubernetes api-server Service, usually the first address of the CIDR (customizable)
10.255.0.1
# IP of the DNS service, usually the second address of the CIDR (customizable)
10.255.0.2
# Pod network CIDR (customizable)
172.23.0.0/16
# NodePort range (customizable)
8400-8900
1. Every node must have a unique hostname, and all nodes must be able to reach each other by hostname. Set the hostname:
# view the hostname
$ hostname
# change the hostname
$ hostnamectl set-hostname <your_hostname>
Configure /etc/hosts so that all nodes can reach each other by hostname:
$ vim /etc/hosts
192.168.243.143 m1
192.168.243.144 m2
192.168.243.145 m3
192.168.243.146 n1
192.168.243.147 n2
2. Install dependency packages:
# update yum
$ yum update -y
# install dependency packages
$ yum install -y conntrack ipvsadm ipset jq sysstat curl wget iptables libseccomp
3. Disable the firewall and swap, and reset iptables:
# stop and disable the firewall
$ systemctl stop firewalld && systemctl disable firewalld
# reset iptables
$ iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# turn off swap
$ swapoff -a
$ sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
# disable selinux
$ setenforce 0
# stop dnsmasq (otherwise Docker containers may fail to resolve domain names)
$ service dnsmasq stop && systemctl disable dnsmasq
# restart the docker service
$ systemctl restart docker
4. Configure kernel parameters:
# create the configuration file
$ cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
EOF
# apply the file
$ sysctl -p /etc/sysctl.d/kubernetes.conf
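If applying the file complains that the net.bridge.bridge-nf-call-* keys do not exist, the br_netfilter kernel module is probably not loaded yet. A minimal sketch to load it and persist it across reboots (assuming your kernel ships the module under this name):
# load the bridge netfilter module so the bridge-nf-call sysctls exist
$ modprobe br_netfilter
# persist the module across reboots
$ echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
# re-apply the sysctl file
$ sysctl -p /etc/sysctl.d/kubernetes.conf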
Because the binary installation method requires the Kubernetes component binaries to be present on every node, we need to copy the prepared binaries to each node. To make this easier, pick a staging node (any node will do) and set up passwordless SSH login to all the other nodes, so you don't have to type passwords repeatedly while copying.
I chose m1 as the staging node. First generate a key pair on the m1 node:
[root@m1 ~]# ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/root/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /root/.ssh/id_rsa. Your public key has been saved in /root/.ssh/id_rsa.pub. The key fingerprint is: SHA256:9CVdxUGLSaZHMwzbOs+aF/ibxNpsUaaY4LVJtC3DJiU root@m1 The key's randomart image is: +---[RSA 2048]----+ | .o*o=o| | E +Bo= o| | . *o== . | | . + @o. o | | S BoO + | | . *=+ | | .=o | | B+. | | +o=. | +----[SHA256]-----+ [root@m1 ~]#
View the content of the public key:
[root@m1 ~]# cat ~/.ssh/id_rsa.pub ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDF99/mk7syG+OjK5gFFKLZDpMWcF3BEF1Gaa8d8xNIMKt2qGgxyYOC7EiGcxanKw10MQCoNbiAG1UTd0/wgp/UcPizvJ5AKdTFImzXwRdXVbMYkjgY2vMYzpe8JZ5JHODggQuGEtSE9Q/RoCf29W2fIoOKTKaC2DNyiKPZZ+zLjzQr8sJC3BRb1Tk4p8cEnTnMgoFwMTZD8AYMNHwhBeo5NXZSE8zyJiWCqQQkD8n31wQxVgSL9m3rD/1wnsBERuq3cf7LQMiBTxmt1EyqzqM4S1I2WEfJkT0nJZeY+zbHqSJq2LbXmCmWUg5LmyxaE9Ksx4LDIl7gtVXe99+E1NLd root@m1 [root@m1 ~]#
Then copy the content of id_rsa.pub into the authorized keys file on the other machines by running the following command on each of the other nodes (replace the public key with the one you generated):
$ echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDF99/mk7syG+OjK5gFFKLZDpMWcF3BEF1Gaa8d8xNIMKt2qGgxyYOC7EiGcxanKw10MQCoNbiAG1UTd0/wgp/UcPizvJ5AKdTFImzXwRdXVbMYkjgY2vMYzpe8JZ5JHODggQuGEtSE9Q/RoCf29W2fIoOKTKaC2DNyiKPZZ+zLjzQr8sJC3BRb1Tk4p8cEnTnMgoFwMTZD8AYMNHwhBeo5NXZSE8zyJiWCqQQkD8n31wQxVgSL9m3rD/1wnsBERuq3cf7LQMiBTxmt1EyqzqM4S1I2WEfJkT0nJZeY+zbHqSJq2LbXmCmWUg5LmyxaE9Ksx4LDIl7gtVXe99+E1NLd root@m1" >> ~/.ssh/authorized_keys
Test passwordless login. As shown below, logging in to the m2 node no longer requires a password:
[root@m1 ~]# ssh m2 Last login: Fri Sep 4 15:55:59 2020 from m1 [root@m2 ~]#
First download the Kubernetes binaries. The official download address is as follows:
I'm using version 1.19.0 here; note that the download links are listed in CHANGELOG/CHANGELOG-1.19.md:
You only need to download the package for your platform architecture from the "Server Binaries" section, because the server tarball already contains the Node and Client binaries:
Copy the download link, then download and extract it on the system:
[root@m1 ~]# cd /usr/local/src
[root@m1 /usr/local/src]# wget https://dl.k8s.io/v1.19.0/kubernetes-server-linux-amd64.tar.gz   # download
[root@m1 /usr/local/src]# tar -zxvf kubernetes-server-linux-amd64.tar.gz   # extract
The Kubernetes binaries are all located in the kubernetes/server/bin/ directory:
[root@m1 /usr/local/src]# ls kubernetes/server/bin/ apiextensions-apiserver kube-apiserver kube-controller-manager kubectl kube-proxy.docker_tag kube-scheduler.docker_tag kubeadm kube-apiserver.docker_tag kube-controller-manager.docker_tag kubelet kube-proxy.tar kube-scheduler.tar kube-aggregator kube-apiserver.tar kube-controller-manager.tar kube-proxy kube-scheduler mounter [root@m1 /usr/local/src]#
To make copying easier later, organize the files so that the binaries needed by each node type are collected into their own directory. The steps are as follows:
[root@m1 /usr/local/src]# mkdir -p k8s-master k8s-worker [root@m1 /usr/local/src]# cd kubernetes/server/bin/ [root@m1 /usr/local/src/kubernetes/server/bin]# for i in kubeadm kube-apiserver kube-controller-manager kubectl kube-scheduler;do cp $i /usr/local/src/k8s-master/; done [root@m1 /usr/local/src/kubernetes/server/bin]# for i in kubelet kube-proxy;do cp $i /usr/local/src/k8s-worker/; done [root@m1 /usr/local/src/kubernetes/server/bin]#
After organizing, the files are in their respective directories: k8s-master holds the binaries needed by the master nodes, and k8s-worker holds the files needed by the worker nodes:
[root@m1 /usr/local/src/kubernetes/server/bin]# cd /usr/local/src [root@m1 /usr/local/src]# ls k8s-master/ kubeadm kube-apiserver kube-controller-manager kubectl kube-scheduler [root@m1 /usr/local/src]# ls k8s-worker/ kubelet kube-proxy [root@m1 /usr/local/src]#
Kubernetes relies on etcd for distributed storage, so we also need to download etcd. The official download address is as follows; I'm using version 3.4.13:
Again, copy the download link to the system, download it with wget, and extract it:
[root@m1 /usr/local/src]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz [root@m1 /usr/local/src]# mkdir etcd && tar -zxvf etcd-v3.4.13-linux-amd64.tar.gz -C etcd --strip-components 1 [root@m1 /usr/local/src]# ls etcd Documentation etcd etcdctl README-etcdctl.md README.md READMEv2-etcdctl.md [root@m1 /usr/local/src]#
Copy the etcd binaries into the k8s-master directory:
[root@m1 /usr/local/src]# cd etcd [root@m1 /usr/local/src/etcd]# for i in etcd etcdctl;do cp $i /usr/local/src/k8s-master/; done [root@m1 /usr/local/src/etcd]# ls ../k8s-master/ etcd etcdctl kubeadm kube-apiserver kube-controller-manager kubectl kube-scheduler [root@m1 /usr/local/src/etcd]#
Create the /opt/kubernetes/bin directory on all nodes:
$ mkdir -p /opt/kubernetes/bin
Distribute the binaries to the appropriate nodes:
[root@m1 /usr/local/src]# for i in m1 m2 m3; do scp k8s-master/* $i:/opt/kubernetes/bin/; done [root@m1 /usr/local/src]# for i in n1 n2; do scp k8s-worker/* $i:/opt/kubernetes/bin/; done
Set the PATH environment variable on every node:
[root@m1 /usr/local/src]# for i in m1 m2 m3 n1 n2; do ssh $i "echo 'PATH=/opt/kubernetes/bin:\$PATH' >> ~/.bashrc"; done
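As a quick sanity check (a sketch, assuming the distribution above succeeded), list the binaries on every node:
[root@m1 /usr/local/src]# for i in m1 m2 m3 n1 n2; do echo "== $i =="; ssh $i "ls /opt/kubernetes/bin"; done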
cfssl is a very handy CA tool that we'll use to generate certificates and keys. Installation is simple; I'm installing it on the m1 node. First download the cfssl binaries:
[root@m1 ~]# mkdir -p ~/bin [root@m1 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O ~/bin/cfssl [root@m1 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O ~/bin/cfssljson
Make both files executable:
[root@m1 ~]# chmod +x ~/bin/cfssl ~/bin/cfssljson
Add it to the PATH environment variable:
[root@m1 ~]# vim ~/.bashrc PATH=~/bin:$PATH [root@m1 ~]# source ~/.bashrc
Verify that it runs correctly:
[root@m1 ~]# cfssl version Version: 1.2.0 Revision: dev Runtime: go1.6 [root@m1 ~]#
The root certificate is shared by all nodes in the cluster, so only one CA certificate needs to be created; every certificate created afterwards is signed by it. First create a ca-csr.json file with the following content:
{ "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "seven" } ] }
Run the following command to generate the certificate and private key:
[root@m1 ~]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
When it completes you will have the following files (what we ultimately want are ca-key.pem and ca.pem — a private key and a certificate):
[root@m1 ~]# ls *.pem ca-key.pem ca.pem [root@m1 ~]#
Distribute these two files to every master node:
[root@m1 ~]# for i in m1 m2 m3; do ssh $i "mkdir -p /etc/kubernetes/pki/"; done [root@m1 ~]# for i in m1 m2 m3; do scp *.pem $i:/etc/kubernetes/pki/; done
Next we need to generate the certificate and private key used by the etcd nodes. Create a ca-config.json file with the following content:
{ "signing": { "default": { "expiry": "87600h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "87600h" } } } }
Then create an etcd-csr.json file with the following content:
{ "CN": "etcd", "hosts": [ "127.0.0.1", "192.168.243.143", "192.168.243.144", "192.168.243.145" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "seven" } ] }
The IPs in hosts are the IPs of the master nodes. With these two files in place, generate etcd's certificate and private key with the following command:
[root@m1 ~]# cfssl gencert -ca=ca.pem \
    -ca-key=ca-key.pem \
    -config=ca-config.json \
    -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
[root@m1 ~]# ls etcd*.pem   # two files are generated on success
etcd-key.pem  etcd.pem
[root@m1 ~]#
Then distribute these two files to every etcd node:
[root@m1 ~]# for i in m1 m2 m3; do scp etcd*.pem $i:/etc/kubernetes/pki/; done
Create an etcd.service file so that the etcd service can later be started, stopped, and restarted via systemctl. Its content is as follows:
[Unit] Description=Etcd Server After=network.target After=network-online.target Wants=network-online.target Documentation=https://github.com/coreos [Service] Type=notify WorkingDirectory=/var/lib/etcd/ ExecStart=/opt/kubernetes/bin/etcd \ --data-dir=/var/lib/etcd \ --name=m1 \ --cert-file=/etc/kubernetes/pki/etcd.pem \ --key-file=/etc/kubernetes/pki/etcd-key.pem \ --trusted-ca-file=/etc/kubernetes/pki/ca.pem \ --peer-cert-file=/etc/kubernetes/pki/etcd.pem \ --peer-key-file=/etc/kubernetes/pki/etcd-key.pem \ --peer-trusted-ca-file=/etc/kubernetes/pki/ca.pem \ --peer-client-cert-auth \ --client-cert-auth \ --listen-peer-urls=https://192.168.243.143:2380 \ --initial-advertise-peer-urls=https://192.168.243.143:2380 \ --listen-client-urls=https://192.168.243.143:2379,http://127.0.0.1:2379 \ --advertise-client-urls=https://192.168.243.143:2379 \ --initial-cluster-token=etcd-cluster-0 \ --initial-cluster=m1=https://192.168.243.143:2380,m2=https://192.168.243.144:2380,m3=https://192.168.243.145:2380 \ --initial-cluster-state=new Restart=on-failure RestartSec=5 LimitNOFILE=65536 [Install] WantedBy=multi-user.target
Distribute this unit file to every master node:
[root@m1 ~]# for i in m1 m2 m3; do scp etcd.service $i:/etc/systemd/system/; done
After distribution, the etcd.service file must be modified on the master nodes other than m1. The main changes are listed below (a sed sketch follows the snippet):
# 修改成所在節點的hostname --name=m1 # 如下幾項則是將ip修改成所在節點的ip,本地ip不用修改 --listen-peer-urls=https://192.168.243.143:2380 --initial-advertise-peer-urls=https://192.168.243.143:2380 --listen-client-urls=https://192.168.243.143:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.243.143:2379
Next, create etcd's working directory on every master node:
[root@m1 ~]# for i in m1 m2 m3; do ssh $i "mkdir -p /var/lib/etcd"; done
Run the following command on each etcd node to start the etcd service:
$ systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd
systemctl start etcd may hang for a while; this is normal. Check the service status — active (running) means it started successfully:
$ systemctl status etcd
If it failed to start, check the logs to troubleshoot:
$ journalctl -f -u etcd
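Once all three members are up, you can also verify the cluster health with etcdctl — a sketch, run from any master node, using the certificates generated earlier:
[root@m1 ~]# ETCDCTL_API=3 /opt/kubernetes/bin/etcdctl \
    --cacert=/etc/kubernetes/pki/ca.pem \
    --cert=/etc/kubernetes/pki/etcd.pem \
    --key=/etc/kubernetes/pki/etcd-key.pem \
    --endpoints=https://192.168.243.143:2379,https://192.168.243.144:2379,https://192.168.243.145:2379 \
    endpoint health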
The first step is the same as before: generate the certificate and private key for the api-server. Create a kubernetes-csr.json file with the following content:
{ "CN": "kubernetes", "hosts": [ "127.0.0.1", "192.168.243.143", "192.168.243.144", "192.168.243.145", "192.168.243.101", "10.255.0.1", "kubernetes", "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "seven" } ] }
Generate the certificate and private key:
[root@m1 ~]# cfssl gencert -ca=ca.pem \ -ca-key=ca-key.pem \ -config=ca-config.json \ -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes [root@m1 ~]# ls kubernetes*.pem kubernetes-key.pem kubernetes.pem [root@m1 ~]#
Distribute them to every master node:
[root@m1 ~]# for i in m1 m2 m3; do scp kubernetes*.pem $i:/etc/kubernetes/pki/; done
Create a kube-apiserver.service file with the following content:
[Unit] Description=Kubernetes API Server Documentation=https://github.com/GoogleCloudPlatform/kubernetes After=network.target [Service] ExecStart=/opt/kubernetes/bin/kube-apiserver \ --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \ --anonymous-auth=false \ --advertise-address=192.168.243.143 \ --bind-address=0.0.0.0 \ --insecure-port=0 \ --authorization-mode=Node,RBAC \ --runtime-config=api/all=true \ --enable-bootstrap-token-auth \ --service-cluster-ip-range=10.255.0.0/16 \ --service-node-port-range=8400-8900 \ --tls-cert-file=/etc/kubernetes/pki/kubernetes.pem \ --tls-private-key-file=/etc/kubernetes/pki/kubernetes-key.pem \ --client-ca-file=/etc/kubernetes/pki/ca.pem \ --kubelet-client-certificate=/etc/kubernetes/pki/kubernetes.pem \ --kubelet-client-key=/etc/kubernetes/pki/kubernetes-key.pem \ --service-account-key-file=/etc/kubernetes/pki/ca-key.pem \ --etcd-cafile=/etc/kubernetes/pki/ca.pem \ --etcd-certfile=/etc/kubernetes/pki/kubernetes.pem \ --etcd-keyfile=/etc/kubernetes/pki/kubernetes-key.pem \ --etcd-servers=https://192.168.243.143:2379,https://192.168.243.144:2379,https://192.168.243.145:2379 \ --enable-swagger-ui=true \ --allow-privileged=true \ --apiserver-count=3 \ --audit-log-maxage=30 \ --audit-log-maxbackup=3 \ --audit-log-maxsize=100 \ --audit-log-path=/var/log/kube-apiserver-audit.log \ --event-ttl=1h \ --alsologtostderr=true \ --logtostderr=false \ --log-dir=/var/log/kubernetes \ --v=2 Restart=on-failure RestartSec=5 Type=notify LimitNOFILE=65536 [Install] WantedBy=multi-user.target
Distribute this unit file to every master node:
[root@m1 ~]# for i in m1 m2 m3; do scp kube-apiserver.service $i:/etc/systemd/system/; done
After distribution, modify the kube-apiserver.service file on the master nodes other than m1. Only one item needs to change (a sed sketch follows the snippet):
# 修改成所在節點的ip便可 --advertise-address=192.168.243.143
Then create the api-server log directory on all master nodes:
[root@m1 ~]# for i in m1 m2 m3; do ssh $i "mkdir -p /var/log/kubernetes"; done
Run the following command on each master node to start the api-server service:
$ systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver
Check the service status — active (running) means it started successfully:
$ systemctl status kube-apiserver
Check that port 6443 is being listened on:
[root@m1 ~]# netstat -lntp |grep 6443 tcp6 0 0 :::6443 :::* LISTEN 24035/kube-apiserve [root@m1 ~]#
If it failed to start, check the logs to troubleshoot:
$ journalctl -f -u kube-apiserver
keepalived only needs to be installed on two master nodes (one master, one backup); I'm installing it on m1 and m2:
$ yum install -y keepalived
On m1 and m2, create a directory to hold the keepalived configuration:
[root@m1 ~]# for i in m1 m2; do ssh $i "mkdir -p /etc/keepalived"; done
On m1 (the MASTER role), create the configuration file as follows:
[root@m1 ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.back
[root@m1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id keepalive-master
}

vrrp_script check_apiserver {
    # path of the health-check script
    script "/etc/keepalived/check-apiserver.sh"
    # check interval in seconds
    interval 3
    # reduce the priority by 2 on failure
    weight -2
}

vrrp_instance VI-kube-master {
    state MASTER          # node role
    interface ens32       # network interface name
    virtual_router_id 68
    priority 100
    dont_track_primary
    advert_int 3
    virtual_ipaddress {
        # the custom virtual IP
        192.168.243.101
    }
    track_script {
        check_apiserver
    }
}
On m2 (the BACKUP role), create the configuration file as follows:
[root@m2 ~]# mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.back
[root@m2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id keepalive-backup
}

vrrp_script check_apiserver {
    script "/etc/keepalived/check-apiserver.sh"
    interval 3
    weight -2
}

vrrp_instance VI-kube-master {
    state BACKUP
    interface ens32
    virtual_router_id 68
    priority 99
    dont_track_primary
    advert_int 3
    virtual_ipaddress {
        192.168.243.101
    }
    track_script {
        check_apiserver
    }
}
Create the keepalived health-check script on both m1 and m2:
$ vim /etc/keepalived/check-apiserver.sh   # create the check script with the following content
#!/bin/sh
errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

# check whether the local api-server is healthy
curl --silent --max-time 2 --insecure https://localhost:6443/ -o /dev/null || errorExit "Error GET https://localhost:6443/"

# if the virtual IP is bound to this machine, also check that the api-server is reachable through it
if ip addr | grep -q 192.168.243.101; then
    curl --silent --max-time 2 --insecure https://192.168.243.101:6443/ -o /dev/null || errorExit "Error GET https://192.168.243.101:6443/"
fi
Start the keepalived service on both the master and the backup:
$ systemctl enable keepalived && service keepalived start
Check the service status — active (running) means it started successfully:
$ systemctl status keepalived
Check whether the virtual IP has been bound correctly:
$ ip a |grep 192.168.243.101
Access test — getting a response back means the service is running:
[root@m1 ~]# curl --insecure https://192.168.243.101:6443/ { "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "Unauthorized", "reason": "Unauthorized", "code": 401 } [root@m1 ~]#
If it failed to start, check the logs to troubleshoot:
$ journalctl -f -u keepalived
kubectl is the command-line management tool for a Kubernetes cluster. By default it reads the kube-apiserver address, certificates, user name, and other information from the ~/.kube/config file.
kubectl talks to the apiserver over its secure HTTPS port, and the apiserver authenticates and authorizes the certificate it presents. As the cluster's management tool, kubectl needs to be granted the highest privileges, so here we create an admin certificate with full privileges. First create an admin-csr.json file with the following content:
{ "CN": "admin", "hosts": [], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "system:masters", "OU": "seven" } ] }
Use cfssl to generate the certificate and private key:
[root@m1 ~]# cfssl gencert -ca=ca.pem \ -ca-key=ca-key.pem \ -config=ca-config.json \ -profile=kubernetes admin-csr.json | cfssljson -bare admin [root@m1 ~]# ls admin*.pem admin-key.pem admin.pem [root@m1 ~]#
kubeconfig is kubectl's configuration file; it contains all the information needed to access the apiserver, such as the apiserver address, the CA certificate, and the client's own certificate.
1. Set the cluster parameters:
[root@m1 ~]# kubectl config set-cluster kubernetes \ --certificate-authority=ca.pem \ --embed-certs=true \ --server=https://192.168.243.101:6443 \ --kubeconfig=kube.config
2. Set the client authentication parameters:
[root@m1 ~]# kubectl config set-credentials admin \ --client-certificate=admin.pem \ --client-key=admin-key.pem \ --embed-certs=true \ --kubeconfig=kube.config
3. Set the context parameters:
[root@m1 ~]# kubectl config set-context kubernetes \ --cluster=kubernetes \ --user=admin \ --kubeconfig=kube.config
4. Set the default context:
[root@m1 ~]# kubectl config use-context kubernetes --kubeconfig=kube.config
5. Copy the config file to ~/.kube/config:
[root@m1 ~]# mkdir -p ~/.kube && cp kube.config ~/.kube/config
When commands such as kubectl exec, run, and logs are executed, the apiserver forwards the requests to the kubelet. Here we define an RBAC rule authorizing the apiserver to call the kubelet API.
[root@m1 ~]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes clusterrolebinding.rbac.authorization.k8s.io/kube-apiserver:kubelet-apis created [root@m1 ~]#
1. View the cluster info:
[root@m1 ~]# kubectl cluster-info Kubernetes master is running at https://192.168.243.101:6443 To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. [root@m1 ~]#
2. View resources in all namespaces of the cluster:
[root@m1 ~]# kubectl get all --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default service/kubernetes ClusterIP 10.255.0.1 <none> 443/TCP 43m [root@m1 ~]#
3. View the status of the cluster components. The scheduler and controller-manager show Unhealthy because they have not been deployed yet; this is expected at this stage:
[root@m1 ~]# kubectl get componentstatuses Warning: v1 ComponentStatus is deprecated in v1.19+ NAME STATUS MESSAGE ERROR scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused etcd-0 Healthy {"health":"true"} etcd-1 Healthy {"health":"true"} etcd-2 Healthy {"health":"true"} [root@m1 ~]#
kubectl is the command-line tool for interacting with the k8s cluster, and you can hardly operate k8s without it, so it supports a large number of commands. Fortunately, kubectl supports command completion; run kubectl completion -h to see the setup examples for each platform. Taking Linux as an example, after the following steps you can use the Tab key to complete commands:
[root@m1 ~]# yum install bash-completion -y [root@m1 ~]# source /usr/share/bash-completion/bash_completion [root@m1 ~]# source <(kubectl completion bash) [root@m1 ~]# kubectl completion bash > ~/.kube/completion.bash.inc [root@m1 ~]# printf " # Kubectl shell completion source '$HOME/.kube/completion.bash.inc' " >> $HOME/.bash_profile [root@m1 ~]# source $HOME/.bash_profile
After starting, controller-manager elects a leader through a competitive election; the other instances block. When the leader becomes unavailable, the remaining instances hold a new election to produce a new leader, ensuring service availability.
Create a controller-manager-csr.json file with the following content:
{ "CN": "system:kube-controller-manager", "key": { "algo": "rsa", "size": 2048 }, "hosts": [ "127.0.0.1", "192.168.243.143", "192.168.243.144", "192.168.243.145" ], "names": [ { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "system:kube-controller-manager", "OU": "seven" } ] }
Generate the certificate and private key:
[root@m1 ~]# cfssl gencert -ca=ca.pem \ -ca-key=ca-key.pem \ -config=ca-config.json \ -profile=kubernetes controller-manager-csr.json | cfssljson -bare controller-manager [root@m1 ~]# ls controller-manager*.pem controller-manager-key.pem controller-manager.pem [root@m1 ~]#
Distribute them to every master node:
[root@m1 ~]# for i in m1 m2 m3; do scp controller-manager*.pem $i:/etc/kubernetes/pki/; done
Create the kubeconfig:
# set the cluster parameters
[root@m1 ~]# kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://192.168.243.101:6443 \
    --kubeconfig=controller-manager.kubeconfig
# set the client authentication parameters
[root@m1 ~]# kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=controller-manager.pem \
    --client-key=controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=controller-manager.kubeconfig
# set the context parameters
[root@m1 ~]# kubectl config set-context system:kube-controller-manager \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=controller-manager.kubeconfig
Set the default context:
[root@m1 ~]# kubectl config use-context system:kube-controller-manager --kubeconfig=controller-manager.kubeconfig
Distribute the controller-manager.kubeconfig file to every master node:
[root@m1 ~]# for i in m1 m2 m3; do scp controller-manager.kubeconfig $i:/etc/kubernetes/; done
Create a kube-controller-manager.service file with the following content:
[Unit] Description=Kubernetes Controller Manager Documentation=https://github.com/GoogleCloudPlatform/kubernetes [Service] ExecStart=/opt/kubernetes/bin/kube-controller-manager \ --port=0 \ --secure-port=10252 \ --bind-address=127.0.0.1 \ --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \ --service-cluster-ip-range=10.255.0.0/16 \ --cluster-name=kubernetes \ --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \ --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \ --allocate-node-cidrs=true \ --cluster-cidr=172.23.0.0/16 \ --experimental-cluster-signing-duration=87600h \ --root-ca-file=/etc/kubernetes/pki/ca.pem \ --service-account-private-key-file=/etc/kubernetes/pki/ca-key.pem \ --leader-elect=true \ --feature-gates=RotateKubeletServerCertificate=true \ --controllers=*,bootstrapsigner,tokencleaner \ --horizontal-pod-autoscaler-use-rest-clients=true \ --horizontal-pod-autoscaler-sync-period=10s \ --tls-cert-file=/etc/kubernetes/pki/controller-manager.pem \ --tls-private-key-file=/etc/kubernetes/pki/controller-manager-key.pem \ --use-service-account-credentials=true \ --alsologtostderr=true \ --logtostderr=false \ --log-dir=/var/log/kubernetes \ --v=2 Restart=on-failure RestartSec=5 [Install] WantedBy=multi-user.target
Distribute the kube-controller-manager.service unit file to every master node:
[root@m1 ~]# for i in m1 m2 m3; do scp kube-controller-manager.service $i:/etc/systemd/system/; done
Start the kube-controller-manager service on each master node with the following command:
$ systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager
Check the service status — active (running) means it started successfully:
$ systemctl status kube-controller-manager
View the leader information:
[root@m1 ~]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml apiVersion: v1 kind: Endpoints metadata: annotations: control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"m1_ae36dc74-68d0-444d-8931-06b37513990a","leaseDurationSeconds":15,"acquireTime":"2020-09-04T15:47:14Z","renewTime":"2020-09-04T15:47:39Z","leaderTransitions":0}' creationTimestamp: "2020-09-04T15:47:15Z" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:control-plane.alpha.kubernetes.io/leader: {} manager: kube-controller-manager operation: Update time: "2020-09-04T15:47:39Z" name: kube-controller-manager namespace: kube-system resourceVersion: "1908" selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager uid: 149b117e-f7c4-4ad8-bc83-09345886678a [root@m1 ~]#
If it failed to start, check the logs to troubleshoot:
$ journalctl -f -u kube-controller-manager
After starting, the scheduler also elects a leader through a competitive election; the other instances block. When the leader becomes unavailable, the remaining instances hold a new election to produce a new leader, ensuring service availability.
Create a scheduler-csr.json file with the following content:
{ "CN": "system:kube-scheduler", "hosts": [ "127.0.0.1", "192.168.243.143", "192.168.243.144", "192.168.243.145" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "system:kube-scheduler", "OU": "seven" } ] }
Generate the certificate and private key:
[root@m1 ~]# cfssl gencert -ca=ca.pem \ -ca-key=ca-key.pem \ -config=ca-config.json \ -profile=kubernetes scheduler-csr.json | cfssljson -bare kube-scheduler [root@m1 ~]# ls kube-scheduler*.pem kube-scheduler-key.pem kube-scheduler.pem [root@m1 ~]#
Distribute them to every master node:
[root@m1 ~]# for i in m1 m2 m3; do scp kube-scheduler*.pem $i:/etc/kubernetes/pki/; done
Create the kubeconfig:
# set the cluster parameters
[root@m1 ~]# kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://192.168.243.101:6443 \
    --kubeconfig=kube-scheduler.kubeconfig
# set the client authentication parameters
[root@m1 ~]# kubectl config set-credentials system:kube-scheduler \
    --client-certificate=kube-scheduler.pem \
    --client-key=kube-scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-scheduler.kubeconfig
# set the context parameters
[root@m1 ~]# kubectl config set-context system:kube-scheduler \
    --cluster=kubernetes \
    --user=system:kube-scheduler \
    --kubeconfig=kube-scheduler.kubeconfig
Set the default context:
[root@m1 ~]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Distribute the kube-scheduler.kubeconfig file to every master node:
[root@m1 ~]# for i in m1 m2 m3; do scp kube-scheduler.kubeconfig $i:/etc/kubernetes/; done
Create a kube-scheduler.service file with the following content:
[Unit] Description=Kubernetes Scheduler Documentation=https://github.com/GoogleCloudPlatform/kubernetes [Service] ExecStart=/opt/kubernetes/bin/kube-scheduler \ --address=127.0.0.1 \ --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \ --leader-elect=true \ --alsologtostderr=true \ --logtostderr=false \ --log-dir=/var/log/kubernetes \ --v=2 Restart=on-failure RestartSec=5 [Install] WantedBy=multi-user.target
Distribute the kube-scheduler.service unit file to every master node:
[root@m1 ~]# for i in m1 m2 m3; do scp kube-scheduler.service $i:/etc/systemd/system/; done
Start the kube-scheduler service on every master node:
$ systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler
Check the service status — active (running) means it started successfully:
$ service kube-scheduler status
View the leader information:
[root@m1 ~]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml apiVersion: v1 kind: Endpoints metadata: annotations: control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"m1_f6c4da9f-85b4-47e2-919d-05b24b4aacac","leaseDurationSeconds":15,"acquireTime":"2020-09-04T16:03:57Z","renewTime":"2020-09-04T16:04:19Z","leaderTransitions":0}' creationTimestamp: "2020-09-04T16:03:57Z" managedFields: - apiVersion: v1 fieldsType: FieldsV1 fieldsV1: f:metadata: f:annotations: .: {} f:control-plane.alpha.kubernetes.io/leader: {} manager: kube-scheduler operation: Update time: "2020-09-04T16:04:19Z" name: kube-scheduler namespace: kube-system resourceVersion: "3230" selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler uid: c2f2210d-b00f-4157-b597-d3e3b4bec38b [root@m1 ~]#
If it failed to start, check the logs to troubleshoot:
$ journalctl -f -u kube-scheduler
First we need to pre-pull the images onto all nodes. Because some of these images cannot be downloaded directly without a proxy, here is a simple script that pulls them from the Aliyun registry and re-tags them:
[root@m1 ~]# vim download-images.sh #!/bin/bash docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2 docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2 k8s.gcr.io/pause-amd64:3.2 docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2 docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0 docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
Distribute the script to the other nodes:
[root@m1 ~]# for i in m2 m3 n1 n2; do scp download-images.sh $i:~; done
Then run the script on every node:
[root@m1 ~]# for i in m1 m2 m3 n1 n2; do ssh $i "sh ~/download-images.sh"; done
Once the pulls finish, every node should have the following images:
$ docker images REPOSITORY TAG IMAGE ID CREATED SIZE k8s.gcr.io/coredns 1.7.0 bfe3a36ebd25 2 months ago 45.2MB k8s.gcr.io/pause-amd64 3.2 80d28bedfe5d 6 months ago 683kB
Create a token and set it as an environment variable:
[root@m1 ~]# export BOOTSTRAP_TOKEN=$(kubeadm token create \ --description kubelet-bootstrap-token \ --groups system:bootstrappers:worker \ --kubeconfig kube.config)
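You can confirm the token was created and see its expiry with kubeadm — a quick sketch:
[root@m1 ~]# echo ${BOOTSTRAP_TOKEN}                       # the generated bootstrap token
[root@m1 ~]# kubeadm token list --kubeconfig kube.config   # list tokens known to the cluster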
Create kubelet-bootstrap.kubeconfig:
# set the cluster parameters
[root@m1 ~]# kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://192.168.243.101:6443 \
    --kubeconfig=kubelet-bootstrap.kubeconfig
# set the client authentication parameters
[root@m1 ~]# kubectl config set-credentials kubelet-bootstrap \
    --token=${BOOTSTRAP_TOKEN} \
    --kubeconfig=kubelet-bootstrap.kubeconfig
# set the context parameters
[root@m1 ~]# kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=kubelet-bootstrap.kubeconfig
Set the default context:
[root@m1 ~]# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
Create the k8s configuration directory on the worker nodes and copy the generated kubeconfig to each worker node:
[root@m1 ~]# for i in n1 n2; do ssh $i "mkdir /etc/kubernetes/"; done [root@m1 ~]# for i in n1 n2; do scp kubelet-bootstrap.kubeconfig $i:/etc/kubernetes/kubelet-bootstrap.kubeconfig; done
Create the certificate directory on the worker nodes:
[root@m1 ~]# for i in n1 n2; do ssh $i "mkdir -p /etc/kubernetes/pki"; done
Distribute the CA certificate to every worker node:
[root@m1 ~]# for i in n1 n2; do scp ca.pem $i:/etc/kubernetes/pki/; done
Create a kubelet.config.json configuration file with the following content:
{ "kind": "KubeletConfiguration", "apiVersion": "kubelet.config.k8s.io/v1beta1", "authentication": { "x509": { "clientCAFile": "/etc/kubernetes/pki/ca.pem" }, "webhook": { "enabled": true, "cacheTTL": "2m0s" }, "anonymous": { "enabled": false } }, "authorization": { "mode": "Webhook", "webhook": { "cacheAuthorizedTTL": "5m0s", "cacheUnauthorizedTTL": "30s" } }, "address": "192.168.243.146", "port": 10250, "readOnlyPort": 10255, "cgroupDriver": "cgroupfs", "hairpinMode": "promiscuous-bridge", "serializeImagePulls": false, "featureGates": { "RotateKubeletClientCertificate": true, "RotateKubeletServerCertificate": true }, "clusterDomain": "cluster.local.", "clusterDNS": ["10.255.0.2"] }
Distribute the kubelet configuration file to every worker node:
[root@m1 ~]# for i in n1 n2; do scp kubelet.config.json $i:/etc/kubernetes/; done
Note: after distribution, the address field in the configuration file must be changed to the IP of the node it resides on; see the sketch below.
Create a kubelet.service file with the following content:
[Unit] Description=Kubernetes Kubelet Documentation=https://github.com/GoogleCloudPlatform/kubernetes After=docker.service Requires=docker.service [Service] WorkingDirectory=/var/lib/kubelet ExecStart=/opt/kubernetes/bin/kubelet \ --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \ --cert-dir=/etc/kubernetes/pki \ --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \ --config=/etc/kubernetes/kubelet.config.json \ --network-plugin=cni \ --pod-infra-container-image=k8s.gcr.io/pause-amd64:3.2 \ --alsologtostderr=true \ --logtostderr=false \ --log-dir=/var/log/kubernetes \ --v=2 Restart=on-failure RestartSec=5 [Install] WantedBy=multi-user.target
Distribute the kubelet unit file to every worker node:
[root@m1 ~]# for i in n1 n2; do scp kubelet.service $i:/etc/systemd/system/; done
When kubelet starts, it checks whether the file configured by --kubeconfig exists; if it does not, kubelet uses --bootstrap-kubeconfig to send a certificate signing request (CSR) to the kube-apiserver.
When the kube-apiserver receives the CSR, it validates the token in it (the token created earlier with kubeadm). After validation it sets the request's user to system:bootstrap:<token-id> and its group to system:bootstrappers — this is Bootstrap Token Auth.
Grant the bootstrap permission, i.e. create a cluster role binding:
[root@m1 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
Now the kubelet service can be started. Run the following commands on every worker node:
$ mkdir -p /var/lib/kubelet $ systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet
Check the service status — active (running) means it started successfully:
$ systemctl status kubelet
If it failed to start, check the logs to troubleshoot:
$ journalctl -f -u kubelet
After confirming that the kubelet service started successfully, go to a master node and approve the bootstrap requests. The following command shows that the two worker nodes have each sent a CSR:
[root@m1 ~]# kubectl get csr NAME AGE SIGNERNAME REQUESTOR CONDITION node-csr-0U6dO2MrD_KhUCdofq1rab6yrLvuVMJkAXicLldzENE 27s kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:seh1w7 Pending node-csr-QMAVx75MnxCpDT5QtI6liNZNfua39vOwYeUyiqTIuPg 74s kubernetes.io/kube-apiserver-client-kubelet system:bootstrap:seh1w7 Pending [root@m1 ~]#
Then approve the two requests:
[root@m1 ~]# kubectl certificate approve node-csr-0U6dO2MrD_KhUCdofq1rab6yrLvuVMJkAXicLldzENE certificatesigningrequest.certificates.k8s.io/node-csr-0U6dO2MrD_KhUCdofq1rab6yrLvuVMJkAXicLldzENE approved [root@m1 ~]# kubectl certificate approve node-csr-QMAVx75MnxCpDT5QtI6liNZNfua39vOwYeUyiqTIuPg certificatesigningrequest.certificates.k8s.io/node-csr-QMAVx75MnxCpDT5QtI6liNZNfua39vOwYeUyiqTIuPg approved [root@m1 ~]#
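If there are many pending requests, they can also be approved in one go — a convenience sketch:
[root@m1 ~]# kubectl get csr -o name | xargs kubectl certificate approve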
Create a kube-proxy-csr.json file with the following content:
{ "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "BeiJing", "L": "BeiJing", "O": "k8s", "OU": "seven" } ] }
Generate the certificate and private key:
[root@m1 ~]# cfssl gencert -ca=ca.pem \ -ca-key=ca-key.pem \ -config=ca-config.json \ -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy [root@m1 ~]# ls kube-proxy*.pem kube-proxy-key.pem kube-proxy.pem [root@m1 ~]#
Run the following commands to create the kube-proxy.kubeconfig file:
# set the cluster parameters
[root@m1 ~]# kubectl config set-cluster kubernetes \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://192.168.243.101:6443 \
    --kubeconfig=kube-proxy.kubeconfig
# set the client authentication parameters
[root@m1 ~]# kubectl config set-credentials kube-proxy \
    --client-certificate=kube-proxy.pem \
    --client-key=kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig
# set the context parameters
[root@m1 ~]# kubectl config set-context default \
    --cluster=kubernetes \
    --user=kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig
Switch the default context:
[root@m1 ~]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Distribute the kube-proxy.kubeconfig file to each worker node:
[root@m1 ~]# for i in n1 n2; do scp kube-proxy.kubeconfig $i:/etc/kubernetes/; done
Create a kube-proxy.config.yaml file with the following content:
apiVersion: kubeproxy.config.k8s.io/v1alpha1
# change to the IP of the node the file is on
bindAddress: {worker_ip}
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 172.23.0.0/16
# change to the IP of the node the file is on
healthzBindAddress: {worker_ip}:10256
kind: KubeProxyConfiguration
# change to the IP of the node the file is on
metricsBindAddress: {worker_ip}:10249
mode: "iptables"
Distribute the kube-proxy.config.yaml file to every worker node (the {worker_ip} placeholder is replaced per node below):
[root@m1 ~]# for i in n1 n2; do scp kube-proxy.config.yaml $i:/etc/kubernetes/; done
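The {worker_ip} placeholder then needs to be replaced with each node's own IP — a sed sketch, assuming the file path used above:
[root@m1 ~]# ssh n1 "sed -i 's/{worker_ip}/192.168.243.146/g' /etc/kubernetes/kube-proxy.config.yaml"
[root@m1 ~]# ssh n2 "sed -i 's/{worker_ip}/192.168.243.147/g' /etc/kubernetes/kube-proxy.config.yaml"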
Create a kube-proxy.service file with the following content:
[Unit] Description=Kubernetes Kube-Proxy Server Documentation=https://github.com/GoogleCloudPlatform/kubernetes After=network.target [Service] WorkingDirectory=/var/lib/kube-proxy ExecStart=/opt/kubernetes/bin/kube-proxy \ --config=/etc/kubernetes/kube-proxy.config.yaml \ --alsologtostderr=true \ --logtostderr=false \ --log-dir=/var/log/kubernetes \ --v=2 Restart=on-failure RestartSec=5 LimitNOFILE=65536 [Install] WantedBy=multi-user.target
Distribute the kube-proxy.service file to all worker nodes:
[root@m1 ~]# for i in n1 n2; do scp kube-proxy.service $i:/etc/systemd/system/; done
Create the directories that the kube-proxy service depends on:
[root@m1 ~]# for i in n1 n2; do ssh $i "mkdir -p /var/lib/kube-proxy && mkdir -p /var/log/kubernetes"; done
Now the kube-proxy service can be started. Run the following command on every worker node:
$ systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy
Check the service status — active (running) means it started successfully:
$ systemctl status kube-proxy
If it failed to start, check the logs to troubleshoot:
$ journalctl -f -u kube-proxy
We'll deploy Calico using the official installation method. Create a directory (on a node where kubectl is configured):
[root@m1 ~]# mkdir -p /etc/kubernetes/addons
In that directory, create a calico-rbac-kdd.yaml file with the following content:
kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: calico-node rules: - apiGroups: [""] resources: - namespaces verbs: - get - list - watch - apiGroups: [""] resources: - pods/status verbs: - update - apiGroups: [""] resources: - pods verbs: - get - list - watch - patch - apiGroups: [""] resources: - services verbs: - get - apiGroups: [""] resources: - endpoints verbs: - get - apiGroups: [""] resources: - nodes verbs: - get - list - update - watch - apiGroups: ["extensions"] resources: - networkpolicies verbs: - get - list - watch - apiGroups: ["networking.k8s.io"] resources: - networkpolicies verbs: - watch - list - apiGroups: ["crd.projectcalico.org"] resources: - globalfelixconfigs - felixconfigurations - bgppeers - globalbgpconfigs - bgpconfigurations - ippools - globalnetworkpolicies - globalnetworksets - networkpolicies - clusterinformations - hostendpoints verbs: - create - get - list - update - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: calico-node roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: calico-node subjects: - kind: ServiceAccount name: calico-node namespace: kube-system
Then run the following commands to install Calico:
[root@m1 ~]# kubectl apply -f /etc/kubernetes/addons/calico-rbac-kdd.yaml [root@m1 ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Wait a few minutes and check the Pod status; the deployment is successful only when all Pods are Running:
[root@m1 ~]# kubectl get pod --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system calico-kube-controllers-5bc4fc6f5f-z8lhf 1/1 Running 0 105s kube-system calico-node-qflvj 1/1 Running 0 105s kube-system calico-node-x9m2n 1/1 Running 0 105s [root@m1 ~]#
Create a coredns.yaml configuration file in the /etc/kubernetes/addons/ directory:
apiVersion: v1 kind: ServiceAccount metadata: name: coredns namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: kubernetes.io/bootstrapping: rbac-defaults name: system:coredns rules: - apiGroups: - "" resources: - endpoints - services - pods - namespaces verbs: - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults name: system:coredns roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:coredns subjects: - kind: ServiceAccount name: coredns namespace: kube-system --- apiVersion: v1 kind: ConfigMap metadata: name: coredns namespace: kube-system data: Corefile: | .:53 { errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance } --- apiVersion: apps/v1 kind: Deployment metadata: name: coredns namespace: kube-system labels: k8s-app: kube-dns kubernetes.io/name: "CoreDNS" spec: # replicas: not specified here: # 1. Default is 1. # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on. strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 selector: matchLabels: k8s-app: kube-dns template: metadata: labels: k8s-app: kube-dns spec: priorityClassName: system-cluster-critical serviceAccountName: coredns tolerations: - key: "CriticalAddonsOnly" operator: "Exists" nodeSelector: kubernetes.io/os: linux affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: k8s-app operator: In values: ["kube-dns"] topologyKey: kubernetes.io/hostname containers: - name: coredns image: coredns/coredns:1.7.0 imagePullPolicy: IfNotPresent resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi args: [ "-conf", "/etc/coredns/Corefile" ] volumeMounts: - name: config-volume mountPath: /etc/coredns readOnly: true ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP securityContext: allowPrivilegeEscalation: false capabilities: add: - NET_BIND_SERVICE drop: - all readOnlyRootFilesystem: true livenessProbe: httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 readinessProbe: httpGet: path: /ready port: 8181 scheme: HTTP dnsPolicy: Default volumes: - name: config-volume configMap: name: coredns items: - key: Corefile path: Corefile --- apiVersion: v1 kind: Service metadata: name: kube-dns namespace: kube-system annotations: prometheus.io/port: "9153" prometheus.io/scrape: "true" labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: "CoreDNS" spec: selector: k8s-app: kube-dns clusterIP: 10.255.0.2 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 protocol: TCP - name: metrics port: 9153 protocol: TCP
The clusterIP of the kube-dns Service (10.255.0.2 in the manifest above) is the DNS service IP, usually the second address of the Kubernetes service CIDR; the IP planning is described at the beginning of this article. Then run the following command to deploy coredns:
[root@m1 ~]# kubectl create -f /etc/kubernetes/addons/coredns.yaml serviceaccount/coredns created clusterrole.rbac.authorization.k8s.io/system:coredns created clusterrolebinding.rbac.authorization.k8s.io/system:coredns created configmap/coredns created deployment.apps/coredns created service/kube-dns created [root@m1 ~]#
Check the Pod status:
[root@m1 ~]# kubectl get pod --all-namespaces | grep coredns kube-system coredns-7bf4bd64bd-ww4q2 1/1 Running 0 3m40s [root@m1 ~]#
Check the node status in the cluster:
[root@m1 ~]# kubectl get node NAME STATUS ROLES AGE VERSION n1 Ready <none> 3h30m v1.19.0 n2 Ready <none> 3h30m v1.19.0 [root@m1 ~]#
On the m1 node, create an nginx-ds.yml configuration file with the following content:
apiVersion: v1 kind: Service metadata: name: nginx-ds labels: app: nginx-ds spec: type: NodePort selector: app: nginx-ds ports: - name: http port: 80 targetPort: 80 --- apiVersion: apps/v1 kind: DaemonSet metadata: name: nginx-ds labels: addonmanager.kubernetes.io/mode: Reconcile spec: selector: matchLabels: app: nginx-ds template: metadata: labels: app: nginx-ds spec: containers: - name: my-nginx image: nginx:1.7.9 ports: - containerPort: 80
Then run the following command to create the nginx DaemonSet:
[root@m1 ~]# kubectl create -f nginx-ds.yml service/nginx-ds created daemonset.apps/nginx-ds created [root@m1 ~]#
After a short wait, check that the Pods are running normally:
[root@m1 ~]# kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-ds-4f48f 1/1 Running 0 63s 172.16.40.130 n1 <none> <none> nginx-ds-zsm7d 1/1 Running 0 63s 172.16.217.10 n2 <none> <none> [root@m1 ~]#
On each worker node, try to ping the Pod IPs (the master nodes cannot reach Pod IPs because Calico is not installed on them):
[root@n1 ~]# ping 172.16.40.130 PING 172.16.40.130 (172.16.40.130) 56(84) bytes of data. 64 bytes from 172.16.40.130: icmp_seq=1 ttl=64 time=0.073 ms 64 bytes from 172.16.40.130: icmp_seq=2 ttl=64 time=0.055 ms 64 bytes from 172.16.40.130: icmp_seq=3 ttl=64 time=0.052 ms 64 bytes from 172.16.40.130: icmp_seq=4 ttl=64 time=0.054 ms ^C --- 172.16.40.130 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 2999ms rtt min/avg/max/mdev = 0.052/0.058/0.073/0.011 ms [root@n1 ~]#
After confirming that the Pod IPs can be pinged, check the Service status:
[root@m1 ~]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.255.0.1 <none> 443/TCP 17h nginx-ds NodePort 10.255.4.100 <none> 80:8568/TCP 11m [root@m1 ~]#
On each worker node, try to access the nginx-ds service (the master nodes cannot access the Service IP because kube-proxy is not running on them):
[root@n1 ~]# curl 10.255.4.100:80 <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> [root@n1 ~]#
Check NodePort availability on every node. A NodePort maps the service port to a port on the host, so normally every node can reach the nginx-ds service via a worker node's IP + NodePort:
$ curl 192.168.243.146:8568 $ curl 192.168.243.147:8568
We need an Nginx Pod for this test. First define a pod-nginx.yaml configuration file with the following content:
apiVersion: v1 kind: Pod metadata: name: nginx spec: containers: - name: nginx image: nginx:1.7.9 ports: - containerPort: 80
Then create the Pod from that configuration file:
[root@m1 ~]# kubectl create -f pod-nginx.yaml pod/nginx created [root@m1 ~]#
Enter the Pod with the following command:
[root@m1 ~]# kubectl exec nginx -i -t -- /bin/bash
Check the DNS configuration; the nameserver value should be coredns's clusterIP:
root@nginx:/# cat /etc/resolv.conf nameserver 10.255.0.2 search default.svc.cluster.local. svc.cluster.local. cluster.local. localdomain options ndots:5 root@nginx:/#
Next, test whether Service names resolve correctly. As shown below, the name nginx-ds resolves to its IP, 10.255.4.100, so DNS is working:
root@nginx:/# ping nginx-ds PING nginx-ds.default.svc.cluster.local (10.255.4.100): 48 data bytes
The kubernetes service also resolves correctly:
root@nginx:/# ping kubernetes PING kubernetes.default.svc.cluster.local (10.255.0.1): 48 data bytes
Copy the kubectl configuration file from the m1 node to the other two master nodes:
[root@m1 ~]# for i in m2 m3; do ssh $i "mkdir ~/.kube/"; done [root@m1 ~]# for i in m2 m3; do scp ~/.kube/config $i:~/.kube/; done
On the m1 node, run the following command to shut it down:
[root@m1 ~]# init 0
Then check whether the virtual IP has successfully failed over to the m2 node:
[root@m2 ~]# ip a |grep 192.168.243.101 inet 192.168.243.101/32 scope global ens32 [root@m2 ~]#
Next, test whether kubectl can still interact with the cluster from m2 or m3. If it works, the cluster is highly available:
[root@m2 ~]# kubectl get nodes NAME STATUS ROLES AGE VERSION n1 Ready <none> 4h2m v1.19.0 n2 Ready <none> 4h2m v1.19.0 [root@m2 ~]#
The dashboard is a web UI provided by Kubernetes that simplifies cluster operation and management. Through it you can conveniently view all kinds of information, operate on resources such as Pods and Services, and create new resources. The dashboard repository address is as follows.
Deploying the dashboard is also fairly simple. First define a dashboard-all.yaml configuration file with the following content:
apiVersion: v1 kind: Namespace metadata: name: kubernetes-dashboard --- apiVersion: v1 kind: ServiceAccount metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboard --- kind: Service apiVersion: v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboard spec: ports: - port: 443 targetPort: 8443 nodePort: 8523 type: NodePort selector: k8s-app: kubernetes-dashboard --- apiVersion: v1 kind: Secret metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard-certs namespace: kubernetes-dashboard type: Opaque --- apiVersion: v1 kind: Secret metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard-csrf namespace: kubernetes-dashboard type: Opaque data: csrf: "" --- apiVersion: v1 kind: Secret metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard-key-holder namespace: kubernetes-dashboard type: Opaque --- kind: ConfigMap apiVersion: v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard-settings namespace: kubernetes-dashboard --- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboard rules: # Allow Dashboard to get, update and delete Dashboard exclusive secrets. - apiGroups: [""] resources: ["secrets"] resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"] verbs: ["get", "update", "delete"] # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map. - apiGroups: [""] resources: ["configmaps"] resourceNames: ["kubernetes-dashboard-settings"] verbs: ["get", "update"] # Allow Dashboard to get metrics. - apiGroups: [""] resources: ["services"] resourceNames: ["heapster", "dashboard-metrics-scraper"] verbs: ["proxy"] - apiGroups: [""] resources: ["services/proxy"] resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"] verbs: ["get"] --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard rules: # Allow Metrics Scraper to get metrics from the Metrics server - apiGroups: ["metrics.k8s.io"] resources: ["pods", "nodes"] verbs: ["get", "list", "watch"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboard roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubernetes-dashboard subjects: - kind: ServiceAccount name: kubernetes-dashboard namespace: kubernetes-dashboard --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kubernetes-dashboard roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: kubernetes-dashboard subjects: - kind: ServiceAccount name: kubernetes-dashboard namespace: kubernetes-dashboard --- kind: Deployment apiVersion: apps/v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboard spec: replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: k8s-app: kubernetes-dashboard template: metadata: labels: k8s-app: kubernetes-dashboard spec: containers: - name: kubernetes-dashboard image: kubernetesui/dashboard:v2.0.3 imagePullPolicy: Always ports: - containerPort: 8443 protocol: TCP args: - --auto-generate-certificates - --namespace=kubernetes-dashboard # Uncomment the 
following line to manually specify Kubernetes API server Host # If not specified, Dashboard will attempt to auto discover the API server and connect # to it. Uncomment only if the default does not work. # - --apiserver-host=http://my-address:port volumeMounts: - name: kubernetes-dashboard-certs mountPath: /certs # Create on-disk volume to store exec logs - mountPath: /tmp name: tmp-volume livenessProbe: httpGet: scheme: HTTPS path: / port: 8443 initialDelaySeconds: 30 timeoutSeconds: 30 securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 1001 runAsGroup: 2001 volumes: - name: kubernetes-dashboard-certs secret: secretName: kubernetes-dashboard-certs - name: tmp-volume emptyDir: {} serviceAccountName: kubernetes-dashboard nodeSelector: "kubernetes.io/os": linux # Comment the following tolerations if Dashboard must not be deployed on master tolerations: - key: node-role.kubernetes.io/master effect: NoSchedule --- kind: Service apiVersion: v1 metadata: labels: k8s-app: dashboard-metrics-scraper name: dashboard-metrics-scraper namespace: kubernetes-dashboard spec: ports: - port: 8000 targetPort: 8000 selector: k8s-app: dashboard-metrics-scraper --- kind: Deployment apiVersion: apps/v1 metadata: labels: k8s-app: dashboard-metrics-scraper name: dashboard-metrics-scraper namespace: kubernetes-dashboard spec: replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: k8s-app: dashboard-metrics-scraper template: metadata: labels: k8s-app: dashboard-metrics-scraper annotations: seccomp.security.alpha.kubernetes.io/pod: 'runtime/default' spec: containers: - name: dashboard-metrics-scraper image: kubernetesui/metrics-scraper:v1.0.4 ports: - containerPort: 8000 protocol: TCP livenessProbe: httpGet: scheme: HTTP path: / port: 8000 initialDelaySeconds: 30 timeoutSeconds: 30 volumeMounts: - mountPath: /tmp name: tmp-volume securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 1001 runAsGroup: 2001 serviceAccountName: kubernetes-dashboard nodeSelector: "kubernetes.io/os": linux # Comment the following tolerations if Dashboard must not be deployed on master tolerations: - key: node-role.kubernetes.io/master effect: NoSchedule volumes: - name: tmp-volume emptyDir: {}
Create the dashboard service:
[root@m1 ~]# kubectl create -f dashboard-all.yaml namespace/kubernetes-dashboard created serviceaccount/kubernetes-dashboard created service/kubernetes-dashboard created secret/kubernetes-dashboard-certs created secret/kubernetes-dashboard-csrf created secret/kubernetes-dashboard-key-holder created configmap/kubernetes-dashboard-settings created role.rbac.authorization.k8s.io/kubernetes-dashboard created clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created deployment.apps/kubernetes-dashboard created service/dashboard-metrics-scraper created deployment.apps/dashboard-metrics-scraper created [root@m1 ~]#
Check the deployment status:
[root@m1 ~]# kubectl get deployment kubernetes-dashboard -n kubernetes-dashboard NAME READY UP-TO-DATE AVAILABLE AGE kubernetes-dashboard 1/1 1 1 20s [root@m1 ~]#
Check the dashboard Pods:
[root@m1 ~]# kubectl --namespace kubernetes-dashboard get pods -o wide |grep dashboard dashboard-metrics-scraper-7b59f7d4df-xzxs8 1/1 Running 0 82s 172.16.217.13 n2 <none> <none> kubernetes-dashboard-5dbf55bd9d-s8rhb 1/1 Running 0 82s 172.16.40.132 n1 <none> <none> [root@m1 ~]#
Check the dashboard Service:
[root@m1 ~]# kubectl get services kubernetes-dashboard -n kubernetes-dashboard NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes-dashboard NodePort 10.255.120.138 <none> 443:8523/TCP 101s [root@m1 ~]#
On the n1 node, check whether port 8523 is being listened on:
[root@n1 ~]# netstat -ntlp |grep 8523 tcp 0 0 0.0.0.0:8523 0.0.0.0:* LISTEN 13230/kube-proxy [root@n1 ~]#
For cluster security, since version 1.7 the dashboard only allows access over HTTPS. Because we expose the service with a NodePort, it can be reached at https://NodeIP:NodePort. For example, accessing it with curl:
[root@n1 ~]# curl https://192.168.243.146:8523 -k <!-- Copyright 2017 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!doctype html> <html lang="en"> <head> <meta charset="utf-8"> <title>Kubernetes Dashboard</title> <link rel="icon" type="image/png" href="assets/images/kubernetes-logo.png" /> <meta name="viewport" content="width=device-width"> <link rel="stylesheet" href="styles.988f26601cdcb14da469.css"></head> <body> <kd-root></kd-root> <script src="runtime.ddfec48137b0abfd678a.js" defer></script><script src="polyfills-es5.d57fe778f4588e63cc5c.js" nomodule defer></script><script src="polyfills.49104fe38e0ae7955ebb.js" defer></script><script src="scripts.391d299173602e261418.js" defer></script><script src="main.b94e335c0d02b12e3a7b.js" defer></script></body> </html> [root@n1 ~]#
The -k flag tells curl not to verify the certificate when making the HTTPS request. About custom certificates: the dashboard's default certificate is auto-generated and therefore not trusted. If you have a domain name and a matching valid certificate, you can replace it and access the dashboard securely via that domain.
Add the following dashboard startup arguments in dashboard-all.yaml to specify the certificate files, which are injected through a Secret (a sketch for creating that Secret follows the two arguments below):
- --tls-cert-file=dashboard.cer
- --tls-key-file=dashboard.key
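The corresponding Secret can be created from your own certificate files — a sketch, assuming they are named dashboard.cer and dashboard.key as in the arguments above:
# remove the empty certs Secret created by dashboard-all.yaml, if present
[root@m1 ~]# kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
# recreate it from your certificate and key files
[root@m1 ~]# kubectl create secret generic kubernetes-dashboard-certs \
    --from-file=dashboard.cer --from-file=dashboard.key \
    -n kubernetes-dashboard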
The dashboard only supports token authentication by default, so if you use a KubeConfig file it must contain a token. Here we log in with a token.
First create the service account:
[root@m1 ~]# kubectl create sa dashboard-admin -n kube-system serviceaccount/dashboard-admin created [root@m1 ~]#
Create the cluster role binding:
[root@m1 ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created [root@m1 ~]#
Look up the name of the dashboard-admin Secret:
[root@m1 ~]# kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}' dashboard-admin-token-757fb [root@m1 ~]#
Print the Secret's token:
[root@m1 ~]# ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}') [root@m1 ~]# kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}' eyJhbGciOiJSUzI1NiIsImtpZCI6Ilhyci13eDR3TUtmSG9kcXJxdzVmcFdBTFBGeDhrOUY2QlZoenZhQWVZM0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNzU3ZmIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYjdlMWVhMzQtMjNhMS00MjZkLWI0NTktOGI2NmQxZWZjMWUzIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.UlKmcZoGb6OQ1jE55oShAA2dBiL0FHEcIADCfTogtBEuYLPdJtBUVQZ_aVICGI23gugIu6Y9Yt7iQYlwT6zExhUzDz0UUiBT1nSLe94CkPl64LXbeWkC3w2jee8iSqR2UfIZ4fzY6azaqhGKE1Fmm_DLjD-BS-etphOIFoCQFbabuFjvR8DVDss0z1czhHwXEOvlv5ted00t50dzv0rAZ8JN-PdOoem3aDkXDvWWmqu31QAhqK1strQspbUOF5cgcSeGwsQMfau8U5BNsm_K92IremHqOVvOinkR_EHslomDJRc3FYbV_Jw359rc-QROSTbLphRfvGNx9UANDMo8lA [root@m1 ~]#
With the token in hand, open https://192.168.243.146:8523 in a browser. Because the dashboard uses a self-signed certificate, the browser will show a warning; ignore it and click "Advanced" -> "Proceed":
Then enter the token:
After logging in successfully, the home page looks like this:
There isn't much more to say about the web UI; feel free to explore it yourself. Our journey of building a highly available k8s cluster from binaries ends here. This article is very long because it records the details of every step — which is exactly why, for convenience, you may still prefer to use kubeadm.