The previous three articles deployed a single-node Kubernetes cluster from binaries. This article builds on that single-node configuration to complete a multi-node binary Kubernetes deployment.
master01 address: 192.168.0.128/24
master02 address: 192.168.0.131/24
node01 address: 192.168.0.129/24
node02 address: 192.168.0.130/24
Load balancer (primary) nginx01: 192.168.0.132/24
Load balancer (backup) nginx02: 192.168.0.133/24
Harbor private registry: 192.168.0.134/24
First, disable the firewall and SELinux on every machine; this has been covered in the earlier articles and will not be repeated in detail.
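For completeness, a minimal sketch of those steps on CentOS 7 (run on each new machine; the permanent SELinux change takes effect after a reboot):

systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config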
Copy everything under the /opt/kubernetes/ directory to the master02 node.
[root@master01 ~]# scp -r /opt/kubernetes/ root@192.168.0.131:/opt
The authenticity of host '192.168.0.131 (192.168.0.131)' can't be established.
ECDSA key fingerprint is SHA256:Px4bb9N3Hsv7XF4EtyC5lHdA8EwXyQ2r5yeUJ+QqnrM.
ECDSA key fingerprint is MD5:cc:7c:68:15:75:7e:f5:bd:63:e3:ce:9e:df:06:06:b7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.131' (ECDSA) to the list of known hosts.
root@192.168.0.131's password:
token.csv                        100%   84    45.1KB/s   00:00
kube-apiserver                   100%  929   786.0KB/s   00:00
kube-scheduler                   100%   94    92.6KB/s   00:00
kube-controller-manager          100%  483   351.0KB/s   00:00
kube-apiserver                   100%  184MB 108.7MB/s   00:01
kubectl                          100%   55MB 117.9MB/s   00:00
kube-controller-manager          100%  155MB 127.1MB/s   00:01
kube-scheduler                   100%   55MB 118.9MB/s   00:00
ca-key.pem                       100% 1679     1.8MB/s   00:00
ca.pem                           100% 1359     1.5MB/s   00:00
server-key.pem                   100% 1675     1.8MB/s   00:00
server.pem                       100% 1643     1.6MB/s   00:00
[root@master01 ~]#
[root@master01 ~]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.0.131:/usr/lib/systemd/system/
root@192.168.0.131's password:
kube-apiserver.service           100%  282    79.7KB/s   00:00
kube-controller-manager.service  100%  317   273.0KB/s   00:00
kube-scheduler.service           100%  281   290.5KB/s   00:00
[root@master01 ~]#
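systemd usually finds brand-new unit files on the first start, but after copying them onto master02 it costs nothing to reload the daemon there (the transcripts below omit this step):

systemctl daemon-reload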
On master02 the main change needed is updating the IP addresses in the kube-apiserver configuration file.
[root@master02 ~]# cd /opt/kubernetes/cfg/
[root@master02 cfg]# ls
kube-apiserver  kube-controller-manager  kube-scheduler  token.csv
Modify kube-apiserver as shown below:
# Change the IP addresses on lines 5 and 7 to master02's address
  2 KUBE_APISERVER_OPTS="--logtostderr=true \
  3 --v=4 \
  4 --etcd-servers=https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379 \
  5 --bind-address=192.168.0.131 \
  6 --secure-port=6443 \
  7 --advertise-address=192.168.0.131 \
  8 --allow-privileged=true \
  9 --service-cluster-ip-range=10.0.0.0/24 \
 10 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
 11 --authorization-mode=RBAC,Node \
 12 --kubelet-https=true \
 13 --enable-bootstrap-token-auth \
 14 --token-auth-file=/opt/kubernetes/cfg/token.csv \
 15 --service-node-port-range=30000-50000 \
 16 --tls-cert-file=/opt/kubernetes/ssl/server.pem \
 17 --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
 18 --client-ca-file=/opt/kubernetes/ssl/ca.pem \
 19 --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
 20 --etcd-cafile=/opt/etcd/ssl/ca.pem \
 21 --etcd-certfile=/opt/etcd/ssl/server.pem \
 22 --etcd-keyfile=/opt/etcd/ssl/server-key.pem"
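If you prefer not to edit by hand, the same two changes can be made with sed (a sketch, assuming the flag layout shown above; do not do a blanket search-and-replace, because --etcd-servers must keep the original 192.168.0.128 address):

sed -ri 's/(--bind-address=)[0-9.]+/\1192.168.0.131/; s/(--advertise-address=)[0-9.]+/\1192.168.0.131/' /opt/kubernetes/cfg/kube-apiserver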
So that master02 can store and read cluster data, its apiserver must connect to the etcd cluster, and that requires the corresponding etcd certificates.
We can simply copy master01's certificates (the whole /opt/etcd directory) over for master02 to use.
Run the following on master01:
[root@master01 ~]# scp -r /opt/etcd/ root@192.168.0.131:/opt
root@192.168.0.131's password:
etcd.sh                          100% 1812   516.2KB/s   00:00
etcd                             100%  509   431.6KB/s   00:00
etcd                             100%   18MB 128.2MB/s   00:00
etcdctl                          100%   15MB 123.7MB/s   00:00
ca-key.pem                       100% 1679   278.2KB/s   00:00
ca.pem                           100% 1265   533.1KB/s   00:00
server-key.pem                   100% 1675     1.4MB/s   00:00
server.pem                       100% 1338     1.7MB/s   00:00
[root@master01 ~]#
# It is a good idea to verify on master02 that the copy succeeded
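One way to verify on master02 (a sketch using the etcdctl v2 syntax; the binary path is assumed to be /opt/etcd/bin, so adjust it if your layout differs):

ls -l /opt/etcd/ssl/
ETCDCTL_API=2 /opt/etcd/bin/etcdctl \
  --ca-file=/opt/etcd/ssl/ca.pem \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379" \
  cluster-health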
1. Start the kube-apiserver service
[root@master2 cfg]# systemctl start kube-apiserver.service
[root@master2 cfg]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
2. Start the kube-controller-manager service
[root@master02 cfg]# systemctl start kube-controller-manager.service
[root@master02 cfg]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
3. Start the kube-scheduler service
[root@master02 cfg]# systemctl start kube-scheduler.service
[root@master02 cfg]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master02 cfg]#
4. It is recommended to confirm with systemctl status that all three services are active (running):

systemctl status kube-controller-manager.service
systemctl status kube-apiserver.service
systemctl status kube-scheduler.service

Make kubectl usable directly (just set an environment variable):
# Append a line at the end of the file below to declare where the binaries live
[root@master02 cfg]# vim /etc/profile
export PATH=$PATH:/opt/kubernetes/bin/
[root@master02 cfg]# source /etc/profile
[root@master02 cfg]# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/opt/kubernetes/bin/
To verify, simply run kubectl and check that the node information is returned. (In my lab environment the command sometimes hung; switching to another terminal or rebooting the server and retrying resolved it.)
[root@master02 kubernetes]# kubectl get node
NAME            STATUS    ROLES     AGE       VERSION
192.168.0.129   Ready     <none>    18h       v1.12.3
192.168.0.130   Ready     <none>    19h       v1.12.3
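The control-plane components on master02 can be checked the same way (a quick sanity check, assuming the PATH change above):

kubectl get cs    # the scheduler, controller-manager and etcd members should all report Healthy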
Of the two nginx servers, one acts as the master and one as the backup; keepalived provides high availability for the load balancer.
nginx01:192.168.0.132/24
nginx02:192.168.0.133/24
Bring up both servers, set their hostnames, and disable the firewall and SELinux as shown below, using nginx01 as the example.
[root@localhost ~]# hostnamectl set-hostname nginx01
[root@localhost ~]# su
[root@nginx01 ~]# systemctl stop firewalld.service
[root@nginx01 ~]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@nginx01 ~]# setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# Switch to a static IP address, restart the network service, then stop NetworkManager
[root@nginx01 ~]# systemctl stop NetworkManager
[root@nginx01 ~]# systemctl disable NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
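For reference, a sketch of the static-IP settings on CentOS 7 (the gateway and DNS values are assumptions; match them to your environment):

# /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.0.132
NETMASK=255.255.255.0
GATEWAY=192.168.0.1     # assumed gateway
DNS1=192.168.0.1        # assumed DNS
# then apply it with: systemctl restart network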
[root@nginx01 ~]# vi /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx.repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
enabled=1
gpgcheck=0
[root@nginx01 ~]# yum list
[root@nginx01 ~]# yum -y install nginx
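The nginx.org package normally has the stream module built in; a quick way to confirm before editing the config (a sketch):

nginx -V 2>&1 | grep -o with-stream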
The main changes go in the stream block: a log format, a log path, and the upstream definition used for load balancing.
nginx01 is used as the example here.
[root@nginx01 ~]# vim /etc/nginx/nginx.conf
events {
    worker_connections  1024;
}

stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.0.128:6443;
        server 192.168.0.131:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
[root@nginx01 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@nginx01 ~]# systemctl start nginx
[root@nginx01 ~]# netstat -natp | grep nginx
tcp   0   0 0.0.0.0:6443   0.0.0.0:*   LISTEN   41576/nginx: master
tcp   0   0 0.0.0.0:80     0.0.0.0:*   LISTEN   41576/nginx: master
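To check that the stream proxy is actually forwarding to an apiserver, you can curl the local 6443 listener (a sketch; an unauthenticated request may be rejected with 401/403, but any TLS/JSON response from kube-apiserver confirms the pass-through works):

curl -k https://127.0.0.1:6443/version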
If the service starts without problems, the nginx load-balancing part is done; next, keepalived adds high availability.
[root@nginx01 ~]# yum install keepalived -y
# Back up the original configuration file before modifying it
[root@nginx01 ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
# Configuration on the primary (master) nginx node
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.100/24
    }
    track_script {
        check_nginx
    }
}

# Configuration on the backup nginx node
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"    # health-check script path; we write this script ourselves below
}

vrrp_instance VI_1 {
    state BACKUP                  # this node is the backup
    interface ens33
    virtual_router_id 51
    priority 90                   # lower priority than the master
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.100/24          # the virtual IP (already planned in an earlier article)
    }
    track_script {
        check_nginx
    }
}
[root@nginx01 ~]# vim /etc/nginx/check_nginx.sh
#!/bin/bash
# Count nginx processes; if none are left, stop keepalived so the VIP fails over to the backup
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
# Make the script executable
[root@nginx01 ~]# chmod +x /etc/nginx/check_nginx.sh
On nginx01:
[root@nginx01 ~]# systemctl start keepalived
[root@nginx01 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-05-05 19:16:30 CST; 3s ago
  Process: 41849 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 41850 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─41850 /usr/sbin/keepalived -D
           ├─41851 /usr/sbin/keepalived -D
           └─41852 /usr/sbin/keepalived -D
On nginx02:
[root@nginx02 ~]# systemctl start keepalived
[root@nginx02 ~]# systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-05-05 19:16:44 CST; 4s ago
  Process: 41995 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 41996 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─41996 /usr/sbin/keepalived -D
           ├─41997 /usr/sbin/keepalived -D
           └─41998 /usr/sbin/keepalived -D
At this point the VIP sits on nginx01, the keepalived master, and the backup does not hold it. To verify that keepalived failover works, we can pkill the nginx processes on nginx01 and check whether the VIP floats over to nginx02.
[root@nginx01 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ce:99:a4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.132/24 brd 192.168.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.0.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::5663:b305:ba28:b102/64 scope link
       valid_lft forever preferred_lft forever

[root@nginx02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:71:48:82 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.133/24 brd 192.168.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::915f:60c5:1086:1e04/64 scope link
       valid_lft forever preferred_lft forever
[root@nginx01 ~]# pkill nginx
# Once the nginx processes are gone, the check script also stops the keepalived service
[root@nginx01 ~]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

May 05 19:16:37 nginx01 Keepalived_vrrp[41852]: Sending gratuitous ARP on ens33 for 192.168.0.100
May 05 19:16:37 nginx01 Keepalived_vrrp[41852]: Sending gratuitous ARP on ens33 for 192.168.0.100
May 05 19:16:37 nginx01 Keepalived_vrrp[41852]: Sending gratuitous ARP on ens33 for 192.168.0.100
May 05 19:16:37 nginx01 Keepalived_vrrp[41852]: Sending gratuitous ARP on ens33 for 192.168.0.100
May 05 19:21:19 nginx01 Keepalived[41850]: Stopping
May 05 19:21:19 nginx01 systemd[1]: Stopping LVS and VRRP High Availability Monitor...
May 05 19:21:19 nginx01 Keepalived_vrrp[41852]: VRRP_Instance(VI_1) sent 0 priority
May 05 19:21:19 nginx01 Keepalived_vrrp[41852]: VRRP_Instance(VI_1) removing protocol VIPs.
May 05 19:21:19 nginx01 Keepalived_healthcheckers[41851]: Stopped
May 05 19:21:20 nginx01 systemd[1]: Stopped LVS and VRRP High Availability Monitor.
Check the VIP: it has moved to nginx02.
[root@nginx02 ~]# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:71:48:82 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.133/24 brd 192.168.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.0.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::915f:60c5:1086:1e04/64 scope link
       valid_lft forever preferred_lft forever
To recover, restart nginx and keepalived on nginx01:

[root@nginx01 ~]# systemctl start nginx
[root@nginx01 ~]# systemctl start keepalived.service
[root@nginx01 ~]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-05-05 19:24:30 CST; 3s ago
  Process: 44012 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 44013 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─44013 /usr/sbin/keepalived -D
           ├─44014 /usr/sbin/keepalived -D
           └─44015 /usr/sbin/keepalived -D
Check the VIP again: it has returned to nginx01 (the master has the higher priority and keepalived preempts by default).
[root@nginx01 ~]# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ce:99:a4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.132/24 brd 192.168.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.0.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::5663:b305:ba28:b102/64 scope link
       valid_lft forever preferred_lft forever
So far we have an nginx high-availability pair built on keepalived. The next, very important step is to edit every kubeconfig-format file in the Kubernetes configuration on both node servers. In the single-node setup they all pointed at master01; now that there are multiple masters, leaving them pointed at master01 would make the cluster unusable whenever master01 fails. We therefore point them at the VIP, so that requests go through the load balancer.
[root@node01 ~]# cd /opt/kubernetes/cfg/
[root@node01 cfg]# ls
bootstrap.kubeconfig  flanneld  kubelet  kubelet.config  kubelet.kubeconfig  kube-proxy  kube-proxy.kubeconfig
[root@node01 cfg]# vim bootstrap.kubeconfig
[root@node01 cfg]# awk 'NR==5' bootstrap.kubeconfig
    server: https://192.168.0.100:6443
[root@node01 cfg]# vim kubelet.kubeconfig
[root@node01 cfg]# awk 'NR==5' kubelet.kubeconfig
    server: https://192.168.0.100:6443
[root@node01 cfg]# vim kube-proxy.kubeconfig
[root@node01 cfg]# awk 'NR==5' kube-proxy.kubeconfig
    server: https://192.168.0.100:6443
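The same change can be scripted on each node (a sketch, assuming the files previously pointed at master01, 192.168.0.128, as in the single-node articles):

sed -i 's#server: https://192.168.0.128:6443#server: https://192.168.0.100:6443#' \
  /opt/kubernetes/cfg/bootstrap.kubeconfig \
  /opt/kubernetes/cfg/kubelet.kubeconfig \
  /opt/kubernetes/cfg/kube-proxy.kubeconfig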
After the changes, restart the services on the node:
[root@node01 cfg]# systemctl restart kubelet
[root@node01 cfg]# systemctl restart kube-proxy
[root@node01 cfg]# grep 100 *
bootstrap.kubeconfig:    server: https://192.168.0.100:6443
kubelet.kubeconfig:    server: https://192.168.0.100:6443
kube-proxy.kubeconfig:    server: https://192.168.0.100:6443
After the restart, check the Kubernetes access log on nginx01: requests are being distributed to the two apiservers in round-robin fashion.
[root@nginx01 ~]# tail /var/log/nginx/k8s-access.log
192.168.0.129 192.168.0.128:6443 - [05/May/2020:19:36:55 +0800] 200 1120
192.168.0.129 192.168.0.131:6443 - [05/May/2020:19:36:55 +0800] 200 1118
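You can also generate a few entries yourself and watch the upstream address alternate between the two masters (a sketch; run the curl loop from any machine that can reach the VIP, then check the log on nginx01; the apiserver may reject the unauthenticated requests, but they still appear in the stream log):

for i in 1 2 3 4; do curl -sk https://192.168.0.100:6443/version > /dev/null; done
tail -4 /var/log/nginx/k8s-access.log    # run this on nginx01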
Create a pod resource on master01 to test the cluster:
[root@master01 ~]# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
[root@master01 ~]# kubectl get pods
NAME                    READY     STATUS              RESTARTS   AGE
nginx-dbddb74b8-djr5h   0/1       ContainerCreating   0          14s
The pod is still in the creating state; check again after 30 seconds to a minute.
[root@master01 ~]# kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
nginx-dbddb74b8-djr5h   1/1       Running   0          75s
It is now in the Running state.
Use kubectl to view the nginx logs inside the pod we just created:
[root@master01 ~]# kubectl logs nginx-dbddb74b8-djr5h
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-r5xz9)
The Forbidden error is a permissions problem; it can be fixed by granting the anonymous user the required permissions.
[root@master01 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
[root@master01 ~]# kubectl logs nginx-dbddb74b8-djr5h
[root@master01 ~]#
# No output because the pod has not received any requests yet, so no log entries have been generated
Now check the pod's network information:
[root@master01 ~]# kubectl get pods -o wide
NAME                    READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE
nginx-dbddb74b8-djr5h   1/1       Running   0          5m52s     172.17.91.3   192.168.0.130   <none>
[root@master01 ~]#
The pod was scheduled onto the .130 server, which is our node02.
So we can access it from node02:
[root@node02 ~]# curl 172.17.91.3
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Since flannel is installed, the page at 172.17.91.3 should also be reachable from a browser on node01. In my case the flannel component had gone down and had to be set up again; after fixing it, the flannel networks on the two nodes could reach each other again (and the pod subnet changed as a result).
To fix it, re-run the flannel.sh script, reload the systemd daemon, and restart the docker service.
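A sketch of that recovery on each node (the flannel.sh path and its arguments come from the earlier single-node articles, so treat them as assumptions and adjust to your own layout):

bash /opt/flannel.sh https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379   # assumed script path and arguments
systemctl daemon-reload
systemctl restart flanneld
systemctl restart docker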
Because the old flannel subnet is no longer valid, the address assigned to the pod created from master01 is also invalid. Delete the pod; the Deployment automatically creates a new one with a fresh address.
[root@master01 ~]# kubectl get pods -o wide
NAME                    READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE
nginx-dbddb74b8-djr5h   1/1       Running   0          20s       172.17.91.3   192.168.0.130   <none>
[root@master01 ~]# kubectl delete pod nginx-dbddb74b8-djr5h
pod "nginx-dbddb74b8-djr5h" deleted
# The new pod is shown below
[root@master01 ~]# kubectl get pods -o wide
NAME                    READY     STATUS              RESTARTS   AGE       IP        NODE            NOMINATED NODE
nginx-dbddb74b8-cnhsl   0/1       ContainerCreating   0          12s       <none>    192.168.0.130   <none>
[root@master01 ~]# kubectl get pods -o wide
NAME                    READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE
nginx-dbddb74b8-cnhsl   1/1       Running   0          91s       172.17.70.2   192.168.0.130   <none>
Then open the new pod's address directly in a browser on node01. The resulting page (screenshot omitted here) is the default nginx welcome page, which shows the test succeeded.
That completes the multi-node Kubernetes cluster deployed from binaries.
My main takeaways from this deployment: when building a relatively complex architecture, back up files before changing them and verify every step as you go, so mistakes do not compound; check the environment and the required services before starting; and be ready to work through problems such as the flannel failure above, which turned out to be a blessing in disguise, since it demonstrated one of Kubernetes' strengths: automatic recovery from failures.
Thank you for reading!