I. Managing Kubernetes Resources
1. Three basic ways of managing core Kubernetes resources
- Imperative management: relies mainly on the kubectl command-line (CLI) tool
- Declarative management: relies mainly on unified resource configuration manifests
- GUI management: relies mainly on a graphical interface (web page)
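As a quick contrast between the first two approaches, here is a minimal sketch (the deployment name demo and the image tag are hypothetical placeholders, not objects used later in this document):

# Imperative: the desired action is typed directly on the command line
kubectl create deployment demo --image=nginx:1.7.9
kubectl scale deployment demo --replicas=2

# Declarative: the desired state lives in a manifest file and is applied as a whole
kubectl apply -f demo-deployment.yaml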
2. Imperative management
2.1 Managing namespace resources
##查看名稱空間 [root@kjdow7-21 ~]# kubectl get namespaces #或者 kubectl get ns NAME STATUS AGE default Active 4d18h kube-node-lease Active 4d18h kube-public Active 4d18h kube-system Active 4d18h ##查看名稱空間內的資源 [root@kjdow7-21 ~]# kubectl get all -n default NAME READY STATUS RESTARTS AGE pod/nginx-ds-ssdtm 1/1 Running 1 2d18h pod/nginx-ds-xfsk4 1/1 Running 1 2d18h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 192.168.0.1 <none> 443/TCP 4d19h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/nginx-ds 2 2 2 2 2 <none> 2d18h ##建立名稱空間 [root@kjdow7-21 ~]# kubectl create namespace app namespace/app created ##刪除名稱空間 [root@kjdow7-21 ~]# kubectl delete namespace app namespace "app" deleted
Note: namespace can be abbreviated as ns.
2.2 Managing Deployment resources
##建立deployment [root@kjdow7-21 ~]# kubectl create deployment nginx-dp --image=harbor.phc-dow.com/public/nginx:v1.7.9 -n kube-public deployment.apps/nginx-dp created [root@kjdow7-21 ~]# kubectl get all -n kube-public NAME READY STATUS RESTARTS AGE pod/nginx-dp-67f6684bb9-zptmd 1/1 Running 0 26s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/nginx-dp 1/1 1 1 27s NAME DESIRED CURRENT READY AGE replicaset.apps/nginx-dp-67f6684bb9 1 1 1 26s ###查看deployment資源 [root@kjdow7-21 ~]# kubectl get deployment -n kube-public NAME READY UP-TO-DATE AVAILABLE AGE nginx-dp 1/1 1 1 42m
Note: deployment can be abbreviated as deploy.
###長格式顯示deployment信息 [root@kjdow7-21 ~]# kubectl get deploy -n kube-public -o wide NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR nginx-dp 1/1 1 1 53m nginx harbor.phc-dow.com/public/nginx:v1.7.9 app=nginx-dp ###查看deployment詳細信息 [root@kjdow7-21 ~]# kubectl describe deployment nginx-dp -n kube-public Name: nginx-dp Namespace: kube-public CreationTimestamp: Mon, 13 Jan 2020 23:33:45 +0800 Labels: app=nginx-dp Annotations: deployment.kubernetes.io/revision: 1 Selector: app=nginx-dp Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable StrategyType: RollingUpdate MinReadySeconds: 0 RollingUpdateStrategy: 25% max unavailable, 25% max surge Pod Template: Labels: app=nginx-dp Containers: nginx: Image: harbor.phc-dow.com/public/nginx:v1.7.9 Port: <none> Host Port: <none> Environment: <none> Mounts: <none> Volumes: <none> Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable OldReplicaSets: <none> NewReplicaSet: nginx-dp-67f6684bb9 (1/1 replicas created) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 55m deployment-controller Scaled up replica set nginx-dp-67f6684bb9 to 1
2.3 Managing Pod resources
###查看pod資源 [root@kjdow7-21 ~]# kubectl get pod -n kube-public NAME READY STATUS RESTARTS AGE nginx-dp-67f6684bb9-zptmd 1/1 Running 0 104m ###進入pod資源 [root@kjdow7-21 ~]# kubectl exec -it nginx-dp-67f6684bb9-zptmd /bin/bash -n kube-public root@nginx-dp-67f6684bb9-zptmd:/# ip add 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever 9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP link/ether 02:42:ac:07:16:03 brd ff:ff:ff:ff:ff:ff inet 172.7.22.3/24 brd 172.7.22.255 scope global eth0 valid_lft forever preferred_lft forever
Note: you can also enter the container with docker exec on the host where it runs.
2.4 Deleting resources
2.4.1 Deleting a Pod
###刪除pod [root@kjdow7-21 ~]# kubectl get pod -n kube-public NAME READY STATUS RESTARTS AGE nginx-dp-67f6684bb9-zptmd 1/1 Running 0 146m [root@kjdow7-21 ~]# kubectl delete pod nginx-dp-67f6684bb9-zptmd -n kube-public pod "nginx-dp-67f6684bb9-zptmd" deleted [root@kjdow7-21 ~]# kubectl get pod -n kube-public NAME READY STATUS RESTARTS AGE nginx-dp-67f6684bb9-kn8m9 1/1 Running 0 11s
Note: deleting a Pod effectively restarts it as a new Pod. The Pod controller expects one Pod, so when one is deleted another is started, keeping the actual state in line with the desired state.
###強制刪除pod [root@kjdow7-21 ~]# kubectl get pod -n kube-public NAME READY STATUS RESTARTS AGE nginx-dp-67f6684bb9-kn8m9 1/1 Running 0 22h [root@kjdow7-21 ~]# kubectl delete pod nginx-dp-67f6684bb9-kn8m9 -n kube-public --force --grace-period=0 warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod "nginx-dp-67f6684bb9-kn8m9" force deleted [root@kjdow7-21 ~]# kubectl get pod -n kube-public NAME READY STATUS RESTARTS AGE nginx-dp-67f6684bb9-898p6 1/1 Running 0 17s
--force --grace-period=0 forces immediate deletion.
2.4.2 Deleting a Deployment
[root@kjdow7-21 ~]# kubectl delete deployment nginx-dp -n kube-public deployment.extensions "nginx-dp" deleted [root@kjdow7-21 ~]# kubectl get deployment -n kube-public No resources found. [root@kjdow7-21 ~]# kubectl get pod -n kube-public No resources found.
2.5 Service resources
2.5.1 Creating a Service
###建立service資源 [root@kjdow7-21 ~]# kubectl expose deployment nginx-dp --port=80 -n kube-public service/nginx-dp exposed ###查看資源 [root@kjdow7-21 ~]# kubectl get all -n kube-public NAME READY STATUS RESTARTS AGE pod/nginx-dp-67f6684bb9-l9jqh 1/1 Running 0 13m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/nginx-dp ClusterIP 192.168.208.157 <none> 80/TCP 7m53s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/nginx-dp 1/1 1 1 13m NAME DESIRED CURRENT READY AGE replicaset.apps/nginx-dp-67f6684bb9 1 1 1 13m ####service中的cluster-ip就是pod的固定接入點,無論podip怎麼變化,cluster-ip不會變
# kubectl get pod -n kube-public -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-dp-67f6684bb9-l9jqh 1/1 Running 0 26m 172.7.22.3 kjdow7-22.host.com <none> <none> kjdow7-22 ~]# curl 192.168.208.157 <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> ###能夠訪問 [root@kjdow7-22 ~]# ipvsadm -Ln IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn TCP 192.168.0.1:443 nq -> 10.4.7.21:6443 Masq 1 0 0 -> 10.4.7.22:6443 Masq 1 0 0 TCP 192.168.208.157:80 nq -> 172.7.22.3:80 Masq 1 0 0 [root@kjdow7-21 ~]# kubectl scale deployment nginx-dp --replicas=2 -n kube-public deployment.extensions/nginx-dp scaled [root@kjdow7-22 ~]# ipvsadm -Ln IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn TCP 192.168.0.1:443 nq -> 10.4.7.21:6443 Masq 1 0 0 -> 10.4.7.22:6443 Masq 1 0 0 TCP 192.168.208.157:80 nq -> 172.7.21.3:80 Masq 1 0 0 -> 172.7.22.3:80 Masq 1 0 0
###嘗試刪除一個pod,查看下情況 [root@kjdow7-21 ~]# kubectl get pod -n kube-public -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-dp-67f6684bb9-l9jqh 1/1 Running 0 39m 172.7.22.3 kjdow7-22.host.com <none> <none> nginx-dp-67f6684bb9-nn9pq 1/1 Running 0 3m41s 172.7.21.3 kjdow7-21.host.com <none> <none> [root@kjdow7-21 ~]# kubectl delete pod nginx-dp-67f6684bb9-l9jqh -n kube-public pod "nginx-dp-67f6684bb9-l9jqh" deleted [root@kjdow7-21 ~]# kubectl get pod -n kube-public -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-dp-67f6684bb9-lmfvl 1/1 Running 0 9s 172.7.22.4 kjdow7-22.host.com <none> <none> nginx-dp-67f6684bb9-nn9pq 1/1 Running 0 3m57s 172.7.21.3 kjdow7-21.host.com <none> <none> [root@kjdow7-21 ~]# ipvsadm -Ln IP Virtual Server version 1.2.1 (size=4096) Prot LocalAddress:Port Scheduler Flags -> RemoteAddress:Port Forward Weight ActiveConn InActConn TCP 192.168.0.1:443 nq -> 10.4.7.21:6443 Masq 1 0 0 -> 10.4.7.22:6443 Masq 1 0 0 TCP 192.168.208.157:80 nq -> 172.7.21.3:80 Masq 1 0 0 -> 172.7.22.4:80 Masq 1 0 0
No matter how the backend Pod IPs change, the Service's cluster IP in front of them stays the same. The cluster IP proxies traffic to the two backend Pod IPs.
A Service abstracts a relatively stable access point, so that a service can always be reached through one fixed entry.
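One way to see this in practice is to compare the Service with its Endpoints object: the Pod IPs behind the Service change as Pods are recreated, while the cluster IP stays fixed. A quick check, using the resources created above:

kubectl get svc nginx-dp -n kube-public         # CLUSTER-IP stays the same
kubectl get endpoints nginx-dp -n kube-public   # the Pod IPs listed here change as Pods come and go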
2.5.2 Inspecting a Service
[root@kjdow7-21 ~]# kubectl describe svc nginx-dp -n kube-public Name: nginx-dp Namespace: kube-public Labels: app=nginx-dp Annotations: <none> Selector: app=nginx-dp Type: ClusterIP IP: 192.168.208.157 Port: <unset> 80/TCP TargetPort: 80/TCP Endpoints: 172.7.21.3:80,172.7.22.4:80 Session Affinity: None Events: <none>
You can see that the selector matches the Pods' labels: a Service selects its Pods by label.
Note: service can be abbreviated as svc.
2.6 Summary
- The only entry point for managing cluster resources in Kubernetes is calling the apiserver's interfaces through one of the methods above.
- kubectl is the official CLI tool. It communicates with the apiserver, organizing and translating the commands typed on the command line into requests the apiserver understands, and is an effective way to manage all kinds of Kubernetes resources.
- kubectl command references:
  - kubectl --help
  - the Kubernetes Chinese community
- Imperative resource management covers more than 90% of day-to-day needs, but its drawbacks are obvious:
  - commands are long, complex, and hard to remember
  - some scenarios simply cannot be handled this way
  - creating, deleting, and querying resources is easy, but modifying them is painful
3. Declarative resource management
Declarative resource management relies on resource configuration manifests (yaml/json).
3.1 How to view a resource's manifest
[root@kjdow7-21 ~]# kubectl get svc nginx-dp -o yaml -n kube-public apiVersion: v1 kind: Service metadata: creationTimestamp: "2020-01-14T17:15:32Z" labels: app: nginx-dp name: nginx-dp namespace: kube-public resourceVersion: "649706" selfLink: /api/v1/namespaces/kube-public/services/nginx-dp uid: 5159828f-6d5d-4e43-83aa-51fd16b652d0 spec: clusterIP: 192.168.208.157 ports: - port: 80 protocol: TCP targetPort: 80 selector: app: nginx-dp sessionAffinity: None type: ClusterIP status: loadBalancer: {}
3.2 Explaining resource manifests
[root@kjdow7-21 ~]# kubectl explain service
This works like --help: it explains what the given resource is and how its fields are used.
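kubectl explain can also be drilled down field by field, which is handy when writing manifests. For example:

kubectl explain service.spec          # document the spec block of a Service
kubectl explain service.spec.ports    # document the ports list inside spec
kubectl explain deployment.spec.template.spec.containers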
3.3 Creating resources from a manifest
[root@kjdow7-21 nginx-ds]# vim nginx-ds-svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ds
  name: nginx-ds
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-ds
  sessionAffinity: None
  type: ClusterIP
[root@kjdow7-21 nginx-ds]# ls
nginx-ds-svc.yaml
[root@kjdow7-21 nginx-ds]# kubectl create -f nginx-ds-svc.yaml
service/nginx-ds created
3.4 Modifying a resource manifest
3.4.1 Offline modification
[root@kjdow7-21 nginx-ds]# kubectl apply -f nginx-ds-svc.yaml service/nginx-ds created [root@kjdow7-21 nginx-ds]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 192.168.0.1 <none> 443/TCP 6d nginx-ds ClusterIP 192.168.52.127 <none> 8080/TCP 7s [root@kjdow7-21 nginx-ds]# vim nginx-ds-svc.yaml ###修改文件的端口爲8081,並應用新的配置清單 [root@kjdow7-21 nginx-ds]# kubectl apply -f nginx-ds-svc.yaml service/nginx-ds configured [root@kjdow7-21 nginx-ds]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 192.168.0.1 <none> 443/TCP 6d nginx-ds ClusterIP 192.168.52.127 <none> 8081/TCP 2m46s
Note: kubectl apply tracks changes through an annotation that plain kubectl create does not record. Objects originally created with apply can be updated with apply directly; for objects created with create (without --save-config), applying changes may warn or behave unexpectedly.
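The difference comes from the kubectl.kubernetes.io/last-applied-configuration annotation that kubectl apply records on the object. A sketch of checking it, and of the --save-config flag that makes create-created objects friendlier to later apply calls (exact behavior varies slightly between kubectl versions):

# show the configuration recorded by the last kubectl apply (absent for plain create-created objects)
kubectl apply view-last-applied svc nginx-ds

# creating with --save-config records the annotation, so later kubectl apply can merge changes cleanly
kubectl create -f nginx-ds-svc.yaml --save-config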
3.4.2 Online modification
[root@kjdow7-21 nginx-ds]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 192.168.0.1 <none> 443/TCP 6d nginx-ds ClusterIP 192.168.8.104 <none> 8080/TCP 4m18s [root@kjdow7-21 nginx-ds]# kubectl edit svc nginx-ds service/nginx-ds edited ####經過edit至關於vim打開清單文件,修改後保存退出,當即生效,可是建立時使用的yaml文件並無被修改 [root@kjdow7-21 nginx-ds]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 192.168.0.1 <none> 443/TCP 6d nginx-ds ClusterIP 192.168.8.104 <none> 8082/TCP 4m42s
3.5 Deleting resources
3.5.1 Imperative deletion
[root@kjdow7-21 nginx-ds]# kubectl delete svc nginx-ds
service "nginx-ds" deleted
3.5.2 Declarative deletion
[root@kjdow7-21 nginx-ds]# kubectl delete -f nginx-ds-svc.yaml
service "nginx-ds" deleted
3.6 Summary
- Declarative resource management relies on unified resource manifest files to manage resources.
- Resources are defined ahead of time in manifests, which are then applied to the Kubernetes cluster with imperative commands.
- Syntax: kubectl create/apply/delete -f /path/to/yaml
- How to learn resource manifests:
  - read manifests written by others (especially the official ones) until you can understand them
  - modify existing manifests for your own use
  - when something is unclear, look it up with kubectl explain ...
  - as a beginner, do not try to write manifests from scratch off the top of your head
II. Core Kubernetes Add-ons
1. Installing and deploying flanneld
Kubernetes defines a network model but leaves its implementation to network plugins. The main job of a CNI network plugin is to enable Pods to communicate across hosts.
Common CNI network plugins:
- Flannel
- Calico
- Canal
- Contiv
- OpenContrail
- NSX-T
- Kube-router
1.1 Downloading and installing flannel
Deploy on kjdow7-21 and kjdow7-22.
### Download and extract flannel
[root@kjdow7-21 ~]# cd /opt/src
[root@kjdow7-22 src]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@kjdow7-21 src]# mkdir /opt/flannel-v0.11.0
[root@kjdow7-21 src]# tar xf flannel-v0.11.0-linux-amd64.tar.gz -C /opt/flannel-v0.11.0/
[root@kjdow7-21 src]# ln -s /opt/flannel-v0.11.0 /opt/flannel
1.2 Copying certificates
[root@kjdow7-21 src]# mkdir /opt/flannel/certs
[root@kjdow7-21 src]# cd /opt/flannel/certs
[root@kjdow7-21 certs]# scp kjdow7-200:/opt/certs/ca.pem .
[root@kjdow7-21 certs]# scp kjdow7-200:/opt/certs/client.pem .
[root@kjdow7-21 certs]# scp kjdow7-200:/opt/certs/client-key.pem .
By default flannel uses etcd for some of its storage and configuration, so it needs the client certificate to connect to etcd.
1.3 Writing the flannel configuration
[root@kjdow7-21 flannel]# vim /opt/flannel/subnet.env
FLANNEL_NETWORK=172.7.0.0/16
FLANNEL_SUBNET=172.7.21.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false
The first line is the overall network that flannel manages; the second line is this host's own subnet. Adjust it accordingly when configuring the other nodes, as in the sketch below.
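For reference, a sketch of the corresponding file on kjdow7-22, assuming its Pod subnet is 172.7.22.0/24 (as the route tables later in this section show):

[root@kjdow7-22 flannel]# vim /opt/flannel/subnet.env
FLANNEL_NETWORK=172.7.0.0/16
FLANNEL_SUBNET=172.7.22.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false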
[root@kjdow7-21 flannel]# vim /opt/flannel/flanneld.sh
#!/bin/sh
./flanneld \
  --public-ip=10.4.7.21 \
  --etcd-endpoints=https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
  --etcd-keyfile=./certs/client-key.pem \
  --etcd-certfile=./certs/client.pem \
  --etcd-cafile=./certs/ca.pem \
  --iface=eth1 \
  --subnet-file=./subnet.env \
  --healthz-port=2401
[root@kjdow7-21 flannel]# chmod +x flanneld.sh
Note: --public-ip is this host's IP and --iface is the interface used for inter-node traffic. Adjust both to the actual environment, so the configuration differs slightly from node to node.
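For example, on kjdow7-22 the same script would differ only in --public-ip (a sketch, assuming the rest of the environment matches kjdow7-21):

[root@kjdow7-22 flannel]# vim /opt/flannel/flanneld.sh
#!/bin/sh
./flanneld \
  --public-ip=10.4.7.22 \
  --etcd-endpoints=https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
  --etcd-keyfile=./certs/client-key.pem \
  --etcd-certfile=./certs/client.pem \
  --etcd-cafile=./certs/ca.pem \
  --iface=eth1 \
  --subnet-file=./subnet.env \
  --healthz-port=2401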
1.4 Configuring etcd
Because flannel stores some of its configuration in etcd, the flannel-related configuration has to be created in etcd first.
[root@kjdow7-21 flannel]# cd /opt/etcd ###查看etcd集羣狀態 [root@kjdow7-21 etcd]# ./etcdctl member list 988139385f78284: name=etcd-server-7-22 peerURLs=https://10.4.7.22:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.22:2379 isLeader=false 5a0ef2a004fc4349: name=etcd-server-7-21 peerURLs=https://10.4.7.21:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.21:2379 isLeader=false f4a0cb0a765574a8: name=etcd-server-7-12 peerURLs=https://10.4.7.12:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.12:2379 isLeader=true
Configure etcd and add the host-gw backend:
[root@kjdow7-21 etcd]# ./etcdctl set /coreos.com/network/config '{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}'
{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}
### View the stored configuration
[root@kjdow7-21 etcd]# ./etcdctl get /coreos.com/network/config
{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}
Note: this step only needs to be done once; there is no need to repeat it on the other nodes.
1.5 Creating the supervisor configuration
[root@kjdow7-21 etcd]# vi /etc/supervisord.d/flannel.ini [program:flanneld-7-21] command=/opt/flannel/flanneld.sh ; the program (relative uses PATH, can take args) numprocs=1 ; number of processes copies to start (def 1) directory=/opt/flannel ; directory to cwd to before exec (def no cwd) autostart=true ; start at supervisord start (default: true) autorestart=true ; retstart at unexpected quit (default: true) startsecs=30 ; number of secs prog must stay running (def. 1) startretries=3 ; max # of serial start failures (default 3) exitcodes=0,2 ; 'expected' exit codes for process (default 0,2) stopsignal=QUIT ; signal used to kill process (default TERM) stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10) user=root ; setuid to this UNIX account to run the program redirect_stderr=true ; redirect proc stderr to stdout (default false) stdout_logfile=/data/logs/flanneld/flanneld.stdout.log ; stderr log path, NONE for none; default AUTO stdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB) stdout_logfile_backups=4 ; # of stdout logfile backups (default 10) stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0) stdout_events_enabled=false ; emit events on stdout writes (default false) ####建立日誌目錄 [root@kjdow7-21 flannel]# mkdir -p /data/logs/flanneld
###啓動 [root@kjdow7-21 flannel]# supervisorctl update flanneld-7-21: added process group [root@kjdow7-21 flannel]# supervisorctl status etcd-server-7-21 RUNNING pid 5885, uptime 4 days, 13:33:06 flanneld-7-21 RUNNING pid 10430, uptime 0:00:40 kube-apiserver-7-21 RUNNING pid 5886, uptime 4 days, 13:33:06 kube-controller-manager-7-21 RUNNING pid 5887, uptime 4 days, 13:33:06 kube-kubelet-7-21 RUNNING pid 5881, uptime 4 days, 13:33:06 kube-proxy-7-21 RUNNING pid 5890, uptime 4 days, 13:33:06 kube-scheduler-7-21 RUNNING pid 5894, uptime 4 days, 13:33:06
Enable IP forwarding
flannel relies on the kernel's IP forwarding (net.ipv4.ip_forward); it must be enabled on every server that runs flannel.
~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
~]# sysctl -p
1.6 Verifying flannel
[root@kjdow7-21 ~]# kubectl get pod -n kube-public -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-dp-67f6684bb9-lmfvl 1/1 Running 0 18h 172.7.22.4 kjdow7-22.host.com <none> <none> nginx-dp-67f6684bb9-nn9pq 1/1 Running 0 18h 172.7.21.3 kjdow7-21.host.com <none> <none> ####在一臺運算節點上ping,發現都能ping通了 [root@kjdow7-21 ~]# ping 172.7.22.4 PING 172.7.22.4 (172.7.22.4) 56(84) bytes of data. 64 bytes from 172.7.22.4: icmp_seq=1 ttl=63 time=0.528 ms 64 bytes from 172.7.22.4: icmp_seq=2 ttl=63 time=0.249 ms ^C --- 172.7.22.4 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1000ms rtt min/avg/max/mdev = 0.249/0.388/0.528/0.140 ms [root@kjdow7-21 ~]# ping 172.7.21.3 PING 172.7.21.3 (172.7.21.3) 56(84) bytes of data. 64 bytes from 172.7.21.3: icmp_seq=1 ttl=64 time=0.151 ms 64 bytes from 172.7.21.3: icmp_seq=2 ttl=64 time=0.101 ms ^C --- 172.7.21.3 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1000ms rtt min/avg/max/mdev = 0.101/0.126/0.151/0.025 ms
Note: the IPs show that these two Pods are deployed on two different hosts. Previously a Pod on another host could not be reached (curl) at all; now it can.
1.7 How flannel works and its three network models
1.7.1 Flannel's host-gw model
###在21上看路由表 [root@kjdow7-21 ~]# route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 10.4.7.11 0.0.0.0 UG 100 0 0 eth1 10.4.7.0 0.0.0.0 255.255.255.0 U 100 0 0 eth1 172.7.21.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0 172.7.22.0 10.4.7.22 255.255.255.0 UG 0 0 0 eth1 ###在22上看路由表 [root@kjdow7-22 ~]# route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 10.4.7.11 0.0.0.0 UG 100 0 0 eth1 10.4.7.0 0.0.0.0 255.255.255.0 U 100 0 0 eth1 172.7.21.0 10.4.7.21 255.255.255.0 UG 0 0 0 eth1 172.7.22.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
You can see that in host-gw mode flannel effectively maintains a static routing table on each host, and Pods communicate across hosts via these static routes.
Note: this model only works when all of the worker nodes are in the same network segment and point to the same gateway.
'{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}'
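For illustration only, the routes that flannel programs in host-gw mode are equivalent to adding static routes by hand (flannel maintains them automatically; do not add them manually):

# on kjdow7-21: reach kjdow7-22's Pod subnet via that host's eth1 address
ip route add 172.7.22.0/24 via 10.4.7.22 dev eth1
# and the mirror-image route on kjdow7-22
ip route add 172.7.21.0/24 via 10.4.7.21 dev eth1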
1.7.2 Flannel's VxLAN model
Note: if the worker nodes are not in the same network segment, use the VxLAN model. In essence, the packet is wrapped with a flannel (VXLAN) header using the host interface's IP and then sent out.
'{"Network": "172.7.0.0/16", "Backend": {"Type": "VxLAN"}}'
When started in VxLAN mode, a virtual network device named flannel.1 is added on each host; on 7-21, for example, this device shows up in the host's network configuration. A sketch of switching the backend and checking the device follows.
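A sketch of switching this cluster to the VxLAN backend and verifying it (program names follow the supervisor configuration above; adapt them to your own nodes):

# update the backend stored in etcd (only needs to be done once)
[root@kjdow7-21 etcd]# ./etcdctl set /coreos.com/network/config '{"Network": "172.7.0.0/16", "Backend": {"Type": "VxLAN"}}'
# restart flanneld on every node so it picks up the new backend
[root@kjdow7-21 etcd]# supervisorctl restart flanneld-7-21
# the VxLAN device should now exist on the host
[root@kjdow7-21 etcd]# ip -d link show flannel.1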
1.7.3 Direct-routing model
'{"Network": "172.7.0.0/16", "Backend": {"Type": "VxLAN","Directrouting": true}}'
This is the VxLAN model, but when flannel detects that the nodes are in the same network segment it falls back to host-gw style routing between them.
1.8 Optimizing flannel's SNAT rules
Problem: a Deployment runs two Pods as web servers, one on each node. Log into one Pod and curl the other Pod's web page, then watch that Pod's access log: the recorded source IP is the host's IP rather than the client Pod's IP.
How do we fix this?
Perform the following on kjdow7-21 and kjdow7-22.
[root@kjdow7-21 ~]# yum install iptables-services -y
[root@kjdow7-21 ~]# systemctl start iptables
[root@kjdow7-21 ~]# systemctl enable iptables

[root@kjdow7-21 ~]# iptables -t nat -D POSTROUTING -s 172.7.21.0/24 ! -o docker0 -j MASQUERADE
[root@kjdow7-21 ~]# iptables -t nat -I POSTROUTING -s 172.7.21.0/24 ! -d 172.7.0.0/16 ! -o docker0 -j MASQUERADE
[root@kjdow7-21 ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
#### On host 10.4.7.21: only packets sourced from the Docker subnet 172.7.21.0/24 whose destination is NOT in 172.7.0.0/16 and that do not leave via the docker0 bridge are SNATed

[root@kjdow7-21 ~]# iptables-save | grep -i reject
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
[root@kjdow7-21 ~]# iptables -t filter -D INPUT -j REJECT --reject-with icmp-host-prohibited
[root@kjdow7-21 ~]# iptables -t filter -D FORWARD -j REJECT --reject-with icmp-host-prohibited
[root@kjdow7-21 ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
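The same optimization on kjdow7-22, sketched with that node's Pod subnet 172.7.22.0/24:

[root@kjdow7-22 ~]# yum install iptables-services -y
[root@kjdow7-22 ~]# systemctl start iptables && systemctl enable iptables
[root@kjdow7-22 ~]# iptables -t nat -D POSTROUTING -s 172.7.22.0/24 ! -o docker0 -j MASQUERADE
[root@kjdow7-22 ~]# iptables -t nat -I POSTROUTING -s 172.7.22.0/24 ! -d 172.7.0.0/16 ! -o docker0 -j MASQUERADE
[root@kjdow7-22 ~]# iptables -t filter -D INPUT -j REJECT --reject-with icmp-host-prohibited
[root@kjdow7-22 ~]# iptables -t filter -D FORWARD -j REJECT --reject-with icmp-host-prohibited
[root@kjdow7-22 ~]# service iptables save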
Now access one Pod from the other and watch the target Pod's access log: the source address shown is the real Pod address.
In real production environments, tune these rules according to the actual situation.
2. Installing and deploying CoreDNS
- Simply put, service discovery is the process by which services (applications) locate each other.
- Service discovery is not unique to the cloud era; it was also used in the traditional monolithic era. It becomes even more necessary when:
  - services (applications) are highly dynamic
  - services (applications) are updated and released frequently
  - services (applications) support auto-scaling
- In a Kubernetes cluster, Pod IPs change constantly. How do we "meet constant change with something constant"?
  - the Service resource was abstracted out: it associates a group of Pods via label selectors
  - the cluster network was abstracted out: a relatively fixed "cluster IP" gives the service a fixed access point
- Then how do we automatically associate a Service's "name" with its "cluster IP", so that services are discovered automatically by the cluster?
  - consider the traditional DNS model: kjdow7-21.host.com -----> 10.4.7.21
  - can we build the same model inside Kubernetes: nginx-ds ------> 192.168.0.5?
- The service discovery mechanism in Kubernetes is DNS.
- Add-ons (software) that implement DNS in Kubernetes:
  - kube-dns ----------> kubernetes v1.2 through v1.10
  - coredns ------------> kubernetes v1.11 to the present
- Note:
  - DNS in Kubernetes is not a cure-all; it should only be responsible for automatically maintaining the "service name" -------> "cluster IP" mapping
2.1 Deploying an internal HTTP service for Kubernetes manifests
On kjdow7-200, configure an nginx virtual host to provide a unified access point for the cluster's resource manifests.
- Configure nginx
[root@kjdow7-200 conf.d]# vim k8s-yaml.phc-dow.com.conf
server {
    listen       80;
    server_name  k8s-yaml.phc-dow.com;

    location / {
        autoindex on;
        default_type text/plain;
        root /data/k8s-yaml;
    }
}
[root@kjdow7-200 conf.d]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@kjdow7-200 conf.d]# systemctl reload nginx
[root@kjdow7-200 conf.d]# mkdir /data/k8s-yaml -p
- Configure internal DNS resolution
On kjdow7-11:
[root@kjdow7-11 ~]# cd /var/named/
[root@kjdow7-11 named]# vim phc-dow.com.zone
2020010203 ; serial                    # increment the serial number
k8s-yaml           A    10.4.7.200     # add this record
[root@kjdow7-11 named]# systemctl restart named
2.2 Downloading the CoreDNS image
On kjdow7-200:
###準備coredns的鏡像,並推送到harbor裏面 [root@kjdow7-200 ~]# docker pull coredns/coredns:1.6.1 1.6.1: Pulling from coredns/coredns c6568d217a00: Pull complete d7ef34146932: Pull complete Digest: sha256:9ae3b6fcac4ee821362277de6bd8fd2236fa7d3e19af2ef0406d80b595620a7a Status: Downloaded newer image for coredns/coredns:1.6.1 docker.io/coredns/coredns:1.6.1 [root@kjdow7-200 ~]# docker tag c0f6e815079e harbor.phc-dow.com/public/coredns:v1.6.1 [root@kjdow7-200 ~]# docker push harbor.phc-dow.com/public/coredns:v1.6.1
2.3 Preparing the resource manifests
On kjdow7-200:
[root@kjdow7-200 k8s-yaml]# mkdir coredns && cd /data/k8s-yaml/coredns [root@kjdow7-200 coredns]# vim /data/k8s-yaml/coredns/rbac.yaml apiVersion: v1 kind: ServiceAccount metadata: name: coredns namespace: kube-system labels: kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: kubernetes.io/bootstrapping: rbac-defaults addonmanager.kubernetes.io/mode: Reconcile name: system:coredns rules: - apiGroups: - "" resources: - endpoints - services - pods - namespaces verbs: - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults addonmanager.kubernetes.io/mode: EnsureExists name: system:coredns roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:coredns subjects: - kind: ServiceAccount name: coredns namespace: kube-system [root@kjdow7-200 coredns]# vim /data/k8s-yaml/coredns/cm.yaml apiVersion: v1 kind: ConfigMap metadata: name: coredns namespace: kube-system data: Corefile: | .:53 { errors log health ready kubernetes cluster.local 192.168.0.0/16 forward . 10.4.7.11 cache 30 loop reload loadbalance } [root@kjdow7-200 coredns]# vim /data/k8s-yaml/coredns/dp.yaml apiVersion: apps/v1 kind: Deployment metadata: name: coredns namespace: kube-system labels: k8s-app: coredns kubernetes.io/name: "CoreDNS" spec: replicas: 1 selector: matchLabels: k8s-app: coredns template: metadata: labels: k8s-app: coredns spec: priorityClassName: system-cluster-critical serviceAccountName: coredns containers: - name: coredns image: harbor.phc-dow.com/public/coredns:v1.6.1 args: - -conf - /etc/coredns/Corefile volumeMounts: - name: config-volume mountPath: /etc/coredns ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP - containerPort: 9153 name: metrics protocol: TCP livenessProbe: httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 dnsPolicy: Default volumes: - name: config-volume configMap: name: coredns items: - key: Corefile path: Corefile [root@kjdow7-200 coredns]# vim /data/k8s-yaml/coredns/svc.yaml apiVersion: v1 kind: Service metadata: name: coredns namespace: kube-system labels: k8s-app: coredns kubernetes.io/cluster-service: "true" kubernetes.io/name: "CoreDNS" spec: selector: k8s-app: coredns clusterIP: 192.168.0.2 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 - name: metrics port: 9153 protocol: TCP [root@kjdow7-200 coredns]# ll total 16 -rw-r--r-- 1 root root 319 Jan 16 20:53 cm.yaml -rw-r--r-- 1 root root 1299 Jan 16 20:57 dp.yaml -rw-r--r-- 1 root root 954 Jan 16 20:51 rbac.yaml -rw-r--r-- 1 root root 387 Jan 16 20:58 svc.yaml
2.4 Creating the resources declaratively
[root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/coredns/rbac.yaml serviceaccount/coredns created clusterrole.rbac.authorization.k8s.io/system:coredns created clusterrolebinding.rbac.authorization.k8s.io/system:coredns created [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/coredns/cm.yaml configmap/coredns created [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/coredns/dp.yaml deployment.apps/coredns created [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/coredns/svc.yaml service/coredns created [root@kjdow7-21 ~]# kubectl get all -n kube-system NAME READY STATUS RESTARTS AGE pod/coredns-7dd986bcdc-2w8nw 1/1 Running 0 119s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/coredns ClusterIP 192.168.0.2 <none> 53/UDP,53/TCP,9153/TCP 112s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/coredns 1/1 1 1 119s NAME DESIRED CURRENT READY AGE replicaset.apps/coredns-7dd986bcdc 1 1 1 119s [root@kjdow7-21 ~]# dig -t A www.baidu.com @192.168.0.2 +short www.a.shifen.com. 180.101.49.12 180.101.49.11 [root@kjdow7-21 ~]# dig -t A kjdow7-21.host.com @192.168.0.2 +short 10.4.7.21
Note: the kubelet startup script already specifies 192.168.0.2 as the cluster DNS IP, which is why these names resolve here.
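For reference, the cluster DNS is wired into the kubelet with flags roughly like the following (a sketch only; the actual startup script comes from the earlier installation chapter and is not shown in this section):

./kubelet \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  ...   # remaining arguments omitted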
2.5 Verification
[root@kjdow7-21 ~]# kubectl get all -n kube-public NAME READY STATUS RESTARTS AGE pod/nginx-dp-5595d547b4-9hc8h 1/1 Running 0 40h pod/nginx-dp-5595d547b4-zdwxl 1/1 Running 0 40h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/nginx-dp ClusterIP 192.168.208.157 <none> 80/TCP 2d16h NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/nginx-dp 2/2 2 2 2d16h NAME DESIRED CURRENT READY AGE replicaset.apps/nginx-dp-5595d547b4 2 2 2 40h replicaset.apps/nginx-dp-67f6684bb9 0 0 0 2d16h [root@kjdow7-21 ~]# dig -t A nginx-dp.kube-public.svc.cluster.local. @192.168.0.2 +short 192.168.208.157 [root@kjdow7-21 ~]# kubectl exec -it nginx-dp-5595d547b4-9hc8h /bin/bash -n kube-public root@nginx-dp-5595d547b4-9hc8h:/# cat /etc/resolv.conf nameserver 192.168.0.2 search kube-public.svc.cluster.local svc.cluster.local cluster.local host.com options ndots:5 ###進入pod能夠看到容器的dns已經自動設置爲192.168.0.2
3. Exposing Kubernetes services with Ingress
- Kubernetes DNS enables services to be discovered automatically inside the cluster, but how do we make services usable and reachable from outside the cluster?
  - Use a NodePort-type Service
    - Note: this cannot be used with kube-proxy's ipvs model, only with the iptables model
  - Use Ingress resources
    - Ingress can only schedule and expose layer-7 applications, specifically the HTTP and HTTPS protocols
    - Ingress is one of the standard (and core) resource types of the Kubernetes API; it is simply a set of rules, based on domain names and URL paths, that forward user requests to a specified Service
    - It forwards request traffic from outside the cluster to the inside, thereby exposing services
    - An Ingress controller is the component that listens on a socket on behalf of Ingress resources and then routes and schedules traffic according to the Ingress rule-matching mechanism
    - Frankly, there is nothing mysterious about Ingress: it is essentially a simplified nginx plus a bit of Go code
- Commonly used Ingress controller implementations:
  - Ingress-nginx
  - HAProxy
  - Traefik
3.1 Deploying Traefik (the Ingress controller): preparing the Traefik image
[root@kjdow7-200 ~]# docker pull traefik:v1.7.2-alpine v1.7.2-alpine: Pulling from library/traefik 4fe2ade4980c: Pull complete 8d9593d002f4: Pull complete 5d09ab10efbd: Pull complete 37b796c58adc: Pull complete Digest: sha256:cf30141936f73599e1a46355592d08c88d74bd291f05104fe11a8bcce447c044 Status: Downloaded newer image for traefik:v1.7.2-alpine docker.io/library/traefik:v1.7.2-alpine [root@kjdow7-200 ~]# docker images | grep traefik traefik v1.7.2-alpine add5fac61ae5 15 months ago 72.4MB [root@kjdow7-200 ~]# docker tag traefik:v1.7.2-alpine harbor.phc-dow.com/public/traefik:v1.7.2-alpine [root@kjdow7-200 ~]# docker push harbor.phc-dow.com/public/traefik:v1.7.2-alpine The push refers to repository [harbor.phc-dow.com/public/traefik] a02beb48577f: Pushed ca22117205f4: Pushed 3563c211d861: Pushed df64d3292fd6: Pushed v1.7.2-alpine: digest: sha256:6115155b261707b642341b065cd3fac2b546559ba035d0262650b3b3bbdd10ea size: 1157
3.2 Preparing the resource manifests
[root@kjdow7-200 ~]# mkdir /data/k8s-yaml/traefik [root@kjdow7-200 ~]# cat /data/k8s-yaml/traefik/rbac.yaml apiVersion: v1 kind: ServiceAccount metadata: name: traefik-ingress-controller namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRole metadata: name: traefik-ingress-controller rules: - apiGroups: - "" resources: - services - endpoints - secrets verbs: - get - list - watch - apiGroups: - extensions resources: - ingresses verbs: - get - list - watch --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: traefik-ingress-controller roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: traefik-ingress-controller subjects: - kind: ServiceAccount name: traefik-ingress-controller namespace: kube-system [root@kjdow7-200 ~]# cat /data/k8s-yaml/traefik/ds.yaml apiVersion: extensions/v1beta1 kind: DaemonSet metadata: name: traefik-ingress namespace: kube-system labels: k8s-app: traefik-ingress spec: template: metadata: labels: k8s-app: traefik-ingress name: traefik-ingress spec: serviceAccountName: traefik-ingress-controller terminationGracePeriodSeconds: 60 containers: - image: harbor.phc-dow.com/public/traefik:v1.7.2-alpine name: traefik-ingress ports: - name: controller containerPort: 80 hostPort: 81 - name: admin-web containerPort: 8080 securityContext: capabilities: drop: - ALL add: - NET_BIND_SERVICE args: - --api - --kubernetes - --logLevel=INFO - --insecureskipverify=true - --kubernetes.endpoint=https://10.4.7.10:7443 - --accesslog - --accesslog.filepath=/var/log/traefik_access.log - --traefiklog - --traefiklog.filepath=/var/log/traefik.log - --metrics.prometheus [root@kjdow7-200 ~]# cat /data/k8s-yaml/traefik/svc.yaml kind: Service apiVersion: v1 metadata: name: traefik-ingress-service namespace: kube-system spec: selector: k8s-app: traefik-ingress ports: - protocol: TCP port: 80 name: controller - protocol: TCP port: 8080 name: admin-web [root@kjdow7-200 ~]# cat /data/k8s-yaml/traefik/ingress.yaml apiVersion: extensions/v1beta1 kind: Ingress metadata: name: traefik-web-ui namespace: kube-system annotations: kubernetes.io/ingress.class: traefik spec: rules: - host: traefik.phc-dow.com http: paths: - path: / backend: serviceName: traefik-ingress-service servicePort: 8080
3.3 Creating the resources
[root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/traefik/rbac.yaml serviceaccount/traefik-ingress-controller created clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/traefik/ds.yaml daemonset.extensions/traefik-ingress created [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/traefik/svc.yaml service/traefik-ingress-service created [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/traefik/ingress.yaml ingress.extensions/traefik-web-ui created
Note: with kubectl get pod -n kube-system, the traefik Pod may stay in the ContainerCreating state and report the following error:
Warning FailedCreatePodSandBox 15m kubelet, kjdow7-21.host.com Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "traefik-ingress-7fsp8": Error response from daemon: driver failed programming external connectivity on endpoint k8s_POD_traefik-ingress-7fsp8_kube-system_3c7fbecb-801c-4f3a-aa30-e3717245d9f5_0 (7603ab3cbc43915876ab0db527195a963f8c8f4a59f8a1a84f00332f3f387227): (iptables failed: iptables --wait -t filter -A DOCKER ! -i docker0 -o docker0 -p tcp -d 172.7.21.5 --dport 80 -j ACCEPT: iptables: No chain/target/match by that name. (exit status 1))
If this happens, restarting the kubelet service resolves it (see the sketch below).
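Since the kubelet runs under supervisord in this deployment, restarting it looks like this (program name taken from the supervisorctl status output shown earlier; adjust per node):

[root@kjdow7-21 ~]# supervisorctl restart kube-kubelet-7-21
[root@kjdow7-21 ~]# supervisorctl status kube-kubelet-7-21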
As configured, Traefik is now listening on port 81 on the hosts.
[root@kjdow7-21 ~]# netstat -lntup | grep 81 tcp6 0 0 :::81 :::* LISTEN 25625/docker-proxy [root@kjdow7-22 ~]# netstat -lntup | grep 81 tcp6 0 0 :::81 :::* LISTEN 22080/docker-proxy
3.5 Configuring the reverse proxy
[root@kjdow7-11 ~]# cat /etc/nginx/conf.d/phc-dow.com.conf
upstream default_backend_traefik {
    server 10.4.7.21:81    max_fails=3 fail_timeout=10s;
    server 10.4.7.22:81    max_fails=3 fail_timeout=10s;
}
server {
    server_name *.phc-dow.com;

    location / {
        proxy_pass http://default_backend_traefik;
        proxy_set_header Host       $http_host;
        proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for;
    }
}
[root@kjdow7-11 ~]# systemctl reload nginx
3.6 Adding DNS resolution
[root@kjdow7-11 conf.d]# vim /var/named/phc-dow.com.zone $ORIGIN phc-dow.com. $TTL 600 ; 10 minutes @ IN SOA dns.phc-dow.com. dnsadmin.phc-dow.com. ( 2020010204 ; serial #serial值+1 10800 ; refresh (3 hours) 900 ; retry (15 minutes) 604800 ; expire (1 week) 86400 ; minimum (1 day) ) NS dns.phc-dow.com. $TTL 60 ; 1 minute dns A 10.4.7.11 harbor A 10.4.7.200 k8s-yaml A 10.4.7.200 traefik A 10.4.7.10 #添加此行配置 [root@kjdow7-11 conf.d]# systemctl restart named [root@kjdow7-11 conf.d]# dig -t A traefik,phc-dow.com @10.4.7.11 +short [root@kjdow7-11 conf.d]# dig -t A traefik.phc-dow.com @10.4.7.11 +short 10.4.7.10
Browse to traefik.phc-dow.com to reach the Traefik dashboard.
4. Installing and deploying the dashboard add-on
4.1 Preparing the dashboard image
[root@kjdow7-200 ~]# docker pull k8scn/kubernetes-dashboard-amd64:v1.8.3 [root@kjdow7-200 ~]# docker images | grep dashboard k8scn/kubernetes-dashboard-amd64 v1.8.3 fcac9aa03fd6 19 months ago 102MB [root@kjdow7-200 ~]# docker tag fcac9aa03fd6 harbor.phc-dow.com/public/dashboard:v1.8.3 [root@kjdow7-200 ~]# docker push harbor.phc-dow.com/public/dashboard:v1.8.3
4.2 Preparing the resource manifests
You can refer to cluster/addons/dashboard/ in the kubernetes repository on GitHub, where official YAML templates are provided.
[root@kjdow7-200 ~]# mkdir -p /data/k8s-yaml/dashboard [root@kjdow7-200 ~]# cat /data/k8s-yaml/dashboard/rbac.yaml apiVersion: v1 kind: ServiceAccount metadata: labels: k8s-app: kubernetes-dashboard addonmanager.kubernetes.io/mode: Reconcile name: kubernetes-dashboard-admin namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kubernetes-dashboard-admin namespace: kube-system labels: k8s-app: kubernetes-dashboard addonmanager.kubernetes.io/mode: Reconcile roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-admin subjects: - kind: ServiceAccount name: kubernetes-dashboard-admin namespace: kube-system [root@kjdow7-200 ~]# cat /data/k8s-yaml/dashboard/dp.yaml apiVersion: apps/v1 kind: Deployment metadata: name: kubernetes-dashboard namespace: kube-system labels: k8s-app: kubernetes-dashboard kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile spec: selector: matchLabels: k8s-app: kubernetes-dashboard template: metadata: labels: k8s-app: kubernetes-dashboard annotations: scheduler.alpha.kubernetes.io/critical-pod: '' spec: priorityClassName: system-cluster-critical containers: - name: kubernetes-dashboard image: harbor.phc-dow.com/public/dashboard:v1.8.3 resources: limits: cpu: 100m memory: 300Mi requests: cpu: 50m memory: 100Mi ports: - containerPort: 8443 protocol: TCP args: # PLATFORM-SPECIFIC ARGS HERE - --auto-generate-certificates volumeMounts: - name: tmp-volume mountPath: /tmp livenessProbe: httpGet: scheme: HTTPS path: / port: 8443 initialDelaySeconds: 30 timeoutSeconds: 30 volumes: - name: tmp-volume emptyDir: {} serviceAccountName: kubernetes-dashboard-admin tolerations: - key: "CriticalAddonsOnly" operator: "Exists" [root@kjdow7-200 dashboard]# cat svc.yaml apiVersion: v1 kind: Service metadata: name: kubernetes-dashboard namespace: kube-system labels: k8s-app: kubernetes-dashboard kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile spec: selector: k8s-app: kubernetes-dashboard ports: - port: 443 targetPort: 8443 [root@kjdow7-200 ~]# cat /data/k8s-yaml/dashboard/ingress.yaml apiVersion: extensions/v1beta1 kind: Ingress metadata: name: kubernetes-dashboard namespace: kube-system annotations: kubernetes.io/ingress.class: traefik spec: rules: - host: dashboard.phc-dow.com http: paths: - backend: serviceName: kubernetes-dashboard servicePort: 443
4.3 Creating the resources
[root@kjdow7-22 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dashboard/rbac.yaml serviceaccount/kubernetes-dashboard-admin created clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-admin created [root@kjdow7-22 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dashboard/dp.yaml deployment.apps/kubernetes-dashboard created [root@kjdow7-22 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dashboard/svc.yaml service/kubernetes-dashboard created [root@kjdow7-22 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dashboard/ingress.yaml ingress.extensions/kubernetes-dashboard created [root@kjdow7-22 ~]# kubectl get pod -n kube-system NAME READY STATUS RESTARTS AGE coredns-7dd986bcdc-2w8nw 1/1 Running 0 3d22h kubernetes-dashboard-857c754c78-fv5k6 1/1 Running 0 72s traefik-ingress-7fsp8 1/1 Running 0 2d20h traefik-ingress-rlwrj 1/1 Running 0 2d20h [root@kjdow7-22 ~]# kubectl get svc -n kube-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE coredns ClusterIP 192.168.0.2 <none> 53/UDP,53/TCP,9153/TCP 3d22h kubernetes-dashboard ClusterIP 192.168.75.142 <none> 443/TCP 2m32s traefik-ingress-service ClusterIP 192.168.58.5 <none> 80/TCP,8080/TCP 2d20h [root@kjdow7-22 ~]# kubectl get ingress -n kube-system NAME HOSTS ADDRESS PORTS AGE kubernetes-dashboard dashboard.phc-dow.com 80 2m30s traefik-web-ui traefik.phc-dow.com 80 2d20h
4.4 Adding DNS resolution
[root@kjdow7-11 ~]# cat /var/named/phc-dow.com.zone $ORIGIN phc-dow.com. $TTL 600 ; 10 minutes @ IN SOA dns.phc-dow.com. dnsadmin.phc-dow.com. ( 2020010205 ; serial #值加一 10800 ; refresh (3 hours) 900 ; retry (15 minutes) 604800 ; expire (1 week) 86400 ; minimum (1 day) ) NS dns.phc-dow.com. $TTL 60 ; 1 minute dns A 10.4.7.11 harbor A 10.4.7.200 k8s-yaml A 10.4.7.200 traefik A 10.4.7.10 dashboard A 10.4.7.10 #添加A記錄 [root@kjdow7-11 ~]# systemctl restart named [root@kjdow7-11 ~]# dig -t A dashboard.phc-dow.com @10.4.7.11 +short 10.4.7.10 [root@kjdow7-21 ~]# dig -t A dashboard.phc-dow.com @192.168.0.2 +shor 10.4.7.10
4.5 Configuring HTTPS access for the dashboard and logging in
[root@kjdow7-200 ~]# cd /opt/certs/ [root@kjdow7-200 certs]# (umask 077; openssl genrsa -out dashboard.phc-dow.com.key 2048) Generating RSA private key, 2048 bit long modulus ...............................................+++ ...........................+++ e is 65537 (0x10001) [root@kjdow7-200 certs]# openssl req -new -key dashboard.phc-dow.com.key -out dashboard.phc-dow.com.csr -subj "/CN=dashboard.phc-dow.com/C=CN/ST=SH/L=Shanghai/O=kjdow/OU=kj" [root@kjdow7-200 certs]# openssl x509 -req -in dashboard.phc-dow.com.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out dashboard.phc-dow.com.crt -days 3650 Signature ok subject=/CN=dashboard.phc-dow.com/C=CN/ST=SH/L=Shanghai/O=kjdow/OU=kj Getting CA Private Key [root@kjdow7-200 certs]# ls -l| grep dashboard -rw-r--r-- 1 root root 1212 Jan 20 23:59 dashboard.phc-dow.com.crt -rw-r--r-- 1 root root 1009 Jan 20 23:53 dashboard.phc-dow.com.csr -rw------- 1 root root 1679 Jan 20 23:48 dashboard.phc-dow.com.key
On kjdow7-11, configure nginx so that dashboard.phc-dow.com is accessed over HTTPS.
[root@kjdow7-11 ~]# mkdir /etc/nginx/certs [root@kjdow7-11 nginx]# cd /etc/nginx/certs [root@kjdow7-11 certs]# scp 10.4.7.200:/opt/certs/dashboard.phc-dow.com.crt . [root@kjdow7-11 certs]# scp 10.4.7.200:/opt/certs/dashboard.phc-dow.com.key . [root@kjdow7-11 conf.d]# cat dashboard.phc-dow.conf server { listen 80; server_name dashboard.phc-dow.com; rewrite ^(.*)$ https://${server_name}$1 permanent; } server { listen 443 ssl; server_name dashboard.phc-dow.com; ssl_certificate "certs/dashboard.phc-dow.com.crt"; ssl_certificate_key "certs/dashboard.phc-dow.com.key"; ssl_session_cache shared:SSL:1m; ssl_session_timeout 10m; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; location / { proxy_pass http://default_backend_traefik; proxy_set_header Host $http_host; proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for; } } [root@kjdow7-11 conf.d]# nginx -t nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful [root@kjdow7-11 conf.d]# systemctl reload nginx
4.6 Logging in to the dashboard with a token
[root@kjdow7-21 ~]# kubectl get secret -n kube-system NAME TYPE DATA AGE coredns-token-xr65q kubernetes.io/service-account-token 3 4d2h default-token-4gfv2 kubernetes.io/service-account-token 3 11d kubernetes-dashboard-admin-token-c55cw kubernetes.io/service-account-token 3 4h27m kubernetes-dashboard-key-holder Opaque 2 4h26m traefik-ingress-controller-token-p9jp6 kubernetes.io/service-account-token 3 3d1h [root@kjdow7-21 ~]# kubectl describe secret kubernetes-dashboard-admin-token-c55cw -n kube-system Name: kubernetes-dashboard-admin-token-c55cw Namespace: kube-system Labels: <none> Annotations: kubernetes.io/service-account.name: kubernetes-dashboard-admin kubernetes.io/service-account.uid: de6430a5-5d41-4916-917d-23f39a47c9a0 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1379 bytes namespace: 11 bytes token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1jNTVjdyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImRlNjQzMGE1LTVkNDEtNDkxNi05MTdkLTIzZjM5YTQ3YzlhMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.3FdxCC2-u7635hsifG57G0fR4kqnJPD5ARRGQXBfu47cEgCNbJMAceeW6f8Lmq_Acz_nQxaH92dVFuuouxJvBY1hrswQJMYb2qeH5icH-zZ3ivuzaZ9WFsiix-40w8itMvkv4EhQv8dGaId0DdkPxE0lo-OVUhfk3dndRZnLVPhmkBC-ciJEoaFUgjejaoLoEHx1lMXurVfd5BICPP_hGfeg5sA0HSUaUwp14oAOcbR6syHlCH3O5FN6q7Mxie9g0zqHvGc5RvyLWEKyYwbJwLPA2MeJxRmJ4wH6573w9yaOVEFvENMO-_yjJhIi1BL3MGDvtqWv35yk-jsLo5emVg
Copy the value of the token field at the end and paste it into the token field on the dashboard login page.
4.7 Upgrading the dashboard to 1.10.1
###下載最新版鏡像 [root@kjdow7-200 ~]# docker pull hexun/kubernetes-dashboard-amd64:v1.10.1 v1.10.1: Pulling from hexun/kubernetes-dashboard-amd64 9518d8afb433: Pull complete Digest: sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 Status: Downloaded newer image for hexun/kubernetes-dashboard-amd64:v1.10.1 docker.io/hexun/kubernetes-dashboard-amd64:v1.10.1 [root@kjdow7-200 ~]# docker images | grep dashboard hexun/kubernetes-dashboard-amd64 v1.10.1 f9aed6605b81 13 months ago 122MB [root@kjdow7-200 ~]# docker tag f9aed6605b81 harbor.phc-dow.com/public/dashboard:v1.10.1 [root@kjdow7-200 ~]# docker push harbor.phc-dow.com/public/dashboard:v1.10.1 ###修改dashboard的dp.yaml文件 [root@kjdow7-200 ~]# cd /data/k8s-yaml/dashboard/ [root@kjdow7-200 ~]# sed -i s#dashboard:v1.8.3#dashboard:v1.10.1#g dp.yaml #修改dp中使用的image ###應用最新配置 [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dashboard/dp.yaml deployment.apps/kubernetes-dashboard configured
Wait a few seconds and refresh the page. In 1.10.1 you cannot get in without logging in; there is no Skip option.
4.8 Creating a least-privilege ServiceAccount
###建立資源配置文件 [root@kjdow7-200 dashboard]# cat /data/k8s-yaml/dashboard/rbac-minimal.yaml apiVersion: v1 kind: ServiceAccount metadata: labels: k8s-app: kubernetes-dashboard addonmanager.kubernetes.io/mode: Reconcile name: kubernetes-dashboard namespace: kube-system --- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: labels: k8s-app: kubernetes-dashboard addonmanager.kubernetes.io/mode: Reconcile name: kubernetes-dashboard-minimal namespace: kube-system rules: # Allow Dashboard to get, update and delete Dashboard exclusive secrets. - apiGroups: [""] resources: ["secrets"] resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"] verbs: ["get", "update", "delete"] # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map. - apiGroups: [""] resources: ["configmaps"] resourceNames: ["kubernetes-dashboard-settings"] verbs: ["get", "update"] # Allow Dashboard to get metrics from heapster. - apiGroups: [""] resources: ["services"] resourceNames: ["heapster"] verbs: ["proxy"] - apiGroups: [""] resources: ["services/proxy"] resourceNames: ["heapster", "http:heapster:", "https:heapster:"] verbs: ["get"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubernetes-dashboard-minimal namespace: kube-system labels: k8s-app: kubernetes-dashboard addonmanager.kubernetes.io/mode: Reconcile roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubernetes-dashboard-minimal subjects: - kind: ServiceAccount name: kubernetes-dashboard namespace: kube-system [root@kjdow7-200 dashboard]# sed -i s#"serviceAccountName: kubernetes-dashboard-admin"#"serviceAccountName: kubernetes-dashboard"#g dp.yaml #修改綁定的服務用戶名 ###應用配置 [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dashboard/rbac-minimal.yaml serviceaccount/kubernetes-dashboard created role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created [root@kjdow7-21 ~]# kubectl get pod -n kube-system NAME READY STATUS RESTARTS AGE coredns-7dd986bcdc-2w8nw 1/1 Running 0 4d5h kubernetes-dashboard-7f5f8dd677-knlrs 1/1 Running 0 6s kubernetes-dashboard-d9d98bb89-2jmmx 0/1 Terminating 0 38m traefik-ingress-7fsp8 1/1 Running 0 3d3h traefik-ingress-rlwrj 1/1 Running 0 3d3h #能夠看到新的pod已經啓動,老的pod正在刪除
View the token of the new ServiceAccount and log in with it.
[root@kjdow7-21 ~]# kubectl get secret -n kube-system NAME TYPE DATA AGE coredns-token-xr65q kubernetes.io/service-account-token 3 4d5h default-token-4gfv2 kubernetes.io/service-account-token 3 11d kubernetes-dashboard-admin-token-c55cw kubernetes.io/service-account-token 3 6h58m kubernetes-dashboard-key-holder Opaque 2 6h58m kubernetes-dashboard-token-xnvct kubernetes.io/service-account-token 3 11m traefik-ingress-controller-token-p9jp6 kubernetes.io/service-account-token 3 3d3h [root@kjdow7-21 ~]# kubectl describe secret kubernetes-dashboard-token-xnvct -n kube-system Name: kubernetes-dashboard-token-xnvct Namespace: kube-system Labels: <none> Annotations: kubernetes.io/service-account.name: kubernetes-dashboard kubernetes.io/service-account.uid: 5d14b8fb-9b64-4e5b-ad49-13ac04cf44be Type: kubernetes.io/service-account-token Data ==== ca.crt: 1379 bytes namespace: 11 bytes token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi14bnZjdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjVkMTRiOGZiLTliNjQtNGU1Yi1hZDQ5LTEzYWMwNGNmNDRiZSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.qs1oTLOPUuufo5rJC6QkS09tGUBVHd21xqR4BfOU7QLv5Ua_thlFDVps5V1pznFTyk7hV_9pN9BmZ6GPecuF2eWiwUm-sLv5gf0lg1kvO3ObO9R1RJ8AuJ6slNXJwlpQC8H0jRK2QYgLEWvnF1_tHH2F0ZTmyqBnf_O-rMrwQvLr4FGEmiZ3yf_yI6V7gNwZ_TdWTrcxpaVZk8urpmucda-o9IToy98I0MrDe1EfLJrMl_YIBppmMJFFfTgArQ7IFVQ0STlpqKY6OKV8pTXXnuUnTuD0BzLxfXQfUMo-otQdSxvhQEr8vM5vC5zrvlnBkGqJ15ctQgT0qshAcENBJg
Copy the value of the final token field, paste it into the corresponding field on the dashboard web page, and log in.
You can see that the login succeeds, but because we configured the minimal set of permissions provided upstream, the dashboard reports permission errors.
Note:
1. In version 1.8.3, if you click Skip, the permissions you get after entering are those of the default ServiceAccount (serviceAccountName) bound in the dp.yaml file.
2. You can still log in with the previously configured account as well, because its role is already bound to that ServiceAccount.
4.9 How Kubernetes RBAC works
- Since version 1.6, Kubernetes uses Role-Based Access Control (RBAC) by default.
- It provides full coverage of permissions over the resources in the cluster.
- Permissions can be adjusted dynamically, without restarting the apiserver.
Taking the dashboard as an example of managing a user's resource permissions:
- There are three kinds of objects: accounts, roles, and permissions.
- Permissions are granted when a role is defined; the role is then bound to an account, which thereby gains the corresponding permissions. One account can be bound to multiple roles, which allows flexible control over combined permissions (a command-line sketch follows this list).
- Each (service) account has a corresponding secret; look at the secret's details, find the token field, and log in with that token value to obtain the permissions of the roles bound to that account.
  - kubectl get secret -n kube-system    # list the secrets
  - kubectl describe secret kubernetes-dashboard-token-xnvct -n kube-system    # show the details of the given secret, copy the token value, and use it to log in
- Accounts are divided into user accounts and service accounts.
- Roles come in two kinds: Role and ClusterRole.
- Correspondingly, there are two kinds of bindings: RoleBinding and ClusterRoleBinding.
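As mentioned above, a minimal sketch of the account/role/binding triple from the command line (the account name my-viewer and the built-in view ClusterRole are only examples, not objects used elsewhere in this document):

# create a service account in kube-system
kubectl create serviceaccount my-viewer -n kube-system
# bind the built-in "view" ClusterRole to it cluster-wide
kubectl create clusterrolebinding my-viewer-binding --clusterrole=view --serviceaccount=kube-system:my-viewer
# the account's token secret can then be used to log in to the dashboard
kubectl get secret -n kube-system | grep my-viewer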
5. Dashboard add-on: Heapster
5.1 Preparing the Heapster image
Deploy on kjdow7-200.
[root@kjdow7-200 ~]# docker pull quay.io/bitnami/heapster:1.5.4 [root@kjdow7-200 ~]# docker images | grep heapster quay.io/bitnami/heapster 1.5.4 c359b95ad38b 11 months ago 136MB [root@kjdow7-200 ~]# docker tag c359b95ad38b harbor.phc-dow.com/public/heapster:v1.5.4 [root@kjdow7-200 ~]# docker push harbor.phc-dow.com/public/heapster:v1.5.4
5.2 Preparing the resource manifests
On kjdow7-200:
[root@kjdow7-200 ~]# mkdir /data/k8s-yaml/dashboard/heapster [root@kjdow7-200 ~]# vi /data/k8s-yaml/dashboard/heapster/rbac.yaml apiVersion: v1 kind: ServiceAccount metadata: name: heapster namespace: kube-system --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: heapster roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:heapster subjects: - kind: ServiceAccount name: heapster namespace: kube-system [root@kjdow7-200 ~]# vi /data/k8s-yaml/dashboard/heapster/deployment.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: name: heapster namespace: kube-system spec: replicas: 1 template: metadata: labels: task: monitoring k8s-app: heapster spec: serviceAccountName: heapster containers: - name: heapster image: harbor.phc-dow.com/public/heapster:v1.5.4 imagePullPolicy: IfNotPresent command: - /opt/bitnami/heapster/bin/heapster - --source=kubernetes:https://kubernetes.default [root@kjdow7-200 ~]# vi /data/k8s-yaml/dashboard/heapster/svc.yaml apiVersion: v1 kind: Service metadata: labels: task: monitoring # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons) # If you are NOT using this as an addon, you should comment out this line. kubernetes.io/cluster-service: 'true' kubernetes.io/name: Heapster name: heapster namespace: kube-system spec: ports: - port: 80 targetPort: 8082 selector: k8s-app: heapster
5.3 Applying the resource manifests
Run on any worker node.
[root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dashboard/heapster/rbac.yaml serviceaccount/heapster created clusterrolebinding.rbac.authorization.k8s.io/heapster created [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dashboard/heapster/deployment.yaml deployment.extensions/heapster created [root@kjdow7-21 ~]# kubectl apply -f http://k8s-yaml.phc-dow.com/dashboard/heapster/svc.yaml service/heapster created [root@kjdow7-21 ~]# kubectl get pod -n kube-system -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES coredns-7dd986bcdc-rq9dz 1/1 Running 0 26h 172.7.21.3 kjdow7-21.host.com <none> <none> heapster-96d7f656f-4xw5k 1/1 Running 0 88s 172.7.21.4 kjdow7-21.host.com <none> <none> kubernetes-dashboard-7f5f8dd677-knlrs 1/1 Running 0 8d 172.7.22.5 kjdow7-22.host.com <none> <none> traefik-ingress-7fsp8 1/1 Running 0 11d 172.7.21.5 kjdow7-21.host.com <none> <none> traefik-ingress-rlwrj 1/1 Running 0 11d 172.7.22.4 kjdow7-22.host.com <none> <none>