Kubernetes is an open-source container orchestration platform that automates the deployment, scheduling, elastic scaling, and load balancing of containerized applications. A cluster is composed of two roles: master and node.
1. View the master component status
[root@node-1 ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
2. View the node list
[root@node-1 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
node-1   Ready    master   26h   v1.14.1
node-2   Ready    <none>   26h   v1.14.1
node-3   Ready    <none>   26h   v1.14.1
3. View node details
[root@node-1 ~]# kubectl describe node node-3
Name:               node-3
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64    # labels and annotations
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node-3
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"22:f8:75:bb:da:4e"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.254.100.103
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 10 Aug 2019 17:50:00 +0800
Taints:             <none>
Unschedulable:      false    # whether scheduling is disabled; the flag toggled by kubectl cordon
Conditions:         # schedulability signals: MemoryPressure (memory is running low), DiskPressure
                    # (disk is running low), PIDPressure (too many processes), and Ready (the node is
                    # working normally: resources are sufficient and the relevant daemons are healthy)
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sun, 11 Aug 2019 20:32:07 +0800   Sat, 10 Aug 2019 17:50:00 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 11 Aug 2019 20:32:07 +0800   Sat, 10 Aug 2019 17:50:00 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sun, 11 Aug 2019 20:32:07 +0800   Sat, 10 Aug 2019 17:50:00 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sun, 11 Aug 2019 20:32:07 +0800   Sat, 10 Aug 2019 18:04:20 +0800   KubeletReady                 kubelet is posting ready status
Addresses:          # address and hostname
  InternalIP:  10.254.100.103
  Hostname:    node-3
Capacity:           # total resource capacity of the node
  cpu:                2
  ephemeral-storage:  51473868Ki
  hugepages-2Mi:      0
  memory:             3880524Ki
  pods:               110
Allocatable:        # resources available for allocation to pods
  cpu:                2
  ephemeral-storage:  47438316671
  hugepages-2Mi:      0
  memory:             3778124Ki
  pods:               110
System Info:        # system info: kernel version, OS, CPU architecture, component versions
  Machine ID:                 0ea734564f9a4e2881b866b82d679dfc
  System UUID:                D98ECAB1-2D9E-41CC-9A5E-51A44DC5BB97
  Boot ID:                    6ec81f5b-cb05-4322-b47a-a8e046d9bf79
  Kernel Version:             3.10.0-957.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://18.3.1    # container runtime is docker, version 18.3.1
  Kubelet Version:            v1.14.1            # kubelet version
  Kube-Proxy Version:         v1.14.1            # kube-proxy version
PodCIDR:            10.244.2.0/24    # network used by pods on this node
Non-terminated Pods: (4 in total)    # resource usage of each pod on the node
  Namespace    Name                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------    ----                         ------------  ----------  ---------------  -------------  ---
  kube-system  coredns-fb8b8dccf-hrqm8      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     26h
  kube-system  coredns-fb8b8dccf-qwwks      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     26h
  kube-system  kube-flannel-ds-amd64-zzm2g  100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      26h
  kube-system  kube-proxy-x8zqh             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26h
Allocated resources:    # resources already allocated
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                300m (15%)  100m (5%)
  memory             190Mi (5%)  390Mi (10%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:             <none>
Kubernetes is a container orchestration engine responsible for scheduling, managing and running containers. However, the smallest schedulable unit in Kubernetes is not the container but the pod, and a pod can contain multiple containers. Pods are usually not run directly in a cluster; instead they are run through workload controllers such as Deployments, ReplicaSets and DaemonSets. Why? Because controllers keep pod state consistent. As the official docs put it, they "make sure the current state matches the desired state": simply put, if a pod fails, the controller recreates it on another node, ensuring the pods actually running in the cluster match what was specified.
In Kubernetes the pod is the unit that actually runs, and pods live on nodes. A node may fail, in which case a controller such as a ReplicaSet re-creates the pod on another node, and the new pod gets a new IP. Moreover, an application is usually deployed with multiple replicas; for example, a Deployment may run 3 pod replicas, each pod acting like a backend Real Server. How do clients reach these three replicas? In that situation we normally put a load balancer in front of the Real Servers, and a Service is exactly that: the load balancer for pods. A Service abstracts a set of dynamic pods into a single service; applications simply access the Service, and the Service forwards requests to the backend pods. Two mechanisms implement Service forwarding rules: iptables and ipvs. iptables implements load balancing with DNAT and related rules, while ipvs programs forwarding rules via ipvsadm.
Depending on how a service is to be accessed, Services come in the following types, set via the type field: ClusterIP, NodePort, LoadBalancer and ExternalName.
Pods change dynamically: their IPs can change (e.g. after a node failure) and their replica count can change (application scale up, scale down, etc.). How does a Service track these changes? The answer is labels: the Service automatically filters out an application's Endpoints by label, the Endpoints are updated automatically as pods change, and different applications carry different label sets. For more on labels see https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
Let's now deploy an application, i.e. a Deployment. Kubernetes offers various workloads: stateless Deployments, stateful StatefulSets, daemon-style DaemonSets, each suited to a different scenario. We start with Deployments as the introductory example; the other workloads are similar. In general, applications are deployed to Kubernetes with YAML files, but writing YAML is verbose and unfriendly for beginners, so we start with the kubectl command line to access the API.
1. Deploy an nginx application with three replicas
[root@node-1 ~]# kubectl run nginx-app-demo --image=nginx:1.7.9 --port=80 --replicas=3
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-app-demo created
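The same deployment can also be written declaratively. A minimal sketch of an equivalent manifest, assuming the defaults the `kubectl run` generator would use (field values match the command above; everything else here is our assumption, not output from the cluster):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app-demo
  labels:
    run: nginx-app-demo
spec:
  replicas: 3                  # same as --replicas=3
  selector:
    matchLabels:
      run: nginx-app-demo      # must match the pod template labels
  template:
    metadata:
      labels:
        run: nginx-app-demo
    spec:
      containers:
      - name: nginx-app-demo
        image: nginx:1.7.9     # same as --image
        ports:
        - containerPort: 80    # same as --port
```

Saved as, say, nginx-app-demo.yaml, this could be applied with `kubectl apply -f nginx-app-demo.yaml`.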
2. View the application list. All pods are healthy: READY shows the current ready count against the desired count, and AVAILABLE shows how many replicas are available.
[root@node-1 ~]# kubectl get deployments
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
nginx-app-demo   3/3     3            3           72s
3. View the application details. As shown below, a Deployment manages its replica count through a ReplicaSet, and the ReplicaSet in turn controls the number of pods.
[root@node-1 ~]# kubectl describe deployments nginx-app-demo
Name:                   nginx-app-demo    # application name
Namespace:              default           # namespace
CreationTimestamp:      Sun, 11 Aug 2019 21:52:32 +0800
Labels:                 run=nginx-app-demo    # labels: important, the Service will later select pods by them
Annotations:            deployment.kubernetes.io/revision: 1    # rolling-update revision number
Selector:               run=nginx-app-demo    # label selector
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable    # replica status
StrategyType:           RollingUpdate    # upgrade strategy is RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge    # roll at most 25% of the pods at a time
Pod Template:    # pod template: image, ports, storage, etc.
  Labels:  run=nginx-app-demo
  Containers:
   nginx-app-demo:
    Image:        nginx:1.7.9
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:    # current conditions
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-app-demo-7bdfd97dcd (3/3 replicas created)    # name of the controlling ReplicaSet
Events:    # events
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  3m24s  deployment-controller  Scaled up replica set nginx-app-demo-7bdfd97dcd to 3
4. View the ReplicaSet. We can see the ReplicaSet replica controller created the three pods.
1. List the replicasets
[root@node-1 ~]# kubectl get replicasets
NAME                        DESIRED   CURRENT   READY   AGE
nginx-app-demo-7bdfd97dcd   3         3         3       9m9s
2. View the replicaset details
[root@node-1 ~]# kubectl describe replicasets nginx-app-demo-7bdfd97dcd
Name:           nginx-app-demo-7bdfd97dcd
Namespace:      default
Selector:       pod-template-hash=7bdfd97dcd,run=nginx-app-demo
Labels:         pod-template-hash=7bdfd97dcd    # a pod-template-hash label is added to identify the ReplicaSet
                run=nginx-app-demo
Annotations:    deployment.kubernetes.io/desired-replicas: 3    # rolling-update info: desired replicas, max replicas, revision
                deployment.kubernetes.io/max-replicas: 4
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/nginx-app-demo    # parent controller: the Deployment nginx-app-demo
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:    # pod template, inherited from the Deployment
  Labels:  pod-template-hash=7bdfd97dcd
           run=nginx-app-demo
  Containers:
   nginx-app-demo:
    Image:        nginx:1.7.9
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:    # event log: three distinct pods were created
  Type    Reason            Age    From                   Message
  ----    ------            ----   ----                   -------
  Normal  SuccessfulCreate  9m25s  replicaset-controller  Created pod: nginx-app-demo-7bdfd97dcd-hsrft
  Normal  SuccessfulCreate  9m25s  replicaset-controller  Created pod: nginx-app-demo-7bdfd97dcd-qtbzd
  Normal  SuccessfulCreate  9m25s  replicaset-controller  Created pod: nginx-app-demo-7bdfd97dcd-7t72x
5. View the pods, the actual vehicles in which the application runs. Each pod runs an nginx container and is assigned an IP through which the application can be accessed directly.
1. List the pods; the names match those generated by the ReplicaSet
[root@node-1 ~]# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
nginx-app-demo-7bdfd97dcd-7t72x   1/1     Running   0          13m
nginx-app-demo-7bdfd97dcd-hsrft   1/1     Running   0          13m
nginx-app-demo-7bdfd97dcd-qtbzd   1/1     Running   0          13m
2. View a pod's details
[root@node-1 ~]# kubectl describe pods nginx-app-demo-7bdfd97dcd-7t72x
Name:               nginx-app-demo-7bdfd97dcd-7t72x
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node-3/10.254.100.103
Start Time:         Sun, 11 Aug 2019 21:52:32 +0800
Labels:             pod-template-hash=7bdfd97dcd    # labels
                    run=nginx-app-demo
Annotations:        <none>
Status:             Running
IP:                 10.244.2.4    # pod IP address
Controlled By:      ReplicaSet/nginx-app-demo-7bdfd97dcd    # controlled by the ReplicaSet
Containers:    # container info: container ID, image, port, state, environment variables, etc.
  nginx-app-demo:
    Container ID:   docker://5a0e5560583c5929e9768487cef43b045af4c6d3b7b927d9daf181cb28867766
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 11 Aug 2019 21:52:40 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:         /var/run/secrets/kubernetes.io/serviceaccount from default-token-txhkc (ro)
Conditions:    # pod conditions
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:    # volumes
  default-token-txhkc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-txhkc
    Optional:    false
QoS Class:       BestEffort    # QoS class
Node-Selectors:  <none>    # node selectors
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s    # tolerations for node taints
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:    # events: scheduling, image pull, container start
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  14m   default-scheduler  Successfully assigned default/nginx-app-demo-7bdfd97dcd-7t72x to node-3
  Normal  Pulling    14m   kubelet, node-3    Pulling image "nginx:1.7.9"
  Normal  Pulled     14m   kubelet, node-3    Successfully pulled image "nginx:1.7.9"
  Normal  Created    14m   kubelet, node-3    Created container nginx-app-demo
  Normal  Started    14m   kubelet, node-3    Started container nginx-app-demo
Kubernetes assigns every pod an IP address, and the application can be accessed through that address directly, much like accessing an RS (Real Server). But an application is a whole made up of multiple replicas, and load balancing across them requires a Service. Here we explore the ClusterIP and NodePort access modes of a Service.
1. Set the pods' content. To tell them apart, we give the three pods' nginx sites different content so the load-balancing effect is visible.
1. List the pods
[root@node-1 ~]# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
nginx-app-demo-7bdfd97dcd-7t72x   1/1     Running   0          28m
nginx-app-demo-7bdfd97dcd-hsrft   1/1     Running   0          28m
nginx-app-demo-7bdfd97dcd-qtbzd   1/1     Running   0          28m
2. Exec into the pod's container
[root@node-1 ~]# kubectl exec -it nginx-app-demo-7bdfd97dcd-7t72x /bin/bash
3. Set the site content
root@nginx-app-demo-7bdfd97dcd-7t72x:/# echo "web1" >/usr/share/nginx/html/index.html
4. Likewise set the other two pods' content to web2 and web3
root@nginx-app-demo-7bdfd97dcd-hsrft:/# echo web2 >/usr/share/nginx/html/index.html
root@nginx-app-demo-7bdfd97dcd-qtbzd:/# echo web3 >/usr/share/nginx/html/index.html
2. Get the pod IP addresses. How do we get them quickly? The -o wide flag shows extra columns, including each pod's node and IP.
[root@node-1 ~]# kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-app-demo-7bdfd97dcd-7t72x   1/1     Running   0          34m   10.244.2.4   node-3   <none>           <none>
nginx-app-demo-7bdfd97dcd-hsrft   1/1     Running   0          34m   10.244.1.2   node-2   <none>           <none>
nginx-app-demo-7bdfd97dcd-qtbzd   1/1     Running   0          34m   10.244.1.3   node-2   <none>           <none>
3. Access each pod IP and check the site content. Each pod returns the content set in the previous step.
[root@node-1 ~]# curl http://10.244.2.4
web1
[root@node-1 ~]# curl http://10.244.1.2
web2
[root@node-1 ~]# curl http://10.244.1.3
web3
Accessing the application by pod IP works for a single pod, but it does not meet the needs of a multi-replica application; that requires a Service for load balancing. A Service has a type, defaulting to ClusterIP, i.e. cluster-internal access. Below we expose the Deployment as a Service with the expose subcommand.
1. Expose the service. --port is the port the proxy listens on, --target-port is the container port, and --type sets the Service type.
[root@node-1 ~]# kubectl expose deployment nginx-app-demo --name nginx-service-demo \
    --port=80 \
    --protocol=TCP \
    --target-port=80 \
    --type ClusterIP
service/nginx-service-demo exposed
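For reference, the Service that `kubectl expose` creates corresponds to roughly this manifest (a sketch, not cluster output; the clusterIP is assigned automatically by the cluster and is normally omitted from a hand-written manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-demo
  labels:
    run: nginx-app-demo
spec:
  type: ClusterIP          # same as --type
  selector:
    run: nginx-app-demo    # pods with this label become the endpoints
  ports:
  - port: 80               # service (ClusterIP) port, same as --port
    targetPort: 80         # container port, same as --target-port
    protocol: TCP
```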
2. View the Service details. The Service uses its label selector to gather the pod IPs into Endpoints automatically.
1. List the services. Two are shown; "kubernetes" is the default Service created for the cluster itself.
[root@node-1 ~]# kubectl get services
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes           ClusterIP   10.96.0.1    <none>        443/TCP   29h
nginx-service-demo   ClusterIP   10.102.1.1   <none>        80/TCP    2m54s
2. View the Service details. The Selector matches the labels set earlier on the Deployment, and Endpoints gathers the pods into a list.
[root@node-1 ~]# kubectl describe services nginx-service-demo
Name:              nginx-service-demo    # name
Namespace:         default               # namespace
Labels:            run=nginx-app-demo    # labels
Annotations:       <none>
Selector:          run=nginx-app-demo    # label selector
Type:              ClusterIP             # service type is ClusterIP
IP:                10.102.1.1            # the service IP, i.e. a VIP auto-assigned inside the cluster
Port:              <unset>  80/TCP       # service port, the port exposed on the ClusterIP
TargetPort:        80/TCP                # container port
Endpoints:         10.244.1.2:80,10.244.1.3:80,10.244.2.4:80    # backend address list
Session Affinity:  None                  # session affinity setting (None: no client stickiness)
Events:            <none>
3. Access the Service address. From the responses we can see the Service load-balances across the pods, distributing requests in round-robin fashion. Why? Because the Service's default Session Affinity is None, so requests are spread across all backends; setting it to ClientIP enables session persistence, sending requests from the same client IP to the same pod.
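Session persistence is a one-field change on the Service spec. A sketch of the relevant fragment (the 10800-second value shown is the Kubernetes default affinity window, made explicit here for illustration):

```yaml
spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # affinity window; 10800s (3h) is the default
```

This could also be applied in place with a patch, e.g. `kubectl patch services nginx-service-demo -p '{"spec":{"sessionAffinity":"ClientIP"}}'`.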
[root@node-1 ~]# curl http://10.102.1.1
web3
[root@node-1 ~]# curl http://10.102.1.1
web1
[root@node-1 ~]# curl http://10.102.1.1
web2
[root@node-1 ~]# curl http://10.102.1.1
4. A deeper look at how ClusterIP works. A Service is implemented by one of two backends: iptables or ipvs. This environment was installed with iptables, which builds the access rules out of NAT chains: KUBE-SVC-R5Y5DZHD7Q6DDTFZ is the inbound DNAT forwarding rule, and KUBE-MARK-MASQ handles the outbound direction.
[root@node-1 ~]# iptables -t nat -L -n
Chain KUBE-SERVICES (2 references)
target                     prot opt source           destination
KUBE-MARK-MASQ             tcp  --  !10.244.0.0/16   10.102.1.1   /* default/nginx-service-demo: cluster IP */ tcp dpt:80
KUBE-SVC-R5Y5DZHD7Q6DDTFZ  tcp  --  0.0.0.0/0        10.102.1.1   /* default/nginx-service-demo: cluster IP */ tcp dpt:80
Outbound: traffic whose source is not in 10.244.0.0/16, destined for 10.102.1.1 port 80, is handed to the KUBE-MARK-MASQ chain.
Inbound: traffic from any source destined for 10.102.1.1 port 80 is handed to the KUBE-SVC-R5Y5DZHD7Q6DDTFZ chain.
5. Inspect the inbound request rules. The inbound chain fans out to several per-endpoint chains, each of which forwards to a different pod IP.
1. Inspect the inbound chain KUBE-SVC-R5Y5DZHD7Q6DDTFZ; requests are forwarded to one of three chains
[root@node-1 ~]# iptables -t nat -L KUBE-SVC-R5Y5DZHD7Q6DDTFZ -n
Chain KUBE-SVC-R5Y5DZHD7Q6DDTFZ (1 references)
target                     prot opt source      destination
KUBE-SEP-DSWLUQNR4UPH24AX  all  --  0.0.0.0/0   0.0.0.0/0   statistic mode random probability 0.33332999982
KUBE-SEP-56SLMGHHOILJT36K  all  --  0.0.0.0/0   0.0.0.0/0   statistic mode random probability 0.50000000000
KUBE-SEP-K6G4Z74HQYF6X7SI  all  --  0.0.0.0/0   0.0.0.0/0
2. Inspect the three forwarding chains; each actually maps to a different pod IP
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-DSWLUQNR4UPH24AX -n
Chain KUBE-SEP-DSWLUQNR4UPH24AX (1 references)
target          prot opt source       destination
KUBE-MARK-MASQ  all  --  10.244.1.2   0.0.0.0/0
DNAT            tcp  --  0.0.0.0/0    0.0.0.0/0    tcp to:10.244.1.2:80
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-56SLMGHHOILJT36K -n
Chain KUBE-SEP-56SLMGHHOILJT36K (1 references)
target          prot opt source       destination
KUBE-MARK-MASQ  all  --  10.244.1.3   0.0.0.0/0
DNAT            tcp  --  0.0.0.0/0    0.0.0.0/0    tcp to:10.244.1.3:80
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-K6G4Z74HQYF6X7SI -n
Chain KUBE-SEP-K6G4Z74HQYF6X7SI (1 references)
target          prot opt source       destination
KUBE-MARK-MASQ  all  --  10.244.2.4   0.0.0.0/0
DNAT            tcp  --  0.0.0.0/0    0.0.0.0/0    tcp to:10.244.2.4:80
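The probabilities in the chain are not arbitrary: the first rule matches 1/3 of the time, the second matches 1/2 of the remaining traffic, and the last rule catches everything left, so each endpoint receives an equal 1/3 share overall. A small Python sketch of this rule cascade (the endpoint IPs are taken from the output above; the simulation itself is only illustrative, not kube-proxy code):

```python
import random

ENDPOINTS = ["10.244.1.2", "10.244.1.3", "10.244.2.4"]

def pick_endpoint(endpoints):
    """Walk the rules like the KUBE-SVC chain: the i-th rule fires with
    probability 1/(n - i), and the final rule always matches."""
    n = len(endpoints)
    for i, ep in enumerate(endpoints):
        if random.random() < 1.0 / (n - i):
            return ep
    return endpoints[-1]  # unreachable: the last rule has probability 1

random.seed(0)
counts = {ep: 0 for ep in ENDPOINTS}
for _ in range(30_000):
    counts[pick_endpoint(ENDPOINTS)] += 1

for ep, c in counts.items():
    print(ep, round(c / 30_000, 3))  # each share comes out close to 1/3
```

This is why kube-proxy rewrites the whole chain whenever the endpoint count changes: the per-rule probabilities must be recomputed as 1/n, 1/(n-1), ..., 1.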
A ClusterIP Service only serves traffic inside the cluster; the application cannot be reached from outside. For external access there are several options: NodePort, LoadBalancer and Ingress. LoadBalancer must be provided by a cloud provider, and Ingress requires installing a separate Ingress Controller, so for day-to-day testing NodePort is the easiest: it exposes a port on every node to the outside network.
1. Change the Service type from ClusterIP to NodePort (or recreate the Service, specifying type NodePort)
1. Patch the Service type
[root@node-1 ~]# kubectl patch services nginx-service-demo -p '{"spec":{"type": "NodePort"}}'
service/nginx-service-demo patched
2. Confirm the YAML: a NodePort has been allocated, i.e. every node now listens on that port
[root@node-1 ~]# kubectl get services nginx-service-demo -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-08-11T14:35:59Z"
  labels:
    run: nginx-app-demo
  name: nginx-service-demo
  namespace: default
  resourceVersion: "157676"
  selfLink: /api/v1/namespaces/default/services/nginx-service-demo
  uid: 55e29b78-bc45-11e9-b073-525400490421
spec:
  clusterIP: 10.102.1.1
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32416    # an automatically allocated NodePort
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx-app-demo
  sessionAffinity: None
  type: NodePort    # type changed to NodePort
status:
  loadBalancer: {}
3. List the services: the type is now NodePort, and the ClusterIP access address is still retained
[root@node-1 ~]# kubectl get services
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes           ClusterIP   10.96.0.1    <none>        443/TCP        30h
nginx-service-demo   NodePort    10.102.1.1   <none>        80:32416/TCP   68m
2. Access the application through the NodePort. Each node's address acts like a VIP and gives the same load-balancing effect; at the same time the ClusterIP remains usable.
1. NodePort load balancing
[root@node-1 ~]# curl http://node-1:32416
web1
[root@node-1 ~]# curl http://node-2:32416
web1
[root@node-1 ~]# curl http://node-3:32416
web1
[root@node-1 ~]# curl http://node-3:32416
web3
[root@node-1 ~]# curl http://node-3:32416
web2
2. ClusterIP load balancing
[root@node-1 ~]# curl http://10.102.1.1
web2
[root@node-1 ~]# curl http://10.102.1.1
web1
[root@node-1 ~]# curl http://10.102.1.1
web1
[root@node-1 ~]# curl http://10.102.1.1
web3
3. How NodePort forwarding works: on every node, kube-proxy listens on the NodePort, and iptables rules behind it do the actual port forwarding.
1. The port kube-proxy listens on
[root@node-1 ~]# netstat -antupl | grep 32416
tcp6       0      0 :::32416                :::*                    LISTEN      32052/kube-proxy
2. The nat-table forwarding rules: KUBE-MARK-MASQ for the outbound direction and KUBE-SVC-R5Y5DZHD7Q6DDTFZ for the inbound direction
Chain KUBE-NODEPORTS (1 references)
target                     prot opt source      destination
KUBE-MARK-MASQ             tcp  --  0.0.0.0/0   0.0.0.0/0   /* default/nginx-service-demo: */ tcp dpt:32416
KUBE-SVC-R5Y5DZHD7Q6DDTFZ  tcp  --  0.0.0.0/0   0.0.0.0/0   /* default/nginx-service-demo: */ tcp dpt:32416
3. The inbound request chain KUBE-SVC-R5Y5DZHD7Q6DDTFZ
[root@node-1 ~]# iptables -t nat -L KUBE-SVC-R5Y5DZHD7Q6DDTFZ -n
Chain KUBE-SVC-R5Y5DZHD7Q6DDTFZ (2 references)
target                     prot opt source      destination
KUBE-SEP-DSWLUQNR4UPH24AX  all  --  0.0.0.0/0   0.0.0.0/0   statistic mode random probability 0.33332999982
KUBE-SEP-56SLMGHHOILJT36K  all  --  0.0.0.0/0   0.0.0.0/0   statistic mode random probability 0.50000000000
KUBE-SEP-K6G4Z74HQYF6X7SI  all  --  0.0.0.0/0   0.0.0.0/0
4. The per-endpoint chains: each contains a DNAT rule plus KUBE-MARK-MASQ for the return path
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-DSWLUQNR4UPH24AX -n
Chain KUBE-SEP-DSWLUQNR4UPH24AX (1 references)
target          prot opt source       destination
KUBE-MARK-MASQ  all  --  10.244.1.2   0.0.0.0/0
DNAT            tcp  --  0.0.0.0/0    0.0.0.0/0    tcp to:10.244.1.2:80
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-56SLMGHHOILJT36K -n
Chain KUBE-SEP-56SLMGHHOILJT36K (1 references)
target          prot opt source       destination
KUBE-MARK-MASQ  all  --  10.244.1.3   0.0.0.0/0
DNAT            tcp  --  0.0.0.0/0    0.0.0.0/0    tcp to:10.244.1.3:80
[root@node-1 ~]# iptables -t nat -L KUBE-SEP-K6G4Z74HQYF6X7SI -n
Chain KUBE-SEP-K6G4Z74HQYF6X7SI (1 references)
target          prot opt source       destination
KUBE-MARK-MASQ  all  --  10.244.2.4   0.0.0.0/0
DNAT            tcp  --  0.0.0.0/0    0.0.0.0/0    tcp to:10.244.2.4:80
When an application's load grows beyond what it can serve, we usually scale out by adding Real Servers. In Kubernetes, scaling out means increasing the replica count, which is very convenient and gives fast elasticity. Kubernetes offers two kinds of scaling: 1. manual scale up and scale down; 2. dynamic elastic scaling with HorizontalPodAutoscalers, which scale automatically based on CPU utilization and depend on a monitoring component such as metrics-server. The latter is not set up here and will be explored in depth later; this article scales the application's replica count manually.
1. Manually scale the replica count
[root@node-1 ~]# kubectl scale --replicas=4 deployment nginx-app-demo
deployment.extensions/nginx-app-demo scaled
2. Check the scale-out: the Deployment automatically brought up the extra replica
[root@node-1 ~]# kubectl get deployments
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
nginx-app-demo   4/4     4            4           133m
3. What happens to the Service? Looking at its details, the newly created pod has been added to the Service's endpoints automatically: service discovery just works.
1. View the Service details
[root@node-1 ~]# kubectl describe services nginx-service-demo
Name:                     nginx-service-demo
Namespace:                default
Labels:                   run=nginx-app-demo
Annotations:              <none>
Selector:                 run=nginx-app-demo
Type:                     NodePort
IP:                       10.102.1.1
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  32416/TCP
Endpoints:                10.244.1.2:80,10.244.1.3:80,10.244.2.4:80 + 1 more...    # the new address was added automatically
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
2. View the Endpoints details
[root@node-1 ~]# kubectl describe endpoints nginx-service-demo
Name:         nginx-service-demo
Namespace:    default
Labels:       run=nginx-app-demo
Annotations:  endpoints.kubernetes.io/last-change-trigger-time: 2019-08-11T16:04:56Z
Subsets:
  Addresses:          10.244.1.2,10.244.1.3,10.244.2.4,10.244.2.5
  NotReadyAddresses:  <none>
  Ports:
    Name     Port  Protocol
    ----     ----  --------
    <unset>  80    TCP
Events:       <none>
4. Test: set the new pod's site content to web4 using the same method as before, then curl the Service IP and observe the load balancing.
[root@node-1 ~]# curl http://10.102.1.1
web4
[root@node-1 ~]# curl http://10.102.1.1
web4
[root@node-1 ~]# curl http://10.102.1.1
web2
[root@node-1 ~]# curl http://10.102.1.1
web3
[root@node-1 ~]# curl http://10.102.1.1
web1
[root@node-1 ~]# curl http://10.102.1.1
web2
[root@node-1 ~]# curl http://10.102.1.1
web1
As we can see, scaled-out pods are added to the Service automatically, giving automatic service discovery and load balancing; scaling an application this way is far faster than with traditional deployments. Beyond this, Kubernetes also supports automatic horizontal scaling, i.e. the Horizontal Pod Autoscaler, which works with a monitoring system to adjust the pod count elastically based on CPU utilization.
To update an application in Kubernetes, you package the new version into an image and update the Deployment's image. The default upgrade strategy of a Deployment is RollingUpdate, which replaces at most 25% of the application's pods at a time, creating new pods and swapping out old ones one by one so the application stays available throughout the upgrade. If the upgrade fails, the application can be rolled back to its previous state; rollback is implemented through the retained ReplicaSets.
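The 25%/25% defaults translate into concrete pod-count bounds during a rollout. A small sketch of the arithmetic (Kubernetes rounds maxSurge up and maxUnavailable down; the function name here is ours, not a Kubernetes API):

```python
import math

def rolling_update_bounds(replicas, max_surge_pct=25, max_unavailable_pct=25):
    """Return (max total pods, min available pods) during a RollingUpdate.
    Per Deployment semantics, maxSurge rounds up and maxUnavailable rounds down."""
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    return replicas + surge, replicas - unavailable

print(rolling_update_bounds(4))  # -> (5, 3): at most 5 pods exist, at least 3 stay available
```

For the 4-replica Deployment in this walkthrough, the controller may therefore run up to 5 pods while keeping at least 3 available, which matches the one-in, one-out replacement pattern observed in the watch output.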
1. Swap the nginx image to upgrade the application to the latest version. Open another terminal and watch the upgrade with kubectl get pods -w.
[root@node-1 ~]# kubectl set image deployments/nginx-app-demo nginx-app-demo=nginx:latest
deployment.extensions/nginx-app-demo image updated
2. Watch the upgrade. As the output shows, pods are replaced one by one, each new pod being created before an old one is deleted.
[root@node-1 ~]# kubectl get pods -w
NAME                              READY   STATUS    RESTARTS   AGE
nginx-app-demo-7bdfd97dcd-7t72x   1/1     Running   0          145m
nginx-app-demo-7bdfd97dcd-hsrft   1/1     Running   0          145m
nginx-app-demo-7bdfd97dcd-j6lgd   1/1     Running   0          12m
nginx-app-demo-7bdfd97dcd-qtbzd   1/1     Running   0          145m
nginx-app-demo-5cc8746f96-xsxz4   0/1     Pending             0     0s    # a new pod is created
nginx-app-demo-5cc8746f96-xsxz4   0/1     Pending             0     0s
nginx-app-demo-7bdfd97dcd-j6lgd   1/1     Terminating         0     14m   # an old pod is deleted and replaced
nginx-app-demo-5cc8746f96-xsxz4   0/1     ContainerCreating   0     0s
nginx-app-demo-5cc8746f96-s49nv   0/1     Pending             0     0s    # a second new pod is created
nginx-app-demo-5cc8746f96-s49nv   0/1     Pending             0     0s
nginx-app-demo-5cc8746f96-s49nv   0/1     ContainerCreating   0     0s
nginx-app-demo-7bdfd97dcd-j6lgd   0/1     Terminating         0     14m   # a second old pod is replaced
nginx-app-demo-5cc8746f96-s49nv   1/1     Running             0     7s
nginx-app-demo-7bdfd97dcd-qtbzd   1/1     Terminating         0     146m
nginx-app-demo-5cc8746f96-txjqh   0/1     Pending             0     0s
nginx-app-demo-5cc8746f96-txjqh   0/1     Pending             0     0s
nginx-app-demo-5cc8746f96-txjqh   0/1     ContainerCreating   0     0s
nginx-app-demo-7bdfd97dcd-j6lgd   0/1     Terminating         0     14m
nginx-app-demo-7bdfd97dcd-j6lgd   0/1     Terminating         0     14m
nginx-app-demo-5cc8746f96-xsxz4   1/1     Running             0     9s
nginx-app-demo-5cc8746f96-txjqh   1/1     Running             0     1s
nginx-app-demo-7bdfd97dcd-hsrft   1/1     Terminating         0     146m
nginx-app-demo-7bdfd97dcd-qtbzd   0/1     Terminating         0     146m
nginx-app-demo-5cc8746f96-rcpmw   0/1     Pending             0     0s
nginx-app-demo-5cc8746f96-rcpmw   0/1     Pending             0     0s
nginx-app-demo-5cc8746f96-rcpmw   0/1     ContainerCreating   0     0s
nginx-app-demo-7bdfd97dcd-7t72x   1/1     Terminating         0     146m
nginx-app-demo-7bdfd97dcd-7t72x   0/1     Terminating         0     147m
nginx-app-demo-7bdfd97dcd-hsrft   0/1     Terminating         0     147m
nginx-app-demo-7bdfd97dcd-hsrft   0/1     Terminating         0     147m
nginx-app-demo-5cc8746f96-rcpmw   1/1     Running             0     2s
nginx-app-demo-7bdfd97dcd-7t72x   0/1     Terminating         0     147m
nginx-app-demo-7bdfd97dcd-7t72x   0/1     Terminating         0     147m
nginx-app-demo-7bdfd97dcd-hsrft   0/1     Terminating         0     147m
nginx-app-demo-7bdfd97dcd-hsrft   0/1     Terminating         0     147m
nginx-app-demo-7bdfd97dcd-qtbzd   0/1     Terminating         0     147m
nginx-app-demo-7bdfd97dcd-qtbzd   0/1     Terminating         0     147m
3. Looking at the Deployment details again, it has switched to a new ReplicaSet; the old ReplicaSet remains at revision 1 and can be used for rollback.
[root@node-1 ~]# kubectl describe deployments nginx-app-demo
Name:                   nginx-app-demo
Namespace:              default
CreationTimestamp:      Sun, 11 Aug 2019 21:52:32 +0800
Labels:                 run=nginx-app-demo
Annotations:            deployment.kubernetes.io/revision: 2    # new revision number, usable for rollback
Selector:               run=nginx-app-demo
Replicas:               4 desired | 4 updated | 4 total | 4 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  run=nginx-app-demo
  Containers:
   nginx-app-demo:
    Image:        nginx:latest
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-app-demo-5cc8746f96 (4/4 replicas created)    # the new ReplicaSet that replaced the old one
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  19m    deployment-controller  Scaled up replica set nginx-app-demo-7bdfd97dcd to 4
  Normal  ScalingReplicaSet  4m51s  deployment-controller  Scaled up replica set nginx-app-demo-5cc8746f96 to 1
  Normal  ScalingReplicaSet  4m51s  deployment-controller  Scaled down replica set nginx-app-demo-7bdfd97dcd to 3
  Normal  ScalingReplicaSet  4m51s  deployment-controller  Scaled up replica set nginx-app-demo-5cc8746f96 to 2
  Normal  ScalingReplicaSet  4m43s  deployment-controller  Scaled down replica set nginx-app-demo-7bdfd97dcd to 2
  Normal  ScalingReplicaSet  4m43s  deployment-controller  Scaled up replica set nginx-app-demo-5cc8746f96 to 3
  Normal  ScalingReplicaSet  4m42s  deployment-controller  Scaled down replica set nginx-app-demo-7bdfd97dcd to 1
  Normal  ScalingReplicaSet  4m42s  deployment-controller  Scaled up replica set nginx-app-demo-5cc8746f96 to 4
  Normal  ScalingReplicaSet  4m42s  deployment-controller  Scaled down replica set nginx-app-demo-7bdfd97dcd to 0
4. View the rollout history: there are two revisions, corresponding to the two different ReplicaSets.
1. View the rollout history
[root@node-1 ~]# kubectl rollout history deployment nginx-app-demo
deployment.extensions/nginx-app-demo
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
2. List the replicasets; the old one now holds 0 pods
[root@node-1 ~]# kubectl get replicasets
NAME                        DESIRED   CURRENT   READY   AGE
nginx-app-demo-5cc8746f96   4         4         4       9m2s
nginx-app-demo-7bdfd97dcd   0         0         0       155m
5. Test the upgrade: nginx now reports the latest version, nginx/1.17.2.
[root@node-1 ~]# curl -I http://10.102.1.1
HTTP/1.1 200 OK
Server: nginx/1.17.2    # nginx version
Date: Sun, 11 Aug 2019 16:30:03 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 23 Jul 2019 11:45:37 GMT
Connection: keep-alive
ETag: "5d36f361-264"
Accept-Ranges: bytes
6. Roll back to the old version
[root@node-1 ~]# kubectl rollout undo deployment nginx-app-demo --to-revision=1
deployment.extensions/nginx-app-demo rolled back
Test the application again: it is back on the old version.
[root@node-1 ~]# curl -I http://10.102.1.1
HTTP/1.1 200 OK
Server: nginx/1.7.9
Date: Sun, 11 Aug 2019 16:34:33 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 23 Dec 2014 16:25:09 GMT
Connection: keep-alive
ETag: "54999765-264"
Accept-Ranges: bytes
Basic concepts: https://kubernetes.io/docs/tutorials/kubernetes-basics/
Deploying an app: https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/deploy-intro/
Exploring your app: https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/
Exposing your app: https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
Scaling your app: https://kubernetes.io/docs/tutorials/kubernetes-basics/scale/scale-intro/
Rolling updates: https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/