Ideally, we could assume that Kubernetes Pods are robust. In practice, however, the gap between the ideal and reality is often large: containers in a Pod can and do die because of failures. Controllers such as Deployment keep the application healthy as a whole by dynamically creating and destroying Pods. As we know, every Pod has its own IP address, so when a Controller replaces a failed Pod with a new one, the new Pod's IP address will usually differ from the old one. How, then, is a client supposed to reach the service? This is the problem the Kubernetes Service was created to solve.
A Kubernetes Service logically represents a group of Pods selected by labels. The Service has its own IP, and that IP does not change: no matter how the backend Pods come and go, the Service stays the same. First, create a Deployment with the following YAML:
```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: httpd
spec:
  replicas: 4
  template:
    metadata:
      labels:
        run: httpd
    spec:
      containers:
      - name: httpd
        image: httpd
        ports:
        - containerPort: 80
```
Apply it:
```bash
[root@k8s-m ~]# kubectl apply -f Httpd-Deployment.yaml
deployment.apps/httpd created
```
After a short wait:
```bash
[root@k8s-m ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE
httpd-79c4f99955-dbbx7   1/1     Running   0          7m32s   10.244.2.35    k8s-n2   <none>
httpd-79c4f99955-djv44   1/1     Running   0          7m32s   10.244.1.101   k8s-n1   <none>
httpd-79c4f99955-npqxz   1/1     Running   0          7m32s   10.244.1.102   k8s-n1   <none>
httpd-79c4f99955-vkjk6   1/1     Running   0          7m32s   10.244.2.36    k8s-n2   <none>
[root@k8s-m ~]# curl 10.244.2.35
<html><body><h1>It works!</h1></body></html>
[root@k8s-m ~]# curl 10.244.2.36
<html><body><h1>It works!</h1></body></html>
[root@k8s-m ~]# curl 10.244.1.101
<html><body><h1>It works!</h1></body></html>
[root@k8s-m ~]# curl 10.244.1.102
<html><body><h1>It works!</h1></body></html>
```
Next, create the Service with the following YAML:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: httpd-svc
spec:
  selector:
    run: httpd
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
```
Apply it and observe:
```bash
[root@k8s-m ~]# kubectl apply -f Httpd-Service.yaml
service/httpd-svc created
[root@k8s-m ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
httpd-svc    ClusterIP   10.110.212.171   <none>        8080/TCP   14s
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    11d
[root@k8s-m ~]# curl 10.110.212.171:8080
<html><body><h1>It works!</h1></body></html>
[root@k8s-m ~]# kubectl describe service httpd-svc
Name:              httpd-svc
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"httpd-svc","namespace":"default"},"spec":{"ports":[{"port":8080,"...
Selector:          run=httpd
Type:              ClusterIP
IP:                10.110.212.171
Port:              <unset>  8080/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.101:80,10.244.1.102:80,10.244.2.35:80 + 1 more...
Session Affinity:  None
Events:            <none>
```
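The Endpoints list shown by `kubectl describe` is, conceptually, the result of matching the Service's selector against Pod labels. A small, purely illustrative Python sketch (the pod list below is made up to mirror the output above; this is not how the real endpoint controller is implemented):

```python
def select_pods(pods, selector):
    """Return the IPs of pods whose labels contain every key/value
    pair in the selector, as a Service's selector conceptually does."""
    return [
        pod["ip"]
        for pod in pods
        if all(pod["labels"].get(k) == v for k, v in selector.items())
    ]

# Hypothetical pod data modeled on the `kubectl get pod -o wide` output.
pods = [
    {"ip": "10.244.2.35",  "labels": {"run": "httpd"}},
    {"ip": "10.244.1.101", "labels": {"run": "httpd"}},
    {"ip": "10.244.1.102", "labels": {"run": "httpd"}},
    {"ip": "10.244.2.36",  "labels": {"run": "httpd"}},
    {"ip": "10.244.1.50",  "labels": {"run": "nginx"}},  # not selected
]

print(select_pods(pods, {"run": "httpd"}))  # the four httpd Pod IPs
```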
The Endpoints field above shows that the Service httpd-svc covers exactly the Pods carrying the labels we specified, and that the cluster IP is successfully mapped to the Pod IPs via iptables. Let's inspect the relevant iptables rules with `iptables-save`:
```bash
[root@k8s-m ~]# iptables-save | grep "10.110.212.171"
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.110.212.171/32 -p tcp -m comment --comment "default/httpd-svc: cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.110.212.171/32 -p tcp -m comment --comment "default/httpd-svc: cluster IP" -m tcp --dport 8080 -j KUBE-SVC-RL3JAE4GN7VOGDGP
[root@k8s-m ~]# iptables-save | grep -v 'default/httpd-svc' | grep 'KUBE-SVC-RL3JAE4GN7VOGDGP'
:KUBE-SVC-RL3JAE4GN7VOGDGP - [0:0]
-A KUBE-SVC-RL3JAE4GN7VOGDGP -m statistic --mode random --probability 0.25000000000 -j KUBE-SEP-R5YBMKYSG56R4KDU
-A KUBE-SVC-RL3JAE4GN7VOGDGP -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-7G5ANBWSVVLRNZAH
-A KUBE-SVC-RL3JAE4GN7VOGDGP -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-2PT6QZGNQHS4OL4I
-A KUBE-SVC-RL3JAE4GN7VOGDGP -j KUBE-SEP-I4PXZ6UARQLLOV4E
```
We could trace the forwarding rules further, but we omit that here. In short, iptables forwards traffic addressed to the Service to the backend Pods, spreading it statistically evenly in a round-robin-like fashion.
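The cascading probabilities in the rules above (0.25, then 0.333 of what remains, then 0.5, then everything else) are how the iptables `statistic` module approximates an even split across four endpoints: rule *i* fires with probability 1/(n−i) over the remaining endpoints, so each endpoint receives 1/n of the traffic overall. A minimal Python sketch of that selection logic (an illustration, not actual kube-proxy code):

```python
import random

def pick_endpoint(endpoints, rng=random.random):
    """Mimic the chain of iptables statistic rules: rule i matches
    with probability 1/(n-i); the last rule always fires."""
    n = len(endpoints)
    for i, ep in enumerate(endpoints):
        if i == n - 1 or rng() < 1.0 / (n - i):
            return ep

endpoints = ["10.244.2.35", "10.244.1.101", "10.244.1.102", "10.244.2.36"]
counts = {ep: 0 for ep in endpoints}
for _ in range(100_000):
    counts[pick_endpoint(endpoints)] += 1
# Each endpoint should end up with roughly 25% of the requests.
print(counts)
```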
Our cluster was deployed with kubeadm, version v1.12.1; the DNS component that ships with this version is CoreDNS.
```bash
[root@k8s-m ~]# kubectl get deployment --namespace=kube-system
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
coredns   2         2         2            2           17d
```
Let's verify that DNS works by spinning up a temporary, throwaway environment:
```bash
[root@k8s-m ~]# kubectl run -it --rm busybox --image=busybox /bin/sh
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
If you don't see a command prompt, try pressing enter.
/ # wget httpd-svc.default:8080
Connecting to httpd-svc.default:8080 (10.110.212.171:8080)
index.html           100% |*******************************|    45  0:00:00 ETA
/ # cat index.html
<html><body><h1>It works!</h1></body></html>
```
As an aside, kubectl run may no longer be supported in future versions; kubectl create is the recommended replacement. We took a shortcut here, and this usage is not recommended going forward.
In the example above, the temporary environment runs in the default namespace, the same namespace as our httpd-svc, so the `.default` suffix of httpd-svc.default can be omitted. When accessing a Service across namespaces, the namespace must not be omitted.
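Cluster DNS names follow the pattern `<service>.<namespace>.svc.<cluster-domain>`, with `cluster.local` as the default cluster domain. A small sketch of how the short and fully qualified forms relate:

```python
def service_dns_names(service, namespace="default", cluster_domain="cluster.local"):
    """Return the name forms that resolve to the same Service,
    from shortest (same-namespace only) to fully qualified."""
    return [
        service,                                        # resolvable from the same namespace only
        f"{service}.{namespace}",                       # resolvable across namespaces
        f"{service}.{namespace}.svc.{cluster_domain}",  # fully qualified
    ]

print(service_dns_names("httpd-svc"))
```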
In general, a Kubernetes Service can be accessed in four ways: ClusterIP, NodePort, LoadBalancer, and ExternalName. The experiments so far have all used ClusterIP, where Nodes and Pods inside the cluster reach the Service through its cluster IP. NodePort instead exposes the Service externally through a static port on each cluster node.
Next we demonstrate NodePort. The modified Service YAML:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: httpd-svc
spec:
  type: NodePort
  selector:
    run: httpd
  ports:
  - protocol: TCP
    nodePort: 31688
    port: 8080
    targetPort: 80
```
Apply it and observe:
```bash
[root@k8s-m ~]# kubectl apply -f Httpd-Service.yaml
service/httpd-svc configured
[root@k8s-m ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
httpd-svc    NodePort    10.110.212.171   <none>        8080:31688/TCP   117m
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          12d
```
Port 8080 of the Service httpd-svc is now mapped to port 31688 on the hosts. If the YAML does not specify a nodePort, Kubernetes allocates one for the Service from the 30000-32767 range. At this point we can reach the service from a browser: from any machine with network access to the nodes, any NodeIP:31688 reaches the Service we just deployed.
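The allocation behavior described above can be sketched as follows. This is a toy model, not the real API server allocator; the 30000-32767 default corresponds to the `--service-node-port-range` flag:

```python
import random

NODE_PORT_RANGE = range(30000, 32768)  # default --service-node-port-range

def allocate_node_port(in_use, requested=None):
    """Return `requested` if it is free and in range; otherwise pick a
    random free port from the range. Toy model of NodePort allocation."""
    if requested is not None:
        if requested not in NODE_PORT_RANGE or requested in in_use:
            raise ValueError(f"port {requested} unavailable")
        return requested
    free = [p for p in NODE_PORT_RANGE if p not in in_use]
    return random.choice(free)

print(allocate_node_port(in_use={31000}, requested=31688))  # → 31688
```

Requesting a port outside the range (for example 8080) fails, which mirrors the validation error the API server would return for such a spec.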