Normally we define a Service to manage and expose a group of Pods. To expose a service externally, it is enough to declare the corresponding port (NodePort mode), but once many Service objects need to be exposed this way you end up managing a large number of ports, which quickly becomes hard to maintain. Kubernetes therefore also provides the Ingress mechanism: for example, bind Nginx to a single fixed port such as 80 and forward incoming requests to the appropriate Service. Even then, every newly added service would still require editing the Nginx configuration, so Kubernetes introduces the Ingress Controller component. Put simply, the step of "modify the Nginx configuration and add forwarding rules pointing at a Service" is abstracted into an Ingress object; the Ingress Controller then talks to the Kubernetes API, watches for changes to Ingress rules in the cluster, and writes the updated configuration into the Nginx Pod.
kind: Service
apiVersion: v1
metadata:
  name: test-service
spec:
  selector:
    app: test-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
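To try this out, the manifest can be saved to a file and applied; a minimal sketch (the file name below is just an example):

$ kubectl apply -f test-service.yaml
# Confirm the Service, its cluster IP and the Pod endpoints it selects
$ kubectl get svc test-service
$ kubectl get endpoints test-service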
By default a Service like this is only reachable from inside the cluster; to make it accessible from outside, the Service still has to be exposed, as illustrated below:
internet
    |
------------
[ Services ]
Kubernetes defines the following ways of exposing a service:
Proxy
Access services through the Kubernetes API server proxy. This is typically used for viewing the dashboard from an internal network or for remote debugging.
{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "kubernetes", "namespace": "default", "selfLink": "/api/v1/namespaces/default/services/kubernetes", "uid": "f594405b-dc19-11e8-90ea-0050569f4a19", "resourceVersion": "6", "creationTimestamp": "2018-10-30T08:01:08Z", "labels": { "component": "apiserver", "provider": "kubernetes" } }, "spec": { "ports": [ { "name": "https", "protocol": "TCP", "port": 443, "targetPort": 6443 } ], "clusterIP": "10.96.0.1", "type": "ClusterIP", "sessionAffinity": "None" }, "status": { "loadBalancer": {} } }
NodePort Service
Setting the service type to NodePort is the most primitive way to send external traffic to a service. A specific port is opened on each Node host; requests hitting that host port are forwarded and load-balanced to the Pods behind the Service. The drawback is that there may be many Services, and if each one binds its own node port the hosts end up exposing a pile of externally reachable ports, which is messy to manage; in addition, only ports in the 30000-32767 range can be used. It is best suited to temporarily exposing a single service, for example for a demo.
{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "kubernetes-dashboard", "namespace": "kube-system", "selfLink": "/api/v1/namespaces/kube-system/services/kubernetes-dashboard", "uid": "edd78318-dc1a-11e8-90ea-0050569f4a19", "resourceVersion": "1076", "creationTimestamp": "2018-10-30T08:08:05Z", "labels": { "k8s-app": "kubernetes-dashboard" } }, "spec": { "ports": [ { "protocol": "TCP", "port": 443, "targetPort": 8443, "nodePort": 32151 } ], "selector": { "k8s-app": "kubernetes-dashboard" }, "clusterIP": "10.103.60.159", "type": "NodePort", "sessionAffinity": "None", "externalTrafficPolicy": "Cluster" }, "status": { "loadBalancer": {} } }
LoadBalancer Service
This is generally used together with a public cloud. All traffic on the specified port is forwarded to the service. Exposing a service as type LoadBalancer actually asks the cloud platform to provision a load balancer in front of it, which usually incurs extra cost.
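A minimal sketch of a LoadBalancer Service (the name and selector are illustrative; the cloud provider assigns the external IP once the load balancer has been provisioned):

apiVersion: v1
kind: Service
metadata:
  name: demo-lb              # example name
spec:
  type: LoadBalancer
  selector:
    app: demo                # example selector
  ports:
  - protocol: TCP
    port: 80                 # port exposed by the cloud load balancer
    targetPort: 8080         # container port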
The Nginx Ingress Controller consists of two parts: Nginx itself and the Ingress Controller.
Ingress
Unlike the types above, an Ingress is not actually a Service type. It sits in front of multiple services and routes requests for different domain names to the corresponding services in the cluster. Put simply, an Ingress is the entry point for traffic coming into the Kubernetes cluster from outside, forwarding the user's URL requests to different Services. The Ingress plays the role of a reverse-proxy load balancer such as Nginx or Apache, and it also carries the rule definitions, i.e. the URL routing information; keeping that routing information up to date is the job of the Ingress controller.
Ingress controller
The Ingress Controller talks to the Kubernetes API continuously, so it learns about changes to backend Services and Pods in real time, such as Pods being added or removed and Services being created or deleted. When it sees such a change, it combines it with the Ingress definitions described below to generate new configuration, updates the reverse-proxy load balancer and reloads it, thereby providing service discovery. For the nginx Ingress controller, creating an Ingress is roughly equivalent to adding a server block to nginx.conf and running nginx -s reload to make it take effect.
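This can be observed directly once the controller from the installation section below is running: the rendered configuration lives inside the controller Pod (a sketch; the pod name is looked up dynamically and the path assumes the standard ingress-nginx image layout):

$ POD_NAME=$(kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
# Look at the server blocks that were generated from Ingress objects
$ kubectl -n ingress-nginx exec -it $POD_NAME -- cat /etc/nginx/nginx.conf | grep "server_name"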
For example, suppose you want a single load balancer to route different sub-domains to different services:
foo.bar.com --|                 |-> foo.bar.com s1:80
              | 178.91.123.132  |
bar.foo.com --|                 |-> bar.foo.com s2:80
Define the Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
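Once an Ingress controller is serving this Ingress, host-based routing can be checked without touching DNS by sending the Host header explicitly (the IP is the load balancer address from the diagram above and purely illustrative):

$ curl -H "Host: foo.bar.com" http://178.91.123.132/   # should be routed to service s1
$ curl -H "Host: bar.foo.com" http://178.91.123.132/   # should be routed to service s2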
An Ingress by itself does not create a load balancer; an Ingress controller must be running to watch the Ingress definitions and manage the load balancer accordingly. For Ingress to work, the cluster must therefore be running an Ingress controller. Controllers that support Ingress include the following:
Kubernetes currently supports and maintains GCE and nginx controllers.
F5 Networks provides support and maintenance for the F5 BIG-IP Controller for Kubernetes.
Kong offers community or commercial support and maintenance for the Kong Ingress Controller for Kubernetes
Traefik is a fully featured ingress controller (Let’s Encrypt, secrets, http2, websocket…), and it also comes with commercial support by Containous
NGINX, Inc. offers support and maintenance for the NGINX Ingress Controller for Kubernetes
HAProxy based ingress controller jcmoraisjr/haproxy-ingress which is mentioned on this blog post HAProxy Ingress Controller for Kubernetes
Istio based ingress controller Control Ingress Traffic
The NGINX Ingress Controller for Kubernetes listed above is the variant supported by NGINX, Inc. as a third party; it is not the official Kubernetes one (although the official controller also depends on the open-source version of Nginx).
For the differences between nginxinc/kubernetes-ingress and kubernetes/ingress-nginx, see https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/nginx-ingress-controllers.md. Unless you are using the commercial NGINX Plus, the official controller offers richer support. Software-based options such as Traefik, HAProxy and Kong are also good choices, and if budget is not a concern a commercial hardware load balancer such as F5 is an option as well. Personally I favour kubernetes/ingress-nginx and Traefik. This article uses the official kubernetes/ingress-nginx as the ingress controller; other third-party components can be tried later.
Install nginx-ingress-controller (in host network mode set the replica count according to the number of Nodes; it also depends on whether you deploy it as a Deployment or a DaemonSet):
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
$ kubectl -n ingress-nginx get pod -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP          NODE                NOMINATED NODE
nginx-ingress-controller-5bdbdc5657-bcn2f   1/1     Running   0          74m   10.38.0.1   kubernetes-node-2   <none>
nginx-ingress-controller-5bdbdc5657-mmtph   1/1     Running   0          79m   10.40.0.1   kubernetes-node-1   <none>

$ kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
NAMESPACE       NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-6457d975c8-6twqf   1/1     Running   0          78m
ingress-nginx   nginx-ingress-controller-5bdbdc5657-mmtph   1/1     Running   0          83m

$ kubectl describe pod nginx-ingress-controller-6457d975c8-6twqf -n ingress-nginx
Name:               nginx-ingress-controller-6457d975c8-6twqf
Namespace:          ingress-nginx
Priority:           0
PriorityClassName:  <none>
Node:               kubernetes-node-2/172.23.216.50
Start Time:         Wed, 31 Oct 2018 20:04:44 +0800
Labels:             app.kubernetes.io/name=ingress-nginx
                    app.kubernetes.io/part-of=ingress-nginx
                    pod-template-hash=6457d975c8
Annotations:        prometheus.io/port: 10254
                    prometheus.io/scrape: true
Status:             Running
IP:                 172.23.216.50
Controlled By:      ReplicaSet/nginx-ingress-controller-6457d975c8
Containers:
  nginx-ingress-controller:
    Container ID:  docker://f4d5b69cf579752799d6d7e92c547ed9a5a0ba9154b3683c4956079ea9e77304
    Image:         quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
    Image ID:      docker-pullable://quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:f6180c5397d2361c317aff1314dc192ab0f9f515346a5319422cdc264f05d2d9
    Ports:         80/TCP, 443/TCP
    Host Ports:    80/TCP, 443/TCP
    Args:
      /nginx-ingress-controller
      --configmap=$(POD_NAMESPACE)/nginx-configuration
      --publish-service=$(POD_NAMESPACE)/ingress-nginx
      --annotations-prefix=nginx.ingress.kubernetes.io
    State:          Running
      Started:      Wed, 31 Oct 2018 20:04:45 +0800
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:10254/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       nginx-ingress-controller-6457d975c8-6twqf (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-serviceaccount-token-rhpsb (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nginx-ingress-serviceaccount-token-rhpsb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nginx-ingress-serviceaccount-token-rhpsb
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                From                        Message
  ----     ------            ----               ----                        -------
  Warning  FailedScheduling  49m (x3 over 49m)  default-scheduler           0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 node(s) didn't have free ports for the requested pod ports.
  Normal   Scheduled         49m                default-scheduler           Successfully assigned ingress-nginx/nginx-ingress-controller-6457d975c8-6twqf to kubernetes-node-2
  Normal   Pulled            48m                kubelet, kubernetes-node-2  Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0" already present on machine
  Normal   Created           48m                kubelet, kubernetes-node-2  Created container
  Normal   Started           48m                kubelet, kubernetes-node-2  Started container
Check the installed version:
POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    0.20.0
  Build:      git-e8d8103
  Repository: https://github.com/kubernetes/ingress-nginx.git
-------------------------------------------------------------------------------
Depending on the network environment and scenario, a different strategy for exposing the controller is needed; see the official documentation for details. The common options are listed below:
Cloud environments mode
If you are on a public cloud, you can simply put the cloud provider's load balancer in front of the Nodes; this incurs extra cost.
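A minimal sketch of what such a Service could look like for the ingress-nginx controller (names and labels follow the official deployment manifests used earlier; the exact per-cloud manifests live in the ingress-nginx repository):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer          # the cloud provider provisions the external load balancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443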
NodePort Service mode (for temporary testing; generally not recommended)
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    # nodePort: 30000
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
Apply it:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml
$ kubectl -n ingress-nginx get svc
NAME            TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.98.42.64   <none>        80:31460/TCP,443:31200/TCP   6m50s
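The controller is then reachable on any Node IP at the allocated node ports (31460/31200 above). A quick check (the node IP is from this environment; the default backend answers 404 until an Ingress rule matches):

$ curl -D- http://172.23.216.50:31460/
$ curl -kD- https://172.23.216.50:31200/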
host network mode
Set hostNetwork: true in the Deployment:
vi nginx-ingress-controller.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
---

$ kubectl apply -f nginx-ingress-controller.yaml
Check ingress-nginx:
$ kubectl -n ingress-nginx get pod -o wide
NAME                                        READY   STATUS    RESTARTS   AGE    IP              NODE                NOMINATED NODE
nginx-ingress-controller-6457d975c8-6twqf   1/1     Running   0          3m7s   172.23.216.50   kubernetes-node-2   <none>
nginx-ingress-controller-6457d975c8-smjsv   1/1     Running   0          3m7s   172.23.216.49   kubernetes-node-1   <none>
Test access:
$ curl -D- 172.23.216.50
HTTP/1.1 404 Not Found
Server: nginx/1.15.5
Date: Wed, 31 Oct 2018 12:08:40 GMT
Content-Type: text/html
Content-Length: 153
Connection: keep-alive

<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.15.5</center>
</body>
</html>
Create kubernetes-dashboard-ingress (the dashboard serves HTTPS by default, so the annotations below must be set):
vi kubernetes-dashboard-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard-ingress
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
#  tls:
#  - secretName: k8s-dashboard-secret
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
      - path: /test
        backend:
          serviceName: test-nginx
          servicePort: 80
Apply it and check the Ingress:

$ kubectl apply -f kubernetes-dashboard-ingress.yaml
$ kubectl get ingress -o wide --all-namespaces
NAMESPACE     NAME                           HOSTS   ADDRESS   PORTS   AGE
kube-system   kubernetes-dashboard-ingress   *                 80      3h53m
Finally, access https://172.23.216.49/ or https://172.23.216.50/.
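From the command line this can be verified with curl (-k because the dashboard certificate is self-signed; a dashboard response should come back instead of the 404 default backend seen earlier):

$ curl -kI https://172.23.216.49/
$ curl -kI https://172.23.216.50/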
Note: for other features, refer to the official documentation.
Other commands:

# Delete the Ingress Controller namespace
$ kubectl delete namespace ingress-nginx
# Install network tools
$ yum install net-tools
# Check which ports are listening
$ netstat -ntlp
REFER:
https://kubernetes.io/docs/concepts/services-networking/ingress/
https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
https://www.nginx.com/products/nginx/kubernetes-ingress-controller
https://github.com/kubernetes/ingress-nginx
https://github.com/containous/traefik
https://github.com/nginxinc/kubernetes-ingress/