To follow this article you should already know the following:
How Services work and how to use them
The principle of reverse proxying, and the vhost concept in nginx and Apache
The Service types (NodePort, ClusterIP, LoadBalancer)
The difference between layer 4 and layer 7 (roughly: layer 7 is the application layer, most commonly HTTP and URLs; layer 4 is the transport layer, i.e. TCP/UDP ports)
Domain name resolution, /etc/hosts, and similar basics
Ingress Controller
Ingress NGINX: the solution maintained by the Kubernetes project, and the controller installed in this article.
F5 BIG-IP Controller: a controller developed by F5 that lets administrators manage F5 BIG-IP devices from Kubernetes and OpenShift via CLI or API.
Ingress Kong: the Kubernetes Ingress Controller maintained by the well-known open-source API gateway Kong.
Traefik: an open-source HTTP reverse proxy and load balancer that also supports Ingress.
Voyager: an Ingress Controller built on top of HAProxy.
There are more Ingress Controller implementations than the ones listed above; plenty of others can be found online, so I won't enumerate them all here.
When we want to expose the Services of workloads running in the cluster, an Ingress Controller is the option that holds up best over time and is easiest to manage and maintain: a ClusterIP cannot be reached from hosts outside the cluster, NodePort is awkward to manage long term and less efficient, one LoadBalancer per service becomes unwieldy and costs extra money once you have many services, and externalIPs are clumsy to use (I'll write a separate article about them when I have time).
Most of the services we run are layer-7 HTTP(S). The Ingress Controller is exposed outside the cluster through a Service or the pod network, and it then reverse-proxies the layer-7 services inside the cluster, routing to backend services by vhost-style subdomains. Its working architecture is shown below (diagram borrowed from the official Traefik docs).
You can route traffic arriving at api.domain.com to the api pods in the cluster, and traffic for backoffice.domain.com to the group of backoffice pods. We could of course run our own nginx instead of an Ingress Controller, but adding every new service to proxy by hand becomes a maintenance burden over time. With an Ingress Controller you describe the desired proxying with an abstract object, kind: Ingress, which tells the controller what to add: it specifies which ServerName and URL path coming in through the Ingress Controller should be proxied to which Service (and which Service port) in the cluster.
The official Ingress NGINX can be thought of as a heavily customized nginx. Once the cluster grants it the necessary RBAC permissions, it is able to watch Ingress-related changes in the cluster. A user creates a kind: Ingress object; for example, the routing in the Traefik diagram above would look roughly like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: api.mydomain.com
    http:
      paths:
      - backend:
          serviceName: api
          servicePort: 80
  - host: domain.com
    http:
      paths:
      - path: /web/*
        backend:
          serviceName: web
          servicePort: 8080
  - host: backoffice.domain.com
    http:
      paths:
      - backend:
          serviceName: backoffice
          servicePort: 8080
As soon as an Ingress like the one above is created, the ingress controller picks up the change, generates the corresponding configuration block, and dynamically reloads its configuration file.
Deployment is very simple — a single command; the yaml comes from https://github.com/kubernetes/ingress-nginx/tree/master/deploy.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
That yaml does not expose the controller outside the cluster by itself; the ingress-controller needs hostNetwork: true so that the ingress's port 80, and any other ports exposed in the controller's nginx.conf, are reachable on the host.
Below is a modified yaml that I have verified to work:
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller
          image: hejianlai/nginx-ingress-controller:0.23.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10

---
The configuration items in the yaml above that we need to care about are explained in detail later; first, let's create an Ingress object and try it out.
After deploying the official ingress nginx, I deployed an nginx pod and created a Service named nginx for it:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Then create a corresponding Ingress object to expose this nginx HTTP service from the cluster:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: nginx.testdomain.com
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80
Check the Ingress resource:
[root@master k8s_yaml]# kubectl get ingress
NAME                HOSTS                  ADDRESS   PORTS   AGE
app-nginx-ingress   nginx.testdomain.com             80      3d
Find the name of the ingress nginx pod and inspect the nginx configuration file inside it; you can see that the corresponding configuration block has been generated:
$ kubectl -n ingress-nginx exec nginx-ingress-controller-6cdcfd8ff9-t5sxl -- cat /etc/nginx/nginx.conf
...
    ## start server nginx.testdomain.com
    server {
        server_name nginx.testdomain.com ;
        listen 80;
        set $proxy_upstream_name "-";
        location / {
            set $namespace      "default";
            set $ingress_name   "nginx-ingress";
            set $service_name   "nginx";
            set $service_port   "80";
            set $location_path  "/";
            ........
    ## end server nginx.testdomain.com
...
Find a Windows machine that is not part of the cluster (a Mac works too; the point is a machine with a GUI that is outside the cluster), add a hosts-file entry mapping nginx.testdomain.com to the IP of the node serving it (i.e. a node where the ingress controller runs), then open nginx.testdomain.com in a browser: the nginx inside the cluster is now reachable from outside.
Note: although the Ingress Controller references a Service, and a plain-nginx mental model would suggest the forwarding path is client–nginx–svc–pod, the actual path is client–nginx–pod. Because this nginx has been heavily customized, you cannot reason about it as plain nginx: it load-balances directly to the Service's endpoints.
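If you would rather have the controller forward to the Service's ClusterIP (and let kube-proxy do the balancing) instead of going straight to the endpoints, ingress-nginx has an annotation for that. A hedged sketch, applied to the nginx Ingress from the example above:

# Assumption: optional behaviour switch — by default the controller proxies straight to pod endpoints;
# with service-upstream it uses the Service ClusterIP as the single upstream instead.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
  - host: nginx.testdomain.com
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80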
Also, older versions of ingress nginx take the arg --default-backend-service=$(POD_NAMESPACE)/default-http-backend, which points at a Service named default-http-backend in the same namespace as ingress nginx and uses it as the default page for unmatched requests. Back then you would typically create a pod serving a 404 page plus the corresponding Service; if ingress nginx could not find that Service at startup, it would fail to start. In newer versions it is no longer required, and the controller seems to ship with a built-in 404 page.
Here is default-http-backend.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissable as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
An Ingress can also route on multiple paths, as shown below:
spec:
  rules:
  - host: xxxx.xxxx.xxx
    http:
      paths:
      - backend:
          serviceName: service-index
          servicePort: 80
        path: /
      - backend:
          serviceName: service-test-api
          servicePort: 80
        path: /api/
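When a backend like service-test-api does not actually serve its content under the /api/ prefix, the path usually has to be rewritten before proxying; ingress-nginx does this with the rewrite-target annotation. A hedged sketch — the host and service names are placeholders, and the capture-group form of rewrite-target assumes a controller of version 0.22 or newer:

# Assumption: placeholder names; capture-group rewrite-target syntax requires ingress-nginx >= 0.22.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rewrite-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2   # strip the /api prefix before proxying
spec:
  rules:
  - host: xxxx.xxxx.xxx
    http:
      paths:
      - path: /api(/|$)(.*)
        backend:
          serviceName: service-test-api
          servicePort: 80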
We can see these two lines among the ingress nginx args:
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
From the flag names and values you can guess that to proxy layer 4 (tcp/udp), you fill in the data of two ConfigMaps in the same namespace, named tcp-services and udp-services. As a layer-4 example, let's create a mysql pod and proxy port 3306 outside the cluster, which means writing the tcp-services ConfigMap:
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  3306: "default/mysql:3306"
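The udp-services ConfigMap works the same way. A hedged sketch that would expose the cluster DNS (kube-system/kube-dns, UDP port 53) through the controller — the service name here is just the common default and an assumption about your cluster:

# Assumption: kube-system/kube-dns is the cluster DNS Service in this cluster; purely illustrative.
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  53: "kube-system/kube-dns:53"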
For layer 4 that is all there is to it: fill in the data of these two ConfigMaps in the form out_port: namespace/svc_name:port. To add some nginx-level settings to an individual ingress, look up the official annotation keys and values (the same idea applies to traefik); see the sketch below.
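For instance, a hedged sketch that adds a couple of commonly used ingress-nginx annotations to the earlier nginx Ingress — the particular keys and values are just examples, not recommendations:

# Assumption: example values only; see the official annotation reference for the full list.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"      # raise the allowed request body size
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"   # proxy_read_timeout, in seconds
spec:
  rules:
  - host: nginx.testdomain.com
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80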
Now let's discuss high availability of the Ingress Controller.
The path from the Ingress Controller into the cluster is already load-balanced; what we really care about is how the leg from the outside world to the Ingress Controller stays highly available once it is deployed.
In the example above I used externalIPs on the service, but every time a new layer-4 port is proxied a new port has to be exposed — does a human really have to step in each time to add it?
Traffic can get from the entry point to the Ingress Controller pods in the following ways:
Service of type LoadBalancer with hand-written externalIPs: pretty half-baked; I'll write a separate article about it later.
Service of type LoadBalancer backed by a cloud provider: only cloud providers can allocate a public IP for load balancing. Every service exposed through a LoadBalancer gets its own IP address, but it costs money, and a self-built cluster cannot use it.
No Service; the pod uses hostPort directly: efficiency is on par with hostNetwork. This is fine as long as you don't proxy layer-4 ports; if you do, you have to edit the pod template (triggering a rolling update) so that every layer-4 port nginx binds gets mapped onto the host.
NodePort: the port is not a standard web port (although you can change the NodePort range to include web ports), and when incoming traffic is balanced across NodePorts, a request may land on a node that has no Ingress Controller pod; that node's kube-proxy then forwards it to an Ingress Controller pod elsewhere — one extra hop.
hostNetwork with no Service: the most efficient option, and layer-4 ports can be proxied without touching the pod template. The only thing to watch is that under hostNetwork the pod inherits the host's network settings, i.e. it uses the host's DNS, so requests to svc names go out to the host's public upstream DNS servers instead of the cluster DNS server; setting dnsPolicy: ClusterFirstWithHostNet on the pod fixes this (see the snippet right after this list).
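For reference, a hedged excerpt of the controller pod spec under those assumptions (hostNetwork plus cluster-first DNS); apart from the dnsPolicy line, the fields mirror the deployment yaml shown earlier:

# Excerpt only: the Deployment pod template fields relevant to the hostNetwork discussion.
# Assumption: dnsPolicy added on top of the earlier yaml; everything else is unchanged.
spec:
  template:
    spec:
      hostNetwork: true                    # nginx binds 80/443 (and any L4 ports) on the node itself
      dnsPolicy: ClusterFirstWithHostNet   # still resolve svc names via the cluster DNS
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
      - name: nginx-ingress-controller
        image: hejianlai/nginx-ingress-controller:0.23.0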
As for how to deploy the controller pods, the options don't differ much — pick whichever you like:
DaemonSet + nodeSelector (a minimal sketch follows after this list)
Deployment with a replicas count + nodeSelector + pod anti-affinity
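A hedged sketch of the first option, assuming the nodes meant to carry the controller have been labelled beforehand; the edge=true label and the reuse of the image, args, and ServiceAccount from the earlier yaml are assumptions:

# Assumption: edge nodes were labelled beforehand, e.g. kubectl label node node1 edge=true;
# image, args, and ServiceAccount are reused from the deployment yaml earlier in this article.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      nodeSelector:
        edge: "true"                       # only run on the labelled edge nodes
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
      - name: nginx-ingress-controller
        image: hejianlai/nginx-ingress-controller:0.23.0
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --annotations-prefix=nginx.ingress.kubernetes.io
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace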
Either way, you can have a VIP float across the hosts that run a live controller; in the cloud, use an SLB for the load balancing instead of a VIP.
Finally, pointing domain names at it. If you're deploying on an internal network or in an office and there is an internal DNS server, resolve all the Ingress domains to the IPs of the hosts running the ingress controller; otherwise everyone who wants access has to edit /etc/hosts to resolve the domains, which is a real pain. If there is no DNS server you can run an external-dns whose upstream is a public DNS server and point the office machines' DNS at it. In the cloud, simply resolve the domains to the corresponding IP.
Traefik works much like ingress nginx, except that it is implemented in Go.
In some older versions of ingress nginx, the log keeps complaining that the ingress-nginx Service cannot be found; left alone, the log spam can drive the machine load way up. Creating a Service with that name fixes it — for example, a selector-less Service with a null clusterIP is enough. If you insist on a Service with ports, refer to the following:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
    - name: metrics
      port: 10254
      targetPort: 10254
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
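And for reference, a hedged sketch of the minimal variant mentioned above — a selector-less, headless Service that exists only so the controller's lookup of ingress-nginx succeeds (the single port entry is there merely because a Service has to declare at least one port):

# Assumption: placeholder Service only — no selector, clusterIP: None, nothing is actually routed through it.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  clusterIP: None
  ports:
    - name: http
      port: 80
      protocol: TCP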