[toc]
The Ingress object is essentially an abstraction over "reverse proxying". Put simply, it is a cluster-wide load balancer that routes a request URL to a backend Service.
With this abstraction in place, Kubernetes itself does not need to care about the implementation details; in practice you just pick a concrete Ingress Controller and deploy it. The commonly used reverse proxy projects (Nginx, HAProxy, Envoy, Traefik) all have Ingress Controller implementations maintained for Kubernetes. An Ingress object reads much like a description of an Nginx configuration file, and its forwarding rules are the IngressRules. Users can choose an Ingress Controller to match their needs; for example, an application that is very sensitive to proxy interruptions might use a controller such as Traefik.
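To make the analogy concrete, here is a minimal sketch of an Ingress object (the names example-ingress, app-svc, and example.com are placeholders; the extensions/v1beta1 API version matches the one used later in this article):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress        # placeholder name
spec:
  rules:                       # each entry here is an IngressRule
  - host: example.com          # match requests carrying this Host header
    http:
      paths:
      - path: /app             # ...whose URL path starts with /app
        backend:
          serviceName: app-svc # forward them to this Service (placeholder)
          servicePort: 80
```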
Ingress works at layer 7 while Service works at layer 4, so whenever you want to do HTTPS-related work in Kubernetes, such as configuring TLS for an application, it has to go through Ingress.
A Service of type LoadBalancer will also create a load balancer on a public cloud, but one per Service, which is wasteful.
The following demonstrates deploying an Nginx Ingress Controller and using it to proxy services we create ourselves (coffee and tea).
Nginx Ingress Controller documentation: https://kubernetes.github.io/ingress-nginx. The other related YAML files on GitHub that this walkthrough relies on: https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example
```sh
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
```
Its content is as follows:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
```
1. The YAML above defines a Pod running the nginx-ingress-controller image. The Pod's startup command takes the Pod's own namespace as a parameter (injected through the POD_NAMESPACE Downward API env var above), and the Pod itself is a controller that watches Ingress objects and the backend Services they proxy.
2. When a user creates a new Ingress object, nginx-ingress-controller generates a corresponding Nginx configuration file from its content and starts an Nginx server with it. If the Ingress object is later updated, nginx-ingress-controller updates this configuration file. Note: if only the proxied Service objects are updated, the Nginx service managed by nginx-ingress-controller does not need to be reloaded, because nginx-ingress-controller implements dynamic Nginx Upstream configuration through an Nginx Lua solution (a way to inspect the generated configuration is sketched after this list).
3. nginx-ingress-controller lets you customize the Nginx configuration file through a Kubernetes ConfigMap object. The ConfigMap's name is passed to nginx-ingress-controller as a parameter (the --configmap argument above), and fields added to the ConfigMap are rendered into the generated Nginx configuration, as the sketch below shows.
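For example, a sketch that raises the allowed request body size (proxy-body-size is a documented ingress-nginx ConfigMap key; the 8m value is just illustrative):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration   # must match the --configmap argument above
  namespace: ingress-nginx
data:
  proxy-body-size: "8m"       # rendered into the generated nginx.conf
```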
In other words, an Nginx Ingress Controller is an Nginx load balancer that updates itself automatically in response to changes in Ingress objects and in the backend Services they proxy.
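You can inspect the configuration file mentioned in point 2 directly inside the controller Pod; a sketch (the Pod name below is a placeholder; substitute the real one from the first command):

```sh
kubectl get pods -n ingress-nginx
# Pod name is a placeholder -- use the one listed above
kubectl exec -n ingress-nginx nginx-ingress-controller-xxxxx -- \
  cat /etc/nginx/nginx.conf
```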
To let users reach this Nginx, we create a Service that exposes it, using the NodePort approach:
```sh
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
```
Its manifest is as follows:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
```
這個service的工做,就是將帶有ingress-nginx標籤的Pod的80和443端口暴露出去
Let's take a look at this Service:
```sh
~ kubectl get svc -n ingress-nginx
NAME            TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.43.33.55   <none>        80:30164/TCP,443:32229/TCP   107m
```
The Service can now be reached through port 30164 on any Node; here we use 172.17.0.215:
```sh
curl 172.17.0.215:30164
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.2</center>
</body>
</html>
```
The 404 is expected, since no Ingress rules exist yet. Next we create the Deployments and Services to be proxied, cafe.yaml:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coffee
spec:
  replicas: 2
  selector:
    matchLabels:
      app: coffee
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
        - name: coffee
          image: nginxdemos/hello:plain-text
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: coffee
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tea
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tea
  template:
    metadata:
      labels:
        app: tea
    spec:
      containers:
        - name: tea
          image: nginxdemos/hello:plain-text
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: tea-svc
  labels:
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: tea
```
Run it:
```sh
➜ ingress-test kubectl apply -f cafe.yaml
deployment.extensions/coffee created
service/coffee-svc created
deployment.extensions/tea created
service/tea-svc created
➜ ingress-test kubectl get deployment
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
coffee   2/2     2            2           66s
tea      3/3     3            3           66s
➜ ingress-test kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
coffee-svc   ClusterIP   10.43.216.45   <none>        80/TCP    70s
kubernetes   ClusterIP   10.43.0.1      <none>        443/TCP   69d
tea-svc      ClusterIP   10.43.208.95   <none>        80/TCP    69s
```
As you can see, the corresponding tea-svc and coffee-svc have been created.
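To double-check that each Service has actually been wired to its Pods, you can also list their Endpoints:

```sh
kubectl get endpoints coffee-svc tea-svc
```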
Next, a Secret holding the TLS certificate and key for the demo domain, cafe-secret.yaml:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cafe-secret
type: Opaque
data:
  tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURMakNDQWhZQ0NRREFPRjl0THNhWFdqQU5CZ2txaGtpRzl3MEJBUXNGQURCYU1Rc3dDUVlEVlFRR0V3SlYKVXpFTE1Ba0dBMVVFQ0F3Q1EwRXhJVEFmQmdOVkJBb01HRWx1ZEdWeWJtVjBJRmRwWkdkcGRITWdVSFI1SUV4MApaREViTUJrR0ExVUVBd3dTWTJGbVpTNWxlR0Z0Y0d4bExtTnZiU0FnTUI0WERURTRNRGt4TWpFMk1UVXpOVm9YCkRUSXpNRGt4TVRFMk1UVXpOVm93V0RFTE1Ba0dBMVVFQmhNQ1ZWTXhDekFKQmdOVkJBZ01Ba05CTVNFd0h3WUQKVlFRS0RCaEpiblJsY201bGRDQlhhV1JuYVhSeklGQjBlU0JNZEdReEdUQVhCZ05WQkFNTUVHTmhabVV1WlhoaApiWEJzWlM1amIyMHdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDcDZLbjdzeTgxCnAwanVKL2N5ayt2Q0FtbHNmanRGTTJtdVpOSzBLdGVjcUcyZmpXUWI1NXhRMVlGQTJYT1N3SEFZdlNkd0kyaloKcnVXOHFYWENMMnJiNENaQ0Z4d3BWRUNyY3hkam0zdGVWaVJYVnNZSW1tSkhQUFN5UWdwaW9iczl4N0RsTGM2SQpCQTBaalVPeWwwUHFHOVNKZXhNVjczV0lJYTVyRFZTRjJyNGtTa2JBajREY2o3TFhlRmxWWEgySTVYd1hDcHRDCm42N0pDZzQyZitrOHdnemNSVnA4WFprWldaVmp3cTlSVUtEWG1GQjJZeU4xWEVXZFowZXdSdUtZVUpsc202OTIKc2tPcktRajB2a29QbjQxRUUvK1RhVkVwcUxUUm9VWTNyemc3RGtkemZkQml6Rk8yZHNQTkZ4MkNXMGpYa05MdgpLbzI1Q1pyT2hYQUhBZ01CQUFFd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFLSEZDY3lPalp2b0hzd1VCTWRMClJkSEliMzgzcFdGeW5acS9MdVVvdnNWQTU4QjBDZzdCRWZ5NXZXVlZycTVSSWt2NGxaODFOMjl4MjFkMUpINnIKalNuUXgrRFhDTy9USkVWNWxTQ1VwSUd6RVVZYVVQZ1J5anNNL05VZENKOHVIVmhaSitTNkZBK0NuT0Q5cm4yaQpaQmVQQ0k1ckh3RVh3bm5sOHl3aWozdnZRNXpISXV5QmdsV3IvUXl1aTlmalBwd1dVdlVtNG52NVNNRzl6Q1Y3ClBwdXd2dWF0cWpPMTIwOEJqZkUvY1pISWc4SHc5bXZXOXg5QytJUU1JTURFN2IvZzZPY0s3TEdUTHdsRnh2QTgKN1dqRWVxdW5heUlwaE1oS1JYVmYxTjM0OWVOOThFejM4Zk9USFRQYmRKakZBL1BjQytHeW1lK2lHdDVPUWRGaAp5UkU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBcWVpcCs3TXZOYWRJN2lmM01wUHJ3Z0pwYkg0N1JUTnBybVRTdENyWG5LaHRuNDFrCkcrZWNVTldCUU5semtzQndHTDBuY0NObzJhN2x2S2wxd2k5cTIrQW1RaGNjS1ZSQXEzTVhZNXQ3WGxZa1YxYkcKQ0pwaVJ6ejBza0lLWXFHN1BjZXc1UzNPaUFRTkdZMURzcGRENmh2VWlYc1RGZTkxaUNHdWF3MVVoZHErSkVwRwp3SStBM0kreTEzaFpWVng5aU9WOEZ3cWJRcCt1eVFvT05uL3BQTUlNM0VWYWZGMlpHVm1WWThLdlVWQ2cxNWhRCmRtTWpkVnhGbldkSHNFYmltRkNaYkp1dmRySkRxeWtJOUw1S0Q1K05SQlAvazJsUkthaTAwYUZHTjY4NE93NUgKYzMzUVlzeFR0bmJEelJjZGdsdEkxNURTN3lxTnVRbWF6b1Z3QndJREFRQUJBb0lCQVFDUFNkU1luUXRTUHlxbApGZlZGcFRPc29PWVJoZjhzSStpYkZ4SU91UmF1V2VoaEp4ZG01Uk9ScEF6bUNMeUw1VmhqdEptZTIyM2dMcncyCk45OUVqVUtiL1ZPbVp1RHNCYzZvQ0Y2UU5SNThkejhjbk9SVGV3Y290c0pSMXBuMWhobG5SNUhxSkpCSmFzazEKWkVuVVFmY1hackw5NGxvOUpIM0UrVXFqbzFGRnM4eHhFOHdvUEJxalpzVjdwUlVaZ0MzTGh4bndMU0V4eUZvNApjeGI5U09HNU9tQUpvelN0Rm9RMkdKT2VzOHJKNXFmZHZ5dGdnOXhiTGFRTC94MGtwUTYyQm9GTUJEZHFPZVBXCktmUDV6WjYvMDcvdnBqNDh5QTFRMzJQem9idWJzQkxkM0tjbjMyamZtMUU3cHJ0V2wrSmVPRmlPem5CUUZKYk4KNHFQVlJ6NWhBb0dCQU50V3l4aE5DU0x1NFArWGdLeWNrbGpKNkY1NjY4Zk5qNUN6Z0ZScUowOXpuMFRsc05ybwpGVExaY3hEcW5SM0hQWU00MkpFUmgySi9xREZaeW5SUW8zY2czb2VpdlVkQlZHWTgrRkkxVzBxZHViL0w5K3l1CmVkT1pUUTVYbUdHcDZyNmpleHltY0ppbS9Pc0IzWm5ZT3BPcmxEN1NQbUJ2ek5MazRNRjZneGJYQW9HQkFNWk8KMHA2SGJCbWNQMHRqRlhmY0tFNzdJbUxtMHNBRzR1SG9VeDBlUGovMnFyblRuT0JCTkU0TXZnRHVUSnp5K2NhVQprOFJxbWRIQ2JIelRlNmZ6WXEvOWl0OHNaNzdLVk4xcWtiSWN1YytSVHhBOW5OaDFUanNSbmU3NFowajFGQ0xrCmhIY3FIMHJpN1BZU0tIVEU4RnZGQ3haWWRidUI4NENtWmlodnhicFJBb0dBSWJqcWFNWVBUWXVrbENkYTVTNzkKWVNGSjFKelplMUtqYS8vdER3MXpGY2dWQ0thMzFqQXdjaXowZi9sU1JxM0hTMUdHR21lemhQVlRpcUxmZVpxYwpSMGlLYmhnYk9jVlZrSkozSzB5QXlLd1BUdW14S0haNnpJbVpTMGMwYW0rUlk5WUdxNVQ3WXJ6cHpjZnZwaU9VCmZmZTNSeUZUN2NmQ21mb09oREN0enVrQ2dZQjMwb0xDMVJMRk9ycW40M3ZDUzUxemM1em9ZNDR1QnpzcHd3WU4KVHd2UC9FeFdNZjNWSnJEakJDSCtULzZzeXNlUGJKRUltbHpNK0l3eXRGcEFOZmlJWEV0LzQ4WGY2ME54OGdXTQp1SHl4Wlp4L05LdER3MFY4dlgxUE9ucTJBNWVpS2ErOGpSQVJZS0pMWU5kZkR1d29seHZHNmJaaGtQaS80RXRUCjNZMThzUUtCZ0h0S2JrKzdsTkpWZXN3WEU1Y1VHNkVEVXNEZS8yVWE3ZlhwN0ZjanFCRW9hcDFMU3crNlRYcDAKWmdybUtFOEFSek00NytFSkhVdmlpcS9udXBFMTVnMGtKVzNzeWhwVTl6WkxPN2x0QjBLSWtPOVpSY21Vam84UQpjcExsSE1BcWJMSjhXWUdKQ2toaVd4eWFsNmhZVHlXWTRjVmtDMHh0VGwvaFVFOUllTktvCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
```
Create it:
```sh
➜ ingress-test kubectl apply -f cafe-secret.yaml
secret/cafe-secret created
➜ ingress-test kubectl get secrets
NAME          TYPE     DATA   AGE
cafe-secret   Opaque   2      17s
```
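The certificate and key above are the demo ones from the nginxinc examples repo. If you wanted to generate your own, something like this sketch would do (note that kubectl create secret tls produces a Secret of type kubernetes.io/tls, the standard type for TLS Secrets, while the demo file above uses Opaque with the same tls.crt/tls.key keys):

```sh
# Illustrative: self-signed certificate for the demo host
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout cafe.key -out cafe.crt -subj "/CN=cafe.example.com"
# Create the equivalent Secret directly from the files
kubectl create secret tls cafe-secret --cert=cafe.crt --key=cafe.key
```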
Finally, the Ingress itself, cafe-ingress.yaml:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  tls:
    - hosts:
        - cafe.example.com
      secretName: cafe-secret
  rules:
    - host: cafe.example.com
      http:
        paths:
          - path: /tea
            backend:
              serviceName: tea-svc
              servicePort: 80
          - path: /coffee
            backend:
              serviceName: coffee-svc
              servicePort: 80
```
As you can see, this Ingress declares the two paths /coffee and /tea, pointing at coffee-svc and tea-svc respectively. Deploy it:
```sh
➜ ingress-test kubectl apply -f cafe-ingress.yaml
ingress.extensions/cafe-ingress created
➜ ingress-test kubectl get ingress
NAME           HOSTS              ADDRESS       PORTS     AGE
cafe-ingress   cafe.example.com   10.43.33.55   80, 443   7s
➜ ingress-test kubectl describe ingress cafe-ingress
Name:             cafe-ingress
Namespace:        default
Address:          10.43.33.55
Default backend:  default-http-backend:80 (<none>)
TLS:
  cafe-secret terminates cafe.example.com
Rules:
  Host              Path  Backends
  ----              ----  --------
  cafe.example.com
                    /tea      tea-svc:80 (10.42.0.41:80,10.42.0.42:80,10.42.2.185:80)
                    /coffee   coffee-svc:80 (10.42.0.40:80,10.42.2.184:80)
Annotations:
  field.cattle.io/publicEndpoints:
    [{"addresses":["10.43.33.55"],"port":443,"protocol":"HTTPS","serviceName":"default:tea-svc","ingressName":"default:cafe-ingress","hostname":"cafe.example.com","path":"/tea","allNodes":true},{"addresses":["10.43.33.55"],"port":443,"protocol":"HTTPS","serviceName":"default:coffee-svc","ingressName":"default:cafe-ingress","hostname":"cafe.example.com","path":"/coffee","allNodes":true}]
  kubectl.kubernetes.io/last-applied-configuration:
    {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"cafe-ingress","namespace":"default"},"spec":{"rules":[{"host":"cafe.example.com","http":{"paths":[{"backend":{"serviceName":"tea-svc","servicePort":80},"path":"/tea"},{"backend":{"serviceName":"coffee-svc","servicePort":80},"path":"/coffee"}]}}],"tls":[{"hosts":["cafe.example.com"],"secretName":"cafe-secret"}]}}
Events:
  Type    Reason  Age                    From                      Message
  ----    ------  ----                   ----                      -------
  Normal  CREATE  2m13s                  nginx-ingress-controller  Ingress default/cafe-ingress
  Normal  UPDATE  2m13s (x2 over 2m13s)  nginx-ingress-controller  Ingress default/cafe-ingress
```
As the output shows, the core part of this Ingress is the Rules field:
```
Rules:
  Host              Path  Backends
  ----              ----  --------
  cafe.example.com
                    /tea      tea-svc:80 (10.42.0.41:80,10.42.0.42:80,10.42.2.185:80)
                    /coffee   coffee-svc:80 (10.42.0.40:80,10.42.2.184:80)
```
The defined host is cafe.example.com, with two forwarding rules under it, /tea and /coffee.
Next, we can reach the applications we deployed through this Ingress's address and port, at https://cafe.example.com:443/coffee and https://cafe.example.com:443/tea.
First we set up environment variables for the entry point, the Node's IP and the HTTPS NodePort:
```sh
IC_IP=172.17.0.215
IC_HTTPS_PORT=32229
```
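The 32229 value comes from the kubectl get svc output we saw earlier. If you would rather not hard-code it, a jsonpath query like this sketch should retrieve it:

```sh
IC_HTTPS_PORT=$(kubectl get svc ingress-nginx -n ingress-nginx \
  -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
```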
```sh
curl --resolve cafe.example.com:$IC_HTTPS_PORT:$IC_IP https://cafe.example.com:$IC_HTTPS_PORT/coffee --insecure
Server address: 10.42.0.40:80
Server name: coffee-bbd45c6-jp6bz
Date: 23/Oct/2019:06:35:35 +0000
URI: /coffee
Request ID: df63447fcacfc2ff0acd1f8a9b30d334
```
```sh
curl --resolve cafe.example.com:$IC_HTTPS_PORT:$IC_IP https://cafe.example.com:$IC_HTTPS_PORT/tea --insecure
Server address: 10.42.2.185:80
Server name: tea-5857f7786b-qkpn7
Date: 23/Oct/2019:06:37:17 +0000
URI: /tea
Request ID: a5da5ff4d869c3978aed450738b44706
```
```sh
curl --resolve cafe.example.com:$IC_HTTPS_PORT:$IC_IP https://cafe.example.com:$IC_HTTPS_PORT/cqh --insecure
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.2</center>
</body>
</html>
```
As you can see, when a request matches no IngressRule, an Nginx 404 page is returned.
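If you want unmatched requests to land on your own error page instead, nginx-ingress-controller supports a --default-backend-service flag. A sketch of how the Deployment's container args could be extended (custom-default-backend is a hypothetical Service you would have to create yourself):

```yaml
args:
  - /nginx-ingress-controller
  # hypothetical: route unmatched requests to your own Service
  - --default-backend-service=$(POD_NAMESPACE)/custom-default-backend
  # ...the remaining flags from mandatory.yaml stay unchanged
```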
At this point, we have successfully deployed an Ingress controller and used an Ingress to proxy our services.
Source: https://www.cnblogs.com/chenqionghe/p/11726231.html