Highly Available Kubernetes Cluster - 12. Deploying kubernetes-ingress

References:

  1. GitHub: https://github.com/kubernetes/ingress-nginx
  2. Kubernetes Ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/
  3. Ingress: https://mritd.me/2017/03/04/how-to-use-nginx-ingress/
  4. Configuration example: http://www.javashuo.com/article/p-edprgrre-gr.html
  5. GitHub examples: https://github.com/kubernetes/ingress-nginx/tree/master/deploy
  6. Traefik example: https://github.com/containous/traefik

An Ingress is a collection of rules mapping externally exposed services to Services inside the cluster: it allows inbound requests to be forwarded to in-cluster Services.

An Ingress can give a Service an externally reachable url, load-balance traffic, terminate ssl, and provide name-based virtual hosting; users reach the Service through that url.

The Ingress controller handles the traffic for all Ingress rules; it is usually a load balancer.

I. Environment

1. Base environment

Component         Version   Remark
kubernetes        v1.9.2
Ingress-nginx     0.11.0
default-backend   1.4

2. How it works

  1. An ingress policy is essentially a set of forwarding rules;
  2. Based on the ingress policies, the ingress controller forwards client requests to the endpoints behind the matching Service, i.e. the Pods. It thereby provides a single entry point for all backend Services, distributes traffic according to different http urls, and allows flexible layer-7 load-balancing policies; it is usually implemented with nginx.
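The two points above can be condensed into a minimal Ingress manifest. A sketch, in which the host demo.example.com and the Service my-svc are placeholders, not part of this deployment:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  rules:
  # requests whose Host header matches ...
  - host: demo.example.com
    http:
      paths:
      - backend:
          # ... are forwarded to the endpoints (Pods) behind this Service
          serviceName: my-svc
          servicePort: 80
```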

II. Deploying ingress-nginx

1. Prepare the images

When kubernetes deploys a Pod, pulling the image can time out; it is advisable to pull the required images to all relevant nodes beforehand (as done in this lab), or to run a local registry.

  1. The base environment already uses a registry mirror; see: http://www.cnblogs.com/netonline/p/7420188.html
  2. Images that must be pulled from gcr.io were built on a personal Docker Hub account with the "Create Auto-Build GitHub" feature (Docker Hub builds the image from a Dockerfile hosted on GitHub) and can be pulled directly.
# the default backend of the ingress controller, which returns a proper 404 response when the requested url does not exist
[root@kubenode1 ~]# docker pull netonline/defaultbackend:1.4

# ingress-nginx
[root@kubenode1 ~]# docker pull netonline/nginx-ingress-controller:0.11.0

2. Download the ingress-nginx yaml templates

# the yaml files can be downloaded on one or more master nodes and edited there
[root@kubenode1 ~]# mkdir -p /usr/local/src/yaml/ingress
[root@kubenode1 ~]# cd /usr/local/src/yaml/ingress/

# download link: https://github.com/kubernetes/ingress-nginx/tree/master/deploy
# namespace
[root@kubenode1 ingress]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml

# configmap, not used in this walkthrough
[root@kubenode1 ingress]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml

# tcp-service-configmap
[root@kubenode1 ingress]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml

# udp-service-configmap, not used in this walkthrough
[root@kubenode1 ingress]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml

# rbac
[root@kubenode1 ingress]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml

# default-backend
[root@kubenode1 ingress]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml

# with-rbac
[root@kubenode1 ingress]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml

# without-rbac, not used in this walkthrough
[root@kubenode1 ingress]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/without-rbac.yaml

# patch, not used in this walkthrough
[root@kubenode1 ingress]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/publish-service-patch.yaml

3. namespace.yaml

# the ingress-nginx GitHub docs keep the Namespace, ConfigMap, ServiceAccount, Deployment, default-backend, xxx-services-configmap, etc. in separate yaml files; the sections below modify each file in turn, with the changed lines called out in comments;
# writing Pod yaml files is not covered here; see other references such as "Kubernetes權威指南" (The Definitive Guide to Kubernetes);
# the modified ingress-nginx yaml files are available at: https://github.com/Netonline2016/kubernetes/tree/master/addons/ingress

# namespace.yaml needs no changes; it creates a dedicated namespace
[root@kubenode1 ingress]# cat namespace.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx

4. tcp-services-configmap.yaml

# tcp-services-configmap.yaml needs no changes
[root@kubenode1 ingress]# cat tcp-services-configmap.yaml 
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx

5. rbac.yaml

# the ingress controller must watch the apiserver to obtain the ingress definitions; access is authorized through RBAC;
# rbac.yaml needs no changes
[root@kubenode1 ingress]# cat rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
        - events
    verbs:
        - create
        - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
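Once rbac.yaml is applied, the granted permissions can be spot-checked with kubectl auth can-i. A cluster-side sketch, not part of the original walkthrough:

```shell
# should print "yes": the ClusterRole allows listing ingresses
kubectl auth can-i list ingresses.extensions \
  --as=system:serviceaccount:ingress-nginx:nginx-ingress-serviceaccount

# should print "no": neither role grants delete on secrets
kubectl auth can-i delete secrets -n ingress-nginx \
  --as=system:serviceaccount:ingress-nginx:nginx-ingress-serviceaccount
```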

6. default-backend.yaml 

# provides a default 404 error page and a /healthz health-check endpoint;
# contains one Deployment and one Service;
# only the image name the Pod starts with needs to be changed
[root@kubenode1 ingress]# vim default-backend.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: netonline/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---

apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend 

7. with-rbac.yaml

The ingress controller runs as a Pod; it watches the apiserver's /ingress interface and the backend services behind it, and whenever a Service changes it updates the forwarding rules automatically.

The basic logic is:

  1. watch the apiserver and fetch all ingress definitions;
  2. generate the nginx configuration file /etc/nginx/nginx.conf from those definitions;
  3. run nginx -s reload to reload nginx.conf.
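Steps 2 and 3 can be observed on a live cluster by dumping the rendered config from the controller Pod. A sketch; substitute a real Pod name from `kubectl get pods -n ingress-nginx`:

```shell
# show the server/upstream blocks nginx generated from the ingress definitions
kubectl -n ingress-nginx exec <nginx-ingress-controller-pod> -- \
  cat /etc/nginx/nginx.conf | grep -E 'server_name|upstream'
```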
# the difference between without-rbac.yaml and with-rbac.yaml is that the former does not use the ServiceAccount nginx-ingress-serviceaccount defined in rbac.yaml, and access to the apiserver here requires authentication;
# the GitHub doc uses kind: Deployment with replicas: 1, i.e. a single ingress-nginx controller Pod on one node: external traffic reaches that node, which then load-balances to the backend services. However, the Pod can be rescheduled on failure and the node ip would change; a DaemonSet creates an ingress-nginx controller Pod on multiple nodes (Pod affinity can pin the Pods to designated nodes), so clients can reach any of those nodes; alternatively, put a load balancer in front and reach the 3 nodes through a vip;
# hostNetwork: true exposes the controller's service ports directly on the host;
# services not used in this walkthrough are left disabled, and the matching ingress-controller arguments are commented out
[root@kubenode1 ingress]# vim with-rbac.yaml
apiVersion: extensions/v1beta1
# kind changed from Deployment to DaemonSet
# kind: Deployment
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx 
spec:
  # 已變動kind,註釋副本數 # replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      # expose the ports on the host
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller
          # 變動調用image名 image: netonline/nginx-ingress-controller:0.11.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            # the ConfigMap is not used in this walkthrough
            # - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            # the udp-services ConfigMap is not used in this walkthrough
            # - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
          - name: http
            containerPort: 80
            # exposed host port
            hostPort: 80
          - name: https
            containerPort: 443
            # exposed host port
            hostPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1

8. Start ingress-nginx

[root@kubenode1 ingress]# kubectl create -f namespace.yaml 
[root@kubenode1 ingress]# kubectl create -f tcp-services-configmap.yaml 
[root@kubenode1 ingress]# kubectl create -f rbac.yaml 
[root@kubenode1 ingress]# kubectl create -f default-backend.yaml 
[root@kubenode1 ingress]# kubectl create -f with-rbac.yaml

9. Configure iptables

# apply on all 3 master nodes; with-rbac.yaml/without-rbac.yaml enables "hostNetwork" and opens tcp ports 80, 443 and 18080 (/nginx-status);
# since the docker service is already running here, the ports are opened by appending rules directly to the INPUT chain;
# it is advisable to open these ports in the /etc/sysconfig/iptables config file instead;
# be careful with "service iptables save": it writes every currently loaded rule (including docker's) into the config file
[root@kubenode1 ~]# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
[root@kubenode1 ~]# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
[root@kubenode1 ~]# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 18080 -j ACCEPT
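Following the advice above, a sketch of persisting the same three rules into the iptables config file (CentOS/RHEL path). The sketch works on a copy under /tmp so it is safe to try, falling back to a minimal file when /etc/sysconfig/iptables is absent; on the real nodes, edit the file as root and restart the iptables service.

```shell
# work on a copy of the config file (minimal fallback when it is missing)
cp /etc/sysconfig/iptables /tmp/iptables.new 2>/dev/null \
  || printf '*filter\n:INPUT ACCEPT [0:0]\nCOMMIT\n' > /tmp/iptables.new

# insert one ACCEPT rule per port just before the COMMIT line
for port in 80 443 18080; do
  sed -i "/^COMMIT/i -A INPUT -m state --state NEW -m tcp -p tcp --dport ${port} -j ACCEPT" /tmp/iptables.new
done

# show what was added
grep -- '--dport' /tmp/iptables.new
```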

10. Verify

# 1 default-http-backend Pod and 3 ingress-nginx Pods are running
[root@kubenode1 ~]# kubectl get pods -n ingress-nginx -o wide

# the corresponding ports on the node are now in use
[root@kubenode1 ~]# netstat -tunlp | grep nginx

# accessing port 80 on any node returns the 404 page: the ingress-nginx controller and default-http-backend are working
[root@kubenode1 ~]# curl http://172.30.200.21
[root@kubenode1 ~]# curl http://172.30.200.22
[root@kubenode1 ~]# curl http://172.30.200.23

III. Deploying an ingress

1. Deploy the backend service

# pull the backend service image on all relevant nodes first to avoid pull timeouts;
# nginx serves as the backend service
[root@kubenode1 ~]# docker pull nginx 

# deploy the backend service; the backend Pod (delivered as a Deployment: a directly created Pod failed to associate with the Service, "kubectl get endpoints" showed an empty ENDPOINTS column for the Service, cause undetermined) and the Service are kept in one yaml file
[root@kubenode1 ~]# cd /usr/local/src/yaml/ingress/
[root@kubenode1 ingress]# touch nginx-svc.yaml
[root@kubenode1 ingress]# vim nginx-svc.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-01
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx-01
    spec:
      containers:
      - name: nginx-01
        image: nginx:latest
        ports:
        - containerPort: 80 
---
apiVersion: v1
kind: Service
metadata:
  # globally unique name of the Service
  name: nginx-svc
spec:
  ports:
  # port the Service listens on
  - port: 80
    # port exposed by the backend Pod
    targetPort: 80
    # port name (optional)
    name: http
  # the Service selects the Pods carrying this label
  selector:
    name: nginx-01

# start the backend service
[root@kubenode1 ingress]# kubectl create -f nginx-svc.yaml

2. Deploy the ingress

[root@kubenode1 ingress]# touch nginx-svc-ingress.yaml 
[root@kubenode1 ingress]# vim nginx-svc-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-svc-ingress
spec:
  rules:
  # host domain name; bind it to a node ip locally
  - host: nginx-svc.me
    http:
      # if paths contains a concrete path such as /demo, the backend must actually serve that path, i.e. nginx must have a /demo path here
      paths:
      - backend:
         # backend service name
         serviceName: nginx-svc
         # port the backend Service listens on, as opposed to the port of the container actually serving
         servicePort: 80

# create the ingress
[root@kubenode1 ingress]# kubectl create -f nginx-svc-ingress.yaml 

3. Verify

# the Service is associated with the backend Pod;
# an ingress policy is defined for host "nginx-svc.me";
# in theory, once the ingress's "ADDRESS" column shows the ip of the ingress-nginx-controller Pod, nginx has been configured with the Service's endpoints and the ingress works; if the column is empty, troubleshooting is needed. There may be a bug here, though (seen on both 1.8.x and 1.9.x): the ingress still works while the column stays empty
[root@kubenode1 ingress]# cd ~
[root@kubenode1 ~]# kubectl get endpoints nginx-svc -o wide
[root@kubenode1 ~]# kubectl get ingress -o wide 

Access the host from a local browser (bind the domain name beforehand): http://nginx-svc.me

# alternatively, use --resolve to simulate dns resolution, with the domain name as the target
[root@kubenode1 ~]# curl --resolve nginx-svc.me:80:172.30.200.21 http://nginx-svc.me
# or use -H to set the Host header, with the ip address as the target
[root@kubenode1 ~]# curl -H 'Host:nginx-svc.me' http://172.30.200.22

4. Ingress policy patterns

1) Forwarding to a single backend service

# every request is forwarded to the single backend Service; no rule needs to be defined in this case
# note the spec.backend block
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  backend:
    serviceName: nginx-svc
    servicePort: 80

2) Same domain, different url paths forwarded to different services

# two different paths under the same domain map to different services;
# note that if paths contains concrete paths such as /web or /api, the backends must actually serve them, i.e. the nginx servers here must have /web and /api paths
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - host: nginx-svc.me
    http:
      paths:
      - path: /web
        backend:
          serviceName: nginx-svc-web
          servicePort: 80
      - path: /api
        backend:
          serviceName: nginx-svc-api
          servicePort: 8081

3) Different domains forwarded to different services

# different domains map to different services
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - host: nginx-svc.me
    http:
      paths:
      - backend:
          serviceName: nginx-svc
          servicePort: 80
  - host: apache-svc.me
    http:
      paths:
      - backend:
          serviceName: apache-svc
          servicePort: 80

4) Forwarding without a domain name

# with a domain-less ingress rule, http is disabled by default and https is enforced;
# a client request such as curl http://172.30.200.21/demo then gets a 301 redirect, while https works: curl -k https://172.30.200.21/demo ;
# enforced https can be switched off with the annotation "ingress.kubernetes.io/ssl-redirect=false" in the ingress metadata, as in the commented lines below
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  # annotations:
  #   ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /demo
        backend:
          serviceName: nginx-svc-demo
          servicePort: 8080

IV. TLS for ingress

TLS certificates for the domains in an ingress are set up as follows:

  1. create a self-signed private key and ssl certificate;
  2. store the certificate in a Secret object in the kubernetes cluster;
  3. reference the Secret object in the ingress.

The first two steps differ slightly depending on whether the site has one domain or several; the third step is the same. The multi-domain case is shown below:

1. Generate the ca certificate

[root@kubenode1 ~]# mkdir -p /etc/kubernetes/ingress
[root@kubenode1 ~]# cd /etc/kubernetes/ingress/
[root@kubenode1 ingress]# openssl genrsa -out ca.key 2048
[root@kubenode1 ingress]# openssl req -x509 -new -nodes -key ca.key -days 3560 -out ca.crt -subj "/CN=ingress-ca"

2. Modify openssl.cnf

# for multiple domains, generating the ssl certificate requires an extra x509v3 configuration file;
# the domains are listed in the [alt_names] section
[root@kubenode1 ingress]# cp /etc/pki/tls/openssl.cnf .
[root@kubenode1 ingress]# vim openssl.cnf

[ req ]
# line 126: uncomment the following
req_extensions = v3_req # The extensions to add to a certificate request

[ v3_req ]
# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
# added after line 224
subjectAltName = @alt_names

[alt_names]
DNS.1 = nginx01-svc-tls.me
DNS.2 = nginx02-svc-tls.me

3. Generate the ingress ssl certificate

# generate the ingress ssl certificate from the modified openssl.cnf and the ca certificate
# generate the private key
[root@kubenode1 ingress]# openssl genrsa -out ingress.key 2048

# generate the csr file
[root@kubenode1 ingress]# openssl req -new -key ingress.key -out ingress.csr -subj "/CN=nginx-svc-tls" -config openssl.cnf

# generate the certificate
[root@kubenode1 ingress]# openssl x509 -req -in ingress.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out ingress.crt -days 3650 -extensions v3_req -extfile openssl.cnf
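Before wiring the certificate into a Secret, it is worth confirming that both SANs really made it in. A self-contained sketch: a throwaway certificate is generated with -addext (OpenSSL 1.1.1+) so the check can be tried anywhere; for the flow above, run the final command against ingress.crt instead.

```shell
# throwaway self-signed cert carrying the same two SANs
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /tmp/san-demo.key -out /tmp/san-demo.crt \
  -subj "/CN=nginx-svc-tls" \
  -addext "subjectAltName=DNS:nginx01-svc-tls.me,DNS:nginx02-svc-tls.me"

# print the SAN section; both domains should be listed
openssl x509 -in /tmp/san-demo.crt -noout -text | grep -A1 'Subject Alternative Name'
```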

4. Create the Secret object

A Secret object holds sensitive data such as passwords, OAuth tokens and ssh keys. Keeping such data in a Secret object is safer, and easier to use and distribute, than putting it directly in a Pod or Docker image.

Once created, a Secret object can be consumed in 3 ways:

  1. automatically, by assigning the Pod a Service Account when it is created;
  2. by mounting the Secret into the Pod;
  3. when pulling Docker images, by referencing it in the Pod's spec.imagePullSecrets.
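As an illustration of the second consumption style, a minimal sketch of mounting a Secret into a Pod (the Pod name secret-demo is a placeholder; secret-ingress is the Secret created in this section):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
  - name: app
    image: nginx:latest
    volumeMounts:
    # each key of the Secret shows up as a file under mountPath
    - name: certs
      mountPath: /etc/tls
      readOnly: true
  volumes:
  - name: certs
    secret:
      secretName: secret-ingress
```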
# edit secret-ingress.yaml and copy the contents of ingress.key and ingress.crt into the yaml file;
# note 1: every value under the Secret's "data" field must be BASE64-encoded;
# note 2: when copying the key and crt contents, remove the line breaks so each value is a single line
[root@kubenode1 ingress]# cd /usr/local/src/yaml/ingress/
[root@kubenode1 ingress]# touch secret-ingress.yaml
[root@kubenode1 ingress]# vim secret-ingress.yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-ingress
# since 1.8.x, kubernetes.io/tls is used instead of Opaque
type: kubernetes.io/tls
data:
  tls.crt: MIIC/TCCAeWgAwIBAgIJALGRacBg2fWIMA0GCSqGSIb3DQEBCwUAMBUxEzARBgNVBAMMCmluZ3Jlc3MtY2EwHhcNMTgwMjI4MDM0NDI1WhcNMjgwMjI2MDM0NDI1WjAYMRYwFAYDVQQDDA1uZ2lueC1zdmMtdGxzMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuZvMBYF104JPtMZFFUxCpGGODFG4rGffN1FFC98CGt99QAwVMfABGDMU8zfa21twxON1v3WK8HdJH5KRdLOIRQnhuMHsC174sb/+FuOa0GhStgmNX0f2jGETuImPQ82faXACnUkUYuvYG5odbY+tS+LQBtIormpxWRlNNTVzT3jFD6JECVZzpMyCJutkwxJC083PS1VE9ki+7mgpPWbb9BqT0Tn672x4cHI8LZ5snr1fpR8I0sqADXY+KpFQeh7UJsWZjfr00wDBsg76aF3TNK+pecXnBNYPZ6o7sOGXvagAxU58xjjz75TwMQ7NnqF584fshvQLnzeTGhDbXx4GHQIDAQABo00wSzAJBgNVHRMEAjAAMAsGA1UdDwQEAwIF4DAxBgNVHREEKjAoghJuZ2lueDAxLXN2Yy10bHMubWWCEm5naW54MDItc3ZjLXRscy5tZTANBgkqhkiG9w0BAQsFAAOCAQEAaUQH21wct78wW1tAz1+3j9SGLLk7kd06PcnYH1pBGW3wlMFiusMOUdVfm2aPwkX7VFOfeo6LYtILisQ9+wJraQcGd31H5M/ILSH2bhd739CcySqm3aEYHplQCfepsRRcINp82N3GLjcT6sCeuvoC4l+rUDIKMEPt7Skwj1HjCj/NSpHguzmtmRG4PCvv/3nCrcntGLKxsKBD85llpRlT9/Q+9eMmQz7YRUjUEr/3cSmBfRjcBQhB7yXZyDLRbObAc1BMgxBRX/oexNIHCMNdRFSP6wAtlyajytOhcixBMu3RQ1g0lrFaT9vQWPXs6HV8xC6VvfhdSQN8B1T65klNNg==
  tls.key: MIIEpAIBAAKCAQEAuZvMBYF104JPtMZFFUxCpGGODFG4rGffN1FFC98CGt99QAwVMfABGDMU8zfa21twxON1v3WK8HdJH5KRdLOIRQnhuMHsC174sb/+FuOa0GhStgmNX0f2jGETuImPQ82faXACnUkUYuvYG5odbY+tS+LQBtIormpxWRlNNTVzT3jFD6JECVZzpMyCJutkwxJC083PS1VE9ki+7mgpPWbb9BqT0Tn672x4cHI8LZ5snr1fpR8I0sqADXY+KpFQeh7UJsWZjfr00wDBsg76aF3TNK+pecXnBNYPZ6o7sOGXvagAxU58xjjz75TwMQ7NnqF584fshvQLnzeTGhDbXx4GHQIDAQABAoIBAFMGCI3R6eWRXZvsMEyljw2+gW6rQ2MDF4rD9JGp0GQ64ei7PuPWinbLqqxcqK4ESf4YDLx2lI6ZnQDda+j6wZK4J9qgC7jOY4oG6l5MsxxT/eNlhHJBW1xRtCOQjJ/0o0DjlJfMb60L99/o4Q73/Ll8HDdg3EegX1FOiwWpAgpipA+WyosAtrfR8DjOAVMavlhkCejmgupWU7syuVmVQ0Dz/z9zPESI1b6pHO0Js4Keb8vnUHPLNcq1HCdCMK+wrdUaW2YmuAr9uoF7Wqvp7MCog//cQX93mijJzW8GFPrSt2y4NHN6AnUw6PE3aoMgF1my7O1xLwOjCQz+eW8voyECgYEA8luy6iEDYkfq+tkxA9kl3CgXVk5WgiE/4mVaEjOIT2llgM+8K3TAn8EGATk5s79phn/MRfqi8YQ13Z9dzhp3R/ARynD+/TVRzMHe5830ysBScHaW4vxvPXEn2uBtB8TC8goxmoIu9My5H746ceyY2xBEn8HA0XZ7pQTrCRimcmUCgYEAxA5a3g/Ni/uwTUAsQJNUPyvjcYxq+E3S2VNsYZiOiogKqXeE0QtasNMh1L7Wv9aan5Xca7eKbHP4fZFxLif/YrwwcmktIX3u5vkGyq2VCAw5V8iGD3vdbDJvAc2+YVBoeWf4w4eDST2Ir6xrM3WCtXR35EM0Jhw+8PAdytIKrVkCgYAqEK5yIr7CnTb0ySPPxi3jE3ZRfZFYTssW0X6bsCQVnHaIsAW6CS6xy7/uEG+qeiuns6DR+Jm1j7wFtnaComdXrhx4ZbpsWofTIUc+NqopUs48ROkVhrkMEgrX26Iw+f7YIdrQNY5O4QW0s8DTKzywsRcoH2oHMShu0Pa2gnfJXQKBgQClzLn9t4GNk0EKY23JAo8piTUkbqp76Fyam5k5g+lvsBLMNB4nJyIADd07bFRyEcvbj8HDeolepEiN8HS1ou+wERQrfVTEURq7S/f5aQhysNvBp/vvlkGv4YrNDLCm3Xgsy8etm6lkQ9yXLAnQj90FFUTazhaI8DQuT/Hx9uU+qQKBgQCBEpc98YikgYmZk/6kyzUP3l+MIj5i3UK/7ZG3QOpTAeTbzBQQX0s31b2Lf9M+SN2+2XJb/0OUr3RKKkuf5KgedMll7hNaEaFu9z5qPepFUlKWZz2MkIRSljecbSJ8ZfGz2wCUhQoW8KLQY9ftEaz+27eEJ0FxHhuhe5+yQMpkKA==
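A less error-prone way to produce the single-line values above is to let base64 encode the files. A sketch: /tmp/demo.crt stands in for the real ingress.crt; note that for a kubernetes.io/tls Secret, `kubectl create secret tls` encodes the complete PEM file, headers included.

```shell
# stand-in file for this sketch; on the node, point base64 at the real
# ingress.crt / ingress.key instead
printf 'demo certificate bytes' > /tmp/demo.crt

# -w 0 disables line wrapping (GNU base64), producing the single-line
# value the Secret's data field expects
base64 -w 0 /tmp/demo.crt; echo
```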

# create the Secret object
[root@kubenode1 ingress]# kubectl create -f secret-ingress.yaml

# editing the yaml and running "kubectl create" as above keeps the steps explicit;
# but the Secret object can also be created directly with "kubectl create secret tls"
[root@kubenode1 ingress]# kubectl create secret tls secret-ingress --key /etc/kubernetes/ingress/ingress.key --cert /etc/kubernetes/ingress/ingress.crt

5. Create the backend services

# edit the backend service nginx01-svc-tls.yaml
[root@kubenode1 ingress]# touch nginx01-svc-tls.yaml
[root@kubenode1 ingress]# vim nginx01-svc-tls.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx01-tls
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx01-tls
    spec:
      containers:
      - name: nginx01-tls
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx01-svc-tls
spec:
  ports:
  # port the Service listens on
  - port: 443
    # port exposed by the Pod actually serving
    targetPort: 80
    name: https
  selector:
    name: nginx01-tls

# edit the backend service nginx02-svc-tls.yaml
[root@kubenode1 ingress]# cp nginx01-svc-tls.yaml nginx02-svc-tls.yaml
[root@kubenode1 ingress]# sed -i 's|nginx01|nginx02|g' nginx02-svc-tls.yaml

# create the backend services
[root@kubenode1 ingress]# kubectl create -f nginx01-svc-tls.yaml 
[root@kubenode1 ingress]# kubectl create -f nginx02-svc-tls.yaml

# modify the html file of the nginx containers serving the backends;
# enter a container with "kubectl exec -ti <pod-name> -c <container-name> /bin/bash"; get pod-name from "kubectl get pods -o wide"; container-name is the name defined in the yaml file
[root@kubenode1 ingress]# kubectl get pods -o wide

# the official nginx image keeps index.html under /usr/share/nginx/html/
[root@kubenode1 ingress]# kubectl exec -ti nginx01-tls-59fbf6696c-qfq4k -c nginx01-tls /bin/bash
root@nginx01-tls-59fbf6696c-qfq4k:/# echo "<h1>Welcome to test site nginx01-svc-tls</h1>" > /usr/share/nginx/html/index.html
root@nginx01-tls-59fbf6696c-qfq4k:/# cat /usr/share/nginx/html/index.html                                                   
root@nginx01-tls-59fbf6696c-qfq4k:/# exit
[root@kubenode1 ingress]# kubectl exec -ti nginx02-tls-5559fd9bc7-dfbrp -c nginx02-tls /bin/bash
root@nginx02-tls-5559fd9bc7-dfbrp:/# echo "<h1>Welcome to test site nginx02-svc-tls</h1>" > /usr/share/nginx/html/index.html
root@nginx02-tls-5559fd9bc7-dfbrp:/# cat /usr/share/nginx/html/index.html                                                   
root@nginx02-tls-5559fd9bc7-dfbrp:/# exit

6. Create the ingress object

# edit the ingress object's yaml file;
# add a "tls" section under "spec": the "hosts" field lists the domains, and the "secretName" field references the matching Secret resource;
# an ingress object can use only 1 Secret object (a single "secretName" value), i.e. a single certificate, and that certificate must cover every domain listed under "hosts";
# the "secretName" field must be placed after the host list;
# the domains in "hosts" must match the domains in "rules";
# by default, when no certificate is configured or the configuration is wrong, ingress serves a default tls certificate; if "secretName" were given 2 values, all domains would fall back to the default certificate; if a domain is missing from "hosts", that domain falls back to the default certificate;
# an updated ingress certificate may take a while to become effective
[root@kubenode1 ingress]# touch nginx-svc-tls-ingress.yaml
[root@kubenode1 ingress]# vim nginx-svc-tls-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-tls
spec:
  tls:
  - hosts:
    - nginx01-svc-tls.me
    - nginx02-svc-tls.me
    secretName: secret-ingress
  rules:
  - host: nginx01-svc-tls.me
    http:
      paths:
      - backend:
          serviceName: nginx01-svc-tls
          # port the backend Service listens on, as opposed to the port of the container actually serving
          servicePort: 443
  - host: nginx02-svc-tls.me
    http:
      paths:
      - backend:
          serviceName: nginx02-svc-tls
          servicePort: 443

# create the ingress object
[root@kubenode1 ingress]# kubectl create -f nginx-svc-tls-ingress.yaml
[root@kubenode1 ingress]# kubectl get ingress

7. Verify

# use --resolve to simulate dns resolution, with the domain name as the target;
# http requests are redirected; https requests succeed
[root@kubenode1 ingress]# curl --resolve nginx01-svc-tls.me:80:172.30.200.21 http://nginx01-svc-tls.me
[root@kubenode1 ingress]# curl --resolve nginx01-svc-tls.me:443:172.30.200.21 -k https://nginx01-svc-tls.me

# or use -H to set the Host header, with the ip address as the target
[root@kubenode1 ingress]# curl -H 'Host:nginx01-svc-tls.me' -k https://172.30.200.23
[root@kubenode1 ingress]# curl -H 'Host:nginx02-svc-tls.me' -k https://172.30.200.23

Access the host from a local browser (bind the domain name beforehand): http://nginx01-svc-tls.me

An http request is automatically redirected to https, as follows:

Site: nginx01-svc-tls.me

Site: nginx02-svc-tls.me
