Multiple NGINX Ingress Controllers in the Same Kubernetes Cluster

When multiple projects (each with its own namespace) in a single Kubernetes cluster share one NGINX ingress controller, any change to any service registered with an Ingress triggers a configuration reload of that controller. As the update frequency grows, the load on this single controller keeps increasing. The ideal solution is one NGINX ingress controller per namespace, each serving only its own resources.

The NGINX ingress controller provides the ingress.class parameter to make running multiple controllers possible.

Usage example
If you have already deployed multiple NGINX ingress controllers, you can pick which one handles a given Ingress by setting ingress.class in its annotations when you create it, e.g. to select the controller whose class is nginx:

metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "nginx"

Note: setting the annotation to any value that does not match a valid ingress class forces the controller to ignore your Ingress. The same holds if you run only a single controller: any value other than its ingress class or the empty string also causes the Ingress to be ignored.

Example of configuring one of several NGINX ingress controllers:

spec:
  template:
     spec:
       containers:
         - name: nginx-ingress-internal-controller
           args:
             - /nginx-ingress-controller
             - '--election-id=ingress-controller-leader-internal'
             - '--ingress-class=nginx-internal'
             - '--configmap=ingress/nginx-ingress-internal-controller'

--ingress-class: make sure this value is unique, i.e. every controller is configured with a different class.
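For contrast, a second controller running alongside the internal one above could be configured as follows (the "external" names are illustrative, not from the original manifest):

```yaml
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-external-controller
          args:
            - /nginx-ingress-controller
            # election-id, ingress-class and configmap must all differ
            # from the internal controller's values
            - '--election-id=ingress-controller-leader-external'
            - '--ingress-class=nginx-external'
            - '--configmap=ingress/nginx-ingress-external-controller'
```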

Notes:

Deploying multiple ingress controllers of different types (e.g. nginx and gce) without specifying ingress.class in the annotation will cause both (or all) controllers to fight over every newly created Ingress and to race each other, chaotically, to update its status field.

When running multiple NGINX ingress controllers, if one of them uses the default --ingress-class value (see the IsValid method in internal/ingress/annotations/class/main.go), it will only handle Ingresses that do not set ingress.class at all.

Practical application

Create a new namespace
Here we create a namespace named ingress:

kubectl create ns ingress

Create the nginx-ingress-controller in this namespace

nginx-ingress.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: 192.168.100.100/k8s/nginx-ingress-controller-defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        # resources:
          # limits:
            # cpu: 10m
            # memory: 20Mi
          # requests:
            # cpu: 10m
            # memory: 20Mi
---

apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress
---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "ingress-controller-leader-ingress"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-ingress"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress

---

kind: ConfigMap
apiVersion: v1
#data:
#  "59090": ingress/kubernetes-dashboard:9090
metadata:
  name: tcp-services
  namespace: ingress

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress

---

kind: ConfigMap
apiVersion: v1
data:
  log-format-upstream: '$remote_addr - $remote_user [$time_local] "$request" $status
    $body_bytes_sent "$http_referer" "$http_user_agent" $request_time "$http_x_forwarded_for"'
  worker-shutdown-timeout: "600"
metadata:
  name: nginx-configuration
  namespace: ingress
  labels:
    app: ingress-nginx

---

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress 
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      # initContainers:
      # - command:
        # - sh
        # - -c
        # - sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        # image: concar-docker-rg01:5000/k8s/alpine:3.6
        # imagePullPolicy: IfNotPresent
        # name: sysctl
        # securityContext:
          # privileged: true
      containers:
        - name: nginx-ingress-controller
          image: 192.168.100.100/k8s/nginx-ingress-controller:0.24.1
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --ingress-class=ingress
            - --annotations-prefix=nginx.ingress.kubernetes.io
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: aliyun_logs_ingress
              value: "stdout"
            - name: aliyun_logs_ingress_tags
              value: "fields.project=kube,fields.env=system,fields.app=nginx-ingress,fields.version=v1,type=nginx,multiline=1"
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
      hostNetwork: true
      nodeSelector:
        type: lb

The resources in this manifest:

Deployment: default-http-backend, the default backend service for the ingress; it handles every URL path and host the nginx controller has no rule for.

Service: the Service in front of default-http-backend.

ServiceAccount: the RBAC service account the nginx-ingress-controller runs as.

Role: the RBAC role bound to the nginx-ingress-controller.

"ingress-controller-leader-ingress": note that the trailing ingress is the custom --ingress-class value, which here happens to match the namespace name (the default would be ingress-controller-leader-nginx).

RoleBinding: binds the role to the service account.

ConfigMap: the tcp-services / udp-services / nginx configuration for the controller.
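The tcp-services / udp-services ConfigMaps map an exposed port to a backend in the form "port": namespace/service:port; a sketch based on the commented-out entry in the manifest above:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress
data:
  # expose TCP port 59090 and forward it to kubernetes-dashboard port 9090
  "59090": ingress/kubernetes-dashboard:9090
```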

DaemonSet: the nginx-ingress-controller itself (scheduled only on nodes matching the nodeSelector).

--ingress-class=ingress: declares that this controller only serves Ingresses whose class is ingress (here the class name happens to match the namespace name).

Note: since this controller serves a single namespace only, it does not need its own ClusterRole or ClusterRoleBinding RBAC objects!

Create the ingress-controller:

kubectl create -f nginx-ingress.yaml

Add a ClusterRoleBinding entry

Because the default ingress-controller's ClusterRoleBinding only binds the ingress ServiceAccount in the kube-system namespace, the custom ingress ServiceAccount must be added to that ClusterRoleBinding as well; otherwise the new ingress-controller fails on startup with a permission error:

The cluster seems to be running with a restrictive Authorization mode and the Ingress controller does not have the required permissions to operate normally

The relevant part of the ClusterRoleBinding nginx-ingress-clusterrole-nisa-binding then looks like this:

subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: kube-system
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress
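For reference, the complete binding could look like this (assuming the default ClusterRole name nginx-ingress-clusterrole, the one used in the production example later in this article):

```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: kube-system
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress
```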

Create an Ingress resource

Prerequisite: at least one node carries the label type: lb (e.g. applied with `kubectl label node <node-name> type=lb`).

An example:

ingress.yml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "ingress"
  name: storm-nimbus-ingress
  namespace: ingress
spec:
  rules:
  - host: storm-nimbus-ingress.test.com
    http:
      paths:
      - backend:
          serviceName: storm-nimbus-cluster
          servicePort: 8080
        path: /

kubernetes.io/ingress.class: declares that only the controller whose ingress class is "ingress" should serve this Ingress

host: the custom server_name

serviceName: the name of the Kubernetes Service to proxy to

servicePort: the port that Service exposes

Create the Ingress:

kubectl create -f ingress.yml

Access

Visit http://storm-nimbus-ingress.test.com/ (make sure storm-nimbus-ingress.test.com resolves to an ingress-controller node).
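For a quick test without DNS, the hostname can be pointed at one of the ingress-controller nodes in /etc/hosts (the node IP below is a placeholder):

```
192.168.100.101  storm-nimbus-ingress.test.com
```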

Once you run multiple ingresses you also need network isolation; by default there is none between namespaces:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: aep-production-network-policy
  namespace: aep-production # access control per namespace; by default all namespaces can reach each other, but as soon as one NetworkPolicy exists the default flips to deny-all
spec:
  podSelector: {}
  ingress:
  - from:
    - ipBlock: # allow this IP
        cidr: 10.42.89.0/32 # the IP of one of the two ingress hosts
    - ipBlock: # allow this IP
        cidr: 10.42.143.0/32
    - namespaceSelector: {} # note: an empty namespaceSelector matches all namespaces
    - podSelector: # allow pods carrying the labels below
        matchLabels:
          project: aep
          env: production
          vdc: oscarindustry
  policyTypes:
  - Ingress
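The flip-to-deny behavior mentioned in the comment above means that a policy with no ingress rules acts as an explicit default-deny; a minimal sketch for the same namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: aep-production
spec:
  podSelector: {}  # selects every pod in the namespace
  policyTypes:
  - Ingress        # no ingress rules are listed, so all inbound traffic is denied
```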

Below is a production configuration example (the Ingress itself is generated by the devops platform):

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: carchat-prod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: ecloud02-plat-ops-repo01.cmiov:5000/k8s/nginx-ingress-controller-defaultbackend:1.4
        resources:
          limits:
            cpu: 500m
            memory: 500Mi
          requests:
            cpu: 10m
            memory: 20Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080

---

apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: carchat-prod
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: carchat-prod

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
        - events
    verbs:
        - create
        - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: carchat-prod
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "ingress-controller-leader-carchat-prod-oscarindustry"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-carchat-prod-oscarindustry"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: carchat-prod
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: carchat-prod

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: carchat-prod

---

kind: ConfigMap
apiVersion: v1
data:
  "59090": kube-system/kubernetes-dashboard:9090
  "49090": monitoring/prometheus-operated:9090
metadata:
  name: tcp-services
  namespace: carchat-prod

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: carchat-prod

---

kind: ConfigMap
apiVersion: v1
data:
  log-format-upstream: '$remote_addr - $remote_user [$time_local] "$request" $status
    $body_bytes_sent "$http_referer" "$http_user_agent" $request_time "$http_x_forwarded_for"'
  worker-shutdown-timeout: "600"
metadata:
  name: nginx-configuration
  namespace: carchat-prod
  labels:
    app: ingress-nginx

---

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: carchat-prod 
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      # initContainers:
      # - command:
        # - sh
        # - -c
        # - sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        # image: concar-docker-rg01:5000/k8s/alpine:3.6
        # imagePullPolicy: IfNotPresent
        # name: sysctl
        # securityContext:
          # privileged: true
      containers:
        - name: nginx-ingress-controller
          image: ecloud02-plat-ops-repo01.cmiov:5000/k8s/nginx-ingress-controller:0.24.1
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --ingress-class=carchat-prod-oscarindustry
            - --annotations-prefix=nginx.ingress.kubernetes.io
          resources:
            limits:
              cpu: "3"
              memory: 6000Mi
            requests:
              cpu: 100m
              memory: 100Mi
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: aliyun_logs_ingress
              value: "stdout"
            - name: aliyun_logs_ingress_tags
              value: "fields.project=kube,fields.env=system,fields.app=nginx-ingress,fields.version=v1,type=nginx,multiline=1"
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
      hostNetwork: true
      nodeSelector:
        ingress: carchat-prod

If you are not creating it through the devops platform:

ingress.yml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "carchat-prod-oscarindustry"
  name: storm-nimbus-ingress
  namespace: carchat-prod
spec:
  rules:
  - host: storm-nimbus-ingress.test.com
    http:
      paths:
      - backend:
          serviceName: storm-nimbus-cluster
          servicePort: 8080
        path: /