海口-老男人 17:42:43  I want to run a Node.js service: publish it as a Deployment first, then create a Service, so that it can be reached from outside the cluster.
舊報紙 17:43:35  So your requirement is: one app, published with a Deployment, then a Service, then a proxy in front, right?
Plan: deploy Traefik, publish the Deployment (Pod), create a Service, have Traefik proxy the Service, and access it from outside.
docker 19.03.5
kubernetes 1.17.2
Why choose Traefik and drop nginx:
https://www.php.cn/nginx/422461.html Special note: Traefik 2.1 adds canary releases and traffic mirroring. It was built for containers/microservices, and the official Traefik site documents this well. Here is a quick-start post as well: https://www.qikqiak.com/post/traefik-2.1-101/ . For deploying Traefik, I recommend not following the approach in the link above.
Note: Traefik is deployed here in the kube-system Namespace; if yours is different, adjust the Namespace field in the deployment manifests below.
traefik-crd.yaml
Starting with Traefik v2.1, CRDs (Custom Resource Definitions) are used to define routing configuration and related objects, so the CRD resources need to be created in advance.
# cat traefik-crd.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-16
#FileName: traefik-crd.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
## IngressRoute
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
---
## IngressRouteTCP
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutetcps.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRouteTCP
    plural: ingressroutetcps
    singular: ingressroutetcp
---
## Middleware
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: middlewares.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: Middleware
    plural: middlewares
    singular: middleware
---
## TLSOption
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsoptions.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSOption
    plural: tlsoptions
    singular: tlsoption
---
## TraefikService
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: traefikservices.traefik.containo.us
spec:
  scope: Namespaced
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TraefikService
    plural: traefikservices
    singular: traefikservice
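A minimal sketch of registering and verifying the CRDs, assuming kubectl access to the cluster and the file saved as traefik-crd.yaml:

kubectl apply -f traefik-crd.yaml
kubectl get crd | grep traefik.containo.us    # the five CRDs above should be listed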
Create RBAC permissions
Traefik needs certain permissions in the cluster, so create the Traefik ServiceAccount ahead of time and grant it the required permissions.
# cat traefik-rbac.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-16
#FileName: traefik-rbac.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
## ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: kube-system
  name: traefik-ingress-controller
---
## ClusterRole
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups: [""]
    resources: ["services","endpoints","secrets"]
    verbs: ["get","list","watch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses"]
    verbs: ["get","list","watch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses/status"]
    verbs: ["update"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["middlewares"]
    verbs: ["get","list","watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["ingressroutes"]
    verbs: ["get","list","watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["ingressroutetcps"]
    verbs: ["get","list","watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["tlsoptions"]
    verbs: ["get","list","watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["traefikservices"]
    verbs: ["get","list","watch"]
---
## ClusterRoleBinding
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: kube-system
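Applying the RBAC objects is a single command (sketch):

kubectl apply -f traefik-rbac.yaml
kubectl get sa traefik-ingress-controller -n kube-system    # the ServiceAccount should now exist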
Create the Traefik configuration file
Traefik has a lot of configuration options, and defining them all on the command line is awkward. The usual approach is to put the options into a configuration file, store it in a ConfigMap, and mount that into the Traefik container.
traefik-config.yaml
# cat traefik-config.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-16
#FileName: traefik-config.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
kind: ConfigMap
apiVersion: v1
metadata:
  name: traefik-config
  namespace: kube-system
data:
  traefik.yaml: |-
    ping: ""                      ## Enable ping
    serversTransport:
      insecureSkipVerify: true    ## Skip TLS certificate verification for proxied backends
    api:
      insecure: true              ## Allow access to the API over HTTP
      dashboard: true             ## Enable the Dashboard
      debug: false                ## Enable debug mode
    metrics:
      prometheus: ""              ## Expose Prometheus metrics with the default configuration
    entryPoints:
      web:
        address: ":80"            ## Listen on port 80, entry point name "web"
      websecure:
        address: ":443"           ## Listen on port 443, entry point name "websecure"
      redis:
        address: ":6379"
    providers:
      kubernetesCRD: ""           ## Enable the Kubernetes CRD provider for routing rules
      kubernetesIngress: ""       ## Enable the Kubernetes Ingress provider for routing rules
    log:
      filePath: ""                ## Log file path; empty means log to the console
      level: error                ## Log level
      format: json                ## Log format
    accessLog:
      filePath: ""                ## Access log file path; empty means log to the console
      format: json                ## Access log format
      bufferingSize: 0            ## Number of access log lines to buffer
      filters:
        #statusCodes: ["200"]     ## Keep only access logs whose status code is in the given range
        retryAttempts: true       ## Keep access logs when the proxied request was retried after a failure
        minDuration: 20           ## Keep access logs for requests that took longer than this duration
      fields:                     ## Whether to keep access log fields (keep / drop)
        defaultMode: keep         ## Keep fields by default
        names:                    ## Per-field overrides
          ClientUsername: drop
        headers:                  ## Whether to keep request headers
          defaultMode: keep       ## Keep headers by default
          names:                  ## Per-header overrides
            User-Agent: redact
            Authorization: drop
            Content-Type: keep
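Loading the configuration is one apply (sketch). Note that this file is Traefik's static configuration, so if the ConfigMap is changed later the Traefik Pods need to be restarted to pick up the changes:

kubectl apply -f traefik-config.yaml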
The key part of the configuration:
entryPoints:
  web:
    address: ":80"       ## Listen on port 80, entry point name "web"
  websecure:
    address: ":443"      ## Listen on port 443, entry point name "websecure"
  redis:
    address: ":6379"     ## Listen on port 6379, entry point name "redis"
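The extra redis entry point is what later allows Traefik to proxy a plain TCP service as well. Purely as an illustration (the route name, namespace, and Redis Service below are assumptions, not part of this deployment), a TCP route bound to that entry point would look roughly like this:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: redis-tcp-route        # hypothetical route name
  namespace: assembly          # assumed namespace of the Redis Service
spec:
  entryPoints:
    - redis                    # the ":6379" entry point defined above
  routes:
    - match: HostSNI(`*`)      # plain TCP without TLS, so match any SNI
      services:
        - name: redis          # assumed Service name
          port: 6379

HTTP(S) traffic, by contrast, goes through the web/websecure entry points via IngressRoute objects, as shown below for the Dashboard and Prometheus.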
Deploy Traefik
Traefik is deployed here as a DaemonSet, so the target nodes need to be labeled in advance; when the DaemonSet is created, its Pods are then scheduled automatically onto the labeled nodes.
traefik-deploy.yaml
kubectl label nodes 20.0.0.202 IngressProxy=true

# cat traefik-deploy.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-16
#FileName: traefik-deploy.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - name: web
      port: 80
    - name: websecure
      port: 443
    - name: admin
      port: 8080
    - name: redis
      port: 6379
  selector:
    app: traefik
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    app: traefik
spec:
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      name: traefik
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 1
      containers:
        - image: traefik:v2.1.2
          name: traefik-ingress-lb
          ports:
            - name: web
              containerPort: 80
              hostPort: 80          ## Bind the container port to port 80 on the host
            - name: websecure
              containerPort: 443
              hostPort: 443         ## Bind the container port to port 443 on the host
            - name: redis
              containerPort: 6379
              hostPort: 6379
            - name: admin
              containerPort: 8080   ## Traefik Dashboard port
          resources:
            limits:
              cpu: 2000m
              memory: 1024Mi
            requests:
              cpu: 1000m
              memory: 1024Mi
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
          args:
            - --configfile=/config/traefik.yaml
          volumeMounts:
            - mountPath: "/config"
              name: "config"
      volumes:
        - name: config
          configMap:
            name: traefik-config
      tolerations:                  ## Tolerate all taints, so the Pods survive nodes being tainted
        - operator: "Exists"
      nodeSelector:                 ## Node selector: only run on nodes carrying this label
        IngressProxy: "true"
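Rolling it out and checking that the DaemonSet landed on the labeled node (sketch):

kubectl apply -f traefik-deploy.yaml
kubectl get pods -n kube-system -o wide | grep traefik    # expect one Running Pod on 20.0.0.202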
The Traefik 2.1 application is now deployed, but for external clients to reach services inside Kubernetes, routing rules still have to be configured. Since the Traefik Dashboard was enabled above, the first routing rule to configure is the one for the Dashboard itself, so that it can be reached from outside.
Deploy the Traefik Dashboard
# cat traefik-dashboard-route.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-16
#FileName: traefik-dashboard-route.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard-route
  namespace: kube-system
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`traefik.linux.com`)
      kind: Rule
      services:
        - name: traefik
          port: 8080
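Applying the route (sketch):

kubectl apply -f traefik-dashboard-route.yaml
kubectl get ingressroute -n kube-system    # traefik-dashboard-route should be listed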
Local hosts resolution
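As a sketch, the hosts entry on the client machine points the Dashboard domain at a node running Traefik (20.0.0.202 is the node labeled IngressProxy=true above):

20.0.0.202 traefik.linux.com

With that in place, http://traefik.linux.com/dashboard/ should show the Traefik Dashboard.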
By this point you can already basically see how Traefik does the proxying.
apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - name: web
      port: 80
    - name: websecure
      port: 443
    - name: admin
      port: 8080
  selector:
    app: traefik
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard-route
  namespace: kube-system
spec:
  entryPoints:
    - web                                  ## entry point the request arrives on
  routes:
    - match: Host(`traefik.linux.com`)     ## domain used for external access
      kind: Rule
      services:
        - name: traefik                    ## metadata.name of the Service above
          port: 8080                       ## one of that Service's ports
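Put differently: a request arriving on the web entry point (port 80) whose Host header matches the rule is forwarded to the named Service on the given port. Without a hosts entry this can be checked directly against a labeled node (20.0.0.202 here) by faking the Host header:

curl -I -H 'Host: traefik.linux.com' http://20.0.0.202/dashboard/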
The original requirement, restated: deploy an app with a Deployment, then create a Service for it, then proxy it.
The port is 3009.
For the experiment here I'll just pick an application more or less at random, Prometheus, and run through exactly the same steps (a sketch for the original Node.js service on port 3009 is given at the end).
[root@bs-k8s-master01 prometheus]# cat prometheus-cm.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-20
#FileName: prometheus-cm.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: assembly
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      scrape_timeout: 15s
    alerting:
      alertmanagers:
        - static_configs:
            - targets: ["alertmanager-svc:9093"]
    rule_files:
      - /etc/prometheus/rules.yaml
    scrape_configs:
    - job_name: 'prometheus'
      static_configs:
        - targets: ['localhost:9090']
    - job_name: 'traefik'
      static_configs:
        - targets: ['traefik.kube-system.svc.cluster.local:8080']
    - job_name: "kubernetes-nodes"
      kubernetes_sd_configs:
        - role: node
      relabel_configs:
        - source_labels: [__address__]
          regex: '(.*):10250'
          replacement: '${1}:9100'
          target_label: __address__
          action: replace
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
    - job_name: 'kubernetes-kubelet'
      kubernetes_sd_configs:
        - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
    - job_name: "kubernetes-apiserver"
      kubernetes_sd_configs:
        - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
        - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
          action: keep
          regex: default;kubernetes;https
    - job_name: "kubernetes-scheduler"
      kubernetes_sd_configs:
        - role: endpoints
    - job_name: 'kubernetes-cadvisor'
      kubernetes_sd_configs:
        - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
        - role: endpoints
      relabel_configs:
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
          action: replace
          target_label: __scheme__
          regex: (https?)
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
          action: replace
          target_label: __address__
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
        - action: labelmap
          regex: __meta_kubernetes_service_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_service_name]
          action: replace
          target_label: kubernetes_name
  rules.yaml: |
    groups:
    - name: test-rule
      rules:
      - alert: NodeMemoryUsage
        expr: (sum(node_memory_MemTotal_bytes) - sum(node_memory_MemFree_bytes + node_memory_Buffers_bytes + node_memory_Cached_bytes)) / sum(node_memory_MemTotal_bytes) * 100 > 5
        for: 2m
        labels:
          team: node
        annotations:
          summary: "{{$labels.instance}}: High Memory usage detected"
          description: "{{$labels.instance}}: Memory usage is above 80% (current value is: {{ $value }})"

[root@bs-k8s-master01 prometheus]# cat prometheus-rbac.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-20
#FileName: prometheus-rbac.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: assembly
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
      - services
      - endpoints
      - pods
      - nodes/proxy
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - configmaps
      - nodes/metrics
    verbs:
      - get
  - nonResourceURLs:
      - /metrics
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: prometheus
    namespace: assembly

# cat prometheus-deploy.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-20
#FileName: prometheus-deploy.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  namespace: assembly
  labels:
    app: prometheus
spec:
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      imagePullSecrets:
        - name: k8s-harbor-login
      serviceAccountName: prometheus
      containers:
        - image: harbor.linux.com/prometheus/prometheus:v2.4.3
          name: prometheus
          command:
            - "/bin/prometheus"
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus"
            - "--storage.tsdb.retention=24h"
            - "--web.enable-admin-api"   # Enable the admin HTTP API, which includes features such as deleting time series
            - "--web.enable-lifecycle"   # Enable hot reload: a request to localhost:9090/-/reload takes effect immediately
          ports:
            - containerPort: 9090
              protocol: TCP
              name: http
          volumeMounts:
            - mountPath: "/prometheus"
              subPath: prometheus
              name: data
            - mountPath: "/etc/prometheus"
              name: config-volume
          resources:
            requests:
              cpu: 100m
              memory: 512Mi
            limits:
              cpu: 100m
              memory: 512Mi
      securityContext:
        runAsUser: 0
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: prometheus-pvc
        - configMap:
            name: prometheus-config
          name: config-volume
      nodeSelector:                      ## Node selector: only run on nodes carrying this label
        prometheus: "true"

[root@bs-k8s-master01 prometheus]# cat prometheus-svc.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-20
#FileName: prometheus-svc.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: assembly
  labels:
    app: prometheus
spec:
  selector:
    app: prometheus
  type: NodePort
  ports:
    - name: web
      port: 9090
      targetPort: http

[root@bs-k8s-master01 prometheus]# cat prometheus-ingressroute.yaml
##########################################################################
#Author: zisefeizhu
#QQ: 2********0
#Date: 2020-03-20
#FileName: prometheus-ingressroute.yaml
#URL: https://www.cnblogs.com/zisefeizhu/
#Description: The test script
#Copyright (C): 2020 All rights reserved
###########################################################################
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: prometheus
  namespace: assembly
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`prometheus.linux.com`)
      kind: Rule
      services:
        - name: prometheus
          port: 9090
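Assuming the objects referenced but not shown in this post (the assembly namespace, the prometheus-pvc PersistentVolumeClaim, the k8s-harbor-login image-pull Secret, and the prometheus=true node label) already exist, the whole stack can be applied and checked like this (sketch):

kubectl apply -f prometheus-cm.yaml -f prometheus-rbac.yaml -f prometheus-deploy.yaml -f prometheus-svc.yaml -f prometheus-ingressroute.yaml
kubectl get pods,svc,ingressroute -n assembly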
Clearly the proxying has succeeded. Local hosts resolution:
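A sketch of the hosts entry on the client machine, again assuming 20.0.0.202 is a node labeled IngressProxy=true:

20.0.0.202 prometheus.linux.com

After that, opening http://prometheus.linux.com in a browser should reach the Prometheus web UI through Traefik.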
Still no problems at all.
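Coming back to the original question, a Node.js service listening on port 3009, the same three manifests would look roughly like this. This is a minimal sketch: the app name, image, and domain below are assumptions for illustration, so substitute your own.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app                     # hypothetical app name
  namespace: assembly
  labels:
    app: node-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
        - name: node-app
          image: harbor.linux.com/library/node-app:v1   # assumed image in the local Harbor
          ports:
            - name: http
              containerPort: 3009    # the port the Node.js service listens on
---
apiVersion: v1
kind: Service
metadata:
  name: node-app
  namespace: assembly
spec:
  selector:
    app: node-app
  ports:
    - name: http
      port: 3009
      targetPort: http
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: node-app
  namespace: assembly
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`node.linux.com`)  # assumed external domain
      kind: Rule
      services:
        - name: node-app
          port: 3009

With a hosts (or DNS) entry pointing the assumed domain at a node labeled IngressProxy=true, the Node.js service becomes reachable from outside the cluster on port 80 through Traefik, exactly as with the Dashboard and Prometheus above.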