k8s - Resource Metrics API and Custom Metrics API (Part 23)

1. The Resource Metrics API

In earlier versions, resource metrics could only be viewed after Heapster collected them, but Heapster is now being deprecated.

Starting with k8s v1.8, a new capability was introduced: exposing resource metrics through the API.

When Heapster was used, resource metrics were collected by Heapster itself over its own path, without going through the apiserver. Kubernetes later introduced the resource metrics API (Metrics API), so metric data can now be read directly from the Kubernetes API rather than through a separate channel.
metrics-server: it is itself an API server that provides the core Metrics API, much like the kube-apiserver component provides many API groups. It is not a built-in part of Kubernetes, however; it runs as a Pod hosted on top of the cluster.

To let users consume the metrics-server API seamlessly, this add-on API has to be aggregated into the core API groups through the aggregator; it can then be treated as part of the core API and viewed directly with kubectl api-versions.
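
Once metrics-server is healthy and registered through the aggregator, the Metrics API can also be queried by hand. A quick check (a sketch, not from the original article; it only works after the deployment below succeeds):

# list the aggregated metrics API group and read node/pod metrics from it directly
kubectl api-versions | grep metrics.k8s.io
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods"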

metrics-server collects its metric data from the Summary API that kubelet exposes on each node (port 10250). It gathers core resource metrics for Nodes and Pods, mainly CPU and memory usage, and keeps everything in memory only, which is why kubectl top cannot show historical data; other resource metrics are collected by Prometheus instead.
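
For reference, the Summary API that metrics-server scrapes can be inspected by hand through the apiserver's node proxy; a small sketch, where <node-name> is a placeholder for one of your nodes:

# peek at a node's kubelet Summary API via the apiserver proxy (returns node and per-pod CPU/memory stats)
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary" | head -n 40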

Many Kubernetes components depend on the resource metrics API, for example kubectl top and HPA; without a resource metrics API endpoint these components cannot function.

Resource metrics: metrics-server

Custom metrics: Prometheus, k8s-prometheus-adapter

The new-generation architecture:

  • Core metrics pipeline: made up of kubelet, metrics-server, and the API exposed through the API server; it covers cumulative CPU usage, real-time memory usage, Pod resource utilization, and container disk usage.
  • Monitoring pipeline: collects all kinds of metrics from the system and serves them to end users, storage systems, and the HPA. It carries the core metrics plus many non-core metrics; non-core metrics cannot be interpreted by Kubernetes itself.

metrics-server is an API server that collects CPU utilization, memory utilization, and so on.

2. metrics-server

(1) Remove the resources created for Heapster in the previous section:

[root@master metrics]# pwd
/root/manifests/metrics

[root@master metrics]# kubectl delete -f ./
deployment.apps "monitoring-grafana" deleted
service "monitoring-grafana" deleted
clusterrolebinding.rbac.authorization.k8s.io "heapster" deleted
serviceaccount "heapster" deleted
deployment.apps "heapster" deleted
service "heapster" deleted
deployment.apps "monitoring-influxdb" deleted
service "monitoring-influxdb" deleted
pod "pod-demo" deleted

metrics-server has a standalone project on GitHub, and the kubernetes repository's addons directory also carries the YAML files for the metrics-server add-on.

Here we use the YAML from the kubernetes repository:

metrics-server on kubernetes: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server

Download the following files:


[root@master metrics-server]# pwd
/root/manifests/metrics/metrics-server

[root@master metrics-server]# ls
auth-delegator.yaml  auth-reader.yaml  metrics-apiservice.yaml  metrics-server-deployment.yaml  metrics-server-service.yaml  resource-reader.yaml

A few things need to be modified.

The metrics-server image has since been upgraded to metrics-server-amd64:v0.3.1 (the previous version was v0.2.1), and the startup arguments differ somewhat between the two.

[root@master metrics-server]# vim resource-reader.yaml
...
...
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - namespaces
  - nodes/stats    # add this line
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - deployments
...
...


[root@master metrics-server]# vim metrics-server-deployment.yaml 
...
...
containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1    # change the image (it can be pulled from Aliyun and re-tagged)
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-insecure-tls        ## add this line
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP    # add this line
        # These are needed for GKE, which doesn't support secure communication yet.
        # Remove these lines for non-GKE clusters, and when GKE supports token-based auth.
        #- --kubelet-port=10255
        #- --deprecated-kubelet-completely-insecure=true
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
      - name: metrics-server-nanny
        image: k8s.gcr.io/addon-resizer:1.8.4    # change the image (it can be pulled from Aliyun and re-tagged)
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 5m
            memory: 50Mi
...
...
# modify the startup arguments of the metrics-server-nanny container; the edited version looks like this:
 volumeMounts:
        - name: metrics-server-config-volume
          mountPath: /etc/config
        command:
          - /pod_nanny
          - --config-dir=/etc/config
          - --cpu=80m
          - --extra-cpu=0.5m
          - --memory=80Mi
          - --extra-memory=8Mi
          - --threshold=5
          - --deployment=metrics-server-v0.3.1
          - --container=metrics-server
          - --poll-period=300000
          - --estimator=exponential
          # Specifies the smallest cluster (defined in number of nodes)
          # resources will be scaled to.
          #- --minClusterSize={{ metrics_server_min_cluster_size }}
...
...


# create
[root@master metrics-server]# kubectl apply -f ./
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
configmap/metrics-server-config created
deployment.apps/metrics-server-v0.3.1 created
service/metrics-server created

# check: the Pod is up
[root@master metrics-server]# kubectl get pods -n kube-system |grep metrics-server
metrics-server-v0.3.1-7d8bf87b66-8v2w9   2/2     Running   0          9m37s
[root@master ~]# kubectl api-versions |grep metrics
metrics.k8s.io/v1beta1

[root@master ~]# kubectl top nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)

# the logs of the two containers in the Pod follow
[root@master ~]# kubectl logs metrics-server-v0.3.1-7d8bf87b66-8v2w9 -c metrics-server -n kube-system
I0327 07:06:47.082938       1 serving.go:273] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
[restful] 2019/03/27 07:06:59 log.go:33: [restful/swagger] listing is available at https://:443/swaggerapi
[restful] 2019/03/27 07:06:59 log.go:33: [restful/swagger] https://:443/swaggerui/ is mapped to folder /swagger-ui/
I0327 07:06:59.783549       1 serve.go:96] Serving securely on [::]:443

[root@master ~]# kubectl logs metrics-server-v0.3.1-7d8bf87b66-8v2w9 -c metrics-server-nanny -n kube-system
ERROR: logging before flag.Parse: I0327 07:06:40.684552       1 pod_nanny.go:65] Invoked by [/pod_nanny --config-dir=/etc/config --cpu=80m --extra-cpu=0.5m --memory=80Mi --extra-memory=8Mi --threshold=5 --deployment=metrics-server-v0.3.1 --container=metrics-server --poll-period=300000 --estimator=exponential]
ERROR: logging before flag.Parse: I0327 07:06:40.684806       1 pod_nanny.go:81] Watching namespace: kube-system, pod: metrics-server-v0.3.1-7d8bf87b66-8v2w9, container: metrics-server.
ERROR: logging before flag.Parse: I0327 07:06:40.684829       1 pod_nanny.go:82] storage: MISSING, extra_storage: 0Gi
ERROR: logging before flag.Parse: I0327 07:06:40.689926       1 pod_nanny.go:109] cpu: 80m, extra_cpu: 0.5m, memory: 80Mi, extra_memory: 8Mi
ERROR: logging before flag.Parse: I0327 07:06:40.689970       1 pod_nanny.go:138] Resources: [{Base:{i:{value:80 scale:-3} d:{Dec:<nil>} s:80m Format:DecimalSI} ExtraPerNode:{i:{value:5 scale:-4} d:{Dec:<nil>} s: Format:DecimalSI} Name:cpu} {Base:{i:{value:83886080 scale:0} d:{Dec:<nil>} s: Format:BinarySI} ExtraPerNode:{i:{value:8388608 scale:0} d:{Dec:<nil>} s: Format:BinarySI} Name:memory}]

Unfortunately, although the Pod is running, resource metrics still cannot be retrieved.

Being new to this I don't have much experience; I searched around online but couldn't resolve it.

The logs are pasted above; if anyone has run into this before, advice would be much appreciated!
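
One diagnostic worth running in this situation (not part of the original troubleshooting, just a suggestion) is to check what the aggregated APIService itself reports, since kubectl top goes through it:

# check the status and failure reason of the aggregated metrics APIService
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl describe apiservice v1beta1.metrics.k8s.io
# hit the API directly to see the error the aggregation layer returns
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"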


3. Prometheus

metrics-server only covers CPU and memory; other metrics, such as user-defined monitoring metrics, are beyond its reach. That is where another component, Prometheus, comes in.

(architecture diagram: the Prometheus monitoring stack on Kubernetes)

node_exporter is the agent that runs on each node;

PromQL plays the role of SQL statements for querying the data;

k8s-prometheus-adapter: Kubernetes cannot consume Prometheus metrics directly, so k8s-prometheus-adapter is needed to translate them into an API;

kube-state-metrics turns the state of Kubernetes objects into metrics and consolidates that data;
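
As a small illustration of the PromQL role mentioned above, per-node CPU usage can be expressed as a query and run against the Prometheus HTTP API. A sketch, assuming the prom namespace used below and the default cluster.local DNS domain; note the metric name depends on the node_exporter version (node_cpu in older releases, node_cpu_seconds_total in newer ones):

# run a PromQL query through the Prometheus HTTP API (execute from somewhere that can reach the Service)
curl -G -s 'http://prometheus.prom.svc:9090/api/v1/query' \
  --data-urlencode 'query=1 - avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) by (instance)'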

Prometheus add-on in the kubernetes repository: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/prometheus

Ma Ge's (MageEdu) Prometheus project: https://github.com/ikubernetes/k8s-prom

1) Deploy node_exporter

[root@master metrics]# git clone https://github.com/iKubernetes/k8s-prom.git

[root@master metrics]# cd k8s-prom/

[root@master k8s-prom]# ls
k8s-prometheus-adapter  kube-state-metrics  namespace.yaml  node_exporter  podinfo  prometheus  README.md

# create a namespace called prom
[root@master k8s-prom]# kubectl apply -f namespace.yaml 
namespace/prom created

# deploy node_exporter
[root@master k8s-prom]# cd node_exporter/

[root@master node_exporter]# ls
node-exporter-ds.yaml  node-exporter-svc.yaml

[root@master node_exporter]# kubectl apply -f ./
daemonset.apps/prometheus-node-exporter created
service/prometheus-node-exporter created

[root@master ~]# kubectl get pods -n prom
NAME                             READY   STATUS    RESTARTS   AGE
prometheus-node-exporter-5tfbz   1/1     Running   0          107s
prometheus-node-exporter-6rl8k   1/1     Running   0          107s
prometheus-node-exporter-rkx47   1/1     Running   0          107s

2) Deploy Prometheus:

[root@master k8s-prom]# cd prometheus/

# prometheus-deploy.yaml defines a memory limit for the Prometheus container; if there is not enough memory, this constraint can be removed, otherwise scheduling fails like this:
[root@master ~]# kubectl describe pods prometheus-server-76dc8df7b-75vbp -n prom
0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient memory.
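
For reference, the piece to adjust in prometheus-deploy.yaml is the resources stanza of the Prometheus container; a sketch of its shape (the actual values in the repository may differ), which can be shrunk or removed on a memory-constrained cluster:

        resources:
          limits:
            memory: 2Gi        # illustrative value; lower or remove it if nodes lack memory
          requests:
            cpu: 200m
            memory: 2Gi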

[root@master prometheus]# kubectl apply -f ./
configmap/prometheus-config created
deployment.apps/prometheus-server created
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/prometheus created

# list all resources in the prom namespace
[root@master ~]# kubectl get all -n prom
NAME                                     READY   STATUS    RESTARTS   AGE
pod/prometheus-node-exporter-5tfbz       1/1     Running   0          15m
pod/prometheus-node-exporter-6rl8k       1/1     Running   0          15m
pod/prometheus-node-exporter-rkx47       1/1     Running   0          15m
pod/prometheus-server-556b8896d6-cztlk   1/1     Running   0          3m5s    # the Pod is up

NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/prometheus                 NodePort    10.99.240.192   <none>        9090:30090/TCP   9m55s
service/prometheus-node-exporter   ClusterIP   None            <none>        9100/TCP         15m

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/prometheus-node-exporter   3         3         3       3            3           <none>          15m

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/prometheus-server   1/1     1            1           3m5s

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/prometheus-server-556b8896d6   1         1         1       3m5s

Because the Service type is NodePort, it can be accessed directly from outside the cluster.

Open in a browser: http://192.168.3.102:30090

192.168.3.102 is the address of any worker node (not the master).

(screenshot: Prometheus web UI)

In production, Prometheus should be deployed with persistent storage (PV + PVC).
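
A minimal sketch of what that could look like, assuming an NFS server is available (the server address and path below are hypothetical) and that the Prometheus Deployment is then changed to mount the claim at its data directory instead of an emptyDir:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-data
spec:
  capacity:
    storage: 20Gi
  accessModes: ["ReadWriteOnce"]
  nfs:
    server: 192.168.3.200      # hypothetical NFS server
    path: /data/prometheus
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data
  namespace: prom
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi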

3) Deploy kube-state-metrics

[root@master k8s-prom]# cd kube-state-metrics/

[root@master kube-state-metrics]# ls
kube-state-metrics-deploy.yaml  kube-state-metrics-rbac.yaml  kube-state-metrics-svc.yaml

# create; the required images can be pulled from Aliyun and re-tagged
[root@master kube-state-metrics]# kubectl apply -f ./
deployment.apps/kube-state-metrics created
serviceaccount/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
service/kube-state-metrics created

[root@master ~]# kubectl get all -n prom
NAME                                      READY   STATUS    RESTARTS   AGE
pod/kube-state-metrics-5dbf8d5979-cc2pk   1/1     Running   0          20s
pod/prometheus-node-exporter-5tfbz        1/1     Running   0          74m
pod/prometheus-node-exporter-6rl8k        1/1     Running   0          74m
pod/prometheus-node-exporter-rkx47        1/1     Running   0          74m
pod/prometheus-server-556b8896d6-qk8jc    1/1     Running   0          48m

NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/kube-state-metrics         ClusterIP   10.98.0.63      <none>        8080/TCP         20s
service/prometheus                 NodePort    10.111.85.219   <none>        9090:30090/TCP   48m
service/prometheus-node-exporter   ClusterIP   None            <none>        9100/TCP         74m

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/prometheus-node-exporter   3         3         3       3            3           <none>          74m

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kube-state-metrics   1/1     1            1           20s
deployment.apps/prometheus-server    1/1     1            1           48m

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/kube-state-metrics-5dbf8d5979   1         1         1       20s
replicaset.apps/prometheus-server-556b8896d6    1         1         1       48m

4) Deploy k8s-prometheus-adapter

A self-signed certificate is needed first:

[root@master ~]# cd /etc/kubernetes/pki/

[root@master pki]# (umask 077; openssl genrsa -out serving.key 2048)
Generating RSA private key, 2048 bit long modulus
................+++
...+++
e is 65537 (0x10001)

Create it:

# certificate signing request
[root@master pki]# openssl req -new -key serving.key -out serving.csr -subj "/CN=serving"

# sign the certificate:
[root@master pki]# openssl x509 -req -in serving.csr -CA ./ca.crt -CAkey ./ca.key -CAcreateserial -out serving.crt -days 3650
Signature ok
subject=/CN=serving
Getting CA Private Key

[root@master k8s-prometheus-adapter]# pwd
/root/manifests/metrics/k8s-prom/k8s-prometheus-adapter

[root@master k8s-prometheus-adapter]# tail -n 4 custom-metrics-apiserver-deployment.yaml 
      volumes:
      - name: volume-serving-cert
        secret:
          secretName: cm-adapter-serving-certs    # the secret name referenced here must match the one created below

# create the secret from the certificate files:
[root@master pki]# kubectl create secret generic cm-adapter-serving-certs --from-file=serving.crt=./serving.crt --from-file=serving.key=./serving.key -n prom
secret/cm-adapter-serving-certs created

[root@master pki]# kubectl get secrets -n prom
NAME                             TYPE                                  DATA   AGE
cm-adapter-serving-certs         Opaque                                2      23s
default-token-4jlsz              kubernetes.io/service-account-token   3      17h
kube-state-metrics-token-klc7q   kubernetes.io/service-account-token   3      16h
prometheus-token-qv598           kubernetes.io/service-account-token   3      17h

Deploy k8s-prometheus-adapter:

# the latest custom-metrics-apiserver-deployment.yaml and custom-metrics-config-map.yaml need to be downloaded

# first back up the existing file in the directory
[root@master k8s-prometheus-adapter]# mv custom-metrics-apiserver-deployment.yaml{,.bak}

# fetch the two files
[root@master k8s-prometheus-adapter]# wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/master/deploy/manifests/custom-metrics-apiserver-deployment.yaml

[root@master k8s-prometheus-adapter]# wget https://raw.githubusercontent.com/DirectXMan12/k8s-prometheus-adapter/master/deploy/manifests/custom-metrics-config-map.yaml

# change the namespace field in both files to prom

# create
[root@master k8s-prometheus-adapter]# kubectl apply -f ./
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/custom-metrics-auth-reader created
deployment.apps/custom-metrics-apiserver created
clusterrolebinding.rbac.authorization.k8s.io/custom-metrics-resource-reader created
serviceaccount/custom-metrics-apiserver created
service/custom-metrics-apiserver created
apiservice.apiregistration.k8s.io/v1beta1.custom.metrics.k8s.io created
clusterrole.rbac.authorization.k8s.io/custom-metrics-server-resources created
configmap/adapter-config created
clusterrole.rbac.authorization.k8s.io/custom-metrics-resource-reader created
clusterrolebinding.rbac.authorization.k8s.io/hpa-controller-custom-metrics created

# check
[root@master ~]# kubectl get all -n prom
NAME                                          READY   STATUS    RESTARTS   AGE
pod/custom-metrics-apiserver-c86bfc77-6hgjh   1/1     Running   0          50s
pod/kube-state-metrics-5dbf8d5979-cc2pk       1/1     Running   0          16h
pod/prometheus-node-exporter-5tfbz            1/1     Running   0          18h
pod/prometheus-node-exporter-6rl8k            1/1     Running   0          18h
pod/prometheus-node-exporter-rkx47            1/1     Running   0          18h
pod/prometheus-server-556b8896d6-qk8jc        1/1     Running   0          17h

NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/custom-metrics-apiserver   ClusterIP   10.96.223.14    <none>        443/TCP          51s
service/kube-state-metrics         ClusterIP   10.98.0.63      <none>        8080/TCP         16h
service/prometheus                 NodePort    10.111.85.219   <none>        9090:30090/TCP   17h
service/prometheus-node-exporter   ClusterIP   None            <none>        9100/TCP         18h

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/prometheus-node-exporter   3         3         3       3            3           <none>          18h

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/custom-metrics-apiserver   1/1     1            1           53s
deployment.apps/kube-state-metrics         1/1     1            1           16h
deployment.apps/prometheus-server          1/1     1            1           17h

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/custom-metrics-apiserver-c86bfc77   1         1         1       52s
replicaset.apps/kube-state-metrics-5dbf8d5979       1         1         1       16h
replicaset.apps/prometheus-server-556b8896d6        1         1         1       17h

[root@master ~]# kubectl get cm -n prom
NAME                DATA   AGE
adapter-config      1      60s
prometheus-config   1      17h

All the resources are up.

# check the API groups
[root@master ~]# kubectl api-versions |grep custom
custom.metrics.k8s.io/v1beta1            # this group is now present
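
With the adapter registered, the metrics it exposes can be listed straight from the aggregated API; a quick check (a sketch; jq is only used for readability, and the http_requests metric used later only shows up if some workload actually exports it):

# list the custom metrics served by the adapter
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq '.resources[].name'
# read one custom metric for all Pods in the prom namespace
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/prom/pods/*/http_requests" | jq .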

5) Integrate Prometheus with Grafana

(1) Get grafana.yaml

https://raw.githubusercontent.com/kubernetes-retired/heapster/master/deploy/kube-config/influxdb/grafana.yaml

(2) Edit the YAML file

[root@master metrics]# vim grafana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: prom        # change the namespace here to prom
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
      k8s-app: grafana

  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: angelnu/heapster-grafana:v5.0.4
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        #- name: INFLUXDB_HOST        # comment out this line
        #  value: monitoring-influxdb        # comment out this line
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: prom        # change the namespace here to prom
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  type: NodePort        # add this line
  selector:
    k8s-app: grafana

(3) Create Grafana and point it at Prometheus

[root@master metrics]# kubectl apply -f grafana.yaml 
deployment.apps/monitoring-grafana created
service/monitoring-grafana created

[root@master metrics]# kubectl get pods -n prom |grep grafana
monitoring-grafana-8549b985b6-zghcj       1/1     Running   0          108s

[root@master metrics]# kubectl get svc -n prom |grep grafana
monitoring-grafana         NodePort    10.101.124.148   <none>        80:31808/TCP     118s      # NodePort: port 31808 is reachable from outside

Grafana is running on node02:

[root@master pki]# kubectl get pods -n prom -o wide |grep grafana
monitoring-grafana-8549b985b6-zghcj       1/1     Running   0          27m   10.244.2.58     node02   <none>           <none>

Open it in an external browser:

(screenshot: Grafana home page)

[root@master ~]# kubectl get svc -n prom -o wide |grep prometheus
prometheus                 NodePort    10.111.85.219    <none>        9090:30090/TCP   41h   app=prometheus,component=server

Then edit the highlighted fields, i.e. the Prometheus data source settings:

(screenshot: Grafana data source configuration)
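
In case the screenshot is unavailable: the fields being edited are the Prometheus data source settings. A sketch of values that fit this deployment, assuming the default cluster.local DNS domain (adjust to your own environment):

Name:   prometheus
Type:   Prometheus
URL:    http://prometheus.prom.svc:9090     # or http://<node-ip>:30090 via the NodePort
Access: proxy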

Once the settings above pass the test, click "Dashboards" and import all three bundled templates.


As shown below, some monitoring data is already coming in:

(screenshot: imported dashboard showing cluster metrics)

You can also download additional dashboard templates:

https://grafana.com/dashboards

https://grafana.com/dashboards?dataSource=prometheus&search=kubernetes


Then import them:



4. HPA (Horizontal Pod Autoscaling)

(1)

Horizontal Pod Autoscaling can automatically scale the number of Pods in a ReplicationController, Deployment, or ReplicaSet based on CPU utilization (memory being a non-compressible resource).

HPA currently comes in two major versions; the v1 version only supports scaling on the core metrics (CPU).

[root@master ~]# kubectl api-versions |grep autoscaling
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2

(2) Next, create a Pod with resource limits from the command line:

[root@master ~]# kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=1 --requests='cpu=50m,memory=256Mi' --limits='cpu=50m,memory=256Mi' --labels='app=myapp' --expose --port=80
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
service/myapp created
deployment.apps/myapp created

[root@master ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
myapp-657fb86dd-nkhhx   1/1     Running   0          56s

(3) Now let the myapp Pod scale horizontally on its own with kubectl autoscale, which is really just creating an HPA controller:

# view the help
[root@master ~]# kubectl autoscale -h 

# create
[root@master ~]# kubectl autoscale deployment myapp --min=1 --max=8 --cpu-percent=60
horizontalpodautoscaler.autoscaling/myapp autoscaled

--min: minimum number of Pods    --max: maximum number of Pods
--cpu-percent: target CPU utilization percentage

# check the HPA
[root@master ~]# kubectl get hpa
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
myapp   Deployment/myapp   0%/60%    1         8         1          64s
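
For reference, the declarative equivalent of the kubectl autoscale command above would be roughly the following autoscaling/v1 manifest (a sketch, not taken from the original article):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 8
  targetCPUUtilizationPercentage: 60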

[root@master ~]# kubectl get svc |grep myapp
myapp        ClusterIP   10.107.17.18   <none>        80/TCP    7m46s

# change the Service to type NodePort:
[root@master ~]# kubectl patch svc myapp -p '{"spec":{"type": "NodePort"}}'
service/myapp patched

[root@master ~]# kubectl get svc |grep myapp
myapp        NodePort    10.107.17.18   <none>        80:31043/TCP   9m

Now stress-test the Pod and see whether it scales out:

# install the ab benchmarking tool
[root@master ~]# yum -y install httpd-tools

# run the load test
[root@master ~]# ab -c 1000 -n 50000000 http://192.168.3.100:31043/index.html

# while the load test runs, the Pod's CPU utilization reaches 102%, so it needs to scale out to 2 Pods:
[root@master ~]# kubectl describe hpa |grep -A 3 "resource cpu" 
  resource cpu on pods  (as a percentage of request):  102% (51m) / 60%
Min replicas:                                          1
Max replicas:                                          8
Deployment pods:                                       1 current / 2 desired

# it has scaled out to two Pods
[root@master ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
myapp-657fb86dd-k4jdg   1/1     Running   0          62s
myapp-657fb86dd-nkhhx   1/1     Running   0          110m
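
The desired count follows the standard HPA formula: desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization) = ceil(1 × 102 / 60) = 2, which matches the "1 current / 2 desired" line above.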

# once the load test ends and CPU usage drops, the Pod count automatically shrinks back to 1, as shown below
[root@master ~]# kubectl describe hpa |grep -A 3 "resource cpu" 
  resource cpu on pods  (as a percentage of request):  0% (0) / 60%
Min replicas:                                          1
Max replicas:                                          8
Deployment pods:                                       1 current / 1 desired

[root@master ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
myapp-657fb86dd-nkhhx   1/1     Running   0          116m

# if CPU usage had kept climbing, the Pod count would have scaled out even further

(4) HPA v2

The example above used HPA v1, which can only scale Pods horizontally based on CPU utilization.
Next we look at HPA v2, which can also scale Pods based on the utilization of custom metrics.

# delete the HPA created above
[root@master ~]# kubectl delete hpa myapp
horizontalpodautoscaler.autoscaling "myapp" deleted

# HPA v2 resource manifest
[root@master hpav2]# vim hpa-v2-demo.yaml

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef:        # the object to be autoscaled
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1        # minimum number of replicas
  maxReplicas: 10        # maximum number of replicas
  metrics:            # which metrics to evaluate
  - type: Resource        # evaluate based on a resource metric
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50        # scale out when average Pod CPU utilization exceeds 50%

# create
[root@master hpav2]# kubectl apply -f hpa-v2-demo.yaml 
horizontalpodautoscaler.autoscaling/myapp-hpa-v2 created

[root@master ~]# kubectl get hpa
NAME           REFERENCE          TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
myapp-hpa-v2   Deployment/myapp   <unknown>/50%   1         10        0          9s

Stress-test the Pod again and see whether it scales out:

[root@master hpav2]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
myapp-657fb86dd-nkhhx   1/1     Running   0          3h16m

# run the load test
[root@master ~]# ab -c 1000 -n 80000000 http://192.168.3.100:31043/index.html

# CPU utilization has reached 100%
[root@master ~]# kubectl describe hpa |grep -A 3 "resource cpu" 
  resource cpu on pods  (as a percentage of request):  100% (50m) / 50%
Min replicas:                                          1
Max replicas:                                          10
Deployment pods:                                       1 current / 2 desired

# the Pods have automatically scaled out to two
[root@master hpav2]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
myapp-657fb86dd-fkdxq   1/1     Running   0          27s
myapp-657fb86dd-nkhhx   1/1     Running   0          3h19m

# after the load test ends and resource usage stays normal for a while, the Pod count shrinks back to its normal level

(5) HPA v2 can scale the Pod count on CPU and memory usage, and also on other parameters such as the number of concurrent HTTP requests:

[root@master hpa]# vim hpa-v2-custom.yaml
apiVersion: autoscaling/v2beta2   # still the HPA v2 API
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa-v2
spec:
  scaleTargetRef:        # the object to be autoscaled
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1         # minimum number of replicas
  maxReplicas: 10
  metrics:               # which metrics to evaluate
  - type: Pods           # evaluate based on a per-Pod metric
    pods:
      metric:
        name: http_requests          # custom metric exposed through the adapter
      target:
        type: AverageValue
        averageValue: 800m           # target average value of http_requests per Pod

That is the HPA v2 flavor; it is worth digging into further when the need arises.
