Configure and Install Heapster
Download the latest heapster release from the heapster release page:

```shell
$ wget https://github.com/kubernetes/heapster/archive/v1.3.0.zip
$ unzip v1.3.0.zip    # extracts to heapster-1.3.0/
```

The definition files live under heapster-1.3.0/deploy/kube-config/influxdb:

```shell
$ cd heapster-1.3.0/deploy/kube-config/influxdb
$ ls *.yaml
grafana-deployment.yaml   grafana-service.yaml
heapster-deployment.yaml  heapster-service.yaml
influxdb-deployment.yaml  influxdb-service.yaml
heapster-rbac.yaml
```
The yaml files I use here have already been modified; see the heapster repository.

We created the RBAC configuration for heapster ourselves: heapster-rbac.yaml.
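For reference, a sketch of what heapster-rbac.yaml typically contains: a ServiceAccount plus a ClusterRoleBinding to the `system:heapster` ClusterRole that ships with Kubernetes. The exact role name and the placement of the ServiceAccount here are assumptions; check the file in the repository for the authoritative version.

```yaml
# Sketch of heapster-rbac.yaml (assumed layout, not copied from the repo):
# the ServiceAccount referenced by heapster-deployment.yaml, bound to the
# built-in system:heapster ClusterRole.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
```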
Configure grafana-deployment
```
# cat grafana-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: index.tenxcloud.com/jimmy/heapster-grafana-amd64:v4.0.2
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GRAFANA_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
          #value: /
      volumes:
      - name: grafana-storage
        emptyDir: {}
```
- If you will later access the grafana dashboard through kube-apiserver or kubectl proxy, GF_SERVER_ROOT_URL must be set to /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/; otherwise grafana will later report that the page http://192.168.1.121:8086/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/api/dashboards/home cannot be found;
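Both access paths go through the apiserver proxy, whose URL follows a fixed pattern. A minimal sketch of that pattern (`proxy_url` is a hypothetical helper for illustration, not part of heapster or kubectl):

```shell
# The apiserver proxy URL pattern used throughout this guide:
#   <apiserver>/api/v1/proxy/namespaces/<namespace>/services/<service>/
proxy_url() {
  local apiserver=$1 namespace=$2 service=$3
  echo "${apiserver}/api/v1/proxy/namespaces/${namespace}/services/${service}/"
}

# Build the grafana URL for this cluster's secure apiserver endpoint.
proxy_url "https://192.168.1.121:6443" kube-system monitoring-grafana
```

GF_SERVER_ROOT_URL must match the path portion of this URL so that grafana generates links relative to the proxy prefix.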
Configure heapster-deployment
```
# cat heapster-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: index.tenxcloud.com/jimmy/heapster-amd64:v1.3.0-beta.1
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb:8086
```
Configure influxdb-deployment
influxdb officially recommends querying the database through the command line or the HTTP API. Starting with v1.1.0 the admin UI is disabled by default, and the admin UI plugin will be removed in a later release.

To enable the admin UI in the image: first export the influxdb config file from the image, enable the admin plugin, write the config file contents into a ConfigMap, and finally mount it into the container so that it overrides the original config:

Note: the yaml directory already contains the modified ConfigMap definition file.
```
# # Export the influxdb config file from the image
# docker run --rm --entrypoint 'cat' -ti lvanneo/heapster-influxdb-amd64:v1.1.1 /etc/config.toml >config.toml.orig
# cp config.toml.orig config.toml

# # Edit: enable the admin interface
# vim config.toml
# diff config.toml.orig config.toml
35c35
< enabled = false
---
> enabled = true

# # Write the modified config into a ConfigMap object
# kubectl create configmap influxdb-config --from-file=config.toml -n kube-system
configmap "influxdb-config" created

# # Mount the config file from the ConfigMap into the Pod to override the original config
# cat influxdb-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: index.tenxcloud.com/jimmy/heapster-influxdb-amd64:v1.1.1
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
        - mountPath: /etc/
          name: influxdb-config
      volumes:
      - name: influxdb-storage
        emptyDir: {}
      - name: influxdb-config
        configMap:
          name: influxdb-config
```
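The vim step can also be scripted. A sketch assuming the stock config.toml layout, where `enabled = false` sits directly under the `[admin]` section header (as the diff shows); the sample file below is a small stand-in for the exported config, not the real image contents:

```shell
# Stand-in for the exported config.toml; the real file has many more sections.
cat > config.toml <<'EOF'
[admin]
  enabled = false
  bind-address = ":8083"
EOF

# Flip "enabled" only within the [admin] section (up to the next section header),
# so other sections that also have an "enabled" key are left untouched.
sed -i '/^\[admin\]/,/^\[.*\]$/ s/enabled = false/enabled = true/' config.toml
grep 'enabled' config.toml
```

The section-scoped range keeps the edit safe if the file later grows more `enabled` keys.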
Configure the monitoring-influxdb Service
```
# cat influxdb-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 8086
    targetPort: 8086
    name: http
  - port: 8083
    targetPort: 8083
    name: admin
  selector:
    k8s-app: influxdb
```
- The Service type is defined as NodePort, with an extra mapping added for the admin port so the influxdb admin UI can be reached from a browser later;
Apply all the definition files
```
# pwd
/root/yaml/heapster
# ls *.yaml
grafana-service.yaml     heapster-rbac.yaml        influxdb-cm.yaml       influxdb-service.yaml
grafana-deployment.yaml  heapster-deployment.yaml  heapster-service.yaml  influxdb-deployment.yaml
# kubectl create -f .
deployment "monitoring-grafana" created
service "monitoring-grafana" created
deployment "heapster" created
serviceaccount "heapster" created
clusterrolebinding "heapster" created
service "heapster" created
configmap "influxdb-config" created
deployment "monitoring-influxdb" created
service "monitoring-influxdb" created
```
Check the results

Check the Deployments:
```
# kubectl get deployments -n kube-system | grep -E 'heapster|monitoring'
heapster              1         1         1            1           10m
monitoring-grafana    1         1         1            1           10m
monitoring-influxdb   1         1         1            1           10m
```
Check the Pods:
```
# kubectl get pods -n kube-system | grep -E 'heapster|monitoring'
heapster-2291216627-qqm6s              1/1       Running   0          10m
monitoring-grafana-2490289118-v1brc    1/1       Running   0          10m
monitoring-influxdb-1450237832-zst7c   1/1       Running   0          10m
```
Check the kubernetes dashboard; it should now display CPU, memory, and load utilization graphs for each Node and Pod.
Access grafana
- Access via kube-apiserver:

Get the monitoring-grafana service URL:
```
# kubectl cluster-info
Kubernetes master is running at https://192.168.1.121:6443
Heapster is running at https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
monitoring-grafana is running at https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
monitoring-influxdb is running at https://192.168.1.121:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
Browser URL: `http://192.168.1.121:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana`
- Access via kubectl proxy:

Create the proxy:
```
# kubectl proxy --address='192.168.1.121' --port=8086 --accept-hosts='^*$'
Starting to serve on 192.168.1.121:8086
```
Browser URL: `http://192.168.1.121:8086/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana`
Access the influxdb admin UI

Get the NodePort that influxdb's http port 8086 is mapped to:
```
# kubectl get svc -n kube-system | grep influxdb
monitoring-influxdb   10.254.193.23   <nodes>   8086:31765/TCP,8083:32494/TCP   51m
```
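The NodePort can be pulled out of that output with standard text tools rather than by eye. A sketch that operates on a copy of the line above; in a live cluster you would pipe `kubectl get svc` into the same filter:

```shell
# Sample "kubectl get svc" line as shown above; the PORT(S) column maps
# service port 8086 to NodePort 31765 and port 8083 to NodePort 32494.
svc_line='monitoring-influxdb   10.254.193.23   <nodes>   8086:31765/TCP,8083:32494/TCP   51m'

# Extract the NodePort paired with the http port 8086.
node_port=$(echo "$svc_line" | grep -o '8086:[0-9]*' | cut -d: -f2)
echo "$node_port"
```

This is the value that goes into the admin UI's Port field in the last step below.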
Access the influxdb admin UI through the kube-apiserver insecure port: http://192.168.1.121:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb:8083/

You can also use the proxy's port 8086 to access the admin UI: http://192.168.1.121:8086/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb:8083/
In the page's "Connection Settings", enter the node IP in Host and the NodePort mapped to 8086 in Port (31765 above), then click "Save" (in my cluster the address is 192.168.1.121:31765).