This article walks through GPU-Operator from principles to practice, step by step: an introduction to the GPU-Operator concept, installation and deployment, deploying a deep-learning training test application, and connecting GPU monitoring to KubeSphere with a custom monitoring dashboard.
As is well known, the Kubernetes platform provides access to special hardware resources such as NVIDIA GPUs, NICs, InfiniBand adapters, and other devices through its device plugin framework. However, configuring and managing nodes with these hardware resources requires setting up multiple software components, such as drivers, container runtimes, and other dependent libraries, which is difficult and error-prone.
NVIDIA GPU Operator, open-sourced by NVIDIA, uses the Operator control pattern of the Kubernetes platform to automate the integration and management of the NVIDIA device components required for GPUs, effectively solving the GPU integration pain points described above. These components include the NVIDIA driver (to enable CUDA), the Kubernetes device plugin for GPUs, the NVIDIA Container Runtime, automatic node labelling, and DCGM-based monitoring.
NVIDIA GPU Operator not only integrates devices and components into a single package, it also makes managing GPU nodes as convenient as managing CPU nodes, with no need to provision a special operating system for GPU nodes. Notably, it containerizes each GPU component to provide GPU capability, which makes it well suited to quickly scaling out and managing fleets of GPU nodes. Of course, for scenarios where a dedicated operating system has already been built around the GPU components, it is not such a good fit.
As mentioned above, NVIDIA GPU Operator makes managing GPU nodes as convenient as managing CPU nodes. How does it achieve this?
Let's take a look at the runtime architecture diagram of GPU-Operator:
As the diagram describes, GPU-Operator works through the NVIDIA container runtime it provides: taking `runC` as its input, it injects a script named `nvidia-container-toolkit` into `runC`'s `preStart hook`. That script calls the `libnvidia-container CLI` with a set of appropriate `flags`, so that the container has GPU capability once it is running.
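To make the mechanism concrete, the sketch below shows one way to inspect the injected hook in a running GPU container's OCI spec on the node. The bundle path and the `<container-id>` are placeholders (the exact layout differs between Docker and containerd setups and between versions), and the output shown is illustrative only:

```
# Locate the container's OCI config.json on the node and print its prestart hooks.
# The path below is a containerd-style placeholder; adjust it to your runtime layout.
$ sudo jq '.hooks.prestart' /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container-id>/config.json
[
  {
    "path": "/usr/bin/nvidia-container-toolkit",
    "args": ["nvidia-container-toolkit", "prestart"]
  }
]
```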
Before installing GPU Operator, prepare the installation environment as follows:

- Nodes must not have the NVIDIA components pre-installed (`driver`, `container runtime`, `device plugin`);
- Nodes must have a container engine configured: `Docker`, `cri-o`, or `containerd`. For Docker, you can refer to the configuration steps below;
- The `nouveau driver` must be blacklisted on the node, which requires updating the `initramfs`:

```
$ sudo vim /etc/modprobe.d/blacklist.conf
# append the blacklist entries at the end of the file
blacklist nouveau
options nouveau modeset=0

$ sudo update-initramfs -u
$ reboot
$ lsmod | grep nouveau    # verify that nouveau is disabled
$ cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c
# the processor used for the tests in this article is a Broadwell part
     16  Intel Core Processor (Broadwell)
```
- If Node Feature Discovery (NFD) is already running in the cluster, set the `Helm chart` variable `nfd.enabled` to `false` before installing.

The supported Linux distributions are listed below:

| OS Name / Version | Identifier | amd64 / x86_64 | ppc64le | arm64 / aarch64 |
| --- | --- | --- | --- | --- |
| Amazon Linux 1 | amzn1 | X | | |
| Amazon Linux 2 | amzn2 | X | | |
| Amazon Linux 2017.09 | amzn2017.09 | X | | |
| Amazon Linux 2018.03 | amzn2018.03 | X | | |
| Open Suse Leap 15.0 | sles15.0 | X | | |
| Open Suse Leap 15.1 | sles15.1 | X | | |
| Debian Linux 9 | debian9 | X | | |
| Debian Linux 10 | debian10 | X | | |
| Centos 7 | centos7 | X | X | |
| Centos 8 | centos8 | X | X | X |
| RHEL 7.4 | rhel7.4 | X | X | |
| RHEL 7.5 | rhel7.5 | X | X | |
| RHEL 7.6 | rhel7.6 | X | X | |
| RHEL 7.7 | rhel7.7 | X | X | |
| RHEL 8.0 | rhel8.0 | X | X | X |
| RHEL 8.1 | rhel8.1 | X | X | X |
| RHEL 8.2 | rhel8.2 | X | X | X |
| Ubuntu 16.04 | ubuntu16.04 | X | X | |
| Ubuntu 18.04 | ubuntu18.04 | X | X | X |
| Ubuntu 20.04 | ubuntu20.04 | X | X | X |
The supported container runtimes are listed below:

| OS Name / Version | amd64 / x86_64 | ppc64le | arm64 / aarch64 |
| --- | --- | --- | --- |
| Docker 18.09 | X | X | X |
| Docker 19.03 | X | X | X |
| RHEL/CentOS 8 podman | X | | |
| CentOS 8 Docker | X | | |
| RHEL/CentOS 7 Docker | X | | |
To install Docker itself, you can refer to the official Docker documentation.
Configure the stable repository and the GPG key:
```
$ distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
   && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
   && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
```
Update the package repository, install `nvidia-docker2`, and add the runtime configuration:
```
$ sudo apt-get update
$ sudo apt-get install -y nvidia-docker2
-----
What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
-----
# On a fresh installation, choose N when the interactive prompt above appears.
# Choosing Y would overwrite some of your existing default configuration.
# After choosing N, add the following configuration to /etc/docker/daemon.json:
{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```
Restart `docker`:
```
$ sudo systemctl restart docker
```
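Optionally, as a quick sanity check, you can confirm that the `nvidia` runtime is now registered with Docker (the output below is abbreviated and will vary by Docker version):

```
$ docker info | grep -i runtime
 Runtimes: nvidia runc
 Default Runtime: runc
```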
Install Helm:

```
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 \
   && chmod 700 get_helm.sh \
   && ./get_helm.sh
```
Add the `helm` repository:
```
$ helm repo add nvidia https://nvidia.github.io/gpu-operator \
   && helm repo update
```
Create the namespace and install GPU Operator with the default configuration (Docker as the container runtime):

```
$ kubectl create ns gpu-operator-resources
$ helm install gpu-operator nvidia/gpu-operator -n gpu-operator-resources --wait
```
If you need to pin a specific driver version, you can do it as follows:
```
$ helm install gpu-operator nvidia/gpu-operator -n gpu-operator-resources \
   --set driver.version="450.80.02"
```
To use `crio` as the container runtime:

```
helm install gpu-operator nvidia/gpu-operator -n gpu-operator-resources \
   --set operator.defaultRuntime=crio
```
To use `containerd` as the container runtime:

```
helm install gpu-operator nvidia/gpu-operator -n gpu-operator-resources \
   --set operator.defaultRuntime=containerd
```

Furthermore, when setting `containerd` as the `defaultRuntime`, the following options are also available:

```
toolkit:
  env:
  - name: CONTAINERD_CONFIG
    value: /etc/containerd/config.toml
  - name: CONTAINERD_SOCKET
    value: /run/containerd/containerd.sock
  - name: CONTAINERD_RUNTIME_CLASS
    value: nvidia
  - name: CONTAINERD_SET_AS_DEFAULT
    value: "true"
```
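If you prefer not to maintain a separate values file, the same toolkit options can be passed on the command line with Helm's standard `--set` array syntax. The sketch below simply mirrors the values snippet above and assumes the default containerd paths:

```
helm install gpu-operator nvidia/gpu-operator -n gpu-operator-resources \
   --set operator.defaultRuntime=containerd \
   --set toolkit.env[0].name=CONTAINERD_CONFIG \
   --set toolkit.env[0].value=/etc/containerd/config.toml \
   --set toolkit.env[1].name=CONTAINERD_SOCKET \
   --set toolkit.env[1].value=/run/containerd/containerd.sock \
   --set toolkit.env[2].name=CONTAINERD_RUNTIME_CLASS \
   --set toolkit.env[2].value=nvidia \
   --set toolkit.env[3].name=CONTAINERD_SET_AS_DEFAULT \
   --set-string toolkit.env[3].value=true   # --set-string keeps the value a string
```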
Because the images involved are quite large, the first installation may time out; check whether your images are still being pulled. You can also consider an offline installation to avoid this kind of problem; see the link on offline installation.
To install with a customized `values.yaml` (for example, for an offline installation):

```
$ helm install gpu-operator nvidia/gpu-operator -n gpu-operator-resources -f values.yaml
```
Check the state of the deployed components:

```
$ kubectl get pods -n gpu-operator-resources
NAME                                                          READY   STATUS      RESTARTS   AGE
gpu-feature-discovery-4gk78                                   1/1     Running     0          35s
gpu-operator-858fc55fdb-jv488                                 1/1     Running     0          2m52s
gpu-operator-node-feature-discovery-master-7f9ccc4c7b-2sg6r   1/1     Running     0          2m52s
gpu-operator-node-feature-discovery-worker-cbkhn              1/1     Running     0          2m52s
gpu-operator-node-feature-discovery-worker-m8jcm              1/1     Running     0          2m52s
nvidia-container-toolkit-daemonset-tfwqt                      1/1     Running     0          2m42s
nvidia-dcgm-exporter-mqns5                                    1/1     Running     0          38s
nvidia-device-plugin-daemonset-7npbs                          1/1     Running     0          53s
nvidia-device-plugin-validation                               0/1     Completed   0          49s
nvidia-driver-daemonset-hgv6s                                 1/1     Running     0          2m47s
```
Confirm that the GPU is now an allocatable resource on the node:

```
$ kubectl describe node worker-gpu-001
---
Allocatable:
  cpu:                15600m
  ephemeral-storage:  82435528Ki
  hugepages-2Mi:      0
  memory:             63649242267
  nvidia.com/gpu:     1    # check here
  pods:               110
---
```
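If the cluster has many nodes, a quick way to list GPU capacity per node is a `kubectl` custom-columns query like the sketch below; the column names are just examples and the sample output is illustrative:

```
$ kubectl get nodes -o=custom-columns='NODE:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'
NODE             GPU
worker-gpu-001   1
```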
Next, deploy two test workloads: a CUDA load generator and a Jupyter Notebook with TensorFlow. First create the CUDA load generator manifest:

```
$ cat <<EOF > cuda-load-generator.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dcgmproftester
spec:
  restartPolicy: OnFailure
  containers:
  - name: dcgmproftester11
    image: nvidia/samples:dcgmproftester-2.0.10-cuda11.0-ubuntu18.04
    args: ["--no-dcgm-validation", "-t 1004", "-d 120"]
    resources:
      limits:
        nvidia.com/gpu: 1
    securityContext:
      capabilities:
        add: ["SYS_ADMIN"]
EOF
```
Then fetch the Jupyter Notebook example:

```
$ curl -LO https://nvidia.github.io/gpu-operator/notebook-example.yml
$ cat notebook-example.yml
apiVersion: v1
kind: Service
metadata:
  name: tf-notebook
  labels:
    app: tf-notebook
spec:
  type: NodePort
  ports:
  - port: 80
    name: http
    targetPort: 8888
    nodePort: 30001
  selector:
    app: tf-notebook
---
apiVersion: v1
kind: Pod
metadata:
  name: tf-notebook
  labels:
    app: tf-notebook
spec:
  securityContext:
    fsGroup: 0
  containers:
  - name: tf-notebook
    image: tensorflow/tensorflow:latest-gpu-jupyter
    resources:
      limits:
        nvidia.com/gpu: 1
    ports:
    - containerPort: 8888
```
```
$ kubectl apply -f cuda-load-generator.yaml
pod/dcgmproftester created
$ kubectl apply -f notebook-example.yml
service/tf-notebook created
pod/tf-notebook created
```
Check that the GPU is now in the allocated state:
```
$ kubectl describe node worker-gpu-001
---
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests      Limits
  --------           --------      ------
  cpu                1087m (6%)    1680m (10%)
  memory             1440Mi (2%)   1510Mi (2%)
  ephemeral-storage  0 (0%)        0 (0%)
  nvidia.com/gpu     1             1       # check this
Events:              <none>
```
When GPU tasks are submitted to the platform, the GPU resource goes from allocatable to allocated. Because both tasks request the single available GPU, they run in submission order: the second task starts only after the first one has finished:
```
$ kubectl get pods --watch
NAME             READY   STATUS    RESTARTS   AGE
dcgmproftester   1/1     Running   0          76s
tf-notebook      0/1     Pending   0          58s
------
NAME             READY   STATUS      RESTARTS   AGE
dcgmproftester   0/1     Completed   0          4m22s
tf-notebook      1/1     Running     0          4m4s
```
Get the application's port information:
```
$ kubectl get svc   # get the nodeport of the svc, 30001
gpu-operator-1611672791-node-feature-discovery   ClusterIP   10.233.10.222   <none>   8080/TCP       12h
kubernetes                                       ClusterIP   10.233.0.1      <none>   443/TCP        12h
tf-notebook                                      NodePort    10.233.53.116   <none>   80:30001/TCP   7m52s
```
Check the logs to get the login token:
```
$ kubectl logs tf-notebook
[I 21:50:23.188 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
[I 21:50:23.390 NotebookApp] Serving notebooks from local directory: /tf
[I 21:50:23.391 NotebookApp] The Jupyter Notebook is running at:
[I 21:50:23.391 NotebookApp] http://tf-notebook:8888/?token=3660c9ee9b225458faaf853200bc512ff2206f635ab2b1d9
[I 21:50:23.391 NotebookApp]  or http://127.0.0.1:8888/?token=3660c9ee9b225458faaf853200bc512ff2206f635ab2b1d9
[I 21:50:23.391 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 21:50:23.394 NotebookApp]
    To access the notebook, open this file in a browser:
        file:///root/.local/share/jupyter/runtime/nbserver-1-open.html
    Or copy and paste one of these URLs:
        http://tf-notebook:8888/?token=3660c9ee9b225458faaf853200bc512ff2206f635ab2b1d9
     or http://127.0.0.1:8888/?token=3660c9ee9b225458faaf853200bc512ff2206f635ab2b1d9
```
After opening the `jupyter notebook` environment in a browser (NodePort 30001, using the token above), try opening a terminal and running a deep-learning task:
In the `terminal`, pull the TensorFlow test code and run it:
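The screenshots for this step are not reproduced here; one commonly used option is TensorFlow's `tf_cnn_benchmarks` suite. The commands below are a sketch of that approach (the script targets TF 1.x APIs, so compatibility with the `latest-gpu-jupyter` image is not guaranteed; any GPU training script of your own works just as well):

```
# clone the TensorFlow benchmarks repository and run a ResNet-50 training benchmark on 1 GPU
git clone https://github.com/tensorflow/benchmarks.git
cd benchmarks/scripts/tf_cnn_benchmarks
python tf_cnn_benchmarks.py --num_gpus=1 --model=resnet50 --batch_size=32
```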
At the same time, open another terminal and run `nvidia-smi` to watch GPU utilization:
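For example, to refresh the readings every second:

```
# -l/--loop refreshes the nvidia-smi output at the given interval (seconds)
nvidia-smi -l 1
```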
`gpu-operator` already provides the `nvidia-dcgm-exporter` exporter for us. We only need to add it to the set of targets scraped by `Prometheus`, that is, create a `ServiceMonitor` for it, and we can obtain GPU monitoring data:
```
$ kubectl get pods -n gpu-operator-resources
NAME                                       READY   STATUS      RESTARTS   AGE
gpu-feature-discovery-ff4ng                1/1     Running     2          15h
nvidia-container-toolkit-daemonset-2vxjz   1/1     Running     0          15h
nvidia-dcgm-exporter-pqwfv                 1/1     Running     0          5h27m   # here
nvidia-device-plugin-daemonset-42n74       1/1     Running     0          5h27m
nvidia-device-plugin-validation            0/1     Completed   0          5h27m
nvidia-driver-daemonset-dvd9r              1/1     Running     3          15h
```
You can start a `busybox` pod to inspect the metrics exposed by this `exporter`:
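If such a pod does not exist yet, one way to create it is the sketch below (the pod name `busybox-sleep` matches the `kubectl exec` command used next):

```
# create a long-running busybox pod to use as a debugging shell
$ kubectl run busybox-sleep --image=busybox --restart=Never -- sleep 3600
```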
```
$ kubectl get svc -n gpu-operator-resources
NAME                                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
gpu-operator-node-feature-discovery   ClusterIP   10.233.54.111   <none>        8080/TCP   56m
nvidia-dcgm-exporter                  ClusterIP   10.233.53.196   <none>        9400/TCP   54m

$ kubectl exec -it busybox-sleep -- sh
$ wget http://nvidia-dcgm-exporter.gpu-operator-resources:9400/metrics
$ cat metrics
----
DCGM_FI_DEV_SM_CLOCK{gpu="0",UUID="GPU-eeff7856-475a-2eb7-6408-48d023d9dd28",device="nvidia0",container="tf-notebook",namespace="default",pod="tf-notebook"} 405
DCGM_FI_DEV_MEM_CLOCK{gpu="0",UUID="GPU-eeff7856-475a-2eb7-6408-48d023d9dd28",device="nvidia0",container="tf-notebook",namespace="default",pod="tf-notebook"} 715
DCGM_FI_DEV_GPU_TEMP{gpu="0",UUID="GPU-eeff7856-475a-2eb7-6408-48d023d9dd28",device="nvidia0",container="tf-notebook",namespace="default",pod="tf-notebook"} 30
----
```
Check the `svc` and `ep` (Service and Endpoints) exposed by `nvidia-dcgm-exporter`:
```
$ kubectl describe svc nvidia-dcgm-exporter -n gpu-operator-resources
Name:                     nvidia-dcgm-exporter
Namespace:                gpu-operator-resources
Labels:                   app=nvidia-dcgm-exporter
Annotations:              prometheus.io/scrape: true
Selector:                 app=nvidia-dcgm-exporter
Type:                     NodePort
IP:                       10.233.28.200
Port:                     gpu-metrics  9400/TCP
TargetPort:               9400/TCP
NodePort:                 gpu-metrics  31129/TCP
Endpoints:                10.233.84.54:9400
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
```
Create the `ServiceMonitor` manifest:
```
$ cat custom/gpu-servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nvidia-dcgm-exporter
  namespace: gpu-operator-resources
  labels:
    app: nvidia-dcgm-exporter
spec:
  jobLabel: nvidia-gpu
  endpoints:
  - port: gpu-metrics
    interval: 15s
  selector:
    matchLabels:
      app: nvidia-dcgm-exporter
  namespaceSelector:
    matchNames:
    - gpu-operator-resources

$ kubectl apply -f custom/gpu-servicemonitor.yaml
```
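Optionally, confirm that the resource was created (this assumes the Prometheus Operator CRDs are installed, which is the case on a KubeSphere cluster; the output is illustrative):

```
$ kubectl get servicemonitor -n gpu-operator-resources
NAME                   AGE
nvidia-dcgm-exporter   1m
```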
After submitting the `servicemonitor` to the `kubesphere` platform, we can expose `prometheus-k8s` as a `NodePort` and check in the `Prometheus` `UI` whether the relevant metrics are being collected:
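One way to expose it temporarily is to patch the Service type. The sketch below assumes the KubeSphere 3.0 default layout, where `prometheus-k8s` lives in the `kubesphere-monitoring-system` namespace; adjust the namespace if your deployment differs:

```
# switch the prometheus-k8s Service to NodePort and look up the assigned port
$ kubectl -n kubesphere-monitoring-system patch svc prometheus-k8s -p '{"spec": {"type": "NodePort"}}'
$ kubectl -n kubesphere-monitoring-system get svc prometheus-k8s
```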
If the deployed version is KubeSphere 3.0, only a few simple configuration steps are needed to complete the observability setup.
First, after logging in to the KubeSphere console, create a workspace named `ks-monitoring-demo` (you can choose the name as needed).
Next, assign the target namespace that contains the `ServiceMonitor`, `gpu-operator-resources`, to the existing workspace so that it is brought under monitoring.
Finally, enter the target workspace, find `gpu-operator-resources` among the managed projects, click into it, and open the custom monitoring page, where you can add custom monitoring.
Later versions also offer the option of adding cluster-level monitoring.
Download the `dashboard` and set its `namespace`:
```
$ curl -LO https://raw.githubusercontent.com/kubesphere/monitoring-dashboard/master/contrib/gallery/nvidia-gpu-dcgm-exporter-dashboard.yaml
$ cat nvidia-gpu-dcgm-exporter-dashboard.yaml
----
apiVersion: monitoring.kubesphere.io/v1alpha1
kind: Dashboard
metadata:
  name: nvidia-dcgm-exporter-dashboard-rev1
  namespace: gpu-operator-resources    # check here
spec:
-----
```
You can `apply` it directly from the command line, or import it in edit mode in the custom monitoring panel:
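For the command-line route, that is simply:

```
$ kubectl apply -f nvidia-gpu-dcgm-exporter-dashboard.yaml
```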
After it has been imported correctly:
Run the deep-learning test task again in the `jupyter notebook` created above, and you can clearly observe the corresponding GPU metrics changing:
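If you also want to check the raw series outside the dashboard, you can query the Prometheus HTTP API directly. The sketch below reuses the NodePort exposed earlier (both placeholders must be filled in) and queries `DCGM_FI_DEV_GPU_UTIL`, one of dcgm-exporter's default utilization metrics:

```
$ curl -s 'http://<node-ip>:<prometheus-nodeport>/api/v1/query?query=DCGM_FI_DEV_GPU_UTIL'
```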
To uninstall GPU Operator:

```
$ helm list -n gpu-operator-resources
NAME          NAMESPACE                REVISION   UPDATED                                   STATUS     CHART                APP VERSION
gpu-operator  gpu-operator-resources   1          2021-02-20 11:50:56.162559286 +0800 CST   deployed   gpu-operator-1.5.2   1.5.2

$ helm uninstall gpu-operator -n gpu-operator-resources
```
On a cluster where `gpu-operator` and AI applications are already deployed and running normally, GPUs may become unusable after the GPU host is rebooted. This is most likely because the applications were loaded before the plugin components were ready. In that case, simply make sure the plugin components are running properly first, and then redeploy the applications.
I previously used a combination of https://github.com/NVIDIA/k8s-device-plugin and https://github.com/NVIDIA/gpu-monitoring-tools to monitor GPUs. Compared with the GPU-Operator approach, which one is better?
A: Personally, I find GPU-Operator simpler and easier to use: its built-in GPU injection capability means there is no need to build a dedicated OS, it supports node discovery and is pluggable, and it automates the integration and management of the NVIDIA device components required for GPUs, which saves a lot of effort.
A: You can refer to the official KubeSphere documentation on using custom monitoring.