Kubernetes GPU

https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#deploying-nvidia-gpu-device-plugin

 

1. Install nvidia-docker (Ubuntu 14.04)

https://github.com/NVIDIA/nvidia-docker

 

Remove the old nvidia-docker 1.0 and any GPU containers that depend on it:

docker volume ls -q -f driver=nvidia-docker | xargs -r -I{} -n1 docker ps -q -a -f volume={} | xargs -r docker rm -f
sudo apt-get purge -y nvidia-docker


# Add the package repositories
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update

# Install nvidia-docker2 and reload the Docker daemon configuration
sudo apt-get install -y nvidia-docker2
sudo pkill -SIGHUP dockerd

 

2. Set the Docker default runtime

First you will need to check and/or enable the nvidia runtime as the default runtime on your node. We will be editing the Docker daemon config file, which is usually present at /etc/docker/daemon.json:

{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}


Restart Docker so the new default runtime takes effect.
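
For example, a minimal restart, assuming Docker is managed by the node's standard service scripts (adjust the command to your init system):

# restart the Docker daemon to pick up the new default-runtime setting
sudo service docker restart

You can then verify the runtime with the docker run test below.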

 

root@ogs-gpu02:/etc/ssl/certs# docker run --runtime=nvidia --rm registry.bst-1.cns.bstjpc.com:5000/nvidia/cuda nvidia-smi
Fri Mar 23 05:30:37 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.111                Driver Version: 384.111                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K20m          Off  | 00000000:04:00.0 Off |                    0 |
| N/A   27C    P0    48W / 225W |      0MiB /  4742MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla K20m          Off  | 00000000:43:00.0 Off |                    0 |
| N/A   27C    P0    48W / 225W |      0MiB /  4742MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla K20m          Off  | 00000000:84:00.0 Off |                    0 |
| N/A   31C    P0    47W / 225W |      0MiB /  4742MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla K20m          Off  | 00000000:C4:00.0 Off |                    0 |
| N/A   30C    P0    48W / 225W |      0MiB /  4742MiB |     43%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+


3. Add --feature-gates="DevicePlugins=true" to the kubelet startup parameters.
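
A minimal sketch of where the flag might go, assuming the kubelet runs as a systemd service with a KUBELET_EXTRA_ARGS-style environment file (the file path and service manager are assumptions and differ per install; on Ubuntu 14.04 the kubelet may be managed by upstart instead):

# e.g. in /etc/default/kubelet (path is an assumption, adjust to your setup)
KUBELET_EXTRA_ARGS=--feature-gates=DevicePlugins=true

# reload and restart the kubelet so the feature gate takes effect
sudo systemctl daemon-reload
sudo systemctl restart kubelet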

4. Deploy the nvidia-device-plugin with Kubernetes, as sketched below.
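
A rough sketch of deploying the plugin as a DaemonSet and then requesting a GPU from a pod. The manifest tag (v1.9) is an assumption; pick the k8s-device-plugin release that matches your cluster version:

# deploy the NVIDIA device plugin DaemonSet (tag is an assumption)
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.9/nvidia-device-plugin.yml

Example pod spec (hypothetical pod name; same CUDA image as the docker test above) that requests one GPU via the device plugin resource:

apiVersion: v1
kind: Pod
metadata:
  name: cuda-smi-test
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda
    image: registry.bst-1.cns.bstjpc.com:5000/nvidia/cuda
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1    # one GPU, scheduled through the device plugin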

//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

 

Alternatively, use Kubernetes' built-in (legacy) GPU support by starting the kubelet with --feature-gates="Accelerators=true".
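
With the legacy Accelerators gate (alpha, removed in later Kubernetes releases) the GPU resource name differs and the NVIDIA driver libraries must be mounted from the host yourself. A rough sketch, with the pod name and host/driver paths as assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod-legacy
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda
    image: nvidia/cuda
    command: ["nvidia-smi"]
    resources:
      limits:
        alpha.kubernetes.io/nvidia-gpu: 1   # legacy resource name used with Accelerators=true
    volumeMounts:
    - name: nvidia-libs
      mountPath: /usr/local/nvidia/lib64    # mount point is an assumption
  volumes:
  - name: nvidia-libs
    hostPath:
      path: /usr/lib/nvidia-384             # host driver path is an assumption (driver 384.111 above)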
