Machine learning models are often deployed with Docker, so how should Docker-deployed models be managed? The industry's answer is Kubernetes, which manages and orchestrates the containers. Kubernetes theory is not the focus of this article and will not be rehashed here; readers interested in the advantages of Kubernetes can look them up on their own. This Kubernetes primer series emphasizes hands-on practice: the first three parts covered installing Kubernetes, installing the Dashboard, and deploying a stateless application in Kubernetes. This part concludes the series by walking through how to deploy a TensorFlow machine learning model in Kubernetes as an externally accessible service.
I hope this Kubernetes primer series offers some useful reference for K8S beginners. If you see anything here differently, or have suggestions on deploying and applying machine learning models in industrial settings, feel free to discuss in the comments!
# Download the TensorFlow Serving Docker image and repo
docker pull tensorflow/serving

mkdir /data0/modules
cd /data0/modules
git clone https://github.com/tensorflow/serving

# Location of demo models
TESTDATA="/data0/modules/serving/tensorflow_serving/servables/tensorflow/testdata/"

# Start TensorFlow Serving container and open the REST API port
docker run -dit --rm -p 8501:8501 \
    -v "${TESTDATA}saved_model_half_plus_two_cpu:/models/half_plus_two" \
    -e MODEL_NAME=half_plus_two tensorflow/serving

# Query the model using the predict API
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
    -X POST http://localhost:8501/v1/models/half_plus_two:predict

# Returns => { "predictions": [2.5, 3.0, 4.5] }
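Besides the :predict endpoint, TensorFlow Serving's REST API also exposes status and metadata endpoints, which are convenient for confirming that the container actually loaded the model before moving on to Kubernetes. A quick check, assuming the container started above is still running:

# Check the model's load status (the reported state should be AVAILABLE)
curl http://localhost:8501/v1/models/half_plus_two

# Inspect the model's signature (input and output tensors)
curl http://localhost:8501/v1/models/half_plus_two/metadata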
# Build a custom serving image with the model baked in: start a base container,
# copy the model into it, then commit the result as a new image
docker run -d --rm --name serving_base tensorflow/serving

docker cp /data0/modules/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu serving_base:/models/half_plus_two

docker commit --change "ENV MODEL_NAME half_plus_two" serving_base ljh/half_plus_two
# Stop the base container; it was started with --rm, so killing it also removes it
docker kill serving_base
# Run the committed image and expose the REST API port
docker run -dit --rm -p 8501:8501 \
    -e MODEL_NAME=half_plus_two ljh/half_plus_two
curl -d '{"instances": [1.0, 2.0, 5.0]}' -X POST http://localhost:8501/v1/models/half_plus_two:predict # Returns => { "predictions": [2.5, 3.0, 4.5] }
cat deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: halfplustwo-deployment
spec:
  selector:
    matchLabels:
      app: halfplustwo
  replicas: 1
  template:
    metadata:
      labels:
        app: halfplustwo
    spec:
      containers:
      - name: halfplustwo
        image: ljh/half_plus_two:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8501
          name: restapi
        - containerPort: 8500
          name: grpc
kubectl apply -f deployment.yaml
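replicas: 1 keeps the example minimal. One benefit of running the model under a Deployment is that, once it is created, the number of serving replicas can be changed without touching the YAML; for example:

# Scale the model out to three serving pods
kubectl scale deployment halfplustwo-deployment --replicas=3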
kubectl get deployment -o wide
kubectl describe deployment halfplustwo-deployment
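Instead of repeatedly polling get deployment, kubectl can also block until the rollout completes:

# Wait until all replicas of the Deployment are up to date and available
kubectl rollout status deployment halfplustwo-deployment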
kubectl get pods -l app=halfplustwo
kubectl describe pod <pod-name>
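If a pod is stuck or crash-looping, the TensorFlow Serving logs usually say why (for example, a model that failed to load). Using the same <pod-name> placeholder as above:

# Print the TensorFlow Serving logs of a single pod
kubectl logs <pod-name>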
cat service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    run: halfplustwo-service
  name: halfplustwo-service
spec:
  ports:
  - port: 8501
    targetPort: 8501
    name: restapi
  - port: 8500
    targetPort: 8500
    name: grpc
  selector:
    app: halfplustwo
  type: LoadBalancer
kubectl create -f service.yaml
# or, equivalently
kubectl apply -f service.yaml
kubectl get service

# Output:
NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
halfplustwo-service   LoadBalancer   10.96.181.116   <pending>     8501:30771/TCP,8500:31542/TCP   4s
kubernetes            ClusterIP      10.96.0.1       <none>        443/TCP                         8d
nginx                 NodePort       10.96.153.10    <none>        80:30088/TCP                    29h
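The EXTERNAL-IP column shows <pending> because this cluster has no cloud load-balancer integration behind the LoadBalancer service type. Until an external IP is assigned, the service is still reachable through the NodePorts Kubernetes allocated above (30771 for REST, 31542 for gRPC; these values vary per cluster), or by forwarding the service port to the local machine. A sketch, with <node-ip> standing in for the address of any cluster node:

# Reach the REST API through the allocated NodePort (30771 in the output above)
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
    -X POST http://<node-ip>:30771/v1/models/half_plus_two:predict

# Alternatively, forward the service port to localhost
kubectl port-forward service/halfplustwo-service 8501:8501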
curl -d '{"instances": [1.0, 2.0, 5.0]}' -X POST http://localhost:8501/v1/models/half_plus_two:predict {"predictions": [2.5, 3.0, 4.5]}
kubectl delete -f deployment.yaml
kubectl delete -f service.yaml