CI/CD built on Kubernetes and Jenkins. Every Jenkins slave (pod) that runs a job is created dynamically from a pod template and is deleted automatically when the job finishes.
jenkins-deployment.yaml
apiVersion: "apps/v1beta1"
kind: "Deployment"
metadata:
  name: "jenkins"
  labels:
    name: "jenkins"
spec:
  replicas: 1
  template:
    metadata:
      name: "jenkins"
      labels:
        name: "jenkins"
    spec:
      containers:
        - name: jenkins
          image: jenkinsci/jenkins:2.154
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          env:
            - name: TZ
              value: Asia/Shanghai
          ports:
            - containerPort: 8080
              name: web
            - containerPort: 50000
              name: agent
      volumes:
        - name: jenkins-home
          nfs:
            path: "/nfs/jenkins/data"
            server: "cpu029.hogpu.cc"
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
jenkins-account.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
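To sanity-check the Role and RoleBinding above, you can ask the API server whether the jenkins ServiceAccount is actually allowed to manage Pods. A quick check, assuming the resources are created in the kubernetes-plugin namespace used later in this article:

$ kubectl auth can-i create pods --as=system:serviceaccount:kubernetes-plugin:jenkins -n kubernetes-plugin
$ kubectl auth can-i create pods/exec --as=system:serviceaccount:kubernetes-plugin:jenkins -n kubernetes-plugin

Both commands should print yes once the RBAC objects are in place.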
jenkins-service.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: jenkins
  name: jenkins
spec:
  type: NodePort
  ports:
    - port: 8080
      name: web
      targetPort: 8080
    - port: 50000
      name: agent
      targetPort: 50000
  selector:
    name: jenkins
A note: this Service exposes ports 8080 and 50000. Port 8080 serves the Jenkins Server web UI, and 50000 is the default port that Jenkins slaves use to connect to the master; if it is not exposed, slaves cannot establish a connection to the master. We expose the ports with the NodePort type without specifying node ports, so Kubernetes assigns them automatically; you can also specify unused port numbers yourself (in the range 30000~32767).
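If you would rather pin the node ports instead of letting Kubernetes pick them, the ports section of jenkins-service.yaml can set nodePort explicitly. A small sketch; 31080 and 31500 are arbitrary values from the allowed 30000~32767 range:

  ports:
    - port: 8080
      name: web
      targetPort: 8080
      nodePort: 31080     # fixed node port for the Jenkins web UI
    - port: 50000
      name: agent
      targetPort: 50000
      nodePort: 31500     # fixed node port for slave-to-master traffic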
Next, create the Jenkins resources with kubectl:

$ kubectl create namespace kubernetes-plugin
$ kubectl config set-context $(kubectl config current-context) --namespace=kubernetes-plugin
$ kubectl create -f jenkins-deployment.yaml
$ kubectl create -f jenkins-account.yaml
$ kubectl create -f jenkins-service.yaml
PS:
We create a new namespace called kubernetes-plugin and set the current context to that namespace, so subsequent commands automatically run against it.
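You can verify that the context switch took effect with either of the following standard kubectl commands; the jsonpath query prints the namespace recorded in the current context:

$ kubectl config get-contexts
$ kubectl config view --minify --output 'jsonpath={..namespace}'
kubernetes-plugin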
[jianyu.tian@yz-gpu-k8s004 ~]$ kubectl get deployment,svc,pods
NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/jenkins   1         1         1            1           1h

NAME          TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                          AGE
svc/jenkins   NodePort   10.106.235.91   <none>        8080:31051/TCP,50000:30545/TCP   2h

NAME                          READY   STATUS    RESTARTS   AGE
po/jenkins-64564fc5c9-pzlpb   1/1     Running   0          1h
PS:
The Jenkins Master service is now running, with its ports exposed as 8080:31051 and 50000:30545, so you can open http://<Node_IP>:31051 in a browser to access the Jenkins UI.
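If you do not want to read the node port out of the table above, you can also query the assigned port directly; this jsonpath expression assumes the Service and port names from jenkins-service.yaml:

$ kubectl get svc jenkins -o jsonpath='{.spec.ports[?(@.name=="web")].nodePort}'
31051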
1. Notes on the kubernetes-plugin (the Jenkins Kubernetes plugin)
After the plugin is installed, go to "Manage Jenkins" -> "Configure System" -> "Add a new cloud" -> select "Kubernetes", then fill in the Kubernetes and Jenkins configuration.
PS:
The Name field defaults to kubernetes; you can change it to something else, but if you do, the cloud parameter of podTemplate() must be set to that same name when a Job runs, otherwise the cloud will not be found. cloud defaults to kubernetes (a short podTemplate sketch follows these notes).
For Kubernetes URL I entered https://kubernetes.default.sv..., the DNS name of the Kubernetes Service, which resolves to that Service's Cluster IP; you can also enter the external Kubernetes address directly, https://<ClusterIP>:<Ports>.
For Jenkins URL I entered http://jenkins.kubernetes-plugin:8080. As above, this is the DNS name of the Jenkins Service, with port 8080 specified explicitly because that is the port we exposed. You can also use http://<Node_IP>:<NodePort>.
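As mentioned in the first note, a renamed cloud has to be passed to podTemplate() explicitly. Roughly what that looks like in a scripted pipeline, as a minimal sketch where 'my-k8s', the label, and the container image are placeholder values:

podTemplate(cloud: 'my-k8s', label: 'demo-slave', containers: [
    // a throwaway build container; 'cat' keeps it alive so steps can exec into it
    containerTemplate(name: 'builder', image: 'centos:7', ttyEnabled: true, command: 'cat')
]) {
    node('demo-slave') {
        stage('check') {
            container('builder') {
                sh 'cat /etc/redhat-release'
            }
        }
    }
}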
Once the configuration is done, click the "Test Connection" button to check whether Jenkins can reach Kubernetes; if "Connection test successful" is displayed, the connection works and the configuration is correct.
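If the connection test fails, a common cause is that the in-cluster DNS names used above do not resolve. You can check resolution from a throwaway pod; this assumes the busybox image can be pulled in your cluster:

$ kubectl run dns-test -it --rm --restart=Never --image=busybox -- nslookup jenkins.kubernetes-plugin
$ kubectl run dns-test -it --rm --restart=Never --image=busybox -- nslookup kubernetes.default.svc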
Create a Pipeline-type Job:
pipeline {
    agent any
    // run the two test environments in parallel
    stages {
        stage("test_all") {
            parallel {
                stage("python3-cuda9.2") {
                    agent {
                        kubernetes {
                            label 'mxnet-python3-cuda9'
                            yaml """
apiVersion: "v1"
kind: "Pod"
metadata:
  labels:
    name: "mxnet-python3-cuda9"
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: hobot.workas
            operator: In
            values:
            - gpu
          - key: kubernetes.io/nvidia-gpu-name
            operator: In
            values:
            - TITAN_V
  containers:
  - name: mxnetone
    image: docker.hobot.cc/dlp/mxnetci:runtime-py3.6-cudnn7.3-cuda9.2-centos7
    imagePullPolicy: Always
    resources:
      limits:
        nvidia.com/gpu: 1
"""
                        }
                    }
                    stages {
                        stage("Checkout") {
                            steps {
                                container("mxnetone") {
                                    checkout([
                                        $class: 'GitSCM',
                                        branches: [[name: 'nnvm']],
                                        browser: [$class: 'Phabricator', repo: 'rMXNET', repoUrl: ''],
                                        doGenerateSubmoduleConfigurations: false,
                                        extensions: [[$class: 'SubmoduleOption', disableSubmodules: false, parentCredentials: true, recursiveSubmodules: true, reference: '', trackingSubmodules: false]],
                                        submoduleCfg: [],
                                        userRemoteConfigs: [[credentialsId: 'zhaoming_private', url: '']]
                                    ])
                                }
                            }
                        }
                        stage("Build") {
                            steps {
                                container("mxnetone") {
                                    sh """
                                        nvidia-smi
                                        source /root/.bashrc
                                        make deps
                                        echo -e "USE_PROFILER=1\nUSE_GLOG=0\nUSE_HDFS=0" >> ./make/config.mk
                                        sed -i "s#USE_CUDA_PATH = /usr/local/cuda-8.0#USE_CUDA_PATH = /usr/local/cuda-9.2#g" ./make/config.mk
                                        make lint
                                        make -j 12
                                        ln -s /home/data ./
                                        make test | tee unittest.log
                                    """
                                }
                            }
                        }
                        stage("Unit tests") {
                            steps {
                                container("mxnetone") {
                                    sh """
                                        cp -rf python/mxnet ./
                                        cp -f lib/libmxnet.so mxnet/
                                        echo "-------Running tests under Python3-------"
                                        python3 -V
                                        python3 `which nosetests` tests/python/train
                                        python3 `which nosetests` -v -d tests/python/unittest
                                    """
                                }
                            }
                        }
                    }
                }
                stage("python2-cuda9.2") {
                    agent {
                        kubernetes {
                            label 'mxnet-python2-cuda9'
                            yaml """
apiVersion: "v1"
kind: "Pod"
metadata:
  labels:
    name: "mxnet-python2-cuda9"
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: hobot.workas
            operator: In
            values:
            - gpu
          - key: kubernetes.io/nvidia-gpu-name
            operator: In
            values:
            - TITAN_V
  containers:
  - name: mxnettwo
    image: docker.hobot.cc/dlp/mxnetci:runtime-cudnn7.3-cuda9.2-centos7
    imagePullPolicy: Always
    resources:
      limits:
        nvidia.com/gpu: 1
"""
                        }
                    }
                    stages {
                        stage("Checkout") {
                            steps {
                                container("mxnettwo") {
                                    checkout([
                                        $class: 'GitSCM',
                                        branches: [[name: 'nnvm']],
                                        browser: [$class: 'Phabricator', repo: 'rMXNET', repoUrl: ''],
                                        doGenerateSubmoduleConfigurations: false,
                                        extensions: [[$class: 'SubmoduleOption', disableSubmodules: false, parentCredentials: true, recursiveSubmodules: true, reference: '', trackingSubmodules: false]],
                                        submoduleCfg: [],
                                        userRemoteConfigs: [[credentialsId: 'zhaoming_private', url: '']]
                                    ])
                                }
                            }
                        }
                        stage("Build") {
                            steps {
                                container("mxnettwo") {
                                    sh """
                                        nvidia-smi
                                        pip2 install numpy==1.14.3 -i https://mirrors.aliyun.com/pypi/simple/
                                        source /root/.bashrc
                                        make deps
                                        echo -e "USE_PROFILER=1\nUSE_GLOG=0\nUSE_HDFS=0" >> ./make/config.mk
                                        sed -i "s#USE_CUDA_PATH = /usr/local/cuda-8.0#USE_CUDA_PATH = /usr/local/cuda-9.2#g" ./make/config.mk
                                        make lint
                                        make -j 12
                                        ln -s /home/data ./
                                        make test | tee unittest.log
                                    """
                                }
                            }
                        }
                        stage("Unit tests") {
                            steps {
                                container("mxnettwo") {
                                    sh """
                                        cp -rf python/mxnet ./
                                        cp -f lib/libmxnet.so mxnet/
                                        echo "-------Running tests under Python2-------"
                                        python2 -V
                                        python2 `which nosetests` tests/python/train
                                        python2 `which nosetests` -v -d tests/python/unittest
                                    """
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}