Kubernetes can consume GlusterFS (https://www.gluster.org/) storage in two ways: through an Endpoints object (external storage) or through heketi (a GlusterFS provisioning service running inside Kubernetes). This article uses the Endpoints approach to provide GlusterFS storage to JupyterHub for K8s. For simplicity, JupyterHub is installed with the default Helm chart. After installing as described in "Quick Setup of JupyterHub for K8s", a PVC named hub-db-dir appears in the namespace JupyterHub was installed into; below, a GlusterFS volume is used to back this PVC.
First create the Endpoints object that points at the GlusterFS servers. Save the following content to the file 0a-glusterfs-gvzr00-endpoint-jupyter.yaml:
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-gvzr00
  namespace: jhub
subsets:
- addresses:
  - ip: 10.1.1.193
  - ip: 10.1.1.205
  - ip: 10.1.1.112
  ports:
  - port: 10000
    protocol: TCP
Create the Service. It has no selector: it exists only so that the Endpoints object above persists and can be referenced by name. Save the following content to the file 0b-glusterfs-gvzr00-service-jupyter.yaml:
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-gvzr00
  namespace: jhub
spec:
  ports:
  - port: 10000
    protocol: TCP
    targetPort: 10000
  sessionAffinity: None
  type: ClusterIP
Create the PV and PVC for the JupyterHub hub service itself, used to store system data.
Save the following content to the file 1a-glusterfs-gvzr00-pv-jupyter-hub.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hub-db-dir
  namespace: jhub
spec:
  capacity:
    storage: 8Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-gvzr00"
    path: "gvzr00/jupyterhub/hub-db-dir"
    readOnly: false
First delete the PVC created by the default installation:
kubectl delete pvc/hub-db-dir -n jhub
Save the following content to the file 1b-glusterfs-gvzr00-pvc-jupyter-hub.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hub-db-dir
  namespace: jhub
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
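As written, this claim relies on capacity/access-mode matching to find the PV, so on a cluster with a default StorageClass it could bind to a dynamically provisioned volume instead. A hedged variant pins the claim to the PV explicitly (a sketch; volumeName refers to the PV name defined above):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hub-db-dir
  namespace: jhub
spec:
  # Pin this claim to the PV created above instead of relying on
  # capacity/access-mode matching.
  volumeName: hub-db-dir
  # An empty storageClassName disables dynamic provisioning for this claim.
  storageClassName: ""
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
```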
Each user gets their own PV and PVC, used to store the notebook server's user data.
Save the following content to the file 2a-glusterfs-gvzr00-pv-jupyter-supermap.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: claim-supermap
  namespace: jhub
spec:
  capacity:
    storage: 16Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-gvzr00"
    path: "gvzr00/jupyterhub/claim-supermap"
    readOnly: false
Save the following content to the file 2b-glusterfs-gvzr00-pvc-jupyter-supermap.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim-supermap
  namespace: jhub
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 16Gi
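For reference, the PVC name claim-supermap matches the chart's default claim-{username} naming for a user called supermap, so the spawner should find and reuse this pre-created claim instead of provisioning a new one. If you prefer to make static storage explicit in the Helm configuration, the zero-to-jupyterhub chart exposes singleuser.storage settings for this; the sketch below shows the documented shared-PVC pattern (verify the keys against your chart version):

```yaml
singleuser:
  storage:
    type: static
    static:
      # Name of a pre-created PVC to mount into every user pod.
      pvcName: claim-supermap
      # Directory inside the volume used as each user's home;
      # {username} is expanded per user by the spawner.
      subPath: "home/{username}"
```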
Adjust the files above to match your own cluster addresses and storage capacities.
Save the following content to the file apply.sh:
echo "Create endpoint and svc, glusterfs-gvzr00 ..."
kubectl apply -f 0a-glusterfs-gvzr00-endpoint-jupyter.yaml
kubectl apply -f 0b-glusterfs-gvzr00-service-jupyter.yaml

echo "Create pv and pvc, hub-db-dir ..."
kubectl apply -f 1a-glusterfs-gvzr00-pv-jupyter-hub.yaml
kubectl apply -f 1b-glusterfs-gvzr00-pvc-jupyter-hub.yaml

echo "Create pv and pvc, claim-supermap ..."
kubectl apply -f 2a-glusterfs-gvzr00-pv-jupyter-supermap.yaml
kubectl apply -f 2b-glusterfs-gvzr00-pvc-jupyter-supermap.yaml

echo "Finished."
echo ""
Then run apply.sh.
Save the following content to the file delete.sh:
echo "Delete pv and pvc, hub-db-dir ..."
kubectl delete pvc/hub-db-dir -n jhub
kubectl delete pv/hub-db-dir

echo "Delete pv and pvc, claim-supermap ..."
kubectl delete pvc/claim-supermap -n jhub
kubectl delete pv/claim-supermap

echo "Delete endpoint and svc, glusterfs-gvzr00 ..."
kubectl delete svc/glusterfs-gvzr00 -n jhub
kubectl delete ep/glusterfs-gvzr00 -n jhub

echo "Finished."
echo ""
To remove everything, run delete.sh.
Check the results through the Dashboard, or with the commands:
kubectl get pv
kubectl get pvc -n jhub
Note: if you upgrade to a recent python3 version, JupyterHub may fail at runtime: the notebook server will not start, and the pod log shows a "NoneType" error. This can be fixed as follows:
kubectl patch deploy -n jhub hub --type json \
  --patch '[{"op": "replace", "path": "/spec/template/spec/containers/0/command", "value": ["bash", "-c", "\nmkdir -p ~/hotfix\ncp -r /usr/local/lib/python3.6/dist-packages/kubespawner ~/hotfix\nls -R ~/hotfix\npatch ~/hotfix/kubespawner/spawner.py << EOT\n72c72\n<             key=lambda x: x.last_timestamp,\n---\n>             key=lambda x: x.last_timestamp and x.last_timestamp.timestamp() or 0.,\nEOT\n\nPYTHONPATH=$HOME/hotfix jupyterhub --config /srv/jupyterhub_config.py --upgrade-db\n"]}]'
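The diff embedded in the patch guards kubespawner's event sort against events whose last_timestamp is None: under Python 3, sorting a mix of None and datetime values raises TypeError, which surfaces as the "NoneType" error. A minimal reproduction of the fixed sort key (the event objects here are simple stand-ins, not kubespawner's real classes):

```python
from datetime import datetime, timezone
from types import SimpleNamespace

# Stand-in event objects; one has no last_timestamp, as can happen
# with events returned by the Kubernetes API.
events = [
    SimpleNamespace(name="pulled", last_timestamp=datetime(2019, 5, 2, tzinfo=timezone.utc)),
    SimpleNamespace(name="scheduled", last_timestamp=None),
    SimpleNamespace(name="started", last_timestamp=datetime(2019, 5, 3, tzinfo=timezone.utc)),
]

# The unpatched key (lambda x: x.last_timestamp) raises TypeError here,
# because None cannot be compared with datetime.
# The patched key maps None to 0.0, so timestamp-less events sort first.
events.sort(key=lambda x: x.last_timestamp and x.last_timestamp.timestamp() or 0.)

print([e.name for e in events])  # the event without a timestamp comes first
```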
Then access the JupyterHub service again; it should be back to normal.