[k8s] Configure NFS as backend storage & shared storage across multiple nginx pods & StatefulSet configuration

Install NFS on all nodes

yum install nfs-utils rpcbind -y
mkdir -p /ifs/kubernetes
echo "/ifs/kubernetes 192.168.x.0/24(rw,sync,no_root_squash)" >> /etc/exports

Start the services on the NFS server only: systemctl start rpcbind nfs
Then verify from a node that the export is reachable, as sketched below.
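
A quick sanity check from any node, using the same server address as in the manifests below (192.168.x.135 is the placeholder used throughout):

$ showmount -e 192.168.x.135
$ mount -t nfs 192.168.x.135:/ifs/kubernetes /mnt && touch /mnt/ok && umount /mnt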

You can also refer to an earlier post of mine:
http://blog.csdn.net/iiiiher/article/details/77865530

Set up NFS as the storage backend

Reference:
https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client/deploy

deployment.yaml deploys the NFS client provisioner. It mounts /ifs/kubernetes, and every volume provisioned later is created as a subdirectory under that path.

$ cat deployment.yaml 
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.x.135
            - name: NFS_PATH
              value: /ifs/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.x.135
            path: /ifs/kubernetes
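
Note that the upstream deploy directory also contains rbac.yaml and a service account; on an RBAC-enabled cluster you will most likely need to apply those and add serviceAccountName to this Deployment (check the repository linked above). Rolling it out and checking that the provisioner pod is running:

$ kubectl apply -f deployment.yaml
$ kubectl get pod -l app=nfs-client-provisioner
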
$ cat class.yaml 
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; must match the Deployment's PROVISIONER_NAME env
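
After applying class.yaml, the class should be listed, with the provisioner column matching PROVISIONER_NAME:

$ kubectl apply -f class.yaml
$ kubectl get storageclass managed-nfs-storage
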
$ cat test-claim.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
$ cat test-pod.yaml 
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
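
To exercise the pair above: apply the claim and the pod, then look for the SUCCESS file on the NFS server. The subdirectory name below is only indicative; my understanding is that the provisioner names it ${namespace}-${pvcName}-${pvName}, so adjust the path to whatever actually appears under the export:

$ kubectl apply -f test-claim.yaml -f test-pod.yaml
$ kubectl get pvc test-claim                    # should become Bound
# on the NFS server:
$ ls /ifs/kubernetes/default-test-claim-pvc-*/  # expect a file named SUCCESS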


By default, once a PVC is created the PV is provisioned automatically. After the PVC is deleted manually, the corresponding directory on the NFS server is kept in an archived state; I have not yet worked out where this is controlled.
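
If I remember correctly, later builds of nfs-client-provisioner control this through an archiveOnDelete parameter on the StorageClass; treat the snippet below as an assumption to verify against the image actually in use (with "false" the provisioned data is deleted instead of being kept as archived-*):

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"   # assumption: delete provisioned data instead of archiving it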

todo:
verify for the PVC:
capacity
read/write
reclaim policy (see the commands below)
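
A few generic kubectl commands that help with these checks (nothing here is specific to the NFS provisioner; <pv-name> is a placeholder):

$ kubectl get pvc test-claim        # requested capacity and access modes
$ kubectl get pv                    # bound PVs with their reclaim policy and status
$ kubectl describe pv <pv-name>     # details, including the NFS server and path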

Implement shared storage (the "left half": multiple nginx pods mounting the same PVC)

$ cat nginx-pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 1Mi
$ cat nginx-deployment.yaml 
apiVersion: apps/v1beta1 # on 1.8+ apps/v1beta2 is also available
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
          - name: nfs-pvc
            mountPath: "/usr/share/nginx/html"
      volumes:
        - name: nfs-pvc
          persistentVolumeClaim:
            claimName: nginx-claim
$ cat nginx-svc.yaml 
kind: Service
apiVersion: v1
metadata:
  name: svc-nginx
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
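
A rough end-to-end check that several nginx replicas really serve the same content; the backing directory again follows the ${namespace}-${pvcName}-${pvName} naming mentioned earlier, and <node-ip>/<node-port> are placeholders:

$ kubectl apply -f nginx-pvc.yaml -f nginx-deployment.yaml -f nginx-svc.yaml
$ kubectl scale deployment nginx-deployment --replicas=3
# on the NFS server, write a page into the directory backing nginx-claim:
$ cd /ifs/kubernetes/default-nginx-claim-pvc-*/ && echo "hello from nfs" > index.html
$ kubectl get svc svc-nginx                 # note the NodePort
$ curl http://<node-ip>:<node-port>/        # each replica should return the same page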

For the right half (per-pod storage via StatefulSet), see:

https://feisky.gitbooks.io/kubernetes/concepts/statefulset.html

todo:
verify how the left-half (shared) setup behaves when the access mode is ReadWriteOnce.

---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Mi
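
With volumeClaimTemplates each replica gets its own claim instead of sharing one; the claims are named <template>-<pod>, i.e. www-web-0 and www-web-1 here:

$ kubectl get pods -l app=nginx          # web-0 and web-1 are created in order
$ kubectl get pvc www-web-0 www-web-1    # one claim per replica, each bound to its own PV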

A PVC can also reference the NFS storage via storageClassName (StorageClass) instead of the beta annotation:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: spring-pvc
  namespace: kube-public
spec:
  storageClassName: "managed-nfs-storage"
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 100Mi
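
Since the provisioner deployed above watches claims in all namespaces, this one should bind in kube-public just like in default:

$ kubectl -n kube-public get pvc spring-pvc    # should report Bound once the PV is provisioned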

GlusterFS reference:
https://github.com/kubernetes-incubator/external-storage/tree/master/gluster/glusterfs

PV access modes (1.9)

For reference: ReadWriteOnce / ReadOnlyMany / ReadWriteMany (NFS-backed PVs support all three).
