[k8s] k8s-ceph-statefulsets-storageclass-nfs: stateful application deployment in practice

k8s StatefulSets / StorageClass: stateful application deployment in practice, v2

Copyright 2017-05-22 xiaogang (172826370@qq.com)


Most of the articles online simply copy the official docs; there are piles of bad write-ups that mislead people and have little practical value, hence this article. NFS is comparatively simple and is usually already installed, so the NFS documentation comes first; a Ceph RBD dynamic-volume article will follow later, along with several Redis and MySQL master/slave examples.

Storage is a key problem for stateful containers, and Kubernetes provides strong support for storage management. Kubernetes' dynamic volume provisioning feature enables storage volumes to be created on demand. Before this feature existed, a cluster administrator first had to call the cloud or storage provider to request a new volume, and then create a PersistentVolume to make it visible in Kubernetes. Dynamic provisioning automates these two steps, so administrators no longer need to pre-allocate storage. Storage is provisioned in the way defined by a StorageClass, which is an abstraction over the underlying storage and carries storage-related parameters such as the disk type (standard or SSD).

StorageClass provisioners give Kubernetes access to specific physical or cloud storage backends. A variety of storage backends are supported out of the box, and further ones are available from the Kubernetes incubator.

In Kubernetes 1.6, dynamic volume provisioning was promoted to stable (it entered beta in 1.4). This is an important step in automating Kubernetes storage: it lets administrators control how resources are provisioned, and lets users focus on their own applications. Besides the benefits above, there are also some user-facing changes to be aware of before upgrading to Kubernetes 1.6.
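
For example, once a StorageClass exists, a user requests storage simply by creating a PVC that references it, and a matching PV is provisioned on demand. A minimal sketch (the managed-nfs-storage class is created later in this article; the claim name demo-claim is illustrative):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage" # beta annotation, matching the Kubernetes 1.6-era style used below
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi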
Stateful applications
In general, nginx or another web server (unlike MySQL) does not need to store data itself; for a web server, the data lives on nodes dedicated to persistence. Such nodes can therefore be scaled out or in freely, simply by changing the replica count. But many stateful programs need clustered deployment, meaning the nodes form a group and each node needs a unique ID (for example Kafka's broker.id or ZooKeeper's myid) that identifies it as a member of the cluster; members use these identities when communicating with each other. The traditional approach is for administrators to deploy these programs on stable, long-lived nodes with persistent storage and static IP addresses, which couples an application instance to the underlying physical infrastructure, such as a particular machine or IP address. The goal of StatefulSet in Kubernetes is to decouple this dependency by assigning to a specific application instance an identity that does not depend on the underlying infrastructure (consumers find a particular member through its DNS name rather than a static IP).

StatefulSet

Prerequisites

Prerequisites for using StatefulSet:

  • Kubernetes cluster version >= 1.5
  • DNS cluster add-on installed, version >= 15

    特色

Why is StatefulSet (called PetSet before version 1.5) a good fit for stateful programs? Compared with a Deployment, it has the following features:

  • A stable, unique network identity that can be used to discover other members in the cluster. For example, if the StatefulSet is named kafka, the first Pod to come up is kafka-0, the second kafka-1, and so on (likewise mysql-0, mysql-1, ...).
  • Stable persistent storage, implemented via Kubernetes PV/PVC or pre-provisioned external storage.
  • Ordered startup and shutdown, i.e. graceful deployment and scaling: when the n-th Pod is operated on, the first n-1 Pods are already running and ready; deletion and termination are likewise ordered and graceful, proceeding n, n-1, ..., 1, 0.
  • "Stable" above means stability across Pod reschedules: the storage, DNS name, and hostname stay bound to the Pod itself, regardless of which node it is scheduled onto.

So applications such as ZooKeeper, etcd, or Elasticsearch that need stable cluster membership can use StatefulSet. By querying the A records of the headless service's domain, you obtain the DNS names of the members inside the cluster.

Limitations

StatefulSet also has some limitations:

  • Pod storage must be provided by a PersistentVolume provisioner through a StorageClass, or be pre-provisioned external storage supplied by an administrator.
  • Deleting or scaling down a StatefulSet does not delete its associated volumes; this is to keep the data safe.
  • A StatefulSet currently requires a headless service to generate the unique network identities of its Pods; the developer must create this service.
  • Upgrading a StatefulSet is a manual process.

Headless Service

To define a Service as headless, set the ClusterIP field in the Service definition to empty: spec.clusterIP: None. Compared with a normal Service, a headless Service has no ClusterIP (and therefore no load balancing); instead, it gives every member of the cluster a unique DNS name as its network identity, and members communicate with one another by these names. The domain managed by a headless service has the form $(service_name).$(k8s_namespace).svc.cluster.local, where "cluster.local" is the cluster domain (the default unless configured otherwise). Every Pod created under a StatefulSet gets a corresponding DNS subdomain of the form $(podname).$(governing_service_domain), where governing_service_domain is determined by the serviceName defined in the StatefulSet. For example, if the headless service managing kafka has the domain kafka.test.svc.cluster.local, then a created Pod gets the subdomain kafka-1.kafka.test.svc.cluster.local. Note that all the names mentioned here are internal cluster domains managed by the kube-dns component, and they can be queried with a command:
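
A sketch, assuming the tutum/dnsutils image is reachable from your cluster:

kubectl run -i --tty --rm dnsutils --image=tutum/dnsutils --restart=Never \
  -- nslookup kafka-0.kafka.test.svc.cluster.local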

1. nfs-client StorageClass dynamic volumes

On the NFS server host, configure the export permissions (cat /etc/exports):

/data/nfs-storage/k8s-storage/ssd *(rw,insecure,sync,no_subtree_check,no_root_squash)
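
After editing /etc/exports, reload the export table and verify it took effect (standard NFS commands, run on the server; not shown in the original):

exportfs -rav
showmount -e localhost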

Pull the nfs-client provisioner image:

docker pull quay.io/kubernetes_incubator/nfs-client-provisioner:v1
docker tag quay.io/kubernetes_incubator/nfs-client-provisioner:v1 192.168.1.103/k8s_public/nfs-client-provisioner:v1
docker push 192.168.1.103/k8s_public/nfs-client-provisioner:v1

Deploy the provisioner; under the hood it mounts the NFS export and creates PVs beneath it on behalf of the StorageClass:

cat deployment-nfs.yaml
 
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      containers:
        - name: nfs-client-provisioner
          image: 192.168.1.103/k8s_public/nfs-client-provisioner:v1
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.1.103
            - name: NFS_PATH
              value: /data/nfs-storage/k8s-storage/ssd
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.103
            path: /data/nfs-storage/k8s-storage/ssd # NFS export path; set according to your environment
[root@master3 deploy]#  kubectl create -f  deployment-nfs.yaml

kubectl get pod 
nfs-client-provisioner-4163627910-fn70d   1/1       Running             0          1m
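
Optionally tail the provisioner's logs to confirm it starts cleanly (the Pod name suffix will differ in your cluster):

kubectl logs -f nfs-client-provisioner-4163627910-fn70d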

Deploy the StorageClass (storageclass.yaml):

[root@master3 deploy]# cat nfs-class.yaml 
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: managed-nfs-storage 
provisioner: fuseim.pri/ifs # must match the PROVISIONER_NAME env var in the nfs-client-provisioner Deployment; any name works as long as the two agree

[root@master3 deploy]#  kubectl create -f nfs-class.yaml

[root@master3 deploy]# kubectl get storageclass 
NAME                  TYPE
ceph-web              kubernetes.io/rbd   
managed-nfs-storage   fuseim.pri/ifs

Create a StatefulSet whose Pods reference the StorageClass
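
The StatefulSet below sets serviceName: "nginx1", so a matching headless Service must exist. The original does not show it; a minimal sketch mirroring the names used in the manifest that follows:

apiVersion: v1
kind: Service
metadata:
  name: nginx1
  labels:
    app: nginx1
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None # headless: no ClusterIP, DNS records only
  selector:
    app: nginx1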

[root@master3 stateful-set]# cat nginx.yaml 
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx1"
  replicas: 2
  volumeClaimTemplates:
  - metadata:
      name: test 
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage" # reference the StorageClass name here
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi 
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx1
        image: 192.168.1.103/k8s_public/nginx:latest
        volumeMounts:
        - mountPath: "/mnt"
          name: test
      imagePullSecrets:
        - name: "registrykey" #注意此處註名了secret安全鏈接registy 本地鏡相服務器

Verify that the PV and PVC were created automatically:

[root@master3 stateful-set]# kubectl get pv |grep web
default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59                                 2Gi        RWO           Delete          Bound     default/test-web-0                                           1m
default-test-web-1-pvc-6bbec6a0-3ed4-11e7-9818-525400c2bc59                                 2Gi        RWO           Delete          Bound     default/test-web-1                                           1m
[root@master3 stateful-set]# kubectl get pvc |grep web
test-web-0                                 Bound     default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59                                 2Gi        RWO           1m
test-web-1                                 Bound     default-test-web-1-pvc-6bbec6a0-3ed4-11e7-9818-525400c2bc59                                 2Gi        RWO           1m
[root@master3 stateful-set]# kubectl get storageclass |grep web
ceph-web              kubernetes.io/rbd   
[root@master3 stateful-set]# kubectl get storageclass 
NAME                  TYPE
ceph-web              kubernetes.io/rbd   
managed-nfs-storage   fuseim.pri/ifs
[root@master3 stateful-set]# kubectl get pod |grep web
web-0                                     1/1       Running             0          2m
web-1                                     1/1       Running             0          2m

Scale up the Pods

[root@master3 stateful-set]#  kubectl scale statefulset web --replicas=3
[root@master3 stateful-set]# kubectl get pod |grep web
web-0                                     1/1       Running             0          10m
web-1                                     1/1       Running             0          10m
web-2                                     1/1       Running             0          1m

Scale the Pods down to one

kubectl scale statefulset web --replicas=1
[root@master3 stateful-set]# kubectl get pod |grep web
web-0                                     1/1       Running             0          11m

OK, creation is complete and the Pods are healthy.

Exec into web-0 to verify the PVC mount:

[root@master3 stateful-set]# kubectl exec -it web-0 /bin/bash
root@web-0:/# 
root@web-0:/# df -h
Filesystem                                                                                                   Size  Used Avail Use% Mounted on
/dev/mapper/docker-253:0-654996-18a8b448ce9ebf898e46c4468b33093ed9a5f81794d82a271124bcd1eb27a87c              10G  230M  9.8G   3% /
tmpfs                                                                                                        1.6G     0  1.6G   0% /dev
tmpfs                                                                                                        1.6G     0  1.6G   0% /sys/fs/cgroup
192.168.1.103:/data/nfs-storage/k8s-storage/ssd/default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59  189G   76G  104G  43% /mnt
/dev/mapper/centos-root                                                                                       37G  9.1G   26G  27% /etc/hosts
shm                                                                                                           64M     0   64M   0% /dev/shm
tmpfs                                                                                                        1.6G   12K  1.6G   1% /run/secrets/kubernetes.io/serviceaccount
root@web-0:/#
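
To confirm that writes from the Pod actually land on the NFS export, an illustrative check:

root@web-0:/# echo "hello from web-0" > /mnt/hello.txt

# then on the NFS server:
ls /data/nfs-storage/k8s-storage/ssd/default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59/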

Check the PVC volumes on the NFS server:

root@pxt:/data/nfs-storage/k8s-storage/ssd# ll
total 40
drwxr-xr-x 10 root root 4096 May 22 17:53 ./
drwxr-xr-x  7 root root 4096 May 12 17:26 ../
drwxr-xr-x  3 root root 4096 May 16 16:19 default-data-mysql-0-pvc-3954b59e-3a10-11e7-b646-525400c2bc59/
drwxr-xr-x  3 root root 4096 May 16 16:20 default-data-mysql-1-pvc-396bd26f-3a10-11e7-b646-525400c2bc59/
drwxr-xr-x  3 root root 4096 May 16 16:21 default-data-mysql-2-pvc-39958611-3a10-11e7-b646-525400c2bc59/
drwxr-xr-x  2 root root 4096 May 17 17:49 default-redis-primary-volume-redis-primary-0-pvc-bb19aa13-3ad3-11e7-b646-525400c2bc59/
drwxr-xr-x  2 root root 4096 May 17 17:56 default-redis-secondary-volume-redis-secondary-0-pvc-16c8749d-3ae7-11e7-b646-525400c2bc59/
drwxr-xr-x  2 root root 4096 May 17 17:58 default-redis-secondary-volume-redis-secondary-1-pvc-16da7ba5-3ae7-11e7-b646-525400c2bc59/
drwxr-xr-x  2 root root 4096 May 22 17:53 default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59/
drwxr-xr-x  2 root root 4096 May 22 17:53 default-test-web-1-pvc-6bbec6a0-3ed4-11e7-9818-525400c2bc59/
root@pxt:/data/nfs-storage/k8s-storage/ssd# showmount -e
Export list for pxt.docker.agent103:
/data/nfs_ssd                          *
/data/nfs-storage/k8s-storage/standard *
/data/nfs-storage/k8s-storage/ssd      *
/data/nfs-storage/k8s-storage/redis    *
/data/nfs-storage/k8s-storage/nginx    *
/data/nfs-storage/k8s-storage/mysql    *


root@pxt:/data/nfs-storage/k8s-storage/ssd# cat /etc/exports 
# /etc/exports: the access control list for filesystems which may be exported
#       to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#/data/nfs-storage/k8s-storage *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/mysql *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/nginx *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/redis *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/ssd *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs-storage/k8s-storage/standard *(rw,insecure,sync,no_subtree_check,no_root_squash)
/data/nfs_ssd *(rw,insecure,sync,no_subtree_check,no_root_squash)

2. Deploy a scalable MySQL master/slave cluster (MySQL 5.7, one master and multiple slaves), using three YAML files

The three files are mysql-configmap.yaml, mysql-services.yaml, and mysql-statefulset.yaml. As with the nginx example, the upstream images (mysql:5.7 and gcr.io/google-samples/xtrabackup:1.0) have been re-tagged and pushed to the local registry 192.168.1.103.

[root@master3 setateful-set-mysql]# cat mysql-configmap.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Apply this config only on the master.
    [mysqld]
    log-bin
  slave.cnf: |
    # Apply this config only on slaves.
    [mysqld]
    super-read-only
[root@master3 setateful-set-mysql]# cat mysql-services.yaml
# Headless service for stable DNS entries of StatefulSet members.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  clusterIP: None
  selector:
    app: mysql
---
# Client service for connecting to any MySQL instance for reads.
# For writes, you must instead connect to the master: mysql-0.mysql.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
[root@master3 setateful-set-mysql]# cat mysql-statefulset.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
      annotations:
        pod.beta.kubernetes.io/init-containers: '[
          {
            "name": "init-mysql",
            "image": "192.168.1.103/k8s_public/mysql:5.7",
            "command": ["bash", "-c", "
              set -ex\n
              # Generate mysql server-id from pod ordinal index.\n
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n
              ordinal=${BASH_REMATCH[1]}\n
              echo [mysqld] > /mnt/conf.d/server-id.cnf\n
              # Add an offset to avoid reserved server-id=0 value.\n
              echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf\n
              # Copy appropriate conf.d files from config-map to emptyDir.\n
              if [[ $ordinal -eq 0 ]]; then\n
                cp /mnt/config-map/master.cnf /mnt/conf.d/\n
              else\n
                cp /mnt/config-map/slave.cnf /mnt/conf.d/\n
              fi\n
            "],
            "volumeMounts": [
              {"name": "conf", "mountPath": "/mnt/conf.d"},
              {"name": "config-map", "mountPath": "/mnt/config-map"}
            ]
          },
          {
            "name": "clone-mysql",
            #"image": gcr.io/google-samples/xtrabackup:1.0 原始鏡相本身打tag push 到私庫
            "image": "192.168.1.103/k8s_public/xtrabackup:1.0",
            "command": ["bash", "-c", "
              set -ex\n
              # Skip the clone if data already exists.\n
              [[ -d /var/lib/mysql/mysql ]] && exit 0\n
              # Skip the clone on master (ordinal index 0).\n
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1\n
              ordinal=${BASH_REMATCH[1]}\n
              [[ $ordinal -eq 0 ]] && exit 0\n
              # Clone data from previous peer.\n
              ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql\n
              # Prepare the backup.\n
              xtrabackup --prepare --target-dir=/var/lib/mysql\n
            "],
            "volumeMounts": [
              {"name": "data", "mountPath": "/var/lib/mysql", "subPath": "mysql"},
              {"name": "conf", "mountPath": "/etc/mysql/conf.d"}
            ]
          }
        ]'
    spec:
      containers:
      - name: mysql
        image: 192.168.1.103/k8s_public/mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 1
            memory: 1Gi
            #memory: 500Mi
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          timeoutSeconds: 1
      - name: xtrabackup
        image: 192.168.1.103/k8s_public/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command:
        - bash
        - "-c"
        - |
          set -ex
          cd /var/lib/mysql

          # Determine binlog position of cloned data, if any.
          if [[ -f xtrabackup_slave_info ]]; then
            # XtraBackup already generated a partial "CHANGE MASTER TO" query
            # because we're cloning from an existing slave.
            mv xtrabackup_slave_info change_master_to.sql.in
            # Ignore xtrabackup_binlog_info in this case (it's useless).
            rm -f xtrabackup_binlog_info
          elif [[ -f xtrabackup_binlog_info ]]; then
            # We're cloning directly from master. Parse binlog position.
            [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
            rm xtrabackup_binlog_info
            echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                  MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
          fi

          # Check if we need to complete a clone by starting replication.
          if [[ -f change_master_to.sql.in ]]; then
            echo "Waiting for mysqld to be ready (accepting connections)"
            until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done

            echo "Initializing replication from clone position"
            # In case of container restart, attempt this at-most-once.
            mv change_master_to.sql.in change_master_to.sql.orig
            mysql -h 127.0.0.1 <<EOF
          $(<change_master_to.sql.orig),
            MASTER_HOST='mysql-0.mysql',
            MASTER_USER='root',
            MASTER_PASSWORD='',
            MASTER_CONNECT_RETRY=10;
          START SLAVE;
          EOF
          fi

          # Start a server to send backups when requested by peers.
          exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
            "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
      nodeSelector:
        zone: mysql
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        #volume.alpha.kubernetes.io/storage-class: "managed-nfs-storage" # note: depending on the Kubernetes version, the alpha or beta annotation is required
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
[root@master3 setateful-set-mysql]# kubectl create -f mysql-configmap.yaml  -f mysql-services.yaml  -f mysql-statefulset.yaml

[root@master3 setateful-set-mysql]# kubectl get storageclass,pv,pvc,statefulset,pod,service |grep mysql



pv/default-data-mysql-0-pvc-3954b59e-3a10-11e7-b646-525400c2bc59                               10Gi       RWO           Delete          Bound     default/data-mysql-0                                         6d
pv/default-data-mysql-1-pvc-396bd26f-3a10-11e7-b646-525400c2bc59                               10Gi       RWO           Delete          Bound     default/data-mysql-1                                         6d
pv/default-data-mysql-2-pvc-39958611-3a10-11e7-b646-525400c2bc59                               10Gi       RWO           Delete          Bound     default/data-mysql-2                                         6d
pvc/data-mysql-0                               Bound     default-data-mysql-0-pvc-3954b59e-3a10-11e7-b646-525400c2bc59                               10Gi       RWO           6d
pvc/data-mysql-1                               Bound     default-data-mysql-1-pvc-396bd26f-3a10-11e7-b646-525400c2bc59                               10Gi       RWO           6d
pvc/data-mysql-2                               Bound     default-data-mysql-2-pvc-39958611-3a10-11e7-b646-525400c2bc59                               10Gi       RWO           6d

statefulsets/mysql             3         3         5d

po/mysql-0                                   2/2       Running             0          5d
po/mysql-1                                   2/2       Running             0          5d
po/mysql-2                                   2/2       Running             0          5d

svc/mysql                    None            <none>        3306/TCP       6d  # within the same namespace these names resolve: ping mysql-0.mysql, ping mysql-1.mysql
svc/mysql-read               172.1.11.160    <none>        3306/TCP       6d

OK, all Pods are created. Note that the mysql service has no cluster IP; this is the headless service type. Also note that after kubectl delete statefulset, the PVs and PVCs still remain.

Scale out the MySQL slaves; after scaling, the corresponding PVs and PVCs are created automatically:

kubectl scale --replicas=5 statefulset mysql
kubectl get pod|grep mysql
po/mysql-0                                   2/2       Running             0          5d
po/mysql-1                                   2/2       Running             0          5d
po/mysql-2                                   2/2       Running             0          5d
po/mysql-3                                   2/2       Running             0          5m
po/mysql-4                                   2/2       Running             0          5m
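
The PVCs for mysql-3 and mysql-4 are provisioned automatically as well; a quick check:

kubectl get pvc | grep mysql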

Scale in: kubectl scale --replicas=2 statefulset mysql

kubectl get pod|grep mysql

po/mysql-0                                   2/2       Running             0          5d
po/mysql-1                                   2/2       Running             0          5d

Testing

Connect to MySQL for testing

Method 1: connect from a container
Start a mysql-client Pod

# Start a client container. In my test the statements executed successfully; when nothing more prints, press Ctrl+C. kubectl get pod then shows the mysql-client Pod.

kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --\
  mysql -h mysql-0.mysql <<EOF
CREATE DATABASE test;
CREATE TABLE test.messages (message VARCHAR(250));
INSERT INTO test.messages VALUES ('hello');
EOF
kubectl exec -it mysql-client bash

# connect to the slaves (reads, any instance)
root@mysql-client:/# mysql -h mysql-read

# connect to the master (writes)
mysql -h mysql-0.mysql
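
To verify that replication is actually running on a slave, a sketch (the root password is empty because MYSQL_ALLOW_EMPTY_PASSWORD=1):

kubectl exec mysql-1 -c mysql -- mysql -h 127.0.0.1 -e "SHOW SLAVE STATUS\G"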

Method 2: install a MySQL client on a host machine

# install
yum install mysql -y

# look up the Pod IPs
[root@node131 images]# kubectl get po -o wide|grep mysql
mysql-0                                  2/2       Running   0          25m       172.30.2.4    192.168.6.133
mysql-1                                  2/2       Running   1          24m       172.30.28.4   192.168.6.132
mysql-2                                  2/2       Running   1          24m       172.30.2.5    192.168.6.133
mysql-client                             1/1       Running   0          22m       172.30.28.5   192.168.6.132

# log in with the local mysql client
mysql -h 172.30.2.5

Check the mysql-read service:

[root@node131 images]# kubectl run mysql-client-loop --image=mysql:5.7 -i -t --rm --restart=Never --\
>   bash -ic "while sleep 1; do mysql -h mysql-read -e 'SELECT @@server_id,NOW()'; done"
If you don't see a command prompt, try pressing enter.
+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         100 | 2017-05-23 08:58:31 |
+-------------+---------------------+
+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         101 | 2017-05-23 08:58:32 |
+-------------+---------------------+
+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         102 | 2017-05-23 08:58:33 |
+-------------+---------------------+

^C

Keep the window above open.

Simulate a MySQL node going down:

kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql /usr/bin/mysql.off

In that window you can see that only server IDs 100 and 101 remain:

+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         100 | 2017-05-23 09:03:05 |
+-------------+---------------------+
+-------------+---------------------+
| @@server_id | NOW()               |
+-------------+---------------------+
|         100 | 2017-05-23 09:03:06 |
+-------------+---------------------+
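
What happened: renaming the mysql binary makes mysql-2's readiness probe fail, so the Pod is removed from the mysql-read Service endpoints and server-id 102 stops appearing. An illustrative way to confirm:

kubectl get endpoints mysql-read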

Restore mysql-2, and it is automatically re-added as a slave:

kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql.off /usr/bin/mysql

Delete a Pod:

kubectl delete pod mysql-2

After the deletion, the StatefulSet controller automatically recreates mysql-2.

Node maintenance: when a node needs maintenance, all Pods on that node must be evicted; they are automatically rescheduled onto other nodes.

kubectl drain <node-name> --force --delete-local-data --ignore-daemonsets
kubectl get pod mysql-2 -o wide --watch

After the node is maintained, rejoin it to the cluster:

kubectl uncordon <node-name>
kubectl get pods -l app=mysql --watch

Scale out the StatefulSet

kubectl scale --replicas=5 statefulset mysql

kubectl get pods -l app=mysql --watch

kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --\
  mysql -h mysql-3.mysql -e "SELECT * FROM test.messages"
  
kubectl scale --replicas=3 statefulset mysql

kubectl get pvc -l app=mysql

Scale in: all 5 PVCs still exist, despite the StatefulSet having been scaled down to 3:

NAME           STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
data-mysql-0   Bound     pvc-8acbf5dc-b103-11e6-93fa-42010a800002   10Gi       RWO           20m
data-mysql-1   Bound     pvc-8ad39820-b103-11e6-93fa-42010a800002   10Gi       RWO           20m
data-mysql-2   Bound     pvc-8ad69a6d-b103-11e6-93fa-42010a800002   10Gi       RWO           20m
data-mysql-3   Bound     pvc-50043c45-b1c5-11e6-93fa-42010a800002   10Gi       RWO           2m
data-mysql-4   Bound     pvc-500a9957-b1c5-11e6-93fa-42010a800002   10Gi       RWO           2m

If you don’t intend to reuse the extra PVCs, you can delete them:

kubectl delete pvc data-mysql-3
kubectl delete pvc data-mysql-4

Clean up:

kubectl delete pod mysql-client-loop --now
kubectl delete statefulset mysql
kubectl get pods -l app=mysql
kubectl delete configmap,service,pvc -l app=mysql

Fixing RBAC authorization

This is because Kubernetes 1.6 enables RBAC authorization.

After creating the StatefulSet, a look at the provisioner Pod's logs showed:

kubectl logs -f  nfs-client-provisioner-2387627438-hs250 
...
E0523 02:47:32.695718       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:397: Failed to list *v1.PersistentVolume: User "system:serviceaccount:default:default" cannot list persistentvolumes at the cluster scope. (get persistentvolumes)
E0523 02:47:32.696305       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:369: Failed to list *v1.StorageClass: User "system:serviceaccount:default:default" cannot list storageclasses.storage.k8s.io at the cluster scope. (get storageclasses.storage.k8s.io)
E0523 02:47:32.697326       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:396: Failed to list *v1.PersistentVolumeClaim: User "system:serviceaccount:default:default" cannot list persistentvolumeclaims at the cluster scope. (get persistentvolumeclaims)
E0523 02:47:33.697467       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:397: Failed to list *v1.PersistentVolume: User "system:serviceaccount:default:default" cannot list persistentvolumes at the cluster scope. (get persistentvolumes)
E0523 02:47:33.697967       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:369: Failed to list *v1.StorageClass: User "system:serviceaccount:default:default" cannot list storageclasses.storage.k8s.io at the cluster scope. (get storageclasses.storage.k8s.io)
E0523 02:47:33.699042       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:396: Failed to list *v1.PersistentVolumeClaim: User "system:serviceaccount:default:default" cannot list persistentvolumeclaims at the cluster scope. (get persistentvolumeclaims)
...
^C

Fix:

[root@node131 rbac]# cat serviceaccount.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
[root@node131 rbac]# cat clusterrole.yaml 
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
[root@node131 rbac]# cat  clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@node131 rbac]#

Note

[root@node131 nfs]# cat nfs-stateful.yaml 
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner # must reference the ServiceAccount created above

Create these objects in order; after that, the PV provisioning shown above works:

kubectl create -f serviceaccount.yaml -f clusterrole.yaml -f clusterrolebinding.yaml
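
A quick illustrative check that the RBAC objects exist:

kubectl get serviceaccount nfs-provisioner
kubectl get clusterrole nfs-provisioner-runner
kubectl get clusterrolebinding run-nfs-provisioner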