Choosing how to deploy the Redis cluster

  • StatefulSet
  • Service & Deployment

For stateful services such as Redis and MySQL, StatefulSet is the preferred approach, and it is the one this article covers.

ps:
The design model behind StatefulSet:
    Topology state. The multiple instances of an application are not fully interchangeable: they must be
    started in a specific order, e.g. master node A must start before slave node B. If you delete Pods A
    and B, they must be recreated in exactly that order, and each new Pod must keep the same network
    identity as the one it replaces, so existing clients can reach the new Pod the same way they reached
    the old one.

    Storage state. Each instance of the application is bound to its own storage. From the instance's point
    of view, the data Pod A reads on its first access and the data it reads ten minutes later should be
    the same, even if Pod A was recreated in between. The multiple storage instances of a database are the
    typical example.

Deployment

Install NFS (shared storage)

Pods in k8s can be rescheduled to any node, so we need shared storage to back the data: no matter which node a Pod lands on, it can reach the same data volume. Here I use NFS as the shared storage; it can be replaced with something else later.

yum -y install nfs-utils rpcbind
vim /etc/exports
/usr/local/kubernetes/redis/pv1 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv2 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv3 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv4 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv5 0.0.0.0/0(rw,all_squash)
/usr/local/kubernetes/redis/pv6 0.0.0.0/0(rw,all_squash)

mkdir -p /usr/local/kubernetes/redis/pv{1..6}
chmod 777 /usr/local/kubernetes/redis/pv{1..6}

Later, the export entries can be written with a hostname or wildcard instead of a network range.
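For illustration, an entry restricted to a hypothetical node domain could look like this (the domain is an assumption, not part of this setup):

/usr/local/kubernetes/redis/pv1 *.k8s.example.com(rw,all_squash)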

Start the services:

systemctl enable nfs
systemctl enable rpcbind
systemctl start nfs
systemctl start rpcbind
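Optionally, verify the exports took effect (standard nfs-utils commands):

exportfs -v                 # list the directories this server exports
showmount -e localhost      # query the export list over RPC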

Create the PVs

Create six PVs for the PVCs to bind to later.

vim pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv1
spec:
  capacity:
    storage: 200M      # volume size: 200M
  accessModes:
    - ReadWriteMany    # many clients can mount read-write
  nfs:
    server: <NFS server address>
    path: "/usr/local/kubernetes/redis/pv1"

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv2
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: <NFS server address>
    path: "/usr/local/kubernetes/redis/pv2"

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv3
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: <NFS server address>
    path: "/usr/local/kubernetes/redis/pv3"

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv4
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: <NFS server address>
    path: "/usr/local/kubernetes/redis/pv4"

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv5
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: <NFS server address>
    path: "/usr/local/kubernetes/redis/pv5"

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv6
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: <NFS server address>
    path: "/usr/local/kubernetes/redis/pv6"

Field notes:
apiVersion: API version
kind: this YAML creates a PV
metadata: metadata
spec.capacity: the capacity of the volume
spec.accessModes: access mode (read/write mode)
spec.nfs: this PV is backed by NFS

Create the PVs:

kubectl create -f pv.yaml
kubectl get pv    # check the created PVs

Create a ConfigMap to hold the Redis configuration file

Because the Redis configuration may change, we ship the config file through a ConfigMap; then a config change no longer forces us to rebuild a Docker image every time.

appendonly yes                                  # enable Redis AOF persistence
cluster-enabled yes                             # enable cluster mode
cluster-config-file /var/lib/redis/nodes.conf   # explained below
cluster-node-timeout 5000                       # node timeout
dir /var/lib/redis                              # where the AOF persistence file lives
port 6379                                       # listening port

cluster-config-file: sets the path of the file where the node saves its cluster configuration. If the file does not exist, each node generates a new ID for itself at startup and persists it there; the instance then keeps using that same ID, which stays unique within the cluster. Nodes record each other by ID rather than by IP or port, because in k8s IP addresses are not stable, while this unique identifier stays constant for the node's whole lifetime. This file is where the node IDs are stored.
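For reference, each line of nodes.conf uses the same format as CLUSTER NODES output; an illustrative entry (the ID and IP are examples taken from this cluster, not literal file contents) looks like:

4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379@16379 myself,master - 0 0 1 connected 0-5460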

Create the ConfigMap named redis-conf:

kubectl create configmap redis-conf --from-file=redis.conf

Check:

[root@rke ~]# kubectl get cm
NAME         DATA   AGE
redis-conf   1      22h
[root@rke ~]# kubectl describe cm redis-conf
Name:         redis-conf
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
redis.conf:
----
appendonly yes
cluster-enabled yes
cluster-config-file /var/lib/redis/nodes.conf
cluster-node-timeout 5000
dir /var/lib/redis
port 6379

Events:  <none>
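If the configuration changes later, the ConfigMap can be regenerated in place instead of rebuilding anything (a sketch; since redis-server only reads its config at startup, the Pods still need a restart to pick the change up):

kubectl create configmap redis-conf --from-file=redis.conf --dry-run -o yaml | kubectl apply -f -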

Create the headless Service

The headless Service is the basis for the stable network identities of a StatefulSet, so we need to create it first. Prepare headless-service.yml as follows:

apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    port: 6379
  clusterIP: None
  selector:
    app: redis
    appCluster: redis-cluster

Create it:

kubectl create -f headless-service.yml

Check:

[root@k8s-node1 redis]# kubectl get svc redis-service
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
redis-service   ClusterIP   None         <none>        6379/TCP   53s

As shown, the Service is named redis-service and its CLUSTER-IP is None, which marks it as a "headless" Service.

Create the Redis cluster nodes

This is the core of this article. Create the redis.yml file:

[root@rke ~]# cat /home/docker/redis/redis.yml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: redis-app
spec:
  serviceName: "redis-service"
  replicas: 6
  template:
    metadata:
      labels:
        app: redis
        appCluster: redis-cluster
    spec:
      terminationGracePeriodSeconds: 20
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - redis
              topologyKey: kubernetes.io/hostname
      containers:
      - name: redis
        image: "redis"
        command:
          - "redis-server"                  # redis startup command
        args:
          - "/etc/redis/redis.conf"         # arguments to redis-server; each list item is one argument
          - "--protected-mode"              # allow access from outside
          - "no"
        # command: redis-server /etc/redis/redis.conf --protected-mode no
        resources:                          # resources
          requests:                         # requested resources
            cpu: "100m"                     # m = milli-cores, so 100m is 0.1 CPU
            memory: "100Mi"                 # 100Mi of memory
        ports:
            - name: redis
              containerPort: 6379
              protocol: "TCP"
            - name: cluster
              containerPort: 16379
              protocol: "TCP"
        volumeMounts:
          - name: "redis-conf"              # mount the volume generated from the ConfigMap
            mountPath: "/etc/redis"         # mount path inside the container
          - name: "redis-data"              # mount point for the persistent volume
            mountPath: "/var/lib/redis"
      volumes:
      - name: "redis-conf"                  # reference the ConfigMap volume
        configMap:
          name: "redis-conf"
          items:
            - key: "redis.conf"             # key given when the ConfigMap was created
              path: "redis.conf"            # i.e. the file passed to --from-file
  volumeClaimTemplates:                     # PVC templates declaring the persistent volumes
  - metadata:
      name: redis-data
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 200M

podAntiAffinity: anti-affinity; it controls which Pods this Pod must not (or should not) share a topology domain with, and is used to spread a service's Pods across different hosts or topology domains to improve the service's resilience.
matchExpressions: states that a Redis Pod should preferably not be scheduled onto a Node that already runs a Pod labelled app=redis; in other words, a Node that already hosts Redis should, if possible, not receive another Redis Pod.
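If the Pods had to be spread strictly one per node, the preferred rule could be tightened into a hard requirement (a sketch of the same selector under requiredDuringSchedulingIgnoredDuringExecution; note that with only three worker nodes, six Pods would then no longer all schedule):

      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - redis
            topologyKey: kubernetes.io/hostname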
Also, by StatefulSet's rules, the hostnames of our six Redis Pods are named $(statefulset name)-$(ordinal) in sequence, as shown below:

[root@rke ~]# kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP           NODE            NOMINATED NODE
redis-app-0   1/1     Running   0          40m   10.42.2.17   192.168.1.21    <none>
redis-app-1   1/1     Running   0          40m   10.42.0.15   192.168.1.114   <none>
redis-app-2   1/1     Running   0          40m   10.42.1.13   192.168.1.20    <none>
redis-app-3   1/1     Running   0          40m   10.42.2.18   192.168.1.21    <none>
redis-app-4   1/1     Running   0          40m   10.42.0.16   192.168.1.114   <none>
redis-app-5   1/1     Running   0          40m   10.42.1.14   192.168.1.20    <none>

As shown above, the Pods are created one at a time in {0..N-1} order. Note that redis-app-1 does not start until redis-app-0 has reached the Running state.
Each Pod also gets a DNS name inside the cluster, of the form $(podname).$(service name).$(namespace).svc.cluster.local, that is:

redis-app-0.redis-service.default.svc.cluster.local
redis-app-1.redis-service.default.svc.cluster.local
...and so on...

Inside the K8S cluster, the Pods can reach each other through these names. We can verify them with nslookup from a busybox image:

[root@k8s-node1 ~]# kubectl run -i --tty --image busybox dns-test --restart=Never --rm /bin/sh
If you don't see a command prompt, try pressing enter.
/ # nslookup redis-app-1.redis-service.default.svc.cluster.local
Server:    10.43.0.10
Address:   10.43.0.10:53

Name:      redis-app-1.redis-service.default.svc.cluster.local
Address:   10.42.0.15

*** Can't find redis-app-1.redis-service.default.svc.cluster.local: No answer

/ # nslookup redis-app-0.redis-service.default.svc.cluster.local
Server:    10.43.0.10
Address:   10.43.0.10:53

Name:      redis-app-0.redis-service.default.svc.cluster.local
Address:   10.42.2.17

As shown, the IP of redis-app-0 is 10.42.2.17. Of course, if a Redis Pod is rescheduled or restarted (we can test this by deleting one by hand), its IP changes, but the Pod's DNS name, SRV records, and A record do not.
We can also see that the PVs created earlier have all been bound:

[root@k8s-node1 ~]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON   AGE
nfs-pv1   200M       RWX            Retain           Bound    default/redis-data-redis-app-2                           1h
nfs-pv2   200M       RWX            Retain           Bound    default/redis-data-redis-app-3                           1h
nfs-pv3   200M       RWX            Retain           Bound    default/redis-data-redis-app-4                           1h
nfs-pv4   200M       RWX            Retain           Bound    default/redis-data-redis-app-5                           1h
nfs-pv5   200M       RWX            Retain           Bound    default/redis-data-redis-app-0                           1h
nfs-pv6   200M       RWX            Retain           Bound    default/redis-data-redis-app-1                           1h

Initialize the Redis cluster

With the six Redis Pods created, we still need the common redis-trib tool to initialize the cluster.

Create a CentOS container

The Redis cluster can only be initialized after all of its nodes are up, and baking the initialization logic into the StatefulSet would be complicated and inefficient. Credit to the original project's author for the approach, which is worth learning from: create one extra container in K8S dedicated to managing and controlling certain services inside the cluster.
Here we start a CentOS container, install redis-trib inside it, and initialize the Redis cluster from there. Run:

kubectl run -i --tty centos --image=centos --restart=Never /bin/bash

Once it is running, enter the centos container. The original project sets up the base software environment with:

cat >> /etc/yum.repos.d/epel.repo <<'EOF'
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/epel/7/$basearch
#mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
EOF

Initialize the redis cluster
First, install redis-trib (the Redis Cluster command-line tool):

yum -y install redis-trib.noarch bind-utils-9.9.4-72.el7.x86_64

Then create the cluster with one slave per master:

redis-trib create --replicas 1 \
`dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-1.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-2.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-3.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-4.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-5.redis-service.default.svc.cluster.local`:6379

# create: create a new cluster
# --replicas 1: give each master in the new cluster one slave, yielding 3 masters and 3 slaves
# the remaining arguments are the addresses of the Redis instances

As above, the command dig +short redis-app-0.redis-service.default.svc.cluster.local resolves the Pod's DNS name to an IP, because redis-trib does not accept hostnames when creating a cluster.
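Equivalently, the address list can be assembled with a small shell loop before calling redis-trib (a sketch using the same service names as above):

nodes=""
for i in $(seq 0 5); do
  ip=$(dig +short redis-app-${i}.redis-service.default.svc.cluster.local)
  nodes="${nodes} ${ip}:6379"
done
redis-trib create --replicas 1 ${nodes}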
When it finishes, redis-trib prints a proposed configuration for review; if it looks right, type yes and redis-trib applies it to the cluster.

>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
10.42.2.17:6379
10.42.0.15:6379
10.42.1.13:6379
Adding replica 10.42.2.18:6379 to 10.42.2.17:6379
Adding replica 10.42.0.16:6379 to 10.42.0.15:6379
Adding replica 10.42.1.14:6379 to 10.42.1.13:6379
M: 4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379
   slots:0-5460 (5461 slots) master
M: 505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379
   slots:5461-10922 (5462 slots) master
M: 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.13:6379
   slots:10923-16383 (5461 slots) master
S: 366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379
   replicates 4676f8913cdcd1e256db432531c80591ae6c5fc3
S: cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379
   replicates 505f3e126882c0c5115885e54f9b361bc7e74b97
S: e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379
   replicates 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f
Can I set the above configuration? (type 'yes' to accept):

Typing yes starts the cluster creation:

>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 10.42.2.17:6379)
M: 4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.13:6379@16379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379@16379
   slots: (0 slots) slave
   replicates 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f
S: 366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379@16379
   slots: (0 slots) slave
   replicates 4676f8913cdcd1e256db432531c80591ae6c5fc3
M: 505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379@16379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379@16379
   slots: (0 slots) slave
   replicates 505f3e126882c0c5115885e54f9b361bc7e74b97
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

The last line shows that every one of the cluster's 16384 slots is being served by at least one master: the cluster is operating normally.

At this point the Redis cluster is truly up. Connect to any Redis Pod to verify:

[root@k8s-node1 ~]# kubectl exec -it redis-app-2 /bin/bash
root@redis-app-2:/data# /usr/local/bin/redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:186
cluster_stats_messages_pong_sent:199
cluster_stats_messages_sent:385
cluster_stats_messages_ping_received:194
cluster_stats_messages_pong_received:186
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:385
127.0.0.1:6379> cluster nodes
589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.13:6379@16379 master - 0 1550555011000 3 connected 10923-16383
e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379@16379 slave 589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 0 1550555011512 6 connected
366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379@16379 slave 4676f8913cdcd1e256db432531c80591ae6c5fc3 0 1550555010507 4 connected
505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379@16379 master - 0 1550555011000 2 connected 5461-10922
cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379@16379 slave 505f3e126882c0c5115885e54f9b361bc7e74b97 0 1550555011713 5 connected
4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379@16379 myself,master - 0 1550555010000 1 connected 0-5460

We can also look at the data Redis wrote to the mounts on the NFS server:

[root@rke ~]# tree /usr/local/kubernetes/redis/
/usr/local/kubernetes/redis/
├── pv1
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── pv2
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── pv3
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── pv4
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
├── pv5
│   ├── appendonly.aof
│   ├── dump.rdb
│   └── nodes.conf
└── pv6
    ├── appendonly.aof
    ├── dump.rdb
    └── nodes.conf

6 directories, 18 files

Create a Service for client access

Earlier we created the headless service that backs the StatefulSet, but that service has no cluster IP and so cannot be used for access from elsewhere. We therefore create a second Service dedicated to providing access and load balancing for the Redis cluster:

apiVersion: v1
kind: Service
metadata:
  name: redis-access-service
  labels:
    app: redis
spec:
  ports:
  - name: redis-port
    protocol: "TCP"
    port: 6379
    targetPort: 6379
  selector:
    app: redis
    appCluster: redis-cluster

As above, the Service is named redis-access-service, exposes port 6379 inside the K8S cluster, and load-balances across the Pods labelled app: redis and appCluster: redis-cluster.
Check it after creation:

[root@rke ~]# kubectl get svc redis-access-service -o wide
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE   SELECTOR
redis-access-service   ClusterIP   10.43.40.62   <none>        6379/TCP   47m   app=redis,appCluster=redis-cluster

As above, every application inside the k8s cluster can now reach the Redis cluster at 10.43.40.62:6379. For easier testing, we could also add a NodePort to the Service to map it onto the physical machines; that is left for testing.
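Such a NodePort variant might look like this (a sketch; the name redis-access-nodeport and port 30379 are arbitrary choices, the latter from the default 30000-32767 NodePort range):

apiVersion: v1
kind: Service
metadata:
  name: redis-access-nodeport
  labels:
    app: redis
spec:
  type: NodePort
  ports:
  - name: redis-port
    port: 6379
    targetPort: 6379
    nodePort: 30379
  selector:
    app: redis
    appCluster: redis-cluster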

Testing master-slave failover

With the Redis cluster up and running on K8S, what we care about most is whether its native high-availability mechanism still works. Here we can pick any master Pod to test the master-slave failover, for example redis-app-2:

[root@rke ~]# kubectl get pods redis-app-2 -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE
redis-app-2   1/1     Running   0          1h    10.42.1.13   192.168.1.20   <none>

Enter redis-app-2 and check:

[root@rke ~]# kubectl exec -it redis-app-2 /bin/bash
root@redis-app-2:/data# redis-cli
127.0.0.1:6379> role
1) "master"
2) (integer) 9478
3) 1) 1) "10.42.1.14"
      2) "6379"
      3) "9478"

As shown, it is a master and its slave is 10.42.1.14, i.e. redis-app-5.

Next, manually delete redis-app-2:

[root@rke ~]# kubectl delete pods redis-app-2
pod "redis-app-2" deleted
[root@rke ~]# kubectl get pods redis-app-2 -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE
redis-app-2   1/1     Running   0          19s   10.42.1.15   192.168.1.20   <none>

As shown, the IP changed to 10.42.1.15. Enter redis-app-2 again to check:

[root@rke ~]# kubectl exec -it redis-app-2 /bin/bash
root@redis-app-2:/data# redis-cli
127.0.0.1:6379> ROLE
1) "slave"
2) "10.42.1.14"
3) (integer) 6379
4) "connected"
5) (integer) 9688

As shown, redis-app-2 came back as a slave, subordinate to its former slave 10.42.1.14, i.e. redis-app-5.
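A quick way to confirm the cluster stayed healthy through the failover (plain redis-cli commands run through kubectl):

kubectl exec redis-app-0 -- redis-cli cluster info | grep cluster_state   # expect cluster_state:ok
kubectl exec redis-app-2 -- redis-cli role                                # shows the new role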

Scaling the Redis cluster dynamically

The cluster currently has six nodes, three masters and three slaves. We now add two more Pods to reach four masters and four slaves.

Add NFS shared directories

cat >> /etc/exports <<'EOF'
/usr/local/kubernetes/redis/pv7 192.168.0.0/16(rw,all_squash)
/usr/local/kubernetes/redis/pv8 192.168.0.0/16(rw,all_squash)
EOF
systemctl restart nfs rpcbind

[root@rke ~]# mkdir /usr/local/kubernetes/redis/pv{7..8}
[root@rke ~]# chmod 777 /usr/local/kubernetes/redis/*

Update the PV yml file, or simply create a new one; here we create a new one:

[root@rke redis]# cat pv_add.yml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv7
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.253
    path: "/usr/local/kubernetes/redis/pv7"

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv8
spec:
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.253
    path: "/usr/local/kubernetes/redis/pv8"

Create and check the PVs:

[root@rke redis]# kubectl create -f pv_add.yml
persistentvolume/nfs-pv7 created
persistentvolume/nfs-pv8 created
[root@rke redis]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                            STORAGECLASS   REASON   AGE
nfs-pv1   200M       RWX            Retain           Bound       default/redis-data-redis-app-1                           2h
nfs-pv2   200M       RWX            Retain           Bound       default/redis-data-redis-app-2                           2h
nfs-pv3   200M       RWX            Retain           Bound       default/redis-data-redis-app-4                           2h
nfs-pv4   200M       RWX            Retain           Bound       default/redis-data-redis-app-5                           2h
nfs-pv5   200M       RWX            Retain           Bound       default/redis-data-redis-app-0                           2h
nfs-pv6   200M       RWX            Retain           Bound       default/redis-data-redis-app-3                           2h
nfs-pv7   200M       RWX            Retain           Available                                                            7s
nfs-pv8   200M       RWX            Retain           Available                                                            7s

Add Redis nodes

Edit the replicas: field in the Redis yml file, change it to 8, then apply the upgrade:

[root@rke redis]# kubectl apply -f redis.yml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
statefulset.apps/redis-app configured
[root@rke redis]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
redis-app-0   1/1     Running   0          2h
redis-app-1   1/1     Running   0          2h
redis-app-2   1/1     Running   0          19m
redis-app-3   1/1     Running   0          2h
redis-app-4   1/1     Running   0          2h
redis-app-5   1/1     Running   0          2h
redis-app-6   1/1     Running   0          57s
redis-app-7   1/1     Running   0          30s
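Alternatively, the same scale-up can be done without editing the file (standard kubectl; it updates the same replicas field):

kubectl scale statefulset redis-app --replicas=8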

Add the nodes to the cluster

[root@rke redis]# kubectl exec -it centos /bin/bash
[root@centos /]# redis-trib add-node \
`dig +short redis-app-6.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379
[root@centos /]# redis-trib add-node \
`dig +short redis-app-7.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379

add-node takes the new node's address first, followed by the address of any node already in the cluster.
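redis-trib can also join a node directly as a replica via the --slave option (a sketch; without --master-id it attaches the new node to the master with the fewest replicas):

redis-trib add-node --slave \
`dig +short redis-app-7.redis-service.default.svc.cluster.local`:6379 \
`dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379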

Check that the added Redis nodes are healthy

[root@rke redis]# kubectl exec -it redis-app-0 bash
root@redis-app-0:/data# redis-cli
127.0.0.1:6379> cluster nodes
589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.15:6379@16379 slave e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 0 1550564776000 7 connected
e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379@16379 master - 0 1550564776000 7 connected 10923-16383
366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379@16379 slave 4676f8913cdcd1e256db432531c80591ae6c5fc3 0 1550564777051 4 connected
505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379@16379 master - 0 1550564776851 2 connected 5461-10922
cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379@16379 slave 505f3e126882c0c5115885e54f9b361bc7e74b97 0 1550564775000 5 connected
e4697a7ba460ae2979692116b95fbe1f2c8be018 10.42.0.20:6379@16379 master - 0 1550564776549 0 connected
246c79682e6cc78b4c2c28d0e7166baf47ecb265 10.42.2.23:6379@16379 master - 0 1550564776548 8 connected
4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379@16379 myself,master - 0 1550564775000 1 connected 0-5460

Reshard the hash slots

redis-trib reshard `dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379
## enter the number of hash slots to move
## enter the ID of the destination master node
## "all" moves slots from every existing master
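The same prompts can be answered up front with flags (a sketch; <destination-node-id> is a placeholder to be taken from cluster nodes output):

redis-trib reshard --from all --to <destination-node-id> --slots 4096 --yes \
`dig +short redis-app-0.redis-service.default.svc.cluster.local`:6379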

Check the resulting node information

127.0.0.1:6379> cluster nodes
589b4f4f908a04f56d2ab9cd6fd0fd25ea14bb8f 10.42.1.15:6379@16379 slave e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 0 1550566162000 7 connected
e9f1f704ff7c8f060d6b39e23be9cd8e55cb2e46 10.42.1.14:6379@16379 master - 0 1550566162909 7 connected 11377-16383
366abbba45d3200329a5c6305fbcec9e29b50c80 10.42.2.18:6379@16379 slave 4676f8913cdcd1e256db432531c80591ae6c5fc3 0 1550566161600 4 connected
505f3e126882c0c5115885e54f9b361bc7e74b97 10.42.0.15:6379@16379 master - 0 1550566161902 2 connected 5917-10922
cee3a27cc27635da54d94f16f6375cd4acfe6c30 10.42.0.16:6379@16379 slave 505f3e126882c0c5115885e54f9b361bc7e74b97 0 1550566162506 5 connected
246c79682e6cc78b4c2c28d0e7166baf47ecb265 10.42.2.23:6379@16379 master - 0 1550566161600 8 connected 0-453 5461-5916 10923-11376
4676f8913cdcd1e256db432531c80591ae6c5fc3 10.42.2.17:6379@16379 myself,master - 0 1550566162000 1