(High-Concurrency Probing) 4. Redis Cluster Deployment, Continued (Disaster Recovery)

Preface

Previously we deployed a Redis cluster of 4 master-slave pairs, plus 1 master-slave pair for session backup. If a node in the Redis cluster goes down, how do we keep the service available? This article adds and starts a sentinel service on the session servers and tests the cluster's disaster recovery behavior.

1. Adding Sentinel

a. Reorganizing the cluster

Since all cluster connections go over the internal network (as they would in a real deployment), the container start commands and configuration files are modified to drop the cluster's public port mappings and the CLI connection password. Also, this many nodes is inconvenient to manage, so the cluster is shrunk a bit:
adjust it to 3 password-free master-slave pairs by deleting clm4 and cls4:

/ # redis-cli --cluster del-node 172.1.30.21:6379 c2b42a6c35ab6afb1f360280f9545b3d1761725e
>>> Removing node c2b42a6c35ab6afb1f360280f9545b3d1761725e from cluster 172.1.30.21:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
/ # redis-cli --cluster del-node 172.1.50.21:6379 6d1b7a14a6d0be55a5fcb9266358bd1a42244d47
>>> Removing node 6d1b7a14a6d0be55a5fcb9266358bd1a42244d47 from cluster 172.1.50.21:6379
[ERR] Node 172.1.50.21:6379 is not empty! Reshard data away and try again.
#the node's slots must be emptied first (rebalance with weight=0)
/ # redis-cli --cluster rebalance 172.1.50.21:6379 --cluster-weight 6d1b7a14a6d0be55a5fcb9266358bd1a42244d47=0
Moving 2186 slots from 172.1.50.21:6379 to 172.1.30.11:6379
###
Moving 2185 slots from 172.1.50.21:6379 to 172.1.50.11:6379
###
Moving 2185 slots from 172.1.50.21:6379 to 172.1.50.12:6379
###
/ # redis-cli --cluster del-node 172.1.50.21:6379 6d1b7a14a6d0be55a5fcb9266358bd1a42244d47
>>> Removing node 6d1b7a14a6d0be55a5fcb9266358bd1a42244d47 from cluster 172.1.50.21:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

The scale-down succeeded.
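
To confirm the scale-down left the cluster healthy, slot coverage can be checked from any surviving node. A quick sanity check, not part of the original transcript, using 172.1.30.11 from the rebalance output above:

/ # redis-cli --cluster check 172.1.30.11:6379
#expect "[OK] All 16384 slots covered." with exactly 3 masters remaining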

b. Starting Sentinel

Here we modify the container start commands for rm/rs:

docker run --name rm \
    --restart=always \
    --network=mybridge --ip=172.1.13.11 \
    -v /root/tmp/dk/redis/data:/data \
    -v /root/tmp/dk/redis/redis.conf:/etc/redis/redis.conf \
    -v /root/tmp/dk/redis/sentinel.conf:/etc/redis/sentinel.conf \
    -d cffycls/redis5:1.7
docker run --name rs \
    --restart=always \
    --network=mybridge --ip=172.1.13.12 \
    -v /root/tmp/dk/redis_slave/data:/data \
    -v /root/tmp/dk/redis_slave/redis.conf:/etc/redis/redis.conf \
    -v /root/tmp/dk/redis_slave/sentinel.conf:/etc/redis/sentinel.conf \
    -d cffycls/redis5:1.7
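
Compared with the earlier start commands, the only change is the extra sentinel.conf bind mount. Sentinel rewrites its own configuration file at runtime to persist discovered state, so mounting it from the host keeps that state across container restarts.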

Referring to 《redis集羣實現(六) 容災與宕機恢復》 and 《Redis及其Sentinel配置項詳細說明》, modify the configuration file:

#storage path for any data the sentinel produces
dir /data/sentinel

#<master-name> <ip> <redis-port> <quorum>
#monitored name, ip, port, and the quorum (minimum number of sentinels that must agree)
sentinel monitor mymaster1 172.1.50.11 6379 2
sentinel monitor mymaster2 172.1.50.12 6379 2
sentinel monitor mymaster3 172.1.50.13 6379 2

#sentinel down-after-milliseconds <master-name> <milliseconds>
#monitored name, and the timeout after which the node is considered down
# Default is 30 seconds.
sentinel down-after-milliseconds mymaster1 30000
sentinel down-after-milliseconds mymaster2 30000
sentinel down-after-milliseconds mymaster3 30000

#sentinel parallel-syncs <master-name> <numslaves>
#monitored name; set to 1 so that at most one slave at a time is unable to serve requests during resync
sentinel parallel-syncs mymaster1 1
sentinel parallel-syncs mymaster2 1
sentinel parallel-syncs mymaster3 1

#default values
# Default is 3 minutes.
sentinel failover-timeout mymaster1 180000
sentinel failover-timeout mymaster2 180000
sentinel failover-timeout mymaster3 180000
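
Note that dir /data/sentinel is a path inside the containers: it lives under the host directories bind-mounted to /data in the docker run commands above. A minimal preparation sketch under that assumption:

#create the sentinel data directory behind each container's /data mount
mkdir -p /root/tmp/dk/redis/data/sentinel
mkdir -p /root/tmp/dk/redis_slave/data/sentinel
#restart both containers, then open a shell in rm
docker restart rm rs
docker exec -it rm sh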

Create the corresponding folders (xx/data/sentinel) as sketched above, restart the 2 containers, and enter rm:

/ # redis-sentinel /etc/redis/sentinel.conf
... ... 
14:X 11 Jul 2019 18:25:24.418 # +monitor master mymaster3 172.1.50.13 6379 quorum 2
14:X 11 Jul 2019 18:25:24.419 # +monitor master mymaster1 172.1.50.11 6379 quorum 2
14:X 11 Jul 2019 18:25:24.419 # +monitor master mymaster2 172.1.50.12 6379 quorum 2
14:X 11 Jul 2019 18:25:24.421 * +slave slave 172.1.30.12:6379 172.1.30.12 6379 @ mymaster1 172.1.50.11 6379
14:X 11 Jul 2019 18:25:24.425 * +slave slave 172.1.30.13:6379 172.1.30.13 6379 @ mymaster2 172.1.50.12 6379
14:X 11 Jul 2019 18:26:14.464 # +sdown master mymaster3 172.1.50.13 6379

"There is no need to monitor the slaves: once the master is monitored, the slaves are added to sentinel automatically."
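
Once running, the sentinel's view of each master and of its auto-discovered slaves can be queried over the sentinel port. A sketch, assuming the default port 26379 since the configuration above does not set one:

#current master address, auto-discovered slaves, and state of all monitored masters
/ # redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster1
/ # redis-cli -p 26379 SENTINEL slaves mymaster1
/ # redis-cli -p 26379 SENTINEL masters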
