Hands-On with Redis Master-Slave Replication, Sentinel Failover, and Cluster Sharding

Background

Redis Sentinel (master-slave failover) and Redis Cluster (sharding) are the technologies Redis provides for high availability, high performance, and horizontal scalability. This article is a hands-on walkthrough of both.

1. Redis Sentinel (master-slave failover)

  • Monitors whether the master and slave nodes are online, and performs failover on its own according to its configuration (based on the Raft protocol).
  • Master-slave replication alone is still limited to a single machine in terms of capacity.

2. Redis Cluster (sharding)

  • Spreads data across multiple server nodes in a consistent-hashing style: 16384 hash slots are defined and distributed across the redis-server instances.
  • To read or write a key in a Redis Cluster, the client first runs the key through the CRC16 algorithm and takes the result modulo 16384, so every key maps to a hash slot numbered 0-16383; the operation is then performed on the node that owns that slot.
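To make the mapping concrete, here is a small Python sketch of the key-to-slot computation. Redis Cluster uses the CRC16-CCITT (XModem) variant, and the hash-tag handling below follows the cluster specification:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XModem): poly 0x1021, init 0x0000 -- the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to its hash slot (0-16383), honoring {hash tags}."""
    # If the key contains a non-empty {...} section, only that substring
    # is hashed, so related keys can be forced into the same slot.
    s = key.find('{')
    if s != -1:
        e = key.find('}', s + 1)
        if e > s + 1:
            key = key[s + 1:e]
    return crc16_xmodem(key.encode()) % 16384
```

For example, `key_slot("{user1000}.following")` and `key_slot("{user1000}.followers")` land in the same slot, which is how multi-key operations can be kept on one node.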

I. Master-Slave Replication

Setup Details

# The Docker bridge gateway IP is 172.17.0.1

# Start the master node
docker run -it --name redis-6380  -p 6380:6379 redis
docker exec -it redis-6380 /bin/bash
redis-cli -h 172.17.0.1 -p 6380

# Start slave node 1
docker run -it --name redis-6381  -p 6381:6379 redis
docker exec -it redis-6381 /bin/bash
redis-cli -h 172.17.0.1 -p 6381
replicaof 172.17.0.1 6380

# Start slave node 2
docker run -it --name redis-6382  -p 6382:6379 redis
docker exec -it redis-6382 /bin/bash
redis-cli -h 172.17.0.1 -p 6382
replicaof 172.17.0.1 6380

Now you can inspect the master's replication info. In the master's redis-cli, run:

> info Replication

# Replication
role:master
connected_slaves:2
slave0:ip=172.17.0.1,port=6379,state=online,offset=686,lag=0
slave1:ip=172.17.0.1,port=6379,state=online,offset=686,lag=1
master_replid:79187e2241015c2f8ed98ce68caafa765796dff2
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:686
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:686

From here on, operations on the master node are automatically synchronized to the slaves.

Running replicaof no one on a slave turns it back into a master.

Key Points

  1. Inspect Docker network details:
docker network ls
docker network inspect bridge
  2. Containers can reach each other through either the internal port or the externally mapped port.
  3. docker network inspect bridge shows the gateway IP and each container's IP; Redis can be reached at either <gateway IP>:<mapped port> or <container IP>:6379.

References

  1. Command: SLAVEOF

II. Sentinel High Availability

Current state:

  1. Gateway IP: 172.17.0.1
  2. Master port: 6390
  3. Slave ports: 6391, 6392

Steps

1. Recreate the Redis Docker containers:

redis.conf contents:

# Default port is 6379; override it here
port 6390

# Bind address: on a private network you can bind 127.0.0.1 or omit this; 0.0.0.0 accepts connections from anywhere
bind 0.0.0.0

# Run in the foreground (daemonize yes would make the container exit)
daemonize no

Change the listening port in each config and recreate the Redis containers:

docker run -p 6390:6390 -v D:\develop\shell\docker\redis\conf6390:/usr/local/etc/redis --name redis-conf-6390 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6390 /bin/bash
redis-cli -h 172.17.0.1 -p 6390

docker run -p 6391:6391 -v D:\develop\shell\docker\redis\conf6391:/usr/local/etc/redis --name redis-conf-6391 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6391 /bin/bash
redis-cli -h 172.17.0.1 -p 6391
slaveof 172.17.0.1 6390

docker run -p 6392:6392 -v D:\develop\shell\docker\redis\conf6392:/usr/local/etc/redis --name redis-conf-6392 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6392 /bin/bash
redis-cli -h 172.17.0.1 -p 6392
slaveof 172.17.0.1 6390

Now inspect the master's info again; the slave port numbers the master reports are back to normal. In the master's redis-cli, run:

> info Replication

# Replication
role:master
connected_slaves:2
slave0:ip=172.17.0.1,port=6391,state=online,offset=84,lag=0
slave1:ip=172.17.0.1,port=6392,state=online,offset=84,lag=0
master_replid:ed2e513ceed2b48a272b97c674c99d82284342a1
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:84
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:84

2. Create the Sentinel configuration file

Create sentinel.conf with the following content:

sentinel monitor bitkylin-master 172.17.0.1 6390 2
sentinel down-after-milliseconds bitkylin-master 5000
sentinel failover-timeout bitkylin-master 10000
sentinel parallel-syncs bitkylin-master 1

Explanation: this tells Sentinel to monitor a master named bitkylin-master, and marking that master as objectively down requires agreement from at least 2 Sentinels.
A node that fails to respond within 5 seconds is marked subjectively down, which starts the failover process; the 10-second value is the failover timeout, whose exact role is not explored here.
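As a rough illustration only (not Sentinel's actual implementation), the decision rules behind those two parameters can be sketched as:

```python
# Illustrative sketch of the thresholds configured above; this is NOT
# Sentinel's real code, just the decision rules the config expresses.

DOWN_AFTER_MS = 5000  # sentinel down-after-milliseconds bitkylin-master 5000
QUORUM = 2            # the trailing "2" in: sentinel monitor bitkylin-master ... 2

def subjectively_down(ms_since_last_valid_reply: int) -> bool:
    """One Sentinel's local view: S_DOWN after 5 s without a valid reply."""
    return ms_since_last_valid_reply > DOWN_AFTER_MS

def objectively_down(sdown_votes: int) -> bool:
    """O_DOWN, which allows failover to start, needs at least QUORUM votes."""
    return sdown_votes >= QUORUM
```

Note that the quorum only gates the O_DOWN flag; the Sentinel that actually performs the failover must additionally be authorized by a majority of all Sentinels.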

3. Create two more Redis Docker containers

Copy the configuration file into the two new containers:

docker run -it --name redis-6490 redis
docker run -it --name redis-6491 redis
docker cp ./sentinel.conf dcbd015dbc0e:/data/sentinel.conf
docker cp ./sentinel.conf 7c8307730bcc:/data/sentinel.conf

4. Run the redis-sentinel command

redis-sentinel sentinel.conf

5. Result

Now stop and start Redis containers at will: Sentinel completes the master-slave switchover automatically, with no manual work on replication configuration.

References

  1. Redis Sentinel documentation
  2. File operations on Docker containers
  3. > overwrites a file; >> appends to it

III. Cluster

Steps

1. Update the Redis configuration file

Mainly append the cluster settings; an example configuration:

# Default port is 6379; override it here
port 6390

# Bind address: on a private network you can bind 127.0.0.1 or omit this; 0.0.0.0 accepts connections from anywhere
bind 0.0.0.0

# Run in the foreground (daemonize yes would make the container exit)
daemonize no

# Cluster settings
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes

2. Create 6 containers

Building on Section II and using the updated configuration files, create 6 containers; note the additional port mapping for the cluster bus:

docker run -p 6390:6390 -p 16390:16390 -v D:\develop\shell\docker\redis\conf6390:/usr/local/etc/redis --name redis-conf-6390 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6390 /bin/bash
redis-cli -h 172.17.0.1 -p 6390

docker run -p 6391:6391 -p 16391:16391 -v D:\develop\shell\docker\redis\conf6391:/usr/local/etc/redis --name redis-conf-6391 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6391 /bin/bash
redis-cli -h 172.17.0.1 -p 6391

docker run -p 6392:6392 -p 16392:16392 -v D:\develop\shell\docker\redis\conf6392:/usr/local/etc/redis --name redis-conf-6392 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6392 /bin/bash
redis-cli -h 172.17.0.1 -p 6392

docker run -p 6393:6393 -p 16393:16393 -v D:\develop\shell\docker\redis\conf6393:/usr/local/etc/redis --name redis-conf-6393 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6393 /bin/bash
redis-cli -h 172.17.0.1 -p 6393

docker run -p 6394:6394 -p 16394:16394 -v D:\develop\shell\docker\redis\conf6394:/usr/local/etc/redis --name redis-conf-6394 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6394 /bin/bash
redis-cli -h 172.17.0.1 -p 6394

docker run -p 6395:6395 -p 16395:16395 -v D:\develop\shell\docker\redis\conf6395:/usr/local/etc/redis --name redis-conf-6395 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6395 /bin/bash
redis-cli -h 172.17.0.1 -p 6395

3. Create the cluster directly with one command

> redis-cli --cluster create 172.17.0.1:6390 172.17.0.1:6391 172.17.0.1:6392 172.17.0.1:6393 172.17.0.1:6394 172.17.0.1:6395 --cluster-replicas 1

# Command output:

>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.17.0.1:6394 to 172.17.0.1:6390
Adding replica 172.17.0.1:6395 to 172.17.0.1:6391
Adding replica 172.17.0.1:6393 to 172.17.0.1:6392
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: a9678b062663957e59bc3b4beb7be4366fa24adc 172.17.0.1:6390
   slots:[0-5460] (5461 slots) master
M: 41a4976431713cce936220fba8a230627d28d40c 172.17.0.1:6391
   slots:[5461-10922] (5462 slots) master
M: 1bf83414a12bad8f2e25dcea19ccea1c881d28c5 172.17.0.1:6392
   slots:[10923-16383] (5461 slots) master
S: 3d65eadd3321ef34c9413ae8f75d610c4228eda7 172.17.0.1:6393
   replicates 41a4976431713cce936220fba8a230627d28d40c
S: b604356698a5f211823ada4b45a97939744b1d57 172.17.0.1:6394
   replicates 1bf83414a12bad8f2e25dcea19ccea1c881d28c5
S: 2c1cc93221dc3830aa1eb28601ac27e22a6801cc 172.17.0.1:6395
   replicates a9678b062663957e59bc3b4beb7be4366fa24adc
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 172.17.0.1:6390)
M: a9678b062663957e59bc3b4beb7be4366fa24adc 172.17.0.1:6390
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: b604356698a5f211823ada4b45a97939744b1d57 172.17.0.1:6394
   slots: (0 slots) slave
   replicates 1bf83414a12bad8f2e25dcea19ccea1c881d28c5
M: 41a4976431713cce936220fba8a230627d28d40c 172.17.0.1:6391
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 3d65eadd3321ef34c9413ae8f75d610c4228eda7 172.17.0.1:6393
   slots: (0 slots) slave
   replicates 41a4976431713cce936220fba8a230627d28d40c
M: 1bf83414a12bad8f2e25dcea19ccea1c881d28c5 172.17.0.1:6392
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 2c1cc93221dc3830aa1eb28601ac27e22a6801cc 172.17.0.1:6395
   slots: (0 slots) slave
   replicates a9678b062663957e59bc3b4beb7be4366fa24adc
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

The cluster was created successfully.
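The even slot split shown in the output above (0-5460, 5461-10922, 10923-16383) can be reproduced with a small sketch. This mimics the allocation redis-cli printed for three masters; it is an approximation for illustration, not redis-cli's actual source:

```python
def split_slots(n_masters: int, total_slots: int = 16384):
    """Divide the 16384 hash slots as evenly as possible across masters."""
    ranges, start = [], 0
    for i in range(n_masters):
        # Round the cumulative boundary so leftover slots are spread out
        # rather than all landing on the last master.
        end = round((i + 1) * total_slots / n_masters) - 1
        ranges.append((start, end))
        start = end + 1
    return ranges
```

split_slots(3) yields [(0, 5460), (5461, 10922), (10923, 16383)], matching the ranges assigned to the three masters above.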

Notes

  1. The cluster bus port must be reachable; it defaults to the data port + 10000.
  2. The cluster reset command can remove the current node from the cluster.

References

  1. redis-cluster: installation and status verification
  2. Redis cluster tutorial
  3. Redis Command Reference - cluster tutorial