Redis Cluster

Overview of the steps on each node:

  • Create the conf, logs, and pid directories used by Redis
  • Create the Redis data directories
  • Edit the redis_6380 and redis_6381 configuration files
  • Start the redis_6380 and redis_6381 instances

1. Manual Redis Cluster Deployment

  • Steps on db01
mkdir -p /opt/redis_cluster/redis_{6380,6381}/{conf,logs,pid}
mkdir -p /data/redis_cluster/redis_{6380,6381}
cat >/opt/redis_cluster/redis_6380/conf/redis_6380.conf<<EOF
bind 10.0.0.51
port 6380
daemonize yes
pidfile "/opt/redis_cluster/redis_6380/pid/redis_6380.pid"
logfile "/opt/redis_cluster/redis_6380/logs/redis_6380.log"
dbfilename "redis_6380.rdb"
dir "/data/redis_cluster/redis_6380/"
cluster-enabled yes
cluster-config-file nodes_6380.conf
cluster-node-timeout 15000
EOF
cat >/opt/redis_cluster/redis_6381/conf/redis_6381.conf<<EOF
bind 10.0.0.51
port 6381
daemonize yes
pidfile "/opt/redis_cluster/redis_6381/pid/redis_6381.pid"
logfile "/opt/redis_cluster/redis_6381/logs/redis_6381.log"
dbfilename "redis_6381.rdb"
dir "/data/redis_cluster/redis_6381/"
cluster-enabled yes
cluster-config-file nodes_6381.conf
cluster-node-timeout 15000
EOF
redis-server /opt/redis_cluster/redis_6380/conf/redis_6380.conf
redis-server /opt/redis_cluster/redis_6381/conf/redis_6381.conf
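The same directory and configuration steps repeat on every host, with only the bind address changing, so they can be scripted. A minimal sketch, assuming the same path layout as above (`gen_conf` is a hypothetical helper name; note the data `dir` stays fixed at /data/redis_cluster):

```shell
# Generate one cluster instance's directories and config file.
# $1 = bind address, $2 = port, $3 = base directory (e.g. /opt/redis_cluster)
gen_conf() {
    local ip=$1 port=$2 base=$3
    mkdir -p "$base/redis_$port"/{conf,logs,pid}
    cat > "$base/redis_$port/conf/redis_$port.conf" <<EOF
bind $ip
port $port
daemonize yes
pidfile "$base/redis_$port/pid/redis_$port.pid"
logfile "$base/redis_$port/logs/redis_$port.log"
dbfilename "redis_$port.rdb"
dir "/data/redis_cluster/redis_$port/"
cluster-enabled yes
cluster-config-file nodes_$port.conf
cluster-node-timeout 15000
EOF
}

# On db01 (adjust the bind address per host):
# for port in 6380 6381; do gen_conf 10.0.0.51 "$port" /opt/redis_cluster; done
```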
  • Steps on db02
mkdir -p /opt/redis_cluster/redis_{6380,6381}/{conf,logs,pid}
mkdir -p /data/redis_cluster/redis_{6380,6381}
cat >/opt/redis_cluster/redis_6380/conf/redis_6380.conf<<EOF
bind 10.0.0.52
port 6380
daemonize yes
pidfile "/opt/redis_cluster/redis_6380/pid/redis_6380.pid"
logfile "/opt/redis_cluster/redis_6380/logs/redis_6380.log"
dbfilename "redis_6380.rdb"
dir "/data/redis_cluster/redis_6380/"
cluster-enabled yes
cluster-config-file nodes_6380.conf
cluster-node-timeout 15000
EOF
cat >/opt/redis_cluster/redis_6381/conf/redis_6381.conf<<EOF
bind 10.0.0.52
port 6381
daemonize yes
pidfile "/opt/redis_cluster/redis_6381/pid/redis_6381.pid"
logfile "/opt/redis_cluster/redis_6381/logs/redis_6381.log"
dbfilename "redis_6381.rdb"
dir "/data/redis_cluster/redis_6381/"
cluster-enabled yes
cluster-config-file nodes_6381.conf
cluster-node-timeout 15000
EOF
redis-server /opt/redis_cluster/redis_6380/conf/redis_6380.conf
redis-server /opt/redis_cluster/redis_6381/conf/redis_6381.conf
  • Steps on db03
mkdir -p /opt/redis_cluster/redis_{6380,6381}/{conf,logs,pid}
mkdir -p /data/redis_cluster/redis_{6380,6381}
cat >/opt/redis_cluster/redis_6380/conf/redis_6380.conf<<EOF
bind 10.0.0.53
port 6380
daemonize yes
pidfile "/opt/redis_cluster/redis_6380/pid/redis_6380.pid"
logfile "/opt/redis_cluster/redis_6380/logs/redis_6380.log"
dbfilename "redis_6380.rdb"
dir "/data/redis_cluster/redis_6380/"
cluster-enabled yes
cluster-config-file nodes_6380.conf
cluster-node-timeout 15000
EOF
cat >/opt/redis_cluster/redis_6381/conf/redis_6381.conf<<EOF
bind 10.0.0.53
port 6381
daemonize yes
pidfile "/opt/redis_cluster/redis_6381/pid/redis_6381.pid"
logfile "/opt/redis_cluster/redis_6381/logs/redis_6381.log"
dbfilename "redis_6381.rdb"
dir "/data/redis_cluster/redis_6381/"
cluster-enabled yes
cluster-config-file nodes_6381.conf
cluster-node-timeout 15000
EOF
redis-server /opt/redis_cluster/redis_6380/conf/redis_6380.conf
redis-server /opt/redis_cluster/redis_6381/conf/redis_6381.conf
  • Service check
[root@db01 ~]# netstat -lntup|grep redis
tcp        0      0 10.0.0.51:6380          0.0.0.0:*               LISTEN      32568/redis-server  
tcp        0      0 10.0.0.51:6381          0.0.0.0:*               LISTEN      32564/redis-server  
tcp        0      0 10.0.0.51:16380         0.0.0.0:*               LISTEN      32568/redis-server  
tcp        0      0 10.0.0.51:16381         0.0.0.0:*               LISTEN      32564/redis-server
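Each instance listens on two ports: the client port and the cluster bus port, which is the client port plus 10000 (hence 16380 and 16381 above). A quick sketch to compute the expected listeners:

```shell
# Print each instance's client port and its cluster bus port (client + 10000).
bus_ports() {
    for port in "$@"; do
        echo "client=$port bus=$((port + 10000))"
    done
}

bus_ports 6380 6381
# client=6380 bus=16380
# client=6381 bus=16381
```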

1.1 Redis Cluster Communication Flow

A distributed store needs a mechanism for maintaining node metadata. Metadata here means which data each node is responsible for, whether a node has failed, and other state information. Redis Cluster uses the Gossip protocol: nodes continuously exchange information with each other, and after a while every node knows the complete state of the cluster, much like a rumor spreading.

  • Communication process:
    1) Each node in the cluster opens a dedicated TCP channel for node-to-node communication; the communication port is the base port plus 10000.
    2) At fixed intervals, each node selects some peer nodes according to specific rules and sends them ping messages.
    3) A node that receives a ping responds with a pong. Each node picks its communication peers by certain rules; a node may know all nodes or only some of them, but as long as the nodes can reach each other they will eventually converge on a consistent state. When a node fails, a new node joins, or a master/slave role changes, the continuous ping/pong messages propagate the change, achieving synchronization.
    Message types:
  • Gossip
    The Gossip protocol's job is information exchange, and the carriers of that exchange are the Gossip messages nodes send to one another.
    Common Gossip messages are: ping, pong, meet, fail, etc.
  • meet
    meet message: notifies a new node to join. The sender asks the receiver to join the current cluster; once the meet exchange completes normally, the receiving node joins the cluster and starts exchanging ping/pong messages.

  • ping
    ping message: the most frequently exchanged message in the cluster. Every node sends ping messages to several other nodes each second, to detect whether they are online and to exchange state.
  • pong
    pong message: sent in reply to a ping or meet message, confirming to the sender that communication is normal. A node can also broadcast its own pong to the whole cluster to make every node update its view of that node's state.
  • fail
    fail message: when a node decides that another node in the cluster is down, it broadcasts a fail message to the cluster; nodes that receive it mark the corresponding node as offline.

1.2 Manual Node Discovery

At this point each node in the cluster can only see itself; the nodes have not yet discovered one another.

[root@db01 ~]# sh redis_shell.sh  login 6380
10.0.0.51:6380> cluster nodes
215158ede75cadd1c9a8fccb99278d0da3c5de48 10.0.0.51:6380 myself,master - 0 0 0 connected

The generated cluster configuration files:

[root@db01 ~]# tree /data/redis_cluster/redis_638*
/data/redis_cluster/redis_6380
└── nodes_6380.conf
/data/redis_cluster/redis_6381
└── nodes_6381.conf

The configuration file currently contains only this node's own ID; after CLUSTER MEET is configured, the other nodes' IDs are written in as well, and the IDs shown by `cluster nodes` match the configuration file.

10.0.0.51:6380> cluster nodes
215158ede75cadd1c9a8fccb99278d0da3c5de48 10.0.0.51:6380 myself,master - 0 0 0 connected
[root@db01 ~]# cat /data/redis_cluster/redis_6380/nodes_6380.conf 
215158ede75cadd1c9a8fccb99278d0da3c5de48 10.0.0.51:6380 myself,master - 0 0 0 connected 
vars currentEpoch 0 lastVoteEpoch 0

In cluster mode, Redis keeps an extra cluster configuration file alongside the normal one. When cluster membership changes, e.g. a node is added, a node goes offline, or a failover happens, the node automatically saves the cluster state to this file. Note that Redis maintains this file itself; do not edit it by hand, which prevents inconsistencies when a node restarts.

Discovery test:

[root@db01 ~]# sh redis_shell.sh login 6380
10.0.0.51:6380> CLUSTER MEET 10.0.0.51 6381
OK
10.0.0.51:6380> CLUSTER MEET 10.0.0.52 6380
OK
10.0.0.51:6380> CLUSTER MEET 10.0.0.52 6381
OK
10.0.0.51:6380> CLUSTER MEET 10.0.0.53 6380
OK
10.0.0.51:6380> CLUSTER MEET 10.0.0.53 6381
OK
10.0.0.51:6380> cluster nodes
68af205aad42909db61013ae2d0f9d2ec49cb5b9 10.0.0.52:6380 master - 0 1562242149129 2 connected
215158ede75cadd1c9a8fccb99278d0da3c5de48 10.0.0.51:6380 myself,master - 0 0 1 connected
f5248261ef32638fc11966cdeedff96ab197b812 10.0.0.53:6380 master - 0 1562242147114 0 connected
c847d86eb040a5cbeaeddef225beecba22f401b2 10.0.0.51:6381 master - 0 1562242146109 3 connected
0815edc37378ce8c1bed1b46a460b410f231937d 10.0.0.53:6381 master - 0 1562242148121 5 connected
eb67970ebb0eb3512f09b7be79128bf736eaccb5 10.0.0.52:6381 master - 0 1562242150135 4 connected
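Meeting every node one by one is tedious; all five CLUSTER MEET calls from db01's 6380 can be generated in a loop instead. A sketch, assuming the host list of this lab (`meet_all` is a hypothetical helper; it echoes the commands as a dry run, remove the `echo` to actually issue them):

```shell
# Print the CLUSTER MEET command for every node except the local 10.0.0.51:6380.
# Dry run via echo; remove the echo to execute for real.
meet_all() {
    local ip port
    for ip in 10.0.0.51 10.0.0.52 10.0.0.53; do
        for port in 6380 6381; do
            [ "$ip:$port" = "10.0.0.51:6380" ] && continue   # skip the local node
            echo redis-cli -h 10.0.0.51 -p 6380 cluster meet "$ip" "$port"
        done
    done
}

meet_all
```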

After discovery completes, every node's cluster configuration file contains the information of all the other nodes.

[root@db01 ~]# cat /data/redis_cluster/redis_6380/nodes_6380.conf 
68af205aad42909db61013ae2d0f9d2ec49cb5b9 10.0.0.52:6380 master - 0 1562242117909 2 connected
215158ede75cadd1c9a8fccb99278d0da3c5de48 10.0.0.51:6380 myself,master - 0 0 1 connected
f5248261ef32638fc11966cdeedff96ab197b812 10.0.0.53:6380 master - 0 1562242116499 0 connected
c847d86eb040a5cbeaeddef225beecba22f401b2 10.0.0.51:6381 master - 0 1562242118914 3 connected
0815edc37378ce8c1bed1b46a460b410f231937d 10.0.0.53:6381 master - 0 1562242118010 5 connected
eb67970ebb0eb3512f09b7be79128bf736eaccb5 10.0.0.52:6381 master - 0 1562242119920 4 connected
vars currentEpoch 5 lastVoteEpoch 0

[root@db01 ~]# cat /data/redis_cluster/redis_6381/nodes_6381.conf 
215158ede75cadd1c9a8fccb99278d0da3c5de48 10.0.0.51:6380 master - 0 1562242122540 1 connected
68af205aad42909db61013ae2d0f9d2ec49cb5b9 10.0.0.52:6380 master - 0 1562242118512 2 connected
0815edc37378ce8c1bed1b46a460b410f231937d 10.0.0.53:6381 master - 0 1562242123546 5 connected
eb67970ebb0eb3512f09b7be79128bf736eaccb5 10.0.0.52:6381 master - 0 1562242121533 4 connected
f5248261ef32638fc11966cdeedff96ab197b812 10.0.0.53:6380 master - 0 1562242120524 0 connected
c847d86eb040a5cbeaeddef225beecba22f401b2 10.0.0.51:6381 myself,master - 0 0 3 connected
vars currentEpoch 5 lastVoteEpoch 0

1.3 Manual Slot Assignment

Although the nodes have discovered one another, the cluster is still unusable, because no hash slots have been assigned to the nodes, and the whole cluster only becomes usable once all slots have been assigned.
Put another way: if even a single slot is unassigned, the entire cluster is unavailable.

Observe the error:

[root@db01 ~]# sh redis_shell.sh login 6380
10.0.0.51:6380> set k1 v1
(error) CLUSTERDOWN Hash slot not served
# The cluster is down: the key's hash slot is not being served.

Assigning slots
Although there are 6 nodes, only 3 of them actually accept data writes; the other 3 serve only as replicas of the masters. So slots need to be assigned to only three of the nodes.
How to assign slots:
Slots must be assigned on each master, and there are two ways to run the command:
1. Log in to each master's own client and run the command there
2. From one machine, use the redis client to connect remotely to the masters on the other machines and run the command

redis-cli -h 10.0.0.51 -p 6380 cluster addslots {0..5461}
redis-cli -h 10.0.0.52 -p 6380 cluster addslots {5462..10922}
redis-cli -h 10.0.0.53 -p 6380 cluster addslots {10923..16383}
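The three ranges above split the 16384 slots (0-16383) as evenly as possible across three masters. A sketch that derives such ranges for any number of masters (`slot_ranges` is a hypothetical helper; the remainder slots go to the first masters):

```shell
# Split the 16384 hash slots as evenly as possible across N masters,
# giving the remainder to the first masters, and print each range.
slot_ranges() {
    local n=$1 total=16384 start=0 size end i
    for ((i = 0; i < n; i++)); do
        size=$((total / n))
        [ "$i" -lt $((total % n)) ] && size=$((size + 1))
        end=$((start + size - 1))
        echo "master$((i + 1)): $start-$end"
        start=$((end + 1))
    done
}

slot_ranges 3
# master1: 0-5461
# master2: 5462-10922
# master3: 10923-16383
```

The output matches the three `cluster addslots` ranges used above.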

Check the cluster nodes; the slots are now assigned:

10.0.0.51:6380> cluster nodes
68af205aad42909db61013ae2d0f9d2ec49cb5b9 10.0.0.52:6380 master - 0 1562243748164 2 connected 5462-10922
215158ede75cadd1c9a8fccb99278d0da3c5de48 10.0.0.51:6380 myself,master - 0 0 1 connected 0-5461
f5248261ef32638fc11966cdeedff96ab197b812 10.0.0.53:6380 master - 0 1562243745141 0 connected 10923-16383
c847d86eb040a5cbeaeddef225beecba22f401b2 10.0.0.51:6381 master - 0 1562243742121 3 connected
0815edc37378ce8c1bed1b46a460b410f231937d 10.0.0.53:6381 master - 0 1562243749173 5 connected
eb67970ebb0eb3512f09b7be79128bf736eaccb5 10.0.0.52:6381 master - 0 1562243746147 4 connected

Check the cluster info; the state is ok:

10.0.0.51:6380> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:1
cluster_stats_messages_sent:3478
cluster_stats_messages_received:3478

1.4 Manual High-Availability Setup

The cluster is usable now, but if any one machine dies, the whole cluster becomes unavailable.
So the other three nodes should each serve as a replica of one of the three masters, so that when a master fails the cluster can fail over automatically and stay available.

Note:
1. Do not let a replica replicate the master on its own machine; if that machine dies, the cluster is still unavailable. Each replica must replicate a master on a different server.
2. When the redis-trib tool assigns replicas automatically, a replica can end up on the same machine as its master; watch out for this.

Paste the `cluster nodes` output into a file; the replication relationships should be:
db01:6381 --> db02:6380
db02:6381 --> db03:6380
db03:6381 --> db01:6380
List the node IDs to make it easy to verify that the replication relationships are correct:

[root@db01 ~]# redis-cli -c -h db01 -p 6381 cluster nodes|grep -v "6381"|awk '{print $1,$2}'
215158ede75cadd1c9a8fccb99278d0da3c5de48 10.0.0.51:6380
68af205aad42909db61013ae2d0f9d2ec49cb5b9 10.0.0.52:6380
f5248261ef32638fc11966cdeedff96ab197b812 10.0.0.53:6380

[root@db01 ~]# redis-cli -c -h db01 -p 6381 cluster nodes|grep -v "6380"|awk '{print $1,$2}'
0815edc37378ce8c1bed1b46a460b410f231937d 10.0.0.53:6381
eb67970ebb0eb3512f09b7be79128bf736eaccb5 10.0.0.52:6381
c847d86eb040a5cbeaeddef225beecba22f401b2 10.0.0.51:6381

Set up the master/slave relationships:
redis-cli -c -h db01 -p 6381 cluster replicate 68af205aad42909db61013ae2d0f9d2ec49cb5b9
redis-cli -c -h db02 -p 6381 cluster replicate f5248261ef32638fc11966cdeedff96ab197b812
redis-cli -c -h db03 -p 6381 cluster replicate 215158ede75cadd1c9a8fccb99278d0da3c5de48
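After running the replicate commands, the pairing can be read straight off the `cluster nodes` output: slave lines carry their master's node ID in the fourth field. A sketch that turns that output into readable slave -> master pairs (`pairs` is a hypothetical helper; the sample below uses abbreviated, made-up node IDs for illustration):

```shell
# Turn `cluster nodes` output (on stdin) into readable slave -> master pairs.
# Field 1 = node id, 2 = address, 3 = flags, 4 = master id ("-" for masters).
pairs() {
    awk '$3 ~ /master/ { addr[$1] = $2 }
         $3 ~ /slave/  { slaves[$2] = $4 }
         END { for (s in slaves) print s, "->", addr[slaves[s]] }' | sort
}

# Canned sample with abbreviated, hypothetical node IDs; in practice:
#   redis-cli -c -h db01 -p 6381 cluster nodes | pairs
pairs <<'EOF'
aaa 10.0.0.51:6380 myself,master - 0 0 1 connected 0-5461
bbb 10.0.0.52:6380 master - 0 0 2 connected 5462-10922
ccc 10.0.0.53:6380 master - 0 0 0 connected 10923-16383
ddd 10.0.0.51:6381 slave bbb 0 0 3 connected
eee 10.0.0.52:6381 slave ccc 0 0 4 connected
fff 10.0.0.53:6381 slave aaa 0 0 5 connected
EOF
# 10.0.0.51:6381 -> 10.0.0.52:6380
# 10.0.0.52:6381 -> 10.0.0.53:6380
# 10.0.0.53:6381 -> 10.0.0.51:6380
```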

1.5 Testing the Redis Cluster

Let's write data into the cluster the way we normally insert data into Redis and see what happens.

[root@db01 ~]# redis-cli -h db01 -p 6380 set k1 v1
(error) MOVED 12706 10.0.0.53:6380

The result is an error, but it includes the address of another cluster node; note that this command does not write the data to .53 either.

With a cluster the data is sharded, so a write issued on one machine does not necessarily land on that machine's node. Cluster reads and writes therefore involve another concept: ASK routing.

1.6 Redis Cluster ASK Routing

In cluster mode, whenever Redis receives a key-related command it first computes the key's hash slot, then finds the node responsible for that slot.
If that node is itself, it handles the command;
otherwise it replies with a MOVED redirection error, telling the client which node to ask. This process is called MOVED redirection.
With the -c flag, redis-cli follows the redirection to the correct node automatically.

[root@db01 ~]# redis-cli -h db01 -c -p 6380 
db01:6380> set k1 v1
-> Redirected to slot [12706] located at 10.0.0.53:6380
OK

Insert a batch of keys; the data should spread quite evenly across the nodes, and a difference of no more than about 2% between nodes is normal.

for i in {0..1000};do redis-cli -c -h db01 -p 6380 set 58NB_${i} 58V5_${i};done

[root@db01 ~]# redis-cli -h db01 -c -p 6380  DBSIZE
(integer) 670
[root@db01 ~]# redis-cli -h db02 -c -p 6380  DBSIZE
(integer) 660
[root@db01 ~]# redis-cli -h db03 -c -p 6380  DBSIZE
(integer) 672
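The 2% evenness claim can be checked with integer arithmetic on the DBSIZE values above. A minimal sketch (`spread_permille` is a hypothetical helper; it works in permille, tenths of a percent, to avoid floating point, so the 2% threshold corresponds to 20):

```shell
# Spread between the largest and smallest per-node key counts,
# in permille (tenths of a percent) of the average count.
spread_permille() {
    local min max sum=0 n=0 c
    for c in "$@"; do
        sum=$((sum + c)); n=$((n + 1))
        [ -z "$min" ] || [ "$c" -lt "$min" ] && min=$c
        [ -z "$max" ] || [ "$c" -gt "$max" ] && max=$c
    done
    echo $(( (max - min) * 1000 / (sum / n) ))
}

spread_permille 670 660 672   # prints 17, i.e. about 1.7%: within the threshold
```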

The requested keys are served from slots on different servers:

10.0.0.51:6380> get 58NB_1000
"58V5_1000"
10.0.0.51:6380> get 58NB_999
-> Redirected to slot [6540] located at 10.0.0.52:6380
"58V5_999"
10.0.0.52:6380> get 58NB_699
-> Redirected to slot [13757] located at 10.0.0.53:6380
"58V5_699"

1.7 Simulating a Failover

Now let's simulate a failure: stop the Redis node on one of the machines, then watch how the cluster changes.
We brutally kill the cluster node on db02 and observe the node states.
Ideally, the 6381 replica on db01 should be promoted to master.

[root@db02 ~]# ps -ef | grep redis
root       8257      1  0 19:55 ?        00:00:07 redis-server 10.0.0.52:6380 [cluster]
root       8261      1  0 19:55 ?        00:00:07 redis-server 10.0.0.52:6381 [cluster]
root       8576   8526  0 22:49 pts/0    00:00:00 grep --color=auto redis
[root@db02 ~]# kill 8257

Check the node information after the switch (the ID column is removed for readability).
You can see that db01's 6381 node has become a master.

[root@db01 ~]# redis-cli -h db01 -c -p 6380 cluster nodes
10.0.0.52:6380 master,fail - 1562251836868 1562251833945 2 disconnected
10.0.0.51:6380 myself,master - 0 0 1 connected 0-5461
10.0.0.53:6380 master - 0 1562251999666 0 connected 10923-16383
10.0.0.51:6381 master - 0 1562251998652 6 connected 5462-10922
10.0.0.53:6381 slave 215158ede75cadd1c9a8fccb99278d0da3c5de48 0 1562251999159 5 connected
10.0.0.52:6381 slave f5248261ef32638fc11966cdeedff96ab197b812 0 1562252000675 4 connected

We have now tested failover, but the repaired node still needs to be brought back online,
so let's test what happens when it rejoins.
Restart 6380 on db02, then watch the log.

[root@db02 ~]# redis-server /opt/redis_cluster/redis_6380/conf/redis_6380.conf 
[root@db02 ~]# ps -ef | grep redis
root       8261      1  0 19:55 ?        00:00:08 redis-server 10.0.0.52:6381 [cluster]
root       8619      1  0 22:59 ?        00:00:00 redis-server 10.0.0.52:6380 [cluster]

You can see that after db02's 6380 came back online, it synchronized from 6381 on .51:

[root@db02 ~]# sh redis_shell.sh tail 6380
8619:S 04 Jul 22:59:54.217 * Connecting to MASTER 10.0.0.51:6381
8619:S 04 Jul 22:59:54.217 * MASTER <-> SLAVE sync started
8619:S 04 Jul 22:59:54.217 * Non blocking connect for SYNC fired the event.
8619:S 04 Jul 22:59:54.218 * Master replied to PING, replication can continue...
8619:S 04 Jul 22:59:54.218 * Partial resynchronization not possible (no cached master)
8619:S 04 Jul 22:59:54.223 * Full resync from master: c75849d817eac34104812970ea0d80ef06448e91:1
8619:S 04 Jul 22:59:54.303 * MASTER <-> SLAVE sync: receiving 10524 bytes from master
8619:S 04 Jul 22:59:54.303 * MASTER <-> SLAVE sync: Flushing old data
8619:S 04 Jul 22:59:54.303 * MASTER <-> SLAVE sync: Loading DB in memory
8619:S 04 Jul 22:59:54.304 * MASTER <-> SLAVE sync: Finished with success

Now look at db01's log:
the FAIL state of db02's 6380 is cleared, and slave 10.0.0.52:6380 synchronizes successfully.

[root@db01 ~]# sh redis_shell.sh tail 6381
32564:M 04 Jul 22:50:53.868 # Cluster state changed: ok
32564:M 04 Jul 22:59:53.294 * Clear FAIL state for node 68af205aad42909db61013ae2d0f9d2ec49cb5b9: master without slots is reachable again.
32564:M 04 Jul 22:59:54.217 * Slave 10.0.0.52:6380 asks for synchronization
32564:M 04 Jul 22:59:54.217 * Full resync requested by slave 10.0.0.52:6380
32564:M 04 Jul 22:59:54.217 * Starting BGSAVE for SYNC with target: disk
32564:M 04 Jul 22:59:54.219 * Background saving started by pid 37006
37006:C 04 Jul 22:59:54.230 * DB saved on disk
37006:C 04 Jul 22:59:54.230 * RDB: 6 MB of memory used by copy-on-write
32564:M 04 Jul 22:59:54.300 * Background saving terminated with success
32564:M 04 Jul 22:59:54.301 * Synchronization with slave 10.0.0.52:6380 succeeded

If we now want the repaired node to become a master again, we can run the CLUSTER FAILOVER command on the replica that should be promoted.
Here we run it on db02's 6380.

[root@db02 ~]# sh redis_shell.sh login 6380
10.0.0.52:6380> CLUSTER FAILOVER
OK

[root@db02 ~]# sh redis_shell.sh tail 6380
8619:M 04 Jul 23:10:31.899 * Caching the disconnected master state.
8619:M 04 Jul 23:10:31.899 * Discarding previously cached master state.
8619:M 04 Jul 23:10:32.404 * Slave 10.0.0.51:6381 asks for synchronization
8619:M 04 Jul 23:10:32.404 * Full resync requested by slave 10.0.0.51:6381
8619:M 04 Jul 23:10:32.404 * Starting BGSAVE for SYNC with target: disk
8619:M 04 Jul 23:10:32.405 * Background saving started by pid 8686
8686:C 04 Jul 23:10:32.410 * DB saved on disk
8686:C 04 Jul 23:10:32.411 * RDB: 6 MB of memory used by copy-on-write
8619:M 04 Jul 23:10:32.499 * Background saving terminated with success
8619:M 04 Jul 23:10:32.500 * Synchronization with slave 10.0.0.51:6381 succeeded

The log shows that 10.0.0.51:6381 has become a replica of 10.0.0.52:6380.


2. Deploying Redis Cluster with the redis-trib Tool

Building the cluster by hand helps you understand the creation flow and its details, but it takes many steps; with a large number of nodes, the complexity and operational cost of a manual build grow quickly, so the official redis-trib.rb tool is provided to build a cluster quickly.
redis-trib.rb is a Redis cluster management tool implemented in Ruby. Internally it uses the Cluster commands to simplify common operations such as cluster creation, checking, slot migration, and rebalancing. The Ruby environment must be installed before using it.

Installation commands:

yum makecache fast
yum install rubygems
gem sources --remove https://rubygems.org/
gem sources -a http://mirrors.aliyun.com/rubygems/
gem update --system
gem install redis -v 3.3.5

We can stop all the nodes and wipe their data to get back to a completely fresh cluster. Run on every node server:

pkill redis
rm -rf /data/redis_cluster/redis_6380/*
rm -rf /data/redis_cluster/redis_6381/*

Once everything is cleared, start all the nodes again. Run on every node server:

sh redis_shell.sh start 6380
sh redis_shell.sh start 6381

Run the cluster creation command on db01:

cd /opt/redis_cluster/redis/src/
./redis-trib.rb create --replicas 1 10.0.0.51:6380 10.0.0.52:6380 10.0.0.53:6380 10.0.0.51:6381 10.0.0.52:6381 10.0.0.53:6381

Check cluster integrity and slot coverage.
The tool has a bug: one replica always ends up paired with the master on its own machine, so the replication relationships still need to be adjusted.

[root@db01 /opt/redis_cluster/redis/src]# ./redis-trib.rb check 10.0.0.51:6380
>>> Performing Cluster Check (using node 10.0.0.51:6380)
M: ac14a416ef65d4d03fb4ad528ecbd7271296ba3a 10.0.0.51:6380
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 37e728a2d12aedc1e5b7732d88d2aed9f684fd73 10.0.0.51:6381
   slots: (0 slots) slave
   replicates 876e7ced4441cda59aa19d51051af6459a5c90d4
M: c2349ca206f3747c140a83cfef10e78845bed2b3 10.0.0.53:6380
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 876e7ced4441cda59aa19d51051af6459a5c90d4 10.0.0.52:6380
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: aa10948d4289aa3eabaf661ba5dc7459eac37adf 10.0.0.53:6381
   slots: (0 slots) slave
   replicates c2349ca206f3747c140a83cfef10e78845bed2b3
S: 3ca828a23de48c997ce3d6515bde225016c57b68 10.0.0.52:6381
   slots: (0 slots) slave
   replicates ac14a416ef65d4d03fb4ad528ecbd7271296ba3a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Check the slot balance:
[root@db01 /opt/redis_cluster/redis/src]# ./redis-trib.rb rebalance 10.0.0.51:6380
>>> Performing Cluster Check (using node 10.0.0.51:6380)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
*** No rebalancing needed! All nodes are within the 2.0% threshold.

We find that .53's 6381 replicates .53's 6380 (same machine):

S: aa10948d4289aa3eabaf661ba5dc7459eac37adf 10.0.0.53:6381
   slots: (0 slots) slave
   replicates c2349ca206f3747c140a83cfef10e78845bed2b3
M: c2349ca206f3747c140a83cfef10e78845bed2b3 10.0.0.53:6380

Current replication relationships:
10.0.0.51:6381-->10.0.0.52:6380 876e7ced4441cda59aa19d51051af6459a5c90d4
10.0.0.52:6381-->10.0.0.51:6380 ac14a416ef65d4d03fb4ad528ecbd7271296ba3a
10.0.0.53:6381-->10.0.0.53:6380 c2349ca206f3747c140a83cfef10e78845bed2b3

The relationship 10.0.0.51:6381 --> 10.0.0.52:6380 is already correct and needs no change; fix the other two:

redis-cli -c -h db02 -p 6381 cluster replicate c2349ca206f3747c140a83cfef10e78845bed2b3
redis-cli -c -h db03 -p 6381 cluster replicate ac14a416ef65d4d03fb4ad528ecbd7271296ba3a

Replication layout after the fix:

10.0.0.51:6381-->10.0.0.52:6380 876e7ced4441cda59aa19d51051af6459a5c90d4
10.0.0.52:6381-->10.0.0.53:6380 c2349ca206f3747c140a83cfef10e78845bed2b3
10.0.0.53:6381-->10.0.0.51:6380 ac14a416ef65d4d03fb4ad528ecbd7271296ba3a

Comparing with the actual output, everything matches:

[root@db01 ~]# redis-cli -c -h db03 -p 6381 cluster nodes|awk '$3~/slave/{print $2,$3,$4}'
10.0.0.51:6381 slave 876e7ced4441cda59aa19d51051af6459a5c90d4
10.0.0.52:6381 slave c2349ca206f3747c140a83cfef10e78845bed2b3
10.0.0.53:6381 myself,slave ac14a416ef65d4d03fb4ad528ecbd7271296ba3a

The cluster deployment is now complete.
