This article is a hands-on summary of learning Redis Cluster (based on Redis 6.0+). It walks through building a Redis cluster environment step by step, then practices scaling the cluster.
Redis Cluster is Redis's distributed database solution: data is shared across nodes through sharding, with replication and failover provided on top. Compared with master-replica replication and Sentinel, Redis Cluster is a fairly complete high-availability solution, lifting the single-machine limit on storage capacity and allowing writes to be load-balanced.
For convenience, all nodes of the cluster here run on the same server and are distinguished by port number: 6 nodes in total, 3 masters plus 3 replicas. This article is based on the latest Redis 6.0+, with the usual tools redis-server and redis-cli built directly from the latest source on GitHub. Note that since Redis 5.0, the cluster management tool redis-trib.rb has been integrated into the redis-cli client (see cluster-tutorial for details).
This section builds the cluster without the redis-trib.rb shortcut, following the standard procedure step by step in order to get familiar with the basics of cluster management. The cluster scaling section later uses redis-trib.rb to reshard the cluster.
Building the cluster takes four steps: starting the nodes, node handshake, slot assignment, and master-replica replication.
Starting the nodes
Each node initially starts as an ordinary master server; the only difference is that it is started in cluster mode. The configuration file needs a few changes; taking the node on port 6379 as an example, the main ones are:
```
# redis_6379_cluster.conf
port 6379
cluster-enabled yes
cluster-config-file "node-6379.conf"
logfile "redis-server-6379.log"
dbfilename "dump-6379.rdb"
daemonize yes
```
The cluster-config-file parameter specifies the location of the cluster configuration file. Every node maintains this file while running: whenever the cluster state changes (e.g. nodes are added or removed), every node in the cluster writes the latest information to its file, and when a node restarts it re-reads the file to recover the cluster state and conveniently rejoin the cluster. In other words, when a Redis node starts in cluster mode, it first looks for a cluster configuration file; if one exists, the node starts from it, otherwise the node initializes the configuration and saves it to the file. The cluster configuration file is maintained by the Redis node itself and must not be edited by hand.
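The other five nodes differ only in the port-derived names. As a convenience, here is a minimal sketch that generates all six files from the template above (the redis_xxxx_cluster.conf naming simply follows the pattern used below; adjust paths to taste):

```bash
# Generate the six per-node config files from the template above.
for port in 6379 6380 6381 6479 6480 6481; do
cat > "redis_${port}_cluster.conf" <<EOF
port ${port}
cluster-enabled yes
cluster-config-file "node-${port}.conf"
logfile "redis-server-${port}.log"
dbfilename "dump-${port}.rdb"
daemonize yes
EOF
done
```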
With the configuration files for all 6 nodes in place, start the 6 servers with redis-server redis_xxxx_cluster.conf (xxxx being the port number of the matching configuration file). Check the processes with ps:
```
$ ps -aux | grep redis
... 800  0.1  0.0  49584  2444 ?  Ssl  20:42  0:00 redis-server 127.0.0.1:6379 [cluster]
... 805  0.1  0.0  49584  2440 ?  Ssl  20:42  0:00 redis-server 127.0.0.1:6380 [cluster]
... 812  0.3  0.0  49584  2436 ?  Ssl  20:42  0:00 redis-server 127.0.0.1:6381 [cluster]
... 817  0.1  0.0  49584  2432 ?  Ssl  20:43  0:00 redis-server 127.0.0.1:6479 [cluster]
... 822  0.0  0.0  49584  2380 ?  Ssl  20:43  0:00 redis-server 127.0.0.1:6480 [cluster]
... 827  0.5  0.0  49584  2380 ?  Ssl  20:43  0:00 redis-server 127.0.0.1:6481 [cluster]
```
Node handshake
After being started, the nodes are still independent of one another: each sits in a cluster that contains only itself. Taking the server on port 6379 as an example, CLUSTER NODES lists the nodes in its current cluster:
```
127.0.0.1:6379> CLUSTER NODES
37784b3605ad216fa93e976979c43def42bf763d :6379@16379 myself,master - 0 0 0 connected 449 4576 5798 7568 8455 12706
```
To link the independent nodes into a single cluster containing them all, use the CLUSTER MEET command:
```
$ redis-cli -p 6379 -c      # -c runs redis-cli in cluster mode
127.0.0.1:6379> CLUSTER MEET 127.0.0.1 6380
OK
127.0.0.1:6379> CLUSTER MEET 127.0.0.1 6381
OK
127.0.0.1:6379> CLUSTER MEET 127.0.0.1 6479
OK
127.0.0.1:6379> CLUSTER MEET 127.0.0.1 6480
OK
127.0.0.1:6379> CLUSTER MEET 127.0.0.1 6481
OK
```
Checking the nodes in the cluster again:
```
127.0.0.1:6379> CLUSTER NODES
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603632309283 4 connected
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,master - 0 1603632308000 1 connected
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603632310292 2 connected
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 master - 0 1603632309000 5 connected
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603632308000 3 connected
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 master - 0 1603632311302 0 connected
```
All 6 nodes have now joined the cluster as masters. The fields of the CLUSTER NODES output are:
```
<id> <ip:port@cport> <flags> <master> <ping-sent> <pong-recv> <config-epoch> <link-state> <slot> <slot> ... <slot>
```
A detailed explanation of the remaining fields can be found in the official cluster nodes documentation.
Slot assignment
Redis Cluster stores the database's key-value pairs by sharding: the whole database is divided into 16384 slots, every key belongs to exactly one of these 16384 slots, and each node in the cluster may serve anywhere from 0 up to all 16384 slots.
Slots are the basic unit of data management and migration. When all 16384 slots are assigned to nodes, the cluster is online (ok); if even a single slot is unassigned, the cluster is offline (fail).
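For reference, the slot of a key is computed as CRC16(key) mod 16384, and CLUSTER KEYSLOT performs that computation server-side; keys sharing a hash tag (the substring between { and }) always map to the same slot, which is what makes multi-key operations possible in a cluster. A quick illustration (the key names here are arbitrary examples):

```bash
# Slot of a key: CRC16(key) mod 16384, computed by the server.
redis-cli -p 6379 CLUSTER KEYSLOT somekey
# Keys sharing a hash tag (the part between { and }) land in the same slot.
redis-cli -p 6379 CLUSTER KEYSLOT {user1000}.name
redis-cli -p 6379 CLUSTER KEYSLOT {user1000}.age
```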
Note that only masters can serve slots. If the slot assignment step were performed after replication was set up and slots were assigned to replica nodes, the cluster would not work (it would remain offline).
Assign the slots to the three masters with the CLUSTER ADDSLOTS command:
```bash
redis-cli -p 6379 cluster addslots {0..5000}
redis-cli -p 6380 cluster addslots {5001..10000}
redis-cli -p 6381 cluster addslots {10001..16383}
```
After the slot assignment, the cluster's nodes look like this:
```
127.0.0.1:6379> CLUSTER NODES
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603632880310 4 connected 5001-10000
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,master - 0 1603632879000 1 connected 0-5000
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603632879000 2 connected 10001-16383
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 master - 0 1603632878000 5 connected
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603632880000 3 connected
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 master - 0 1603632881317 0 connected

127.0.0.1:6379> CLUSTER INFO
cluster_state:ok          # the cluster is online
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:1
cluster_stats_messages_ping_sent:4763
cluster_stats_messages_pong_sent:4939
cluster_stats_messages_meet_sent:5
cluster_stats_messages_sent:9707
cluster_stats_messages_ping_received:4939
cluster_stats_messages_pong_received:4768
cluster_stats_messages_received:9707
```
Master-replica replication
After the steps above, every node in the cluster is a master, which still does not make Redis highly available; only once master-replica replication is configured is the cluster's high availability actually in place.
CLUSTER REPLICATE <node_id> makes the node that receives the command a replica of the node identified by node_id and starts replicating that master.
```
redis-cli -p 6479 cluster replicate 87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52
redis-cli -p 6480 cluster replicate c47598b25205cc88abe2e5094d5bfd9ea202335f
redis-cli -p 6481 cluster replicate 51081a64ddb3ccf5432c435a8cf20d45ab795dd8

127.0.0.1:6379> CLUSTER NODES
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603633105211 4 connected 5001-10000
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,master - 0 1603633105000 1 connected 0-5000
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603633105000 2 connected 10001-16383
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 slave 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 0 1603633107229 5 connected
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 slave 87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 0 1603633106221 3 connected
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 slave c47598b25205cc88abe2e5094d5bfd9ea202335f 0 1603633104000 4 connected
```
As an aside, the handshake, slot-assignment, and replication steps above can all be performed in one go with the redis-trib.rb tool; since Redis 5.0 this is done directly with redis-cli, for example:
```bash
redis-cli --cluster create 127.0.0.1:6379 127.0.0.1:6479 127.0.0.1:6380 127.0.0.1:6480 127.0.0.1:6381 127.0.0.1:6481 --cluster-replicas 1
```

Here --cluster-replicas 1 indicates that the given list of nodes is composed of master+replica pairs.
Executing commands in the cluster
The cluster is now online, and clients can send commands to its nodes. A node receiving a command works out which slot the command's key belongs to and checks whether that slot is assigned to itself.
Here, CLUSTER KEYSLOT shows that the key name hashes to slot 5798 (assigned to node 6380), so operations on this key are redirected to that node. Operating on the key fruits behaves similarly.
```
127.0.0.1:6379> CLUSTER KEYSLOT name
(integer) 5798
127.0.0.1:6379> set name huey
-> Redirected to slot [5798] located at 127.0.0.1:6380
OK
127.0.0.1:6380>

127.0.0.1:6379> get fruits
-> Redirected to slot [14943] located at 127.0.0.1:6381
"apple"
127.0.0.1:6381>
```
Note that when a command is sent through the client to a replica node, the command is redirected to the corresponding master.
```
127.0.0.1:6480> KEYS *
1) "name"
127.0.0.1:6480> get name
-> Redirected to slot [5798] located at 127.0.0.1:6380
"huey"
```
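For comparison, without the -c option redis-cli does not follow redirections; the server replies with a raw MOVED error naming the slot and the node that owns it. A sketch, assuming the slot layout above:

```
$ redis-cli -p 6379 get name
(error) MOVED 5798 127.0.0.1:6380
```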
Failover
When a master in the cluster goes offline, the replicas of that master elect one of themselves as the new master and complete the failover. As with a plain master-replica setup, when the former master comes back online it remains in the cluster as a replica of the new master.
The following simulates node 6379 going down (by SHUTDOWN); its replica 6479 can be seen taking over as the new master.
```
462:S 26 Oct 14:08:12.750 * FAIL message received from c47598b25205cc88abe2e5094d5bfd9ea202335f about 87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52
462:S 26 Oct 14:08:12.751 # Cluster state changed: fail
462:S 26 Oct 14:08:12.829 # Start of election delayed for 595 milliseconds (rank #0, offset 9160).
462:S 26 Oct 14:08:13.434 # Starting a failover election for epoch 6.
462:S 26 Oct 14:08:13.446 # Failover election won: I'm the new master.
462:S 26 Oct 14:08:13.447 # configEpoch set to 6 after successful failover
462:M 26 Oct 14:08:13.447 # Setting secondary replication ID to d357886e00341b57bf17e46b6d9f8cf53b7fad21, valid up to offset: 9161. New replication ID is adbf41b16075ea22b17f145186c53c4499864d5b
462:M 26 Oct 14:08:13.447 * Discarding previously cached master state.
462:M 26 Oct 14:08:13.448 # Cluster state changed: ok
```
After recovering from the outage, node 6379 remains in the cluster as a replica of node 6479 (its former replica, now promoted).
```
127.0.0.1:6379> CLUSTER NODES
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603692968000 2 connected 10001-16383
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603692968504 0 connected 5001-10000
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603692967495 6 connected 0-5000
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,slave 4c23b25bd4bcef7f4b77d8287e330ae72e738883 0 1603692964000 1 connected
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 slave 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 0 1603692967000 4 connected
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 slave c47598b25205cc88abe2e5094d5bfd9ea202335f 0 1603692967000 5 connected
```
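If you would rather have 6379 serve as a master again, a coordinated role swap can be triggered with the CLUSTER FAILOVER command, which is run on the replica. This step is optional and not part of the original walkthrough:

```bash
# Run on the replica (6379) to swap roles with its current master (6479).
redis-cli -p 6379 CLUSTER FAILOVER
```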
As mentioned earlier, cluster-config-file records the state of the cluster's nodes. Opening node 6379's file node-6379.conf shows that everything reported by CLUSTER NODES is saved in the file:
```
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603694920206 2 connected 10001-16383
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603694916000 0 connected 5001-10000
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603694920000 6 connected 0-5000
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,slave 4c23b25bd4bcef7f4b77d8287e330ae72e738883 0 1603694918000 1 connected
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 slave 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 0 1603694919000 4 connected
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 slave c47598b25205cc88abe2e5094d5bfd9ea202335f 0 1603694919200 5 connected
vars currentEpoch 6 lastVoteEpoch 0
```
Cluster scaling
The key to scaling a cluster is resharding it, i.e. migrating slots between nodes. This section practices slot migration by adding nodes to the cluster and removing them again.
Slot management is done with the redis-trib.rb functionality integrated into redis-cli; the tool's help menu is as follows:
```
$ redis-cli --cluster help
Cluster Manager Commands:
  create         host1:port1 ... hostN:portN
                 --cluster-replicas <arg>
  check          host:port
                 --cluster-search-multiple-owners
  info           host:port
  fix            host:port
                 --cluster-search-multiple-owners
                 --cluster-fix-with-unreachable-masters
  reshard        host:port
                 --cluster-from <arg>
                 --cluster-to <arg>
                 --cluster-slots <arg>
                 --cluster-yes
                 --cluster-timeout <arg>
                 --cluster-pipeline <arg>
                 --cluster-replace
  rebalance      host:port
                 --cluster-weight <node1=w1...nodeN=wN>
                 --cluster-use-empty-masters
                 --cluster-timeout <arg>
                 --cluster-simulate
                 --cluster-pipeline <arg>
                 --cluster-threshold <arg>
                 --cluster-replace
  add-node       new_host:new_port existing_host:existing_port
                 --cluster-slave
                 --cluster-master-id <arg>
  del-node       host:port node_id
  call           host:port command arg arg .. arg
  set-timeout    host:port milliseconds
  import         host:port
                 --cluster-from <arg>
                 --cluster-copy
                 --cluster-replace
  backup         host:port backup_directory
  help

For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.
```
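As the help text shows, reshard can also run non-interactively. A sketch using the --cluster-from/--cluster-to flags (the angle-bracket node IDs are placeholders, not values from this walkthrough):

```bash
# Move 4096 slots between masters without answering interactive prompts.
# <source-node-id> and <target-node-id> stand for the 40-character node IDs.
redis-cli --cluster reshard 127.0.0.1:6379 \
  --cluster-from <source-node-id> \
  --cluster-to <target-node-id> \
  --cluster-slots 4096 \
  --cluster-yes
```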
Consider adding two nodes to the cluster, on ports 6382 and 6482, with node 6482 replicating 6382.
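Both new nodes must already be running in cluster mode before they can be added; assuming configuration files following the same pattern as earlier:

```bash
# Start the two new nodes (configs mirror redis_6379_cluster.conf, ports changed).
redis-server redis_6382_cluster.conf
redis-server redis_6482_cluster.conf
```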
```
redis-cli --cluster add-node 127.0.0.1:6382 127.0.0.1:6379
redis-cli --cluster add-node 127.0.0.1:6482 127.0.0.1:6379

$ redis-cli --cluster add-node 127.0.0.1:6382 127.0.0.1:6379
>>> Adding node 127.0.0.1:6382 to cluster 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
S: 87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379
   slots: (0 slots) slave
   replicates 4c23b25bd4bcef7f4b77d8287e330ae72e738883
M: 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381
   slots:[10001-16383] (6383 slots) master
   1 additional replica(s)
M: c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380
   slots:[5001-10000] (5000 slots) master
   1 additional replica(s)
M: 4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479
   slots:[0-5000] (5001 slots) master
   1 additional replica(s)
S: 9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481
   slots: (0 slots) slave
   replicates 51081a64ddb3ccf5432c435a8cf20d45ab795dd8
S: 32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480
   slots: (0 slots) slave
   replicates c47598b25205cc88abe2e5094d5bfd9ea202335f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:6382 to make it join the cluster.
[OK] New node added correctly.
```
With both nodes in the cluster, reshard 4096 slots to the new master 6382:

```
$ redis-cli --cluster reshard 127.0.0.1 6479
>>> Performing Cluster Check (using node 127.0.0.1:6479)
M: 4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479
   slots:[0-5000] (5001 slots) master
   1 additional replica(s)
S: 32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480
   slots: (0 slots) slave
   replicates c47598b25205cc88abe2e5094d5bfd9ea202335f
M: 706f399b248ed3a080cf1d4e43047a79331b714f 127.0.0.1:6482
   slots: (0 slots) master
M: af81109fc29f69f9184ce9512c46df476fe693a3 127.0.0.1:6382
   slots: (0 slots) master
M: 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381
   slots:[10001-16383] (6383 slots) master
   1 additional replica(s)
S: 9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481
   slots: (0 slots) slave
   replicates 51081a64ddb3ccf5432c435a8cf20d45ab795dd8
S: 87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379
   slots: (0 slots) slave
   replicates 4c23b25bd4bcef7f4b77d8287e330ae72e738883
M: c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380
   slots:[5001-10000] (5000 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID?
```
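The transcript breaks off at the receiving-node prompt. Judging from the slot layout that results below (6382 ends up owning slots taken from all three masters), the remaining prompts were presumably answered with 6382's node ID as the receiver and all as the source, roughly:

```
What is the receiving node ID? af81109fc29f69f9184ce9512c46df476fe693a3
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all
```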
Then make node 6482 a replica of 6382 and check the resulting layout:

```
redis-cli -p 6482 cluster replicate af81109fc29f69f9184ce9512c46df476fe693a3

127.0.0.1:6482> CLUSTER NODES
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 slave c47598b25205cc88abe2e5094d5bfd9ea202335f 0 1603694930000 0 connected
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603694931000 2 connected 11597-16383
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 slave 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 0 1603694932000 2 connected
706f399b248ed3a080cf1d4e43047a79331b714f 127.0.0.1:6482@16482 myself,slave af81109fc29f69f9184ce9512c46df476fe693a3 0 1603694932000 8 connected
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 slave 4c23b25bd4bcef7f4b77d8287e330ae72e738883 0 1603694932000 6 connected
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603694933678 0 connected 6251-10000
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603694932669 6 connected 1250-5000
af81109fc29f69f9184ce9512c46df476fe693a3 127.0.0.1:6382@16382 master - 0 1603694933000 9 connected 0-1249 5001-6250 10001-11596
```
Now consider removing the two newly added nodes, 6382 and 6482. The slots assigned to node 6382 must first be migrated to other nodes.
Reshard node 6382's 4096 slots back, here to node 6479, and verify with CLUSTER NODES:

```
$ redis-cli --cluster reshard 127.0.0.1 6382
>>> Performing Cluster Check (using node 127.0.0.1:6382)
M: af81109fc29f69f9184ce9512c46df476fe693a3 127.0.0.1:6382
   slots:[0-1249],[5001-6250],[10001-11596] (4096 slots) master
   1 additional replica(s)
M: 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381
   slots:[11597-16383] (4787 slots) master
   1 additional replica(s)
S: 87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379
   slots: (0 slots) slave
   replicates 4c23b25bd4bcef7f4b77d8287e330ae72e738883
S: 32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480
   slots: (0 slots) slave
   replicates c47598b25205cc88abe2e5094d5bfd9ea202335f
M: 4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479
   slots:[1250-5000] (3751 slots) master
   1 additional replica(s)
M: c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380
   slots:[6251-10000] (3750 slots) master
   1 additional replica(s)
S: 706f399b248ed3a080cf1d4e43047a79331b714f 127.0.0.1:6482
   slots: (0 slots) slave
   replicates af81109fc29f69f9184ce9512c46df476fe693a3
S: 9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481
   slots: (0 slots) slave
   replicates 51081a64ddb3ccf5432c435a8cf20d45ab795dd8
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? 4c23b25bd4bcef7f4b77d8287e330ae72e738883
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: af81109fc29f69f9184ce9512c46df476fe693a3
Source node #2: done

127.0.0.1:6379> CLUSTER NODES
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603773540922 0 connected 6251-10000
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,slave 4c23b25bd4bcef7f4b77d8287e330ae72e738883 0 1603773539000 1 connected
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603773541000 10 connected 0-6250 10001-11596
706f399b248ed3a080cf1d4e43047a79331b714f 127.0.0.1:6482@16482 slave 4c23b25bd4bcef7f4b77d8287e330ae72e738883 0 1603773541000 10 connected
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 slave c47598b25205cc88abe2e5094d5bfd9ea202335f 0 1603773539000 5 connected
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 slave 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 0 1603773541931 4 connected
af81109fc29f69f9184ce9512c46df476fe693a3 127.0.0.1:6382@16382 master - 0 1603773539000 9 connected
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603773540000 2 connected 11597-16383
```
With node 6382 emptied of slots, delete both nodes with del-node:

```
$ redis-cli --cluster del-node 127.0.0.1:6482 706f399b248ed3a080cf1d4e43047a79331b714f
>>> Removing node 706f399b248ed3a080cf1d4e43047a79331b714f from cluster 127.0.0.1:6482
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
$ redis-cli --cluster del-node 127.0.0.1:6382 af81109fc29f69f9184ce9512c46df476fe693a3
>>> Removing node af81109fc29f69f9184ce9512c46df476fe693a3 from cluster 127.0.0.1:6382
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.

127.0.0.1:6379> CLUSTER NODES
c47598b25205cc88abe2e5094d5bfd9ea202335f 127.0.0.1:6380@16380 master - 0 1603773679121 0 connected 6251-10000
87b7dfacde34b3cf57d5f46ab44fd6fffb2e4f52 127.0.0.1:6379@16379 myself,slave 4c23b25bd4bcef7f4b77d8287e330ae72e738883 0 1603773677000 1 connected
4c23b25bd4bcef7f4b77d8287e330ae72e738883 127.0.0.1:6479@16479 master - 0 1603773678000 10 connected 0-6250 10001-11596
32ed645a9c9d13ca68dba5a147937fb1d05922ee 127.0.0.1:6480@16480 slave c47598b25205cc88abe2e5094d5bfd9ea202335f 0 1603773680130 5 connected
9d587b75bdaed26ca582036ed706df8b2282b0aa 127.0.0.1:6481@16481 slave 51081a64ddb3ccf5432c435a8cf20d45ab795dd8 0 1603773677099 4 connected
51081a64ddb3ccf5432c435a8cf20d45ab795dd8 127.0.0.1:6381@16381 master - 0 1603773678112 2 connected 11597-16383
```
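As a final sanity check (optional), redis-cli --cluster check confirms that all 16384 slots remain covered after the removals:

```bash
# Any working node of the cluster can be given as the entry point.
redis-cli --cluster check 127.0.0.1:6379
```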
Building a Redis cluster environment consists of four steps: starting the nodes, node handshake, slot assignment, and master-replica replication; scaling the cluster touches the same areas. Managing the cluster through the redis-cli --cluster commands is not only more convenient, it also reduces the risk of operator error.
Original post: https://www.cnblogs.com/hueyx...