For the principles behind redis-cluster, see this article:
https://blog.csdn.net/truelove12358/article/details/79612954
Note: this walkthrough builds a pseudo-cluster by starting 6 redis instances across 2 servers.
Environment:
Server 1: 10.129.24.16, redis instances 5001, 5002, 5003
Server 2: 10.129.24.7, redis instances 5004, 5005, 5006
1. Preparation
1.1 Install Redis
[root@Redis-RS01 src]# tar -zxvf redis-3.0.7.tar.gz
[root@Redis-RS01 src]# cd redis-3.0.7
[root@Redis-RS01 redis-3.0.7]# make && make PREFIX=/usr/local/redis install
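If the build succeeds, the binaries end up under the PREFIX given above; a quick sanity check is to print their versions (a minimal sketch, nothing cluster-specific yet):

# confirm the freshly installed binaries report version 3.0.7
/usr/local/redis/bin/redis-server --version
/usr/local/redis/bin/redis-cli --version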
1.2 Configuration file
daemonize yes
pidfile /var/run/redis_5001.pid
port 5001
bind 10.129.24.16
logfile "/usr/local/redis-cluster/5001/log/redis.log"
dir /usr/local/redis-cluster/5001/data
cluster-enabled yes
cluster-config-file nodes5001.conf
cluster-node-timeout 15000
appendonly yes
1.3 Create the cluster directories (one per redis instance)
[root@Redis-RS01 redis-3.0.7]# mkdir -p /usr/local/redis-cluster/{5001/{data,log},5002/{data,log},5003/{data,log}}
[root@Redis-RS01 redis-3.0.7]# cp redis.conf /usr/local/redis-cluster/5001/
[root@Redis-RS01 redis-3.0.7]# cp redis.conf /usr/local/redis-cluster/5002/
[root@Redis-RS01 redis-3.0.7]# sed -i 's/5001/5002/g' /usr/local/redis-cluster/5002/redis.conf
[root@Redis-RS01 redis-3.0.7]# cp redis.conf /usr/local/redis-cluster/5003/
[root@Redis-RS01 redis-3.0.7]# sed -i 's/5001/5003/g' /usr/local/redis-cluster/5003/redis.conf
2. Configure the redis cluster
2.1 Since redis 3.0, the cluster creation command (redis-trib.rb) is implemented in Ruby, so a Ruby environment has to be installed first.
[root@Redis-RS01 redis-3.0.7]# yum -y install ruby
[root@Redis-RS01 redis-3.0.7]# yum -y install rubygems
[root@Redis-RS01 redis-3.0.7]# gem install redis
Installing the redis gem at this point may fail with:
gem install redis
ERROR: Error installing redis:
redis requires Ruby version >= 2.2.2.
The fix is to install a Ruby version newer than 2.2.2:
[root@Redis-RS01 redis-3.0.7]# gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
[root@Redis-RS01 redis-3.0.7]# curl -sSL https://get.rvm.io | bash -s stable
[root@Redis-RS01 redis-3.0.7]# source /usr/local/rvm/scripts/rvm
[root@Redis-RS01 redis-3.0.7]# rvm install 2.4.1
[root@Redis-RS01 redis-3.0.7]# rvm use 2.4.1
[root@Redis-RS01 redis-3.0.7]# rvm use 2.4.1 --default
[root@Redis-RS01 redis-3.0.7]# gem install redis
Fetching: redis-4.0.1.gem (100%)
Successfully installed redis-4.0.1
Parsing documentation for redis-4.0.1
Installing ri documentation for redis-4.0.1
Done installing documentation for redis after 3 seconds
1 gem installed
The setup on the other server is identical to the above, so it is not repeated here; just make sure the ports and directories in its configuration files are correct (a sketch follows).
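For reference, a sketch of the equivalent preparation on server 2 (10.129.24.7), assuming the same source tree and the already-edited redis.conf from step 1.2; the loop and the second sed (which rewrites the bind address) are my additions, not taken from the original session:

# create directories for 5004-5006 and derive each config from the 5001 template
mkdir -p /usr/local/redis-cluster/{5004/{data,log},5005/{data,log},5006/{data,log}}
for port in 5004 5005 5006; do
  cp redis.conf /usr/local/redis-cluster/$port/
  sed -i "s/5001/$port/g" /usr/local/redis-cluster/$port/redis.conf
  sed -i "s/10.129.24.16/10.129.24.7/g" /usr/local/redis-cluster/$port/redis.conf
done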
2.2 Start all redis instances
[root@Redis-RS01 redis-3.0.7]# /usr/local/redis/bin/redis-server /usr/local/redis-cluster/5001/redis.conf
[root@Redis-RS01 redis-3.0.7]# /usr/local/redis/bin/redis-server /usr/local/redis-cluster/5002/redis.conf
[root@Redis-RS01 redis-3.0.7]# /usr/local/redis/bin/redis-server /usr/local/redis-cluster/5003/redis.conf
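Before creating the cluster it is worth confirming that all six instances are actually up; a quick check from either server (a sketch):

# each instance should be listed in cluster mode and answer PONG
ps -ef | grep redis-server
/usr/local/redis/bin/redis-cli -h 10.129.24.16 -p 5001 ping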
3. Start the redis cluster
3.1 redis-trib.rb subcommands (two example invocations follow the list)
1. create: create a cluster
2. check: check a cluster
3. info: show cluster information
4. fix: repair a cluster
5. reshard: migrate slots online
6. rebalance: balance the number of slots across the cluster nodes
7. add-node: add a new node to the cluster
8. del-node: remove a node from the cluster
9. set-timeout: set the timeout for heartbeat connections between cluster nodes
10. call: run a command on all nodes of the cluster
11. import: import data from an external redis into the cluster
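Most subcommands take any live cluster node as their entry point. As an illustration, two invocations that are not used later in this walkthrough (a sketch, assuming the same paths as below):

# print a per-master summary of slots and keys
/usr/local/redis-cluster/bin/redis-trib.rb info 10.129.24.16:5001
# run the same command on every node of the cluster
/usr/local/redis-cluster/bin/redis-trib.rb call 10.129.24.16:5001 cluster info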
[root@Redis-RS01 5001]# /usr/local/redis-cluster/bin/redis-trib.rb create --replicas 1 10.129.24.16:5001 10.129.24.16:5002 10.129.24.16:5003 10.129.24.7:5004 10.129.24.7:5005 10.129.24.7:5006
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
10.129.24.16:5001
10.129.24.7:5004
10.129.24.16:5002
Adding replica 10.129.24.7:5005 to 10.129.24.16:5001
Adding replica 10.129.24.16:5003 to 10.129.24.7:5004
Adding replica 10.129.24.7:5006 to 10.129.24.16:5002
M: 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b 10.129.24.16:5001
   slots:0-5460 (5461 slots) master
M: 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9 10.129.24.16:5002
   slots:10923-16383 (5461 slots) master
S: 51d0615f10b6912e1e88bbca6cd4e8d843541f3a 10.129.24.16:5003
   replicates bdf828d85ecb97612d16dc20803d25ade427e7e6
M: bdf828d85ecb97612d16dc20803d25ade427e7e6 10.129.24.7:5004
   slots:5461-10922 (5462 slots) master
S: 4a06022c8d4354cecbce3dc769173be0b100a939 10.129.24.7:5005
   replicates 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b
S: 958e79522a2eb81134a2e62d9fc991fe14320af4 10.129.24.7:5006
   replicates 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join.......
>>> Performing Cluster Check (using node 10.129.24.16:5001)
M: 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b 10.129.24.16:5001
   slots:0-5460 (5461 slots) master
M: 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9 10.129.24.16:5002
   slots:10923-16383 (5461 slots) master
M: 51d0615f10b6912e1e88bbca6cd4e8d843541f3a 10.129.24.16:5003
   slots: (0 slots) master
   replicates bdf828d85ecb97612d16dc20803d25ade427e7e6
M: bdf828d85ecb97612d16dc20803d25ade427e7e6 10.129.24.7:5004
   slots:5461-10922 (5462 slots) master
M: 4a06022c8d4354cecbce3dc769173be0b100a939 10.129.24.7:5005
   slots: (0 slots) master
   replicates 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b
M: 958e79522a2eb81134a2e62d9fc991fe14320af4 10.129.24.7:5006
   slots: (0 slots) master
   replicates 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@Redis-RS01 5001]#
--replicas 1 means that one replica is created for each master node.
At this point the redis cluster has been created; next, verify it.
4. Verification
4.1 Connect to redis
[root@Redis-RS01 5001]# /usr/local/redis/bin/redis-cli -c -h 10.129.24.16 -p 5001
10.129.24.16:5001> keys *
(empty list or set)
10.129.24.16:5001>
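Besides listing keys, the cluster's health can be checked from inside any instance with the CLUSTER commands (a quick sketch):

# cluster_state should be ok and cluster_slots_assigned should be 16384
/usr/local/redis/bin/redis-cli -h 10.129.24.16 -p 5001 cluster info
# lists every node with its ID, role and slot ranges
/usr/local/redis/bin/redis-cli -h 10.129.24.16 -p 5001 cluster nodes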
4.2 Set a key
10.129.24.16:5001> set name "john.gou"
-> Redirected to slot [5798] located at 10.129.24.7:5004
OK
10.129.24.7:5004>
The write succeeds, and the cluster has automatically placed the key in slot 5798, which lives on the 5004 master instance.
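The target slot is simply CRC16(key) mod 16384, and you can ask any node to compute it without writing anything, using CLUSTER KEYSLOT (a sketch):

# should print 5798 for the key "name", matching the redirect above
/usr/local/redis/bin/redis-cli -h 10.129.24.16 -p 5001 cluster keyslot name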
4.3 Connect to other instances and read the key we just set
[root@Redis-RS01 5001]# /usr/local/redis/bin/redis-cli -c -h 10.129.24.16 -p 5002
10.129.24.16:5002> get name
-> Redirected to slot [5798] located at 10.129.24.7:5004
"john.gou"
[root@Redis-RS01 5001]# /usr/local/redis/bin/redis-cli -c -h 10.129.24.7 -p 5005
10.129.24.7:5005> get name
-> Redirected to slot [5798] located at 10.129.24.7:5004
"john.gou"
[root@Redis-RS01 5001]# /usr/local/redis/bin/redis-cli -c -h 10.129.24.16 -p 5003
10.129.24.16:5003> get name
-> Redirected to slot [5798] located at 10.129.24.7:5004
"john.gou"
The other instances can read the key as well; each time the client is redirected to the slot's owner.
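The redirects work only because redis-cli was started with -c (cluster mode). Without -c the client does not follow redirections and just returns the raw MOVED error, roughly like this (a sketch):

# no -c: the MOVED error is returned to the caller instead of being followed
/usr/local/redis/bin/redis-cli -h 10.129.24.16 -p 5002 get name
(error) MOVED 5798 10.129.24.7:5004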
4.4 Check the current master/slave state of the cluster
[root@iZ28tqwgn5qZ redis-cluster]# /usr/local/redis-cluster/bin/redis-trib.rb check 10.129.24.16:5001
>>> Performing Cluster Check (using node 10.129.24.16:5001)
M: 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b 10.129.24.16:5001
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9 10.129.24.16:5002
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 958e79522a2eb81134a2e62d9fc991fe14320af4 10.129.24.7:5006
   slots: (0 slots) slave
   replicates 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9
M: bdf828d85ecb97612d16dc20803d25ade427e7e6 10.129.24.7:5004
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 4a06022c8d4354cecbce3dc769173be0b100a939 10.129.24.7:5005
   slots: (0 slots) slave
   replicates 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b
S: 51d0615f10b6912e1e88bbca6cd4e8d843541f3a 10.129.24.16:5003
   slots: (0 slots) slave
   replicates bdf828d85ecb97612d16dc20803d25ade427e7e6
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
5001, 5004 and 5002 are the master instances, and their replicas are 5005, 5003 and 5006 respectively. Next we shut down the 5004 master to verify that 5003 is promoted to master in its place.
4.5 Kill the 5004 instance and verify the data is still available
[root@iZ28tqwgn5qZ redis-cluster]# ps -ef |grep redis
root 2550 1 0 15:32 ? 00:00:01 /usr/local/redis/bin/redis-server 10.129.24.7:5004 [cluster]
root 2558 1 0 15:32 ? 00:00:01 /usr/local/redis/bin/redis-server 10.129.24.7:5005 [cluster]
root 2566 1 0 15:32 ? 00:00:01 /usr/local/redis/bin/redis-server 10.129.24.7:5006 [cluster]
root 3876 12717 0 15:58 pts/7 00:00:00 grep redis
[root@iZ28tqwgn5qZ redis-cluster]# kill -9 2550
[root@iZ28tqwgn5qZ redis-cluster]# /usr/local/redis/bin/redis-cli -c -h 10.129.24.16 -p 5001
10.129.24.16:5001>
10.129.24.16:5001> get name
-> Redirected to slot [5798] located at 10.129.24.16:5003
"john.gou"
10.129.24.16:5003>
The request is now redirected to the 5003 instance, whereas before it went to 5004.
4.6 Check the cluster state again
[root@iZ28tqwgn5qZ redis-cluster]# /usr/local/redis-cluster/bin/redis-trib.rb check 10.129.24.16:5001
>>> Performing Cluster Check (using node 10.129.24.16:5001)
M: 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b 10.129.24.16:5001
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9 10.129.24.16:5002
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 958e79522a2eb81134a2e62d9fc991fe14320af4 10.129.24.7:5006
   slots: (0 slots) slave
   replicates 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9
S: 4a06022c8d4354cecbce3dc769173be0b100a939 10.129.24.7:5005
   slots: (0 slots) slave
   replicates 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b
M: 51d0615f10b6912e1e88bbca6cd4e8d843541f3a 10.129.24.16:5003
   slots:5461-10922 (5462 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
The 5003 instance has taken over from 5004 as master.
4.7 Bring the 5004 instance back and verify whether it automatically rejoins the cluster and takes over from 5003 as master
[root@iZ28tqwgn5qZ redis-cluster]# /usr/local/redis/bin/redis-server ./5004/redis.conf
[root@Redis-RS01 redis-cluster]# ./bin/redis-trib.rb check 10.129.24.16:5001
>>> Performing Cluster Check (using node 10.129.24.16:5001)
M: 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b 10.129.24.16:5001
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9 10.129.24.16:5002
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 958e79522a2eb81134a2e62d9fc991fe14320af4 10.129.24.7:5006
   slots: (0 slots) slave
   replicates 85b7d3393161eef4b5faebc75b5f3be6c3cca2d9
S: bdf828d85ecb97612d16dc20803d25ade427e7e6 10.129.24.7:5004
   slots: (0 slots) slave
   replicates 51d0615f10b6912e1e88bbca6cd4e8d843541f3a
S: 4a06022c8d4354cecbce3dc769173be0b100a939 10.129.24.7:5005
   slots: (0 slots) slave
   replicates 5cf7204523d806c7cf1bac7eb0b2fd4fcbf2a94b
M: 51d0615f10b6912e1e88bbca6cd4e8d843541f3a 10.129.24.16:5003
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
As you can see, after the old master comes back it does not take over from the current master 5003; it rejoins as a replica and will only be promoted again if 5003 fails.
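If you want the recovered 5004 to become master again without waiting for 5003 to fail, a coordinated switch can be triggered manually with CLUSTER FAILOVER, which must be sent to the replica (a sketch, run against 5004 here):

# ask the replica to negotiate a takeover from its current master (5003)
/usr/local/redis/bin/redis-cli -h 10.129.24.7 -p 5004 cluster failover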
5. Add a master node to the cluster
5.1 Prepare a new instance (the examples in this section come from a separate test cluster on 192.168.3.143, with masters on ports 6380, 6382, 6383 and 6387 and their replicas on 6381, 6384, 6385 and 6386)
root@master1:/usr/local/redis# mkdir -p 6388/{data,log}
root@master1:/usr/local/redis# cat 6380/redis.conf >6388/redis.conf
root@master1:/usr/local/redis# sed -i 's/6380/6388/g' 6388/redis.conf
root@master1:/usr/local/redis# ./bin/redis-server 6388/redis.conf
root@master1:/usr/local/redis# ps -ef |grep 6388
root 23686 1 0 10:10 ? 00:00:00 ./bin/redis-server 192.168.3.143:6388 [cluster]
5.2 Add the new master node to the current cluster
root@master1:/usr/local/redis# ./redis-trib.rb add-node 192.168.3.143:6388 192.168.3.143:6380
>>> Adding node 192.168.3.143:6388 to cluster 192.168.3.143:6380
>>> Performing Cluster Check (using node 192.168.3.143:6380)
M: 0c9fae90cfa5fac015f8e172af056f830056b79d 192.168.3.143:6380
   slots:0-4095 (4096 slots) master
   1 additional replica(s)
S: ab5898dde3de48ddf2829ae8a7359e274790f5b9 192.168.3.143:6381
   slots: (0 slots) slave
   replicates 1a4852c5bf5d751c011d7cd57764da015a24e468
S: f639f8ff30c7f2f3a92b2406cb3e640629358645 192.168.3.143:6384
   slots: (0 slots) slave
   replicates cacb64e70f0f142d6ef24b862f84eab074eb7bea
S: dd4da448f03f5bd415f9e62e3070e4050984ede0 192.168.3.143:6386
   slots: (0 slots) slave
   replicates 0c9fae90cfa5fac015f8e172af056f830056b79d
M: 1a4852c5bf5d751c011d7cd57764da015a24e468 192.168.3.143:6387
   slots:4096-8191 (4096 slots) master
   1 additional replica(s)
S: 788d5ef31c92f7e5f6d1020db46241c652a60f54 192.168.3.143:6385
   slots: (0 slots) slave
   replicates f4308e9dac62e18934ad29b25007c48691a2051f
M: f4308e9dac62e18934ad29b25007c48691a2051f 192.168.3.143:6382
   slots:8192-12287 (4096 slots) master
   1 additional replica(s)
M: cacb64e70f0f142d6ef24b862f84eab074eb7bea 192.168.3.143:6383
   slots:12288-16383 (4096 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.3.143:6388 to make it join the cluster.
[OK] New node added correctly.
5.3 Check the current cluster state
root@master1:/usr/local/redis# ./redis-trib.rb check 192.168.3.143:6380
>>> Performing Cluster Check (using node 192.168.3.143:6380)
...
...
...
M: e279f6728d4996cf864acb590b03747dae6191f4 192.168.3.143:6388
   slots: (0 slots) master
   0 additional replica(s)
The 6388 node has joined the cluster as a master, but it holds no slots yet, so the cluster's slots still have to be redistributed.
5.4 Assign hash slots to the new master node
Enter how many slots should be moved to the new master node:
root@master1:/usr/local/redis# ./redis-trib.rb reshard 192.168.3.143:6380
>>> Performing Cluster Check (using node 192.168.3.143:6380)
...
...
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 2000
Enter the ID of the node that will receive the slots; here it is the node ID of 6388.
What is the receiving node ID? e279f6728d4996cf864acb590b03747dae6191f4
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:all
Finally, enter which nodes the slots should be taken from for 6388; 'all' means they are taken from all current master nodes.
Check the cluster state after resharding:
root@master1:/usr/local/redis# ./redis-trib.rb check 192.168.3.143:6380
>>> Performing Cluster Check (using node 192.168.3.143:6380)
...
...
M: e279f6728d4996cf864acb590b03747dae6191f4 192.168.3.143:6388
   slots:0-1169,4096-4619,8192-8714,12288-12810 (2740 slots) master
   0 additional replica(s)
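Depending on the redis-trib.rb version, the same reshard can also be driven non-interactively with command-line options instead of answering prompts; a hedged sketch (the long ID is 6388's node ID, and the availability of these flags in your build is an assumption):

# move 2000 slots from all masters to 6388 without interactive prompts
./redis-trib.rb reshard --from all --to e279f6728d4996cf864acb590b03747dae6191f4 --slots 2000 --yes 192.168.3.143:6380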
5.5 Add a slave for the new master node
As before, prepare a new instance:
root@master1:/usr/local/redis# mkdir -p 6389/{data,log}
root@master1:/usr/local/redis# cat 6380/redis.conf >6389/redis.conf
root@master1:/usr/local/redis# sed -i 's/6380/6389/g' 6389/redis.conf
root@master1:/usr/local/redis# ./bin/redis-server 6389/redis.conf
root@master1:/usr/local/redis# ps -ef |grep 6389
root 24132 1 0 10:49 ? 00:00:00 ./bin/redis-server 192.168.3.143:6389 [cluster]
Add the slave node for 6388:
root@master1:/usr/local/redis# ./redis-trib.rb add-node --slave --master-id e279f6728d4996cf864acb590b03747dae6191f4 192.168.3.143:6389 192.168.3.143:6380
>>> Adding node 192.168.3.143:6389 to cluster 192.168.3.143:6380
>>> Performing Cluster Check (using node 192.168.3.143:6380)
...
...
M: e279f6728d4996cf864acb590b03747dae6191f4 192.168.3.143:6388
   slots:0-1169,4096-4619,8192-8714,12288-12810 (2740 slots) master
   0 additional replica(s)
...
...
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.3.143:6389 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 192.168.3.143:6388.
[OK] New node added correctly.
Check the current cluster state:
root@master1:/usr/local/redis# ./redis-trib.rb check 192.168.3.143:6380
>>> Performing Cluster Check (using node 192.168.3.143:6380)
...
...
M: e279f6728d4996cf864acb590b03747dae6191f4 192.168.3.143:6388
   slots:0-1169,4096-4619,8192-8714,12288-12810 (2740 slots) master
   1 additional replica(s)
S: fefd056625f5cc1a75f81ca555e70bcf3f4efaed 192.168.3.143:6389
   slots: (0 slots) slave
   replicates e279f6728d4996cf864acb590b03747dae6191f4
...
...
The slave has been added successfully; you can see that 6389 replicates the node ID of 6388.
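You can also confirm the replication link directly from the new replica (a quick sketch):

# role should be slave and master_host/master_port should point at 192.168.3.143:6388
./bin/redis-cli -h 192.168.3.143 -p 6389 info replication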
5.6 Remove nodes
Before a node can be removed, any slots it holds must first be moved away with reshard.
Remove the slave node:
root@master1:/usr/local/redis# ./redis-trib.rb del-node 192.168.3.143:6389 fefd056625f5cc1a75f81ca555e70bcf3f4efaed
>>> Removing node fefd056625f5cc1a75f81ca555e70bcf3f4efaed from cluster 192.168.3.143:6389
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
root@master1:/usr/local/redis# ./redis-trib.rb check 192.168.3.143:6380
>>> Performing Cluster Check (using node 192.168.3.143:6380)
M: 0c9fae90cfa5fac015f8e172af056f830056b79d 192.168.3.143:6380
   slots:1170-4095 (2926 slots) master
   1 additional replica(s)
M: e279f6728d4996cf864acb590b03747dae6191f4 192.168.3.143:6388
   slots:0-1169,4096-4619,8192-8714,12288-12810 (2740 slots) master
   0 additional replica(s)
S: 788d5ef31c92f7e5f6d1020db46241c652a60f54 192.168.3.143:6385
   slots: (0 slots) slave
   replicates f4308e9dac62e18934ad29b25007c48691a2051f
M: 1a4852c5bf5d751c011d7cd57764da015a24e468 192.168.3.143:6387
   slots:4620-8191 (3572 slots) master
   1 additional replica(s)
M: cacb64e70f0f142d6ef24b862f84eab074eb7bea 192.168.3.143:6383
   slots:12811-16383 (3573 slots) master
   1 additional replica(s)
S: ab5898dde3de48ddf2829ae8a7359e274790f5b9 192.168.3.143:6381
   slots: (0 slots) slave
   replicates 1a4852c5bf5d751c011d7cd57764da015a24e468
S: dd4da448f03f5bd415f9e62e3070e4050984ede0 192.168.3.143:6386
   slots: (0 slots) slave
   replicates 0c9fae90cfa5fac015f8e172af056f830056b79d
S: f639f8ff30c7f2f3a92b2406cb3e640629358645 192.168.3.143:6384
   slots: (0 slots) slave
   replicates cacb64e70f0f142d6ef24b862f84eab074eb7bea
M: f4308e9dac62e18934ad29b25007c48691a2051f 192.168.3.143:6382
   slots:8715-12287 (3573 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
Remove the master node
The slots on the node being removed must be moved away first, otherwise an error is reported:
root@master1:/usr/local/redis# ./redis-trib.rb del-node 192.168.3.143:6388 e279f6728d4996cf864acb590b03747dae6191f4
>>> Removing node e279f6728d4996cf864acb590b03747dae6191f4 from cluster 192.168.3.143:6388
[ERR] Node 192.168.3.143:6388 is not empty! Reshard data away and try again.
Move the slots off the master node: enter 2740 as the number of slots to move, 6380 as the receiving node, and 6388 as the source node.
root@master1:/usr/local/redis# ./redis-trib.rb reshard 192.168.3.143:6388
>>> Performing Cluster Check (using node 192.168.3.143:6388)
...
...
...
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 2740
What is the receiving node ID? 0c9fae90cfa5fac015f8e172af056f830056b79d
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:e279f6728d4996cf864acb590b03747dae6191f4
Source node #2:done
...
...
...
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Checking again, 6388 no longer holds any slots:
root@master1:/usr/local/redis# ./redis-trib.rb check 192.168.3.143:6380
>>> Performing Cluster Check (using node 192.168.3.143:6380)
...
...
M: e279f6728d4996cf864acb590b03747dae6191f4 192.168.3.143:6388
   slots: (0 slots) master
   0 additional replica(s)
...
...
Remove the 6388 node:
root@master1:/usr/local/redis# ./redis-trib.rb del-node 192.168.3.143:6388 e279f6728d4996cf864acb590b03747dae6191f4
>>> Removing node e279f6728d4996cf864acb590b03747dae6191f4 from cluster 192.168.3.143:6388
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
Check the cluster information once more:
root@master1:/usr/local/redis# ./redis-trib.rb check 192.168.3.143:6380
>>> Performing Cluster Check (using node 192.168.3.143:6380)
M: 0c9fae90cfa5fac015f8e172af056f830056b79d 192.168.3.143:6380
   slots:0-4619,8192-8714,12288-12810 (5666 slots) master
   1 additional replica(s)
S: 788d5ef31c92f7e5f6d1020db46241c652a60f54 192.168.3.143:6385
   slots: (0 slots) slave
   replicates f4308e9dac62e18934ad29b25007c48691a2051f
M: 1a4852c5bf5d751c011d7cd57764da015a24e468 192.168.3.143:6387
   slots:4620-8191 (3572 slots) master
   1 additional replica(s)
M: cacb64e70f0f142d6ef24b862f84eab074eb7bea 192.168.3.143:6383
   slots:12811-16383 (3573 slots) master
   1 additional replica(s)
S: ab5898dde3de48ddf2829ae8a7359e274790f5b9 192.168.3.143:6381
   slots: (0 slots) slave
   replicates 1a4852c5bf5d751c011d7cd57764da015a24e468
S: dd4da448f03f5bd415f9e62e3070e4050984ede0 192.168.3.143:6386
   slots: (0 slots) slave
   replicates 0c9fae90cfa5fac015f8e172af056f830056b79d
S: f639f8ff30c7f2f3a92b2406cb3e640629358645 192.168.3.143:6384
   slots: (0 slots) slave
   replicates cacb64e70f0f142d6ef24b862f84eab074eb7bea
M: f4308e9dac62e18934ad29b25007c48691a2051f 192.168.3.143:6382
   slots:8715-12287 (3573 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.