Redis Cluster is the official distributed clustering solution for Redis. It supports adding and removing nodes online. A node in the cluster can be a master or a replica, and every master should have a corresponding replica so that the cluster stays highly available.
Redis Cluster follows the sharding (partitioning) approach of distributed systems: each master node is one shard, which means the stored data is spread across all shards. When a master is added or removed, the data that master held is redistributed to the other masters by moving its slots.
As the diagram shows, all nodes communicate with one another over a cluster bus port, which is each node's Redis service port + 10000. This port offset is fixed, so pay attention to your firewall rules. Nodes talk to each other using a binary protocol in order to reduce bandwidth consumption.
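As a quick sanity check (a minimal sketch; it assumes the iproute2 `ss` tool and the lab addresses used below), you can verify on a node that both the client port and the cluster bus port are listening:

```bash
# The client port (6379) and the cluster bus port (6379 + 10000 = 16379)
# should both show up as LISTEN once the instance runs in cluster mode.
ss -lntp | grep -E '6379|16379'
```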
Redis Cluster introduces the concept of slots (hash slots). The number of slots is fixed at 16384, and they are distributed evenly across the master nodes. Every key is mapped to a slot by a hash algorithm: when a key is written, the hash decides which slot (and therefore which node) it lands on; when it is read, the same hash is computed again to find the slot, and the request goes to the node that owns that slot.
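For a concrete example of the mapping (conceptually `slot = CRC16(key) mod 16384`), any node can be asked which slot a key hashes to; this sketch assumes the lab cluster built later in this article is already running:

```bash
# CLUSTER KEYSLOT returns the slot a key maps to, no matter which node answers.
redis-cli -h 192.168.56.11 -p 6379 CLUSTER KEYSLOT k1
# (integer) 12706   <- the same slot the redirection for k1 shows later on
```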
Redis Cluster cannot guarantee strong consistency. When data is written, the master acknowledges the write to the client as soon as it has stored it locally; waiting for every replica to sync before acknowledging would add latency that clients cannot tolerate. Redis Cluster therefore gives up strong consistency and puts performance first.
Redis Cluster requires at least three master nodes. In this lab we build the cluster with six nodes: three masters, each with one replica.
Hostname | IP:Port | Role |
---|---|---|
redis-master | 192.168.56.11:6379 | Redis Master |
redis-slave01 | 192.168.56.12:6379 | Redis Master |
redis-slave02 | 192.168.56.13:6379 | Redis Master |
redis-master | 192.168.56.11:6380 | Redis Slave |
redis-slave01 | 192.168.56.12:6380 | Redis Slave |
redis-slave02 | 192.168.56.13:6380 | Redis Slave |
Three virtual machines are used here, each running two Redis instances on ports 6379 and 6380. When editing the configuration files, remove any previous Sentinel and master/replica settings and empty the old data directories; otherwise Redis will not start in cluster mode. The relevant configuration for the 6379 and 6380 instances on one host is shown below; the other two nodes use the same configuration with their own bind addresses.
```
[root@redis-master ~]# grep -Ev "^$|#" /usr/local/redis/redis.conf
bind 192.168.56.11
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
pidfile "/var/run/redis_6379.pid"
logfile "/var/log/redis.log"
dir "/var/redis"
cluster-enabled yes                  # enable cluster mode
cluster-config-file nodes-6379.conf  # cluster config file, created automatically on first start
cluster-node-timeout 15000           # cluster node timeout, 15 seconds
......
[root@redis-master ~]# grep -Ev "^$|#" /usr/local/redis/redis_6380.conf
bind 192.168.56.11
protected-mode yes
port 6380
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
pidfile "/var/run/redis_6380.pid"
logfile "/var/log/redis_6380.log"
dir "/var/redis_6380"
cluster-enabled yes                  # enable cluster mode
cluster-config-file nodes-6380.conf  # cluster config file, created automatically on first start
cluster-node-timeout 15000           # cluster node timeout, 15 seconds
......
[root@redis-master ~]# mkdir /var/redis_6380    # create the data directory for the 6380 instance
```
```
[root@redis-master ~]# systemctl start redis
[root@redis-master ~]# redis-server /usr/local/redis/redis_6380.conf
[root@redis-master ~]# ps -ef |grep redis
root 3536 1 0 09:33 ? 00:00:00 /usr/local/redis/src/redis-server 192.168.56.11:6379 [cluster]
root 3543 1 0 09:33 ? 00:00:00 redis-server 192.168.56.11:6380 [cluster]
[root@redis-slave01 ~]# systemctl start redis
[root@redis-slave01 ~]# redis-server /usr/local/redis/redis_6380.conf
[root@redis-slave01 ~]# ps axu |grep redis
root 3821 0.5 0.7 153832 7692 ? Ssl 09:35 0:00 /usr/local/redis/src/redis-server 192.168.56.12:6379 [cluster]
root 3826 0.5 0.6 153832 6896 ? Ssl 09:35 0:00 redis-server 192.168.56.12:6380 [cluster]
[root@redis-slave02 ~]# systemctl start redis
[root@redis-slave02 ~]# redis-server /usr/local/redis/redis_6380.conf
[root@redis-slave02 ~]# ps axu |grep redis
root 3801 0.7 0.7 153832 7696 ? Ssl 09:36 0:00 /usr/local/redis/src/redis-server 192.168.56.13:6379 [cluster]
root 3806 1.4 0.7 153832 7692 ? Ssl 09:36 0:00 redis-server 192.168.56.13:6380 [cluster]
```
If firewalld is enabled on the virtual machines, add the following rules on every host (the quick-and-dirty alternative is simply `systemctl stop firewalld`):
```
firewall-cmd --permanent --add-port 6379-6380/tcp
firewall-cmd --permanent --add-port 16379-16380/tcp
firewall-cmd --reload
```
Six Redis instances are now running:
192.168.56.11:6379
192.168.56.11:6380
192.168.56.12:6379
192.168.56.12:6380
192.168.56.13:6379
192.168.56.13:6380
Run the following command on any one node to build a cluster out of these six instances; `--cluster-replicas 1` means every master gets one replica.
```
[root@redis-master ~]# redis-cli --cluster create 192.168.56.11:6379 192.168.56.11:6380 192.168.56.12:6379 192.168.56.12:6380 192.168.56.13:6379 192.168.56.13:6380 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460          # slot allocation
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.56.12:6380 to 192.168.56.11:6379    # one replica per master
Adding replica 192.168.56.11:6380 to 192.168.56.12:6379
Adding replica 192.168.56.13:6380 to 192.168.56.13:6379
>>> Trying to optimize slaves allocation for anti-affinity
[OK] Perfect anti-affinity obtained!
M: 31886d2098fb1e627bd71b5af000957a1e252787 192.168.56.11:6379
   slots:[0-5460] (5461 slots) master
S: be0ef4a1b1a60cee781afe5c2b8b5cbd7b68b4e6 192.168.56.11:6380
   replicates 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446
M: 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446 192.168.56.12:6379
   slots:[5461-10922] (5462 slots) master
S: eb812dc6051776e151bf69cd328bd0a66a20de01 192.168.56.12:6380
   replicates 587adfa041d0c0a14aa1a875bdec219a56b10201
M: 587adfa041d0c0a14aa1a875bdec219a56b10201 192.168.56.13:6379
   slots:[10923-16383] (5461 slots) master
S: 55a6f654dcb87c6a8017c8619f0ce8763a92abd6 192.168.56.13:6380
   replicates 31886d2098fb1e627bd71b5af000957a1e252787
Can I set the above configuration? (type 'yes' to accept): yes    # accept the proposed configuration by typing yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 192.168.56.11:6379)
M: 31886d2098fb1e627bd71b5af000957a1e252787 192.168.56.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 55a6f654dcb87c6a8017c8619f0ce8763a92abd6 192.168.56.13:6380
   slots: (0 slots) slave
   replicates 31886d2098fb1e627bd71b5af000957a1e252787
M: 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446 192.168.56.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: be0ef4a1b1a60cee781afe5c2b8b5cbd7b68b4e6 192.168.56.11:6380
   slots: (0 slots) slave
   replicates 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446
S: eb812dc6051776e151bf69cd328bd0a66a20de01 192.168.56.12:6380
   slots: (0 slots) slave
   replicates 587adfa041d0c0a14aa1a875bdec219a56b10201
M: 587adfa041d0c0a14aa1a875bdec219a56b10201 192.168.56.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
```

At this point the cluster has been created successfully.
You can now connect to any node with `redis-cli -c` (cluster mode) and perform everyday operations, for example:
```
[root@redis-master ~]# redis-cli -c -h 192.168.56.11 -p 6380
192.168.56.11:6380> set k1 123      # k1 is placed on the 6379 instance of the .13 node
-> Redirected to slot [12706] located at 192.168.56.13:6379
OK
192.168.56.13:6379> set k2 abc      # k2 is placed on the 6379 instance of the .11 node
-> Redirected to slot [449] located at 192.168.56.11:6379
OK
192.168.56.11:6379> set k3 efg      # k3 is stored on the local instance
OK
192.168.56.11:6379> KEYS *          # reads are redirected the same way, showing where the data lives
1) "k2"
2) "k3"
192.168.56.11:6379> get k1
-> Redirected to slot [12706] located at 192.168.56.13:6379
"123"
192.168.56.13:6379> get k2
-> Redirected to slot [449] located at 192.168.56.11:6379
"abc"
192.168.56.11:6379> get k3
"efg"
```
The overall cluster state can be verified from any node with the `check` subcommand:

```
[root@redis-master ~]# redis-cli --cluster check 192.168.56.11:6379
192.168.56.11:6379 (31886d20...) -> 2 keys | 5461 slots | 1 slaves.
192.168.56.12:6379 (8cd40e6a...) -> 0 keys | 5462 slots | 1 slaves.
192.168.56.13:6379 (587adfa0...) -> 1 keys | 5461 slots | 1 slaves.
[OK] 3 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.56.11:6379)
M: 31886d2098fb1e627bd71b5af000957a1e252787 192.168.56.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 55a6f654dcb87c6a8017c8619f0ce8763a92abd6 192.168.56.13:6380
   slots: (0 slots) slave
   replicates 31886d2098fb1e627bd71b5af000957a1e252787
M: 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446 192.168.56.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: be0ef4a1b1a60cee781afe5c2b8b5cbd7b68b4e6 192.168.56.11:6380
   slots: (0 slots) slave
   replicates 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446
S: eb812dc6051776e151bf69cd328bd0a66a20de01 192.168.56.12:6380
   slots: (0 slots) slave
   replicates 587adfa041d0c0a14aa1a875bdec219a56b10201
M: 587adfa041d0c0a14aa1a875bdec219a56b10201 192.168.56.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
```
To add nodes to the cluster, copy the configuration file on the redis-master host and create two more instances on ports 6381 and 6382:
```
[root@redis-master redis]# cp redis_6380.conf redis_6381.conf
[root@redis-master redis]# cp redis_6380.conf redis_6382.conf
[root@redis-master redis]# vim redis_6381.conf      # edit the 6381 instance config
port 6381
pidfile "/var/run/redis_6381.pid"
logfile "/var/log/redis_6381.log"
dir "/var/redis_6381"
[root@redis-master redis]# vim redis_6382.conf      # edit the 6382 instance config
port 6382
pidfile "/var/run/redis_6382.pid"
logfile "/var/log/redis_6382.log"
dir "/var/redis_6382"
[root@redis-master redis]# mkdir /var/redis_6381    # create data directories for the new instances
[root@redis-master redis]# mkdir /var/redis_6382
[root@redis-master ~]# redis-server /usr/local/redis/redis_6381.conf    # start the 6381 instance
[root@redis-master ~]# redis-server /usr/local/redis/redis_6382.conf    # start the 6382 instance
[root@redis-master ~]# ps axu |grep redis
root 3536 0.3 0.7 156392 7712 ? Ssl 09:34 1:07 /usr/local/redis/src/redis-server 192.168.56.11:6379 [cluster]
root 3543 0.3 0.3 169192 3388 ? Ssl 09:34 1:21 redis-server 192.168.56.11:6380 [cluster]
root 4189 0.2 0.2 153832 2852 ? Ssl 15:29 0:00 redis-server 192.168.56.11:6381 [cluster]
root 4194 0.1 0.2 153832 2852 ? Ssl 15:29 0:00 redis-server 192.168.56.11:6382 [cluster]
```
A node can be added to the cluster in several ways: as a master, as a replica whose master is chosen automatically, or as a replica of a specific master. The three approaches are shown below.
```
# Add node 192.168.56.11:6381 to the cluster as a master:
[root@redis-master ~]# redis-cli --cluster add-node 192.168.56.11:6381 192.168.56.11:6379
>>> Adding node 192.168.56.11:6381 to cluster 192.168.56.11:6379
>>> Performing Cluster Check (using node 192.168.56.11:6379)
M: 31886d2098fb1e627bd71b5af000957a1e252787 192.168.56.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 55a6f654dcb87c6a8017c8619f0ce8763a92abd6 192.168.56.13:6380
   slots: (0 slots) slave
   replicates 31886d2098fb1e627bd71b5af000957a1e252787
M: 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446 192.168.56.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: be0ef4a1b1a60cee781afe5c2b8b5cbd7b68b4e6 192.168.56.11:6380
   slots: (0 slots) slave
   replicates 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446
S: eb812dc6051776e151bf69cd328bd0a66a20de01 192.168.56.12:6380
   slots: (0 slots) slave
   replicates 587adfa041d0c0a14aa1a875bdec219a56b10201
M: 587adfa041d0c0a14aa1a875bdec219a56b10201 192.168.56.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.56.11:6381 to make it join the cluster.
[OK] New node added correctly.

# Check the cluster status: 192.168.56.11:6381 is now a master in the cluster
[root@redis-master ~]# redis-cli --cluster check 192.168.56.11:6379
192.168.56.11:6379 (31886d20...) -> 2 keys | 5461 slots | 1 slaves.
192.168.56.12:6379 (8cd40e6a...) -> 0 keys | 5462 slots | 1 slaves.
192.168.56.11:6381 (d04ed6a7...) -> 0 keys | 0 slots | 0 slaves.
192.168.56.13:6379 (587adfa0...) -> 1 keys | 5461 slots | 1 slaves.
[OK] 3 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.56.11:6379)
M: 31886d2098fb1e627bd71b5af000957a1e252787 192.168.56.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 55a6f654dcb87c6a8017c8619f0ce8763a92abd6 192.168.56.13:6380
   slots: (0 slots) slave
   replicates 31886d2098fb1e627bd71b5af000957a1e252787
M: 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446 192.168.56.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: be0ef4a1b1a60cee781afe5c2b8b5cbd7b68b4e6 192.168.56.11:6380
   slots: (0 slots) slave
   replicates 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446
S: eb812dc6051776e151bf69cd328bd0a66a20de01 192.168.56.12:6380
   slots: (0 slots) slave
   replicates 587adfa041d0c0a14aa1a875bdec219a56b10201
M: d04ed6a7d3dbf0aaff0643b045fe22efc7c34500 192.168.56.11:6381
   slots: (0 slots) master
M: 587adfa041d0c0a14aa1a875bdec219a56b10201 192.168.56.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
```
The 192.168.56.11:6381 instance above was added as a cluster master. Next, add the 6382 instance as a replica. Since the newly added master does not have a replica yet, the new replica will automatically be attached to it:
```
# Add node 192.168.56.11:6382 to the cluster as a replica:
[root@redis-master ~]# redis-cli --cluster add-node 192.168.56.11:6382 192.168.56.11:6379 --cluster-slave
>>> Adding node 192.168.56.11:6382 to cluster 192.168.56.11:6379
>>> Performing Cluster Check (using node 192.168.56.11:6379)
M: 31886d2098fb1e627bd71b5af000957a1e252787 192.168.56.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 55a6f654dcb87c6a8017c8619f0ce8763a92abd6 192.168.56.13:6380
   slots: (0 slots) slave
   replicates 31886d2098fb1e627bd71b5af000957a1e252787
M: 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446 192.168.56.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: be0ef4a1b1a60cee781afe5c2b8b5cbd7b68b4e6 192.168.56.11:6380
   slots: (0 slots) slave
   replicates 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446
S: eb812dc6051776e151bf69cd328bd0a66a20de01 192.168.56.12:6380
   slots: (0 slots) slave
   replicates 587adfa041d0c0a14aa1a875bdec219a56b10201
M: d04ed6a7d3dbf0aaff0643b045fe22efc7c34500 192.168.56.11:6381
   slots: (0 slots) master
M: 587adfa041d0c0a14aa1a875bdec219a56b10201 192.168.56.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Automatically selected master 192.168.56.11:6381
>>> Send CLUSTER MEET to node 192.168.56.11:6382 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 192.168.56.11:6381.
[OK] New node added correctly.

# Check the node status: the master of 6382 has node id d04ed6a7d3dbf0aaff0643b045fe22efc7c34500, i.e. the 6381 instance
[root@redis-master ~]# redis-cli --cluster check 192.168.56.11:6379
192.168.56.11:6379 (31886d20...) -> 2 keys | 5461 slots | 1 slaves.
192.168.56.12:6379 (8cd40e6a...) -> 0 keys | 5462 slots | 1 slaves.
192.168.56.11:6381 (d04ed6a7...) -> 0 keys | 0 slots | 1 slaves.
192.168.56.13:6379 (587adfa0...) -> 1 keys | 5461 slots | 1 slaves.
[OK] 3 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.56.11:6379)
M: 31886d2098fb1e627bd71b5af000957a1e252787 192.168.56.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 55a6f654dcb87c6a8017c8619f0ce8763a92abd6 192.168.56.13:6380
   slots: (0 slots) slave
   replicates 31886d2098fb1e627bd71b5af000957a1e252787
M: 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446 192.168.56.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 91c803049aa0a12678e26521959aba25d4b06913 192.168.56.11:6382
   slots: (0 slots) slave
   replicates d04ed6a7d3dbf0aaff0643b045fe22efc7c34500
S: be0ef4a1b1a60cee781afe5c2b8b5cbd7b68b4e6 192.168.56.11:6380
   slots: (0 slots) slave
   replicates 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446
S: eb812dc6051776e151bf69cd328bd0a66a20de01 192.168.56.12:6380
   slots: (0 slots) slave
   replicates 587adfa041d0c0a14aa1a875bdec219a56b10201
M: d04ed6a7d3dbf0aaff0643b045fe22efc7c34500 192.168.56.11:6381
   slots: (0 slots) master
   1 additional replica(s)
M: 587adfa041d0c0a14aa1a875bdec219a56b10201 192.168.56.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
```
You can also attach a replica to a specific master. For example, to add the 6382 instance as a replica of the 6381 instance:
```
redis-cli --cluster add-node 192.168.56.11:6382 192.168.56.11:6379 --cluster-slave --cluster-master-id d04ed6a7d3dbf0aaff0643b045fe22efc7c34500
```
As the output above shows, the new master (the 6381 instance) holds 0 slots, so let's assign some slots to it first. Note that the address passed to `reshard` can be that of any master in the cluster.
```
[root@redis-master ~]# redis-cli --cluster reshard 192.168.56.11:6379
>>> Performing Cluster Check (using node 192.168.56.11:6379)
M: 31886d2098fb1e627bd71b5af000957a1e252787 192.168.56.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 55a6f654dcb87c6a8017c8619f0ce8763a92abd6 192.168.56.13:6380
   slots: (0 slots) slave
   replicates 31886d2098fb1e627bd71b5af000957a1e252787
M: 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446 192.168.56.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 91c803049aa0a12678e26521959aba25d4b06913 192.168.56.11:6382
   slots: (0 slots) slave
   replicates d04ed6a7d3dbf0aaff0643b045fe22efc7c34500
S: be0ef4a1b1a60cee781afe5c2b8b5cbd7b68b4e6 192.168.56.11:6380
   slots: (0 slots) slave
   replicates 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446
S: eb812dc6051776e151bf69cd328bd0a66a20de01 192.168.56.12:6380
   slots: (0 slots) slave
   replicates 587adfa041d0c0a14aa1a875bdec219a56b10201
M: d04ed6a7d3dbf0aaff0643b045fe22efc7c34500 192.168.56.11:6381
   slots: (0 slots) master
   1 additional replica(s)
M: 587adfa041d0c0a14aa1a875bdec219a56b10201 192.168.56.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 100                 # how many slots to move; 100 here
What is the receiving node ID? d04ed6a7d3dbf0aaff0643b045fe22efc7c34500   # the node ID that receives the slots, i.e. the new master
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: 31886d2098fb1e627bd71b5af000957a1e252787   # first source master ID; type 'all' to take slots from every master
Source node #2: 587adfa041d0c0a14aa1a875bdec219a56b10201   # second source master ID; type 'done' when there are no more sources
Source node #3: done
Ready to move 100 slots.
  Source nodes:
    M: 31886d2098fb1e627bd71b5af000957a1e252787 192.168.56.11:6379
       slots:[0-5460] (5461 slots) master
       1 additional replica(s)
    M: 587adfa041d0c0a14aa1a875bdec219a56b10201 192.168.56.13:6379
       slots:[10923-16383] (5461 slots) master
       1 additional replica(s)
  Destination node:
    M: d04ed6a7d3dbf0aaff0643b045fe22efc7c34500 192.168.56.11:6381
       slots: (0 slots) master
       1 additional replica(s)
  Resharding plan:
    Moving slot 0 from 31886d2098fb1e627bd71b5af000957a1e252787
    Moving slot 1 from 31886d2098fb1e627bd71b5af000957a1e252787
    ......
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot 0 from 192.168.56.11:6379 to 192.168.56.11:6381:
Moving slot 1 from 192.168.56.11:6379 to 192.168.56.11:6381:
Moving slot 2 from 192.168.56.11:6379 to 192.168.56.11:6381:
Moving slot 3 from 192.168.56.11:6379 to 192.168.56.11:6381:
......
# Check the cluster slots again: the new master now holds 100 slots
[root@redis-master ~]# redis-cli --cluster check 192.168.56.11:6379
......
M: d04ed6a7d3dbf0aaff0643b045fe22efc7c34500 192.168.56.11:6381
   slots:[0-49],[10923-10972] (100 slots) master
   1 additional replica(s)
......
```
Slot assignment doubles as slot migration: when a master node has problems or needs to be taken out of service, its slots can be moved to the other nodes with the same reshard command:
```
redis-cli --cluster reshard 192.168.56.11:6379
```
Just as nodes can be added, they can also be removed. The node being removed must not own any slots; if it does, move them away first, otherwise the command fails. Also note that after a removed node restarts, it still remembers the other nodes of the cluster, and `cluster forget <node-id>` is needed to make it forget them.
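As a rough sketch of that cleanup (the address and node ID are taken from the example in this article and are placeholders only), you would run `cluster forget` on the restarted node for each node it still remembers, or simply clear its whole cluster state with `cluster reset`:

```bash
# On the removed node after it has been restarted:
redis-cli -h 192.168.56.11 -p 6381 cluster nodes    # list the nodes it still remembers
redis-cli -h 192.168.56.11 -p 6381 cluster forget 31886d2098fb1e627bd71b5af000957a1e252787
# Alternatively, drop all remembered nodes and slot assignments in one step:
redis-cli -h 192.168.56.11 -p 6381 cluster reset soft
```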
```
# First remove the 6382 replica instance; it holds no slots, so it can be removed directly
[root@redis-master ~]# redis-cli --cluster del-node 192.168.56.11:6382 91c803049aa0a12678e26521959aba25d4b06913
>>> Removing node 91c803049aa0a12678e26521959aba25d4b06913 from cluster 192.168.56.11:6382
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@redis-master ~]# redis-cli --cluster check 192.168.56.11:6379    # check again: the 6382 instance is gone
192.168.56.11:6379 (31886d20...) -> 2 keys | 5411 slots | 1 slaves.
192.168.56.12:6379 (8cd40e6a...) -> 0 keys | 5462 slots | 1 slaves.
192.168.56.11:6381 (d04ed6a7...) -> 0 keys | 100 slots | 0 slaves.
192.168.56.13:6379 (587adfa0...) -> 1 keys | 5411 slots | 1 slaves.
[OK] 3 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.56.11:6379)
M: 31886d2098fb1e627bd71b5af000957a1e252787 192.168.56.11:6379
   slots:[50-5460] (5411 slots) master
   1 additional replica(s)
S: 55a6f654dcb87c6a8017c8619f0ce8763a92abd6 192.168.56.13:6380
   slots: (0 slots) slave
   replicates 31886d2098fb1e627bd71b5af000957a1e252787
M: 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446 192.168.56.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: be0ef4a1b1a60cee781afe5c2b8b5cbd7b68b4e6 192.168.56.11:6380
   slots: (0 slots) slave
   replicates 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446
S: eb812dc6051776e151bf69cd328bd0a66a20de01 192.168.56.12:6380
   slots: (0 slots) slave
   replicates 587adfa041d0c0a14aa1a875bdec219a56b10201
M: d04ed6a7d3dbf0aaff0643b045fe22efc7c34500 192.168.56.11:6381
   slots:[0-49],[10923-10972] (100 slots) master
M: 587adfa041d0c0a14aa1a875bdec219a56b10201 192.168.56.13:6379
   slots:[10973-16383] (5411 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# Now try to remove the 6381 instance, which still holds slots; this fails:
[root@redis-master ~]# redis-cli --cluster del-node 192.168.56.11:6381 d04ed6a7d3dbf0aaff0643b045fe22efc7c34500
>>> Removing node d04ed6a7d3dbf0aaff0643b045fe22efc7c34500 from cluster 192.168.56.11:6381
[ERR] Node 192.168.56.11:6381 is not empty! Reshard data away and try again.
# The node is not empty and its data must be resharded away first, so migrate its slots off it
[root@redis-master ~]# redis-cli --cluster reshard 192.168.56.11:6381
>>> Performing Cluster Check (using node 192.168.56.11:6381)
M: d04ed6a7d3dbf0aaff0643b045fe22efc7c34500 192.168.56.11:6381
   slots:[0-49],[10923-10972] (100 slots) master
......
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 100
What is the receiving node ID? 587adfa041d0c0a14aa1a875bdec219a56b10201
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: d04ed6a7d3dbf0aaff0643b045fe22efc7c34500
Source node #2: done
[root@redis-master ~]# redis-cli --cluster check 192.168.56.11:6379
192.168.56.11:6379 (31886d20...) -> 2 keys | 5411 slots | 1 slaves.
192.168.56.12:6379 (8cd40e6a...) -> 0 keys | 5462 slots | 1 slaves.
192.168.56.11:6381 (d04ed6a7...) -> 0 keys | 0 slots | 0 slaves.    # the slot count is now 0
192.168.56.13:6379 (587adfa0...) -> 1 keys | 5511 slots | 1 slaves.
......
```
```
# Remove the 6381 node again; this time the removal completes cleanly
[root@redis-master ~]# redis-cli --cluster del-node 192.168.56.11:6381 d04ed6a7d3dbf0aaff0643b045fe22efc7c34500
>>> Removing node d04ed6a7d3dbf0aaff0643b045fe22efc7c34500 from cluster 192.168.56.11:6381
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
```
First use reshard to create an uneven slot distribution on purpose; the cluster check then shows unbalanced slot counts across the masters:
```
[root@redis-master ~]# redis-cli --cluster check 192.168.56.11:6379
192.168.56.11:6379 (31886d20...) -> 1 keys | 4416 slots | 1 slaves.
192.168.56.12:6379 (8cd40e6a...) -> 0 keys | 4457 slots | 1 slaves.
192.168.56.13:6379 (587adfa0...) -> 2 keys | 7511 slots | 1 slaves.
......
```
When the slot distribution becomes uneven, the rebalance feature can redistribute the slots across the nodes:
```
[root@redis-master ~]# redis-cli --cluster rebalance --cluster-threshold 1 192.168.56.11:6379
>>> Performing Cluster Check (using node 192.168.56.11:6379)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Rebalancing across 3 nodes. Total weight = 3.00
Moving 1046 slots from 192.168.56.13:6379 to 192.168.56.11:6379
###################################################################################################
Moving 1004 slots from 192.168.56.13:6379 to 192.168.56.12:6379
###################################################################################################
[root@redis-master ~]# redis-cli --cluster check 192.168.56.11:6379
192.168.56.11:6379 (31886d20...) -> 2 keys | 5462 slots | 1 slaves.
192.168.56.12:6379 (8cd40e6a...) -> 0 keys | 5461 slots | 1 slaves.
192.168.56.13:6379 (587adfa0...) -> 1 keys | 5461 slots | 1 slaves.
......
# Note: the balancing threshold here is 1%; when the slot counts of the nodes
# differ by less than 1%, rebalance just prints the following and does nothing:
*** No rebalancing needed! All nodes are within the 1.00% threshold.
```
First take the 6381 instance out of cluster mode, restart it as a standalone instance, and write some data to it:
```
[root@redis-master ~]# redis-cli -h 192.168.56.11 -p 6381
192.168.56.11:6381> KEYS *
(empty list or set)
192.168.56.11:6381> set k0 aaa
OK
192.168.56.11:6381> set k5 bbb
OK
192.168.56.11:6381> exit
```
Then import the data from the 6381 instance into the cluster:
```
[root@redis-master ~]# redis-cli --cluster import 192.168.56.11:6379 --cluster-from 192.168.56.11:6381 --cluster-copy
>>> Importing data from 192.168.56.11:6381 to cluster 192.168.56.11:6379
>>> Performing Cluster Check (using node 192.168.56.11:6379)
M: 31886d2098fb1e627bd71b5af000957a1e252787 192.168.56.11:6379
   slots:[0-5461] (5462 slots) master
   1 additional replica(s)
S: 55a6f654dcb87c6a8017c8619f0ce8763a92abd6 192.168.56.13:6380
   slots: (0 slots) slave
   replicates 31886d2098fb1e627bd71b5af000957a1e252787
M: 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446 192.168.56.12:6379
   slots:[5462-10922] (5461 slots) master
   1 additional replica(s)
S: be0ef4a1b1a60cee781afe5c2b8b5cbd7b68b4e6 192.168.56.11:6380
   slots: (0 slots) slave
   replicates 8cd40e6a31c12e0b9c01f20056b9ecaa4db51446
S: eb812dc6051776e151bf69cd328bd0a66a20de01 192.168.56.12:6380
   slots: (0 slots) slave
   replicates 587adfa041d0c0a14aa1a875bdec219a56b10201
M: 587adfa041d0c0a14aa1a875bdec219a56b10201 192.168.56.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
*** Importing 2 keys from DB 0
Migrating k5 to 192.168.56.13:6379: OK
Migrating k0 to 192.168.56.12:6379: OK
[root@redis-master ~]# redis-cli -c -h 192.168.56.11 -p 6379    # connect to the cluster and check the data
192.168.56.11:6379> KEYS *
1) "k2"
2) "k3"
192.168.56.11:6379> get k0
-> Redirected to slot [8579] located at 192.168.56.12:6379
"aaa"
192.168.56.12:6379> get k5
-> Redirected to slot [12582] located at 192.168.56.13:6379
"bbb"
```
A few things to note here. `--cluster-from` is followed by the IP and port of the external (non-cluster) Redis instance. If only `--cluster-copy` is used, the keys being imported must not already exist in the cluster, otherwise the import reports an error. If the cluster already contains the same keys and they should be replaced, combine `--cluster-copy` with `--cluster-replace`, and the keys in the cluster will be overwritten by the external ones. Finally, in production environments it is common to run one master with two replicas for higher availability, which means creating the cluster with `--cluster-replicas 2`.
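For reference, here is a rough sketch of both variants; the addresses are reused from this lab and are placeholders only, and a real one-master-two-replicas cluster needs at least nine instances:

```bash
# Import again, this time overwriting keys that already exist in the cluster:
redis-cli --cluster import 192.168.56.11:6379 \
  --cluster-from 192.168.56.11:6381 --cluster-copy --cluster-replace

# Create a cluster with two replicas per master (9 instances = 3 masters x 3 copies):
redis-cli --cluster create \
  192.168.56.11:6379 192.168.56.12:6379 192.168.56.13:6379 \
  192.168.56.11:6380 192.168.56.12:6380 192.168.56.13:6380 \
  192.168.56.11:6381 192.168.56.12:6381 192.168.56.13:6381 \
  --cluster-replicas 2
```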