Requirements: 1. Very high request volume (1,000,000 req/s). 2. Large data volume.
Data partitioning schemes:
- Hash distribution
- Sequential distribution
- Modulo-by-node-count partitioning
- Consistent hashing
- Virtual slot (hash) partitioning
Manual installation:
1. Configure the nodes
port ${port}
daemonize yes
dir ""
dbfilename "dump-${port}.rdb"
logfile "${port}.log"
# mark this node as a cluster node
cluster-enabled yes
# per-node cluster state file
cluster-config-file nodes-${port}.conf
2. Start Redis with the configuration
3. cluster meet ip port
redis-cli -h 127.0.0.1 -p 7000 cluster meet 127.0.0.1 7001
4. Cluster node configuration
# mark this node as a cluster node
cluster-enabled yes
# node timeout: 15 seconds
cluster-node-timeout 15000
# per-node cluster state file
cluster-config-file "nodes.conf"
# yes = only serve requests when every slot in the cluster is covered; relax with no
cluster-require-full-coverage no
5. Assign slots
redis-cli -h 127.0.0.1 -p 7000 cluster addslots {0..5461}
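The `{0..5461}` above relies on shell brace expansion (two dots), which hands every slot number to the command as a separate argument. A quick sanity check of the expansion:

```shell
# Brace expansion {start..end} expands to every integer in the range.
echo {0..4}              # prints: 0 1 2 3 4
echo {0..5461} | wc -w   # 5462 slot numbers
```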
6. Set up the master-replica relationships
redis-cli -h 127.0.0.1 -p 7003 cluster replicate ${node-id-7000}
Single-machine deployment:
1. Create the configuration
mkdir cluster
cd cluster
vim redis-6379.conf
port 6379
daemonize yes
dir "./"
logfile "6379.log"
dbfilename "dump-6379.rdb"
# cluster settings
cluster-enabled yes
cluster-config-file node-6379.conf
cluster-require-full-coverage no
sed "s/6379/6380/g" redis-6379.conf > redis-6380.conf
sed "s/6379/6381/g" redis-6379.conf > redis-6381.conf
sed "s/6379/6382/g" redis-6379.conf > redis-6382.conf
sed "s/6379/6383/g" redis-6379.conf > redis-6383.conf
sed "s/6379/6384/g" redis-6379.conf > redis-6384.conf
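The five sed invocations can be collapsed into a loop; a sketch, assuming redis-6379.conf exists in the current directory:

```shell
# Generate one config per port by rewriting every occurrence of 6379.
for port in 6380 6381 6382 6383 6384; do
  sed "s/6379/${port}/g" redis-6379.conf > "redis-${port}.conf"
done
```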
Start:
redis-server redis-6379.conf
redis-server redis-6380.conf
redis-server redis-6381.conf
redis-server redis-6382.conf
redis-server redis-6383.conf
redis-server redis-6384.conf
2. Meet handshake
redis-cli -p 6379 cluster meet 127.0.0.1 6380
redis-cli -p 6379 cluster meet 127.0.0.1 6381
redis-cli -p 6379 cluster meet 127.0.0.1 6382
redis-cli -p 6379 cluster meet 127.0.0.1 6383
redis-cli -p 6379 cluster meet 127.0.0.1 6384
redis-cli -p 6380 cluster meet 127.0.0.1 6381
redis-cli -p 6380 cluster meet 127.0.0.1 6382
redis-cli -p 6380 cluster meet 127.0.0.1 6383
redis-cli -p 6380 cluster meet 127.0.0.1 6384
redis-cli -p 6381 cluster meet 127.0.0.1 6382
redis-cli -p 6381 cluster meet 127.0.0.1 6383
redis-cli -p 6381 cluster meet 127.0.0.1 6384
redis-cli -p 6382 cluster meet 127.0.0.1 6383
redis-cli -p 6382 cluster meet 127.0.0.1 6384
redis-cli -p 6383 cluster meet 127.0.0.1 6384
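Rather than typing every pair by hand, a nested loop can generate the pairwise meet commands (strictly, one meet from a single node to each of the others is enough, since gossip propagates cluster membership). This sketch only prints the commands; pipe the output to sh to execute them:

```shell
# Print a "cluster meet" command for every unordered pair of ports.
ports=(6379 6380 6381 6382 6383 6384)
for ((i = 0; i < ${#ports[@]}; i++)); do
  for ((j = i + 1; j < ${#ports[@]}; j++)); do
    echo "redis-cli -p ${ports[i]} cluster meet 127.0.0.1 ${ports[j]}"
  done
done
```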
Current state:
[root@localhost cluster]# redis-cli -p 6379
127.0.0.1:6379> cluster nodes
171ad8b979d147dfe069dc7accf183adec22e1e3 127.0.0.1:6379@16379 myself,master - 0 1553940474000 4 connected
46b59f04b4ff7e3c691e7d8561f79e75d774eae3 127.0.0.1:6381@16381 master - 0 1553940472000 1 connected
60f54b28c08b3f96e31fe532000ba0b53fffdcec 127.0.0.1:6384@16384 master - 0 1553940474552 2 connected
12718197ace83ae68d876e7dee03d8e5774aed43 127.0.0.1:6382@16382 master - 0 1553940474000 5 connected
ec6bdcea4b3244d6f2315c8a7b82b54775f1c38e 127.0.0.1:6380@16380 master - 0 1553940475557 0 connected
a8face71e2648047748980c8f2c612c1b3be7cfd 127.0.0.1:6383@16383 master - 0 1553940473542 3 connected
127.0.0.1:6379> cluster info
cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:0
cluster_current_epoch:5
cluster_my_epoch:4
cluster_stats_messages_ping_sent:36
cluster_stats_messages_pong_sent:38
cluster_stats_messages_meet_sent:5
cluster_stats_messages_sent:79
cluster_stats_messages_ping_received:38
cluster_stats_messages_pong_received:41
cluster_stats_messages_received:79
3. Assign slots
#!/bin/bash
start=$1
end=$2
port=$3
for item in $(seq $start $end)
do
  redis-cli -p $port cluster addslots $item
done
./addSlot.sh 0 5461 6379
./addSlot.sh 5462 10922 6380
./addSlot.sh 10923 16383 6381
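CLUSTER ADDSLOTS accepts many slot numbers in a single call, so the per-slot loop in addSlot.sh can be replaced with three invocations, which is much faster than 16384 separate redis-cli round trips. The hypothetical helper below only prints each command; pipe the output to sh to run it against live nodes:

```shell
addslots_cmd() {
  # Print one redis-cli call covering a whole slot range.
  local port=$1 start=$2 end=$3
  echo "redis-cli -p ${port} cluster addslots $(seq "$start" "$end" | tr '\n' ' ')"
}
addslots_cmd 6379 0 5461
addslots_cmd 6380 5462 10922
addslots_cmd 6381 10923 16383
```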
4. Configure the master-replica pairs
redis-cli -p 6382 cluster replicate 171ad8b979d147dfe069dc7accf183adec22e1e3
redis-cli -p 6383 cluster replicate ec6bdcea4b3244d6f2315c8a7b82b54775f1c38e
redis-cli -p 6384 cluster replicate 46b59f04b4ff7e3c691e7d8561f79e75d774eae3
Current state:
[root@localhost cluster]# redis-cli -p 6379 cluster nodes
171ad8b979d147dfe069dc7accf183adec22e1e3 127.0.0.1:6379@16379 myself,master - 0 1553943779000 4 connected 0-5461
46b59f04b4ff7e3c691e7d8561f79e75d774eae3 127.0.0.1:6381@16381 master - 0 1553943779745 1 connected 10923-16383
60f54b28c08b3f96e31fe532000ba0b53fffdcec 127.0.0.1:6384@16384 slave 46b59f04b4ff7e3c691e7d8561f79e75d774eae3 0 1553943777725 2 connected
12718197ace83ae68d876e7dee03d8e5774aed43 127.0.0.1:6382@16382 slave 171ad8b979d147dfe069dc7accf183adec22e1e3 0 1553943779000 5 connected
ec6bdcea4b3244d6f2315c8a7b82b54775f1c38e 127.0.0.1:6380@16380 master - 0 1553943780000 0 connected 5462-10922
a8face71e2648047748980c8f2c612c1b3be7cfd 127.0.0.1:6383@16383 slave ec6bdcea4b3244d6f2315c8a7b82b54775f1c38e 0 1553943780753 3 connected
5. Test the cluster
redis-cli -c -p 6379 (the -c flag enables cluster mode)
[root@localhost cluster]# redis-cli -c -p 6379
127.0.0.1:6379> set name aa
-> Redirected to slot [5798] located at 127.0.0.1:6380
OK
127.0.0.1:6380> set key redis
-> Redirected to slot [12539] located at 127.0.0.1:6381
OK
127.0.0.1:6381> get name
-> Redirected to slot [5798] located at 127.0.0.1:6380
"aa"
127.0.0.1:6380> get key
-> Redirected to slot [12539] located at 127.0.0.1:6381
"redis"
127.0.0.1:6381>
1. Ruby environment:
Uploaded here from the local machine: scp ruby-2.6.2.tar.gz root@192.168.0.109:~
On the receiving VM:
wget https://cache.ruby-lang.org/pub/ruby/2.6/ruby-2.6.2.tar.gz
tar -zxvf ruby-2.6.2.tar.gz -C /usr/local/
cd /usr/local/ruby-2.6.2
./configure
make && make install
cd gems/
# download the redis gem
wget https://rubygems.org/downloads/redis-4.1.0.gem
# install the redis gem
gem install -l redis-4.1.0.gem
# check the installation
gem list redis
# run redis-trib.rb
./redis-trib.rb
2. Prepare the configuration
port 6379
daemonize yes
dir "./"
logfile "6379.log"
dbfilename "dump-6379.rdb"
protected-mode no
# cluster settings
cluster-enabled yes
cluster-config-file node-6379.conf
cluster-require-full-coverage no
Copy it into six files:
sed "s/6379/6380/g" redis-6379.conf > redis-6380.conf
sed "s/6379/6381/g" redis-6379.conf > redis-6381.conf
sed "s/6379/6382/g" redis-6379.conf > redis-6382.conf
sed "s/6379/6383/g" redis-6379.conf > redis-6383.conf
sed "s/6379/6384/g" redis-6379.conf > redis-6384.conf
3. Start the services:
redis-server redis-6379.conf
redis-server redis-6380.conf
redis-server redis-6381.conf
redis-server redis-6382.conf
redis-server redis-6383.conf
redis-server redis-6384.conf
Check the status:
127.0.0.1:6379> cluster nodes
91d2f29b7bf974ebbaeeb4ac90a2232f9a1d126d :6379@16379 myself,master - 0 0 0 connected
127.0.0.1:6379> cluster info
cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:1
cluster_size:0
cluster_current_epoch:0
cluster_my_epoch:0
cluster_stats_messages_sent:0
cluster_stats_messages_received:0
127.0.0.1:6379> set name aaa
(error) CLUSTERDOWN Hash slot not served
4. Create the cluster
For multi-machine tests, disable protected mode (protected-mode no) and open both the client port xxxx and the bus port 1xxxx; e.g. for port 6379, open 6379 and 16379.
# create the cluster: the first three nodes become masters,
# --cluster-replicas 1 = one replica per master
redis-cli --cluster create --cluster-replicas 1 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384
# response
>>> Performing hash slots allocation on 6 nodes...
# master nodes determined
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
# replicas assigned
Adding replica 127.0.0.1:6383 to 127.0.0.1:6379
Adding replica 127.0.0.1:6384 to 127.0.0.1:6380
Adding replica 127.0.0.1:6382 to 127.0.0.1:6381
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
# slot allocation
M: 91d2f29b7bf974ebbaeeb4ac90a2232f9a1d126d 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
M: 9111432777a2356508706c07e44bc0340ee6e594 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
M: 29b0cb2bd387a428bd34109efa5514d221da174b 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
S: 442d077607428ec3116b1ae9a0c5dbea89567c7c 127.0.0.1:6382
   replicates 29b0cb2bd387a428bd34109efa5514d221da174b
S: 3912cc4baaf6964b07ca05020f6f28f4d7370f38 127.0.0.1:6383
   replicates 91d2f29b7bf974ebbaeeb4ac90a2232f9a1d126d
S: 61beb8c84aed2079e8e7232b374a20bf1e4dd90c 127.0.0.1:6384
   replicates 9111432777a2356508706c07e44bc0340ee6e594
Can I set the above configuration? (type 'yes' to accept): yes
# confirmed
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
........
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: 91d2f29b7bf974ebbaeeb4ac90a2232f9a1d126d 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 9111432777a2356508706c07e44bc0340ee6e594 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 61beb8c84aed2079e8e7232b374a20bf1e4dd90c 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 9111432777a2356508706c07e44bc0340ee6e594
S: 442d077607428ec3116b1ae9a0c5dbea89567c7c 127.0.0.1:6382
   slots: (0 slots) slave
   replicates 29b0cb2bd387a428bd34109efa5514d221da174b
S: 3912cc4baaf6964b07ca05020f6f28f4d7370f38 127.0.0.1:6383
   slots: (0 slots) slave
   replicates 91d2f29b7bf974ebbaeeb4ac90a2232f9a1d126d
M: 29b0cb2bd387a428bd34109efa5514d221da174b 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
5. Verify the cluster:
[root@localhost src]# redis-cli -c -p 6379
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:219
cluster_stats_messages_pong_sent:230
cluster_stats_messages_sent:449
cluster_stats_messages_ping_received:225
cluster_stats_messages_pong_received:219
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:449
127.0.0.1:6379> cluster nodes
9111432777a2356508706c07e44bc0340ee6e594 127.0.0.1:6380@16380 master - 0 1554094994000 2 connected 5461-10922
61beb8c84aed2079e8e7232b374a20bf1e4dd90c 127.0.0.1:6384@16384 slave 9111432777a2356508706c07e44bc0340ee6e594 0 1554094993530 6 connected
91d2f29b7bf974ebbaeeb4ac90a2232f9a1d126d 127.0.0.1:6379@16379 myself,master - 0 1554094993000 1 connected 0-5460
442d077607428ec3116b1ae9a0c5dbea89567c7c 127.0.0.1:6382@16382 slave 29b0cb2bd387a428bd34109efa5514d221da174b 0 1554094994541 4 connected
3912cc4baaf6964b07ca05020f6f28f4d7370f38 127.0.0.1:6383@16383 slave 91d2f29b7bf974ebbaeeb4ac90a2232f9a1d126d 0 1554094995554 5 connected
29b0cb2bd387a428bd34109efa5514d221da174b 127.0.0.1:6381@16381 master - 0 1554094993000 3 connected 10923-16383
127.0.0.1:6379> set name aaaaaa
-> Redirected to slot [5798] located at 127.0.0.1:6380
OK
127.0.0.1:6380> get name
"aaaaaa"
127.0.0.1:6380>
6. Common issues
A Redis cluster needs not only the client port open but also the cluster bus port, which is the client port + 10000. For example, if the Redis port is 7000, the bus port is 17000. Every server in the cluster must therefore open both the client port and the bus port.
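A sketch for opening both ports for each node, assuming firewalld (as in the VM setup in this document); it only prints the commands, so pipe the output to sh on each server to apply it:

```shell
# For each client port, also open the cluster bus port (client + 10000).
for port in 6379 6380; do
  bus=$((port + 10000))
  echo "firewall-cmd --permanent --add-port=${port}/tcp"
  echo "firewall-cmd --permanent --add-port=${bus}/tcp"
done
echo "firewall-cmd --reload"
```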
In cluster mode, connect with -h followed by the IP address, e.g. ./redis-cli -h 192.168.118.110 -c -p 7000
Scaling principle: slots and their data move between nodes.
redis-cli --cluster add-node 192.168.5.100:8007 192.168.5.100:8000
Prepare new nodes:
Create the configs and start the new nodes:
[root@localhost cluster]# sed "s/6379/6385/g" redis-6379.conf > redis-6385.conf
[root@localhost cluster]# sed "s/6379/6386/g" redis-6379.conf > redis-6386.conf
[root@localhost cluster]# redis-server redis-6385.conf
[root@localhost cluster]# redis-server redis-6386.conf
[root@localhost cluster]# ps -ef | grep redis
root  93290  1  0 13:34 ?  00:00:03 redis-server *:6379 [cluster]
root  93295  1  0 13:34 ?  00:00:03 redis-server *:6380 [cluster]
root  93300  1  0 13:34 ?  00:00:03 redis-server *:6381 [cluster]
root  93305  1  0 13:35 ?  00:00:03 redis-server *:6382 [cluster]
root  93310  1  0 13:35 ?  00:00:04 redis-server *:6383 [cluster]
root  93315  1  0 13:35 ?  00:00:03 redis-server *:6384 [cluster]
root  93415  1  0 14:27 ?  00:00:00 redis-server *:6385 [cluster]
root  93420  1  0 14:27 ?  00:00:00 redis-server *:6386 [cluster]
[root@localhost cluster]# redis-cli -p 6385
127.0.0.1:6385> cluster nodes
9c491c885d8ec3e885c79b3cabb9603e4c386019 :6385@16385 myself,master - 0 0 0 connected
Join the cluster:
127.0.0.1:6385 # the new node
127.0.0.1:6379 # an existing node in the cluster
[root@localhost cluster]# redis-cli --cluster add-node 127.0.0.1:6385 127.0.0.1:6379
>>> Adding node 127.0.0.1:6385 to cluster 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: 91d2f29b7bf974ebbaeeb4ac90a2232f9a1d126d 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 9111432777a2356508706c07e44bc0340ee6e594 127.0.0.1:6380
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 61beb8c84aed2079e8e7232b374a20bf1e4dd90c 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 29b0cb2bd387a428bd34109efa5514d221da174b
S: 442d077607428ec3116b1ae9a0c5dbea89567c7c 127.0.0.1:6382
   slots: (0 slots) slave
   replicates 91d2f29b7bf974ebbaeeb4ac90a2232f9a1d126d
S: 3912cc4baaf6964b07ca05020f6f28f4d7370f38 127.0.0.1:6383
   slots: (0 slots) slave
   replicates 9111432777a2356508706c07e44bc0340ee6e594
M: 29b0cb2bd387a428bd34109efa5514d221da174b 127.0.0.1:6381
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:6385 to make it join the cluster.
[OK] New node added correctly.
Migrate slots and data:
redis-cli --cluster reshard 127.0.0.1:6385
# choose how many slots to move to the node (4096)
How many slots do you want to move (from 1 to 16384)? 4096
# choose which node receives them
What is the receiving node ID? 9c491c885d8ec3e885c79b3cabb9603e4c386019
# choose the source slots (all: take from every master; done: pick sources manually)
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all
# confirm
Do you want to proceed with the proposed reshard plan (yes/no)? yes
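The interactive prompts can also be answered up front with the non-interactive flags of redis-cli --cluster reshard (available since Redis 5). A sketch, where the target id is the 6385 node's id from the session above; the echo only prints the command, so drop it to actually run the reshard:

```shell
# Non-interactive reshard: pull 4096 slots from all masters into the target.
target=9c491c885d8ec3e885c79b3cabb9603e4c386019
echo redis-cli --cluster reshard 127.0.0.1:6379 \
  --cluster-from all --cluster-to "$target" \
  --cluster-slots 4096 --cluster-yes
```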
Add 6386 as a replica of 6385
# first add 6386 to the cluster
redis-cli --cluster add-node 127.0.0.1:6386 127.0.0.1:6379
# inspect the cluster from 6386
redis-cli -c -p 6386 cluster nodes
# use replicate to set this node's master
redis-cli -c -p 6386 cluster replicate 9c491c885d8ec3e885c79b3cabb9603e4c386019
Four masters and four replicas are now in place.
Remove the replica node:
redis-cli --cluster del-node 127.0.0.1:6386 d327a7e60d078eeaa98122fb1c196fba7bc468b8
>>> Removing node d327a7e60d078eeaa98122fb1c196fba7bc468b8 from cluster 127.0.0.1:6386
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
Return the slots:
The slots can be handed back in three rounds, one equal share to each remaining master.
[root@localhost cluster]# redis-cli --cluster reshard 127.0.0.1:6385
>>> Performing Cluster Check (using node 127.0.0.1:6385)
M: 9c491c885d8ec3e885c79b3cabb9603e4c386019 127.0.0.1:6385
   slots:[11046-12287] (1242 slots) master
S: 3912cc4baaf6964b07ca05020f6f28f4d7370f38 127.0.0.1:6383
   slots: (0 slots) slave
   replicates 9111432777a2356508706c07e44bc0340ee6e594
M: 91d2f29b7bf974ebbaeeb4ac90a2232f9a1d126d 127.0.0.1:6379
   slots:[0-554],[1964-2650],[4250-5460],[6802-7887],[10923-11045],[12288-13348] (4723 slots) master
   1 additional replica(s)
S: 61beb8c84aed2079e8e7232b374a20bf1e4dd90c 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 29b0cb2bd387a428bd34109efa5514d221da174b
M: 29b0cb2bd387a428bd34109efa5514d221da174b 127.0.0.1:6381
   slots:[555-1963],[3237-4249],[6380-6801],[14571-16383] (4657 slots) master
   1 additional replica(s)
S: 442d077607428ec3116b1ae9a0c5dbea89567c7c 127.0.0.1:6382
   slots: (0 slots) slave
   replicates 91d2f29b7bf974ebbaeeb4ac90a2232f9a1d126d
M: 9111432777a2356508706c07e44bc0340ee6e594 127.0.0.1:6380
   slots:[2651-3236],[5461-6379],[7888-10922],[13349-14570] (5762 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 1242
What is the receiving node ID? 91d2f29b7bf974ebbaeeb4ac90a2232f9a1d126d
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: 9c491c885d8ec3e885c79b3cabb9603e4c386019
Source node #2: done
Leave the cluster:
[root@localhost cluster]# redis-cli --cluster del-node 127.0.0.1:6385 9c491c885d8ec3e885c79b3cabb9603e4c386019
>>> Removing node 9c491c885d8ec3e885c79b3cabb9603e4c386019 from cluster 127.0.0.1:6385
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
Cluster state and node state:
# cluster state
[root@localhost cluster]# redis-cli -c cluster nodes
9111432777a2356508706c07e44bc0340ee6e594 127.0.0.1:6380@16380 master - 0 1554129407823 9 connected 2651-3236 5461-6379 7888-10922 13349-14570
61beb8c84aed2079e8e7232b374a20bf1e4dd90c 127.0.0.1:6384@16384 slave 29b0cb2bd387a428bd34109efa5514d221da174b 0 1554129407000 10 connected
91d2f29b7bf974ebbaeeb4ac90a2232f9a1d126d 127.0.0.1:6379@16379 myself,master - 0 1554129407000 11 connected 0-554 1964-2650 4250-5460 6802-7887 10923-13348
442d077607428ec3116b1ae9a0c5dbea89567c7c 127.0.0.1:6382@16382 slave 91d2f29b7bf974ebbaeeb4ac90a2232f9a1d126d 0 1554129406000 11 connected
3912cc4baaf6964b07ca05020f6f28f4d7370f38 127.0.0.1:6383@16383 slave 9111432777a2356508706c07e44bc0340ee6e594 0 1554129406812 9 connected
29b0cb2bd387a428bd34109efa5514d221da174b 127.0.0.1:6381@16381 master - 0 1554129405000 10 connected 555-1963 3237-4249 6380-6801 14571-16383
# node state: it left the cluster and was shut down
[root@localhost cluster]# redis-cli -c -p 6385
Could not connect to Redis at 127.0.0.1:6385: Connection refused
not connected>
# VMs:
# 192.168.0.109
# 192.168.0.110
# 192.168.0.111
# Goal:
# 192.168.0.109:6379 master <- 192.168.0.110:6380 replica
# 192.168.0.110:6379 master <- 192.168.0.111:6380 replica
# 192.168.0.111:6379 master <- 192.168.0.109:6380 replica
1. Install Redis on all three machines [see the Redis installation notes]
2. Create the config files (identical on all three machines)
[root@localhost redis-5.0.4]# mkdir vm-cluster
[root@localhost redis-5.0.4]# cd vm-cluster
[root@localhost vm-cluster]# vim redis-6379.conf
port 6379
daemonize yes
dir "./"
logfile "6379.log"
dbfilename "dump-6379.rdb"
protected-mode no
# cluster settings
cluster-enabled yes
cluster-config-file node-6379.conf
cluster-require-full-coverage no
[root@localhost vm-cluster]# sed "s/6379/6380/g" redis-6379.conf > redis-6380.conf
# open the ports
[root@localhost vm-cluster]# firewall-cmd --permanent --add-port=6379/tcp
[root@localhost vm-cluster]# firewall-cmd --permanent --add-port=16379/tcp
[root@localhost vm-cluster]# firewall-cmd --permanent --add-port=6380/tcp
[root@localhost vm-cluster]# firewall-cmd --permanent --add-port=16380/tcp
[root@localhost vm-cluster]# firewall-cmd --reload
# start redis
[root@localhost vm-cluster]# redis-server redis-6379.conf
[root@localhost vm-cluster]# redis-server redis-6380.conf
3. Create the cluster:
[root@localhost vm-cluster]# redis-cli --cluster create --cluster-replicas 1 192.168.0.109:6379 192.168.0.110:6379 192.168.0.111:6379 192.168.0.110:6380 192.168.0.111:6380 192.168.0.109:6380
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.0.110:6380 to 192.168.0.109:6379
Adding replica 192.168.0.111:6380 to 192.168.0.110:6379
Adding replica 192.168.0.109:6380 to 192.168.0.111:6379
M: cd831b6cb2e1de4933c7e57a2e6dd9f7c3179879 192.168.0.109:6379
   slots:[0-5460] (5461 slots) master
M: 418bcea8d67d4194c6ea0019c16536683a1385e7 192.168.0.110:6379
   slots:[5461-10922] (5462 slots) master
M: c99f237c3f795577fcfbb4d9f44aa974c2f7cc10 192.168.0.111:6379
   slots:[10923-16383] (5461 slots) master
S: 3c28541b32771a5316f9c52cbfc0ad66729d2eb7 192.168.0.110:6380
   replicates cd831b6cb2e1de4933c7e57a2e6dd9f7c3179879
S: a1f239e706445494f685ebaf4dd0032bb66d9060 192.168.0.111:6380
   replicates 418bcea8d67d4194c6ea0019c16536683a1385e7
S: b38161698f89cf41f2a7a49b7a8c33506f0c95f0 192.168.0.109:6380
   replicates c99f237c3f795577fcfbb4d9f44aa974c2f7cc10
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
......
>>> Performing Cluster Check (using node 192.168.0.109:6379)
M: cd831b6cb2e1de4933c7e57a2e6dd9f7c3179879 192.168.0.109:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: a1f239e706445494f685ebaf4dd0032bb66d9060 192.168.0.111:6380
   slots: (0 slots) slave
   replicates 418bcea8d67d4194c6ea0019c16536683a1385e7
M: 418bcea8d67d4194c6ea0019c16536683a1385e7 192.168.0.110:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 3c28541b32771a5316f9c52cbfc0ad66729d2eb7 192.168.0.110:6380
   slots: (0 slots) slave
   replicates cd831b6cb2e1de4933c7e57a2e6dd9f7c3179879
M: c99f237c3f795577fcfbb4d9f44aa974c2f7cc10 192.168.0.111:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: b38161698f89cf41f2a7a49b7a8c33506f0c95f0 192.168.0.109:6380
   slots: (0 slots) slave
   replicates c99f237c3f795577fcfbb4d9f44aa974c2f7cc10
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
4. Test the cluster
[root@localhost vm-cluster]# redis-cli -c -p 6379
127.0.0.1:6379> set name aa
-> Redirected to slot [5798] located at 192.168.0.110:6379
OK
192.168.0.110:6379> set key value
-> Redirected to slot [12539] located at 192.168.0.111:6379
OK
192.168.0.111:6379> get name
-> Redirected to slot [5798] located at 192.168.0.110:6379
"aa"
192.168.0.110:6379> get key
-> Redirected to slot [12539] located at 192.168.0.111:6379
"value"
Failover:
1. Fault detection via ping/pong messages.
2. Subjective offline (pfail) and objective offline (fail).
Replica promotion:
1. Eligibility check: a replica whose link to its master has been down longer than cluster-node-timeout * cluster-slave-validity-factor is disqualified.
2. Election and voting.
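With the default values, the eligibility cutoff works out as follows (a sketch; 15000 ms and a factor of 10 are the Redis defaults):

```shell
# Eligibility window = cluster-node-timeout * cluster-slave-validity-factor.
cluster_node_timeout=15000          # ms, default
cluster_slave_validity_factor=10    # default
cutoff=$((cluster_node_timeout * cluster_slave_validity_factor))
echo "replicas disconnected longer than ${cutoff} ms are disqualified"   # 150000 ms = 150 s
```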
3. Replace the master: the winning replica runs slaveof no one to promote itself, executes clusterDelSlot to revoke the slots owned by the failed master, and clusterAddSlot to assign those slots to itself. Note: cluster-require-full-coverage defaults to yes.
Operations tooling:
redis-trib.rb info ip:port — show the distribution of nodes, slots, and keys
redis-trib.rb rebalance — rebalance slots (use with caution)
cluster countkeysinslot {slot} — number of keys stored in a given slot