The previous article, 《Redis Cluster 集羣部署實戰》 (Redis Cluster Deployment in Practice), covered deploying a Redis Cluster.
What should you do if the business outgrows the current deployment, or you need a capacity plan for future growth? How do you scale out? You can prepare a quick-expansion script ahead of time for one-click scaling, or you can scale out by hand.
Here we do the expansion by hand.
Background:
Suppose rapid business growth means the Redis cluster above can no longer support the business system and must be scaled out quickly. Here we assume just one Redis host is added, running two instances.
Expansion checklist:
| Hostname | IP address    | Redis ports  | Notes |
| -------- | ------------- | ------------ | ----- |
| node174  | 172.20.20.174 | 16001, 16002 |       |
Note: we skip configuring hosts entries on each machine here; if you run your own DNS, internal name resolution will need to be configured.
The Redis deployment itself is omitted; see the article linked above. Alternatively, you can bake the deployment into a machine image and just change the IP address. Then start the two new instances:
/opt/redis/bin/redis-server /opt/redis/conf/redis-16001.conf
/opt/redis/bin/redis-server /opt/redis/conf/redis-16002.conf
./bin/redis-cli --cluster add-node 172.20.20.174:16001 172.20.20.171:16001 -a zjkj
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 172.20.20.174:16001 to cluster 172.20.20.171:16001
>>> Performing Cluster Check (using node 172.20.20.171:16001)
S: a9ab7a12884d505efcf066fcc3aae74c2b3f101d 172.20.20.171:16001
slots: (0 slots) slave
replicates 6dec89e63a48a9a9f393011a698a0bda21b70f1e
M: 6dec89e63a48a9a9f393011a698a0bda21b70f1e 172.20.20.172:16002
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: 04d9c29ef2569b1fc8abd9594d64fca33e4ad4f2 172.20.20.173:16001
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
M: 761348a0107f5b009cabc22c214e39578d0aa707 172.20.20.172:16001
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: 81d1b25ae1ea85421bd4abb2be094c258026c505 172.20.20.171:16002
slots: (0 slots) slave
replicates 04d9c29ef2569b1fc8abd9594d64fca33e4ad4f2
S: 14e79155f78065e4518e00cd5bd057336b17e3a7 172.20.20.173:16002
slots: (0 slots) slave
replicates 761348a0107f5b009cabc22c214e39578d0aa707
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.20.20.174:16001 to make it join the cluster.
[OK] New node added correctly.
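For context on the "16384 slots" reported above: Redis Cluster maps every key to one of 16384 hash slots via CRC16(key) mod 16384, and a hash tag in braces restricts hashing to the tagged part so related keys land on the same slot. A minimal Python sketch of that mapping (an illustration, not part of the original post):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem), the variant Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Map a key to its cluster slot, honoring non-empty {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:                 # only a non-empty tag counts
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# Keys sharing a hash tag map to the same slot:
print(hash_slot("{user1000}.following") == hash_slot("{user1000}.followers"))  # True
```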
Log in to any one of the nodes 171–173 to check:
[root@node172 redis]# ./bin/redis-cli -h 172.20.20.172 -p 16002
172.20.20.172:16002> auth zjkj
OK
172.20.20.172:16002> cluster nodes
761348a0107f5b009cabc22c214e39578d0aa707 172.20.20.172:16001@26001 master - 0 1564038119398 3 connected 5461-10922
ea77d7d10c27a1d99de00f4f81034d2d82a3c765 172.20.20.174:16001@26001 master - 0 1564038119000 0 connected
6dec89e63a48a9a9f393011a698a0bda21b70f1e 172.20.20.172:16002@26002 myself,master - 0 1564038120000 7 connected 0-5460
a9ab7a12884d505efcf066fcc3aae74c2b3f101d 172.20.20.171:16001@26001 slave 6dec89e63a48a9a9f393011a698a0bda21b70f1e 0 1564038121413 7 connected
04d9c29ef2569b1fc8abd9594d64fca33e4ad4f2 172.20.20.173:16001@26001 master - 0 1564038118396 5 connected 10923-16383
81d1b25ae1ea85421bd4abb2be094c258026c505 172.20.20.171:16002@26002 slave 04d9c29ef2569b1fc8abd9594d64fca33e4ad4f2 0 1564038118000 5 connected
14e79155f78065e4518e00cd5bd057336b17e3a7 172.20.20.173:16002@26002 slave 761348a0107f5b009cabc22c214e39578d0aa707 0 1564038120405 6 connected
Notice that 172.20.20.174 has no slots assigned. So how do we assign slots to host 174?
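Spotting a slot-less master like this can also be scripted. A small Python sketch (a hypothetical helper, not part of the original workflow) that parses CLUSTER NODES output, where the first eight whitespace-separated fields are fixed and any further fields are slot ranges:

```python
def slotless_masters(cluster_nodes_output: str) -> list:
    """Return addresses of master nodes that own no hash slots."""
    result = []
    for line in cluster_nodes_output.strip().splitlines():
        parts = line.split()
        flags = parts[2].split(",")
        # fields 1..8 are fixed (id, addr, flags, master, ping, pong, epoch, link);
        # anything beyond that is a slot range
        if "master" in flags and len(parts) <= 8:
            result.append(parts[1].split("@")[0])  # drop the @cluster-bus-port suffix
    return result

# Two lines taken from the output above:
sample = """\
761348a0107f5b009cabc22c214e39578d0aa707 172.20.20.172:16001@26001 master - 0 1564038119398 3 connected 5461-10922
ea77d7d10c27a1d99de00f4f81034d2d82a3c765 172.20.20.174:16001@26001 master - 0 1564038119000 0 connected
"""
print(slotless_masters(sample))  # ['172.20.20.174:16001']
```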
The nodes flagged M (master) above are the ones holding slots. Log in to any master and reshard some slots to 174:
[root@node174 redis]# ./bin/redis-cli --cluster reshard 172.20.20.171:16001 -a zjkj
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing Cluster Check (using node 172.20.20.172:16002)
M: 6dec89e63a48a9a9f393011a698a0bda21b70f1e 172.20.20.172:16002
slots:[0-5460] (5461 slots) master
1 additional replica(s)
M: 761348a0107f5b009cabc22c214e39578d0aa707 172.20.20.172:16001
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
M: ea77d7d10c27a1d99de00f4f81034d2d82a3c765 172.20.20.174:16001
slots: (0 slots) master
S: a9ab7a12884d505efcf066fcc3aae74c2b3f101d 172.20.20.171:16001
slots: (0 slots) slave
replicates 6dec89e63a48a9a9f393011a698a0bda21b70f1e
M: 04d9c29ef2569b1fc8abd9594d64fca33e4ad4f2 172.20.20.173:16001
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: 81d1b25ae1ea85421bd4abb2be094c258026c505 172.20.20.171:16002
slots: (0 slots) slave
replicates 04d9c29ef2569b1fc8abd9594d64fca33e4ad4f2
S: 14e79155f78065e4518e00cd5bd057336b17e3a7 172.20.20.173:16002
slots: (0 slots) slave
replicates 761348a0107f5b009cabc22c214e39578d0aa707
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? ea77d7d10c27a1d99de00f4f81034d2d82a3c765   # the ID of the newly added master
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1: all
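The 4096 figure is not arbitrary: for an even rebalance, each of the four masters should end up with roughly 16384 / 4 slots, so the new master needs about 4096 pulled from the existing three. A quick check:

```python
TOTAL_SLOTS = 16384
masters_after = 4          # the three existing masters plus the new one on 174

per_master = TOTAL_SLOTS // masters_after
print(per_master)          # 4096
```

As an aside, newer redis-cli versions can also run this step non-interactively with the `--cluster-from`, `--cluster-to`, `--cluster-slots`, and `--cluster-yes` options.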
Next, make the second instance on 174 a replica of the new master. After adding 172.20.20.174:16002 to the cluster as well (same add-node step as above), connect to it and run cluster replicate:
shell>./bin/redis-cli -h 172.20.20.174 -p 16002 -a zjkj
172.20.20.174:16002> cluster replicate ea77d7d10c27a1d99de00f4f81034d2d82a3c765   # the ID of the new master on 174:16001
172.20.20.174:16002> cluster nodes
761348a0107f5b009cabc22c214e39578d0aa707 172.20.20.172:16001@26001 master - 0 1564039099770 3 connected 6827-10922
a9ab7a12884d505efcf066fcc3aae74c2b3f101d 172.20.20.171:16001@26001 slave 6dec89e63a48a9a9f393011a698a0bda21b70f1e 0 1564039096756 7 connected
14e79155f78065e4518e00cd5bd057336b17e3a7 172.20.20.173:16002@26002 slave 761348a0107f5b009cabc22c214e39578d0aa707 0 1564039097762 3 connected
6dec89e63a48a9a9f393011a698a0bda21b70f1e 172.20.20.172:16002@26002 master - 0 1564039098764 7 connected 1365-5460
04d9c29ef2569b1fc8abd9594d64fca33e4ad4f2 172.20.20.173:16001@26001 master - 0 1564039097000 5 connected 12288-16383
81d1b25ae1ea85421bd4abb2be094c258026c505 172.20.20.171:16002@26002 slave 04d9c29ef2569b1fc8abd9594d64fca33e4ad4f2 0 1564039099000 5 connected
7d697261f0a4dd31f422e2fbe593d5cdd575fbb5 172.20.20.174:16002@26002 myself,slave ea77d7d10c27a1d99de00f4f81034d2d82a3c765 0 1564039094000 0 connected
ea77d7d10c27a1d99de00f4f81034d2d82a3c765 172.20.20.174:16001@26001 master - 0 1564039097000 8 connected 0-1364 5461-6826 10923-12287
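Note that the new master's slots come in three ranges (0-1364, 5461-6826, 10923-12287) because roughly a third was taken from each old master. They add up to exactly the 4096 slots requested:

```python
# Slot ranges now owned by 172.20.20.174:16001, per the CLUSTER NODES output above
moved_ranges = [(0, 1364), (5461, 6826), (10923, 12287)]

moved = sum(end - start + 1 for start, end in moved_ranges)
print(moved)   # 4096
```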
At this point the scale-out is complete. Let's look at how data is distributed across the instances.
The key counts are uneven here because I was still bulk-inserting data through the node on 171.
The reverse of scaling out is, of course, scaling in, which usually follows a burst of traffic; with normal data growth there is no need to shrink the cluster. Bear in mind that scaling in carries its own risks.