Redis 5.0 cluster-mode deployment and node expansion

For the Redis 5.0 installation and deployment itself, refer to the one-click installation script.

1. The existing production cluster consists of 3 masters and 3 slaves. Two new servers are now added, each hosting one master and one slave, bringing the cluster to 5 masters and 5 slaves.

Add the newly installed Redis instances to the cluster. The new instances are planned as follows (the walkthrough below shows the procedure for the 172.16.153.33 instances; the 172.16.153.32 instances were joined in the same way beforehand, which is why they already hold slots in the first CLUSTER NODES output):

172.16.153.32:7002 and 172.16.153.32:7003     (7002 is the master, 7003 the slave)
172.16.153.33:7004 and 172.16.153.33:7005     (7004 is the master, 7005 the slave)
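
If the one-click installation script does not already enable cluster mode, each new instance needs a cluster-aware configuration before it can join. Below is a minimal sketch for 172.16.153.32:7002; the config file path and the password are assumptions and should be adjusted to your environment (repeat with the matching port for 7003, 7004 and 7005):

# minimal cluster-mode settings for one new instance (example: port 7002)
port 7002
cluster-enabled yes
cluster-config-file nodes-7002.conf
cluster-node-timeout 15000
appendonly yes
daemonize yes
requirepass redis_pass
masterauth redis_pass

# start the instance (path is an assumption)
redis-server /etc/redis/redis-7002.conf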

2. Log in to the Redis cluster and operate as follows:

./redis-cli  -a redis_pass

Execute the following commands:

127.0.0.1:6379> cluster meet 172.16.153.33 7004
OK
127.0.0.1:6379> cluster meet 172.16.153.33 7005
OK

View the cluster information at this point:

127.0.0.1:6379> cluster nodes
a9deeb976ba0efa14190cf382bfe61aea65697ad 172.16.153.5:6380@16380 master - 0 1606980344037 2 connected 6462-10922
ad845f2b5c47d981577abeece629fc14550c7e8b 172.16.153.32:7002@17002 master - 0 1606980344037 7 connected 0-998 5461-6461 10923-11921
1e259f10a6e29d38a1261c637ad1a24d40a3e754 172.16.153.5:6384@16384 slave a9deeb976ba0efa14190cf382bfe61aea65697ad 0 1606980344137 6 connected
ef7d6f7d2b547b6a89a7478b4c39e1650d8074e1 172.16.153.5:6382@16382 slave 54fed77edf250f947f7d27959c2a317a082d0d3b 0 1606980344537 4 connected
922c14b9935f3fd6f457701c41f991c883ec9ca4 172.16.153.5:6379@16379 myself,master - 0 1606980344000 1 connected 999-5460
7eabf4a5569f0ee35f159139cf62bdf99c5fa482 172.16.153.5:6383@16383 slave 922c14b9935f3fd6f457701c41f991c883ec9ca4 0 1606980344037 5 connected
3c57d818abc38d9a0a8ca8d17fe2cac56fad8380 172.16.153.33:7005@17005 master - 0 1606980344137 8 connected
54fed77edf250f947f7d27959c2a317a082d0d3b 172.16.153.5:6381@16381 master - 0 1606980344137 3 connected 11922-16383
ba3dfe8a7462d55272d8ae5defa0b2db1e63fb07 172.16.153.32:7003@17003 slave ad845f2b5c47d981577abeece629fc14550c7e8b 0 1606980344137 7 connected
e1ece83ddf64f3d23567ccecc8354e717c19a961 172.16.153.33:7004@17004 master - 0 1606980344137 0 connected

3. Log in to 172.16.153.33:7005 and make 7005 a slave of 7004, as follows:

redis-cli  -c -h 172.16.153.33 -p 7005 -a smcaiot_redis_pass
172.16.153.33:7005> cluster nodes
ef7d6f7d2b547b6a89a7478b4c39e1650d8074e1 172.16.153.5:6382@16382 slave 54fed77edf250f947f7d27959c2a317a082d0d3b 0 1606980637442 3 connected
1e259f10a6e29d38a1261c637ad1a24d40a3e754 172.16.153.5:6384@16384 slave a9deeb976ba0efa14190cf382bfe61aea65697ad 0 1606980637000 2 connected
922c14b9935f3fd6f457701c41f991c883ec9ca4 172.16.153.5:6379@16379 master - 0 1606980637000 1 connected 999-5460
54fed77edf250f947f7d27959c2a317a082d0d3b 172.16.153.5:6381@16381 master - 0 1606980637000 3 connected 11922-16383
e1ece83ddf64f3d23567ccecc8354e717c19a961 172.16.153.33:7004@17004 master - 0 1606980637000 0 connected
a9deeb976ba0efa14190cf382bfe61aea65697ad 172.16.153.5:6380@16380 master - 0 1606980636440 2 connected 6462-10922
ba3dfe8a7462d55272d8ae5defa0b2db1e63fb07 172.16.153.32:7003@17003 slave ad845f2b5c47d981577abeece629fc14550c7e8b 0 1606980637000 7 connected
3c57d818abc38d9a0a8ca8d17fe2cac56fad8380 172.16.153.33:7005@17005 myself,master - 0 1606980637000 8 connected
7eabf4a5569f0ee35f159139cf62bdf99c5fa482 172.16.153.5:6383@16383 slave 922c14b9935f3fd6f457701c41f991c883ec9ca4 0 1606980637000 1 connected
ad845f2b5c47d981577abeece629fc14550c7e8b 172.16.153.32:7002@17002 master - 0 1606980636000 7 connected 0-998 5461-6461 10923-11921
172.16.153.33:7005> cluster replicate  e1ece83ddf64f3d23567ccecc8354e717c19a961
OK

172.16.153.33:7005> cluster nodes
ef7d6f7d2b547b6a89a7478b4c39e1650d8074e1 172.16.153.5:6382@16382 slave 54fed77edf250f947f7d27959c2a317a082d0d3b 0 1606980686498 3 connected
1e259f10a6e29d38a1261c637ad1a24d40a3e754 172.16.153.5:6384@16384 slave a9deeb976ba0efa14190cf382bfe61aea65697ad 0 1606980685000 2 connected
922c14b9935f3fd6f457701c41f991c883ec9ca4 172.16.153.5:6379@16379 master - 0 1606980686000 1 connected 999-5460
54fed77edf250f947f7d27959c2a317a082d0d3b 172.16.153.5:6381@16381 master - 0 1606980685000 3 connected 11922-16383
e1ece83ddf64f3d23567ccecc8354e717c19a961 172.16.153.33:7004@17004 master - 0 1606980686000 0 connected
a9deeb976ba0efa14190cf382bfe61aea65697ad 172.16.153.5:6380@16380 master - 0 1606980685497 2 connected 6462-10922
ba3dfe8a7462d55272d8ae5defa0b2db1e63fb07 172.16.153.32:7003@17003 slave ad845f2b5c47d981577abeece629fc14550c7e8b 0 1606980685000 7 connected
3c57d818abc38d9a0a8ca8d17fe2cac56fad8380 172.16.153.33:7005@17005 myself,slave e1ece83ddf64f3d23567ccecc8354e717c19a961 0 1606980686000 8 connected
7eabf4a5569f0ee35f159139cf62bdf99c5fa482 172.16.153.5:6383@16383 slave 922c14b9935f3fd6f457701c41f991c883ec9ca4 0 1606980685000 1 connected
ad845f2b5c47d981577abeece629fc14550c7e8b 172.16.153.32:7002@17002 master - 0 1606980685000 7 connected 0-998 5461-6461 10923-11921

At this point the cluster has 5 masters and 5 slaves. By default a newly added master is assigned no hash slots and therefore cannot store any data, so the next step is to allocate slots to it.
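
As an aside, the CLUSTER MEET and CLUSTER REPLICATE steps above can also be performed with the redis-cli --cluster add-node helper (not how it was done in this walkthrough). A sketch, using an existing cluster node as the entry point, the 7004 node ID from the output above, and the password as it appears in the later commands:

redis-cli --cluster add-node 172.16.153.33:7004 172.16.153.5:6379 -a smcaiot_redis_pass
redis-cli --cluster add-node 172.16.153.33:7005 172.16.153.5:6379 -a smcaiot_redis_pass \
  --cluster-slave --cluster-master-id e1ece83ddf64f3d23567ccecc8354e717c19a961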

4. Reshard (reallocate hash slots) as follows:

redis-cli --cluster reshard  172.16.153.33:7004 -a smcaiot_redis_pass
>>> Performing Cluster Check (using node 172.16.153.33:7004)
M: e1ece83ddf64f3d23567ccecc8354e717c19a961 172.16.153.33:7004
   slots: (0 slots) master
   1 additional replica(s)
S: ba3dfe8a7462d55272d8ae5defa0b2db1e63fb07 172.16.153.32:7003
   slots: (0 slots) slave
   replicates ad845f2b5c47d981577abeece629fc14550c7e8b
S: 3c57d818abc38d9a0a8ca8d17fe2cac56fad8380 172.16.153.33:7005
   slots: (0 slots) slave
   replicates e1ece83ddf64f3d23567ccecc8354e717c19a961
M: 54fed77edf250f947f7d27959c2a317a082d0d3b 172.16.153.5:6381
   slots:[11922-16383] (4462 slots) master
   1 additional replica(s)
S: ef7d6f7d2b547b6a89a7478b4c39e1650d8074e1 172.16.153.5:6382
   slots: (0 slots) slave
   replicates 54fed77edf250f947f7d27959c2a317a082d0d3b
M: a9deeb976ba0efa14190cf382bfe61aea65697ad 172.16.153.5:6380
   slots:[6462-10922] (4461 slots) master
   1 additional replica(s)
M: 922c14b9935f3fd6f457701c41f991c883ec9ca4 172.16.153.5:6379
   slots:[999-5460] (4462 slots) master
   1 additional replica(s)
S: 1e259f10a6e29d38a1261c637ad1a24d40a3e754 172.16.153.5:6384
   slots: (0 slots) slave
   replicates a9deeb976ba0efa14190cf382bfe61aea65697ad
S: 7eabf4a5569f0ee35f159139cf62bdf99c5fa482 172.16.153.5:6383
   slots: (0 slots) slave
   replicates 922c14b9935f3fd6f457701c41f991c883ec9ca4
M: ad845f2b5c47d981577abeece629fc14550c7e8b 172.16.153.32:7002
   slots:[0-998],[5461-6461],[10923-11921] (2999 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 3000
What is the receiving node ID? e1ece83ddf64f3d23567ccecc8354e717c19a961
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all
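
After the source nodes are entered, redis-cli prints the migration plan and asks for confirmation before moving the 3000 slots to 7004. The same reshard can also be scripted non-interactively; a sketch using the --cluster options of redis-cli 5.0 (node ID and password as above):

redis-cli --cluster reshard 172.16.153.33:7004 -a smcaiot_redis_pass \
  --cluster-from all --cluster-to e1ece83ddf64f3d23567ccecc8354e717c19a961 \
  --cluster-slots 3000 --cluster-yes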

5. Check the Redis cluster status:

127.0.0.1:6379> cluster info
127.0.0.1:6379> cluster nodes
a9deeb976ba0efa14190cf382bfe61aea65697ad 172.16.153.5:6380@16380 master - 0 1606980981086 2 connected 7278-10922
ad845f2b5c47d981577abeece629fc14550c7e8b 172.16.153.32:7002@17002 master - 0 1606980981086 7 connected 549-998 5461-6461 10923-11921
1e259f10a6e29d38a1261c637ad1a24d40a3e754 172.16.153.5:6384@16384 slave a9deeb976ba0efa14190cf382bfe61aea65697ad 0 1606980981287 6 connected
ef7d6f7d2b547b6a89a7478b4c39e1650d8074e1 172.16.153.5:6382@16382 slave 54fed77edf250f947f7d27959c2a317a082d0d3b 0 1606980981086 4 connected
922c14b9935f3fd6f457701c41f991c883ec9ca4 172.16.153.5:6379@16379 myself,master - 0 1606980980000 1 connected 1816-5460
7eabf4a5569f0ee35f159139cf62bdf99c5fa482 172.16.153.5:6383@16383 slave 922c14b9935f3fd6f457701c41f991c883ec9ca4 0 1606980981086 5 connected
3c57d818abc38d9a0a8ca8d17fe2cac56fad8380 172.16.153.33:7005@17005 slave e1ece83ddf64f3d23567ccecc8354e717c19a961 0 1606980981086 9 connected
54fed77edf250f947f7d27959c2a317a082d0d3b 172.16.153.5:6381@16381 master - 0 1606980981086 3 connected 12740-16383
ba3dfe8a7462d55272d8ae5defa0b2db1e63fb07 172.16.153.32:7003@17003 slave ad845f2b5c47d981577abeece629fc14550c7e8b 0 1606980981086 7 connected
e1ece83ddf64f3d23567ccecc8354e717c19a961 172.16.153.33:7004@17004 master - 0 1606980981086 9 connected 0-548 999-1815 6462-7277 11922-12739
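
CLUSTER INFO should now report cluster_state:ok and cluster_size:5, and the slot ranges above show that 7004 received exactly 3000 slots taken from the other masters. A final consistency check can be run against any node with the --cluster check subcommand, for example:

redis-cli --cluster check 172.16.153.33:7004 -a smcaiot_redis_pass

It should again print "[OK] All nodes agree about slots configuration." and "[OK] All 16384 slots covered."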