1. Redis Cluster Design Highlights
Redis Cluster was designed from the start to be decentralized, with no middleware: every node in the cluster is an equal peer, each holding its own share of the data plus the state of the whole cluster. Every node keeps active connections to all the other nodes, which guarantees that connecting to any single node is enough to reach data held anywhere in the cluster.
So how does Redis distribute the nodes and data sensibly?
Redis Cluster does not use traditional consistent hashing to distribute data; instead it uses hash slots. The cluster defines 16384 slots in total. When we set a key, the CRC16 algorithm plus a modulo determines which slot the key belongs to, and the key is assigned to the node owning that slot range; concretely: CRC16(key) % 16384. Note that a cluster requires at least 3 master nodes, otherwise creation will fail. So suppose three nodes A, B, and C already form a cluster; they could be three ports on one machine or three different servers. Under the hash-slot scheme, the 16384 slots are split across the three nodes as follows:
- Node A covers slots 0-5460;
- Node B covers slots 5461-10922;
- Node C covers slots 10923-16383.
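You can check a key's slot yourself from any node with the standard CLUSTER KEYSLOT command; a minimal sketch, assuming a cluster node listening locally on port 6379:

# Ask the node which of the 16384 slots a key hashes to
./redis-cli -p 6379 CLUSTER KEYSLOT my_name

This prints the slot number, 2412 for my_name according to the example that follows.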
Now say I want to set a key, for example one called my_name:

set my_name yangyi

By Redis Cluster's hash-slot algorithm, CRC16('my_name') % 16384 = 2412, so storage of this key is assigned to node A. Likewise, when I connect to any of the nodes (A, B, or C) and get my_name, the same calculation runs and the request is internally redirected to node A to fetch the data.

This hash-slot allocation has both upsides and downsides. The upside is that it is very clear-cut: say I want to add a new node D; Redis Cluster's approach is to take a portion of slots from the front of each existing node's range and move them to D. The layout would become roughly:
- Node A covers slots 1365-5460;
- Node B covers slots 6827-10922;
- Node C covers slots 12288-16383;
- Node D covers slots 0-1364, 5461-6826, and 10923-12287.
Removing a node works the same way in reverse: its slots are moved to the remaining nodes first, and once the migration completes the node can be removed.
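In recent Redis versions this slot migration is driven through redis-cli. A sketch under assumptions: node D's address 192.168.244.132:6379 is hypothetical, and <A_id>/<D_id> stand for the node IDs printed by CLUSTER NODES:

# Join the new, empty master D through any existing node
./redis-cli --cluster add-node 192.168.244.132:6379 192.168.244.128:6379
# Move 1365 slots from node A to node D without interactive prompts
./redis-cli --cluster reshard 192.168.244.128:6379 \
  --cluster-from <A_id> --cluster-to <D_id> \
  --cluster-slots 1365 --cluster-yes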
That, then, is the overall shape of a Redis Cluster.
2. Redis Cluster Master-Replica Mode
To keep data highly available, Redis Cluster adds a master-replica mode: each master node pairs with one or more replica nodes. The master serves reads and writes, while the replicas pull data from the master as a backup. When a master dies, one of its replicas is elected to take over as master, so the cluster as a whole stays up.
In the example above the cluster has three masters, A, B, and C. If none of them has a replica and B dies, we lose access to the whole cluster: with B's slot range uncovered, A's and C's slots become unreachable too (under the default cluster-require-full-coverage yes).
So when building the cluster, be sure to add a replica for every master. For example, with masters A, B, C and replicas A1, B1, C1, the system keeps working correctly even if B dies.
B1 stands in for B: the cluster elects B1 as the new master and continues to serve correctly. When B comes back online, it becomes a replica of B1.
Note, however, that if B and B1 fail at the same time, the cluster can no longer serve correctly.
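You can inspect these roles, and even rehearse a failover, from redis-cli. A sketch using this article's addresses and password; CLUSTER FAILOVER must be issued on a replica, and <B1_host>/<B1_port> are placeholders:

# List every node with its master/replica role and slot ranges
./redis-cli -h 192.168.244.128 -p 6379 -a zjl123 CLUSTER NODES
# Manually promote a replica to master, e.g. to rehearse B1 taking over from B
./redis-cli -h <B1_host> -p <B1_port> -a zjl123 CLUSTER FAILOVER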
3. Building the Cluster
1) Redis instances:
192.168.244.128:6379 master    192.168.244.128:6380 replica
192.168.244.130:6379 master    192.168.244.130:6380 replica
192.168.244.131:6379 master    192.168.244.131:6380 replica
2) Run the command to create the cluster
First, uncomment cluster-enabled yes in each instance's redis.conf.
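For reference, a minimal set of cluster-related settings in redis.conf looks like this; the cluster-config-file name and the timeout are common defaults, not values taken from this setup:

cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 15000
appendonly yes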
Then run:
./redis-trib.rb create --replicas 1 192.168.244.128:6379 192.168.244.128:6380 192.168.244.130:6379 192.168.244.130:6380 192.168.244.131:6379 192.168.244.131:6380
Newer versions have replaced this with:
./redis-cli --cluster create 192.168.244.128:6379 192.168.244.128:6380 192.168.244.130:6379 192.168.244.130:6380 192.168.244.131:6379 192.168.244.131:6380 --cluster-replicas 1 -a zjl123
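Everything that used to live in redis-trib.rb is now under the --cluster option; the full list of subcommands is available with:

./redis-cli --cluster help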
Note that when one server runs multiple instances, the following settings must differ between them:
pidfile /var/run/redis/redis_6380.pid
port 6380
logfile /var/log/redis/redis_6380.log
dbfilename dump_6380.rdb
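Each instance is then started with its own config file; a quick sketch with illustrative paths:

# One redis-server process per config file
./redis-server /etc/redis/redis_6379.conf
./redis-server /etc/redis/redis_6380.conf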
3) Problems encountered:
Problem 1: error: /usr/bin/env: ruby: No such file or directory
Install the ruby and rubygems dependencies:
yum -y install ruby rubygems
Problem 2: error:
./redis-trib.rb:6: odd number list for Hash
white: 29,
^
./redis-trib.rb:6: syntax error, unexpected ':', expecting '}'
white: 29,
^
./redis-trib.rb:7: syntax error, unexpected ',', expecting kEND
The fix is to install a newer Ruby:
yum remove -y ruby
yum remove -y rubygems
# download ruby-2.6.5.tar.gz, then:
tar -zxvf ruby-2.6.5.tar.gz
cd ruby-2.6.5
./configure
make
make install

Problem 3: re-running the cluster-creation command now prints a deprecation notice:

You should use redis-cli instead. All commands and features belonging to redis-trib.rb have been moved to redis-cli. In order to use them you should call redis-cli with the --cluster option followed by the subcommand name, arguments and options. Use the following syntax:
redis-cli --cluster SUBCOMMAND [ARGUMENTS] [OPTIONS]
Example:
redis-cli --cluster create 192.168.244.128:6379 192.168.244.128:6380 192.168.244.130:6379 192.168.244.130:6380 192.168.244.131:6379 192.168.244.131:6380 --cluster-replicas 1
To get help about all subcommands, type:
redis-cli --cluster help

[root@zjltest3 src]# redis-cli --cluster create 192.168.244.128:6379 192.168.244.128:6380 192.168.244.130:6379 192.168.244.130:6380 192.168.244.131:6379 192.168.244.131:6380 --cluster-replicas 1
-bash: redis-cli: command not found
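The shell simply cannot find redis-cli on the PATH. Run it from the Redis build directory as ./redis-cli, as the following steps do, or copy the binary onto the PATH (illustrative destination):

cp src/redis-cli /usr/local/bin/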
Problem 4: running
./redis-cli --cluster create 192.168.244.128:6379 192.168.244.128:6380 192.168.244.130:6379 192.168.244.130:6380 192.168.244.131:6379 192.168.244.131:6380 --cluster-replicas 1
fails with:
[ERR] Node 192.168.244.128:6379 NOAUTH Authentication required.
The instances are password-protected, so the password must be supplied with -a.
Problem 5: running the same command with the password:
./redis-cli --cluster create 192.168.244.128:6379 192.168.244.128:6380 192.168.244.130:6379 192.168.244.130:6380 192.168.244.131:6379 192.168.244.131:6380 --cluster-replicas 1 -a zjl123
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
[ERR] Node 192.168.244.128:6379 is not configured as a cluster node.
Problem 6: after uncommenting cluster-enabled yes (and restarting the instances), creation succeeds:
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.244.130:6380 to 192.168.244.128:6379
Adding replica 192.168.244.131:6380 to 192.168.244.130:6379
Adding replica 192.168.244.128:6380 to 192.168.244.131:6379
M: 0f6f4aabd05f0ef3a214a3e67b139ce52c8d5ca4 192.168.244.128:6379
   slots:[0-5460],[5634],[8157] (5461 slots) master
S: 1d6b7a10046a75b3ab5461a8d29f411837a3c0d8 192.168.244.128:6380
   replicates 0f30ac78eba3be20aa307ea64c09b5025de165af
M: d34845ed63f35645e820946cc0dc24460621a386 192.168.244.130:6379
   slots:[5461-10922] (5462 slots) master
S: 1332da24115473f73e04dfe8b67cd1e595a34a11 192.168.244.130:6380
   replicates 0f6f4aabd05f0ef3a214a3e67b139ce52c8d5ca4
M: 0f30ac78eba3be20aa307ea64c09b5025de165af 192.168.244.131:6379
   slots:[10923-16383] (5461 slots) master
S: 1a57a63faa17e5fda4025a5f088fc70055990a07 192.168.244.131:6380
   replicates d34845ed63f35645e820946cc0dc24460621a386
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
......
>>> Performing Cluster Check (using node 192.168.244.128:6379)
M: 0f6f4aabd05f0ef3a214a3e67b139ce52c8d5ca4 192.168.244.128:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 1d6b7a10046a75b3ab5461a8d29f411837a3c0d8 192.168.244.128:6380
   slots: (0 slots) slave
   replicates 0f30ac78eba3be20aa307ea64c09b5025de165af
M: d34845ed63f35645e820946cc0dc24460621a386 192.168.244.130:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 1a57a63faa17e5fda4025a5f088fc70055990a07 192.168.244.131:6380
   slots: (0 slots) slave
   replicates d34845ed63f35645e820946cc0dc24460621a386
S: 1332da24115473f73e04dfe8b67cd1e595a34a11 192.168.244.130:6380
   slots: (0 slots) slave
   replicates 0f6f4aabd05f0ef3a214a3e67b139ce52c8d5ca4
M: 0f30ac78eba3be20aa307ea64c09b5025de165af 192.168.244.131:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
4. Testing the Cluster
1) Check the cluster status:
./redis-cli --cluster check 192.168.244.128:6379 -a zjl123
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
192.168.244.128:6379 (0f6f4aab...) -> 0 keys | 5461 slots | 1 slaves.
192.168.244.130:6379 (d34845ed...) -> 0 keys | 5462 slots | 1 slaves.
192.168.244.131:6379 (0f30ac78...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.244.128:6379)
M: 0f6f4aabd05f0ef3a214a3e67b139ce52c8d5ca4 192.168.244.128:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 1d6b7a10046a75b3ab5461a8d29f411837a3c0d8 192.168.244.128:6380
   slots: (0 slots) slave
   replicates 0f30ac78eba3be20aa307ea64c09b5025de165af
M: d34845ed63f35645e820946cc0dc24460621a386 192.168.244.130:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 1a57a63faa17e5fda4025a5f088fc70055990a07 192.168.244.131:6380
   slots: (0 slots) slave
   replicates d34845ed63f35645e820946cc0dc24460621a386
S: 1332da24115473f73e04dfe8b67cd1e595a34a11 192.168.244.130:6380
   slots: (0 slots) slave
   replicates 0f6f4aabd05f0ef3a214a3e67b139ce52c8d5ca4
M: 0f30ac78eba3be20aa307ea64c09b5025de165af 192.168.244.131:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
2) Log in to a server and set/get values:
[root@zjltest3 src]# ./redis-cli -c -h 192.168.244.128 -p 6379 -a zjl123
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
192.168.244.128:6379> set zjl 123
-> Redirected to slot [5634] located at 192.168.244.130:6379
OK
192.168.244.130:6379>
As shown above, the value was stored on the 130 server.
[root@zjltest2 redis]# src/redis-cli -c -h 192.168.244.128 -p 6379 -a zjl123
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
192.168.244.128:6379> get zjl
-> Redirected to slot [5634] located at 192.168.244.130:6379
"123"
As shown above, the value was fetched from the 130 server.
This confirms the cluster is configured and working correctly.
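As a final check, CLUSTER INFO summarizes overall cluster health; a sketch against this setup's first node:

./redis-cli -c -h 192.168.244.128 -p 6379 -a zjl123 CLUSTER INFO
# a healthy cluster reports cluster_state:ok and cluster_slots_assigned:16384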