Redis Clustering: Redis-Cluster

Redis-Cluster is the officially recommended Redis clustering solution.

Cluster: redis cluster
  Capacity is scaled out through sharding
  High availability of nodes is achieved through master-replica replication
  Nodes communicate with each other
  Every node maintains the node information of the entire cluster
  redis-cluster maps all physical nodes onto the slots [0-16383]; the cluster maintains the node <-> slot <-> key mapping (a short sketch of the key-to-slot calculation follows this list)
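For reference, Redis Cluster computes a key's slot as CRC16(key) mod 16384, where CRC16 is the CCITT/XMODEM variant, and a hash tag ({...}) restricts hashing to the tagged substring so related keys share a slot. A minimal Python sketch of that mapping (the helper names crc16 and key_slot are ours, not part of any Redis client):

def crc16(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0x0000), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16384 cluster slots, honoring {hash tags}."""
    start = key.find('{')
    if start != -1:
        end = key.find('}', start + 1)
        if end > start + 1:          # non-empty tag: hash only the tagged part
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# "user:1000" and "{user:1000}.following" hash the same substring, so they share a slot
print(key_slot("user:1000"), key_slot("{user:1000}.following"))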

Advantages of Redis-Cluster

1. Officially recommended, beyond doubt.
2. Decentralized; the cluster can grow to up to 1000 nodes, and performance scales linearly as nodes are added.
3. Easy to manage; nodes can later be added or removed, slots can be moved, and so on.
4. Simple and easy to get started with.
redis-cluster terminology
1. master  the master node
2. slave   the replica (slave) node
3. slot    hash slot; there are 16384 slots in total, distributed across all master nodes in the cluster
Internally the cluster is divided into 16384 data slots, spread across the three Redis masters

 

Consistent hashing
Drawbacks of plain hashing
– When the number of hash nodes changes, the hash values of most keys change, which makes the cache hit rate drop sharply
– As the business grows, cache capacity becomes a bottleneck, so scaling out is an unavoidable requirement
Consistent hashing
– The key space is large, 0 to (2^32)-1, and forms a ring
– Machines are mapped onto this ring with a hash function
– Data keys are mapped onto the same ring with the same hash function
– Going clockwise, each key is stored on the nearest node
– Advantages:
• When a new machine is added, only the keys that fall between this node and its predecessor are affected
• When a machine is removed, likewise only the keys between this node and its predecessor are affected
– Drawbacks:
• Data distribution is uneven; a newly added node holds relatively little data, and data on the old nodes is not rebalanced
How to fix the uneven distribution: balance
– Idea: get enough machine nodes by building virtual nodes, then map the virtual nodes to physical nodes
– With far more virtual nodes than physical nodes, the distribution becomes much more even (see the sketch after this list)
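To make the virtual-node idea concrete, here is a minimal Python sketch of a consistent hash ring (class and method names are ours for illustration; MD5 is used as the ring hash, and vnodes controls how many virtual nodes each physical node gets):

import bisect
import hashlib

class ConsistentHashRing:
    """Hash ring that places `vnodes` virtual nodes per physical node."""

    def __init__(self, nodes=None, vnodes=100):
        self.vnodes = vnodes
        self._ring = []    # sorted virtual-node positions on the ring
        self._owner = {}   # position -> physical node
        for node in nodes or []:
            self.add_node(node)

    def _hash(self, value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add_node(self, node: str) -> None:
        for i in range(self.vnodes):
            pos = self._hash(f"{node}#vn{i}")
            bisect.insort(self._ring, pos)
            self._owner[pos] = node

    def remove_node(self, node: str) -> None:
        for i in range(self.vnodes):
            pos = self._hash(f"{node}#vn{i}")
            self._ring.remove(pos)
            del self._owner[pos]

    def get_node(self, key: str) -> str:
        # walk clockwise: first virtual node at or after the key's position
        idx = bisect.bisect_right(self._ring, self._hash(key)) % len(self._ring)
        return self._owner[self._ring[idx]]

ring = ConsistentHashRing(["redis-a", "redis-b", "redis-c"])
print(ring.get_node("user:1001"))   # adding/removing a node only remaps nearby keys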

Common commands
cluster nodes: list all nodes in the cluster
cluster info: show cluster state information
cluster slots: show the slot allocation of the cluster
CLUSTER KEYSLOT key: return the slot a given key maps to
CLUSTER DELSLOTS slot [slot ...]: remove the given slots from the current node

1 Prepare the nodes
ip:port
127.0.0.1 6379 127.0.0.1 6382
127.0.0.1 6383 127.0.0.1 6384
127.0.0.1 6385 127.0.0.1 6386
Edit the configuration file /etc/redis/6379.conf
pidfile "/var/run/redis_6379.pid"
port 6379 // port
logfile "/etc/redis/6379.log"
dbfilename "dump-6382.rdb"
cluster-enabled yes // enable cluster mode
cluster-config-file nodes-6379.conf // the cluster's internal configuration file
cluster-node-timeout 15000 // node timeout, in milliseconds
// other settings are the same as in standalone mode
#/usr/local/redis/src/redis-server /etc/redis/6379.conf
/usr/local/redis/src/redis-server /etc/redis/6382.conf
/usr/local/redis/src/redis-server /etc/redis/6383.conf
/usr/local/redis/src/redis-server /etc/redis/6384.conf
/usr/local/redis/src/redis-server /etc/redis/6385.conf
/usr/local/redis/src/redis-server /etc/redis/6386.conf
/usr/local/redis/src/redis-server /etc/redis/6387.conf

[root@hongquan1 redis]# /usr/local/redis/src/redis-server /etc/redis/6379.conf

*** FATAL CONFIG FILE ERROR ***
Reading the configuration file, at line 945
>>> 'slaveof 127.0.0.1 6380'
slaveof directive not allowed in cluster mode

[root@hongquan1 redis]# ps -ef|grep redis
root 30167 1 0 08:10 ? 00:00:00 /usr/local/redis/src/redis-server *:6387 [cluster]
root 30762 1 0 08:14 ? 00:00:00 /usr/local/redis/src/redis-server *:6382 [cluster]
root 30781 1 0 08:14 ? 00:00:00 /usr/local/redis/src/redis-server *:6383 [cluster]
root 30783 1 0 08:14 ? 00:00:00 /usr/local/redis/src/redis-server *:6384 [cluster]
root 30785 1 0 08:14 ? 00:00:00 /usr/local/redis/src/redis-server *:6385 [cluster]
root 30793 1 0 08:14 ? 00:00:00 /usr/local/redis/src/redis-server *:6386 [cluster]
root 30840 3385 0 08:14 pts/1 00:00:00 grep redis

/usr/local/redis/src/redis-cli -h 127.0.0.1 -p 6387

cluster-config-file nodes-6387.conf
Purpose of the cluster configuration file: whenever the cluster membership changes (a node is added, goes offline, a failover happens, and so on), each node automatically saves the cluster state to this file. The file is maintained by Redis itself; do not edit it by hand, or the cluster information may become inconsistent when the node restarts.
[root@hongquan1 redis]# tail -n 20 nodes-6387.conf
aa95cc1e617a173aba2a1839a9fcf03f1fa06a94 :0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0
127.0.0.1:6387> cluster nodes
aa95cc1e617a173aba2a1839a9fcf03f1fa06a94 :6387 myself,master - 0 0 0 connected

2 Node handshake
A node handshake is the process in which a group of nodes running in cluster mode communicate with each other over the Gossip protocol and become aware of one another. It is the first step of inter-node communication and is initiated from a client with the command: cluster meet <ip> <port>
127.0.0.1:6387> cluster meet 127.0.0.1 6382
OK
Make all the nodes aware of each other:
127.0.0.1:6387> cluster nodes
e70238d739f433b74d4e86c4f651ea97085e2306 127.0.0.1:6382 master - 0 1525825154654 0 connected
aa95cc1e617a173aba2a1839a9fcf03f1fa06a94 127.0.0.1:6387 myself,master - 0 0 1 connected
127.0.0.1:6387> cluster meet 127.0.0.1 6383
OK
127.0.0.1:6387> cluster meet 127.0.0.1 6384
OK
127.0.0.1:6387> cluster meet 127.0.0.1 6385
OK
127.0.0.1:6387> cluster meet 127.0.0.1 6386
OK
127.0.0.1:6387> cluster nodes
1b62871cebb802ebed82b7ce43d2a58a08168c35 127.0.0.1:6383 master - 0 1525825206274 2 connected
714ff9c0cf3c5c45c80bf093a17b290ed02415e9 127.0.0.1:6384 master - 0 1525825207289 3 connected
aa95cc1e617a173aba2a1839a9fcf03f1fa06a94 127.0.0.1:6387 myself,master - 0 0 1 connected
e70238d739f433b74d4e86c4f651ea97085e2306 127.0.0.1:6382 master - 0 1525825207596 0 connected
52a74656529a7af4bd10060038bc866b60e9bb92 127.0.0.1:6385 master - 0 1525825210432 4 connected
e00a19c26d02de645fa1bf493319a91952d49504 127.0.0.1:6386 master - 0 1525825209363 5 connected
At this point the six nodes form a cluster, but it cannot serve requests yet because no slots have been assigned to the nodes (a scripted version of these handshakes is sketched below).
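The same handshakes can also be driven from a client. A minimal sketch, assuming the redis-py package is installed and the six nodes from this walkthrough are running; execute_command is used to send the raw CLUSTER commands:

import redis  # redis-py, assumed installed

ports = [6387, 6382, 6383, 6384, 6385, 6386]
seed = redis.Redis(host="127.0.0.1", port=ports[0])

# introduce every other node to the seed node; Gossip spreads the membership
for port in ports[1:]:
    seed.execute_command("CLUSTER MEET", "127.0.0.1", port)

print(seed.execute_command("CLUSTER INFO"))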

3 Assign slots
After startup the cluster state is fail; it only becomes usable once all slots have been assigned to cluster nodes.
Check the number of slots on the node at port 6387:
127.0.0.1:6387> cluster info
cluster_state:fail // the state is fail
cluster_slots_assigned:0 // the number of assigned slots is 0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:0
cluster_current_epoch:5
cluster_my_epoch:1
cluster_stats_messages_sent:233
cluster_stats_messages_received:233
Next, assign slot ranges to the nodes with the cluster addslots command.
All 16384 slots must be assigned, ranges must not overlap between nodes, and each slot can live on only one node (a small script that computes such a split is sketched after the command output below).
/usr/local/redis/src/redis-cli -h 127.0.0.1 -p 6387 cluster addslots {0..5461}
/usr/local/redis/src/redis-cli -h 127.0.0.1 -p 6382 cluster addslots {5462..10922}
/usr/local/redis/src/redis-cli -h 127.0.0.1 -p 6383 cluster addslots {10923..16383}
[root@hongquan1 redis]# /usr/local/redis/src/redis-cli -h 127.0.0.1 -p 6387 cluster addslots {0..5461}
OK
[root@hongquan1 redis]# /usr/local/redis/src/redis-cli -h 127.0.0.1 -p 6382 cluster addslots {5462..10922}
OK
[root@hongquan1 redis]# /usr/local/redis/src/redis-cli -h 127.0.0.1 -p 6383 cluster addslots {10923..16383}
OK
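For a different number of masters, the slot ranges do not have to be typed by hand. A small plain-Python sketch (variable names are ours) that splits the 16384 slots into contiguous ranges and prints the matching redis-cli commands; the boundaries differ slightly from the manual split above, but every slot is still covered exactly once:

masters = [6387, 6382, 6383]        # master ports from this walkthrough
TOTAL_SLOTS = 16384
per_node = TOTAL_SLOTS // len(masters)

start = 0
for i, port in enumerate(masters):
    # the last master absorbs the remainder so all 16384 slots are assigned
    end = TOTAL_SLOTS - 1 if i == len(masters) - 1 else start + per_node - 1
    print(f"/usr/local/redis/src/redis-cli -h 127.0.0.1 -p {port} "
          f"cluster addslots {{{start}..{end}}}")
    start = end + 1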
[root@hongquan1 redis]# /usr/local/redis/src/redis-cli -h 127.0.0.1 -p 6387
127.0.0.1:6387> cluster info
cluster_state:ok // after the assignment the state is ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:1
cluster_stats_messages_sent:453
cluster_stats_messages_received:453
The assignment can be checked with CLUSTER NODES:
127.0.0.1:6387> cluster nodes
1b62871cebb802ebed82b7ce43d2a58a08168c35 127.0.0.1:6383 master - 0 1525825377690 2 connected 10923-16383
714ff9c0cf3c5c45c80bf093a17b290ed02415e9 127.0.0.1:6384 master - 0 1525825378752 3 connected
aa95cc1e617a173aba2a1839a9fcf03f1fa06a94 127.0.0.1:6387 myself,master - 0 0 1 connected 0-5461
e70238d739f433b74d4e86c4f651ea97085e2306 127.0.0.1:6382 master - 0 1525825377895 0 connected 5462-10922
52a74656529a7af4bd10060038bc866b60e9bb92 127.0.0.1:6385 master - 0 1525825375628 4 connected
e00a19c26d02de645fa1bf493319a91952d49504 127.0.0.1:6386 master - 0 1525825376647 5 connected

Three nodes are still unused. For a complete cluster, every node that serves slots should have a replica, so that when a master fails, failover can happen automatically.
In cluster mode, both freshly started nodes and nodes that have been assigned slots are masters; replicas copy their master's slot information and related data.
Run cluster replicate <nodeid> on each replica node, using the following master/replica pairs (a scripted version is sketched at the end of this section):
127.0.0.1 6387 127.0.0.1 6384
127.0.0.1 6382 127.0.0.1 6385
127.0.0.1 6383 127.0.0.1 6386

/usr/local/redis/src/redis-cli -h 127.0.0.1 -p 6384 cluster replicate aa95cc1e617a173aba2a1839a9fcf03f1fa06a94
/usr/local/redis/src/redis-cli -h 127.0.0.1 -p 6385 cluster replicate e70238d739f433b74d4e86c4f651ea97085e2306
/usr/local/redis/src/redis-cli -h 127.0.0.1 -p 6386 cluster replicate 1b62871cebb802ebed82b7ce43d2a58a08168c35
127.0.0.1:6387> cluster nodes
1b62871cebb802ebed82b7ce43d2a58a08168c35 127.0.0.1:6383 master - 0 1525825628380 2 connected 10923-16383
714ff9c0cf3c5c45c80bf093a17b290ed02415e9 127.0.0.1:6384 slave aa95cc1e617a173aba2a1839a9fcf03f1fa06a94 0 1525825625343 3 connected
aa95cc1e617a173aba2a1839a9fcf03f1fa06a94 127.0.0.1:6387 myself,master - 0 0 1 connected 0-5461
e70238d739f433b74d4e86c4f651ea97085e2306 127.0.0.1:6382 master - 0 1525825628790 0 connected 5462-10922
52a74656529a7af4bd10060038bc866b60e9bb92 127.0.0.1:6385 slave e70238d739f433b74d4e86c4f651ea97085e2306 0 1525825627360 4 connected
e00a19c26d02de645fa1bf493319a91952d49504 127.0.0.1:6386 slave 1b62871cebb802ebed82b7ce43d2a58a08168c35 0 1525825626349 5 connected
This completes a 3-master, 3-replica Redis cluster.
127.0.0.1:6387> cluster slots
1) 1) (integer) 10923
   2) (integer) 16383
   3) 1) "127.0.0.1"
      2) (integer) 6383
   4) 1) "127.0.0.1"
      2) (integer) 6386
2) 1) (integer) 0
   2) (integer) 5461
   3) 1) "127.0.0.1"
      2) (integer) 6387
   4) 1) "127.0.0.1"
      2) (integer) 6384
3) 1) (integer) 5462
   2) (integer) 10922
   3) 1) "127.0.0.1"
      2) (integer) 6382
   4) 1) "127.0.0.1"
      2) (integer) 6385
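The replica assignment can also be scripted. A hedged sketch, again assuming redis-py is installed: each master's node ID is fetched with CLUSTER MYID and handed to its replica with CLUSTER REPLICATE, using the replica-to-master port pairs from this walkthrough (the pairs dict and variable names are ours for illustration):

import redis  # redis-py, assumed installed

pairs = {6384: 6387, 6385: 6382, 6386: 6383}   # replica port -> master port

for replica_port, master_port in pairs.items():
    master = redis.Redis(host="127.0.0.1", port=master_port)
    node_id = master.execute_command("CLUSTER MYID")        # the master's node ID
    if isinstance(node_id, bytes):
        node_id = node_id.decode()
    replica = redis.Redis(host="127.0.0.1", port=replica_port)
    replica.execute_command("CLUSTER REPLICATE", node_id)
    print(f"{replica_port} now replicates {master_port} ({node_id})")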
