1. Master-slave: a king and his prime minister; the king holds the greater power (read and write), the prime minister the lesser (read only).
2. Sentinel: a king and a prince; when the king dies (the master goes down), the prince takes the throne (a slave is promoted to master).
3. Cluster: several kings; when one king dies (a node goes down), the other kings live on and the world keeps turning.
Run on the slave:
slaveof 127.0.0.1 6379
That is all it takes (note this is a one-off runtime setting; add the same line to the slave's redis.conf to survive restarts). Check the result with:
info replication
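A quick shell check of the link (a sketch; it assumes a master on 6379 and a slave on 6380, both local — adjust ports to your setup):
$ redis-cli -p 6380 slaveof 127.0.0.1 6379
OK
$ redis-cli -p 6380 info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up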
# sentinel port
port 26379
# working directory; make sure it does not collide with the master's
dir "/usr/local/redis-6379"
# run as a daemon
daemonize yes
# disable protected mode
protected-mode no
# log file name
logfile "./sentinel.log"
# the master to monitor: give only the master's ip/port and the quorum
# (how many sentinels must agree it is down); slaves are discovered automatically
sentinel monitor mymaster 192.168.125.128 6379 1
# how long (in ms, default 30 seconds) a master or slave must be
# unreachable before this sentinel marks it s_down (subjectively down)
sentinel down-after-milliseconds mymaster 3000
# if a failover (the automatic master/slave switch on failure) does not
# finish within this time, the sentinel considers it failed
sentinel failover-timeout mymaster 18000
# password used to authenticate with the master and its slaves
sentinel auth-pass mymaster 123456
# how many slaves may resynchronize with the new master at the same
# time during a failover
sentinel parallel-syncs mymaster 1
2. Start the master and slave instances, then launch the sentinel process:
Option 1: redis-sentinel /path/to/sentinel.conf
(recommended: started this way, the sentinel is not tied to any redis instance)
Option 2: redis-server /path/to/sentinel.conf --sentinel
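Once the sentinel is running, info sentinel confirms that the monitor line was picked up (a sketch; the slaves count depends on how many slaves are actually attached):
$ redis-cli -p 26379 info sentinel
# Sentinel
sentinel_masters:1
sentinel_tilt:0
master0:name=mymaster,status=ok,address=192.168.125.128:6379,slaves=2,sentinels=1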
3. When the master goes down, one of the slaves is automatically promoted to master.
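To watch this failover happen, ask the sentinel who the master is, kill the master, and ask again (a sketch based on the config above; -a 123456 matches the auth-pass line):
$ redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
1) "192.168.125.128"
2) "6379"
# simulate a master crash
$ redis-cli -h 192.168.125.128 -p 6379 -a 123456 shutdown nosave
# after down-after-milliseconds plus the election, the sentinel
# reports the address of the promoted slave instead
$ redis-cli -p 26379 sentinel get-master-addr-by-name mymaster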
Minimal redis.conf for a cluster node:
port 7000
daemonize yes
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
3. Copy redis-cli, redis-server, and redis-trib.rb from the compiled Redis src directory into the cluster-test folder.
4. Enter each directory and start the six redis instances in turn: ../redis-server ./redis.conf (a scripted version of these steps follows).
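Creating the six config files and starting the instances can be scripted; a minimal bash sketch, assuming the cluster-test layout from step 3:
cd cluster-test
for port in 7000 7001 7002 7003 7004 7005; do
  mkdir -p $port                    # one directory per instance
  cat > $port/redis.conf <<EOF      # the minimal config shown above
port $port
daemonize yes
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
EOF
done
for port in 7000 7001 7002 7003 7004 7005; do
  (cd $port && ../redis-server ./redis.conf)   # step 4: start all six
done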
5. Create the cluster with the redis-trib command-line tool (written in Ruby), which requires a Ruby environment:
$ yum install ruby
$ yum install rubygems
$ gem install redis
6. Install Ruby 2.4.0 (the Ruby provided by yum install ruby is too old)
1) Install RVM
$ curl -L get.rvm.io | bash -s stable
If this errors out, run the gpg2 --recv-keys xxxxxx command shown in the error message.
2) Load the RVM environment into the current shell
$ source /usr/local/rvm/scripts/rvm
3) List the Ruby versions RVM knows about
$ rvm list known
4) Upgrade Ruby
# install ruby
rvm install 2.4.0
# switch to the new version
rvm use 2.4.0
# remove the old version
rvm remove 2.0.0
# check the current version
ruby --version
7. Install the redis gem
$ gem install redis
8. Run redis-trib.rb to create the cluster
[root@root cluster-test]# ./redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 \
> 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
127.0.0.1:7000
127.0.0.1:7001
127.0.0.1:7002
Adding replica 127.0.0.1:7004 to 127.0.0.1:7000
Adding replica 127.0.0.1:7005 to 127.0.0.1:7001
Adding replica 127.0.0.1:7003 to 127.0.0.1:7002
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 033d0dbea959fb15a3a27552d18dbb623985a180 127.0.0.1:7000
slots:0-5460 (5461 slots) master
M: 77812723f46f25191eeed04a42303ae83bec66be 127.0.0.1:7001
slots:5461-10922 (5462 slots) master
M: 17fccfc10108c81301a501ec8eaccfb57541fa87 127.0.0.1:7002
slots:10923-16383 (5461 slots) master
S: 89b452f2d5553bef131152932c5725429b0c4aa1 127.0.0.1:7003
replicates 77812723f46f25191eeed04a42303ae83bec66be
S: 095f3d47fa5788ddd94a4c962f7e02fe79a6e8b1 127.0.0.1:7004
replicates 17fccfc10108c81301a501ec8eaccfb57541fa87
S: ba4ca2e1e87ec1a38ad891f2aec5d44f300a52bd 127.0.0.1:7005
replicates 033d0dbea959fb15a3a27552d18dbb623985a180
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 127.0.0.1:7000)
M: 033d0dbea959fb15a3a27552d18dbb623985a180 127.0.0.1:7000
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 095f3d47fa5788ddd94a4c962f7e02fe79a6e8b1 127.0.0.1:7004
slots: (0 slots) slave
replicates 17fccfc10108c81301a501ec8eaccfb57541fa87
S: ba4ca2e1e87ec1a38ad891f2aec5d44f300a52bd 127.0.0.1:7005
slots: (0 slots) slave
replicates 033d0dbea959fb15a3a27552d18dbb623985a180
S: 89b452f2d5553bef131152932c5725429b0c4aa1 127.0.0.1:7003
slots: (0 slots) slave
replicates 77812723f46f25191eeed04a42303ae83bec66be
M: 17fccfc10108c81301a501ec8eaccfb57541fa87 127.0.0.1:7002
slots:10923-16383 (5461 slots) master
1 additional replica(s)
M: 77812723f46f25191eeed04a42303ae83bec66be 127.0.0.1:7001
slots:5461-10922 (5462 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
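Note: redis-trib.rb ships with Redis 3.x/4.x. From Redis 5 onward the same functionality is built into redis-cli, so steps 5-7 (the Ruby setup) can be skipped entirely:
$ redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 \
  127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 --cluster-replicas 1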
9. Verify from a client
$ ./redis-cli -c -p 7000
127.0.0.1:7000> set name lin
-> Redirected to slot [5798] located at 127.0.0.1:7001
OK
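Reading the key back shows the redirection from the other side; with -c, redis-cli follows the MOVED reply automatically (continuing the session above):
127.0.0.1:7001> get name
"lin"
$ ./redis-cli -c -p 7000 get name
-> Redirected to slot [5798] located at 127.0.0.1:7001
"lin"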
10. Check the cluster state
127.0.0.1:7001> cluster nodes
17fccfc10108c81301a501ec8eaccfb57541fa87 127.0.0.1:7002@17002 master - 0 1535691791595 3 connected 10923-16383
89b452f2d5553bef131152932c5725429b0c4aa1 127.0.0.1:7003@17003 slave 77812723f46f25191eeed04a42303ae83bec66be 0 1535691791595 4 connected
77812723f46f25191eeed04a42303ae83bec66be 127.0.0.1:7001@17001 myself,master - 0 1535691790000 2 connected 5461-10922
ba4ca2e1e87ec1a38ad891f2aec5d44f300a52bd 127.0.0.1:7005@17005 slave 033d0dbea959fb15a3a27552d18dbb623985a180 0 1535691790000 6 connected
033d0dbea959fb15a3a27552d18dbb623985a180 127.0.0.1:7000@17000 master - 0 1535691791393 1 connected 0-5460
095f3d47fa5788ddd94a4c962f7e02fe79a6e8b1 127.0.0.1:7004@17004 slave 17fccfc10108c81301a501ec8eaccfb57541fa87 0 1535691790390 5 connected
127.0.0.1:7001> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:2
cluster_stats_messages_ping_sent:429
cluster_stats_messages_pong_sent:440
cluster_stats_messages_meet_sent:1
cluster_stats_messages_sent:870
cluster_stats_messages_ping_received:436
cluster_stats_messages_pong_received:430
cluster_stats_messages_meet_received:4
cluster_stats_messages_received:870
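The slot [5798] seen in step 9 can be cross-checked with CLUSTER KEYSLOT, which reports which of the 16384 slots a key hashes to:
127.0.0.1:7001> cluster keyslot name
(integer) 5798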