Redis offers two complementary techniques, Redis Sentinel (automatic master/slave failover) and Redis Cluster (sharding), which together give Redis high availability, high performance, and horizontal scalability. This article is a hands-on walkthrough of each.
```shell
# The Docker bridge gateway IP is known to be 172.17.0.1

# Start the master node
docker run -it --name redis-6380 -p 6380:6379 redis
docker exec -it redis-6380 /bin/bash
redis-cli -h 172.17.0.1 -p 6380

# Start slave node 1
docker run -it --name redis-6381 -p 6381:6379 redis
docker exec -it redis-6381 /bin/bash
redis-cli -h 172.17.0.1 -p 6381
replicaof 172.17.0.1 6380

# Start slave node 2
docker run -it --name redis-6382 -p 6382:6379 redis
docker exec -it redis-6382 /bin/bash
redis-cli -h 172.17.0.1 -p 6382
replicaof 172.17.0.1 6380
```
You can then inspect the master's replication state. In the master's redis-cli, run:
```shell
> info Replication
# Replication
role:master
connected_slaves:2
slave0:ip=172.17.0.1,port=6379,state=online,offset=686,lag=0
slave1:ip=172.17.0.1,port=6379,state=online,offset=686,lag=1
master_replid:79187e2241015c2f8ed98ce68caafa765796dff2
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:686
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:686
```
From now on, writes to the master are replicated to the slaves automatically.
Running `replicaof no one` on a slave detaches it from replication and promotes it back to a standalone master.
```shell
docker network ls
docker network inspect bridge
```

`docker network inspect bridge` shows the gateway IP and each container's IP. Redis can then be reached either as `<gateway IP>:<mapped host port>` or as `<container IP>:6379`. The topology at this point: one master (6380) with two slaves (6381, 6382).
The `redis.conf` used is as follows:
```shell
# Default port is 6379; changed to 6390 here
port 6390
# Bind address: on an internal network you can bind 127.0.0.1 (or omit this line);
# 0.0.0.0 listens on all interfaces
bind 0.0.0.0
# Do not run as a daemon (required for the container's foreground process)
daemonize no
```
Change the listening port per node and recreate the Redis containers:
```shell
docker run -p 6390:6390 -v D:\develop\shell\docker\redis\conf6390:/usr/local/etc/redis --name redis-conf-6390 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6390 /bin/bash
redis-cli -h 172.17.0.1 -p 6390

docker run -p 6391:6391 -v D:\develop\shell\docker\redis\conf6391:/usr/local/etc/redis --name redis-conf-6391 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6391 /bin/bash
redis-cli -h 172.17.0.1 -p 6391
slaveof 172.17.0.1 6390

docker run -p 6392:6392 -v D:\develop\shell\docker\redis\conf6392:/usr/local/etc/redis --name redis-conf-6392 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6392 /bin/bash
redis-cli -h 172.17.0.1 -p 6392
slaveof 172.17.0.1 6390
```
Inspecting the master again shows that the slave ports it reports are now correct (the configured ports, rather than the container-internal 6379 seen earlier). In the master's redis-cli, run:
```shell
> info Replication
# Replication
role:master
connected_slaves:2
slave0:ip=172.17.0.1,port=6391,state=online,offset=84,lag=0
slave1:ip=172.17.0.1,port=6392,state=online,offset=84,lag=0
master_replid:ed2e513ceed2b48a272b97c674c99d82284342a1
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:84
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:84
```
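The `info Replication` output above is line-oriented text: a `#`-prefixed section header followed by `key:value` pairs. A minimal parser sketch (illustrative only; real clients such as redis-py already do this for you):

```python
def parse_info(text: str) -> dict:
    """Parse Redis INFO output into a flat dict, skipping '#' section headers."""
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blank line or section header like '# Replication'
        key, _, value = line.partition(":")
        result[key] = value
    return result

# Sample taken from the INFO output shown above
sample = """# Replication
role:master
connected_slaves:2
slave0:ip=172.17.0.1,port=6391,state=online,offset=84,lag=0
master_repl_offset:84"""

info = parse_info(sample)
print(info["role"], info["connected_slaves"])  # master 2
```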
Create `sentinel.conf` with the following content:
```shell
sentinel monitor bitkylin-master 172.17.0.1 6390 2
sentinel down-after-milliseconds bitkylin-master 5000
sentinel failover-timeout bitkylin-master 10000
sentinel parallel-syncs bitkylin-master 1
```
What these directives mean: Sentinel is told to monitor a master named bitkylin-master at 172.17.0.1:6390; marking it *objectively down* requires agreement from at least 2 Sentinels (the quorum). If the master fails to respond within 5 seconds, a Sentinel marks it *subjectively down*; once the quorum agrees, failover begins. The 10-second failover-timeout bounds the failover process (a failover that exceeds it is considered failed and may be retried later), and parallel-syncs limits how many slaves resynchronize with the new master at the same time.
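The subjective-down / objective-down decision described above can be sketched as follows. This is a simplified illustration only: real Sentinels also exchange votes, epochs, and leader elections, and the function names here are made up.

```python
DOWN_AFTER_MS = 5000   # sentinel down-after-milliseconds
QUORUM = 2             # last argument of 'sentinel monitor'

def subjectively_down(last_pong_ms: int, now_ms: int) -> bool:
    """One Sentinel's local view: no valid PING reply for more than
    down-after-milliseconds means the master is subjectively down (S_DOWN)."""
    return now_ms - last_pong_ms > DOWN_AFTER_MS

def objectively_down(sdown_votes: list[bool]) -> bool:
    """Failover may only start once at least QUORUM Sentinels report S_DOWN,
    making the master objectively down (O_DOWN)."""
    return sum(sdown_votes) >= QUORUM

now = 10_000
print(subjectively_down(last_pong_ms=3_000, now_ms=now))  # True: 7s of silence
print(objectively_down([True, True, False]))              # True: 2 of 3 agree
```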
Copy the configuration file into the Docker containers; two containers need it:
```shell
docker run -it --name redis-6490 redis
docker run -it --name redis-6491 redis
docker cp ./sentinel.conf dcbd015dbc0e:/data/sentinel.conf
docker cp ./sentinel.conf 7c8307730bcc:/data/sentinel.conf
```
Then start Sentinel inside each container:

```shell
redis-sentinel sentinel.conf
```
Now stop and restart the Redis containers at will: Sentinel detects the failures and performs master/slave failover and reconfiguration automatically, with no manual intervention.
Reminder on shell redirection: `>` overwrites the target file, `>>` appends to it.
The main change is appending the cluster settings; a sample configuration file:
```shell
# Default port is 6379; changed to 6390 here
port 6390
# Bind address: bind 127.0.0.1 on an internal network (or omit); 0.0.0.0 listens on all interfaces
bind 0.0.0.0
# Do not run as a daemon
daemonize no
# Cluster settings
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
```
Building on section 2, use the updated configuration files to create 6 containers. Note the additional cluster bus port mappings:
```shell
docker run -p 6390:6390 -p 16390:16390 -v D:\develop\shell\docker\redis\conf6390:/usr/local/etc/redis --name redis-conf-6390 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6390 /bin/bash
redis-cli -h 172.17.0.1 -p 6390

docker run -p 6391:6391 -p 16391:16391 -v D:\develop\shell\docker\redis\conf6391:/usr/local/etc/redis --name redis-conf-6391 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6391 /bin/bash
redis-cli -h 172.17.0.1 -p 6391

docker run -p 6392:6392 -p 16392:16392 -v D:\develop\shell\docker\redis\conf6392:/usr/local/etc/redis --name redis-conf-6392 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6392 /bin/bash
redis-cli -h 172.17.0.1 -p 6392

docker run -p 6393:6393 -p 16393:16393 -v D:\develop\shell\docker\redis\conf6393:/usr/local/etc/redis --name redis-conf-6393 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6393 /bin/bash
redis-cli -h 172.17.0.1 -p 6393

docker run -p 6394:6394 -p 16394:16394 -v D:\develop\shell\docker\redis\conf6394:/usr/local/etc/redis --name redis-conf-6394 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6394 /bin/bash
redis-cli -h 172.17.0.1 -p 6394

docker run -p 6395:6395 -p 16395:16395 -v D:\develop\shell\docker\redis\conf6395:/usr/local/etc/redis --name redis-conf-6395 redis redis-server /usr/local/etc/redis/redis.conf
docker exec -it redis-conf-6395 /bin/bash
redis-cli -h 172.17.0.1 -p 6395
```
```shell
> redis-cli --cluster create 172.17.0.1:6390 172.17.0.1:6391 172.17.0.1:6392 172.17.0.1:6393 172.17.0.1:6394 172.17.0.1:6395 --cluster-replicas 1

# Command output:
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.17.0.1:6394 to 172.17.0.1:6390
Adding replica 172.17.0.1:6395 to 172.17.0.1:6391
Adding replica 172.17.0.1:6393 to 172.17.0.1:6392
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: a9678b062663957e59bc3b4beb7be4366fa24adc 172.17.0.1:6390
   slots:[0-5460] (5461 slots) master
M: 41a4976431713cce936220fba8a230627d28d40c 172.17.0.1:6391
   slots:[5461-10922] (5462 slots) master
M: 1bf83414a12bad8f2e25dcea19ccea1c881d28c5 172.17.0.1:6392
   slots:[10923-16383] (5461 slots) master
S: 3d65eadd3321ef34c9413ae8f75d610c4228eda7 172.17.0.1:6393
   replicates 41a4976431713cce936220fba8a230627d28d40c
S: b604356698a5f211823ada4b45a97939744b1d57 172.17.0.1:6394
   replicates 1bf83414a12bad8f2e25dcea19ccea1c881d28c5
S: 2c1cc93221dc3830aa1eb28601ac27e22a6801cc 172.17.0.1:6395
   replicates a9678b062663957e59bc3b4beb7be4366fa24adc
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
```
```shell
>>> Performing Cluster Check (using node 172.17.0.1:6390)
M: a9678b062663957e59bc3b4beb7be4366fa24adc 172.17.0.1:6390
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: b604356698a5f211823ada4b45a97939744b1d57 172.17.0.1:6394
   slots: (0 slots) slave
   replicates 1bf83414a12bad8f2e25dcea19ccea1c881d28c5
M: 41a4976431713cce936220fba8a230627d28d40c 172.17.0.1:6391
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 3d65eadd3321ef34c9413ae8f75d610c4228eda7 172.17.0.1:6393
   slots: (0 slots) slave
   replicates 41a4976431713cce936220fba8a230627d28d40c
M: 1bf83414a12bad8f2e25dcea19ccea1c881d28c5 172.17.0.1:6392
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 2c1cc93221dc3830aa1eb28601ac27e22a6801cc 172.17.0.1:6395
   slots: (0 slots) slave
   replicates a9678b062663957e59bc3b4beb7be4366fa24adc
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
```
The cluster was created successfully.
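The slot ranges printed during creation (0-5460, 5461-10922, 10923-16383) come from dividing the fixed 16384 hash slots almost evenly across the three masters. A sketch of that arithmetic, which reproduces the same boundaries (redis-cli's actual implementation may differ in details):

```python
TOTAL_SLOTS = 16384  # fixed slot count in Redis Cluster

def allocate_slots(n_masters: int) -> list[tuple[int, int]]:
    """Split the slot space into n contiguous, near-equal ranges
    using a floating-point cursor, as redis-cli's output suggests."""
    per_node = TOTAL_SLOTS / n_masters
    ranges, cursor = [], 0.0
    for i in range(n_masters):
        first = round(cursor)
        last = round(cursor + per_node - 1)
        if i == n_masters - 1:
            last = TOTAL_SLOTS - 1  # ensure the final slot is always covered
        ranges.append((first, last))
        cursor += per_node
    return ranges

print(allocate_slots(3))  # [(0, 5460), (5461, 10922), (10923, 16383)]
```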
The cluster bus port, used for node-to-node communication, is the client port + 10000 (e.g. 6390 → 16390), which is why the container commands above map both ports.
The `cluster reset` command removes the current node from the cluster.
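Finally, which master serves a given key is computed client-side: `slot = CRC16(key) mod 16384`, where only the content of an optional `{hash tag}` is hashed, so related keys can be forced onto the same slot. A minimal sketch of the slot computation in Python (bitwise CRC16-CCITT/XMODEM, the variant Redis Cluster uses):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0x0000."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def keyslot(key: str) -> int:
    """Hash slot for a key; if a non-empty {hash tag} is present,
    only the tag content is hashed."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(keyslot("foo"))            # 12182 -> owned by the master holding 10923-16383
print(keyslot("{foo}.counter"))  # 12182 -> same slot, thanks to the hash tag
```

Because both keys land on slot 12182, they are served by the same master (the one that was assigned 10923-16383 above), so multi-key operations on them work inside the cluster.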