Recently several of our systems became partially unavailable because their Redis service went down. We therefore wanted to build a Redis cluster image that brings the Redis servers previously scattered everywhere under unified management, while guaranteeing high availability and automatic failover.
As most people know, Redis offers two kinds of clusters. One is Redis Sentinel, a high-availability cluster with only one master at any given time, where every instance holds the same data. The other is Redis Cluster, a distributed cluster with multiple masters at the same time and the data sharded across them. Given our requirements and the maturity of the technology itself, this walkthrough sets up Redis Sentinel.
A brief introduction:
The Redis Sentinel system is used to manage multiple Redis servers (instances). It performs three tasks: monitoring (it constantly checks whether the master and slaves are working as expected), notification (it can alert administrators or other programs via its API when a monitored instance misbehaves), and automatic failover (when a master fails, it promotes one of its slaves to master and reconfigures the other slaves to replicate from the new master).
The cluster as a whole consists of one master, N slaves, and M sentinels; this example uses 2 slaves and 3 sentinels.
First, add redis.conf:
##redis.conf
##redis-0, master by default
port $redis_port
##auth password, keep it identical across all instances
##command renaming disabled for now
##rename-command
##enable AOF, disable snapshotting
appendonly yes
#slaveof redis-master $master_port
slave-read-only yes
The instance is a master by default; removing the comment on the #slaveof line turns it into a slave. The master's hostname is hard-coded here as redis-master.
增長sentinel.conf
port $sentinel_port
dir "/tmp"
##the name, IP and port of the redis master this sentinel monitors; the trailing number is
##the minimum number of sentinels that must vote in agreement before a decision is made
sentinel monitor mymaster redis-master $master_port 2
##parallel-syncs controls how many slaves may sync against the new master at the same time
##during a failover; the smaller the number, the longer the failover takes
sentinel config-epoch mymaster 1
sentinel leader-epoch mymaster 1
sentinel current-epoch 1
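Once a sentinel is running with this file, you can sanity-check that the monitor line and quorum took effect by querying it with redis-cli. A minimal sketch, assuming the sentinel ends up listening on port 16381 as in the compose file further down:

# ask the sentinel what it knows about the monitored master
redis-cli -p 16381 sentinel master mymaster
# the reply includes the ip/port of the current master, plus fields such as
# quorum, num-slaves and num-other-sentinels that should match this setup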
增長啓動腳本,根據入參判斷啓動master,slave,sentinel服務器
cd /data
redis_role=$1
echo $redis_role
if [ $redis_role = "master" ] ; then
    echo "master"
    sed -i "s/\$redis_port/$redis_port/g" redis.conf
    redis-server /data/redis.conf
elif [ $redis_role = "slave" ] ; then
    echo "slave"
    sed -i "s/\$redis_port/$redis_port/g" redis.conf
    sed -i "s/#slaveof/slaveof/g" redis.conf
    sed -i "s/\$master_port/$master_port/g" redis.conf
    redis-server /data/redis.conf
elif [ $redis_role = "sentinel" ] ; then
    echo "sentinel"
    sed -i "s/\$sentinel_port/$sentinel_port/g" sentinel.conf
    sed -i "s/\$master_port/$master_port/g" sentinel.conf
    redis-sentinel /data/sentinel.conf
else
    echo "unknown role!"
fi
Here $redis_port, $master_port, and $sentinel_port are all taken from environment variables, passed in when the Docker container is started.
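As an illustration of what the substitutions produce: with redis_port=16380 and master_port=16379 and the slave role, the relevant lines of the rendered redis.conf would end up as (values here are just the ones used in this example):

port 16380
slaveof redis-master 16379
slave-read-only yes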
Write the Dockerfile:
FROM redis:3-alpine
MAINTAINER voidman <voidman>
COPY Shanghai /etc/localtime
COPY redis.conf /data/redis.conf
COPY sentinel.conf /data/sentinel.conf
COPY start.sh /data/start.sh
RUN chmod +x /data/start.sh
RUN chown redis:redis /data/*
ENTRYPOINT ["sh","/data/start.sh"]
CMD ["master"]
The redis:3-alpine image is chosen as the base because it is tiny, only about 9 MB. After setting the time zone and copying the configuration files in, the permissions and ownership are adjusted, since the base image runs under the redis user and group. The ENTRYPOINT and CMD combination makes the container start as a master by default.
After the build, the image is only about 15 MB.
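For reference, building and manually launching containers could look roughly like this; the image tag matches the one the slaves and sentinels reference in the compose file below, and --net=host mirrors the networking mode used there:

# build next to the Dockerfile, redis.conf, sentinel.conf, start.sh and the Shanghai zoneinfo file
docker build -t xxx.aliyun.com:5000/aegis-redis-cluster:1.0 .

# start a master on port 16379
docker run -d --net=host -e redis_port=16379 \
    xxx.aliyun.com:5000/aegis-redis-cluster:1.0 master

# a slave and a sentinel are started the same way, only with a different
# role argument and the extra master_port / sentinel_port variables
docker run -d --net=host -e redis_port=16380 -e master_port=16379 \
    xxx.aliyun.com:5000/aegis-redis-cluster:1.0 slave
docker run -d --net=host -e sentinel_port=16381 -e master_port=16379 \
    xxx.aliyun.com:5000/aegis-redis-cluster:1.0 sentinel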
The deployment uses the docker-compose format:
redis-master-host:
  environment:
    redis_port: '16379'
  labels:
    io.rancher.container.pull_image: always
  tty: true
  image: xxx.aliyun.com:5000/aegis-redis-ha:1.0
  stdin_open: true
  net: host
redis-slaves:
  environment:
    master_port: '16379'
    redis_port: '16380'
  labels:
    io.rancher.scheduler.affinity:container_label_soft_ne: name=slaves
    io.rancher.container.pull_image: always
    name: slaves
  tty: true
  command:
    - slave
  image: xxx.aliyun.com:5000/aegis-redis-cluster:1.0
  stdin_open: true
  net: host
redis-sentinels:
  environment:
    master_port: '16379'
    sentinel_port: '16381'
  labels:
    io.rancher.container.pull_image: always
    name: sentinels
    io.rancher.scheduler.affinity:container_label_ne: name=sentinels
  tty: true
  command:
    - sentinel
  image: xxx.aliyun.com:5000/aegis-redis-cluster:1.0
  stdin_open: true
  net: host
The master is started first, with port 16379 and host networking. The slaves are started next and become slaves of the master on 16379, with an anti-affinity scheduling policy that spreads them across hosts as much as possible; the sentinels are handled in the same way.
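Once the stack is up, replication can be verified directly with redis-cli. A quick check, assuming the ports above (add -a with the auth password if requirepass is enabled in redis.conf):

# on the master: should report role:master and connected_slaves:2
redis-cli -p 16379 info replication

# on a slave: should report role:slave and master_link_status:up
redis-cli -p 16380 info replication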
Java client test (snippet):
//initialization
Set<String> sentinels = new HashSet<String>(16);
sentinels.add("redis-sentinel1.aliyun.com:16381");
sentinels.add("redis-sentinel2.aliyun.com:16381");
sentinels.add("redis-sentinel3.aliyun.com:16381");
GenericObjectPoolConfig config = new GenericObjectPoolConfig();
config.setBlockWhenExhausted(true);
config.setMaxWaitMillis(1000L);
config.setMaxIdle(25);
config.setMaxTotal(32);
jedisPool = new JedisSentinelPool("mymaster", sentinels, config);
//keep reading and writing
while (true) {
    AegisRedis.set("testSentinel", "ok");
    System.err.println(AegisRedis.get("testSentinel"));
    Thread.sleep(3000);
}
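JedisSentinelPool asks the sentinels where the current master is and then connects to it directly. The same lookup can be reproduced from the shell; a minimal sketch, assuming one of the sentinel addresses above:

# returns the ip and port of the current master named "mymaster"
redis-cli -h redis-sentinel1.aliyun.com -p 16381 sentinel get-master-addr-by-name mymaster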
Now kill one sentinel; the client logs:
嚴重: Lost connection to Sentinel at redis-sentinel2.aliyun.com:16381. Sleeping 5000ms and retrying.
Reads and writes continue normally. Even after killing all the sentinels, reads and writes still work while the client keeps trying to reconnect to the sentinels. This shows that the sentinels are only needed when re-electing a master and performing failover; once a master has been chosen, Redis keeps running normally even if every sentinel is down.
However, if you try to re-create the JedisSentinelPool at that point, it fails with:
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: All sentinels down, cannot determine where is mymaster master is running...
The sentinels do not need to be configured with each other's addresses: each one publishes its own ip, port and other information on the __sentinel__:hello channel of the master and slaves it monitors, and by subscribing to the same channel every sentinel maintains its own list of known sentinels.
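You can watch this discovery traffic, or simply ask a sentinel which peers it already knows about. A small sketch, assuming the ports used in this article:

# watch the hello messages the sentinels publish through the master
redis-cli -p 16379 subscribe __sentinel__:hello

# or list the other sentinels a given sentinel has discovered for mymaster
redis-cli -p 16381 sentinel sentinels mymaster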
Now kill one slave. The client is not affected at all and does not even notice; the master logs the lost connection:
6:M 14 Apr 16:31:33.698 # Connection with slave ip_address:16380 lost.
The sentinels log it as well:
7:X 14 Apr 16:30:39.852 # -sdown slave ip_address:16380 ip_address 16380 @ mymaster ip_address 16379
7:X 14 Apr 16:32:03.786 # +sdown slave ip_address:16380 ip_address 16380 @ mymaster ip_address 16379
Now bring that slave back:
9:S 14 Apr 16:36:57.441 * Connecting to MASTER redis-master:16379
9:S 14 Apr 16:36:57.449 * MASTER <-> SLAVE sync started
9:S 14 Apr 16:36:57.449 * Non blocking connect for SYNC fired the event.
9:S 14 Apr 16:36:57.449 * Master replied to PING, replication can continue...
9:S 14 Apr 16:36:57.449 * Partial resynchronization not possible (no cached master)
9:S 14 Apr 16:36:57.450 * Full resync from master: 0505a8e1049095ce597a137ae1161ed4727533d3:84558
9:S 14 Apr 16:36:57.462 * SLAVE OF ip_address:16379 enabled (user request from 'id=3 addr=ip_address2:57122 fd=10 name=sentinel-11d82028-cmd age=0 idle=0 flags=x db=0 sub=0 psub=0 multi=3 qbuf=0 qbuf-free=32768 obl=36 oll=0 omem=0 events=rw cmd=exec')
9:S 14 Apr 16:36:57.462 # CONFIG REWRITE executed with success.
9:S 14 Apr 16:36:58.451 * Connecting to MASTER ip_address:16379
9:S 14 Apr 16:36:58.451 * MASTER <-> SLAVE sync started
9:S 14 Apr 16:36:58.451 * Non blocking connect for SYNC fired the event.
9:S 14 Apr 16:36:58.451 * Master replied to PING, replication can continue...
9:S 14 Apr 16:36:58.451 * Partial resynchronization not possible (no cached master)
9:S 14 Apr 16:36:58.453 * Full resync from master: 0505a8e1049095ce597a137ae1161ed4727533d3:84721
9:S 14 Apr 16:36:58.532 * MASTER <-> SLAVE sync: receiving 487 bytes from master
9:S 14 Apr 16:36:58.532 * MASTER <-> SLAVE sync: Flushing old data
9:S 14 Apr 16:36:58.532 * MASTER <-> SLAVE sync: Loading DB in memory
9:S 14 Apr 16:36:58.532 * MASTER <-> SLAVE sync: Finished with success
9:S 14 Apr 16:36:58.537 * Background append only file rewriting started by pid 12
9:S 14 Apr 16:36:58.563 * AOF rewrite child asks to stop sending diffs.
12:C 14 Apr 16:36:58.563 * Parent agreed to stop sending diffs. Finalizing AOF...
12:C 14 Apr 16:36:58.563 * Concatenating 0.00 MB of AOF diff received from parent.
12:C 14 Apr 16:36:58.563 * SYNC append only file rewrite performed
12:C 14 Apr 16:36:58.564 * AOF rewrite: 0 MB of memory used by copy-on-write
9:S 14 Apr 16:36:58.652 * Background AOF rewrite terminated with success
9:S 14 Apr 16:36:58.653 * Residual parent diff successfully flushed to the rewritten AOF (0.00 MB)
9:S 14 Apr 16:36:58.653 * Background AOF rewrite finished successfully
It immediately restores its data from the master and ends up consistent again.
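A quick way to confirm the recovered slave has caught up, assuming it still listens on 16380:

# master_link_status should read "up" once the full resync has finished
redis-cli -p 16380 info replication | grep -E 'role|master_link_status'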
Next, kill the master; the client immediately starts throwing:
Caused by: redis.clients.jedis.exceptions.JedisConnectionException: java.net.ConnectException: Connection refused
The sentinels now notice the situation. A sentinel first subjectively decides that the master (ip_address 16379) is down, then asks the other sentinels whether they think so too. Once 2 sentinels agree that the master is down (the 2 is the quorum configured earlier in sentinel.conf; it is generally recommended to use more than half of the total number of sentinels), the master is objectively declared down. A new election round starts among the sentinels, the slave at ip_address:16380 is selected and promoted, the failover runs to completion and traffic switches to the new master, and the remaining slaves are reconfigured to follow it. The detailed log is below; note that while the election was in progress the client was briefly unable to connect.
13:X 14 Apr 16:40:36.162 # +sdown master mymaster ip_address 16379
13:X 14 Apr 16:40:36.233 # +odown master mymaster ip_address 16379 #quorum 2/2
13:X 14 Apr 16:40:36.233 # +new-epoch 10
13:X 14 Apr 16:40:36.233 # +try-failover master mymaster ip_address 16379
13:X 14 Apr 16:40:36.238 # +vote-for-leader 0a632ec0550401e66486846b521ad2de8c345695 10
13:X 14 Apr 16:40:36.249 # ip_address2:16381 voted for 0a632ec0550401e66486846b521ad2de8c345695 10
13:X 14 Apr 16:40:36.261 # ip_address3:16381 voted for 4e590c09819a793faf1abf185a0d0db07dc89f6a 10
13:X 14 Apr 16:40:36.309 # +elected-leader master mymaster ip_address 16379
13:X 14 Apr 16:40:36.309 # +failover-state-select-slave master mymaster ip_address 16379
13:X 14 Apr 16:40:36.376 # +selected-slave slave ip_address:16380 ip_address 16380 @ mymaster ip_address 16379
13:X 14 Apr 16:40:36.376 * +failover-state-send-slaveof-noone slave ip_address:16380 ip_address 16380 @ mymaster ip_address 16379
13:X 14 Apr 16:40:36.459 * +failover-state-wait-promotion slave ip_address:16380 ip_address 16380 @ mymaster ip_address 16379
13:X 14 Apr 16:40:37.256 # +promoted-slave slave ip_address:16380 ip_address 16380 @ mymaster ip_address 16379
13:X 14 Apr 16:40:37.256 # +failover-state-reconf-slaves master mymaster ip_address 16379
13:X 14 Apr 16:40:37.303 * +slave-reconf-sent slave ip_address3:16380 ip_address3 16380 @ mymaster ip_address 16379
13:X 14 Apr 16:40:38.288 * +slave-reconf-inprog slave ip_address3:16380 ip_address3 16380 @ mymaster ip_address 16379
13:X 14 Apr 16:40:38.289 * +slave-reconf-done slave ip_address3:16380 ip_address3 16380 @ mymaster ip_address 16379
13:X 14 Apr 16:40:38.378 * +slave-reconf-sent slave ip_address2:16380 ip_address2 16380 @ mymaster ip_address 16379
13:X 14 Apr 16:40:38.436 # -odown master mymaster ip_address 16379
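After the failover you can double-check who the sentinels now consider the master, and that the promoted node really reports itself as one. A quick sketch using the ports from this setup:

# the sentinels should now return the address of the promoted slave (port 16380)
redis-cli -p 16381 sentinel get-master-addr-by-name mymaster

# the promoted node should report role:master
redis-cli -p 16380 info replication | grep role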