redis-cluster

Environment:
    CentOS 7.3
    IP: 10.36.8.106 (this is a single-host test, so bind is not set and everything uses 127.0.0.1; in a real deployment you must add bind with the host's own address)
Install:
    yum -y install redis redis-trib

Configuration:
    mkdir -p /opt/redis-cluster/700{0,1,2,3,4,5}
    cd /opt/redis-cluster/7000
    vim redis.conf
        port 7000
        daemonize yes
        cluster-enabled yes
        cluster-config-file nodes.conf
        cluster-node-timeout 5000
        appendonly yes
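    For a real multi-host deployment (see the Environment note above), each node's redis.conf would also need something along these lines; a sketch, using this host's own address:
        bind 10.36.8.106
        protected-mode no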

    # Copy the config to the other nodes and change the port in each
    for i in `seq 1 5`;do cp redis.conf ../700$i/redis.conf;sed -i "s/7000/700$i/" ../700$i/redis.conf;done

Startup test:
    cd /opt/redis-cluster
    # Without daemonize in the config files, batch-starting like this hangs on the first instance
    for dir in `ls -r`;do cd /opt/redis-cluster/$dir;redis-server redis.conf;done
    # Add daemonize so each instance runs as a daemon, then batch-start again
    for dir in `ls -r`;do sed -i "/port/a\daemonize yes" /opt/redis-cluster/$dir/redis.conf;done
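    # Then batch-start again and confirm all six instances are listening (a quick sketch of the check)
    cd /opt/redis-cluster
    for dir in `ls -r`;do cd /opt/redis-cluster/$dir;redis-server redis.conf;done
    netstat -tnlp | grep -E ':700[0-5]'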

Create the cluster:
    [root@host-192-168-1-100 7000]# redis-trib create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005   
    >>> Creating cluster
    >>> Performing hash slots allocation on 6 nodes...
    Using 3 masters:
    127.0.0.1:7000
    127.0.0.1:7001
    127.0.0.1:7002
    Adding replica 127.0.0.1:7003 to 127.0.0.1:7000
    Adding replica 127.0.0.1:7004 to 127.0.0.1:7001
    Adding replica 127.0.0.1:7005 to 127.0.0.1:7002
    M: bc37b785100a2fe0b4575c977cb587908b15f2d6 127.0.0.1:7000
       slots:0-5460 (5461 slots) master
    M: 0b1580a5305041bde69108e438653e1bfccefed0 127.0.0.1:7001
       slots:5461-10922 (5462 slots) master
    M: 575f24b2fe784e2b6d18591e90ff7e760dc272df 127.0.0.1:7002
       slots:10923-16383 (5461 slots) master
    S: 3df3a3bfd7489854d3dcbd2549e17639b6aa049c 127.0.0.1:7003
       replicates bc37b785100a2fe0b4575c977cb587908b15f2d6
    S: 02f0216b114632f4d25dcc742693c814e1bdfdd4 127.0.0.1:7004
       replicates 0b1580a5305041bde69108e438653e1bfccefed0
    S: 00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7005
       replicates 575f24b2fe784e2b6d18591e90ff7e760dc272df
    Can I set the above configuration? (type 'yes' to accept): y
    You must type the full word yes here; anything else (such as y) makes the step fail
    [root@host-192-168-1-100 7000]# redis-trib create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
    >>> Creating cluster
    >>> Performing hash slots allocation on 6 nodes...
    Using 3 masters:
    127.0.0.1:7000
    127.0.0.1:7001
    127.0.0.1:7002
    Adding replica 127.0.0.1:7003 to 127.0.0.1:7000
    Adding replica 127.0.0.1:7004 to 127.0.0.1:7001
    Adding replica 127.0.0.1:7005 to 127.0.0.1:7002
    M: bc37b785100a2fe0b4575c977cb587908b15f2d6 127.0.0.1:7000
       slots:0-5460 (5461 slots) master
    M: 0b1580a5305041bde69108e438653e1bfccefed0 127.0.0.1:7001
       slots:5461-10922 (5462 slots) master
    M: 575f24b2fe784e2b6d18591e90ff7e760dc272df 127.0.0.1:7002
       slots:10923-16383 (5461 slots) master
    S: 3df3a3bfd7489854d3dcbd2549e17639b6aa049c 127.0.0.1:7003
       replicates bc37b785100a2fe0b4575c977cb587908b15f2d6
    S: 02f0216b114632f4d25dcc742693c814e1bdfdd4 127.0.0.1:7004
       replicates 0b1580a5305041bde69108e438653e1bfccefed0
    S: 00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7005
       replicates 575f24b2fe784e2b6d18591e90ff7e760dc272df
    Can I set the above configuration? (type 'yes' to accept): yes
    >>> Nodes configuration updated
    >>> Assign a different config epoch to each node
    >>> Sending CLUSTER MEET messages to join the cluster
    Waiting for the cluster to join...
    >>> Performing Cluster Check (using node 127.0.0.1:7000)
    M: bc37b785100a2fe0b4575c977cb587908b15f2d6 127.0.0.1:7000
       slots:0-5460 (5461 slots) master
       1 additional replica(s)
    M: 0b1580a5305041bde69108e438653e1bfccefed0 127.0.0.1:7001
       slots:5461-10922 (5462 slots) master
       1 additional replica(s)
    M: 575f24b2fe784e2b6d18591e90ff7e760dc272df 127.0.0.1:7002
       slots:10923-16383 (5461 slots) master
       1 additional replica(s)
    S: 3df3a3bfd7489854d3dcbd2549e17639b6aa049c 127.0.0.1:7003
       slots: (0 slots) slave
       replicates bc37b785100a2fe0b4575c977cb587908b15f2d6
    S: 02f0216b114632f4d25dcc742693c814e1bdfdd4 127.0.0.1:7004
       slots: (0 slots) slave
       replicates 0b1580a5305041bde69108e438653e1bfccefed0
    S: 00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7005
       slots: (0 slots) slave
       replicates 575f24b2fe784e2b6d18591e90ff7e760dc272df
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.

    Check the cluster state (you can also use redis-trib check 127.0.0.1:7000):
    [root@host-192-168-1-100 7000]# redis-cli -p 7000 cluster nodes 
    0b1580a5305041bde69108e438653e1bfccefed0 127.0.0.1:7001 master - 0 1524020822039 2 connected 5461-10922
    575f24b2fe784e2b6d18591e90ff7e760dc272df 127.0.0.1:7002 master - 0 1524020823042 3 connected 10923-16383
    bc37b785100a2fe0b4575c977cb587908b15f2d6 127.0.0.1:7000 myself,master - 0 0 1 connected 0-5460
    3df3a3bfd7489854d3dcbd2549e17639b6aa049c 127.0.0.1:7003 slave bc37b785100a2fe0b4575c977cb587908b15f2d6 0 1524020822540 4 connected
    02f0216b114632f4d25dcc742693c814e1bdfdd4 127.0.0.1:7004 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524020823543 5 connected
    00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7005 slave 575f24b2fe784e2b6d18591e90ff7e760dc272df 0 1524020821537 6 connected

Verify by adding data:
    You must connect with -c (cluster mode), otherwise setting a key returns an error (connecting via 7000, the hash puts the key directly on 7002)
    [root@host-192-168-1-100 ~]# redis-cli -p 7000
    127.0.0.1:7000> 
    127.0.0.1:7000> 
    127.0.0.1:7000> set foo bar
    (error) MOVED 12182 127.0.0.1:7002

    [root@host-192-168-1-100 ~]# redis-cli -c -p 7000
    127.0.0.1:7000> set foo bar
    -> Redirected to slot [12182] located at 127.0.0.1:7002
    OK
    127.0.0.1:7002> 

    Redis Cluster does not use consistent hashing; instead it introduces the concept of hash slots.
    The cluster has 16384 hash slots. Each key is run through CRC16 and the result taken modulo 16384 to decide which slot it belongs to; every node is responsible for a subset of the slots.
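    For example, any node can tell you which slot a key hashes to; the foo key used above maps to slot 12182:
        redis-cli -p 7000 cluster keyslot foo
        (integer) 12182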


Manual master/slave switch (connect to the slave you want to promote):
    redis-cli -p 7005
    cluster failover
    This performs the switch proactively, without losing data.
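    A quick way to confirm the swap (sketch): check the roles of 7005 and its former master 7002 in the cluster view:
        redis-cli -p 7000 cluster nodes | grep -E '700[25]'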

Add a new node:
    Join a freshly started instance to the cluster as a slave of a master (drop --master-id and the node ID after it to have a master assigned at random; drop --slave as well to add the node as a new master; variants are sketched after the command below)
    redis-trib add-node --slave --master-id 00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7006 127.0.0.1:7000
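    Variants (a sketch; both forms are used later in this doc): drop --master-id and the ID to have a master picked at random, or drop --slave as well to add the node as a new, empty master:
        redis-trib add-node --slave 127.0.0.1:7006 127.0.0.1:7000
        redis-trib add-node 127.0.0.1:7006 127.0.0.1:7000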

Repoint a slave to a different master:
    A node that is already a slave can change which master it belongs to on its own.
    Connect to the slave (some posts say the new master must be empty; testing shows the switch also works when it is not)
    redis-cli -c -p 7006
    cluster replicate 00a1138c9e4358f5bbd734c6f194bed04a37e999

    Insert data:
        for i in `seq 1000`;do redis-cli -c -p 7001 set foot$i $i ;done
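        A rough check of how the 1000 keys spread across the shards (sketch; 7001, 7003 and 7005 are the masters at this point in the test):
        for p in 7001 7003 7005;do echo -n "$p: ";redis-cli -p $p dbsize;done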

    Switch again
    [root@host-192-168-1-100 7007]# redis-cli -c -p 7007
    127.0.0.1:7007> 
    127.0.0.1:7007> cluster replicate 0b1580a5305041bde69108e438653e1bfccefed0
    OK
    The switch succeeds; the slave's master ID now points to the new master
    [root@host-192-168-1-100 7007]# 
    [root@host-192-168-1-100 7007]# redis-cli -c -p 7000 cluster nodes
    22628a3a484345de1ae0323b26c679eef5688cf8 127.0.0.1:7007 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524111358911 8 connected
    02f0216b114632f4d25dcc742693c814e1bdfdd4 127.0.0.1:7004 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524111357909 2 connected
    575f24b2fe784e2b6d18591e90ff7e760dc272df 127.0.0.1:7002 slave 00a1138c9e4358f5bbd734c6f194bed04a37e999 0 1524111359413 8 connected
    00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7005 master - 0 1524111359413 8 connected 10923-16383
    790b56a8774e24455f6da822aae1c06c8898d45b 127.0.0.1:7000 myself,slave 3df3a3bfd7489854d3dcbd2549e17639b6aa049c 0 0 0 connected
    3df3a3bfd7489854d3dcbd2549e17639b6aa049c 127.0.0.1:7003 master - 0 1524111357909 7 connected 0-5460
    018f59aedc474b8589674ff9454269dbfdefaa22 127.0.0.1:7006 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524111358409 8 connected
    0b1580a5305041bde69108e438653e1bfccefed0 127.0.0.1:7001 master - 0 1524111359413 2 connected 5461-10922

    After switching to the new master, the slave's data is consistent with that master.
    redis-cli -c -p 7005 keys "*" |awk '{print  $0}' |sort -n |uniq -c
    *** With a -c connection, a GET for a given key is redirected to whichever master holds it, so the value is returned regardless of which node you connect to; a SET likewise places the key in the slot chosen by CRC16 mod 16384

Remove a node from the cluster and rejoin it:
    Removing a node also shuts the process down (a non-empty master cannot be removed, its slots must be resharded away first; a slave can be removed directly)
    redis-trib del-node  127.0.0.1:7000 "455a070d53f205d89de8c252ff61997b04852976"
    When restarting the node to rejoin the previous cluster, the old nodes.conf still records the node's former cluster identity, so delete nodes.conf before restarting.

    Joining without deleting nodes.conf fails:
    redis-trib add-node --slave 127.0.0.1:7007 127.0.0.1:7000
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    Automatically selected master 127.0.0.1:7005
    [ERR] Node 127.0.0.1:7007 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.

    Stop the node, delete nodes.conf, start it again, then rejoin; this time it succeeds
    netstat -tnlp |grep 7007 |awk '{print $NF}'|cut -d"/" -f1|sort -n |uniq
    kill -INT 30523
    rm -f nodes.conf 
    redis-server redis.conf 
    redis-trib add-node --slave 127.0.0.1:7007 127.0.0.1:7000
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    Automatically selected master 127.0.0.1:7005
    >>> Send CLUSTER MEET to node 127.0.0.1:7007 to make it join the cluster.
    Waiting for the cluster to join.
    >>> Configure node as replica of 127.0.0.1:7005.
    [OK] New node added correctly.

    [root@host-192-168-1-100 7007]# redis-cli -c -p 7000 cluster nodes
    22628a3a484345de1ae0323b26c679eef5688cf8 127.0.0.1:7007 slave 00a1138c9e4358f5bbd734c6f194bed04a37e999 0 1524110722826 8 connected
    02f0216b114632f4d25dcc742693c814e1bdfdd4 127.0.0.1:7004 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524110724330 2 connected
    575f24b2fe784e2b6d18591e90ff7e760dc272df 127.0.0.1:7002 slave 00a1138c9e4358f5bbd734c6f194bed04a37e999 0 1524110723829 8 connected
    00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7005 master - 0 1524110722826 8 connected 10923-16383
    790b56a8774e24455f6da822aae1c06c8898d45b 127.0.0.1:7000 myself,slave 3df3a3bfd7489854d3dcbd2549e17639b6aa049c 0 0 0 connected
    3df3a3bfd7489854d3dcbd2549e17639b6aa049c 127.0.0.1:7003 master - 0 1524110723328 7 connected 0-5460
    018f59aedc474b8589674ff9454269dbfdefaa22 127.0.0.1:7006 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524110722325 8 connected
    0b1580a5305041bde69108e438653e1bfccefed0 127.0.0.1:7001 master - 0 1524110723829 2 connected 5461-10922


For failover drills or upgrades, you can promote a slave first by running cluster failover on that slave.
Scale out, add a node (start a new node 7008; add-node itself was covered above, here the node joins as a master):
    [root@host-192-168-1-100 7008]# redis-server redis.conf 
    [root@host-192-168-1-100 7008]# redis-cli -p 7008 cluster nodes
    29b6f39557e3c4da1505545de0bd8bf0abb6a5bb :7008 myself,master - 0 0 0 connected
    [root@host-192-168-1-100 7008]# 
    [root@host-192-168-1-100 7008]# redis-cli -p 7005 cluster nodes 
    02f0216b114632f4d25dcc742693c814e1bdfdd4 127.0.0.1:7004 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524118789706 5 connected
    0b1580a5305041bde69108e438653e1bfccefed0 127.0.0.1:7001 master - 0 1524118788703 2 connected 5461-10922
    00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7005 myself,master - 0 0 8 connected 10923-16383
    22628a3a484345de1ae0323b26c679eef5688cf8 127.0.0.1:7007 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524118788201 8 connected
    790b56a8774e24455f6da822aae1c06c8898d45b 127.0.0.1:7000 slave 3df3a3bfd7489854d3dcbd2549e17639b6aa049c 0 1524118788201 7 connected
    018f59aedc474b8589674ff9454269dbfdefaa22 127.0.0.1:7006 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524118789205 8 connected
    575f24b2fe784e2b6d18591e90ff7e760dc272df 127.0.0.1:7002 slave 00a1138c9e4358f5bbd734c6f194bed04a37e999 0 1524118790206 8 connected
    3df3a3bfd7489854d3dcbd2549e17639b6aa049c 127.0.0.1:7003 master - 0 1524118789205 7 connected 0-5460
    [root@host-192-168-1-100 7008]# 
    [root@host-192-168-1-100 7008]# redis-trib add-node 127.0.0.1:7008 127.0.0.1:7005
    >>> Adding node 127.0.0.1:7008 to cluster 127.0.0.1:7005
    >>> Performing Cluster Check (using node 127.0.0.1:7005)
    M: 00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7005
       slots:10923-16383 (5461 slots) master
       1 additional replica(s)
    S: 02f0216b114632f4d25dcc742693c814e1bdfdd4 127.0.0.1:7004
       slots: (0 slots) slave
       replicates 0b1580a5305041bde69108e438653e1bfccefed0
    M: 0b1580a5305041bde69108e438653e1bfccefed0 127.0.0.1:7001
       slots:5461-10922 (5462 slots) master
       3 additional replica(s)
    S: 22628a3a484345de1ae0323b26c679eef5688cf8 127.0.0.1:7007
       slots: (0 slots) slave
       replicates 0b1580a5305041bde69108e438653e1bfccefed0
    S: 790b56a8774e24455f6da822aae1c06c8898d45b 127.0.0.1:7000
       slots: (0 slots) slave
       replicates 3df3a3bfd7489854d3dcbd2549e17639b6aa049c
    S: 018f59aedc474b8589674ff9454269dbfdefaa22 127.0.0.1:7006
       slots: (0 slots) slave
       replicates 0b1580a5305041bde69108e438653e1bfccefed0
    S: 575f24b2fe784e2b6d18591e90ff7e760dc272df 127.0.0.1:7002
       slots: (0 slots) slave
       replicates 00a1138c9e4358f5bbd734c6f194bed04a37e999
    M: 3df3a3bfd7489854d3dcbd2549e17639b6aa049c 127.0.0.1:7003
       slots:0-5460 (5461 slots) master
       1 additional replica(s)
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    >>> Send CLUSTER MEET to node 127.0.0.1:7008 to make it join the cluster.

    [OK] New node added correctly.
    [root@host-192-168-1-100 7008]# 
    [root@host-192-168-1-100 7008]# redis-cli -p 7005 cluster nodes                  
    29b6f39557e3c4da1505545de0bd8bf0abb6a5bb 127.0.0.1:7008 master - 0 1524118842819 0 connected
    02f0216b114632f4d25dcc742693c814e1bdfdd4 127.0.0.1:7004 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524118842319 5 connected
    0b1580a5305041bde69108e438653e1bfccefed0 127.0.0.1:7001 master - 0 1524118843820 2 connected 5461-10922
    00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7005 myself,master - 0 0 8 connected 10923-16383
    22628a3a484345de1ae0323b26c679eef5688cf8 127.0.0.1:7007 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524118843320 8 connected
    790b56a8774e24455f6da822aae1c06c8898d45b 127.0.0.1:7000 slave 3df3a3bfd7489854d3dcbd2549e17639b6aa049c 0 1524118842318 7 connected
    018f59aedc474b8589674ff9454269dbfdefaa22 127.0.0.1:7006 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524118843320 8 connected
    575f24b2fe784e2b6d18591e90ff7e760dc272df 127.0.0.1:7002 slave 00a1138c9e4358f5bbd734c6f194bed04a37e999 0 1524118842319 8 connected
    3df3a3bfd7489854d3dcbd2549e17639b6aa049c 127.0.0.1:7003 master - 0 1524118844322 7 connected 0-5460

    At this point the node has only joined the cluster; no slots are assigned to it yet, so it cannot serve data
    [root@host-192-168-1-100 7008]# redis-cli -p 7005 cluster nodes  
    29b6f39557e3c4da1505545de0bd8bf0abb6a5bb 127.0.0.1:7008 master - 0 1524119218246 9 connected 0-999
    02f0216b114632f4d25dcc742693c814e1bdfdd4 127.0.0.1:7004 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524119219249 5 connected
    0b1580a5305041bde69108e438653e1bfccefed0 127.0.0.1:7001 master - 0 1524119219249 2 connected 5461-10922
    00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7005 myself,master - 0 0 8 connected 10923-16383
    22628a3a484345de1ae0323b26c679eef5688cf8 127.0.0.1:7007 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524119220249 8 connected
    790b56a8774e24455f6da822aae1c06c8898d45b 127.0.0.1:7000 slave 3df3a3bfd7489854d3dcbd2549e17639b6aa049c 0 1524119219749 7 connected
    018f59aedc474b8589674ff9454269dbfdefaa22 127.0.0.1:7006 slave 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb 0 1524119218247 9 connected
    575f24b2fe784e2b6d18591e90ff7e760dc272df 127.0.0.1:7002 slave 00a1138c9e4358f5bbd734c6f194bed04a37e999 0 1524119218747 8 connected
    3df3a3bfd7489854d3dcbd2549e17639b6aa049c 127.0.0.1:7003 master - 0 1524119219749 7 connected 1000-5460
    [root@host-192-168-1-100 7008]# redis-trib reshard 127.0.0.1:7005   #assign slots
    >>> Performing Cluster Check (using node 127.0.0.1:7005)
    M: 00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7005
       slots:10923-16383 (5461 slots) master
       1 additional replica(s)
    M: 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb 127.0.0.1:7008
       slots:0-999 (1000 slots) master
       1 additional replica(s)
    S: 02f0216b114632f4d25dcc742693c814e1bdfdd4 127.0.0.1:7004
       slots: (0 slots) slave
       replicates 0b1580a5305041bde69108e438653e1bfccefed0
    M: 0b1580a5305041bde69108e438653e1bfccefed0 127.0.0.1:7001
       slots:5461-10922 (5462 slots) master
       2 additional replica(s)
    S: 22628a3a484345de1ae0323b26c679eef5688cf8 127.0.0.1:7007
       slots: (0 slots) slave
       replicates 0b1580a5305041bde69108e438653e1bfccefed0
    S: 790b56a8774e24455f6da822aae1c06c8898d45b 127.0.0.1:7000
       slots: (0 slots) slave
       replicates 3df3a3bfd7489854d3dcbd2549e17639b6aa049c
    S: 018f59aedc474b8589674ff9454269dbfdefaa22 127.0.0.1:7006
       slots: (0 slots) slave
       replicates 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
    S: 575f24b2fe784e2b6d18591e90ff7e760dc272df 127.0.0.1:7002
       slots: (0 slots) slave
       replicates 00a1138c9e4358f5bbd734c6f194bed04a37e999
    M: 3df3a3bfd7489854d3dcbd2549e17639b6aa049c 127.0.0.1:7003
       slots:1000-5460 (4461 slots) master
       1 additional replica(s)
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    How many slots do you want to move (from 1 to 16384)? 20    #how many slots to move
    What is the receiving node ID? 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb    #which node receives those slots
    Please enter all the source node IDs.                       
      Type 'all' to use all the nodes as source nodes for the hash slots.
      Type 'done' once you entered all the source nodes IDs.
    Source node #1:all                                            #all takes the slots from every master and gives them to the receiving node ID
    Ready to move 20 slots.                                        #done lets you name specific source masters to take the slots from
      Source nodes:
        M: 00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7005
       slots:10923-16383 (5461 slots) master
       1 additional replica(s)
        M: 0b1580a5305041bde69108e438653e1bfccefed0 127.0.0.1:7001
       slots:5461-10922 (5462 slots) master
       2 additional replica(s)
        M: 3df3a3bfd7489854d3dcbd2549e17639b6aa049c 127.0.0.1:7003
       slots:1000-5460 (4461 slots) master
       1 additional replica(s)
      Destination node:
        M: 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb 127.0.0.1:7008
       slots:0-999 (1000 slots) master
       1 additional replica(s)
      Resharding plan:
        Moving slot 5461 from 0b1580a5305041bde69108e438653e1bfccefed0
        Moving slot 5462 from 0b1580a5305041bde69108e438653e1bfccefed0
        Moving slot 5463 from 0b1580a5305041bde69108e438653e1bfccefed0
        Moving slot 5464 from 0b1580a5305041bde69108e438653e1bfccefed0
        Moving slot 5465 from 0b1580a5305041bde69108e438653e1bfccefed0
        Moving slot 5466 from 0b1580a5305041bde69108e438653e1bfccefed0
        Moving slot 5467 from 0b1580a5305041bde69108e438653e1bfccefed0
        Moving slot 5468 from 0b1580a5305041bde69108e438653e1bfccefed0
        Moving slot 10923 from 00a1138c9e4358f5bbd734c6f194bed04a37e999
        Moving slot 10924 from 00a1138c9e4358f5bbd734c6f194bed04a37e999
        Moving slot 10925 from 00a1138c9e4358f5bbd734c6f194bed04a37e999
        Moving slot 10926 from 00a1138c9e4358f5bbd734c6f194bed04a37e999
        Moving slot 10927 from 00a1138c9e4358f5bbd734c6f194bed04a37e999
        Moving slot 10928 from 00a1138c9e4358f5bbd734c6f194bed04a37e999
        Moving slot 10929 from 00a1138c9e4358f5bbd734c6f194bed04a37e999
        Moving slot 1000 from 3df3a3bfd7489854d3dcbd2549e17639b6aa049c
        Moving slot 1001 from 3df3a3bfd7489854d3dcbd2549e17639b6aa049c
        Moving slot 1002 from 3df3a3bfd7489854d3dcbd2549e17639b6aa049c
        Moving slot 1003 from 3df3a3bfd7489854d3dcbd2549e17639b6aa049c
        Moving slot 1004 from 3df3a3bfd7489854d3dcbd2549e17639b6aa049c
    Do you want to proceed with the proposed reshard plan (yes/no)? 
    Moving slot 5461 from 127.0.0.1:7001 to 127.0.0.1:7008: 
    Moving slot 5462 from 127.0.0.1:7001 to 127.0.0.1:7008: 
    Moving slot 5463 from 127.0.0.1:7001 to 127.0.0.1:7008: 
    Moving slot 5464 from 127.0.0.1:7001 to 127.0.0.1:7008: 
    Moving slot 5465 from 127.0.0.1:7001 to 127.0.0.1:7008: 
    Moving slot 5466 from 127.0.0.1:7001 to 127.0.0.1:7008: 
    Moving slot 5467 from 127.0.0.1:7001 to 127.0.0.1:7008: 
    Moving slot 5468 from 127.0.0.1:7001 to 127.0.0.1:7008: 
    Moving slot 10923 from 127.0.0.1:7005 to 127.0.0.1:7008: 
    Moving slot 10924 from 127.0.0.1:7005 to 127.0.0.1:7008: 
    Moving slot 10925 from 127.0.0.1:7005 to 127.0.0.1:7008: 
    Moving slot 10926 from 127.0.0.1:7005 to 127.0.0.1:7008: 
    Moving slot 10927 from 127.0.0.1:7005 to 127.0.0.1:7008: 
    Moving slot 10928 from 127.0.0.1:7005 to 127.0.0.1:7008: 
    Moving slot 10929 from 127.0.0.1:7005 to 127.0.0.1:7008: 
    Moving slot 1000 from 127.0.0.1:7003 to 127.0.0.1:7008: 
    Moving slot 1001 from 127.0.0.1:7003 to 127.0.0.1:7008: 
    Moving slot 1002 from 127.0.0.1:7003 to 127.0.0.1:7008: 
    Moving slot 1003 from 127.0.0.1:7003 to 127.0.0.1:7008: 
    Moving slot 1004 from 127.0.0.1:7003 to 127.0.0.1:7008: 

    If instead you want to move slots from one specific node to another:
    Please enter all the source node IDs.
      Type 'all' to use all the nodes as source nodes for the hash slots.
      Type 'done' once you entered all the source nodes IDs.
    Source node #1:
    At this prompt, enter the source node ID, press Enter, then type done and press Enter.
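    For example (a sketch using the 7003 node ID from this test):
    Source node #1:3df3a3bfd7489854d3dcbd2549e17639b6aa049c
    Source node #2:done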

    The first reshard moved 1000 slots using done, taking them from 7003; the later move of 20 took 5 from 7003, 8 from 7001 (5461-5468) and 7 from 7005 (10923-10929)
    [root@host-192-168-1-100 7008]# redis-cli -p 7005 cluster nodes  
    29b6f39557e3c4da1505545de0bd8bf0abb6a5bb 127.0.0.1:7008 master - 0 1524119466369 9 connected 0-1004 5461-5468 10923-10929
    02f0216b114632f4d25dcc742693c814e1bdfdd4 127.0.0.1:7004 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524119465867 5 connected
    0b1580a5305041bde69108e438653e1bfccefed0 127.0.0.1:7001 master - 0 1524119466871 2 connected 5469-10922
    00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7005 myself,master - 0 0 8 connected 10930-16383
    22628a3a484345de1ae0323b26c679eef5688cf8 127.0.0.1:7007 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524119466870 8 connected
    790b56a8774e24455f6da822aae1c06c8898d45b 127.0.0.1:7000 slave 3df3a3bfd7489854d3dcbd2549e17639b6aa049c 0 1524119467371 7 connected
    018f59aedc474b8589674ff9454269dbfdefaa22 127.0.0.1:7006 slave 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb 0 1524119467873 9 connected
    575f24b2fe784e2b6d18591e90ff7e760dc272df 127.0.0.1:7002 slave 00a1138c9e4358f5bbd734c6f194bed04a37e999 0 1524119467873 8 connected
    3df3a3bfd7489854d3dcbd2549e17639b6aa049c 127.0.0.1:7003 master - 0 1524119466369 7 connected 1005-5460

Scale in, remove a node:
    Prerequisite: all slots assigned to the master must be resharded away first, and all of its slaves moved off (if the slaves are to be removed too, reshard the master first, then remove both the master and the slave); a sketch follows.
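    In this test that means emptying 7008 first; one possible order (a sketch using the IDs from this log; the cluster may also migrate the replica on its own, as the later output shows):
        redis-cli -p 7006 cluster replicate 00a1138c9e4358f5bbd734c6f194bed04a37e999    #repoint 7008's slave to another master
        redis-trib reshard 127.0.0.1:7005                                               #move 7008's 15 slots back to another master
        redis-trib del-node 127.0.0.1:7000 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb     #then remove the now-empty 7008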

    [root@host-192-168-1-100 7008]# redis-cli -p 7005 cluster nodes  
    29b6f39557e3c4da1505545de0bd8bf0abb6a5bb 127.0.0.1:7008 master - 0 1524120043364 9 connected 5461-5468 10923-10929
    02f0216b114632f4d25dcc742693c814e1bdfdd4 127.0.0.1:7004 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524120044363 5 connected
    0b1580a5305041bde69108e438653e1bfccefed0 127.0.0.1:7001 master - 0 1524120044363 2 connected 5469-10922
    00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7005 myself,master - 0 0 8 connected 10930-16383
    22628a3a484345de1ae0323b26c679eef5688cf8 127.0.0.1:7007 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524120045367 8 connected
    790b56a8774e24455f6da822aae1c06c8898d45b 127.0.0.1:7000 slave 3df3a3bfd7489854d3dcbd2549e17639b6aa049c 0 1524120043864 10 connected
    018f59aedc474b8589674ff9454269dbfdefaa22 127.0.0.1:7006 slave 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb 0 1524120044865 9 connected
    575f24b2fe784e2b6d18591e90ff7e760dc272df 127.0.0.1:7002 slave 00a1138c9e4358f5bbd734c6f194bed04a37e999 0 1524120044865 8 connected
    3df3a3bfd7489854d3dcbd2549e17639b6aa049c 127.0.0.1:7003 master - 0 1524120043364 10 connected 0-5460
    [root@host-192-168-1-100 7008]# 
    Above, reshard with done (a specific source) was used to move the 1005 slots (0-1004) back to 7003. Trying del-node now fails: the node is not empty, so its data must first be resharded away
    [root@host-192-168-1-100 7008]# redis-trib del-node 127.0.0.1:7000 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
    >>> Removing node 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb from cluster 127.0.0.1:7000
    [ERR] Node 127.0.0.1:7008 is not empty! Reshard data away and try again.

    [root@host-192-168-1-100 7008]# redis-trib reshard 127.0.0.1:7005                                          
    >>> Performing Cluster Check (using node 127.0.0.1:7005)
    M: 00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7005
       slots:10930-16383 (5454 slots) master
       1 additional replica(s)
    M: 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb 127.0.0.1:7008
       slots:5461-5468,10923-10929 (15 slots) master
       1 additional replica(s)
    S: 02f0216b114632f4d25dcc742693c814e1bdfdd4 127.0.0.1:7004
       slots: (0 slots) slave
       replicates 0b1580a5305041bde69108e438653e1bfccefed0
    M: 0b1580a5305041bde69108e438653e1bfccefed0 127.0.0.1:7001
       slots:5469-10922 (5454 slots) master
       2 additional replica(s)
    S: 22628a3a484345de1ae0323b26c679eef5688cf8 127.0.0.1:7007
       slots: (0 slots) slave
       replicates 0b1580a5305041bde69108e438653e1bfccefed0
    S: 790b56a8774e24455f6da822aae1c06c8898d45b 127.0.0.1:7000
       slots: (0 slots) slave
       replicates 3df3a3bfd7489854d3dcbd2549e17639b6aa049c
    S: 018f59aedc474b8589674ff9454269dbfdefaa22 127.0.0.1:7006
       slots: (0 slots) slave
       replicates 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
    S: 575f24b2fe784e2b6d18591e90ff7e760dc272df 127.0.0.1:7002
       slots: (0 slots) slave
       replicates 00a1138c9e4358f5bbd734c6f194bed04a37e999
    M: 3df3a3bfd7489854d3dcbd2549e17639b6aa049c 127.0.0.1:7003
       slots:0-5460 (5461 slots) master
       1 additional replica(s)
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    How many slots do you want to move (from 1 to 16384)? 15
    What is the receiving node ID? 00a1138c9e4358f5bbd734c6f194bed04a37e999
    Please enter all the source node IDs.
      Type 'all' to use all the nodes as source nodes for the hash slots.
      Type 'done' once you entered all the source nodes IDs.
    Source node #1:29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
    Source node #2:done

    Ready to move 15 slots.
      Source nodes:
        M: 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb 127.0.0.1:7008
       slots:5461-5468,10923-10929 (15 slots) master
       1 additional replica(s)
      Destination node:
        M: 00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7005
       slots:10930-16383 (5454 slots) master
       1 additional replica(s)
      Resharding plan:
        Moving slot 5461 from 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
        Moving slot 5462 from 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
        Moving slot 5463 from 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
        Moving slot 5464 from 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
        Moving slot 5465 from 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
        Moving slot 5466 from 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
        Moving slot 5467 from 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
        Moving slot 5468 from 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
        Moving slot 10923 from 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
        Moving slot 10924 from 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
        Moving slot 10925 from 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
        Moving slot 10926 from 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
        Moving slot 10927 from 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
        Moving slot 10928 from 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
        Moving slot 10929 from 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
    Do you want to proceed with the proposed reshard plan (yes/no)? yes
    Moving slot 5461 from 127.0.0.1:7008 to 127.0.0.1:7005: 
    Moving slot 5462 from 127.0.0.1:7008 to 127.0.0.1:7005: 
    Moving slot 5463 from 127.0.0.1:7008 to 127.0.0.1:7005: 
    Moving slot 5464 from 127.0.0.1:7008 to 127.0.0.1:7005: 
    Moving slot 5465 from 127.0.0.1:7008 to 127.0.0.1:7005: 
    Moving slot 5466 from 127.0.0.1:7008 to 127.0.0.1:7005: 
    Moving slot 5467 from 127.0.0.1:7008 to 127.0.0.1:7005: 
    Moving slot 5468 from 127.0.0.1:7008 to 127.0.0.1:7005: 
    Moving slot 10923 from 127.0.0.1:7008 to 127.0.0.1:7005: 
    Moving slot 10924 from 127.0.0.1:7008 to 127.0.0.1:7005: 
    Moving slot 10925 from 127.0.0.1:7008 to 127.0.0.1:7005: 
    Moving slot 10926 from 127.0.0.1:7008 to 127.0.0.1:7005: 
    Moving slot 10927 from 127.0.0.1:7008 to 127.0.0.1:7005: 
    Moving slot 10928 from 127.0.0.1:7008 to 127.0.0.1:7005: 
    Moving slot 10929 from 127.0.0.1:7008 to 127.0.0.1:7005: 

    After this second reshard, the cluster view shows 7008 is empty, and deleting it now succeeds (note that for this scale-in reshard the slots can only be handed to a specific receiving node)
    [root@host-192-168-1-100 7008]# redis-cli -p 7005 cluster nodes                                            
    29b6f39557e3c4da1505545de0bd8bf0abb6a5bb 127.0.0.1:7008 master - 0 1524120244373 9 connected
    02f0216b114632f4d25dcc742693c814e1bdfdd4 127.0.0.1:7004 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524120246378 5 connected
    0b1580a5305041bde69108e438653e1bfccefed0 127.0.0.1:7001 master - 0 1524120245376 2 connected 5469-10922
    00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7005 myself,master - 0 0 11 connected 5461-5468 10923-16383
    22628a3a484345de1ae0323b26c679eef5688cf8 127.0.0.1:7007 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524120245877 8 connected
    790b56a8774e24455f6da822aae1c06c8898d45b 127.0.0.1:7000 slave 3df3a3bfd7489854d3dcbd2549e17639b6aa049c 0 1524120246378 10 connected
    018f59aedc474b8589674ff9454269dbfdefaa22 127.0.0.1:7006 slave 00a1138c9e4358f5bbd734c6f194bed04a37e999 0 1524120244875 11 connected
    575f24b2fe784e2b6d18591e90ff7e760dc272df 127.0.0.1:7002 slave 00a1138c9e4358f5bbd734c6f194bed04a37e999 0 1524120246378 11 connected
    3df3a3bfd7489854d3dcbd2549e17639b6aa049c 127.0.0.1:7003 master - 0 1524120244875 10 connected 0-5460
    [root@host-192-168-1-100 7008]# 
    [root@host-192-168-1-100 7008]# redis-trib del-node 127.0.0.1:7000 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb
    >>> Removing node 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb from cluster 127.0.0.1:7000
    >>> Sending CLUSTER FORGET messages to the cluster...
    >>> SHUTDOWN the node.
    [root@host-192-168-1-100 7008]# redis-cli -p 7005 cluster nodes                                            
    bc37b785100a2fe0b4575c977cb587908b15f2d6 :0 master,fail,noaddr - 1524025875825 1524025874721 1 disconnected
    02f0216b114632f4d25dcc742693c814e1bdfdd4 127.0.0.1:7004 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524120264923 5 connected
    0b1580a5305041bde69108e438653e1bfccefed0 127.0.0.1:7001 master - 0 1524120265424 2 connected 5469-10922
    00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7005 myself,master - 0 0 11 connected 5461-5468 10923-16383
    22628a3a484345de1ae0323b26c679eef5688cf8 127.0.0.1:7007 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524120266929 8 connected
    790b56a8774e24455f6da822aae1c06c8898d45b 127.0.0.1:7000 slave 3df3a3bfd7489854d3dcbd2549e17639b6aa049c 0 1524120265927 10 connected
    018f59aedc474b8589674ff9454269dbfdefaa22 127.0.0.1:7006 slave 00a1138c9e4358f5bbd734c6f194bed04a37e999 0 1524120264923 11 connected
    575f24b2fe784e2b6d18591e90ff7e760dc272df 127.0.0.1:7002 slave 00a1138c9e4358f5bbd734c6f194bed04a37e999 0 1524120266427 11 connected
    3df3a3bfd7489854d3dcbd2549e17639b6aa049c 127.0.0.1:7003 master - 0 1524120266929 10 connected 0-5460

The manual switch with cluster failover was covered above; now for the cluster's automatic failover:
    To make troubleshooting easier, enable per-instance log files
    cd /opt/redis-cluster
    for i in `ls -r`;do echo "logfile \"/var/log/redis/$i.log\"" >> /opt/redis-cluster/$i/redis.conf;done
    Restart the services
    pkill redis-server
    for i in `ls -r`;do cd /opt/redis-cluster/$i;redis-server redis.conf;done
    Now bring down a master:
        When every master has exactly one slave, that slave runs the failover election directly and declares itself the new master
        When the downed master had one slave but another master has several, after that slave is promoted, one of the other master's slaves re-homes itself to become a slave of the new master
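    To simulate the failure, find and kill the 7005 master, then watch the cluster view (a sketch reusing the netstat approach from above):
        kill -INT $(netstat -tnlp |grep 7005 |awk '{print $NF}'|cut -d"/" -f1|sort -n |uniq)
        redis-cli -p 7000 cluster nodes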

    [root@host-192-168-1-100 ~]# redis-cli -p 7000 cluster nodes   
    3df3a3bfd7489854d3dcbd2549e17639b6aa049c 127.0.0.1:7003 slave 790b56a8774e24455f6da822aae1c06c8898d45b 0 1524125881979 13 connected
    575f24b2fe784e2b6d18591e90ff7e760dc272df 127.0.0.1:7002 master - 0 1524125880975 14 connected 5461-5468 10923-16383
    02f0216b114632f4d25dcc742693c814e1bdfdd4 127.0.0.1:7004 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524125880975 2 connected
    22628a3a484345de1ae0323b26c679eef5688cf8 127.0.0.1:7007 slave 0b1580a5305041bde69108e438653e1bfccefed0 0 1524125881979 8 connected
    0b1580a5305041bde69108e438653e1bfccefed0 127.0.0.1:7001 master - 0 1524125880473 2 connected 5469-10922
    018f59aedc474b8589674ff9454269dbfdefaa22 127.0.0.1:7006 slave 575f24b2fe784e2b6d18591e90ff7e760dc272df 0 1524125880473 14 connected
    00a1138c9e4358f5bbd734c6f194bed04a37e999 127.0.0.1:7005 master,fail - 1524125827918 1524125826314 11 disconnected
    790b56a8774e24455f6da822aae1c06c8898d45b 127.0.0.1:7000 myself,master - 0 0 13 connected 0-5460

    31105:S 19 Apr 16:17:07.887 # Connection with master lost.
    31105:S 19 Apr 16:17:07.888 * Caching the disconnected master state.
    31105:S 19 Apr 16:17:08.021 * Connecting to MASTER 127.0.0.1:7005
    31105:S 19 Apr 16:17:08.021 * MASTER <-> SLAVE sync started
    31105:S 19 Apr 16:17:08.021 # Error condition on socket for SYNC: Connection refused
    31105:S 19 Apr 16:17:09.022 * Connecting to MASTER 127.0.0.1:7005
    31105:S 19 Apr 16:17:09.022 * MASTER <-> SLAVE sync started
    31105:S 19 Apr 16:17:09.022 # Error condition on socket for SYNC: Connection refused
    31105:S 19 Apr 16:17:10.025 * Connecting to MASTER 127.0.0.1:7005
    31105:S 19 Apr 16:17:10.026 * MASTER <-> SLAVE sync started
    31105:S 19 Apr 16:17:10.026 # Error condition on socket for SYNC: Connection refused
    31105:S 19 Apr 16:17:11.028 * Connecting to MASTER 127.0.0.1:7005
    31105:S 19 Apr 16:17:11.028 * MASTER <-> SLAVE sync started
    31105:S 19 Apr 16:17:11.028 # Error condition on socket for SYNC: Connection refused
    31105:S 19 Apr 16:17:12.031 * Connecting to MASTER 127.0.0.1:7005
    31105:S 19 Apr 16:17:12.031 * MASTER <-> SLAVE sync started
    31105:S 19 Apr 16:17:12.031 # Error condition on socket for SYNC: Connection refused
    31105:S 19 Apr 16:17:13.033 * Connecting to MASTER 127.0.0.1:7005
    31105:S 19 Apr 16:17:13.033 * MASTER <-> SLAVE sync started
    31105:S 19 Apr 16:17:13.033 # Error condition on socket for SYNC: Connection refused
    31105:S 19 Apr 16:17:13.337 * FAIL message received from 790b56a8774e24455f6da822aae1c06c8898d45b about 00a1138c9e4358f5bbd734c6f194bed04a37e999
    31105:S 19 Apr 16:17:13.337 # Cluster state changed: fail
    31105:S 19 Apr 16:17:13.434 # Start of election delayed for 827 milliseconds (rank #0, offset 3389).
    31105:S 19 Apr 16:17:14.035 * Connecting to MASTER 127.0.0.1:7005
    31105:S 19 Apr 16:17:14.035 * MASTER <-> SLAVE sync started
    31105:S 19 Apr 16:17:14.035 # Error condition on socket for SYNC: Connection refused
    31105:S 19 Apr 16:17:14.209 * Ignoring FAIL message from unknown node 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb about 00a1138c9e4358f5bbd734c6f194bed04a37e999
    31105:S 19 Apr 16:17:14.335 # Starting a failover election for epoch 14.
    31105:S 19 Apr 16:17:14.366 # Failover election won: I'm the new master.
    31105:S 19 Apr 16:17:14.366 # configEpoch set to 14 after successful failover
    31105:M 19 Apr 16:17:14.366 * Discarding previously cached master state.
    31105:M 19 Apr 16:17:14.367 # Cluster state changed: ok
    31105:M 19 Apr 16:17:19.946 * Slave 127.0.0.1:7006 asks for synchronization
    31105:M 19 Apr 16:17:19.946 * Full resync requested by slave 127.0.0.1:7006     #one of the other master's slaves re-requesting sync to become a slave of the new master

    31105:M 19 Apr 16:17:19.946 * Starting BGSAVE for SYNC with target: disk
    31105:M 19 Apr 16:17:19.949 * Background saving started by pid 31269
    31269:C 19 Apr 16:17:19.965 * DB saved on disk
    31269:C 19 Apr 16:17:19.966 * RDB: 2 MB of memory used by copy-on-write
    31105:M 19 Apr 16:17:20.063 * Background saving terminated with success
    31105:M 19 Apr 16:17:20.063 * Synchronization with slave 127.0.0.1:7006 succeeded

    31113:M 19 Apr 16:17:13.335 * Marking node 00a1138c9e4358f5bbd734c6f194bed04a37e999 as failing (quorum reached).
    31113:M 19 Apr 16:17:13.335 # Cluster state changed: fail
    31113:M 19 Apr 16:17:14.208 * Ignoring FAIL message from unknown node 29b6f39557e3c4da1505545de0bd8bf0abb6a5bb about 00a1138c9e4358f5bbd734c6f194bed04a37e999
    31113:M 19 Apr 16:17:14.366 # Failover auth granted to 575f24b2fe784e2b6d18591e90ff7e760dc272df for epoch 14
    31113:M 19 Apr 16:17:14.406 # Cluster state changed: ok
    31113:M 19 Apr 16:17:19.443 # Connection with slave 127.0.0.1:7006 lost.

    31324:M 19 Apr 16:31:00.714 * Clear FAIL state for node 0b1580a5305041bde69108e438653e1bfccefed0: master without slots is reachable again.
    31324:M 19 Apr 16:31:01.670 * Slave 127.0.0.1:7001 asks for synchronization
    31324:M 19 Apr 16:31:01.670 * Full resync requested by slave 127.0.0.1:7001
    31324:M 19 Apr 16:31:01.670 * Starting BGSAVE for SYNC with target: disk
    31324:M 19 Apr 16:31:01.672 * Background saving started by pid 31375
    31375:C 19 Apr 16:31:01.690 * DB saved on disk
    31375:C 19 Apr 16:31:01.691 * RDB: 2 MB of memory used by copy-on-write
    31324:M 19 Apr 16:31:01.715 * Background saving terminated with success
    31324:M 19 Apr 16:31:01.715 * Synchronization with slave 127.0.0.1:7001 succeeded

Redis Cluster consistency guarantees
Redis Cluster does not guarantee strong consistency. In practice this means that, under certain conditions, the cluster can lose writes that the system has already acknowledged to the client.
The first reason the cluster can lose writes is that it uses asynchronous replication. During a write, the following happens:

 Your client writes to master B.
 Master B replies OK to your client.
 Master B propagates the write to its slaves B1, B2 and B3.
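A client can shrink (but not eliminate) this window with the WAIT command, issued on the master that owns the key: it blocks until the given number of replicas acknowledge the preceding writes or the timeout in milliseconds expires. A sketch with a hypothetical key assumed to hash to 7000's shard:
    redis-cli -p 7000 set somekey somevalue
    redis-cli -p 7000 wait 1 100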

The second way is the CLUSTER FAILOVER command: log in to the slave and issue the command, and the corresponding master and slave swap roles; this method does not lose data.

redis-cluster elections:
    (1) The leader election involves all masters in the cluster; if more than half of the masters fail to communicate with a given master for longer than cluster-node-timeout, that master is considered down.
    (2) When does the whole cluster become unavailable (cluster_state:fail)? While the cluster is unavailable, every operation against it fails with ((error) CLUSTERDOWN The cluster is down):
        a: if any master is down and has no slave, the cluster enters the fail state; equivalently, the cluster enters fail whenever the slot mapping [0-16383] is no longer complete.
        b: if more than half of the masters are down, the cluster enters the fail state whether or not they have slaves.
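You can check the overall cluster state at any time with, for example:
    redis-cli -p 7000 cluster info | grep cluster_state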
