Building and Deploying Redis Cluster: a Distributed Redis Solution

Preface

  The previous post, Redis Sentinel Installation and Deployment: High Availability for Redis, achieved high availability for Redis, mainly addressing the case where the master goes down. But we saw that every node there holds exactly the same data, so once the data volume grows too large, Redis still runs into efficiency problems. The official release of Redis 3.0 effectively answered the demand for a distributed Redis: when you hit single-machine bottlenecks in memory, concurrency, or network traffic, the Cluster architecture can be adopted to spread the load.

  This post walks you through building a Redis Cluster and performing some simple client operations against it.

  GitHub repo: https://github.com/youzhibing/redis

Environment

  Redis version: redis-3.0.0

  Linux: CentOS 6.7

  IP: 192.168.11.202; different ports host the different Redis instances

  Client: Jedis, built on Spring Boot

Building the Redis Cluster Environment

         Preparing the nodes

    192.168.11.202:6382, 192.168.11.202:6383, 192.168.11.202:6384, 192.168.11.202:6385, 192.168.11.202:6386, and 192.168.11.202:6387 form the initial cluster

    192.168.11.202:6388 and 192.168.11.202:6389 are used later when scaling out

    redis-6382.conf (annotated; the remaining config files differ only in port and file names)

port 6382
bind 192.168.11.202
requirepass "myredis"
daemonize yes
logfile "6382.log"
dbfilename "dump-6382.rdb"
dir "/opt/soft/redis/cluster_data"

# password used when authenticating against the master for replication
masterauth "myredis"
# run this instance in cluster mode
cluster-enabled yes
# milliseconds after which an unreachable node is considered failing
cluster-node-timeout 15000
# cluster state file, created and maintained by Redis itself
cluster-config-file "nodes-6382.conf"

    redis-6383.conf

port 6383
bind 192.168.11.202
requirepass "myredis"
daemonize yes
logfile "6383.log"
dbfilename "dump-6383.rdb"
dir "/opt/soft/redis/cluster_data"

masterauth "myredis"
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file "nodes-6383.conf"

    redis-6384.conf

port 6384
bind 192.168.11.202
requirepass "myredis"
daemonize yes
logfile "6384.log"
dbfilename "dump-6384.rdb"
dir "/opt/soft/redis/cluster_data"

masterauth "myredis"
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file "nodes-6384.conf"

    redis-6385.conf

port 6385
bind 192.168.11.202
requirepass "myredis"
daemonize yes
logfile "6385.log"
dbfilename "dump-6385.rdb"
dir "/opt/soft/redis/cluster_data"

masterauth "myredis"
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file "nodes-6385.conf"

    redis-6386.conf

port 6386
bind 192.168.11.202
requirepass "myredis"
daemonize yes
logfile "6386.log"
dbfilename "dump-6386.rdb"
dir "/opt/soft/redis/cluster_data"

masterauth "myredis"
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file "nodes-6386.conf"

    redis-6387.conf

port 6387
bind 192.168.11.202
requirepass "myredis"
daemonize yes
logfile "6387.log"
dbfilename "dump-6387.rdb"
dir "/opt/soft/redis/cluster_data"

masterauth "myredis"
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file "nodes-6387.conf"

         Starting all the nodes

[root@slave1 redis_cluster]# cd /opt/redis-3.0.0/redis_cluster/
[root@slave1 redis_cluster]# ./../src/redis-server redis-6382.conf 
[root@slave1 redis_cluster]# ./../src/redis-server redis-6383.conf 
[root@slave1 redis_cluster]# ./../src/redis-server redis-6384.conf 
[root@slave1 redis_cluster]# ./../src/redis-server redis-6385.conf 
[root@slave1 redis_cluster]# ./../src/redis-server redis-6386.conf 
[root@slave1 redis_cluster]# ./../src/redis-server redis-6387.conf

   Creating the cluster

    Once all the nodes are up, each node only knows about itself; the nodes are unaware of one another.

    We use redis-trib.rb to build the cluster quickly. redis-trib.rb is a cluster management tool written in Ruby; internally it drives the CLUSTER family of commands to simplify common operational tasks such as cluster creation, checking, slot migration, and rebalancing.

    If you are curious, you can build a Redis Cluster step by step by hand with the CLUSTER commands, which makes it clear how redis-trib.rb achieves its quick setup; a rough sketch follows.
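
    As a rough illustration only (not the exact sequence redis-trib.rb runs), the manual steps map onto Jedis's cluster commands roughly as follows; the slot split mirrors the plan below, and the node id is a placeholder:

import redis.clients.jedis.Jedis;

// A minimal sketch of building the cluster by hand, assuming six fresh
// cluster-mode nodes like the ones above; the node id below is a placeholder.
public class ManualClusterSetup {
    public static void main(String[] args) {
        try (Jedis node6382 = new Jedis("192.168.11.202", 6382)) {
            node6382.auth("myredis");
            // 1. Handshake: introduce the other five nodes to 6382;
            //    gossip then spreads the membership to every node.
            node6382.clusterMeet("192.168.11.202", 6383);
            node6382.clusterMeet("192.168.11.202", 6384);
            node6382.clusterMeet("192.168.11.202", 6385);
            node6382.clusterMeet("192.168.11.202", 6386);
            node6382.clusterMeet("192.168.11.202", 6387);

            // 2. Slot assignment: each master claims part of 0..16383.
            //    6382 takes 0-5460 here; 6383 and 6384 would claim
            //    5461-10922 and 10923-16383 on their own connections.
            int[] slots = new int[5461];
            for (int i = 0; i < slots.length; i++) {
                slots[i] = i;
            }
            node6382.clusterAddSlots(slots);
        }

        // 3. Replication: on each intended slave, point it at its master
        //    by node id (CLUSTER NODES shows the ids).
        try (Jedis node6385 = new Jedis("192.168.11.202", 6385)) {
            node6385.auth("myredis");
            node6385.clusterReplicate("<node-id-of-6382>");
        }
    }
}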

    The build command is as follows; --replicas 1 means each master is configured with one slave

[root@slave1 src]# cd /opt/redis-3.0.0/src/
[root@slave1 src]# ./redis-trib.rb create --replicas 1 192.168.11.202:6382 192.168.11.202:6383 192.168.11.202:6384 192.168.11.202:6385 192.168.11.202:6386 192.168.11.202:6387  

    During creation, redis-trib.rb prints its plan for assigning master and slave roles, as shown below

>>> Creating cluster
Connecting to node 192.168.11.202:6382: OK
Connecting to node 192.168.11.202:6383: OK
Connecting to node 192.168.11.202:6384: OK
Connecting to node 192.168.11.202:6385: OK
Connecting to node 192.168.11.202:6386: OK
Connecting to node 192.168.11.202:6387: OK
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.11.202:6382
192.168.11.202:6383
192.168.11.202:6384
Adding replica 192.168.11.202:6385 to 192.168.11.202:6382
Adding replica 192.168.11.202:6386 to 192.168.11.202:6383
Adding replica 192.168.11.202:6387 to 192.168.11.202:6384
M: 0ec055f9daa5b4f570e6a4c4d46e5285d16e0afe 192.168.11.202:6382
   slots:0-5460 (5461 slots) master
M: 3771e67edab547deff6bd290e1a07b23646906ee 192.168.11.202:6383
   slots:5461-10922 (5462 slots) master
M: 10b3789bb30889b5e6f67175620feddcd496d19e 192.168.11.202:6384
   slots:10923-16383 (5461 slots) master
S: 7649466ec006e0902a7f1578417247a6d5540c47 192.168.11.202:6385
   replicates 0ec055f9daa5b4f570e6a4c4d46e5285d16e0afe
S: 4f36b08d8067a003af45dbe96a5363f348643509 192.168.11.202:6386
   replicates 3771e67edab547deff6bd290e1a07b23646906ee
S: a583def1e6a059e4fdb3592557fd6ab691fd61ec 192.168.11.202:6387
   replicates 10b3789bb30889b5e6f67175620feddcd496d19e
Can I set the above configuration? (type 'yes' to accept): 

    Why are 192.168.11.202:6382, 192.168.11.202:6383, and 192.168.11.202:6384 the masters? See point 1 in the Notes. Once we agree with this plan and type yes, redis-trib.rb performs the node handshakes and slot assignment, producing the following output

>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 192.168.11.202:6382)
M: 0ec055f9daa5b4f570e6a4c4d46e5285d16e0afe 192.168.11.202:6382
   slots:0-5460 (5461 slots) master
M: 3771e67edab547deff6bd290e1a07b23646906ee 192.168.11.202:6383
   slots:5461-10922 (5462 slots) master
M: 10b3789bb30889b5e6f67175620feddcd496d19e 192.168.11.202:6384
   slots:10923-16383 (5461 slots) master
M: 7649466ec006e0902a7f1578417247a6d5540c47 192.168.11.202:6385
   slots: (0 slots) master
   replicates 0ec055f9daa5b4f570e6a4c4d46e5285d16e0afe
M: 4f36b08d8067a003af45dbe96a5363f348643509 192.168.11.202:6386
   slots: (0 slots) master
   replicates 3771e67edab547deff6bd290e1a07b23646906ee
M: a583def1e6a059e4fdb3592557fd6ab691fd61ec 192.168.11.202:6387
   slots: (0 slots) master
   replicates 10b3789bb30889b5e6f67175620feddcd496d19e
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

    All 16384 slots have been assigned; the cluster has been created successfully.

  Checking cluster integrity

    The redis-trib.rb check command performs this check; it only needs the address of any one node in the cluster to check the whole cluster, as follows

    redis-trib.rb check 192.168.11.202:6382 produces the following output

Connecting to node 192.168.11.202:6382: OK
Connecting to node 192.168.11.202:6385: OK
Connecting to node 192.168.11.202:6383: OK
Connecting to node 192.168.11.202:6384: OK
Connecting to node 192.168.11.202:6387: OK
Connecting to node 192.168.11.202:6386: OK
>>> Performing Cluster Check (using node 192.168.11.202:6382)
M: 0ec055f9daa5b4f570e6a4c4d46e5285d16e0afe 192.168.11.202:6382
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 7649466ec006e0902a7f1578417247a6d5540c47 192.168.11.202:6385
   slots: (0 slots) slave
   replicates 0ec055f9daa5b4f570e6a4c4d46e5285d16e0afe
M: 3771e67edab547deff6bd290e1a07b23646906ee 192.168.11.202:6383
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: 10b3789bb30889b5e6f67175620feddcd496d19e 192.168.11.202:6384
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: a583def1e6a059e4fdb3592557fd6ab691fd61ec 192.168.11.202:6387
   slots: (0 slots) slave
   replicates 10b3789bb30889b5e6f67175620feddcd496d19e
S: 4f36b08d8067a003af45dbe96a5363f348643509 192.168.11.202:6386
   slots: (0 slots) slave
   replicates 3771e67edab547deff6bd290e1a07b23646906ee
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

    [OK] All 16384 slots covered. means that every slot in the cluster has been assigned to a node.

    With slots in place, data flows from key to node as: key → CRC16(key) % 16384 → slot → the node that owns that slot.

    As for why slots were introduced at all, see point 3 in the Notes.
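
    To make the mapping concrete, here is a small sketch using Jedis's CRC16 helper; the package path assumes the Jedis 2.x line used in this article (newer Jedis moved the class to redis.clients.jedis.util):

import redis.clients.util.JedisClusterCRC16;

// A small sketch of the key -> slot mapping; getSlot computes
// CRC16(key) % 16384, the same mapping the cluster itself uses.
public class SlotDemo {
    public static void main(String[] args) {
        System.out.println(JedisClusterCRC16.getSlot("name"));   // 5798, matching the redirect shown below
        System.out.println(JedisClusterCRC16.getSlot("weight")); // 16280

        // Keys sharing a {hash tag} hash to the same slot, which is how
        // related keys can be kept together on one node.
        System.out.println(JedisClusterCRC16.getSlot("{user:1}:name"));
        System.out.println(JedisClusterCRC16.getSlot("{user:1}:age"));
    }
}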

  Simple Redis Cluster operations

    Connect to the cluster; any node will do. The -c flag enables cluster support in redis-cli, i.e. automatic redirection

    [root@slave1 redis_cluster]# ./../src/redis-cli -h 192.168.11.202 -p 6382 -a myredis -c

    Once connected to the Redis Cluster, you can run the usual Redis commands, for example

192.168.11.202:6382> get name
-> Redirected to slot [5798] located at 192.168.11.202:6388
"youzhibing"
192.168.11.202:6388> set weight 112
-> Redirected to slot [16280] located at 192.168.11.202:6384
OK
192.168.11.202:6384> get weight
"112"
192.168.11.202:6384> 

Connecting and Operating with a Client (Jedis)

  redis-cluster.properties

#cluster
redis.cluster.host=192.168.11.202
redis.cluster.port=6382,6383,6384,6385,6386,6387
# redis read/write (socket) timeout, in milliseconds
redis.cluster.socketTimeout=1000
# redis connection timeout, in milliseconds
redis.cluster.connectionTimeOut=3000
# maximum number of connection attempts
redis.cluster.maxAttempts=10
# maximum number of redirects to follow
redis.cluster.maxRedirects=5
# master password
redis.password=myredis

# connection pool
# maximum number of connections in the pool (a negative value means no limit)
redis.pool.maxActive=150
# maximum number of idle connections in the pool
redis.pool.maxIdle=10
# minimum number of idle connections in the pool
redis.pool.minIdle=1
# maximum milliseconds to wait when borrowing a connection; a negative value blocks indefinitely (default -1)
redis.pool.maxWaitMillis=3000
# maximum number of connections examined per eviction run
redis.pool.numTestsPerEvictionRun=50
# interval between eviction scans, in milliseconds
redis.pool.timeBetweenEvictionRunsMillis=3000
# minimum idle time before a connection may be evicted, in milliseconds
redis.pool.minEvictableIdleTimeMillis=1800000
# soft idle limit: a connection is released once its idle time exceeds this value and more than minIdle idle connections remain, in milliseconds
redis.pool.softMinEvictableIdleTimeMillis=10000
# validate connections when borrowing from the pool (default false)
redis.pool.testOnBorrow=true
# validate connections while idle (default false)
redis.pool.testWhileIdle=true
# validate connections when returning them to the pool
redis.pool.testOnReturn=true
# whether to block when the pool is exhausted: false throws an exception, true blocks until timeout (default true)
redis.pool.blockWhenExhausted=true

  RedisClusterConfig.java

package com.lee.redis.config.cluster;

import java.util.HashSet;
import java.util.Set;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.util.StringUtils;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;
import redis.clients.jedis.JedisPoolConfig;

import com.alibaba.fastjson.JSON;
import com.lee.redis.exception.LocalException;

@Configuration
@PropertySource("redis/redis-cluster.properties")
public class RedisClusterConfig {

    private static final Logger LOGGER = LoggerFactory.getLogger(RedisClusterConfig.class);
    
    // pool
    @Value("${redis.pool.maxActive}")
    private int maxTotal;
    @Value("${redis.pool.maxIdle}")
    private int maxIdle;
    @Value("${redis.pool.minIdle}")
    private int minIdle;
    @Value("${redis.pool.maxWaitMillis}")
    private long maxWaitMillis;
    @Value("${redis.pool.numTestsPerEvictionRun}")
    private int numTestsPerEvictionRun;
    @Value("${redis.pool.timeBetweenEvictionRunsMillis}")
    private long timeBetweenEvictionRunsMillis;
    @Value("${redis.pool.minEvictableIdleTimeMillis}")
    private long minEvictableIdleTimeMillis;
    @Value("${redis.pool.softMinEvictableIdleTimeMillis}")
    private long softMinEvictableIdleTimeMillis;
    @Value("${redis.pool.testOnBorrow}")
    private boolean testOnBorrow;
    @Value("${redis.pool.testWhileIdle}")
    private boolean testWhileIdle;
    @Value("${redis.pool.testOnReturn}")
    private boolean testOnReturn;
    @Value("${redis.pool.blockWhenExhausted}")
    private boolean blockWhenExhausted;
    
    // cluster
    @Value("${redis.cluster.host}")
    private String host;
    @Value("${redis.cluster.port}")
    private String port;
    @Value("${redis.cluster.socketTimeout}")
    private int socketTimeout;
    @Value("${redis.cluster.connectionTimeOut}")
    private int connectionTimeOut;
    @Value("${redis.cluster.maxAttempts}")
    private int maxAttempts;
    @Value("${redis.cluster.maxRedirects}")
    private int maxRedirects;
    @Value("${redis.password}")
    private String password;
    
    @Bean
    public JedisPoolConfig jedisPoolConfig() {
        JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
        jedisPoolConfig.setMaxTotal(maxTotal);
        jedisPoolConfig.setMaxIdle(maxIdle);
        jedisPoolConfig.setMinIdle(minIdle);
        jedisPoolConfig.setMaxWaitMillis(maxWaitMillis);
        jedisPoolConfig.setNumTestsPerEvictionRun(numTestsPerEvictionRun);
        jedisPoolConfig
                .setTimeBetweenEvictionRunsMillis(timeBetweenEvictionRunsMillis);
        jedisPoolConfig
                .setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis);
        jedisPoolConfig
                .setSoftMinEvictableIdleTimeMillis(softMinEvictableIdleTimeMillis);
        jedisPoolConfig.setTestOnBorrow(testOnBorrow);
        jedisPoolConfig.setTestWhileIdle(testWhileIdle);
        jedisPoolConfig.setTestOnReturn(testOnReturn);
        jedisPoolConfig.setBlockWhenExhausted(blockWhenExhausted);

        return jedisPoolConfig;
    }
    
    @Bean
    public JedisCluster jedisCluster(JedisPoolConfig jedisPoolConfig) {
        
        if (StringUtils.isEmpty(host)) {
            LOGGER.info("redis cluster host not configured");
            throw new LocalException("redis cluster host not configured");
        }
        if (StringUtils.isEmpty(port)) {
            LOGGER.info("redis cluster port not configured");
            throw new LocalException("redis cluster port not configured");
        }
        // hosts are comma-separated; the port groups for each host are
        // semicolon-separated, with the ports within a group comma-separated
        String[] hosts = host.split(",");
        String[] portArray = port.split(";");
        if (hosts.length != portArray.length) {
            LOGGER.info("redis cluster host count does not match port group count");
            throw new LocalException("redis cluster host count does not match port group count");
        }
        Set<HostAndPort> redisNodes = new HashSet<HostAndPort>();
        for (int i = 0; i < hosts.length; i++) {
            String ports = portArray[i];
            String[] hostPorts = ports.split(",");
            for (String port : hostPorts) {
                HostAndPort node = new HostAndPort(hosts[i], Integer.parseInt(port));
                redisNodes.add(node);
            }
        }
        LOGGER.info("Set<RedisNode> : {}", JSON.toJSONString(redisNodes), true);
        
        return new JedisCluster(redisNodes, connectionTimeOut, socketTimeout, maxAttempts, password, jedisPoolConfig);
    }
}

  ApplicationCluster.java

package com.lee.redis;

import org.springframework.boot.Banner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableAutoConfiguration
@ComponentScan(basePackages={"com.lee.redis.config.cluster"})
public class ApplicationCluster {
    public static void main(String[] args) {
        
        SpringApplication app = new SpringApplication(ApplicationCluster.class);
        app.setBannerMode(Banner.Mode.OFF);            // turn off the startup banner
        // app.setApplicationContextClass();        // specify the application context class to use
        app.setWebEnvironment(false);
        app.run(args);
    }
}

  RedisClusterTest.java

package com.lee.redis;

import java.util.List;
import java.util.Map;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

import com.alibaba.fastjson.JSON;

import redis.clients.jedis.JedisCluster;

@RunWith(SpringRunner.class)
@SpringBootTest(classes = ApplicationCluster.class)
public class RedisClusterTest {
    private static final Logger LOGGER = LoggerFactory.getLogger(RedisClusterTest.class);
    
    @Autowired
    private JedisCluster jedisCluster;
    
    @Test
    public void initTest() {
        String name = jedisCluster.get("name");
        LOGGER.info("name is {}", name);
        
        // list operations
        long count = jedisCluster.lpush("list:names", "陳芸");    // lpush returns the list length after the push
        LOGGER.info("count = {}", count);
        long nameLen = jedisCluster.llen("list:names");
        LOGGER.info("list:names lens is {}", nameLen);
        List<String> nameList = jedisCluster.lrange("list:names", 0, nameLen);
        LOGGER.info("names : {}", JSON.toJSONString(nameList));
    }
}

  Run the initTest method in RedisClusterTest.java; the results are as follows

......
2018-03-06 09:56:05|INFO|com.lee.redis.RedisClusterTest|name is youzhibing
2018-03-06 09:56:05|INFO|com.lee.redis.RedisClusterTest|count = 3
2018-03-06 09:56:05|INFO|com.lee.redis.RedisClusterTest|list:names lens is 3
2018-03-06 09:56:05|INFO|com.lee.redis.RedisClusterTest|names : ["陳芸","沈復","沈復"]
......

Cluster Scaling and Failover

  Scaling out the cluster

    New nodes: 192.168.11.202:6388 and 192.168.11.202:6389, with configuration files basically identical to the earlier ones

      redis-6388.conf

port 6388
bind 192.168.11.202
requirepass "myredis"
daemonize yes
logfile "6388.log"
dbfilename "dump-6388.rdb"
dir "/opt/soft/redis/cluster_data"

masterauth "myredis"
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file "nodes-6388.conf"

      redis-6389.conf

port 6389
bind 192.168.11.202
requirepass "myredis"
daemonize yes
logfile "6389.log"
dbfilename "dump-6389.rdb"
dir "/opt/soft/redis/cluster_data"

masterauth "myredis"
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file "nodes-6389.conf"

      Start nodes 6388 and 6389

[root@slave1 redis_cluster]# cd /opt/redis-3.0.0/redis_cluster/
[root@slave1 redis_cluster]# ./../src/redis-server redis-6388.conf 
[root@slave1 redis_cluster]# ./../src/redis-server redis-6389.conf

    Join them to the cluster:

[root@slave1 redis_cluster]# ./../src/redis-trib.rb add-node 192.168.11.202:6388 192.168.11.202:6382
# add 6389 as a slave of 6388 (the id below is 6388's node id)
[root@slave1 redis_cluster]# ./../src/redis-trib.rb add-node --slave --master-id e073db09e7aaed3c20d133726a26c8994932262c 192.168.11.202:6389 192.168.11.202:6382

         Migrating slots and data

      Use the redis-trib.rb reshard command to re-shard the slots: [root@slave1 redis_cluster]# ./../src/redis-trib.rb reshard 192.168.11.202:6382

      At the prompt How many slots do you want to move (from 1 to 16384)?, we are asked how many slots to move; enter 4096

      At What is the receiving node ID?, we are asked which master should receive the moved slots; enter 6388's node id, e073db09e7aaed3c20d133726a26c8994932262c. Only a single target node can be specified (you can simply copy and paste the node id)

      Next, enter the ids of the source nodes, finishing with done. Here I entered all, which moves the 4096 slots to 6388 drawn from all of the existing masters

      Before the data is migrated, the complete plan of slot moves from source to target nodes is printed; after confirming it is correct, type yes to start the migration

      If the migration runs without errors, it completes cleanly

  Cluster failover

    All the keys on 6388

192.168.11.202:6388> keys *
1) "list:names"
2) "name"
192.168.11.202:6388>

    Kill the 6388 process

[root@slave1 redis_cluster]# ps -ef | grep redis-server | grep 6388
root      8280     1  0 Mar05 ?        00:05:07 ./../src/redis-server 192.168.11.202:6388 [cluster]
[root@slave1 redis_cluster]# kill -9 8280

# view the cluster nodes
[root@slave1 redis_cluster]# ./../src/redis-cli -h 192.168.11.202 -p 6382 -a myredis -c
192.168.11.202:6382> cluster nodes
4f36b08d8067a003af45dbe96a5363f348643509 192.168.11.202:6386 slave 3771e67edab547deff6bd290e1a07b23646906ee 0 1520304517911 5 connected
0ec055f9daa5b4f570e6a4c4d46e5285d16e0afe 192.168.11.202:6382 myself,master - 0 0 1 connected 1394-5460
a583def1e6a059e4fdb3592557fd6ab691fd61ec 192.168.11.202:6387 slave 10b3789bb30889b5e6f67175620feddcd496d19e 0 1520304514886 6 connected
10b3789bb30889b5e6f67175620feddcd496d19e 192.168.11.202:6384 master - 0 1520304516904 3 connected 12318-16383
3771e67edab547deff6bd290e1a07b23646906ee 192.168.11.202:6383 master - 0 1520304513879 2 connected 7106-10922
e073db09e7aaed3c20d133726a26c8994932262c 192.168.11.202:6388 master,fail - 1520304485678 1520304484473 10 disconnected 0-1393 5461-7105 10923-12317
37de0d2dc1c267760156d4230502fa96a6bba64d 192.168.11.202:6389 slave e073db09e7aaed3c20d133726a26c8994932262c 0 1520304515895 10 connected
7649466ec006e0902a7f1578417247a6d5540c47 192.168.11.202:6385 slave 0ec055f9daa5b4f570e6a4c4d46e5285d16e0afe 0 1520304518923 4 connected
192.168.11.202:6382> 

# look up the key name
192.168.11.202:6382> get name
-> Redirected to slot [5798] located at 192.168.11.202:6389
"youzhibing"
192.168.11.202:6389> keys *
1) "list:names"
2) "name"
192.168.11.202:6389> 

      6389 has been promoted to master and has taken over 6388's former role; the cluster state is still ok, and the service provided to clients is completely unaffected

    Restart the 6388 service

[root@slave1 redis_cluster]# ./../src/redis-server redis-6388.conf

# view the cluster nodes
192.168.11.202:6389> cluster nodes
0ec055f9daa5b4f570e6a4c4d46e5285d16e0afe 192.168.11.202:6382 master - 0 1520304789567 1 connected 1394-5460
37de0d2dc1c267760156d4230502fa96a6bba64d 192.168.11.202:6389 myself,master - 0 0 12 connected 0-1393 5461-7105 10923-12317
7649466ec006e0902a7f1578417247a6d5540c47 192.168.11.202:6385 slave 0ec055f9daa5b4f570e6a4c4d46e5285d16e0afe 0 1520304788061 1 connected
a583def1e6a059e4fdb3592557fd6ab691fd61ec 192.168.11.202:6387 slave 10b3789bb30889b5e6f67175620feddcd496d19e 0 1520304787556 3 connected
3771e67edab547deff6bd290e1a07b23646906ee 192.168.11.202:6383 master - 0 1520304786550 2 connected 7106-10922
e073db09e7aaed3c20d133726a26c8994932262c 192.168.11.202:6388 slave 37de0d2dc1c267760156d4230502fa96a6bba64d 0 1520304786047 12 connected
10b3789bb30889b5e6f67175620feddcd496d19e 192.168.11.202:6384 master - 0 1520304785542 3 connected 12318-16383
4f36b08d8067a003af45dbe96a5363f348643509 192.168.11.202:6386 slave 3771e67edab547deff6bd290e1a07b23646906ee 0 1520304788561 2 connected
192.168.11.202:6389>

    As you can see, after 6388 restarts it is still in the cluster, but now as a slave of 6389

  Shrinking the cluster

    1. Take nodes 6389 and 6388 offline

    From the cluster node info we know that 6389 owns slots 0-1393, 5461-7105, and 10923-12317. We now migrate 0-1393 to 6382, 5461-7105 to 6383, and 10923-12317 to 6384

How many slots do you want to move (from 1 to 16384)?1394
What is the receiving node ID? 0ec055f9daa5b4f570e6a4c4d46e5285d16e0afe
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:37de0d2dc1c267760156d4230502fa96a6bba64d
Source node #2:done
......
Do you want to proceed with the proposed reshard plan (yes/no)? yes


How many slots do you want to move (from 1 to 16384)? 1645
What is the receiving node ID? 3771e67edab547deff6bd290e1a07b23646906ee
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:37de0d2dc1c267760156d4230502fa96a6bba64d
Source node #2:done
......
Do you want to proceed with the proposed reshard plan (yes/no)? yes


How many slots do you want to move (from 1 to 16384)?1395
What is the receiving node ID? 10b3789bb30889b5e6f67175620feddcd496d19e
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:37de0d2dc1c267760156d4230502fa96a6bba64d
Source node #2:done
......
Do you want to proceed with the proposed reshard plan (yes/no)? yes

    After the slots have been migrated, the cluster node info shows that 6388 no longer has any slots assigned

192.168.11.202:6382> cluster nodes
3771e67edab547deff6bd290e1a07b23646906ee 192.168.11.202:6383 master - 0 1520333368013 16 connected 5461-10922
0ec055f9daa5b4f570e6a4c4d46e5285d16e0afe 192.168.11.202:6382 myself,master - 0 0 13 connected 0-5460
10b3789bb30889b5e6f67175620feddcd496d19e 192.168.11.202:6384 master - 0 1520333372037 17 connected 10923-16383
4f36b08d8067a003af45dbe96a5363f348643509 192.168.11.202:6386 slave 3771e67edab547deff6bd290e1a07b23646906ee 0 1520333370024 16 connected
a583def1e6a059e4fdb3592557fd6ab691fd61ec 192.168.11.202:6387 slave 10b3789bb30889b5e6f67175620feddcd496d19e 0 1520333370525 17 connected
37de0d2dc1c267760156d4230502fa96a6bba64d 192.168.11.202:6389 slave e073db09e7aaed3c20d133726a26c8994932262c 0 1520333369017 15 connected
7649466ec006e0902a7f1578417247a6d5540c47 192.168.11.202:6385 slave 0ec055f9daa5b4f570e6a4c4d46e5285d16e0afe 0 1520333367008 13 connected
e073db09e7aaed3c20d133726a26c8994932262c 192.168.11.202:6388 master - 0 1520333371031 15 connected

    2. Forget the nodes

    Because the nodes in a cluster constantly exchange node information with one another via Gossip messages, a robust mechanism is needed to make every node in the cluster forget the nodes being taken offline, that is, to stop the remaining nodes from exchanging Gossip messages with them.

    Use the redis-trib.rb del-node command to take the nodes offline (internally it tells each remaining node to CLUSTER FORGET the departing one). Take the slave offline first and the master second, to avoid triggering an unnecessary full resync. The commands are as follows

[root@slave1 redis_cluster]# ./../src/redis-trib.rb del-node 192.168.11.202:6389 37de0d2dc1c267760156d4230502fa96a6bba64d
[root@slave1 redis_cluster]# ./../src/redis-trib.rb del-node 192.168.11.202:6388 e073db09e7aaed3c20d133726a26c8994932262c
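
    For reference, the forget step that del-node performs can be approximated with Jedis's cluster commands; a hedged sketch (Redis enforces a 60-second ban window for a forgotten node, so every node still in the cluster must be told within that window):

import redis.clients.jedis.Jedis;

// A hedged sketch of the forget step: each node still in the cluster is told
// to CLUSTER FORGET the departing node, after which it can be shut down.
public class ForgetNode {
    public static void main(String[] args) {
        String departingNodeId = "37de0d2dc1c267760156d4230502fa96a6bba64d"; // 6389
        // every node still in the cluster when 6389 is removed, 6388 included
        int[] remainingPorts = {6382, 6383, 6384, 6385, 6386, 6387, 6388};
        for (int port : remainingPorts) {
            try (Jedis jedis = new Jedis("192.168.11.202", port)) {
                jedis.auth("myredis");
                jedis.clusterForget(departingNodeId);
            }
        }
    }
}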

    The cluster node info now looks like this

192.168.11.202:6382> cluster nodes
3771e67edab547deff6bd290e1a07b23646906ee 192.168.11.202:6383 master - 0 1520333828887 16 connected 5461-10922
0ec055f9daa5b4f570e6a4c4d46e5285d16e0afe 192.168.11.202:6382 myself,master - 0 0 13 connected 0-5460
10b3789bb30889b5e6f67175620feddcd496d19e 192.168.11.202:6384 master - 0 1520333827377 17 connected 10923-16383
4f36b08d8067a003af45dbe96a5363f348643509 192.168.11.202:6386 slave 3771e67edab547deff6bd290e1a07b23646906ee 0 1520333826880 16 connected
a583def1e6a059e4fdb3592557fd6ab691fd61ec 192.168.11.202:6387 slave 10b3789bb30889b5e6f67175620feddcd496d19e 0 1520333829892 17 connected
7649466ec006e0902a7f1578417247a6d5540c47 192.168.11.202:6385 slave 0ec055f9daa5b4f570e6a4c4d46e5285d16e0afe 0 1520333827879 13 connected

    All 16384 slots are still distributed across the nodes and the cluster state is ok; nodes 6389 and 6388 have been taken offline successfully

Notes

  1. When creating a cluster, redis-trib.rb tries its best not to place a master and its slave on the same machine, and may therefore reorder the node list; the order of the node list determines the roles: masters first, then slaves

  2. When redis-trib.rb creates a cluster, every node address given to it must be a node holding no slots and no data, otherwise it refuses to create the cluster

  3. Virtual slots were adopted to address the shortcomings of consistent-hashing partitioning, which handles small numbers of nodes poorly. The virtual slot range (0 to 16383 in Redis Cluster) is generally far larger than the number of nodes, and each node is made responsible for a batch of slots; because a layer of virtual slots now sits between the data and the nodes, the small-node-count problem goes away

  4. When Jedis connects to a Redis Cluster, only one reachable node needs to be configured, not necessarily all of them, because every node carries the topology of the whole cluster. The flow Jedis follows when executing a key command is sketched below; interested readers can dig into the Jedis source. That said, configuring all the nodes is the more robust approach
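
  A simplified, self-contained paraphrase of that flow (illustrative only, not the actual JedisClusterCommand source; all names here are placeholders):

import java.util.HashMap;
import java.util.Map;

// A simplified paraphrase of how a Jedis key command is routed in
// cluster mode; the real logic lives in Jedis's JedisClusterCommand.
public class ClusterCommandFlow {

    // Thrown here in place of a real -MOVED reply from Redis.
    static class MovedException extends RuntimeException {
        final int slot;
        final String targetNode;
        MovedException(int slot, String targetNode) {
            this.slot = slot;
            this.targetNode = targetNode;
        }
    }

    // Local cache of slot -> "host:port" owner, normally built from
    // CLUSTER SLOTS / CLUSTER NODES when the client starts up.
    private final Map<Integer, String> slotCache = new HashMap<>();

    String get(String key, int attemptsLeft) {
        if (attemptsLeft <= 0) {
            throw new RuntimeException("too many cluster redirections");
        }
        int slot = slotOf(key);                 // 1. key -> slot
        String node = slotCache.get(slot);      // 2. slot -> cached owner
        try {
            return sendGet(node, key);          // 3. run the command there
        } catch (MovedException moved) {
            // 4. the slot has moved: update the local cache from the
            //    -MOVED redirect, then retry against the new owner
            slotCache.put(moved.slot, moved.targetNode);
            return get(key, attemptsLeft - 1);
        }
    }

    // Stand-in for CRC16(key) % 16384.
    private int slotOf(String key) {
        return Math.floorMod(key.hashCode(), 16384);
    }

    // Stand-in for the actual network call to one node.
    private String sendGet(String node, String key) {
        return "value-of-" + key + "@" + node;
    }
}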

  5. Cluster scaling and failover have no impact on clients; as long as the overall cluster state is ok, client requests receive normal responses

  6. Only while all 16384 slots are assigned to nodes is the cluster state ok and the cluster able to serve requests; so whether you are scaling the cluster out or in, you must make sure all 16384 slots end up correctly assigned. The sketch below shows one way to verify this from a client
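
  A small check along those lines, assuming the cluster above (Jedis exposes CLUSTER INFO as clusterInfo()):

import redis.clients.jedis.Jedis;

// A minimal sketch: a cluster only serves requests while every one of the
// 16384 slots is assigned, which CLUSTER INFO reports as cluster_state:ok
// and cluster_slots_assigned:16384.
public class ClusterStateCheck {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("192.168.11.202", 6382)) {
            jedis.auth("myredis");
            String info = jedis.clusterInfo();   // raw CLUSTER INFO text
            System.out.println(info);
            if (!info.contains("cluster_state:ok")) {
                System.err.println("cluster is not able to serve requests");
            }
        }
    }
}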

References

  《Redis開發與運維》 (Redis Development and Operations)

  http://www.redis.cn/topics/sentinel.html
