Redis 4.0.14 Cluster Installation and Deployment

This tutorial was set up on a KVM virtual machine on an internal network behind NAT. To save server resources, the demonstration runs every instance on a single Linux host. In production, do not deploy multiple Redis instances on one Linux server under heavy traffic (running several Redis instances per machine is not recommended even under light traffic). On IDC bare-metal machines, make sure the NIC can sustain the required concurrency; on cloud hosts, choose memory-optimized (or memory/IO-optimized) instance types. Redis is, after all, an I/O-heavy service.
Tip: for clusters of 3 masters / 3 replicas or larger, disable bgsave on the masters and run it on the replicas instead, to reduce the latency bgsave introduces and its impact on I/O throughput during traffic peaks.
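As a minimal sketch of that tip (using the host and port configured later in this tutorial), automatic RDB snapshots can be disabled on a master either in redis.conf or at runtime:
# in the master's redis.conf: disable automatic RDB snapshots
save ""
# or at runtime, without restarting the master
redis-cli -h 192.168.100.214 -p 9701 config set save ""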
  • Download and install Redis
cd ~
 yum install gcc gcc-c++ -y
 wget http://download.redis.io/releases/redis-4.0.14.tar.gz
 tar -zxf redis-4.0.14.tar.gz
 cd redis-4.0.14
 cd deps
 make hiredis jemalloc linenoise lua geohash-int
 cd ..
 make PREFIX=/app/coohua/redis-9701 install
 make install
  • Add redis-cli to the system PATH
[root@redis-nec001 ~]# cp /app/coohua/redis-9701/bin/redis-cli /usr/local/bin/redis-cli
[root@redis-nec001 ~]# redis-cli  --version
redis-cli 4.0.14
  • Create multiple instance copies (in production, run each instance on its own server)
cd /app/coohua/
mkdir -p redis-9701/conf
cp ~/redis-4.0.14/redis.conf  ./redis-9701/conf/
cp -arp redis-9701 redis-9702
cp -arp redis-9701 redis-9703
cp -arp redis-9701 redis-9704
cp -arp redis-9701 redis-9705
cp -arp redis-9701 redis-9706
cp -arp redis-9701 redis-9707
  • Edit redis.conf and change or enable the following settings
bind 192.168.100.214
port 9701
daemonize yes
pidfile /app/coohua/redis-9701/conf/redis_9701.pid
logfile "/data/coohua/redis-9701/redis.log"
dir /data/coohua/redis-9701/
maxmemory 4096M
cluster-enabled yes
cluster-config-file /app/coohua/redis-9701/conf/nodes-9701.conf
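The values above are for the 9701 instance; every other copy needs its own port and paths. A minimal sketch (assuming the layout created above, with 9701's redis.conf already edited) that propagates and adjusts the config for the remaining instances:
for port in 9702 9703 9704 9705 9706 9707; do
  conf=/app/coohua/redis-$port/conf/redis.conf
  cp /app/coohua/redis-9701/conf/redis.conf "$conf"
  sed -i "s/9701/$port/g" "$conf"   # rewrites port, pidfile, logfile, dir and cluster-config-file
done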
  • Set up the Ruby environment (via RVM)
gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
curl -sSL https://get.rvm.io | bash -s stable
source /etc/profile.d/rvm.sh
rvm install 2.6.4
rvm 2.6.4 --default
gem install redis
cp ~/redis-4.0.14/src/redis-trib.rb /usr/local/bin/
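A quick sanity check (not part of the original steps) that the Ruby toolchain and the cluster helper are usable:
ruby -v           # should report ruby 2.6.4
gem list redis    # the redis gem is required by redis-trib.rb
redis-trib.rb     # run without arguments, it prints the usage summary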
  • Set kernel parameters
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo "echo never > /sys/kernel/mm/transparent_hugepage/enabled" >> /etc/rc.local
cat /etc/sysctl.conf
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
kernel.core_uses_pid = 1
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
vm.swappiness = 0
net.ipv4.neigh.default.gc_stale_time=120
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.all.arp_announce=2
net.ipv4.conf.lo.arp_announce=2
net.core.somaxconn = 262144
net.core.netdev_max_backlog = 262144
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_fin_timeout = 2
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_keepalive_time = 30
net.ipv4.tcp_orphan_retries = 2
kernel.core_pattern = /data/coohua/core/core_%e_%p
vm.overcommit_memory = 1
kernel.sysrq = 1
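After editing /etc/sysctl.conf, load the settings into the running kernel (otherwise they only take effect after a reboot):
sysctl -p   # applies the values from /etc/sysctl.conf immediately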
  • Start Redis (as the coohua user)
chown -R coohua:coohua /app/coohua/redis* /data/coohua/
  su - coohua
 /app/coohua/redis-9701/bin/redis-server  /app/coohua/redis-9701/conf/redis.conf
 /app/coohua/redis-9702/bin/redis-server  /app/coohua/redis-9702/conf/redis.conf
 /app/coohua/redis-9703/bin/redis-server  /app/coohua/redis-9703/conf/redis.conf
 /app/coohua/redis-9704/bin/redis-server  /app/coohua/redis-9704/conf/redis.conf
 /app/coohua/redis-9705/bin/redis-server  /app/coohua/redis-9705/conf/redis.conf
 /app/coohua/redis-9706/bin/redis-server  /app/coohua/redis-9706/conf/redis.conf
 /app/coohua/redis-9707/bin/redis-server  /app/coohua/redis-9707/conf/redis.conf
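The dir, logfile and kernel.core_pattern settings above all point under /data/coohua/, which must exist and be writable by coohua before the first start; the seven start commands can also be written as a loop. A minimal sketch under those assumptions:
# as root: create the per-instance data directories and the core dump directory
mkdir -p /data/coohua/redis-970{1..7} /data/coohua/core
chown -R coohua:coohua /data/coohua/
# as coohua: start all seven instances
for port in 9701 9702 9703 9704 9705 9706 9707; do
  /app/coohua/redis-$port/bin/redis-server /app/coohua/redis-$port/conf/redis.conf
done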
  • Cluster creation: 3 masters only, or 3 masters + 3 replicas (the smallest possible cluster sizes)
redis-trib.rb  create --replicas 0 192.168.100.214:9701 192.168.100.214:9702 192.168.100.214:9703 # 3-master mode: the minimum cluster, but with no high availability.
redis-trib.rb  create --replicas 1 192.168.100.214:9701 192.168.100.214:9702 192.168.100.214:9703 192.168.100.214:9704 192.168.100.214:9705 192.168.100.214:9706 # master-replica mode: each slave is both a standby for its master and a read node.
  • Create a minimal 3-master cluster
[root@redis-nec001 bin]# redis-trib.rb  create --replicas 0 192.168.100.214:9701  192.168.100.214:9702  192.168.100.214:9703
>>> Creating cluster
>>> Performing hash slots allocation on 3 nodes...
Using 3 masters:
192.168.100.214:9701
192.168.100.214:9702
192.168.100.214:9703
M: fa820855aeebad6551d09d0cd6063aeaefc8f4f9 192.168.100.214:9701
   slots:0-5460 (5461 slots) master
M: 517fd7f65b7e653a91b24aa7a06f1ec360bd8220 192.168.100.214:9702
   slots:5461-10922 (5462 slots) master
M: ccf082f6516ec23c1aee891358a3daf47d2b5ca7 192.168.100.214:9703
   slots:10923-16383 (5461 slots) master
Can I set the above configuration? (type 'yes' to accept): yes         
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join..
>>> Performing Cluster Check (using node 192.168.100.214:9701)
M: fa820855aeebad6551d09d0cd6063aeaefc8f4f9 192.168.100.214:9701
   slots:0-5460 (5461 slots) master
   0 additional replica(s)
M: 517fd7f65b7e653a91b24aa7a06f1ec360bd8220 192.168.100.214:9702
   slots:5461-10922 (5462 slots) master
   0 additional replica(s)
M: ccf082f6516ec23c1aee891358a3daf47d2b5ca7 192.168.100.214:9703
   slots:10923-16383 (5461 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
  • Check the cluster state
[root@redis-nec001 bin]# redis-cli  -h 192.168.100.214 -p 9701 -c
192.168.100.214:9701> cluster nodes
517fd7f65b7e653a91b24aa7a06f1ec360bd8220 192.168.100.214:9702@19702 master - 0 1568616352578 2 connected 5461-10922
ccf082f6516ec23c1aee891358a3daf47d2b5ca7 192.168.100.214:9703@19703 master - 0 1568616353579 3 connected 10923-16383
fa820855aeebad6551d09d0cd6063aeaefc8f4f9 192.168.100.214:9701@19701 myself,master - 0 1568616352000 1 connected 0-5460
192.168.100.214:9701>
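Besides cluster nodes, the cluster info command gives a quick health summary; cluster_state should be ok and all 16384 slots should be assigned:
redis-cli -h 192.168.100.214 -p 9701 cluster info
# expect, among other fields:
#   cluster_state:ok
#   cluster_slots_assigned:16384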
  • Add replicas to the cluster, upgrading to a highly available 3-master / 3-replica setup
In redis-trib.rb add-node --slave, the --slave flag marks the node being added as a replica; its master is specified with --master-id, followed by the IP and port of the new replica, and finally an arbitrarily chosen existing master node to complete the command. It is not obvious why Redis was designed this way: that last argument feels dispensable, yet it cannot be omitted.
[root@redis-nec001 coohua]# redis-trib.rb add-node --slave --master-id fa820855aeebad6551d09d0cd6063aeaefc8f4f9 192.168.100.214:9704   192.168.100.214:9701
>>> Adding node 192.168.100.214:9704 to cluster 192.168.100.214:9701
>>> Performing Cluster Check (using node 192.168.100.214:9701)
M: fa820855aeebad6551d09d0cd6063aeaefc8f4f9 192.168.100.214:9701
   slots:0-5460 (5461 slots) master
   0 additional replica(s)
M: 517fd7f65b7e653a91b24aa7a06f1ec360bd8220 192.168.100.214:9702
   slots:5461-10922 (5462 slots) master
   0 additional replica(s)
M: ccf082f6516ec23c1aee891358a3daf47d2b5ca7 192.168.100.214:9703
   slots:10923-16383 (5461 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.100.214:9704 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 192.168.100.214:9701.
[OK] New node added correctly.
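The remaining two replicas are added the same way; as the check output further below shows, 9705 ends up replicating 9702 and 9706 replicating 9703 (master IDs taken from the earlier cluster output):
redis-trib.rb add-node --slave --master-id 517fd7f65b7e653a91b24aa7a06f1ec360bd8220 192.168.100.214:9705 192.168.100.214:9701
redis-trib.rb add-node --slave --master-id ccf082f6516ec23c1aee891358a3daf47d2b5ca7 192.168.100.214:9706 192.168.100.214:9701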
  • Procedure for adding a master node and assigning slots
    Adding a master is slightly more involved: first add an empty Redis instance to the cluster, then assign it a 'reasonable' share of the slots.
    If the existing cluster has 3 masters and you add 3 more, the split is straightforward: each original node hands half of its slots to one new node.
    That is:
    node 1 gives half of its slots to new node 1
    node 2 gives half of its slots to new node 2
    node 3 gives half of its slots to new node 3
    For an asymmetric addition (provided the existing masters hold the same, or nearly the same, number of slots):
    slots per node afterwards = total slots / number of master nodes (including the new ones)
    slots each existing node must migrate = its current slot count - slots per node afterwards
  • Add the new master node
redis-trib.rb add-node 192.168.100.214:9707  192.168.100.214:9701
  • Check each master's slot count. Apart from the new node, every master still holds roughly 5461 slots.

slots per node (after resharding) = 16384 / 4 = 4096
slots each old master must migrate = 5461 - 4096 = 1365

[root@redis-nec001 coohua]# redis-trib.rb check 192.168.100.214:9701              
>>> Performing Cluster Check (using node 192.168.100.214:9701)
M: fa820855aeebad6551d09d0cd6063aeaefc8f4f9 192.168.100.214:9701
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 62b3ded1a7545f0931611e837cfdbe6dc6fa580c 192.168.100.214:9704
   slots: (0 slots) slave
   replicates fa820855aeebad6551d09d0cd6063aeaefc8f4f9
M: 517fd7f65b7e653a91b24aa7a06f1ec360bd8220 192.168.100.214:9702
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 143e3f136118e62ae6b5a6d64fc21e1fcafee4b4 192.168.100.214:9706
   slots: (0 slots) slave
   replicates ccf082f6516ec23c1aee891358a3daf47d2b5ca7
S: 97147860d71d363059927f21ac92b16b4d17c97e 192.168.100.214:9705
   slots: (0 slots) slave
   replicates 517fd7f65b7e653a91b24aa7a06f1ec360bd8220
M: ccf082f6516ec23c1aee891358a3daf47d2b5ca7 192.168.100.214:9703
   slots:10923-16383 (5461 slots) master
M: 2c7eb280218234fb6adbd4718c7e21b128f1a938 192.168.100.214:9707
   slots: (0 slots) master
   0 additional replica(s)
  • Reshard (assign slots to the new master)
redis-trib.rb reshard 192.168.100.214:9707
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? 2c7eb280218234fb6adbd4718c7e21b128f1a938
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:fa820855aeebad6551d09d0cd6063aeaefc8f4f9
Source node #2:517fd7f65b7e653a91b24aa7a06f1ec360bd8220
Source node #3:ccf082f6516ec23c1aee891358a3daf47d2b5ca7
Source node #4:done
Moving slot 1334 from fa820855aeebad6551d09d0cd6063aeaefc8f4f9
.......
.......
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slots from the other two masters proceeds the same way (output omitted).
After resharding, all masters hold roughly the same number of slots.
Slot migration can fail part-way through; running redis-trib.rb fix against the cluster repairs it. Typical causes of migration errors:
  --- the master was under heavy traffic or its CPU was already overloaded at the time; avoid peak hours
  --- bgsave was running on the master; disable bgsave there and run it on the replicas to reduce the load on the master
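For reference, a repair-and-verify sequence using any cluster node as the entry point might look like this:
redis-trib.rb fix 192.168.100.214:9701     # repairs open/stuck slots left by a failed migration
redis-trib.rb check 192.168.100.214:9701   # confirm that all 16384 slots are covered again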