Trying out redis-cluster-proxy, the Redis 6.0 cluster proxy

With the release of Redis 6.0, one of its most exciting companions is an official proxy for Redis Cluster: redis-cluster-proxy, https://github.com/RedisLabs/redis-cluster-proxy
Compared with the old way of accessing a Redis cluster, where all the node IPs of the cluster had to be specified:
1. redis-cluster-proxy proxies (and shields clients from) the Redis cluster nodes. It is similar to a VIP but simpler: the client does not need to know how many nodes the cluster has or which are masters and which are replicas, and can access the cluster directly through the proxy.
2. Beyond that, it also brings some very practical improvements, such as support for multiple-key operations and cross-slot operations in Redis cluster mode (somewhat reminiscent of sharding middleware for relational databases).
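To make the contrast concrete, here is a minimal sketch with redis-cli; the addresses are the ones used in the test environment later in this article:

```shell
# Before: a cluster-aware client is pointed at a cluster node and must
# run in cluster mode (-c) to follow MOVED/ASK redirections itself
redis-cli -c -h 192.168.0.61 -p 8888

# With the proxy: clients talk to one endpoint, exactly as if it
# were a standalone Redis instance
redis-cli -h 192.168.0.12 -p 7777
```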

Main features of redis-cluster-proxy
The following is based on the official description:
redis-cluster-proxy is a proxy for Redis Cluster. Redis is able to run in a cluster mode based on automatic failover and sharding.
This special mode (Redis Cluster mode) requires special clients that understand the cluster protocol: with the proxy, the cluster is abstracted away, and the cluster can be accessed as if it were a single Redis instance.
Redis Cluster Proxy is multi-threaded and by default currently uses a multiplexing communication model, so that every thread has its own connection to the cluster that is shared by all the clients belonging to that thread.
However, in some special cases (MULTI transactions or blocking commands) multiplexing is disabled and the client gets its own cluster connection.
This way, clients that only send simple commands such as GET and SET do not need a private set of connections to the Redis cluster.

The main features of Redis Cluster Proxy:
1. Automatic routing: every query is automatically routed to the correct node of the cluster.
2. Multi-threaded (it currently uses a multiplexing communication model, where every thread has its own cluster connection).
3. Support for both multiplexing and private connection models.
4. Query execution and reply order are guaranteed even in multiplexing contexts.
5. Automatic update of the cluster configuration after redirection errors: when these kinds of errors occur in replies, the proxy automatically updates its internal representation of the cluster by fetching the updated configuration and remapping all the slots.
    All queries are re-executed after the update completes, so from the client's point of view everything runs normally (clients will not receive ASK|MOVED redirection errors: after the cluster configuration has been updated, they directly receive the expected results).
6. Cross-slot/cross-node queries: many commands involving multiple keys that belong to different slots (or even different cluster nodes) are supported.
    Such commands split the query into multiple queries that are routed to the corresponding slots/nodes.
    Reply handling for these commands is command-specific. Some commands, such as MGET, merge all the replies as if they were a single reply.
    Other commands, such as MSET or DEL, sum up the results of all the replies. Since these queries actually break the atomicity of the command, their use is optional (disabled by default).
7. Some commands with no specific node/slot, such as DBSIZE, are delivered to all nodes, and the replies are map-reduced in order to give the sum of all the values contained in all the replies.
8. Additional proxy commands that can be used to perform some proxy-specific actions.
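Regarding point 8, the proxy-specific actions are issued through the PROXY command from any connected client. A sketch of a few subcommands as documented in the project README (the exact set may vary between versions):

```shell
# From redis-cli connected to the proxy (not to a cluster node):
# PROXY HELP                  -- list available subcommands
# PROXY CONFIG GET threads    -- read a proxy configuration parameter
# PROXY MULTIPLEXING STATUS   -- show this client's connection model
redis-cli -h 192.168.0.12 -p 7777 PROXY HELP
```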

Redis 6.0 and redis-cluster-proxy require a gcc 5+ build environment
Compiling Redis 6.0 and redis-cluster-proxy depends on gcc 5 or later. The default gcc on CentOS 7 is 4.x, which cannot build them; compilation fails with errors like:
server.h:1022:5: error: expected specifier-qualifier-list before '_Atomic'
A similar error is discussed here: https://wanghenshui.github.io/2019/12/31/redis-ce
Possible solutions (my environment is CentOS 7, and this cost me the better part of a day):
1. https://stackoverflow.com/questions/55345373/how-to-install-gcc-g-8-on-centos, tested and working.
2. https://blog.csdn.net/displayMessage/article/details/85602701, building gcc from the source tarball (about 120 MB). Some report the build takes 40 minutes; on my machine it had not finished after more than an hour, so I went with the first method.

 

Redis cluster environment setup
The test environment topology is a Docker-based Redis cluster with six nodes: three masters and three replicas.

For details of the cluster itself, see my earlier article on automated Redis Cluster installation, scale-out and scale-in, which shows how to build the cluster quickly.

 
redis-cluster-proxy installation
Installation steps:
1. git clone https://github.com/artix75/redis-cluster-proxy
   cd redis-cluster-proxy
2. Resolve the gcc version dependency. I struggled with this for a long time; building gcc 5+ from source ran for over an hour without success.
   The following approach worked, from https://stackoverflow.com/questions/55345373/how-to-install-gcc-g-8-on-centos
On CentOS 7, you can install GCC 8 from Developer Toolset. First you need to enable the Software Collections repository:
yum install centos-release-scl

Then you can install GCC 8 and its C++ compiler:
yum install devtoolset-8-gcc devtoolset-8-gcc-c++

To switch to a shell which defaults gcc and g++ to this GCC version, use:
scl enable devtoolset-8 -- bash

You need to wrap all commands under the scl call, so that the process environment changes performed by this command affect all subshells. For example, you could use the scl command to invoke a shell script that performs the required actions.
3. make PREFIX=/usr/local/redis_cluster_proxy install
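Note that make itself must also see gcc 8, so the build has to run inside the devtoolset shell from step 2; a sketch of the combined invocation:

```shell
# Verify which compiler the SCL environment provides (should be gcc 8.x)
scl enable devtoolset-8 -- gcc --version

# Run the build and install inside the SCL environment so 'gcc'
# resolves to gcc 8 instead of the system gcc 4.8
scl enable devtoolset-8 -- make PREFIX=/usr/local/redis_cluster_proxy install
```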
 
4. The redis-cluster-proxy configuration file
Parameters can be given directly on the command line at startup, but starting from a configuration file is preferable. The file is clean and its comments are clear; I have annotated it briefly below and look forward to discovering more features.
# Redis Cluster Proxy configuration file example.
# When starting from a configuration file, the -c option must be given:
# ./redis-cluster-proxy -c /path/to/proxy.conf
 

################################## INCLUDES ###################################
# Include one or more other config files here.  Include files can include other files.
# Paths of any configuration files to include
# If instead you are interested in using includes to override configuration options, it is better to use include as the last line.
# include /path/to/local.conf
# include /path/to/other.conf

######################## CLUSTER ENTRY POINT ADDRESS ##########################
# Indicate the entry point address in the same way it can be indicated in the
# redis-cluster-proxy command line arguments.
# Note that it can be overridden by the command line argument itself.
# Below are the nodes of the Redis cluster itself: 6 nodes (3 masters and
# 3 replicas), 192.168.0.61 through 192.168.0.66.
# You can also specify multiple entry-points, by adding more lines, ie:
# cluster 127.0.0.1:7000
# cluster 127.0.0.1:7001
# You can also use the "entry-point" alias instead of cluster, ie:
# entry-point 127.0.0.1:7000
#
# cluster 127.0.0.1:7000
cluster 192.168.0.61:8888
cluster 192.168.0.62:8888
cluster 192.168.0.63:8888
cluster 192.168.0.64:8888
cluster 192.168.0.65:8888
cluster 192.168.0.66:8888


################################### MAIN ######################################
# Set the port used by Redis Cluster Proxy to listen to incoming connections
# from clients (default 7777)
# The listening port of redis-cluster-proxy
port 7777
 
#  Bind address: set here to the IP of the host running redis-cluster-proxy
# If you want you can bind a single interface, if the bind option is not
# specified all the interfaces will listen for incoming connections.
# You can also bind on multiple interfaces by declaring bind on multiple lines
#
# bind 127.0.0.1
bind 192.168.0.12
 
#  Unix socket file path
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis Cluster Proxy won't
# listen on a Unix socket when not specified.
#
# unixsocket /path/to/proxy.socket

# Set the Unix socket file permissions (default 0)
#
# unixsocketperm 760
 
#  Number of threads
# Set the number of threads.
threads 8

# Set the TCP keep-alive value on the Redis Cluster Proxy's socket
#
# tcpkeepalive 300

# Set the TCP backlog on the Redis Cluster Proxy's socket
#
# tcp-backlog 511

#  Connection pool settings
# Size of the connections pool used to provide ready-to-use sockets to
# private connections. The number (size) indicates the number of starting
# connections in the pool.
# Use 0 to disable connections pool at all.
# Every thread will have its pool of ready-to-use connections.
# When the proxy starts, every thread will populate a pool containing
# connections to all the nodes of the cluster.
# Whenever a client needs a private connection, it can take a connection
# from the pool, if available. This will speed-up the client transition from
# the thread's shared connection to its own private connection, since the
# connection from the thread's pool should be already connected and
# ready-to-use. Otherwise, clients with private connections must re-connect
# to the nodes of the cluster (this re-connection will act in a 'lazy' way).
#
# connections-pool-size 10

# Minimum number of connections in the pool. Below this value, the
# thread will start re-spawning connections at the defined rate until
# the pool will be full again.
#
# connections-pool-min-size 10

# Interval in milliseconds used to re-spawn connections in the pool.
# Whenever the number of connections in the pool drops below the minimum
# (see 'connections-pool-min-size' above), the thread will start
# re-spawning connections in the pool, until the pool will be full again.
# New connections will be added at this specified interval.
#
# connections-pool-spawn-every 50

# Number of connections to re-spawn in the pool at every cycle that will
# happen with an interval defined by 'connections-pool-spawn-every' (see above).
#
# connections-pool-spawn-rate 50
 
#  Run mode: set this to no at first, so that startup logs and errors are
#  printed straight to the console, which makes startup problems easy to spot.
#  Oddly, when I first set this to yes, the errors written to the log file
#  did not match what was printed directly to the console.
# Run Redis Cluster Proxy as a daemon.
daemonize yes
 
#  pid file path
# If a pid file is specified, the proxy writes it where specified at startup
# and removes it at exit.
#
# When the proxy runs non daemonized, no pid file is created if none is
# specified in the configuration. When the proxy is daemonized, the pid file
# is used even if not specified, defaulting to
# "/var/run/redis-cluster-proxy.pid".
#
# Creating a pid file is best effort: if the proxy is not able to create it
# nothing bad happens, the server will start and run normally.
#
#pidfile /var/run/redis-cluster-proxy.pid


#  Log file path. If the proxy can start normally, I strongly recommend
#  specifying a log file; all runtime errors can then be found in the log.
# Specify the log file name. Also the empty string can be used to force
# Redis Cluster Proxy to log on the standard output. Note that if you use
# standard output for logging but daemonize, logs will be sent to /dev/null
#
#logfile ""
logfile "/usr/local/redis_cluster_proxy/redis_cluster_proxy.log"


#  Cross-slot operations: set to yes here to allow them
# Enable cross-slot queries that can use multiple keys belonging to different
# slots or even different nodes.
# WARN: these queries will break the atomicity design of many Redis
# commands.
# NOTE: cross-slots queries are not supported by all the commands, even if
# this feature is enabled
#
# enable-cross-slot no
enable-cross-slot yes
 
# Maximum number of clients allowed
#
# max-clients 10000
 
# Authentication for connecting to the Redis cluster. If the cluster nodes
# require authentication, I strongly recommend configuring the same auth on
# every node of the cluster.
# Authentication password used to authenticate on the cluster in case its nodes
# are password-protected. The password will be used both for fetching cluster's
# configuration and to automatically authenticate proxy's internal connections
# to the cluster itself (both multiplexing shared connections and clients'
# private connections). So, clients connected to the proxy won't need to issue
# the Redis AUTH command in order to be authenticated.
#
# auth mypassw
auth your_redis_cluster_password
 
#  Username, supported from Redis 6.0 onward; not specified here
# Authentication username (supported by Redis >= 6.0)
#
# auth-user myuser

################################# LOGGING #####################################
# Log level: can be debug, info, success, warning or error.
log-level error

# Dump queries received from clients in the log (log-level debug required)
#
# dump-queries no

# Dump buffer in the log (log-level debug required)
#
# dump-buffer no

# Dump requests' queues (requests to send to cluster, request pending, ...)
# in the log (log-level debug required)
#
# dump-queues no

Start redis-cluster-proxy: ./bin/redis-cluster-proxy -c ./proxy.conf
Note: on the first run, keep it in the foreground so the startup logs and any errors are printed directly, and make sure it starts cleanly before switching to daemonize mode.
I ran into some errors at first, and for the same error the log printed on the console did not fully match the log written to the file in daemonize mode.
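When the proxy does run daemonized, the log file configured above is the place to look; for example:

```shell
# Follow the log configured via the 'logfile' directive in proxy.conf
tail -f /usr/local/redis_cluster_proxy/redis_cluster_proxy.log

# Confirm the proxy is listening on its configured port (7777)
ss -tlnp | grep 7777
```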

 

Trying out redis-cluster-proxy
Unlike an ordinary Redis cluster connection, with redis-cluster-proxy the client connects to the proxy node and needs no details about the Redis cluster itself. Here I try a multiple-key operation.
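A sketch of such a session, assuming the proxy address from the configuration above (192.168.0.12:7777); with enable-cross-slot yes, keys that hash to different slots can be written in one command:

```shell
# The three keys hash to different slots, yet one MSET through the
# proxy writes them all; the proxy splits and routes the sub-queries
redis-cli -h 192.168.0.12 -p 7777 MSET key1 aaa key2 bbb key3 ccc

# MGET merges the per-node replies back into a single reply
redis-cli -h 192.168.0.12 -p 7777 MGET key1 key2 key3
```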

Connecting with the traditional cluster client to inspect the data written by the multiple-key operation above shows that the keys were indeed written to different nodes of the cluster.
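To verify the distribution directly, one can connect to a cluster node in cluster mode; a sketch using the node addresses from the topology above:

```shell
# CLUSTER KEYSLOT shows that the keys hash to different slots
redis-cli -h 192.168.0.61 -p 8888 CLUSTER KEYSLOT key1
redis-cli -h 192.168.0.61 -p 8888 CLUSTER KEYSLOT key2

# -c makes redis-cli follow MOVED redirections to the owning node
redis-cli -c -h 192.168.0.61 -p 8888 GET key1
```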

 

Failover test
Crudely kill a master node (here, 192.168.0.61 is shut down directly) and see whether redis-cluster-proxy can still read and write.
1. The Redis cluster's own failover completes without any problem.

2. 192.168.0.64 takes over from 192.168.0.61 as the new master.

3. Reads and writes through the proxy node hang.

The redis-cluster-proxy log says that node 192.168.0.61 cannot be connected to, and the proxy fails and exits.

Evidently, as the version string in the log suggests (Redis Cluster Proxy v999.999.999 (unstable)), a more stable release is still something to wait for.
The author has responded to a similar issue himself; see https://github.com/RedisLabs/redis-cluster-proxy/issues/36
The Proxy currently requires that all nodes of the cluster must be up at startup when it fetches the cluster's internal map.
I'll probably change this in the next weeks.

 

Is redis-cluster-proxy a perfect solution?
Having only just been released, it has seen little real production use and surely still has plenty of pitfalls, but that is no obstacle to having high expectations for it.
Even a first try shows that redis-cluster-proxy is a very lightweight, clean, and simple proxy layer. It solves several real problems of Redis Cluster and brings some convenience to applications as well.
For teams without the capacity to develop against the source code, the reliability and authority of an official project must be acknowledged, compared with third-party proxy middleware.
So, is redis-cluster-proxy a perfect solution? Two questions remain:
1. How do you deal with redis-cluster-proxy being a single point of failure?
2. How will the proxy node cope with a network traffic storm?
