This post is about building a shared-session module on top of Spring Session, backed by a Redis cluster (Redis 5.0.3). To meet the session-sharing requirement across the subsystems of our IoT platform, and to make the module easy for every subsystem to consume, it is packaged as a POM component that each subsystem pulls in as an ordinary dependency in its pom.xml.
Today's topic is setting up the Redis 5.0.3 environment itself.
The process is similar to the Redis 3.2.8 setup I covered before (https://www.cnblogs.com/shihuc/p/7882004.html), except that Redis 5 no longer relies on a Ruby script for cluster management; that logic is now implemented in C and driven directly through redis-cli. I will skip the background and go straight to the configuration.
First, the configuration items that need to be changed:
bind 10.95.200.12
protected-mode no
port 7380
daemonize yes
pidfile /var/run/redis_7380.pid
dbfilename dump-7380.rdb
appendonly yes
appendfilename "appendonly-7380.aof"
cluster-enabled yes
cluster-config-file nodes-7380.conf
cluster-node-timeout 15000
notify-keyspace-events "Ex"
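As a hedged sketch, the sibling instance on the same host (port 7381) differs only in the port-derived values, assuming the same directory layout:

```conf
bind 10.95.200.12
protected-mode no
port 7381
daemonize yes
pidfile /var/run/redis_7381.pid
dbfilename dump-7381.rdb
appendonly yes
appendfilename "appendonly-7381.aof"
cluster-enabled yes
cluster-config-file nodes-7381.conf
cluster-node-timeout 15000
notify-keyspace-events "Ex"
```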
My environment consists of three master/slave pairs. The items above belong to one of the Redis nodes, IP 10.95.200.12, port 7380. Configure the remaining nodes the same way: I have three virtual machines in total (10.95.200.12, 10.95.200.13, 10.95.200.14), each hosting two instances, on ports 7380 and 7381.
Once the config files are in place, start every instance. For example, to start the port-7380 instance on 10.95.200.12:
[tkiot@tkwh-kfcs-app2 redis]$ ./bin/redis-server redis-7380.conf
23941:C 10 Jul 2019 08:34:05.607 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
23941:C 10 Jul 2019 08:34:05.607 # Redis version=5.0.3, bits=64, commit=00000000, modified=0, pid=23941, just started
23941:C 10 Jul 2019 08:34:05.607 # Configuration loaded
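The remaining five instances are started the same way. As a quick sketch, the start commands for the whole 3-host / 2-port layout can be generated in a loop; the /u02/redis path and passwordless ssh are assumptions, and the commands are echoed rather than executed so they can be reviewed first:

```shell
# Dry run: print the start command for each of the six instances.
# Adjust paths/hosts to your environment, then pipe to sh (or drop the echo).
for host in 10.95.200.12 10.95.200.13 10.95.200.14; do
  for port in 7380 7381; do
    echo "ssh ${host} /u02/redis/bin/redis-server /u02/redis/redis-${port}.conf"
  done
done
```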
A small aside here:
On each server, the config file that actually works is named as above, redis-7380.conf. At first, through sheer carelessness, I named it nodes-7380.conf, the same value as the cluster-config-file setting inside it. What a trap: startup printed exactly the same output as a successful launch, yet ps showed no Redis process at all. That one cost me an entire morning.
After starting with redis-7380.conf as above, you can check the status (cluster nodes):
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7380
10.95.200.12:7380> cluster nodes
7f4cf1bffc7e42a0e2d15bcc5a0a5386711813e8 :7380@17380 myself,master - 0 0 0 connected
First, let's see which cluster-related commands are available:
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster help
Cluster Manager Commands:
  create         host1:port1 ... hostN:portN
                 --cluster-replicas <arg>
  check          host:port
                 --cluster-search-multiple-owners
  info           host:port
  fix            host:port
                 --cluster-search-multiple-owners
  reshard        host:port
                 --cluster-from <arg>
                 --cluster-to <arg>
                 --cluster-slots <arg>
                 --cluster-yes
                 --cluster-timeout <arg>
                 --cluster-pipeline <arg>
                 --cluster-replace
  rebalance      host:port
                 --cluster-weight <node1=w1...nodeN=wN>
                 --cluster-use-empty-masters
                 --cluster-timeout <arg>
                 --cluster-simulate
                 --cluster-pipeline <arg>
                 --cluster-threshold <arg>
                 --cluster-replace
  add-node       new_host:new_port existing_host:existing_port
                 --cluster-slave
                 --cluster-master-id <arg>
  del-node       host:port node_id
  call           host:port command arg arg .. arg
  set-timeout    host:port milliseconds
  import         host:port
                 --cluster-from <arg>
                 --cluster-copy
                 --cluster-replace
  help
Next, create the cluster through redis-cli; I wrapped the command in a shell script:
#!/bin/bash
/u02/redis/bin/redis-cli --cluster create 10.95.200.12:7380 10.95.200.13:7380 10.95.200.14:7380 10.95.200.12:7381 10.95.200.13:7381 10.95.200.14:7381 --cluster-replicas 1
Note that among the six nodes listed in the script, you cannot dictate which become masters and which become slaves; the master/slave relationships are assigned automatically while the cluster is being built.
Common commands for inspecting cluster nodes:
CLUSTER INFO prints information about the cluster.
CLUSTER NODES lists all nodes currently known to the cluster, together with their details.
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7380
10.95.200.12:7380>
10.95.200.12:7380> cluster nodes
ed309033dbefe2b0b64ad7fb643c4d2531e53b95 10.95.200.13:7380@17380 master - 0 1562720545267 2 connected 5461-10922
2cb8c6a7c86f512db2c6ca88eb67449197f5b885 10.95.200.12:7381@17381 slave cf6ca00cb36850762fdff1223684edf1fb9bd4ba 0 1562720546000 4 connected
cf6ca00cb36850762fdff1223684edf1fb9bd4ba 10.95.200.14:7380@17380 master - 0 1562720546000 3 connected 10923-16383
26c3a7cd48fe4bddf9cf0c60a3c30b6b1f273135 10.95.200.12:7380@17380 myself,slave 467d4c7508d1cb371ed52c4c6574506cba40c328 0 1562720546000 1 connected
467d4c7508d1cb371ed52c4c6574506cba40c328 10.95.200.13:7381@17381 master - 0 1562720547271 8 connected 0-5460
fbece4571b50904d93a45afbcce66941d53a45b5 10.95.200.14:7381@17381 slave ed309033dbefe2b0b64ad7fb643c4d2531e53b95 0 1562720546269 6 connected
To evict a specific node (identified by its nodeId, the leftmost column of the cluster nodes output above):
10.95.200.12:7380> cluster forget ed309033dbefe2b0b64ad7fb643c4d2531e53b95
OK
10.95.200.12:7380>
10.95.200.12:7380> cluster nodes
2cb8c6a7c86f512db2c6ca88eb67449197f5b885 10.95.200.12:7381@17381 slave cf6ca00cb36850762fdff1223684edf1fb9bd4ba 0 1562720579324 4 connected
cf6ca00cb36850762fdff1223684edf1fb9bd4ba 10.95.200.14:7380@17380 master - 0 1562720578323 3 connected 10923-16383
26c3a7cd48fe4bddf9cf0c60a3c30b6b1f273135 10.95.200.12:7380@17380 myself,slave 467d4c7508d1cb371ed52c4c6574506cba40c328 0 1562720576000 1 connected
467d4c7508d1cb371ed52c4c6574506cba40c328 10.95.200.13:7381@17381 master - 0 1562720577320 8 connected 0-5460
fbece4571b50904d93a45afbcce66941d53a45b5 10.95.200.14:7381@17381 slave - 0 1562720577000 6 connected
10.95.200.12:7380>
A node cannot forget itself:
10.95.200.12:7380> cluster forget 26c3a7cd48fe4bddf9cf0c60a3c30b6b1f273135
(error) ERR I tried hard but I can't forget myself...
10.95.200.12:7380>
Nor can it forget its own master:
10.95.200.12:7380> cluster forget 467d4c7508d1cb371ed52c4c6574506cba40c328
(error) ERR Can't forget my master!
10.95.200.12:7380>
The forget command above does not truly remove a node from the cluster; the node is merely forgotten for a while. Run cluster nodes again some time later and the full set of cluster nodes is visible again.
To actually delete a cluster node, use the following command:
redis-cli --cluster del-node <ip>:<port> <node_id>
下面看看個人操做,這裏是一個錯誤的操做,結果以下:
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster del-node 10.95.200.12:7380 ed309033dbefe2b0b64ad7fb643c4d2531e53b95
>>> Removing node ed309033dbefe2b0b64ad7fb643c4d2531e53b95 from cluster 10.95.200.12:7380
[ERR] Node 10.95.200.13:7380 is not empty! Reshard data away and try again.
1) Check whether Node 10.95.200.13:7380 holds any data:
10.95.200.13:7380> keys *
1) "taikang#session:sessions:expires:b6cd8269-dff0-463e-aefb-03167a167292"
2) Flush everything:
10.95.200.13:7380> flushdb
OK
At this point I re-ran redis-cli --cluster del-node 10.95.200.12:7380 ed309033dbefe2b0b64ad7fb643c4d2531e53b95, and it still failed for the same reason: the node is not empty. Why? Because I was not using del-node the way the Redis documentation prescribes.
So deleting a node the way I did above is clearly wrong: the error says the node is not empty and its data must be resharded to other nodes first. The message is a little opaque, but it follows from how a Redis cluster works, as a fully peer-to-peer cluster among the masters. Deleting a node takes three steps:
A. First delete the slave node: redis-cli --cluster del-node <ip>:<port> <node_id>
B. Reshard the slots of the deleted slave's master to other nodes: redis-cli --cluster reshard <that master's ip:port> --cluster-from <that master's node_id> --cluster-to <node_id of the master receiving the slots> --cluster-slots <number of slots to move, i.e. all slots of the node being deleted> --cluster-yes
C. Delete the now-empty master: redis-cli --cluster del-node <ip>:<port> <node_id>
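The three steps above can be collected into a dry-run script: the commands are echoed rather than executed so the node ids can be double-checked first. The addresses, ids, and slot count below are the ones from this walkthrough and are placeholders for your own cluster:

```shell
# Placeholders taken from this walkthrough; substitute your own values.
SLAVE_ADDR=10.95.200.14:7381; SLAVE_ID=fbece4571b50904d93a45afbcce66941d53a45b5
MASTER_ADDR=10.95.200.13:7381; MASTER_ID=467d4c7508d1cb371ed52c4c6574506cba40c328
TARGET_ID=ed309033dbefe2b0b64ad7fb643c4d2531e53b95   # master that receives the slots
SLOTS=5460                                           # all slots owned by MASTER_ID

# Step A: delete the slave; step B: reshard the master's slots; step C: delete the master.
echo "redis-cli --cluster del-node ${SLAVE_ADDR} ${SLAVE_ID}"
echo "redis-cli --cluster reshard ${MASTER_ADDR} --cluster-from ${MASTER_ID} --cluster-to ${TARGET_ID} --cluster-slots ${SLOTS} --cluster-yes"
echo "redis-cli --cluster del-node ${MASTER_ADDR} ${MASTER_ID}"
```

Drop the echoes (or pipe to sh) once the values are confirmed.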
下面演示基本操做redis cluster
操做前的節點信息:
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7380 cluster nodes
ed309033dbefe2b0b64ad7fb643c4d2531e53b95 10.95.200.13:7380@17380 master - 0 1563276949623 11 connected 0-5459 5461-16382
2cb8c6a7c86f512db2c6ca88eb67449197f5b885 10.95.200.12:7381@17381 slave cf6ca00cb36850762fdff1223684edf1fb9bd4ba 0 1563276951624 10 connected
cf6ca00cb36850762fdff1223684edf1fb9bd4ba 10.95.200.14:7380@17380 master - 0 1563276950000 10 connected 16383
26c3a7cd48fe4bddf9cf0c60a3c30b6b1f273135 10.95.200.12:7380@17380 myself,slave 467d4c7508d1cb371ed52c4c6574506cba40c328 0 1563276950000 1 connected
fbece4571b50904d93a45afbcce66941d53a45b5 10.95.200.14:7381@17381 slave ed309033dbefe2b0b64ad7fb643c4d2531e53b95 0 1563276950000 11 connected
467d4c7508d1cb371ed52c4c6574506cba40c328 10.95.200.13:7381@17381 master - 0 1563276950624 8 connected 5460
1. Deleting nodes:
1) Delete the slave node first:
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster del-node 10.95.200.14:7381 fbece4571b50904d93a45afbcce66941d53a45b5
>>> Removing node fbece4571b50904d93a45afbcce66941d53a45b5 from cluster 10.95.200.14:7381
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
Cluster layout after the slave is removed:
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7380 cluster nodes
ed309033dbefe2b0b64ad7fb643c4d2531e53b95 10.95.200.13:7380@17380 master - 0 1563277493737 11 connected 0-5459 5461-16382
2cb8c6a7c86f512db2c6ca88eb67449197f5b885 10.95.200.12:7381@17381 slave cf6ca00cb36850762fdff1223684edf1fb9bd4ba 0 1563277492000 10 connected
cf6ca00cb36850762fdff1223684edf1fb9bd4ba 10.95.200.14:7380@17380 master - 0 1563277493000 10 connected 16383
26c3a7cd48fe4bddf9cf0c60a3c30b6b1f273135 10.95.200.12:7380@17380 myself,slave 467d4c7508d1cb371ed52c4c6574506cba40c328 0 1563277493000 1 connected
467d4c7508d1cb371ed52c4c6574506cba40c328 10.95.200.13:7381@17381 master - 0 1563277494738 8 connected 5460
2) Then reshard the slots of that deleted slave's master:
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster reshard 10.95.200.13:7381 --cluster-from 467d4c7508d1cb371ed52c4c6574506cba40c328 --cluster-to ed309033dbefe2b0b64ad7fb643c4d2531e53b95 --cluster-slots 5460 --cluster-yes
>>> Performing Cluster Check (using node 10.95.200.13:7381)
M: 467d4c7508d1cb371ed52c4c6574506cba40c328 10.95.200.13:7381
   slots:[5460] (1 slots) master
   1 additional replica(s)
M: ed309033dbefe2b0b64ad7fb643c4d2531e53b95 10.95.200.13:7380
   slots:[0-5459],[5461-16382] (16382 slots) master
S: 26c3a7cd48fe4bddf9cf0c60a3c30b6b1f273135 10.95.200.12:7380
   slots: (0 slots) slave
   replicates 467d4c7508d1cb371ed52c4c6574506cba40c328
M: cf6ca00cb36850762fdff1223684edf1fb9bd4ba 10.95.200.14:7380
   slots:[16383] (1 slots) master
   1 additional replica(s)
S: 2cb8c6a7c86f512db2c6ca88eb67449197f5b885 10.95.200.12:7381
   slots: (0 slots) slave
   replicates cf6ca00cb36850762fdff1223684edf1fb9bd4ba
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Ready to move 5460 slots.
  Source nodes:
M: 467d4c7508d1cb371ed52c4c6574506cba40c328 10.95.200.13:7381
   slots:[5460] (1 slots) master
   1 additional replica(s)
  Destination node:
M: ed309033dbefe2b0b64ad7fb643c4d2531e53b95 10.95.200.13:7380
   slots:[0-5459],[5461-16382] (16382 slots) master
  Resharding plan:
    Moving slot 5460 from 467d4c7508d1cb371ed52c4c6574506cba40c328
Moving slot 5460 from 10.95.200.13:7381 to 10.95.200.13:7380:
A quick rundown of the parameters:
--cluster-from: the source node(s) of the reshard, i.e. the node(s) whose slots are handed out to other nodes; multiple node ids are comma-separated.
--cluster-to: the counterpart of --cluster-from, the node that receives the slots; only one node may be given here.
--cluster-slots: the number of slots taking part in the reshard.
--cluster-yes: accept the generated resharding plan without the interactive confirmation.
Cluster information after the operation:
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7380 cluster nodes
ed309033dbefe2b0b64ad7fb643c4d2531e53b95 10.95.200.13:7380@17380 master - 0 1563277825000 11 connected 0-16382
2cb8c6a7c86f512db2c6ca88eb67449197f5b885 10.95.200.12:7381@17381 slave cf6ca00cb36850762fdff1223684edf1fb9bd4ba 0 1563277825000 10 connected
cf6ca00cb36850762fdff1223684edf1fb9bd4ba 10.95.200.14:7380@17380 master - 0 1563277825386 10 connected 16383
26c3a7cd48fe4bddf9cf0c60a3c30b6b1f273135 10.95.200.12:7380@17380 myself,slave ed309033dbefe2b0b64ad7fb643c4d2531e53b95 0 1563277824000 1 connected
467d4c7508d1cb371ed52c4c6574506cba40c328 10.95.200.13:7381@17381 master - 0 1563277826390 8 connected
The node 467d4c7508d1... (highlighted in red in the original post) no longer shows any connected slot; before the reshard it held slot 5460.
3)將master進行下線操做:
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster del-node 10.95.200.13:7381 467d4c7508d1cb371ed52c4c6574506cba40c328
>>> Removing node 467d4c7508d1cb371ed52c4c6574506cba40c328 from cluster 10.95.200.13:7381
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
操做後的集羣信息(是否是從原來的6個節點變成了如今的4個了):
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7380 cluster nodes
ed309033dbefe2b0b64ad7fb643c4d2531e53b95 10.95.200.13:7380@17380 master - 0 1563278285304 11 connected 0-16382
2cb8c6a7c86f512db2c6ca88eb67449197f5b885 10.95.200.12:7381@17381 slave cf6ca00cb36850762fdff1223684edf1fb9bd4ba 0 1563278286304 10 connected
cf6ca00cb36850762fdff1223684edf1fb9bd4ba 10.95.200.14:7380@17380 master - 0 1563278284000 10 connected 16383
26c3a7cd48fe4bddf9cf0c60a3c30b6b1f273135 10.95.200.12:7380@17380 myself,slave ed309033dbefe2b0b64ad7fb643c4d2531e53b95 0 1563278284000 1 connected
依據上面的操做,所有刪除後,最後只剩下一個master了,就直接kill掉,停服務就行了。
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.13 -p 7380 cluster nodes
ed309033dbefe2b0b64ad7fb643c4d2531e53b95 10.95.200.13:7380@17380 myself,master - 0 1563321180000 11 connected 0-16383
Next, let me record an attempt to create a 4-node cluster, and then the process of adding one master and one slave.
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster create 10.95.200.12:7380 10.95.200.13:7380 10.95.200.12:7381 10.95.200.13:7381 --cluster-replicas 1
*** ERROR: Invalid configuration for cluster creation.
*** Redis Cluster requires at least 3 master nodes.
*** This is not possible with 4 nodes and 1 replicas per node.
*** At least 6 nodes are required.
That attempt failed, heh. Fine, 6 nodes it is; I will then delete two nodes and add them back.
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster create 10.95.200.12:7380 10.95.200.13:7380 10.95.200.14:7380 10.95.200.12:7381 10.95.200.13:7381 10.95.200.14:7381 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.95.200.13:7381 to 10.95.200.12:7380
Adding replica 10.95.200.12:7381 to 10.95.200.13:7380
Adding replica 10.95.200.14:7381 to 10.95.200.14:7380
>>> Trying to optimize slaves allocation for anti-affinity
[OK] Perfect anti-affinity obtained!
...
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
I then deleted two nodes, 10.95.200.13:7380 and 10.95.200.14:7381, without recording the process here (so the cluster is now down to 4 nodes):
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7381 cluster nodes
f855a20340af517e1502f610d82e8280cc6cd803 10.95.200.12:7381@17381 myself,slave 1591155d7df58218a26974b16996eeeba88c84f1 0 1563331430000 4 connected
1cf9c2ef22a7ce7831cfee357aca36bfb6f275b5 10.95.200.13:7381@17381 slave d46ae46cf66445ddeba923d6af84b78ca5f789cb 0 1563331431232 7 connected
d46ae46cf66445ddeba923d6af84b78ca5f789cb 10.95.200.12:7380@17380 master - 0 1563331430000 7 connected 0-10922
1591155d7df58218a26974b16996eeeba88c84f1 10.95.200.14:7380@17380 master - 0 1563331430229 3 connected 10923-16383
Remember that the deleted nodes have actually been shut down; bring them back up before running the add operations.
1) Add the new node 10.95.200.13:7380 to the cluster (any node already in the cluster will do as the second argument):
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster add-node 10.95.200.13:7380 10.95.200.14:7380
>>> Adding node 10.95.200.13:7380 to cluster 10.95.200.14:7380
>>> Performing Cluster Check (using node 10.95.200.14:7380)
M: 1591155d7df58218a26974b16996eeeba88c84f1 10.95.200.14:7380
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 1cf9c2ef22a7ce7831cfee357aca36bfb6f275b5 10.95.200.13:7381
   slots: (0 slots) slave
   replicates d46ae46cf66445ddeba923d6af84b78ca5f789cb
S: f855a20340af517e1502f610d82e8280cc6cd803 10.95.200.12:7381
   slots: (0 slots) slave
   replicates 1591155d7df58218a26974b16996eeeba88c84f1
M: d46ae46cf66445ddeba923d6af84b78ca5f789cb 10.95.200.12:7380
   slots:[0-10922] (10923 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 10.95.200.13:7380 to make it join the cluster.
[OK] New node added correctly.
After the add, look at the cluster:
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7381 cluster nodes
f855a20340af517e1502f610d82e8280cc6cd803 10.95.200.12:7381@17381 myself,slave 1591155d7df58218a26974b16996eeeba88c84f1 0 1563332042000 4 connected
1cf9c2ef22a7ce7831cfee357aca36bfb6f275b5 10.95.200.13:7381@17381 slave d46ae46cf66445ddeba923d6af84b78ca5f789cb 0 1563332044428 7 connected
4fd334f64bc5b121f5810da3b5800a29d4e8c3ee 10.95.200.13:7380@17380 master - 0 1563332044000 0 connected
d46ae46cf66445ddeba923d6af84b78ca5f789cb 10.95.200.12:7380@17380 master - 0 1563332043426 7 connected 0-10922
1591155d7df58218a26974b16996eeeba88c84f1 10.95.200.14:7380@17380 master - 0 1563332043000 3 connected 10923-16383
2) A newly added node becomes a master by default. Next, assign slots to this master, which is again a reshard:
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster reshard 10.95.200.13:7380 --cluster-from d46ae46cf66445ddeba923d6af84b78ca5f789cb,1591155d7df58218a26974b16996eeeba88c84f1 --cluster-to 4fd334f64bc5b121f5810da3b5800a29d4e8c3ee --cluster-slots 5642
After the reshard, look at the cluster:
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7381 cluster nodes
f855a20340af517e1502f610d82e8280cc6cd803 10.95.200.12:7381@17381 myself,slave 1591155d7df58218a26974b16996eeeba88c84f1 0 1563332267000 4 connected
1cf9c2ef22a7ce7831cfee357aca36bfb6f275b5 10.95.200.13:7381@17381 slave d46ae46cf66445ddeba923d6af84b78ca5f789cb 0 1563332267021 7 connected
4fd334f64bc5b121f5810da3b5800a29d4e8c3ee 10.95.200.13:7380@17380 master - 0 1563332268022 8 connected 0-3761 10923-12802
d46ae46cf66445ddeba923d6af84b78ca5f789cb 10.95.200.12:7380@17380 master - 0 1563332267521 7 connected 3762-10922
1591155d7df58218a26974b16996eeeba88c84f1 10.95.200.14:7380@17380 master - 0 1563332267000 3 connected 12803-16383
3) Add a slave node for the master we just added:
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster add-node 10.95.200.14:7381 10.95.200.13:7380 --cluster-slave --cluster-master-id 4fd334f64bc5b121f5810da3b5800a29d4e8c3ee
>>> Adding node 10.95.200.14:7381 to cluster 10.95.200.13:7380
>>> Performing Cluster Check (using node 10.95.200.13:7380)
M: 4fd334f64bc5b121f5810da3b5800a29d4e8c3ee 10.95.200.13:7380
   slots:[0-3761],[10923-12802] (5642 slots) master
   1 additional replica(s)
S: 1cf9c2ef22a7ce7831cfee357aca36bfb6f275b5 10.95.200.13:7381
   slots: (0 slots) slave
   replicates 4fd334f64bc5b121f5810da3b5800a29d4e8c3ee
M: d46ae46cf66445ddeba923d6af84b78ca5f789cb 10.95.200.12:7380
   slots:[3762-10922] (7161 slots) master
M: 1591155d7df58218a26974b16996eeeba88c84f1 10.95.200.14:7380
   slots:[12803-16383] (3581 slots) master
   1 additional replica(s)
S: f855a20340af517e1502f610d82e8280cc6cd803 10.95.200.12:7381
   slots: (0 slots) slave
   replicates 1591155d7df58218a26974b16996eeeba88c84f1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 10.95.200.14:7381 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 10.95.200.13:7380.
[OK] New node added correctly.
Finally, look at the cluster once more:
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7381 cluster nodes
f855a20340af517e1502f610d82e8280cc6cd803 10.95.200.12:7381@17381 myself,slave 1591155d7df58218a26974b16996eeeba88c84f1 0 1563332620000 4 connected
f6093d443470ae37b8330407d20291ae959fc22f :0@0 slave,fail,noaddr d46ae46cf66445ddeba923d6af84b78ca5f789cb 1563332505335 1563332504432 7 disconnected
1cf9c2ef22a7ce7831cfee357aca36bfb6f275b5 10.95.200.13:7381@17381 slave d46ae46cf66445ddeba923d6af84b78ca5f789cb 0 1563332621533 8 connected
4fd334f64bc5b121f5810da3b5800a29d4e8c3ee 10.95.200.13:7380@17380 master - 0 1563332621633 8 connected 0-3761 10923-12802
d46ae46cf66445ddeba923d6af84b78ca5f789cb 10.95.200.12:7380@17380 master - 0 1563332621533 7 connected 3762-10922
1591155d7df58218a26974b16996eeeba88c84f1 10.95.200.14:7380@17380 master - 0 1563332621000 3 connected 12803-16383
9f069b1f62cc93b95cb432f3726bc5bbfb0c8c76 10.95.200.14:7381@17381 slave 4fd334f64bc5b121f5810da3b5800a29d4e8c3ee 0 1563332620533 8 connected
To summarize the process of adding cluster nodes:
A) Add the node (its server must already be running); it becomes a master by default: redis-cli --cluster add-node <new_ip:new_port> <existing_ip:existing_port>
B) Assign slots to the newly added master: redis-cli --cluster reshard <master_ip:master_port> --cluster-from <nodeIds of existing masters in the cluster, comma-separated if several> --cluster-to <nodeId of the new master> --cluster-slots <number of slots to assign>
C) Add a slave for that master; no slots are needed since it is a slave: redis-cli --cluster add-node <slave_ip:slave_port> <master_ip:master_port> --cluster-slave --cluster-master-id <master node_id>
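Steps A-C can likewise be collected into a dry-run script (echoing rather than executing, so the node ids can be checked first); the values below are the ones used in this walkthrough and stand in for your own:

```shell
# Placeholders taken from this walkthrough; substitute your own values.
NEW_MASTER=10.95.200.13:7380; EXISTING=10.95.200.14:7380
NEW_MASTER_ID=4fd334f64bc5b121f5810da3b5800a29d4e8c3ee
FROM_IDS=d46ae46cf66445ddeba923d6af84b78ca5f789cb,1591155d7df58218a26974b16996eeeba88c84f1
NEW_SLAVE=10.95.200.14:7381
SLOTS=5642

# Step A: join as master; step B: give it slots; step C: attach its slave.
echo "redis-cli --cluster add-node ${NEW_MASTER} ${EXISTING}"
echo "redis-cli --cluster reshard ${NEW_MASTER} --cluster-from ${FROM_IDS} --cluster-to ${NEW_MASTER_ID} --cluster-slots ${SLOTS}"
echo "redis-cli --cluster add-node ${NEW_SLAVE} ${NEW_MASTER} --cluster-slave --cluster-master-id ${NEW_MASTER_ID}"
```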
Finally, there was one issue I had not fully figured out at the time: after removing a node from the cluster, re-adding it kept failing with the message that the node is not empty. The workaround I adopted was to delete all the data files of the node to be added, restart it, and run the add command again, which then succeeded.
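The likely cause is that del-node shuts the node down but leaves its cluster-config-file (nodes-*.conf) and data files on disk, so on restart the node still remembers the old cluster and is not "empty" from add-node's point of view. A minimal sketch of the clean-up, assuming the file names from the config above and that it runs in the instance's working directory:

```shell
# Remove the leftover cluster state and data of a stopped instance so that
# `redis-cli --cluster add-node` will accept it as a fresh, empty node.
# File names follow the config pattern above; the directory is an assumption.
port=7381
rm -f "nodes-${port}.conf" "dump-${port}.rdb" "appendonly-${port}.aof"
# Then restart the instance: ./bin/redis-server redis-${port}.conf
```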