Redis 3.0 and later support Cluster, which greatly strengthens Redis's ability to scale horizontally. Redis Cluster is the official Redis clustering solution. Third-party cluster solutions such as Twemproxy and Codis existed before it; the difference is that Redis Cluster does not use a proxy to reach the cluster nodes, but builds the cluster in a decentralized fashion, with no central node. Before Cluster appeared, only Sentinel provided Redis high availability.
Redis Cluster shares data across multiple nodes, and even when some nodes fail or become unreachable, the cluster can keep serving requests. If every master node is backed by a slave, then when a master goes offline or cannot communicate with the majority of the cluster, its slave is promoted to master and continues to serve, keeping the cluster running. Sharding in Redis Cluster is implemented with hash slots: every key belongs to one of the 16384 hash slots (0~16383), and each node is responsible for a portion of the slots.
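The key-to-slot mapping is slot = CRC16(key) mod 16384, using the XMODEM CRC16 variant; a key containing a non-empty hash tag ({...}) hashes only the tag, so related keys can be forced onto the same slot. A minimal Python sketch of the computation CLUSTER KEYSLOT performs:

```python
# Sketch of the mapping CLUSTER KEYSLOT computes: slot = CRC16(key) mod 16384,
# where CRC16 is the XMODEM variant (poly 0x1021, init 0x0000).
def crc16_xmodem(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Hash tags: if the key contains a non-empty {...} section, only that
    # section is hashed, forcing related keys onto the same slot.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384
```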
Ubuntu 14.04
Redis 3.2.8
Masters: 192.168.100.134/135/136:17021
Slaves: 192.168.100.134/135/136:17022
Master/slave pairing:
Master         Slave
134:17021  ->  135:17022
135:17021  ->  136:17022
136:17021  ->  134:17022
①: Installation
Install Redis as described in the earlier post on Redis Sentinel high-availability deployment; only the Cluster-related configuration parameters need to be changed.
After installation, start Redis; all instances run in cluster mode:
root@redis-cluster1:~# ps -ef | grep redis
redis     4292     1  0 00:33 ?        00:00:03 /usr/local/bin/redis-server 192.168.100.134:17021 [cluster]
redis     4327     1  0 01:58 ?        00:00:00 /usr/local/bin/redis-server 192.168.100.134:17022 [cluster]
②: Configure the master nodes
Add nodes: cluster meet ip port
Connect to any of the 17021 instances; cluster mode requires the -c flag:
~# redis-cli -h 192.168.100.134 -p 17021 -c
192.168.100.134:17021> cluster meet 192.168.100.135 17021
OK
192.168.100.134:17021> cluster meet 192.168.100.136 17021
OK
The nodes were added successfully.
Check the cluster state: cluster info
192.168.100.134:17021> cluster info
cluster_state:fail              # cluster state
cluster_slots_assigned:0        # number of slots assigned
cluster_slots_ok:0              # slots assigned correctly
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:3           # currently 3 nodes
cluster_size:0
cluster_current_epoch:2
cluster_my_epoch:1
cluster_stats_messages_sent:83
cluster_stats_messages_received:83
The cluster state above is fail because no slots have been assigned, and all 16384 slots must be assigned before the cluster becomes usable. Next, assign the slots by logging in to each node and assigning its range, e.g.:
node1: 0~5461
node2: 5462~10922
node3: 10923~16383
Assign slots with cluster addslots <slot>. A slot can be assigned to only one node; all 16384 slots must be assigned, and nodes must not conflict.
192.168.100.134:17021> cluster addslots 0
OK
192.168.100.135:17021> cluster addslots 0    # conflict
(error) ERR Slot 0 is already busy
There is no operation yet to add a range of slots at once, so assigning all 16384 slots requires a batch script (addslots.sh):
node1:
#!/bin/bash
n=0
for ((i=n;i<=5461;i++))
do
  /usr/local/bin/redis-cli -h 192.168.100.134 -p 17021 -a dxy CLUSTER ADDSLOTS $i
done
node2:
#!/bin/bash
n=5462
for ((i=n;i<=10922;i++))
do
  /usr/local/bin/redis-cli -h 192.168.100.135 -p 17021 -a dxy CLUSTER ADDSLOTS $i
done
node3:
#!/bin/bash
n=10923
for ((i=n;i<=16383;i++))
do
  /usr/local/bin/redis-cli -h 192.168.100.136 -p 17021 -a dxy CLUSTER ADDSLOTS $i
done
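Since CLUSTER ADDSLOTS accepts multiple slot arguments, each node's whole range can also be assigned in a single round trip instead of one call per slot. A sketch under that assumption; `client_for` is a hypothetical factory returning a connected client for a host (e.g. a redis-py instance), and any object with an execute_command(*args) method works:

```python
# Assign each master's slot range in one round trip; CLUSTER ADDSLOTS
# accepts multiple slot arguments. `client_for` is a hypothetical factory
# returning a connected client for the given host.
RANGES = {
    "192.168.100.134": (0, 5461),
    "192.168.100.135": (5462, 10922),
    "192.168.100.136": (10923, 16383),
}

def assign_slots(client_for, ranges=RANGES):
    for host, (low, high) in ranges.items():
        client_for(host).execute_command(
            "CLUSTER", "ADDSLOTS", *range(low, high + 1))
```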
Connect to each of the 3 nodes and run: bash addslots.sh. After all slots have been assigned, check the cluster state again:
192.168.100.134:17021> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:3
cluster_size:3
cluster_current_epoch:2
cluster_my_epoch:1
cluster_stats_messages_sent:4193
cluster_stats_messages_received:4193
The cluster is now up. Now remove one slot and see what happens to the cluster: cluster delslots <slot>
192.168.100.134:17021> cluster delslots 0
OK
192.168.100.134:17021> cluster info
cluster_state:fail
cluster_slots_assigned:16383
cluster_slots_ok:16383
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:3
cluster_size:3
cluster_current_epoch:2
cluster_my_epoch:1
cluster_stats_messages_sent:4482
cluster_stats_messages_received:4482
As shown, if the 16384 slots are not fully assigned, the cluster is not operational. At this point a basic Redis Cluster has been built, but every node is a single point: if one node becomes unavailable, the whole cluster becomes unavailable. To make each node highly available, give every master node a slave.
Adding slaves (cluster replication): replication works the same way as standalone Redis replication, with one difference: a slave in a cluster must also run in cluster mode, and must first be added to the cluster, then set up as a replica.
①: Add the slave nodes to the cluster
192.168.100.134:17021> cluster meet 192.168.100.134 17022
OK
192.168.100.134:17021> cluster meet 192.168.100.135 17022
OK
192.168.100.134:17021> cluster meet 192.168.100.136 17022
OK
192.168.100.134:17021> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6    # all nodes in the cluster, masters and slaves
cluster_size:3           # nodes with slots assigned, i.e. masters
cluster_current_epoch:5
cluster_my_epoch:1
cluster_stats_messages_sent:13438
cluster_stats_messages_received:13438
②: Create the replicas with cluster replicate <node_id>. Get the node_id from cluster nodes; the command must be executed on the Redis instance that is to become the slave (17022).
192.168.100.134:17022> cluster nodes    # view node information
7438368ca8f8a27fdf2da52940bb50098a78c6fc 192.168.100.136:17022 master - 0 1488255023528 5 connected
e1b78bb74970d0353832b2913e9b35eba74a2a1a 192.168.100.134:17022 myself,master - 0 0 0 connected
05e72d06edec6a920dd91b050c7a315937fddb66 192.168.100.136:17021 master - 0 1488255022526 2 connected 10923-16383
b461a30fde28409c38ee6c32db1cd267a6cfd125 192.168.100.135:17021 master - 0 1488255026533 3 connected 5462-10922
11f9169577352c33d85ad0d1ca5f5bf0deba3209 192.168.100.134:17021 master - 0 1488255025531 1 connected 0-5461
2b8b518324de0990ca587b47f6316e5f07b1df59 192.168.100.135:17022 master - 0 1488255024530 4 connected
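When scripting step ②, the node ids can be pulled out of the CLUSTER NODES output instead of copied by hand. A small parser sketch, assuming the field layout visible in the output above (id, address, flags, master id, ping/pong timestamps, epoch, link state, then slot ranges):

```python
def parse_cluster_nodes(text):
    """Map 'ip:port' -> dict with the node's id, flags, master id and slots."""
    nodes = {}
    for line in text.strip().splitlines():
        fields = line.split()
        nodes[fields[1]] = {
            "id": fields[0],
            "flags": fields[2].split(","),   # e.g. ['myself', 'master']
            "master_id": fields[3],          # '-' for masters
            "slots": fields[8:],             # slot ranges, masters only
        }
    return nodes
```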
# become a slave of 135:17021
192.168.100.134:17022> cluster replicate b461a30fde28409c38ee6c32db1cd267a6cfd125
OK
Do the same for the other 2 nodes:
# become a slave of 136:17021
192.168.100.135:17022> cluster replicate 05e72d06edec6a920dd91b050c7a315937fddb66
OK
# become a slave of 134:17021
192.168.100.136:17022> cluster replicate 11f9169577352c33d85ad0d1ca5f5bf0deba3209
OK
Check node status: cluster nodes
2b8b518324de0990ca587b47f6316e5f07b1df59 192.168.100.135:17022 slave 05e72d06edec6a920dd91b050c7a315937fddb66 0 1488255859347 4 connected
11f9169577352c33d85ad0d1ca5f5bf0deba3209 192.168.100.134:17021 myself,master - 0 0 1 connected 0-5461
05e72d06edec6a920dd91b050c7a315937fddb66 192.168.100.136:17021 master - 0 1488255860348 2 connected 10923-16383
e1b78bb74970d0353832b2913e9b35eba74a2a1a 192.168.100.134:17022 slave b461a30fde28409c38ee6c32db1cd267a6cfd125 0 1488255858344 3 connected
7438368ca8f8a27fdf2da52940bb50098a78c6fc 192.168.100.136:17022 slave 11f9169577352c33d85ad0d1ca5f5bf0deba3209 0 1488255856341 5 connected
b461a30fde28409c38ee6c32db1cd267a6cfd125 192.168.100.135:17021 master - 0 1488255857343 3 connected 5462-10922
A slave's master can be identified from the node_id it replicates in the cluster nodes output. If problems occur during these steps, check the logs under /var/log/redis/. This completes the sharded, highly available Redis Cluster deployment; the next section covers the cluster management commands.
Some of the Cluster commands have already been introduced above; here is an explanation of all of them.
CLUSTER info: print cluster information.
CLUSTER nodes: list all nodes currently known to the cluster and their details.
CLUSTER meet <ip> <port>: add the node at ip:port to the cluster.
CLUSTER addslots <slot> [slot ...]: assign one or more slots to the current node.
CLUSTER delslots <slot> [slot ...]: remove the assignment of one or more slots from the current node.
CLUSTER slots: list slot ranges and node information.
CLUSTER slaves <node_id>: list the slaves of the given node.
CLUSTER replicate <node_id>: make the current node a slave of the given node.
CLUSTER saveconfig: save the cluster configuration file manually; by default the cluster saves it automatically whenever the configuration changes.
CLUSTER keyslot <key>: show which slot a key is placed in.
CLUSTER flushslots: remove all slots assigned to the current node, turning it into a node with no slot assignments.
CLUSTER countkeysinslot <slot>: return the number of key-value pairs currently in the slot.
CLUSTER getkeysinslot <slot> <count>: return up to count keys from the slot.
CLUSTER setslot <slot> node <node_id>: assign the slot to the given node; if the slot is already assigned to another node, that node must delete the slot first.
CLUSTER setslot <slot> migrating <node_id>: mark a slot on this node as migrating to the given node.
CLUSTER setslot <slot> importing <node_id>: mark slot <slot> as being imported into this node from the node given by node_id.
CLUSTER setslot <slot> stable: cancel the import or migration of the slot.
CLUSTER failover: trigger a manual failover.
CLUSTER forget <node_id>: remove the given node from the cluster, preventing the handshake; this expires after 60s, after which the two nodes will complete the handshake again.
CLUSTER reset [HARD|SOFT]: reset cluster information; soft clears information about other nodes but keeps this node's id, hard also changes this node's id; soft is used when no argument is given.
CLUSTER count-failure-reports <node_id>: return the length of the failure report list for the given node.
CLUSTER SET-CONFIG-EPOCH: set the node's epoch; only possible before the node joins a cluster.
To better demonstrate the commands above, first insert some data into the new cluster via a script:
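The script itself is not reproduced here; a hypothetical stand-in that writes n keys in the style visible in the session output (large integers near 2^63) could look like this, with `client` being any connected cluster client exposing a set method (e.g. redis-py against this cluster):

```python
import random

def gen_keys(n, seed=0):
    # Illustrative only: random large integers just below 2**63, matching
    # the key style seen in the cluster getkeysinslot output.
    rnd = random.Random(seed)
    return [str(2**63 - rnd.randrange(1, 10**12)) for _ in range(n)]

def load_test_data(client, n=10000, seed=0):
    # Keys spread across slots automatically via CRC16(key) mod 16384.
    for key in gen_keys(n, seed):
        client.set(key, key)
```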
The management commands not introduced earlier are explained below:
①: cluster slots: list slot ranges and the corresponding node information
192.168.100.134:17021> cluster slots
1) 1) (integer) 0
   2) (integer) 5461
   3) 1) "192.168.100.134"
      2) (integer) 17021
      3) "11f9169577352c33d85ad0d1ca5f5bf0deba3209"
   4) 1) "192.168.100.136"
      2) (integer) 17022
      3) "7438368ca8f8a27fdf2da52940bb50098a78c6fc"
2) 1) (integer) 10923
   2) (integer) 16383
   3) 1) "192.168.100.136"
      2) (integer) 17021
      3) "05e72d06edec6a920dd91b050c7a315937fddb66"
   4) 1) "192.168.100.135"
      2) (integer) 17022
      3) "2b8b518324de0990ca587b47f6316e5f07b1df59"
3) 1) (integer) 5462
   2) (integer) 10922
   3) 1) "192.168.100.135"
      2) (integer) 17021
      3) "b461a30fde28409c38ee6c32db1cd267a6cfd125"
   4) 1) "192.168.100.134"
      2) (integer) 17022
      3) "e1b78bb74970d0353832b2913e9b35eba74a2a1a"
②: cluster slaves: list the slaves of the given node
192.168.100.134:17021> cluster slaves 11f9169577352c33d85ad0d1ca5f5bf0deba3209
1) "7438368ca8f8a27fdf2da52940bb50098a78c6fc 192.168.100.136:17022 slave 11f9169577352c33d85ad0d1ca5f5bf0deba3209 0 1488274385311 5 connected"
③: cluster keyslot: show which slot a key is placed in
192.168.100.134:17021> cluster keyslot 9223372036854742675
(integer) 10310
④: cluster countkeysinslot: show the number of keys in the given slot
192.168.100.134:17021> cluster countkeysinslot 1
(integer) 19
⑤: cluster getkeysinslot: list the given number of keys from the given slot
192.168.100.134:17021> cluster getkeysinslot 1 3
1) "9223372036854493093"
2) "9223372036854511387"
3) "9223372036854522344"
⑥: cluster setslot ...: manually migrate slot 0 of 192.168.100.134:17021 to 192.168.100.135:17021
1: First check each node's slots
192.168.100.134:17021> cluster nodes
2b8b518324de0990ca587b47f6316e5f07b1df59 192.168.100.135:17022 slave 05e72d06edec6a920dd91b050c7a315937fddb66 0 1488295105089 4 connected
11f9169577352c33d85ad0d1ca5f5bf0deba3209 192.168.100.134:17021 myself,master - 0 0 7 connected 0-5461
05e72d06edec6a920dd91b050c7a315937fddb66 192.168.100.136:17021 master - 0 1488295107092 2 connected 10923-16383
e1b78bb74970d0353832b2913e9b35eba74a2a1a 192.168.100.134:17022 slave b461a30fde28409c38ee6c32db1cd267a6cfd125 0 1488295106090 6 connected
7438368ca8f8a27fdf2da52940bb50098a78c6fc 192.168.100.136:17022 slave 11f9169577352c33d85ad0d1ca5f5bf0deba3209 0 1488295104086 7 connected
b461a30fde28409c38ee6c32db1cd267a6cfd125 192.168.100.135:17021 master - 0 1488295094073 6 connected 5462-10922
2: Check the keys in the slot to be migrated
192.168.100.134:17021> cluster getkeysinslot 0 100
1) "9223372012094975807"
2) "9223372031034975807"
3: On the target node, start the import
192.168.100.135:17021> cluster setslot 0 importing 11f9169577352c33d85ad0d1ca5f5bf0deba3209
OK
192.168.100.135:17021> cluster nodes
...
b461a30fde28409c38ee6c32db1cd267a6cfd125 192.168.100.135:17021 myself,master - 0 0 6 connected 5462-10922 [0-<-11f9169577352c33d85ad0d1ca5f5bf0deba3209]
...
4: On the source node, start the migration
192.168.100.134:17021> cluster setslot 0 migrating b461a30fde28409c38ee6c32db1cd267a6cfd125
OK
192.168.100.134:17021> cluster nodes
...
11f9169577352c33d85ad0d1ca5f5bf0deba3209 192.168.100.134:17021 myself,master - 0 0 7 connected 0-5461 [0->-b461a30fde28409c38ee6c32db1cd267a6cfd125]
...
5: On the source node, migrate the slot's keys to the target node: MIGRATE host port key destination-db timeout [COPY] [REPLACE]
192.168.100.134:17021> migrate 192.168.100.135 17021 9223372031034975807 0 5000 replace
OK
192.168.100.134:17021> migrate 192.168.100.135 17021 9223372012094975807 0 5000 replace
OK
192.168.100.134:17021> cluster getkeysinslot 0 100    # keys must be fully migrated before the next step
(empty list or set)
6: Finally, assign the slot to the target node; the command is broadcast to the other cluster nodes, recording that the slot has moved
192.168.100.135:17021> cluster setslot 0 node b461a30fde28409c38ee6c32db1cd267a6cfd125
OK
192.168.100.134:17021> cluster setslot 0 node b461a30fde28409c38ee6c32db1cd267a6cfd125
OK
7: Verify the migration succeeded:
192.168.100.134:17021> cluster nodes
...
11f9169577352c33d85ad0d1ca5f5bf0deba3209 192.168.100.134:17021 myself,master - 0 0 9 connected 1-5461    # changed
...
b461a30fde28409c38ee6c32db1cd267a6cfd125 192.168.100.135:17021 master - 0 1488300965322 10 connected 0 5462-10922
Check the slot layout:
192.168.100.134:17021> cluster slots
1) 1) (integer) 10923
   2) (integer) 16383
   3) 1) "192.168.100.136"
      2) (integer) 17021
      3) "05e72d06edec6a920dd91b050c7a315937fddb66"
2) 1) (integer) 1
   2) (integer) 5461
   3) 1) "192.168.100.134"
      2) (integer) 17021
      3) "11f9169577352c33d85ad0d1ca5f5bf0deba3209"
3) 1) (integer) 0
   2) (integer) 0
   3) 1) "192.168.100.135"
      2) (integer) 17021
      3) "b461a30fde28409c38ee6c32db1cd267a6cfd125"
4) 1) (integer) 5462
   2) (integer) 10922
   3) 1) "192.168.100.135"
      2) (integer) 17021
      3) "b461a30fde28409c38ee6c32db1cd267a6cfd125"
Check that the data moved:
192.168.100.134:17021> cluster getkeysinslot 0 100
(empty list or set)
192.168.100.135:17021> cluster getkeysinslot 0 100
1) "9223372012094975807"
2) "9223372031034975807"
When many slots need migrating and the slots hold many keys, you can script the steps above, or use the approach introduced later in the script-based deployment section.
The general slot-migration steps are:
1. On the target node, declare that the slot will be imported from the source node: CLUSTER SETSLOT <slot> IMPORTING <source_node_id>
2. On the source node, declare that the slot will be migrated to the target node: CLUSTER SETSLOT <slot> MIGRATING <target_node_id>
3. Fetch keys from the source node in batches: CLUSTER GETKEYSINSLOT <slot> <count>
4. Migrate the fetched keys to the target node: MIGRATE <target_ip> <target_port> <key_name> 0 <timeout>
Repeat steps 3 and 4 until all data has been migrated; MIGRATE hands each specified key to the target via RESTORE key ttl serialized-value REPLACE.
5. Send CLUSTER SETSLOT <slot> NODE <target_node_id> to both nodes; the command is broadcast to the other cluster nodes and cancels the importing and migrating states.
6. Wait for the cluster state to become OK (cluster_state = ok in CLUSTER INFO).
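The migration steps above can be sketched as one function. `src` and `dst` stand for connected clients for the source and target masters (anything with a redis-py style execute_command method), and the ids and address are the values CLUSTER NODES reports; a sketch, not a production implementation:

```python
def migrate_slot(src, dst, slot, src_id, dst_id, dst_host, dst_port,
                 batch=10, timeout_ms=5000):
    # Steps 1 and 2: mark the slot importing on the target, migrating on the source.
    dst.execute_command("CLUSTER", "SETSLOT", slot, "IMPORTING", src_id)
    src.execute_command("CLUSTER", "SETSLOT", slot, "MIGRATING", dst_id)
    # Steps 3 and 4: move the keys in batches until the slot is empty.
    while True:
        keys = src.execute_command("CLUSTER", "GETKEYSINSLOT", slot, batch)
        if not keys:
            break
        for key in keys:
            src.execute_command("MIGRATE", dst_host, dst_port, key, 0,
                                timeout_ms, "REPLACE")
    # Step 5: broadcast the new owner; run on both sides to clear the
    # importing/migrating states.
    for node in (src, dst):
        node.execute_command("CLUSTER", "SETSLOT", slot, "NODE", dst_id)
```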
Note: when running migrate, if the nodes have authentication enabled, you will get:
(error) ERR Target instance replied with error: NOAUTH Authentication required.
If you do need to run the migration, comment out masterauth and requirepass on all nodes first (as was done for this article), then re-enable authentication once finished.
⑦: cluster forget: remove the given node from the cluster, preventing the handshake; this expires after 60s, after which the two nodes will complete the handshake again.
192.168.100.134:17021> cluster nodes
05e72d06edec6a920dd91b050c7a315937fddb66 192.168.100.136:17021 master - 0 1488302330582 2 connected 10923-16383
11f9169577352c33d85ad0d1ca5f5bf0deba3209 192.168.100.134:17021 myself,master - 0 0 9 connected 1-5461
b461a30fde28409c38ee6c32db1cd267a6cfd125 192.168.100.135:17021 master - 0 1488302328576 10 connected 0 5462-10922
...
192.168.100.134:17021> cluster forget 05e72d06edec6a920dd91b050c7a315937fddb66
OK
192.168.100.134:17021> cluster nodes
11f9169577352c33d85ad0d1ca5f5bf0deba3209 192.168.100.134:17021 myself,master - 0 0 9 connected 1-5461
b461a30fde28409c38ee6c32db1cd267a6cfd125 192.168.100.135:17021 master - 0 1488302376718 10 connected 0 5462-10922
...
One minute later:
192.168.100.134:17021> cluster nodes
05e72d06edec6a920dd91b050c7a315937fddb66 192.168.100.136:17021 master - 0 1488302490107 2 connected 10923-16383
11f9169577352c33d85ad0d1ca5f5bf0deba3209 192.168.100.134:17021 myself,master - 0 0 9 connected 1-5461
b461a30fde28409c38ee6c32db1cd267a6cfd125 192.168.100.135:17021 master - 0 1488302492115 10 connected 0 5462-10922
⑧: cluster failover: trigger a manual failover, covered in detail in the next section. Note that it must be executed on the slave of the node to be failed over; running it anywhere else returns an error:
(error) ERR You should send CLUSTER FAILOVER to a slave
⑨: cluster flushslots: must be executed on a node holding no keys; removes all slots assigned to the current node, leaving it with no slot assignments, and all of that node's data is lost.
192.168.100.136:17022> cluster nodes
05e72d06edec6a920dd91b050c7a315937fddb66 192.168.100.136:17021 master - 0 1488255398859 2 connected 10923-16383
...
192.168.100.136:17021> cluster flushslots
OK
192.168.100.136:17021> cluster nodes
05e72d06edec6a920dd91b050c7a315937fddb66 192.168.100.136:17021 myself,master - 0 0 2 connected
...
⑩: cluster reset: must be executed on a node holding no keys; resets the cluster information.
192.168.100.134:17021> cluster reset
OK
192.168.100.134:17021> cluster nodes
11f9169577352c33d85ad0d1ca5f5bf0deba3209 192.168.100.134:17021 myself,master - 0 0 9 connected
Redis Cluster ships with a set of management scripts for tasks such as creating clusters, migrating nodes, and adding or removing slots. They live in the source package and are written in Ruby. Now let's use the script to deploy a cluster.
①: Create the Redis instances as required: 6 instances (3 masters, 3 slaves).
②: Installation requires the Ruby redis module:
apt-get install ruby
gem install redis
③: The script redis-trib.rb (/usr/local/src/redis-3.2.8/src)
./redis-trib.rb help
Usage: redis-trib <command> <options> <arguments ...>

# create a cluster
create host1:port1 ... hostN:portN
  --replicas <arg>      # whether masters have slaves; arg is the number of slaves per master
# check a cluster
check host:port
# show cluster info
info host:port
# fix a cluster
fix host:port
  --timeout <arg>
# migrate slots online
reshard host:port       # required; used to fetch the whole cluster's information from one node, i.e. the entry point
  --from <arg>          # source node(s) to migrate slots from; multiple node ids separated by commas, or --from all to use every node as a source; prompted for interactively if omitted
  --to <arg>            # node id of the destination node (only one may be given); prompted for interactively if omitted
  --slots <arg>         # number of slots to migrate; prompted for interactively if omitted
  --yes                 # print the reshard plan and execute it after the user confirms with yes
  --timeout <arg>       # timeout for the migrate command
  --pipeline <arg>      # number of keys fetched per cluster getkeysinslot call; defaults to 10
# balance slot counts across cluster nodes
rebalance host:port
  --weight <arg>
  --auto-weights
  --use-empty-masters
  --timeout <arg>
  --simulate
  --pipeline <arg>
  --threshold <arg>
# add a new node to the cluster
add-node new_host:new_port existing_host:existing_port
  --slave
  --master-id <arg>
# remove a node from the cluster
del-node host:port node_id
# set the timeout for heartbeat connections between cluster nodes
set-timeout host:port milliseconds
# run a command on all cluster nodes
call host:port command arg arg .. arg
# import data from an external redis into the cluster
import host:port
  --from <arg>
  --copy
  --replace
# help
help (show this help)

For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.
1) Create a cluster (create): 6 nodes, one slave per master. One limitation is that you cannot choose which slave belongs to which master; alternatively, add the 3 masters first and then attach slaves to specific masters with add-node.
./redis-trib.rb create --replicas 1 192.168.100.134:17021 192.168.100.135:17021 192.168.100.136:17021 192.168.100.134:17022 192.168.100.135:17022 192.168.100.136:17022
2) Check the cluster (check ip:port): verify whether all slots have been assigned
./redis-trib.rb check 192.168.100.134:17021
3) Show cluster info (info ip:port): includes the distribution of slots, slaves, and keys
./redis-trib.rb info 192.168.100.134:17021
4) Balance the nodes' slot counts (rebalance ip:port): even out the number of slots per node
./redis-trib.rb rebalance 192.168.100.134:17021
5) Delete a cluster node (del-node ip:port <node_id>): only nodes with no assigned slots can be deleted; after removal from the cluster, the instance is shut down
./redis-trib.rb del-node 192.168.100.135:17022 77d02fef656265c9c421fef425527c510e4cfcb8
6) Add a cluster node (add-node): the new node joins the cluster either as a master or as a slave of some master.
Add a master node: join 134:17022 to the cluster of 134:17021
./redis-trib.rb add-node 192.168.100.134:17022 192.168.100.134:17021
Add a slave node: join 135:17022 to the cluster of 134:17021 as a slave of the specified <node_id>
./redis-trib.rb add-node --slave --master-id 7fa64d250b595d8ac21a42477af5ac8c07c35d83 192.168.100.135:17022 192.168.100.134:17021
The final cluster layout:
192.168.100.134:17021> cluster nodes
77d02fef656265c9c421fef425527c510e4cfcb8 192.168.100.135:17022 slave 7fa64d250b595d8ac21a42477af5ac8c07c35d83 0 1488346523944 5 connected
5476787f31fa375fda6bb32676a969c8b8adfbc2 192.168.100.134:17022 master - 0 1488346525949 4 connected
7fa64d250b595d8ac21a42477af5ac8c07c35d83 192.168.100.134:17021 myself,master - 0 0 1 connected 0-5460
51bf103f7cf6b5ede6e009ce489fdeec14961be8 192.168.100.135:17021 master - 0 1488346522942 2 connected 5461-10922
0191a8b52646fb5c45323ab0c1a1a79dc8f3aea2 192.168.100.136:17021 master - 0 1488346524948 3 connected 10923-16383
7) Migrate slots online (reshard): move slots from their current nodes to new nodes while the cluster stays online, enabling online horizontal scale-out and scale-in.
Interactive run: reshard the 134:17021 cluster
./redis-trib.rb reshard 192.168.100.134:17021
>>> Performing Cluster Check (using node 192.168.100.134:17021)
M: 7fa64d250b595d8ac21a42477af5ac8c07c35d83 192.168.100.134:17021
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 77d02fef656265c9c421fef425527c510e4cfcb8 192.168.100.135:17022
   slots: (0 slots) slave
   replicates 7fa64d250b595d8ac21a42477af5ac8c07c35d83
M: 5476787f31fa375fda6bb32676a969c8b8adfbc2 192.168.100.134:17022
   slots: (0 slots) master
   0 additional replica(s)
M: 51bf103f7cf6b5ede6e009ce489fdeec14961be8 192.168.100.135:17021
   slots:5461-10922 (5462 slots) master
   0 additional replica(s)
M: 0191a8b52646fb5c45323ab0c1a1a79dc8f3aea2 192.168.100.136:17021
   slots:10923-16383 (5461 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
# how many slots to migrate?
How many slots do you want to move (from 1 to 16384)? 1
# migrate to which node_id?
What is the receiving node ID? 5476787f31fa375fda6bb32676a969c8b8adfbc2
# migrate from which node ids?
Please enter all the source node IDs.
  # enter all to use every node in the cluster as a source
  Type 'all' to use all the nodes as source nodes for the hash slots.
  # enter the source nodes, then done to start the migration
  Type 'done' once you entered all the source nodes IDs.
Source node #1:7fa64d250b595d8ac21a42477af5ac8c07c35d83
Source node #2:done
Ready to move 1 slots.
  Source nodes:
    M: 7fa64d250b595d8ac21a42477af5ac8c07c35d83 192.168.100.134:17021
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
  Destination node:
    M: 5476787f31fa375fda6bb32676a969c8b8adfbc2 192.168.100.134:17022
   slots: (0 slots) master
   0 additional replica(s)
  Resharding plan:
    Moving slot 0 from 7fa64d250b595d8ac21a42477af5ac8c07c35d83
# proceed with the proposed reshard plan?
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot 0 from 192.168.100.134:17021 to 192.168.100.134:17022: ..........
Parameterized run: migrate 10 slots from the node given by --from to the node given by --to
./redis-trib.rb reshard --from 7fa64d250b595d8ac21a42477af5ac8c07c35d83 --to 5476787f31fa375fda6bb32676a969c8b8adfbc2 --slots 10 192.168.100.134:17021
>>> Performing Cluster Check (using node 192.168.100.134:17021)
M: 7fa64d250b595d8ac21a42477af5ac8c07c35d83 192.168.100.134:17021
   slots:2-5460 (5459 slots) master
   1 additional replica(s)
S: 77d02fef656265c9c421fef425527c510e4cfcb8 192.168.100.135:17022
   slots: (0 slots) slave
   replicates 7fa64d250b595d8ac21a42477af5ac8c07c35d83
M: 5476787f31fa375fda6bb32676a969c8b8adfbc2 192.168.100.134:17022
   slots:0-1 (2 slots) master
   0 additional replica(s)
M: 51bf103f7cf6b5ede6e009ce489fdeec14961be8 192.168.100.135:17021
   slots:5461-10922 (5462 slots) master
   0 additional replica(s)
M: 0191a8b52646fb5c45323ab0c1a1a79dc8f3aea2 192.168.100.136:17021
   slots:10923-16383 (5461 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Ready to move 10 slots.
  Source nodes:
    M: 7fa64d250b595d8ac21a42477af5ac8c07c35d83 192.168.100.134:17021
   slots:2-5460 (5459 slots) master
   1 additional replica(s)
  Destination node:
    M: 5476787f31fa375fda6bb32676a969c8b8adfbc2 192.168.100.134:17022
   slots:0-1 (2 slots) master
   0 additional replica(s)
  Resharding plan:
    Moving slot 2 from 7fa64d250b595d8ac21a42477af5ac8c07c35d83
    Moving slot 3 from 7fa64d250b595d8ac21a42477af5ac8c07c35d83
    Moving slot 4 from 7fa64d250b595d8ac21a42477af5ac8c07c35d83
    Moving slot 5 from 7fa64d250b595d8ac21a42477af5ac8c07c35d83
    Moving slot 6 from 7fa64d250b595d8ac21a42477af5ac8c07c35d83
    Moving slot 7 from 7fa64d250b595d8ac21a42477af5ac8c07c35d83
    Moving slot 8 from 7fa64d250b595d8ac21a42477af5ac8c07c35d83
    Moving slot 9 from 7fa64d250b595d8ac21a42477af5ac8c07c35d83
    Moving slot 10 from 7fa64d250b595d8ac21a42477af5ac8c07c35d83
    Moving slot 11 from 7fa64d250b595d8ac21a42477af5ac8c07c35d83
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot 2 from 192.168.100.134:17021 to 192.168.100.134:17022: ....................
Moving slot 3 from 192.168.100.134:17021 to 192.168.100.134:17022: ..........
Moving slot 4 from 192.168.100.134:17021 to 192.168.100.134:17022: ..................
Moving slot 5 from 192.168.100.134:17021 to 192.168.100.134:17022: ..
Moving slot 6 from 192.168.100.134:17021 to 192.168.100.134:17022: ..
Moving slot 7 from 192.168.100.134:17021 to 192.168.100.134:17022: ...............................
Moving slot 8 from 192.168.100.134:17021 to 192.168.100.134:17022: ..........
Moving slot 9 from 192.168.100.134:17021 to 192.168.100.134:17022: ..........................
Moving slot 10 from 192.168.100.134:17021 to 192.168.100.134:17022: ........................................
Moving slot 11 from 192.168.100.134:17021 to 192.168.100.134:17022: ..........
Slot distribution after the migration:
192.168.100.135:17021> cluster nodes
5476787f31fa375fda6bb32676a969c8b8adfbc2 192.168.100.134:17022 master - 0 1488349695628 7 connected 0-11
7fa64d250b595d8ac21a42477af5ac8c07c35d83 192.168.100.134:17021 master - 0 1488349698634 1 connected 12-5460
51bf103f7cf6b5ede6e009ce489fdeec14961be8 192.168.100.135:17021 myself,master - 0 0 2 connected 5461-10922
77d02fef656265c9c421fef425527c510e4cfcb8 192.168.100.135:17022 slave 7fa64d250b595d8ac21a42477af5ac8c07c35d83 0 1488349697631 1 connected
0191a8b52646fb5c45323ab0c1a1a79dc8f3aea2 192.168.100.136:17021 master - 0 1488349696631 3 connected 10923-16383
The new node's slots are unevenly distributed; use the rebalance command described above to even them out.
Note: if the Redis servers are configured with authentication and require a password, this script cannot run; the servers it touches must all be password-free. If you do need to proceed, temporarily switch off authentication:
192.168.100.134:17022> config set masterauth ""
OK
192.168.100.134:17022> config set requirepass ""
OK
# normally there is no permission to persist this
#192.168.100.134:17022> config rewrite
Once everything is finished, set the passwords back. That concludes script-based deployment. Both manual and script-based deployment show that servers cannot have passwords set during data migration, or authentication fails; keep this in mind when operating on servers with authentication enabled.
The failover command was introduced in the management section above; now use it to simulate failure detection and failover (you could also simulate this by stopping the Redis Server). Failover must be initiated on a slave node. First check the cluster's nodes and slaves:
192.168.100.134:17021> cluster nodes
93a030d6f1d1248c1182114c7044b204aa0ee022 192.168.100.136:17021 master - 0 1488378411940 4 connected 10923-16383
b836dc49206ac8895be7a0c4b8ba571dffa1e1c4 192.168.100.135:17022 slave 23c2bb6fc906b55fb59a051d1f9528f5b4bc40d4 0 1488378410938 1 connected
5980546e3b19ff5210057612656681b505723da4 192.168.100.134:17022 slave 93a030d6f1d1248c1182114c7044b204aa0ee022 0 1488378408935 4 connected
23c2bb6fc906b55fb59a051d1f9528f5b4bc40d4 192.168.100.134:17021 myself,master - 0 0 1 connected 0-5461
526d99b679229c8003b0504e27ae7aee4e9c9c3a 192.168.100.135:17021 master - 0 1488378412941 2 connected 5462-10922
39bf42b321a588dcd93efc4b4cc9cb3b496cacb6 192.168.100.136:17022 slave 526d99b679229c8003b0504e27ae7aee4e9c9c3a 0 1488378413942 5 connected
192.168.100.134:17021> cluster slaves 23c2bb6fc906b55fb59a051d1f9528f5b4bc40d4
1) "b836dc49206ac8895be7a0c4b8ba571dffa1e1c4 192.168.100.135:17022 slave 23c2bb6fc906b55fb59a051d1f9528f5b4bc40d4 0 1488378414945 1 connected"
Simulate a failure on 134:17021 by running failover on its slave, 135:17022, and watch the logs to see how the failover proceeds:
192.168.100.135:17022> cluster failover
OK
192.168.100.135:17022> cluster nodes
39bf42b321a588dcd93efc4b4cc9cb3b496cacb6 192.168.100.136:17022 slave 526d99b679229c8003b0504e27ae7aee4e9c9c3a 0 1488378807681 5 connected
23c2bb6fc906b55fb59a051d1f9528f5b4bc40d4 192.168.100.134:17021 slave b836dc49206ac8895be7a0c4b8ba571dffa1e1c4 0 1488378804675 6 connected
526d99b679229c8003b0504e27ae7aee4e9c9c3a 192.168.100.135:17021 master - 0 1488378806679 2 connected 5462-10922
5980546e3b19ff5210057612656681b505723da4 192.168.100.134:17022 slave 93a030d6f1d1248c1182114c7044b204aa0ee022 0 1488378808682 4 connected
b836dc49206ac8895be7a0c4b8ba571dffa1e1c4 192.168.100.135:17022 myself,master - 0 0 6 connected 0-5461
93a030d6f1d1248c1182114c7044b204aa0ee022 192.168.100.136:17021 master - 0 1488378809684 4 connected 10923-16383
The output shows that the slave has been promoted to master, and when the old master came back up it became a slave. The logs also show the two nodes resynchronizing. If you are interested, you can likewise simulate the failure by stopping the process.
This completes the deployment, management, and testing of the whole cluster. Below are a few test scripts for generating data:
①:操做集羣(cluster_write_test.py)
②:pipeline操做集羣(cluster_write_pipe_test.py)
③:操做單例(single_write_test.py)
④:pipeline操做單例(single_write_pipe_test.py)
Redis Cluster is implemented without a central node and needs no proxy: clients connect directly to each node of the cluster, compute a key's slot with the same hash algorithm, and execute commands directly on the Redis that owns the slot. In CAP terms, Cluster favors AP (Availability & Partition tolerance), turning Redis from a simple NoSQL in-memory database into a distributed NoSQL database.
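What "the client computes the slot and talks to the owning node directly" looks like in code, including the -MOVED redirect a smart client must follow after a reshard. A sketch with injected dependencies (slot_of is a key-to-slot function such as CRC16 mod 16384; connections are any objects with an execute_command method, e.g. redis-py connections):

```python
class MovedError(Exception):
    # Stand-in for the -MOVED <slot> <host>:<port> redirection error.
    def __init__(self, slot, host, port):
        self.slot, self.host, self.port = slot, host, port

class ClusterClient:
    def __init__(self, connections, slot_map, slot_of):
        self.connections = connections  # (host, port) -> connection
        self.slot_map = slot_map        # slot -> (host, port)
        self.slot_of = slot_of          # key -> slot (CRC16 % 16384)

    def execute(self, command, key, *args):
        node = self.slot_map[self.slot_of(key)]
        try:
            return self.connections[node].execute_command(command, key, *args)
        except MovedError as e:
            # The slot moved during a reshard: refresh the map and retry once.
            self.slot_map[e.slot] = (e.host, e.port)
            return self.connections[(e.host, e.port)].execute_command(
                command, key, *args)
```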