Message-Oriented Middleware: RabbitMQ Clustering

  In the previous post we briefly covered what RabbitMQ is, its installation and configuration directives, and the rabbitmqctl subcommands; see http://www.javashuo.com/article/p-huxkyxpb-nt.html for a refresher. Today we will look at RabbitMQ clustering. The reason to cluster is that in a distributed application environment RabbitMQ is the glue between components: if the RabbitMQ service dies, the whole production workload may be affected. To avoid that, we must make RabbitMQ highly available, letting every node in the cluster synchronize what it receives to the other nodes over the network, so that each node holds the cluster's state and the loss of one node does not mean the loss of messages (strictly speaking, a cluster replicates metadata such as exchanges and bindings by default; replicating queue contents across nodes additionally requires mirrored queues). The main job of a RabbitMQ cluster, then, is mutual synchronization between nodes, which gives us data redundancy. Beyond redundancy, once there are multiple rabbitmq-servers in the backend we also need to load balance across them, so that each node carries part of the traffic and clients get a single, unified access endpoint. Clients then talk to the load balancer's address, and the load balancer spreads their requests across the backend RabbitMQ nodes. If one node goes down, the load balancer's health checks stop scheduling requests to it, which also gives RabbitMQ high availability.

  Before building a RabbitMQ cluster, we need the following preparation:

  1. Set each node's hostname to match the name resolved from the hosts file; hostnames must be unique across nodes and resolvable via /etc/hosts;

  2. Synchronize the clocks; time synchronization is a basic requirement for any cluster;

  3. The Erlang cookie must be identical on all nodes;
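Of the prerequisites above, the time-synchronization step might look like the following on CentOS 7 (a sketch assuming chrony; the NTP server address is a hypothetical placeholder):

```shell
# run on every node: install and enable chrony (CentOS 7)
yum install -y chrony
# optionally point chrony at a local NTP server; 192.168.0.254 is hypothetical
# echo "server 192.168.0.254 iburst" >> /etc/chrony.conf
systemctl enable --now chronyd
# verify a time source has been selected (a line beginning with ^*)
chronyc sources
```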

  Lab environment:

Node            Hostname   IP address
node01          node01     192.168.0.41
node2           node2      192.168.0.42
load balancer   node3      192.168.0.43

  1. Configure each node's hostname

[root@node01 ~]# hostnamectl set-hostname node01
[root@node01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.41 node01
192.168.0.42 node2
192.168.0.43 node3
[root@node01 ~]# scp /etc/hosts node2:/etc/
hosts                                                                                100%  218   116.4KB/s   00:00    
[root@node01 ~]# scp /etc/hosts node3:/etc/
hosts                                                                                100%  218   119.2KB/s   00:00    
[root@node01 ~]# 

  Note: the RabbitMQ cluster itself consists only of node01 and node2, which synchronize messages with each other; the load balancer exists purely to distribute traffic and is not really part of the cluster, so its hostname can be anything;

  Verify: log in to each node and check that the hostname and hosts file are correct

[root@node2 ~]# hostname
node2
[root@node2 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.41 node01
192.168.0.42 node2
192.168.0.43 node3
[root@node2 ~]# 

  Install rabbitmq-server on each node

yum install rabbitmq-server -y

  Start rabbitmq-server on each node

  Note: node01 has the rabbitmq-management plugin enabled, so port 15672 is listening there; node2 does not, so 15672 is not listening on it. In a RabbitMQ cluster, port 25672 is dedicated to inter-node communication;
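The steps above can be sketched as a session (hedged; rabbitmq_management is the standard plugin name in this RabbitMQ release):

```shell
# start the broker on each node
systemctl start rabbitmq-server
# on node01 only: enable the web management plugin, which listens on 15672
rabbitmq-plugins enable rabbitmq_management
systemctl restart rabbitmq-server
# confirm the listening ports: 5672 (AMQP), 4369 (epmd), 25672 (inter-node
# communication) and, where the plugin is enabled, 15672 (management UI)
ss -tnl | grep -E '5672|4369'
```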

  The base environment is now ready, so we can configure the cluster. RabbitMQ cluster configuration is very simple: by default, a freshly started RabbitMQ node already forms a cluster (which is why 25672 is listening), it just happens to be a cluster containing only itself;

  Verify: check each node's cluster status and confirm that the node name matches the host's hostname

  Note: from the output above we can see that each node's cluster name matches its hostname;
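For reference, the check on node01 would look roughly like this (output abridged to the shape this version of rabbitmqctl prints for a single-node cluster):

```shell
rabbitmqctl cluster_status
# Cluster status of node rabbit@node01 ...
# [{nodes,[{disc,[rabbit@node01]}]},
#  {running_nodes,[rabbit@node01]},
#  {cluster_name,<<"rabbit@node01">>},
#  {partitions,[]}]
# ...done.
```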

  Stop the application on node2 and join node2 to node01's cluster

  Note: here we are told it cannot connect to rabbit@node01. There are two main causes for this error: incorrect hostname resolution, or mismatched cookies;

  Copy the cookie

[root@node2 ~]# scp /var/lib/rabbitmq/.erlang.cookie node01:/var/lib/rabbitmq/
The authenticity of host 'node01 (192.168.0.41)' can't be established.
ECDSA key fingerprint is SHA256:EG9nua4JJuUeofheXlgQeL9hX5H53JynOqf2vf53mII.
ECDSA key fingerprint is MD5:57:83:e6:46:2c:4b:bb:33:13:56:17:f7:fd:76:71:cc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node01,192.168.0.41' (ECDSA) to the list of known hosts.
.erlang.cookie                                                                       100%   20    10.6KB/s   00:00    
[root@node2 ~]# 

  Verify: use md5sum to confirm the cookies are identical on both nodes

[root@node2 ~]# md5sum /var/lib/rabbitmq/.erlang.cookie 
1d4f9e4d6c92cf0c749cc4ace68317f6  /var/lib/rabbitmq/.erlang.cookie
[root@node2 ~]# ssh node01
Last login: Wed Aug 26 19:41:30 2020 from 192.168.0.232
[root@node01 ~]# md5sum /var/lib/rabbitmq/.erlang.cookie 
1d4f9e4d6c92cf0c749cc4ace68317f6  /var/lib/rabbitmq/.erlang.cookie
[root@node01 ~]# 

  Note: the two nodes' cookies now match; let's try joining node2 to node01 again and see whether it succeeds

[root@node2 ~]# rabbitmqctl join_cluster rabbit@node01
Clustering node rabbit@node2 with rabbit@node01 ...
Error: unable to connect to nodes [rabbit@node01]: nodedown

DIAGNOSTICS
===========

attempted to contact: [rabbit@node01]

rabbit@node01:
  * connected to epmd (port 4369) on node01
  * epmd reports node 'rabbit' running on port 25672
  * TCP connection succeeded but Erlang distribution failed
  * suggestion: hostname mismatch?
  * suggestion: is the cookie set correctly?

current node details:
- node name: rabbitmqctl2523@node2
- home dir: /var/lib/rabbitmq
- cookie hash: HU+eTWySzwx0nMSs5oMX9g==

[root@node2 ~]#

  Note: it still fails to join. The reason is that we updated node01's cookie without restarting rabbitmq-server, so it is still running with the old cookie;

  Restart rabbitmq-server on node01

[root@node01 ~]# systemctl restart rabbitmq-server.service 
[root@node01 ~]# ss -tnl
State       Recv-Q Send-Q              Local Address:Port                             Peer Address:Port              
LISTEN      0      128                     127.0.0.1:631                                         *:*                  
LISTEN      0      128                             *:15672                                       *:*                  
LISTEN      0      100                     127.0.0.1:25                                          *:*                  
LISTEN      0      100                     127.0.0.1:64667                                       *:*                  
LISTEN      0      128                             *:8000                                        *:*                  
LISTEN      0      128                             *:8001                                        *:*                  
LISTEN      0      128                             *:25672                                       *:*                  
LISTEN      0      5                       127.0.0.1:8010                                        *:*                  
LISTEN      0      128                             *:111                                         *:*                  
LISTEN      0      128                             *:80                                          *:*                  
LISTEN      0      128                             *:4369                                        *:*                  
LISTEN      0      5                   192.168.122.1:53                                          *:*                  
LISTEN      0      128                             *:22                                          *:*                  
LISTEN      0      128                           ::1:631                                        :::*                  
LISTEN      0      100                           ::1:25                                         :::*                  
LISTEN      0      128                            :::5672                                       :::*                  
LISTEN      0      128                            :::111                                        :::*                  
LISTEN      0      128                            :::80                                         :::*                  
LISTEN      0      128                            :::4369                                       :::*                  
LISTEN      0      128                            :::22                                         :::*                  
[root@node01 ~]# 

  Note: if we had copied node01's cookie to node2 instead, we would need to restart node2. In short, any node that receives a new cookie must be restarted; what matters is that the cookie in use is identical everywhere;

  Join node2 to node01 again

[root@node2 ~]# rabbitmqctl join_cluster rabbit@node01
Clustering node rabbit@node2 with rabbit@node01 ...
...done.
[root@node2 ~]# 

  Note: if joining the cluster produces no error, the join succeeded;

  Verify: check the cluster status on each node

  Note: both nodes now show up in the status output on each node; node2 has joined node01's cluster. The two nodes' status output still differs, however, because the application on node2 has not been started; once it is, the two will report the same state;

  Start the application on node2

  Note: the two nodes' status output is now identical; the RabbitMQ cluster is up;
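The step above amounts to the following on node2:

```shell
# start the rabbit application on node2
rabbitmqctl start_app
# afterwards, cluster_status on either node should list both nodes
# under running_nodes
rabbitmqctl cluster_status
```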

  Verify: browse to node01's port 15672 and check whether the web management UI shows the node information

  Note: node2 shows no statistics because the rabbitmq-management plugin is not enabled on it; enabling the plugin will make the statistics appear;
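To have node2 report statistics as well, enable the plugin there (a sketch; whether a restart is strictly required depends on the release, so one is included to be safe):

```shell
# on node2: enable the web management plugin
rabbitmq-plugins enable rabbitmq_management
systemctl restart rabbitmq-server
# 15672 should now be listening on node2 as well
ss -tnl | grep 15672
```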

  Cluster-related rabbitmqctl subcommands

  join_cluster <clusternode> [--ram]: join the cluster of the specified node;

  cluster_status: show cluster status;

  change_cluster_node_type disc | ram: change the node's storage type; disc means disk, ram means memory. A cluster must keep at least one disc node;

[root@node2 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@node2 ...
[{nodes,[{disc,[rabbit@node01,rabbit@node2]}]},
 {running_nodes,[rabbit@node01,rabbit@node2]},
 {cluster_name,<<"rabbit@node01">>},
 {partitions,[]}]
...done.
[root@node2 ~]# rabbitmqctl change_cluster_node_type ram
Turning rabbit@node2 into a ram node ...
Error: mnesia_unexpectedly_running
[root@node2 ~]#

  Note: the error mnesia_unexpectedly_running means the node type cannot be changed while the application is running; the fix is to stop the application on node2, change the type, then start the application again;

[root@node2 ~]# rabbitmqctl stop_app
Stopping node rabbit@node2 ...
...done.
[root@node2 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@node2 ...
[{nodes,[{disc,[rabbit@node01,rabbit@node2]}]}]
...done.
[root@node2 ~]# rabbitmqctl change_cluster_node_type ram
Turning rabbit@node2 into a ram node ...
...done.
[root@node2 ~]# rabbitmqctl start_app
Starting node rabbit@node2 ...
...done.
[root@node2 ~]# rabbitmqctl cluster_status              
Cluster status of node rabbit@node2 ...
[{nodes,[{disc,[rabbit@node01]},{ram,[rabbit@node2]}]},
 {running_nodes,[rabbit@node01,rabbit@node2]},
 {cluster_name,<<"rabbit@node01">>},
 {partitions,[]}]
...done.
[root@node2 ~]# 

  Note: node2 is now a ram node;

[root@node01 ~]#  rabbitmqctl change_cluster_node_type ram
Turning rabbit@node01 into a ram node ...
Error: mnesia_unexpectedly_running
[root@node01 ~]# rabbitmqctl stop_app
Stopping node rabbit@node01 ...
...done.
[root@node01 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@node01 ...
[{nodes,[{disc,[rabbit@node01]},{ram,[rabbit@node2]}]}]
...done.
[root@node01 ~]#  rabbitmqctl change_cluster_node_type ram
Turning rabbit@node01 into a ram node ...
Error: {resetting_only_disc_node,"You cannot reset a node when it is the only disc node in a cluster. Please convert another node of the cluster to a disc node first."}
[root@node01 ~]# 

  Note: a cluster must keep at least one disc node; since node2 was changed to ram, node01 must remain disc, which is why the attempt above fails;

  forget_cluster_node [--offline]: remove the specified node from the cluster;

[root@node01 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@node01 ...
[{nodes,[{disc,[rabbit@node01]},{ram,[rabbit@node2]}]},
 {running_nodes,[rabbit@node2,rabbit@node01]},
 {cluster_name,<<"rabbit@node01">>},
 {partitions,[]}]
...done.
[root@node01 ~]# rabbitmqctl forget_cluster_node rabbit@node2
Removing node rabbit@node2 from cluster ...
Error: {failed_to_remove_node,rabbit@node2,
                              {active,"Mnesia is running",rabbit@node2}}
[root@node01 ~]# 

  Note: removing node2 from node01 fails because node2 is still active; this tells us the subcommand can only remove nodes that are offline;

  Take the application on node2 offline

[root@node2 ~]# rabbitmqctl stop_app
Stopping node rabbit@node2 ...
...done.
[root@node2 ~]#

  Remove node2 again

[root@node01 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@node01 ...
[{nodes,[{disc,[rabbit@node01]},{ram,[rabbit@node2]}]},
 {running_nodes,[rabbit@node01]},
 {cluster_name,<<"rabbit@node01">>},
 {partitions,[]}]
...done.
[root@node01 ~]# rabbitmqctl forget_cluster_node rabbit@node2          
Removing node rabbit@node2 from cluster ...
...done.
[root@node01 ~]# rabbitmqctl cluster_status                  
Cluster status of node rabbit@node01 ...
[{nodes,[{disc,[rabbit@node01]}]},
 {running_nodes,[rabbit@node01]},
 {cluster_name,<<"rabbit@node01">>},
 {partitions,[]}]
...done.
[root@node01 ~]# 

  update_cluster_nodes clusternode: refresh this node's view of the cluster from the specified node;

  Join node2 to node01's cluster

[root@node2 ~]# rabbitmqctl stop_app
Stopping node rabbit@node2 ...
...done.
[root@node2 ~]# rabbitmqctl join_cluster rabbit@node01
Clustering node rabbit@node2 with rabbit@node01 ...
...done.
[root@node2 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@node2 ...
[{nodes,[{disc,[rabbit@node01,rabbit@node2]}]}]
...done.
[root@node2 ~]# rabbitmqctl start_app
Starting node rabbit@node2 ...
...done.
[root@node2 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@node2 ...
[{nodes,[{disc,[rabbit@node01,rabbit@node2]}]},
 {running_nodes,[rabbit@node01,rabbit@node2]},
 {cluster_name,<<"rabbit@node01">>},
 {partitions,[]}]
...done.
[root@node2 ~]# 

  Stop the application on node2

[root@node2 ~]# rabbitmqctl stop_app
Stopping node rabbit@node2 ...
...done.
[root@node2 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@node2 ...
[{nodes,[{disc,[rabbit@node01,rabbit@node2]}]}]
...done.
[root@node2 ~]# 

  Note: if a new node joins the cluster now, and the application on node01 is then stopped as well, node2 will report an error when it tries to start its application again, as shown below

  Join node3 to node01

[root@node3 ~]# rabbitmqctl cluster_status            
Cluster status of node rabbit@node3 ...
[{nodes,[{disc,[rabbit@node3]}]},
 {running_nodes,[rabbit@node3]},
 {cluster_name,<<"rabbit@node3">>},
 {partitions,[]}]
...done.
[root@node3 ~]# rabbitmqctl stop_app
Stopping node rabbit@node3 ...
...done.
[root@node3 ~]# rabbitmqctl join_cluster rabbit@node01
Clustering node rabbit@node3 with rabbit@node01 ...
...done.
[root@node3 ~]# rabbitmqctl start_app
Starting node rabbit@node3 ...
...done.
[root@node3 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@node3 ...
[{nodes,[{disc,[rabbit@node01,rabbit@node2,rabbit@node3]}]},
 {running_nodes,[rabbit@node01,rabbit@node3]},
 {cluster_name,<<"rabbit@node01">>},
 {partitions,[]}]
...done.
[root@node3 ~]# 

  Stop the application on node01

[root@node01 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@node01 ...
[{nodes,[{disc,[rabbit@node01,rabbit@node2,rabbit@node3]}]},
 {running_nodes,[rabbit@node3,rabbit@node01]},
 {cluster_name,<<"rabbit@node01">>},
 {partitions,[]}]
...done.
[root@node01 ~]# rabbitmqctl stop_app
Stopping node rabbit@node01 ...
...done.
[root@node01 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@node01 ...
[{nodes,[{disc,[rabbit@node01,rabbit@node2,rabbit@node3]}]}]
...done.
[root@node01 ~]# 

  Start the application on node2

[root@node2 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@node2 ...
[{nodes,[{disc,[rabbit@node01,rabbit@node2]}]}]
...done.
[root@node2 ~]# rabbitmqctl start_app     
Starting node rabbit@node2 ...



BOOT FAILED
===========

Error description:
   {could_not_start,rabbit,
       {bad_return,
           {{rabbit,start,[normal,[]]},
            {'EXIT',
                {rabbit,failure_during_boot,
                    {error,
                        {timeout_waiting_for_tables,
                            [rabbit_user,rabbit_user_permission,rabbit_vhost,
                             rabbit_durable_route,rabbit_durable_exchange,
                             rabbit_runtime_parameters,
                             rabbit_durable_queue]}}}}}}}

Log files (may contain more information):
   /var/log/rabbitmq/rabbit@node2.log
   /var/log/rabbitmq/rabbit@node2-sasl.log

Error: {rabbit,failure_during_boot,
           {could_not_start,rabbit,
               {bad_return,
                   {{rabbit,start,[normal,[]]},
                    {'EXIT',
                        {rabbit,failure_during_boot,
                            {error,
                                {timeout_waiting_for_tables,
                                    [rabbit_user,rabbit_user_permission,
                                     rabbit_vhost,rabbit_durable_route,
                                     rabbit_durable_exchange,
                                     rabbit_runtime_parameters,
                                     rabbit_durable_queue]}}}}}}}}
[root@node2 ~]# 

  Note: node2 now fails to start, because its last saved view of the cluster lists only node01 (which is down) and knows nothing about node3. In this situation we use the update_cluster_nodes subcommand to refresh node2's cluster information from node3; after that, starting the application on node2 no longer errors;

  Fetch updated cluster node information from node3, then start the application on node2

[root@node2 ~]# rabbitmqctl update_cluster_nodes rabbit@node3
Updating cluster nodes for rabbit@node2 from rabbit@node3 ...
...done.
[root@node2 ~]# rabbitmqctl cluster_status                   
Cluster status of node rabbit@node2 ...
[{nodes,[{disc,[rabbit@node01,rabbit@node2,rabbit@node3]}]}]
...done.
[root@node2 ~]# rabbitmqctl start_app
Starting node rabbit@node2 ...
...done.
[root@node2 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@node2 ...
[{nodes,[{disc,[rabbit@node01,rabbit@node2,rabbit@node3]}]},
 {running_nodes,[rabbit@node3,rabbit@node2]},
 {cluster_name,<<"rabbit@node01">>},
 {partitions,[]}]
...done.
[root@node2 ~]# 

  Note: after updating the cluster node information, node2's status output includes node3, and starting the application on node2 now works without any problem;

  sync_queue queue: synchronize the specified queue;

  cancel_sync_queue queue: cancel synchronization of the specified queue;

  set_cluster_name name: set the cluster name;

[root@node2 ~]# rabbitmqctl cluster_status
Cluster status of node rabbit@node2 ...
[{nodes,[{disc,[rabbit@node01,rabbit@node2,rabbit@node3]}]},
 {running_nodes,[rabbit@node01,rabbit@node3,rabbit@node2]},
 {cluster_name,<<"rabbit@node01">>},
 {partitions,[]}]
...done.
[root@node2 ~]# rabbitmqctl set_cluster_name rabbit@rabbit_node02
Setting cluster name to rabbit@rabbit_node02 ...
...done.
[root@node2 ~]# rabbitmqctl cluster_status                       
Cluster status of node rabbit@node2 ...
[{nodes,[{disc,[rabbit@node01,rabbit@node2,rabbit@node3]}]},
 {running_nodes,[rabbit@node01,rabbit@node3,rabbit@node2]},
 {cluster_name,<<"rabbit@rabbit_node02">>},
 {partitions,[]}]
...done.
[root@node2 ~]# 

  Note: changing the cluster name on any node propagates to all the others; in other words, cluster state is kept consistent on every node;

  Load balancing the RabbitMQ cluster with haproxy

  1. Install haproxy

[root@node3 ~]# yum install -y haproxy
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package haproxy.x86_64 0:1.5.18-9.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================
 Package                Arch                  Version                     Repository           Size
====================================================================================================
Installing:
 haproxy                x86_64                1.5.18-9.el7                base                834 k

Transaction Summary
====================================================================================================
Install  1 Package

Total download size: 834 k
Installed size: 2.6 M
Downloading packages:
haproxy-1.5.18-9.el7.x86_64.rpm                                              | 834 kB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : haproxy-1.5.18-9.el7.x86_64                                                      1/1 
  Verifying  : haproxy-1.5.18-9.el7.x86_64                                                      1/1 

Installed:
  haproxy.x86_64 0:1.5.18-9.el7                                                                     

Complete!
[root@node3 ~]# 

  Note: haproxy can be deployed on a separate host or on one of the cluster nodes; a separate host is recommended to avoid port conflicts;

  Configure haproxy
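A minimal /etc/haproxy/haproxy.cfg along these lines would do the job described below; the front port, stats port, and health-check intervals here are assumptions rather than the exact values used, and because node3 runs both haproxy and a rabbitmq-server, the front port must not collide with the local 5672:

```
# /etc/haproxy/haproxy.cfg (sketch)
global
    log     127.0.0.1 local2
    maxconn 4000
    daemon

defaults
    mode            tcp        # AMQP is a binary protocol, so proxy at TCP level
    timeout connect 5s
    timeout client  1m
    timeout server  1m

listen stats
    bind *:9000                # assumed stats port
    mode http
    stats enable
    stats uri /haproxy-status

listen rabbitmq
    bind *:8072                # assumed front port (5672 is taken locally)
    balance roundrobin         # round-robin scheduling across the backends
    server node01 192.168.0.41:5672 check inter 3s rise 2 fall 3
    server node2  192.168.0.42:5672 check inter 3s rise 2 fall 3
    server node3  192.168.0.43:5672 check inter 3s rise 2 fall 3
```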

  Note: the above is an example of load balancing the RabbitMQ cluster with haproxy: we proxy RabbitMQ in haproxy's TCP mode and use the round-robin algorithm to schedule requests across the backend servers;

  Verify: start haproxy, check that its ports are listening, and confirm the stats page correctly detects whether the backend servers are online

  Note: the load balancer is now in place; from here on, clients use the address the load balancer listens on to access the cluster. Bear in mind that haproxy itself is now a new single point of failure;

  Open haproxy's stats page in a browser and check whether the backend servers are online

  Note: all three backend rabbitmq-servers are online;

  Stop rabbitmq on node3 and see whether haproxy promptly notices that node3 is no longer online and marks it down

  Note: haproxy's health checks on the backend servers are what give the RabbitMQ cluster failover. The cluster itself only handles message synchronization for data redundancy; real high availability still depends on the front-end load balancer. To load balance RabbitMQ with nginx instead, write the configuration using nginx's TCP (stream) proxying; for more on load balancing TCP applications with nginx, see my earlier post at http://www.javashuo.com/article/p-hsdkijpf-mm.html, so I won't go into it further here;
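For comparison, a hedged sketch of the equivalent nginx configuration using the stream module (nginx must be built with stream support; the listen port is an assumption):

```
# nginx.conf fragment (sketch): TCP load balancing via the stream module
stream {
    upstream rabbitmq {
        server 192.168.0.41:5672 max_fails=3 fail_timeout=10s;
        server 192.168.0.42:5672 max_fails=3 fail_timeout=10s;
        server 192.168.0.43:5672 max_fails=3 fail_timeout=10s;
    }
    server {
        listen 8072;           # assumed front port
        proxy_pass rabbitmq;
    }
}
```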
