Message middleware ---> essentially a message queue
Asynchronous: the result is not needed right away, so requests can wait in a queue
Synchronous: data must be returned in real time, so queuing is not acceptable
The Queue from the subprocess/multiprocessing section also provides communication between different processes
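For comparison, Python's own multiprocessing Queue already gives simple queueing between processes on a single machine; a broker like RabbitMQ extends the same idea to queues shared across machines and languages. A minimal illustrative sketch of the in-process case (not RabbitMQ itself):

from multiprocessing import Process, Queue

def worker(q):
    # block until the parent process puts a task on the queue
    task = q.get()
    print("worker got:", task)

if __name__ == "__main__":
    q = Queue()                        # queue shared between parent and child process
    p = Process(target=worker, args=(q,))
    p.start()
    q.put("buy-ticket request")        # producer side: enqueue a task
    p.join()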
Typical use cases:
Ticket sales and flash sales
A bastion host pushing files to a batch of servers
[root@rabbitmq ~]# cat /etc/redhat-release
CentOS release 6.6 (Final)
[root@rabbitmq ~]# uname -r
2.6.32-504.el6.x86_64
1. Install the dependencies:
yum install gcc ncurses ncurses-base ncurses-devel ncurses-libs ncurses-static ncurses-term ocaml-curses ocaml-curses-devel openssl-devel zlib-devel openssl-devel perl xz xmlto m4 kernel-devel -y
2. Download otp_src_19.3.tar.gz
wget http://erlang.org/download/otp_src_19.3.tar.gz
3. tar xvf otp_src_19.3.tar.gz
4. cd otp_src_19.3
5. ./configure --prefix=/usr/local/erlang --with-ssl --enable-threads --enable-smp-support --enable-kernel-poll --enable-hipe --without-javac
6. make && make install
7. Configure the Erlang environment:
echo "export PATH=$PATH:/usr/local/erlang/bin" >>/etc/profile
# make the environment variable take effect
source /etc/profile
8. Configure hostname resolution:
[root@rabbitmq otp_src_19.3]# echo "127.0.0.1 rabbitmq" >>/etc/hosts    # replace rabbitmq with your own hostname
Note:
If starting RabbitMQ reports an error, check the hostname first:
[root@rabbitmq ~]# hostname
rabbitmq
and then run this step:
echo "127.0.0.1 rabbitmq" >>/etc/hosts
1. Download rabbitmq-server-generic-unix-3.6.5.tar.xz
2. tar xvf rabbitmq-server-generic-unix-3.6.5.tar.xz
3. mv rabbitmq_server-3.6.5/ /usr/local/rabbitmq
4. Start and manage the service:
# Start the RabbitMQ service
/usr/local/rabbitmq/sbin/rabbitmq-server
# Start in the background
/usr/local/rabbitmq/sbin/rabbitmq-server -detached
# Stop the RabbitMQ service
/usr/local/rabbitmq/sbin/rabbitmqctl stop
# or: ps -ef | grep rabbit and kill -9 xxx
# Enable the management plugin (web UI)
/usr/local/rabbitmq/sbin/rabbitmq-plugins enable rabbitmq_management
# Create a user
/usr/local/rabbitmq/sbin/rabbitmqctl add_user rabbitadmin 123456
/usr/local/rabbitmq/sbin/rabbitmqctl set_user_tags rabbitadmin administrator
# Grant permissions to the user
[root@rabbitmq sbin]# rabbitmqctl set_permissions -p / nulige ".*" ".*" ".*"
Setting permissions for user "nulige" in vhost "/" ...
4. Log in to the RabbitMQ web UI
Login account information
# Web login: http://IP:15672   Username: rabbitadmin   Password: 123456
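As a quick check that the management plugin is reachable without opening a browser, you can also query its HTTP API on the same port. A minimal sketch, assuming the rabbitadmin/123456 account created above, an example server IP, and the third-party requests library:

import requests

# The management plugin exposes a REST API on port 15672:
# /api/overview returns broker-wide information, /api/queues lists queues.
base = "http://192.168.1.118:15672/api"       # replace with your server's IP
auth = ("rabbitadmin", "123456")

overview = requests.get(base + "/overview", auth=auth).json()
print("RabbitMQ version:", overview["rabbitmq_version"])

for q in requests.get(base + "/queues", auth=auth).json():
    print(q["vhost"], q["name"], q.get("messages", 0))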
1. System environment
[root@rabbitmq sbin]# cat /proc/version
Linux version 3.10.0-327.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC) ) #1 SMP Thu Nov 19 22:10:57 UTC 2015
1.1 Disable the firewall on CentOS 7.x
[root@rabbitmq /]# systemctl stop firewalld.service
[root@rabbitmq /]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
If you prefer not to disable the firewall, open the port instead:
# Open port 5672:
firewall-cmd --zone=public --add-port=5672/tcp --permanent
firewall-cmd --reload
2. Download the erlang and rabbitmq-server RPMs
http://www.rabbitmq.com/releases/erlang/erlang-19.0.4-1.el7.centos.x86_64.rpm
http://www.rabbitmq.com/releases/rabbitmq-server/v3.6.6/rabbitmq-server-3.6.6-1.el7.noarch.rpm
3. Install Erlang
[root@rabbitmq ~]# cd /server/scripts/
[root@rabbitmq scripts]# ll
total 23508
-rw-r--r--. 1 root root 18580960 Jan 28 10:04 erlang-19.0.4-1.el7.centos.x86_64.rpm
-rw-r--r--. 1 root root 5487706 Jan 28 10:04 rabbitmq-server-3.6.6-1.el7.noarch.rpm
[root@rabbitmq scripts]# rpm -ivh erlang-19.0.4-1.el7.centos.x86_64.rpm
Test whether Erlang was installed successfully:
[root@rabbitmq scripts]# erl
Erlang/OTP 19 [erts-8.0.3] [source] [64-bit] [smp:2:2] [async-threads:10] [hipe] [kernel-poll:false]
Eshell V8.0.3 (abort with ^G)
1> 5+6.
11
2> halt().   # quit the Erlang shell
4. Install socat (note: socat is a required dependency of RabbitMQ; installation fails without it)
[root@rabbitmq scripts]# yum install socat
5. Install RabbitMQ
[root@rabbitmq scripts]# rpm -ivh rabbitmq-server-3.6.6-1.el7.noarch.rpm
Start and stop the service:
/sbin/service rabbitmq-server start    # start the service
/sbin/service rabbitmq-server stop     # stop the service
/sbin/service rabbitmq-server status   # check the service status
Example:
[root@rabbitmq ~]# service rabbitmq-server status
Redirecting to /bin/systemctl status rabbitmq-server.service
● rabbitmq-server.service - RabbitMQ broker
   Loaded: loaded (/usr/lib/systemd/system/rabbitmq-server.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2017-01-28 20:20:46 CST; 8h ago
 Main PID: 2892 (beam.smp)
   Status: "Initialized"
   CGroup: /system.slice/rabbitmq-server.service
           ├─2892 /usr/lib64/erlang/erts-8.0.3/bin/beam.smp -W w -A 64 -P 1048576 -t 5000000 -st...
           ├─3027 /usr/lib64/erlang/erts-8.0.3/bin/epmd -daemon
           ├─3143 erl_child_setup 1024
           ├─3153 inet_gethost 4
           └─3154 inet_gethost 4

Jan 28 20:20:43 rabbitmq rabbitmq-server[2892]: RabbitMQ 3.6.6. Copyright (C) 2007-2016 Pivot...nc.
Jan 28 20:20:43 rabbitmq rabbitmq-server[2892]: ##  ##      Licensed under the MPL.  See http...om/
Jan 28 20:20:43 rabbitmq rabbitmq-server[2892]: ##  ##
Jan 28 20:20:43 rabbitmq rabbitmq-server[2892]: ##########  Logs: /var/log/rabbitmq/rabbit@ra...log    # log file location
Jan 28 20:20:43 rabbitmq rabbitmq-server[2892]: ######  ##        /var/log/rabbitmq/rabbit@ra...log
Jan 28 20:20:43 rabbitmq rabbitmq-server[2892]: ##########
Jan 28 20:20:43 rabbitmq rabbitmq-server[2892]: Starting broker...
Jan 28 20:20:45 rabbitmq rabbitmq-server[2892]: systemd unit for activation check: "rabbitmq-...ce"
Jan 28 20:20:46 rabbitmq systemd[1]: Started RabbitMQ broker.
Jan 28 20:20:46 rabbitmq rabbitmq-server[2892]: completed with 0 plugins.
Hint: Some lines were ellipsized, use -l to show in full.
# Check the RabbitMQ processes and listening ports
[root@rabbitmq sbin]# ps -ef|grep rabbitmq
rabbitmq   2892      1  0 Jan28 ?  00:01:39 /usr/lib64/erlang/erts-8.0.3/bin/beam.smp -W w -A 64 -P 1048576 -t 5000000 -stbt db -zdbbl 32000 -K true -- -root /usr/lib64/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa /usr/lib/rabbitmq/lib/rabbitmq_server-3.6.6/ebin -noshell -noinput -s rabbit boot -sname rabbit@rabbitmq -boot start_sasl -kernel inet_default_connect_options [{nodelay,true}] -sasl errlog_type error -sasl sasl_error_logger false -rabbit error_logger {file,"/var/log/rabbitmq/rabbit@rabbitmq.log"} -rabbit sasl_error_logger {file,"/var/log/rabbitmq/rabbit@rabbitmq-sasl.log"} -rabbit enabled_plugins_file "/etc/rabbitmq/enabled_plugins" -rabbit plugins_dir "/usr/lib/rabbitmq/lib/rabbitmq_server-3.6.6/plugins" -rabbit plugins_expand_dir "/var/lib/rabbitmq/mnesia/rabbit@rabbitmq-plugins-expand" -os_mon start_cpu_sup false -os_mon start_disksup false -os_mon start_memsup false -mnesia dir "/var/lib/rabbitmq/mnesia/rabbit@rabbitmq" -kernel inet_dist_listen_min 25672 -kernel inet_dist_listen_max 25672
rabbitmq   3027      1  0 Jan28 ?  00:00:00 /usr/lib64/erlang/erts-8.0.3/bin/epmd -daemon
rabbitmq   3143   2892  0 Jan28 ?  00:00:01 erl_child_setup 1024
rabbitmq   3153   3143  0 Jan28 ?  00:00:00 inet_gethost 4
rabbitmq   3154   3153  0 Jan28 ?  00:00:00 inet_gethost 4
root      24739  21359  0 03:18 pts/0  00:00:00 grep --color=auto rabbitmq
[root@rabbitmq scripts]# cd /sbin/
[root@rabbitmq sbin]# ./rabbitmq-plugins list
 Configured: E = explicitly enabled; e = implicitly enabled
 | Status:   * = running on rabbit@rabbitmq
 |/
[  ] amqp_client                       3.6.6
[  ] cowboy                            1.0.3
[  ] cowlib                            1.0.1
[  ] mochiweb                          2.13.1
[  ] rabbitmq_amqp1_0                  3.6.6
[  ] rabbitmq_auth_backend_ldap        3.6.6
[  ] rabbitmq_auth_mechanism_ssl       3.6.6
[  ] rabbitmq_consistent_hash_exchange 3.6.6
[  ] rabbitmq_event_exchange           3.6.6
[  ] rabbitmq_federation               3.6.6
[  ] rabbitmq_federation_management    3.6.6
[  ] rabbitmq_jms_topic_exchange       3.6.6
[  ] rabbitmq_management               3.6.6
[  ] rabbitmq_management_agent         3.6.6
[  ] rabbitmq_management_visualiser    3.6.6
[  ] rabbitmq_mqtt                     3.6.6
[  ] rabbitmq_recent_history_exchange  1.2.1
[  ] rabbitmq_sharding                 0.1.0
[  ] rabbitmq_shovel                   3.6.6
[  ] rabbitmq_shovel_management        3.6.6
[  ] rabbitmq_stomp                    3.6.6
[  ] rabbitmq_top                      3.6.6
[  ] rabbitmq_tracing                  3.6.6
[  ] rabbitmq_trust_store              3.6.6
[  ] rabbitmq_web_dispatch             3.6.6
[  ] rabbitmq_web_stomp                3.6.6
[  ] rabbitmq_web_stomp_examples       3.6.6
[  ] sockjs                            0.3.4
[  ] webmachine                        1.10.3
# Check the node status
[root@rabbitmq sbin]# ./rabbitmqctl status
Status of node rabbit@rabbitmq ...
[{pid,2892},
 {running_applications,[{rabbit,"RabbitMQ","3.6.6"},
                        {mnesia,"MNESIA CXC 138 12","4.14"},
                        {rabbit_common,[],"3.6.6"},
                        {xmerl,"XML parser","1.3.11"},
                        {os_mon,"CPO CXC 138 46","2.4.1"},
                        {ranch,"Socket acceptor pool for TCP protocols.","1.2.1"},
                        {sasl,"SASL CXC 138 11","3.0"},
                        {stdlib,"ERTS CXC 138 10","3.0.1"},
                        {kernel,"ERTS CXC 138 10","5.0.1"}]},
 {os,{unix,linux}},
 {erlang_version,"Erlang/OTP 19 [erts-8.0.3] [source] [64-bit] [smp:2:2] [async-threads:64] [hipe] [kernel-poll:true]\n"},
 {memory,[{total,39981872},
          {connection_readers,0},
          {connection_writers,0},
          {connection_channels,0},
          {connection_other,0},
          {queue_procs,2832},
          {queue_slave_procs,0},
          {plugins,0},
          {other_proc,13381568},
          {mnesia,60888},
          {mgmt_db,0},
          {msg_index,45952},
          {other_ets,952928},
          {binary,13072},
          {code,17760058},
          {atom,752561},
          {other_system,7012013}]},
 {alarms,[]},
 {listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]},
 {vm_memory_high_watermark,0.4},
 {vm_memory_limit,768666828},
 {disk_free_limit,50000000},
 {disk_free,17276772352},
 {file_descriptors,[{total_limit,924},
                    {total_used,2},
                    {sockets_limit,829},
                    {sockets_used,0}]},
 {processes,[{limit,1048576},{used,138}]},
 {run_queue,0},
 {uptime,1060},
 {kernel,{net_ticktime,60}}]
# List queues and their message counts
[root@rabbitmq sbin]# rabbitmqctl list_queues
Listing queues ...
hello   1
...done.
# Add a user and set the username and password
Syntax: rabbitmqctl add_user Username Password
Example: add user "admin" with password "admin"
[root@rabbitmq sbin]# ./rabbitmqctl add_user admin admin
Creating user "admin" ...

# Set the user's role (here: administrator)
[root@rabbitmq sbin]# ./rabbitmqctl set_user_tags admin administrator
Setting tags for user "admin" to [administrator] ...

# List users
[root@rabbitmq sbin]# ./rabbitmqctl list_users
Listing users ...
admin   [administrator]
guest   [administrator]

# Delete a user
rabbitmqctl delete_user Username

# Change a user's password
rabbitmqctl change_password Username Newpassword

# Enable the web management plugin
[root@rabbitmq sbin]# ./rabbitmq-plugins enable rabbitmq_management
The following plugins have been enabled:
  mochiweb
  webmachine
  rabbitmq_web_dispatch
  amqp_client
  rabbitmq_management_agent
  rabbitmq_management
Applying plugin configuration to rabbit@rabbitmq... started 6 plugins.
Check the RabbitMQ installation directories
[root@rabbitmq sbin]# whereis rabbitmq
rabbitmq: /usr/lib/rabbitmq /etc/rabbitmq
# For security reasons, the default guest user can only log in via http://localhost:15672; the guest account cannot be used from other IPs.
We can enable remote login to the management UI by editing the configuration file.
# Add the configuration file rabbitmq.config
[root@rabbitmq ~]# cd /etc/rabbitmq/
[root@rabbitmq rabbitmq]# vi rabbitmq.config     # this file does not exist by default; create it yourself
[
{rabbit, [{tcp_listeners, [5672]}, {loopback_users, ["nulige"]}]}
].
# Add user nulige with password 123456
[root@rabbitmq /]# cd /sbin/
[root@rabbitmq sbin]# rabbitmqctl add_user nulige 123456
Creating user "nulige" ...
# The user must be tagged administrator to access the management UI remotely
[root@rabbitmq sbin]# rabbitmqctl set_user_tags nulige administrator
Setting tags for user "nulige" to [administrator] ...
[root@rabbitmq sbin]# rabbitmqctl set_permissions -p / nulige ".*" ".*" ".*"
Setting permissions for user "nulige" in vhost "/" ...
Syntax:
set_permissions [-p <vhost>] <user> <conf> <write> <read>
# Once configured, restart the service for the change to take effect
service rabbitmq-server stop
service rabbitmq-server start
The server can now be accessed from outside. If you check the log file again, though, it still shows the old content complaining that the config file was not found; you can delete the log file manually and restart the service, but this does not affect normal use.
rm rabbit\@mythsky.log    # delete the log file, then restart the service
service rabbitmq-server stop
service rabbitmq-server start
To access the web UI:
http://ip:15672/#/users
7. User roles
Roughly speaking, user roles fall into five categories: administrator, monitoring, policymaker, management, and other.
(1) administrator
Can log in to the management console (when the management plugin is enabled), view all information, and manage users and policies.
(2) monitoring
Can log in to the management console (when the management plugin is enabled) and view node-level information (process count, memory usage, disk usage, etc.).
(3) policymaker
Can log in to the management console (when the management plugin is enabled) and manage policies, but cannot view node-level information.
By contrast, an administrator can see that node-level information.
(4) management
Can only log in to the management console (when the management plugin is enabled); cannot view node information and cannot manage policies.
(5) other
Cannot log in to the management console; these are typically ordinary producers and consumers.
With that in mind, you can assign different roles to different users as needed.
The command to set a user's role is:
rabbitmqctl set_user_tags User Tag
User is the username and Tag is the role name (administrator, monitoring, policymaker, management as above, or another custom name).
The same user can also be given multiple roles, for example:
rabbitmqctl set_user_tags hncscwc monitoring policymaker
8. User permissions
User permissions govern a user's operations on exchanges and queues and consist of a configure permission plus write and read permissions. Configure permission controls declaring and deleting exchanges and queues. Write and read permissions control consuming from a queue, publishing to an exchange, and binding queues to exchanges.
For example: binding a queue to an exchange requires write permission on the queue and read permission on the exchange; publishing to an exchange requires write permission on the exchange; consuming from a queue requires read permission on the queue. See "How permissions work" in the official documentation for details. (A short Python sketch after the command list below shows what a permission violation looks like from the client side.)
The relevant commands are:
(1) Set a user's permissions
rabbitmqctl set_permissions -p VHostPath User ConfP WriteP ReadP
(2) List permissions of all users (in the given vhost)
rabbitmqctl list_permissions [-p VHostPath]
(3) List a specific user's permissions
rabbitmqctl list_user_permissions User
(4) Clear a user's permissions
rabbitmqctl clear_permissions [-p VHostPath] User
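From the client side, an operation the user's permissions do not allow shows up as a channel-level ACCESS_REFUSED error. A rough sketch of what that looks like with pika; the user, host, and queue name here are only examples, and it assumes the user's configure pattern does not match the queue name:

import pika

credentials = pika.PlainCredentials('nulige', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='192.168.1.118', credentials=credentials))
channel = connection.channel()

try:
    # Declaring a queue needs "configure" permission on that queue name;
    # if the permission regex does not match it, the broker closes the
    # channel with an ACCESS_REFUSED error.
    channel.queue_declare(queue='restricted.queue')
except pika.exceptions.ChannelClosed as err:
    print("declare refused:", err)
finally:
    connection.close()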
For command details, see the official rabbitmqctl documentation.
Installation reference: http://www.cnblogs.com/liaojie970/p/6138278.html
RabbitMQ tuning reference: http://www.blogjava.net/qbna350816/archive/2016/08/02/431415.aspx
Official configuration reference: http://www.rabbitmq.com/configure.html
Installation on macOS
Reference: http://www.rabbitmq.com/install-standalone-mac.html
9. Install the Python RabbitMQ module (on Windows)
pip install pika
or
easy_install pika
or
install from source: https://pypi.python.org/pypi/pika
10. Several typical usage patterns; see the official tutorials:
https://www.rabbitmq.com/tutorials/tutorial-one-python.html
Sender
#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    'localhost'))                   # change localhost to 192.168.1.118 for a remote broker
channel = connection.channel()      # open a channel on the connection

# declare the queue
channel.queue_declare(queue='hello')

# In RabbitMQ a message can never be sent directly to the queue, it always needs to go through an exchange.
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()
Receiver
#!/usr/bin/env python
# -*- coding:utf-8 -*-
# Author: nulige

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    'localhost'))
channel = connection.channel()      # channel instance

# You may ask why we declare the queue again - we have already declared it in our previous code.
# We could avoid that if we were sure that the queue already exists. For example if send.py program
# was run before. But we're not yet sure which program to run first. In such cases it's a good
# practice to repeat declaring the queue in both programs.
channel.queue_declare(queue='hello')


def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)


# callback() is invoked whenever a message arrives
channel.basic_consume(callback,
                      queue='hello',
                      no_ack=True)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()           # start consuming: blocks and listens for messages forever
On the Linux server, use rabbitmqctl list_queues to check the queues and their message counts.
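If you would rather check the queue depth from Python than from the server shell, a passive declare reports the current message count without creating anything. A minimal sketch, assuming the same local broker and the 'hello' queue used above:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# passive=True only checks that the queue exists and returns its statistics;
# it raises an error instead of creating the queue if it is missing.
result = channel.queue_declare(queue='hello', passive=True)
print("messages waiting in 'hello':", result.method.message_count)

connection.close()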
In this mode, RabbitMQ by default hands the producer's (P) messages to the consumers (C) in turn, much like load balancing.
Producer code
import pika
import time

connection = pika.BlockingConnection(pika.ConnectionParameters(
    'localhost'))
channel = connection.channel()

# declare the queue
channel.queue_declare(queue='task_queue')

# In RabbitMQ a message can never be sent directly to the queue, it always needs to go through an exchange.
import sys

message = ' '.join(sys.argv[1:]) or "Hello World! %s" % time.time()
channel.basic_publish(exchange='',
                      routing_key='task_queue',
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make message persistent
                      )
                      )
print(" [x] Sent %r" % message)
connection.close()
Consumer code
#_*_coding:utf-8_*_

import pika, time

connection = pika.BlockingConnection(pika.ConnectionParameters(
    'localhost'))
channel = connection.channel()

channel.queue_declare(queue='task_queue')   # declare again in case the producer has not run yet


def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
    time.sleep(20)
    print(" [x] Done")
    print("method.delivery_tag", method.delivery_tag)
    ch.basic_ack(delivery_tag=method.delivery_tag)   # done processing: acknowledge with the delivery tag


channel.basic_consume(callback,
                      queue='task_queue',
                      # no_ack=True   # no_ack=True would skip acknowledgments; the default
                      #               # no_ack=False requires the explicit basic_ack sent above
                      )

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
Now start the producer first, then start three consumers, and have the producer send several messages; you will see the messages handed out to the consumers in turn.
Doing a task can take a few seconds. You may wonder what happens if one of the consumers starts a long task and dies with it only partly done. With our current code, once RabbitMQ delivers a message to the consumer it immediately removes it from memory. In this case, if you kill a worker we will lose the message it was just processing. We'll also lose all the messages that were dispatched to this particular worker but were not yet handled.
But we don't want to lose any tasks. If a worker dies, we'd like the task to be delivered to another worker.
In order to make sure a message is never lost, RabbitMQ supports message acknowledgments. An ack(nowledgement) is sent back from the consumer to tell RabbitMQ that a particular message had been received, processed and that RabbitMQ is free to delete it.
If a consumer dies (its channel is closed, connection is closed, or TCP connection is lost) without sending an ack, RabbitMQ will understand that a message wasn't processed fully and will re-queue it. If there are other consumers online at the same time, it will then quickly redeliver it to another consumer. That way you can be sure that no message is lost, even if the workers occasionally die.
There aren't any message timeouts; RabbitMQ will redeliver the message when the consumer dies. It's fine even if processing a message takes a very, very long time.
Message acknowledgments are turned on by default. In previous examples we explicitly turned them off via the no_ack=True flag. It's time to remove this flag and send a proper acknowledgment from the worker, once we're done with a task.
def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
    time.sleep(body.count(b'.'))
    print(" [x] Done")
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(callback,
                      queue='hello')
Using this code we can be sure that even if you kill a worker using CTRL+C while it was processing a message, nothing will be lost. Soon after the worker dies all unacknowledged messages will be redelivered.
We have learned how to make sure that even if the consumer dies, the task isn't lost (by default; to disable this behaviour, pass no_ack=True). But our tasks will still be lost if the RabbitMQ server stops.
When RabbitMQ quits or crashes it will forget the queues and messages unless you tell it not to. Two things are required to make sure that messages aren't lost: we need to mark both the queue and messages as durable.
First, we need to make sure that RabbitMQ will never lose our queue. In order to do so, we need to declare it as durable:
channel.queue_declare(queue='hello', durable=True)
Although this command is correct by itself, it won't work in our setup. That's because we've already defined a queue called hello which is not durable. RabbitMQ doesn't allow you to redefine an existing queue with different parameters and will return an error to any program that tries to do that. But there is a quick workaround - let's declare a queue with a different name, for example task_queue:
channel.queue_declare(queue='task_queue', durable=True)
This queue_declare change needs to be applied to both the producer and consumer code.
At that point we're sure that the task_queue queue won't be lost even if RabbitMQ restarts. Now we need to mark our messages as persistent - by supplying a delivery_mode property with a value 2.
channel.basic_publish(exchange='',
                      routing_key="task_queue",
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make message persistent
                      ))
If RabbitMQ simply dispatched messages to the consumers in order, without considering their load, a consumer on a weak machine could easily pile up messages it cannot keep up with while a well-provisioned consumer stays mostly idle. To solve this, set prefetch=1 on each consumer, which tells RabbitMQ not to send this consumer a new message until it has finished processing the current one.
channel.basic_qos(prefetch_count=1)
Complete producer code with message persistence + fair dispatch
#!/usr/bin/env python
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()

# durable queue
channel.queue_declare(queue='task_queue', durable=True)

message = ' '.join(sys.argv[1:]) or "Hello World!"
# persistent message
channel.basic_publish(exchange='',
                      routing_key='task_queue',
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make message persistent
                      ))
print(" [x] Sent %r" % message)
connection.close()
Consumer side
#!/usr/bin/env python
import pika
import time

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()

channel.queue_declare(queue='task_queue', durable=True)
print(' [*] Waiting for messages. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
    time.sleep(body.count(b'.'))
    print(" [x] Done")
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)
channel.basic_consume(callback,
                      queue='task_queue')

channel.start_consuming()
Example:
rabbit.py (send a message)
import pika

credentials = pika.PlainCredentials('nulige', '123456')
# connection = pika.BlockingConnection(pika.ConnectionParameters(host=url_1,
#                                      credentials=credentials, ssl=ssl, port=port))
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='192.168.1.118', credentials=credentials))

channel = connection.channel()

# declare the queue
channel.queue_declare(queue='nulige', durable=True)

# In RabbitMQ a message can never be sent directly to the queue, it always needs to go through an exchange.
channel.basic_publish(exchange='',
                      routing_key='nulige',   # send msg to this queue
                      body='Hello World!23',
                      properties=pika.BasicProperties(
                          delivery_mode=2,    # make message persistent
                      )
                      )


print(" [x] Sent 'Hello World!2'")
connection.close()
rabbit_recv.py (receive messages)
import pika
import time

credentials = pika.PlainCredentials('nulige', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='192.168.1.118', credentials=credentials))

channel = connection.channel()
# You may ask why we declare the queue again - we have already declared it in our previous code.
# We could avoid that if we were sure that the queue already exists. For example if send.py program
# was run before. But we're not yet sure which program to run first. In such cases it's a good
# practice to repeat declaring the queue in both programs.
channel.queue_declare(queue='nulige', durable=True)


def callback(ch, method, properties, body):
    print(ch, method, properties)
    print(" [x] Received %r" % body)
    time.sleep(1)
    ch.basic_ack(delivery_tag=method.delivery_tag)   # acknowledge, since no_ack is left at its default (False)


channel.basic_qos(prefetch_count=1)   # fair dispatch: at most one unacknowledged message at a time
channel.basic_consume(callback,
                      queue='nulige',
                      # no_ack=True
                      )
print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
Execution result:
The previous examples were all essentially one-to-one send and receive: a message could only be sent to one specified queue. Sometimes, however, you want a message to be received by every queue, like a broadcast; that is what exchanges are for.
An exchange is a very simple thing. On one side it receives messages from producers and the other side it pushes them to queues. The exchange must know exactly what to do with a message it receives. Should it be appended to a particular queue? Should it be appended to many queues? Or should it get discarded. The rules for that are defined by the exchange type.
Translated:
An exchange is a very simple thing: on one side it receives messages from producers, and on the other it pushes them to queues. The exchange must know exactly what to do with each message it receives - deliver it to one particular queue, to many queues, or discard it. Those rules are defined by the exchange type.
The exchange's job is to forward messages, i.e. deliver them to subscribers.
An exchange is given a type when it is declared, and the type determines which queues qualify to receive a message. (There are four types in total.)
1. fanout: every queue bound to this exchange receives the message (broadcast to everyone)
2. direct: only the queue(s) selected by the routing key on this exchange receive the message (send to specific queues)
3. topic: every queue whose binding key matches the routing key (which may be a pattern here) receives the message (send to subscribers of a topic)
Pattern syntax: # matches zero or more words, * matches exactly one word (words are separated by dots) - see the sketch after this list.
Example: #.a matches a.a, aa.a, aaa.a, and so on
*.a matches a.a, b.a, c.a, and so on
Note: binding with routing key # on a topic exchange is equivalent to using fanout
4. headers: the message headers decide which queues receive the message (routing by message headers rather than routing key)
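To make the wildcard rules concrete, here is a small illustrative helper (plain Python, not part of pika or RabbitMQ) that approximates how a topic binding key is matched against a routing key, assuming dot-separated words:

import re

def topic_matches(binding_key, routing_key):
    """Approximate AMQP topic matching: '*' = exactly one word, '#' = zero or more words."""
    parts = []
    for word in binding_key.split('.'):
        if word == '*':
            parts.append(r'[^.]+')       # exactly one dot-separated word
        elif word == '#':
            parts.append(r'.*')          # any number of words, dots included
        else:
            parts.append(re.escape(word))
    pattern = '^' + r'\.'.join(parts) + '$'
    # let '#' also match "no word at all", e.g. binding 'kern.#' matches 'kern'
    pattern = pattern.replace(r'\..*', r'(\..*)?').replace(r'.*\.', r'(.*\.)?')
    return re.match(pattern, routing_key) is not None

print(topic_matches('kern.*', 'kern.critical'))       # True
print(topic_matches('*.critical', 'kern.critical'))   # True
print(topic_matches('*.critical', 'a.b.critical'))    # False: '*' is exactly one word
print(topic_matches('#', 'anything.at.all'))          # True: '#' alone behaves like fanout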
Use cases:
For example: live video streaming
For example: Sina Weibo
A celebrity may have tens of millions of subscribers, and the fans want to receive the posts he publishes (here: messages are delivered to subscribers who are online at the time; users who are offline do not receive them).
rabbit_fanout_send.py (sender)
import pika
import sys

credentials = pika.PlainCredentials('nulige', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='192.168.1.118', credentials=credentials))

channel = connection.channel()
channel.exchange_declare(exchange='logs', type='fanout')   # fanout: broadcast to every bound queue

# if no argument is given, send "info: Hello World!"
message = ' '.join(sys.argv[1:]) or "info: Hello World!"


channel.basic_publish(exchange='logs',
                      routing_key='',
                      body=message)

print(" [x] Sent %r" % message)
connection.close()
rabbit_fanout_recv.py (receiver)
import pika

credentials = pika.PlainCredentials('nulige', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='192.168.1.118', credentials=credentials))

channel = connection.channel()

channel.exchange_declare(exchange='logs', type='fanout')   # declare the same exchange type

# messages must still be received through a queue
result = channel.queue_declare(exclusive=True)   # no queue name given: RabbitMQ assigns a random name;
                                                 # exclusive=True deletes the queue once this consumer disconnects

queue_name = result.method.queue

channel.queue_bind(exchange='logs', queue=queue_name)   # bind the randomly named queue to the exchange

print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r" % body)


channel.basic_consume(callback,
                      queue=queue_name,
                      # no_ack=True,
                      )
channel.start_consuming()
Execution result:
RabbitMQ also supports routing by keyword: queues are bound to the exchange with a keyword, the sender publishes to the exchange with a keyword (the routing key), and the exchange uses that keyword to decide which queue(s) the data should be delivered to.
Example:
rabbit_direct_send.py (sender)
import pika
import sys

credentials = pika.PlainCredentials('nulige', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='192.168.1.118', credentials=credentials))

channel = connection.channel()

channel.exchange_declare(exchange='direct_logs', type='direct')   # direct exchange

severity = sys.argv[1] if len(sys.argv) > 1 else 'info'   # severity level: first argument, defaulting to 'info'

message = ' '.join(sys.argv[2:]) or 'Hello World!'        # the message body

channel.basic_publish(exchange='direct_logs',
                      routing_key=severity,   # the keyword, e.g. 'error': only queues bound to this key receive the message
                      body=message)

print(" [x] Sent %r:%r" % (severity, message))
connection.close()
rabbit_direct_recv.py (receiver)
import pika
import sys

credentials = pika.PlainCredentials('nulige', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='192.168.1.118', credentials=credentials))
channel = connection.channel()

channel.exchange_declare(exchange='direct_logs', type='direct')
result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

severities = sys.argv[1:]   # which severities to receive; exit with a usage message if none given
if not severities:
    sys.stderr.write("Usage: %s [info] [warning] [error]\n" % sys.argv[0])   # three supported severities: info, warning, error
    sys.exit(1)

for severity in severities:   # e.g. [error, info, warning]
    channel.queue_bind(exchange='direct_logs',
                       queue=queue_name,
                       routing_key=severity)   # bind the queue once per severity
print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r:%r" % (method.routing_key, body))

channel.basic_consume(callback, queue=queue_name,)
channel.start_consuming()
Execution result:
First decide which severities the receiver subscribes to (one or more of info, warning, error), then have the sender publish with one of those severities followed by the message.

# Receiver
D:\python\day42>python3 rabbit_direct_recv.py info error        # subscribe to info and error
 [*] Waiting for logs. To exit press CTRL+C
 [x] 'error':b'err_hpappend'                                     # message received

D:\python\day42>python3 rabbit_direct_recv.py info warning      # subscribe to info and warning
 [*] Waiting for logs. To exit press CTRL+C
 [x] 'warning':b'nulige'                                         # message received

# Sender:                                     severity  message
D:\python\day42>python3 rabbit_direct_send.py error err_hpappend
 [x] Sent 'error':'err_hpappend'
D:\python\day42>python3 rabbit_direct_send.py warning nulige
 [x] Sent 'warning':'nulige'
Although using the direct exchange improved our system, it still has limitations - it can't do routing based on multiple criteria.
In our logging system we might want to subscribe to not only logs based on severity, but also based on the source which emitted the log. You might know this concept from the syslog unix tool, which routes logs based on both severity (info/warn/crit...) and facility (auth/cron/kern...).
That would give us a lot of flexibility - we may want to listen to just critical errors coming from 'cron' but also all logs from 'kern'.
topic: routing by topic (pattern-matched routing keys)
To receive all the logs run:
python receive_logs_topic.py "#"                    # binding "#" receives every message - effectively a broadcast
To receive all logs from the facility "kern":
python receive_logs_topic.py "kern.*"               # routing keys beginning with "kern."
Or if you want to hear only about "critical" logs:
python receive_logs_topic.py "*.critical"           # routing keys ending with ".critical"
You can create multiple bindings:
python receive_logs_topic.py "kern.*" "*.critical"  # bind both patterns: keys beginning with "kern." as well as keys ending with ".critical"
And to emit a log with a routing key "kern.critical" type:
python emit_log_topic.py "kern.critical" "A critical kernel error"   # publish with routing key "kern.critical" and body "A critical kernel error"
Example:
rabbit_topic_send.py (producer / sender)
import pika
import sys

credentials = pika.PlainCredentials('nulige', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='192.168.1.118', credentials=credentials))

channel = connection.channel()

channel.exchange_declare(exchange='topic_logs', type='topic')   # topic exchange

routing_key = sys.argv[1] if len(sys.argv) > 1 else 'anonymous.info'

message = ' '.join(sys.argv[2:]) or 'Hello World!'   # the message body

channel.basic_publish(exchange='topic_logs',
                      routing_key=routing_key,
                      body=message)
print(" [x] Sent %r:%r" % (routing_key, message))
connection.close()
rabbit_topic_recv.py (consumer / receiver, one-way)
import pika
import sys

credentials = pika.PlainCredentials('nulige', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='192.168.1.118', credentials=credentials))

channel = connection.channel()
channel.exchange_declare(exchange='topic_logs', type='topic')

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

binding_keys = sys.argv[1:]
if not binding_keys:
    sys.stderr.write("Usage: %s [binding_key]...\n" % sys.argv[0])
    sys.exit(1)

for binding_key in binding_keys:
    channel.queue_bind(exchange='topic_logs',
                       queue=queue_name,
                       routing_key=binding_key)

print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r:%r" % (method.routing_key, body))

channel.basic_consume(callback, queue=queue_name)

channel.start_consuming()
Execution result:
# Receiver
D:\python\day42>python3 rabbit_topic_recv.py error
 [*] Waiting for logs. To exit press CTRL+C
 [x] 'error':b'mysql has error'

D:\python\day42>python3 rabbit_topic_recv.py *.warning mysql.*
 [*] Waiting for logs. To exit press CTRL+C
 [x] 'mysql.error':b'mysql has error'

D:\python\day42>python3 rabbit_topic_send.py mysql.info "mysql has error"
 [x] Sent 'mysql.info':'mysql has error'

D:\python\day42>python3 rabbit_topic_recv.py *.error.*
 [*] Waiting for logs. To exit press CTRL+C
 [x] 'mysql.error.':b'mysql has error'

# Sender:                                    routing key  message
D:\python\day42>python3 rabbit_topic_send.py error "mysql has error"
 [x] Sent 'error':'mysql has error'

D:\python\day42>python3 rabbit_topic_send.py mysql.error "mysql has error"
 [x] Sent 'mysql.error':'mysql has error'
 [x] 'mysql.info':b'mysql has error'

D:\python\day42>python3 rabbit_topic_send.py mysql.error. "mysql has error"
 [x] Sent 'mysql.error.':'mysql has error'
To illustrate how an RPC service could be used we're going to create a simple client class. It's going to expose a method named call which sends an RPC request and blocks until the answer is received:
fibonacci_rpc = FibonacciRpcClient()
result = fibonacci_rpc.call(4)
print("fib(4) is %r" % result)
Use case:
Example: implementing a simple RPC service
Code:
rabbit_rpc_send.py (client / sender)
import pika
import uuid

class SSHRpcClient(object):
    def __init__(self):
        credentials = pika.PlainCredentials('nulige', '123456')
        self.connection = pika.BlockingConnection(pika.ConnectionParameters(
            host='192.168.1.118', credentials=credentials))

        self.channel = self.connection.channel()

        result = self.channel.queue_declare(exclusive=True)   # the server must send its result back to this queue
        self.callback_queue = result.method.queue

        self.channel.basic_consume(self.on_response, queue=self.callback_queue)   # consume results from this queue

    def on_response(self, ch, method, props, body):
        if self.corr_id == props.correlation_id:   # match the response to the request by its correlation id
            self.response = body
            print(body)

    # the result comes back on callback_queue
    def call(self, n):
        self.response = None
        self.corr_id = str(uuid.uuid4())   # unique id for this request
        self.channel.basic_publish(exchange='',
                                   routing_key='rpc_queue3',   # the request queue
                                   properties=pika.BasicProperties(
                                       reply_to=self.callback_queue,
                                       correlation_id=self.corr_id,
                                   ),
                                   body=str(n))

        print("start waiting for cmd result ")
        count = 0
        while self.response is None:   # loop until the command result arrives
            print("loop ", count)
            count += 1
            self.connection.process_data_events()   # poll for new events without blocking;
                                                    # if an event arrived, on_response is triggered
        return self.response

ssh_rpc = SSHRpcClient()

print(" [x] sending cmd")
response = ssh_rpc.call("ipconfig")

print(" [.] Got result ")
print(response.decode("gbk"))
rabbit_rpc_recv.py (server / receiver)
import pika
import time
import subprocess

credentials = pika.PlainCredentials('nulige', '123456')
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='192.168.1.118', credentials=credentials))

channel = connection.channel()
channel.queue_declare(queue='rpc_queue3')

def SSHRPCServer(cmd):
    print("recv cmd:", cmd)
    cmd_obj = subprocess.Popen(cmd.decode(), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    result = cmd_obj.stdout.read() or cmd_obj.stderr.read()
    return result

def on_request(ch, method, props, body):
    print(" [.] fib(%s)" % body)
    response = SSHRPCServer(body)

    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,
                     properties=pika.BasicProperties(correlation_id=props.correlation_id),
                     body=response)

channel.basic_consume(on_request, queue='rpc_queue3')
print(" [x] Awaiting RPC requests")
channel.start_consuming()
Execution result:
Start the receiver first, then send a request; the result comes straight back.