<uwsgi id="uwsgibk">
    <stats>127.0.0.1:9090</stats>
    <socket>127.0.0.1:3030</socket>
    <file>./server.py</file>
    <enable-threads/>
    <post-buffering/>
    <memory-report/>
</uwsgi>
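The post never shows the contents of `server.py`. A minimal sketch of what it presumably looks like (assumed "hello world" content; uWSGI loads the module and, by default, looks for a callable named `application`):

```python
def application(environ, start_response):
    """Simplest possible WSGI app: answer every request with plain text."""
    body = b"Hello from uWSGI worker\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

With the config above saved as, say, `uwsgi.xml`, the instance can be launched with something like `uwsgi --xml uwsgi.xml`.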
The http and http-socket options are entirely different beasts. The first one spawns an additional process forwarding requests to a series of workers (think of it as a form of shield, at the same level as Apache or Nginx), while the second one sets workers to natively speak the HTTP protocol. TL;DR: if you plan to expose uWSGI directly to the public, use --http; if you want to proxy it behind a webserver speaking HTTP with its backends, use --http-socket. (See also: Native HTTP support in the uWSGI docs.)
<uwsgi id="httpbk">
    <stats>127.0.0.1:9090</stats>
    <http-socket>127.0.0.1:3030</http-socket>
    <file>./server.py</file>
    <enable-threads/>
    <post-buffering/>
    <memory-report/>
</uwsgi>
<uwsgi id="http">
    <stats>127.0.0.1:9090</stats>
    <http>:80</http>
    <file>./server.py</file>
    <enable-threads/>
    <post-buffering/>
    <memory-report/>
</uwsgi>
http://uwsgi-docs.readthedocs.org/en/latest/Nginx.html
http://uwsgi-docs.readthedocs.org/en/latest/Fastrouter.html
http://stackoverflow.com/questions/21518533/putting-a-uwsgi-fast-router-in-front-of-uwsgi-servers-running-in-docker-containe
http://stackoverflow.com/questions/26499644/how-to-use-the-uwsgi-fastrouter-whith-nginx
Configuration of Nginx
location /test {
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:3030;
}
Configuration of the FastRouter
<uwsgi id="fastrouter">
    <fastrouter>127.0.0.1:3030</fastrouter>
    <fastrouter-subscription-server>127.0.0.1:3131</fastrouter-subscription-server>
    <enable-threads/>
    <master/>
    <fastrouter-stats>127.0.0.1:9595</fastrouter-stats>
</uwsgi>
Configuration of the subscribed instances
<uwsgi id="subserver1">
    <stats>127.0.0.1:9393</stats>
    <processes>4</processes>
    <enable-threads/>
    <memory-report/>
    <subscribe-to>127.0.0.1:3131:[server_ip] or [domain]</subscribe-to>
    <socket>127.0.0.1:3232</socket>
    <file>./server.py</file>
    <master/>
    <weight>8</weight>
</uwsgi>
<uwsgi id="subserver2">
    <stats>127.0.0.1:9494</stats>
    <processes>4</processes>
    <enable-threads/>
    <memory-report/>
    <subscribe-to>127.0.0.1:3131:[server_ip] or [domain]</subscribe-to>
    <socket>127.0.0.1:3333</socket>
    <file>./server.py</file>
    <master/>
    <weight>2</weight>
</uwsgi>
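The `<weight>` values (8 and 2) bias how the FastRouter spreads traffic across the two subscribed instances. The real balancing logic lives inside uWSGI's C core; the following is only an illustrative Python sketch of what an 8:2 weighted rotation means:

```python
import itertools

def weighted_cycle(backends):
    """Yield backend addresses in proportion to their weights.
    Illustrative only: uWSGI's fastrouter implements its own
    balancing internally, this just shows the 8:2 intent."""
    expanded = [addr for addr, weight in backends for _ in range(weight)]
    return itertools.cycle(expanded)

backends = [("127.0.0.1:3232", 8), ("127.0.0.1:3333", 2)]
picker = weighted_cycle(backends)
first_ten = [next(picker) for _ in range(10)]
# out of every 10 requests, 8 land on subserver1 and 2 on subserver2
```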
If we HTTP GET /test on [server_ip] or [domain], the request is routed as follows:
Nginx >> FastRouter (listening on port 3030, with instances subscribed via port 3131) >> subserver1 (port 3232) or subserver2 (port 3333)
and the protocol used throughout is uwsgi.
The FastRouter also exposes its own stats (fastrouter-stats).
http://uwsgi-docs.readthedocs.org/en/latest/HTTP.html
HTTP Router
<uwsgi id="httprouter">
    <enable-threads/>
    <master/>
    <http>:8080</http>
    <http-stats>127.0.0.1:9090</http-stats>
    <http-to>127.0.0.1:8181</http-to>
    <http-to>127.0.0.1:8282</http-to>
</uwsgi>
sub-server1
<uwsgi id="httpserver1">
    <stats>127.0.0.1:9191</stats>
    <socket>127.0.0.1:8181</socket>
    <memory-report/>
    <file>./server.py</file>
    <enable-threads/>
    <post-buffering/>
</uwsgi>
sub-server2
<uwsgi id="httpserver2">
    <stats>127.0.0.1:9292</stats>
    <memory-report/>
    <file>./server.py</file>
    <socket>127.0.0.1:8282</socket>
    <enable-threads/>
    <post-buffering/>
</uwsgi>
Just like the FastRouter, the HTTP router can forward (http-to) or route requests to other sub-servers; the only difference between the routers is the protocol they speak.
uWSGI ships one router per protocol: the fastrouter (uwsgi protocol), the http/https router, the rawrouter, and the sslrouter.
Naturally, all of these routers expose stats in the same style.
http://uwsgi-docs.readthedocs.org/en/latest/Emperor.html
Emperor
<uwsgi id="emperor">
    <emperor>./vassals</emperor>
    <emperor-stats-server>127.0.0.1:9090</emperor-stats-server>
</uwsgi>
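The Emperor watches the `./vassals` directory and spawns one uWSGI instance per config file it finds there. A toy sketch of that scan step (the real Emperor is built into uWSGI and also tracks file modification times to reload vassals; the extension list here is an assumption covering the common config formats):

```python
import os

# Config formats the Emperor commonly treats as vassal files (assumed list)
VASSAL_EXTENSIONS = (".xml", ".ini", ".yml", ".yaml", ".json")

def find_vassals(directory):
    """Return the config files in `directory` that would become vassals."""
    return sorted(
        name for name in os.listdir(directory)
        if name.endswith(VASSAL_EXTENSIONS)
    )
```

Dropping a new `vassal3.xml` into `./vassals` is enough to bring up a new instance; deleting it shuts that instance down.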
Vassal1
<vassal1>
    <uwsgi id="vassal1">
        <http>:8080</http>
        <stats>127.0.0.1:9191</stats>
        <memory-report/>
        <enable-threads/>
        <post-buffering/>
        <file>./server.py</file>
        <chdir>..</chdir>
    </uwsgi>
</vassal1>
Vassal2
<vassal2>
    <uwsgi id="vassal2">
        <http>:8181</http>
        <stats>127.0.0.1:9292</stats>
        <memory-report/>
        <enable-threads/>
        <post-buffering/>
        <file>./server.py</file>
        <chdir>..</chdir>
    </uwsgi>
</vassal2>
The Emperor also exposes its own stats (emperor-stats-server).
<uwsgi id="vassal1">
    <socket>127.0.0.1:3030</socket>
    <http>:8080</http>
    <stats>127.0.0.1:9191</stats>
    <memory-report/>
    <enable-threads/>
    <post-buffering/>
    <manage-script-name/>
    <chdir>..</chdir>
    <mount>/pic=server.py</mount>
    <mount>/test=fuck.py</mount>
    <workers>2</workers>
</uwsgi>
Note:
http://stackoverflow.com/questions/19475651/how-to-mount-django-app-with-uwsgi
The <manage-script-name/> option is critical here, yet the official help does not mention it at all. Infuriating!
With this configuration, uWSGI dispatches each request to a different app depending on the request path.
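Per the WSGI convention, what `<manage-script-name/>` effectively does is move the matched mount prefix out of PATH_INFO and into SCRIPT_NAME before the mounted app is called, so the app sees paths rooted at its mount point. A sketch of that rewrite (assumed behavior; the real work happens inside uWSGI, and `split_mount` is a hypothetical helper for illustration):

```python
def split_mount(environ, mount_point):
    """Mimic manage-script-name: shift the mount prefix from PATH_INFO
    into SCRIPT_NAME so the mounted app sees rooted paths."""
    path = environ.get("PATH_INFO", "")
    if path == mount_point or path.startswith(mount_point + "/"):
        environ["SCRIPT_NAME"] = environ.get("SCRIPT_NAME", "") + mount_point
        environ["PATH_INFO"] = path[len(mount_point):]
    return environ

env = split_mount({"PATH_INFO": "/pic/cat.png"}, "/pic")
# env["SCRIPT_NAME"] == "/pic", env["PATH_INFO"] == "/cat.png"
```

Without `<manage-script-name/>`, the app mounted at /pic would still see "/pic/cat.png" in PATH_INFO, which breaks URL routing in frameworks that build links from SCRIPT_NAME.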
As a backend, uWSGI cooperates better with Nginx than with Apache, so Nginx is the recommended frontend.
If you deploy Nginx+uWSGI or Apache+uWSGI over an INET socket on CentOS and clients hit 50x Bad Gateway errors, I strongly suggest trying the same setup on Ubuntu: the identical configuration that fails on CentOS may well work there.
Update 2014-11-03: the cause of this CentOS problem is that RedHat-family distributions enable SELinux by default, and certain SELinux policies trigger the failure (whether that counts as a bug is debatable). Disabling SELinux resolves it; for how to do that, see http://www.cnblogs.com/lightnear/archive/2012/10/06/2713090.html. Debian-family distributions do not enable SELinux by default, so they are unaffected. I only stumbled onto SELinux last weekend while reading an article comparing Linux distribution performance, tried disabling it today, and sure enough it worked. If you run into other similarly weird problems, disabling SELinux is worth a try.
If, after deploying Apache+uWSGI, clients receive a file download instead of the page, it is probably the result of Apache compressing the response; turning off Apache's GZIP compression module fixes it. Nginx does not have this problem. One more reason it is the better choice, no?
Frankly, uWSGI's documentation is a mess. Material on this topic in Chinese is scarce, so I am posting my notes here for the convenience of Chinese-speaking users. For niche tools like this, Chinese-language resources still lag far behind what is available abroad; fellow developers, we have work to do.