Official site: https://gunicorn.org/#docs
Gunicorn 'Green Unicorn' is a Python WSGI HTTP Server for UNIX. It's a pre-fork worker model. The Gunicorn server is broadly compatible with various web frameworks, simply implemented, light on server resources, and fairly speedy.
In short: it is a Python WSGI HTTP server for UNIX built on a pre-fork worker model; it is widely used, high-performance, compatible with most web frameworks, simple to set up, and lightweight.
Architecture
Server Model
Gunicorn is based on the pre-fork model: a central master process manages a collection of worker processes. The master never knows anything about individual clients; all request and response handling is done by the worker processes.
Master
The master process is a simple loop that listens for signals and reacts to them. It manages the set of running workers by listening for signals such as TTIN, TTOU, and CHLD: TTIN and TTOU increase and decrease the number of workers, while CHLD indicates that a child process has terminated, in which case the master automatically restarts the failed worker.
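As a small illustration of the TTIN/TTOU behaviour (not from the original article), the worker count of a running master can be changed by signalling it. The pidfile path below is an assumption taken from the configuration example later in this article:
import os
import signal

# read the master PID from the pidfile (assumes pidfile = "log/gunicorn.pid")
with open("log/gunicorn.pid") as f:
    master_pid = int(f.read().strip())

os.kill(master_pid, signal.SIGTTIN)    # TTIN: ask the master to start one more worker
# os.kill(master_pid, signal.SIGTTOU)  # TTOU: ask the master to retire one worker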
Worker
There are many kinds of workers, including ggevent, geventlet, gtornado, and so on. This article mainly looks at ggevent.
When a ggevent worker starts, it creates several server objects: one server object for each listener (note: there is a group of listeners because Gunicorn can bind to a group of addresses, one listener per address), and each server object runs inside its own gevent pool. The actual waiting for and handling of connections is done inside these server objects.
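For example (an illustration, not from the original article), binding Gunicorn to more than one address creates one listener per address, and each ggevent worker then builds one server object per listener:
gunicorn -b 127.0.0.1:8000 -b [::1]:8000 wsgi_my:app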
pip install gunicorn
Example:
wsgi_my.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from app import create_app
application = app = create_app()
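The app package and its create_app factory are not shown in the original article. A minimal sketch that would reproduce the curl output below might look like this; the package layout, routes, counter, and the 2-second sleep are all assumptions:
app/__init__.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Hypothetical application factory used by wsgi_my.py (not from the original article)
import time

from flask import Flask


def create_app():
    app = Flask(__name__)
    counter = {"n": 0}  # per-process counter, just to mirror the curl output below

    @app.route("/")
    def index():
        return "Index."

    @app.route("/hello")
    def hello():
        counter["n"] += 1
        time.sleep(2)  # assumed ~2 s of blocking work, inferred from the timing test later on
        return "Hello. number: {}".format(counter["n"])

    return app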
gunicorn wsgi_my:app
By default Gunicorn binds to 127.0.0.1:8000, so the server cannot be reached from other IP addresses.
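To make it reachable from other machines, bind an explicit address, for example:
gunicorn -b 0.0.0.0:8000 wsgi_my:app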
Test results:
[root@soft ~]# curl http://localhost:8000
Index.
[root@soft ~]# curl http://localhost:8000/hello
Hello. number: 1
[root@soft ~]# curl http://localhost:8000/hello
Hello. number: 2
doc: https://docs.gunicorn.org/en/stable/run.html
gunicorn
The simplest way to run it: gunicorn code:application (module name : WSGI callable)
Gunicorn reads its configuration from three different places:
Framework settings (usually only relevant for Paster applications)
A configuration file (a Python file): settings in the configuration file override the framework settings.
The command line
Framework settings only matter for Paster (a web framework) and are not discussed here; command-line configuration was shown in the section above.
-c CONFIG, --config=CONFIG - Specify a config file in the form $(PATH), file:$(PATH), or python:$(MODULE_NAME).
-b BIND, --bind=BIND - Specify a server socket to bind. Server sockets can be any of $(HOST), $(HOST):$(PORT), fd://$(FD), or unix:$(PATH).
-w WORKERS, --workers=WORKERS - The number of worker processes. This number should generally be between 2-4 workers per core in the server. Check the FAQ for ideas on tuning this parameter. A common recommendation is (2 * CPU cores) + 1.
-k WORKERCLASS, --worker-class=WORKERCLASS - The type of worker process to run. You’ll definitely want to read the production page for the implications of this parameter. You can set this to $(NAME) where $(NAME) is one of sync, eventlet, gevent, tornado, gthread, gaiohttp (deprecated). sync is the default. See the worker_class documentation for more information.
Worker type: several are available; sync is the default, and gevent or tornado are common choices (a combined example follows this list).
--backlog INT - The maximum number of pending connections.
--log-level LEVEL - One of debug, info, warning, error, critical.
--access-logfile FILE - The access log file; '-' writes to standard output.
--error-logfile FILE - The error log file.
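As a sketch combining these flags (module name and port taken from the examples in this article; -w 5 would correspond to (2 * 2) + 1 on a 2-core machine, and the gevent worker additionally requires the gevent package, e.g. pip install gevent):
gunicorn -w 5 -k gevent -b 0.0.0.0:9000 --backlog 512 --log-level info --access-logfile - wsgi_my:app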
Typical startup command:
gunicorn -w 1 -k gevent wsgi_my -b 0.0.0.0:9000
In production, the parameters are usually given through a configuration file. It must be a Python file and simply contains the same parameters as the command line: to set a parameter, assign it a value in the .py file.
Example configuration file (gunicorn_conf.py):
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Gunicorn configuration file
import multiprocessing

debug = True  # kept from the original; the `debug` setting may be rejected by newer Gunicorn releases
loglevel = 'debug'
bind = "0.0.0.0:9000"
# maximum number of pending connections
backlog = 512
# logging (paths are relative to the working directory)
pidfile = "log/gunicorn.pid"
accesslog = "log/access.log"
errorlog = "log/debug.log"
# daemon = True
daemon = False  # run in the foreground; set to True to daemonize
# number of worker processes
workers = multiprocessing.cpu_count()
worker_class = 'gevent'
# a worker silent for more than this many seconds is killed and restarted; 30 is typical
timeout = 30
x_forwarded_for_header = 'X-FORWARDED-FOR'  # kept from the original; may be rejected by newer Gunicorn releases
Start the server:
gunicorn -c gunicorn_conf.py wsgi_my:app
Note: do not add the file extension when specifying the app (wsgi_my, not wsgi_my.py), otherwise Gunicorn will not find it.
The log directory and log files need to be created manually.
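Assuming the relative paths used in the configuration above and that Gunicorn is started from the project directory, the following commands create them:
mkdir -p log
touch log/access.log log/debug.log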
Test environment setup:
Flask's built-in development server does not handle concurrent requests (nor does uWSGI in a default single-process, single-threaded setup);
Running the app directly: python run.py
Test results:
Concurrent requests: 5  Elapsed time: 10.08759880065918
Using gevent mode:
[root@soft website]# gunicorn -w 1 -k gevent wsgi_my -b 0.0.0.0:9000
Concurrent requests: 5  Elapsed time: 2.1450159549713135
Interpretation of the results:
In blocking mode requests cannot be handled concurrently, so the total time is roughly 2 s * 5 requests, i.e. about 10 s;
In coroutine (gevent) mode the requests are handled concurrently, so the total time is roughly 2 s * 1, i.e. about 2 s.
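The 2-second figure per request is not shown in the original code; it matches the time.sleep(2) assumed in the hypothetical /hello view sketched earlier. The measurements above were produced with the following Twisted script: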
import time

from twisted.internet import defer, reactor
from twisted.web.client import getPage

# URL under test
url = b"http://192.168.199.129:9000/hello"


def time_count(*args, **kw):
    # called once all deferred requests have fired
    print('Test finished.')
    t = time.time()
    print("Concurrent requests: {}  Elapsed time: {}".format(kw['req_nums'], t - kw['t_start']))


t_start = time.time()
# fire 5 requests "at the same time" and wait for all of them
d = defer.DeferredList([getPage(url) for x in range(5)])
# d = getPage(url)   # single-request variant
d.addCallbacks(time_count, lambda x: print('error'),
               callbackKeywords={'req_nums': 5, 't_start': t_start})

# stop the reactor after 20 seconds regardless of the outcome
reactor.callLater(20, reactor.stop)
reactor.run()
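To run the test, save the script under any file name and execute it with Python while the server is up. Note that twisted.web.client.getPage is deprecated in newer Twisted releases in favour of twisted.web.client.Agent (or the treq package); it is kept here to stay close to the original script.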
This mainly compares the performance of Flask + Gunicorn against the Werkzeug development server.
E:\Apache24\bin>ab -n 3000 -c 50 http://192.168.199.129:9000/req_test
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Benchmarking 192.168.199.129 (be patient)
Completed 3000 requests
Finished 3000 requests
Server Software: Werkzeug/0.14.1
Server Hostname: 192.168.199.129
Server Port: 9000
Document Path: /req_test
Document Length: 27 bytes
Concurrency Level: 50
Time taken for tests: 27.518 seconds
Complete requests: 3000
Failed requests: 2951
(Connect: 0, Receive: 0, Length: 2951, Exceptions: 0)
Total transferred: 548002 bytes
HTML transferred: 86002 bytes
Requests per second: 109.02 [#/sec] (mean)
Time per request: 458.632 [ms] (mean)
Time per request: 9.173 [ms] (mean, across all concurrent requests)
Transfer rate: 19.45 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 3.0 1 73
Processing: 35 454 72.2 429 683
Waiting: 4 453 72.1 427 680
Total: 35 455 72.4 430 684
Percentage of the requests served within a certain time (ms)
50% 430
66% 457
75% 499
80% 518
90% 569
95% 592
98% 613
99% 628
100% 684 (longest request)
gunicorn -w 1 -k gevent wsgi_my -b 0.0.0.0:9000
E:\Apache24\bin>ab -n 3000 -c 50 http://192.168.199.129:9000/req_test
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Benchmarking 192.168.199.129 (be patient)
Finished 3000 requests
Concurrency Level: 50
Time taken for tests: 10.541 seconds
Complete requests: 3000
Failed requests: 2991
(Connect: 0, Receive: 0, Length: 2991, Exceptions: 0)
Total transferred: 565893 bytes
HTML transferred: 85893 bytes
Requests per second: 284.60 [#/sec] (mean)
Time per request: 175.688 [ms] (mean)
Time per request: 3.514 [ms] (mean, across all concurrent requests)
Transfer rate: 52.43 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 0.4 1 3
Processing: 32 174 16.6 171 277
Waiting: 2 173 16.7 170 277
Total: 32 174 16.6 172 278
Percentage of the requests served within a certain time (ms)
50% 172
66% 174
75% 176
80% 177
90% 184
95% 190
98% 217
99% 276
100% 278 (longest request)
On every metric, total test time and per-request latency alike, gevent mode is far better than the synchronous blocking mode. (The "Failed requests ... (Length: ...)" lines are not real errors: ab flags any response whose body length differs from the first one, which happens here because the response body varies between requests.) The exact improvement depends on many factors; the environment here is a single virtual machine, so the absolute numbers are not meaningful for real deployments.
Common commands
gunicorn -w 1 -k gevent wsgi_my -b 0.0.0.0:9000
gunicorn -c gunicorn_conf.py wsgi_my