Tuning NGINX for Performance

Original article: http://nginx.com/blog/tuning-nginx/
NGINX is well known as a high-performance load balancer, cache, and web server, powering over 40% of the busiest websites in the world. Most of the default NGINX and Linux settings work well for most use cases, but it can be necessary to do some tuning to achieve optimal performance. This blog post will discuss some of the NGINX and Linux settings to consider when tuning a system. There are many settings available, but for this post we will cover the few settings recommended for most users to consider adjusting. The settings not covered in this post are ones that should only be considered by those with a deep understanding of NGINX and Linux, or after a recommendation by the NGINX support or professional services teams. NGINX professional services has worked with some of the world's busiest websites to tune NGINX for the maximum level of performance, and is available to work with any customer who needs to get the most out of their system.



Introduction


A basic understanding of the NGINX architecture and configuration concepts is assumed.  This post will not attempt to duplicate the NGINX documentation, but will provide an overview of the various options with links to the relevant documentation.


A good rule to follow when doing tuning is to change one setting at a time and if it does not result in a positive change in performance, then to set it back to the default value.


We will start with a discussion of Linux tuning since some of these values can impact some of the values you will use for your NGINX configuration.


Linux Configuration


Modern Linux kernels (2.6+) do a good job of sizing the various settings, but there are some settings that you may want to change. If the operating system settings are too low, then you will see errors in the kernel log indicating that you should adjust them. There are many possible Linux settings, but we will cover those settings that are most likely in need of tuning for normal workloads. Please refer to the Linux documentation for details on adjusting these settings.


The Backlog Queue


The following settings relate directly to connections and how they are queued. If you have a high rate of incoming connections and you are seeing uneven levels of performance, for example some connections appear to be stalling, then tuning these settings may help.


net.core.somaxconn: This sets the size of the queue for connections waiting for NGINX to accept them. Since NGINX accepts connections very quickly, this value does not usually need to be very large, but the default can be very low, so increasing it can be a good idea if you have a high-traffic website. If the setting is too low then you should see error messages in the kernel log, and should increase this value until the errors stop. Note: if you set this to a value greater than 512, you should change your NGINX configuration using the backlog parameter of the listen directive to match this number.

(Translator's note: the backlog parameter of the listen directive sets the maximum length of the queue of pending connections. By default, backlog is set to -1 on FreeBSD and Mac OS X, and to 511 on other platforms.)

net.core.netdev_max_backlog: This sets the rate at which packets can be buffered by the network card before being handed off to the CPU. For machines with a high amount of bandwidth this value may need to be increased. Check the documentation for your network card for advice on this setting, or check the kernel log for errors relating to it.

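As a concrete sketch, both backlog settings might be raised via /etc/sysctl.conf; the values below are illustrative starting points, not recommendations from the article:

```conf
# /etc/sysctl.conf -- illustrative values; watch the kernel log and adjust
# Queue of connections waiting for NGINX to accept() them
net.core.somaxconn = 1024
# Packets buffered by the network card before being handed to the CPU
net.core.netdev_max_backlog = 4096
```

Run `sysctl -p` to apply. Since somaxconn here exceeds 512, the listen directive in nginx.conf would also need a matching backlog=1024 parameter.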

File Descriptors


File descriptors are operating system resources used to handle things such as connections and open files.  NGINX can use up to two file descriptors per connection, for example if it is proxying, then it can have one for the client connection and another for the connection to the proxied server, although if HTTP keepalives are used this ratio will be much lower.  For a system that will see a large number of connections, these settings may need to be adjusted:


sys.fs.file_max: This is the system-wide limit for file descriptors.


nofile: This is the user file descriptor limit and is set in the /etc/security/limits.conf file.

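A hedged sketch of raising both limits (the sysctl key the kernel actually exposes is fs.file-max; the values and the nginx user name are illustrative):

```conf
# /etc/sysctl.conf -- system-wide file descriptor limit
fs.file-max = 200000
```

And for the user the worker processes run as, assumed here to be nginx:

```conf
# /etc/security/limits.conf -- per-user open-file limits
nginx soft nofile 65536
nginx hard nofile 65536
```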

Ephemeral ports


When NGINX is acting as a proxy, each connection to an upstream server uses a temporary, or ephemeral port.


net.ipv4.ip_local_port_range: This specifies the starting and ending port values to use. If you see that you are running out of ports, you can increase this range. A common setting is to use ports 1024 to 65000.


net.ipv4.tcp_fin_timeout: This specifies how long after a port is no longer being used it can be used again for another connection. This usually defaults to 60 seconds, but can usually be safely reduced to 30 or even 15 seconds.

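Put together as an /etc/sysctl.conf fragment, using the values mentioned in the text:

```conf
# /etc/sysctl.conf -- ephemeral port settings discussed above
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_fin_timeout = 30
```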

NGINX Configuration


The following are some NGINX directives that can impact performance.  As stated above, we will only be discussing those directives that we recommend most users look at adjusting.  Any directive not mentioned here is one that we recommend not to be changed without direction from the Nginx team.


Worker Processes


NGINX can run multiple worker processes, each capable of processing a large number of connections. You can control how many worker processes are run and how connections are handled with the following directives:


worker_processes: This controls the number of worker processes that NGINX will run. In most cases, running one worker process per CPU core works well, which can be achieved by setting this directive to "auto". There are times when you may want to increase this number, such as when the worker processes have to do a lot of disk I/O. The default is 1.


worker_connections: This is the maximum number of connections that can be processed at one time by each worker process.  The default is 512, but most systems can handle a larger number.   What this number should be set to will depend on the size of the server and the nature of the traffic and can be discovered through testing.

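A minimal nginx.conf sketch of these two directives (the connection count is an example to be validated by testing, not a recommendation):

```nginx
# nginx.conf
worker_processes auto;    # one worker per CPU core

events {
    # Maximum simultaneous connections per worker (default is 512)
    worker_connections 1024;
}
```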

Keepalives


Keepalive connections can have a major impact on performance by reducing the CPU and network overhead needed for opening and closing connections.  NGINX terminates all client connections and has separate and independent connections to the upstream servers.  NGINX supports keepalives for the client and upstream servers.  The following directives deal with client keepalives:


keepalive_requests:  This is the number of requests a client can make over a single keepalive connection.  The default is 100, but can be set to a much higher value and this can be especially useful for testing where the load generating tool is sending many requests from a single client.


keepalive_timeout:  How long a keepalive connection will remain open once it becomes idle.

The following directives deal with upstream keepalives:


keepalive: This specifies the number of idle keepalive connections to an upstream server that remain open for each worker process. There is no default value for this directive.


To enable keepalive connections to the upstream you must add the following directives:

proxy_http_version 1.1;
proxy_set_header Connection "";

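Putting the client and upstream keepalive directives together, a sketch might look like this (the upstream name, backend address, and all values are hypothetical examples):

```nginx
http {
    keepalive_requests 1000;   # requests allowed per client keepalive connection
    keepalive_timeout  75s;    # how long an idle client connection stays open

    upstream backend {
        server 192.168.0.10:8080;
        keepalive 16;          # idle connections to this upstream kept per worker
    }

    server {
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;          # upstream keepalives need HTTP/1.1
            proxy_set_header Connection "";  # clear the Connection header
        }
    }
}
```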

Access Logging


Logging each request takes both CPU and I/O cycles, and one way to reduce this impact is to enable access log buffering. This will cause NGINX to buffer a series of log entries and write them to the file at one time rather than as separate write operations. Access log buffering is enabled by specifying the "buffer=size" option of the access_log directive, which sets the size of the buffer to be used. You can also use the "flush=time" option to tell NGINX to write the entries in the buffer after this amount of time. With these two options defined, NGINX will write entries to the log file when the next log entry will not fit into the buffer, or if the entries in the buffer are older than the time specified for the flush parameter. Log entries will also be written when a worker process is re-opening log files or is shutting down. It is also possible to disable access logging completely.

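For example, buffered logging, plus the full-disable alternative, might look like this (the path and both values are illustrative):

```nginx
# Write buffered entries when the 32k buffer fills or every 5 seconds,
# whichever comes first
access_log /var/log/nginx/access.log combined buffer=32k flush=5s;

# Alternatively, disable access logging completely:
# access_log off;
```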

Sendfile

Sendfile is an operating system feature that can be enabled on NGINX.  It can provide for faster tcp data transfers by doing in-kernel copying of data from one file descriptor to another, often achieving zero-copy. NGINX can use it to write cached or on-disk content down a socket, without any context switching to user space, making it extremely fast and using less CPU overhead. Because the data never touches user space, it’s not possible to insert filters that need to access the data into the processing chain, so you cannot use any of the NGINX filters that change the content, e.g. the gzip filter.  It is disabled by default.

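Enabling it is a single directive; a minimal sketch:

```nginx
http {
    # Off by default; serves file data in-kernel without copying to user space.
    # Not compatible with filters that change the content, such as gzip.
    sendfile on;
}
```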

Limits

NGINX and NGINX Plus allow you to set various limits that can be used to help control the resources consumed by clients and therefore impact the performance of your system and also affect user experience and security.  The following are some of these directives:


limit_conn/limit_conn_zone:  These directives can be used to limit the number of connections NGINX will allow, for example from a single client IP address.  This can help prevent individual clients from opening too many connections and consuming too many resources.


limit_rate: This will limit the amount of bandwidth allowed for a client on a single connection. This can prevent the system from being overloaded by certain clients and can help to ensure that all clients receive good quality of service.


limit_req/limit_req_zone: These directives can be used to limit the rate of requests being processed by NGINX.  As with limit_rate this can help prevent the system from being overloaded by certain clients and can help to ensure that all clients receive good quality of service.  They can also be used to improve security, especially for login pages, by limiting the requests rate so that it is adequate for a human user but one that will slow programs trying to access your application.


max_conns: This is set for a server in an upstream group and is the maximum number of simultaneous connections allowed to that server.  This can help prevent the upstream servers from being overloaded.  The default is zero, meaning that there is no limit.


queue: If max_conns is set for any upstream servers, then the queue directive governs what happens when a request cannot be processed because there are no available servers in the upstream group and some of those servers have reached the max_conns limit.  This directive can be set to the number of requests to queue and for how long.  If this directive is not set, then no queueing will occur.

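A combined sketch of these limit directives (the zone names, sizes, rates, and backend address are made-up examples):

```nginx
http {
    limit_conn_zone $binary_remote_addr zone=perip:10m;
    limit_req_zone  $binary_remote_addr zone=login:10m rate=2r/s;

    upstream backend {
        # Cap concurrent connections to this server
        server 192.168.0.10:8080 max_conns=100;
        # queue 100 timeout=70;  # NGINX Plus: queue requests once max_conns is hit
    }

    server {
        limit_conn perip 10;   # at most 10 connections per client IP
        limit_rate 500k;       # cap bandwidth per connection

        location /login/ {
            limit_req zone=login burst=5;   # slow down brute-force attempts
        }
    }
}
```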

Additional considerations


There are additional features of NGINX that can be used to increase the performance of a web application that don’t really fall under the heading of tuning but are worth mentioning because their impact can be considerable.  We will discuss two of these features.


Caching


By enabling caching on an NGINX instance that is load balancing a set of web or application servers, you can dramatically improve the response time to the client while at the same time dramatically reducing the load on the backend servers. Caching is a subject of its own and will not be covered here. For more information on configuring NGINX for caching please see: NGINX Admin Guide – Caching.

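Although caching is out of scope here, for flavor a minimal proxy-cache sketch might look like this (the path, zone name, sizes, and upstream name are all hypothetical):

```nginx
http {
    proxy_cache_path /var/cache/nginx keys_zone=appcache:10m max_size=1g;

    server {
        location / {
            proxy_cache appcache;
            proxy_pass  http://backend;   # assumes an upstream named backend
        }
    }
}
```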

Compression


Compressing the responses sent to clients can greatly reduce their size, requiring less bandwidth; however, compression does require CPU resources, so it is best used when reducing bandwidth is worthwhile. It is important to note that you should not enable compression for objects that are already compressed, such as JPEGs. For more information on configuring NGINX for compression please see: NGINX Admin Guide – Compression and Decompression.

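A typical gzip configuration sketch (the type list and size threshold are illustrative, not recommendations):

```nginx
http {
    gzip on;
    # Compress common text formats; already-compressed media (JPEG, PNG, ...)
    # should not be listed here
    gzip_types text/plain text/css application/json application/javascript;
    gzip_min_length 1000;   # skip very small responses
}
```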
