PHP and nginx high-concurrency optimization

Linux kernel level

Using CentOS 7.0 as an example.

# Allow more pending connections in the listen queue
echo 50000 >/proc/sys/net/core/somaxconn
# Fast recycling of TIME_WAIT TCP connections
echo 1 >/proc/sys/net/ipv4/tcp_tw_recycle
# Reuse of TIME_WAIT TCP connections
echo 1 >/proc/sys/net/ipv4/tcp_tw_reuse
# Disable SYN-flood protection (SYN cookies)
echo 0 >/proc/sys/net/ipv4/tcp_syncookies
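These echo commands only last until the next reboot. A minimal sketch of persisting them, assuming the stock CentOS 7 /etc/sysctl.conf is used (note that tcp_tw_recycle was removed from later Linux kernels, so this applies to the CentOS 7-era kernel discussed here):

# /etc/sysctl.conf — persisted equivalents of the commands above
net.core.somaxconn = 50000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_syncookies = 0

# apply without rebooting:
# sysctl -p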

nginx optimization

worker_processes

Set worker_processes to 1-2 times the number of CPU cores, usually 4 or 8; going above 8 brings little further improvement.

Note that starting too many worker processes increases CPU overhead and drives CPU usage up.
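A minimal nginx.conf sketch for this directive (worker_processes auto, which matches the core count automatically, is assumed to be supported by the nginx build in use):

# main context of nginx.conf
worker_processes  8;    # 1-2x the number of CPU cores
# or let nginx detect the core count itself:
# worker_processes  auto;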

keepalive_timeout

Set it to 0 under high concurrency.

However, file uploads need to keep the connection alive, so take this into account during development and split that traffic out properly; see the sketch below.
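A hedged sketch of this split, assuming uploads live under a hypothetical /upload location while everything else drops keep-alive:

http {
    keepalive_timeout  0;              # no keep-alive under high concurrency

    server {
        listen 80;

        location /upload {
            keepalive_timeout    65;   # uploads still need a persistent connection
            client_max_body_size 64m;  # illustrative upload size limit
        }
    }
}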

worker_connections

Sets the maximum number of connections each worker process can open; set it as high as practical, e.g. 20480.

worker_rlimit_nofile

Increase this to a value greater than worker_processes * worker_connections. It effectively raises the open-file limit for the user the worker processes run as; see the combined sketch below.
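Putting the two directives together, a minimal sketch; the values follow the text, and the OS-level open-file limit (ulimit -n for the worker user, or /etc/security/limits.conf) must also be raised to match:

worker_processes      8;
worker_rlimit_nofile  204800;    # > worker_processes * worker_connections (8 * 20480)

events {
    worker_connections  20480;
}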

php-fpm

emergency_restart*

# If 10 child processes exit with errors within 60 seconds, restart php-fpm,
# guarding against interruptions caused by bad PHP code
emergency_restart_threshold = 10
emergency_restart_interval = 60

process.max

The maximum number of processes allowed. A php-fpm process uses roughly 15-40 MB of memory, so the exact value has to be derived from your actual situation; here it is set to 512.

pm.max_children

The maximum number of child processes allowed for a given pool; do not exceed process.max.

pm.max_requests

The maximum number of requests a child process handles before it is respawned; set to 2048. A consolidated sketch of these settings follows below.
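A consolidated php-fpm sketch of the settings above; the pm = static mode and the [www] pool name are assumptions, while the numeric values come from the text:

; global section of php-fpm.conf
emergency_restart_threshold = 10
emergency_restart_interval  = 60
process.max = 512

; pool section (www.conf), assumed pool name
[www]
pm = static
pm.max_children = 512
pm.max_requests = 2048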

Turn off the slow request log

;request_slowlog_timeout = 0
;slowlog = var/log/slow.log

Results

Environment

Hardware

  • i5-3470 CPU
  • 4 GB RAM

Software

  • php7.1.30
  • thinkPHP 5.1.35
  • nginx

Test scenario

ab hits the ThinkPHP framework's home page. TP has strict routing enabled and no route is configured for the home page, so the request falls through to the miss route and returns the miss message without touching the DB. The returned miss message is:

{"code":-8,"msg":"api不存在"} 

ab test result: 10,000 concurrent connections, roughly 10 requests each, 100,000 requests in total

D:\soft\phpstudy\PHPTutorial\Apache\bin>ab -c 10000 -n 100000 http://fs_server.test/
This is ApacheBench, Version 2.3 <$Revision: 1748469 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking fs_server.test (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests

Server Software:        nginx
Server Hostname:        fs_server.test
Server Port:            80

Document Path:          /
Document Length:        32 bytes

Concurrency Level:      10000
Time taken for tests:   492.928 seconds
Complete requests:      100000
Failed requests:        0
Total transferred:      19500000 bytes
HTML transferred:       3200000 bytes
Requests per second:    202.87 [#/sec] (mean)
Time per request:       49292.784 [ms] (mean)
Time per request:       4.929 [ms] (mean, across all concurrent requests)
Transfer rate:          38.63 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1    2   6.6      2    1365
Processing: 18749 46094 8055.0  49145   52397
Waiting:    12231 45636 8504.8  48793   51627
Total:      18751 46096 8055.0  49147   52399

Percentage of the requests served within a certain time (ms)
  50%  49147
  66%  49279
  75%  49347
  80%  49386
  90%  49473
  95%  49572
  98%  49717
  99%  50313
 100%  52399 (longest request)

No requests were lost, though the time taken is somewhat long; even without touching the DB, every request still goes through the full TP framework stack, so overall this is a reasonable result.

There is, however, an awkward problem here: once MySQL or Redis operations are involved, connection limits on those backing stores cause responses to be dropped, and nginx returns 5XX errors directly. The preliminary plan is to raise their maximum connection limits; this still needs testing.
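As a sketch of that preliminary plan, assuming the MySQL side, the connection ceiling could be raised roughly as follows (the value 2000 is illustrative, not from the text; Redis has an analogous maxclients directive):

# /etc/my.cnf — illustrative value, still to be tuned and tested
[mysqld]
max_connections = 2000

# or at runtime, without a restart:
# mysql> SET GLOBAL max_connections = 2000;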

Attached are a few commonly used commands to check how many fpm processes are currently running, the total memory overhead, the processes currently handling requests, and so on.

# Check whether there are enough php-fpm worker processes; if there are not, it is as if they were never started
# Count running worker processes:
ps -ef | grep 'php-fpm' | grep -v 'master' | grep -v 'grep' | wc -l

# Count workers currently in use, i.e. handling requests:
netstat -anp | grep 'php-fpm' | grep -v 'LISTENING' | grep -v 'php-fpm.conf' | wc -l

# Total memory overhead (sum of RSS, in KB):
ps auxf | grep php | grep -v grep | grep -v master | awk '{sum+=$6} END {print sum}'
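Besides these ad-hoc commands, php-fpm also ships a built-in status page that reports active/idle workers and the listen queue; a minimal sketch, where the /fpm-status path and the 127.0.0.1:9000 fastcgi backend are assumptions:

; pool config (www.conf)
pm.status_path = /fpm-status

# matching nginx location, restricted to localhost
location = /fpm-status {
    allow 127.0.0.1;
    deny all;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
}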