This post covers high availability with keepalived and reverse proxying with Nginx.
For the architecture diagram, see the earlier post on installing web software on CentOS 7 with ansible and yum.
All of the commands below target the [front] host group in the ansible inventory.
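For reference, a minimal sketch of what /root/ans/ansible_inventory.txt might contain, assuming the two front hosts used throughout this post (the group name [front] comes from the commands below; the rest is illustrative):

# /root/ans/ansible_inventory.txt (hypothetical contents)
[front]
10.11.11.106
10.11.11.217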
1.1 Configuring the keepalived master and backup
First, stop the service:
# ansible front -i /root/ans/ansible_inventory.txt -m systemd -a "name=keepalived state=stopped"
# ansible front -i /root/ans/ansible_inventory.txt -m shell -a "cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak"
Fetch a copy of the config file to the control machine for editing:
# ansible front -i /root/ans/ansible_inventory.txt -m fetch -a "src=/etc/keepalived/keepalived.conf dest=/root/ans/conf.d/ flat=yes" --limit 10.11.11.106
Prepare keepalived.conf files for the master and the backup.
The master file:
# cat keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL_xcp
}

vrrp_script chk_ngx {
    script "/etc/keepalived/chk_ngx.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass DFMpS
    }
    virtual_ipaddress {
        10.11.11.20
    }
    track_script {
        chk_ngx
    }
}
The backup file:
# cat keepalived.conf.backup
! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL_xcp
}

vrrp_script chk_ngx {
    script "/etc/keepalived/chk_ngx.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth1
    virtual_router_id 51
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass DFMpS
    }
    virtual_ipaddress {
        10.11.11.20
    }
    track_script {
        chk_ngx
    }
}
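The two files differ only in the VRRP role and the priority, which grep makes easy to confirm:

$ grep -E 'state|priority' keepalived.conf keepalived.conf.backup
keepalived.conf:    state MASTER
keepalived.conf:    priority 100
keepalived.conf.backup:    state BACKUP
keepalived.conf.backup:    priority 50

While chk_ngx keeps succeeding, its weight of 2 is added to the base priority, giving effective priorities of 102 (master) versus 52 (backup), so the master holds the VIP as long as both nodes are healthy.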
The chk_ngx health-check script:
# cat chk_ngx.sh
#!/bin/bash
# If no nginx master process is running, try to start nginx;
# if it is still missing 5 seconds later, stop keepalived so
# the VIP fails over to the backup node.
if [ "$(ps -ef | grep "nginx: master process" | grep -v grep)" == "" ]; then
    /etc/init.d/nginx start
    sleep 5
    if [ "$(ps -ef | grep "nginx: master process" | grep -v grep)" == "" ]; then
        /etc/init.d/keepalived stop
    fi
fi
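Before wiring the script into keepalived, it is worth a quick manual check on one front node (a hypothetical smoke test, not from the original post): stop nginx, run the script, and confirm that nginx comes back.

# systemctl stop nginx
# bash /etc/keepalived/chk_ngx.sh
# ps -ef | grep "nginx: master process" | grep -v grep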
Use ansible to copy the keepalived.conf files and the nginx health-check script to the target hosts:
# ansible front -i /root/ans/ansible_inventory.txt -m copy -a "src=/root/ans/conf.d/keepalived.conf dest=/etc/keepalived/keepalived.conf backup=yes" --limit=10.11.11.106
# ansible front -i /root/ans/ansible_inventory.txt -m copy -a "src=/root/ans/conf.d/keepalived.conf.backup dest=/etc/keepalived/keepalived.conf backup=yes" --limit=10.11.11.217
# ansible front -i /root/ans/ansible_inventory.txt -m copy -a "src=/root/ans/conf.d/chk_ngx.sh dest=/etc/keepalived/ mode=0755 backup=yes"
Alternatively, fix the permissions with a shell command:
# ansible front -i /root/ans/ansible_inventory.txt -m shell -a "chmod +x /etc/keepalived/chk_ngx.sh"
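The idempotent file module does the same job (an equivalent alternative, not from the original post):

# ansible front -i /root/ans/ansible_inventory.txt -m file -a "path=/etc/keepalived/chk_ngx.sh mode=0755"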
Verify the result:
# ansible front -i /root/ans/ansible_inventory.txt -m systemd -a "name=keepalived state=started" --limit=10.11.11.106
# ansible front -i /root/ans/ansible_inventory.txt -m systemd -a "name=keepalived state=started" --limit=10.11.11.217
# ansible front -i /root/ans/ansible_inventory.txt -m shell -a "ip addr | grep eth1" --limit=10.11.11.106
10.11.11.106 | SUCCESS | rc=0 >>
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
inet 10.11.11.2/24 brd 10.11.11.255 scope global eth1
inet 10.11.11.20/32 scope global eth1
# ansible front -i /root/ans/ansible_inventory.txt -m shell -a "ip addr | grep eth1" --limit=10.11.11.217
10.11.11.217 | SUCCESS | rc=0 >>
5: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
inet 10.11.11.3/24 brd 10.11.11.255 scope global eth1
inet 10.11.11.20/32 scope global eth1
Ping the VIP from the local machine:
# ping 10.11.11.20
Failover test: take the master down and check that the VIP moves over.
Scenario: the master host is down and its nginx service has failed.
# ansible front -i /root/ans/ansible_inventory.txt -m systemd -a "name=keepalived state=stopped"
# ping 10.11.11.20 ----> unreachable
# ansible front -i /root/ans/ansible_inventory.txt -m systemd -a "name=keepalived state=started" --limit=10.11.11.106
# ansible front -i /root/ans/ansible_inventory.txt -m systemd -a "name=nginx state=started" --limit=10.11.11.106
# ping 10.11.11.20 ----> reachable
The web site is reachable through the VIP, served by the master node.
Next, stop the nginx service on the master node:
# ansible front -i /root/ans/ansible_inventory.txt -m systemd -a "name=nginx state=stopped" --limit=10.11.11.106
# ping 10.11.11.20 ----> reachable (on some cloud platforms the VIP must be bound manually before it will float automatically)
The web site is still reachable through the VIP, now served by the backup node.
After the master node recovers, it does not take the VIP back from the backup:
# ansible front -i /root/ans/ansible_inventory.txt -m systemd -a "name=keepalived state=started" --limit=10.11.11.217
# ansible front -i /root/ans/ansible_inventory.txt -m systemd -a "name=nginx state=started" --limit=10.11.11.217
(On some cloud platforms the VIP must be rebound to the master node manually.)
# ping 10.11.11.20 ----> reachable
Finally, restart both services across the whole group and confirm the VIP still answers:
# ansible front -i /root/ans/ansible_inventory.txt -m systemd -a "name=keepalived state=restarted"
# ansible front -i /root/ans/ansible_inventory.txt -m systemd -a "name=nginx state=restarted"
# ping 10.11.11.20 ----> reachable
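During these failover tests it helps to watch the VIP continuously from a third machine; a simple loop such as the following (my suggestion, not part of the original test) prints the HTTP status once per second, so the moment of failover is easy to spot:

# while true; do curl -s -o /dev/null -w "%{http_code}\n" http://10.11.11.20/; sleep 1; done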
1.2 Nginx reverse proxy
Below is an Nginx reverse-proxy example. After every configuration change I test it with nginx -t.
$ ls
koi-utf modules nginx.conf uwsgi_params
conf.d koi-win nginx.conf.bak win-utf
fastcgi_params mime.types naproxy.conf scgi_params
$ cat nginx.conf
# nginx conf conf/nginx.conf
# Last Updated 2017.07.01
#user nginx nginx;
user www www;
worker_processes 3;
error_log /data/var/logs/nginx/error.log notice;  # I changed the log location
pid /var/run/nginx.pid;
#access_log off;
worker_rlimit_nofile 65535;

events {
    use epoll;
    worker_connections 65535;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    server_tokens off;
    server_name_in_redirect off;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 100m;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;
    tcp_nodelay on;
    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml application/xml+rss text/javascript application/javascript;
    gzip_vary on;
    log_format wwwlogs '$remote_addr - $remote_user [$time_local] $request $status $body_bytes_sent $http_referer $http_user_agent $http_x_forwarded_for';
    #include default.conf;

    server {
        listen 80 default_server;
        server_name _;
        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }
    }

    access_log off;
    include /etc/nginx/conf.d/*.conf;  # per-site configuration files
}
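As noted above, every change should pass nginx -t before it goes live; across the whole front group the test and reload can be combined in one ansible call (an illustrative command, not from the original post):

# ansible front -i /root/ans/ansible_inventory.txt -m shell -a "nginx -t && systemctl reload nginx"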
The upstream host-group file for the reverse proxy (saved here as upstream.conf; see the Nginx documentation for details):
$ cd /etc/nginx/conf.d
$ cat upstream.conf
upstream upload {
    #server 192.168.10.16:8081;
    server 10.11.11.107:8089 weight=10;
    server 10.11.11.108:8089 weight=10;
}
upstream admin {
    #server 192.168.10.43:8082 weight=10;
    server 10.11.11.107:8082 weight=10;
    server 10.11.11.108:8082 weight=10;
}
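With equal weights, requests are spread evenly over the two backends. If nginx should also back off from a dead backend, the standard max_fails and fail_timeout server parameters can be added (an optional variation, not in the original config):

upstream admin {
    server 10.11.11.107:8082 weight=10 max_fails=3 fail_timeout=30s;
    server 10.11.11.108:8082 weight=10 max_fails=3 fail_timeout=30s;
}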
Use the host groups to proxy requests to the backends:
$ cat proxy.conf
server {
    listen 80;
    server_name admin.xincheping.com;
    #access_log /data/var/logs/nginx/admin/access.log;
    #error_log /data/var/logs/nginx/admin/error.log;
    location / {
        proxy_pass http://admin;
        include naproxy.conf;  # shared proxy parameters, shown below
    }
    location /image/upload.do {
        proxy_pass http://upload;
        include naproxy.conf;
    }
}
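To exercise the proxy through the VIP without touching DNS, point curl at the VIP and set the Host header by hand (an illustrative check, assuming the VIP from section 1.1):

$ curl -I -H "Host: admin.xincheping.com" http://10.11.11.20/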
These proxy parameters are optional (the yum-installed defaults can be kept):
$ cat naproxy.conf
proxy_connect_timeout 60s;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 512k;
proxy_buffers 4 512k;
proxy_busy_buffers_size 1024k;
proxy_redirect off;
proxy_hide_header Vary;
proxy_set_header Accept-Encoding '';
proxy_set_header Host $host;
proxy_set_header Referer $http_referer;
proxy_set_header Cookie $http_cookie;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
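Because X-Real-IP and X-Forwarded-For are set here, the backends can log the real client address instead of the proxy's. A minimal sketch of a matching log_format on a backend, assuming the backends also run nginx (the format name "proxied" is hypothetical):

log_format proxied '$http_x_real_ip - $remote_user [$time_local] "$request" '
                   '$status $body_bytes_sent "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log proxied;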