The high-availability setup in the official documentation recommends placing haproxy in front of the service components as an upper-layer proxy, providing either active/standby or load-balanced access to them.
I started out with haproxy as well, but later changed the approach.
Test environment: haproxy + nginx
Kexing (kx) environment: haproxy
Let's set the test environment aside for now; I will go over its configuration in section 4.2.
Install haproxy on both the kxcontroller primary and standby control nodes:
yum install -y haproxy
Create the working directories:
mkdir -p /home/haproxy/log && mkdir -p /home/haproxy/run/
Grant the haproxy user ownership of the directories:
chown -R haproxy:haproxy /home/haproxy
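A quick sanity check (just a sketch; the paths are the ones created above) to confirm the directories exist and are owned by the haproxy user:
ls -ld /home/haproxy/log /home/haproxy/run
id haproxy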
Configuration example on kxcontroller1:
[root@kxcontroller1 ~]# vi /etc/haproxy/haproxy.cfg
# Global configuration
global
chroot /home/haproxy/log
daemon
group haproxy
maxconn 20000
pidfile /home/haproxy/run/haproxy.pid
user haproxy
defaults
log global
maxconn 20000
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout check 10s
# Dashboard UI. Regardless of which node holds the VIP, this service is load-balanced across both backends.
listen dashboard_cluster_80
bind 10.120.42.10:80
balance source
option tcpka
option httpchk
option tcplog
server kxcontroller1 10.120.42.1:80 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:80 check inter 2000 rise 2 fall 5
# Database (Galera) cluster access. The backup keyword sends all traffic to kxcontroller1; kxcontroller2 is only used when kxcontroller1 is down.
listen galera_cluster_3306
bind 10.120.42.10:3306
mode tcp
balance source
option tcpka
option httpchk
server kxcontroller1 10.120.42.1:3306 check port 9200 inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:3306 backup check port 9200 inter 2000 rise 2 fall 5
# RabbitMQ queue access. Only one node is used: when the VIP is on kxcontroller1, only the RabbitMQ instance on kxcontroller1 is accessed.
listen rabbitmq_cluster_5672
bind 10.120.42.10:5672
mode tcp
balance roundrobin
server kxcontroller1 10.120.42.1:5672 check inter 2000 rise 2 fall 5
# Glance API access. Only one node is used: no matter which node holds the VIP, only the Glance API on kxcontroller2 is accessed. kxcontroller1 syncs image files to kxcontroller2 in the early hours every day; if kxcontroller2 fails, switch over to kxcontroller1 manually (cold standby).
listen glance_api_cluster_9292
bind 10.120.42.10:9292
balance source
option tcpka
option httpchk
option tcplog
# server kxcontroller1 10.120.42.1:9292 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:9292 check inter 2000 rise 2 fall 5
# Glance registry access. Only one node is used: no matter which node holds the VIP, only the Glance registry on kxcontroller2 is accessed. kxcontroller1 syncs image files to kxcontroller2 in the early hours every day; if kxcontroller2 fails, switch over to kxcontroller1 manually (cold standby).
listen glance_registry_cluster_9191
bind 10.120.42.10:9191
balance source
option tcpka
option tcplog
# server kxcontroller1 10.120.42.1:9191 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:9191 check inter 2000 rise 2 fall 5
# Keystone admin (35357) access. Only one node is used: when the VIP is on kxcontroller1, only Keystone 35357 on kxcontroller1 is accessed.
listen keystone_admin_cluster_35357
bind 10.120.42.10:35357
balance source
option tcpka
option httpchk
option tcplog
server kxcontroller1 10.120.42.1:35357 check inter 2000 rise 2 fall 5
# server kxcontroller2 10.120.42.2:35357 check inter 2000 rise 2 fall 5
# Keystone public/internal (5000) access. Only one node is used: when the VIP is on kxcontroller1, only Keystone 5000 on kxcontroller1 is accessed.
listen keystone_public_internal_cluster_5000
bind 10.120.42.10:5000
balance source
option tcpka
option httpchk
option tcplog
server kxcontroller1 10.120.42.1:5000 check inter 2000 rise 2 fall 5
# server kxcontroller2 10.120.42.2:5000 check inter 2000 rise 2 fall 5
# Nova API access. Regardless of which node holds the VIP, this service is load-balanced across both backends.
listen nova_compute_api_cluster_8774
bind 10.120.42.10:8774
balance source
option tcpka
option httpchk
option tcplog
server kxcontroller1 10.120.42.1:8774 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:8774 check inter 2000 rise 2 fall 5
# Nova metadata access. Regardless of which node holds the VIP, this service is load-balanced across both backends.
listen nova_metadata_api_cluster_8775
bind 10.120.42.10:8775
balance source
option tcpka
option tcplog
server kxcontroller1 10.120.42.1:8775 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:8775 check inter 2000 rise 2 fall 5
# Cinder API access. The VIP port is open here, but the backend service is not enabled yet; this section is kept as a placeholder.
listen cinder_api_cluster_8776
bind 10.120.42.10:8776
balance source
option tcpka
option httpchk
option tcplog
server kxcontroller1 10.120.42.1:8776 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:8776 check inter 2000 rise 2 fall 5
# Ceilometer API access. The VIP port is open here, but the backend service is not enabled yet; this section is kept as a placeholder.
listen ceilometer_api_cluster_8777
bind 10.120.42.10:8777
balance source
option tcpka
option tcplog
server kxcontroller1 10.120.42.1:8777 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:8777 check inter 2000 rise 2 fall 5
# Nova VNC proxy access. Regardless of which node holds the VIP, this backend service is load-balanced across both backends.
listen nova_vncproxy_cluster_6080
bind 10.120.42.10:6080
balance source
option tcpka
option tcplog
server kxcontroller1 10.120.42.1:6080 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:6080 check inter 2000 rise 2 fall 5
# Neutron API access. Regardless of which node holds the VIP, this backend service is load-balanced across both backends.
listen neutron_api_cluster_9696
bind 10.120.42.10:9696
balance source
option tcpka
option httpchk
option tcplog
server kxcontroller1 10.120.42.1:9696 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:9696 check inter 2000 rise 2 fall 5
# Swift proxy access. The VIP port is open here, but the backend service is not enabled yet; this section is kept as a placeholder.
listen swift_proxy_cluster_8080
bind 10.120.42.10:8080
balance source
option tcplog
option tcpka
server kxcontroller1 10.120.42.1:8080 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:8080 check inter 2000 rise 2 fall 5
# Read-only stats page for ordinary users: http://10.120.42.10:8888/stats, username/password admin:admin
listen admin_stats
bind 0.0.0.0:8888
option httplog
# mode is not declared in the defaults section, so it falls back to tcp; mode http must therefore be set explicitly here
mode http
stats refresh 30s
stats uri /stats
stats realm Haproxy Manager
stats auth admin:admin
# Admin stats page where changes can be made: http://10.120.42.10:8008/admin-venic, username/password venic:venic8888
listen stats_auth
bind 0.0.0.0:8008
# mode is not declared in the defaults section, so it falls back to tcp; mode http must therefore be set explicitly here
mode http
stats enable
stats uri /admin-venic
stats auth venic:venic8888
stats admin if TRUE
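Before starting the service, it is worth validating the configuration syntax; haproxy's check mode parses the file without starting any proxy:
haproxy -c -f /etc/haproxy/haproxy.cfg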
Configuration example on kxcontroller2 (note that which backend lines are active or commented out differs slightly from kxcontroller1):
[root@kxcontroller2 ~]# vi /etc/haproxy/haproxy.cfg
# Global configuration
global
chroot /home/haproxy/log
daemon
group haproxy
maxconn 20000
pidfile /home/haproxy/run/haproxy.pid
user haproxy
defaults
log global
maxconn 20000
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout check 10s
# Dashboard UI. Regardless of which node holds the VIP, this service is load-balanced across both backends.
listen dashboard_cluster_80
bind 10.120.42.10:80
balance source
option tcpka
option httpchk
option tcplog
server kxcontroller1 10.120.42.1:80 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:80 check inter 2000 rise 2 fall 5
# Database (Galera) cluster access. The backup keyword sends all traffic to kxcontroller1; kxcontroller2 is only used when kxcontroller1 is down.
listen galera_cluster_3306
bind 10.120.42.10:3306
mode tcp
balance source
option tcpka
option httpchk
server kxcontroller1 10.120.42.1:3306 check port 9200 inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:3306 backup check port 9200 inter 2000 rise 2 fall 5
# RabbitMQ queue access. Only one node is used: when the VIP is on kxcontroller2, only the RabbitMQ instance on kxcontroller2 is accessed.
listen rabbitmq_cluster_5672
bind 10.120.42.10:5672
mode tcp
balance roundrobin
server kxcontroller2 10.120.42.2:5672 check inter 2000 rise 2 fall 5
# Glance API access. Only one node is used: no matter which node holds the VIP, only the Glance API on kxcontroller2 is accessed. kxcontroller1 syncs image files to kxcontroller2 in the early hours every day; if kxcontroller2 fails, switch over to kxcontroller1 manually (cold standby).
listen glance_api_cluster_9292
bind 10.120.42.10:9292
balance source
option tcpka
option httpchk
option tcplog
# server kxcontroller1 10.120.42.1:9292 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:9292 check inter 2000 rise 2 fall 5
# Glance registry access. Only one node is used: no matter which node holds the VIP, only the Glance registry on kxcontroller2 is accessed. kxcontroller1 syncs image files to kxcontroller2 in the early hours every day; if kxcontroller2 fails, switch over to kxcontroller1 manually (cold standby).
listen glance_registry_cluster_9191
bind 10.120.42.10:9191
balance source
option tcpka
option tcplog
# server kxcontroller1 10.120.42.1:9191 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:9191 check inter 2000 rise 2 fall 5
# Keystone admin (35357) access. Only one node is used: when the VIP is on kxcontroller2, only Keystone 35357 on kxcontroller2 is accessed.
listen keystone_admin_cluster_35357
bind 10.120.42.10:35357
balance source
option tcpka
option httpchk
option tcplog
# server kxcontroller1 10.120.42.1:35357 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:35357 check inter 2000 rise 2 fall 5
# Keystone public/internal (5000) access. Only one node is used: when the VIP is on kxcontroller2, only Keystone 5000 on kxcontroller2 is accessed.
listen keystone_public_internal_cluster_5000
bind 10.120.42.10:5000
balance source
option tcpka
option httpchk
option tcplog
# server kxcontroller1 10.120.42.1:5000 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:5000 check inter 2000 rise 2 fall 5
# Nova API access. Regardless of which node holds the VIP, this service is load-balanced across both backends.
listen nova_compute_api_cluster_8774
bind 10.120.42.10:8774
balance source
option tcpka
option httpchk
option tcplog
server kxcontroller1 10.120.42.1:8774 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:8774 check inter 2000 rise 2 fall 5
# Nova metadata access. Regardless of which node holds the VIP, this service is load-balanced across both backends.
listen nova_metadata_api_cluster_8775
bind 10.120.42.10:8775
balance source
option tcpka
option tcplog
server kxcontroller1 10.120.42.1:8775 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:8775 check inter 2000 rise 2 fall 5
# Cinder API access. The VIP port is open here, but the backend service is not enabled yet; this section is kept as a placeholder.
listen cinder_api_cluster_8776
bind 10.120.42.10:8776
balance source
option tcpka
option httpchk
option tcplog
server kxcontroller1 10.120.42.1:8776 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:8776 check inter 2000 rise 2 fall 5
# Ceilometer API access. The VIP port is open here, but the backend service is not enabled yet; this section is kept as a placeholder.
listen ceilometer_api_cluster_8777
bind 10.120.42.10:8777
balance source
option tcpka
option tcplog
server kxcontroller1 10.120.42.1:8777 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:8777 check inter 2000 rise 2 fall 5
# Nova VNC proxy access. Regardless of which node holds the VIP, this backend service is load-balanced across both backends.
listen nova_vncproxy_cluster_6080
bind 10.120.42.10:6080
balance source
option tcpka
option tcplog
server kxcontroller1 10.120.42.1:6080 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:6080 check inter 2000 rise 2 fall 5
# Neutron API access. Regardless of which node holds the VIP, this backend service is load-balanced across both backends.
listen neutron_api_cluster_9696
bind 10.120.42.10:9696
balance source
option tcpka
option httpchk
option tcplog
server kxcontroller1 10.120.42.1:9696 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:9696 check inter 2000 rise 2 fall 5
# Swift proxy access. The VIP port is open here, but the backend service is not enabled yet; this section is kept as a placeholder.
listen swift_proxy_cluster_8080
bind 10.120.42.10:8080
balance source
option tcplog
option tcpka
server kxcontroller1 10.120.42.1:8080 check inter 2000 rise 2 fall 5
server kxcontroller2 10.120.42.2:8080 check inter 2000 rise 2 fall 5
# Read-only stats page for ordinary users: http://10.120.42.10:8888/stats, username/password admin:admin
listen admin_stats
bind 0.0.0.0:8888
option httplog
# mode is not declared in the defaults section, so it falls back to tcp; mode http must therefore be set explicitly here
mode http
stats refresh 30s
stats uri /stats
stats realm Haproxy Manager
stats auth admin:admin
# Admin stats page where changes can be made: http://10.120.42.10:8008/admin-venic, username/password venic:venic8888
listen stats_auth
bind 0.0.0.0:8008
# mode is not declared in the defaults section, so it falls back to tcp; mode http must therefore be set explicitly here
mode http
stats enable
stats uri /admin-venic
stats auth venic:venic8888
stats admin if TRUE
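The two node configurations should differ only in a handful of server and comment lines (RabbitMQ and Keystone). A quick diff makes the intended differences easy to confirm; this is only a sketch and assumes SSH access from kxcontroller2 to kxcontroller1:
ssh kxcontroller1 cat /etc/haproxy/haproxy.cfg | diff - /etc/haproxy/haproxy.cfg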
After both Kexing primary and standby control nodes are configured, start and enable the service:
systemctl start haproxy.service
systemctl enable haproxy.service
Test whether the monitoring pages are reachable, to confirm that haproxy is working properly:
http://10.120.42.10:8888/stats
http://10.120.42.10:8008/admin-venic
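The stats page can also be checked from the command line; a minimal sketch using curl and the admin:admin account configured above (an HTTP 200 means the page is being served):
curl -s -o /dev/null -w "%{http_code}\n" -u admin:admin http://10.120.42.10:8888/stats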
At startup, the standby node that has not acquired the VIP will find that the haproxy service cannot start. The cause is as follows.
haproxy fails to start with:
[ALERT] 164/1100300 (11606) : Starting proxy linuxyw.com: cannot bind socket
Before fixing this, run netstat -anp | grep haproxy on both the primary and standby nodes to check whether the VIP's ports are being listened on.
The root cause is simply that this node does not hold the VIP, while the configuration file binds to that (currently non-existent) VIP address, hence the error above.
Of course, the haproxy service must be started in advance on both nodes; if it only gets started manually after a failure, there is no real high availability.
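To see which node currently holds the VIP, a quick check (a sketch, assuming the VIP is 10.120.42.10 as configured above):
ip addr show | grep 10.120.42.10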
Solution:
Modify the kernel parameter:
vi /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind=1
Save the file, then apply the change so the kernel parameter takes effect:
sysctl -p
After that, haproxy starts successfully even on the node that does not hold the VIP.
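To verify, a minimal sketch: confirm the parameter is active, then restart haproxy and check its status:
sysctl net.ipv4.ip_nonlocal_bind
systemctl restart haproxy.service
systemctl status haproxy.service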