Deploying a Complete Front-End and Back-End Master-Slave Hot-Standby System with Docker

Problems This System Solves

  1. Not having enough physical machines
  2. Underutilized physical-machine resources
  3. System high availability
  4. Zero-downtime updates

Deployment Prerequisites

  1. A server running Ubuntu 16.04.6 with internet access
  2. At least 4 cores and 8 GB of RAM; the more memory the better

System Deployment Design Diagram

Note: the diagram shows the design of the whole system. The upper half is a business server and the lower half is the database server; this document only covers deploying the business server, since the lower half is straightforward. The IPs marked in the diagram are the author's own VM IPs; set them according to your actual environment.

Related Concepts

1 LVS

LVS (Linux Virtual Server) is open-source software that performs layer-4 (transport-layer) load balancing. It currently offers three IP load-balancing techniques (VS/NAT, VS/TUN, and VS/DR) and eight scheduling algorithms (rr, wrr, lc, wlc, lblc, lblcr, dh, sh).
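To make the simplest of those algorithms concrete, here is a minimal shell sketch (not part of the deployment; the function name and server IPs are invented for illustration) of how rr (round-robin) picks a real server for each request. LVS does the equivalent inside the kernel:

```shell
# Toy round-robin scheduler: request i goes to server (i mod N).
RS_LIST="172.18.0.11 172.18.0.12 172.18.0.13"

rr_pick() {
  i=$1                    # 0-based request number
  set -- $RS_LIST         # load the real-server list as positional args
  n=$(( i % $# + 1 ))     # wrap around after the last server
  eval "echo \$$n"
}

for i in 0 1 2 3; do rr_pick "$i"; done
```

Note how request 3 wraps back to the first server; wrr (weighted round-robin) refines this by letting heavier-weighted servers appear more often in the rotation.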

2 What Keepalived Does

LVS performs load balancing, but it has no health checks of its own: if a real server (rs) fails, LVS keeps forwarding requests to the failed rs, so those requests go nowhere. Keepalived adds health checking, and it also makes LVS itself highly available, eliminating LVS as a single point of failure. In fact, Keepalived was originally created for LVS.

3 Keepalived and How It Works

Keepalived is software that works somewhat like a layer 2/4/7 switch. In Linux cluster management it is the service that keeps a cluster highly available; its purpose is to prevent single points of failure.

Keepalived provides cluster high availability based on the VRRP protocol. Its main functions are isolating failed real servers and failing over between load balancers, preventing single points of failure. Before looking at how Keepalived works, it helps to understand VRRP.

4 VRRP: Virtual Router Redundancy Protocol

VRRP is a fault-tolerance protocol: when a host's next-hop router fails, another router takes over the failed router's work, keeping network communication continuous and reliable. Before introducing VRRP itself, here are its key terms:

Virtual router: consists of one Master router and several Backup routers. Hosts use the virtual router as their default gateway.

VRID: the identifier of a virtual router. Routers sharing the same VRID form one virtual router.

Master router: the router in the virtual router that currently forwards packets.

Backup router: a router that can take over the Master's work when the Master fails.

Virtual IP address: the IP address of the virtual router. A virtual router can own one or more IP addresses.

IP address owner: the router whose interface IP address equals the virtual IP address.

Virtual MAC address: each virtual router owns one virtual MAC address, in the format 00-00-5E-00-01-{VRID}. Normally the virtual router answers ARP requests with the virtual MAC address; only with special configuration does it answer with the interface's real MAC address.

Priority: VRRP uses priority to determine the role of each router within the virtual router.

Non-preemptive mode: if the Backup routers run in non-preemptive mode, then as long as the Master has not failed, a Backup will not become Master even if it is later configured with a higher priority.

Preemptive mode: if a Backup router runs in preemptive mode, it compares its own priority with the priority carried in each VRRP advertisement it receives. If its own priority is higher than the current Master's, it actively preempts and becomes Master; otherwise it stays in the Backup state.

VRRP groups the routers on a LAN into a VRRP backup group that functions as a single router, identified by a virtual router ID (VRID). The virtual router has its own virtual IP and virtual MAC address, and outwardly behaves exactly like a physical router. Hosts on the LAN set the virtual router's IP address as their default gateway and communicate with external networks through it.

The virtual router runs on top of real physical routers: one Master and several Backups. While the Master works normally, LAN hosts communicate with the outside world through it. When the Master fails, one of the Backup devices becomes the new Master and takes over packet forwarding. (This is router high availability.)
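The preemption rule described above can be sketched as a tiny shell function (illustrative only; `should_preempt` is a made-up name, not a Keepalived command):

```shell
# Decide whether a Backup router takes over, given its mode and priorities.
should_preempt() {
  # $1: preempt mode (yes|no), $2: own priority, $3: current Master's priority
  if [ "$1" = "yes" ] && [ "$2" -gt "$3" ]; then
    echo takeover      # preemptive and strictly higher priority: become Master
  else
    echo stay-backup   # otherwise remain in the Backup state
  fi
}

should_preempt yes 120 100   # higher priority + preempt mode
should_preempt no  120 100   # non-preemptive: stays Backup even if higher
```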

5 The VRRP Workflow

  1. The routers in the virtual router elect a Master by priority. The Master sends gratuitous ARP packets advertising its virtual MAC address to the connected devices and hosts, and takes over packet forwarding;
  2. The Master periodically sends VRRP advertisements publishing its configuration (priority, etc.) and operating state;
  3. If the Master fails, the Backup routers in the virtual router elect a new Master by priority;
  4. When the virtual router switches from one device to another, the new Master simply sends an ARP packet carrying the virtual router's MAC address and virtual IP address, which updates the ARP entries of the connected hosts and devices. Hosts on the network never notice that the Master has switched to a different device.
  5. When a Backup's priority is higher than the Master's, the Backup's operating mode (preemptive or non-preemptive) determines whether a new Master is elected.
  6. VRRP priorities range from 0 to 255 (higher values mean higher priority).
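Two of the rules above, the virtual MAC format and election by priority, can be checked with a small shell sketch (function names are invented for illustration):

```shell
# The virtual MAC embeds the VRID (1-255) as two hex digits.
vrrp_virtual_mac() {
  printf '00-00-5E-00-01-%02X\n' "$1"
}

# Election: among name:priority pairs, the highest priority (0-255) wins.
elect_master() {
  printf '%s\n' "$@" | sort -t: -k2 -rn | head -n1 | cut -d: -f1
}

vrrp_virtual_mac 51                      # VRID 51 is used later in this article
elect_master web1:100 web2:80 web3:90
```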

6 Docker

  1. Docker is the world's leading software container platform.
  2. Docker is written in Go and builds on Linux kernel features such as cgroups, namespaces, and union filesystems like AUFS to package and isolate processes; it is OS-level virtualization. Because each isolated process is independent of the host and of other isolated processes, it is called a container. Docker was originally implemented on top of LXC.
  3. Docker automates repetitive tasks such as setting up and configuring development environments, freeing developers to focus on what really matters: building great software.
  4. Users can easily create and use containers and put their own applications inside them. Containers can be versioned, copied, shared, and modified, just like ordinary code.

See also an article I wrote: juejin.im/post/5dae55…

7 Nginx

Nginx is a high-performance HTTP server, reverse proxy, and mail (IMAP/POP3) proxy server.

Uses: clustering (higher throughput, less load on any single server), reverse proxying (hiding real IP addresses), virtual servers, static file serving (separating static from dynamic content), solving cross-origin problems, and building an enterprise-grade API gateway.

Starting the Deployment

Installing Docker

1 Remove old Docker versions (skip if Docker was never installed)

sudo apt-get remove docker docker-engine docker.io containerd runc

2 Update the package index

sudo apt-get update

3 Allow apt to install packages from a repository over HTTPS

sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common

4 Add the GPG key

sudo curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -

5 Verify the key fingerprint

sudo apt-key fingerprint 0EBFCD88

6 Add the stable repository and update the index

sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update

7 List available Docker versions

apt-cache madison docker-ce

8 Install a specific Docker version

sudo apt-get install -y docker-ce=17.12.1~ce-0~ubuntu

9 Verify that Docker installed successfully

docker --version

10 Add your user to the docker group so docker can run without sudo

sudo gpasswd -a <username> docker  # replace <username> with your own login name

11 Restart the service and refresh the docker group membership; installation is complete

sudo service docker restart
newgrp docker

Creating a Custom Docker Network

A container's IP changes after a restart, which is not what we want: we want each container to keep its own fixed IP. Containers use the default docker0 bridge, whose IPs cannot be assigned manually, so we create our own bridge network and pin each container's IP; the IP then survives restarts.

docker network create --subnet=172.18.0.0/24 mynet
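As a quick sanity check on capacity (illustrative arithmetic only), the /24 prefix leaves 8 host bits, which bounds how many fixed container IPs mynet can hold:

```shell
# Usable addresses in 172.18.0.0/24: 2^(32-24) minus network and broadcast.
prefix=24
host_bits=$((32 - prefix))
usable=$(( (1 << host_bits) - 2 ))
echo "$usable"   # plenty of room for the handful of containers in this article
```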

Use ifconfig to inspect the network we just created.

Installing Keepalived on the Host

1 Install the build dependencies

sudo apt-get install -y gcc
sudo apt-get install -y g++
sudo apt-get install -y libssl-dev
sudo apt-get install -y daemon
sudo apt-get install -y make
sudo apt-get install -y sysv-rc-conf

2 Download and install Keepalived

cd /usr/local/
wget http://www.keepalived.org/software/keepalived-1.2.18.tar.gz
tar zxvf keepalived-1.2.18.tar.gz

cd keepalived-1.2.18

./configure --prefix=/usr/local/keepalived

make && make install

3 Register Keepalived as a system service

mkdir /etc/keepalived
mkdir /etc/sysconfig
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
ln -s /usr/local/sbin/keepalived /usr/sbin/
ln -s /usr/local/keepalived/sbin/keepalived /sbin/

4 Modify the Keepalived startup script

Linux distributions other than Red Hat have no /etc/rc.d/init.d/functions, so the original startup script needs two changes:

  • Change . /etc/rc.d/init.d/functions to . /lib/lsb/init-functions
  • Change daemon keepalived ${KEEPALIVED_OPTIONS} to daemon keepalived start

The full script after the changes:

#!/bin/sh
#
# Startup script for the Keepalived daemon
#
# processname: keepalived
# pidfile: /var/run/keepalived.pid
# config: /etc/keepalived/keepalived.conf
# chkconfig: - 21 79
# description: Start and stop Keepalived


# Source function library
#. /etc/rc.d/init.d/functions
. /lib/lsb/init-functions
# Source configuration file (we set KEEPALIVED_OPTIONS there)
. /etc/sysconfig/keepalived


RETVAL=0


prog="keepalived"


start() {
    echo -n $"Starting $prog: "
    daemon keepalived start
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/$prog
}


stop() {
    echo -n $"Stopping $prog: "
    killproc keepalived
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$prog
}


reload() {
    echo -n $"Reloading $prog: "
    killproc keepalived -1
    RETVAL=$?
    echo
}


# See how we were called.
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    reload)
        reload
        ;;
    restart)
        stop
        start
        ;;
    condrestart)
        if [ -f /var/lock/subsys/$prog ]; then
            stop
            start
        fi
        ;;
    status)
        status keepalived
        RETVAL=$?
        ;;
    *)
        echo "Usage: $0 {start|stop|reload|restart|condrestart|status}"
        RETVAL=1
esac


exit $RETVAL

5 Edit the Keepalived configuration file

cd /etc/keepalived
cp keepalived.conf keepalived.conf.back
rm keepalived.conf
vim keepalived.conf

Add the following content:

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.227.88
        192.168.227.99
    }
}


virtual_server 192.168.227.88 80 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP
    real_server 172.18.0.210 80 {
        weight 1
    }
}


virtual_server 192.168.227.99 80 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP
    real_server 172.18.0.220 80 {
        weight 1
    }
}

Note: interface must be set to your server's actual NIC name, otherwise the VIP mapping will not work.

6 Start Keepalived

systemctl daemon-reload
systemctl enable keepalived.service
systemctl start keepalived.service

After every configuration change you must run systemctl daemon-reload, or the new configuration will not take effect.

7 Check that the Keepalived process exists

ps -ef|grep keepalived

8 Check Keepalived's running status

systemctl status keepalived.service

9 Check that the virtual IPs are mapped

ip addr

10 Ping both IPs

If both IPs respond, Keepalived is installed successfully.



A Front-End Master-Slave Hot-Standby System in Docker Containers

The host itself only needs the Docker engine and the Keepalived virtual IP mapping; everything that follows happens inside Docker containers. Why install Keepalived on the host at all? Because containers cannot be reached directly via the external IP, the host must expose a virtual IP and bridge it to the containers so the two sides can connect internally.

The diagram below makes this clear:

Correction: the IP accessed in the diagram should be 172.18.0.210, the virtual IP inside the container network.

Next we deploy the master-slave part of the front-end servers.

1 Pull the centos7 image

docker pull centos:7

2 Create a container

docker run -it -d --name centos1 centos:7

3 Enter the centos1 container

docker exec -it centos1 bash

4 Install common tools

yum update -y
yum install -y vim
yum install -y wget
yum install -y gcc-c++
yum install -y pcre pcre-devel
yum install -y zlib zlib-devel
yum install -y openssl-devel
yum install -y popt-devel
yum install -y initscripts
yum install -y net-tools

5 Commit the container as a new image, so future containers can be created directly from it

docker commit -a 'cfh' -m 'centos with common tools' centos1 centos_base

6 Remove the centos1 container and create a new container from the base image to install Keepalived + Nginx

docker rm -f centos1
# the container needs systemctl, so it must be started with /usr/sbin/init
docker run -it --name centos_temp -d --privileged centos_base /usr/sbin/init
docker exec -it centos_temp bash

7 Install Nginx

# installing nginx with yum requires the Nginx repository; install it first
rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
# then install nginx
yum install -y nginx
# start nginx
systemctl start nginx.service

8 Install Keepalived

# 1. download keepalived
wget http://www.keepalived.org/software/keepalived-1.2.18.tar.gz

# 2. extract it
tar -zxvf keepalived-1.2.18.tar.gz -C /usr/local/

# 3. install the openssl dependency
yum install -y openssl openssl-devel

# 4. configure the build
cd /usr/local/keepalived-1.2.18/ && ./configure --prefix=/usr/local/keepalived

# 5. compile and install
make && make install

9 Register Keepalived as a system service

mkdir /etc/keepalived
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
ln -s /usr/local/sbin/keepalived /usr/sbin/
# optionally enable start on boot: chkconfig keepalived on
# installation is complete


# if startup fails, run the following
cd /usr/sbin/
rm -f keepalived
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/


# start keepalived
systemctl daemon-reload                  # reload unit files
systemctl enable keepalived.service      # enable start on boot
systemctl start keepalived.service       # start the service
systemctl status keepalived.service      # check service status

10 Edit /etc/keepalived/keepalived.conf

# back up the configuration file
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.backup
rm -f keepalived.conf
vim keepalived.conf


# the configuration is as follows


vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight -20
}


vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 121
    mcast_src_ip 172.18.0.201
    priority 100
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    track_script {
        chk_nginx
    }

    virtual_ipaddress {
        172.18.0.210
    }
}

11 Edit the Nginx configuration

vim /etc/nginx/conf.d/default.conf

upstream tomcat {
  server 172.18.0.11:80;
  server 172.18.0.12:80;
  server 172.18.0.13:80;
}


server {
    listen       80;
    server_name  172.18.0.210;

    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    location / {
        proxy_pass http://tomcat;
        index index.html index.htm;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root html;
    #    fastcgi_pass 127.0.0.1:9000;
    #    fastcgi_index index.php;
    #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    #    include fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}

12 Add the heartbeat-check script

vim nginx_check.sh
# script content follows
#!/bin/bash
A=`ps -C nginx --no-header |wc -l`
if [ $A -eq 0 ];then
    /usr/sbin/nginx   # nginx was installed via yum, so its binary is /usr/sbin/nginx
    sleep 2
    if [ `ps -C nginx --no-header |wc -l` -eq 0 ];then
        killall keepalived
    fi
fi
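The script's decision logic can be isolated as a pure function, which makes it easier to reason about (a sketch only: `failover_action` is a made-up name, and the real script queries `ps` rather than taking arguments):

```shell
# Given the nginx process count before and after a restart attempt,
# decide what the health check should do.
failover_action() {
  # $1: current nginx process count; $2: count after attempting a restart
  if [ "$1" -gt 0 ]; then
    echo keep              # nginx alive: keepalived stays up, VIP stays here
  elif [ "$2" -gt 0 ]; then
    echo restarted         # restart succeeded: keep the VIP
  else
    echo kill-keepalived   # restart failed: drop keepalived so the backup takes the VIP
  fi
}

failover_action 2 2
failover_action 0 1
failover_action 0 0
```

Killing keepalived is the crucial step: once the VRRP advertisements stop, the backup node wins the next election and the virtual IP moves over.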

13 Make the script executable

chmod +x nginx_check.sh

14 Enable start on boot

systemctl enable keepalived.service

# start keepalived
systemctl daemon-reload
systemctl start keepalived.service

15 Check that the virtual IP works

ping 172.18.0.210

16 Commit the centos_temp container as a new image

docker commit -a 'cfh' -m 'centos with keepalived nginx' centos_temp centos_kn

17 Remove all containers

docker rm -f `docker ps -a -q`

18 Create new containers from the committed image

Name them centos_web_master and centos_web_slave.

docker run --privileged  -tid \
--name centos_web_master --restart=always \
--net mynet --ip 172.18.0.201 \
centos_kn /usr/sbin/init




docker run --privileged  -tid \
--name centos_web_slave --restart=always \
--net mynet --ip 172.18.0.202 \
centos_kn /usr/sbin/init

19 Modify the nginx and keepalived configuration inside centos_web_slave

The keepalived changes are:

state BACKUP               # make this node the backup (keepalived only accepts MASTER or BACKUP)
mcast_src_ip 172.18.0.202  # change to this container's own IP
priority 80                # set the priority lower than the master's

The Nginx configuration is:

upstream tomcat {
  server 172.18.0.14:80;
  server 172.18.0.15:80;
  server 172.18.0.16:80;
}


server {
    listen       80;
    server_name  172.18.0.210;

    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    location / {
        proxy_pass http://tomcat;
        index index.html index.htm;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root html;
    #    fastcgi_pass 127.0.0.1:9000;
    #    fastcgi_index index.php;
    #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    #    include fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}

Restart keepalived and nginx:

systemctl daemon-reload
systemctl restart keepalived.service
systemctl restart nginx.service

20 Start six front-end Nginx servers

docker pull nginx


nginx_web_1='/home/root123/cfh/nginx1'
nginx_web_2='/home/root123/cfh/nginx2'
nginx_web_3='/home/root123/cfh/nginx3'
nginx_web_4='/home/root123/cfh/nginx4'
nginx_web_5='/home/root123/cfh/nginx5'
nginx_web_6='/home/root123/cfh/nginx6'


mkdir -p ${nginx_web_1}/conf ${nginx_web_1}/conf.d ${nginx_web_1}/html ${nginx_web_1}/logs
mkdir -p ${nginx_web_2}/conf ${nginx_web_2}/conf.d ${nginx_web_2}/html ${nginx_web_2}/logs
mkdir -p ${nginx_web_3}/conf ${nginx_web_3}/conf.d ${nginx_web_3}/html ${nginx_web_3}/logs
mkdir -p ${nginx_web_4}/conf ${nginx_web_4}/conf.d ${nginx_web_4}/html ${nginx_web_4}/logs
mkdir -p ${nginx_web_5}/conf ${nginx_web_5}/conf.d ${nginx_web_5}/html ${nginx_web_5}/logs
mkdir -p ${nginx_web_6}/conf ${nginx_web_6}/conf.d ${nginx_web_6}/html ${nginx_web_6}/logs






docker run -it --name temp_nginx -d nginx
docker ps
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_1}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf  ${nginx_web_1}/conf.d/default.conf




docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_2}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf  ${nginx_web_2}/conf.d/default.conf


docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_3}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf  ${nginx_web_3}/conf.d/default.conf


docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_4}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf  ${nginx_web_4}/conf.d/default.conf


docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_5}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf  ${nginx_web_5}/conf.d/default.conf


docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_6}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf  ${nginx_web_6}/conf.d/default.conf


docker rm -f temp_nginx




docker run -d  --name nginx_web_1 \
--network=mynet --ip 172.18.0.11 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_1}/html/:/usr/share/nginx/html \
-v ${nginx_web_1}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_1}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_1}/logs/:/var/log/nginx --privileged --restart=always nginx


docker run -d  --name nginx_web_2 \
--network=mynet --ip 172.18.0.12 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_2}/html/:/usr/share/nginx/html \
-v ${nginx_web_2}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_2}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_2}/logs/:/var/log/nginx --privileged --restart=always nginx


docker run -d  --name nginx_web_3 \
--network=mynet --ip 172.18.0.13 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_3}/html/:/usr/share/nginx/html \
-v ${nginx_web_3}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_3}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_3}/logs/:/var/log/nginx --privileged --restart=always nginx


docker run -d  --name nginx_web_4 \
--network=mynet --ip 172.18.0.14 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_4}/html/:/usr/share/nginx/html \
-v ${nginx_web_4}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_4}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_4}/logs/:/var/log/nginx --privileged --restart=always nginx


docker run -d  --name nginx_web_5 \
--network=mynet --ip 172.18.0.15 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_5}/html/:/usr/share/nginx/html \
-v ${nginx_web_5}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_5}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_5}/logs/:/var/log/nginx --privileged --restart=always nginx


docker run -d  --name nginx_web_6 \
--network=mynet --ip 172.18.0.16 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_6}/html/:/usr/share/nginx/html \
-v ${nginx_web_6}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_6}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_6}/logs/:/var/log/nginx --privileged --restart=always nginx






cd ${nginx_web_1}/html
cp /home/server/envconf/index.html ${nginx_web_1}/html/index.html


cd ${nginx_web_2}/html
cp /home/server/envconf/index.html ${nginx_web_2}/html/index.html


cd ${nginx_web_3}/html
cp /home/server/envconf/index.html ${nginx_web_3}/html/index.html


cd ${nginx_web_4}/html
cp /home/server/envconf/index.html ${nginx_web_4}/html/index.html


cd ${nginx_web_5}/html
cp /home/server/envconf/index.html ${nginx_web_5}/html/index.html


cd ${nginx_web_6}/html
cp /home/server/envconf/index.html ${nginx_web_6}/html/index.html

/home/server/envconf/ is where I keep my own files; create whatever directory suits you. The content of index.html follows.

<!DOCTYPE html>
<html lang="en" xmlns:v-on="http://www.w3.org/1999/xhtml">
<head>
    <meta charset="UTF-8">
    <title>Master-Slave Test</title>


</head>


<script src="https://cdn.jsdelivr.net/npm/vue"></script>
<script src="https://cdn.staticfile.org/vue-resource/1.5.1/vue-resource.min.js"></script>
<body>
<div id="app" style="height: 300px;width: 600px">




    <h1 style="color: red">This is the front-end WEB page</h1>


    <br>
    showMsg:{{message}}
    <br>
    <br>
    <br>
    <button v-on:click="getMsg">Fetch back-end data</button>


</div>
</body>
</html>


<script>

    var app = new Vue({
        el: '#app',
        data: {
            message: 'Hello Vue!'
        },
        methods: {
            getMsg: function () {
                var ip="http://192.168.227.99"
                var that=this;
                // send a GET request
                that.$http.get(ip+'/api/test').then(function(res){
                   that.message=res.data;
                },function(){
                    console.log('request failed');
                });
            }
        }
    })

</script>

21 Visit 192.168.227.88 in a browser; you should see the index.html page.

22 Testing

  1. Stop the centos_web_master container and check that the page is still reachable.
  2. Restart centos_web_master and check that traffic switches back from slave to master (to make the switch obvious, add a marker to index.html, e.g. put "master" or "slave" in the title).
  3. Stop any of the master's web containers and check that nginx load balancing still works.

If all of the above behave correctly, the front-end part is done.



A Back-End Master-Slave Hot-Standby System in Docker Containers

For the back end we use openjdk containers to run the jar, and we create the master and slave containers from the centos_kn image above, then just adjust the configuration.

To make openjdk run the jar automatically when the container starts, we rebuild the image with a Dockerfile:

1 Create the Dockerfile

FROM openjdk:10
MAINTAINER cfh
WORKDIR /home/soft
CMD ["nohup","java","-jar","docker_server.jar"]


2 Build the image

docker build -t myopenjdk .

3 Create six back-end servers from the new image

docker volume create S1
docker volume inspect S1


docker volume create S2
docker volume inspect S2




docker volume create S3
docker volume inspect S3


docker volume create S4
docker volume inspect S4


docker volume create S5
docker volume inspect S5


docker volume create S6
docker volume inspect S6


cd /var/lib/docker/volumes/S1/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S1/_data/docker_server.jar


cd /var/lib/docker/volumes/S2/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S2/_data/docker_server.jar


cd /var/lib/docker/volumes/S3/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S3/_data/docker_server.jar


cd /var/lib/docker/volumes/S4/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S4/_data/docker_server.jar


cd /var/lib/docker/volumes/S5/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S5/_data/docker_server.jar


cd /var/lib/docker/volumes/S6/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S6/_data/docker_server.jar






docker run -it -d --name server_1  -v S1:/home/soft  -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.101 --restart=always myopenjdk


docker run -it -d --name server_2  -v S2:/home/soft  -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.102 --restart=always myopenjdk


docker run -it -d --name server_3  -v S3:/home/soft  -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.103 --restart=always myopenjdk


docker run -it -d --name server_4  -v S4:/home/soft  -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.104 --restart=always myopenjdk


docker run -it -d --name server_5  -v S5:/home/soft  -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.105 --restart=always myopenjdk


docker run -it -d --name server_6  -v S6:/home/soft  -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.106 --restart=always myopenjdk

docker_server.jar is a test program; its main code is:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.*;

import javax.servlet.http.HttpServletResponse;
import java.util.LinkedHashMap;
import java.util.Map;


@RestController
@RequestMapping("api")
@CrossOrigin("*")
public class TestController {

    @Value("${server.port}")
    public int port;

    @RequestMapping(value = "/test",method = RequestMethod.GET)
    public Map<String,Object> test(HttpServletResponse response){
        response.setHeader("Access-Control-Allow-Origin", "*");
        response.setHeader("Access-Control-Allow-Methods", "GET");
        response.setHeader("Access-Control-Allow-Headers","token");
        Map<String,Object> objectMap=new LinkedHashMap<>();
        objectMap.put("code",10000);
        objectMap.put("msg","ok");
        objectMap.put("server_port","server port: "+port);
        return objectMap;
    }
}

4 Create the back-end master and slave containers

Master server:

docker run --privileged  -tid --name centos_server_master --restart=always --net mynet --ip 172.18.0.203 centos_kn /usr/sbin/init

Master Keepalived configuration:

vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight -20
}


vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 110
    mcast_src_ip 172.18.0.203
    priority 100
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }


    track_script {
        chk_nginx
    }


    virtual_ipaddress {
        172.18.0.220
    }
}

Master Nginx configuration:

upstream tomcat {
  server 172.18.0.101:6001;
  server 172.18.0.102:6002;
  server 172.18.0.103:6003;
}


server {
    listen       80;
    server_name  172.18.0.220;

    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    location / {
        proxy_pass http://tomcat;
        index index.html index.htm;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root html;
    #    fastcgi_pass 127.0.0.1:9000;
    #    fastcgi_index index.php;
    #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    #    include fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}

Restart keepalived and nginx:

systemctl daemon-reload
systemctl restart keepalived.service
systemctl restart nginx.service

Backup server:

docker run --privileged  -tid --name centos_server_slave --restart=always --net mynet --ip 172.18.0.204 centos_kn /usr/sbin/init

Backup server Keepalived configuration:

vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight -20
}


vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 110
    mcast_src_ip 172.18.0.204
    priority 80
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }


    track_script {
        chk_nginx
    }


    virtual_ipaddress {
        172.18.0.220
    }
}

Backup server Nginx configuration:

upstream tomcat {
  server 172.18.0.104:6004;
  server 172.18.0.105:6005;
  server 172.18.0.106:6006;
}


server {
    listen       80;
    server_name  172.18.0.220;

    #charset koi8-r;
    #access_log /var/log/nginx/host.access.log main;

    location / {
        proxy_pass http://tomcat;
        index index.html index.htm;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root html;
    #    fastcgi_pass 127.0.0.1:9000;
    #    fastcgi_index index.php;
    #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    #    include fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}

Restart keepalived and nginx:

systemctl daemon-reload
systemctl restart keepalived.service
systemctl restart nginx.service

Verify from the command line

Verify in the browser

Installing Portainer

Portainer is a container-management UI that shows the running state of your containers.

docker search portainer


docker pull portainer/portainer


docker run -d -p 9000:9000 \
    --restart=always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --name portainer-eureka \
    portainer/portainer


http://192.168.227.171:9000

On first visit you set a password (the default account is admin); after creating it you are taken to the next page, where you choose to manage the local containers, i.e. Local, and click Confirm.

Conclusion

That completes the design and deployment of the whole solution. Going deeper, there is Docker Compose for managing groups of containers as one unit, and Kubernetes (K8s) for managing Docker clusters; both take considerable effort to study.

Also make sure you understand Docker's three essentials: images/containers, data volumes, and network management.
