「Hands-On」Dockerized ops deployment of an open-source project: front-end deployment (8)

Original article; reposting is welcome. When reposting, please credit: reposted from IT人故事會. Thanks! Original link: 「實戰篇」開源項目docker化運維部署-前端java部署(八)
This section covers the points to watch when deploying the front end. The renren-fast front end is developed with Node.js, and the build emits static html, css, and img files, so inside the container we don't need Node.js at all: nginx alone can serve the static files. Source: github.com/limingios/n… (front end) / github.com/daxiongYang…

Modify the connection address


Packaging

  • Switch the registry mirror; packaging is a bit faster from inside China

  • Install
You can use the customized cnpm command-line tool (with gzip compression support) in place of the default npm:
$ npm install -g cnpm --registry=https://registry.npm.taobao.org
Upload the build output directory to the nginx host.

renren-nginx<1>

The nginx here is not for load balancing; it acts as the static runtime environment for the built HTML.
  • Create the container
Use the host machine's network.
docker run -it -d --name fn1 \
-v /root/fn1/nginx.conf:/etc/nginx/nginx.conf \
-v /root/fn1/renren-vue:/home/fn1/renren-vue \
--privileged --net=host nginx
  • Write the nginx configuration file
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush on;

    keepalive_timeout  65;

    #gzip on;
    
    proxy_redirect          off;
    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size    10m;
    client_body_buffer_size   128k;
    proxy_connect_timeout   5s;
    proxy_send_timeout      5s;
    proxy_read_timeout      5s;
    proxy_buffer_size        4k;
    proxy_buffers           4 32k;
    proxy_busy_buffers_size  64k;
    proxy_temp_file_write_size 64k;
    
    server {
        listen 6501;
        server_name  192.168.66.100;
        location  /  {
            root  /home/fn1/renren-vue;
            index  index.html;
        }
    }
}

renren-nginx<2>

The nginx here is not for load balancing; it acts as the static runtime environment for the built HTML.
  • Create the container
Use the host machine's network.
docker run -it -d --name fn2 \
-v /root/fn2/nginx.conf:/etc/nginx/nginx.conf \
-v /root/fn2/renren-vue:/home/fn2/renren-vue \
--privileged --net=host nginx
  • Write the nginx configuration file
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush on;

    keepalive_timeout  65;

    #gzip on;
    
    proxy_redirect          off;
    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size    10m;
    client_body_buffer_size   128k;
    proxy_connect_timeout   5s;
    proxy_send_timeout      5s;
    proxy_read_timeout      5s;
    proxy_buffer_size        4k;
    proxy_buffers           4 32k;
    proxy_busy_buffers_size  64k;
    proxy_temp_file_write_size 64k;
    
    server {
        listen 6502;
        server_name  192.168.66.100;
        location  /  {
            root  /home/fn2/renren-vue;
            index  index.html;
        }
    }
}

renren-nginx<3>

The nginx here is not for load balancing; it acts as the static runtime environment for the built HTML.
  • Create the container
Use the host machine's network.
docker run -it -d --name fn3 \
-v /root/fn3/nginx.conf:/etc/nginx/nginx.conf \
-v /root/fn3/renren-vue:/home/fn3/renren-vue \
--privileged --net=host nginx
  • Write the nginx configuration file
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush on;

    keepalive_timeout  65;

    #gzip on;
    
    proxy_redirect          off;
    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size    10m;
    client_body_buffer_size   128k;
    proxy_connect_timeout   5s;
    proxy_send_timeout      5s;
    proxy_read_timeout      5s;
    proxy_buffer_size        4k;
    proxy_buffers           4 32k;
    proxy_busy_buffers_size  64k;
    proxy_temp_file_write_size 64k;
    
    server {
        listen 6503;
        server_name  192.168.66.100;
        location  /  {
            root  /home/fn3/renren-vue;
            index  index.html;
        }
    }
}
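The three node configurations above differ only in the listen port (6501-6503) and the document root. A minimal shell sketch that generates all three from one loop; the ./fn-confs output directory is an assumption for illustration, and the template is trimmed to the server-specific parts (the article's full config also sets logging and proxy buffers):

```shell
#!/bin/sh
# Generate the per-node nginx configs for fn1..fn3.
# Ports and roots match the article; output dir is hypothetical.
set -eu
outdir="${1:-./fn-confs}"
for i in 1 2 3; do
  port=$((6500 + i))
  mkdir -p "$outdir/fn$i"
  # Unquoted EOF so $port and $i expand inside the template
  cat > "$outdir/fn$i/nginx.conf" <<EOF
user  nginx;
worker_processes  1;
events { worker_connections 1024; }
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout 65;
    server {
        listen $port;
        server_name 192.168.66.100;
        location / {
            root  /home/fn$i/renren-vue;
            index index.html;
        }
    }
}
EOF
done
echo "wrote configs to $outdir"
```

Each generated file can then be mounted into its container as /etc/nginx/nginx.conf, as in the docker run commands above.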

Load balancing


nginx-ff1

  • Create the ff1 container
docker run -it -d --name ff1 \
-v /root/ff1/nginx.conf:/etc/nginx/nginx.conf \
--privileged --net=host nginx
  • Load balancer ff1 - nginx configuration
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush on;

    keepalive_timeout  65;

    #gzip on;
    
    proxy_redirect          off;
    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size    10m;
    client_body_buffer_size   128k;
    proxy_connect_timeout   5s;
    proxy_send_timeout      5s;
    proxy_read_timeout      5s;
    proxy_buffer_size        4k;
    proxy_buffers           4 32k;
    proxy_busy_buffers_size  64k;
    proxy_temp_file_write_size 64k;
    
    upstream fn {
        server 192.168.66.100:6501;
        server 192.168.66.100:6502;
        server 192.168.66.100:6503;
    }
    server {
        listen       6601;
        server_name  192.168.66.100; 
        location / {  
            proxy_pass   http://fn;
            index  index.html index.htm;  
        }  

    }
}

nginx-ff2

  • Create the ff2 container
docker run -it -d --name ff2 \
-v /root/ff2/nginx.conf:/etc/nginx/nginx.conf \
--privileged --net=host nginx
  • Load balancer ff2 - nginx configuration
user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush on;

    keepalive_timeout  65;

    #gzip on;
    
    proxy_redirect          off;
    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size    10m;
    client_body_buffer_size   128k;
    proxy_connect_timeout   5s;
    proxy_send_timeout      5s;
    proxy_read_timeout      5s;
    proxy_buffer_size        4k;
    proxy_buffers           4 32k;
    proxy_busy_buffers_size  64k;
    proxy_temp_file_write_size 64k;
    
    upstream fn {
        server 192.168.66.100:6501;
        server 192.168.66.100:6502;
        server 192.168.66.100:6503;
    }
    server {
        listen       6602;
        server_name  192.168.66.100; 
        location / {  
            proxy_pass   http://fn;
            index  index.html index.htm;  
        }  

    }
}
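The two load-balancer configs above differ only in the listen port (6601 vs 6602). One optional refinement, not used in the article: nginx's upstream servers accept passive health-check parameters, so a failed fn node is skipped for a while instead of being retried on every request. A hedged sketch of the upstream block with those parameters:

```nginx
upstream fn {
    # max_fails / fail_timeout are standard nginx server parameters:
    # after 2 failed attempts, skip this server for 10s before retrying
    server 192.168.66.100:6501 max_fails=2 fail_timeout=10s;
    server 192.168.66.100:6502 max_fails=2 fail_timeout=10s;
    server 192.168.66.100:6503 max_fails=2 fail_timeout=10s;
}
```

The default values (max_fails=1, fail_timeout=10s) already apply if the parameters are omitted, so the article's plain upstream block also gets basic failover.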

Hot-standby load balancing for the front end

ff1 and ff2 are now set up and both can reach the back-end nodes normally, but without keepalived they cannot contend for a virtual IP between them, so there is no hot standby. This part sets up hot standby.

Enter the ff1 container and install keepalived
keepalived must run inside the ff1 container; alternatively you could pull a ready-made nginx-keepalived image from a Docker registry. Here we install keepalived directly inside the container.
docker exec -it ff1 /bin/bash
# write a DNS server so apt-get update can find the package servers
echo "nameserver 8.8.8.8" | tee /etc/resolv.conf > /dev/null 
apt-get clean
apt-get update
apt-get install vim
vi /etc/apt/sources.list
Add the following to sources.list:
deb http://mirrors.163.com/ubuntu/ precise main universe restricted multiverse 
deb-src http://mirrors.163.com/ubuntu/ precise main universe restricted multiverse 
deb http://mirrors.163.com/ubuntu/ precise-security universe main multiverse restricted 
deb-src http://mirrors.163.com/ubuntu/ precise-security universe main multiverse restricted 
deb http://mirrors.163.com/ubuntu/ precise-updates universe main multiverse restricted 
deb http://mirrors.163.com/ubuntu/ precise-proposed universe main multiverse restricted 
deb-src http://mirrors.163.com/ubuntu/ precise-proposed universe main multiverse restricted 
deb http://mirrors.163.com/ubuntu/ precise-backports universe main multiverse restricted 
deb-src http://mirrors.163.com/ubuntu/ precise-backports universe main multiverse restricted 
deb-src http://mirrors.163.com/ubuntu/ precise-updates universe main multiverse restricted
  • Update the apt sources
apt-get clean
apt-get update
apt-get install keepalived
apt-get install vim

  • keepalived configuration file
Path inside the container: /etc/keepalived/keepalived.conf
vi /etc/keepalived/keepalived.conf
keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.66.152
    }
}
virtual_server 192.168.66.152 6701 {
    delay_loop 3
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP
    real_server 192.168.66.100 6601 {
        weight 1
    }
}
  1. The name VI_1 can be customized.
  2. state MASTER | the keepalived role (MASTER is the primary; BACKUP is the standby and will not preempt the virtual IP). If both nodes are set to MASTER, they contend for the IP: whichever grabs it acts as MASTER and the other becomes the standby.
  3. interface | the network interface the virtual IP is bound to, i.e. the NIC device name. ens33 is the host machine's NIC.
  4. virtual_router_id 51 | the virtual router ID; it must be identical on MASTER and BACKUP. Valid values are 0-255.
  5. priority 100 | the weight. MASTER's priority should be higher than BACKUP's; the larger the number, the higher the priority. Set it according to the hardware: the node with the highest priority wins the contention.
  6. advert_int 1 | the heartbeat interval: how often MASTER and BACKUP run their synchronization check, in seconds. It must be the same on both nodes.
  7. authentication | the authentication between primary and standby. Both must use the same password to communicate; the heartbeat checks between the hosts are authenticated with this shared credential.
  8. virtual_ipaddress | the virtual IP addresses; several can be listed, one per line. They are bound to the ens33 interface configured above. 192.168.66.152 is the virtual IP we defined.
  • Start keepalived
Start it inside the container:
service keepalived start
Enter the ff2 container and install keepalived
keepalived must run inside the ff2 container; alternatively you could pull a ready-made nginx-keepalived image from a Docker registry. Here we install keepalived directly inside the container.
docker exec -it ff2 /bin/bash
# write a DNS server so apt-get update can find the package servers
echo "nameserver 8.8.8.8" | tee /etc/resolv.conf > /dev/null 
apt-get clean
apt-get update
apt-get install vim
vi /etc/apt/sources.list
Add the following to sources.list:
deb http://mirrors.163.com/ubuntu/ precise main universe restricted multiverse 
deb-src http://mirrors.163.com/ubuntu/ precise main universe restricted multiverse 
deb http://mirrors.163.com/ubuntu/ precise-security universe main multiverse restricted 
deb-src http://mirrors.163.com/ubuntu/ precise-security universe main multiverse restricted 
deb http://mirrors.163.com/ubuntu/ precise-updates universe main multiverse restricted 
deb http://mirrors.163.com/ubuntu/ precise-proposed universe main multiverse restricted 
deb-src http://mirrors.163.com/ubuntu/ precise-proposed universe main multiverse restricted 
deb http://mirrors.163.com/ubuntu/ precise-backports universe main multiverse restricted 
deb-src http://mirrors.163.com/ubuntu/ precise-backports universe main multiverse restricted 
deb-src http://mirrors.163.com/ubuntu/ precise-updates universe main multiverse restricted

  • Update the apt sources
apt-get clean
apt-get update
apt-get install keepalived
apt-get install vim
  • keepalived configuration file
Path inside the container: /etc/keepalived/keepalived.conf
vi /etc/keepalived/keepalived.conf
keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.66.152
    }
}
virtual_server 192.168.66.152 6701 {
    delay_loop 3
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP
    real_server 192.168.66.100 6602 {
        weight 1
    }
}
  1. The name VI_1 can be customized.
  2. state MASTER | the keepalived role (MASTER is the primary; BACKUP is the standby and will not preempt the virtual IP). If both nodes are set to MASTER, they contend for the IP: whichever grabs it acts as MASTER and the other becomes the standby.
  3. interface | the network interface the virtual IP is bound to, i.e. the NIC device name. ens33 is the host machine's NIC.
  4. virtual_router_id 51 | the virtual router ID; it must be identical on MASTER and BACKUP. Valid values are 0-255.
  5. priority 100 | the weight. MASTER's priority should be higher than BACKUP's; the larger the number, the higher the priority. Set it according to the hardware: the node with the highest priority wins the contention.
  6. advert_int 1 | the heartbeat interval: how often MASTER and BACKUP run their synchronization check, in seconds. It must be the same on both nodes.
  7. authentication | the authentication between primary and standby. Both must use the same password to communicate; the heartbeat checks between the hosts are authenticated with this shared credential.
  8. virtual_ipaddress | the virtual IP addresses; several can be listed, one per line. They are bound to the ens33 interface configured above. 192.168.66.152 is the virtual IP we defined.
  • Start keepalived
Start it inside the container:
service keepalived start
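Note that the configs above run both ff1 and ff2 as MASTER with the same priority, relying on the contention described in point 2. A more conventional hot-standby setup, sketched here as a hedged alternative (the priority value is an assumption, not from the article), makes ff2 an explicit BACKUP with a lower priority so the roles are deterministic:

```
vrrp_instance VI_1 {
    state BACKUP          # ff2 is the standby; it takes the VIP only if ff1 fails
    interface ens33
    virtual_router_id 51  # must match ff1
    priority 90           # lower than ff1's 100, so ff1 reclaims the VIP on recovery
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.66.152
    }
}
```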
PS: Front-end and back-end deployment follow the same approach: start several containers, put two load balancers in front of them, and install keepalived inside the load balancers for hot standby. The key point is planning the ports. Honestly, though, this setup is for everyday practice and personal projects; across multiple machines you cannot do it this way. Next time we will connect multiple machines using docker swarm's networking.
