Note: the figure shows the overall system design. The upper half is the application (business) server and the lower half is the database server. This document only covers deployment of the application server; the database half is comparatively simple. The IPs marked in the figure are the ones I use in my own virtual machines — adjust them to your actual environment.
LVS is open-source software that implements layer-4 (transport-layer) load balancing. LVS stands for Linux Virtual Server. It currently offers three IP load-balancing techniques (VS/NAT, VS/TUN and VS/DR) and eight scheduling algorithms (rr, wrr, lc, wlc, lblc, lblcr, dh, sh).
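As a quick illustration of those modes and algorithms, here is a minimal ipvsadm sketch (not part of this deployment; the VIP 192.168.1.100 and the real-server addresses are placeholders) that sets up a VS/NAT virtual service with rr scheduling:
# add a virtual service on the VIP, scheduled round-robin
ipvsadm -A -t 192.168.1.100:80 -s rr
# attach two real servers in NAT mode (-m = masquerading)
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.11:80 -m
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.12:80 -m
# list the current rules with numeric addresses
ipvsadm -Ln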
LVS can balance load, but it cannot do health checks: if a real server (RS) fails, LVS keeps forwarding requests to the failed RS, and those requests are wasted. Keepalived adds health checking and at the same time makes LVS itself highly available, removing the LVS single point of failure — keepalived was in fact created for LVS.
Keepalived is software similar in spirit to layer 2/4/7 switching. It is a service used in Linux cluster management to keep a cluster highly available; its purpose is to prevent single points of failure.
Keepalived is built on the VRRP protocol. Its main jobs are fault isolation of real servers and failover between load balancers, preventing single points of failure. Before looking at how keepalived works, let's first look at VRRP.
VRRP, the Virtual Router Redundancy Protocol, is a fault-tolerance protocol: when a host's next-hop router fails, another router takes over the failed router's work, keeping network communication continuous and reliable. Before introducing VRRP itself, here are some related terms:
Virtual router: consists of one Master router and several Backup routers. Hosts on the LAN use the virtual router as their default gateway.
VRID: the identifier of a virtual router. A group of routers sharing the same VRID forms one virtual router.
Master router: the router in the virtual router that actually forwards packets.
Backup router: a router that can take over the Master's work when the Master fails.
Virtual IP address: the IP address of the virtual router. A virtual router may own one or more IP addresses.
IP address owner: a router whose interface IP address is identical to the virtual IP address.
Virtual MAC address: each virtual router has one virtual MAC address, in the format 00-00-5E-00-01-{VRID}. Normally the virtual router answers ARP requests with the virtual MAC address; it answers with the interface's real MAC address only when specially configured to do so.
Priority: VRRP uses priority to decide the role of each router within the virtual router.
Non-preemptive mode: if the Backup routers work in non-preemptive mode, then as long as the Master has not failed, a Backup will not become Master even if it is later given a higher priority.
Preemptive mode: if a Backup router works in preemptive mode, it compares its own priority with the priority carried in the VRRP advertisements it receives. If its priority is higher than the current Master's, it preempts and becomes the Master; otherwise it remains in the Backup state.
VRRP groups a set of routers on a LAN into a VRRP backup group, which functions as a single router and is identified by a virtual router ID (VRID). The virtual router has its own virtual IP address and virtual MAC address, and to the outside it behaves exactly like a physical router. Hosts on the LAN set the virtual router's IP address as their default gateway and communicate with external networks through it.
The virtual router runs on top of real physical routers: one Master and several Backups. While the Master works normally, hosts on the LAN reach the outside through it. When the Master fails, one of the Backup routers becomes the new Master and takes over packet forwarding (high availability for the router).
You can also refer to an article I wrote: juejin.im/post/5dae55…
Nginx is a high-performance HTTP server / reverse proxy server and mail (IMAP/POP3) proxy server.
Typical roles: clustering (raise throughput, take load off any single server), reverse proxying (hide real IP addresses), virtual servers, static file serving (separating static from dynamic content), solving cross-origin (CORS) problems, and building an enterprise-grade API gateway with nginx. A small illustration of the reverse-proxy and CORS roles follows below.
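A minimal sketch of the reverse-proxy and CORS roles just mentioned (the backend address 127.0.0.1:8080 is a placeholder, not part of this deployment):
server {
    listen 80;
    location /api/ {
        # hide the real backend behind the proxy
        proxy_pass http://127.0.0.1:8080;
        # allow cross-origin requests from browser front-ends
        add_header Access-Control-Allow-Origin *;
    }
}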
sudo apt-get remove docker docker-engine docker.io containerd runc
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common
sudo curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
apt-cache madison docker-ce
sudo apt-get install -y docker-ce=17.12.1~ce-0~ubuntu
docker --version
sudo gpasswd -a <username> docker    # replace <username> with your own login name
sudo service docker restart
newgrp - docker
A container's IP changes every time it restarts, which is not what we want — we want each container to have its own fixed IP. By default containers use the docker0 bridge, which does not allow custom IPs, so we create our own bridge network and assign container IPs explicitly; that way a container keeps the same IP across restarts.
docker network create --subnet=172.18.0.0/24 mynet
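To confirm the fixed-IP behaviour, you can inspect the new bridge and start a throw-away container on it (test_ip is just a hypothetical name used for this check):
docker network inspect mynet      # the 172.18.0.0/24 subnet should be listed
docker run -d --name test_ip --net mynet --ip 172.18.0.250 centos:7 sleep 3600
docker inspect -f '{{.NetworkSettings.Networks.mynet.IPAddress}}' test_ip   # prints 172.18.0.250
docker rm -f test_ip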
sudo apt-get install -y gcc
sudo apt-get install -y g++
sudo apt-get install -y libssl-dev
sudo apt-get install -y daemon
sudo apt-get install -y make
sudo apt-get install -y sysv-rc-conf
cd /usr/local/
wget http://www.keepalived.org/software/keepalived-1.2.18.tar.gz
tar zxvf keepalived-1.2.18.tar.gz
cd keepalived-1.2.18
./configure --prefix=/usr/local/keepalived
make && make install
mkdir /etc/keepalived
mkdir /etc/sysconfig
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
ln -s /usr/local/sbin/keepalived /usr/sbin/
ln -s /usr/local/keepalived/sbin/keepalived /sbin/
由於除Readhat以外的linux沒有/etc/rc.d/init.d/functions,因此須要修改原來的啓動文件
修改後總體內容以下
#!/bin/sh
#
# Startup script for the Keepalived daemon
#
# processname: keepalived
# pidfile: /var/run/keepalived.pid
# config: /etc/keepalived/keepalived.conf
# chkconfig: - 21 79
# description: Start and stop Keepalived
# Source function library
#. /etc/rc.d/init.d/functions
. /lib/lsb/init-functions
# Source configuration file (we set KEEPALIVED_OPTIONS there)
. /etc/sysconfig/keepalived
RETVAL=0
prog="keepalived"
start() {
echo -n $"Starting $prog: "
daemon keepalived start
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && touch /var/lock/subsys/$prog
}
stop() {
echo -n $"Stopping $prog: "
killproc keepalived
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$prog
}
reload() {
echo -n $"Reloading $prog: "
killproc keepalived -1
RETVAL=$?
echo
}
# See how we were called.
case "$1" in
start)
start
;;
stop)
stop
;;
reload)
reload
;;
restart)
stop
start
;;
condrestart)
if [ -f /var/lock/subsys/$prog ]; then
stop
start
fi
;;
status)
status keepalived
RETVAL=$?
;;
*)
echo "Usage: $0 {start|stop|reload|restart|condrestart|status}"
RETVAL=1
esac
exit $RETVAL
cd /etc/keepalived
cp keepalived.conf keepalived.conf.back
rm keepalived.conf
vim keepalived.conf
Add the following content:
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.227.88
192.168.227.99
}
}
virtual_server 192.168.227.88 80 {
delay_loop 6
lb_algo rr
lb_kind NAT
persistence_timeout 50
protocol TCP
    real_server 172.18.0.210 80 {
weight 1
}
}
virtual_server 192.168.227.99 80 {
delay_loop 6
lb_algo rr
lb_kind NAT
persistence_timeout 50
protocol TCP
    real_server 172.18.0.220 80 {
        weight 1
    }
}
Note: interface must be set to the name of your server's actual network card, otherwise the VIP cannot be bound to it.
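If you are not sure of the NIC name, list the interfaces and use the one that carries the host's LAN address:
ip -o -4 addr show    # e.g. ens33, eth0 or enp0s3 -- pick the interface holding your LAN IP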
systemctl daemon-reload
systemctl enable keepalived.service
systemctl start keepalived.service
After every change to the configuration file you must run systemctl daemon-reload, otherwise the new configuration will not take effect.
ps -ef | grep keepalived
systemctl status keepalived.service
ip addr
You can see that both IPs respond; keepalived is now installed successfully.
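As an extra sanity check you can also ping the two VIPs from another machine on the same LAN (adjust the addresses to your own):
ping -c 3 192.168.227.88
ping -c 3 192.168.227.99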
The host itself only needs the Docker engine and the keepalived virtual IP mapping; the rest of the work happens inside Docker containers. Why install keepalived on the host at all? Because the containers cannot be reached directly from the external network, the host has to provide a virtual IP and bridge it to the containers so that the two sides can talk to each other.
See the diagram below — it should make this clear at a glance:
Correction: the IP being accessed in the diagram should be 172.18.0.210, the virtual IP inside the containers.
Next we set up the master/slave part of the front-end servers.
docker pull centos:7
docker run -it -d --name centos1 -d centos:7
docker exec -it centos1 bash
yum update -y
yum install -y vim
yum install -y wget
yum install -y gcc-c++
yum install -y pcre pcre-devel
yum install -y zlib zlib-devel
yum install -y openssl-devel
yum install -y popt-devel
yum install -y initscripts
yum install -y net-tools
docker commit -a 'cfh' -m 'centos with common tools' centos1 centos_base
docker rm -f centos1
# The container needs to use systemctl, so it must be started with /usr/sbin/init
docker run -it --name centos_temp -d --privileged centos_base /usr/sbin/init
docker exec -it centos_temp bash
# To install nginx with yum we first need the nginx repository; install it:
rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
# Install nginx:
yum install -y nginx
# Start nginx:
systemctl start nginx.service
1. Download keepalived:
wget http://www.keepalived.org/software/keepalived-1.2.18.tar.gz
2. Unpack it:
tar -zxvf keepalived-1.2.18.tar.gz -C /usr/local/
3. Install the openssl dependency:
yum install -y openssl openssl-devel
4. Configure keepalived:
cd /usr/local/keepalived-1.2.18/ && ./configure --prefix=/usr/local/keepalived
5. Build and install:
make && make install
mkdir /etc/keepalived
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
ln -s /usr/local/sbin/keepalived /usr/sbin/
You can enable start on boot with: chkconfig keepalived on
That completes the installation.
# If starting keepalived reports an error, run the following commands
cd /usr/sbin/
rm -f keepalived
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
# Start keepalived
systemctl daemon-reload              # reload unit files
systemctl enable keepalived.service  # enable start on boot
systemctl start keepalived.service   # start the service
systemctl status keepalived.service  # check the service status
# Back up the configuration file
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.backup
cd /etc/keepalived
rm -f keepalived.conf
vim keepalived.conf
# The configuration file is as follows
vrrp_script chk_nginx {
script "/etc/keepalived/nginx_check.sh"
interval 2
weight -20
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 121
mcast_src_ip 172.18.0.201
priority 100
nopreempt
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
track_script {
chk_nginx
}
virtual_ipaddress {
172.18.0.210
    }
}
vim /etc/nginx/conf.d/default.conf
upstream tomcat{
server 172.18.0.11:80;
server 172.18.0.12:80;
server 172.18.0.13:80;
}
server {
listen 80;
server_name 172.18.0.210;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
proxy_pass http://tomcat;
        index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
    #    deny all;
    #}
}
vim nginx_check.sh
# The script content is as follows
#!/bin/bash
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ];then
    /usr/sbin/nginx    # nginx was installed via yum, so the binary lives at /usr/sbin/nginx
sleep 2
if [ `ps -C nginx --no-header |wc -l` -eq 0 ];then
killall keepalived
fi
fi
chmod +x nginx_check.sh
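Before relying on it, it is worth dry-running the health-check script by hand (a quick check of my own, not one of the original steps):
bash -x /etc/keepalived/nginx_check.sh     # trace what the script does
ps -ef | grep -v grep | grep nginx         # nginx should still be running afterwards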
systemctl enable keepalived.service
# Start keepalived
systemctl daemon-reload
systemctl start keepalived.service
ping 172.18.0.210
docker commit -a 'cfh' -m 'centos with keepalived nginx' centos_temp centos_kn
docker rm -f `docker ps -a -q`
Create two containers from this image, named centos_web_master and centos_web_slave:
docker run --privileged -tid \
--name centos_web_master --restart=always \
--net mynet --ip 172.18.0.201 \
centos_kn /usr/sbin/init
docker run --privileged -tid \
--name centos_web_slave --restart=always \
--net mynet --ip 172.18.0.202 \
centos_kn /usr/sbin/init
The keepalived changes on the slave are as follows:
state BACKUP                 # set this node as the backup (keepalived uses BACKUP, not SLAVE)
mcast_src_ip 172.18.0.202    # change to this machine's own IP
priority 80                  # a lower priority than the master
The nginx configuration is as follows:
upstream tomcat{
server 172.18.0.14:80;
server 172.18.0.15:80;
server 172.18.0.16:80;
}
server {
listen 80;
server_name 172.18.0.210;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
proxy_pass http://tomcat;
        index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
    #    deny all;
    #}
}
Restart keepalived and nginx:
systemctl daemon-reload
systemctl restart keepalived.service
systemctl restart nginx.service
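At this point the VIP 172.18.0.210 should sit only on the master container, and stopping nginx there should let the slave take it over. A hedged verification sketch, run from the host (it assumes the ip command is available inside the containers; install iproute with yum if it is not):
docker exec centos_web_master ip addr show eth0 | grep 172.18.0.210   # VIP is on the master
docker exec centos_web_slave ip addr show eth0 | grep 172.18.0.210    # should print nothing
docker exec centos_web_master systemctl stop nginx                    # simulate a failure
sleep 5
docker exec centos_web_slave ip addr show eth0 | grep 172.18.0.210    # the VIP should have moved
docker exec centos_web_master systemctl start nginx                   # restore nginx (and restart keepalived) on the master afterwards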
docker pull nginx
nginx_web_1='/home/root123/cfh/nginx1'
nginx_web_2='/home/root123/cfh/nginx2'
nginx_web_3='/home/root123/cfh/nginx3'
nginx_web_4='/home/root123/cfh/nginx4'
nginx_web_5='/home/root123/cfh/nginx5'
nginx_web_6='/home/root123/cfh/nginx6'
mkdir -p ${nginx_web_1}/conf ${nginx_web_1}/conf.d ${nginx_web_1}/html ${nginx_web_1}/logs
mkdir -p ${nginx_web_2}/conf ${nginx_web_2}/conf.d ${nginx_web_2}/html ${nginx_web_2}/logs
mkdir -p ${nginx_web_3}/conf ${nginx_web_3}/conf.d ${nginx_web_3}/html ${nginx_web_3}/logs
mkdir -p ${nginx_web_4}/conf ${nginx_web_4}/conf.d ${nginx_web_4}/html ${nginx_web_4}/logs
mkdir -p ${nginx_web_5}/conf ${nginx_web_5}/conf.d ${nginx_web_5}/html ${nginx_web_5}/logs
mkdir -p ${nginx_web_6}/conf ${nginx_web_6}/conf.d ${nginx_web_6}/html ${nginx_web_6}/logs
docker run -it --name temp_nginx -d nginx
docker ps
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_1}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_1}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_2}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_2}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_3}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_3}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_4}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_4}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_5}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_5}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_6}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_6}/conf.d/default.conf
docker rm -f temp_nginx
docker run -d --name nginx_web_1 \
--network=mynet --ip 172.18.0.11 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_1}/html/:/usr/share/nginx/html \
-v ${nginx_web_1}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_1}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_1}/logs/:/var/log/nginx --privileged --restart=always nginx
docker run -d --name nginx_web_2 \
--network=mynet --ip 172.18.0.12 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_2}/html/:/usr/share/nginx/html \
-v ${nginx_web_2}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_2}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_2}/logs/:/var/log/nginx --privileged --restart=always nginx
docker run -d --name nginx_web_3 \
--network=mynet --ip 172.18.0.13 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_3}/html/:/usr/share/nginx/html \
-v ${nginx_web_3}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_3}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_3}/logs/:/var/log/nginx --privileged --restart=always nginx
docker run -d --name nginx_web_4 \
--network=mynet --ip 172.18.0.14 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_4}/html/:/usr/share/nginx/html \
-v ${nginx_web_4}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_4}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_4}/logs/:/var/log/nginx --privileged --restart=always nginx
docker run -d --name nginx_web_5 \
--network=mynet --ip 172.18.0.15 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_5}/html/:/usr/share/nginx/html \
-v ${nginx_web_5}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_5}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_5}/logs/:/var/log/nginx --privileged --restart=always nginx
docker run -d --name nginx_web_6 \
--network=mynet --ip 172.18.0.16 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_6}/html/:/usr/share/nginx/html \
-v ${nginx_web_6}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_6}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_6}/logs/:/var/log/nginx --privileged --restart=always nginx
cd ${nginx_web_1}/html
cp /home/server/envconf/index.html ${nginx_web_1}/html/index.html
cd ${nginx_web_2}/html
cp /home/server/envconf/index.html ${nginx_web_2}/html/index.html
cd ${nginx_web_3}/html
cp /home/server/envconf/index.html ${nginx_web_3}/html/index.html
cd ${nginx_web_4}/html
cp /home/server/envconf/index.html ${nginx_web_4}/html/index.html
cd ${nginx_web_5}/html
cp /home/server/envconf/index.html ${nginx_web_5}/html/index.html
cd ${nginx_web_6}/html
cp /home/server/envconf/index.html ${nginx_web_6}/html/index.html
/home/server/envconf/ is simply where I keep my files; readers can create their own directory. The content of index.html is attached below:
<!DOCTYPE html>
<html lang="en" xmlns:v-on="http://www.w3.org/1999/xhtml">
<head>
<meta charset="UTF-8">
    <title>Master/Slave Test</title>
</head>
<script src="https://cdn.jsdelivr.net/npm/vue"></script>
<script src="https://cdn.staticfile.org/vue-resource/1.5.1/vue-resource.min.js"></script>
<body>
<div id="app" style="height: 300px;width: 600px">
    <h1 style="color: red">I am the front-end WEB page</h1>
<br>
showMsg:{{message}}
<br>
<br>
<br>
    <button v-on:click="getMsg">Fetch backend data</button>
</div>
</body>
</html>
<script>
var app = new Vue({
el: '#app',
data: {
message: 'Hello Vue!'
},
methods: {
getMsg: function () {
var ip="http://192.168.227.99"
var that=this;
                // send a GET request
                that.$http.get(ip+'/api/test').then(function(res){
                    that.message=res.data;
                },function(){
                    console.log('request failed');
                });
            }
        }
    })
</script>
If the tests above pass, the main front-end functionality is complete.
For the back-end servers we use openjdk as the container that runs the jar. The master and slave load-balancer containers are created from the centos_kn image built above, and only their configuration needs to change.
To make the openjdk container run the jar program automatically on start, we rebuild the image with a Dockerfile that adds this behaviour.
# Base image: OpenJDK 10
FROM openjdk:10
MAINTAINER cfh
# The jar is mounted into /home/soft via a Docker volume
WORKDIR /home/soft
# Run the jar when the container starts
CMD ["nohup","java","-jar","docker_server.jar"]
docker build -t myopenjdk .
docker volume create S1
docker volume inspect S1
docker volume create S2
docker volume inspect S2
docker volume create S3
docker volume inspect S3
docker volume create S4
docker volume inspect S4
docker volume create S5
docker volume inspect S5
docker volume create S6
docker volume inspect S6
cd /var/lib/docker/volumes/S1/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S1/_data/docker_server.jar
cd /var/lib/docker/volumes/S2/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S2/_data/docker_server.jar
cd /var/lib/docker/volumes/S3/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S3/_data/docker_server.jar
cd /var/lib/docker/volumes/S4/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S4/_data/docker_server.jar
cd /var/lib/docker/volumes/S5/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S5/_data/docker_server.jar
cd /var/lib/docker/volumes/S6/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S6/_data/docker_server.jar
docker run -it -d --name server_1 -v S1:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.101 --restart=always myopenjdk
docker run -it -d --name server_2 -v S2:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.102 --restart=always myopenjdk
docker run -it -d --name server_3 -v S3:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.103 --restart=always myopenjdk
docker run -it -d --name server_4 -v S4:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.104 --restart=always myopenjdk
docker run -it -d --name server_5 -v S5:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.105 --restart=always myopenjdk
docker run -it -d --name server_6 -v S6:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.106 --restart=always myopenjdk
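Once the six backend containers are up, a quick check from the host (the host can reach the mynet bridge directly; port 6001 assumes server_1's application.properties sets server.port=6001, matching the upstream configuration used later):
docker ps --filter name=server_
curl http://172.18.0.101:6001/api/test     # should return the JSON produced by the test controller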
docker_server.jar is a test program; its main code is as follows:
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import javax.servlet.http.HttpServletResponse;
import java.util.LinkedHashMap;
import java.util.Map;

@RestController
@RequestMapping("api")
@CrossOrigin("*")
public class TestController {

    // injected from application.properties, so each instance reports its own port
    @Value("${server.port}")
    public int port;

    @RequestMapping(value = "/test", method = RequestMethod.GET)
    public Map<String, Object> test(HttpServletResponse response) {
        response.setHeader("Access-Control-Allow-Origin", "*");
        response.setHeader("Access-Control-Allow-Methods", "GET");
        response.setHeader("Access-Control-Allow-Headers", "token");
        Map<String, Object> objectMap = new LinkedHashMap<>();
        objectMap.put("code", 10000);
        objectMap.put("msg", "ok");
        objectMap.put("server_port", "server port: " + port);
        return objectMap;
    }
}
docker run --privileged -tid --name centos_server_master --restart=always --net mynet --ip 172.18.0.203 centos_kn /usr/sbin/init
Master server keepalived configuration:
vrrp_script chk_nginx {
script "/etc/keepalived/nginx_check.sh"
interval 2
weight -20
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 110
mcast_src_ip 172.18.0.203
priority 100
nopreempt
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
track_script {
chk_nginx
}
virtual_ipaddress {
172.18.0.220
}
}
Master server nginx configuration:
upstream tomcat{
server 172.18.0.101:6001;
server 172.18.0.102:6002;
server 172.18.0.103:6003;
}
server {
listen 80;
server_name 172.18.0.220;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
proxy_pass http://tomcat;
        index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
    #    deny all;
    #}
}
Restart keepalived and nginx:
systemctl daemon-reload
systemctl restart keepalived.service
systemctl restart nginx.service
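Before moving on, the whole backend chain can be checked from the host by calling the API through the backend VIP (assuming at least one backend jar is already running on its configured port):
curl http://172.18.0.220/api/test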
docker run --privileged -tid --name centos_server_slave --restart=always --net mynet --ip 172.18.0.204 centos_kn /usr/sbin/init
Slave server keepalived configuration:
vrrp_script chk_nginx {
script "/etc/keepalived/nginx_check.sh"
interval 2
weight -20
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 110
mcast_src_ip 172.18.0.204
priority 80
nopreempt
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
track_script {
chk_nginx
}
virtual_ipaddress {
172.18.0.220
}
}
Slave server nginx configuration:
upstream tomcat{
server 172.18.0.104:6004;
server 172.18.0.105:6005;
server 172.18.0.106:6006;
}
server {
listen 80;
server_name 172.18.0.220;
#charset koi8-r;
#access_log /var/log/nginx/host.access.log main;
location / {
proxy_pass http://tomcat;
        index index.html index.htm;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
    #    deny all;
    #}
}
Restart keepalived and nginx:
systemctl daemon-reload
systemctl restart keepalived.service
systemctl restart nginx.service
Portainer is a container management UI where you can see the running state of all containers.
docker search portainer
docker pull portainer/portainer
docker run -d -p 9000:9000 \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
--name portainer-eureka \
portainer/portainer
http://192.168.227.171:9000
On first access you are asked to set a password; the default account is admin. After creating the password the page moves to the next screen, where you choose to manage the local containers (Local) and confirm.
That is the whole design and deployment. Going a level deeper, there is Docker Compose for managing a set of containers as one unit, and Kubernetes (K8s) for Docker cluster management; both take considerable effort to study.
Also make sure you understand the three Docker essentials: images/containers, data volumes, and network management.