Cluster Deployment
Deployment Environment
| IP Address | Hostname | Deployed Services |
| --- | --- | --- |
| 172.16.10.10 | node1.fastdfs | Storage Group1 |
| 172.16.10.11 | node2.fastdfs | Storage Group1 |
| 172.16.10.12 | node3.fastdfs | Storage Group2 |
| 172.16.10.13 | node4.fastdfs | Storage Group2 |
| 172.16.10.17 | node5.fastdfs | Tracker1 |
| 172.16.10.18 | node6.fastdfs | Tracker2 |
| 172.16.10.14 | node1.nginx | nginx, HAProxy, keepalived |
| 172.16.10.15 | node2.nginx | nginx, HAProxy, keepalived |
| 172.16.10.16 | (VIP) | |
Server OS: CentOS Linux release 7.3.1611 (Core)
SELinux: disabled
iptables: rules flushed
Time: kept in sync across all nodes
FastDFS version: v5.08 (the latest release as of 2016-02-14)
Hosts file entries:
172.16.10.10 node1.fastdfs
172.16.10.11 node2.fastdfs
172.16.10.12 node3.fastdfs
172.16.10.13 node4.fastdfs
172.16.10.17 node5.fastdfs
172.16.10.18 node6.fastdfs
172.16.10.14 node1.nginx
172.16.10.15 node2.nginx
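If you maintain the hosts file in one place, a loop like this can push it to every node (a sketch; it assumes passwordless root SSH to each machine):
for h in 172.16.10.1{0,1,2,3,4,5,7,8}; do
    scp /etc/hosts root@$h:/etc/hosts   # copy the shared name mappings to each node
done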
Software packages used:
FastDFS_v5.08.tar.gz
fastdfs-nginx-module_v1.16.tar.gz
libfastcommon-master.zip
nginx-1.6.2.tar.gz
ngx_cache_purge-2.3.tar.gz
The overall architecture is shown in the diagram below.
Environment Setup
Install FastDFS on the tracker and storage nodes
Run the following on the tracker and storage nodes (172.16.10.10, 172.16.10.11, 172.16.10.12, 172.16.10.13, 172.16.10.17, 172.16.10.18).
Install the base development packages:
yum -y install gcc gcc-c++
First we need to install libfastcommon.
Download it from https://github.com/happyfish100/libfastcommon; the build steps are documented in the INSTALL file in the source tree.
After downloading, unpack the archive, change into the extracted directory, and run:
./make.sh
./make.sh install
A successful install produces the file /usr/lib64/libfastcommon.so.
We need to create a symlink, because FastDFS looks for the library in a different directory:
ln -s /usr/lib64/libfastcommon.so /usr/local/lib/libfastcommon.so
Install FastDFS
After downloading, change into the source directory and run:
./make.sh
./make.sh install
After installation the configuration files live in /etc/fdfs.
Configure the tracker nodes
Run the following on the tracker nodes (172.16.10.17, 172.16.10.18).
Create the data and log directory:
mkdir -pv /data/fastdfs-tracker
Rename the tracker configuration file:
cd /etc/fdfs && mv tracker.conf.sample tracker.conf
Edit tracker.conf:
Set base_path to the directory just created.
Set store_lookup to 0.
Note: store_lookup defaults to 2, which picks the group with the most free space (load balancing); 0 means round robin and 1 means a fixed group (the store_group option only takes effect when store_lookup is 1). We change it to 0 to make the upcoming tests easier to follow; the resulting lines are shown below.
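For reference, the two affected lines in /etc/fdfs/tracker.conf end up looking like this (all other defaults unchanged):
base_path=/data/fastdfs-tracker
store_lookup=0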
Start the fdfs_trackerd service:
service fdfs_trackerd start
Once started, data and logs directories appear under the directory we just created:
/data/fastdfs-tracker
├── data
│   ├── fdfs_trackerd.pid
│   └── storage_changelog.dat
└── logs
    └── trackerd.log
The log output looks roughly like the screenshot below.
Check that port 22122 is listening.
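For example, with the ss utility that ships with CentOS 7:
# the tracker should show a LISTEN socket owned by fdfs_trackerd
ss -lntp | grep 22122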
Enable start on boot:
chkconfig fdfs_trackerd on
Configure the storage nodes
Run the following on the storage nodes (172.16.10.10, 172.16.10.11, 172.16.10.12, 172.16.10.13).
Create the data and log directory:
mkdir -p /data/fastdfs-storage
Rename the storage configuration file:
cd /etc/fdfs/ && mv storage.conf.sample storage.conf
Edit the configuration file:
Set base_path to the directory just created.
Set store_path0 to the same directory.
Set tracker_server to the tracker's IP and listening port; 127.0.0.1 must not be used even when everything runs on one machine. We also add a second tracker_server line so that both trackers are listed, as shown below:
base_path=/data/fastdfs-storage
store_path0=/data/fastdfs-storage
tracker_server=172.16.10.17:22122
tracker_server=172.16.10.18:22122
Note: the group_name option sets which group a storage node belongs to. We have two groups, so this is the one value that differs between storage nodes while everything else stays the same. It defaults to group1; on 172.16.10.12 and 172.16.10.13 it must be changed to group2, as in the sketch below.
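On the group2 nodes this is a one-line change, e.g. (a sketch; verify the file afterwards):
sed -i 's/^group_name=group1$/group_name=group2/' /etc/fdfs/storage.conf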
Start the service:
service fdfs_storaged start
Once started, data and logs directories appear under the directory we just created, the data directory is populated with many subdirectories, and the service finally listens on port 23000.
Log interpretation: the log below shows the storage server successfully connecting to both tracker servers, 172.16.10.13 being elected leader, and finally successful connections to the other storage servers in the same group.
Enable start on boot:
chkconfig fdfs_storaged on
First test
This mainly tests tracker high availability: if we kill the tracker leader that the storage servers elected, a new leader should normally be elected, and the logs will report failures to connect to the downed tracker server; if we then start that tracker server again, the logs will show the connection succeeding. All of this is visible in the logs; a sketch of the drill follows.
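A sketch of that drill (which IP is leader depends on what the logs show; the storage log path assumes the base_path configured above):
# on the tracker currently elected leader:
service fdfs_trackerd stop
# on any storage node, watch the re-election and the reconnect errors:
tail -f /data/fastdfs-storage/logs/storaged.log
# start the downed tracker again and watch the connection succeed:
service fdfs_trackerd start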
Afterwards, run the following on any storage node in the cluster:
fdfs_monitor /etc/fdfs/storage.conf
Every node prints the same information, which includes both groups and the storage servers inside each group.
Upload test from a client
Test on either tracker node.
Rename the client configuration file:
cd /etc/fdfs/ && mv client.conf.sample client.conf
Edit the configuration file:
Set base_path to the same path as base_path in the tracker configuration.
Set tracker_server to the IP and port the trackers listen on; once more, 127.0.0.1 cannot be used even on the same machine.
As shown below:
base_path=/data/fastdfs-tracker
tracker_server=172.16.10.17:22122
tracker_server=172.16.10.18:22122
Upload a test image with the following command:
fdfs_upload_file client.conf test.png
fdfs_upload_file: the upload command
client.conf: the client configuration file to use
test.png: the path of the image to upload
On success it returns a path similar to this:
group1/M00/00/00/rBAKCloXyT2AFH_AAAD4kx1mwCw538.png
group1: the file landed on a storage node in group1
M00: the store-path (disk) index; with a single disk there is only M00, with more disks M01 and so on
00/00: the directory on that disk; each of the two levels has 256 directories (00 to FF), giving 256*256 directories in total
rBAKCloXyT2AFH_AAAD4kx1mwCw538.png: the file as finally stored
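The returned file ID can be fed straight back into the bundled client tools; for example, a quick download check (the local output path /tmp/test.png is just an illustration):
fdfs_download_file /etc/fdfs/client.conf group1/M00/00/00/rBAKCloXyT2AFH_AAAD4kx1mwCw538.png /tmp/test.png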
From this path we therefore know exactly which servers and which directory hold the image, and we can find the uploaded file directly on disk; within a group every storage node holds the same files. Because we configured the tracker with store_lookup=0 (round robin), consecutive uploads alternate between group1 and group2, and if one group goes down all uploads land in the surviving group, as shown in the figure below.
After those uploads, the 00/00 directory on the M00 disk of every group1 storage node holds rBAKClobtG6AS0JKAANxJpb_3dc838.png and rBAKC1obtHCAEMpMAANxJpb_3dc032.png, while the same directory on every group2 storage node holds rBAKDFobtG-AIj2EAANxJpb_3dc974.png and rBAKDVobtHGAJgzTAANxJpb_3dc166.png.
As shown in the figure below.
Note: if one storage node in a group fails, new files can only be stored on the group's remaining nodes; once the failed node recovers, the data is synchronized back to it automatically, with no manual intervention required.
Integrating with Nginx
So far we have only tested with the command-line client; to upload and download over HTTP we also need Nginx or Apache, and here we use Nginx, the more widely deployed of the two.
Deploy Nginx on all storage nodes
Copy all the source packages to /usr/local/src and unpack them there.
Change into /usr/local/src/fastdfs-nginx-module/src/:
cd /usr/local/src/fastdfs-nginx-module/src
In the config file, change /usr/local/include/fastdfs to /usr/include/fastdfs.
Likewise change /usr/local/include/fastcommon/ to /usr/include/fastcommon/ (see the sed one-liner below).
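Both edits can be applied with sed instead of a text editor (a sketch; double-check the file afterwards):
sed -i 's|/usr/local/include|/usr/include|g' /usr/local/src/fastdfs-nginx-module/src/config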
Then change into the unpacked Nginx directory and run:
yum -y install zlib-devel openssl-devel
./configure --prefix=/usr/local/nginx --with-pcre --add-module=/usr/local/src/fastdfs-nginx-module/src
make
make install
Add the nginx binary to the PATH:
cat >> /etc/profile.d/nginx.sh << EOF
#!/bin/sh
PATH=$PATH:/usr/local/nginx/sbin
export PATH
EOF
Reload the environment:
source /etc/profile.d/nginx.sh
Copy the configuration files into place:
cp /usr/local/src/fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs/
cp /usr/local/src/FastDFS/conf/{http.conf,mime.types} /etc/fdfs/
Create the Nginx configuration files:
nginx.conf(/usr/local/nginx/conf/nginx.conf)
worker_processes 2;
worker_rlimit_nofile 51200;
events {
    use epoll;
    worker_connections 51200;
}
http {
    include mime.types;
    default_type application/octet-stream;
    client_max_body_size 50m;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;
    gzip on;
    server_tokens off;
    include vhost/*.conf;
}
FastDFS.conf(/usr/local/nginx/conf/vhost/FastDFS.conf)
server {
    listen 9000;
    location ~ /group[1-3]/M00 {
        ngx_fastdfs_module;
    }
}
Raise the Linux open-file limit
Edit /etc/security/limits.conf and append the following:
* soft nofile 65536
* hard nofile 65536
Log out and log back in for the limits to take effect.
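After logging back in, confirm the new limit:
ulimit -n   # should now print 65536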
Edit the /etc/fdfs/mod_fastdfs.conf configuration file:
Set connect_timeout to 10.
Set tracker_server to the IP and port the trackers listen on; 127.0.0.1 cannot be used.
Set url_have_group_name to true.
Set store_path0 to the path configured in the storage configuration.
Set group_count to 2 (because we have exactly two groups).
Append the following at the end:
[group1]
group_name=group1
storage_server_port=23000
store_path_count=1
store_path0=/data/fastdfs-storage
[group2]
group_name=group2
storage_server_port=23000
store_path_count=1
store_path0=/data/fastdfs-storage
The changed and added lines end up as follows:
connect_timeout=10
tracker_server=172.16.10.17:22122
tracker_server=172.16.10.18:22122
group_name=group1
url_have_group_name = true
store_path0=/data/fastdfs-storage
group_count = 2
[group1]
group_name=group1
storage_server_port=23000
store_path_count=1
store_path0=/data/fastdfs-storage
[group2]
group_name=group2
storage_server_port=23000
store_path_count=1
store_path0=/data/fastdfs-storage
Nginx start/stop/restart/reload/configtest init script (/etc/init.d/nginx):
#!/bin/bash
# chkconfig: - 30 21
# description: http service.
# Source Function Library
. /etc/init.d/functions

# Nginx Settings
NGINX_SBIN="/usr/local/nginx/sbin/nginx"
NGINX_CONF="/usr/local/nginx/conf/nginx.conf"
NGINX_PID="/usr/local/nginx/logs/nginx.pid"
RETVAL=0
prog="Nginx"

start() {
    echo -n $"Starting $prog: "
    mkdir -p /dev/shm/nginx_temp
    daemon $NGINX_SBIN -c $NGINX_CONF
    RETVAL=$?
    echo
    return $RETVAL
}

stop() {
    echo -n $"Stopping $prog: "
    killproc -p $NGINX_PID $NGINX_SBIN -TERM
    rm -rf /dev/shm/nginx_temp
    RETVAL=$?
    echo
    return $RETVAL
}

reload() {
    echo -n $"Reloading $prog: "
    killproc -p $NGINX_PID $NGINX_SBIN -HUP
    RETVAL=$?
    echo
    return $RETVAL
}

restart() {
    stop
    start
}

configtest() {
    $NGINX_SBIN -c $NGINX_CONF -t
    return 0
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    reload)
        reload
        ;;
    restart)
        restart
        ;;
    configtest)
        configtest
        ;;
    *)
        echo $"Usage: $0 {start|stop|reload|restart|configtest}"
        RETVAL=1
esac
exit $RETVAL
Register nginx as a system service, enable it at boot, then start it:
chkconfig --add nginx
chkconfig nginx on
service nginx start
Test: every uploaded image should now be reachable through the Nginx on any storage node, as in the example below.
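For example, requesting the file uploaded earlier from one of the group1 storage nodes:
# HTTP/1.1 200 OK here means the fastdfs-nginx-module is serving the file
curl -I http://172.16.10.10:9000/group1/M00/00/00/rBAKCloXyT2AFH_AAAD4kx1mwCw538.png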
Deploy Nginx on both tracker nodes
Unpack nginx-1.6.2.tar.gz and ngx_cache_purge-2.3.tar.gz under /usr/local/src.
Install the dependencies:
yum -y install zlib-devel openssl-devel
Change into /usr/local/src/nginx-1.6.2 and run the following to build and install:
./configure --prefix=/usr/local/nginx --with-pcre --add-module=/usr/local/src/ngx_cache_purge-2.3
make
make install
Add the nginx binary to the PATH: same as in the storage-node Nginx deployment above.
Edit the Nginx configuration file (nginx.conf):
worker_processes 2;
worker_rlimit_nofile 51200;
events {
    use epoll;
    worker_connections 51200;
}
http {
    include mime.types;
    default_type application/octet-stream;
    client_max_body_size 50m;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;
    gzip on;
    server_tokens off;
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffer_size 16k;
    proxy_buffers 4 64k;
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 128k;
    proxy_cache_path /data/cache/nginx/proxy_cache levels=1:2 keys_zone=http-cache:200m max_size=1g inactive=30d;
    proxy_temp_path /data/cache/nginx/proxy_cache/tmp;
    include vhost/*.conf;
}
Create the cache directory and the vhost configuration directory:
mkdir -p /data/cache/nginx/proxy_cache/tmp
mkdir /usr/local/nginx/conf/vhost
Create the vhost configuration (/usr/local/nginx/conf/vhost/FastDFS.conf):
upstream fdfs_group1 {
    server 172.16.10.10:9000 weight=1 max_fails=2 fail_timeout=30s;
    server 172.16.10.11:9000 weight=1 max_fails=2 fail_timeout=30s;
}
upstream fdfs_group2 {
    server 172.16.10.12:9000 weight=1 max_fails=2 fail_timeout=30s;
    server 172.16.10.13:9000 weight=1 max_fails=2 fail_timeout=30s;
}
server {
    listen 8000;
    location /group1/M00 {
        proxy_next_upstream http_500 http_502 http_503 error timeout invalid_header;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_cache http-cache;
        proxy_cache_valid 200 304 12h;
        proxy_cache_key $uri$is_args$args;
        proxy_pass http://fdfs_group1;
        expires 30d;
    }
    location /group2/M00 {
        proxy_next_upstream http_500 http_502 http_503 error timeout invalid_header;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_cache http-cache;
        proxy_cache_valid 200 304 12h;
        proxy_cache_key $uri$is_args$args;
        proxy_pass http://fdfs_group2;
        expires 30d;
    }
    location ~ /purge(/.*) {
        allow all;
        proxy_cache_purge http-cache $1$is_args$args;
    }
}
Nginx init script (/etc/init.d/nginx): same as in the storage-node Nginx deployment above.
Register as a system service, enable at boot, then start: same as in the storage-node Nginx deployment above.
Test: images in any group should be reachable through port 8000 on either tracker node, as in the example below.
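For example, the same file fetched through a tracker-side Nginx, which proxies to the right group and caches the response:
# either tracker node works; repeated requests should be served from the proxy cache
curl -I http://172.16.10.17:8000/group1/M00/00/00/rBAKCloXyT2AFH_AAAD4kx1mwCw538.png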
Deploy Nginx + HAProxy + Keepalived on the nginx nodes for high availability
Run the following on the two nginx nodes (172.16.10.14 and 172.16.10.15).
Install the packages:
yum -y install nginx haproxy keepalived
Keepalived configuration on node1:
! Configuration File for keepalived
global_defs {
    router_id NodeA
}
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight 20
}
vrrp_script chk_haproxy {
    script "/etc/keepalived/haproxy_check.sh"
    interval 2
    weight 20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1314
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        172.16.10.16/24
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        172.16.10.16/24
    }
}
Keepalived configuration on node2:
! Configuration File for keepalived
global_defs {
    router_id NodeB
}
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight 20
}
vrrp_script chk_haproxy {
    script "/etc/keepalived/haproxy_check.sh"
    interval 2
    weight 20
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1314
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        172.16.10.16/24
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        172.16.10.16/24
    }
}
nginx_check.sh (/etc/keepalived/nginx_check.sh), identical on both nodes:
#!/bin/bash
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ];then
    nginx
    sleep 2
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ];then
        pkill keepalived
    fi
fi
haproxy_check.sh (/etc/keepalived/haproxy_check.sh), identical on both nodes:
#!/bin/bash
A=`ps -C haproxy --no-header | wc -l`
if [ $A -eq 0 ];then
    haproxy -f /etc/haproxy/haproxy.cfg
    sleep 2
    if [ `ps -C haproxy --no-header | wc -l` -eq 0 ];then
        pkill keepalived
    fi
fi
nginx.conf (identical on both nodes):
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 51200;
    use epoll;
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    include /etc/nginx/conf.d/*.conf;
}
FastDFS.conf (/etc/nginx/conf.d/FastDFS.conf):
upstream tracker_server {
    server 172.16.10.17:8000;
    server 172.16.10.18:8000;
}
server {
    listen 80;
    location /fastdfs {
        proxy_pass http://tracker_server/;
        proxy_set_header Host $http_host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        client_max_body_size 300m;
    }
}
haproxy.cfg (identical on both nodes):
global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 4000

listen fastdfs_tracker
    bind 0.0.0.0:22122
    mode tcp
    option tcplog
    timeout client 1m
    timeout server 1m
    timeout connect 1m
    balance roundrobin
    server node1 172.16.10.17:22122 check inter 5000 rise 2 fall 3
    server node2 172.16.10.18:22122 check inter 2000 rise 2 fall 3
Start Keepalived:
chmod 755 /etc/keepalived/*.sh
systemctl start keepalived
Enable everything at boot:
systemctl enable nginx
systemctl enable keepalived
systemctl enable haproxy
Verify the VIP:
ip addr
Enable HAProxy logging (identical on both nodes)
Edit /etc/rsyslog.conf and uncomment the following four lines:
$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$InputTCPServerRun 514
Then add this line:
local2.* /var/log/haproxy.log
Restart rsyslog:
systemctl restart rsyslog
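With everything up, an end-to-end request through the VIP should also work, e.g. (a sketch; the /fastdfs prefix is stripped by the trailing slash on proxy_pass before the request reaches the tracker Nginx):
curl -I http://172.16.10.16/fastdfs/group1/M00/00/00/rBAKCloXyT2AFH_AAAD4kx1mwCw538.png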
At this point the FastDFS cluster deployment is complete; all that remains is to test the cluster.
Cluster test
Shut down half of the servers that share a role and the cluster should keep running as usual.
For example:
Set 1: 172.16.10.11, 172.16.10.13, 172.16.10.15, 172.16.10.18
Set 2: 172.16.10.10, 172.16.10.12, 172.16.10.14, 172.16.10.17
Cluster startup order
1. First start Nginx on all nodes
2. Then start the Tracker nodes
3. Finally start the Storage nodes (a scripted sketch follows)
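For convenience the order can be scripted (a sketch only; it assumes passwordless root SSH and the hostnames from the deployment table, with init scripts on the FastDFS nodes and systemd units on the nginx nodes):
# 1. nginx on every node
for h in node{1..6}.fastdfs; do ssh root@$h 'service nginx start'; done
for h in node{1,2}.nginx;    do ssh root@$h 'systemctl start nginx haproxy keepalived'; done
# 2. the tracker nodes
for h in node{5,6}.fastdfs;  do ssh root@$h 'service fdfs_trackerd start'; done
# 3. the storage nodes
for h in node{1..4}.fastdfs; do ssh root@$h 'service fdfs_storaged start'; done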