As application systems keep growing in scale, the requirements for data security and reliability grow with them, and rsync has gradually shown a number of shortcomings in high-end business systems.
First, when rsync synchronizes data it has to scan and compare every file before performing the differential transfer. Once the file count reaches the millions or tens of millions, scanning everything is extremely time-consuming, while what has actually changed is usually only a small fraction of the files, so this is a very inefficient approach.
Second, rsync cannot monitor and synchronize data in real time. It can be triggered periodically, for example by a Linux daemon or cron job, but there is always a time gap between two triggers, so the data on the server and client can become inconsistent, and data cannot be fully recovered when the application fails.
Both problems can be addressed by combining rsync with inotify, which makes real-time data synchronization possible.
inotify is a powerful, fine-grained, asynchronous file-system event notification mechanism. The Linux kernel has included inotify support since 2.6.13; through it you can monitor events such as file creation, deletion, modification, and moves. Using this kernel interface, third-party software can watch for all kinds of changes under a file system, and inotify-tools is exactly such monitoring software.
After an initial full sync with rsync, inotify watches the source directory in real time: as soon as a file changes or a new file appears, it is immediately synchronized to the target directory. This is very efficient in practice.
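To get a feel for what inotify-tools reports, here is a minimal sketch of the mechanism (it assumes inotify-tools is already installed, which is covered later, and uses a hypothetical scratch directory /tmp/watchdir):

mkdir -p /tmp/watchdir
# -m: keep monitoring, -r: recursive, -q: quiet; print one line per event
inotifywait -mrq --timefmt '%d/%m/%y %H:%M' --format '%T %w%f %e' \
    -e create,delete,modify,move /tmp/watchdir
# in another terminal, "touch /tmp/watchdir/hello" produces a line like:
# 20/05/24 13:41 /tmp/watchdir/hello CREATE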
The goal is to synchronize, in real time:

192.168.1.1: /Data/fangfull_upload and /Data/erp_upload
192.168.1.2: /Data/xqsj_upload/ and /Data/fanghu_upload_src
192.168.1.3: /Data/Static_img/webroot/ssapp-prod and /usr/local/nginx/html/ssapp.prod

into the corresponding fangfull_upload, erp_upload, xqsj_upload, fanghu_upload_src, ssapp-prod, and ssapp.prod directories under /home/backup/image-back on 192.168.1.5.
In this setup:
(1) 192.168.1.1, 192.168.1.2, and 192.168.1.3 are the source servers and act as rsync clients; rsync+inotify is deployed on them.
(2) 192.168.1.5 is the target server and acts as the rsync server; it only needs rsync installed and configured, not inotify.
On the target server 192.168.1.5, first disable SELinux:

vim /etc/selinux/config
SELINUX=disabled

setenforce 0
Then open ports 22 and 873 for the three source servers and restart the firewall (note the single outer quotes, so the inner double quotes survive shell parsing):

firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.1" port protocol="tcp" port="22" accept'
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.1" port protocol="tcp" port="873" accept'
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.2" port protocol="tcp" port="22" accept'
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.2" port protocol="tcp" port="873" accept'
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.3" port protocol="tcp" port="22" accept'
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.3" port protocol="tcp" port="873" accept'
systemctl restart firewalld
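To confirm the rules took effect after the restart, you can simply list them:

firewall-cmd --list-rich-rules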
Install rsync and xinetd, and enable rsync in xinetd:

yum install rsync xinetd

vim /etc/xinetd.d/rsync
.....
disable = no    # change from the default yes to no, so rsync starts with xinetd
/etc/init.d/xinetd start
Create the rsync daemon configuration:

vim /etc/rsyncd.conf

log file = /var/log/rsyncd.log    # log file location; created automatically when rsync starts, no need to create it in advance
pid file = /var/run/rsyncd.pid    # pid file location
lock file = /var/run/rsync.lock   # lock file supporting the max connections parameter
secrets file = /etc/rsync.pass    # authentication file holding the user name and password; created below
motd file = /etc/rsyncd.Motd      # welcome message shown when rsync starts (create the file yourself; the content is up to you)

[fangfull_upload]                 # custom module name
path = /home/backup/image-back/fangfull_upload   # data directory on the rsync server, i.e. where the synced files are stored
comment = fangfull_upload         # description; here it matches the module name [fangfull_upload]
uid = nobody                      # uid rsync runs as; keep it consistent with the source directory's owner (here nobody)
gid = nobody                      # gid rsync runs as
port = 873                        # default rsync port
use chroot = no                   # default is true; set to no/false so symlinked files are also backed up
read only = no                    # give the server-side files read-write permission
list = no                         # do not expose the server's module list
max connections = 200             # maximum number of connections
timeout = 600                     # timeout in seconds
auth users = RSYNC_USER           # user allowed to sync, created manually below; multiple users are comma-separated
hosts allow = 192.168.1.1         # client IPs allowed to sync; multiple IPs are comma-separated
hosts deny = 192.168.1.194        # client IPs denied; omit this line if no host needs to be denied

[erp_upload]
path = /home/backup/image-back/erp_upload
comment = erp_upload
uid = nobody
gid = nobody
port = 873
use chroot = no
read only = no
list = no
max connections = 200
timeout = 600
auth users = RSYNC_USER
hosts allow = 192.168.1.1

[xqsj_upload]
path = /home/backup/image-back/xqsj_upload
comment = xqsj_upload
uid = nobody
gid = nobody
port = 873
use chroot = no
read only = no
list = no
max connections = 200
timeout = 600
auth users = RSYNC_USER
hosts allow = 192.168.1.2

[fanghu_upload_src]
path = /home/backup/image-back/fanghu_upload_src
comment = fanghu_upload_src
uid = nobody
gid = nobody
port = 873
use chroot = no
read only = no
list = no
max connections = 200
timeout = 600
auth users = RSYNC_USER
hosts allow = 192.168.1.2

[ssapp-prod]
path = /home/backup/image-back/ssapp-prod
comment = ssapp-prod
uid = nginx
gid = nginx
port = 873
use chroot = no
read only = no
list = no
max connections = 200
timeout = 600
auth users = RSYNC_USER
hosts allow = 192.168.1.3

[ssapp.prod]
path = /home/backup/image-back/ssapp.prod
comment = ssapp.prod
uid = nginx
gid = nginx
port = 873
use chroot = no
read only = no
list = no
max connections = 200
timeout = 600
auth users = RSYNC_USER
hosts allow = 192.168.1.3
Create the authentication file; on the server side its format is username:password:

vim /etc/rsync.pass
xiaoshengyu:123456@rsync
chmod 600 /etc/rsyncd.conf
chmod 600 /etc/rsync.pass
Restart xinetd and confirm the rsync daemon is listening on port 873:

/etc/init.d/xinetd restart
lsof -i:873
COMMAND   PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
xinetd  22041 root    5u  IPv6 3336440      0t0  TCP *:rsync (LISTEN)
Finally, create the module directories:

cd /home/backup/image-back/
mkdir fangfull_upload erp_upload xqsj_upload fanghu_upload_src ssapp-prod ssapp.prod
On each source server (192.168.1.1, 192.168.1.2, 192.168.1.3), disable SELinux as well:

vim /etc/selinux/config
SELINUX=disabled

setenforce 0
yum install rsync xinetd

vim /etc/xinetd.d/rsync
.....
disable = no    # change from the default yes to no, so rsync starts with xinetd
/etc/init.d/xinetd start
lsof -i:873
COMMAND   PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
xinetd  22041 root    5u  IPv6 3336440      0t0  TCP *:rsync (LISTEN)
Create the password file; on the client side it contains only the password, without the user name:

vim /etc/rsync.pass
123456@rsync
chmod 600 /etc/rsync.pass
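At this point you can already test the connection to the daemon; a quick sketch, assuming it is run from 192.168.1.1, which hosts allow permits for this module (--list-only only lists files, transferring nothing):

rsync --port=873 --list-only RSYNC_USER@192.168.1.5::fangfull_upload --password-file=/etc/rsync.pass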
Check that the kernel supports inotify; the presence of these entries means it does:

ll /proc/sys/fs/inotify
max_queued_events
max_user_instances
max_user_watches
yum install make gcc gcc-c++    # install the build tools
cd /usr/local/src
wget http://github.com/downloads/rvoicilas/inotify-tools/inotify-tools-3.14.tar.gz
tar zxvf inotify-tools-3.14.tar.gz
cd inotify-tools-3.14
./configure --prefix=/usr/local/inotify
make && make install
vim /etc/profile
export PATH=$PATH:/usr/local/inotify/bin

source /etc/profile
vim /etc/ld.so.conf
/usr/local/inotify/lib

ldconfig
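To confirm the binary resolves its shared library from the new path, ldd is enough:

ldd /usr/local/inotify/bin/inotifywait | grep inotify
# expect something like: libinotifytools.so.0 => /usr/local/inotify/lib/libinotifytools.so.0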
Check the system default parameter values:
sysctl -a | grep max_queued_events
fs.inotify.max_queued_events = 16384
sysctl -a | grep max_user_watches
fs.inotify.max_user_watches = 8192
sysctl -a | grep max_user_instances
fs.inotify.max_user_instances = 128
Raise the defaults:

sysctl -w fs.inotify.max_queued_events="99999999"
sysctl -w fs.inotify.max_user_watches="99999999"
sysctl -w fs.inotify.max_user_instances="65535"
max_queued_events:
the maximum length of the inotify event queue. If it is too small you will see "Event Queue Overflow" errors, events will be dropped, and the monitoring becomes inaccurate.
max_user_watches:
must exceed the number of directories being watched. Count the directories under a source with, for example, find /Data/xqsj_upload -type d | wc -l, and make sure max_user_watches is larger than the result (here /Data/xqsj_upload is one of the source directories being synced).
max_user_instances:
the maximum number of inotify instances each user may create.
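Note that sysctl -w only changes the running kernel and the values are lost on reboot. A small sketch to make them permanent by appending them to /etc/sysctl.conf:

cat >> /etc/sysctl.conf <<'EOF'
fs.inotify.max_queued_events = 99999999
fs.inotify.max_user_watches = 99999999
fs.inotify.max_user_instances = 65535
EOF
sysctl -p    # reload the settings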
On server 192.168.1.1:

First full sync:
rsync -avH --port=873 --progress --delete /Data/fangfull_upload/ RSYNC_USER@192.168.1.5::fangfull_upload --password-file=/etc/rsync.pass
rsync -avH --port=873 --progress --delete /Data/erp_upload/ RSYNC_USER@192.168.1.5::erp_upload --password-file=/etc/rsync.pass
The real-time sync scripts below use the --delete-before parameter rather than the --delete used in the first full sync. The difference:

--delete: delete files on the target that no longer exist on the source; with rsync 3.x the deletions happen while the transfer is running (equivalent to --delete-during).
--delete-before: the same deletions, but the receiver first scans the target directory and removes the extraneous files before the transfer begins, which keeps deletions from interleaving with incoming writes and is therefore somewhat safer.
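If you are unsure what a deleting sync will remove, a dry run shows the plan without touching anything (-n/--dry-run is standard rsync; fangfull_upload is used as the example here):

rsync -avHn --port=873 --delete-before /Data/fangfull_upload/ RSYNC_USER@192.168.1.5::fangfull_upload --password-file=/etc/rsync.pass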
Real-time sync with rsync+inotify:

cd /home/rsync/
cat rsync_fangfull_upload_inotify.sh

#!/bin/bash
SRCDIR=/Data/fangfull_upload/
USER=RSYNC_USER
IP=192.168.1.5
DESTDIR=fangfull_upload
# watch the source directory recursively and run a sync on every event
/usr/local/inotify/bin/inotifywait -mrq --timefmt '%d/%m/%y %H:%M' --format '%T %w%f %e' -e close_write,modify,delete,create,attrib,move $SRCDIR | while read file
do
    /usr/bin/rsync -avH --port=873 --progress --delete-before $SRCDIR $USER@$IP::$DESTDIR --password-file=/etc/rsync.pass
    echo "${file} was rsynced" >> /tmp/rsync.log 2>&1
done
cat rsync_erp_upload_inotify.sh

#!/bin/bash
SRCDIR=/Data/erp_upload/
USER=RSYNC_USER
IP=192.168.1.5
DESTDIR=erp_upload
# each event triggers a full rsync of the directory; rsync itself only transfers the deltas
/usr/local/inotify/bin/inotifywait -mrq --timefmt '%d/%m/%y %H:%M' --format '%T %w%f %e' -e close_write,modify,delete,create,attrib,move $SRCDIR | while read file
do
    /usr/bin/rsync -avH --port=873 --progress --delete-before $SRCDIR $USER@$IP::$DESTDIR --password-file=/etc/rsync.pass
    echo "${file} was rsynced" >> /tmp/rsync.log 2>&1
done
Start both scripts in the background:

nohup sh rsync_fangfull_upload_inotify.sh &
nohup sh rsync_erp_upload_inotify.sh &
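These nohup jobs do not survive a reboot. One simple way to make them persistent, assuming /etc/rc.d/rc.local is executable on this system, is to append the start commands there:

cat >> /etc/rc.d/rc.local <<'EOF'
nohup sh /home/rsync/rsync_fangfull_upload_inotify.sh &
nohup sh /home/rsync/rsync_erp_upload_inotify.sh &
EOF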
ps -ef|grep inotify
root 11390 1 0 13:41 ? 00:00:00 sh rsync_erp_upload_inotify.sh
root 11392 11390 0 13:41 ? 00:00:00 sh rsync_erp_upload_inotify.sh
root 11397 1 0 13:41 ? 00:00:00 sh rsync_fangfull_upload_inotify.sh
root 11399 11397 0 13:41 ? 00:00:00 sh rsync_fangfull_upload_inotify.sh
root 21842 11702 0 17:22 pts/0 00:00:00 grep --color=auto inotify
For example, if you create a file or directory in the source directory /Data/fangfull_upload, it is automatically synchronized in real time to the target directory /home/backup/image-back/fangfull_upload on 192.168.1.5.
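A concrete check (the file name sync_test.txt is just an example):

# on 192.168.1.1
touch /Data/fangfull_upload/sync_test.txt
# a moment later, on 192.168.1.5
ls /home/backup/image-back/fangfull_upload/sync_test.txt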
On server 192.168.1.2 the procedure is the same. First full sync:
rsync -avH --port=873 --progress --delete /Data/xqsj_upload/ RSYNC_USER@192.168.1.5::xqsj_upload --password-file=/etc/rsync.pass
rsync -avH --port=873 --progress --delete /Data/fanghu_upload_src/ RSYNC_USER@192.168.1.5::fanghu_upload_src --password-file=/etc/rsync.pass
Real-time sync with rsync+inotify:
cd /home/rsync/
cat rsync_xqsj_upload_inotify.sh

#!/bin/bash
SRCDIR=/Data/xqsj_upload/
USER=RSYNC_USER
IP=192.168.1.5
DESTDIR=xqsj_upload
/usr/local/inotify/bin/inotifywait -mrq --timefmt '%d/%m/%y %H:%M' --format '%T %w%f %e' -e close_write,modify,delete,create,attrib,move $SRCDIR | while read file
do
    /usr/bin/rsync -avH --port=873 --progress --delete-before $SRCDIR $USER@$IP::$DESTDIR --password-file=/etc/rsync.pass
    echo "${file} was rsynced" >> /tmp/rsync.log 2>&1
done
cat rsync_fanghu_upload_src_inotify.sh

#!/bin/bash
SRCDIR=/Data/fanghu_upload_src/
USER=RSYNC_USER
IP=192.168.1.5
DESTDIR=fanghu_upload_src
/usr/local/inotify/bin/inotifywait -mrq --timefmt '%d/%m/%y %H:%M' --format '%T %w%f %e' -e close_write,modify,delete,create,attrib,move $SRCDIR | while read file
do
    /usr/bin/rsync -avH --port=873 --progress --delete-before $SRCDIR $USER@$IP::$DESTDIR --password-file=/etc/rsync.pass
    echo "${file} was rsynced" >> /tmp/rsync.log 2>&1
done
nohup sh rsync_xqsj_upload_inotify.sh &
nohup sh rsync_fanghu_upload_src_inotify.sh &
For example, if you create a file or directory in the source directory /Data/xqsj_upload, it is automatically synchronized in real time to the target directory /home/backup/image-back/xqsj_upload on 192.168.1.5.
On server 192.168.1.3, the first full sync:
rsync -avH --port=873 --progress --delete /Data/Static_img/webroot/ssapp-prod/ RSYNC_USER@192.168.1.5::ssapp-prod --password-file=/etc/rsync.pass
rsync -avH --port=873 --progress --delete /usr/local/nginx/html/ssapp.prod/ RSYNC_USER@192.168.1.5::ssapp.prod --password-file=/etc/rsync.pass
Real-time sync with rsync+inotify:

cd /home/rsync/
cat rsync_ssapp-prod_inotify.sh

#!/bin/bash
SRCDIR=/Data/Static_img/webroot/ssapp-prod/
USER=RSYNC_USER
IP=192.168.1.5
DESTDIR=ssapp-prod
/usr/local/inotify/bin/inotifywait -mrq --timefmt '%d/%m/%y %H:%M' --format '%T %w%f %e' -e close_write,modify,delete,create,attrib,move $SRCDIR | while read file
do
    /usr/bin/rsync -avH --port=873 --progress --delete-before $SRCDIR $USER@$IP::$DESTDIR --password-file=/etc/rsync.pass
    echo "${file} was rsynced" >> /tmp/rsync.log 2>&1
done
cat rsync_ssapp.prod_inotify.sh

#!/bin/bash
SRCDIR=/usr/local/nginx/html/ssapp.prod/
USER=RSYNC_USER
IP=192.168.1.5
DESTDIR=ssapp.prod
/usr/local/inotify/bin/inotifywait -mrq --timefmt '%d/%m/%y %H:%M' --format '%T %w%f %e' -e close_write,modify,delete,create,attrib,move $SRCDIR | while read file
do
    /usr/bin/rsync -avH --port=873 --progress --delete-before $SRCDIR $USER@$IP::$DESTDIR --password-file=/etc/rsync.pass
    echo "${file} was rsynced" >> /tmp/rsync.log 2>&1
done
nohup sh rsync_ssapp-prod_inotify.sh &
nohup sh rsync_ssapp.prod_inotify.sh &
For example, if you create a file or directory in the source directory /Data/Static_img/webroot/ssapp-prod, it is automatically synchronized in real time to the target directory /home/backup/image-back/ssapp-prod on 192.168.1.5.
If the sync fails partway through, and re-running the sync command keeps producing this error:
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1505)
The cause turned out to be symlinks in the source directory.

To sync symlink files, rsync needs the -l option, so it is best to run the sync commands with the -avpgolr combination, i.e. replace the -avH used above with -avpgolr. (Strictly speaking, -a already implies -rlptgoD, so -l should already be in effect; spelling the options out as -avpgolr simply makes the intent explicit.)
-a: archive mode (implies -rlptgoD)
-v: verbose output
-p: preserve permissions
-g: preserve group
-o: preserve owner
-l: copy symlinks as symlinks
-r: recurse into directories
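Put together, the first full-sync command from earlier becomes, using fangfull_upload as the example:

rsync -avpgolr --port=873 --progress --delete /Data/fangfull_upload/ RSYNC_USER@192.168.1.5::fangfull_upload --password-file=/etc/rsync.pass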