Notes on a Greenplum installation failure caused by the firewall, and how it was diagnosed and fixed

1. Symptoms

20180201:15:06:25:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:----------------------------------------
20180201:15:06:25:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Greenplum Primary Segment Configuration
20180201:15:06:25:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:----------------------------------------
20180201:15:06:25:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-sdw1-1 /home/primary/gpseg0 40000 2 0 41000
20180201:15:06:25:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-sdw1-1 /home/primary/gpseg1 40001 3 1 41001
20180201:15:06:25:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-sdw1-2 /home/primary/gpseg2 40000 4 2 41000
20180201:15:06:25:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-sdw1-2 /home/primary/gpseg3 40001 5 3 41001
20180201:15:06:25:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:---------------------------------------
20180201:15:06:25:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Greenplum Mirror Segment Configuration
20180201:15:06:25:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:---------------------------------------
20180201:15:06:25:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-sdw1-2 /home/mirror/gpseg0 50000 6 0 51000
20180201:15:06:25:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-sdw1-2 /home/mirror/gpseg1 50001 7 1 51001
20180201:15:06:25:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-sdw1-1 /home/mirror/gpseg2 50000 8 2 51000
20180201:15:06:25:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-sdw1-1 /home/mirror/gpseg3 50001 9 3 51001
Continue with Greenplum creation Yy/Nn>
y
20180201:15:06:28:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Building the Master instance database, please wait...
20180201:15:06:38:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Starting the Master in admin mode
20180201:15:06:46:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Commencing parallel build of primary segment instances
20180201:15:06:46:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Spawning parallel processes batch [1], please wait...
....
20180201:15:06:46:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
........................
20180201:15:07:10:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:------------------------------------------------
20180201:15:07:10:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Parallel process exit status
20180201:15:07:10:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:------------------------------------------------
20180201:15:07:10:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Total processes marked as completed = 4
20180201:15:07:10:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Total processes marked as killed = 0
20180201:15:07:10:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Total processes marked as failed = 0
20180201:15:07:10:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:------------------------------------------------
20180201:15:07:10:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Commencing parallel build of mirror segment instances
20180201:15:07:10:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Spawning parallel processes batch [1], please wait...
....
20180201:15:07:11:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
....
20180201:15:07:15:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:------------------------------------------------
20180201:15:07:15:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Parallel process exit status
20180201:15:07:15:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:------------------------------------------------
20180201:15:07:15:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Total processes marked as completed = 0
20180201:15:07:15:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Total processes marked as killed = 0
20180201:15:07:15:028653 gpinitsystem:sdw1-2:gpadmin-[WARN]:-Total processes marked as failed = 4 <<<<<
20180201:15:07:15:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:------------------------------------------------
20180201:15:07:15:028653 gpinitsystem:sdw1-2:gpadmin-[FATAL]:-Errors generated from parallel processes
20180201:15:07:15:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Dumped contents of status file to the log file
20180201:15:07:15:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Building composite backout file
20180201:15:07:15:gpinitsystem:sdw1-2:gpadmin-[FATAL]:-Failures detected, see log file /home/gpadmin/gpAdminLogs/gpinitsystem_20180201.log for more detail Script Exiting!
20180201:15:07:15:028653 gpinitsystem:sdw1-2:gpadmin-[WARN]:-Script has left Greenplum Database in an incomplete state
20180201:15:07:15:028653 gpinitsystem:sdw1-2:gpadmin-[WARN]:-Run command /bin/bash /home/gpadmin/gpAdminLogs/backout_gpinitsystem_gpadmin_20180201_150615 to remove these changes
20180201:15:07:15:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-Start Function BACKOUT_COMMAND
20180201:15:07:15:028653 gpinitsystem:sdw1-2:gpadmin-[INFO]:-End Function BACKOUT_COMMAND

During installation, every segment failed to start.

Checking the files and logs on all of the compute nodes turned up no obvious errors.

 

2. Diagnosis

1) Check the master log

Following the hint in the output, inspect /home/gpadmin/gpAdminLogs/gpinitsystem_20180201.log:

20180201:15:07:12:015183 gpcreateseg.sh:sdw1-2:gpadmin-[INFO]:-End Function BACKOUT_COMMAND
20180201:15:07:12:015183 gpcreateseg.sh:sdw1-2:gpadmin-[INFO][3]:-Completed to start segment instance database sdw1-1 /home/mirror/gpseg3
20180201:15:07:12:015183 gpcreateseg.sh:sdw1-2:gpadmin-[INFO]:-Copying data for mirror on sdw1-1 using remote copy from primary sdw1-2 ...
20180201:15:07:12:015183 gpcreateseg.sh:sdw1-2:gpadmin-[INFO]:-Start Function RUN_COMMAND_REMOTE
20180201:15:07:12:015183 gpcreateseg.sh:sdw1-2:gpadmin-[INFO]:-Commencing remote /bin/ssh sdw1-2 export GPHOME=/usr/local/gpdb; . /usr/local/gpdb/greenplum_path.sh; /usr/local/gpdb/bin/lib/pysync.py -x pg_log -x postgresql.conf -x postmaster.pid /home/primary/gpseg3 \[sdw1-1\]:/home/mirror/gpseg3
Killed by signal 1.^M
Killed by signal 1.^M
Killed by signal 1.^M
Traceback (most recent call last):
File "/usr/local/gpdb/bin/lib/pysync.py", line 669, in <module>
sys.exit(LocalPysync(sys.argv, progressTimestamp=True).run())
File "/usr/local/gpdb/bin/lib/pysync.py", line 647, in run
code = self.work()
File "/usr/local/gpdb/bin/lib/pysync.py", line 611, in work
self.socket.connect(self.connectAddress)
File "/usr/lib64/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 113] No route to host
20180201:15:07:15:014991 gpcreateseg.sh:sdw1-2:gpadmin-[FATAL]:- Command export GPHOME=/usr/local/gpdb; . /usr/local/gpdb/greenplum_path.sh; /usr/local/gpdb/bin/lib/pysync.py -x pg_log -x postgresql.conf -x postmaster.pid /home/primary/gpseg2 \[sdw1-1\]:/home/mirror/gpseg2 on sdw1-2 failed with error status 1
20180201:15:07:15:014991 gpcreateseg.sh:sdw1-2:gpadmin-[INFO]:-End Function RUN_COMMAND_REMOTE
20180201:15:07:15:014991 gpcreateseg.sh:sdw1-2:gpadmin-[FATAL][2]:-Failed remote copy of segment data directory from sdw1-2 to sdw1-1
Killed by signal 1.^M
Traceback (most recent call last):
File "/usr/local/gpdb/bin/lib/pysync.py", line 669, in <module>
sys.exit(LocalPysync(sys.argv, progressTimestamp=True).run())
File "/usr/local/gpdb/bin/lib/pysync.py", line 647, in run
code = self.work()
File "/usr/local/gpdb/bin/lib/pysync.py", line 611, in work
self.socket.connect(self.connectAddress)
File "/usr/lib64/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 113] No route to host
Traceback (most recent call last):
File "/usr/local/gpdb/bin/lib/pysync.py", line 669, in <module>
sys.exit(LocalPysync(sys.argv, progressTimestamp=True).run())
File "/usr/local/gpdb/bin/lib/pysync.py", line 647, in run
code = self.work()
File "/usr/local/gpdb/bin/lib/pysync.py", line 611, in work
self.socket.connect(self.connectAddress)
File "/usr/lib64/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 113] No route to host

The key message here is: No route to host

This suggests that some machines in the cluster cannot reach one another, so the following was checked:

  • Whether all of the hosts can reach each other normally
  • Whether the hosts file on every machine is configured correctly
  • Whether passwordless SSH is set up between them

All of these checks passed; everything looked normal.
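Notably, all three checks can pass even while a firewall is blocking traffic: ping and SSH (port 22) are usually allowed, while the segment ports are not. As a hedged sketch (the helper below is hypothetical, not part of Greenplum), one can probe a specific port the way pysync.py does and tell a REJECT rule apart from a DROP rule or a simply closed port:

```python
import errno
import socket

def probe(host, port, timeout=3):
    """TCP-connect to host:port (the same call pysync.py makes) and
    classify the failure mode instead of just raising."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except socket.timeout:
        return "filtered"      # a DROP rule: packets silently discarded
    except OSError as e:
        if e.errno == errno.EHOSTUNREACH:
            return "rejected"  # [Errno 113] No route to host: a REJECT
                               # rule answered with ICMP host-unreachable,
                               # exactly the error in the log above
        if e.errno == errno.ECONNREFUSED:
            return "closed"    # host reachable, nothing listening
        raise
    finally:
        s.close()
```

In this incident, probing the mirror/replication ports (e.g. probe("sdw1-1", 41000) from sdw1-2) would likely have returned "rejected" while probe("sdw1-1", 22) returned "open", pointing straight at the firewall.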

2) Check the segment directories

Commencing remote /bin/ssh sdw1-2 export GPHOME=/usr/local/gpdb; . /usr/local/gpdb/greenplum_path.sh; /usr/local/gpdb/bin/lib/pysync.py -x pg_log -x postgresql.conf -x postmaster.pid /home/primary/gpseg3 \[sdw1-1\]:/home/mirror/gpseg3

Based on this error, locate the primary and mirror directories on sdw1-1 and sdw1-2 and check:

  • Whether the directories were created properly
  • Whether the files are complete
  • Whether the permissions are correct (the data directories should be owned by the database administrator account)

These checks also came back clean.
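The directory checks above can be scripted so they are repeatable across hosts. A minimal sketch (the helper name and the 0700-style ownership expectation are my assumptions, not something gpinitsystem documents in this log):

```python
import os
import pwd
import stat

def check_seg_dir(path, owner):
    """Return a list of problems with a segment data directory: it must
    exist, be a directory, be owned by the admin account, and not be
    group/world writable."""
    problems = []
    try:
        st = os.stat(path)
    except OSError:
        return ["missing"]
    if not stat.S_ISDIR(st.st_mode):
        problems.append("not a directory")
    if pwd.getpwuid(st.st_uid).pw_name != owner:
        problems.append("wrong owner (should be %s)" % owner)
    if st.st_mode & (stat.S_IWGRP | stat.S_IWOTH):
        problems.append("group/world writable")
    return problems
```

For example, check_seg_dir("/home/primary/gpseg0", "gpadmin") returning an empty list corresponds to what was observed here: the directories themselves were fine.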

3) Check the segment log files

After going through all of the relevant segment logs, the segments had basically started normally; the only notable entries were:
2018-02-01 15:07:07.854785 CST,,,p9642,th708450368,,,,0,,,seg-1,,,,,"LOG","00000","database system is ready to accept connections","PostgreSQL 8.3.23 (Greenplum Database 4.3.99.00 build dev) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4) compiled on Jan 18 2018 15:33:53 (with assert checking)",,,,,,0,,"postmaster.c",4337,
2018-02-01 15:07:08.853415 CST,,,p9642,th708450368,,,,0,,,seg-1,,,,,"LOG","00000","received smart shutdown request",,,,,,,0,,"postmaster.c",4075,
2018-02-01 15:07:08.855196 CST,,,p9664,th708450368,,,,0,,,seg-1,,,,,"LOG","00000","shutting down",,,,,,,0,,"xlog.c",8616,
2018-02-01 15:07:08.863175 CST,,,p9664,th708450368,,,,0,,,seg-1,,,,,"LOG","00000","database system is shut down",,,,,,,0,,"xlog.c",8632,

These entries show that each segment came up and was then shut down cleanly, so the database as a whole never finished starting, and the problem does not lie with the segments themselves.

At this point there was little useful information left to go on. Going back to the official documentation, Chapter 3 of https://www.emc.com/collateral/TechnicalDocument/docu51071.pdf mentions that the firewall must be disabled, so the firewall status was checked with:

systemctl status firewalld.service

The firewall turned out to be running.
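On CentOS 7 this status check can also be scripted, for example to loop over every cluster host via SSH. A minimal local wrapper (a hypothetical helper of mine) around systemctl is-active, which prints the unit state and exits non-zero when the unit is not active:

```python
import subprocess

def firewalld_state():
    """Ask systemd for firewalld's state; returns the raw state string
    ('active', 'inactive', 'failed', ...) or 'unknown' when systemctl
    is unavailable or gives no answer."""
    try:
        r = subprocess.run(
            ["systemctl", "is-active", "firewalld.service"],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
    except FileNotFoundError:
        return "unknown"
    return r.stdout.strip() or "unknown"
```

Here firewalld_state() would have returned "active" on every host, which is the root cause found below.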

3. Solution

1) Roll back the installation

The installation log provides the following command to roll back:

/bin/bash /home/gpadmin/gpAdminLogs/backout_gpinitsystem_gpadmin_20180201_150615

2) Disable the firewall

systemctl stop firewalld.service
systemctl disable firewalld.service  # keep firewalld from starting again at boot

Note: the commands above apply to CentOS 7.

3) Re-run the installation steps

 

Open question: I have not yet looked into why the firewall has to be disabled, nor have I ever succeeded in deploying a cluster with the firewall enabled.
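For what it is worth, here is an untested sketch of the firewall-enabled route: keep firewalld running and open only the port ranges this cluster uses. The TCP ranges below come from the gpinitsystem layout printed at the top of this post; the master port 5432 is an assumption (it does not appear in the log), and the sketch ignores Greenplum's UDP interconnect traffic entirely, which may be exactly why this approach has not worked for me.

```python
import subprocess

# Port ranges taken from the gpinitsystem output above:
# primaries 40000-40001 with replication 41000-41001,
# mirrors 50000-50001 with replication 51000-51001.
PORTS = [
    "40000-41001/tcp",  # primary + primary replication ports
    "50000-51001/tcp",  # mirror + mirror replication ports
    "5432/tcp",         # ASSUMED master port, not from the log
]

def build_commands():
    """Build the firewall-cmd invocations without running anything."""
    cmds = [["firewall-cmd", "--permanent", "--add-port=" + p] for p in PORTS]
    cmds.append(["firewall-cmd", "--reload"])
    return cmds

def apply(dry_run=True):
    """Print the commands (dry run) or execute them; needs root to apply."""
    for c in build_commands():
        if dry_run:
            print(" ".join(c))
        else:
            subprocess.check_call(c)
```

These commands would have to be run on every host, and verified afterwards with firewall-cmd --list-ports; treat this as a starting point for experimentation, not a known-good configuration.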
