上面左邊是個人我的微信,如需進一步溝通,請加微信。 右邊是個人公衆號「Openstack私有云」,若有興趣,請關注。node
A record of today's procedure for gracefully shutting down one physical host. The environment is a 3-node HA setup of converged controller/storage/compute nodes, deployed with kolla and using Ceph storage; the node being shut down is the converged node controller03. The overall process: first live-migrate the virtual machines off this host, then set the Ceph cluster's osd noout flag so that shutting the node down does not trigger an OSD data rebalance and the resulting data churn, then disable the node in the web UI, and finally ssh into the node and power it off:
1. Live-migrate the virtual machines on this node. Log in to the web management UI, go to "Admin" -> "Instances" -> select the virtual machines on this node -> "Live Migrate Instance" -> choose another node, then wait for the migration to finish and verify it (a CLI alternative is sketched after step 4);
2. Set osd noout on all Ceph nodes. Log in to every Ceph node and run: docker exec -it ceph_mon ceph osd set noout ;
3. Disable the node in the web UI. Log in to the web management UI, go to "Admin" -> "Hypervisors" -> "Compute Host" -> select the host -> "Disable Service";
4. ssh into the node and power it off by running: shutdown -h now ;
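The same steps can also be driven from the command line. This is only a minimal sketch, assuming admin credentials are sourced and the openstack/nova clients are available on a controller node; the target host controller01 and the instance UUID are placeholders, not values from this environment:

# 1. List the instances running on controller03, then live-migrate each one
openstack server list --all-projects --host controller03
nova live-migration <instance-uuid> controller01      # target host is an example

# 2. Set the cluster-wide noout flag and confirm it shows up in the osdmap flags
docker exec ceph_mon ceph osd set noout
docker exec ceph_mon ceph osd dump | grep flags

# 3. Disable the nova-compute service on the host (same effect as "Disable Service" in the web UI)
openstack compute service set --disable --disable-reason "planned maintenance" controller03 nova-compute

# 4. From an ssh session on controller03, power the node off
shutdown -h now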
While the node shuts down, run ceph -w to watch in real time whether the Ceph cluster starts rebalancing OSD data:
[root@control02 mariadb]# docker exec -it ceph_mon ceph -w
    cluster 33932e16-1909-4d68-b085-3c01d0432adc
     health HEALTH_WARN
            noout flag(s) set
     monmap e2: 3 mons at {192.168.1.130=192.168.1.130:6789/0,192.168.1.131=192.168.1.131:6789/0,192.168.1.132=192.168.1.132:6789/0}
            election epoch 72, quorum 0,1,2 192.168.1.130,192.168.1.131,192.168.1.132
     osdmap e466: 9 osds: 9 up, 9 in
            flags noout,sortbitwise,require_jewel_osds
      pgmap v712835: 640 pgs, 13 pools, 14902 MB data, 7300 objects
            30288 MB used, 824 GB / 854 GB avail
                 640 active+clean
Check the status with ceph -s:
[root@control01 kolla]# docker exec -it ceph_mon ceph osd set noout
set noout
[root@control01 kolla]# docker exec -it ceph_mon ceph -s
    cluster 33932e16-1909-4d68-b085-3c01d0432adc
     health HEALTH_WARN
            412 pgs degraded
            404 pgs stuck unclean
            412 pgs undersized
            recovery 4759/14600 objects degraded (32.596%)
            3/9 in osds are down
            noout flag(s) set
            1 mons down, quorum 0,1 192.168.1.130,192.168.1.131
     monmap e2: 3 mons at {192.168.1.130=192.168.1.130:6789/0,192.168.1.131=192.168.1.131:6789/0,192.168.1.132=192.168.1.132:6789/0}
            election epoch 74, quorum 0,1 192.168.1.130,192.168.1.131
     osdmap e468: 9 osds: 6 up, 9 in; 412 remapped pgs
            flags noout,sortbitwise,require_jewel_osds
      pgmap v712931: 640 pgs, 13 pools, 14902 MB data, 7300 objects
            30285 MB used, 824 GB / 854 GB avail
            4759/14600 objects degraded (32.596%)
                 412 active+undersized+degraded
                 228 active+clean
[root@control01 kolla]#
[root@control01 kolla]#
[root@control01 kolla]# docker exec -it ceph_mon ceph -s
    cluster 33932e16-1909-4d68-b085-3c01d0432adc
     health HEALTH_WARN
            412 pgs degraded
            405 pgs stuck unclean
            412 pgs undersized
            recovery 4759/14600 objects degraded (32.596%)
            3/9 in osds are down
            noout flag(s) set
            1 mons down, quorum 0,1 192.168.1.130,192.168.1.131
     monmap e2: 3 mons at {192.168.1.130=192.168.1.130:6789/0,192.168.1.131=192.168.1.131:6789/0,192.168.1.132=192.168.1.132:6789/0}
            election epoch 74, quorum 0,1 192.168.1.130,192.168.1.131
     osdmap e468: 9 osds: 6 up, 9 in; 412 remapped pgs
            flags noout,sortbitwise,require_jewel_osds
      pgmap v712981: 640 pgs, 13 pools, 14902 MB data, 7300 objects
            30285 MB used, 824 GB / 854 GB avail
            4759/14600 objects degraded (32.596%)
                 412 active+undersized+degraded
                 228 active+clean
  client io 7559 B/s rd, 20662 B/s wr, 11 op/s rd, 1 op/s wr
Three OSDs are down but still in, and the pgmap stays at 412 active+undersized+degraded and 228 active+clean, which shows the data is not being rebalanced.
In addition, all virtual machines were checked and are running normally.
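As a quick cross-check from the CLI (a sketch, again assuming admin credentials are sourced), listing all instances should show them ACTIVE on the remaining hosts, and filtering for ERROR should return nothing:

openstack server list --all-projects --long           # shows status, host and power state
openstack server list --all-projects --status ERROR   # should return no rows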