This document records the verification of the VSM Import Cluster feature and the problems encountered along the way.
1) Management Network: the network over which the VSM controller node manages the other nodes; in this example, 172.16.34.0/24
2) Ceph Public Network: the network for ceph-client <---> ceph-mon and ceph-client <---> ceph-osd traffic; in this example, 192.1.35.0/24
3) Ceph Cluster Network: the network for ceph-osd <---> ceph-osd traffic (replication, recovery, heartbeats); in this example, 192.3.35.0/24
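For reference, a sketch of the per-node NIC layout assumed throughout this walkthrough (the ens32/ens33/ens35 names come from the VM template used here and may differ on other hardware):
# ens32 -> Management Network   (172.16.34.x)
# ens33 -> Ceph Public Network  (192.1.35.x)
# ens35 -> Ceph Cluster Network (192.3.35.x)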
Before starting a new deployment, some pre-configuration steps need to be performed. The following describes the VM case, but the general steps should also apply to bare metal:
VSM requires at least three storage nodes and one controller. Since this walkthrough builds a VSM import cluster, we create four CentOS virtual machines: one will be the VSM controller and the other three will be the storage nodes in the cluster. Because the storage nodes are configured almost identically, we only need to set up and install one of them and then clone it; for the VSM controller, we can clone a storage node and add some extra configuration.
1) Modify /etc/sysconfig/network-scripts/ifcfg-ens32:
sed -i "s/BOOTPROTO=dhcp/BOOTPROTO=static/g" /etc/sysconfig/network-scripts/ifcfg-ens32
cat << EOF >> /etc/sysconfig/network-scripts/ifcfg-ens32
IPADDR=172.16.34.52
GATEWAY=172.16.34.254
NETMASK=255.255.0.0
DNS1=10.19.8.10
DNS2=8.8.4.4
EOF
2) Add /etc/sysconfig/network-scripts/ifcfg-ens33 and /etc/sysconfig/network-scripts/ifcfg-ens35 with the following contents, respectively:
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.1.35.52
GATEWAY=192.1.35.254
NETMASK=255.255.0.0
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=ens35
DEVICE=ens35
ONBOOT=yes
IPADDR=192.3.35.52
GATEWAY=192.3.35.254
NETMASK=255.255.0.0
3) After changing the IPs, run:
service network restart
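To confirm the new addresses took effect after the restart, a quick check such as the following (my addition) can help:
ip addr show ens32   # management address
ip addr show ens33   # ceph public address
ip addr show ens35   # ceph cluster address
ip route             # default gateway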
Disable SELinux:
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
2.3.1.4 Install NTP and an SSH server:
yum install -y ntp ntpdate ntp-doc
yum install -y openssh-server
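The original steps stop at installing the packages; since Ceph monitors are sensitive to clock skew, you would typically also enable and start the NTP daemon, e.g.:
systemctl enable ntpd
systemctl start ntpd
ntpq -p   # verify time sources are reachable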
2.3.1.5 Change the hostname and hosts file
hostnamectl set-hostname ceph01
cat << EOF >> /etc/hosts
192.1.35.52 ceph01
192.1.35.53 ceph02
192.1.35.54 ceph03
EOF
2.3.1.6 Configure installation sources
1) Configure the Aliyun CentOS repositories
yum install wget
yum clean all
rm -rf /etc/yum.repos.d/*.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
#sed -i 's/$releasever/7.2.1511/g' /etc/yum.repos.d/CentOS-Base.repo
2) Configure the Ceph installation repository
cat << EOF > /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for \$basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-hammer/el7/x86_64/
enabled=1
priority=2
gpgcheck=0
type=rpm-md

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-hammer/el7/noarch/
enabled=1
gpgcheck=0
type=rpm-md

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-hammer/el7/SRPMS
enabled=0
priority=2
gpgcheck=0
type=rpm-md
EOF
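To confirm the new repositories resolve before installing anything (my addition):
yum makecache
yum repolist   # the ceph and ceph-noarch repos should be listed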
Update the software repositories and install ceph-deploy on ceph01:
On nodes other than ceph01, run:
yum update
On ceph01, run:
yum -y update && yum -y install ceph-deploy
1) Generate an SSH key pair (do not use sudo or the root user):
ssh-keygen
2) Copy the public key to each Ceph node:
ssh-copy-id root@ceph01
ssh-copy-id root@ceph02
ssh-copy-id root@ceph03
3) Configure ~/.ssh/config so that ssh and ceph-deploy reach each node as root without specifying the user each time. The commands are as follows:
mkdir -p /root/.ssh
touch /root/.ssh/config
cat << EOF > /root/.ssh/config
Host ceph01
    Hostname ceph01
    User root
Host ceph02
    Hostname ceph02
    User root
Host ceph03
    Hostname ceph03
    User root
EOF
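A quick end-to-end check that passwordless root login works (my addition):
ssh ceph02 hostname   # should print "ceph02" without prompting for a password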
Create a directory on the admin node ceph01 to hold the configuration files and keyrings that ceph-deploy generates:
ssh root@ceph01
mkdir -p /home/ceph-cluster
cd /home/ceph-cluster
The following operations are performed on the ceph01 node.
1. Create the cluster
ceph-deploy new ceph01 ceph02 ceph03
2. If you have more than one NIC, add the public network setting under the [global] section of the Ceph configuration file:
public network = {ip-address}/{netmask}
echo "public network = 192.1.35.0/24" | sudo tee -a ceph.conf echo "cluster network = 192.2.35.0/24" | sudo tee -a ceph.conf
3. Install Ceph
ceph-deploy install ceph01 ceph02 ceph03 --no-adjust-repos
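To verify the packages landed on every node, a quick loop over the hosts (my addition; relies on the passwordless SSH set up earlier) can be run from ceph01:
for h in ceph01 ceph02 ceph03; do ssh $h ceph --version; done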
4. Configure the initial monitors and gather all the keys
ceph-deploy mon create-initial
5. Add the OSDs (zap, prepare, and activate /dev/sdb on each node):
ceph-deploy disk zap ceph01:/dev/sdb
ceph-deploy osd prepare ceph01:/dev/sdb
ceph-deploy osd activate ceph01:/dev/sdb1
ceph-deploy disk zap ceph02:/dev/sdb
ceph-deploy osd prepare ceph02:/dev/sdb
ceph-deploy osd activate ceph02:/dev/sdb1
ceph-deploy disk zap ceph03:/dev/sdb
ceph-deploy osd prepare ceph03:/dev/sdb
ceph-deploy osd activate ceph03:/dev/sdb1
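Once all OSDs are activated, a basic sanity check (my addition; ceph -s requires the admin keyring on the node, which can be pushed with ceph-deploy admin ceph01):
ceph -s         # cluster should report HEALTH_OK once placement groups settle
ceph osd tree   # all three OSDs should show up and in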
Clone one more virtual machine to serve as the vsm-controller node and configure it accordingly: IP addresses, hostname, /etc/hosts, passwordless SSH access, and the Ceph repositories.
1. Set the IP
1) Modify /etc/sysconfig/network-scripts/ifcfg-ens32:
sed -i "s/BOOTPROTO=dhcp/BOOTPROTO=static/g" /etc/sysconfig/network-scripts/ifcfg-ens32
cat << EOF >> /etc/sysconfig/network-scripts/ifcfg-ens32
IPADDR=172.16.34.51
GATEWAY=172.16.34.254
NETMASK=255.255.0.0
DNS1=10.19.8.10
DNS2=8.8.4.4
EOF
2) Add /etc/sysconfig/network-scripts/ifcfg-ens33 with the following content:
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.1.35.51
GATEWAY=192.1.35.254
NETMASK=255.255.0.0
2. Change the hostname
hostnamectl set-hostname ceph-vsm-console
3. Set up the installation sources
See section 2.3.1.6, Configure installation sources.
4. Configure hostname resolution. Configure /etc/hosts on all nodes:
cat << EOF >> /etc/hosts
172.16.34.51 ceph-vsm-console
172.16.34.52 ceph01
172.16.34.53 ceph02
172.16.34.54 ceph03
192.1.35.51 ceph-vsm-console
192.1.35.52 ceph01
192.1.35.53 ceph02
192.1.35.54 ceph03
EOF
5. Configure passwordless SSH access to the Ceph cluster
ssh-keygen
ssh-copy-id root@ceph01
ssh-copy-id root@ceph02
ssh-copy-id root@ceph03
6. Disable SELinux and iptables. Apply the following configuration on all nodes:
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
Download and unpack the VSM 2.1.0 release package:
wget https://github.com/01org/virtual-storage-manager/releases/download/v2.1.0/2.1.0-336_centos7.tar.gz
tar -xzvf 2.1.0-336_centos7.tar.gz
After extraction, the file structure is as follows:
# tree 2.1.0-336
2.1.0-336
├── CHANGELOG.md
├── CHANGELOG.pdf
├── get_pass.sh
├── INSTALL.md
├── INSTALL.pdf
├── installrc
├── install.sh
├── LICENSE
├── manifest
│   ├── cluster.manifest.sample
│   └── server.manifest.sample
├── NOTICE
├── prov_node.sh
├── README.md
├── RELEASE
├── rpms.lst
├── uninstall.sh
├── VERSION
└── vsmrepo
    ├── python-vsmclient-2.1.0-336.noarch.rpm
    ├── repodata
    │   ├── 09c2465aa2670cc6b31e3eda4818b2983eeab0432965a184d0594a3f4669d885-primary.sqlite.bz2
    │   ├── 52d75398c3b713a4d7bb089c9fef6d13e7fd0d0b305a86b5dff117c720990507-other.xml.gz
    │   ├── 8299117fe070fbbb4e3439d0643a074fbe77a17da9cbaf9abd77e1f758050f38-filelists.sqlite.bz2
    │   ├── aa6bec470daa2eb404e71a228252eaa5712c84deaf2052ed313afd2c5948d826-other.sqlite.bz2
    │   ├── d8b7b087bd3b68158cc2d7e44cc5bad17b3b299351a57f81c2d791f9043e0998-primary.xml.gz
    │   ├── dfc4f61fdf63c9903e137c2ae20f54186876e672ad84b24a9b9ab8d368931d62-filelists.xml.gz
    │   └── repomd.xml
    ├── vsm-2.1.0-336.noarch.rpm
    ├── vsm-dashboard-2.1.0-336.noarch.rpm
    └── vsm-deploy-2.1.0-336.x86_64.rpm
3 directories, 28 files
Edit the installrc file so the controller and agent addresses match this deployment (here the controller is 172.16.34.51 and the agents are 172.16.34.52-54):
AGENT_ADDRESS_LIST="172.16.34.52 172.16.34.53 172.16.34.54"
CONTROLLER_ADDRESS="172.16.34.51"
1) In the manifest folder, create four subfolders, one named after each of the four IP addresses.
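A minimal sketch of this step (assuming you are inside the unpacked 2.1.0-336 directory):
cd manifest
mkdir 172.16.34.51 172.16.34.52 172.16.34.53 172.16.34.54
The resulting structure: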
manifest/
├── 172.16.34.51
├── 172.16.34.52
├── 172.16.34.53
├── 172.16.34.54
├── cluster.manifest.sample
└── server.manifest.sample
2) Copy cluster.manifest.sample into the folder named after the controller node's IP and rename it cluster.manifest.
Modify storage_class, storage_group, the addr entries, and related fields: set the storage class to match the kind of disks you are using, and set each address to the corresponding subnet.
cp cluster.manifest.sample 172.16.34.51/
mv 172.16.34.51/cluster.manifest.sample 172.16.34.51/cluster.manifest
The modified cluster.manifest looks like this:
[storage_class]
vm_sas

[storage_group]
#format: [storage group name] [user friendly storage group name] [storage class]
vm_sas vm_sas vm_sas

[cluster]
vsm_ceph

[file_system]
xfs

[management_addr]
172.16.34.0/24

[ceph_public_addr]
192.1.35.0/24

[ceph_cluster_addr]
192.2.35.0/24

[settings]
storage_group_near_full_threshold 65
storage_group_full_threshold 85
ceph_near_full_threshold 75
ceph_full_threshold 90
pg_count_factor 100
heartbeat_interval 5
osd_heartbeat_interval 10
osd_heartbeat_grace 10

[ec_profiles]

[cache_tier_defaults]
3) Copy server.manifest.sample into the folders of the remaining three node IPs (a loop version is sketched after the commands below), then modify vsm_controller_ip, the roles, and the disk paths. For the full field reference, see the configuration documentation on the official site.
cp server.manifest.sample 172.16.34.52/
mv 172.16.34.52/server.manifest.sample 172.16.34.52/server.manifest
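The same copy can be done for all three storage nodes in one pass (my addition; run from the manifest directory, and each node's copy still needs its own per-node edits afterwards):
for ip in 172.16.34.52 172.16.34.53 172.16.34.54; do
    cp server.manifest.sample $ip/server.manifest
done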
The modified server.manifest looks like this:
[vsm_controller_ip]
172.16.34.51
[role]
storage
monitor
[auth_key]
f0f0603a754a432daa0edbaf28229bae-d19eed1d25fd46699ba0717c7b95ebd2
[vm_sas]
#format [sas_device] [journal_device]
/dev/sdb1 /dev/sdb2
4) With the VSM information set, the manifest folder structure looks like this:
[root@console manifest]# tree
.
├── 172.16.34.51
│   └── cluster.manifest
├── 172.16.34.52
│   └── server.manifest
├── 172.16.34.53
│   └── server.manifest
├── 172.16.34.54
│   └── server.manifest
├── cluster.manifest.sample
└── server.manifest.sample
4 directories, 6 files
Run the installer: first check the dependency packages, then install the controller:
./install.sh -v 2.1 --check-dependence-package
./install.sh -v 2.1 --controller 172.16.34.51
Then install the agents; the general form is:
./install.sh -v 2.1 --agent agent1-ip,agent2-ip,agent3-ip
which in this deployment becomes:
./install.sh -v 2.1 --agent 172.16.34.52,172.16.34.53,172.16.34.54
Finally, start the VSM agent on each Ceph node:
python /usr/bin/vsm-agent --config-file /etc/vsm/vsm.conf --log-file /var/log/vsm/vsm-agent.log 2>&1 &
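To confirm the agent came up on each node, a quick check (my addition):
ps -ef | grep [v]sm-agent                 # the agent process should be running
tail -n 20 /var/log/vsm/vsm-agent.log     # inspect the most recent agent log output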