13 OpenStack Ussuri: connecting to a Ceph Nautilus cluster (CentOS 8)


1 Introduction

Reference: configuring Ceph access for OpenStack. ==This explanation is adapted from Netonline's original post and covers the configuration needed to connect to the cluster.== In an OpenStack environment, data storage can be divided into ephemeral storage and persistent storage.

Ephemeral storage: provided mainly by the local filesystem; it is used for the local system and ephemeral data disks of Nova instances, and for storing the system images uploaded to Glance;

Persistent storage: consists mainly of the block storage provided by Cinder and the object storage provided by Swift. Cinder's block storage is the most widely used; block volumes are normally attached to virtual machines as cloud disks.

The three OpenStack projects that need data storage are Nova (virtual machine disk files), Glance (shared template images), and Cinder (block storage).

The logical relationship between Cinder, Glance, Nova and the Ceph cluster is as follows:

The Ceph/OpenStack integration mainly uses Ceph's RBD service. Underneath, Ceph is a RADOS storage cluster, and Ceph accesses the underlying RADOS through the librados library;

The OpenStack project clients call librbd, and librbd in turn calls librados to access the underlying RADOS. In practice, Nova uses the libvirt driver, so librbd is invoked through libvirt and QEMU, while Cinder and Glance call librbd directly;

Data written to the Ceph cluster is striped into multiple objects; each object is mapped by a hash function to a PG (PGs are grouped into pools), and the PGs are then mapped roughly evenly onto the physical storage devices, the OSDs, by the CRUSH algorithm (an OSD is a physical storage device backed by a filesystem such as XFS or ext4).
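Once the pools from section 3.1 exist, this mapping can be observed directly by asking the cluster where it would place a given object. A minimal sketch, assuming a hypothetical object name "test-object" in the volumes pool; the PG and OSD numbers in the output depend on your cluster:

# Print the PG and the up/acting OSD set that CRUSH selects for this object
ceph osd map volumes test-object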

2 Operations on the OpenStack cluster

#Install the Ceph Nautilus package repository on all OpenStack controller and compute nodes. CentOS 8 provides a default release package, but the version must exactly match the Ceph cluster you are connecting to! #The default way to install the repository is:

yum install centos-release-ceph-nautilus.noarch

#However, I am using Nautilus 14.2.10 here, so I use the following repo file instead:

[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el8/$basearch
enabled=1
gpgcheck=0
type=rpm-md

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el8/noarch
enabled=1
gpgcheck=0
type=rpm-md

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el8/SRPMS
enabled=1
gpgcheck=0
type=rpm-md

#Nodes running the glance-api service need python3-rbd installed; #here glance-api runs on the 3 controller nodes, so all three must have it installed

yum install python3-rbd -y

#Nodes running the cinder-volume and nova-compute services need ceph-common installed; #here cinder-volume and nova-compute run on the 2 compute (storage) nodes

yum install ceph-common -y

3 Operations on the Ceph cluster

#==For building the Ceph cluster itself, my OpenStack Ussuri cluster deployment tutorial - centos8 offers two options; pick whichever you prefer.== #Add hosts entries on both the Ceph cluster and the OpenStack cluster #vim /etc/hosts

172.16.1.131 ceph131
172.16.1.132 ceph132
172.16.1.133 ceph133

172.16.1.160 controller160
172.16.1.161 controller161
172.16.1.162 controller162
172.16.1.168 controller168
172.16.1.163 compute163
172.16.1.164 compute164

3.1 Create the pools that the OpenStack cluster will use

#By default Ceph stores data in pools; a pool is a logical grouping of a number of PGs, and the objects in a PG are mapped to different OSDs, so a pool is spread across the whole cluster. #Different data could all be stored in one pool, but that makes it hard to separate and manage data per client, so normally a separate pool is created for each client. #Create pools for Cinder, Nova, and Glance, named volumes, vms, and images. ==#Here the volumes pool is persistent storage, vms is the ephemeral back end for instances, and images holds the images.== ==#There is a formula for the PG count; you can also use the calculator on the official website!== ==#Estimating the PG count: the PG count for a single pool in the cluster is calculated as: total PGs = (number of OSDs * 100) / max replica count / number of pools (the result must be rounded to the nearest power of 2).== #Run the following on the Ceph cluster to create the pools
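A quick sanity check of that formula against this particular cluster — a hedged estimate assuming 3 OSDs, a replica size of 3, and roughly 10 pools once everything is created:

# total PGs per pool = (3 OSDs * 100) / 3 replicas / 10 pools = 10
# rounded to a power of two (rounding up here) = 16, the pg_num/pgp_num used below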

[root@ceph131 ~]# ceph osd pool create volumes 16 16 replicated
pool 'volumes' created
[root@ceph131 ~]# ceph osd pool create vms 16 16 replicated
pool 'vms' created
[root@ceph131 ~]# ceph osd pool create images 16 16 replicated
pool 'images' created
[root@ceph131 ~]# ceph osd lspools
1 cephfs_data
2 cephfs_metadata
3 rbd_storage
4 .rgw.root
5 default.rgw.control
6 default.rgw.meta
7 default.rgw.log
10 volumes
11 vms
12 images

3.2 Ceph authorization setup

3.2.1 Create users

#Ceph enables cephx authentication by default, so new users must be created and authorized for the nova/cinder and glance clients; #on the admin node, create the client.cinder and client.glance users for the nodes running cinder-volume and glance-api and grant their permissions; #permissions are set per pool, and the pool names match the pools created above

[root@ceph131 ~]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[client.cinder]
	key = AQDYA/9eFE4vIBAArdMpCCNxKxLUpSKaKc6nDg==
[root@ceph131 ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]
	key = AQDkA/9eCaGUERAAxKTqRS5Vk7iuLugYnEP5BQ==
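To double-check the users just created, their keys and capabilities can be read back on the Ceph admin node; this is only a verification step and should print the same keys together with the mon/osd caps granted above:

ceph auth get client.cinder
ceph auth get client.glance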

3.2.2 Push the client.glance & client.cinder keyrings

#Set up passwordless SSH from this node

[cephdeploy@ceph131 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:CsvXYKm8mRzasMFwgWVLx5LvvfnPrRc5S1wSb6kPytM root@ceph131
The key's randomart image is: (randomart output omitted)

#Push the key to each OpenStack cluster node

ssh-copy-id root@controller160
ssh-copy-id root@controller161
ssh-copy-id root@controller162
ssh-copy-id root@compute163
ssh-copy-id root@compute164

#Here the nova-compute and cinder-volume services run on the same nodes, so the operation does not need to be repeated.

#Push the keyring generated for the client.glance user to the nodes running glance-api
[root@ceph131 ~]# ceph auth get-or-create client.glance | tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
	key = AQDkA/9eCaGUERAAxKTqRS5Vk7iuLugYnEP5BQ==
[root@ceph131 ~]# ceph auth get-or-create client.glance | ssh root@controller160 tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
	key = AQDkA/9eCaGUERAAxKTqRS5Vk7iuLugYnEP5BQ==
[root@ceph131 ~]# ceph auth get-or-create client.glance | ssh root@controller161 tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
	key = AQDkA/9eCaGUERAAxKTqRS5Vk7iuLugYnEP5BQ==
[root@ceph131 ~]# ceph auth get-or-create client.glance | ssh root@controller162 tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
	key = AQDkA/9eCaGUERAAxKTqRS5Vk7iuLugYnEP5BQ==


#Also change the owner and group of the keyring file
#chown glance:glance /etc/ceph/ceph.client.glance.keyring
ssh root@controller160 chown glance:glance /etc/ceph/ceph.client.glance.keyring
ssh root@controller161 chown glance:glance /etc/ceph/ceph.client.glance.keyring
ssh root@controller162 chown glance:glance /etc/ceph/ceph.client.glance.keyring

#Push the keyring generated for the client.cinder user to the nodes running cinder-volume
[root@ceph131 ceph]# ceph auth get-or-create client.cinder | ssh root@compute163 tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder]
	key = AQDYA/9eFE4vIBAArdMpCCNxKxLUpSKaKc6nDg==
[root@ceph131 ceph]# ceph auth get-or-create client.cinder | ssh root@compute164 tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder]
	key = AQDYA/9eFE4vIBAArdMpCCNxKxLUpSKaKc6nDg==

#Also change the owner and group of the keyring file
ssh root@compute163 chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ssh root@compute164 chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring

3.2.3 libvirt secret

#The nodes running nova-compute need the client.cinder key stored in libvirt; when a Ceph-backed Cinder volume is attached to an instance, libvirt uses this key to access the Ceph cluster;

#From the admin node, push the client.cinder key file to the compute (storage) nodes; the generated file is temporary and can be deleted once the key has been added to libvirt

[root@ceph131 ceph]# ceph auth get-key client.cinder | ssh root@compute164 tee /etc/ceph/client.cinder.key
AQDYA/9eFE4vIBAArdMpCCNxKxLUpSKaKc6nDg==
[root@ceph131 ceph]# ceph auth get-key client.cinder | ssh root@compute163 tee /etc/ceph/client.cinder.key
AQDYA/9eFE4vIBAArdMpCCNxKxLUpSKaKc6nDg==

#Add the key to libvirt on the compute (storage) nodes, using compute163 as an example; #first generate a UUID, ==all compute (storage) nodes can share this UUID (the other nodes do not need to repeat this step)==; #the UUID is also used later when configuring nova.conf, so keep it consistent

[root@compute163 ~]# uuidgen
e9776771-b980-481d-9e99-3ddfdbf53d1e
[root@compute163 ~]# cd /etc/ceph/
[root@compute163 ceph]# touch secret.xml
[root@compute163 ceph]# vim secret.xml

<secret ephemeral='no' private='no'>
        <uuid>cb26bb6c-2a84-45c2-8187-fa94b81dd53d</uuid>
        <usage type='ceph'>
                <name>client.cinder secret</name>
        </usage>
</secret>

[root@compute163 ceph]# virsh secret-define --file secret.xml
Secret cb26bb6c-2a84-45c2-8187-fa94b81dd53d created

[root@compute163 ceph]# virsh secret-set-value --secret cb26bb6c-2a84-45c2-8187-fa94b81dd53d --base64 $(cat /etc/ceph/client.cinder.key)
Secret value set
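To confirm that libvirt stored the secret correctly on each compute node, it can be listed and read back; a verification sketch using the UUID defined above (the printed value should match /etc/ceph/client.cinder.key):

virsh secret-list
virsh secret-get-value cb26bb6c-2a84-45c2-8187-fa94b81dd53d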

#Push ceph.conf

[root@ceph131 ceph]# scp ceph.conf root@controller160:/etc/ceph/
ceph.conf                                                                                                                                                                                           100%  514   407.7KB/s   00:00
[root@ceph131 ceph]# scp ceph.conf root@controller161:/etc/ceph/
ceph.conf                                                                                                                                                                                           100%  514   631.5KB/s   00:00
[root@ceph131 ceph]# scp ceph.conf root@controller162:/etc/ceph/
ceph.conf                                                                                                                                                                                           100%  514   218.3KB/s   00:00
[root@ceph131 ceph]# scp ceph.conf root@compute163:/etc/ceph/
ceph.conf                                                                                                                                                                                           100%  514     2.3KB/s   00:00
[root@ceph131 ceph]# scp ceph.conf root@compute164:/etc/ceph/
ceph.conf                                                                                                                                                                                           100%  514     3.6KB/s   00:00

4 Integrating Glance with Ceph

4.1 Configure glance-api.conf

#On the nodes running the glance-api service (all 3 controller nodes here), modify glance-api.conf, using controller160 as an example #only the settings related to integrating Glance with Ceph are listed below #vim /etc/glance/glance-api.conf

[DEFAULT]
#enable copy-on-write cloning
show_image_direct_url = True
[glance_store]
stores = rbd
default_store = rbd
rbd_store_chunk_size = 8
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
#stores = file,http
#default_store = file
#filesystem_store_datadir = /var/lib/glance/images/

#After changing the configuration, restart the service

systemctl restart openstack-glance-api.service

#Upload a CirrOS image

[root@controller160 ~]# glance image-create --name "rbd_cirros-05" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --visibility=public
+------------------+----------------------------------------------------------------------------------+
| Property         | Value                                                                            |
+------------------+----------------------------------------------------------------------------------+
| checksum         | 443b7623e27ecf03dc9e01ee93f67afe                                                 |
| container_format | bare                                                                             |
| created_at       | 2020-07-06T15:25:40Z                                                             |
| direct_url       | rbd://76235629-6feb-4f0c-a106-4be33d485535/images/f6da37cd-449a-436c-b321-e0c1c0 |
|                  | 6761d8/snap                                                                      |
| disk_format      | qcow2                                                                            |
| id               | f6da37cd-449a-436c-b321-e0c1c06761d8                                             |
| min_disk         | 0                                                                                |
| min_ram          | 0                                                                                |
| name             | rbd_cirros-05                                                                    |
| os_hash_algo     | sha512                                                                           |
| os_hash_value    | 6513f21e44aa3da349f248188a44bc304a3653a04122d8fb4535423c8e1d14cd6a153f735bb0982e |
|                  | 2161b5b5186106570c17a9e58b64dd39390617cd5a350f78                                 |
| os_hidden        | False                                                                            |
| owner            | d3dda47e8c354d86b17085f9e382948b                                                 |
| protected        | False                                                                            |
| size             | 12716032                                                                         |
| status           | active                                                                           |
| tags             | []                                                                               |
| updated_at       | 2020-07-06T15:26:06Z                                                             |
| virtual_size     | Not available                                                                    |
| visibility       | public                                                                           |
+------------------+----------------------------------------------------------------------------------+
#Check remotely whether the images pool contains this image ID; to use the rbd command on a controller node, ceph-common must be installed
[root@controller161 ~]# rbd -p images --id glance -k /etc/ceph/ceph.client.glance.keyring ls
f6da37cd-449a-436c-b321-e0c1c06761d8
[root@ceph131 ceph]# ceph df
RAW STORAGE:
    CLASS     SIZE       AVAIL      USED        RAW USED     %RAW USED
    hdd       96 GiB     93 GiB     169 MiB      3.2 GiB          3.30
    TOTAL     96 GiB     93 GiB     169 MiB      3.2 GiB          3.30

POOLS:
    POOL                    ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    cephfs_data              1         0 B           0         0 B         0        29 GiB
    cephfs_metadata          2     8.9 KiB          22     1.5 MiB         0        29 GiB
    rbd_storage              3      33 MiB          20     100 MiB      0.11        29 GiB
    .rgw.root                4     1.2 KiB           4     768 KiB         0        29 GiB
    default.rgw.control      5         0 B           8         0 B         0        29 GiB
    default.rgw.meta         6       369 B           2     384 KiB         0        29 GiB
    default.rgw.log          7         0 B         207         0 B         0        29 GiB
    volumes                 10         0 B           0         0 B         0        29 GiB
    vms                     11         0 B           0         0 B         0        29 GiB
    images                  12      12 MiB           8      37 MiB      0.04        29 GiB
[root@ceph131 ceph]# rbd ls images
f6da37cd-449a-436c-b321-e0c1c06761d8

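The Glance RBD store keeps each image as an RBD image in the images pool with a protected snapshot named "snap"; with show_image_direct_url enabled, Cinder and Nova can later clone from that snapshot copy-on-write instead of copying the whole image. A hedged way to inspect this for the image uploaded above:

rbd -p images --id glance -k /etc/ceph/ceph.client.glance.keyring info f6da37cd-449a-436c-b321-e0c1c06761d8
rbd -p images --id glance -k /etc/ceph/ceph.client.glance.keyring snap ls f6da37cd-449a-436c-b321-e0c1c06761d8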

#Checking the Ceph cluster shows a HEALTH_WARN, because the newly created pools have no application type defined yet; it can be set to 'cephfs', 'rbd', 'rgw', etc.

[root@ceph131 ceph]# ceph -s
  cluster:
    id:     76235629-6feb-4f0c-a106-4be33d485535
    health: HEALTH_WARN
            application not enabled on 1 pool(s)

  services:
    mon: 3 daemons, quorum ceph131,ceph132,ceph133 (age 3d)
    mgr: ceph131(active, since 4d), standbys: ceph132, ceph133
    mds: cephfs_storage:1 {0=ceph132=up:active}
    osd: 3 osds: 3 up (since 3d), 3 in (since 4d)
    rgw: 1 daemon active (ceph131)

  task status:
    scrub status:
        mds.ceph132: idle

  data:
    pools:   10 pools, 224 pgs
    objects: 271 objects, 51 MiB
    usage:   3.2 GiB used, 93 GiB / 96 GiB avail
    pgs:     224 active+clean

  io:
    client:   4.1 KiB/s rd, 0 B/s wr, 4 op/s rd, 2 op/s wr
[root@ceph131 ceph]# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
    application not enabled on pool 'images'
    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.

#Fix: set the pool application type to rbd

[root@ceph131 ceph]# ceph osd pool application enable images rbd
enabled application 'rbd' on pool 'images'
[root@ceph131 ceph]# ceph osd pool application enable volumes rbd
enabled application 'rbd' on pool 'volumes'
[root@ceph131 ceph]# ceph osd pool application enable vms rbd
enabled application 'rbd' on pool 'vms'

#Verify as follows:

[root@ceph131 ceph]# ceph health detail
HEALTH_OK
[root@ceph131 ceph]# ceph osd pool application get images
{
    "rbd": {}
}

5 Integrating Cinder with Ceph

5.1 Configure cinder.conf

#Cinder has a pluggable driver architecture and supports several storage back ends at the same time; on the nodes running cinder-volume it is enough to configure the Ceph RBD driver in cinder.conf, using compute163 as an example #vim /etc/cinder/cinder.conf

# Use Ceph as the back-end storage
[DEFAULT]
#enabled_backends = lvm #comment out this line
enabled_backends = ceph
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
#make sure to use your own UUID here
rbd_secret_uuid = cb26bb6c-2a84-45c2-8187-fa94b81dd53d
volume_backend_name = ceph

#After changing the configuration, restart the service

[root@compute163 ceph]# systemctl restart openstack-cinder-volume.service

#Verify

[root@controller160 ~]# openstack volume service list
+------------------+-----------------+------+---------+-------+----------------------------+
| Binary           | Host            | Zone | Status  | State | Updated At                 |
+------------------+-----------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller160   | nova | enabled | up    | 2020-07-06T16:10:36.000000 |
| cinder-scheduler | controller162   | nova | enabled | up    | 2020-07-06T16:10:39.000000 |
| cinder-scheduler | controller161   | nova | enabled | up    | 2020-07-06T16:10:33.000000 |
| cinder-volume    | compute163@lvm  | nova | enabled | down  | 2020-07-06T16:07:09.000000 |
| cinder-volume    | compute164@lvm  | nova | enabled | down  | 2020-07-06T16:07:04.000000 |
| cinder-volume    | compute164@ceph | nova | enabled | up    | 2020-07-06T16:10:38.000000 |
| cinder-volume    | compute163@ceph | nova | enabled | up    | 2020-07-06T16:10:39.000000 |
+------------------+-----------------+------+---------+-------+----------------------------+

5.2 Create a volume

#Set up a volume type: on a controller node create a type for Cinder's Ceph back end; when multiple back ends are configured, types are used to tell them apart. They can be listed with "cinder type-list"

[root@controller160 ~]# cinder type-create ceph
+--------------------------------------+------+-------------+-----------+
| ID                                   | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| bc90d094-a76b-409f-affa-f8329d2b54d5 | ceph | -           | True      |
+--------------------------------------+------+-------------+-----------+

#Set an extra spec on the ceph type: key "volume_backend_name", value "ceph"

[root@controller160 ~]# cinder type-key ceph set volume_backend_name=ceph
[root@controller160 ~]# cinder extra-specs-list
+--------------------------------------+-------------+---------------------------------+
| ID                                   | Name        | extra_specs                     |
+--------------------------------------+-------------+---------------------------------+
| 0aacd847-535a-447e-914c-895289bf1a19 | __DEFAULT__ | {}                              |
| bc90d094-a76b-409f-affa-f8329d2b54d5 | ceph        | {'volume_backend_name': 'ceph'} |
+--------------------------------------+-------------+---------------------------------+

#Create a volume

[root@controller160 ~]# cinder create --volume-type ceph --name ceph-volume 1
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2020-07-06T16:14:36.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| group_id                       | None                                 |
| id                             | 63cb956c-2e4f-434e-a21a-9280530f737e |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | ceph-volume                          |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | d3dda47e8c354d86b17085f9e382948b     |
| provider_id                    | None                                 |
| replication_status             | None                                 |
| service_uuid                   | None                                 |
| shared_targets                 | True                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | ec8c820dba1046f6a9d940201cf8cb06     |
| volume_type                    | ceph                                 |
+--------------------------------+--------------------------------------+

#Verify

[root@controller160 ~]# openstack volume list
+--------------------------------------+-------------+-----------+------+-------------+
| ID                                   | Name        | Status    | Size | Attached to |
+--------------------------------------+-------------+-----------+------+-------------+
| 63cb956c-2e4f-434e-a21a-9280530f737e | ceph-volume | available |    1 |             |
| 9575c54a-d44e-46dd-9187-0c464c512c01 | test1       | available |    2 |             |
+--------------------------------------+-------------+-----------+------+-------------+
[root@ceph131 ceph]# rbd ls volumes
volume-63cb956c-2e4f-434e-a21a-9280530f737e

6 Integrating Nova with Ceph

6.1 Configure ceph.conf

#To boot virtual machines from Ceph RBD, Ceph must be configured as Nova's ephemeral back end; #enabling the rbd cache feature in the compute nodes' configuration is recommended; #to make troubleshooting easier, configure the admin socket parameter, so that every VM using Ceph RBD gets its own socket, which helps with performance analysis and debugging; #this only touches the [client] and [client.cinder] sections of ceph.conf on all compute nodes, using compute163 as an example

[root@compute163 ~]# vim /etc/ceph/ceph.conf
[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20

[client.cinder]
keyring = /etc/ceph/ceph.client.cinder.keyring

# Create the socket and log directories specified in ceph.conf, and change their ownership
[root@compute163 ~]# mkdir -p /var/run/ceph/guests/ /var/log/qemu/
[root@compute163 ~]# chown qemu:libvirt /var/run/ceph/guests/ /var/log/qemu/
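Once an RBD-backed instance is running on the node, the admin socket configured above appears under /var/run/ceph/guests/ and can be queried for the live client configuration; a sketch, assuming a socket filename like the one shown (the real name contains the qemu PID and will differ per VM):

ls /var/run/ceph/guests/
ceph --admin-daemon /var/run/ceph/guests/ceph-client.cinder.596406.94105140863224.asok config show | grep rbd_cache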

6.2 Configure nova.conf

#On all compute nodes, configure Nova to use the vms pool of the Ceph cluster as its back end, using compute163 as an example

[root@compute163 ~]# vim /etc/nova/nova.conf

[DEFAULT]
vif_plugging_is_fatal = False  
vif_plugging_timeout = 0 
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = cb26bb6c-2a84-45c2-8187-fa94b81dd53d #must match the UUID used earlier
disk_cachemodes="network=writeback"
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
# Disable file injection
inject_password = false
inject_key = false
inject_partition = -2
# discard support for the instance's ephemeral root disk; with "unmap", space is released as soon as a scsi-type disk is deleted
hw_disk_discard = unmap

#After changing the configuration, restart the compute services

[root@compute163 ~]# systemctl restart libvirtd.service openstack-nova-compute.service
[root@compute163 ~]# systemctl status libvirtd.service openstack-nova-compute.service 

6.3 Configure live migration

6.3.1 Modify /etc/libvirt/libvirtd.conf

#Do this on all compute nodes, using compute163 as an example; #the line numbers of the changes in libvirtd.conf are given below

[root@compute163 ~]# egrep -vn "^$|^#" /etc/libvirt/libvirtd.conf 
# Uncomment the following three lines
22:listen_tls = 0
33:listen_tcp = 1
45:tcp_port = "16509"
# Uncomment and set the listen address
55:listen_addr = "172.16.1.163"
# Uncomment and disable authentication
158:auth_tcp = "none" 

6.3.2 Modify /etc/sysconfig/libvirtd

#Do this on all compute nodes, using compute163 as an example; #the line number of the change in the libvirtd file is given below

[root@compute163 ~]# egrep -vn "^$|^#" /etc/sysconfig/libvirtd
# Uncomment
9:LIBVIRTD_ARGS="--listen" 

6.3.3 Set up passwordless access between compute nodes

#Passwordless SSH must be set up between all compute nodes; it is required for migration!

[root@compute163 ~]# usermod -s /bin/bash nova
[root@compute163 ~]# passwd nova
Changing password for user nova.
New password:
BAD PASSWORD: The password contains the user name in some form
Retype new password:
passwd: all authentication tokens updated successfully.
[root@compute163 ~]# su - nova
[nova@compute163 ~]$ ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /var/lib/nova/.ssh/id_rsa.
Your public key has been saved in /var/lib/nova/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:bnGCcG6eRvSG3Bb58eu+sXwEAnb72hUHmlNja2bQLBU nova@compute163
The key's randomart image is: (randomart output omitted)
[nova@compute163 ~]$ ssh-copy-id nova@compute164
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/var/lib/nova/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
nova@compute164's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'nova@compute164'"
and check to make sure that only the key(s) you wanted were added.

[nova@compute163 ~]$ ssh nova@172.16.1.164
Activate the web console with: systemctl enable --now cockpit.socket

Last failed login: Wed Jul  8 00:24:02 CST 2020 from 172.16.1.163 on ssh:notty
There were 5 failed login attempts since the last successful login.
[nova@compute164 ~]$ 

6.3.4 Configure iptables - ==iptables was already disabled earlier, so this step is not needed==

#During live migration, the source compute node connects to TCP port 16509 on the destination compute node; this can be tested with "virsh -c qemu+tcp://{node_ip or node_name}/system"; #around the migration, the instance being moved uses TCP ports 49152-49161 on the source and destination compute nodes for temporary communication; #because the hypervisors already have iptables rules in place, never casually restart the iptables service at this point; add rules by inserting them instead; #likewise, persist the rules by editing the configuration file rather than running "iptables save"; #do this on all compute nodes, using compute163 as an example

[root@compute163 ~]# iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 16509 -j ACCEPT
[root@compute163 ~]# iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 49152:49161 -j ACCEPT 

6.3.5 Restart services

systemctl mask libvirtd.socket libvirtd-ro.socket \
   libvirtd-admin.socket libvirtd-tls.socket libvirtd-tcp.socket
service libvirtd restart
systemctl restart openstack-nova-compute.service

#Verify

[root@compute163 ~]# netstat -lantp|grep libvirtd
tcp        0      0 172.16.1.163:16509      0.0.0.0:*               LISTEN      582582/libvirtd
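As mentioned in 6.3.4, the migration path can also be checked end to end by connecting to the peer's libvirtd over TCP; a quick hedged test from compute163 (it should list the domains on compute164 without prompting for authentication):

virsh -c qemu+tcp://compute164/system list --all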

6.4 Verify the integration

6.4.1 Create a Ceph-backed bootable volume

#When Nova boots an instance from RBD, the image must be in raw format; otherwise both glance-api and cinder report errors when the VM starts; #first convert the image format from *.img to *.raw #download cirros-0.4.0-x86_64-disk.img from the internet yourself

[root@controller160 ~]# qemu-img convert -f qcow2 -O raw ~/cirros-0.4.0-x86_64-disk.img ~/cirros-0.4.0-x86_64-disk.raw

# Create the raw-format image
[root@controller160 ~]# openstack image create "cirros-raw" \
> --file ~/cirros-0.4.0-x86_64-disk.raw \
> --disk-format raw --container-format bare \
> --public
+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                   |
+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | ba3cd24377dde5dfdd58728894004abb                                                                                                                                   |
| container_format | bare                                                                                                                                   |
| created_at       | 2020-07-07T02:13:06Z                                                                                                                                   |
| disk_format      | raw                                                                                                                                   |
| file             | /v2/images/459f5ddd-c094-4b0f-86e5-f55baa33595c/file                                                                                                                                   |
| id               | 459f5ddd-c094-4b0f-86e5-f55baa33595c                                                                                                                                   |
| min_disk         | 0                                                                                                                                   |
| min_ram          | 0                                                                                                                                   |
| name             | cirros-raw                                                                                                                                   |
| owner            | d3dda47e8c354d86b17085f9e382948b                                                                                                                                   |
| properties       | direct_url='rbd://76235629-6feb-4f0c-a106-4be33d485535/images/459f5ddd-c094-4b0f-86e5-f55baa33595c/snap', os_hash_algo='sha512', os_hash_value='b795f047a1b10ba0b7c95b43b2a481a59289dc4cf2e49845e60b194a911819d3ada03767bbba4143b44c93fd7f66c96c5a621e28dff51d1196dae64974ce240e', os_hidden='False', owner_specified.openstack.md5='ba3cd24377dde5dfdd58728894004abb', owner_specified.openstack.object='images/cirros-raw', owner_specified.openstack.sha256='87ddf8eea6504b5eb849e418a568c4985d3cea59b5a5d069e1dc644de676b4ec', self='/v2/images/459f5ddd-c094-4b0f-86e5-f55baa33595c' |
| protected        | False                                                                                                                                   |
| schema           | /v2/schemas/image                                                                                                                                   |
| size             | 46137344                                                                                                                                   |
| status           | active                                                                                                                                   |
| tags             |                                                                                                                                   |
| updated_at       | 2020-07-07T02:13:42Z                                                                                                                                   |
| visibility       | public                                                                                                                                   |
+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+


#Create a bootable volume from the new image

[root@controller160 ~]# cinder create --image-id 459f5ddd-c094-4b0f-86e5-f55baa33595c --volume-type ceph --name ceph-boot 1
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2020-07-07T02:17:42.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| group_id                       | None                                 |
| id                             | 46a45564-e148-4f85-911b-a4542bdbd4f0 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | ceph-boot                            |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | d3dda47e8c354d86b17085f9e382948b     |
| provider_id                    | None                                 |
| replication_status             | None                                 |
| service_uuid                   | None                                 |
| shared_targets                 | True                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | ec8c820dba1046f6a9d940201cf8cb06     |
| volume_type                    | ceph                                 |
+--------------------------------+--------------------------------------+

#Check the newly created bootable volume; creation takes a while, so the status only becomes available after some time
[root@controller160 ~]# cinder list
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name        | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| 62a21adb-0a22-4439-bf0b-121442790515 | available | ceph-boot   | 1    | ceph        | true     |             |
| 63cb956c-2e4f-434e-a21a-9280530f737e | available | ceph-volume | 1    | ceph        | false    |             |
| 9575c54a-d44e-46dd-9187-0c464c512c01 | available | test1       | 2    | __DEFAULT__ | false    |             |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+

#Create a new instance from a volume in the Ceph-backed volumes pool; #"--boot-volume" specifies a volume with the "bootable" attribute; once started, the virtual machine runs on that volume

#Create a flavor
[root@controller160 ~]# openstack flavor create --id 1 --vcpus 1 --ram 256 --disk 1 m1.nano
+----------------------------+---------+
| Field                      | Value   |
+----------------------------+---------+
| OS-FLV-DISABLED:disabled   | False   |
| OS-FLV-EXT-DATA:ephemeral  | 0       |
| disk                       | 1       |
| id                         | 1       |
| name                       | m1.nano |
| os-flavor-access:is_public | True    |
| properties                 |         |
| ram                        | 256     |
| rxtx_factor                | 1.0     |
| swap                       |         |
| vcpus                      | 1       |
+----------------------------+---------+
#Security group rules
[root@controller160 ~]# openstack security group rule create --proto icmp default
[root@controller160 ~]# openstack security group rule create --proto tcp --dst-port 22 'default'
#Create the virtual network
openstack network create --share --external \
--provider-physical-network provider \
--provider-network-type flat provider-eth1

#Create the subnet; adjust to your environment
openstack subnet create --network provider-eth1 \
--allocation-pool start=172.16.2.220,end=172.16.2.229 \
--dns-nameserver 114.114.114.114 --gateway 172.16.2.254 --subnet-range 172.16.2.0/24 \
172.16.2.0/24

#Available flavors
openstack flavor list

#Available images
openstack image list

#Available security groups
openstack security group list

#Available networks
openstack network list
#Create the instance (this can also be done through the web UI)
[root@controller160 ~]# nova boot --flavor m1.nano \
 --boot-volume d3770a82-068c-49ad-a9b7-ef863bb61a5b \
 --nic net-id=53b98327-3a47-4316-be56-cba37e8f20f2 \
 --security-group default \
 ceph-boot02
 [root@controller160 ~]# nova show c592ca1a-dbce-443a-9222-c7e47e245725
+--------------------------------------+----------------------------------------------------------------------------------+
| Property                             | Value                                                                            |
+--------------------------------------+----------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                                             |
| OS-EXT-AZ:availability_zone          | nova                                                                             |
| OS-EXT-SRV-ATTR:host                 | compute163                                                                       |
| OS-EXT-SRV-ATTR:hostname             | ceph-boot02                                                                      |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute163                                                                       |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000025                                                                |
| OS-EXT-SRV-ATTR:kernel_id            |                                                                                  |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                                                |
| OS-EXT-SRV-ATTR:ramdisk_id           |                                                                                  |
| OS-EXT-SRV-ATTR:reservation_id       | r-ty5v9w8n                                                                       |
| OS-EXT-SRV-ATTR:root_device_name     | /dev/vda                                                                         |
| OS-EXT-STS:power_state               | 1                                                                                |
| OS-EXT-STS:task_state                | -                                                                                |
| OS-EXT-STS:vm_state                  | active                                                                           |
| OS-SRV-USG:launched_at               | 2020-07-07T10:55:39.000000                                                       |
| OS-SRV-USG:terminated_at             | -                                                                                |
| accessIPv4                           |                                                                                  |
| accessIPv6                           |                                                                                  |
| config_drive                         |                                                                                  |
| created                              | 2020-07-07T10:53:30Z                                                             |
| description                          | -                                                                                |
| flavor:disk                          | 1                                                                                |
| flavor:ephemeral                     | 0                                                                                |
| flavor:extra_specs                   | {}                                                                               |
| flavor:original_name                 | m1.nano                                                                          |
| flavor:ram                           | 256                                                                              |
| flavor:swap                          | 0                                                                                |
| flavor:vcpus                         | 1                                                                                |
| hostId                               | 308132ea4792b277acfae8d3c5d88439d3d5d6ba43d8b06395581d77                         |
| host_status                          | UP                                                                               |
| id                                   | c592ca1a-dbce-443a-9222-c7e47e245725                                             |
| image                                | Attempt to boot from volume - no image supplied                                  |
| key_name                             | -                                                                                |
| locked                               | False                                                                            |
| locked_reason                        | -                                                                                |
| metadata                             | {}                                                                               |
| name                                 | ceph-boot02                                                                      |
| os-extended-volumes:volumes_attached | [{"id": "d3770a82-068c-49ad-a9b7-ef863bb61a5b", "delete_on_termination": false}] |
| progress                             | 0                                                                                |
| provider-eth1 network                | 172.16.2.228                                                                     |
| security_groups                      | default                                                                          |
| server_groups                        | []                                                                               |
| status                               | ACTIVE                                                                           |
| tags                                 | []                                                                               |
| tenant_id                            | d3dda47e8c354d86b17085f9e382948b                                                 |
| trusted_image_certificates           | -                                                                                |
| updated                              | 2020-07-07T10:55:40Z                                                             |
| user_id                              | ec8c820dba1046f6a9d940201cf8cb06                                                 |
+--------------------------------------+----------------------------------------------------------------------------------+


6.4.2 Boot a virtual machine from Ceph RBD

#--nic: net-id is the network ID, not the subnet-id;
#the final argument "cirros-cephrbd-instance1" is the instance name
[root@controller160 ~]# openstack server create --flavor m1.nano --image cirros-raw --nic net-id=53b98327-3a47-4316-be56-cba37e8f20f2 --security-group default cirros-cephrbd-instance1
+-------------------------------------+---------------------------------------------------+
| Field                               | Value                                             |
+-------------------------------------+---------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                            |
| OS-EXT-AZ:availability_zone         |                                                   |
| OS-EXT-SRV-ATTR:host                | None                                              |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                              |
| OS-EXT-SRV-ATTR:instance_name       |                                                   |
| OS-EXT-STS:power_state              | NOSTATE                                           |
| OS-EXT-STS:task_state               | scheduling                                        |
| OS-EXT-STS:vm_state                 | building                                          |
| OS-SRV-USG:launched_at              | None                                              |
| OS-SRV-USG:terminated_at            | None                                              |
| accessIPv4                          |                                                   |
| accessIPv6                          |                                                   |
| addresses                           |                                                   |
| adminPass                           | CRiNuZoK6ftt                                      |
| config_drive                        |                                                   |
| created                             | 2020-07-07T15:13:08Z                              |
| flavor                              | m1.nano (1)                                       |
| hostId                              |                                                   |
| id                                  | 6ea79ec0-1ec6-47ff-b185-233c565b1fab              |
| image                               | cirros-raw (459f5ddd-c094-4b0f-86e5-f55baa33595c) |
| key_name                            | None                                              |
| name                                | cirros-cephrbd-instance1                          |
| progress                            | 0                                                 |
| project_id                          | d3dda47e8c354d86b17085f9e382948b                  |
| properties                          |                                                   |
| security_groups                     | name='eea8a6b4-2b6d-4f11-bfe8-12b56bafe36c'       |
| status                              | BUILD                                             |
| updated                             | 2020-07-07T15:13:10Z                              |
| user_id                             | ec8c820dba1046f6a9d940201cf8cb06                  |
| volumes_attached                    |                                                   |
+-------------------------------------+---------------------------------------------------+
#Query the created instances
[root@controller160 ~]# nova list
+--------------------------------------+--------------------------+--------+------------+-------------+----------------------------+
| ID                                   | Name                     | Status | Task State | Power State | Networks                   |
+--------------------------------------+--------------------------+--------+------------+-------------+----------------------------+
| c592ca1a-dbce-443a-9222-c7e47e245725 | ceph-boot02              | ACTIVE | -          | Running     | provider-eth1=172.16.2.228 |
| 6ea79ec0-1ec6-47ff-b185-233c565b1fab | cirros-cephrbd-instance1 | ACTIVE | -          | Running     | provider-eth1=172.16.2.226 |
+--------------------------------------+--------------------------+--------+------------+-------------+----------------------------+

6.4.3 Live-migrate a VM booted from RBD

#"nova show 6ea79ec0-1ec6-47ff-b185-233c565b1fab" shows that the RBD-booted instance is on compute163 before the migration;
#this can also be verified with "nova hypervisor-servers compute163";
[root@controller160 ~]# nova live-migration cirros-cephrbd-instance1 compute164

# Status can be checked while the migration is in progress
[root@controller160 ~]# nova list
+--------------------------------------+--------------------------+-----------+------------+-------------+----------------------------+
| ID                                   | Name                     | Status    | Task State | Power State | Networks                   |
+--------------------------------------+--------------------------+-----------+------------+-------------+----------------------------+
| c592ca1a-dbce-443a-9222-c7e47e245725 | ceph-boot02              | ACTIVE    | -          | Running     | provider-eth1=172.16.2.228 |
| 6ea79ec0-1ec6-47ff-b185-233c565b1fab | cirros-cephrbd-instance1 | MIGRATING | migrating  | Running     | provider-eth1=172.16.2.226 |
+--------------------------------------+--------------------------+-----------+------------+-------------+----------------------------+
# After the migration completes, check which node the instance is on;
# or use "nova show 6ea79ec0-1ec6-47ff-b185-233c565b1fab" and check "hypervisor_hostname"
[root@controller160 ~]# nova hypervisor-servers compute163
[root@controller160 ~]# nova hypervisor-servers compute164

X. Problems encountered along the way

eg1. 2020-07-04 00:39:56.394 671959 ERROR glance.common.wsgi rados.ObjectNotFound: [errno 2] error calling conf_read_file
Cause: the ceph.conf configuration file could not be found.
Fix: copy ceph.conf from the Ceph cluster to /etc/ceph/ on each node.

eg2. 2020-07-04 01:01:27.736 1882718 ERROR glance_store._drivers.rbd [req-fd768a6d-e7e2-476b-b1d3-d405d7a560f2 ec8c820dba1046f6a9d940201cf8cb06 d3dda47e8c354d86b17085f9e382948b - default default] Error connecting to ceph cluster.: rados.ObjectNotFound: [errno 2] error connecting to the cluster

eg3. libvirtd[580770]: --listen parameter not permitted with systemd activation sockets, see 'man libvirtd' for further guidance
Cause: libvirtd uses systemd socket activation by default; to fall back to the traditional mode, all of the systemd sockets must be masked.
Fix:
systemctl mask libvirtd.socket libvirtd-ro.socket \
   libvirtd-admin.socket libvirtd-tls.socket libvirtd-tcp.socket
Then restart with the following command:
service libvirtd restart

eg4. AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/guests/ceph-client.cinder.596406.94105140863224.asok': (13) Permission denied
Cause: the permissions on the /var/run/ceph directory are the problem; the VM instances started by qemu run as user and group qemu, but /var/run/ceph is owned by ceph:ceph with mode 770.
Fix: change the permissions on /var/run/ceph to 777; it is also best to set the permissions on /var/log/qemu/ to 777.
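A minimal sketch of that fix on an affected compute node, using the paths from the error above:

chmod 777 /var/run/ceph
chmod 777 /var/log/qemu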

eg5. Error on AMQP connection <0.9284.0> (172.16.1.162:55008 -> 172.16.1.162:5672, state: starting):
AMQPLAIN login refused: user 'guest' can only connect via localhost
Cause: since RabbitMQ 3.x the guest user is no longer allowed to log in remotely.
Fix: add the following line to /etc/rabbitmq/rabbitmq.config (note the trailing period!), then restart the rabbitmq service:
[{rabbit, [{loopback_users, []}]}].

eg6. Failed to allocate network(s): nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed
Fix: add the following two lines to /etc/nova/nova.conf on the compute nodes, then restart:
vif_plugging_is_fatal = False
vif_plugging_timeout = 0

eg7. Resize error: not able to execute ssh command: Unexpected error while running command.
Cause: OpenStack migration is based on SSH, so passwordless access must be configured between all compute nodes.
Fix: see 6.3.3.