08. Storage: Cinder → 5. Scenario Learning → 12. Ceph Volume Provider → 1. Configuration

  1. Configure Ceph (controller node). Only the controller node's configuration files are changed here; for the full installation procedure see 04. Setting Up the Lab Environment → 2. Setting Up the Environment (devstack). (Before configuring the controller node, remember to run unstack.sh on the compute node so that all compute-node services are stopped and do not interfere with the controller node.)
    1. Add the ceph plugin to local.conf on the controller node:
      stack@controller:~/devstack$ vim local.conf
      ...
      #ceph
      enable_plugin devstack-plugin-ceph https://opendev.org/openstack/devstack-plugin-ceph
      # use TryStack git mirror

    2. Run stack.sh (a quick health check of the resulting Ceph cluster is sketched at the end of this subsection)
    3. If the error "umount: /var/lib/ceph/drives/sdb1: mountpoint not found" appears during installation, resolve it as follows:
      1. This problem usually occurs because a "Could not find a version that satisfies the requirement..." error appeared earlier in the installation; see 04. Setting Up the Lab Environment → 2. Setting Up the Environment (devstack)
        1. After adding a new Python package source and re-running stack, that problem was resolved
        2. A new problem then appeared: "umount: /var/lib/ceph/drives/sdb1: mountpoint not found"
      2. In that case run unstack first and then stack again. Because the added Python source fixed the first problem, it no longer occurs, and the second problem does not appear either.
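    Once stack.sh completes, it can be worth confirming that the Ceph cluster brought up by the plugin is healthy before moving on to the compute node. A minimal check, as a sketch; the pool names volumes, images and vms are an assumption based on what the plugin normally creates, and the exact output will vary per deployment:
      root@controller:~# ceph -s                      # overall cluster health, monitors, OSDs
      root@controller:~# ceph osd lspools             # expected pools (assumption): volumes, images, vms
      root@controller:~# ceph auth get client.cinder  # the credential that cinder/nova will use later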
  2. Configure the compute node
    1. Reinstall the devstack environment on the compute node, then discover the compute node from the controller: root@controller:~# /opt/stack/devstack/tools/discover_hosts.sh
    2. Install the Ceph client: root@compute:~# apt-get install ceph-common
    3. Authorization setup (a verification sketch follows at the end of this step):
      1. client.cinder keyring: root@controller:~# ceph auth get-or-create client.cinder | ssh root@compute tee /etc/ceph/ceph.client.cinder.keyring
        1. tee usage: tee reads from standard input and writes the data to a file (while also echoing it back to standard output)
          root@cmp-2:~# tee zhao
          36
          36
          q
          q
          root@cmp-2:~# cat zhao
          36
          q
      2. libvirt key: root@controller:~# ceph auth get-key client.cinder | ssh root@compute tee /etc/ceph/client.cinder.key (the key value is the same as the one in the keyring above)
        1. Configuration file:
          root@compute:~# vim /etc/ceph/secret.xml
          <secret ephemeral='no' private='no'>
            <uuid>df0d0b60-047a-45f5-b5be-f7d2b4beadee</uuid>
            <usage type='ceph'>
              <name>client.cinder secret</name>
            </usage>
          </secret>
          The uuid here is the one found in nova.conf on the controller node
        2. Define or modify the secret from the XML file:
          1. root@compute:~# virsh secret-define --file /etc/ceph/secret.xml
        3. Set the secret value:
          1. root@compute:~# virsh secret-set-value --secret df0d0b60-047a-45f5-b5be-f7d2b4beadee --base64 $(cat /etc/ceph/client.cinder.key)
        4. View the secret: virsh secret-list (the secret is the same on all compute nodes)
        5. Delete the temporary files: rm /etc/ceph/client.cinder.key && rm /etc/ceph/secret.xml
      3. client.admin keyring
        1. root@controller:~# scp /etc/ceph/ceph.client.admin.keyring compute:/etc/ceph/
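      After step 3, the pieces that the later steps depend on can be spot-checked. A minimal sketch; the uuid is the one used throughout this document, and grep is only used here to confirm it matches what nova.conf on the controller expects:
        root@controller:~# grep rbd_secret_uuid /etc/nova/nova.conf
        root@compute:~# ls /etc/ceph/                   # ceph.client.admin.keyring and ceph.client.cinder.keyring should be present
        root@compute:~# virsh secret-list               # the secret defined from secret.xml
        root@compute:~# virsh secret-get-value --secret df0d0b60-047a-45f5-b5be-f7d2b4beadee   # should print the base64 cinder key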
    4. Configuration files:
      On the compute node:
      root@compute:~# vim /etc/ceph/ceph.conf
      [global]
      rbd default features = 1
      osd pool default size = 1
      osd journal size = 100
      osd crush chooseleaf type = 0
      filestore_xattr_use_omap = true
      auth_client_required = cephx
      auth_service_required = cephx
      auth_cluster_required = cephx
      mon_host = 172.16.1.17
      mon_initial_members = controller
      fsid = eab37548-7aef-466a-861c-3757a12ce9e8
      
      root@compute:~# vim /etc/nova/nova-cpu.conf
      [libvirt]
      images_rbd_ceph_conf = /etc/ceph/ceph.conf
      images_rbd_pool = vms
      images_type = rbd
      disk_cachemodes = network=writeback
      inject_partition = -2
      inject_key = false
      rbd_secret_uuid = df0d0b60-047a-45f5-b5be-f7d2b4beadee
      rbd_user = cinder
      live_migration_uri = qemu+ssh://stack@%s/system
      cpu_mode = none
      virt_type = kvm
      Modify nova-cpu.conf, not nova.conf (a quick connectivity check using the copied keyring is sketched below)
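      Before restarting the services in step 5, the ceph.conf and client.cinder keyring that were copied to the compute node can be exercised directly with the rbd CLI; if this fails, nova-compute will not be able to reach the cluster either. A minimal sketch, assuming the default pool names vms and volumes:
        root@compute:~# rbd --id cinder -p vms ls       # list images in the nova pool as client.cinder
        root@compute:~# rbd --id cinder -p volumes ls   # list images in the cinder pool as client.cinder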
    5. Restart the compute services
      1. root@compute:~# systemctl restart libvirtd.service
      2. root@compute:~# systemctl restart devstack@n-cpu.service
    6. Verification
      1. Create a VM, confirm that it is created on the compute node (virsh list), and use rbd ls vms to check the VM's image file (a fuller sketch follows below)
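      A slightly fuller version of that check, as a sketch; the flavor, image and network names are assumptions based on devstack defaults, and the nova RBD backend is expected to name the root disk <instance-uuid>_disk:
        root@controller:~# openstack server create --flavor m1.tiny --image cirros-0.5.1-x86_64-disk --network private vm-ceph-test
        root@compute:~# virsh list                       # the instance should be running on this node
        root@compute:~# rbd -p vms ls                    # its root disk should be an RBD image, not a local file
        root@compute:~# rbd -p vms info <instance-uuid>_disk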
  3. Configuration comparison
    devstack environment built without the ceph plugin vs. devstack environment built with the ceph plugin:

    glance-api.conf
      Without the ceph plugin:
        [glance_store]
        filesystem_store_datadir = /opt/stack/data/glance/images/
      With the ceph plugin:
        [glance_store]
        rbd_store_pool = images
        rbd_store_user = glance
        rbd_store_ceph_conf = /etc/ceph/ceph.conf
        stores = file, http, rbd
        default_store = rbd
        filesystem_store_datadir = /opt/stack/data/glance/images/

    nova.conf or nova-cpu.conf
      Without the ceph plugin:
        [libvirt]
        live_migration_uri = qemu+ssh://stack@%s/system
        cpu_mode = none
        virt_type = kvm
      With the ceph plugin:
        [libvirt]
        images_rbd_ceph_conf = /etc/ceph/ceph.conf
        images_rbd_pool = vms
        images_type = rbd
        disk_cachemodes = network=writeback
        inject_partition = -2
        inject_key = false
        rbd_secret_uuid = df0d0b60-047a-45f5-b5be-f7d2b4beadee
        rbd_user = cinder
        live_migration_uri = qemu+ssh://stack@%s/system
        cpu_mode = none
        virt_type = kvm

    cinder.conf (a sketch verifying that the ceph backend is actually in use follows at the end of this section)
      Without the ceph plugin:
        [lvmdriver-1]
        image_volume_cache_enabled = True
        volume_clear = zero
        lvm_type = auto
        iscsi_helper = tgtadm
        volume_group = stack-volumes-lvmdriver-1
        volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
        volume_backend_name = lvmdriver-1
      With the ceph plugin:
        [ceph]
        image_volume_cache_enabled = True
        volume_clear = zero
        rbd_max_clone_depth = 5
        rbd_flatten_volume_from_snapshot = False
        rbd_secret_uuid = df0d0b60-047a-45f5-b5be-f7d2b4beadee
        rbd_user = cinder
        rbd_pool = volumes
        rbd_ceph_conf = /etc/ceph/ceph.conf
        volume_driver = cinder.volume.drivers.rbd.RBDDriver
        volume_backend_name = ceph

    ceph.conf (only present in the ceph-plugin environment; as shown on the controller node after configuration)
        [global]
        rbd default features = 1
        osd pool default size = 1
        osd journal size = 100
        osd crush chooseleaf type = 0
        filestore_xattr_use_omap = true
        auth_client_required = cephx
        auth_service_required = cephx
        auth_cluster_required = cephx
        mon_host = 172.16.1.17
        mon_initial_members = controller
        fsid = eab37548-7aef-466a-861c-3757a12ce9e8
    After the initial monitor is created on the Ceph cluster's admin node (the node where ceph-deploy is used to build the Ceph storage cluster), several keyring files are generated; these keyring files and the ceph.conf configured on that node must be distributed to all the other nodes (the other Ceph nodes, compute nodes, controller node, ...)
    The fsid is generated automatically
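    To see the [ceph] cinder backend from the comparison above actually being used, create a volume and look it up in the volumes pool. A minimal sketch; cinder is expected to name RBD images volume-<volume-uuid>, the backend host to appear as <host>@ceph, and the volume name used here is arbitrary:
      root@controller:~# openstack volume service list           # cinder-volume should be up on a <host>@ceph backend
      root@controller:~# openstack volume create --size 1 vol-ceph-test
      root@controller:~# rbd -p volumes ls                        # expect an entry like volume-<uuid>
      root@controller:~# rbd -p volumes info volume-<uuid>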

  4. Ceph log files
    root@controller:~# ll /var/log/ceph/    
    total 2856
    drwxrws--T  2 ceph ceph      4096 Jun 25 17:48 ./
    drwxrwxr-x 13 root syslog    4096 Jun 25 17:46 ../
    -rw-------  1 ceph ceph     35669 Jun 26 16:38 ceph.audit.log
    -rw-------  1 ceph ceph      4504 Jun 26 15:01 ceph.log
    -rw-r--r--  1 ceph ceph   2719445 Jun 26 17:06 ceph-mgr.x.log
    -rw-r--r--  1 root ceph     32990 Jun 25 19:31 ceph-mon.controller.log
    -rw-r--r--  1 ceph ceph    106920 Jun 26 14:29 ceph-osd.0.log

    1. Ceph log levels include ERR, WRN, and INFO (a sketch for filtering by level follows below)
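    Since the cluster log mixes these levels, grep can pull out only warnings and errors, and ceph -w streams the same cluster log live. A minimal sketch:
      root@controller:~# grep -E 'WRN|ERR' /var/log/ceph/ceph.log   # show only warning and error entries
      root@controller:~# ceph -w                                    # follow the cluster log in real time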