Design and planning

Two node roles are defined: ceph and nova. Any node that is not part of the Ceph cluster is a nova node and runs compute services. The control and network roles are currently hosted on ceph{01..03}. Each node is dual-homed (two uplinks).
VLAN | Name | Subnet (CIDR) | Purpose | Devices | Notes
---|---|---|---|---|---
1031-1060 | os-tenant | user-defined | project private networks | L2 switches hosting the compute and network nodes | 31 private networks should be enough; if not, extend the range to 900-1030.
1031 | os-wuhan31 | 100.100.31.0/24 | host network of the business zone (wuhan31) | L2 switches hosting the compute and network nodes | Not needed by this cluster; listed here to avoid accidental reuse.
33 | os-extnet | 192.168.33.0/24 | floating-IP network; NAT for private networks | L2 and L3 switches for all nodes | Lets private networks reach the outside, or be reached from outside (once a floating IP is bound).
34-37 | os-pubnet | 192.168.34.0/24 - 192.168.37.0/24 | direct intranet access | L2 and L3 switches for all nodes | Usually serves as the shared egress network.
IP and hostname planning

The gateway is 100.100.31.1.
    127.0.0.1      localhost
    ::1            localhost localhost.localdomain localhost6 localhost6.localdomain6
    100.100.31.254 cloud-wuhan31.***.org
    100.100.31.201 wuhan31-ceph01.v3.os wuhan31-ceph01
    100.100.31.202 wuhan31-ceph02.v3.os wuhan31-ceph02
    100.100.31.203 wuhan31-ceph03.v3.os wuhan31-ceph03
    100.100.31.102 wuhan31-nova01.v3.os wuhan31-nova01
    100.100.31.103 wuhan31-nova02.v3.os wuhan31-nova02
VM flavors
CPU: 1 2 4 8
RAM (GB): 1 2 4 8 16
Disk (GB): 20 50

The RAM/CPU ratio is further constrained to the range 1-4. The script below generates the flavors; in the end it creates 22 of them.
    #!/bin/bash
    desc="create flavors for openstack."
    log_file="/dev/shm/create-flavor.log"

    # config cpu, ram, and disk. separate values with spaces.
    cpu_count_list="1 2 4 8"
    ram_gb_list="1 2 4 8 16"
    disk_gb_list="20 50"

    # accepted ram/cpu ratio.
    ram_cpu_factor_min=1
    ram_cpu_factor_max=4

    tip(){ echo >&2 "$*"; }
    die(){ tip "$*"; exit 1; }

    #openstack flavor create [-h] [-f {json,shell,table,value,yaml}]
    #                        [-c COLUMN] [--max-width <integer>]
    #                        [--fit-width] [--print-empty] [--noindent]
    #                        [--prefix PREFIX] [--id <id>] [--ram <size-mb>]
    #                        [--disk <size-gb>] [--ephemeral <size-gb>]
    #                        [--swap <size-mb>] [--vcpus <vcpus>]
    #                        [--rxtx-factor <factor>] [--public | --private]
    #                        [--property <key=value>] [--project <project>]
    #                        [--description <description>]
    #                        [--project-domain <project-domain>]
    #                        <flavor-name>
    OSC="openstack flavor create"

    if [ "$1" != "run" ]; then
        tip "Usage: $0 [run] -- $desc"
        tip "  add argument 'run' to really execute the commands, otherwise they are only printed."
        tip ""
        OSC="echo $OSC"
    else
        # check openrc env.
        [ -z "$OS_USERNAME" ] && die "to run openstack commands, you need to source an openrc file first."
    fi

    for cpu in $cpu_count_list; do
        for ram in $ram_gb_list; do
            ram_cpu_factor=$((ram/cpu))
            [ $ram_cpu_factor -lt $ram_cpu_factor_min ] && {
                tip "INFO: ignore flavor because ram_cpu_factor is less \
    than ram_cpu_factor_min: $ram/$cpu < $ram_cpu_factor_min"
                continue; }
            [ $ram_cpu_factor -gt $ram_cpu_factor_max ] && {
                tip "INFO: ignore flavor because ram_cpu_factor is more \
    than ram_cpu_factor_max: $ram/$cpu > $ram_cpu_factor_max"
                continue; }
            for disk in $disk_gb_list; do
                name="c$cpu-m${ram}G-d${disk}G"
                $OSC --id "$name" \
                    --vcpus "$cpu" \
                    --ram $((ram*1024)) \
                    --disk "$disk" "$name"
                sleep 0.01
            done
        done
    done
This is the flavor list viewed after creation:
    [root@wuhan31-ceph01 ~]# openstack flavor list
    +--------------+--------------+-------+------+-----------+-------+-----------+
    | ID           | Name         |   RAM | Disk | Ephemeral | VCPUs | Is Public |
    +--------------+--------------+-------+------+-----------+-------+-----------+
    | c1-m1G-d20G  | c1-m1G-d20G  |  1024 |   20 |         0 |     1 | True      |
    | c1-m1G-d50G  | c1-m1G-d50G  |  1024 |   50 |         0 |     1 | True      |
    | c1-m2G-d20G  | c1-m2G-d20G  |  2048 |   20 |         0 |     1 | True      |
    | c1-m2G-d50G  | c1-m2G-d50G  |  2048 |   50 |         0 |     1 | True      |
    | c1-m4G-d20G  | c1-m4G-d20G  |  4096 |   20 |         0 |     1 | True      |
    | c1-m4G-d50G  | c1-m4G-d50G  |  4096 |   50 |         0 |     1 | True      |
    | c2-m2G-d20G  | c2-m2G-d20G  |  2048 |   20 |         0 |     2 | True      |
    | c2-m2G-d50G  | c2-m2G-d50G  |  2048 |   50 |         0 |     2 | True      |
    | c2-m4G-d20G  | c2-m4G-d20G  |  4096 |   20 |         0 |     2 | True      |
    | c2-m4G-d50G  | c2-m4G-d50G  |  4096 |   50 |         0 |     2 | True      |
    | c2-m8G-d20G  | c2-m8G-d20G  |  8192 |   20 |         0 |     2 | True      |
    | c2-m8G-d50G  | c2-m8G-d50G  |  8192 |   50 |         0 |     2 | True      |
    | c4-m16G-d20G | c4-m16G-d20G | 16384 |   20 |         0 |     4 | True      |
    | c4-m16G-d50G | c4-m16G-d50G | 16384 |   50 |         0 |     4 | True      |
    | c4-m4G-d20G  | c4-m4G-d20G  |  4096 |   20 |         0 |     4 | True      |
    | c4-m4G-d50G  | c4-m4G-d50G  |  4096 |   50 |         0 |     4 | True      |
    | c4-m8G-d20G  | c4-m8G-d20G  |  8192 |   20 |         0 |     4 | True      |
    | c4-m8G-d50G  | c4-m8G-d50G  |  8192 |   50 |         0 |     4 | True      |
    | c8-m16G-d20G | c8-m16G-d20G | 16384 |   20 |         0 |     8 | True      |
    | c8-m16G-d50G | c8-m16G-d50G | 16384 |   50 |         0 |     8 | True      |
    | c8-m8G-d20G  | c8-m8G-d20G  |  8192 |   20 |         0 |     8 | True      |
    | c8-m8G-d50G  | c8-m8G-d50G  |  8192 |   50 |         0 |     8 | True      |
    +--------------+--------------+-------+------+-----------+-------+-----------+
    [root@wuhan31-ceph01 ~]#
VM networking
VMs get two ways to connect: direct intranet access and private networks.

For the VLAN plan, see the corresponding section above.
Direct intranet access

Four /24 subnets are provided, so at most 251*4 devices can be attached (254 host IPs, minus 1 for the gateway and 2 for DHCP). Expand later if demand grows.
    # Create provider networks that connect directly to the intranet. Because these
    # vlan ids are outside the tenant range defined above, admin privileges are needed.
    for net in {34..37}; do
        openstack network create --provider-network-type vlan \
            --provider-physical-network physnet0 --provider-segment "$net" \
            --share --project admin "net-lan$net"
        openstack subnet create --network "net-lan$net" --gateway "192.168.$net.1" \
            --subnet-range "192.168.$net.0/24" --dns-nameserver 100.100.31.254 "subnet-lan$net"
    done
Private networks
30 VLANs are reserved, enough for 30 independent networks; the number and size of subnets per network is unlimited.

Subnets and routers can be created freely on private networks; they are best used only for cluster-internal traffic.

To communicate with the outside, attach the network to the floating-IP network; to be reachable from outside, bind a floating IP, or perhaps use a load balancer (this part is still to be confirmed).

The floating-IP network currently has 250 IPs. If a network contains many VMs that must be reachable from outside, prefer the direct intranet approach instead.
    # Create the external network (admin privileges required).
    for net in {33..33}; do
        openstack network create --external --provider-network-type vlan \
            --provider-physical-network physnet1 --provider-segment "$net" \
            --share --project antiy "net-ext-lan$net"
        openstack subnet create --network "net-ext-lan$net" --gateway "192.168.$net.1" \
            --subnet-range "192.168.$net.0/24" --dns-nameserver 100.100.31.254 "subnet-floating$net"
    done
The following operations can be done as a regular user:
    # Create a private network (regular user privileges suffice).
    openstack network create --project antiy net-private-antiy01
    # Create an HA router.
    openstack router create --ha --project antiy router-antiy
    # Attach the router to the networks. I have not found the CLI command for setting
    # the external network, so it is suggested to do this part in the web UI.
    #openstack router add subnet router-antiy subnet-private-antiy01
    #openstack router add subnet router-antiy subnet-floating43
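For reference, recent python-openstackclient releases can set the router's external gateway from the CLI as well; a sketch, assuming the network and router names above:

    # Set the external gateway on the router, then attach the tenant subnet.
    openstack router set --external-gateway net-ext-lan33 router-antiy
    openstack router add subnet router-antiy subnet-private-antiy01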
Physical network configuration

Configuration omitted; core switch port x0/0/1 connects to C8-41 x0/0/1.

That concludes the network design; the actual deployment follows.
Part 1: Base environment preparation

1. Environment
OS | IP | Hostname | Role
---|---|---|---
centos7.4 | 100.100.31.201 | wuhan31-ceph01.v3.os | ceph01, kolla-ansible
centos7.4 | 100.100.31.202 | wuhan31-ceph02.v3.os | ceph02
centos7.4 | 100.100.31.203 | wuhan31-ceph03.v3.os | ceph03
centos7.4 | 100.100.31.101 | wuhan31-nova01.v3.os | nova01
centos7.4 | 100.100.31.102 | wuhan31-nova02.v3.os | nova02
Write the IPs and hostnames into /etc/hosts on every node.
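A minimal way of doing that, using the addresses from the table above (adjust to your own environment):

    cat >> /etc/hosts <<EOF
    100.100.31.254 cloud-wuhan31.***.org
    100.100.31.201 wuhan31-ceph01.v3.os wuhan31-ceph01
    100.100.31.202 wuhan31-ceph02.v3.os wuhan31-ceph02
    100.100.31.203 wuhan31-ceph03.v3.os wuhan31-ceph03
    100.100.31.101 wuhan31-nova01.v3.os wuhan31-nova01
    100.100.31.102 wuhan31-nova02.v3.os wuhan31-nova02
    EOF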
2. Set the hostnames
    hostnamectl set-hostname wuhan31-ceph01.v3.os
    hostnamectl set-hostname wuhan31-ceph02.v3.os
    hostnamectl set-hostname wuhan31-ceph03.v3.os
3. Disable the firewall and SELinux
    systemctl stop firewalld
    systemctl disable firewalld
    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
    setenforce 0
4. Configure yum repositories

Switch the yum repos to the company's internal mirrors, including the CentOS cloud repo and the Ceph mimic repo:

    curl -v http://mirrors.***.org/repo/centos7.repo > /etc/yum.repos.d/CentOS-Base.repo
    curl -v http://mirrors.***.org/repo/cloud.repo > /etc/yum.repos.d/cloud.repo
    yum makecache
5. Unify NIC naming
    [root@localhost network-scripts]# cat ifcfg-bond0
    DEVICE=bond0
    BOOTPROTO=static
    TYPE=bond
    ONBOOT=yes
    IPADDR=100.100.31.203
    NETMASK=255.255.255.0
    GATEWAY=100.100.31.1
    DNS1=192.168.55.55
    USERCTL=no
    BONDING_MASTER=yes
    BONDING_OPTS="miimon=200 mode=1"
    [root@localhost network-scripts]# cat ifcfg-em1
    TYPE=Ethernet
    BOOTPROTO=none
    DEVICE=em1
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    [root@localhost network-scripts]# cat ifcfg-em2
    TYPE=Ethernet
    BOOTPROTO=none
    DEVICE=em2
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    [root@localhost network-scripts]#
All nodes use bond0 as the unified interface name.
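To confirm the bond came up correctly, the kernel exposes its status (a quick check, not part of the original steps):

    cat /proc/net/bonding/bond0   # shows bonding mode, MII status, and the currently active slave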
6. Install Docker

Configure the Docker yum repository:
    cat > /etc/yum.repos.d/docker.repo <<EOF
    [docker]
    name=docker
    baseurl=https://download.docker.com/linux/centos/7/x86_64/stable
    enabled=1
    gpgcheck=0
    EOF
Then install docker-ce (here the repo file is fetched from the internal mirror instead):
    curl http://mirrors.***.org/repo/docker.repo > /etc/yum.repos.d/docker.repo
    yum install docker-ce
Configure the private registry mirror:
    mkdir /etc/docker
    cat > /etc/docker/daemon.json <<EOF
    {
      "registry-mirrors": ["http://mirrors.***.org:5000"]
    }
    EOF
Enable and start the service:
    systemctl enable docker
    systemctl start docker
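Kolla's prechecks also require the docker service to run with MountFlags=shared (see the corresponding item in Part 4, Common failures). A minimal systemd drop-in, assuming a stock docker-ce unit:

    mkdir -p /etc/systemd/system/docker.service.d
    cat > /etc/systemd/system/docker.service.d/kolla.conf <<EOF
    [Service]
    MountFlags=shared
    EOF
    systemctl daemon-reload
    systemctl restart docker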
7. Install required software

All nodes need the following installed:
    yum install ceph python-pip -y
Debugging aids: to make troubleshooting easier, install the completion scripts and common network tools:
    yum install bash-completion-extras libvirt-bash-completion net-tools bind-utils sysstat iftop nload tcpdump htop -y
8. Install kolla-ansible

Install pip:
    yum install python-pip -y
Install the dependencies kolla-ansible needs:
    yum install ansible python2-setuptools python-cryptography python-openstackclient -y
Install kolla-ansible with pip:
    pip install kolla-ansible
Note:

If you hit the error `requests 2.20.0 has requirement idna<2.8,>=2.5, but you'll have idna 2.4 which is incompatible.`, force-reinstall the requests library:

    pip install --ignore-installed requests

Likewise, on `Cannot uninstall 'PyYAML'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.`, force-reinstall PyYAML:

    sudo pip install --ignore-installed PyYAML
Note: steps 1-7 are performed on all nodes; step 8 only on the deployment node (wuhan31-ceph01 here).
Part 2: Deploy the Ceph cluster

1. Configure remote login for the ceph user

Run on all ceph nodes. (Substitute your own public key below; the goal of this step is to let the ceph user log in via SSH key.)
    ssh-keygen -t rsa    # press Enter through every prompt
    usermod -s /bin/bash ceph
    mkdir ~ceph/.ssh/
    cat >> ~ceph/.ssh/authorized_keys << EOF
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDW6VghEC1cUrTZ6TfI9XcOEJZShkoL5YqtHBMtm2iZUnw8Pj6S3S1TCwKfdY0m+kInKlfZhoFCw3Xyee9XY7ZwPX6IEnixZMqO9EpC58LfxH841lw6xC0HesfF0QwWs+EVs5I1RwCN+Zoz2NPfu8RH30LHhBoSQpm75vRkF2trEbdtEI/kuzysO+73oF7R42lGJtgJtFbzLQSO2Vp/Xo7jdD/tdD/gcEsPniSPP3vFQg4EuSafdwxnJFuAxLAMCK+K1SQg7eNqboWYGhSWjOy39bTCZjieXOyNehPTVoqn3/qyC88c7D0PEbvTYxbNkuFU2MM7x9/k+ZGyvYnpex4t root@wuhan31-ceph01.os
    EOF
    cat >> ~/.ssh/authorized_keys << EOF
    ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDW6VghEC1cUrTZ6TfI9XcOEJZShkoL5YqtHBMtm2iZUnw8Pj6S3S1TCwKfdY0m+kInKlfZhoFCw3Xyee9XY7ZwPX6IEnixZMqO9EpC58LfxH841lw6xC0HesfF0QwWs+EVs5I1RwCN+Zoz2NPfu8RH30LHhBoSQpm75vRkF2trEbdtEI/kuzysO+73oF7R42lGJtgJtFbzLQSO2Vp/Xo7jdD/tdD/gcEsPniSPP3vFQg4EuSafdwxnJFuAxLAMCK+K1SQg7eNqboWYGhSWjOy39bTCZjieXOyNehPTVoqn3/qyC88c7D0PEbvTYxbNkuFU2MM7x9/k+ZGyvYnpex4t root@wuhan31-ceph01.os
    EOF
    cat > /etc/sudoers.d/ceph <<EOF
    ceph ALL = (root) NOPASSWD:ALL
    Defaults:ceph !requiretty
    EOF
    chown -R ceph:ceph ~ceph/.ssh/
    chmod -R o-rwx ~ceph/.ssh/
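Before moving on, it is worth verifying that key-based login actually works; for example, from the deployment node:

    # Each command should print the remote hostname without asking for a password.
    for h in wuhan31-ceph{01..03}; do ssh "ceph@$h" hostname; done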
2. Create the Ceph cluster

Run on the deployment node.

Install the deployment tool ceph-deploy:
    yum install ceph-deploy -y
    mkdir ~ceph/ceph-deploy
    cd ~ceph/ceph-deploy
    ceph-deploy new wuhan31-ceph{01..03}.os
Edit the configuration file ceph.conf:
    vim ceph.conf

    [global]
    fsid = 567be343-d631-4348-8f9d-2f18be36ce74
    mon_initial_members = wuhan31-ceph01, wuhan31-ceph02, wuhan31-ceph03
    mon_host = wuhan31-ceph01,wuhan31-ceph02,wuhan31-ceph03
    mon_addr = 100.100.31.201:6789,100.100.31.202:6789,100.100.31.203:6789
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    filestore_xattr_use_omap = true
    mon_allow_pool_delete = 1

    [osd]
    osd_client_message_size_cap = 524288000
    osd_deep_scrub_stride = 131072
    osd_op_threads = 2
    osd_disk_threads = 1
    osd_mount_options_xfs = "rw,noexec,nodev,noatime,nodiratime,nobarrier"
    osd_recovery_op_priority = 1
    osd_recovery_max_active = 1
    osd_max_backfills = 1
    osd_recovery_threads = 1

    [client]
    rbd_cache = true
    rbd_cache_size = 1073741824
    rbd_cache_max_dirty = 134217728
    rbd_cache_max_dirty_age = 5
    rbd_cache_writethrough_until_flush = true
    rbd_concurrent_management_ops = 50
    rgw frontends = civetweb port=7480
Then create the initial monitors:
    ceph-deploy mon create-initial
    ceph-deploy admin wuhan31-ceph01 wuhan31-ceph02 wuhan31-ceph03
    # Optional: allow the ceph user to read the admin keyring.
    sudo setfacl -m u:ceph:r /etc/ceph/ceph.client.admin.keyring
Create the mgr daemons:

    ceph-deploy mgr create wuhan31-ceph01 wuhan31-ceph02 wuhan31-ceph03
Add OSDs

The disks here are being reused, so zap them first (my machines' disks are sdb through sdk):
    ceph-deploy disk zap wuhan31-ceph01 /dev/sd{b..k}
    ceph-deploy disk zap wuhan31-ceph02 /dev/sd{b..k}
    ceph-deploy disk zap wuhan31-ceph03 /dev/sd{b..k}
OSDs can then be added in batch with loops like these:
    for dev in /dev/sd{b..k}; do ceph-deploy osd create --data "$dev" wuhan31-ceph01 || break; done
    for dev in /dev/sd{b..k}; do ceph-deploy osd create --data "$dev" wuhan31-ceph02 || break; done
    for dev in /dev/sd{b..k}; do ceph-deploy osd create --data "$dev" wuhan31-ceph03 || break; done
If an error occurs midway, resume by re-running the failed command individually.
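Once all OSDs are added, check the cluster status:

    ceph -s         # overall health; expect HEALTH_OK after peering settles
    ceph osd tree   # all 30 OSDs should be listed as "up"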
3. Create pools

Run on the deployment node.

Create the pools OpenStack needs.

PG count calculator: https://ceph.com/pgcalc/

PG counts are sized per pool. Since there are only 3*10 OSDs for now, the initial PG counts are set as follows; scale up later as needed.
    images   32
    volumes  256
    vms      64
    backups  128
Run the creation on any node with ceph admin privileges:
    ceph osd pool create images 32
    ceph osd pool create volumes 256
    ceph osd pool create vms 64
    ceph osd pool create backups 128
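Since Luminous, Ceph also warns unless each pool is tagged with the application that uses it; a hedged follow-up for the pools above:

    # Tag all four pools as rbd pools to silence the health warning.
    for pool in images volumes vms backups; do
        ceph osd pool application enable "$pool" rbd
    done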
4. Create Ceph clients

Run on the deployment node.

Create the clients and grant their permissions; put the following into a script and run it, or run the commands directly:
    # Define the clients.
    clients="client.cinder client.nova client.glance client.cinder-backup"

    # Create the clients.
    for client in $clients; do
        ceph auth get-or-create "$client"
    done

    # Grant capabilities.
    ceph auth caps client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=cinder-ssd, allow rwx pool=vms, allow rwx pool=images'
    ceph auth caps client.nova mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=cinder-ssd, allow rwx pool=vms, allow rwx pool=images'
    ceph auth caps client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
    ceph auth caps client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'

    # Export the keyrings.
    for client in $clients; do
        ceph auth export "$client" -o /etc/ceph/ceph."$client".keyring
    done
5. Configure the Ceph dashboard plugin

Run on the deployment node.
    ceph mgr module enable dashboard
    ceph config set mgr mgr/dashboard/ssl false
    ceph config set mgr mgr/dashboard/server_address ::
    ceph config set mgr mgr/dashboard/server_port 7000
    ceph dashboard set-login-credentials <username> <password>
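To confirm where the dashboard ended up listening:

    ceph mgr services   # prints the dashboard URL, e.g. on port 7000 as configured above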
Part 3: Deploy OpenStack with kolla

Everything below runs on the deployment node.

1. Write the configuration

Copy the templates

Copy the kolla-ansible templates (it was installed via pip here):
Required: copy the configuration templates:

    cp -ar /usr/share/kolla-ansible/etc_examples/* /etc/
2. Generate passwords

Be sure to finish the "copy the templates" step first, or password generation will fail.

Then simply run:

    kolla-genpwd
globals.yml

Edit /etc/kolla/globals.yml:
    # OpenStack version info. The rocky release is chosen; "source" means built from
    # source, which has the most complete set of packages. With "binary" on CentOS
    # you only get the Red Hat-provided packages, and some are missing.
    kolla_install_type: "source"
    openstack_release: "rocky"

    # With multiple control nodes, enable HA. Note: the VIP (virtual IP) must be a
    # currently unused IP on the same subnet as the node IPs.
    enable_haproxy: "yes"
    kolla_internal_vip_address: "100.100.31.254"

    # These FQDNs must resolve in both the internal DNS and the hosts files.
    kolla_internal_fqdn: "xiaoxuantest.***.org"
    kolla_external_fqdn: "xiaoxuantest.***.org"

    # Path for custom service configuration; only used on the deployment node.
    node_custom_config: "/etc/kolla/config"

    # Virtualization type. For experiments inside VMs, change this to qemu (slower,
    # but it works). kvm requires CPU/board/BIOS support with hardware virtualization
    # enabled; if the kvm kernel module will not load on a compute node, check dmesg.
    nova_compute_virt_type: "kvm"

    # Network interfaces. The external interface must be a dedicated one, otherwise
    # the node will lose connectivity.
    neutron_external_interface: "eth1"
    network_interface: "bond0"
    api_interface: "bond0"
    storage_interface: "bond0"
    cluster_interface: "bond0"
    tunnel_interface: "bond0"
    # dns_interface: "eth"  # DNS is not integrated; look into it later.

    # Network virtualization. We skip openvswitch and use linuxbridge directly.
    neutron_plugin_agent: "linuxbridge"
    enable_openvswitch: "no"

    # Network HA: creates multiple agents for DHCP and L3 (routing).
    enable_neutron_agent_ha: "yes"

    # Network encapsulation: currently all vlan; flat is kept in reserve for
    # using a physical NIC directly.
    neutron_type_drivers: "flat,vlan"

    # Tenant network isolation: vlan. Kolla does not support this out of the box,
    # so we add our own configuration under the node_custom_config directory.
    neutron_tenant_network_types: "vlan"

    # Network plugins.
    enable_neutron_lbaas: "yes"
    enable_neutron_***aas: "yes"
    enable_neutron_fwaas: "yes"

    # ELK centralized logging.
    enable_central_logging: "yes"

    # Debug mode produces very verbose logs; enable temporarily when needed.
    #openstack_logging_debug: "True"

    # I forget why these are on... try disabling them; components that depend on
    # them will enable them automatically.
    enable_kafka: "yes"
    enable_fluentd: "yes"

    # We use an external ceph and do not let kolla deploy it: when kolla deploys
    # ceph, some OSDs may fail and the OSD IDs end up out of order, which is messy,
    # and managing the storage cluster from the hosts afterwards is awkward too.
    enable_ceph: "no"
    glance_backend_ceph: "yes"
    cinder_backend_ceph: "yes"
    nova_backend_ceph: "yes"
    gnocchi_backend_storage: "ceph"
    enable_manila_backend_cephfs_native: "yes"

    # Enabled features.
    #enable_ceilometer: "yes"
    enable_cinder: "yes"
    #enable_designate: "yes"
    enable_destroy_images: "yes"
    #enable_gnocchi: "yes"
    enable_grafana: "yes"
    enable_heat: "yes"
    enable_horizon: "yes"
    #enable_ironic: "yes"
    #enable_ironic_ipxe: "yes"
    #enable_ironic_neutron_agent: "yes"
    #enable_kuryr: "yes"
    #enable_magnum: "yes"
    # enable_neutron_dvr
    # enable_ovs_dpdk
    #enable_nova_serialconsole_proxy: "yes"
    #enable_octavia: "yes"
    enable_redis: "yes"
    #enable_trove: "yes"

    # Other settings.
    glance_backend_file: "no"
    #designate_ns_record: "nova."
    #ironic_dnsmasq_dhcp_range: "11.0.0.10,11.0.0.111"
    openstack_region_name: "xiaoxuantest"
3. Write the inventory file

Create a directory to hold the inventory file:
    mkdir kolla-ansible
    cp /usr/share/kolla-ansible/ansible/inventory/multinode kolla-ansible/inventory-xiaoxuantest
The key parts of the edited inventory file; everything else stays unchanged:
    [control]
    wuhan31-ceph01
    wuhan31-ceph02
    wuhan31-ceph03

    [network]
    wuhan31-ceph01
    wuhan31-ceph02
    wuhan31-ceph03

    [external-compute]
    wuhan31-ceph01
    wuhan31-ceph02
    wuhan31-ceph03

    [monitoring:children]
    control

    [storage:children]
    control
4. Ceph integration

Networking

Because we use vlan tenant networks, manual configuration is required:
    mkdir /etc/kolla/config/neutron
    cat > /etc/kolla/config/neutron/ml2_conf.ini <<EOF
    [ml2_type_vlan]
    network_vlan_ranges = physnet0:1031:1060,physnet1
    [linux_bridge]
    physical_interface_mappings = physnet0:eth0,physnet1:eth1
    EOF
Dashboard
Disable "create new volume" by default in the launch-instance dialog:
    mkdir /etc/kolla/config/horizon/
    cat > /etc/kolla/config/horizon/custom_local_settings <<EOF
    LAUNCH_INSTANCE_DEFAULTS = {
        'create_volume': False,
    }
    EOF
Here is the complete listing of the /etc/kolla/config/ directory:
    [root@wuhan31-ceph01 config]# ls -lR
    .:
    total 4
    lrwxrwxrwx. 1 kolla kolla  19 Mar 11 17:06 ceph.conf -> /etc/ceph/ceph.conf
    drwxr-xr-x. 4 kolla kolla 117 Mar 28 14:43 cinder
    -rw-r--r--. 1 root  root   39 Mar 28 14:39 cinder.conf
    drwxr-xr-x. 2 kolla kolla  80 Mar 11 17:18 glance
    drwxr-xr-x. 2 root  root   35 Mar 19 11:21 horizon
    drwxr-xr-x. 2 root  root   26 Mar 14 15:49 neutron
    drwxr-xr-x. 2 kolla kolla 141 Mar 11 17:18 nova

    ./cinder:
    total 8
    lrwxrwxrwx. 1 kolla kolla  19 Mar 11 17:10 ceph.conf -> /etc/ceph/ceph.conf
    drwxr-xr-x. 2 kolla kolla  81 Mar 11 17:18 cinder-backup
    -rwxr-xr-x. 1 kolla kolla 274 Feb 26 16:47 cinder-backup.conf
    drwxr-xr-x. 2 kolla kolla  40 Mar 11 17:18 cinder-volume
    -rwxr-xr-x. 1 kolla kolla 534 Mar 28 14:38 cinder-volume.conf

    ./cinder/cinder-backup:
    total 0
    lrwxrwxrwx. 1 kolla kolla 43 Mar 11 17:18 ceph.client.cinder-backup.keyring -> /etc/ceph/ceph.client.cinder-backup.keyring
    lrwxrwxrwx. 1 kolla kolla 36 Mar 11 17:18 ceph.client.cinder.keyring -> /etc/ceph/ceph.client.cinder.keyring

    ./cinder/cinder-volume:
    total 0
    lrwxrwxrwx. 1 kolla kolla 36 Mar 11 17:18 ceph.client.cinder.keyring -> /etc/ceph/ceph.client.cinder.keyring

    ./glance:
    total 4
    lrwxrwxrwx. 1 kolla kolla  36 Mar 11 17:18 ceph.client.glance.keyring -> /etc/ceph/ceph.client.glance.keyring
    lrwxrwxrwx. 1 kolla kolla  19 Mar 11 17:07 ceph.conf -> /etc/ceph/ceph.conf
    -rwxr-xr-x. 1 kolla kolla 138 Feb 27 11:55 glance-api.conf

    ./horizon:
    total 4
    -rw-r--r--. 1 root root 59 Mar 19 11:21 custom_local_settings

    ./neutron:
    total 4
    -rw-r--r--. 1 root root 141 Mar 14 15:49 ml2_conf.ini

    ./nova:
    total 8
    lrwxrwxrwx. 1 kolla kolla  36 Mar 11 17:18 ceph.client.cinder.keyring -> /etc/ceph/ceph.client.cinder.keyring
    lrwxrwxrwx. 1 kolla kolla  34 Mar 11 17:18 ceph.client.nova.keyring -> /etc/ceph/ceph.client.nova.keyring
    lrwxrwxrwx. 1 kolla kolla  19 Mar 11 17:07 ceph.conf -> /etc/ceph/ceph.conf
    -rwxr-xr-x. 1 kolla kolla 101 Feb 27 17:28 nova-compute.conf
    -rwxr-xr-x. 1 kolla kolla  39 Dec 24 16:52 nova-scheduler.conf
The contents of the relevant config files in that directory:
    # find -type f -printf "=== FILE: %p ===\n" -exec cat {} \;
    === FILE: ./cinder/cinder-backup.conf ===
    [DEFAULT]
    backup_ceph_conf=/etc/ceph/ceph.conf
    backup_ceph_user=cinder-backup
    backup_ceph_chunk_size = 134217728
    backup_ceph_pool=backups
    backup_driver = cinder.backup.drivers.ceph
    backup_ceph_stripe_unit = 0
    backup_ceph_stripe_count = 0
    restore_discard_excess_bytes = true
    === FILE: ./cinder/cinder-volume.conf ===
    [DEFAULT]
    enabled_backends=cinder-sas,cinder-ssd
    [cinder-sas]
    rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_user=cinder
    backend_host=rbd:volumes
    rbd_pool=volumes
    volume_backend_name=cinder-sas
    volume_driver=cinder.volume.drivers.rbd.RBDDriver
    rbd_secret_uuid=5b3ec4eb-c276-4cf2-a042-8ec906d05f69
    [cinder-ssd]
    rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_user=cinder
    backend_host=rbd:volumes
    rbd_pool=cinder-ssd
    volume_backend_name=cinder-ssd
    volume_driver=cinder.volume.drivers.rbd.RBDDriver
    rbd_secret_uuid=5b3ec4eb-c276-4cf2-a042-8ec906d05f69
    === FILE: ./glance/glance-api.conf ===
    [glance_store]
    default_store = rbd
    stores = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
    === FILE: ./nova/nova-compute.conf ===
    [libvirt]
    images_rbd_pool=vms
    images_type=rbd
    images_rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_user=nova
    === FILE: ./nova/nova-scheduler.conf ===
    [DEFAULT]
    scheduler_max_attempts = 100
    === FILE: ./neutron/ml2_conf.ini ===
    [ml2_type_vlan]
    network_vlan_ranges = physnet0:1000:1030,physnet1
    [linux_bridge]
    physical_interface_mappings = physnet0:eth0,physnet1:eth1
    === FILE: ./horizon/custom_local_settings ===
    LAUNCH_INSTANCE_DEFAULTS = {
        'create_volume': False,
    }
    === FILE: ./cinder.conf ===
    [DEFAULT]
    default_volume_type=standard
*Note: rbd_secret_uuid= in ./cinder/cinder-volume.conf must be set to the value below:
    [root@wuhan31-ceph01 ~]# grep cinder_rbd_secret_uuid /etc/kolla/passwords.yml
    cinder_rbd_secret_uuid: 1ae2156b-7c33-4fbb-a26a-c770fadc54b6
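One way to splice that value into the file, assuming the paths shown above:

    uuid=$(awk '/^cinder_rbd_secret_uuid:/{print $2}' /etc/kolla/passwords.yml)
    sed -i "s/^rbd_secret_uuid=.*/rbd_secret_uuid=$uuid/" /etc/kolla/config/cinder/cinder-volume.conf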
The symlinks were created as follows:
    # ceph.conf:
    ln -s /etc/ceph/ceph.conf /etc/kolla/config/nova/
    ln -s /etc/ceph/ceph.conf /etc/kolla/config/glance/
    ln -s /etc/ceph/ceph.conf /etc/kolla/config/cinder/
    # keyrings:
    ln -s /etc/ceph/ceph.client.cinder-backup.keyring /etc/kolla/config/cinder/cinder-backup/
    ln -s /etc/ceph/ceph.client.cinder.keyring /etc/kolla/config/cinder/cinder-backup/
    ln -s /etc/ceph/ceph.client.cinder.keyring /etc/kolla/config/cinder/cinder-volume/
    ln -s /etc/ceph/ceph.client.glance.keyring /etc/kolla/config/glance/
    ln -s /etc/ceph/ceph.client.cinder.keyring /etc/kolla/config/nova/
    ln -s /etc/ceph/ceph.client.nova.keyring /etc/kolla/config/nova/
5. Deploy OpenStack

Prepare the node environment

bootstrap-servers installs quite a few packages as well; consider pre-installing them in the future.

    kolla-ansible -i kolla-ansible/inventory-xiaoxuantest bootstrap-servers
Prechecks:

    kolla-ansible -i kolla-ansible/inventory-xiaoxuantest prechecks

Actual deployment:

    kolla-ansible -i kolla-ansible/inventory-xiaoxuantest deploy
Adjusting the configuration and redeploying

To adjust the configuration, edit globals.yml and then run reconfigure. The -t flag limits the run to the changed modules.
    kolla-ansible -i kolla-ansible/inventory-xiaoxuantest reconfigure -t neutron
    kolla-ansible -i kolla-ansible/inventory-xiaoxuantest deploy -t neutron
Finish the deployment

This step mainly generates admin-openrc.sh:
    kolla-ansible post-deploy
    . /etc/kolla/admin-openrc.sh
Initial demo

Running it downloads a cirros image, creates networks, and launches a batch of test VMs automatically:
    /usr/share/kolla-ansible/init-runonce
Look up the login password:
    grep admin /etc/kolla/passwords.yml
Without internal mirrors, deployment takes quite a while; downloads from foreign sources are slow.

Official troubleshooting guide: https://docs.openstack.org/kolla-ansible/latest/user/troubleshooting.html

The containers running after a successful deployment:
    [root@wuhan31-ceph01 ~]# docker ps -a
    CONTAINER ID  IMAGE                                                 COMMAND                 CREATED      STATUS      PORTS  NAMES
    d141ac504ec6  kolla/centos-source-grafana:rocky                     "dumb-init --single-…"  8 weeks ago  Up 8 weeks         grafana
    41e12c24ba7f  kolla/centos-source-horizon:rocky                     "dumb-init --single-…"  8 weeks ago  Up 8 weeks         horizon
    6989a4aeb33a  kolla/centos-source-heat-engine:rocky                 "dumb-init --single-…"  8 weeks ago  Up 8 weeks         heat_engine
    35665589b4a4  kolla/centos-source-heat-api-cfn:rocky                "dumb-init --single-…"  8 weeks ago  Up 8 weeks         heat_api_cfn
    f18c98468796  kolla/centos-source-heat-api:rocky                    "dumb-init --single-…"  8 weeks ago  Up 8 weeks         heat_api
    bc77f4d3c957  kolla/centos-source-neutron-metadata-agent:rocky      "dumb-init --single-…"  8 weeks ago  Up 8 weeks         neutron_metadata_agent
    7334b93c6564  kolla/centos-source-neutron-lbaas-agent:rocky         "dumb-init --single-…"  8 weeks ago  Up 8 weeks         neutron_lbaas_agent
    0dd7a55245c4  kolla/centos-source-neutron-l3-agent:rocky            "dumb-init --single-…"  8 weeks ago  Up 8 weeks         neutron_l3_agent
    beec0f19ec7f  kolla/centos-source-neutron-dhcp-agent:rocky          "dumb-init --single-…"  8 weeks ago  Up 8 weeks         neutron_dhcp_agent
    af6841ebc21e  kolla/centos-source-neutron-linuxbridge-agent:rocky   "dumb-init --single-…"  8 weeks ago  Up 8 weeks         neutron_linuxbridge_agent
    49dc0457445d  kolla/centos-source-neutron-server:rocky              "dumb-init --single-…"  8 weeks ago  Up 8 weeks         neutron_server
    677c0be4ab6b  kolla/centos-source-nova-compute:rocky                "dumb-init --single-…"  8 weeks ago  Up 8 weeks         nova_compute
    402b1e673777  kolla/centos-source-nova-novncproxy:rocky             "dumb-init --single-…"  8 weeks ago  Up 8 weeks         nova_novncproxy
    e35729b76996  kolla/centos-source-nova-consoleauth:rocky            "dumb-init --single-…"  8 weeks ago  Up 8 weeks         nova_consoleauth
    8b193f562e47  kolla/centos-source-nova-conductor:rocky              "dumb-init --single-…"  8 weeks ago  Up 8 weeks         nova_conductor
    885581445be0  kolla/centos-source-nova-scheduler:rocky              "dumb-init --single-…"  8 weeks ago  Up 8 weeks         nova_scheduler
    171128b7bcb7  kolla/centos-source-nova-api:rocky                    "dumb-init --single-…"  8 weeks ago  Up 8 weeks         nova_api
    8d7f3de2ad63  kolla/centos-source-nova-placement-api:rocky          "dumb-init --single-…"  8 weeks ago  Up 8 weeks         placement_api
    ab763320f268  kolla/centos-source-nova-libvirt:rocky                "dumb-init --single-…"  8 weeks ago  Up 8 weeks         nova_libvirt
    bbd4c3e2c961  kolla/centos-source-nova-ssh:rocky                    "dumb-init --single-…"  8 weeks ago  Up 8 weeks         nova_ssh
    80e7098f0bfb  kolla/centos-source-cinder-backup:rocky               "dumb-init --single-…"  8 weeks ago  Up 8 weeks         cinder_backup
    20e2ff43d0e1  kolla/centos-source-cinder-volume:rocky               "dumb-init --single-…"  8 weeks ago  Up 8 weeks         cinder_volume
    6caba29f7ce2  kolla/centos-source-cinder-scheduler:rocky            "dumb-init --single-…"  8 weeks ago  Up 8 weeks         cinder_scheduler
    3111622e4e83  kolla/centos-source-cinder-api:rocky                  "dumb-init --single-…"  8 weeks ago  Up 8 weeks         cinder_api
    2c011cfae829  kolla/centos-source-glance-api:rocky                  "dumb-init --single-…"  8 weeks ago  Up 8 weeks         glance_api
    be84e405afdd  kolla/centos-source-kafka:rocky                       "dumb-init --single-…"  8 weeks ago  Up 8 weeks         kafka
    09aef04ad59e  kolla/centos-source-keystone-fernet:rocky             "dumb-init --single-…"  8 weeks ago  Up 8 weeks         keystone_fernet
    2ba9e19844fd  kolla/centos-source-keystone-ssh:rocky                "dumb-init --single-…"  8 weeks ago  Up 8 weeks         keystone_ssh
    8eebe226b065  kolla/centos-source-keystone:rocky                    "dumb-init --single-…"  8 weeks ago  Up 8 weeks         keystone
    662d85c00a64  kolla/centos-source-rabbitmq:rocky                    "dumb-init --single-…"  8 weeks ago  Up 8 weeks         rabbitmq
    7d373ef0fdee  kolla/centos-source-mariadb:rocky                     "dumb-init kolla_sta…"  8 weeks ago  Up 8 weeks         mariadb
    ab9f5d612925  kolla/centos-source-memcached:rocky                   "dumb-init --single-…"  8 weeks ago  Up 8 weeks         memcached
    a728298938f7  kolla/centos-source-kibana:rocky                      "dumb-init --single-…"  8 weeks ago  Up 8 weeks         kibana
    7d22d71cc31b  kolla/centos-source-keepalived:rocky                  "dumb-init --single-…"  8 weeks ago  Up 8 weeks         keepalived
    dae774ca7e33  kolla/centos-source-haproxy:rocky                     "dumb-init --single-…"  8 weeks ago  Up 8 weeks         haproxy
    14b340bb8139  kolla/centos-source-redis-sentinel:rocky              "dumb-init --single-…"  8 weeks ago  Up 8 weeks         redis_sentinel
    3023e95f465f  kolla/centos-source-redis:rocky                       "dumb-init --single-…"  8 weeks ago  Up 8 weeks         redis
    a3ed7e8fe8ff  kolla/centos-source-elasticsearch:rocky               "dumb-init --single-…"  8 weeks ago  Up 8 weeks         elasticsearch
    06b28cd0f7c7  kolla/centos-source-zookeeper:rocky                   "dumb-init --single-…"  8 weeks ago  Up 8 weeks         zookeeper
    630219f5fb29  kolla/centos-source-chrony:rocky                      "dumb-init --single-…"  8 weeks ago  Up 8 weeks         chrony
    6f6189a4dfda  kolla/centos-source-cron:rocky                        "dumb-init --single-…"  8 weeks ago  Up 8 weeks         cron
    039f08ec1bbf  kolla/centos-source-kolla-toolbox:rocky               "dumb-init --single-…"  8 weeks ago  Up 8 weeks         kolla_toolbox
    f839d23859cc  kolla/centos-source-fluentd:rocky                     "dumb-init --single-…"  8 weeks ago  Up 8 weeks         fluentd
    [root@wuhan31-ceph01 ~]#
These are the networks that were added; all of them use direct intranet access.
Part 4: Common failures
MariaDB: this happened after all nodes had been powered off.

Port 3306 was listening on the VIP, but not on the nodes, and the mariadb containers were restart-looping.

The likely cause: after all nodes went down at once, the MariaDB cluster could not come back by itself and had to be recovered:

    kolla-ansible -i kolla-ansible/inventory-xiaoxuantest mariadb_recovery

After it completes, confirm that port 3306 is listening on every node.
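For example:

    ss -tlnp | grep 3306   # run on each node; mysqld should show up as listening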
Other problems encountered:
ironic : Checking ironic-agent files exist for Ironic Inspector
    TASK [ironic : Checking ironic-agent files exist for Ironic Inspector] ********************
    failed: [localhost -> localhost] (item=ironic-agent.kernel) => {"changed": false, "failed_when_result": true, "item": "ironic-agent.kernel", "stat": {"exists": false}}
    failed: [localhost -> localhost] (item=ironic-agent.initramfs) => {"changed": false, "failed_when_result": true, "item": "ironic-agent.initramfs", "stat": {"exists": false}}
Worked around by temporarily disabling all the enable_ironic* options.
neutron : Checking if 'MountFlags' for docker service is set to 'shared'
    TASK [neutron : Checking if 'MountFlags' for docker service is set to 'shared'] ***********
    fatal: [localhost]: FAILED! => {"changed": false, "cmd": ["systemctl", "show", "docker"], "delta": "0:00:00.010391", "end": "2018-12-24 20:44:46.791156", "failed_when_result": true, "rc": 0, "start": "2018-12-24 20:44:46.780765",...
See the Docker subsection of the environment preparation chapter: the docker service must run with MountFlags=shared.
ceilometer : Checking gnocchi backend for ceilometer
    TASK [ceilometer : Checking gnocchi backend for ceilometer] *******************************
    fatal: [localhost -> localhost]: FAILED! => {"changed": false, "msg": "gnocchi is required but not enabled"}

Fixed by enabling gnocchi.
octavia : Checking certificate files exist for octavia
    TASK [octavia : Checking certificate files exist for octavia] *****************************
    failed: [localhost -> localhost] (item=cakey.pem) => {"changed": false, "failed_when_result": true, "item": "cakey.pem", "stat": {"exists": false}}
    failed: [localhost -> localhost] (item=ca_01.pem) => {"changed": false, "failed_when_result": true, "item": "ca_01.pem", "stat": {"exists": false}}
    failed: [localhost -> localhost] (item=client.pem) => {"changed": false, "failed_when_result": true, "item": "client.pem", "stat": {"exists": false}}
Running kolla-ansible certificates still did not generate them; a quick search shows upstream has not fixed this yet: https://bugs.launchpad.net/kolla-ansible/+bug/1668377

Disabled octavia for now; will troubleshoot and generate the certificates manually later.
common : Restart fluentd container
    RUNNING HANDLER [common : Restart fluentd container] **************************************
    fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unknown error message: Get https://192.168.55.201:4000/v1/_ping: dial tcp 100.100.31.201:4000: getsockopt: connection refused"}
A check confirmed that nothing was listening on port 4000. Deployed a registry following the official docs [ref 1].
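For reference, a minimal local registry on port 4000 can be run with the standard registry image (a sketch; the official kolla docs cover this in more detail, and the storage path is an assumption):

    docker run -d --name registry --restart=always \
        -p 4000:5000 -v /opt/registry:/var/lib/registry registry:2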