Ironic is an OpenStack project which provisions bare metal (as opposed to virtual) machines. It may be used independently or as part of an OpenStack Cloud, and integrates with the OpenStack Identity (keystone), Compute (nova), Network (neutron), Image (glance), and Object (swift) services.
The Bare Metal service manages hardware through both common (e.g. PXE and IPMI) and vendor-specific remote management protocols. It provides the cloud operator with a unified interface to a heterogeneous fleet of servers while also providing the Compute service with an interface that allows physical servers to be managed as though they were virtual machines.
Official documentation: https://docs.openstack.org/ironic/latest/
A bare metal node is a physical server with no operating system deployed on it. Compared with a virtual machine, a bare metal node offers stronger compute power, exclusive use of resources, and better security isolation. Ironic aims to provide users with self-service bare metal management; it can be used standalone or integrated with OpenStack, and here we focus on the latter.
Note: Bifrost is a collection of Ansible playbooks that automates the deployment of standalone Ironic.
Ironic provides bare metal management for OpenStack, letting users manage bare metal nodes just as they manage virtual machines: deploying a bare metal server becomes as simple as deploying a VM, giving users a bare metal cloud infrastructure with multi-tenant networking. Ironic relies mainly on PXE and IPMI to perform batch deployment and power control, so most physical server models can be installed and power-managed through Ironic; for the remaining models, targeted management drivers can be developed quickly on top of Ironic's pluggable driver architecture. With a standard API, broad driver support, and a lightweight footprint, Ironic suits use cases ranging from small edge deployments to large data centers, and provides an ideal environment for hosting high-performance cloud applications and architectures, including container orchestration platforms such as Kubernetes.
Problems that Ironic can solve:
How Ironic cooperates with other OpenStack projects:
Note: hardware management falls into two categories, out-of-band and in-band.
• node: basic information about a bare metal machine, including CPU, storage, and the type of driver Ironic uses to manage it.
• chassis: bare metal template information, used to group and classify nodes.
• port: basic information about a node's network ports, such as MAC address and LLDP data.
• portgroup: the port-group configuration that the node's top-of-rack switch applies to its ports.
• conductor: records the state of each ironic-conductor instance and the driver types it supports.
• volume connector/target: records the block-device attachment information of a node.
Official documentation: https://docs.openstack.org/ironic/rocky/contributor/states.html
Once a bare metal machine has been racked, cabled, and otherwise physically installed, an administrator enrolls its information into Ironic so that all subsequent operations can be performed on it. During this stage, Ironic Inspector can optionally be used to collect the node's hardware configuration and upstream switch information automatically, i.e. bare metal introspection. Depending on the specific driver, however, Ironic Inspector cannot always do all of the work, and some data still has to be entered manually.
The introspection stage is carried out jointly by IPA and Ironic Inspector: the former collects and transmits the data, the latter processes it. The information IPA collects includes CPU count, CPU feature flags, memory size, disk size, NIC IP addresses, and MAC addresses, which later serve as scheduling inputs for the Nova Scheduler. If an SDN network is attached, the LLDP (Link Layer Discovery Protocol) frames received on the node's tenant NICs must also be captured. These LLDP frames are emitted by the SDN controller and carry a Chassis ID and a Port ID identifying the switch port, which lets the SDN controller push forwarding rules to exactly the right switch port.
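The Chassis ID / Port ID extraction mentioned above can be illustrated with a short sketch. This is not Ironic Inspector's actual code; it simply follows the LLDP TLV layout (a 2-byte header with a 7-bit type and 9-bit length, Chassis ID = type 1, Port ID = type 2, each value prefixed by one subtype byte):

```python
# Illustrative LLDP TLV walker: pull the Chassis ID and Port ID out of a raw
# LLDP payload, the two fields used to identify a switch port.
import struct

def parse_lldp(payload: bytes) -> dict:
    """Walk the TLV list of an LLDP payload and pick out Chassis ID / Port ID."""
    info, offset = {}, 0
    while offset + 2 <= len(payload):
        header, = struct.unpack_from("!H", payload, offset)
        tlv_type, tlv_len = header >> 9, header & 0x1FF
        offset += 2
        value = payload[offset:offset + tlv_len]
        offset += tlv_len
        if tlv_type == 0:          # End of LLDPDU
            break
        if tlv_type == 1:          # Chassis ID: 1 subtype byte + ID
            info["chassis_id"] = value[1:]
        elif tlv_type == 2:        # Port ID: 1 subtype byte + ID
            info["port_id"] = value[1:]
    return info

# Hand-crafted payload: Chassis ID "AB" (subtype 4), Port ID "p1" (subtype 5), End TLV.
payload = bytes([0x02, 0x03, 0x04]) + b"AB" + bytes([0x04, 0x03, 0x05]) + b"p1" + b"\x00\x00"
print(parse_lldp(payload))   # → {'chassis_id': b'AB', 'port_id': b'p1'}
```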
Inspection state machine: [data entry] -> [management] -> [inspection] -> [node available].
After the manage API request, Ironic validates the data the user entered, checking whether it satisfies the requirements of the next step. After the provide API request (which tells Ironic that this node is ready to move on to operating-system deployment), the node enters the cleaning state (this stage exists to hand users a pristine bare metal node); once cleaning finishes, the node is formally marked available and can be used freely, with its basic information (e.g. CPU, RAM, disks, NICs) fully recorded. Once a node has passed introspection and reached the available state, it can enter the provisioning stage: the user specifies an image, networks, and other details to deploy a bare metal instance, and the cloud platform automates resource scheduling, operating-system installation, and network configuration. As soon as the instance is created, the user can run workloads on the physical server.
Provision state machine: [set the deploy template] -> [pass in deploy parameters] -> [boot the node into the deploy ramdisk] -> [ironic-python-agent takes over the node] -> [write the image] -> [boot the operating system] -> [activate the node].
After the active API request triggers operating-system deployment, the deployment details (e.g. user image, instance metadata, network resource allocation) are persisted into the node's database record. The clean stage is what allows Ironic to keep handing out pristine bare metal nodes (uniformly configured, with no leftover data) to different users in a multi-tenant environment. It defines a unified, extensible pipeline for configuring and wiping nodes: through it, users can specify the cleaning steps they need, such as disk erasure, RAID configuration, and BIOS settings, as well as the execution priority of those steps.
Download Devstack:
git clone https://git.openstack.org/openstack-dev/devstack.git -b stable/stein
sudo ./devstack/tools/create-stack-user.sh
sudo su - stack
Configure local.conf:
[[local|localrc]]
HOST_IP=192.168.1.100

# Use TryStack(99cloud) git mirror
GIT_BASE=http://git.trystack.cn
#GIT_BASE=https://git.openstack.org

# Reclone each time
RECLONE=no

# Enable Logging
DEST=/opt/stack
LOGFILE=$DEST/logs/stack.sh.log
VERBOSE=True
LOG_COLOR=True
SCREEN_LOGDIR=$DEST/logs
LOGDAYS=1

# Define images to be automatically downloaded during the DevStack built process.
DOWNLOAD_DEFAULT_IMAGES=False
IMAGE_URLS="http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"

# use TryStack git mirror
GIT_BASE=http://git.trystack.cn
NOVNC_REPO=http://git.trystack.cn/kanaka/noVNC.git
SPICE_REPO=http://git.trystack.cn/git/spice/spice-html5.git

# Apache Frontend
ENABLE_HTTPD_MOD_WSGI_SERVICES=False

# IP Version
IP_VERSION=4

# Credentials
ADMIN_PASSWORD=password
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
SWIFT_HASH=password
SWIFT_TEMPURL_KEY=password

# Enable Ironic plugin
enable_plugin ironic https://git.openstack.org/openstack/ironic stable/stein

# Disable nova novnc service, ironic does not support it anyway.
disable_service n-novnc

# Enable Swift for the direct deploy interface.
enable_service s-proxy
enable_service s-object
enable_service s-container
enable_service s-account

# Cinder
VOLUME_GROUP_NAME="stack-volumes"
VOLUME_NAME_PREFIX="volume-"
VOLUME_BACKING_FILE_SIZE=100G

# Neutron
ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta

# By default, DevStack creates a 10.0.0.0/24 network for instances.
# If this overlaps with the hosts network, you may adjust with the
# following.
NETWORK_GATEWAY=10.1.0.1
FIXED_RANGE=10.1.0.0/24
FIXED_NETWORK_SIZE=256

# Swift temp URL's are required for the direct deploy interface
SWIFT_ENABLE_TEMPURLS=True

# Create 3 virtual machines to pose as Ironic's baremetal nodes.
IRONIC_VM_COUNT=3
IRONIC_BAREMETAL_BASIC_OPS=True
DEFAULT_INSTANCE_TYPE=baremetal

# Enable additional hardware types, if needed.
#IRONIC_ENABLED_HARDWARE_TYPES=ipmi,fake-hardware
# Don't forget that many hardware types require enabling of additional
# interfaces, most often power and management:
#IRONIC_ENABLED_MANAGEMENT_INTERFACES=ipmitool,fake
#IRONIC_ENABLED_POWER_INTERFACES=ipmitool,fake

# The 'ipmi' hardware type's default deploy interface is 'iscsi'.
# This would change the default to 'direct':
#IRONIC_DEFAULT_DEPLOY_INTERFACE=direct

# Change this to alter the default driver for nodes created by devstack.
# This driver should be in the enabled list above.
IRONIC_DEPLOY_DRIVER=ipmi

# The parameters below represent the minimum possible values to create
# functional nodes.
IRONIC_VM_SPECS_RAM=1280
IRONIC_VM_SPECS_DISK=10

# Size of the ephemeral partition in GB. Use 0 for no ephemeral partition.
IRONIC_VM_EPHEMERAL_DISK=0

# To build your own IPA ramdisk from source, set this to True
IRONIC_BUILD_DEPLOY_RAMDISK=False
VIRT_DRIVER=ironic

# Log all output to files
LOGFILE=/opt/stack/devstack.log
LOGDIR=/opt/stack/logs
IRONIC_VM_LOG_DIR=/opt/stack/ironic-bm-logs
Service status check:
[root@localhost ~]# openstack compute service list
+----+------------------+-----------------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host                  | Zone     | Status  | State | Updated At                 |
+----+------------------+-----------------------+----------+---------+-------+----------------------------+
| 3  | nova-scheduler   | localhost.localdomain | internal | enabled | up    | 2019-05-03T18:56:18.000000 |
| 6  | nova-consoleauth | localhost.localdomain | internal | enabled | up    | 2019-05-03T18:56:22.000000 |
| 7  | nova-conductor   | localhost.localdomain | internal | enabled | up    | 2019-05-03T18:56:14.000000 |
| 1  | nova-conductor   | localhost.localdomain | internal | enabled | up    | 2019-05-03T18:56:15.000000 |
| 3  | nova-compute     | localhost.localdomain | nova     | enabled | up    | 2019-05-03T18:56:18.000000 |
+----+------------------+-----------------------+----------+---------+-------+----------------------------+
[root@localhost ~]# openstack network agent list
+--------------------------------------+--------------------+-----------------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host                  | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-----------------------+-------------------+-------+-------+---------------------------+
| 52f23bda-a645-4459-bcac-686d98d23345 | Open vSwitch agent | localhost.localdomain | None              | :-)   | UP    | neutron-openvswitch-agent |
| 7113312f-b0b7-4ce8-ab15-428768b30855 | L3 agent           | localhost.localdomain | nova              | :-)   | UP    | neutron-l3-agent          |
| a45fb074-3b24-4b9e-8c8a-43117f6195f2 | Metadata agent     | localhost.localdomain | None              | :-)   | UP    | neutron-metadata-agent    |
| f207648b-03f3-4161-872e-5210f29099c6 | DHCP agent         | localhost.localdomain | nova              | :-)   | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+-----------------------+-------------------+-------+-------+---------------------------+
[root@localhost ~]# openstack volume service list
+------------------+-----------------------------------+------+---------+-------+----------------------------+
| Binary           | Host                              | Zone | Status  | State | Updated At                 |
+------------------+-----------------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | localhost.localdomain             | nova | enabled | up    | 2019-05-03T18:56:54.000000 |
| cinder-volume    | localhost.localdomain@lvmdriver-1 | nova | enabled | up    | 2019-05-03T18:56:53.000000 |
+------------------+-----------------------------------+------+---------+-------+----------------------------+
[root@localhost ~]# openstack baremetal driver list
+---------------------+----------------+
| Supported driver(s) | Active host(s) |
+---------------------+----------------+
| fake-hardware       | localhost      |
| ipmi                | localhost      |
+---------------------+----------------+
[root@localhost ~]# openstack baremetal node list
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name   | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| adda54fb-1038-4634-8d82-53922e875a1f | node-0 | None          | power off   | available          | False       |
| 6952e923-11ae-4506-b010-fd7a3c4278f5 | node-1 | None          | power off   | available          | False       |
| f3b8fe69-a840-42dd-9cbf-217be8a95431 | node-2 | None          | power off   | available          | False       |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
Deploy a bare metal instance:
[root@localhost ~]# openstack server create --flavor baremetal --image cirros-0.4.0-x86_64-disk --key-name default --nic net-id=5c86f931-64da-4c69-a0f1-e2da6d9dd082 VM1
+-------------------------------------+-----------------------------------------------------------------+
| Field                               | Value                                                           |
+-------------------------------------+-----------------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                          |
| OS-EXT-AZ:availability_zone         |                                                                 |
| OS-EXT-SRV-ATTR:host                | None                                                            |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                            |
| OS-EXT-SRV-ATTR:instance_name       |                                                                 |
| OS-EXT-STS:power_state              | NOSTATE                                                         |
| OS-EXT-STS:task_state               | scheduling                                                      |
| OS-EXT-STS:vm_state                 | building                                                        |
| OS-SRV-USG:launched_at              | None                                                            |
| OS-SRV-USG:terminated_at            | None                                                            |
| accessIPv4                          |                                                                 |
| accessIPv6                          |                                                                 |
| addresses                           |                                                                 |
| adminPass                           | k3TgBf5Xjsqv                                                    |
| config_drive                        |                                                                 |
| created                             | 2019-05-03T20:26:28Z                                            |
| flavor                              | baremetal (8f6fd22b-9bec-4b4d-b427-7c333e47d2c2)                |
| hostId                              |                                                                 |
| id                                  | 70e9f2b1-a292-4e95-90d4-55864bb0a71d                            |
| image                               | cirros-0.4.0-x86_64-disk (4ff12aca-b762-436c-b98c-579ad2a21649) |
| key_name                            | default                                                         |
| name                                | VM1                                                             |
| progress                            | 0                                                               |
| project_id                          | cbf936fc5e9d4cfcaa1dbc06cd9d2e3e                                |
| properties                          |                                                                 |
| security_groups                     | name='default'                                                  |
| status                              | BUILD                                                           |
| updated                             | 2019-05-03T20:26:28Z                                            |
| user_id                             | 405fad83a4b3470faf7d6c616fe9f7f4                                |
| volumes_attached                    |                                                                 |
+-------------------------------------+-----------------------------------------------------------------+
[root@localhost ~]# openstack baremetal node list
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Name   | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
| adda54fb-1038-4634-8d82-53922e875a1f | node-0 | None                                 | power off   | available          | False       |
| 6952e923-11ae-4506-b010-fd7a3c4278f5 | node-1 | None                                 | power off   | available          | False       |
| f3b8fe69-a840-42dd-9cbf-217be8a95431 | node-2 | 70e9f2b1-a292-4e95-90d4-55864bb0a71d | power off   | deploying          | False       |
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
[root@localhost ~]# openstack server list --long
+--------------------------------------+------+--------+------------+-------------+-------------------+--------------------------+--------------------------------------+-------------+--------------------------------------+-------------------+-----------------------+------------+
| ID                                   | Name | Status | Task State | Power State | Networks          | Image Name               | Image ID                             | Flavor Name | Flavor ID                            | Availability Zone | Host                  | Properties |
+--------------------------------------+------+--------+------------+-------------+-------------------+--------------------------+--------------------------------------+-------------+--------------------------------------+-------------------+-----------------------+------------+
| 70e9f2b1-a292-4e95-90d4-55864bb0a71d | VM1  | ACTIVE | None       | Running     | private=10.0.0.40 | cirros-0.4.0-x86_64-disk | 4ff12aca-b762-436c-b98c-579ad2a21649 | baremetal   | 8f6fd22b-9bec-4b4d-b427-7c333e47d2c2 | nova              | localhost.localdomain |            |
+--------------------------------------+------+--------+------------+-------------+-------------------+--------------------------+--------------------------------------+-------------+--------------------------------------+-------------------+-----------------------+------------+
[root@localhost ~]# openstack baremetal node list
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Name   | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
| adda54fb-1038-4634-8d82-53922e875a1f | node-0 | None                                 | power off   | available          | False       |
| 6952e923-11ae-4506-b010-fd7a3c4278f5 | node-1 | None                                 | power off   | available          | False       |
| f3b8fe69-a840-42dd-9cbf-217be8a95431 | node-2 | 70e9f2b1-a292-4e95-90d4-55864bb0a71d | power on    | deploying          | False       |
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
[root@localhost ~]# ssh cirros@10.0.0.40
$
At this point Ironic acts as a compute driver for OpenStack Nova:
# nova.conf
[DEFAULT]
...
compute_driver = ironic.IronicDriver
First, configure a physical network to serve as the Provisioning Network, which provides DHCP and PXE, i.e. the network over which bare metal nodes are deployed.
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2_type_flat]
flat_networks = public, physnet1

[ovs]
datapath_type = system
bridge_mappings = public:br-ex, physnet1:br-eth2
tunnel_bridge = br-tun
local_ip = 172.22.132.93
$ sudo ovs-vsctl add-br br-eth2
$ sudo ovs-vsctl add-port br-eth2 eth2
$ sudo systemctl restart devstack@q-svc.service
$ sudo systemctl restart devstack@q-agt.service
$ neutron net-create sharednet1 \
  --shared \
  --provider:network_type flat \
  --provider:physical_network physnet1
$ neutron subnet-create sharednet1 172.22.132.0/24 \
  --name sharedsubnet1 \
  --ip-version=4 --gateway=172.22.132.254 \
  --allocation-pool start=172.22.132.180,end=172.22.132.200 \
  --enable-dhcp
NOTE: be sure to enable DHCP, so that the node's PXE NIC can obtain an IP address.
When using Ironic's node clean feature, a Cleaning Network is required; here we merge the Cleaning Network with the Provisioning Network.
# /etc/ironic/ironic.conf
[neutron]
cleaning_network = sharednet1
$ sudo systemctl restart devstack@ir-api.service
$ sudo systemctl restart devstack@ir-cond.service
Both the Deploy Image and the User Image can be built with the Disk Image Builder tool. The Deploy Image was already created during the DevStack deployment, so we will not repeat that here.
$ virtualenv dib
$ source dib/bin/activate
(dib) $ pip install diskimage-builder
$ cat <<EOF > k8s.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

$ DIB_YUM_REPO_CONF=k8s.repo \
  DIB_DEV_USER_USERNAME=kyle \
  DIB_DEV_USER_PWDLESS_SUDO=yes \
  DIB_DEV_USER_PASSWORD=r00tme \
  disk-image-create \
  centos7 \
  dhcp-all-interfaces \
  devuser \
  yum \
  epel \
  baremetal \
  -o k8s.qcow2 \
  -p vim,docker,kubelet,kubeadm,kubectl,kubernetes-cni
...
Converting image using qemu-img convert
Image file k8s.qcow2 created...

$ ls
dib  k8s.d  k8s.initrd  k8s.qcow2  k8s.repo  k8s.vmlinuz
# Kernel
$ openstack image create k8s.kernel \
  --public \
  --disk-format aki \
  --container-format aki < k8s.vmlinuz

# Initrd
$ openstack image create k8s.initrd \
  --public \
  --disk-format ari \
  --container-format ari < k8s.initrd

# Qcow2
$ export MY_VMLINUZ_UUID=$(openstack image list | awk '/k8s.kernel/ { print $2 }')
$ export MY_INITRD_UUID=$(openstack image list | awk '/k8s.initrd/ { print $2 }')
$ openstack image create k8s \
  --public \
  --disk-format qcow2 \
  --container-format bare \
  --property kernel_id=$MY_VMLINUZ_UUID \
  --property ramdisk_id=$MY_INITRD_UUID < k8s.qcow2
Note that here "Ironic Node" does not mean the host running the Ironic daemons; it refers to the bare metal machine itself. Keep this naming convention in mind.
$ ironic driver-list
+---------------------+----------------+
| Supported driver(s) | Active host(s) |
+---------------------+----------------+
| agent_ipmitool      | ironic-dev     |
| fake                | ironic-dev     |
| ipmi                | ironic-dev     |
| pxe_ipmitool        | ironic-dev     |
+---------------------+----------------+
NOTE: if a driver is missing, it can be added by following the instructions in "Set up the drivers for the Bare Metal service".
$ export DEPLOY_VMLINUZ_UUID=$(openstack image list | awk '/ipmitool.kernel/ { print $2 }')
$ export DEPLOY_INITRD_UUID=$(openstack image list | awk '/ipmitool.initramfs/ { print $2 }')
$ ironic node-create -d agent_ipmitool \
  -n bare-node-1 \
  -i ipmi_address=172.20.3.194 \
  -i ipmi_username=maas \
  -i ipmi_password=passwd \
  -i ipmi_port=623 \
  -i ipmi_terminal_port=9000 \
  -i deploy_kernel=$DEPLOY_VMLINUZ_UUID \
  -i deploy_ramdisk=$DEPLOY_INITRD_UUID
$ export NODE_UUID=$(ironic node-list | awk '/bare-node-1/ { print $2 }')
$ ironic node-update $NODE_UUID add \
  properties/cpus=4 \
  properties/memory_mb=8192 \
  properties/local_gb=100 \
  properties/root_gb=100 \
  properties/cpu_arch=x86_64
NOTE: the information above can also be recorded via Ironic Inspector, which requires additional configuration, e.g.
# /etc/ironic-inspector/dnsmasq.conf
no-daemon
port=0
interface=eth1
bind-interfaces
dhcp-range=172.22.132.200,172.22.132.210
dhcp-match=ipxe,175
dhcp-boot=tag:!ipxe,undionly.kpxe
dhcp-boot=tag:ipxe,http://172.22.132.93:3928/ironic-inspector.ipxe
dhcp-sequential-ip

$ sudo systemctl restart devstack@ironic-inspector-dhcp.service
$ sudo systemctl restart devstack@ironic-inspector.service
The Inspection workflow:
$ ironic port-create -n $NODE_UUID -a NODE_PXE_NIC_MAC_ADDRESS
$ ironic node-validate $NODE_UUID
+------------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Interface  | Result | Reason                                                                                                                                                                                               |
+------------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| boot       | False  | Cannot validate image information for node 8e6fd86a-8eed-4e24-a510-3f5ebb0a336a because one or more parameters are missing from its instance_info. Missing are: ['ramdisk', 'kernel', 'image_source'] |
| console    | False  | Missing 'ipmi_terminal_port' parameter in node's driver_info.                                                                                                                                        |
| deploy     | False  | Cannot validate image information for node 8e6fd86a-8eed-4e24-a510-3f5ebb0a336a because one or more parameters are missing from its instance_info. Missing are: ['ramdisk', 'kernel', 'image_source'] |
| inspect    | True   |                                                                                                                                                                                                      |
| management | True   |                                                                                                                                                                                                      |
| network    | True   |                                                                                                                                                                                                      |
| power      | True   |                                                                                                                                                                                                      |
| raid       | True   |                                                                                                                                                                                                      |
| storage    | True   |                                                                                                                                                                                                      |
+------------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
NOTE: in this scenario, where Ironic serves as the Nova driver, boot and deploy reporting False at this point is normal (the instance_info is filled in later, at deploy time).
$ ironic --ironic-api-version 1.34 node-set-provision-state $NODE_UUID manage
$ ironic --ironic-api-version 1.34 node-set-provision-state $NODE_UUID provide
$ ironic node-list
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name   | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| 0c20cf7d-0a36-46f4-ac38-721ff8bfb646 | bare-0 | None          | power off   | cleaning           | False       |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
NOTE: only once the node's state moves from cleaning to available is it formally enrolled; from there you can proceed with the bare metal instance deployment described above.
Each Conductor instance registers itself in the database at startup, recording the list of drivers it supports, and periodically refreshes its timestamp so that Ironic knows which conductors and which drivers are available. When a user enrolls a bare metal node, a driver must be specified, and Ironic assigns the node to a conductor instance that supports that driver.
To associate many bare metal nodes with many conductors while keeping the management stateful and free of conflicts, Ironic uses consistent hashing: the nodes are mapped onto the set of conductors through a consistent hash ring. When a conductor instance joins or leaves the cluster, nodes are remapped to different conductors, which triggers various driver actions such as take-over or clean-up.
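The node-to-conductor mapping can be sketched with a minimal consistent hash ring (illustrative only, not Ironic's actual implementation; the conductor and node names are made up). The key property is visible at the end: removing a conductor only remaps the nodes it owned.

```python
# Minimal consistent-hash-ring sketch of the node -> conductor mapping.
import bisect
import hashlib

class HashRing:
    def __init__(self, conductors, replicas=64):
        # Each conductor contributes `replicas` points on the ring for balance.
        self._ring = sorted(
            (self._hash(f"{c}-{i}"), c)
            for c in conductors for i in range(replicas)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def conductor_for(self, node_uuid: str) -> str:
        # A node belongs to the first conductor point clockwise from its hash.
        idx = bisect.bisect(self._keys, self._hash(node_uuid)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["cond-1", "cond-2", "cond-3"])
before = {n: ring.conductor_for(n) for n in ("node-a", "node-b", "node-c")}

# Drop cond-3: nodes that were NOT on cond-3 must keep their old conductor.
smaller = HashRing(["cond-1", "cond-2"])
moved = [n for n, c in before.items()
         if c != "cond-3" and smaller.conductor_for(n) != c]
print(moved)   # prints [] — only cond-3's nodes get taken over
```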
Ironic's driver model is designed for high cohesion, loose coupling, reusability, and composability: each bare metal node is managed by a set of driver interfaces.
The driver types are:
The Ironic Python Agent (IPA) runs on the bare metal node itself, carried inside a ramdisk. It controls and configures the node, performing tasks such as disk erasure and image writing. This is done by booting a custom Linux kernel and an initramfs image that runs IPA and connects back to the Ironic Conductor.
IPA's feature list:
An IPA ramdisk can be generated with the following command:
disk-image-create -c ironic-agent ubuntu dynamic-login stable-interface-names proliant-tools -o ironic-agent
工做流程:從 PXE 引導系統,而後進入 IPA,IPA 會把收集到的信息發送給 ironic-inspector,inspector 根據發來的信息獲得 IPMI 的 IP/MAC 地址來和 ironic-api 通訊註冊節點,而後 ironic-api 就能夠和 IPA 就能夠進行通訊來使用 agent_*driver 進入 Clean 階段或進行部署了。
Ironic supports two console types:
Shellinabox turns terminal output into an Ajax-based HTTP service that can be accessed directly from a browser, presenting a terminal-like interface. Socat is similar in that it also acts as a pipe, except that it redirects the terminal stream to a TCP connection. Shellinabox is the older approach; its limitation is that the web service can only run on the Ironic Conductor node, which restricts access, so the community later implemented the Socat alternative. Socat exposes a TCP connection and can integrate with Nova's serial console. To use either, the corresponding tool must be installed on the Ironic Conductor node: Socat is available in the standard yum repositories, while Shellinabox is not and must come from the EPEL repository, though since it has no external dependencies the rpm can simply be downloaded and installed. Among Ironic's drivers, those ending in _socat (e.g. agent_ipmitool_socat) support Socat, while the others use Shellinabox. Which console type to use must be decided when the node is created. Both are bidirectional: you can view terminal output as well as type input.
Ironic needs two types of images to deploy a bare metal node, both of which can be built with the Disk Image Builder tool:
The Deploy Image contains IPA, the most important component of deployment: during the deploy stage it provides the tgt+iSCSI service on the target host, and during the discovery stage it collects and reports the host's hardware information. Building a deploy image with the command below produces two files: ironic-deploy.vmlinuz (ironic-deploy.kernel) and ironic-deploy.initramfs.
disk-image-create ironic-agent centos7 -o ironic-deploy
The User Image is the actual operating-system image the user wants. Building a user image with the command below produces three files, used to boot and start the operating system: my-image.qcow2, my-image.vmlinuz, and my-image.initrd.
disk-image-create centos7 baremetal dhcp-all-interfaces grub2 -o my-image