OpenStack HA Deployment

1. Controller node architecture is shown in the figure below:

2. Initialize the environment:

1) Configure IP addresses:

Node 1:
ip addr add dev eth0 192.168.142.110/24
echo 'ip addr add dev eth0 192.168.142.110/24' >> /etc/rc.local
chmod +x /etc/rc.d/rc.local
Node 2:
ip addr add dev eth0 192.168.142.111/24
echo 'ip addr add dev eth0 192.168.142.111/24' >> /etc/rc.local
chmod +x /etc/rc.d/rc.local
Node 3:
ip addr add dev eth0 192.168.142.112/24
echo 'ip addr add dev eth0 192.168.142.112/24' >> /etc/rc.local
chmod +x /etc/rc.d/rc.local

2) Change hostnames:

Set the hostname and edit the /etc/hosts file (a quick verification sketch follows below):
Node 1:
hostnamectl --static --transient set-hostname controller1
/etc/hosts:
192.168.142.110 controller1
192.168.142.111 controller2
192.168.142.112 controller3
Node 2:
hostnamectl --static --transient set-hostname controller2
/etc/hosts:
192.168.142.110 controller1
192.168.142.111 controller2
192.168.142.112 controller3
Node 3:
hostnamectl --static --transient set-hostname controller3
/etc/hosts:
192.168.142.110 controller1
192.168.142.111 controller2
192.168.142.112 controller3
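As a quick sanity check (a sketch, not part of the original steps), confirm on each node that the peer names resolve to the addresses above and are reachable:
for host in controller1 controller2 controller3; do
    getent hosts "$host"
    ping -c 1 -W 1 "$host" > /dev/null && echo "$host reachable" || echo "$host unreachable"
done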

3) Disable the firewall and SELinux:

systemctl disable firewalld
systemctl stop firewalld
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config 
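The sed command above only edits the persistent configuration; as an additional step (an assumption, not in the original), SELinux enforcement can also be switched off for the running system so a reboot is not required:
setenforce 0
getenforce    # should now report Permissive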

4) Configure time synchronization:

yum install ntp -y
ntpdate cn.pool.ntp.org
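ntpdate performs only a one-off synchronization; a reasonable follow-up (an assumption, the original does not show it) is to keep the clocks synchronized continuously with the ntpd service:
systemctl enable ntpd
systemctl start ntpd
ntpq -p    # list the peers ntpd is synchronizing against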

5) Install base packages:

yum install -y centos-release-openstack-ocata
yum upgrade -y 
yum install -y python-openstackclient

3. Install base services:

1) Install Pacemaker

(Steps 1-4 are executed on all three nodes)
1. Configure passwordless SSH login:
Node 1:
ssh-keygen -t rsa
ssh-copy-id root@controller2
ssh-copy-id root@controller3
Node 2:
ssh-keygen -t rsa
ssh-copy-id root@controller1
ssh-copy-id root@controller3
Node 3:
ssh-keygen -t rsa
ssh-copy-id root@controller1
ssh-copy-id root@controller2
2. Install pacemaker:
yum install -y pcs pacemaker corosync fence-agents-all resource-agents
3. Start the pcsd service (and enable it at boot):
systemctl start pcsd.service
systemctl enable pcsd.service
4. Set the password for the cluster user:
echo 'password' | passwd --stdin hacluster
(This user is created automatically when pcs is installed.)
5. Authenticate the cluster nodes to each other:
pcs cluster auth controller1 controller2 controller3 -u hacluster -p password
(The username entered here must be hacluster, the user created automatically by pcs; other users cannot be added.)
6. Create and start a cluster named openstack-ha:
pcs cluster setup --start --name openstack-ha controller1 controller2 controller3
7. Enable cluster autostart:
pcs cluster enable --all

8. View and set cluster properties:
View the current cluster status:
pcs cluster status
Verify the Corosync installation and its current state:
corosync-cfgtool -s
corosync-cmapctl | grep members
pcs status corosync
Check that the configuration is valid (no output means it is correct):
crm_verify -L -V
Disable STONITH:
pcs property set stonith-enabled=false
When quorum cannot be reached, ignore it:
pcs property set no-quorum-policy=ignore
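As a quick confirmation (a sketch, not part of the original steps), the two properties set above can be read back and the overall cluster health reviewed:
pcs property list --all | grep -E 'stonith-enabled|no-quorum-policy'
pcs status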

2) Install and configure HAProxy:

(Steps 1-3 are executed on all three nodes)
1. Install haproxy:
yum install -y haproxy lrzsz
2. Initialize the environment:
echo "net.ipv4.ip_nonlocal_bind=1" > /etc/sysctl.d/haproxy.conf
sysctl -p
echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind
cat > /etc/sysctl.d/tcp_keepalive.conf << EOF
net.ipv4.tcp_keepalive_intvl = 1
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_time = 5
EOF
sysctl net.ipv4.tcp_keepalive_intvl=1
sysctl net.ipv4.tcp_keepalive_probes=5
sysctl net.ipv4.tcp_keepalive_time=5
mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak && cd /etc/haproxy/
(Upload the haproxy.cfg file shown below.)

The HAProxy configuration file:

global
    daemon
    group    haproxy                         
    maxconn  4000
    pidfile  /var/run/haproxy.pid           
    user     haproxy  
    stats    socket /var/lib/haproxy/stats   
    log      192.168.142.110 local0
defaults
    mode tcp
    maxconn 10000
    timeout  connect 10s
    timeout  client 1m
    timeout  server 1m
    timeout  check 10s
listen stats                   
    mode          http
    bind          192.168.142.110:8080                       
    stats         enable                     
    stats         hide-version                
    stats uri     /haproxy?openstack          
    stats realm   Haproxy\Statistics           
    stats admin if TRUE 
    stats auth    admin:admin 
    stats refresh 10s
frontend vip-db
    bind 192.168.142.201:3306
    timeout client 90m
    default_backend db-vms-galera

frontend vip-qpid
    bind 192.168.142.215:5672
    timeout client 120s
    default_backend qpid-vms
frontend vip-horizon
    bind 192.168.142.211:80
    timeout client 180s
    cookie SERVERID insert indirect nocache
    default_backend horizon-vms
frontend vip-ceilometer
    bind 192.168.142.214:8777
    timeout client 90s
    default_backend ceilometer-vms
frontend vip-rabbitmq
    option clitcpka
    bind 192.168.142.202:5672
    timeout client 900m
    default_backend rabbitmq-vms
frontend vip-keystone-admin
    bind 192.168.142.203:35357
    default_backend keystone-admin-vms
backend keystone-admin-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:35357 check inter 1s
    server controller2-vm 192.168.142.111:35357 check inter 1s
    server controller3-vm 192.168.142.112:35357 check inter 1s
frontend vip-keystone-public
    bind 192.168.142.203:5000
    default_backend keystone-public-vms
backend keystone-public-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:5000 check inter 1s
    server controller2-vm 192.168.142.111:5000 check inter 1s
    server controller3-vm 192.168.142.112:5000 check inter 1s
frontend vip-glance-api
    bind 192.168.142.205:9191
    default_backend glance-api-vms
backend glance-api-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:9191 check inter 1s
    server controller2-vm 192.168.142.111:9191 check inter 1s
    server controller3-vm 192.168.142.112:9191 check inter 1s
frontend vip-glance-registry
    bind 192.168.142.205:9292
    default_backend glance-registry-vms
backend glance-registry-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:9292 check inter 1s
    server controller2-vm 192.168.142.111:9292 check inter 1s
    server controller3-vm 192.168.142.112:9292 check inter 1s
frontend vip-cinder
    bind 192.168.142.206:8776
    default_backend cinder-vms
backend cinder-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:8776 check inter 1s
    server controller2-vm 192.168.142.111:8776 check inter 1s
    server controller3-vm 192.168.142.112:8776 check inter 1s
frontend vip-swift
    bind 192.168.142.208:8080
    default_backend swift-vms
backend swift-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:8080 check inter 1s
    server controller2-vm 192.168.142.111:8080 check inter 1s
    server controller3-vm 192.168.142.112:8080 check inter 1s
frontend vip-neutron
    bind 192.168.142.209:9696
    default_backend neutron-vms
backend neutron-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:9696 check inter 1s
    server controller2-vm 192.168.142.111:9696 check inter 1s
    server controller3-vm 192.168.142.112:9696 check inter 1s
frontend vip-nova-vnc-novncproxy
    bind 192.168.142.210:6080
    default_backend nova-vnc-novncproxy-vms
backend nova-vnc-novncproxy-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:6080 check inter 1s
    server controller2-vm 192.168.142.111:6080 check inter 1s
    server controller3-vm 192.168.142.112:6080 check inter 1s
frontend vip-nova-vnc-xvpvncproxy
    bind 192.168.142.210:6081
    default_backend nova-vnc-xvpvncproxy-vms
backend nova-vnc-xvpvncproxy-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:6081 check inter 1s
    server controller2-vm 192.168.142.111:6081 check inter 1s
    server controller3-vm 192.168.142.112:6081 check inter 1s
frontend vip-nova-metadata
    bind 192.168.142.210:8775
    default_backend nova-metadata-vms
backend nova-metadata-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:8775 check inter 1s
    server controller2-vm 192.168.142.111:8775 check inter 1s
    server controller3-vm 192.168.142.112:8775 check inter 1s
frontend vip-nova-api
    bind 192.168.142.210:8774
    default_backend nova-api-vms
backend nova-api-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:8774 check inter 1s
    server controller2-vm 192.168.142.111:8774 check inter 1s
    server controller3-vm 192.168.142.112:8774 check inter 1s
backend horizon-vms
    balance roundrobin
    timeout server 108s
    server controller1-vm 192.168.142.110:80 check inter 1s
    server controller2-vm 192.168.142.111:80 check inter 1s
    server controller3-vm 192.168.142.112:80 check inter 1s
frontend vip-heat-cfn
    bind 192.168.142.212:8000
    default_backend heat-cfn-vms
backend heat-cfn-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:8000 check inter 1s
    server controller2-vm 192.168.142.111:8000 check inter 1s
    server controller3-vm 192.168.142.112:8000 check inter 1s
frontend vip-heat-cloudw
    bind 192.168.142.212:8003
    default_backend heat-cloudw-vms
backend heat-cloudw-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:8003 check inter 1s
    server controller2-vm 192.168.142.111:8003 check inter 1s
    server controller3-vm 192.168.142.112:8003 check inter 1s
frontend vip-heat-srv
    bind 192.168.142.212:8004
    default_backend heat-srv-vms
backend heat-srv-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:8004 check inter 1s
    server controller2-vm 192.168.142.111:8004 check inter 1s
    server controller3-vm 192.168.142.112:8004 check inter 1s
backend ceilometer-vms
    balance roundrobin
    server controller1-vm 192.168.142.110:8777 check inter 1s
    server controller2-vm 192.168.142.111:8777 check inter 1s
    server controller3-vm 192.168.142.112:8777 check inter 1s
backend qpid-vms
    stick-table type ip size 2
    stick on dst
    timeout server 120s
    server controller1-vm 192.168.142.110:5672 check inter 1s
    server controller2-vm 192.168.142.111:5672 check inter 1s
    server controller3-vm 192.168.142.112:5672 check inter 1s
backend db-vms-galera
    option httpchk
    option tcpka
    stick-table type ip size 1000
    stick on dst
    timeout server 90m
    server controller1-vm 192.168.142.110:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
    server controller2-vm 192.168.142.111:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
    server controller3-vm 192.168.142.112:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
backend rabbitmq-vms
    option srvtcpka
    balance roundrobin
    timeout server 900m
    server controller1-vm 192.168.142.110:5672 check inter 1s
    server controller2-vm 192.168.142.111:5672 check inter 1s
    server controller3-vm 192.168.142.112:5672 check inter 1s
3. Create the haproxy pacemaker resource:
pcs resource create lb-haproxy systemd:haproxy --clone
pcs resource enable lb-haproxy
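A possible verification (an assumption, not in the original): check the configuration syntax, confirm the clone runs on all three nodes, and query the stats page defined in the listen stats section of haproxy.cfg:
haproxy -c -f /etc/haproxy/haproxy.cfg
pcs status | grep -A 3 lb-haproxy-clone
curl -u admin:admin 'http://192.168.142.110:8080/haproxy?openstack'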

4. Create the VIPs (Python script):

import os

# Services that get a VIP; the offset below maps each service to an address
# 192.168.142.2xx, matching the frontend addresses in haproxy.cfg.
components=['db','rabbitmq','keystone','memcache','glance','cinder',
            'swift-brick','swift','neutron','nova','horizon','heat',
            'mongodb','ceilometer','qpid']
offset=201
internal_network='192.168.142'
for section in components:
    if section in ['memcache','swift-brick','mongodb']:
        # Skipped: no VIP is created for these services.
        pass
    else:
        print("%s:%s.%s"%(section,internal_network,offset))
#        os.system('pcs resource delete vip-%s '%section)
        # Create the VIP resource and tie it to the haproxy clone.
        os.system('pcs resource create vip-%s IPaddr2 ip=%s.%s nic=eth0'%(section,internal_network,offset))
        os.system('pcs constraint order start vip-%s then lb-haproxy-clone kind=Optional'%section)
        os.system('pcs constraint colocation add vip-%s with lb-haproxy-clone'%section)
    offset += 1
os.system('pcs cluster stop --all')
os.system('pcs cluster start --all')
create_vip.py
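Usage sketch (assuming the script above is saved on one node as /root/create_vip.py): run it once, then confirm that the VIP resources exist and that the addresses are configured:
python /root/create_vip.py
pcs resource show | grep vip-
ip addr show eth0 | grep 192.168.142.2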

3) Install MariaDB:

(Steps 1-6 are executed on all three nodes)
1. Install mariadb-galera:
yum install mariadb-galera-server xinetd rsync -y
pcs resource disable lb-haproxy

2. Galera cluster health-check settings:
cat > /etc/sysconfig/clustercheck << EOF
MYSQL_USERNAME="clustercheck"
MYSQL_PASSWORD="hagluster"
MYSQL_HOST="localhost"
MYSQL_PORT="3306"
EOF

3. Create the cluster-check user:
systemctl start mariadb.service
mysql_secure_installation
mysql -e "CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY 'hagluster';"
systemctl stop mariadb.service

4. Configure the galera.cnf file:

cat > /etc/my.cnf.d/galera.cnf << EOF
[mysqld]
skip-name-resolve=1
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind_address=192.168.142.110

wsrep_cluster_address = "gcomm://"
wsrep_cluster_address = "gcomm://192.168.142.111,192.168.142.112"
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="galera_cluster"
wsrep_slave_threads=1
wsrep_certify_nonPK=1
wsrep_max_ws_rows=131072
wsrep_max_ws_size=1073741824
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=0
wsrep_causal_reads=0
wsrep_notify_cmd=
wsrep_sst_method=rsync
wsrep_on=ON
EOF
galera.cnf for node 1
cat > /etc/my.cnf.d/galera.cnf << EOF
[mysqld]
skip-name-resolve=1
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind_address=192.168.142.111

wsrep_cluster_address = "gcomm://192.168.142.110,192.168.142.112"
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="galera_cluster"
wsrep_slave_threads=1
wsrep_certify_nonPK=1
wsrep_max_ws_rows=131072
wsrep_max_ws_size=1073741824
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=0
wsrep_causal_reads=0
wsrep_notify_cmd=
wsrep_sst_method=rsync
wsrep_on=ON
EOF
galera.cnf for node 2
cat > /etc/my.cnf.d/galera.cnf << EOF
[mysqld]
skip-name-resolve=1
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind_address=192.168.142.112

wsrep_cluster_address = "gcomm://192.168.142.110,192.168.142.111"
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="galera_cluster"
wsrep_slave_threads=1
wsrep_certify_nonPK=1
wsrep_max_ws_rows=131072
wsrep_max_ws_size=1073741824
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=0
wsrep_causal_reads=0
wsrep_notify_cmd=
wsrep_sst_method=rsync
wsrep_on=ON
EOF
galera.cnf for node 3

5. Configure the HTTP-based database health check:
cat > /etc/xinetd.d/galera-monitor << EOF
service galera-monitor
{
        port            = 9200
        disable         = no
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = root
        group           = root
        groups          = yes
        server          = /usr/bin/clustercheck
        type            = UNLISTED
        per_source      = UNLIMITED
        log_on_success  = 
        log_on_failure  = HOST
        flags           = REUSE
}
EOF

systemctl enable xinetd
systemctl start xinetd

6. Set ownership:
chown mysql:mysql -R /var/log/mariadb
chown mysql:mysql -R /var/lib/mysql
chown mysql:mysql -R /var/run/mariadb/

7. Create the galera cluster resource:
pcs resource create galera galera enable_creation=true wsrep_cluster_address="gcomm://controller1,controller2,controller3" additional_parameters='--open-files-limit=16384' meta master-max=3 ordered=true op promote timeout=300s on-fail=block --master
pcs resource enable lb-haproxy
pcs constraint order start lb-haproxy-clone then start galera-master
8. Check the services:
clustercheck
pcs resource show
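A more direct check of the health-check endpoint and the Galera state (a sketch, not part of the original steps): the xinetd service on port 9200 should answer over HTTP, and MySQL should report a cluster size of 3:
curl http://192.168.142.110:9200
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"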

Create the databases and grant privileges:

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
CREATE DATABASE heat;
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'localhost' IDENTIFIED BY 'heat';
GRANT ALL PRIVILEGES ON heat.* TO 'heat'@'%' IDENTIFIED BY 'heat';
flush privileges;
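To confirm that the grants work through the load balancer (a sketch, assuming the database VIP 192.168.142.201 from the HAProxy section is up):
mysql -h 192.168.142.201 -u keystone -pkeystone -e "SHOW DATABASES;"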

4) Install memcached:

(Step 1 is executed on all three controller nodes)
1. Install memcached:
yum install -y memcached
2. Create the pacemaker resource:
pcs resource create memcached systemd:memcached --clone interleave=true
pcs status
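A minimal connectivity check (an assumption, not in the original; it requires nc to be installed) against one of the memcached instances that later services reference as controllerX:11211:
printf 'stats\r\nquit\r\n' | nc controller1 11211 | head -n 5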

5) Install RabbitMQ:

(Steps 1-3 are executed on all three nodes)
1. Install rabbitmq:
yum install -y rabbitmq-server
2. Add the rabbitmq-env.conf file:
Node 1:
cat > /etc/rabbitmq/rabbitmq-env.conf << EOF
NODE_IP_ADDRESS=192.168.142.110
EOF
Node 2:
cat > /etc/rabbitmq/rabbitmq-env.conf << EOF
NODE_IP_ADDRESS=192.168.142.111
EOF
Node 3:
cat > /etc/rabbitmq/rabbitmq-env.conf << EOF
NODE_IP_ADDRESS=192.168.142.112
EOF
3. Create the data directory and set ownership:
mkdir -p /var/lib/rabbitmq
chown -R rabbitmq:rabbitmq /var/lib/rabbitmq
4. Create the pacemaker resource:
pcs resource create rabbitmq-cluster ocf:rabbitmq:rabbitmq-server-ha --master erlang_cookie=DPMDALGUKEOMPTHWPYKC node_port=5672 op monitor interval=30 timeout=120 op monitor interval=27 role=Master timeout=120 op monitor interval=103 role=Slave timeout=120 OCF_CHECK_LEVEL=30 op start interval=0 timeout=120 op stop interval=0 timeout=120 op promote interval=0 timeout=60 op demote interval=0 timeout=60 op notify interval=0 timeout=60 meta notify=true ordered=false interleave=false master-max=1 master-node-max=1
5. Configure mirrored queues, the user, and permissions (on the master node):
rabbitmqctl set_policy ha-all "." '{"ha-mode":"all", "ha-sync-mode":"automatic"}' --apply-to all --priority 0
rabbitmqctl add_user openstack openstack
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
6. Check that the configuration succeeded:
rabbitmqctl list_policies
rabbitmqctl list_users
rabbitmqctl list_permissions
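In addition (a sketch, not part of the original steps), the cluster membership and the pacemaker master/slave state can be reviewed:
rabbitmqctl cluster_status
pcs status | grep -A 3 rabbitmq-cluster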

6) Install MongoDB:

(Steps 1-3 are executed on all three nodes)
1. Install packages:
yum -y install mongodb mongodb-server
2. Modify the configuration file:
sed -i "s/bind_ip = 127.0.0.1/bind_ip = 0.0.0.0/g" /etc/mongod.conf
sed -i "s/#replSet = arg/replSet = ceilometer/g" /etc/mongod.conf
sed -i "s/#smallfiles = true/smallfiles = true/g" /etc/mongod.conf
3. Test that the service can start:
systemctl start mongod
systemctl status mongod
systemctl stop mongod

4. Create the pacemaker resource:
pcs resource create mongodb systemd:mongod op start timeout=300s --clone
5. Write the replica set creation script:
cat >> /root/mongo_replica_setup.js << EOF
rs.initiate()
sleep(10000)
    rs.add("controller1");
    rs.add("controller2");
    rs.add("controller3");
EOF

6. Run the script:
mongo /root/mongo_replica_setup.js

7. Check whether the replica set was created successfully:
mongo    # after entering the interactive shell, run:
rs.status()
ceilometer:PRIMARY> rs.status()
{
        "set" : "ceilometer",
        "date" : ISODate("2017-12-07T06:41:00Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "name" : "controller1:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 1163,
                        "optime" : Timestamp(1512628074, 2),
                        "optimeDate" : ISODate("2017-12-07T06:27:54Z"),
                        "electionTime" : Timestamp(1512628064, 2),
                        "electionDate" : ISODate("2017-12-07T06:27:44Z"),
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "controller2:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 786,
                        "optime" : Timestamp(1512628074, 2),
                        "optimeDate" : ISODate("2017-12-07T06:27:54Z"),
                        "lastHeartbeat" : ISODate("2017-12-07T06:40:59Z"),
                        "lastHeartbeatRecv" : ISODate("2017-12-07T06:40:59Z"),
                        "pingMs" : 1,
                        "syncingTo" : "controller1:27017"
                },
                {
                        "_id" : 2,
                        "name" : "controller3:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 786,
                        "optime" : Timestamp(1512628074, 2),
                        "optimeDate" : ISODate("2017-12-07T06:27:54Z"),
                        "lastHeartbeat" : ISODate("2017-12-07T06:40:59Z"),
                        "lastHeartbeatRecv" : ISODate("2017-12-07T06:40:59Z"),
                        "pingMs" : 1,
                        "syncingTo" : "controller1:27017"
                }
        ],
        "ok" : 1
}
Sample rs.status() output

4. Install the OpenStack services:

1) Install and configure Keystone:

(Steps 1-3 are executed on all three controller nodes)
1. Install packages:
yum install -y openstack-keystone httpd mod_wsgi openstack-utils python-keystoneclient
2. Modify the configuration file:
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:keystone@192.168.142.201/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet
openstack-config --set /etc/keystone/keystone.conf memcache servers controller1:11211,controller2:11211,controller3:11211
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_host controller1,controller2,controller3
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit heartbeat_timeout_threshold 60
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/keystone/keystone.conf oslo_messaging_rabbit rabbit_password openstack
Review the configuration:
cat /etc/keystone/keystone.conf | grep -v "^#" | grep -v "^$"
3. Modify the httpd configuration:
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
sed -i "s/Listen 80/Listen 81/g" /etc/httpd/conf/httpd.conf
Node 1:
sed -i "s/#ServerName www.example.com:80/ServerName controller1:80/g" /etc/httpd/conf/httpd.conf
sed -i "s/Listen 5000/Listen 192.168.142.110:5000/g" /etc/httpd/conf.d/wsgi-keystone.conf
sed -i "s/Listen 35357/Listen 192.168.142.110:35357/g" /etc/httpd/conf.d/wsgi-keystone.conf
Node 2:
sed -i "s/#ServerName www.example.com:80/ServerName controller2:80/g" /etc/httpd/conf/httpd.conf
sed -i "s/Listen 5000/Listen 192.168.142.111:5000/g" /etc/httpd/conf.d/wsgi-keystone.conf
sed -i "s/Listen 35357/Listen 192.168.142.111:35357/g" /etc/httpd/conf.d/wsgi-keystone.conf
Node 3:
sed -i "s/#ServerName www.example.com:80/ServerName controller3:80/g" /etc/httpd/conf/httpd.conf
sed -i "s/Listen 5000/Listen 192.168.142.112:5000/g" /etc/httpd/conf.d/wsgi-keystone.conf
sed -i "s/Listen 35357/Listen 192.168.142.112:35357/g" /etc/httpd/conf.d/wsgi-keystone.conf
4. Create the pacemaker resource:
pcs resource create keystone-http systemd:httpd --clone interleave=true
5. Sync the database:
su -s /bin/sh -c "keystone-manage db_sync" keystone
6. Create the admin account (run on node 1):
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
keystone-manage bootstrap --bootstrap-password keystone --bootstrap-admin-url http://192.168.142.203:35357/v3/ --bootstrap-internal-url http://192.168.142.203:5000/v3/ --bootstrap-public-url http://192.168.142.203:5000/v3/ --bootstrap-region-id RegionOne
7. Configure environment variables:
export OS_USERNAME=admin
export OS_PASSWORD=keystone
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://192.168.142.203:35357/v3
export OS_IDENTITY_API_VERSION=3
8. Copy the key files to the other two nodes:
On nodes 2 and 3:
mkdir /etc/keystone/credential-keys/
mkdir /etc/keystone/fernet-keys/
On node 1:
scp /etc/keystone/credential-keys/* root@controller2:/etc/keystone/credential-keys/
scp /etc/keystone/credential-keys/* root@controller3:/etc/keystone/credential-keys/
scp /etc/keystone/fernet-keys/* root@controller2:/etc/keystone/fernet-keys/
scp /etc/keystone/fernet-keys/* root@controller3:/etc/keystone/fernet-keys/
On nodes 2 and 3:
chown -R keystone:keystone /etc/keystone/credential-keys/
chown -R keystone:keystone /etc/keystone/fernet-keys/
chmod 700 /etc/keystone/credential-keys/
chmod 700 /etc/keystone/fernet-keys/
9. Create projects and users:
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password demo demo
openstack role create user
openstack role add --project demo --user demo user
openstack --os-auth-url http://192.168.142.203:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
openstack --os-auth-url http://192.168.142.203:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue --os-password demo
10. Create the user, service, and endpoints for the glance service:
openstack user create --domain default --password glance glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://192.168.142.205:9292
openstack endpoint create --region RegionOne image internal http://192.168.142.205:9292
openstack endpoint create --region RegionOne image admin http://192.168.142.205:9292
11. Create the user, services, and endpoints for the cinder service:
openstack user create --domain default --password cinder cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack endpoint create --region RegionOne volumev2 public http://192.168.142.206:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://192.168.142.206:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://192.168.142.206:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://192.168.142.206:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://192.168.142.206:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://192.168.142.206:8776/v3/%\(project_id\)s
12. Create the user, service, and endpoints for the neutron service:
openstack user create --domain default --password neutron neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://192.168.142.209:9696
openstack endpoint create --region RegionOne network internal http://192.168.142.209:9696
openstack endpoint create --region RegionOne network admin http://192.168.142.209:9696
13. Create the users, services, and endpoints for the nova and placement services:
openstack user create --domain default --password nova nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://192.168.142.210:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://192.168.142.210:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://192.168.142.210:8774/v2.1
openstack user create --domain default --password placement placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://192.168.142.210:8778
openstack endpoint create --region RegionOne placement internal http://192.168.142.210:8778
openstack endpoint create --region RegionOne placement admin http://192.168.142.210:8778
14. Create the user, services, and endpoints for the heat service:
openstack user create --domain default --password heat heat
openstack role add --project service --user heat admin
openstack service create --name heat --description "Orchestration" orchestration
openstack service create --name heat-cfn --description "Orchestration"  cloudformation
openstack endpoint create --region RegionOne orchestration public http://192.168.142.212:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration internal http://192.168.142.212:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne orchestration admin http://192.168.142.212:8004/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne cloudformation public http://192.168.142.212:8000/v1
openstack endpoint create --region RegionOne cloudformation internal http://192.168.142.212:8000/v1
openstack endpoint create --region RegionOne cloudformation admin http://192.168.142.212:8000/v1

15. Create the user, service, and endpoints for the ceilometer service:
openstack user create --domain default --password ceilometer ceilometer

Verify the configuration:

1. Verify that the keystone service works properly:
unset OS_AUTH_URL OS_PASSWORD
openstack --os-auth-url http://192.168.142.203:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
openstack --os-auth-url http://192.168.142.203:5000/v3 --os-project-domain-name default --os-user-domain-name default  --os-project-name demo --os-username demo token issue
2. List users:
openstack user list
3. List services:
openstack service list
4. List the endpoints:
openstack endpoint list
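Later steps source an adminrc file that the original never shows being created; a minimal /root/adminrc, assuming it only needs the admin variables exported in step 7 above, could be written as:
cat > /root/adminrc << EOF
export OS_USERNAME=admin
export OS_PASSWORD=keystone
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://192.168.142.203:35357/v3
export OS_IDENTITY_API_VERSION=3
EOF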

2) Install Glance:

(Steps 1-3 are executed on all three controller nodes)
1. Install packages:
yum install -y openstack-glance
cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak
2. Modify the glance-api.conf file:
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:glance@192.168.142.201/glance
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://192.168.142.203:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://192.168.142.203:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password glance
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_password openstack
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_hosts controller1,controller2,controller3
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/glance/glance-api.conf oslo_messaging_rabbit heartbeat_timeout_threshold 60
openstack-config --set /etc/glance/glance-api.conf DEFAULT registry_host 192.168.142.205
Node 1:
openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host 192.168.142.110
Node 2:
openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host 192.168.142.111
Node 3:
openstack-config --set /etc/glance/glance-api.conf DEFAULT bind_host 192.168.142.112
3. Modify the glance-registry.conf file:
openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:glance@192.168.142.201/glance
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://192.168.142.203:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://192.168.142.203:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password glance
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_password openstack
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_hosts controller1,controller2,controller3
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/glance/glance-registry.conf oslo_messaging_rabbit heartbeat_timeout_threshold 60
openstack-config --set /etc/glance/glance-registry.conf DEFAULT registry_host 192.168.142.205
Node 1:
openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_host 192.168.142.110
Node 2:
openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_host 192.168.142.111
Node 3:
openstack-config --set /etc/glance/glance-registry.conf DEFAULT bind_host 192.168.142.112
4. Create the NFS resource:
pcs resource create glance-fs Filesystem device="192.168.72.100:/home/glance" directory="/var/lib/glance" fstype="nfs" options="v3" --clone
chown glance:nobody /var/lib/glance
5. Sync the database:
su -s /bin/sh -c "glance-manage db_sync" glance

6. Create the pacemaker resources:
pcs resource create glance-registry systemd:openstack-glance-registry --clone interleave=true
pcs resource create glance-api systemd:openstack-glance-api --clone interleave=true
pcs constraint order start glance-fs-clone then glance-registry-clone
pcs constraint colocation add glance-registry-clone with glance-fs-clone
pcs constraint order start glance-registry-clone then glance-api-clone
pcs constraint colocation add glance-api-clone with glance-registry-clone
pcs constraint order start keystone-http-clone then glance-registry-clone
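A simple end-to-end test of the Glance setup (an assumption; the CirrOS image URL and the image name are examples, not from the original):
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
openstack image list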

3) Install and configure Cinder:

4) Install and configure Neutron:

1. Install packages:
yum install -y openstack-neutron openstack-neutron-openvswitch openstack-neutron-ml2 python-neutronclient  which  

2. Modify the neutron.conf file:
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
Node 1:
openstack-config --set  /etc/neutron/neutron.conf DEFAULT bind_host 192.168.142.110
Node 2:
openstack-config --set  /etc/neutron/neutron.conf DEFAULT bind_host 192.168.142.111
Node 3:
openstack-config --set  /etc/neutron/neutron.conf DEFAULT bind_host 192.168.142.112

######default######
openstack-config --set  /etc/neutron/neutron.conf DEFAULT rpc_backend  rabbit 
openstack-config --set  /etc/neutron/neutron.conf DEFAULT auth_strategy  keystone
openstack-config --set  /etc/neutron/neutron.conf DEFAULT core_plugin  ml2 
openstack-config --set  /etc/neutron/neutron.conf DEFAULT service_plugins  router 
openstack-config --set  /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes  True 
openstack-config --set  /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes  True 

#######keystone######
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_url http://192.168.142.203:35357
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_uri http://192.168.142.203:5000
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller1:11211,controller2:11211,controller3:11211
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken auth_type  password 
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_domain_name  default 
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken user_domain_name  default 
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken project_name  service
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken username  neutron 
openstack-config --set  /etc/neutron/neutron.conf keystone_authtoken password  neutron

######database#######
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:neutron@192.168.142.201:3306/neutron

######rabbit######
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_hosts controller1,controller2,controller3
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_ha_queues true
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit heartbeat_timeout_threshold 60
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid  openstack   
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password  openstack

######nova######
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://192.168.142.203:35357/
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_id default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_id default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password nova

3. Modify the ml2_conf.ini file:
cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak

Node 1:
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 192.168.142.110
Node 2:
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 192.168.142.111
Node 3:
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 192.168.142.112
openstack-config --set  /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers  flat,vlan,gre,vxlan 
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks external
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs bridge_mappings external:br-ex
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini agent tunnel_types gre
rm -rf /etc/neutron/plugin.ini && ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
4. L3 and DHCP high-availability settings:
openstack-config --set /etc/neutron/neutron.conf DEFAULT l3_ha True
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_automatic_l3agent_failover True
openstack-config --set /etc/neutron/neutron.conf DEFAULT max_l3_agents_per_router 3
openstack-config --set /etc/neutron/neutron.conf DEFAULT min_l3_agents_per_router 2
openstack-config --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 3
5. Start Open vSwitch:
systemctl enable openvswitch
systemctl start openvswitch

ovs-vsctl del-br br-int
ovs-vsctl del-br br-ex
ovs-vsctl del-br br-tun

ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex

ovs-vsctl add-port br-ex ens8
ethtool -K ens8 gro off

openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini agent l2_population False

6. Modify the metadata_agent.ini file:
cp -a /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini_bak
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_uri http://192.168.142.203:5000
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://192.168.142.203:35357
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_host 192.168.142.203
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region regionOne
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_tenant_name services
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_user neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_password neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT project_domain_id default
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT user_domain_id default
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip 192.168.142.210
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_port 8775
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret neutron_shared_secret
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_workers 4
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_backlog 2048
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT verbose True
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT debug True

7. Modify the dhcp_agent.ini file:
cp -a /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini_bak
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_delete_namespaces False
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf
echo "dhcp-option-force=26,1454" >/etc/neutron/dnsmasq-neutron.conf
pkill dnsmasq

8. Modify the l3_agent.ini file:
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT handle_internal_only_routers True
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT send_arp_for_ha 3
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT router_delete_namespaces False
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge br-ex
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT verbose True
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT debug True

rm -rf /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig && cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service

9. Modify /etc/sysctl.conf:
sed -e '/^net.bridge/d' -e '/^net.ipv4.conf/d' -i /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables=1" >>/etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables=1" >>/etc/sysctl.conf
echo "net.ipv4.conf.all.rp_filter=0" >>/etc/sysctl.conf
echo "net.ipv4.conf.default.rp_filter=0" >>/etc/sysctl.conf
sysctl -p >>/dev/null

 

 

10. Create the neutron pacemaker resources:

source adminrc

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Check that neutron-server starts, then stop it so that pacemaker can manage it:
systemctl start neutron-server
systemctl stop neutron-server

Remove any previously created neutron resources, then create them again:

pcs resource delete neutron-server-api --force
pcs resource delete neutron-scale --force
pcs resource delete neutron-ovs-cleanup --force
pcs resource delete neutron-netns-cleanup --force
pcs resource delete neutron-openvswitch-agent --force
pcs resource delete neutron-dhcp-agent --force
pcs resource delete neutron-l3-agent --force
pcs resource delete neutron-metadata-agent --force

pcs resource create neutron-server-api systemd:neutron-server op start timeout=180 --clone interleave=true

pcs resource create neutron-scale ocf:neutron:NeutronScale --clone globally-unique=true clone-max=3 interleave=true
pcs resource create neutron-ovs-cleanup ocf:neutron:OVSCleanup --clone interleave=true
pcs resource create neutron-netns-cleanup ocf:neutron:NetnsCleanup --clone interleave=true
pcs resource create neutron-openvswitch-agent  systemd:neutron-openvswitch-agent --clone interleave=true
pcs resource create neutron-dhcp-agent systemd:neutron-dhcp-agent --clone interleave=true
pcs resource create neutron-l3-agent systemd:neutron-l3-agent --clone interleave=true
pcs resource create neutron-metadata-agent systemd:neutron-metadata-agent  --clone interleave=true

pcs constraint order start keystone-http-clone then neutron-server-api-clone
pcs constraint order start neutron-scale-clone then neutron-openvswitch-agent-clone
pcs constraint colocation add neutron-openvswitch-agent-clone with neutron-scale-clone
pcs constraint order start neutron-scale-clone then neutron-ovs-cleanup-clone
pcs constraint colocation add neutron-ovs-cleanup-clone with neutron-scale-clone



pcs constraint order start neutron-ovs-cleanup-clone then neutron-netns-cleanup-clone
pcs constraint colocation add neutron-netns-cleanup-clone with neutron-ovs-cleanup-clone
pcs constraint order start neutron-netns-cleanup-clone then neutron-openvswitch-agent-clone
pcs constraint colocation add neutron-openvswitch-agent-clone with neutron-netns-cleanup-clone
pcs constraint order start neutron-openvswitch-agent-clone then neutron-dhcp-agent-clone
pcs constraint colocation add neutron-dhcp-agent-clone with neutron-openvswitch-agent-clone
pcs constraint order start neutron-dhcp-agent-clone then neutron-l3-agent-clone
pcs constraint colocation add neutron-l3-agent-clone with neutron-dhcp-agent-clone
pcs constraint order start neutron-l3-agent-clone then neutron-metadata-agent-clone
pcs constraint colocation add neutron-metadata-agent-clone with neutron-l3-agent-clone
pcs constraint order start neutron-server-api-clone then neutron-scale-clone
pcs constraint order start keystone-http-clone then neutron-scale-clone
 

11. Create the external and tenant networks:

source /root/adminrc
loop=0; while ! neutron net-list > /dev/null 2>&1 && [ "$loop" -lt 60 ]; do
    echo waiting neutron to be stable
    loop=$((loop + 1))
    sleep 5
done


if neutron router-list
    then
    neutron router-gateway-clear admin-router ext-net
    neutron router-interface-delete admin-router admin-subnet
    neutron router-delete admin-router
    neutron subnet-delete admin-subnet
    neutron subnet-delete ext-subnet
    neutron net-delete admin-net
    neutron net-delete ext-net
fi

neutron net-create ext-net --router:external --provider:physical_network external --provider:network_type flat
neutron subnet-create ext-net 192.168.115.0/24 --name ext-subnet --allocation-pool start=192.168.115.200,end=192.168.115.250  --disable-dhcp --gateway 192.168.115.254
neutron net-create admin-net
neutron subnet-create admin-net 192.128.1.0/24 --name admin-subnet --gateway 192.128.1.1
neutron router-create admin-router
neutron router-interface-add admin-router admin-subnet 
neutron router-gateway-set admin-router ext-net
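A few checks that the network plumbing came up as expected (a sketch, not in the original): list the networks and routers, and confirm that the HA router is scheduled on more than one L3 agent, as configured in step 4:
neutron net-list
neutron router-list
neutron l3-agent-list-hosting-router admin-router
neutron agent-list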