VIII openstack(1)
Problems facing traditional data centers: cost; efficiency (speed matters); management (physical machines and cloud hosts);
Cloud computing shrinks the overall operations workload, especially for hardware engineers, while raising the technical bar for operations staff;
Cloud computing is a concept: a model for how resources are consumed;
Characteristics of cloud computing: it must be consumed over the network; elastic computing (pay as you go; public clouds now bill by the second, minute, or hour); it is transparent to the user (the user need not care about the back-end implementation);
Cloud classification: private cloud; public cloud (amazon is the leader, plus aliyun, tingyun, tencentyun); hybrid cloud;
Cloud layers: IaaS (infrastructure as a service), PaaS (platform as a service), SaaS (software as a service);
Note:
IaaS (also called Hardware-as-a-Service; in the past, to run enterprise applications in the office or on the company website, you had to buy servers or other expensive hardware to run them locally; with IaaS the hardware can be outsourced: an IaaS provider supplies off-site servers, storage, and network hardware for rent, saving maintenance costs and office space, and the company can use that hardware to run its applications at any time; major IaaS providers include Amazon, Microsoft, VMWare, Rackspace, and Red Hat, each with its own specialty; Amazon and Microsoft, for instance, offer more than IaaS and will also rent out their compute capacity to host your website);
PaaS (sometimes called middleware; all of a company's development can happen at this layer, saving time and resources; PaaS provides various solutions for developing and distributing applications over the network, such as virtual servers and operating systems, which cuts hardware costs and makes collaboration between scattered teams easier: web application management, application design, application hosting, storage, security, and collaborative development tools; major PaaS providers include Google App Engine, Microsoft Azure, Force.com, Heroku, Engine Yard, AppFog, Mendix, Standing Cloud);
SaaS (this layer is the one closest to daily life; we usually reach it through a web browser: any application on a remote server that can run over the network is SaaS, a model of delivering software via the Internet in which the vendor deploys the application on its own servers and customers order the application services they need over the Internet, paying by how much they order and for how long, receiving the service over the Internet; users no longer buy software but rent web-based software from the provider to run their business, with no software maintenance on their side, since the provider manages and maintains the software entirely; vendors also provide offline operation and local data storage alongside the Internet application, so users can use the software and services they ordered anywhere, anytime; for many small businesses SaaS is the best route to advanced technology, removing the need to buy, build, and maintain infrastructure and applications; the services you consume through sites like Netflix, MOG, Google Apps, Box.net, Dropbox, or Apple's iCloud fall into this category, and whether such web services serve business, entertainment, or both, they count as part of cloud technology; business SaaS applications include Citrix's GoToMeeting, Cisco's WebEx, Salesforce's CRM, ADP, Workday, SuccessFactors);
Note: cloud computing as the encyclopedia has it:
Background (cloud computing is another sea change, following the 1980s shift from mainframes to client-server; cloud computing (Cloud Computing) is the product of the development and fusion of traditional computing and network technologies such as distributed computing, parallel computing, utility computing, network storage technologies, virtualization, load balancing, and hot-standby redundancy (high availability)); cloud computing is a model for the addition, use, and delivery of Internet-based services, usually involving dynamically scalable and often virtualized resources provided over the Internet (Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet.);
US National Institute of Standards and Technology (NIST) definition: cloud computing is a pay-per-use model providing available, convenient, on-demand network access into a shared pool of configurable computing resources (networks, servers, storage, application software, services) that can be provisioned rapidly with minimal management effort or interaction with the service provider. With XenSystem, and with Intel and IBM already very mature abroad, the application scope of the various "cloud computing" services keeps growing and their influence is beyond estimate;
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
Cloud computing relies on virtualization (better resource utilization; more convenience and manageability);
Virtualization (full virtualization FV; paravirtualization PV);
KVM (kernel-based virtual machine; simple structure, made of two parts (the device driver /dev/kvm; a user-space component that emulates PC hardware); KVM supports FV only and needs hardware virtualization support in the CPU, so it runs only on CPUs with hardware virtualization, i.e. intel cpus with VT and amd cpus with amd-v);
KVM virtualization features (embedded in the official Linux kernel (better compatibility); code-level resource calls (better performance); a virtual machine is just a process (memory is easy to manage); direct NUMA support (better scalability); acquired by redhat, so better support and commercial backing; keeps the open-source development model);
[root@server1 ~]# egrep --color 'vmx|svm' /proc/cpuinfo #(the intel-vt keyword is vmx; the amd-v keyword is svm)
http://www.openstack.org/
The birth of openstack (nova, swift; a way for the rest of the world to compete with Amazon);
openstack's mission (to produce the ubiquitous open source cloud computing platform that will meet the needs of public and private clouds regardless of size, by being simple to implement and massively scalable);
openstack release history (austin (2010.10); bexar (2011.2); cactus (2011.4); diablo (2011.10); essex (2012.4); folsom (2012.9); grizzly (2013.4); havana (2013.10); icehouse (2014.4); juno (2014.10); kilo (2015.4); liberty; mitaka; newton);
Note: starting with the havana release the networking component was renamed from quantum to neutron; openstack ships a new release every 6 months
Components included in icehouse: nova, neutron, cinder, glance, swift, horizon, keystone, ceilometer, heat, trove:
Service name         | Project    | Description
Dashboard            | Horizon    | Web management UI built on the openstack APIs with django
Compute              | Nova       | Provides the compute resource pool via virtualization
Networking           | Neutron    | Manages network resources for virtual machines
Storage:
Object storage       | Swift      | Object store, suited to write-once, read-many data
Block storage        | Cinder     | Block storage, provides the storage resource pool
Shared services:
Identity service     | Keystone   | Authentication management
Image service        | Glance     | Registration and storage management of VM images
Telemetry            | Ceilometer | Monitoring, data collection, and metering
High-level services:
Orchestration        | Heat       | Automated deployment
Database service     | Trove      | Database application services
The three core openstack components: nova (compute service); neutron (networking service); cinder (block storage);
Other components: keystone (identity service); horizon (dashboard, the web UI); glance (image service);
Supporting services: MySQL; rabbitmq (the traffic hub for communication between components);
language: python (68%); XML (16%); javascript (5%); other (11%);
openstack conceptual architecture:
openstack concept diagram:
SOA, service oriented architecture, is a component model that links an application's different functional units (called services) through well-defined interfaces and contracts between those services; the interfaces are defined in a neutral way, independent of the hardware platform, operating system, and programming language a service is implemented in, so services built across such systems can interact in a uniform and universal way; SOA allows loosely coupled, coarse-grained application components to be distributed, composed, and consumed over the network on demand; the service layer is the foundation of SOA and can be called directly by applications, effectively controlling the system's human dependencies on interacting with software agents; SOA is a coarse-grained, loosely coupled service architecture in which services communicate through simple, precisely defined interfaces, with no underlying programming interfaces or communication models involved; SOA can be seen as the natural continuation of the B/S model and XML (a subset of SGML)/WebService technologies; SOA helps software engineers view the development and deployment of components in enterprise architectures from a new vantage point, and helps system architects build whole business systems faster, more reliably, and with more reuse; compared with the past, systems built on SOA cope far more calmly with rapid business change;
DUBBO is a distributed service framework devoted to high-performance, transparent RPC remote service calls; it is the core framework of Alibaba's SOA service-governance solution, supporting 3,000,000,000+ calls per day for 2,000+ services and widely used across the member sites of the Alibaba group;
Provider: the party that exposes services.
Consumer: the party that calls remote services.
Registry: the registration center for service registration and discovery.
Monitor: the monitoring center that tallies call counts and call times of services.
Container: the container services run in.
0. The service container starts, loads, and runs the service provider.
1. On startup, the provider registers the services it offers with the registry.
2. On startup, the consumer subscribes at the registry to the services it needs.
3. The registry returns the provider address list to the consumer; when it changes, the registry pushes the updated data to the consumer over a long-lived connection.
4. The consumer picks a provider from the address list using a soft load-balancing algorithm and calls it; if the call fails, it picks another.
5. Consumers and providers accumulate call counts and call times in memory and send the statistics to the monitoring center once a minute.
(1) keystone; (2) glance; (3) nova; (4) neutron
Note: keystone and glance are both shared services;
(1)
openstack identity service (keystone: user authentication; service catalog):
Every openstack component must be registered with keystone; keystone can track each component and locate that component's service on the network;
User authentication (user permissions and user behavior tracking; tracks users and their permissions):
user (the digital representation of a person, system, or service in openstack; a logged-in user is assigned a token to access resources; users can be assigned directly to a particular tenant, behaving as if they belong to each group; a user can be added to a global role or a tenant-scoped role: in a global role the user's role permissions apply across all tenants, i.e. the role's permissions can be exercised in every tenant, while in a tenant-scoped role the user can exercise the role's permissions only inside that tenant);
tenant (a container for an organization or for isolated resources; tenants organize or isolate authentication objects, and depending on service-operation requirements a tenant can map to a customer, account, organization, or project);
token (an alphanumeric string used to access openstack apis and resources; a token can be revoked at any time, or be valid for a limited period);
role (a set of resource permissions its users may access; a customizable collection of specific user rights and privileges, e.g. VMs in nova, images in glance);
credential (data that confirms the user's identity, such as username and password, username and api key, or an authentication token issued by the identity service);
authentication (the process of confirming a user's identity);
Service catalog (provides a catalog of all services together with the endpoint of each related api):
service (e.g. nova, glance, swift; a service can check whether the current user has permission to access its resources, but when a user tries to access a service within its tenant, it must know whether that service exists and how to reach it);
endpoint (an access point a service exposes; to access a service you must know its endpoint; every endpoint URL corresponds to the access address of a service instance and comes in three kinds: public, private, admin; the public url can be reached globally, the private url only from the LAN, and the admin url is kept apart from regular access);
keystone client (the keystone command-line tool, used to create users, roles, services, and endpoints);
Connect to keystone with the admin token and create the users; once the users are created, stop using the admin token;
There are two ways to use keystone:
similar to using mysql from the CLI, e.g. #mysql -uUSERNAME -pPASSWORD -hIP_ADDR;
set the username and password in environment variables (keystone's environment variables hold: username, password, endpoint (how the keystone API is reached)) and then simply run commands; both modes are sketched below;
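A minimal sketch of both modes, reusing this walkthrough's admin/admin credentials and the 10.96.20.118 endpoint:
# mode 1: credentials on the command line, mysql-style
keystone --os-tenant-name admin --os-username admin --os-password admin --os-auth-url http://10.96.20.118:35357/v2.0 token-get
# mode 2: export the credentials once, then run bare commands
export OS_TENANT_NAME=admin OS_USERNAME=admin OS_PASSWORD=admin OS_AUTH_URL=http://10.96.20.118:35357/v2.0
keystone token-get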
(2)
glance image service (glance-api, glance-registry, image store):
glance lets users discover, register, and retrieve virtual machine images (.img files); it exposes a REST api through which users can query VM image metadata and retrieve an actual image file; virtual machine images can be stored in different places through the glance service, from plain file storage to openstack object storage; by default uploaded VM images are stored under /var/lib/glance/images/;
glance-api (the api endpoint for image discovery, retrieval, and storage; accepts create, delete, and read requests for cloud images; receives REST api requests, functionally similar to nova-api, and relies on the other modules, glance-registry and the image store, for operations such as image lookup, fetch, upload, and delete; glance-api's default port is 9292);
glance-registry (stores, processes, and retrieves image metadata (the metadata covers an object's size and type); glance-registry is an internal service used by the openstack image service and must not be exposed to users; it is the cloud's image registration service; it talks to the DB and supports most databases, MySQL or SQLite; the DB stores the image metadata (size, type), and the registry provides the REST interfaces for that metadata; glance-registry's port is 9191; the DB holds two tables, image and image property: image keeps the image format, size, and so on, while image property mainly keeps the image's customization details);
image store (the storage can be local or on distributed storage; the image store is a storage interface layer through which glance obtains images; supported back ends include amazon's S3, openstack's own swift, ceph, sheepdog, glusterfs, and others; the image store is only the interface for saving and fetching images; the concrete implementation needs external storage behind it);
storage repository for image files (the repository for image files, supporting many storage types including ordinary filesystems, object storage, RADOS block devices, http, and amazon's S3, though some back ends are read-only);
(3)
nova compute service (API; compute core; networking for VMs; console interface; image management (EC2 scenario); command-line clients and other interfaces; other components):
glance, neutron, and cinder were all nova components early on and were later split out as independent components;
the compute service hosts compute workloads and manages the cloud computing system, a major part of IaaS; its main modules are implemented in python;
nova-compute receives work over the MQ and manages the VM lifecycle; nova-compute manages KVM through libvirt, xen through the xenAPI, and vmware through the vCenter API;
in openstack a compute node must not be renamed casually: after a hostname change it is automatically recognized as a new compute node and the previous compute node is dropped;
API (includes the nova-api service and the nova-api-metadata service; nova-api service: accepts and answers end-user compute API calls (external calls by end users), supporting the openstack api, the amazon EC2 api, and a special privileged admin API; nova-api-metadata service: accepts requests for instance metadata; this service usually runs together with the nova-network service in multi-host deployments);
compute core (includes the nova-compute service, the nova-scheduler service, and the nova-conductor module; nova-compute service: a daemon that creates and terminates virtual machine instances through hypervisor APIs, e.g. XenAPI for XenServer/XCP, libvirt for KVM or QEMU, VMwareAPI for VMware; nova-scheduler service: schedules cloud hosts, taking VM instance requests from the queue and deciding which compute host runs each one; nova-conductor module: the middleman for compute nodes' data access, coordinating data exchange between the nova-compute service and the DB so the nova-compute service never touches the cloud DB directly; do not deploy this module on nodes where nova-compute runs);
networking for VMs (includes the nova-network worker daemon, nova-consoleauth daemon, nova-novncproxy daemon, nova-spicehtml5proxy, and nova-cert daemon; nova-network worker daemon: similar to the nova-compute service, accepts network tasks from the queue and manipulates the network, e.g. bridging NICs or changing iptables rules; nova-consoleauth daemon: authorizes user tokens for the console proxies; nova-novncproxy daemon: a proxy for accessing running VM instances over a vnc connection, supporting the browser-based novnc client; nova-spicehtml5proxy: a proxy for accessing running VM instances over a spice connection, supporting the browser-based html5 client; nova-cert daemon: x509 certificates);
image management (EC2 scenario) (includes the nova-objectstore daemon and the euca2ools client; nova-objectstore daemon: an amazon S3 interface for registering amazon S3 images with openstack; euca2ools client: a command-line tool compatible with the amazon EC2 interface);
command-line clients and other interfaces (nova client, the nova command-line tool);
other components (the queue and the SQL database; the queue: the hub for passing messages between processes, implemented with rabbitmq; SQL database: keeps the build-time and run-time state of the cloud infrastructure);
openstack-nova-api (the nova-api component implements the RESTful API and is the only way into nova from outside; it accepts external requests and passes them to the other service components via the MQ; it is also EC2-API compatible, so EC2 management tools can be used for routine nova administration);
openstack-nova-scheduler (schedules cloud hosts; this module decides which compute node a VM is created on; placing a VM on a physical node takes two steps, filter and weight: the filter scheduler starts from the unfiltered host list and, according to the filter properties, keeps the compute nodes that meet the conditions; the filtered hosts are then weighted and one is chosen by policy, preferring, for each virtual host to be created, the node with the most free resources; a config sketch follows the note below);
openstack-nova-cert (handles identity certificates);
openstack-nova-conductor (the middleman for compute nodes' data access, coordinating data exchange between the nova-compute service and the DB so the nova-compute service never touches the cloud DB directly; do not deploy this module on nodes where nova-compute runs)
openstack-nova-console (authorizes user tokens for the console proxies)
openstack-nova-novncproxy (a proxy for accessing running VM instances over a vnc connection, supporting the browser-based novnc client)
Note: in practice you will hit "no valid host" errors; read the nova-scheduler log to see whether resources ran short or something else went wrong
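A minimal nova.conf sketch of the filter-then-weight placement described above (scheduler_default_filters and ram_weight_multiplier are standard icehouse options; the filter list shown is an illustrative assumption):
[root@linux-node1 ~]# vim /etc/nova/nova.conf
[DEFAULT]
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ImagePropertiesFilter #(step 1, filter: only hosts passing every filter stay candidates)
ram_weight_multiplier=1.0 #(step 2, weight: a positive multiplier prefers hosts with more free RAM)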
(4)
neutron networking component (openstack networking; nova-network networking service):
Evolution (nova-network --> quantum --> neutron);
openstack networking (neutron; can configure multiple network types per instance; ships various plugins supporting virtual networks);
nova-network networking service (only a single network type per instance; provides basic networking only);
Network (in a real physical environment, a switch connects multiple computers into a network; in the neutron world a network likewise connects multiple cloud hosts; the goal of neutron networking is to partition the physical network more flexibly for the openstack cloud and, in a multi-tenant environment, give every tenant an independent network environment, with an api to achieve this; in neutron a network is an object users can create; mapped onto physical concepts, this object is one enormous switch with an unlimited number of virtual ports that can be created and destroyed dynamically);
Port (in a real physical environment, every subnet or network has many ports, the places where devices plug into the network, like switch ports for computers; the neutron world has the same idea: a cloud host's NIC maps to a port, the attachment point where routers and virtual machines hook into the network);
Router (in a real physical environment, communication between different networks or logical subnets goes through a router; neutron's router is the same, a routing-and-forwarding element connecting different networks or subnets; in neutron these soft components can be created and destroyed);
Subnet (in a real physical environment a network can be divided into multiple logical subnets; a subnet is an address pool built from a set of IPs, and communication between subnets needs a router; the neutron world is the same, and in neutron subnets belong to networks);
Public network (gives tenants access and api calls);
Management network (communication between the cloud's physical machines);
Storage network (the cloud's storage network, e.g. used by iSCSI or glusterfs);
Service network (the network used inside virtual machines); the CLI sketch below shows how these objects fit together;
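A hedged CLI sketch of how network, subnet, router, and port fit together (the demo-* names and the CIDR are made up; the neutron client is installed in part (2) below):
[root@linux-node1 ~]# neutron net-create demo-net #(the network, i.e. the giant virtual switch)
[root@linux-node1 ~]# neutron subnet-create --name demo-subnet demo-net 192.168.56.0/24 #(the subnet, an IP address pool on that network)
[root@linux-node1 ~]# neutron router-create demo-router #(the routing and forwarding element)
[root@linux-node1 ~]# neutron router-interface-add demo-router demo-subnet #(attaches the router to the subnet through a port)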
A typical neutron network layout:
openstack network model:
The OpenStack virtual network, Neutron, pushes part of traditional network management out to the tenant: with it a tenant can create its own dedicated virtual network and subnets, create routers, and so on; with virtual network functions in place, the underlying physical network can offer additional network services, e.g. a tenant can build its own virtual network resembling a data-center network; Neutron provides a fairly complete virtual network model and API for multi-tenant environments; as with deploying a physical network, building a virtual network with Neutron takes some basic planning and design
icehouse adds the ML2 plugin (modular layer 2; lets agents share code; lets different hosts use different network types (multi-vendor support), so different network types can coexist, e.g. linuxbridge with openvswitch and others; enables mixed network topologies, where previously it was all VLAN, all GRE, or all FLAT single-plane networking);
In the figure above, a FLAT single-plane network (physical machines and VMs share one network; used in production for fewer than 255 VMs; drawbacks (a single network bottleneck and poor scalability; no proper multi-tenant isolation)
(5) dashboard, code name horizon:
the openstack dashboard (horizon) provides a baseline user interface for managing openstack services.
horizon notes: stateless; error handling is delegated to the back-end; doesn't support all api functions; can use memcached or a database to store sessions; gets updated via nova-api polling;
horizon is a web interface that lets administrators and ordinary users manage the various openstack resources and services; it interacts with the openstack compute cloud controller through the openstack apis; horizon allows customization; horizon ships a set of core libraries plus reusable templates and tools;
installation prerequisites: install openstack compute (nova) and the identity service (keystone); install python 2.6 or 2.7 with django support; the browser must support HTML5 with cookies and javascript enabled; a session-cache sketch follows;
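A minimal sketch of the memcached session store mentioned in the notes, assuming the RDO config path /etc/openstack-dashboard/local_settings and a memcached running on its default port:
[root@linux-node1 ~]# vim /etc/openstack-dashboard/local_settings
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}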
(6) cinder:
nova-volume (nova-volume manages the creation, attaching and detaching of persistent volumes to compute instances; optional; iSCSI solution which uses LVM; a volume can be attached to only 1 instance at a time; persistent volumes keep their state independent of instances; within a single openstack deployment different storage providers cannot be used; nova-volume drivers (iSCSI; Xen storage manager; nexenta; netapp; san);
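Because the LVM/iSCSI driver carves volumes out of a volume group, a typical preparation step is sketched below (assumes a spare disk /dev/sdb; cinder-volumes is the LVM driver's default volume group name):
[root@linux-node2 ~]# pvcreate /dev/sdb #(turn the spare disk into an LVM physical volume)
[root@linux-node2 ~]# vgcreate cinder-volumes /dev/sdb #(the volume group the driver allocates volumes from)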
###########################################################################################
VIII openstack(2)
操做openstack icehouse:
準備:
控制節點:linux-node1.example.com(eth0:10.96.20.118;eth1:192.168.10.118);
計算節點:linux-node2.example.com(eth0:10.96.20.119;eth1:192.168.10.119);
兩臺主機均開啓虛擬化功能:IntelVT-x/EPT或AMD-V/RVI
http://mirrors.aliyun.com/repo/Centos-6.repo
http://mirrors.ustc.edu.cn/epel/6/x86_64/epel-release-6-8.noarch.rpm(或http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm)
https://www.rdoproject.org/repos/rdo-release.rpm
https://repos.fedorapeople.org/repos/openstack/
https://repos.fedorapeople.org/repos/openstack/EOL/ #(location of the openstack yum repos)
[root@linux-node1 ~]# uname -rm
2.6.32-431.el6.x86_64 x86_64
[root@linux-node1 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.5(Santiago)
#yum -y install yum-plugin-priorities #(keeps higher-priority packages from being overridden by lower-priority ones)
#yum -y install openstack-selinux #(lets openstack manage SELinux automatically)
Disable iptables; disable selinux; synchronize time;
Run on both nodes:
[root@linux-node1 ~]# rpm -ivh http://mirrors.ustc.edu.cn/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@linux-node1 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-6.repo
[root@linux-node1 ~]# vim /etc/yum.repos.d/CentOS-Base.repo
:%s/$releasever/6/g
[root@linux-node1 ~]# yum clean all
[root@linux-node1 ~]# yum makecache
[root@linux-node1 ~]# yum -y install python-pip gcc gcc-c++ make libtool patch auto-make python-devel libxslt-devel MySQL-python openssl-devel libudev-devel libvirt libvirt-python git wget qemu-kvm gedit python-numdisplay python-eventlet device-mapper bridge-utils libffi-devel libffi
[root@linux-node1 ~]# cd /etc/yum.repos.d
[root@linux-node1 yum.repos.d]# yum install -y https://www.rdoproject.org/repos/rdo-release.rpm
[root@linux-node1 yum.repos.d]# vim rdo-release.repo #(as shipped this file points at the latest release, mitaka; change it to the icehouse address)
[openstack-icehouse]
name=OpenStack Icehouse Repository
baseurl=https://repos.fedorapeople.org/repos/openstack/EOL/openstack-icehouse/epel-6/
#baseurl=http://mirror.centos.org/centos/7/cloud/$basearch/openstack-mitaka/
gpgcheck=0
enabled=1
#gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
[root@linux-node1 yum.repos.d]# yum clean all
[root@linux-node1 yum.repos.d]# yum makecache
(I)
On control node1 (install the base software: MySQL and rabbitmq):
In production MySQL should be clustered; every openstack component except horizon connects to MySQL: nova, neutron, cinder, glance, keystone;
Every openstack component except horizon and keystone connects to rabbitmq; message brokers commonly used with openstack: rabbitmq, qpid, zeromq;
[root@linux-node1 ~]# yum -y install mysql-server
[root@linux-node1 ~]# cp /usr/share/mysql/my-large.cnf /etc/my.cnf
cp: overwrite `/etc/my.cnf'? y
[root@linux-node1 ~]# vim /etc/my.cnf #(innodb_file_per_table gives every table its own tablespace; the default is a shared tablespace; zabbix needs per-table tablespaces, since with large data volumes later optimization is very hard on a shared tablespace)
[mysqld]
default-storage-engine=InnoDB
innodb_file_per_table=1
init_connect='SET NAMES utf8'
default-character-set=utf8
default-collation=utf8_general_ci
[root@linux-node1 ~]# service mysqld start
Starting mysqld: [ OK ]
[root@linux-node1 ~]# chkconfig mysqld on
[root@linux-node1 ~]# chkconfig --list mysqld
mysqld 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@linux-node1 ~]# netstat -tnulp | grep :3306
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 11215/mysqld
[root@linux-node1 ~]# mysqladmin create nova #(for each service to be installed, grant a local account and a remote account)
[root@linux-node1 ~]# mysql -e "grant all on nova.* to 'nova'@'localhost' identified by 'nova'"
[root@linux-node1 ~]# mysql -e "grant all on nova.* to 'nova'@'%' identified by 'nova'"
[root@linux-node1 ~]# mysqladmin create neutron
[root@linux-node1 ~]# mysql -e "grant all on neutron.* to'neutron'@'localhost' identified by 'neutron'"
[root@linux-node1 ~]# mysql -e "grant all on neutron.* to 'neutron'@'%' identified by 'neutron'"
[root@linux-node1 ~]# mysqladmin create cinder
[root@linux-node1 ~]# mysql -e "grant all on cinder.* to 'cinder'@'localhost' identified by 'cinder'"
[root@linux-node1 ~]# mysql -e "grant all on cinder.* to 'cinder'@'%' identified by 'cinder'"
[root@linux-node1 ~]# mysqladmin create keystone
[root@linux-node1 ~]# mysql -e "grant all on keystone.* to 'keystone'@'localhost' identified by 'keystone'"
[root@linux-node1 ~]# mysql -e "grant all on keystone.* to 'keystone'@'%' identified by 'keystone'"
[root@linux-node1 ~]# mysqladmin create glance
[root@linux-node1 ~]# mysql -e "grant all on glance.* to 'glance'@'localhost' identified by 'glance'"
[root@linux-node1 ~]# mysql -e "grant all on glance.* to 'glance'@'%' identified by 'glance'"
[root@linux-node1 ~]# mysqladmin flush-privileges
[root@linux-node1 ~]# mysql -e 'use mysql;select User,Host from user;'
+----------+-------------------------+
| User | Host |
+----------+-------------------------+
| cinder | % |
| glance | % |
| keystone | % |
| neutron | % |
| nova | % |
| root | 127.0.0.1 |
| | linux-node1.example.com |
| root | linux-node1.example.com |
| | localhost |
| cinder | localhost |
| glance | localhost |
| keystone | localhost |
| neutron | localhost |
| nova | localhost |
| root | localhost |
+----------+-------------------------+
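The five databases and their grants above can equally be created in one loop (a sketch using the same names and passwords as above):
[root@linux-node1 ~]# for svc in nova neutron cinder keystone glance ; do mysqladmin create $svc ; mysql -e "grant all on $svc.* to '$svc'@'localhost' identified by '$svc'" ; mysql -e "grant all on $svc.* to '$svc'@'%' identified by '$svc'" ; done
[root@linux-node1 ~]# mysqladmin flush-privileges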
[root@linux-node1 ~]# yum -y install rabbitmq-server
[root@linux-node1 ~]# service rabbitmq-server start
Starting rabbitmq-server: SUCCESS
rabbitmq-server.
[root@linux-node1 ~]# chkconfig rabbitmq-server on
[root@linux-node1 ~]# chkconfig --list rabbitmq-server
rabbitmq-server 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@linux-node1 ~]# /usr/lib/rabbitmq/bin/rabbitmq-plugins list
[ ] amqp_client 3.1.5
[ ] cowboy 0.5.0-rmq3.1.5-git4b93c2d
[ ] eldap 3.1.5-gite309de4
[ ] mochiweb 2.7.0-rmq3.1.5-git680dba8
[ ] rabbitmq_amqp1_0 3.1.5
[ ] rabbitmq_auth_backend_ldap 3.1.5
[ ] rabbitmq_auth_mechanism_ssl 3.1.5
[ ] rabbitmq_consistent_hash_exchange 3.1.5
[ ] rabbitmq_federation 3.1.5
[ ] rabbitmq_federation_management 3.1.5
[ ] rabbitmq_jsonrpc 3.1.5
[ ] rabbitmq_jsonrpc_channel 3.1.5
[ ] rabbitmq_jsonrpc_channel_examples 3.1.5
[ ] rabbitmq_management 3.1.5
[ ] rabbitmq_management_agent 3.1.5
[ ] rabbitmq_management_visualiser 3.1.5
[ ] rabbitmq_mqtt 3.1.5
[ ] rabbitmq_shovel 3.1.5
[ ] rabbitmq_shovel_management 3.1.5
[ ] rabbitmq_stomp 3.1.5
[ ] rabbitmq_tracing 3.1.5
[ ] rabbitmq_web_dispatch 3.1.5
[ ] rabbitmq_web_stomp 3.1.5
[ ] rabbitmq_web_stomp_examples 3.1.5
[ ] rfc4627_jsonrpc 3.1.5-git5e67120
[ ] sockjs 0.3.4-rmq3.1.5-git3132eb9
[ ] webmachine 1.10.3-rmq3.1.5-gite9359c7
[root@linux-node1 ~]# /usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management #(enable the web management plugin)
The following plugins have been enabled:
mochiweb
webmachine
rabbitmq_web_dispatch
amqp_client
rabbitmq_management_agent
rabbitmq_management
Plugin configuration has changed. Restart RabbitMQ for changes to take effect.
[root@linux-node1 ~]# service rabbitmq-server restart
Restarting rabbitmq-server: SUCCESS
rabbitmq-server.
[root@linux-node1 ~]# netstat -tnulp | grep :5672
tcp 0 0 :::5672 :::* LISTEN 11760/beam
[root@linux-node1 ~]# netstat -tnulp | grep 5672 #(15672 and 55672 serve the web UI; logging in on 55672 redirects to 15672)
tcp 0 0 0.0.0.0:15672 0.0.0.0:* LISTEN 11760/beam
tcp 0 0 0.0.0.0:55672 0.0.0.0:* LISTEN 11760/beam
tcp 0 0 :::5672 :::* LISTEN 11760/beam
http://10.96.20.118:15672/ #(username and password both default to guest; change it with #rabbitmqctl change_password guest NEW_PASSWORD; if you do, also update rabbit_password in the rabbitmq config file and the default passwords in each openstack component's config, as sketched below; the HTTP API at the bottom of the page is useful for monitoring)
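A sketch of that password change (NEW_PASSWORD is a placeholder; run the sed only after the glance and nova config files below have been written):
[root@linux-node1 ~]# rabbitmqctl change_password guest NEW_PASSWORD
[root@linux-node1 ~]# sed -i 's/^rabbit_password=guest$/rabbit_password=NEW_PASSWORD/' /etc/glance/glance-api.conf /etc/nova/nova.conf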
(II)
On control node1 (install and configure keystone):
[root@linux-node1 ~]# yum -y install openstack-keystone python-keystoneclient
[root@linux-node1 ~]# id keystone
uid=163(keystone) gid=163(keystone)groups=163(keystone)
[root@linux-node1 ~]# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone #(create the signing certificates and keys, and restrict access to the related files)
Generating RSA private key, 2048 bit long modulus
.................................................................................................................+++
.............................................................................................................+++
e is 65537 (0x10001)
Generating RSA private key, 2048 bit long modulus
...............+++
.....................................................+++
e is 65537 (0x10001)
Using configuration from /etc/keystone/ssl/certs/openssl.conf
Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName :PRINTABLE:'US'
stateOrProvinceName :ASN.1 12:'Unset'
localityName :ASN.1 12:'Unset'
organizationName :ASN.1 12:'Unset'
commonName :ASN.1 12:'www.example.com'
Certificate is to be certified until Sep 25 02:29:07 2026 GMT (3650 days)
Write out database with 1 new entries
Data Base Updated
[root@linux-node1 ~]# chown -R keystone:keystone /etc/keystone/ssl/
[root@linux-node1 ~]# chmod -R o-rwx /etc/keystone/ssl/
[root@linux-node1 ~]# vim /etc/keystone/keystone.conf #(admin_token is the initial admin token; in production it should be a random value, e.g. from #openssl rand -hex 10, as sketched after this file; connection configures database access; provider selects the UUID token provider; driver selects the SQL token driver)
[DEFAULT]
admin_token=ADMIN
debug=true
verbose=true
log_file=/var/log/keystone/keystone.log
[database]
connection=mysql://keystone:keystone@localhost/keystone
[token]
provider=keystone.token.providers.uuid.Provider
driver=keystone.token.backends.sql.Token
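A hedged sketch of randomizing admin_token for production (the sed edits the admin_token= line set above; keep the echoed value, it is what OS_SERVICE_TOKEN must be set to later):
[root@linux-node1 ~]# TOKEN=$(openssl rand -hex 10)
[root@linux-node1 ~]# sed -i "s/^admin_token=.*/admin_token=$TOKEN/" /etc/keystone/keystone.conf
[root@linux-node1 ~]# echo $TOKEN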
[root@linux-node1 ~]# keystone-manage db_sync #(builds keystone's table structure, initializing the keystone database; the sync runs as root)
[root@linux-node1 ~]# chown -R keystone:keystone /var/log/keystone/ #(the sync ran as root, so the log file must be owned by keystone or the service fails to start)
[root@linux-node1 ~]# mysql -ukeystone -pkeystone -e 'use keystone;show tables;'
+-----------------------+
| Tables_in_keystone |
+-----------------------+
| assignment |
| credential |
| domain |
| endpoint |
| group |
| migrate_version |
| policy |
| project |
| region |
| role |
| service |
| token |
| trust |
| trust_role |
| user |
| user_group_membership |
+-----------------------+
[root@linux-node1 ~]# service openstack-keystone start
Starting keystone: [ OK ]
[root@linux-node1 ~]# chkconfig openstack-keystone on
[root@linux-node1 ~]# chkconfig --list openstack-keystone
openstack-keystone 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@linux-node1 ~]# netstat -tnlp | egrep '5000|35357' #(35357 is keystone's admin port)
tcp 0 0 0.0.0.0:35357 0.0.0.0:* LISTEN 16599/python
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 16599/python
[root@linux-node1 ~]# less /var/log/keystone/keystone.log #(public_bind_host is 0.0.0.0, admin_bind_host is 0.0.0.0; compute_port 8774 relates to nova)
[root@linux-node1 ~]# keystone --help
[root@linux-node1 ~]# export OS_SERVICE_TOKEN=ADMIN #(set the initial admin token)
[root@linux-node1 ~]# export OS_SERVICE_ENDPOINT=http://10.96.20.118:35357/v2.0 #(set the endpoint)
[root@linux-node1 ~]# keystone role-list
+----------------------------------+----------+
| id | name |
+----------------------------------+----------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
+----------------------------------+----------+
[root@linux-node1 ~]# keystone tenant-create --name admin --description "Admin Tenant" #(create the admin tenant)
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Admin Tenant |
| enabled | True |
| id | d14e4731327047c58a2431e9e2221626 |
| name | admin |
+-------------+----------------------------------+
[root@linux-node1 ~]# keystone user-create --name admin --pass admin --email admin@linux-node1.example.com #(create the admin user)
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | admin@linux-node1.example.com |
| enabled | True |
| id | 4e907efbf23b42ac8da392d1a201534c|
| name | admin |
| username | admin |
+----------+----------------------------------+
[root@linux-node1 ~]# keystone role-create --name admin #(create the admin role)
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| id | f175bdcf962e4ba0a901f9eae1c9b8a1|
| name | admin |
+----------+----------------------------------+
[root@linux-node1 ~]# keystone user-role-add --tenant admin --user admin --role admin #(add the admin user to the admin role)
[root@linux-node1 ~]# keystone role-create --name _member_
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| id | 9fe2ff9ee4384b1894a90878d3e92bab |
| name | _member_ |
+----------+----------------------------------+
[root@linux-node1 ~]# keystone user-role-add --tenant admin --user admin --role _member_ #(add the admin tenant and user to the _member_ role)
[root@linux-node1 ~]# keystone tenant-create --name demo --description "Demo Tenant" #(create a demo tenant for demonstration)
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Demo Tenant |
| enabled | True |
| id | 5ca17bf131f3443c81cf8947a6a2da03 |
| name | demo |
+-------------+----------------------------------+
[root@linux-node1 ~]# keystone user-create --name demo --pass demo --email demo@linux-node1.example.com #(create the demo user)
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | demo@linux-node1.example.com |
| enabled | True |
| id | 97f5bae389c447bbbe43838470d7427d|
| name | demo |
| username | demo |
+----------+----------------------------------+
[root@linux-node1 ~]# keystone user-role-add --tenant demo --user demo --role admin #(add the demo tenant and user to the admin role)
[root@linux-node1 ~]# keystone user-role-add --tenant demo --user demo --role _member_ #(add the demo tenant and user to the _member_ role)
[root@linux-node1 ~]# keystone tenant-create --name service --description "Service Tenant" #(openstack services also need a tenant to interact with users, roles, and other services, so this step creates a service tenant; every openstack service is associated with it)
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Service Tenant |
| enabled | True |
| id | 06c47e96dbf7429bbff4f93822222ca9 |
| name | service |
+-------------+----------------------------------+
Create the service entity and API endpoints:
[root@linux-node1 ~]# keystone service-create --name keystone --type identity --description "OpenStack identity" #(create the service entity)
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack identity |
| enabled | True |
| id | 93adbcc42e6145a39ecab110b3eb1942 |
| name | keystone |
| type | identity |
+-------------+----------------------------------+
[root@linux-node1 ~]# keystone endpoint-create --service-id `keystone service-list | awk '/identity/{print $2}'` --publicurl http://10.96.20.118:5000/v2.0 --internalurl http://10.96.20.118:5000/v2.0 --adminurl http://10.96.20.118:35357/v2.0 --region regionOne #(in an openstack environment the identity service manages the catalog of services and their API endpoints, which the services use to find one another; openstack defines three API endpoints per service: admin, internal, public; this step creates the API endpoints for the identity service)
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://10.96.20.118:35357/v2.0 |
| id |fd14210787ab44d2b61480598b1c1c82 |
| internalurl | http://10.96.20.118:5000/v2.0 |
| publicurl | http://10.96.20.118:5000/v2.0 |
| region | regionOne |
| service_id | 93adbcc42e6145a39ecab110b3eb1942|
+-------------+----------------------------------+
確認以上操做(分別用admin和demo查看令牌、租戶列表、用戶列表、角色列表):
[root@linux-node1 ~]# keystone --os-tenant-name admin --os-username admin--os-password admin --os-auth-url http://10.96.20.118:35357/v2.0 token-get #(使用admin租戶和用戶請求認證令牌)
+-----------+----------------------------------+
| Property | Value |
+-----------+----------------------------------+
| expires | 2016-09-27T10:21:46Z |
| id | cca2d1f0f8244d848eea0cad0cda7f04 |
| tenant_id | d14e4731327047c58a2431e9e2221626 |
| user_id | 4e907efbf23b42ac8da392d1a201534c|
+-----------+----------------------------------+
[root@linux-node1 ~]# keystone --os-tenant-name admin --os-username admin --os-password admin --os-auth-url http://10.96.20.118:35357/v2.0 tenant-list #(list tenants as the admin tenant and user)
+----------------------------------+---------+---------+
| id | name | enabled |
+----------------------------------+---------+---------+
| d14e4731327047c58a2431e9e2221626 | admin | True |
| 5ca17bf131f3443c81cf8947a6a2da03 | demo | True |
| 06c47e96dbf7429bbff4f93822222ca9 | service | True |
+----------------------------------+---------+---------+
[root@linux-node1 ~]# keystone --os-tenant-name admin --os-username admin --os-password admin --os-auth-url http://10.96.20.118:35357/v2.0 user-list #(list users as the admin tenant and user)
+----------------------------------+-------+---------+-------------------------------+
| id | name | enabled | email |
+----------------------------------+-------+---------+-------------------------------+
| 4e907efbf23b42ac8da392d1a201534c | admin | True | admin@linux-node1.example.com |
| 97f5bae389c447bbbe43838470d7427d| demo | True | demo@linux-node1.example.com |
+----------------------------------+-------+---------+-------------------------------+
[root@linux-node1 ~]# keystone --os-tenant-name admin --os-username admin --os-password admin --os-auth-url http://10.96.20.118:35357/v2.0 role-list #(list roles as the admin tenant and user)
+----------------------------------+----------+
| id | name |
+----------------------------------+----------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
| f175bdcf962e4ba0a901f9eae1c9b8a1| admin |
+----------------------------------+----------+
Query directly using environment variables:
[root@linux-node1 ~]# unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
[root@linux-node1 ~]# vim keystone-admin.sh #(define the variables once; afterwards source the file for whichever user you want to query as)
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://10.96.20.118:35357/v2.0
[root@linux-node1 ~]# vim keystone-demo.sh
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://10.96.20.118:35357/v2.0
[root@linux-node1 ~]# source keystone-admin.sh
[root@linux-node1 ~]# keystone token-get
+-----------+----------------------------------+
| Property | Value |
+-----------+----------------------------------+
| expires | 2016-09-27T10:32:05Z |
| id | 465e30389d2f46a2bea3945bbcff157a |
| tenant_id | d14e4731327047c58a2431e9e2221626 |
| user_id | 4e907efbf23b42ac8da392d1a201534c|
+-----------+----------------------------------+
[root@linux-node1 ~]# keystone role-list
+----------------------------------+----------+
| id | name |
+----------------------------------+----------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
| f175bdcf962e4ba0a901f9eae1c9b8a1| admin |
+----------------------------------+----------+
[root@linux-node1 ~]# source keystone-demo.sh
[root@linux-node1 ~]# keystone user-list
+----------------------------------+-------+---------+-------------------------------+
| id | name | enabled | email |
+----------------------------------+-------+---------+-------------------------------+
| 4e907efbf23b42ac8da392d1a201534c | admin | True | admin@linux-node1.example.com |
| 97f5bae389c447bbbe43838470d7427d| demo | True | demo@linux-node1.example.com |
+----------------------------------+-------+---------+-------------------------------+
(III)
On control node1 (install and configure glance):
[root@linux-node1 ~]# yum -y install openstack-glance python-glanceclient python-crypto
[root@linux-node1 ~]# vim /etc/glance/glance-api.conf
[DEFAULT]
verbose=True
debug=true
log_file=/var/log/glance/api.log
# ============ Notification System Options=====================
rabbit_host=10.96.20.118
rabbit_port=5672
rabbit_use_ssl=false
rabbit_userid=guest
rabbit_password=guest
rabbit_virtual_host=/
rabbit_notification_exchange=glance
rabbit_notification_topic=notifications
rabbit_durable_queues=False
[database]
connection=mysql://glance:glance@localhost/glance
[keystone_authtoken]
#auth_host=127.0.0.1
auth_host=10.96.20.118
auth_port=35357
auth_protocol=http
#admin_tenant_name=%SERVICE_TENANT_NAME%
admin_tenant_name=service
#admin_user=%SERVICE_USER%
admin_user=glance
#admin_password=%SERVICE_PASSWORD%
admin_password=glance
[paste_deploy]
flavor=keystone
[root@linux-node1 ~]# vim /etc/glance/glance-registry.conf
[DEFAULT]
verbose=True
debug=true
log_file=/var/log/glance/registry.log
[database]
connection=mysql://glance:glance@localhost/glance
[keystone_authtoken]
#auth_host=127.0.0.1
auth_host=10.96.20.118
auth_port=35357
auth_protocol=http
#admin_tenant_name=%SERVICE_TENANT_NAME%
admin_tenant_name=service
#admin_user=%SERVICE_USER%
admin_user=glance
#admin_password=%SERVICE_PASSWORD%
admin_password=glance
[paste_deploy]
flavor=keystone
[root@linux-node1 ~]# glance-manage db_sync
/usr/lib64/python2.6/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.
_warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning)
Fix for the warning above:
[root@linux-node1 ~]# yum -y groupinstall "Development tools"
[root@linux-node1 ~]# yum -y install gcclibgcc glibc libffi-devel libxml2-devel libxslt-devel openssl-devel zlib-devel bzip2-devel ncurses-devel python-devel
[root@linux-node1 ~]# wget https://ftp.gnu.org/gnu/gmp/gmp-6.0.0a.tar.bz2
[root@linux-node1 ~]# tar xf gmp-6.0.0a.tar.bz2
[root@linux-node1 ~]# cd gmp-6.0.0
[root@linux-node1 gmp-6.0.0]# ./configure
[root@linux-node1 gmp-6.0.0]# make && make install
[root@linux-node1 gmp-6.0.0]# cd
[root@linux-node1 ~]# pip uninstall PyCrypto
[root@linux-node1 ~]# wget https://ftp.dlitz.net/pub/dlitz/crypto/pycrypto/pycrypto-2.6.1.tar.gz
[root@linux-node1 ~]# tar xf pycrypto-2.6.1.tar.gz
[root@linux-node1 ~]# cd pycrypto-2.6.1
[root@linux-node1 pycrypto-2.6.1]# ./configure
[root@linux-node1 pycrypto-2.6.1]# python setup.py install
[root@linux-node1 pycrypto-2.6.1]# cd
[root@linux-node1 ~]# mysql -uglance -pglance -e 'use glance;show tables;'
+------------------+
| Tables_in_glance |
+------------------+
| image_locations |
| image_members |
| image_properties |
| image_tags |
| images |
| migrate_version |
| task_info |
| tasks |
+------------------+
[root@linux-node1 ~]# source keystone-admin.sh
[root@linux-node1 ~]# keystone user-create --name glance --pass glance --email glance@linux-node1.example.com
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | glance@linux-node1.example.com |
| enabled | True |
| id | b3d70ee1067a44f4913f9b6000535b26|
| name | glance |
| username | glance |
+----------+----------------------------------+
[root@linux-node1 ~]# keystone user-role-add --user glance --tenant service --role admin
[root@linux-node1 ~]# keystone service-create --name glance --type image --description "OpenStack image Service"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack image Service |
| enabled | True |
| id | 7f2db750a630474b82740dacb55e70b3|
| name | glance |
| type | image |
+-------------+----------------------------------+
[root@linux-node1 ~]# keystone endpoint-create --service-id `keystone service-list | awk '/image/{print $2}'` --publicurl http://10.96.20.118:9292 --internalurl http://10.96.20.118:9292 --adminurl http://10.96.20.118:9292 --region regionOne
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://10.96.20.118:9292 |
| id | 9815501af47f464db00cfb2eb30c649d |
| internalurl | http://10.96.20.118:9292 |
| publicurl | http://10.96.20.118:9292 |
| region | regionOne |
| service_id | 7f2db750a630474b82740dacb55e70b3 |
+-------------+----------------------------------+
[root@linux-node1 ~]# keystone service-list
+----------------------------------+----------+----------+-------------------------+
| id | name | type | description |
+----------------------------------+----------+----------+-------------------------+
| 7f2db750a630474b82740dacb55e70b3 | glance | image | OpenStack image Service |
| 93adbcc42e6145a39ecab110b3eb1942 | keystone | identity | OpenStack identity |
+----------------------------------+----------+----------+-------------------------+
[root@linux-node1 ~]# keystone endpoint-list
+----------------------------------+-----------+-------------------------------+-------------------------------+--------------------------------+----------------------------------+
| id | region | publicurl | internalurl | adminurl | service_id |
+----------------------------------+-----------+-------------------------------+-------------------------------+--------------------------------+----------------------------------+
| 9815501af47f464db00cfb2eb30c649d | regionOne | http://10.96.20.118:9292 | http://10.96.20.118:9292 | http://10.96.20.118:9292 | 7f2db750a630474b82740dacb55e70b3 |
| fd14210787ab44d2b61480598b1c1c82| regionOne | http://10.96.20.118:5000/v2.0 | http://10.96.20.118:5000/v2.0 |http://10.96.20.118:35357/v2.0 | 93adbcc42e6145a39ecab110b3eb1942 |
+----------------------------------+-----------+-------------------------------+-------------------------------+--------------------------------+----------------------------------+
[root@linux-node1 ~]# id glance
uid=161(glance) gid=161(glance)groups=161(glance)
[root@linux-node1 ~]# chown -R glance:glance /var/log/glance/*
[root@linux-node1 ~]# service openstack-glance-api start
Starting openstack-glance-api: [FAILED]
The start failed; the log shows ImportError: /usr/lib64/python2.6/site-packages/Crypto/Cipher/_AES.so: undefined symbol: rpl_malloc; fix:
[root@linux-node1 ~]# cd pycrypto-2.6.1
[root@linux-node1 pycrypto-2.6.1]# export ac_cv_func_malloc_0_nonnull=yes
[root@linux-node1 pycrypto-2.6.1]# easy_install -U PyCrypto
[root@linux-node1 ~]# service openstack-glance-api start
Starting openstack-glance-api: [ OK ]
[root@linux-node1 ~]# service openstack-glance-registry start
Starting openstack-glance-registry: [ OK ]
[root@linux-node1 ~]# chkconfig openstack-glance-api on
[root@linux-node1 ~]# chkconfig openstack-glance-registry on
[root@linux-node1 ~]# chkconfig --list openstack-glance-api
openstack-glance-api 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@linux-node1 ~]# chkconfig --list openstack-glance-registry
openstack-glance-registry 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@linux-node1 ~]# netstat -tnlp | egrep "9191|9292" #(openstack-glance-api 9292; openstack-glance-registry 9191)
tcp 0 0 0.0.0.0:9292 0.0.0.0:* LISTEN 50030/python
tcp 0 0 0.0.0.0:9191 0.0.0.0:* LISTEN 50096/python
http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img #(download the image file; cirros is a small linux image used here to verify that the image service works)
[root@linux-node1 ~]# ll -h cirros-0.3.4-x86_64-disk.img
-rw-r--r--. 1 root root 13M Sep 28 00:58 cirros-0.3.4-x86_64-disk.img
[root@linux-node1 ~]# glance image-create --name "cirros-0.3.4-x86_64" --disk-format qcow2 --container-format bare --is-public True --file /root/cirros-0.3.4-x86_64-disk.img --progress #(--name names the image; --disk-format is the image's disk format, one of ami|ari|aki|vhd|vmdk|raw|qcow2|vdi|iso; --container-format is the image container format, one of ami|ari|aki|bare|ovf; --is-public controls whether the image is publicly accessible; --file points at the file to upload; --progress shows upload progress)
[=============================>] 100%
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2016-09-28T08:10:19 |
| deleted | False |
| deleted_at |None |
| disk_format | qcow2 |
| id | 22434c1b-f25f-4ee4-bead-dc19c055d763 |
| is_public | True |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros-0.3.4-x86_64 |
| owner | d14e4731327047c58a2431e9e2221626 |
| protected | False |
| size | 13287936 |
| status | active |
| updated_at | 2016-09-28T08:10:19 |
| virtual_size | None |
+------------------+--------------------------------------+
[root@linux-node1 ~]# glance image-list
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID | Name | Disk Format | ContainerFormat | Size | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| 22434c1b-f25f-4ee4-bead-dc19c055d763 | cirros-0.3.4-x86_64| qcow2 | bare | 13287936 | active |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
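Two optional spot-checks (the id is the one returned above; the path is the default store location mentioned earlier):
[root@linux-node1 ~]# glance image-show cirros-0.3.4-x86_64 #(status should read active)
[root@linux-node1 ~]# qemu-img info /var/lib/glance/images/22434c1b-f25f-4ee4-bead-dc19c055d763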
(IV)
On control node1 (install and configure nova):
[root@linux-node1 ~]# yum -y install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
[root@linux-node1 ~]# vim /etc/nova/nova.conf
#---------------------file start----------------
[DEFAULT]
rabbit_host=10.96.20.118
rabbit_port=5672
rabbit_use_ssl=false
rabbit_userid=guest
rabbit_password=guest
rpc_backend=rabbit
auth_strategy=keystone
novncproxy_base_url=http://10.96.20.118:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=10.96.20.118
vnc_enabled=true
vnc_keymap=en-us
my_ip=10.96.20.118
glance_host=$my_ip
glance_port=9292
lock_path=/var/lib/nova/tmp
state_path=/var/lib/nova
instances_path=$state_path/instances
compute_driver=libvirt.LibvirtDriver
verbose=true
[keystone_authtoken]
auth_host=10.96.20.118
auth_port=35357
auth_protocol=http
auth_uri=http://10.96.20.118:5000
auth_version=v2.0
admin_user=nova
admin_password=nova
admin_tenant_name=service
[database]
connection=mysql://nova:nova@localhost/nova
#---------------------file end---------------
[root@linux-node1 ~]# nova-manage db sync
[root@linux-node1 ~]# mysql -unova -pnova -hlocalhost -e 'use nova;show tables;'
+--------------------------------------------+
| Tables_in_nova |
+--------------------------------------------+
| agent_builds |
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| block_device_mapping |
| bw_usage_cache |
| cells |
| certificates |
| compute_nodes |
| console_pools |
| consoles |
| dns_domains |
| fixed_ips |
| floating_ips |
| instance_actions |
| instance_actions_events |
| instance_faults |
| instance_group_member |
| instance_group_metadata |
| instance_group_policy |
| instance_groups |
| instance_id_mappings |
| instance_info_caches |
| instance_metadata |
| instance_system_metadata |
| instance_type_extra_specs |
| instance_type_projects |
| instance_types |
| instances |
| iscsi_targets |
| key_pairs |
| migrate_version |
| migrations |
| networks |
| pci_devices |
| project_user_quotas |
| provider_fw_rules |
| quota_classes |
| quota_usages |
| quotas |
| reservations |
| s3_images |
| security_group_default_rules |
| security_group_instance_association |
| security_group_rules |
| security_groups |
| services |
| shadow_agent_builds |
| shadow_aggregate_hosts |
| shadow_aggregate_metadata |
| shadow_aggregates |
| shadow_block_device_mapping |
| shadow_bw_usage_cache |
| shadow_cells |
| shadow_certificates |
| shadow_compute_nodes |
| shadow_console_pools |
| shadow_consoles |
| shadow_dns_domains |
| shadow_fixed_ips |
| shadow_floating_ips |
| shadow_instance_actions |
| shadow_instance_actions_events |
| shadow_instance_faults |
| shadow_instance_group_member |
| shadow_instance_group_metadata |
| shadow_instance_group_policy |
| shadow_instance_groups |
| shadow_instance_id_mappings |
| shadow_instance_info_caches |
| shadow_instance_metadata |
| shadow_instance_system_metadata |
| shadow_instance_type_extra_specs |
| shadow_instance_type_projects |
| shadow_instance_types |
| shadow_instances |
| shadow_iscsi_targets |
| shadow_key_pairs |
| shadow_migrate_version |
| shadow_migrations |
| shadow_networks |
| shadow_pci_devices |
| shadow_project_user_quotas |
| shadow_provider_fw_rules |
| shadow_quota_classes |
| shadow_quota_usages |
| shadow_quotas |
| shadow_reservations |
| shadow_s3_images |
| shadow_security_group_default_rules |
|shadow_security_group_instance_association |
| shadow_security_group_rules |
| shadow_security_groups |
| shadow_services |
| shadow_snapshot_id_mappings |
| shadow_snapshots |
| shadow_task_log |
| shadow_virtual_interfaces |
| shadow_volume_id_mappings |
| shadow_volume_usage_cache |
| shadow_volumes |
| snapshot_id_mappings |
| snapshots |
| task_log |
| virtual_interfaces |
| volume_id_mappings |
| volume_usage_cache |
| volumes |
+--------------------------------------------+
[root@linux-node1 ~]# source keystone-admin.sh
[root@linux-node1 ~]# keystone user-create --name nova --pass nova --email nova@linux-node1.example.com
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | nova@linux-node1.example.com |
| enabled | True |
| id | f21848326192439fa7482a78a4cf9203 |
| name | nova |
| username | nova |
+----------+----------------------------------+
[root@linux-node1 ~]# keystone user-role-add --user nova --tenant service --role admin
[root@linux-node1 ~]# keystone service-create --name nova --type compute --description "OpenStack Compute"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 484cf61f5c2b464eb61407b6ef394046|
| name | nova |
| type | compute |
+-------------+----------------------------------+
[root@linux-node1 ~]# keystone endpoint-create --service-id `keystone service-list | awk '/compute/{print $2}'` --publicurl http://10.96.20.118:8774/v2/%\(tenant_id\)s --internalurl http://10.96.20.118:8774/v2/%\(tenant_id\)s --adminurl http://10.96.20.118:8774/v2/%\(tenant_id\)s --region regionOne
+-------------+-------------------------------------------+
| Property | Value |
+-------------+-------------------------------------------+
| adminurl |http://10.96.20.118:8774/v2/%(tenant_id)s |
| id | fab29981641741a3a4ab4767d9868722 |
| internalurl | http://10.96.20.118:8774/v2/%(tenant_id)s|
| publicurl |http://10.96.20.118:8774/v2/%(tenant_id)s |
| region | regionOne |
| service_id | 484cf61f5c2b464eb61407b6ef394046 |
+-------------+-------------------------------------------+
[root@linux-node1 ~]# for i in {api,cert,conductor,consoleauth,novncproxy,scheduler} ; do service openstack-nova-$i start ; done
Starting openstack-nova-api: [ OK ]
Starting openstack-nova-cert: [ OK ]
Starting openstack-nova-conductor: [ OK ]
Starting openstack-nova-consoleauth: [ OK ]
Starting openstack-nova-novncproxy: [ OK ]
Starting openstack-nova-scheduler: [ OK ]
[root@linux-node1 ~]# for i in {api,cert,conductor,consoleauth,novncproxy,scheduler} ; do chkconfig openstack-nova-$i on ; chkconfig --list openstack-nova-$i; done
openstack-nova-api 0:off 1:off 2:on 3:on 4:on 5:on 6:off
openstack-nova-cert 0:off 1:off 2:on 3:on 4:on 5:on 6:off
openstack-nova-conductor 0:off 1:off 2:on 3:on 4:on 5:on 6:off
openstack-nova-consoleauth 0:off 1:off 2:on 3:on 4:on 5:on 6:off
openstack-nova-novncproxy 0:off 1:off 2:on 3:on 4:on 5:on 6:off
openstack-nova-scheduler 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@linux-node1 ~]# nova host-list #(four services: cert, consoleauth, conductor, scheduler)
+-------------------------+-------------+----------+
| host_name | service | zone |
+-------------------------+-------------+----------+
| linux-node1.example.com | conductor | internal |
| linux-node1.example.com | consoleauth |internal |
| linux-node1.example.com | cert | internal |
| linux-node1.example.com | scheduler | internal |
+-------------------------+-------------+----------+
[root@linux-node1 ~]# nova service-list
+------------------+-------------------------+----------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+-------------------------+----------+---------+-------+----------------------------+-----------------+
| nova-conductor | linux-node1.example.com | internal |enabled | up | 2016-09-29T08:32:36.000000| - |
| nova-consoleauth |linux-node1.example.com | internal | enabled | up | 2016-09-29T08:32:35.000000| - |
| nova-cert | linux-node1.example.com | internal |enabled | up | 2016-09-29T08:32:37.000000| - |
| nova-scheduler | linux-node1.example.com | internal |enabled | up | 2016-09-29T08:32:29.000000| - |
+------------------+-------------------------+----------+---------+-------+----------------------------+-----------------+
On compute node2:
[root@linux-node2 ~]# egrep --color 'vmx|svm' /proc/cpuinfo #(if the hardware lacks virtualization support, virt_type=qemu; if it supports it, virt_type=kvm; see the sketch below)
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc up arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc aperfmperf unfair_spinlock pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm ida arat xsaveopt pln pts dts tpr_shadow vnmi ept vpid fsgsbase bmi1 avx2 smep bmi2 invpcid
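A hedged way to act on the check above (the [libvirt] section with virt_type is the icehouse form; the vmx flag in the output means kvm applies on this host):
[root@linux-node2 ~]# egrep -q 'vmx|svm' /proc/cpuinfo && echo kvm || echo qemu #(prints the value to use)
[root@linux-node2 ~]# vim /etc/nova/nova.conf
[libvirt]
virt_type=kvm #(use qemu if the check printed qemu)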
[root@linux-node2 ~]# yum -y install openstack-nova-compute python-novaclient libvirt qemu-kvm virt-manager
[root@linux-node2 ~]# scp 10.96.20.118:/etc/nova/nova.conf /etc/nova/ #(copy from the control node to the compute node)
root@10.96.20.118's password:
nova.conf 100% 97KB 97.1KB/s 00:00
[root@linux-node2 ~]# vim /etc/nova/nova.conf #(when deploying with salt, vncserver_proxyclient_address here comes from a jinja template, sketched below)
vncserver_proxyclient_address=10.96.20.119
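A hypothetical fragment of such a salt-managed jinja template (the grains lookup is just one way to resolve each node's eth0 address; the key names are assumptions):
vncserver_proxyclient_address={{ grains['ip_interfaces']['eth0'][0] }}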
[root@linux-node2 ~]# service libvirtd start
Starting libvirtd daemon: libvirtd: relocation error: libvirtd: symbol dm_task_get_info_with_deferred_remove, version Base not defined in file libdevmapper.so.1.02 with link time reference
[FAILED]
[root@linux-node2 ~]# yum -y install device-mapper
[root@linux-node2 ~]# service libvirtd start
Starting libvirtd daemon: [ OK ]
[root@linux-node2 ~]# service messagebus start
Starting system message bus: [ OK ]
[root@linux-node2 ~]# service openstack-nova-compute start
Starting openstack-nova-compute: [ OK ]
[root@linux-node2 ~]# chkconfig libvirtd on
[root@linux-node2 ~]# chkconfig messagebus on
[root@linux-node2 ~]# chkconfig openstack-nova-compute on
[root@linux-node2 ~]# chkconfig --list libvirtd
libvirtd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@linux-node2 ~]# chkconfig --list messagebus
messagebus 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@linux-node2 ~]# chkconfig --list openstack-nova-compute
openstack-nova-compute 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Check again on node1:
[root@linux-node1 ~]# source keystone-admin.sh
[root@linux-node1 ~]# nova host-list
+-------------------------+-------------+----------+
| host_name | service | zone |
+-------------------------+-------------+----------+
| linux-node1.example.com | conductor | internal |
| linux-node1.example.com | consoleauth |internal |
| linux-node1.example.com | cert | internal |
| linux-node1.example.com | scheduler | internal |
| linux-node2.example.com| compute | nova |
+-------------------------+-------------+----------+
[root@linux-node1 ~]# nova service-list
+------------------+-------------------------+----------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+-------------------------+----------+---------+-------+----------------------------+-----------------+
| nova-conductor | linux-node1.example.com | internal |enabled | up | 2016-09-29T08:45:06.000000| - |
| nova-consoleauth |linux-node1.example.com | internal | enabled | up | 2016-09-29T08:45:05.000000| - |
| nova-cert | linux-node1.example.com | internal |enabled | up | 2016-09-29T08:45:07.000000| - |
| nova-scheduler | linux-node1.example.com | internal |enabled | up | 2016-09-29T08:45:10.000000| - |
| nova-compute | linux-node2.example.com | nova | enabled | up | 2016-09-29T08:45:07.000000| - |
+------------------+-------------------------+----------+---------+-------+----------------------------+-----------------+
(V) neutron must be installed on both the control node and the compute node
On control node1:
[root@linux-node1 ~]# yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient
[root@linux-node1 ~]# vim /etc/neutron/neutron.conf
#-----------------file start----------------
[DEFAULT]
verbose = True
debug = True
state_path = /var/lib/neutron
lock_path = $state_path/lock
core_plugin = ml2
service_plugins = router,firewall,lbaas
api_paste_config = /usr/share/neutron/api-paste.ini
auth_strategy = keystone
rabbit_host = 10.96.20.118
rabbit_password = guest
rabbit_port = 5672
rabbit_userid = guest
rabbit_virtual_host = /
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://10.96.20.118:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = 06c47e96dbf7429bbff4f93822222ca9
nova_admin_password = nova
nova_admin_auth_url = http://10.96.20.118:35357/v2.0
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[keystone_authtoken]
# auth_host = 127.0.0.1
auth_host = 10.96.20.118
auth_port = 35357
auth_protocol = http
# admin_tenant_name = %SERVICE_TENANT_NAME%
admin_tenant_name = service
# admin_user = %SERVICE_USER%
admin_user = neutron
# admin_password = %SERVICE_PASSWORD%
admin_password = neutron
[database]
connection = mysql://neutron:neutron@localhost:3306/neutron
[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
#--------------------file end------------------------
[root@linux-node1 ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = flat,vlan,gre,vxlan
mechanism_drivers = linuxbridge,openvswitch
[ml2_type_flat]
flat_networks = physnet1
[securitygroup]
enable_security_group = True
[root@linux-node1 ~]# vim /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
[vlans]
network_vlan_ranges = physnet1
[linux_bridge]
physical_interface_mappings = physnet1:eth0
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = True
[root@linux-node1 ~]# vim /etc/nova/nova.conf #(在nova中配置neutron)
[DEFAULT]
neutron_url=http://10.96.20.118:9696
neutron_admin_username=neutron
neutron_admin_password=neutron
neutron_admin_tenant_id=06c47e96dbf7429bbff4f93822222ca9
neutron_admin_tenant_name=service
neutron_admin_auth_url=http://10.96.20.118:5000/v2.0
neutron_auth_strategy=keystone
network_api_class=nova.network.neutronv2.api.API
linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
security_group_api=neutron
firewall_driver=nova.virt.firewall.NoopFirewallDriver
vif_driver=nova.virt.libvirt.vif.NeutronLinuxBridgeVIFDriver
[root@linux-node1 ~]# for i in {api,conductor,scheduler} ; do service openstack-nova-$i restart ; done
Stopping openstack-nova-api: [ OK ]
Starting openstack-nova-api: [ OK ]
Stopping openstack-nova-conductor: [ OK ]
Starting openstack-nova-conductor: [ OK ]
Stopping openstack-nova-scheduler: [ OK ]
Starting openstack-nova-scheduler: [ OK ]
[root@linux-node1 ~]# keystone user-create --name neutron --pass neutron --email neutron@linux-node1.example.com
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | neutron@linux-node1.example.com |
| enabled | True |
| id | b4ece50c887848b4a1d7ff54c799fd4d|
| name | neutron |
| username | neutron |
+----------+----------------------------------+
[root@linux-node1 ~]# keystone user-role-add --user neutron --tenant service --role admin
[root@linux-node1 ~]# keystone service-create --name neutron --type network --description "OpenStack Networking"
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id |e967e7b3589647e68e26cabd587ebef4 |
| name | neutron |
| type | network |
+-------------+----------------------------------+
[root@linux-node1 ~]# keystone endpoint-create --service-id `keystone service-list | awk '/network/{print $2}'` --publicurl http://10.96.20.118:9696 --internalurl http://10.96.20.118:9696 --adminurl http://10.96.20.118:9696 --region regionOne
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://10.96.20.118:9696 |
| id |b16b9392d8344fd5bbe01cc83be954d8 |
| internalurl | http://10.96.20.118:9696 |
| publicurl | http://10.96.20.118:9696 |
| region | regionOne |
| service_id | e967e7b3589647e68e26cabd587ebef4 |
+-------------+----------------------------------+
[root@linux-node1 ~]# keystone service-list
+----------------------------------+----------+----------+-------------------------+
| id | name | type | description |
+----------------------------------+----------+----------+-------------------------+
| 7f2db750a630474b82740dacb55e70b3 | glance | image | OpenStack image Service |
| 93adbcc42e6145a39ecab110b3eb1942 | keystone | identity | OpenStack identity |
| e967e7b3589647e68e26cabd587ebef4 | neutron  | network  | OpenStack Networking    |
| 484cf61f5c2b464eb61407b6ef394046 | nova     | compute  | OpenStack Compute       |
+----------------------------------+----------+----------+-------------------------+
[root@linux-node1 ~]# keystone endpoint-list
+----------------------------------+-----------+-------------------------------------------+-------------------------------------------+-------------------------------------------+----------------------------------+
| id | region | publicurl | internalurl | adminurl | service_id |
+----------------------------------+-----------+-------------------------------------------+-------------------------------------------+-------------------------------------------+----------------------------------+
| 9815501af47f464db00cfb2eb30c649d | regionOne | http://10.96.20.118:9292                  | http://10.96.20.118:9292                  | http://10.96.20.118:9292                  | 7f2db750a630474b82740dacb55e70b3 |
| b16b9392d8344fd5bbe01cc83be954d8 | regionOne | http://10.96.20.118:9696                  | http://10.96.20.118:9696                  | http://10.96.20.118:9696                  | e967e7b3589647e68e26cabd587ebef4 |
| fab29981641741a3a4ab4767d9868722 | regionOne | http://10.96.20.118:8774/v2/%(tenant_id)s | http://10.96.20.118:8774/v2/%(tenant_id)s | http://10.96.20.118:8774/v2/%(tenant_id)s | 484cf61f5c2b464eb61407b6ef394046 |
| fd14210787ab44d2b61480598b1c1c82 | regionOne | http://10.96.20.118:5000/v2.0             | http://10.96.20.118:5000/v2.0             | http://10.96.20.118:35357/v2.0            | 93adbcc42e6145a39ecab110b3eb1942 |
+----------------------------------+-----------+-------------------------------------------+-------------------------------------------+-------------------------------------------+----------------------------------+
[root@linux-node1 ~]# neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini #(start in the foreground to verify the configuration)
……
2016-09-29 23:09:37.035 69080 DEBUG neutron.service [-] ******************************************************************************** log_opt_values /usr/lib/python2.6/site-packages/oslo/config/cfg.py:1955
2016-09-29 23:09:37.035 69080 INFO neutron.service [-] Neutron service started, listening on 0.0.0.0:9696
2016-09-29 23:09:37.044 69080 INFO neutron.openstack.common.rpc.common [-] Connected to AMQP server on 10.96.20.118:5672
2016-09-29 23:09:37.046 69080 INFO neutron.wsgi [-] (69080) wsgi starting up on http://0.0.0.0:9696/
[root@linux-node1 ~]# vim /etc/init.d/neutron-server #(point the init script at the config files; the entries defined in configs must not contain stray spaces)
configs=(
"/etc/neutron/neutron.conf" \
"/etc/neutron/plugins/ml2/ml2_conf.ini" \
"/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini" \
)
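For reference, the init script expands this configs array into repeated --config-file options, roughly like the following (a simplified sketch of what the stock script does, not a verbatim copy):
# each entry in configs becomes one --config-file option
args=""
for f in "${configs[@]}" ; do
    args="$args --config-file $f"
done
# the script then launches approximately: neutron-server $args --log-file /var/log/neutron/server.log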
[root@linux-node1 ~]# service neutron-server start #(the controller node starts two services, neutron-server and neutron-linuxbridge-agent; compute nodes start only neutron-linuxbridge-agent)
Starting neutron: [ OK ]
[root@linux-node1 ~]# service neutron-linuxbridge-agent start
Starting neutron-linuxbridge-agent: [ OK ]
[root@linux-node1 ~]# chkconfig neutron-server on
[root@linux-node1 ~]# chkconfig neutron-linuxbridge-agent on
[root@linux-node1 ~]# chkconfig --list neutron-server
neutron-server 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@linux-node1 ~]# chkconfig --list neutron-linuxbridge-agent
neutron-linuxbridge-agent 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@linux-node1 ~]# netstat -tnlp | grep :9696
tcp 0 0 0.0.0.0:9696 0.0.0.0:* LISTEN 69154/python
[root@linux-node1 ~]# . keystone-admin.sh
[root@linux-node1 ~]# neutron ext-list #(list the loaded extensions to confirm neutron-server started successfully)
+-----------------------+-----------------------------------------------+
| alias | name |
+-----------------------+-----------------------------------------------+
| service-type          | Neutron Service Type Management               |
| ext-gw-mode           | Neutron L3 Configurable external gateway mode |
| security-group | security-group |
| l3_agent_scheduler | L3 Agent Scheduler |
| lbaas_agent_scheduler | Loadbalancer Agent Scheduler                  |
| fwaas | Firewall service |
| binding | Port Binding |
| provider | Provider Network |
| agent | agent |
| quotas | Quota management support |
| dhcp_agent_scheduler | DHCP Agent Scheduler |
| multi-provider | Multi Provider Network |
| external-net | Neutron external network |
| router | Neutron L3 Router |
| allowed-address-pairs | Allowed Address Pairs                         |
| extra_dhcp_opt | Neutron Extra DHCP opts |
| lbaas | LoadBalancing service |
| extraroute | Neutron Extra Route |
+-----------------------+-----------------------------------------------+
[root@linux-node1 ~]# neutron agent-list #(the compute node's entry appears only after its agent has started successfully)
+--------------------------------------+--------------------+-------------------------+-------+----------------+
| id                                   | agent_type         | host                    | alive | admin_state_up |
+--------------------------------------+--------------------+-------------------------+-------+----------------+
| 1283a47d-4d1b-4403-9d0e-241da803762b | Linux bridge agent | linux-node1.example.com | :-)   | True           |
+--------------------------------------+--------------------+-------------------------+-------+----------------+
node2計算節點操做:
[root@linux-node2 ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
[root@linux-node2 ~]# sysctl -p
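A quick read-back confirms the kernel picked up the new values:
[root@linux-node2 ~]# sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 1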
[root@linux-node2 ~]# yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient #(same packages as on the controller node)
[root@linux-node2 ~]# scp 10.96.20.118:/etc/neutron/neutron.conf /etc/neutron/
root@10.96.20.118's password:
neutron.conf 100% 18KB 18.1KB/s 00:00
[root@linux-node2 ~]# scp 10.96.20.118:/etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/
root@10.96.20.118's password:
ml2_conf.ini 100% 2447 2.4KB/s 00:00
[root@linux-node2 ~]# scp 10.96.20.118:/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini /etc/neutron/plugins/linuxbridge/
root@10.96.20.118's password:
linuxbridge_conf.ini 100% 3238 3.2KB/s 00:00
[root@linux-node2 ~]# scp 10.96.20.118:/etc/init.d/neutron-server /etc/init.d/
root@10.96.20.118's password:
neutron-server 100% 1861 1.8KB/s 00:00
[root@linux-node2 ~]# scp 10.96.20.118:/etc/init.d/neutron-linuxbridge-agent /etc/init.d/
root@10.96.20.118's password:
neutron-linuxbridge-agent 100% 1824 1.8KB/s 00:00
[root@linux-node2 ~]# service neutron-linuxbridge-agent start #(the compute node starts only neutron-linuxbridge-agent)
Starting neutron-linuxbridge-agent: [ OK ]
[root@linux-node2 ~]# chkconfig neutron-linuxbridge-agent on
[root@linux-node2 ~]# chkconfig --list neutron-linuxbridge-agent
neutron-linuxbridge-agent 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@linux-node1 ~]# neutron agent-list #(check from the controller node)
+--------------------------------------+--------------------+-------------------------+-------+----------------+
| id                                   | agent_type         | host                    | alive | admin_state_up |
+--------------------------------------+--------------------+-------------------------+-------+----------------+
| 1283a47d-4d1b-4403-9d0e-241da803762b | Linux bridge agent | linux-node1.example.com | :-)   | True           |
| 8d061358-ddfa-4979-bf2e-5d8c1c7a7f65 | Linux bridge agent | linux-node2.example.com | :-)   | True           |
+--------------------------------------+--------------------+-------------------------+-------+----------------+
(6) Configure and install horizon (openstack-dashboard)
在node1控制節點上操做:
[root@linux-node1 ~]# yum -y install openstack-dashboard httpd mod_wsgi memcached python-memcached
[root@linux-node1 ~]# vim /etc/openstack-dashboard/local_settings #(line 15: ALLOWED_HOSTS; uncomment CACHES= on lines 98-103; line 128: OPENSTACK_HOST=)
ALLOWED_HOSTS = ['horizon.example.com','localhost', '10.96.20.118']
CACHES = {
'default': {
'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION' : '127.0.0.1:11211',
}
}
OPENSTACK_HOST = "10.96.20.118"
[root@linux-node1 ~]# scp /etc/nova/nova.conf 10.96.20.119:/etc/nova/
root@10.96.20.119's password:
nova.conf 100% 98KB 97.6KB/s 00:00
在node2計算節點操做以下:
[root@linux-node2 ~]# vim /etc/nova/nova.conf #(on the compute node, change vncserver_proxyclient_address to the local IP)
vncserver_proxyclient_address=10.96.20.119
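The copied nova.conf only takes effect once nova-compute re-reads it, so restart the service on the compute node (service name as used by this RDO-style install):
[root@linux-node2 ~]# service openstack-nova-compute restart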
[root@linux-node2 ~]# ps aux | egrep "nova-compute|neutron-linuxbridge-agent" | grep -v grep
neutron 27827 0.0 2.9 268448 29600 ? S Sep29 0:00 /usr/bin/python /usr/bin/neutron-linuxbridge-agent --log-file /var/log/neutron/linuxbridge-agent.log --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
nova 28211 0.9 5.6 1120828 56460 ? Sl 00:12 0:00 /usr/bin/python /usr/bin/nova-compute --logfile /var/log/nova/compute.log
On the node1 controller node:
[root@linux-node1 ~]# ps aux | egrep "mysqld|rabbitmq|nova|keystone|glance|neutron" #(make sure all of these services are running)
……
[root@linux-node1 ~]# . keystone-admin.sh
[root@linux-node1 ~]# nova host-list #(5 entries)
+-------------------------+-------------+----------+
| host_name | service | zone |
+-------------------------+-------------+----------+
| linux-node1.example.com | conductor | internal |
| linux-node1.example.com | consoleauth | internal |
| linux-node1.example.com | cert | internal |
| linux-node1.example.com | scheduler | internal |
| linux-node2.example.com | compute | nova |
+-------------------------+-------------+----------+
[root@linux-node1 ~]# neutron agent-list #(2 entries)
+--------------------------------------+--------------------+-------------------------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+-------------------------+-------+----------------+
| 1283a47d-4d1b-4403-9d0e-241da803762b | Linux bridge agent | linux-node1.example.com | :-)   | True           |
| 8d061358-ddfa-4979-bf2e-5d8c1c7a7f65 | Linux bridge agent | linux-node2.example.com | :-)   | True           |
+--------------------------------------+--------------------+-------------------------+-------+----------------+
[root@linux-node1 ~]# service memcached start
Starting memcached: [ OK ]
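With memcached up, the cache backend Horizon points at can be probed directly (a sketch, assuming a netcat binary is available; memcached should answer with STAT lines):
[root@linux-node1 ~]# printf 'stats\nquit\n' | nc 127.0.0.1 11211 | head -3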
[root@linux-node1 ~]# service httpd start
Starting httpd: [ OK ]
[root@linux-node1 ~]# chkconfig memcached on
[root@linux-node1 ~]# chkconfig httpd on
[root@linux-node1 ~]# chkconfig --list memcached
memcached 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@linux-node1 ~]# chkconfig --list httpd
httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Test:
Open http://10.96.20.118/dashboard and log in with username admin, password admin;
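If the login page does not come up, checking the HTTP response from the controller itself helps separate Apache/WSGI problems from browser or network ones (a quick sketch):
[root@linux-node1 ~]# curl -sI http://10.96.20.118/dashboard #(a 200 or a redirect to the login page means httpd and the dashboard WSGI app are answering)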
在控制節點操做(建立網絡):
[root@linux-node1 ~]# . keystone-demo.sh
[root@linux-node1 ~]# keystone tenant-list #(note down the id of the demo tenant)
+----------------------------------+---------+---------+
| id | name | enabled |
+----------------------------------+---------+---------+
| d14e4731327047c58a2431e9e2221626 | admin | True |
| 5ca17bf131f3443c81cf8947a6a2da03 | demo | True |
| 06c47e96dbf7429bbff4f93822222ca9 | service | True |
+----------------------------------+---------+---------+
[root@linux-node1 ~]# neutron net-create --tenant-id 5ca17bf131f3443c81cf8947a6a2da03 falt_net --shared --provider:network_type flat --provider:physical_network physnet1 #(create the network for the demo tenant on the CLI; the subnet is created in the web UI below)
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id                        | e80860ec-1be6-4d11-b8e2-bcb3ea29822b |
| name | falt_net |
| provider:network_type | flat |
| provider:physical_network | physnet1 |
| provider:segmentation_id | |
| shared | True |
| status | ACTIVE |
| subnets | |
| tenant_id | 5ca17bf131f3443c81cf8947a6a2da03 |
+---------------------------+--------------------------------------+
[root@linux-node1 ~]# neutron net-list
+--------------------------------------+----------+---------+
| id | name | subnets |
+--------------------------------------+----------+---------+
| e80860ec-1be6-4d11-b8e2-bcb3ea29822b | falt_net |         |
+--------------------------------------+----------+---------+
Create the subnet in the web UI (a CLI equivalent is sketched after these steps):
Admin --> System Panel --> Networks --> click the network name (falt_net) --> Create Subnet: subnet name flat_subnet, network address 10.96.20.0/24, IP version IPv4, gateway IP 10.96.20.1 --> Next --> Subnet Details: allocation pool 10.96.20.120,10.96.20.130; DNS name servers 123.125.81.6 --> Create
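The same subnet can also be created from the CLI instead of the web UI (a sketch mirroring the values above; on newer neutron clients the DNS option is spelled --dns-nameserver):
[root@linux-node1 ~]# neutron subnet-create falt_net 10.96.20.0/24 --name flat_subnet --gateway 10.96.20.1 --allocation-pool start=10.96.20.120,end=10.96.20.130 --dns-nameservers list=true 123.125.81.6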
Log out from the top-right corner of the web UI and log back in as the demo user:
Project --> Compute --> Instances --> Launch Instance --> instance name demo, flavor m1.tiny, boot source "Boot from image", image cirros-0.3.4-x86_64 (12.7MB) --> Launch (a CLI equivalent is sketched below)
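The equivalent launch from the CLI (a sketch; the net-id is the falt_net id shown by neutron net-list above, run under the demo credentials from keystone-demo.sh):
[root@linux-node1 ~]# nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 --nic net-id=e80860ec-1be6-4d11-b8e2-bcb3ea29822b demo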