March 18, 2016, 14:24
I have been at this for three days and still cannot get it working. But since my advisor wants to use it to take on a project and handed this big task to me, I had to give it my best. After three days of non-stop digging I really have done all I can; the main problem is that there is not much time left. I still need to study and review, and look for a summer internship, so for now this has to stop here. I am writing down the results of these three days of research so that I can pick it up again later if I get the chance.
First, here are a few links:
https://github.com/cmusatyalab/elijah-openstack
https://github.com/cmusatyalab/elijah-cloudlet
http://hail.elijah.cs.cmu.edu/
http://www.aboutyun.com/thread-13063-1-1.html
http://blog.sina.com.cn/s/blog_7643a1bf0102vhga.html
http://blog.csdn.net/bianer199/article/details/39687875
The first link is the main one: it contains the detailed procedure for extending OpenStack with Cloudlet, plus links to the source code. But the steps are abbreviated; much of the process of installing OpenStack through devstack, and the installation of many components, is not covered at all, and that is exactly where the problems lie.
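Since the devstack part is where the documentation gaps are, here is a rough sketch of how a minimal single-node devstack install is normally driven. The repository URL, passwords, and host IP below are placeholder assumptions, not values from the elijah docs, and the elijah-openstack extension itself still has to be installed afterwards by following the README in the first link.

```python
# Sketch of a minimal single-node devstack bring-up, driven from Python.
# The repo URL, passwords and HOST_IP are placeholders; adjust for your host.
# The elijah-openstack extension is installed separately, per its README.
import subprocess
from pathlib import Path

DEVSTACK_REPO = "https://git.openstack.org/openstack-dev/devstack"  # assumed 2016-era URL
WORKDIR = Path.home() / "devstack"

LOCAL_CONF = """\
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=192.168.1.10
"""

def main() -> None:
    if not WORKDIR.exists():
        subprocess.check_call(["git", "clone", DEVSTACK_REPO, str(WORKDIR)])
    # devstack reads its configuration from local.conf in the checkout root.
    (WORKDIR / "local.conf").write_text(LOCAL_CONF)
    # stack.sh installs and starts all OpenStack services on this machine.
    subprocess.check_call(["./stack.sh"], cwd=str(WORKDIR))

if __name__ == "__main__":
    main()
```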
Below is the Wikipedia article on Cloudlet (the original English text):
A cloudlet is a mobility-enhanced small-scale cloud datacenter that is located at the edge of the Internet. The main purpose of the cloudlet is to support resource-intensive and interactive mobile applications by providing powerful computing resources to mobile devices with lower latency. It is a new architectural element that extends today's cloud computing infrastructure. It represents the middle tier of a 3-tier hierarchy: mobile device --- cloudlet --- cloud. A cloudlet can be viewed as a data center in a box whose goal is to bring the cloud closer. The cloudlet term was first coined by M. Satyanarayanan, Victor Bahl, Ramón Cáceres, and Nigel Davies,[1] and a prototype implementation was developed by Carnegie Mellon University as a research project.[2] The concept of cloudlet is also known as mobile edge computing,[3][4] follow me cloud,[5] and mobile micro-cloud.[6] The cloudlet is a good example of a sub-application of the Dew computing[7] paradigm applied to mobile-like devices. Dew computing integrates the cloudlet, microservices, and edge computing, which together with fog and cloud computing form a distributed information service environment.
Many mobile services split the application into a front-end client program and a back-end server program following the traditional client-server model. The front-end mobile application offloads its functionality to the back-end servers for various reasons, such as speeding up processing. With the advent of cloud computing, the back-end server is typically hosted at a cloud datacenter. Though the use of a cloud datacenter offers various benefits such as scalability and elasticity, its consolidation and centralization lead to a large separation between a mobile device and its associated datacenter. End-to-end communication then involves many network hops and results in high latency and low bandwidth.
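To make the client-server split concrete, the sketch below shows the basic offload pattern: the front end ships a compute-heavy request to a back-end HTTP endpoint and falls back to local processing when the server is unreachable. The URL and the /recognize endpoint are hypothetical placeholders, not part of any cloudlet API.

```python
# Basic offload pattern: the mobile front end sends heavy work to a back end.
# BACKEND_URL and the /recognize endpoint are hypothetical placeholders.
import requests

BACKEND_URL = "http://cloudlet.example.com:8080/recognize"

def recognize_locally(image_bytes: bytes) -> str:
    # Slow on-device fallback; a real app would run a local model here.
    return "unknown"

def recognize(image_bytes: bytes) -> str:
    """Offload recognition to the back end, falling back to local processing."""
    try:
        resp = requests.post(BACKEND_URL, data=image_bytes, timeout=0.5)
        resp.raise_for_status()
        return resp.json()["label"]
    except requests.RequestException:
        return recognize_locally(image_bytes)

if __name__ == "__main__":
    print(recognize(b"...image bytes..."))
```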
For reasons of latency, some emerging mobile applications require the cloud offload infrastructure to be close to the mobile device to achieve low response times.[8] In the ideal case, it is just one wireless hop away. For example, the offload infrastructure could be located at a cellular base station, or it could be LAN-connected to a set of Wi-Fi base stations. The individual elements of this offload infrastructure are referred to as cloudlets, and the entire collection of cloudlets is referred to as mobile-edge computing, an industry initiative created by the European Telecommunications Standards Institute (ETSI).[3]
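The latency argument is easy to check for yourself: the sketch below times TCP connects to a LAN-attached cloudlet and to a distant cloud endpoint. Both hostnames and ports are placeholders.

```python
# Crude RTT comparison: one-hop cloudlet vs. distant cloud datacenter.
# Hostnames and ports are placeholders for illustration.
import socket
import time

def connect_time_ms(host: str, port: int, attempts: int = 5) -> float:
    """Average TCP connect time in milliseconds (a rough RTT proxy)."""
    total = 0.0
    for _ in range(attempts):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=2):
            pass
        total += (time.monotonic() - start) * 1000
    return total / attempts

if __name__ == "__main__":
    print("cloudlet (LAN, ~1 wireless hop):", connect_time_ms("cloudlet.local", 80))
    print("cloud (many WAN hops):          ", connect_time_ms("example.com", 443))
```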
Cloudlets aim to support mobile applications that are both resource-intensive and interactive. Augmented reality applications that use head-tracked systems require end-to-end latencies of less than 16 ms.[9] Cloud games with remote rendering also require low latency and high bandwidth.[10] A wearable cognitive assistance system combines a device like Google Glass with cloud-based processing to guide a user through a complex task. This futuristic genre of applications is characterized as "astonishingly transformative" by the report of the 2013 NSF Workshop on Future Directions in Wireless Networking.[11] These applications use cloud resources in the critical path of real-time user interaction. Consequently, they cannot tolerate end-to-end operation latencies of more than a few tens of milliseconds. Apple Siri and Google Now, which perform compute-intensive speech recognition in the cloud, are further examples in this emerging space.
There is significant overlap in the requirements for cloud and cloudlet. At both levels, there is the need for: (a) strong isolation between untrusted user-level computations; (b) mechanisms for authentication, access control, and metering; (c) dynamic resource allocation for user-level computations; and (d) the ability to support a very wide range of user-level computations, with minimal restrictions on their process structure, programming languages, or operating systems. At a cloud datacenter, these requirements are met today using the virtual machine (VM) abstraction. For the same reasons they are used in cloud computing today, VMs are used as the abstraction for cloudlets. Meanwhile, there are a few important differences between cloud and cloudlet.
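Since the VM is the shared abstraction, starting a user-level computation on a cloudlet looks just like booting an instance through the ordinary OpenStack API. Below is a minimal sketch using openstacksdk; the cloud name, image, flavor and network IDs are placeholders you would take from your own deployment.

```python
# Boot an isolated VM for one user-level computation via openstacksdk.
# "cloudlet" must match an entry in clouds.yaml; the UUIDs are placeholders.
import openstack

IMAGE_ID = "REPLACE-WITH-IMAGE-UUID"
FLAVOR_ID = "REPLACE-WITH-FLAVOR-UUID"
NETWORK_ID = "REPLACE-WITH-NETWORK-UUID"

conn = openstack.connect(cloud="cloudlet")  # credentials come from clouds.yaml

server = conn.compute.create_server(
    name="offload-vm",
    image_id=IMAGE_ID,
    flavor_id=FLAVOR_ID,
    networks=[{"uuid": NETWORK_ID}],
)
server = conn.compute.wait_for_server(server)  # block until the VM is ACTIVE
print(server.id, server.status)
```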
Unlike cloud data centers, which are optimized for launching existing VM images from their storage tier, cloudlets need to be much more agile in their provisioning. Their association with mobile devices is highly dynamic, with considerable churn due to user mobility. A user from far away may unexpectedly show up at a cloudlet (e.g., after getting off an international flight) and try to use it for an application such as a personalized language translator. For that user, the provisioning delay before he is able to use the application impacts usability.[12]
If a mobile device user moves away from the cloudlet he is currently using, interactive response will degrade as the logical network distance increases. To address this effect of user mobility, the services offloaded to the first cloudlet need to be transferred to a second cloudlet while maintaining end-to-end network quality.[13] This resembles live migration in cloud computing, but differs considerably in that the VM handoff happens across a wide-area network (WAN).
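The handoff mechanism itself is not spelled out here, so the sketch below only simulates the control flow one would expect: suspend on the source cloudlet, move the captured state across the WAN, resume on the destination. Every function and data structure in it is a hypothetical stand-in, not the elijah implementation.

```python
# Simulated control flow of a cloudlet-to-cloudlet VM handoff.
# All of this is a hypothetical stand-in; a real handoff moves actual
# VM memory/disk state across the WAN.
from dataclasses import dataclass, field

@dataclass
class Cloudlet:
    name: str
    vms: dict = field(default_factory=dict)  # vm_id -> opaque state blob

def suspend_and_capture(src: Cloudlet, vm_id: str) -> bytes:
    """Suspend the VM on the source cloudlet and capture its state."""
    return src.vms.pop(vm_id)

def transfer_over_wan(state: bytes) -> bytes:
    """Placeholder for the WAN transfer step."""
    return state

def resume(dst: Cloudlet, vm_id: str, state: bytes) -> None:
    """Resume the VM on the cloudlet closest to the user."""
    dst.vms[vm_id] = state

def handoff(src: Cloudlet, dst: Cloudlet, vm_id: str) -> None:
    resume(dst, vm_id, transfer_over_wan(suspend_and_capture(src, vm_id)))

if __name__ == "__main__":
    a = Cloudlet("cloudlet-a", {"vm-1": b"vm state"})
    b = Cloudlet("cloudlet-b")
    handoff(a, b, "vm-1")
    print(a.vms, b.vms)  # {} {'vm-1': b'vm state'}
```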
Since the cloudlet model requires reconfiguration or additional deployment of hardware/software, it is important to provide a systematic way to incentivize the deployment. However, it can face a classic bootstrapping problem: cloudlets need practical applications to incentivize cloudlet deployment, but developers cannot rely heavily on cloudlet infrastructure until it is widely deployed. To break this deadlock and bootstrap cloudlet deployment, researchers at Carnegie Mellon University proposed OpenStack++, which extends OpenStack to leverage its open ecosystem.[2] OpenStack++ provides a set of cloudlet-specific APIs as OpenStack extensions.[14]
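Invoking such an extension should follow the usual OpenStack pattern of POSTing a server action to the compute API; the sketch below shows that pattern only. The endpoint, token, server ID and the action name are placeholders; the real action names are defined in the elijah-openstack source linked above.

```python
# Generic OpenStack "server action" POST, used here to sketch how a
# cloudlet-specific extension action might be invoked.
# The endpoint, token, server UUID and action name are all placeholders.
import requests

OS_COMPUTE_URL = "http://controller:8774/v2.1"  # placeholder compute endpoint
OS_TOKEN = "REPLACE-WITH-KEYSTONE-TOKEN"        # placeholder auth token
SERVER_ID = "REPLACE-WITH-SERVER-UUID"

def server_action(body: dict) -> dict:
    """POST an action to /servers/{id}/action and return the JSON reply."""
    resp = requests.post(
        f"{OS_COMPUTE_URL}/servers/{SERVER_ID}/action",
        json=body,
        headers={"X-Auth-Token": OS_TOKEN},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json() if resp.text else {}

if __name__ == "__main__":
    # Hypothetical cloudlet action name, for illustration only.
    print(server_action({"cloudlet-overlay-start": {"overlay_url": "http://..."}}))
```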