Zircon
Source: https://github.com/zhangpf/fuchsia-docs-zh-CN/tree/master/docs/the-book
Mirror (China): https://hexang.org/mirrors/fuchsia.git
Fuchsia is not Linux
Snapshot of the English original
A modular, capability-based operating system
This document is a collection of articles describing the Fuchsia operating system, organized around particular subsystems. Sections will be filled in over time.
Table of Contents
The Zircon Kernel
Zircon is the microkernel that underlies the rest of Fuchsia. Zircon also provides core drivers and Fuchsia's libc implementation.
Zircon Core
- Device Manager & Device Hosts
- Device Driver Development (DDK)
- C Library (libc)
- POSIX I/O (libfdio)
- Process Creation / ELF Loading (liblaunchpad)
Framework
Storage
Networking
Graphics
Media
- Audio
- Video
- Digital Rights Management (DRM)
Intelligence
- Context
- Agent Framework
- Suggestions
User Interface
- Device, user, and story shells
- Stories and modules
Backwards Compatibility
- POSIX Lite (what subset of POSIX we support and why)
Update and Recovery
- Verified Boot
- Updater
C++ in Zircon
A subset of the C++14 language is used in the Zircon tree. This includes both the upper layers of the kernel (above the lk layer) and some userspace code. In particular, Zircon does not use the C++ standard library, and many language features are not used or allowed.

Language features

* Not allowed
  - Exceptions
  - RTTI and `dynamic_cast`
  - Operator overloading
  - Default parameters
  - Virtual inheritance
  - Statically constructed objects
  - Trailing return type syntax
    - Exception: when necessary for lambdas with otherwise unutterable return types
  - Initializer lists
  - `thread_local` in kernel code
* Allowed
  - Pure interface inheritance
  - Lambdas
  - `constexpr`
  - `nullptr`
  - `enum class`es
  - `template`s
  - Plain old classes
  - `auto`
  - Multiple implementation inheritance
    - But be judicious. This is used widely for e.g. intrusive container mixins.
* Needs more ruling TODO(cpu)
  - Global constructors
    - Currently we have these for global data structures.
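As an illustration of the permitted style, a hypothetical snippet might look like the following. None of these names come from the Zircon tree; this is only a sketch of the rules above (pure interface inheritance, `enum class`, `constexpr`, `nullptr`, and error returns instead of exceptions):

```cpp
#include <cstdint>

// Pure interface inheritance: only pure-virtual methods, no state.
class ReadableInterface {
public:
    virtual int32_t Read(void* buf, uint32_t len) = 0;
protected:
    ~ReadableInterface() = default;
};

// enum class instead of a plain enum.
enum class State : uint8_t { kIdle, kBusy };

// constexpr for compile-time constants.
constexpr uint32_t kMaxTransfer = 4096;

// A plain old class implementing the interface; errors are reported
// via return values, since exceptions are not allowed.
class NullDevice : public ReadableInterface {
public:
    int32_t Read(void* buf, uint32_t len) override {
        if (buf == nullptr || len > kMaxTransfer) {
            return -1;  // bad arguments: no exception is thrown
        }
        state_ = State::kIdle;
        return 0;  // zero bytes read
    }
private:
    State state_ = State::kBusy;
};
```

Note the absence of operator overloading, default parameters, and statically constructed objects.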
Quick Start Guide
Snapshot of the English original
Checking out the Zircon code
Note: the Fuchsia source tree also includes Zircon's, so see Fuchsia's Getting Started guide in that case. Follow this document only if you intend to work purely on Zircon.
The Zircon git repository is located at: https://fuchsia.googlesource.com/zircon (Translator's note: a GitHub mirror is available.)
Assuming the $SRC variable is set in your environment (translator's note: the Fuchsia working directory), clone the Zircon repository locally:
```
git clone https://fuchsia.googlesource.com/zircon $SRC/zircon
# or
git clone https://github.com/fuchsia-mirror/zircon $SRC/zircon
```
The rest of this document assumes that Zircon has been checked out into the $SRC/zircon directory, and that the toolchains, QEMU, etc. will be built under $SRC. Several make invocations below are parallelized with the -j32 option; if that level of parallelism is taxing for your machine, try -j16 or -j8 instead.
Preparing the build environment
Ubuntu
On Ubuntu, the following command installs the required dependencies:
```
sudo apt-get install texinfo libglib2.0-dev autoconf libtool libsdl-dev build-essential
```
macOS
Install the Xcode Command Line Tools:
```
xcode-select --install
```
安裝其餘的依賴項:
- 使用Homebrew:
```
brew install wget pkg-config glib autoconf automake libtool
```
- Using MacPorts:
```
port install autoconf automake libtool libpixman pkgconfig glib2
```
Installing the toolchains
If you are developing on Linux or macOS, prebuilt toolchains are available for download. Just run the following script from your Zircon working directory:
```
./scripts/download-prebuilt
```
If you would rather build the toolchains yourself, follow the steps later in this document.
Building Zircon
The build results are placed under $SRC/zircon/build-{arm64,x64}. In the examples below, $BUILDDIR refers to the output directory of the relevant build target.
```
cd $SRC/zircon

# for aarch64
make -j32 arm64

# for x64
make -j32 x64
```
Using Clang
To build Zircon with Clang as its toolchain, pass the USE_CLANG=true option when invoking make.
```
cd $SRC/zircon

# for aarch64
make -j32 USE_CLANG=true arm64

# for x86-64
make -j32 USE_CLANG=true x64
```
Building Zircon for all target architectures
```
# the -r option also builds the release versions
./scripts/buildall -r
```
Please build for all target architectures before submitting code changes, to ensure the code works on all of them.
QEMU
You can skip this step if you are only testing on real hardware, but the emulator is a convenient way to run quick local tests, so it is usually worth it.
For building and using QEMU with Zircon, see the corresponding documentation (English original).
Building toolchains (optional)
If the prebuilt toolchains do not work for your system, there are also scripts that download and build suitable gcc toolchains for building Zircon on ARM64 and x86-64:
```
cd $SRC
git clone https://fuchsia.googlesource.com/third_party/gcc_none_toolchains toolchains
cd toolchains
./do-build --target arm-none
./do-build --target aarch64-none
./do-build --target x86_64-none
```
Setting the PATH environment variable for the toolchains
If you are using the prebuilt toolchains, the build can find them automatically, so you can skip this step.
```
# on Linux
export PATH=$PATH:$SRC/toolchains/aarch64-elf-5.3.0-Linux-x86_64/bin
export PATH=$PATH:$SRC/toolchains/x86_64-elf-5.3.0-Linux-x86_64/bin

# on Mac
export PATH=$PATH:$SRC/toolchains/aarch64-elf-5.3.0-Darwin-x86_64/bin
export PATH=$PATH:$SRC/toolchains/x86_64-elf-5.3.0-Darwin-x86_64/bin
```
Copying files to and from Zircon
With a local IPv6 network configured, you can use the host tool ./build-zircon-ARCH/tools/netcp to copy files.
```
# copy the file myprogram over to Zircon
netcp myprogram :/tmp/myprogram

# copy the file myprogram back from Zircon to the development host
netcp :/tmp/myprogram myprogram
```
Including additional userspace files
The Zircon build produces a bootfs image containing the userspace components required for the system to boot (the device manager, some device drivers, etc.). In addition, the kernel can include an extra image, provided by QEMU or the bootloader in the form of a ramdisk image.
To produce such a bootfs image, use the mkbootfs tool that is generated as part of the build. mkbootfs can assemble a bootfs image in either of two ways: from a target directory (in which case every file and subdirectory of that directory is included), or from a manifest file that lists the files to include, one per line:
```
$BUILDDIR/tools/mkbootfs -o extra.bootfs @/path/to/directory

echo "issue.txt=/etc/issue" > manifest
echo "etc/hosts=/etc/hosts" >> manifest
$BUILDDIR/tools/mkbootfs -o extra.bootfs manifest
```
Once Zircon has booted, the files in the bootfs image appear under the /boot directory, so the "hosts" file from the example above would be found at /boot/etc/hosts.
For QEMU, use the -x option of the run-zircon-* scripts to specify an extra bootfs image.
Booting over the network
Two mechanisms support booting Zircon over the network: Gigaboot and Zirconboot. Gigaboot is an EFI-based bootloader, while Zirconboot allows a minimal Zircon system to act as a bootloader for Zircon itself.
On hardware that supports booting via EFI (such as Acer and NUC devices), both mechanisms are supported. For other systems, Zirconboot may be the only option for network booting.
Booting with Gigaboot
The GigaBoot20x6-based bootloader (English original) uses a simple UDP-over-IPv6 network boot protocol that requires no special server configuration or administrative privileges.
It achieves this with IPv6 link-local addressing and broadcast, allowing the target device to advertise that it is bootable and the development host to send a boot image to the target once that advertisement is received.
If you have hardware running GigaBoot20x6 (such as an Intel NUC with a Broadwell- or Skylake-generation CPU), first create a bootable USB drive manually (English original), or use the script (Linux only). Then run:
```
$BUILDDIR/tools/bootserver $BUILDDIR/zircon.bin

# if you have an extra bootfs image (see above):
$BUILDDIR/tools/bootserver $BUILDDIR/zircon.bin /path/to/extra.bootfs
```
By default, the boot server runs indefinitely: whenever it detects a network boot request, it sends the kernel (plus the bootfs, if one was given) to the requesting device. If you pass the -1 option when starting the boot server, it will stop and exit after one successful boot.
Booting with Zirconboot
Zirconboot is a mechanism that allows a Zircon system to act as the bootloader for Zircon itself. Zirconboot speaks the same boot protocol as the Gigaboot mechanism described above.
To use Zirconboot, pass the netsvc.netboot=true option to Zircon on the kernel command line. When Zirconboot starts, it attempts to fetch a Zircon system from a boot server running on the development host and boot it.
Viewing logs over the network
The Zircon build includes a network log service by default, which broadcasts the system log over the local IPv6 network via UDP. Note that this is only a quick-and-dirty mechanism; the protocol will certainly change at some point.
For now, if you run Zircon in QEMU with the -N option, or on real hardware with a supported Ethernet interface (an ASIX USB network adapter, or the Intel NIC on a NUC), the loglistener tool can receive the log broadcasts over the local network:
```
$BUILDDIR/tools/loglistener
```
Debugging
For random tips on debugging in the Zircon environment, see the Debugging (English original) section.
Contributing changes
- See the Contributing (English original) section.
Zircon Driver Development Kit (DDK)
Zircon Device Model
Introduction
In Zircon, device drivers are implemented as ELF shared libraries (DSOs) which are loaded into Device Host (devhost) processes. The Device Manager (devmgr) process contains the Device Coordinator, which keeps track of drivers and devices, manages the discovery of drivers, the creation and direction of Device Host processes, and maintains the Device Filesystem (devfs), which is the mechanism through which userspace services and applications (constrained by their namespaces) gain access to devices.
The Device Coordinator views devices as part of a single unified tree. The branches (and sub-branches) of that tree consist of some number of devices within a Device Host process. The decision as to how to sub-divide the overall tree among Device Hosts is based on system policy for isolating drivers for security or stability reasons and colocating drivers for performance reasons.
NOTE: The current policy is simple (each device representing a physical bus-master-capable hardware device and its children are placed into a separate devhost). It will evolve to provide finer-grained partitioning.
Devices, Drivers, and Device Hosts
Here's a (slightly trimmed for clarity) dump of the tree of devices in Zircon running on Qemu x86-64:
```
$ dm dump
[root]
   <root> pid=1509
      [null] pid=1509 /boot/driver/builtin.so
      [zero] pid=1509 /boot/driver/builtin.so
   [misc]
      <misc> pid=1645
         [console] pid=1645 /boot/driver/console.so
         [dmctl] pid=1645 /boot/driver/dmctl.so
         [ptmx] pid=1645 /boot/driver/pty.so
         [i8042-keyboard] pid=1645 /boot/driver/pc-ps2.so
            [hid-device-001] pid=1645 /boot/driver/hid.so
         [i8042-mouse] pid=1645 /boot/driver/pc-ps2.so
            [hid-device-002] pid=1645 /boot/driver/hid.so
   [sys]
      <sys> pid=1416 /boot/driver/bus-acpi.so
         [acpi] pid=1416 /boot/driver/bus-acpi.so
         [pci] pid=1416 /boot/driver/bus-acpi.so
            [00:00:00] pid=1416 /boot/driver/bus-pci.so
            [00:01:00] pid=1416 /boot/driver/bus-pci.so
               <00:01:00> pid=2015 /boot/driver/bus-pci.proxy.so
                  [bochs_vbe] pid=2015 /boot/driver/bochs-vbe.so
                     [framebuffer] pid=2015 /boot/driver/framebuffer.so
            [00:02:00] pid=1416 /boot/driver/bus-pci.so
               <00:02:00> pid=2052 /boot/driver/bus-pci.proxy.so
                  [intel-ethernet] pid=2052 /boot/driver/intel-ethernet.so
                     [ethernet] pid=2052 /boot/driver/ethernet.so
            [00:1f:00] pid=1416 /boot/driver/bus-pci.so
            [00:1f:02] pid=1416 /boot/driver/bus-pci.so
               <00:1f:02> pid=2156 /boot/driver/bus-pci.proxy.so
                  [ahci] pid=2156 /boot/driver/ahci.so
            [00:1f:03] pid=1416 /boot/driver/bus-pci.so
```
The names in square brackets are devices. The names in angle brackets are proxy devices, which are instantiated in the "lower" devhost, when process isolation is being provided. The pid= field indicates the process object id of the devhost process that device is contained within. The path indicates which driver implements that device.
Above, for example, the pid 1416 devhost contains the pci bus driver, which has created devices for each PCI device in the system. PCI device 00:02:00 happens to be an intel ethernet interface, which we have a driver for (intel-ethernet.so). A new devhost (pid 2052) is created, set up with a proxy device for PCI 00:02:00, and the intel ethernet driver is loaded and bound to it.
Proxy devices are invisible within the Device Filesystem, so this ethernet device appears as /dev/sys/pci/00:02:00/intel-ethernet.
Protocols, Interfaces, and Classes
Devices may implement Protocols, which are C ABIs used by child devices to interact with parent devices in a device-specific manner. The PCI Protocol, USB Protocol, Block Core Protocol, and Ethermac Protocol, are examples of these. Protocols are usually in-process interactions between devices in the same devhost, but in cases of driver isolation, they may take place via RPC to a "higher" devhost.
Devices may implement Interfaces, which are RPC protocols used by clients (services, applications, etc.) to interact with the device. The base device interface supports posix-style open/close/read/write IO. Currently, Interfaces are supported via the ioctl operation in the base device interface. In the future, Fuchsia's interface definition language and bindings (FIDL) will be supported.
In many cases a Protocol is used to allow drivers to be simpler by taking advantage of a common implementation of an Interface. For example, the "block" driver implements the common block interface and binds to devices implementing the Block Core Protocol, and the "ethernet" driver does the same thing for the Ethernet Interface and Ethermac Protocol. Some protocols, such as the two cited here, make use of shared memory and non-RPC signaling to achieve greater efficiency, lower latency, and higher throughput than could be achieved otherwise.
Classes represent a promise that a device implements an Interface or Protocol. Devices exist in the Device Filesystem under a topological path, like /sys/pci/00:02:00/intel-ethernet. If they are a specific class, they also appear as an alias under /dev/class/CLASSNAME/.... The intel-ethernet driver implements the Ethermac interface, so it also shows up at /dev/class/ethermac/000. The names within class directories are unique but not meaningful, assigned on demand.
NOTE: Currently names in class directories are 3 digit decimal numbers, but they are likely to change form in the future. Clients should not assume there is any specific meaning to a class alias name.
Device Driver Lifecycle
Device drivers are loaded into devhost processes when it is determined they are needed. What determines whether they are loaded is the Binding Program, which is a description of which devices a driver can bind to. The Binding Program is defined using macros in ddk/binding.h.
An example Binding Program from the Intel Ethernet driver:
```
ZIRCON_DRIVER_BEGIN(intel_ethernet, intel_ethernet_driver_ops, "zircon", "0.1", 9)
    BI_ABORT_IF(NE, BIND_PROTOCOL, ZX_PROTOCOL_PCI),
    BI_ABORT_IF(NE, BIND_PCI_VID, 0x8086),
    BI_MATCH_IF(EQ, BIND_PCI_DID, 0x100E), // Qemu
    BI_MATCH_IF(EQ, BIND_PCI_DID, 0x15A3), // Broadwell
    BI_MATCH_IF(EQ, BIND_PCI_DID, 0x1570), // Skylake
    BI_MATCH_IF(EQ, BIND_PCI_DID, 0x1533), // I210 standalone
    BI_MATCH_IF(EQ, BIND_PCI_DID, 0x15b7), // Skull Canyon NUC
    BI_MATCH_IF(EQ, BIND_PCI_DID, 0x15b8), // I219
    BI_MATCH_IF(EQ, BIND_PCI_DID, 0x15d8), // Kaby Lake NUC
ZIRCON_DRIVER_END(intel_ethernet)
```
The ZIRCON_DRIVER_BEGIN and _END macros include the necessary compiler directives to put the binding program into an ELF NOTE section, which allows it to be inspected by the Device Coordinator without needing to fully load the driver into its process. The second parameter to the _BEGIN macro is a zx_driver_ops_t structure pointer (defined by [ddk/driver.h](../../system/ulib/ddk/include/ddk/driver.h)), which defines the init, bind, create, and release methods.
init() is invoked when a driver is loaded into a Device Host process and allows for any global initialization. Typically none is required. If the init() method is implemented and fails, the driver load will fail.
bind() is invoked to offer the driver a device to bind to. The device is one that has matched the bind program the driver has published. If the bind() method succeeds, the driver must create a new device and add it as a child of the device passed in to the bind() method. See Device Lifecycle for more information.
create() is invoked for platform/system bus drivers or proxy drivers. For the vast majority of drivers, this method is not required.
release() is invoked before the driver is unloaded, after all devices it may have created in bind() and elsewhere have been destroyed. Currently this method is never invoked. Drivers, once loaded, remain loaded for the life of a Device Host process.
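The four methods above can be pictured as a table of function pointers that the Device Host consults at the points just described. The following is a schematic model only; the stub types stand in for the real DDK definitions in ddk/driver.h, which differ in detail:

```cpp
// Schematic stand-ins for DDK types (not the real definitions).
struct zx_device { /* opaque to the driver */ };
using zx_status_t = int;
constexpr zx_status_t ZX_OK = 0;

struct driver_ops {
    zx_status_t (*init)(void** out_ctx);            // once, at driver load
    zx_status_t (*bind)(void* ctx, zx_device* dev); // per matched device
    zx_status_t (*create)(void* ctx);               // bus/proxy drivers only
    void (*release)(void* ctx);                     // at unload (currently never called)
};

// A minimal driver that only implements bind(); the other methods are
// optional and left null, as described above.
static zx_status_t my_bind(void* /*ctx*/, zx_device* /*dev*/) {
    // a real driver would create a child device here and add it
    // as a child of the device it was offered
    return ZX_OK;
}

constexpr driver_ops my_driver_ops = {nullptr, my_bind, nullptr, nullptr};
```

The ops-table shape mirrors how the binding macros reference the driver's zx_driver_ops_t by name.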
Device Lifecycle
Within a Device Host process, devices exist as a tree of zx_device_t structures which are opaque to the driver. These are created with device_add(), to which the driver provides a zx_protocol_device_t structure. The methods defined by the function pointers in this structure are the "device ops". The various structures and functions are defined in device.h.
The device_add() function creates a new device, adding it as a child to the provided parent device. That parent device must be either the device passed in to the bind() method of a device driver, or another device which has been created by the same device driver.
A side effect of device_add() is that the newly created device will be added to the global Device Filesystem maintained by the Device Coordinator. If the device is created with the DEVICE_ADD_INVISIBLE flag, it will not be accessible by opening its node in devfs until device_make_visible() is invoked. This is useful for drivers that have to do extended initialization or probing and do not want to visibly publish their device(s) until that succeeds (and quietly remove them if it fails).
Devices are reference counted. When a driver creates one with device_add(), it then holds a reference on that device until it eventually calls device_remove(). If a device is opened by a remote process via the Device Filesystem, a reference is acquired there as well. When a device's parent is removed, its unbind() method is invoked. This signals to the driver that it should start shutting the device down, and remove any child devices it has created by calling device_remove() on them.
Since a child device may have work in progress when its unbind() method is called, it's possible that the parent device, which has just called device_remove() on the child, could continue to receive device method calls or protocol method calls on behalf of that child. It is advisable that before removing its children, the parent device arrange for these methods to return errors, so that calls from a child before the child's removal is completed do not start more work or cause unexpected interactions.
From the moment that device_add() is called without the DEVICE_ADD_INVISIBLE flag, or device_make_visible() is called on an invisible device, other device ops may be called by the Device Host.
The release() method is only called after the creating driver has called device_remove() on the device, all open instances of that device have been closed, and all children of that device have been removed and released. This is the last opportunity for the driver to destroy or free any resources associated with the device. It is not valid to refer to the zx_device_t for that device after release() returns. Calling any device methods, or protocol methods for protocols obtained from the parent device, past this point is illegal and will likely result in a crash.
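The reference-counting rules above can be modeled roughly as follows. This is an illustrative toy model, not the devhost implementation, and all names (the `*_model` functions, the `Device` struct) are invented for this sketch: the creating driver and each remote open each hold a reference, and release() fires only once the device has been removed and the last reference is dropped.

```cpp
// Toy model of the device lifetime rules described above.
struct Device {
    int refcount = 0;
    bool removed = false;
    bool released = false;
};

void device_release_model(Device* dev) {
    dev->released = true;     // release() runs; the zx_device_t must not
                              // be touched after this returns
}

void device_unref_model(Device* dev) {
    if (--dev->refcount == 0 && dev->removed) {
        device_release_model(dev);
    }
}

void device_add_model(Device* dev) {
    dev->refcount++;          // the creating driver holds a reference
}

void device_open_model(Device* dev) {
    dev->refcount++;          // each remote open holds a reference
}

void device_close_model(Device* dev) {
    device_unref_model(dev);  // a close drops that reference
}

void device_remove_model(Device* dev) {
    dev->removed = true;
    device_unref_model(dev);  // drop the creating driver's reference
}
```

In this model, a device that is removed while still open is only released when the last open instance closes, matching the ordering guarantees described for release().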