[root@docker-build-86-050 ~]# ls /usr/bin | grep docker
docker
docker-compose
docker-containerd
docker-containerd-ctr
docker-containerd-shim
dockerd
docker-proxy
docker-runc
You are probably wondering how these processes — dockerd, containerd, ctr, shim, runc, and so on — actually relate to each other.
A first look yields the following conclusion:
runc init [args ...]
The process relationship model:
 docker        ctr
   |            |
   V            V
dockerd --> containerd ---> shim -> runc -> runc init -> process
                       |--> shim -> runc -> runc init -> process
                       +--> shim -> runc -> runc init -> process
[root@docker-build-86-050 ~]# ps -aux | grep docker
root  3925  0.0  0.1 2936996 74020 ?  Ssl  3月06  68:14  /usr/bin/dockerd --storage-driver=aufs -H 0.0.0.0:2375 --label ip=10.1.86.50 -H unix:///var/run/docker.sock --insecure-registry 192.168.86.106 --insecure-registry 10.1.86.51 --insecure-registry dev.reg.iflytek.com
root  3939  0.0  0.0 1881796 27096 ?  Ssl  3月06   9:10  docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --runtime docker-runc
root 21238  0.0  0.0  487664  6212 ?  Sl   4月20   0:00  docker-containerd-shim 48119c50a0ca8a53967364f75fb709017cc272ae248b78062e0dafaa22108d21 /var/run/docker/libcontainerd/48119c50a0ca8a53967364f75fb709017cc272ae248b78062e0dafaa22108d21 docker-runc
First, dockerd's main function — you should have no trouble finding it in cmd/dockerd/docker.go.
Skipping the rest for now, let's go straight into start and take a look:
err = daemonCli.start(opts)
In this function, let's first pay attention to two things:
This New is important:
containerdRemote, err := libcontainerd.New(cli.getLibcontainerdRoot(), cli.getPlatformRemoteOptions()...)
Let's look inside:
...
err := r.runContainerdDaemon()
...
conn, err := grpc.Dial(r.rpcAddr, dialOpts...)
if err != nil {
    return nil, fmt.Errorf("error connecting to containerd: %v", err)
}
r.rpcConn = conn
r.apiClient = containerd.NewAPIClient(conn)
...
This starts a containerd process and establishes a connection to it; the two communicate over gRPC, with messages serialized via protobuf.
For the details of how the containerd process is created, look inside runContainerdDaemon:
cmd := exec.Command(containerdBinary, args...)
// redirect containerd logs to docker logs
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
cmd.SysProcAttr = setSysProcAttr(true)
cmd.Env = nil
// clear the NOTIFY_SOCKET from the env when starting containerd
for _, e := range os.Environ() {
    if !strings.HasPrefix(e, "NOTIFY_SOCKET") {
        cmd.Env = append(cmd.Env, e)
    }
}
if err := cmd.Start(); err != nil {
    return err
}
If this is hard to follow, brush up on how exec.Cmd is used in the standard library. cmd.Start() creates the process asynchronously and returns as soon as the child is spawned, so a goroutine is created to wait for the child process to exit:
go func() {
    cmd.Wait()
    close(r.daemonWaitCh)
}() // Reap our child when needed
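Reproduced as a minimal, self-contained sketch (startChild is my own name, not from the Docker source, and sh stands in for the containerd binary), the pattern above looks like this:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// startChild mirrors the dockerd pattern: rebuild the environment without
// NOTIFY_SOCKET, Start() the child asynchronously, and reap it from a
// dedicated goroutine so it never lingers as a zombie.
func startChild(name string, args ...string) (<-chan struct{}, error) {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	cmd.Env = nil // clear, then copy everything except NOTIFY_SOCKET
	for _, e := range os.Environ() {
		if !strings.HasPrefix(e, "NOTIFY_SOCKET") {
			cmd.Env = append(cmd.Env, e)
		}
	}
	// Start returns as soon as the process is spawned; it does not wait.
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	done := make(chan struct{})
	go func() {
		cmd.Wait() // reap the child when it exits
		close(done)
	}()
	return done, nil
}

func main() {
	done, err := startChild("sh", "-c", "echo hello from child")
	if err != nil {
		panic(err)
	}
	<-done // block until the child has been reaped
	fmt.Println("child reaped")
}
```

Closing the channel in the reaper goroutine lets any number of other goroutines wait for the daemon's exit, which is exactly what r.daemonWaitCh is for.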
A single comment in the code explains it: "shim for container lifecycle and reconnection" — the shim handles the container's lifecycle and reconnection, so we can follow that thread through the code.
First, look at a piece of code in containerd/linux/runtime.go:
The Runtime's Create method contains the line below. This Runtime object is also registered in the registry (see the init function), and the containerd process loads this Runtime at startup:
s, err := newShim(path, r.remote)
An abridged version:
func newShim(path string, remote bool) (shim.ShimClient, error) {
    l, err := sys.CreateUnixSocket(socket) // create a Unix socket
    cmd := exec.Command("containerd-shim")
    f, err := l.(*net.UnixListener).File()
    cmd.ExtraFiles = append(cmd.ExtraFiles, f) // note this line — it is extremely important; without understanding this mechanism the shim code may be unreadable
    if err := reaper.Default.Start(cmd); err != nil { // start a shim process
    }
    return connectShim(socket) // return a client for talking to the shim process
}
Now let's look at the shim's code:
The most important thing the shim process does at startup is launch a gRPC server:
if err := serve(server, "shim.sock"); err != nil {
Let's go in and find out:
func serve(server *grpc.Server, path string) error {
    l, err := net.FileListener(os.NewFile(3, "socket"))
    logrus.WithField("socket", path).Debug("serving api on unix socket")
    go func() {
        if err := server.Serve(l); err != nil {
            ...
        }
    }()
    ...
}
I once stared at this os.NewFile(3, "socket") for a long time without understanding why the descriptor is 3. Connecting it with the line cmd.ExtraFiles = append(cmd.ExtraFiles, f) from when the shim process is created solves the puzzle.
File descriptor 3 is the file of the Unix socket that containerd created. Passed down this way, containerd's client connects exactly to the gRPC server started here and can call its interface remotely:
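The ExtraFiles / os.NewFile mechanism can be demonstrated in a few lines. The sketch below simulates both sides in a single process (demo, the socket name, and the message are my own, not from the Docker source): the "containerd side" creates a Unix socket listener and extracts its *os.File — the file that would be appended to cmd.ExtraFiles — and the "shim side" rebuilds a listener from that raw descriptor with net.FileListener. In a real child process the inherited descriptor is always 3, the first slot after stdin, stdout, and stderr.

```go
package main

import (
	"fmt"
	"io"
	"net"
	"os"
	"path/filepath"
)

func demo() (string, error) {
	sock := filepath.Join(os.TempDir(), fmt.Sprintf("fd-demo-%d.sock", os.Getpid()))
	defer os.Remove(sock)

	// "containerd side": create the socket and dup its descriptor.
	l, err := net.Listen("unix", sock)
	if err != nil {
		return "", err
	}
	defer l.Close()
	f, err := l.(*net.UnixListener).File() // this is what goes into cmd.ExtraFiles
	if err != nil {
		return "", err
	}

	// "shim side": recover a listener from the raw descriptor, exactly like
	// net.FileListener(os.NewFile(3, "socket")) — we reuse f.Fd() here only
	// because we never cross an exec boundary.
	shimListener, err := net.FileListener(os.NewFile(f.Fd(), "socket"))
	if err != nil {
		return "", err
	}
	defer shimListener.Close()

	// A client dials the socket path that containerd knows about...
	go func() {
		conn, err := net.Dial("unix", sock)
		if err != nil {
			return
		}
		conn.Write([]byte("hello over inherited fd"))
		conn.Close()
	}()

	// ...and the connection is accepted by the listener rebuilt from the fd.
	conn, err := shimListener.Accept()
	if err != nil {
		return "", err
	}
	defer conn.Close()
	data, err := io.ReadAll(conn)
	return string(data), err
}

func main() {
	msg, err := demo()
	if err != nil {
		panic(err)
	}
	fmt.Println(msg)
}
```

Both listeners share one underlying socket, so a connection queued by the kernel can be accepted by the descriptor-rebuilt listener — which is why the shim's gRPC server ends up serving on the very socket containerd created.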
type ContainerServiceClient interface {
    Create(ctx context.Context, in *CreateRequest, opts ...grpc.CallOption) (*CreateResponse, error)
    Start(ctx context.Context, in *StartRequest, opts ...grpc.CallOption) (*google_protobuf.Empty, error)
    Delete(ctx context.Context, in *DeleteRequest, opts ...grpc.CallOption) (*DeleteResponse, error)
    Info(ctx context.Context, in *InfoRequest, opts ...grpc.CallOption) (*containerd_v1_types1.Container, error)
    List(ctx context.Context, in *ListRequest, opts ...grpc.CallOption) (*ListResponse, error)
    Kill(ctx context.Context, in *KillRequest, opts ...grpc.CallOption) (*google_protobuf.Empty, error)
    Events(ctx context.Context, in *EventsRequest, opts ...grpc.CallOption) (ContainerService_EventsClient, error)
    Exec(ctx context.Context, in *ExecRequest, opts ...grpc.CallOption) (*ExecResponse, error)
    Pty(ctx context.Context, in *PtyRequest, opts ...grpc.CallOption) (*google_protobuf.Empty, error)
    CloseStdin(ctx context.Context, in *CloseStdinRequest, opts ...grpc.CallOption) (*google_protobuf.Empty, error)
}
Next, the relationship between the shim and runc. This part is simpler: just go straight into the Create method of the shim service implementation.
sv = shim.New(path)
func (s *Service) Create(ctx context.Context, r *shimapi.CreateRequest) (*shimapi.CreateResponse, error) {
    process, err := newInitProcess(ctx, s.path, r)
    return &shimapi.CreateResponse{
        Pid: uint32(pid),
    }, nil
}
Inside newInitProcess:
func newInitProcess(context context.Context, path string, r *shimapi.CreateRequest) (*initProcess, error) {
    runtime := &runc.Runc{
        Command:      r.Runtime,
        Log:          filepath.Join(path, "log.json"),
        LogFormat:    runc.JSON,
        PdeathSignal: syscall.SIGKILL,
    }
    p := &initProcess{
        id:     r.ID,
        bundle: r.Bundle,
        runc:   runtime,
    }
    if err := p.runc.Create(context, r.ID, r.Bundle, opts); err != nil {
        return nil, err
    }
    return p, nil
}
As you can see, this is where the runc API is called to actually create the container. Under the hood it runs the command runc create --bundle [bundle] [containerid]; I won't go into more detail here.
As shown above, the shim process creates runc as a child process.
Docker has created all these child processes, and now that we have reached runc, the CMD process from our own Dockerfile is finally about to be created — exciting to think about. However...
After the runc process starts, it launches an init process that creates the container, and only then is a process created inside the container — that is the process we actually want.
For the runc init process, the key is the StartInitialization method (main_unix.go).
ctr is a containerd client; the two communicate via protobuf RPC, with containerd listening on unix:///run/containerd/containerd.sock.
[root@dev-86-201 ~]# docker-containerd --help
NAME:
   containerd - High performance container daemon

USAGE:
   docker-containerd [global options] command [command options] [arguments...]

VERSION:
   0.2.0 commit: 0ac3cd1be170d180b2baed755e8f0da547ceb267

COMMANDS:
   help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug                                              enable debug output in the logs
   --state-dir "/run/containerd"                        runtime state directory
   --metrics-interval "5m0s"                            interval for flushing metrics to the store
   --listen, -l "unix:///run/containerd/containerd.sock"  proto://address on which the GRPC API will listen
   --runtime, -r "runc"                                 name or path of the OCI compliant runtime to use when executing containers
   --runtime-args [--runtime-args option --runtime-args option]  specify additional runtime args
   --shim "containerd-shim"                             Name or path of shim
   --pprof-address                                      http address to listen for pprof events
   --start-timeout "15s"                                timeout duration for waiting on a container to start before it is killed
   --retain-count "500"                                 number of past events to keep in the event log
   --graphite-address                                   Address of graphite server
   --help, -h                                           show help
   --version, -v                                        print the version
[root@dev-86-201 ~]# docker-containerd-ctr --help
NAME:
   ctr - High performance container daemon cli

USAGE:
   docker-containerd-ctr [global options] command [command options] [arguments...]

VERSION:
   0.2.0 commit: 0ac3cd1be170d180b2baed755e8f0da547ceb267

COMMANDS:
   checkpoints  list all checkpoints
   containers   interact with running containers
   events       receive events from the containerd daemon
   state        get a raw dump of the containerd state
   version      return the daemon version
   help, h      Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug                                          enable debug output in the logs
   --address "unix:///run/containerd/containerd.sock"  proto://address of GRPC API
   --conn-timeout "1s"                              GRPC connection timeout
   --help, -h                                       show help
   --version, -v                                    print the version
This part is fairly complex and fairly important, so I will write a separate introduction to it.
mkdir /mycontainer
cd /mycontainer
mkdir rootfs
docker export $(docker create busybox) | tar -C rootfs -xvf -
# generate the container's config file, config.json
runc spec
runc run mycontainerid
State is stored under /run/runc by default. Whether a container is created by the Docker engine or directly with runc, a directory named after the container is created under /run/runc, containing a state.json file that records the container's state.
For more, feel free to follow my GitHub: https://github.com/fanux