docker in all

docker vs Hyper-V, VMware, Xen, KVM

docker host, docker container, docker engine, docker image

image = stopped container

container = running image

 

Docker operation diagram

workflow

 

Getting started with Docker (Windows example)

PS G:\dockerdata> docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:2557e3c07ed1e38f26e389462d03ed943586f744621577a99efb77324b0fe535
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

PS G:\dockerdata> docker images
REPOSITORY                 TAG                 IMAGE ID            CREATED             SIZE
hello-world                latest              fce289e99eb9        2 months ago        1.84kB
docker4w/nsenter-dockerd   latest              2f1c802f322f        4 months ago        187kB

 

 

What the docker run hello-world command above actually does:

1. The docker client contacts the docker daemon (engine) and asks it to run a hello-world container.

2. On receiving the request, the docker daemon (engine) first checks whether the hello-world image exists locally; if not, it looks it up in the registry and pulls it down.

3. The docker daemon instantiates a container from that image and runs the executable the image defines, which produces output.

4. The docker daemon streams that output to the docker client, which is how we see the hello-world message.

What exactly does a Docker image contain?

Strongly recommended reading: https://www.csdn.net/article/2015-08-21/2825511

We know a Linux system is a kernel plus a distribution: on top of the same kernel (say 3.8 and above) we can run different distributions such as Debian, Ubuntu, or CentOS. Similarly, a Docker image is like an "Ubuntu distribution" that can run on top of any Linux kernel that meets its requirements. At the simplest level there are "Debian distribution" Docker images and "Ubuntu distribution" Docker images; install MySQL 5.6 into the Debian image and we can call the result mysql:5.6, install Golang 1.3 into the Debian image and we can call it golang:1.3, and so on. Following this pattern you can build whatever image you want from the software you install.
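As a minimal sketch of that naming pattern (the base image, package, and tag below are illustrative assumptions, not taken from the article; the exact Go version you get depends on the base image):

FROM debian:jessie
RUN apt-get update && apt-get install -y golang && rm -rf /var/lib/apt/lists/*

docker build -t my-golang:1.3 .   # tag the image after the software you baked into it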

Changing the default storage location for pulled images

On Windows, the docker engine actually runs inside a Hyper-V virtual machine: every command typed in the docker client is executed in that VM, and pulled images are stored there too. So to change where images are kept, all we really have to do is move the vhdx file backing Hyper-V's MobyLinuxVM.

http://www.cnblogs.com/show668/p/5341283.html

 docker ps/docker images

PS G:\dockerdata> docker images
REPOSITORY                 TAG                 IMAGE ID            CREATED             SIZE
hello-world                latest              fce289e99eb9        2 months ago        1.84kB
docker4w/nsenter-dockerd   latest              2f1c802f322f        4 months ago        187kB
PS G:\dockerdata> docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
PS G:\dockerdata> docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
135da1372a06        hello-world         "/hello"            24 minutes ago      Exited (0) 24 minutes ago                       modest_spence

 Pulling a specific version of an image

docker pull ubuntu:14.04

Repository: the named storage unit for a Docker image; one repository (e.g. ubuntu) holds all the tagged versions of that image (ubuntu:14.04, ubuntu:latest, ...).

Configuring a Docker Hub mirror: the Aliyun registry accelerator
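A minimal sketch of that configuration; the mirror URL is a placeholder, you get your own accelerator address from the Aliyun console:

# /etc/docker/daemon.json (Linux) -- replace the placeholder with your own accelerator URL
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"]
}

# restart the daemon so the mirror takes effect
sudo systemctl restart docker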

Deleting a local image

PS G:\dockerdata> docker images
REPOSITORY                 TAG                 IMAGE ID            CREATED             SIZE
ubuntu                     latest              47b19964fb50        3 weeks ago         88.1MB
alpine                     latest              caf27325b298        4 weeks ago         5.53MB
hello-world                latest              fce289e99eb9        2 months ago        1.84kB
docker4w/nsenter-dockerd   latest              2f1c802f322f        4 months ago        187kB
PS G:\dockerdata> docker rmi ubuntu
Untagged: ubuntu:latest
Untagged: ubuntu@sha256:7a47ccc3bbe8a451b500d2b53104868b46d60ee8f5b35a24b41a86077c650210
Deleted: sha256:47b19964fb500f3158ae57f20d16d8784cc4af37c52c49d3b4f5bc5eede49541
Deleted: sha256:d4c69838355b876cd3eb0d92b4ef27b1839f5b094a4eb1ad2a1d747dd5d6088f
Deleted: sha256:1c29a32189d8f2738d0d99378dc0912c9f9d289b52fb698bdd6c1c8cd7a33727
Deleted: sha256:d801a12f6af7beff367268f99607376584d8b2da656dcd8656973b7ad9779ab4
Deleted: sha256:bebe7ce6215aee349bee5d67222abeb5c5a834bbeaa2f2f5d05363d9fd68db41

Starting a web service with docker run in detached mode

PS G:\dockerdata> docker run -d --name web -p 9090:8080 nigelpoulton/pluralsight-docker-ci
Unable to find image 'nigelpoulton/pluralsight-docker-ci:latest' locally
latest: Pulling from nigelpoulton/pluralsight-docker-ci
a3ed95caeb02: Pull complete
3b231ed5aa2f: Pull complete
7e4f9cd54d46: Pull complete
929432235e51: Pull complete
6899ef41c594: Pull complete
0b38fccd0dab: Pull complete
Digest: sha256:7a6b0125fe7893e70dc63b2c42ad779e5866c6d2779ceb9b12a28e2c38bd8d3d
Status: Downloaded newer image for nigelpoulton/pluralsight-docker-ci:latest
27b4bc07a3e299e738ea8fc05bb6de9fa160c192a5ab71886b84e432d5422aea # this is the container ID on the docker host
PS G:\dockerdata> docker ps

CONTAINER ID        IMAGE                                COMMAND                  CREATED             STATUS              PORTS                    NAMES
27b4bc07a3e2        nigelpoulton/pluralsight-docker-ci   "/bin/sh -c 'cd /src…"   4 minutes ago       Up 4 minutes        0.0.0.0:9090->8080/tcp   web

After this command runs, a web server is started on the docker host, and the service inside the container can be reached directly at http://localhost:9090.
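A quick way to confirm the port mapping from the host (a sketch; the curl output depends on the image):

docker port web                  # prints: 8080/tcp -> 0.0.0.0:9090
curl -s http://localhost:9090    # fetches the page served from inside the container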

Starting a container and running bash inside it

PS G:\dockerdata> docker run -it --name temp ubuntu:latest /bin/bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
6cf436f81810: Pull complete
987088a85b96: Pull complete
b4624b3efe06: Pull complete
d42beb8ded59: Pull complete
Digest: sha256:7a47ccc3bbe8a451b500d2b53104868b46d60ee8f5b35a24b41a86077c650210
Status: Downloaded newer image for ubuntu:latest
root@9b4970dcb02a:/# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

Simple batch maintenance commands:

PS G:\dockerdata> docker ps -aq
9b4970dcb02a
27b4bc07a3e2
135da1372a06
PS G:\dockerdata> docker stop $(docker ps -aq)
9b4970dcb02a
27b4bc07a3e2
135da1372a06
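Once everything is stopped, the same $(docker ps -aq) trick combines with other subcommands; a small sketch of typical cleanup:

docker rm $(docker ps -aq)       # remove all (now stopped) containers
docker container prune -f        # alternative: remove every stopped container in one go
docker image prune -f            # remove dangling images left behind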

 

swarm:

A group of docker engines joined into a cluster is called a swarm: a cluster = a swarm.

Engines inside a swarm run in swarm mode.

manager nodes maintain the swarm;

worker nodes execute the tasks dispatched by the manager nodes

services: declarative/scalable

tasks: assigned to worker nodes; today a task maps roughly to a container

docker swarm init --advertise-addr xxx:2377 --listen-addr xxx:2377 
# engine port 2375, secure engine port: 2376, swarm port: 2377

docker service create --name web-fe --replicas 5 ...
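A short sketch of how the pieces above fit together; the address, published port, and image below are placeholders/assumptions, not the article's own cluster:

docker swarm init --advertise-addr 10.0.0.10:2377 --listen-addr 10.0.0.10:2377
docker swarm join-token worker            # prints the "docker swarm join ..." command to run on each worker
docker service create --name web-fe --replicas 5 -p 8080:8080 nigelpoulton/pluralsight-docker-ci
docker service ls                         # declarative state: desired vs. running replicas
docker service scale web-fe=10            # scale out; the new tasks are scheduled onto worker nodes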

 

Container

A container is an isolated area of an OS with resource usage limits applied.

It is an independent runtime environment built from namespaces and control groups (which cap CPU, RAM, network throughput, and I/O throughput).

engine 

The engine accepts commands through an external API, hides the OS namespace and cgroup details, and creates the corresponding container running in the host environment.

Running a container is a process implemented by several modules (the docker daemon, containerd, and the container runtime) working together.

Once a container is up and running, containerd no longer needs to stay connected to it; the relationship can be re-established later through a discovery process.

image

An image contains everything the app needs to run:

1. OS files, libraries, objects;

2. app files

3. manifest --> defines how these files are organized to work together

An image is a layered (stacked) filesystem.

docker image pull redis works in two steps: first it fetches the manifest from the registry; then it pulls the layers.
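You can observe both halves of that process yourself; a sketch (docker manifest inspect may require the experimental CLI on older Docker releases):

docker manifest inspect redis   # step 1: the manifest lists the layer digests
docker image pull redis         # step 2: each "Pull complete" line is one layer being fetched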

 

docker history redis  # lists the build history: the instruction that created each layer of the redis image
$ docker image inspect redis
[
    {
        "Id": "sha256:0f55cf3661e92cc44014f9d93e6f7cbd2a59b7220a26edcdb0828289cf6a361f",
        "RepoTags": [
            "redis:latest"
        ],
        "RepoDigests": [
            "redis@sha256:dd5b84ce536dffdcab79024f4df5485d010affa09e6c399b215e199a0dca38c4"
        ],
        "Parent": "",
        "Comment": "",
        "Created": "2019-02-06T09:02:43.375297494Z",
        "Container": "1abd8103d4a4423fa8339aabdb3442026bf6b8e9dca21c4ed44973e73ffd90cf",
        "ContainerConfig": {
            "Hostname": "1abd8103d4a4",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "6379/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "GOSU_VERSION=1.10",
                "REDIS_VERSION=5.0.3",
                "REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-5.0.3.tar.gz",
                "REDIS_DOWNLOAD_SHA=e290b4ddf817b26254a74d5d564095b11f9cd20d8f165459efa53eb63cd93e02"
            ],
            "Cmd": [
                "/bin/sh",
                "-c",
                "#(nop) ",
                "CMD [\"redis-server\"]"
            ],
            "ArgsEscaped": true,
            "Image": "sha256:68d73e8c5e2090bf28a588569b92595ab2d60e38eb92ba968be552b496eb6ed3",
            "Volumes": {
                "/data": {}
            },
            "WorkingDir": "/data",
            "Entrypoint": [
                "docker-entrypoint.sh"
            ],
            "OnBuild": null,
            "Labels": {}
        },
        "DockerVersion": "18.06.1-ce",
        "Author": "",
        "Config": {
            "Hostname": "",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "6379/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "GOSU_VERSION=1.10",
                "REDIS_VERSION=5.0.3",
                "REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-5.0.3.tar.gz",
                "REDIS_DOWNLOAD_SHA=e290b4ddf817b26254a74d5d564095b11f9cd20d8f165459efa53eb63cd93e02"
            ],
            "Cmd": [
                "redis-server"
            ],
            "ArgsEscaped": true,
            "Image": "sha256:68d73e8c5e2090bf28a588569b92595ab2d60e38eb92ba968be552b496eb6ed3",
            "Volumes": {
                "/data": {}
            },
            "WorkingDir": "/data",
            "Entrypoint": [
                "docker-entrypoint.sh"
            ],
            "OnBuild": null,
            "Labels": null
        },
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 94993858,
        "VirtualSize": 94993858,
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/1aeb385f6b9def8e0c2048213c6a68446b233f4d44c9230657859257505dace5/diff:/var/lib/docker/overlay2/5e8dc35e2ed45cee79a8b5108cc74bfe7000311e75db45bd83d254f21e1892e7/diff:/var/lib/docker/overlay2/bfb61b0335946076ea36f25716da9e43d133dd6e8cf0211e7abadb6a23c001f3/diff:/var/lib/docker/overlay2/591b4074f127d18d3b7d84078891e464eb9c808439bd70f78f653ece9fa1101e/diff:/var/lib/docker/overlay2/30c283b2c4910e51dc162b23d6344575697e9fb478aeccf330edcef05c90aeae/diff",
                "MergedDir": "/var/lib/docker/overlay2/358068125c47e5995e7b1308b71a7ba11dd1509a9a69b36c1495e5c23a5c71f0/merged",
                "UpperDir": "/var/lib/docker/overlay2/358068125c47e5995e7b1308b71a7ba11dd1509a9a69b36c1495e5c23a5c71f0/diff",
                "WorkDir": "/var/lib/docker/overlay2/358068125c47e5995e7b1308b71a7ba11dd1509a9a69b36c1495e5c23a5c71f0/work"
            },
            "Name": "overlay2"
        },
        "RootFS": {
            "Type": "layers",
            "Layers": [
                "sha256:0a07e81f5da36e4cd6c89d9bc3af643345e56bb2ed74cc8772e42ec0d393aee3",
                "sha256:943fb767d8100f2c44a54abbdde4bf2c0f6340da71125f4ef73ad2db7007841d",
                "sha256:16d37f04beb4896e44557df69c060fc93e1486391c4c3dedf3c6ebd773098d90",
                "sha256:5e1afad325f9c970c66dcc5db47d19f034691f29492bf2fe83b7fec680a9d122",
                "sha256:d98df0140af1ee738e8987862268e80074503ab33212f6ebe253195b0f461a43",
                "sha256:b437bb5668d3cd5424015d7b7aefc99332c4af3530b17367e6d9d067ce9bb6d5"
            ]
        },

        "Metadata": {
            "LastTagTime": "0001-01-01T00:00:00Z"
        }
    }
]

Network modes supported by Docker

bridge mode: --net=bridge

This is the default network. As soon as the docker engine starts it creates a docker0 bridge (think of it as a switch) on the host, and containers created with default settings are all attached to that bridge's subnet. You can picture these containers as plugged into different ports of the same switch, with docker0's IP (172.17.0.1) as their gateway.

host mode: --net=host

The container does not get its own network namespace; it shares one with the host. That also means the container has no network interfaces of its own and uses the host's instead. Apart from networking, containers in host mode remain isolated from each other in every other respect.

none mode: --net=none

The container gets its own network namespace, but Docker performs no network configuration at all; we have to configure everything by hand.

container mode: --net=container:NAME_or_ID

A container created in this mode shares the network namespace of the specified container and therefore has exactly the same network configuration; apart from networking, the two containers are isolated from each other in every other respect.

User-defined network mode:

Same principle as the default bridge, but a user-defined network has built-in DNS discovery, so containers on it can reach each other by container name or hostname.
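A minimal sketch of that DNS discovery on a user-defined bridge (the network and container names are made up for illustration):

docker network create --driver bridge app_net
docker run -d --name db --network app_net redis
docker run --rm --network app_net alpine ping -c 2 db    # "db" resolves via the network's built-in DNS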

docker logs: debugging problems by looking at container logs

By default, docker logs and docker service logs show the command's output exactly as it would appear if you ran the program interactively in a terminal. Unix and Linux programs typically open three I/O streams: STDIN, STDOUT, and STDERR. STDIN is the command's input stream, which may come from the keyboard or from another command's output; STDOUT is the application's normal output; STDERR is used for error messages. By default, docker logs shows the command's STDOUT and STDERR. Given that, there are several scenarios in which docker logs cannot provide useful logs:

1. If you use a logging driver (the mechanism Docker provides for getting useful information out of a running container or service) to send logs to a file, an external host, a database, or some other logging back-end, docker logs will not show anything useful;

https://docs.docker.com/config/containers/logging/configure/

The docker daemon has a default logging driver, which every container started will use unless you configure a different one.

For example, we can configure the docker daemon to collect logs via syslog, which forwards the running containers' stdout and stderr to a remote server in real time. In that case we effectively cannot use docker logs to inspect runtime state and have to get the information from the syslog server instead.
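A sketch of pointing the daemon's default logging driver at syslog; the remote address is a placeholder, and containers started after this change will report that docker logs cannot read from this driver:

# /etc/docker/daemon.json -- placeholder syslog server address
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://192.168.0.42:514"
  }
}

sudo systemctl restart docker   # containers started afterwards log to the syslog server instead of local json files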

2. If our image runs a non-interactive process, such as a web server or a database, the process may write its output straight to log files instead of stdout or stderr.

In that case we can either enter the container and read the log files of nginx, MySQL, and the like to get runtime information, or rely on the workaround the official images use: the nginx image is built with symlinks that point /var/log/nginx/access.log to /dev/stdout and /var/log/nginx/error.log to /dev/stderr, while the httpd image writes its output to /proc/self/fd/1 (stdout) and its errors to /proc/self/fd/2 (stderr).

That way we can still follow the logs in real time with docker logs --tail 8 -f.
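The official nginx image already ships those symlinks; a sketch of applying the same trick to an image of your own (the log paths are examples):

FROM nginx
# send the app's file logs to the container's stdout/stderr so docker logs can see them
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
 && ln -sf /dev/stderr /var/log/nginx/error.log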

docker networking

https://success.docker.com/article/networking

User-defined networks are recommended. The default docker0 bridge supports the --link flag, but --link is legacy and will eventually be removed.

$ brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242bd712cd8       no
br-9694b511a9af         8000.0242e7c72a3d       no
br-81195db0babc         8000.0242d6feb257       no              veth375600f
                                                                vethbc86c59
br-c301fa0c30d5         8000.024241d93a8e       no              veth73040a3
                                                                veth72eebce
                                                                vethd5af9cd
                                                                veth12d8ab4
                                                                veth6d89a9d        

Let's analyze the network topology after running docker-compose up -d nginx mysql with laradock:

$ brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242bd712cd8       no
br-9694b511a9af         8000.0242e7c72a3d       no
br-81195db0babc         8000.0242d6feb257       no              veth375600f
                                                        vethbc86c59
br-c301fa0c30d5         8000.024241d93a8e       no              veth73040a3
                                                        veth72eebce
                                                        vethd5af9cd
                                                        veth12d8ab4
                                                        veth6d89a9d
[node1] (local) root@192.168.0.13 ~/apiato/laradock
$ docker ps
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                           NAMES
25dd9253f860        laradock_nginx       "/bin/bash /opt/star…"   2 hours ago         Up 2 hours          0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   laradock_nginx_1
a2070a01035c        laradock_php-fpm     "docker-php-entrypoi…"   2 hours ago         Up 2 hours          9000/tcp                           laradock_php-fpm_1
d1f9327cb61c        laradock_workspace   "/sbin/my_init"          2 hours ago         Up 2 hours          0.0.0.0:2222->22/tcp                       laradock_workspace_1
a70f2b180a0d        laradock_mysql       "docker-entrypoint.s…"   2 hours ago         Up 2 hours          0.0.0.0:3306->3306/tcp, 33060/tcp          laradock_mysql_1
01f438a6efa9        docker:dind          "dockerd-entrypoint.…"   2 hours ago         Up 2 hours          2375/tcp                           laradock_docker-in-docker_1
[node1] (local) root@192.168.0.13 ~/apiato/laradock
$
[node1] (local) root@192.168.0.13 ~/apiato/laradock
$
[node1] (local) root@192.168.0.13 ~/apiato/laradock
$
[node1] (local) root@192.168.0.13 ~/apiato/laradock
$
[node1] (local) root@192.168.0.13 ~/apiato/laradock
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
60e8d0d3dd8c        bridge              bridge              local
5130e0e1e134        host                host                local
c301fa0c30d5        laradock_backend    bridge              local
9694b511a9af        laradock_default    bridge              local
81195db0babc        laradock_frontend   bridge              local
cb098f68c7be        none                null                local
[node1] (local) root@192.168.0.13 ~/apiato/laradock
$ brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242bd712cd8       no
br-9694b511a9af         8000.0242e7c72a3d       no
br-81195db0babc         8000.0242d6feb257       no              veth375600f
                                                        vethbc86c59
br-c301fa0c30d5         8000.024241d93a8e       no              veth73040a3
                                                        veth72eebce
                                                        vethd5af9cd
                                                        veth12d8ab4
                                                        veth6d89a9d
[node1] (local) root@192.168.0.13 ~/apiato/laradock
$ docker network inspect c301
[
    {
        "Name": "laradock_backend",
        "Id": "c301fa0c30d5f44e8daab0ffecf8166012f63edee764ce2abeaf3e884ce54446",
        "Created": "2019-03-13T12:25:42.645372888Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "01f438a6efa996b4e5c8df8f36b742ae468bf09762a1e6eabdefd66f5c920e11": {
                "Name": "laradock_docker-in-docker_1",
                "EndpointID": "d01c244fc579cd288bf8b1e79a6e936486b348f3167db3e7034044e08beae44c",
                "MacAddress": "02:42:ac:15:00:02",
                "IPv4Address": "172.21.0.2/16",
                "IPv6Address": ""
            },
            "25dd9253f860588321b1ff05ae4b43226ae6c22f83044973b86c0c57871ed924": {
                "Name": "laradock_nginx_1",
                "EndpointID": "24b527973345960c10bf2f97a11612c33562a5146732e9c4049625fc99cadca8",
                "MacAddress": "02:42:ac:15:00:06",
                "IPv4Address": "172.21.0.6/16",
                "IPv6Address": ""
            },
            "a2070a01035cbd8c15005c074e9e19ea18f795cdf6a2bc48863d86cc638b35b5": {
                "Name": "laradock_php-fpm_1",
                "EndpointID": "b3071a2d3d019a6e10b0b778ce0b4f99efbaff28898d295d3829d41e840aa15c",
                "MacAddress": "02:42:ac:15:00:05",
                "IPv4Address": "172.21.0.5/16",
                "IPv6Address": ""
            },
            "a70f2b180a0dfcc18c26e4991897946b9389b678ce4ea2cd6527859c301bb78e": {
                "Name": "laradock_mysql_1",
                "EndpointID": "815e801431b16f4a245b0a243e08cc9642482b3933b09480928ae40fadd56b14",
                "MacAddress": "02:42:ac:15:00:03",
                "IPv4Address": "172.21.0.3/16",
                "IPv6Address": ""
            },
            "d1f9327cb61cbd26f43c55911cbffa1cd3f53b912f783725bbf73e0c6edad5ef": {
                "Name": "laradock_workspace_1",
                "EndpointID": "5bbe5ceae7d15ff3eb65236ab0243619591d69474f3a0a13df07e507d2e25a22",
                "MacAddress": "02:42:ac:15:00:04",
                "IPv4Address": "172.21.0.4/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "backend",
            "com.docker.compose.project": "laradock",
            "com.docker.compose.version": "1.23.2"
        }
    }
]
[node1] (local) root@192.168.0.13 ~/apiato/laradock
$ docker network inspect 8119
[
    {
        "Name": "laradock_frontend",
        "Id": "81195db0babc4aff1b4ae09b2ad078038b74643c798b396409a46f2948ff89c8",
        "Created": "2019-03-13T12:25:42.057604176Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.20.0.0/16",
                    "Gateway": "172.20.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "25dd9253f860588321b1ff05ae4b43226ae6c22f83044973b86c0c57871ed924": {
                "Name": "laradock_nginx_1",
                "EndpointID": "e1ad08b19608cc3884a9da04e509a71566ca4847245db12310d77463bcb80814",
                "MacAddress": "02:42:ac:14:00:03",
                "IPv4Address": "172.20.0.3/16",
                "IPv6Address": ""
            },
            "d1f9327cb61cbd26f43c55911cbffa1cd3f53b912f783725bbf73e0c6edad5ef": {
                "Name": "laradock_workspace_1",
                "EndpointID": "64d65215f6e0d6135bb7dbf5f341bd858972bc8e869cd8a177991d27d5652491",
                "MacAddress": "02:42:ac:14:00:02",
                "IPv4Address": "172.20.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "frontend",
            "com.docker.compose.project": "laradock",
            "com.docker.compose.version": "1.23.2"
        }
    }
]
[node1] (local) root@192.168.0.13 ~/apiato/laradock
$ docker network inspect 9694
[
    {
        "Name": "laradock_default",
        "Id": "9694b511a9afac9a43d3b45ae4296976bf193633148465141f5e0cd787b12082",
        "Created": "2019-03-13T12:25:41.924774946Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "laradock",
            "com.docker.compose.version": "1.23.2"
        }
    }
]
[node1] (local) root@192.168.0.13 ~/apiato/laradock
$ docker network inspect 5130
[
    {
        "Name": "host",
        "Id": "5130e0e1e1340fb58d5704528257cfb0f7dc98e9f718055c3e32f96705355597",
        "Created": "2019-03-13T12:23:30.472608001Z",
        "Scope": "local",
        "Driver": "host",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
[node1] (local) root@192.168.0.13 ~/apiato/laradock
$ docker network inspect 60e8
[
    {
        "Name": "bridge",
        "Id": "60e8d0d3dd8c376a31a802f9965227301dc06a74910852895f9b010d07fd4417",
        "Created": "2019-03-13T12:23:30.540268336Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

About environment variables (env)

https://vsupalov.com/docker-arg-env-variable-guide/

About volumes

https://docs.docker.com/storage/volumes/

If we don't need long-term persistence but do need to keep some state around at runtime, consider a tmpfs mount, which mounts directly into memory and is fast.
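A sketch of a tmpfs mount; the path and size are arbitrary examples, and tmpfs mounts only work on Linux hosts:

docker run -d --name tmptest \
  --mount type=tmpfs,destination=/app/cache,tmpfs-size=67108864 \
  nginx:latest
# equivalent shorthand
docker run -d --name tmptest2 --tmpfs /app/cache:rw,size=64m nginx:latest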

 

Two ways of starting the process inside a container: shell form and exec form

Every process started inside a Docker container is an ordinary, independent process on the host; whether that process is the container's main process (PID 1) depends on how the Dockerfile is written.

The ENTRYPOINT and CMD instructions support two different execution forms: shell and exec.

1. In shell form, CMD/ENTRYPOINT is written as:

CMD executable param1 param2

Here the PID 1 process is /bin/sh -c "executable param1 param2", and the real worker process executable is its child.

2. In exec form, CMD/ENTRYPOINT is written as:

CMD ["executable", "param1","param2"]

Here the PID 1 process is the worker process executable param1 param2 itself.

The two forms also differ in how the process exits; used carelessly they can produce zombie processes.

Docker provides two commands, docker stop and docker kill, that send signals to the PID 1 process. When you run docker stop, docker first sends SIGTERM to PID 1; if the container does not exit after receiving it, the docker daemon waits 10 seconds and then sends SIGKILL, killing the PID 1 process and putting the container into the exited state.

The PID 1 process must handle SIGTERM correctly and tell all of its child processes to exit. If the container is started via a shell script, PID 1 is the shell, and the shell has no handler for SIGTERM, so it ignores the signal and the container cannot shut down gracefully (e.g. persist its data). Docker's official recommendation is therefore: run a single process per container and start it in exec form; or use a custom shell entrypoint that can receive SIGTERM and forward it to all children; or exec the worker process from the entrypoint and make sure the worker handles SIGTERM and forwards it to its own children.
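A minimal sketch of such a custom entrypoint (an illustration under the assumptions above, not an official script; the worker command is whatever CMD passes in):

#!/bin/sh
# entrypoint.sh (sketch) -- run the worker as a child and forward SIGTERM to it
_term() {
  echo "caught SIGTERM, forwarding to $child"
  kill -TERM "$child" 2>/dev/null
}
trap _term TERM

"$@" &            # start the real worker (the CMD arguments) in the background
child=$!
wait "$child"     # returns early if the trap fires...
trap - TERM
wait "$child"     # ...so wait again until the worker has really exited

If no forwarding logic is needed, simply ending the script with exec "$@" makes the worker itself PID 1, so it receives SIGTERM directly.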

The docker daemon only monitors the PID 1 process.

The runtime model of a Docker container

In Linux, a parent process creates a child with fork() and then calls exec() to run the child's program; every process has a PID. Besides ordinary application processes, the operating system also has the following special processes.

1. PID 0 is the scheduler; it is part of the kernel and never executes any program from disk;

2. PID 1 is the init process, which normally reads the system initialization files: /etc/rc*, /etc/inittab, and the files under /etc/init.d/;

3. PID 2 is the page daemon, responsible for the paging operations of the virtual memory system.

When Docker starts a container, it forks a child from the docker-containerd process and then execs the container's own program. After the container process is forked, its namespaces are created, followed by a series of initialization steps in three stages: dockerinit initializes the network stack, ENTRYPOINT completes the user-level configuration, and CMD starts the entry point. Once started, the docker container and the docker daemon communicate over IPC through a sock file descriptor.

Docker volumes vs bind mounts

Docker recommends two (or rather three) ways to persist data:

1. bind mounts;

2. named volumes

3. volumes in dockerfile

A bind mount maps a local directory on the host into the container:

docker run -v /hostdir:/containerdir IMAGE_NAME
docker run --mount type=bind,source=/hostdir,target=/containerdir IMAGE_NAME

Named volumes are created manually with docker volume create volume_name; they all live under /var/lib/docker/volumes and can be referenced by name alone. For example, after creating a volume called mysql_data we can reference it with docker run -v mysql_data:/containerdata IMAGE_NAME.
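A short sketch of that named-volume flow (the volume and image names are examples):

docker volume create mysql_data
docker volume inspect mysql_data     # Mountpoint shows /var/lib/docker/volumes/mysql_data/_data
docker run -d --name db -v mysql_data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret mysql:5.7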

Volumes defined in a Dockerfile are created with the VOLUME instruction. They are also stored under /var/lib/docker/volumes, but they have no custom name: the name is generally a hash. The path given to VOLUME is the path inside the container; if the image already populated data at that path, the container copies that data into the automatically created host directory when it starts (if a host path is specified instead, the host's data is not overwritten!).

https://stackoverflow.com/questions/41935435/understanding-volume-instruction-in-dockerfile
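A sketch of the VOLUME copy-on-first-run behaviour described above (the image name and paths are made up for illustration):

# Dockerfile (sketch)
FROM alpine
RUN mkdir /appdata && echo "seed" > /appdata/seed.txt
VOLUME /appdata

docker build -t voldemo .
docker run --name v1 voldemo cat /appdata/seed.txt   # prints "seed", copied into the anonymous volume
docker volume ls                                     # lists a volume whose name is a long hash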

docker from development to production

通常來講,咱們在開發時但願經過一個volume來綁定host主機的source代碼以方便即改即調的快捷流程,可是在production階段

咱們每每直接將代碼複製到image中從而實現容器就是代碼,獨立於主機能夠在任何地點運行的便捷。

A reasonably good strategy is:

define the service like this in docker-compose.yml:

version: '2'
services:
    app:
        build: .
        image: app:1.0.0-test
        volumes:
        - ./host_src:/usr/share/nginx/html
        ports:
        - "8080:80"

The Dockerfile used to build the nginx app can be as simple as:

FROM nginx
COPY host_src /usr/share/nginx/html

The nginx app first COPYs host_src into the corresponding directory inside the container; then, in the dev compose file, a volume maps host_src onto that same directory in the nginx app so code changes can be made and tested immediately.

Later, when the nginx app moves to production, we can create a docker-compose-production.yml like this:

version: '2'
services:
    app:
        build: .
        image: app:1.0.0-production
        ports:
        - "80:80"

Compared with the dev yml file, we have simply removed the volume binding and run directly from the code COPYed into the image.
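A sketch of how the two files are used (file names follow the article; the flags are standard docker-compose options):

docker-compose up -d --build                                   # development: docker-compose.yml, code bind-mounted
docker-compose -f docker-compose-production.yml up -d --build  # production: code baked into the image by COPY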

Can volume data inherited from a parent image be modified?

For example, suppose image A's Dockerfile is:

FROM bash
RUN mkdir "/data" && echo "FOO" > "/data/test"
VOLUME "/data"

Now define an image B that inherits from A and, in its Dockerfile, tries to modify the "default" data from image A:

FROM A
RUN echo "BAR" > "/data/test"

In this test, /data/test inside image B actually still contains FOO, not BAR.

This is a consequence of how Docker itself works. How do we work around it?

1. Modify the parent Dockerfile directly. Search Google for:

docker <image-name:version> source

to find the corresponding parent image's Dockerfile, and remove its VOLUME declaration.

VOLUME contents are not part of the image itself, so the requirement above has to be met by seeding the data some other way. When the image is run somewhere else, the volume starts out empty after launch; if you want to ship data together with the image, don't use a volume, use COPY.

If you really do need to rebuild a new image this way, you should remove that volume first.

https://stackoverflow.com/questions/46227454/modifying-volume-data-inherited-from-parent-image

docker volume create vs docker run -v /host_path:/container_path vs VOLUME in Dockerfile

Volumes are Docker's recommended way to persist data, but there are several ways to use them. What exactly is the difference between them?

To answer that, first be clear about one fact: a volume is a directory that persists data and lives under /var/lib/docker/volumes/.... Given that, you can:

1. Declare a volume in the Dockerfile. This means that every time a container is run from the image, the volume is created, but it is empty, even if you passed no -v flag to docker run.

2. Specify the volume to mount at run time:

docker run -v [host-dir:]container-dir
docker run -d \
  --name devtest \
  --mount source=myvol2,target=/app \
  nginx:latest
# -v and --mount have the same effect: if myvol2 does not exist yet, a volume is created under /var/lib/docker/volumes and then mounted into the container
docker run -d \
  --name devtest \
  -v myvol2:/app \
  nginx:latest

This mode combines the advantages of VOLUME in a Dockerfile and docker run -v: the data lives in a Docker-managed volume under /var/lib/docker/volumes/... yet is mounted into the container under a name you control.

3. docker volume create creates a named volume that other containers can quickly mount.

https://stackoverflow.com/questions/34809646/what-is-the-purpose-of-volume-in-dockerfile

docker run -d \
  -it \
  --name devtest \
  --mount type=bind,source="$(pwd)"/target,target=/app \
  nginx:latest
# equivalent to the following command: bind-mount a host directory into the container
docker run -d \
  -it \
  --name devtest \
  -v "$(pwd)"/target:/app \
  nginx:latest

 

Dockerfile execution order and changes (the build cache)

1 FROM ubuntu:16.04
2 RUN apt-get update
3 RUN apt-get install -y nginx
4 RUN apt-get install -y php5

If we have already built the Dockerfile above and then decide to replace nginx with apache and rebuild, lines 1 and 2 will not run again because their results are in the cache, but lines 3 and 4 will both be re-executed: line 3 changed, and line 4 depends on line 3, so both are rebuilt to produce the final image.
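You can watch the cache behaviour during the rebuild; a sketch (the image name is arbitrary, and --no-cache forces every step to re-run if you ever need a completely clean build):

docker build -t mystack .             # steps 1-2 print "---> Using cache", steps 3-4 run again
docker build --no-cache -t mystack .  # ignore the cache entirely and rebuild every layer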

How Docker AUFS works

 

A backup strategy using Docker data containers

In day-to-day website operations a lot of data accumulates: the databases themselves, many configuration files, Dockerfiles, docker-compose files, and so on, and backing all of it up is a challenge. Using the cloud provider's volume snapshot service works, but it tends to back up a lot of unnecessary data, and the extra space costs extra money. Meanwhile, many container service providers offer free private registries for storing data containers, which can save us that expense.

My suggested approach: start from a busybox base image, use COPY to put the data that needs backing up into the image, then tag it and push it to a private registry for storage.

FROM busybox:1.30
COPY ./datainhost /dataincontainer

Note that ./datainhost is resolved relative to the build context (here, the directory containing Dockerfile-databackup).

If you need to COPY a directory that is outside the build context into the image, you can do this:

  • go to your build path
  • mkdir -p some_name
  • sudo mount --bind src_dir ./some_name

Then reference some_name directly in the Dockerfile's COPY instruction to copy the external folder in.

 

Then run the following shell command on the host (in the directory that contains the Dockerfile):

docker build -f Dockerfile-databackup -t registry-internal.cn-shanghai.aliyuncs.com/namespace/reponame:$(date +"%F") .

This command tags the built image with something like registry-internal.../reponame:2019-03-20.

Then just push it.
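A sketch of that push, reusing the date-based tag from the build command above (log in to the registry first if you have not already):

docker push registry-internal.cn-shanghai.aliyuncs.com/namespace/reponame:$(date +"%F")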

Note that for Aliyun hosts the internal address of this registry does not consume public bandwidth, so it is very fast and convenient.

Tab completion for docker subcommands

https://github.com/samneirinck/posh-docker
