Docker Usage Summary (Dockerfile, Compose, Swarm)

Docker Basics

Check the Linux kernel version

uname -r

Check detailed Linux release information

cat /etc/*elease
CentOS Linux release 7.6.1810 (Core) 
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

CentOS Linux release 7.6.1810 (Core) 
CentOS Linux release 7.6.1810 (Core) 

The five container isolation namespaces

  • pid: process isolation
  • net: network isolation (each container gets its own IP address, gateway, and subnet mask)
  • ipc: inter-process communication isolation
  • mnt: filesystem (mount point) isolation
  • uts: hostname and domain name isolation (hostname, domainname); each container has its own machine name
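
On any Linux host these namespace memberships can be seen directly under `/proc/<pid>/ns`; a minimal sketch (no Docker required):

```shell
# Each process's namespace memberships are exposed as symlinks under
# /proc/<pid>/ns; a container's processes simply point at different ones.
ns_list=$(ls /proc/self/ns/ 2>/dev/null || echo "unavailable (non-Linux host)")
echo "$ns_list"
# For a running container, substitute its host PID for "self", e.g.:
#   pid=$(docker inspect -f '{{.State.Pid}}' some-redis)
#   ls -l /proc/$pid/ns/
```

Comparing the symlink targets for a host shell and a container PID shows exactly which namespaces differ.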

Installing Docker on CentOS

Official guide: https://docs.docker.com/install/linux/docker-ce/centos/

  1. Remove old versions
    sudo yum remove docker \
                      docker-client \
                      docker-client-latest \
                      docker-common \
                      docker-latest \
                      docker-latest-logrotate \
                      docker-logrotate \
                      docker-engine
  2. Install prerequisite packages
    sudo yum install -y yum-utils \
      device-mapper-persistent-data \
      lvm2
  3. Set up the repository
    # Aliyun mirror
    sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    # Official
    sudo yum-config-manager \
        --add-repo \
        https://download.docker.com/linux/centos/docker-ce.repo
  4. Install Docker CE
    sudo yum install docker-ce docker-ce-cli containerd.io
  5. Start Docker and enable it at boot
    systemctl start docker
    systemctl enable docker
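
A quick post-install sanity check, sketched so it is harmless when Docker or systemd is absent:

```shell
# Verify the client is installed and the service is enabled at boot.
docker_bin=$(command -v docker 2>/dev/null || echo "not installed")
echo "docker client: $docker_bin"
# With systemd, confirm the service will start at boot:
systemctl is-enabled docker 2>/dev/null || echo "systemd/docker unit not available here"
```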

Where Docker lives on disk

  • Locate the Docker client binary: /usr/bin/docker
    find / -name docker
    /run/docker
    /sys/fs/cgroup/pids/docker
    /sys/fs/cgroup/cpuset/docker
    /sys/fs/cgroup/freezer/docker
    /sys/fs/cgroup/devices/docker
    /sys/fs/cgroup/blkio/docker
    /sys/fs/cgroup/perf_event/docker
    /sys/fs/cgroup/memory/docker
    /sys/fs/cgroup/net_cls,net_prio/docker
    /sys/fs/cgroup/hugetlb/docker
    /sys/fs/cgroup/cpu,cpuacct/docker
    /sys/fs/cgroup/systemd/docker
    /etc/docker
    /var/lib/docker
    /var/lib/docker/overlay2/ec5a827479e221461a396c7d0695226ec60b642544f2f921e2da967426b1853c/diff/docker
    /var/lib/docker/overlay2/cf92e8387d988e9f87dc3656bb21d3a2fefff02e3505e1d282c0d105cb703ab1/merged/docker
    /var/lib/docker/overlay2/df3551b1764d57ad79604ace4c1b75ab1e47cdca2fb6d526940af8b400eee4aa/diff/etc/dpkg/dpkg.cfg.d/docker
    /usr/bin/docker
    /usr/share/bash-completion/completions/docker
    /docker
  • Locate the Docker daemon binary: /usr/bin/dockerd

    find / -name dockerd
    /etc/alternatives/dockerd
    /var/lib/alternatives/dockerd
    /usr/bin/dockerd
  • lib + data: /var/lib/docker

  • config: /etc/docker

  • Locate the docker.service unit file: /usr/lib/systemd/system/docker.service
    find / -name docker.service
    [root@localhost ~]# cat /usr/lib/systemd/system/docker.service
    [Unit]
    Description=Docker Application Container Engine
    Documentation=https://docs.docker.com
    BindsTo=containerd.service
    After=network-online.target firewalld.service containerd.service
    Wants=network-online.target
    Requires=docker.socket
    
    [Service]
    Type=notify
    # the default is not to use systemd for cgroups because the delegate issues still
    # exists and systemd currently does not support the cgroup feature set required
    # for containers run by docker
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    ExecReload=/bin/kill -s HUP $MAINPID
    TimeoutSec=0
    RestartSec=2
    Restart=always
    
    # Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
    # Both the old, and new location are accepted by systemd 229 and up, so using the old location
    # to make them work for either version of systemd.
    StartLimitBurst=3
    
    # Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
    # Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
    # this option work for either version of systemd.
    StartLimitInterval=60s
    
    # Having non-zero Limit*s causes performance problems due to accounting overhead
    # in the kernel. We recommend using cgroups to do container-local accounting.
    LimitNOFILE=infinity
    LimitNPROC=infinity
    LimitCORE=infinity
    
    # Comment TasksMax if your systemd version does not supports it.
    # Only systemd 226 and above support this option.
    TasksMax=infinity
    
    # set delegate yes so that systemd does not reset the cgroups of docker containers
    Delegate=yes
    
    # kill only the docker process, not all processes in the cgroup
    KillMode=process
    
    [Install]
    WantedBy=multi-user.target

Understanding the dockerd configuration

dockerd reference: https://docs.docker.com/engine/reference/commandline/dockerd/

Mounting a data disk

  1. Use fdisk -l to list the disks on the host
    fdisk -l
    [root@localhost ~]# fdisk -l
    
    Disk /dev/vda: 53.7 GB, 53687091200 bytes, 104857600 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0x000b0ebb
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/vda1   *        2048   104856254    52427103+  83  Linux
    
    Disk /dev/vdb: 536.9 GB, 536870912000 bytes, 1048576000 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
  2. Format the disk with mkfs.ext4
    # mkfs.ext4 <device>
    
    mkfs.ext4 /dev/vdb
  3. Mount the disk with mount
    mount /dev/vdb /boot
  4. Run df -h to check current disk usage
    df -h
    [root@localhost ~]# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/vda1        50G  7.4G   40G  16% /
    devtmpfs        7.8G     0  7.8G   0% /dev
    tmpfs           7.8G     0  7.8G   0% /dev/shm
    tmpfs           7.8G  592K  7.8G   1% /run
    tmpfs           7.8G     0  7.8G   0% /sys/fs/cgroup
    overlay          50G  7.4G   40G  16% /var/lib/docker/overlay2/c76fb87ef4c263e24c7f6874121fb161ce9b22db572db66ff1992ca6daf5768b/merged
    shm              64M     0   64M   0% /var/lib/docker/containers/afe151311ee560e63904e3e9d3c1053b8bbb6fd5e3b2d4c74001091b132fe3bd/mounts/shm
    overlay          50G  7.4G   40G  16% /var/lib/docker/overlay2/5ca6ed8e1671cb590705f53f89af8f8f5b85a6cdfc8137b3e12e4fec6c76fcea/merged
    shm              64M  4.0K   64M   1% /var/lib/docker/containers/79427c180de09f78e33974278043736fca80b724db8b9bce42e44656d04823b3/mounts/shm
    tmpfs           1.6G     0  1.6G   0% /run/user/0
    /dev/vdb        493G   73M  467G   1% /boot
  5. Use blkid to get each disk's UUID and filesystem type
    [root@localhost ~]# blkid
    /dev/vda1: UUID="105fa8ff-bacd-491f-a6d0-f99865afc3d6" TYPE="ext4" 
    /dev/vdb: UUID="97a17b64-d025-478c-8981-105214e99ff4" TYPE="ext4" 

     

  6. Configure automatic mounting at boot

    vim /etc/fstab
    
    UUID=97a17b64-d025-478c-8981-105214e99ff4  /data  ext4  defaults  1  1
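
Before rebooting it is worth checking that the new entry actually parses and mounts; a sketch (assumes the `/data` entry above, needs root):

```shell
# `mount -a` mounts everything in /etc/fstab that is not yet mounted;
# catching a typo here is much cheaper than a failed boot.
if [ "$(id -u)" -eq 0 ] && [ -f /etc/fstab ]; then
  fstab_status=$(mount -a 2>&1 && echo "OK" || echo "error; fix /etc/fstab before rebooting")
else
  fstab_status="skipped (needs root and /etc/fstab)"
fi
echo "fstab check: $fstab_status"
```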

Changing the Docker storage location

  1. Create or edit the Docker daemon config file
    # Create or edit the Docker daemon config file
    vim /etc/docker/daemon.json
    
    {
     "data-root": "/data/docker"
    }
  2. Create the new data directory
    # Create the new data directory (-p creates parents as needed)
    mkdir -p /data/docker
  3. Stop Docker

    # Stop Docker
    service docker stop
  4. Copy the existing data
    # Copy the existing data (-a preserves ownership, permissions, and symlinks)
    cp -a /var/lib/docker/. /data/docker/
  5. Remove the old files
    # Remove the old files (better to keep them until the move is verified)
    # rm -rf /var/lib/docker/
  6. Verify that the Docker data root has changed

    # Verify that the Docker data root has changed
    docker info
    Note: it is best to switch the data directory right after installing Docker; otherwise volumes inside containers that are already running may keep using the old location.
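
`docker info` prints a lot; the data root can be extracted directly with a Go template (a sketch; falls back gracefully when no daemon is running):

```shell
# Docker Root Dir should now report /data/docker.
root_dir=$(docker info --format '{{ .DockerRootDir }}' 2>/dev/null \
  || echo "daemon not reachable")
echo "Docker data root: $root_dir"
```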

Registry mirror (image pull accelerator)

sudo mkdir -p /etc/docker
vim /etc/docker/daemon.json

{
  "registry-mirrors": ["https://uwxsp1y1.mirror.aliyuncs.com"],
  "data-root": "/data/docker"
}

sudo systemctl daemon-reload
sudo systemctl restart docker
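
dockerd refuses to start when daemon.json is malformed, so it pays to validate the JSON before restarting; a sketch (writes to /tmp so nothing real is touched):

```shell
# Validate a daemon.json candidate before installing it to /etc/docker.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://uwxsp1y1.mirror.aliyuncs.com"],
  "data-root": "/data/docker"
}
EOF
if python3 -m json.tool /tmp/daemon.json >/dev/null 2>&1; then
  json_ok="valid"
else
  json_ok="INVALID - do not restart docker with this file"
fi
echo "daemon.json: $json_ok"
```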

Viewing daemon logs

# Edit the config to enable debug logging
vim /etc/docker/daemon.json

{
  "registry-mirrors": ["https://uwxsp1y1.mirror.aliyuncs.com"],
  "data-root": "/data/docker",
  "debug":true
}


# journalctl shows all logs for a systemd service.
journalctl -u docker.service -f 

Connecting to the Docker daemon remotely

  1. Edit the ExecStart line in docker.service
    # Edit the ExecStart line in docker.service (drop -H fd:// so that hosts can be set in daemon.json)
    vim /usr/lib/systemd/system/docker.service
    # ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    ExecStart=/usr/bin/dockerd --containerd=/run/containerd/containerd.sock
  2. Edit daemon.json

    # Edit daemon.json
    vim /etc/docker/daemon.json
    
    {
      "registry-mirrors": ["https://uwxsp1y1.mirror.aliyuncs.com"],
      "data-root": "/data/docker",
      "debug":true,
      "hosts": ["192.168.103.240:6381","unix:///var/run/docker.sock"]
    }
    Note: a plain TCP endpoint has no authentication; anyone who can reach it has root-equivalent access to the host, so firewall the port or enable TLS.
  3. Reload and restart

    # Reload and restart
    sudo systemctl daemon-reload
    service docker restart
  4. Check the listening port

    # Check the listening port
    netstat -tlnp
    
    [root@localhost docker]# netstat -tlnp
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 192.168.103.240:6381    0.0.0.0:*               LISTEN      27825/dockerd       
    tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd           
    tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      3743/dnsmasq        
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      3122/sshd           
    tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      3109/cupsd          
    tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      3479/master         
    tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      14503/sshd: root@pt 
    tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd           
    tcp6       0      0 :::22                   :::*                    LISTEN      3122/sshd           
    tcp6       0      0 ::1:631                 :::*                    LISTEN      3109/cupsd          
    tcp6       0      0 ::1:25                  :::*                    LISTEN      3479/master         
    tcp6       0      0 ::1:6010                :::*                    LISTEN      14503/sshd: root@pt 
  5. Test the remote connection

    # Test the remote connection
    docker -H 192.168.103.240:6381 ps
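
The `-H` flag and the `DOCKER_HOST` environment variable are interchangeable ways to target a remote daemon (address from the example above; substitute your own host:port):

```shell
# The same endpoint, expressed both ways (nothing is contacted here).
DOCKER_HOST=tcp://192.168.103.240:6381
echo "docker -H ${DOCKER_HOST#tcp://} ps        # per-command flag"
echo "export DOCKER_HOST=$DOCKER_HOST   # then a plain 'docker ps' works"
```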

Container Basics

Common docker container commands

docker run --help
[root@localhost ~]# docker run --help

Usage:    docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

Run a command in a new container

Options:
      --add-host list                  Add a custom host-to-IP mapping (host:ip)
  -a, --attach list                    Attach to STDIN, STDOUT or STDERR
      --blkio-weight uint16            Block IO (relative weight), between 10 and 1000, or 0 to disable (default 0)
      --blkio-weight-device list       Block IO weight (relative device weight) (default [])
      --cap-add list                   Add Linux capabilities
      --cap-drop list                  Drop Linux capabilities
      --cgroup-parent string           Optional parent cgroup for the container
      --cidfile string                 Write the container ID to the file
      --cpu-period int                 Limit CPU CFS (Completely Fair Scheduler) period
      --cpu-quota int                  Limit CPU CFS (Completely Fair Scheduler) quota
      --cpu-rt-period int              Limit CPU real-time period in microseconds
      --cpu-rt-runtime int             Limit CPU real-time runtime in microseconds
  -c, --cpu-shares int                 CPU shares (relative weight)
      --cpus decimal                   Number of CPUs
      --cpuset-cpus string             CPUs in which to allow execution (0-3, 0,1)
      --cpuset-mems string             MEMs in which to allow execution (0-3, 0,1)
  -d, --detach                         Run container in background and print container ID
      --detach-keys string             Override the key sequence for detaching a container
      --device list                    Add a host device to the container
      --device-cgroup-rule list        Add a rule to the cgroup allowed devices list
      --device-read-bps list           Limit read rate (bytes per second) from a device (default [])
      --device-read-iops list          Limit read rate (IO per second) from a device (default [])
      --device-write-bps list          Limit write rate (bytes per second) to a device (default [])
      --device-write-iops list         Limit write rate (IO per second) to a device (default [])
      --disable-content-trust          Skip image verification (default true)
      --dns list                       Set custom DNS servers
      --dns-option list                Set DNS options
      --dns-search list                Set custom DNS search domains
      --entrypoint string              Overwrite the default ENTRYPOINT of the image
  -e, --env list                       Set environment variables
      --env-file list                  Read in a file of environment variables
      --expose list                    Expose a port or a range of ports
      --group-add list                 Add additional groups to join
      --health-cmd string              Command to run to check health
      --health-interval duration       Time between running the check (ms|s|m|h) (default 0s)
      --health-retries int             Consecutive failures needed to report unhealthy
      --health-start-period duration   Start period for the container to initialize before starting health-retries countdown (ms|s|m|h) (default 0s)
      --health-timeout duration        Maximum time to allow one check to run (ms|s|m|h) (default 0s)
      --help                           Print usage
  -h, --hostname string                Container host name
      --init                           Run an init inside the container that forwards signals and reaps processes
  -i, --interactive                    Keep STDIN open even if not attached
      --ip string                      IPv4 address (e.g., 172.30.100.104)
      --ip6 string                     IPv6 address (e.g., 2001:db8::33)
      --ipc string                     IPC mode to use
      --isolation string               Container isolation technology
      --kernel-memory bytes            Kernel memory limit
  -l, --label list                     Set meta data on a container
      --label-file list                Read in a line delimited file of labels
      --link list                      Add link to another container
      --link-local-ip list             Container IPv4/IPv6 link-local addresses
      --log-driver string              Logging driver for the container
      --log-opt list                   Log driver options
      --mac-address string             Container MAC address (e.g., 92:d0:c6:0a:29:33)
  -m, --memory bytes                   Memory limit
      --memory-reservation bytes       Memory soft limit
      --memory-swap bytes              Swap limit equal to memory plus swap: '-1' to enable unlimited swap
      --memory-swappiness int          Tune container memory swappiness (0 to 100) (default -1)
      --mount mount                    Attach a filesystem mount to the container
      --name string                    Assign a name to the container
      --network string                 Connect a container to a network (default "default")
      --network-alias list             Add network-scoped alias for the container
      --no-healthcheck                 Disable any container-specified HEALTHCHECK
      --oom-kill-disable               Disable OOM Killer
      --oom-score-adj int              Tune host's OOM preferences (-1000 to 1000)
      --pid string                     PID namespace to use
      --pids-limit int                 Tune container pids limit (set -1 for unlimited)
      --privileged                     Give extended privileges to this container
  -p, --publish list                   Publish a container's port(s) to the host
  -P, --publish-all                    Publish all exposed ports to random ports
      --read-only                      Mount the container's root filesystem as read only
      --restart string                 Restart policy to apply when a container exits (default "no")
      --rm                             Automatically remove the container when it exits
      --runtime string                 Runtime to use for this container
      --security-opt list              Security Options
      --shm-size bytes                 Size of /dev/shm
      --sig-proxy                      Proxy received signals to the process (default true)
      --stop-signal string             Signal to stop a container (default "SIGTERM")
      --stop-timeout int               Timeout (in seconds) to stop a container
      --storage-opt list               Storage driver options for the container
      --sysctl map                     Sysctl options (default map[])
      --tmpfs list                     Mount a tmpfs directory
  -t, --tty                            Allocate a pseudo-TTY
      --ulimit ulimit                  Ulimit options (default [])
  -u, --user string                    Username or UID (format: <name|uid>[:<group|gid>])
      --userns string                  User namespace to use
      --uts string                     UTS namespace to use
  -v, --volume list                    Bind mount a volume
      --volume-driver string           Optional volume driver for the container
      --volumes-from list              Mount volumes from the specified container(s)
  -w, --workdir string                 Working directory inside the container

docker run, docker exec

docker run instantiates a container from an image; many options can be supplied at that point.

Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

docker run -d --name some-redis redis is not reachable from outside the host: the container is network-isolated, using the default bridge mode.

  • -a stdin: attach to a standard stream (STDIN/STDOUT/STDERR);

  • -d: run the container in the background and print its ID;

  • -i: keep STDIN open (interactive mode), usually combined with -t;

  • -P: publish all exposed container ports to random high ports on the host

  • -p: publish a specific port, in the form host_port:container_port

  • -t: allocate a pseudo-TTY, usually combined with -i;

  • --name="nginx-lb": assign a name to the container;

  • --dns 8.8.8.8: DNS servers for the container, defaults to the host's;

  • --dns-search example.com: DNS search domain for the container, defaults to the host's;

  • -h "mars": set the container's hostname;

  • -e username="ritchie": set an environment variable;

    # Set the UTC+8 (China Standard) timezone
    docker run -e TZ=Asia/Shanghai -d --name some-redis redis
  • --env-file=[]: read environment variables from a file;

  • --cpuset="0-2" or --cpuset="0,1,2": pin the container to specific CPUs;

  • -m: set the container's maximum memory;

  • --net="bridge": network mode; bridge/host/none/container:<name|id> are supported;

  • --link=[]: link to another container;

  • --expose=[]: expose a port or a range of ports;

  • --volume, -v: bind mount a volume

    docker run -p 16379:6379 -d --name some-redis redis
  • --add-host: add a custom host-to-IP mapping
    # Scenario: consul health checks need the host machine's IP address
    docker run --add-host machineip:192.168.103.240 -d --name some-redis redis
    
    docker exec -it some-redis bash
    tail /etc/hosts
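
Several of the options above combine naturally into one command; a hypothetical dry-run sketch (the name, port, timezone, and memory limit are illustrative):

```shell
# Compose (but do not execute) a docker run that uses several flags at once.
run_cmd="docker run -d --name some-redis \
  -p 16379:6379 \
  -e TZ=Asia/Shanghai \
  --dns 8.8.8.8 \
  -h mars \
  -m 512m \
  --restart always \
  redis"
echo "$run_cmd"
```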

docker start, docker stop, docker kill

  • docker start: start one or more stopped containers

  • docker stop: stop a running container

  • docker restart: restart a container

  • docker kill: kill a running container (sends SIGKILL).

Batch-deleting containers

docker rm -f <container>
docker rm -f `docker ps -a -q`
docker container prune
# Extremely powerful cleanup; use with caution
# docker system prune
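
A slightly safer version of the batch delete, sketched with a guard so it is a no-op when no daemon is reachable (`xargs -r` skips the command on empty input):

```shell
# Remove all containers, but only if a docker daemon is actually reachable.
if docker info >/dev/null 2>&1; then
  removed=$(docker ps -aq | xargs -r docker rm -f | wc -l)
else
  removed="0 (no docker daemon reachable)"
fi
echo "containers removed: $removed"
```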

docker container monitoring commands

Viewing container logs

docker logs
[root@localhost ~]# docker logs some-redis
1:C 09 Jul 2019 03:07:03.406 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 09 Jul 2019 03:07:03.406 # Redis version=5.0.5, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 09 Jul 2019 03:07:03.406 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 09 Jul 2019 03:07:03.406 * Running mode=standalone, port=6379.
1:M 09 Jul 2019 03:07:03.406 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 09 Jul 2019 03:07:03.406 # Server initialized
1:M 09 Jul 2019 03:07:03.406 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 09 Jul 2019 03:07:03.406 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 09 Jul 2019 03:07:03.406 * Ready to accept connections

Container performance metrics

docker stats
[root@localhost ~]# docker stats

CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
aaa8bec01038        some-redis          0.04%               8.375MiB / 1.795GiB   0.46%               656B / 0B           139kB / 0B          4

Container port -> host port

Querying port mappings

When you know the container port but not the host port (or the other way around), docker port resolves the mapping.

docker port [container]
[root@localhost ~]# docker port some-redis-2
6379/tcp -> 0.0.0.0:16379

Viewing the processes running inside a container

docker top [container]
[root@localhost ~]# docker top some-redis-2
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
polkitd             18356               18338               0                   13:20               pts/0               00:00:00            redis-server *:6379

Detailed container information

docker inspect [OPTIONS] NAME|ID [NAME|ID...]
[root@localhost ~]# docker inspect some-redis-2
[
    {
        "Id": "6248c674f0672620d0cd8fd4a573c0db48f5f7c75b61fbd5150072eaac6ed4b2",
        "Created": "2019-07-09T05:20:06.985445479Z",
        "Path": "docker-entrypoint.sh",
        "Args": [
            "redis-server"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 18356,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2019-07-09T05:20:07.255368955Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:bb0ab8a99fe694e832e56e15567c83dee4dcfdd321d0ad8ab9bd64d82d6a6bfb",
        "ResolvConfPath": "/data/docker/containers/6248c674f0672620d0cd8fd4a573c0db48f5f7c75b61fbd5150072eaac6ed4b2/resolv.conf",
        "HostnamePath": "/data/docker/containers/6248c674f0672620d0cd8fd4a573c0db48f5f7c75b61fbd5150072eaac6ed4b2/hostname",
        "HostsPath": "/data/docker/containers/6248c674f0672620d0cd8fd4a573c0db48f5f7c75b61fbd5150072eaac6ed4b2/hosts",
        "LogPath": "/data/docker/containers/6248c674f0672620d0cd8fd4a573c0db48f5f7c75b61fbd5150072eaac6ed4b2/6248c674f0672620d0cd8fd4a573c0db48f5f7c75b61fbd5150072eaac6ed4b2-json.log",
        "Name": "/some-redis-2",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {
                "6379/tcp": [
                    {
                        "HostIp": "",
                        "HostPort": "16379"
                    }
                ]
            },
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "shareable",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": [],
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": [
                "/proc/asound",
                "/proc/acpi",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware"
            ],
            "ReadonlyPaths": [
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ]
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/data/docker/overlay2/c7693e58e45a483a6cb66deac7d281a647a56e3c9043722f3379a5dd496646d7-init/diff:/data/docker/overlay2/d26d3067261173cfa34d57bbdc3371b164805203ff05a2d71ce868ddc5b5a2bc/diff:/data/docker/overlay2/6a35d92d8841364ee7443a84e18b42c22f60294a748f552ad4a0852507236c7f/diff:/data/docker/overlay2/5ed2ceb6771535d14cd64f375cc31462a82ff57503bbc3abace0589be3124955/diff:/data/docker/overlay2/9543ee1ade1f2d4341c00cadef3ec384eb3761c35d10726cc6ade4a3bfb99be2/diff:/data/docker/overlay2/86f47cf021b01ddec50356ae4c5387b910f65f75f97298de089336b4a413ce25/diff:/data/docker/overlay2/df3551b1764d57ad79604ace4c1b75ab1e47cdca2fb6d526940af8b400eee4aa/diff",
                "MergedDir": "/data/docker/overlay2/c7693e58e45a483a6cb66deac7d281a647a56e3c9043722f3379a5dd496646d7/merged",
                "UpperDir": "/data/docker/overlay2/c7693e58e45a483a6cb66deac7d281a647a56e3c9043722f3379a5dd496646d7/diff",
                "WorkDir": "/data/docker/overlay2/c7693e58e45a483a6cb66deac7d281a647a56e3c9043722f3379a5dd496646d7/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "volume",
                "Name": "88f774ae0567f3e3f834a9f469c0db377be8948b82d05ee757e6eabe185903e6",
                "Source": "/data/docker/volumes/88f774ae0567f3e3f834a9f469c0db377be8948b82d05ee757e6eabe185903e6/_data",
                "Destination": "/data",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "6248c674f067",
            "Domainname": "",
            "User": "",
            "AttachStdin": true,
            "AttachStdout": true,
            "AttachStderr": true,
            "ExposedPorts": {
                "6379/tcp": {}
            },
            "Tty": true,
            "OpenStdin": true,
            "StdinOnce": true,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "GOSU_VERSION=1.10",
                "REDIS_VERSION=5.0.5",
                "REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-5.0.5.tar.gz",
                "REDIS_DOWNLOAD_SHA=2139009799d21d8ff94fc40b7f36ac46699b9e1254086299f8d3b223ca54a375"
            ],
            "Cmd": [
                "redis-server"
            ],
            "ArgsEscaped": true,
            "Image": "redis",
            "Volumes": {
                "/data": {}
            },
            "WorkingDir": "/data",
            "Entrypoint": [
                "docker-entrypoint.sh"
            ],
            "OnBuild": null,
            "Labels": {}
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "31f5b2c1c0d59c3f8866fa2b02db2889e4d4d54076cbf88ae7d6057758b3f40a",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "6379/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "16379"
                    }
                ]
            },
            "SandboxKey": "/var/run/docker/netns/31f5b2c1c0d5",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "ab4f1a16403dfd415703868b52b33ea0b6d9d28b750e5ce80810d0f9b89f4af1",
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.3",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:03",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "80fba7499001738402fe35f0c1bb758ddd5f680abf75f4bd6a0456b3021ee5fe",
                    "EndpointID": "ab4f1a16403dfd415703868b52b33ea0b6d9d28b750e5ce80810d0f9b89f4af1",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.3",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:03",
                    "DriverOpts": null
                }
            }
        }
    }
]

Exporting and importing containers

  • docker export: export a container's filesystem as a tar archive to STDOUT.
    docker export [OPTIONS] CONTAINER
    
    # OPTIONS:
    # -o : write the output to a file instead.
    
    # Example:
    # docker export -o /app2/1.tar.gz some-redis
  • docker import: create an image from an exported archive.

    docker import [OPTIONS] file|URL|- [REPOSITORY[:TAG]]
    
    # OPTIONS:
    # -c : apply Dockerfile instructions while creating the image;
    # -m : commit message for the imported image;
    
    # Example:
    # Restore the image
    # docker import /app2/1.tar.gz newredis
    # Create a container; the redis-server start command must be given explicitly
    # docker run -d --name new-some-redis-2 newredis redis-server
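
The whole round trip, sketched with a guard (paths and names are illustrative; it only acts when a daemon is reachable):

```shell
# Export a container's filesystem, re-import it as an image, and run it.
if docker info >/dev/null 2>&1; then
  docker export -o /tmp/some-redis.tar some-redis
  docker import /tmp/some-redis.tar newredis
  # export/import flattens the image and discards ENTRYPOINT/CMD,
  # so the start command must be supplied again at run time:
  docker run -d --name new-some-redis newredis redis-server
  roundtrip="done"
else
  roundtrip="skipped (no docker daemon reachable)"
fi
echo "export/import round trip: $roundtrip"
```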

The docker image command in detail

docker image
[root@localhost app2]# docker image

Usage:    docker image COMMAND

Manage images

Commands:
  build       Build an image from a Dockerfile
  history     Show the history of an image
  import      Import the contents from a tarball to create a filesystem image
  inspect     Display detailed information on one or more images
  load        Load an image from a tar archive or STDIN
  ls          List images
  prune       Remove unused images
  pull        Pull an image or a repository from a registry
  push        Push an image or a repository to a registry
  rm          Remove one or more images
  save        Save one or more images to a tar archive (streamed to STDOUT by default)
  tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

Run 'docker image COMMAND --help' for more information on a command.

Pulling, removing, and inspecting images

  • docker pull: pull or update a specified image from a registry
    docker pull [OPTIONS] NAME[:TAG|@DIGEST]
    
    # OPTIONS:
    # -a : pull all tagged images in the repository
    # --disable-content-trust : skip image verification (on by default)
  • docker rmi: remove one or more local images.
    docker rmi [OPTIONS] IMAGE [IMAGE...]
    
    # OPTIONS:
    # -f : force removal;
    # --no-prune : do not remove untagged parent layers (removed by default);
  • docker inspect: get the metadata of a container/image.
    docker inspect [OPTIONS] NAME|ID [NAME|ID...]
    
    # OPTIONS:
    # -f : Go template for formatting the output.
    # -s : show total file sizes.
    # --type : return JSON only for the specified type.
  • docker images: list local images.
    docker images [OPTIONS] [REPOSITORY[:TAG]]
    
    # OPTIONS:
    # -a : list all local images, including intermediate layers (filtered out by default);
    # --digests : show image digests;
    # -f : filter the listed images;
    # --format : Go template for formatting the output;
    # --no-trunc : show the full image information;
    # -q : show image IDs only.

Exporting, importing, and migrating images

docker export / import package up a container's filesystem
docker save / load package up an image

  • docker save: save one or more images to a tar archive.
    docker save [OPTIONS] IMAGE [IMAGE...]
    
    # OPTIONS:
    # -o : write to a file instead of STDOUT.
    
    # Example:
    # docker save -o /app2/1.tar.gz redis
  • docker load: load an image exported with the docker save command.
    docker load [OPTIONS]
    
    # OPTIONS:
    # -i : read from the specified tar archive file.
    # -q : suppress the load output.
    
    # Example:
    # docker load -i /app2/1.tar.gz

docker tag

Tagging makes it easy to push an image to your own private registry

  • docker tag: tag a local image, placing it into a repository.
    docker tag [OPTIONS] IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG]
    
    # Example:
    # docker tag redis:latest 13057686866/redis_1
    # log in
    # docker login
    # push to the remote private repository
    # docker push 13057686866/redis_1

Building images manually

  • The docker build command builds an image from a Dockerfile.
    docker build [OPTIONS] PATH | URL | -
    
    # OPTIONS:
    # --build-arg=[] : set build-time variables;
    # --cpu-shares : CPU share weighting;
    # --cpu-period : limit the CPU CFS period;
    # --cpu-quota : limit the CPU CFS quota;
    # --cpuset-cpus : CPUs in which to allow execution;
    # --cpuset-mems : memory nodes in which to allow execution;
    # --disable-content-trust : skip image verification (on by default);
    # -f : path of the Dockerfile to use;
    # --force-rm : always remove intermediate containers;
    # --isolation : container isolation technology;
    # --label=[] : set metadata for the image;
    # -m : memory limit;
    # --memory-swap : swap limit equal to memory plus swap; "-1" means unlimited swap;
    # --no-cache : do not use the cache when building the image;
    # --pull : always attempt to pull a newer version of the base image;
    # --quiet, -q : quiet mode, print only the image ID on success;
    # --rm : remove intermediate containers after a successful build;
    # --shm-size : size of /dev/shm, default 64MB;
    # --ulimit : ulimit options.
    # --tag, -t : name and optional tag, usually name:tag or just name; several tags can be set in one build.
    # --network : network mode for RUN instructions during the build (default "default")

dockerfile

Build your own images by hand with docker build

Official docs: https://docs.docker.com/engine/reference/builder/

Dockerfile instructions

  • FROM

  • ENV

  • RUN

  • CMD

  • LABEL

  • EXPOSE

  • ADD

    ADD not only copies files, it can also download remote files.
    A local tar archive is extracted automatically as well (a remote URL is not).

  • COPY

  • ENTRYPOINT

  • VOLUME

  • USER

  • WORKDIR

  • ONBUILD

  • STOPSIGNAL

  • HEALTHCHECK

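The ADD and HEALTHCHECK notes above can be sketched in a Dockerfile fragment; the URL and file names here are made-up examples, not from this article:

```dockerfile
# ADD can fetch a remote file (remote downloads are NOT auto-extracted):
ADD https://example.com/tools.tar.gz /tmp/tools.tar.gz
# a local tar archive IS auto-extracted into the destination:
ADD local-app.tar.gz /opt/app/

# HEALTHCHECK tells the daemon how to probe the container's health:
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost/ || exit 1
```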
  1. Create a new empty project named WebApplication1
  2. Add a Dockerfile
    # 1 - start from the base image
    FROM mcr.microsoft.com/dotnet/core/sdk:2.2
    
    # 2 - copy my files into the /app folder of this OS
    COPY . /app
    
    # working directory
    WORKDIR /app
    
    # 3 - publish
    RUN cd /app && dotnet publish "WebApplication1.csproj" -c Release -o /work
    
    # 4 - tell the outside world the app exposes port 80
    EXPOSE 80
    
    # other settings
    ENV TZ Asia/Shanghai
    ENV ASPNETCORE_ENVIRONMENT Production
    
    # author information
    LABEL version="1.0"
    LABEL author="wyt"
    
    # run as this user
    USER root
    
    # set the working directory
    WORKDIR /work
    
    # 5 - start
    CMD ["dotnet","WebApplication1.dll"]
  3. Copy the whole WebApplication1 directory to the remote server

  4. Build the image

    cd /app/WebApplication1
    docker build -t 13057686866/webapp:v1 .
  5. Run the container

    docker run -d -p 18000:80 --name webapp3 13057686866/webapp:v1
  6. Verify it works

    curl http://192.168.103.240:18000/
    Hello World!

Dockerfile optimization strategies

Use .dockerignore to exclude files

Official docs: https://docs.docker.com/engine/reference/builder/#dockerignore-file

**/.dockerignore
**/.env
**/.git
**/.gitignore
**/.vs
**/.vscode
**/*.*proj.user
**/azds.yaml
**/charts
**/bin
**/obj
**/Dockerfile
**/Dockerfile.develop
**/docker-compose.yml
**/docker-compose.*.yml
**/*.dbmdl
**/*.jfm
**/secrets.dev.yaml
**/values.dev.yaml
**/.toolstarget

You can simply create the Dockerfile with VS, which generates a .dockerignore automatically.

Use multi-stage builds

Multi-stage builds: each FROM starts a new stage

Only the last FROM in a Dockerfile produces the final image; the other FROM stages just support it.

Once the final stage has been built, the intermediate stages are discarded.

 FROM build AS publish  gives the current stage an alias.

FROM mcr.microsoft.com/dotnet/core/aspnet:2.2 AS base
WORKDIR /app
EXPOSE 80

FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /src
COPY ["WebApplication1.csproj", ""]
RUN dotnet restore "WebApplication1.csproj"
COPY . .
WORKDIR "/src/"
RUN dotnet build "WebApplication1.csproj" -c Release -o /app

FROM build AS publish
RUN dotnet publish "WebApplication1.csproj" -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "WebApplication1.dll"] 

Remove unneeded files promptly

# 3-publish
RUN cd /app && dotnet publish "WebApplication1.csproj" -c Release -o /work && rm -rf /app

Minimize the number of layers

  • See the official Dockerfiles for reference
  • ADD vs. COPY: prefer COPY unless you need ADD's download/auto-extract features; extra ADD instructions add to the layer count.
  • Merge RUN instructions wherever possible
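A minimal sketch of the RUN-merging advice (the package name is illustrative): each RUN creates a layer, so chaining related commands with && and cleaning up inside the same instruction keeps both the layer count and the image size down.

```dockerfile
# one layer instead of three, and the apt cache never lands in any layer
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
```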

Running your own private registry

Official guide: https://docs.docker.com/registry/deploying/

A registry on your own LAN also speeds up pulls

  1. Pull the registry image
    docker pull registry:2
  2. Run the local registry container

    # run the local registry container
    docker run -d -p 5000:5000 --restart=always --name registry registry:2
  3. Pull the alpine image
    # pull the alpine image
    docker pull alpine
  4. Re-tag it so it points at the local registry
    # re-tag, pointing at the local registry
    docker tag alpine 192.168.103.240:5000/alpine:s1
  5. Push it to the local registry
    # push to the local registry
    docker push 192.168.103.240:5000/alpine:s1

    Problem: http: server gave HTTP response to HTTPS client (the HTTPS client refuses an HTTP response)
    Fix: https://docs.docker.com/registry/insecure/

    # Edit the daemon.json file, whose default location is /etc/docker/daemon.json on Linux or C:\ProgramData\docker\config\daemon.json on Windows Server. On Docker Desktop for Mac or Docker Desktop for Windows, click the Docker icon, choose Preferences, then choose Daemon.
    # If the daemon.json file does not exist, create it. Assuming there are no other settings in the file, it should contain:
    
    {
      "insecure-registries" : ["192.168.103.240:5000"]
    }
    
    # Substitute the address of your insecure registry for the one in the example.
    
    # With insecure registries enabled, Docker goes through the following steps:
    # 1 - First, try HTTPS.
    # 2 - If HTTPS is available but the certificate is invalid, ignore the certificate error.
    # 3 - If HTTPS is not available, fall back to HTTP.
    
    # Restart Docker for the changes to take effect.
    service docker restart
  6. Verify the image pushed successfully
    docker pull 192.168.103.240:5000/alpine:s1
  7. Pull the open-source registry UI image
    Official page: https://hub.docker.com/r/joxit/docker-registry-ui
    # pull the registry-ui image
    docker pull joxit/docker-registry-ui
  8. Allow cross-origin requests to the registry
    # enable CORS, see https://docs.docker.com/registry/configuration/
    # copy the config file out of the container
    docker cp registry:/etc/docker/registry/config.yml /app
    # edit the config file and add the CORS headers
    vim /app/config.yml
    
    version: 0.1
    log:
      fields:
        service: registry
    storage:
      cache:
        blobdescriptor: inmemory
      filesystem:
        rootdirectory: /var/lib/registry
    http:
      addr: :5000
      headers:
        X-Content-Type-Options: [nosniff]
        Access-Control-Allow-Origin: ['*']
        Access-Control-Allow-Methods: ['*']
        Access-Control-Max-Age: [1728000]
    health:
      storagedriver:
        enabled: true
        interval: 10s
        threshold: 3
        
    # restart the registry container
    docker rm registry -f
    docker run -d -p 5000:5000 --restart=always --name registry -v /app/config.yml:/etc/docker/registry/config.yml registry:2
  9. Run the registry-ui container
    # run the container
    docker rm -f registry-ui
    docker run -d -p 8002:80 --name registry-ui joxit/docker-registry-ui
  10. Open the UI in a browser

Using Alibaba Cloud's image registry service

Console: https://cr.console.aliyun.com/cn-hangzhou/instances/repositories

Getting connected:

  1. Log in to the Alibaba Cloud Docker Registry

    sudo docker login --username=tb5228628_2012 registry.cn-hangzhou.aliyuncs.com

    The username is your full Alibaba Cloud account name; the password is the one set when the service was enabled.

  2. Pull an image from the registry

    sudo docker pull registry.cn-hangzhou.aliyuncs.com/wyt_registry/wyt_registry:[image tag]
  3. Push an image to the registry

    sudo docker tag [ImageId] registry.cn-hangzhou.aliyuncs.com/wyt_registry/wyt_registry:[image tag]
    sudo docker push registry.cn-hangzhou.aliyuncs.com/wyt_registry/wyt_registry:[image tag]

Mounting data with volumes

Three mechanisms let data live outside the container, shrinking the container layer's size and improving performance (by avoiding the container's read-write layer).

Managing volumes

# create a volume
docker volume create redisdata
# use the volume
docker run -d -v redisdata:/data --name some-redis redis

Advantages:

  • Independent of the host's directory layout, so easier to migrate and back up.
  • Manageable uniformly through docker cli commands.
  • Volumes work across platforms, so there are no cross-platform directory path issues.
  • Volume plugins make it easy to use remote storage on cloud platforms such as AWS.

Bind mounts (files and directories)

A host directory is fed into the container at startup and the two stay bound in both directions afterwards.

tmpfs: mounting a container directory into the host's memory

# the container's /tmp files behave normally
docker run --rm -it webapp bash
# back the container's /tmp with tmpfs, hiding its contents
docker run --rm --tmpfs /tmp -it webapp bash

Networking

Single-host networking

The built-in drivers are bridge (the default), overlay, host, macvlan, and none

On the docker host, the default bridge network uses the docker0 bridge

How the default bridge roughly works

When docker starts, it creates the default docker0 bridge.

When a container starts, docker creates a veth pair: one end attaches to the host's docker0 bridge, the other end is moved into the container and renamed eth0.

Whatever one end of the veth pair receives is passed straight to the other end.

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:a4ff:fe79:a36f  prefixlen 64  scopeid 0x20<link>
        ether 02:42:a4:79:a3:6f  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11  bytes 1439 (1.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


vethfc5e4ce: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::f802:99ff:fe73:34d7  prefixlen 64  scopeid 0x20<link>
        ether fa:02:99:73:34:d7  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 17  bytes 1947 (1.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
6: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever



[root@localhost ~]#  docker run -it alpine ash
/ # ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 

Limitations of the default bridge

It has no service discovery: containers on the same subnet cannot reach each other by service name, only by IP address.

User-defined bridge networks

They come with built-in service discovery

# create a bridge network
docker network create my-net 
# create containers on it
docker run -it --network my-net --name some-redis alpine ash
docker run -it --network my-net --name some-redis2 alpine ash
# inside some-redis, ping the container some-redis2
ping some-redis2

Publishing container ports

To let programs outside the host reach a container on the host's bridge, publish a port with -p

# run a container with a published port
docker run -it --network my-net -p 80:80 --name some-redis-1 alpine ash
# inspect the NAT forwarding details
iptables -t nat -L -n

Chain DOCKER (2 references)
target     prot opt source               destination         
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           
RETURN     all  --  0.0.0.0/0            0.0.0.0/0           
DNAT       tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:172.18.0.4:80

Multi-host networking

The overlay network

It enables cross-host container communication

  1. Use docker swarm init to build a docker cluster network
    # 192.168.103.240
    docker swarm init
    # 192.168.103.226
    docker swarm join --token SWMTKN-1-0g4cs8fcatshczn5koupqx7lulak20fbvu99uzjb5asaddblny-bio99e9kktn023k527y3tjgyv 192.168.103.240:2377
  2. Create a user-defined overlay network that standalone containers can attach to
    docker network create --driver=overlay --attachable test-net

    TCP 2377: communication among cluster manager nodes
    TCP 7946 and UDP 7946: communication among nodes
    UDP 4789: overlay network traffic

Demo

  1. Start redis on 192.168.103.226
    docker run --network test-net --name some-redis -d redis
  2. Set up the python app on 192.168.103.240
    mkdir /app
    vim /app/app.py
    vim /app/Dockerfile
    vim /app/requirements.txt

    app.py

    from flask import Flask
    from redis import Redis, RedisError
    import os
    import socket
    
    # Connect to Redis
    redis = Redis(host="some-redis", db=0, socket_connect_timeout=2, socket_timeout=2)
    
    app = Flask(__name__)
    
    @app.route("/")
    def hello():
        try:
            visits = redis.incr("counter")
        except RedisError:
            visits = "<i>cannot connect to Redis, counter disabled</i>"
    
        html = "<b>Hostname:</b> {hostname}<br/>" \
               "<b>Visits:</b> {visits}"
        return html.format(hostname=socket.gethostname(), visits=visits)
    
    if __name__ == "__main__":
        app.run(host='0.0.0.0', port=80)

    Dockerfile

    FROM python:2.7-slim
    
    WORKDIR /app
    
    COPY . .
    
    EXPOSE 80
    
    RUN pip install --trusted-host pypi.python.org -r requirements.txt
    
    VOLUME [ "/app" ]
    
    CMD [ "python", "app.py" ]

    requirements.txt

    Flask
    Redis
    # build the image
    docker build -t pyweb:v1 .
    # run the container
    docker run -d --network test-net -p 80:80 -v /app:/app --name pyapp pyweb:v1

    The result of accessing the app

Host mode 

This mode does not isolate the container's network from the host; the container uses the host's network directly

The simplest, bluntest approach

Overlay, by contrast, is complex but powerful, and harder to control.

docker-compose

What is docker-compose? One-command deployment of an application stack (or of a standalone program); docker-compose manages the entire lifecycle of your application stack.

Download

Official docs: https://docs.docker.com/compose/install/

# download the current stable release of Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# https://github.com/docker/compose/releases/download/1.24.1/docker-compose-Linux-x86_64
# tip: downloading the binary with a download manager and renaming it is faster
# make the binary executable
sudo chmod +x /usr/local/bin/docker-compose
# test the installation
docker-compose --version

A simple example

  1. Create an empty web project WebApplication1 and add the NLog and Redis packages
    Install-Package NLog.Targets.ElasticSearch
    Install-Package StackExchange.Redis
  2. Edit Program.cs to listen on port 80
    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseUrls("http://*:80")
            .UseStartup<Startup>();
  3. Edit Startup.cs to add logging and redis
    public Logger logger = LogManager.GetCurrentClassLogger();
    public ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("redis");
    
    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }
    
        app.Run(async (context) =>
        {
            var count = await redis.GetDatabase(0).StringIncrementAsync("counter");
            var info= $"you have been seen {count} times !";
            logger.Info(info);
    
            await context.Response.WriteAsync(info);
        });
    }
  4. Add an nlog.config file
    <?xml version="1.0" encoding="utf-8" ?>
    <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          autoReload="true"
          internalLogLevel="Warn">
    
        <extensions>
            <add assembly="NLog.Targets.ElasticSearch"/>
        </extensions>
    
        <targets>
            <target name="ElasticSearch" xsi:type="BufferingWrapper" flushTimeout="5000" >
                <target xsi:type="ElasticSearch" uri="http://elasticsearch:9200" documentType="web.app"/>
            </target>
        </targets>
    
        <rules>
            <logger name="*" minlevel="Trace" writeTo="ElasticSearch" />
        </rules>
    </nlog>
  5. Add a Dockerfile

    FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-stretch-slim AS base
    
    WORKDIR /data
    COPY . .
    
    EXPOSE 80
    
    ENTRYPOINT ["dotnet", "WebApplication1.dll"]
  6. Add a docker-compose.yml file

    version: '3.0'
    
    services:
    
      webapp: 
        build: 
          context: .
          dockerfile: Dockerfile
        ports: 
          - 80:80
        depends_on: 
          - redis
        networks: 
          - netapp
    
      redis: 
        image: redis
        networks: 
          - netapp
    
      elasticsearch: 
        image: elasticsearch:5.6.14
        networks: 
          - netapp
    
      kibana: 
        image: kibana:5.6.14
        ports: 
          - 5601:5601
        networks: 
          - netapp
    
    networks: 
      netapp:
  7. Publish the project and copy the output to the /app directory on the remote server

  8. Run docker-compose 

    cd /app
    docker-compose up --build
  9. Check the result
    Browse to the site: http://192.168.103.240/

    Open Kibana to view the logs: http://192.168.103.240:5601

Common docker-compose commands

  •  Control commands
    docker-compose ps
    docker-compose images
    docker-compose kill webapp
    docker-compose build
    docker-compose run      -> docker exec
    docker-compose scale
    docker-compose up       -> docker run
    docker-compose down
  • Status commands

    docker-compose logs
    docker-compose ps
    docker-compose top
    docker-compose port 
    docker-compose config

The compose file explained

Reference: https://docs.docker.com/compose/compose-file/

Common top-level yml keys

version      3.7 
services
config    (swarm)
secret    (swarm)
volume     
networks  

App-stack additions

Update docker-compose.yml in the WebApplication1 project 

version: '3.0'

services:

  webapp: 
    build: 
      context: .
      dockerfile: Dockerfile
    image: wyt/webapp
    container_name: webapplication
    restart: always
    ports: 
      - 80:80
    depends_on: 
      - redis
    networks: 
      - netapp

  redis: 
    image: redis
    networks: 
      - netapp

  elasticsearch: 
    image: elasticsearch:5.6.14
    networks: 
      - netapp
    volumes:
      - "esdata:/usr/share/elasticsearch/data"

  kibana: 
    image: kibana:5.6.14
    ports: 
      - 5601:5601
    networks: 
      - netapp

volumes:
  esdata:

networks: 
  netapp:

Some ready-made docker-compose scripts: https://download.csdn.net/download/qq_25153485/11324352

Some docker-compose usage principles

Deploy with multiple files

  • In production, put the code inside the container; in the test environment, mount the code instead
    test:  docker-compose -f docker-compose.yml -f test.yml up 
    prd:   docker-compose -f docker-compose.yml -f prd.yml up 
  • In production, bind the program's default port; on test machines, bind other ports to avoid conflicts.

  • In production, configure restart: always so that containers restart after they die.
  • Add log aggregation, shipping to es
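As a hedged sketch of the test override described above (the mount path is an assumption), a test.yml merged in with the second -f flag might look like this; single-value keys in the later file override the base file, while list-valued keys such as ports are concatenated rather than replaced:

```yaml
# test.yml, merged on top of docker-compose.yml
version: '3.0'
services:
  webapp:
    volumes:
      - ./src:/app                      # test: mount the code instead of baking it in
    environment:
      ASPNETCORE_ENVIRONMENT: Development
```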

Build on demand

# build only the image for the service named webapp (its dependencies are built too)
docker-compose build webapp
# rebuild and restart only the service named webapp, without its dependencies
docker-compose up --no-deps --build -d webapp

Variable interpolation

  1. Set environment variables on the host
    # set an environment variable
    export ASPNETCORE_ENVIRONMENT=Production
    # read it back
    echo $ASPNETCORE_ENVIRONMENT
    # useful for baking in the host's NIC ip so apps can read it
    # and for the image version tag
  2. Reference the environment variable in docker-compose.yml
    environment:
      ASPNETCORE_ENVIRONMENT: ${ASPNETCORE_ENVIRONMENT}
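Besides exported shell variables, docker-compose also reads a .env file in the project directory for interpolation; a sketch (the values are examples):

```
# .env, picked up automatically by docker-compose from the project directory
ASPNETCORE_ENVIRONMENT=Production
TAG=v1
```

With this file in place, a line like `image: "webapp:${TAG}"` in docker-compose.yml resolves to webapp:v1.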

Docker visualization with portainer

Installation guide: http://www.javashuo.com/article/p-gnnqvvqj-s.html

yml file

portainer:
  image: portainer/portainer
  ports:
    - 9000:9000
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  restart: always
  networks: 
    - netapp

Accessing docker remotely from python and C#

  1. Expose a TCP port for remote access
    Edit docker.service and change the ExecStart line
    vim /usr/lib/systemd/system/docker.service
    
    # ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    ExecStart=/usr/bin/dockerd --containerd=/run/containerd/containerd.sock

    Configure daemon.json 

    vim /etc/docker/daemon.json
    
    "hosts": ["192.168.103.240:18080","unix:///var/run/docker.sock"]
  2. Reload the configuration files and restart docker
    systemctl daemon-reload
    systemctl restart docker
  3. Check that the docker daemon is listening

    netstat -ano | grep 18080
    
    tcp        0      0 192.168.103.240:18080   0.0.0.0:*               LISTEN      off (0.00/0/0)
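Once the daemon is listening on TCP, the Engine REST API behind the CLI can be smoke-tested directly with curl before writing any client code (host and port are this article's values; the calls are shown commented out because they need the daemon above to be reachable):

```shell
api="http://192.168.103.240:18080"

# curl -s "$api/version"              # engine and API version, as JSON
# curl -s "$api/containers/json"      # the JSON behind `docker ps`

echo "$api/containers/json"
```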

Accessing docker from python

Official examples: https://docs.docker.com/develop/sdk/examples/

Accessing docker from C#

Community project: https://github.com/microsoft/Docker.DotNet

class Program
{
    static  async Task Main(string[] args)
    {
        DockerClient client = new DockerClientConfiguration(
                new Uri("http://192.168.103.240:18080"))
            .CreateClient();
        IList<ContainerListResponse> containers = await client.Containers.ListContainersAsync(
            new ContainersListParameters()
            {
                Limit = 10,
            });
        Console.WriteLine("Hello World!");
    }
}

cluster volumes

GlusterFS, an open-source distributed file system: https://www.gluster.org/

  1. Preparation: add the following entries to the /etc/hosts file
    on both machines
    vim /etc/hosts
    
    192.168.103.240 fs1
    192.168.103.226 fs2
  2. Install GlusterFS [on both nodes]
    yum install -y centos-release-gluster
    yum install -y glusterfs-server
    systemctl start glusterd
    systemctl enable glusterd
  3. Add fs2 to the cluster
    # run on fs1
    # add fs2 as a cluster peer
    gluster peer probe fs2
    # check the cluster status
    gluster peer status
    # list the cluster pool
    gluster pool list
    # list all commands
    gluster help global
  4. Create the volume

    # create the brick directory (on both machines)
    mkdir -p /data/glusterfs/glustervolume
    # create a replicated volume: replica 2 keeps 2 copies across the bricks; force overrides the warnings (run on fs1)
    gluster volume create glusterfsvolumne replica 2 fs1:/data/glusterfs/glustervolume fs2:/data/glusterfs/glustervolume force
    # start the volume so it can be used
    gluster volume start glusterfsvolumne

    Both machines now effectively share glusterfsvolumne

  5. Create a local directory and mount the volume
    # create on each machine
    mkdir /app
    # [cross mount]
    # fs1
    mount -t glusterfs fs2:/glusterfsvolumne /app
    # fs2
    mount -t glusterfs fs1:/glusterfsvolumne /app
    [root@localhost app]# df -h
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root   17G   12G  5.8G   67% /
    devtmpfs                 903M     0  903M    0% /dev
    tmpfs                    920M     0  920M    0% /dev/shm
    tmpfs                    920M   90M  830M   10% /run
    tmpfs                    920M     0  920M    0% /sys/fs/cgroup
    /dev/sda1               1014M  232M  783M   23% /boot
    tmpfs                    184M   12K  184M    1% /run/user/42
    tmpfs                    184M     0  184M    0% /run/user/0
    overlay                   17G   12G  5.8G   67% /data/docker/overlay2/46ed811c8b335a3a59cae93a77133599390c4a6bf2767a690b01b8b2999eb1e3/merged
    shm                       64M     0   64M    0% /data/docker/containers/f7044f3d2b744f97f60a2fd004402300a8f4d1c1494f86dfd0852a89d4626efd/mounts/shm
    fs2:/glusterfsvolumne     17G   12G  5.7G   68% /app
    overlay                   17G   12G  5.8G   67% /data/docker/overlay2/b681972965562fe4f608f0724430906078130a65d3dbe9031cb9ab40ce29698f/merged
    shm                       64M     0   64M    0% /data/docker/containers/d43a7653a61a9a6d6ad89cb178b9567d99b5b0c6976ece90bd7b92f8cc2ebcaf/mounts/shm
    [root@localhost app]# df -h
    Filesystem               Size  Used Avail Use% Mounted on
    /dev/mapper/centos-root   17G  8.2G  8.9G   48% /
    devtmpfs                 903M     0  903M    0% /dev
    tmpfs                    920M     0  920M    0% /dev/shm
    tmpfs                    920M   90M  830M   10% /run
    tmpfs                    920M     0  920M    0% /sys/fs/cgroup
    /dev/sda1               1014M  232M  783M   23% /boot
    tmpfs                    184M  4.0K  184M    1% /run/user/42
    tmpfs                    184M   36K  184M    1% /run/user/0
    overlay                   17G  8.2G  8.9G   48% /data/docker/overlay2/20ae619da7d4578d9571a5ab9598478bce496423254833c110c67641e9f2d817/merged
    shm                       64M     0   64M    0% /data/docker/containers/fc31990633d41fd4bf21a8b0601db1cfb7cf9b2d5920bf1a13cf696e111d91e2/mounts/shm
    fs1:/glusterfsvolumne     17G   12G  5.7G   67% /app

    Create a file on fs1

    and check it from fs2

  6. Deploy containers
    # fs1, fs2
    # the data is shared
    docker run --name some-redis -p 6379:6379 -v /app/data:/data -d  redis

Building your own docker swarm cluster

Setting up the cluster

  1. Prepare three servers
    192.168.103.240 manager1
    192.168.103.226 node1
    192.168.103.227 node2
  2. Initialize the swarm

    # 192.168.103.240 manager1
    docker swarm init
    [root@localhost ~]# docker swarm init
    Swarm initialized: current node (ryi7o7xcww2c9e4j1lotygfbu) is now a manager.
    
    To add a worker to this swarm, run the following command:
    
        docker swarm join --token SWMTKN-1-10bndgdxqph4nqmjn0g4oqse83tdgx9cbb50pcgmf0tn7yhlno-6mako3nf0a0504tiopu9jefxc 192.168.103.240:2377
    
    To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
  3. Join the nodes

    # 192.168.103.226 node1
    # 192.168.103.227 node2
    docker swarm join --token SWMTKN-1-10bndgdxqph4nqmjn0g4oqse83tdgx9cbb50pcgmf0tn7yhlno-6mako3nf0a0504tiopu9jefxc 192.168.103.240:2377

 Key terms

  • manager node 

    Manages the cluster (a manager also does work).
    Dispatches tasks to worker nodes for execution.

  • worker node
    Executes the tasks handed down by the manager.
    Reports task execution status and statistics back to the manager.
  • service: the deployed service
  • task: a container
  • overlay: the network

Basic swarm commands

  • docker swarm 
    docker swarm init
    docker swarm join
    docker swarm join-token
    docker swarm leave
  • docker node 
    docker node demote / promote 
    docker node ls / ps
  • docker service 

    docker service create
    docker service update
    docker service scale
    docker service ls
    docker service ps
    docker service rm
    # create one replica on a random node
    docker service create --name redis redis:3.0.6
    # create a redis instance on every node
    docker service create --mode global --name redis redis:3.0.6
    # create 5 redis replicas on random nodes
    docker service create --name redis --replicas=5 redis:3.0.6
    # create 3 replicas with a port mapping
    docker service create --name my_web --replicas 3 -p 6379:6379 redis:3.0.6
    # update the service, scaling to 5 replicas
    docker service update --replicas=5 redis
    # update the service, scaling to 2 replicas
    docker service scale redis=2
    # remove the service
    docker service rm redis

Customizing the swarm cluster via compose.yml

Official docs: https://docs.docker.com/compose/compose-file/#deploy

All distributed deployments use the deploy key of the compose file for node placement

Deploying to nodes with the compose deploy key

  1. Prepare four servers
    192.168.103.240 manager1
    192.168.103.228 manager2
    192.168.103.226 node1
    192.168.103.227 node2
  2. Write the docker-compose.yml file
    vim /app/docker-compose.yml
    
    version: '3.7'
    services:
      webapp:
        image: nginx
        ports:
          - 80:80
        deploy:
          replicas: 5
  3. Run the yml file
    # unlike docker-compose, this is based on the stack deploy concept
    docker stack deploy -c ./docker-compose.yml nginx
  4. Inspect the stack

    # list all stacks
    docker stack ls
    # list the tasks of the stack named nginx
    docker stack ps nginx

Constraining where stateful containers run

placement:
  constraints:
    - xxxxxx
  1. Use the node's built-in attributes
    https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-constraints---constraint
    node.id / node.hostname / node.role
    node.id Node ID node.id==2ivku8v2gvtg4
    node.hostname Node hostname node.hostname!=node-2
    node.role Node role node.role==manager
    node.labels user defined node labels node.labels.security==high
    engine.labels Docker Engine's labels
  2. Use custom node labels [for greater flexibility]
    node.labels / node.labels.country==china

Pin the 5 tasks to the node1 node

  1. Write the docker-compose.yml file
    vim /app/docker-compose.yml
    
    version: '3.7'
    services:
      webapp:
        image: nginx
        ports:
          - 80:80
        deploy:
          replicas: 5
          placement:
            constraints:
              - node.id == icyia3s2mavepwebkyr0tqxly
  2. Run the yml file
    # remove the stack, redeploy, wait 5 seconds, then show details
    docker stack rm nginx &&  docker stack deploy -c ./docker-compose.yml nginx && sleep 5 && docker stack ps nginx

     

Run the 5 tasks in the east region

  1. Label the nodes
    docker node update --label-add region=east --label-add country=china  0pbg8ynn3wfimr3q631t4b01s
    docker node update --label-add region=west --label-add country=china  icyia3s2mavepwebkyr0tqxly
    docker node update --label-add region=east --label-add country=usa  27vlmifw8bwyc19tpo0tbgt3e
  2. Write the docker-compose.yml file
    vim /app/docker-compose.yml
    
    version: '3.7'
    services:
      webapp:
        image: nginx
        ports:
          - 80:80
        deploy:
          replicas: 5
          placement:
            constraints:
              - node.labels.region == east
  3. Run the yml file

    # remove the stack, redeploy, wait 5 seconds, then show details
    docker stack rm nginx &&  docker stack deploy -c ./docker-compose.yml nginx && sleep 5 && docker stack ps nginx

     

Run the 5 tasks in China's east region

deploy:
  replicas: 5
  placement:
    constraints:
      - node.labels.region == east
      - node.labels.country == china

Even distribution

Currently spread is the only strategy; it distributes tasks evenly across the values of a given node label.

placement:
  preferences:
    - spread: node.labels.zone

Spread 8 tasks evenly across regions

  1. Write the docker-compose.yml file
    vim /app/docker-compose.yml
    
    version: '3.7'
    services:
      webapp:
        image: nginx
        ports:
          - 80:80
        deploy:
          replicas: 8
          placement:
            constraints:
              - node.id != ryi7o7xcww2c9e4j1lotygfbu
            preferences:
              - spread: node.labels.region
  2. Run the yml file

    # remove the stack, redeploy, wait 5 seconds, then show details
    docker stack rm nginx &&  docker stack deploy -c ./docker-compose.yml nginx && sleep 5 && docker stack ps nginx

     

Restart policy

deploy:
  restart_policy:
    condition: on-failure
    delay: 5s
    max_attempts: 3
    window: 120s

The default is any (i.e. always); note the difference from on-failure: with any, a container you stop is restarted anyway, while with on-failure it is not.

version: '3.7'
services:
  webapp:
    image: nginx
    ports:
      - 80:80
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure
        delay: 5s
      placement:
        constraints:
          - node.role == worker

Other properties

endpoint_mode: vip, similar to keepalived [a router protocol]
labels: label metadata
mode: replicated or global
resources: limit the resources available to the service
update_config [the rolling-update policy]

Moving the earlier single-host app into the distributed environment

Update the docker-compose.yml file

version: '3.0'

services:

  webapp:
    image: registry.cn-hangzhou.aliyuncs.com/wyt_registry/wyt_registry
    ports:
      - 80:80
    depends_on:
      - redis
    networks:
      - netapp
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.id == ryi7o7xcww2c9e4j1lotygfbu

  redis:
    image: redis
    networks:
      - netapp
    deploy:
      placement:
        constraints:
          - node.role == worker

  elasticsearch:
    image: elasticsearch:5.6.14
    networks:
      - netapp
    deploy:
      placement:
        constraints:
          - node.role == worker

  kibana:
    image: kibana:5.6.14
    ports:
      - 5601:5601
    networks:
      - netapp
    deploy:
      placement:
        constraints:
          - node.role == worker
networks:
  netapp:

When pulling from a private registry, remember to add this flag, otherwise you will get a "no such image" error.

docker stack deploy -c ./docker-compose.yml nginx --with-registry-auth

New docker features

Cluster-wide file distribution with config

  1. Create the config
    vim /app/nlog.config
    
    <?xml version="1.0" encoding="utf-8" ?>
    <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          autoReload="true"
          internalLogLevel="Warn">
    
        <extensions>
            <add assembly="NLog.Targets.ElasticSearch"/>
        </extensions>
    
        <targets>
            <target name="ElasticSearch" xsi:type="BufferingWrapper" flushTimeout="5000" >
                <target xsi:type="ElasticSearch" uri="http://elasticsearch:9200" documentType="web.app"/>
            </target>
        </targets>
    
        <rules>
            <logger name="*" minlevel="Trace" writeTo="ElasticSearch" />
        </rules>
    </nlog>
    # create a config named nlog
    docker config create nlog /app/nlog.config
  2. Inspect the config's contents; the Data field is base64-encoded by default
    docker config inspect nlog
    
    [
        {
            "ID": "1zwa2o8f71i6zm6ie47ws987n",
            "Version": {
                "Index": 393
            },
            "CreatedAt": "2019-07-11T10:30:58.255006156Z",
            "UpdatedAt": "2019-07-11T10:30:58.255006156Z",
            "Spec": {
                "Name": "nlog",
                "Labels": {},
                "Data": "PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0idXRmLTgiID8+CjxubG9nIHhtbG5zPSJodHRwOi8vd3d3Lm5sb2ctcHJvamVjdC5vcmcvc2NoZW1hcy9OTG9nLnhzZCIKICAgICAgeG1sbnM6eHNpPSJodHRwOi8vd3d3LnczLm9yZy8yMDAxL1hNTFNjaGVtYS1pbnN0YW5jZSIKICAgICAgYXV0b1JlbG9hZD0idHJ1ZSIKICAgICAgaW50ZXJuYWxMb2dMZXZlbD0iV2FybiI+CgogICAgPGV4dGVuc2lvbnM+CiAgICAgICAgPGFkZCBhc3NlbWJseT0iTkxvZy5UYXJnZXRzLkVsYXN0aWNTZWFyY2giLz4KICAgIDwvZXh0ZW5zaW9ucz4KCiAgICA8dGFyZ2V0cz4KICAgICAgICA8dGFyZ2V0IG5hbWU9IkVsYXN0aWNTZWFyY2giIHhzaTp0eXBlPSJCdWZmZXJpbmdXcmFwcGVyIiBmbHVzaFRpbWVvdXQ9IjUwMDAiID4KICAgICAgICAgICAgPHRhcmdldCB4c2k6dHlwZT0iRWxhc3RpY1NlYXJjaCIgdXJpPSJodHRwOi8vZWxhc3RpY3NlYXJjaDo5MjAwIiBkb2N1bWVudFR5cGU9IndlYi5hcHAiLz4KICAgICAgICA8L3RhcmdldD4KICAgIDwvdGFyZ2V0cz4KCiAgICA8cnVsZXM+CiAgICAgICAgPGxvZ2dlciBuYW1lPSIqIiBtaW5sZXZlbD0iVHJhY2UiIHdyaXRlVG89IkVsYXN0aWNTZWFyY2giIC8+CiAgICA8L3J1bGVzPgo8L25sb2c+Cg=="
            }
        }
    ]
    
    
    # Decoded:
    <?xml version="1.0" encoding="utf-8" ?>
    <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          autoReload="true"
          internalLogLevel="Warn">
    
        <extensions>
            <add assembly="NLog.Targets.ElasticSearch"/>
        </extensions>
    
        <targets>
            <target name="ElasticSearch" xsi:type="BufferingWrapper" flushTimeout="5000" >
                <target xsi:type="ElasticSearch" uri="http://elasticsearch:9200" documentType="web.app"/>
            </target>
        </targets>
    
        <rules>
            <logger name="*" minlevel="Trace" writeTo="ElasticSearch" />
        </rules>
    </nlog>
  3. Attach the config to a service; a file named nlog shows up in the container's root directory

    docker service create --name redis --replicas 3 --config nlog redis
    [root@localhost app]# docker ps
    CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS               NAMES
    e5f7b18e8377        redis:latest        "docker-entrypoint.s…"   About a minute ago   Up About a minute   6379/tcp            redis.3.usqs8c5mucee16mokib7143aa
    [root@localhost app]# docker exec -it e5f7b18e8377 bash
    root@e5f7b18e8377:/data# cd /
    root@e5f7b18e8377:/# ls
    bin  boot  data  dev  etc  home  lib  lib64  media  mnt  nlog  opt  proc  root    run  sbin  srv    sys  tmp  usr  var
    root@e5f7b18e8377:/# cd nlog 
    bash: cd: nlog: Not a directory
    root@e5f7b18e8377:/# cat nlog 
    <?xml version="1.0" encoding="utf-8" ?>
    <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          autoReload="true"
          internalLogLevel="Warn">
    
        <extensions>
            <add assembly="NLog.Targets.ElasticSearch"/>
        </extensions>
    
        <targets>
            <target name="ElasticSearch" xsi:type="BufferingWrapper" flushTimeout="5000" >
                <target xsi:type="ElasticSearch" uri="http://elasticsearch:9200" documentType="web.app"/>
            </target>
        </targets>
    
        <rules>
            <logger name="*" minlevel="Trace" writeTo="ElasticSearch" />
        </rules>
    </nlog>
  4. The same setup with docker-compose

    vim /app/docker-compose.yml
    
    version: "3.7"
    services:
      redis:
        image: redis:latest
        deploy:
          replicas: 3
        configs:
          - nlog2
    configs:
      nlog2:
        file: ./nlog.config
  5. Deploy the stack

    docker stack deploy -c docker-compose.yml redis --with-registry-auth
  6. Mount the config to a specific path (here, into the container's /root directory)

    vim /app/docker-compose.yml
    
    version: "3.7"
    services:
      redis:
        image: redis:latest
        deploy:
          replicas: 1
        configs:
          - source: nlog2
            target: /root/nlog2
    configs:
      nlog2:
        file: ./nlog.config
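The `Data` field returned by `docker config inspect` in step 2 is plain base64, so it can be decoded with standard tools. A minimal sketch, using the opening chunk of the payload shown above as a literal:

```shell
# Decode the first chunk of Spec.Data from `docker config inspect nlog`
echo 'PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0idXRmLTgiID8+Cg==' | base64 -d
# → <?xml version="1.0" encoding="utf-8" ?>

# On a live swarm, `docker config inspect --pretty nlog` prints the
# decoded payload directly, without the manual base64 step.
```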

Mounting sensitive data with secret

If you need to mount sensitive configuration into a swarm service, consider using a secret, for example for:

  1. Usernames and passwords
  2. Production database connection strings

Usage is identical to config; secrets are mounted at: /run/secrets/<secret_name>
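As a sketch, the compose file from step 6 can be adapted for a secret. The `db_password` name and source file below are hypothetical; inside the container the value surfaces at /run/secrets/db_password:

```yaml
version: "3.7"
services:
  redis:
    image: redis:latest
    deploy:
      replicas: 1
    secrets:
      - db_password            # mounted at /run/secrets/db_password
secrets:
  db_password:
    file: ./db_password.txt    # hypothetical local file holding the value
```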
