Docker Tricks You May Not Know

Part 1: Operating the host's Docker daemon from inside a container

1. First, a look at the documentation

 

$ docker run -t -i -v /var/run/docker.sock:/var/run/docker.sock -v /path/to/static-docker-binary:/usr/bin/docker busybox sh

 

By bind-mounting the Docker unix socket and a statically linked docker binary (refer to the docs to get the Linux binary), you give the container full access to create and manipulate the host's Docker daemon.

In short: by mounting the host's docker.sock and the docker binary path at docker run time, you grant the container full control over the host's Docker daemon.

See the Docker documentation for details.

2. Hands-on practice

I start a Jenkins container; the goal is to be able to run docker commands inside the Jenkins container (by operating the host's Docker daemon).

 

 
docker run -d -u root \
  -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(which docker):/usr/bin/docker \
  -v /var/jenkins_home:/var/jenkins_home \
  jenkins

Start it with the command above, then enter the Jenkins container (the container ID below is a placeholder; substitute whatever docker ps shows):

docker exec -it <jenkins-container-id> bash

Inside the container, try a few docker commands:

docker version        # check the version

docker pull busybox   # pull an image

Check on the host: the operations performed inside the container actually act on the host's Docker daemon (for example, docker images on the host now shows busybox).

At this point, controlling the host daemon from inside the container works.

3. Troubleshooting: a version issue prevents the container from controlling the host's Docker daemon

Cause: the Docker version installed by default via yum on CentOS 7 is shown below. When I experimented with this version it failed, with two main errors, as follows.

 

 
[root@node85 ~]# docker --version
Docker version 1.12.6, build 96d83a5/1.12.6

 

Error 1:

 
[root@node85 ~]# docker run -t -i -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker busybox sh
/ # docker ps -a
/usr/bin/docker: .: line 2: can't open '/etc/sysconfig/docker'
/ # docker info
/usr/bin/docker: .: line 2: can't open '/etc/sysconfig/docker'
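Error 1 is not really a Docker problem: on CentOS 7 the yum-packaged /usr/bin/docker is not a real binary but a shell wrapper whose second line sources /etc/sysconfig/docker, and that file does not exist inside the busybox container. A minimal sketch reproducing the failure mode without Docker at all (the wrapper path and the sourced filename here are stand-ins, not the real CentOS paths):

```shell
# Build a wrapper script shaped like CentOS 7's /usr/bin/docker: it sources
# a config file before exec'ing the real client. (Paths are illustrative.)
mkdir -p /tmp/docker-demo
cat > /tmp/docker-demo/docker <<'EOF'
#!/bin/sh
. /etc/sysconfig/docker-demo-missing
exec /usr/bin/docker-current "$@"
EOF
chmod +x /tmp/docker-demo/docker

# Where the sourced file is absent (as inside a container), the wrapper
# dies on line 2 before ever reaching the real client:
/tmp/docker-demo/docker ps -a 2>&1 || true
```

This is why the documentation asks for a statically linked docker binary: a static client has no wrapper script and no RPM dependencies to trip over.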

 

Error 2:

 
root@9047b488d698:/# docker info
You don't have either docker-client or docker-client-latest installed. Please install either one and retry.
root@9047b488d698:/# docker ps -a
You don't have either docker-client or docker-client-latest installed. Please install either one and retry.


 

Part 2: Mapping a container's processes onto the host (nginx)

If you start a container with --net=host, the container and the host share the network namespace, which means the container's ports and processes are visible on the host.

For example, I start an nginx container with the host network type:

 

docker run -d --net=host --name mynginx nginx


Then, on the host, ps -ef | grep nginx shows the nginx processes.
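Because the network namespace is shared, the listening socket also appears directly on the host, with no -p port mapping at all. A quick check from the host (output, PIDs, and versions will differ on your machine):

```shell
ps -ef | grep '[n]ginx'        # master and worker processes, visible on the host
ss -lntp | grep ':80'          # port 80 is held by nginx directly
curl -sI http://127.0.0.1/     # served by the container via the host's own port 80
```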

 

 

Part 3: Docker networking

 

 

Default Networks

 

When you install Docker, it creates three networks automatically. You can list these networks using the docker network ls command:

 
$ docker network ls

NETWORK ID          NAME                DRIVER
7fca4eb8c647        bridge              bridge
9f904ee27bf5        none                null
cf03ee007fb4        host                host

1. bridge: Docker's default network mode

The logic is as follows (the original diagram is omitted): containers access the Internet via SNAT (IP masquerade), and external clients access containers via DNAT (published ports).
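Both translations live in the host's iptables nat table. Typical rules look like the following (the subnet, port, and container address are illustrative; run iptables -t nat -S on a Docker host to see the real ones):

```shell
# SNAT: traffic leaving the bridge subnet is masqueraded behind the host's IP
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE

# DNAT: a published port (docker run -p 8080:80 ...) is rewritten to the
# container's address
iptables -t nat -A DOCKER ! -i docker0 -p tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80
```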

 

 

The default bridge network is present on all Docker hosts. If you do not specify a different network, new containers are automatically connected to the default bridge network.

The docker network inspect command returns information about a network:

$ docker network inspect bridge

[
   {
       "Name": "bridge",
       "Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
       "Scope": "local",
       "Driver": "bridge",
       "IPAM": {
           "Driver": "default",
           "Config": [
               {
                   "Subnet": "172.17.0.1/16",
                   "Gateway": "172.17.0.1"
               }
           ]
       },
       "Containers": {},
       "Options": {
           "com.docker.network.bridge.default_bridge": "true",
           "com.docker.network.bridge.enable_icc": "true",
           "com.docker.network.bridge.enable_ip_masquerade": "true",
           "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
           "com.docker.network.bridge.name": "docker0",
           "com.docker.network.driver.mtu": "9001"
       },
       "Labels": {}
   }
]

Run the following two commands to start two busybox containers, which are each connected to the default bridge network.

 
$ docker run -itd --name=container1 busybox
3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c

$ docker run -itd --name=container2 busybox
94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c

Inspect the bridge network again after starting two containers. Both of the busybox containers are connected to the network. Make note of their IP addresses, which will be different on your host machine than in the example below.

$ docker network inspect bridge

[
    {
        "Name": "bridge",
        "Id": "f7ab26d71dbd6f557852c7156ae0574bbf62c42f539b50c8ebde0f728a253b6f",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {
                    "Subnet": "172.17.0.1/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Containers": {
            "3386a527aa08b37ea9232cbcace2d2458d49f44bb05a6b775fba7ddd40d8f92c": {
                "EndpointID": "647c12443e91faf0fd508b6edfe59c30b642abb60dfab890b4bdccee38750bc1",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c": {
                "EndpointID": "b047d090f446ac49747d3c37d63e4307be745876db7f0ceef7b311cbba615f48",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "9001"
        },
        "Labels": {}
    }
]

Containers connected to the default bridge network can communicate with each other by IP address. Docker does not support automatic service discovery on the default bridge network. If you want containers to be able to resolve IP addresses by container name, you should use user-defined networks instead. You can link two containers together using the legacy docker run --link option, but this is not recommended in most cases.
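You can see both behaviors from inside container1 (the IP is taken from the inspect output above; yours will differ, and the error text is what the busybox image prints):

```shell
$ docker attach container1

/ # ping -c 2 172.17.0.3     # by IP: works on the default bridge
/ # ping -c 2 container2     # by name: fails, no DNS on the default bridge
ping: bad address 'container2'
```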

 

2. host: sharing the host's network stack

 

The host network adds a container on the host’s network stack. As far as the network is concerned, there is no isolation between the host machine and the container. For instance, if you run a container that runs a web server on port 80 using host networking, the web server is available on port 80 of the host machine.

 

3. The none mode

The none network adds a container to a container-specific network stack. That container lacks a network interface. Attaching to such a container and looking at its stack you see this:

 

 
$ docker attach nonenetcontainer

root@0cb243cd1293:/# cat /etc/hosts
127.0.0.1   localhost
::1         localhost ip6-localhost ip6-loopback
fe00::0     ip6-localnet
ff00::0     ip6-mcastprefix
ff02::1     ip6-allnodes
ff02::2     ip6-allrouters
root@0cb243cd1293:/# ifconfig
lo    Link encap:Local Loopback
      inet addr:127.0.0.1  Mask:255.0.0.0
      inet6 addr: ::1/128 Scope:Host
      UP LOOPBACK RUNNING  MTU:65536  Metric:1
      RX packets:0 errors:0 dropped:0 overruns:0 frame:0
      TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0
      RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

root@0cb243cd1293:/#

 

 

4. User-defined networks
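A user-defined bridge is also the fix for the name-resolution limitation of the default bridge noted earlier: containers attached to one can reach each other by container name via Docker's embedded DNS. A minimal sketch (the network and container names my_net, web, and client are made up for illustration):

```shell
# Create a user-defined bridge network
docker network create --driver bridge my_net

# Attach two containers to it
docker run -itd --name=web --net=my_net busybox
docker run -itd --name=client --net=my_net busybox

# Name resolution works here, unlike on the default bridge
docker exec client ping -c 2 web
```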
