Docker: building images with a Dockerfile

Docker components

1. docker client: the Docker command-line client.

2. docker server: the main part of the docker daemon; it accepts requests sent from the docker client and routes them in real time according to the matching routing rules.

3. docker image: a Docker image becomes a container when it is run (docker run); containers start quickly and images use a layered storage model.

4. docker registry: the registry is the central storage repository for Docker images (pull/push).

 

Install the latest Docker with yum

[root@docker1 yum.repos.d]# cat docker.repo 
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

 

Install Docker

[root@docker1 ~]# yum -y install docker-engine

Start Docker
[root@docker1 ~]# systemctl start docker.service
[root@docker1 ~]# systemctl enable docker.service

 

Build a Docker image from a Dockerfile, with the base image pulled from docker.io.

Search for the centos image
[root@docker1 ~]#docker search  centos

Pull the centos image
[root@docker1 ~]#docker pull centos

List the downloaded images
[root@docker1 ~]# docker images
REPOSITORY    TAG    IMAGE ID            CREATED             SIZE

centos          latest   67591570dd29      7 weeks ago         192 MB

 

  

[root@docker1 ~]# git clone https://git.oschina.net/dockerf/docker-training.git
[root@docker1 ~]# ls
docker-training git

[root@docker1 ~]# cd docker-training
[root@docker1 docker-training]# ls

centos7  mysql  php-fpm  wordpress    (4 directories)
Build Docker images for centos7, php-fpm, mysql and wordpress.

[root@docker1 centos7]# ls
1.repo Centos-7.repo Dockerfile supervisord.conf

A Dockerfile is a configuration file that automates building a Docker image.

[root@docker1 centos7]# cat Dockerfile 
# a base image is required; centos7.1.1503 would be pulled from Docker Hub
#FROM       centos:centos7.1.1503
FROM       centos:latest

# maintainer
MAINTAINER fengjian <fengjian@senyint.com>
# set a timezone environment variable
ENV TZ "Asia/Shanghai"
# virtual terminal
ENV TERM xterm

# a Dockerfile has two instructions for copying files: 1. COPY  2. ADD. ADD has two extra abilities over COPY:
# the source can be a URL, and a compressed archive is automatically unpacked into the container
ADD Centos-7.repo /etc/yum.repos.d/CentOS-Base.repo
ADD 1.repo /etc/yum.repos.d/epel.repo
RUN yum install -y curl wget tar bzip2 libtool-ltdl-devel unzip vim-enhanced passwd sudo yum-utils hostname net-tools rsync man && \
    yum install -y gcc gcc-c++ git make automake cmake patch logrotate python-devel libpng-devel libjpeg-devel && \
    yum install -y python-pip
#RUN pip install --upgrade pip
# supervisor is a process manager; it can be skipped if the container runs only a single process
RUN pip install -i https://pypi.tuna.tsinghua.edu.cn/simple supervisor
ADD supervisord.conf /etc/supervisord.conf
# /etc/supervisor.conf.d holds the config files of the processes to start
RUN mkdir -p /etc/supervisor.conf.d && \
    mkdir -p /var/log/supervisor
# the container exposes port 22 to the host
EXPOSE 22
# only the last ENTRYPOINT takes effect
ENTRYPOINT ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisord.conf"]
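As a quick illustration of the COPY/ADD difference mentioned in the comment above, a minimal sketch (the file names are hypothetical, not part of this image):

COPY app.conf /etc/app/app.conf                         # COPY only copies local files or directories as-is
ADD https://example.com/tools.tar.gz /tmp/tools.tar.gz  # ADD can fetch a URL (downloaded, not extracted)
ADD vendor.tar.gz /opt/vendor/                          # a local tar archive is automatically unpacked into /opt/vendor/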

 

[root@docker1 centos7]# cat 1.repo 
[bash]
name=centos7
baseurl=http://192.168.20.220/centos7/Packages/
enabled=1
gpgcheck=0

  

[root@docker1 centos7]# cat Centos-7.repo 
# CentOS-Base.repo
#
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client.  You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the 
# remarked out baseurl= line instead.
#
#
 
[base]
name=CentOS-$releasever - Base - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/os/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#released updates 
[updates]
name=CentOS-$releasever - Updates - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/updates/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/extras/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/centosplus/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/centosplus/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
 
#contrib - packages by Centos Users
[contrib]
name=CentOS-$releasever - Contrib - mirrors.aliyun.com
failovermethod=priority
baseurl=http://mirrors.aliyun.com/centos/$releasever/contrib/$basearch/
        http://mirrors.aliyuncs.com/centos/$releasever/contrib/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=contrib
gpgcheck=1
enabled=0
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
[root@docker1 centos7]# cat supervisord.conf 
[unix_http_server]
file=/var/run/supervisor.sock ; (the path to the socket file)
chmod=0700              ; socket file mode (default 0700)

[supervisord]
logfile=/var/log/supervisor/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB
logfile_backups=10
loglevel=info
pidfile=/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=true           ; (Start in foreground if true; default false)
minfds=1024                 ; (min. avail startup file descriptors;default 1024)
minprocs=200                ; (min. avail process descriptors;default 200)

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///var/run/supervisor.sock ; use a unix:// URL  for a unix socket

[include]
files = /etc/supervisor.conf.d/*.conf

 

Pull centos from docker.io and build the image from the Dockerfile:
[root@docker1 centos7]# docker build -t fengjian/centos:7.3  .

Check the resulting image

[root@docker1 centos7]#  docker images

REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
fengjian/centos         7.1                 03a49ca4b7b9        13 days ago         667.8 MB

 

Create a Docker container from the image with the docker run command.

-p (lowercase): map a container port to a specified host port

-P (uppercase): map the container's exposed ports to random host ports
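For comparison, a sketch of -P; the host port is chosen at random by Docker, so the 32768 shown here is only illustrative:

[root@docker1 ~]# docker run -d -P --name base-random fengjian/centos:7.3
[root@docker1 ~]# docker port base-random
22/tcp -> 0.0.0.0:32768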

Create the container

[root@docker1 ~]#docker run -d -p 2222:22 --name base fengjian/centos:7.3

Check the container info

[root@docker1 ~]# docker ps -a
CONTAINER ID        IMAGE                COMMAND                  CREATED              STATUS              PORTS                            NAMES
ebce60d09d31        fengjian/centos:7.1   "/usr/bin/supervisord"   About a minute ago   Up About a minute   2222/tcp, 0.0.0.0:2222->22/tcp   base

 

Enter the container
[root@docker1 ~]# docker exec -it ebce60d09d31 /bin/bash

 

Build the php-fpm image

[root@docker1 php-fpm]# ls
Dockerfile  nginx_default.conf  nginx_nginx.conf  php_www.conf  supervisor_nginx.conf  supervisor_php-fpm.conf
[root@docker1 php-fpm]# vim Dockerfile 


FROM       fengjian/centos:7.3
MAINTAINER fengjian <fengjian@senyint.com>

# Set environment variable
ENV     APP_DIR /app

RUN     yum -y swap -- remove fakesystemd -- install systemd systemd-libs && \
        yum -y install nginx php-cli php-mysql php-pear php-ldap php-mbstring php-soap php-dom php-gd php-xmlrpc php-fpm php-mcrypt && \
        yum clean all

ADD     nginx_nginx.conf /etc/nginx/nginx.conf
ADD     nginx_default.conf /etc/nginx/conf.d/default.conf

ADD     php_www.conf /etc/php-fpm.d/www.conf
RUN     sed -i 's/;cgi.fix_pathinfo=1/cgi.fix_pathinfo=0/' /etc/php.ini

RUN     mkdir -p /app && echo "<?php phpinfo(); ?>" > ${APP_DIR}/info.php

EXPOSE  80 443

ADD     supervisor_nginx.conf /etc/supervisor.conf.d/nginx.conf
ADD     supervisor_php-fpm.conf /etc/supervisor.conf.d/php-fpm.conf

ONBUILD ADD . /app
ONBUILD RUN chown -R nginx:nginx /app

 

[root@docker1 php-fpm]# vim supervisor_nginx.conf 

[program:nginx]
directory=/
command=/usr/sbin/nginx -c /etc/nginx/nginx.conf
user=root
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log


[program:php-fpm]
directory=/
command=/usr/sbin/php-fpm
user=root
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log

 

Build the php-fpm image:

[root@docker1 php-fpm]# docker build -t fengjian/php-fpm:5.4  .

Start a container

[root@docker1 php-fpm]# docker run -d -p 8080:80 --name wesite fengjian/php-fpm:5.4

 

Access port 8080 on the host:

http://192.168.20.209:8080/info.php

 

Build the mysql image

[root@docker1 mysql]# docker build -t fengjian/mysql:5.5 .

### docker run -d -p 3306:3306 -v host_dir:container_dir   (the -v option bind-mounts a host directory into the container) ###

[root@docker1 mysql]# docker run -d -p 3306:3306 -v  /data/mysql/data:/var/lib/mysql  --name dbserver fengjian/mysql:5.5

 

Files in /data/mysql/data on the host:

[root@docker1 data]# ll /data/mysql/data/
total 28700
-rw-rw---- 1 27   27    16384 Jan 19 15:25 aria_log.00000001
-rw-rw---- 1 27   27       52 Jan 19 15:25 aria_log_control
drwx------ 2 27   27       19 Jan 19 15:40 fengjian
-rw-rw---- 1 27   27 18874368 Jan 19 15:25 ibdata1
-rw-rw---- 1 27   27  5242880 Jan 19 15:26 ib_logfile0
-rw-rw---- 1 27   27  5242880 Jan 19 15:25 ib_logfile1
drwx------ 2 27 root     4096 Jan 19 15:25 mysql
srwxrwxrwx 1 27   27        0 Jan 19 15:26 mysql.sock
drwx------ 2 27   27     4096 Jan 19 15:25 performance_schema
drwx------ 2 27 root        6 Jan 19 15:25 test
[root@docker1 data]# 

 

After deleting the container, the data is still there and can be used by a newly created container.
[root@docker1 data]#  docker rm -f dbserver    (delete the container)
[root@docker1 data]#  docker run -d -p 3306:3306 -v /data/mysql/data:/var/lib/mysql --name newmysqldb  fengjian/mysql:5.5

 

 

Build the dynamic wordpress site using the php-fpm image

[root@docker1 wordpress]# ls
Dockerfile  init.sh      readme.html      wp-admin            wp-comments-post.php  wp-content   wp-includes        wp-load.php   wp-mail.php      wp-signup.php     xmlrpc.php
index.php   license.txt  wp-activate.php  wp-blog-header.php  wp-config-sample.php  wp-cron.php  wp-links-opml.php  wp-login.php  wp-settings.php  wp-trackback.php

 

[root@docker1 wordpress]# vim Dockerfile

FROM fengjian/php-fpm:5.4
ADD init.sh /init.sh
ENTRYPOINT ["/init.sh", "/usr/bin/supervisord", "-n", "-c", "/etc/supervisord.conf"]
# /init.sh runs first, then the command that follows; in effect this starts nginx and php-fpm

 

The last two lines of the parent php-fpm image's Dockerfile:

ONBUILD ADD . /app
ONBUILD RUN chown -R nginx:nginx /app

The ONBUILD instructions take effect when the wordpress image is built.

All files in the build context are copied into /app, but the Dockerfile itself serves no purpose there, so create a .dockerignore in the directory to exclude it.

[root@docker1 wordpress]# vim .dockerignore

Dockerfile
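In effect, building the wordpress image with those inherited ONBUILD triggers is roughly equivalent to the following Dockerfile (a sketch for illustration only):

FROM fengjian/php-fpm:5.4
ADD init.sh /init.sh
# the parent's ONBUILD triggers fire here, in the child build:
ADD . /app
RUN chown -R nginx:nginx /app
ENTRYPOINT ["/init.sh", "/usr/bin/supervisord", "-n", "-c", "/etc/supervisord.conf"]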

Build the wordpress image, version 4.2

[root@docker1 wordpress]# docker build -t fengjian/wordpress:4.2  .

 

List the images

[root@docker1 wordpress]# docker images
REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
fengjian/wordpress      4.2                 8591f07cc2e2        15 seconds ago      848.4 MB
fengjian/mysql          5.5                 b54f78aeefb8        21 hours ago        848.2 MB
fengjian/php-fpm        5.4                 fc1856e25486        21 hours ago        810.8 MB
fengjian/centos         7.1                 fbafb1b36c30        21 hours ago        712.8 MB
tomcat                 latest              47bd812c12f6        5 weeks ago         355.2 MB
mysql                  latest              594dc21de8de        5 weeks ago         400.1 MB
centos                 centos7.1.1503      285396d0a019        4 months ago        212.1 MB
kubeguide/tomcat-app   v1                  a29e200a18e9        6 months ago        358.2 MB

 

Start a container; the -e option passes environment variables. WORDPRESS_DB_HOST and the other variables are used by the init.sh script.

[root@docker1 wordpress]# docker run -d -p 80:80 --name wordpress  -e WORDPRESS_DB_HOST=192.168.20.209 -e WORDPRESS_DB_USER=fengjian -e WORDPRESS_DB_PASSWORD=123456 fengjian/wordpress:4.2

 



Access port 80 on the host IP.

 

 

Difference between ENTRYPOINT and CMD

A Docker container runs like a single program: if a Dockerfile contains ten such instructions, only the last one takes effect.

1. ENTRYPOINT ["executable","param1","param2"] (exec form)
2. ENTRYPOINT command param1 param2 (shell form)

docker run -it --entrypoint=<command> overrides the ENTRYPOINT [] defined in the Dockerfile.


CMD usage
1. CMD ["executable","param1","param2"] (exec form, this is the preferred form)
First usage: run an executable and supply its parameters.

2. CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
Second usage: provide default parameters for ENTRYPOINT.

3. CMD command param1 param2 (shell form)
Third usage (shell form): the command is executed via "/bin/sh -c".

Example:
CMD ["/bin/echo","This is test CMD"]
docker run -it --rm fengjian/cmd:0.1 /bin/bash
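A small sketch of how ENTRYPOINT and CMD combine; the image tag fengjian/cmd:0.1 is just the example name reused from above:

FROM centos:latest
ENTRYPOINT ["/bin/echo"]
CMD ["This is test CMD"]

# docker run fengjian/cmd:0.1                             -> prints "This is test CMD"
# docker run fengjian/cmd:0.1 hello                       -> the argument replaces CMD, prints "hello"
# docker run --entrypoint=/bin/ls fengjian/cmd:0.1 -l /   -> overrides ENTRYPOINT entirely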

 

################################################################################################# 

Part 2, Docker in practice: the Registry and continuous integration

 

Build an in-house registry

docker1: 192.168.20.209

registry: 192.168.20.135, port 5000

 

[root@registry ~]# docker search registry

 

Pull the registry image to the local host

[root@registry ~]# docker pull registry
Using default tag: latest
latest: Pulling from library/registry
b7f33cc0b48e: Pull complete 
46730e1e05c9: Pull complete 
458210699647: Pull complete 
0cf045fea0fd: Pull complete 
b78a03aa98b7: Pull complete 
Digest: sha256:0e40793ad06ac099ba63b5a8fae7a83288e64b50fe2eafa2b59741de85fd3b97
Status: Downloaded newer image for registry:latest

 

Images on docker1 (192.168.20.209):

[root@docker1 ~]# docker images
REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
fengjian/wordpress      4.2                 8591f07cc2e2        2 days ago          848.4 MB
fengjian/mysql          5.5                 b54f78aeefb8        3 days ago          848.2 MB
fengjian/php-fpm        5.4                 fc1856e25486        3 days ago          810.8 MB
fengjian/centos         7.1                 fbafb1b36c30        3 days ago          712.8 MB
registry               latest              d1e32b95d8e8        4 days ago          33.17 MB
tomcat                 latest              47bd812c12f6        5 weeks ago         355.2 MB
mysql                  latest              594dc21de8de        5 weeks ago         400.1 MB
centos                 centos7.1.1503      285396d0a019        4 months ago        212.1 MB

 

Images on registry (192.168.20.135):

[root@registry ~]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
registry            latest              d1e32b95d8e8        2 weeks ago         33.2 MB

 

Start a registry container on the registry host

[root@registry ~]# docker run -d -p 5000:5000 -v /opt/registry:/var/lib/registry --restart=always --name registry registry:latest

 

By default the Registry service stores uploaded images under /var/lib/registry inside the container; mounting the host's /opt/registry onto that directory saves the images to /opt/registry on the host.

registry_url: the address of the running registry, here the local registry:5000

namespace: the directory (namespace) under the registry

name: the image name

 

registry_url/namespace/tomcat:v1.0

docker tag gives an image an additional name, similar to an alias.

 

[root@docker1 ~] docker pull fengjian/fengjian
Tag fengjian/fengjian as 192.168.20.135:5000/fengjian/fengjian/fengjian:20170122v1.0
[root@docker1 ~]# docker tag fengjian/fengjian:latest  192.168.20.135:5000/fengjian/fengjian/fengjian:20170122v1.0

List the images

[root@docker1 ~]# docker images
REPOSITORY                                    TAG                 IMAGE ID            CREATED             SIZE
fengjian/wordpress                             4.2                 8591f07cc2e2        2 days ago          848.4 MB
fengjian/mysql                                 5.5                 b54f78aeefb8        3 days ago          848.2 MB
fengjian/php-fpm                               5.4                 fc1856e25486        3 days ago          810.8 MB
fengjian/centos                                7.1                 fbafb1b36c30        3 days ago          712.8 MB
registry                                      latest              d1e32b95d8e8        4 days ago          33.17 MB
tomcat                                        latest              47bd812c12f6        5 weeks ago         355.2 MB
mysql                                         latest              594dc21de8de        5 weeks ago         400.1 MB
centos                                        centos7.1.1503      285396d0a019        4 months ago        212.1 MB
kubeguide/tomcat-app                          v1                  a29e200a18e9        6 months ago        358.2 MB
192.168.20.135:5000/fengjian/fengjian/fengjian   20170122v1.0        3468c34fa83b        13 months ago       97.95 MB
fengjian/fengjian                               latest              3468c34fa83b        13 months ago       97.95 MB


Run docker push to push the tagged image to our private registry

  [root@docker ~]# docker push  192.168.20.135:5000/fengjian/fengjian/fengjian:20170122v1.0



  The push refers to a repository [192.168.20.135:5000/fengjian/nginx20170203]
  Get https://192.168.20.135:5000/v1/_ping: http: server gave HTTP response to HTTPS client

 

The push to the private registry fails because the registry we started is not trusted (no TLS). We need to edit the Docker configuration file /usr/lib/systemd/system/docker.service and add the option below. Note: both the registry and docker1 servers must be changed.
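A sketch of the change, based on the --insecure-registry option referenced later in these notes; the exact ExecStart line depends on the installed docker-engine version, so treat it as an assumption rather than a literal copy:

# /usr/lib/systemd/system/docker.service  (on both docker1 and registry)
[Service]
ExecStart=/usr/bin/dockerd --insecure-registry=192.168.20.135:5000

# then reload systemd and restart docker:
systemctl daemon-reload
systemctl restart docker.service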

 

Push the image to 192.168.20.135

[root@docker overlay]# docker push  192.168.20.135:5000/fengjian/fengjian/fengjian:20170122v1.0


The push refers to a repository [192.168.20.135:5000/fengjian/fengjian/fengjian:20170122v1.0]

23c8d40ebb9e: Pushed
5526182de2ab: Pushed
652f3c2c3f57: Pushed
bf76891beffc: Pushed
f696adb3bd45: Pushed
46db44806cd4: Pushed
2dd577fe2559: Pushed
bbc4847eb1d2: Pushed
747f5baee8ac: Pushed
29003cbb49e1: Pushed
f5d4b5d6f2ff: Pushed
ee745a500b91: Pushed
3383431a5cc0: Pushed
8aabcc6c5e8d: Pushed
967105df7f61: Pushed
0c051da11cb4: Pushed
34e7b85d83e4: Pushed
v1: digest: sha256:4e5d763dfb99ecd95128d1033e14bb4740613045c89bb2646006ac7db08f5a6f size: 3871

 

Check the upload result in a browser.

 

Use docker pull to fetch the 192.168.20.135:5000/fengjian/fengjian/fengjian:20170122v1.0 image from our private registry

[root@docker ~ ]# docker pull 192.168.20.135:5000/fengjian/nginx20170203:v1

v1: Pulling from fengjian/nginx20170203
17385548ba54: Already exists
59da822a5404: Already exists
ec5de50f3658: Already exists
751fb563feef: Already exists
8145f1a2090b: Already exists
575600a5843d: Already exists
035deb98f67f: Already exists
2e1f8c7e36ce: Already exists
3cf27705cd77: Pull complete
d4e37a9633b1: Pull complete
1aab1e953ef2: Pull complete
31afde0ced92: Pull complete
253eadce8153: Pull complete
750606d876c8: Pull complete
f96cc19c204f: Pull complete
eea9946ffb66: Pull complete
da59d6a4a8bd: Pull complete
Digest: sha256:4e5d763dfb99ecd95128d1033e14bb4740613045c89bb2646006ac7db08f5a6f
Status: Downloaded newer image for 192.168.20.135:5000/fengjian/nginx20170203:v1

Start a container on 192.168.20.209

[root@docker ~ ]# docker run -d -p 8081:80 --name nginx 192.168.20.135:5000/fengjian/nginx20170203:v1

[root@docker ~ ]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7539cd87c9bd 192.168.20.135:5000/fengjian/nginx20170203:v1 "/usr/bin/supervis..." 2 minutes ago Up 2 minutes 22/tcp, 0.0.0.0:8081->80/tcp nginx

Log in to the container

[root@docker overlay]# docker exec -it nginx /bin/bash

The nginx container is running normally

[root@7539cd87c9bd nginx-1.11.2]# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 17:21 ? 00:00:00 /usr/bin/python2 /usr/bin/supervisord -n -c /etc/supervisord.conf
root 9 1 0 17:21 ? 00:00:00 nginx: master process /data/nginx/sbin/nginx
nobody 10 9 0 17:21 ? 00:00:00 nginx: worker process
root 88 0 1 17:25 ? 00:00:00 /bin/bash
root 104 1 0 17:25 ? 00:00:00 /data/nginx/sbin/nginx
root 105 88 0 17:25 ? 00:00:00 ps -ef

 

Note: on the registry server the uploaded images do not show up in docker images for now. The stored image data lives under:

[root@registry repositories]# pwd
/data/registry/docker/registry/v2/repositories

 

Docker registry over HTTPS

1. Start the registry container, same as above
[root@registry ~]#
docker run -d -p 5000:5000 -v /opt/registry:/var/lib/registry --restart=always --name registry registry:latest
2. Remove the "--insecure-registry=192.168.20.135:5000" option from /usr/lib/systemd/system/docker.service
[root@registry ~]# systemctl daemon-reload
[root@registry ~]# systemctl restart docker.service

3. Start an nginx container and map port 443
[root@registry ~]#  docker pull nginx    (you can also build your own nginx image)
Run the nginx container
[root@registry ~]#  docker run -d -p 443:443 --name nginx  nginx:latest

4. Modify the nginx config file and add the domain certificate
[root@registry ~]#  docker cp  nginx.conf  nginx:/etc/nginx/nginx.conf 
[root@registry ~]#  docker cp  sslkey  nginx:/etc/nginx/
5. Log in to the container and restart nginx
[root@123131nginx ~]#   /etc/init.d/nginx restart

########################
nginx.conf configuration file

events {
    worker_connections  1024;
}

http {

  upstream docker-registry {
    server 192.168.20.135:5000;
  }

  server {
    listen 443 ssl;
    server_name docker.cinyi.com;

    ssl_certificate /data/nginx/sslkey/cinyi.crt;
    ssl_certificate_key /data/nginx/sslkey/cinyi.key;

    client_max_body_size 0;

    chunked_transfer_encoding on;

    location / {
      proxy_pass                          http://docker-registry;
      proxy_set_header  Host              $http_host;
      proxy_set_header  X-Real-IP         $remote_addr;
      proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
      proxy_set_header  X-Forwarded-Proto $scheme;
      proxy_read_timeout                  900;
    }
  }
}


6. On 192.168.20.209, list the images, tag the image, and push it to the registry

[root@docker1 ~]# docker images
[root@docker1 ~]# docker tag senyint/im-web docker.cinyi.com:443/senyint/im-web:443
[root@docker1 ~]# docker push  docker.cinyi.com:443/senyint/im-web:443
 

 

 

List the repositories stored in the registry

Using the Registry V2 API, all repositories can be listed:

curl http://<private registry address>/v2/_catalog

For example:
[root@docker225 ~]# curl https://docker.cinyi.com/v2/_catalog
{"repositories":["fengjian/nginx20170203","mysql20170203","senyint/centos7.3","senyint/im-web","senyint/nginx"]}



 

List the tags of an image stored in the registry

Using the Registry V2 API, the tags of an image can be listed:

curl -X GET <protocol>://<registry_host>/v2/<image name>/tags/list

For example:
[root@docker225 ~]# curl -X GET https://docker.cinyi.com/v2/senyint/im-web/tags/list
{"name":"senyint/im-web","tags":["latest","443"]}

 

Delete an image from the docker.cinyi.com registry

1. When starting the registry, add delete (enabled: true) under storage in the config file to allow image deletion
[root@registry ~]# tail -f /etc/docker/registry/config.yml    (after modifying the yml inside the container it kept reporting errors; to be dealt with later)
storage:
  delete:
    enabled: true

2. Get the digest_hash
curl --header "Accept: application/vnd.docker.distribution.manifest.v2+json" -I -X GET https://docker.cinyi.com/v2/senyint/nginx/manifests/latest
HTTP/1.1 200 OK
Server: nginx/1.11.2
Date: Wed, 15 Feb 2017 01:24:56 GMT
Content-Type: application/vnd.docker.distribution.manifest.v2+json
Content-Length: 3669
Connection: keep-alive
Docker-Content-Digest: sha256:609a595020f0827301064ebc07b3ec3a5751641ef975a7a186518cf6b0d70f63
Docker-Distribution-Api-Version: registry/2.0
Etag: "sha256:609a595020f0827301064ebc07b3ec3a5751641ef975a7a186518cf6b0d70f63"
X-Content-Type-Options: nosniff

3. Copy the digest_hash from the Docker-Content-Digest: <digest_hash> header
Docker-Content-Digest: sha256:609a595020f0827301064ebc07b3ec3a5751641ef975a7a186518cf6b0d70f63
4. Delete the image from the registry
curl -I -X DELETE <protocol>://<registry_host>/v2/<repo_name>/manifests/<digest_hash>
[root@docker225 ~]# curl -I  -X DELETE https://docker.cinyi.com/v2/senyint/im-web/manifests/sha256:609a595020f0827301064ebc07b3ec3a5751641ef975a7a186518cf6b0d70f63


Alternative method

1. Go into the image storage directory and delete the image folder

  [root@registry repositories]# docker exec registry rm -rf senyint

 2. Run the garbage-collection operation
  [root@registry repositories]# docker exec registry /bin/registry garbage-collect /etc/docker/registry/config.yml

  3. Restart the container

  [root@registry repositories]#  docker restart registry

 

 

Install docker-compose, an orchestration tool that starts multiple containers at once.

# upgrade the installed pip first
pip install --upgrade pip

Install docker-compose
pip install docker-compose

Running docker-compose then fails with:
pkg_resources.DistributionNotFound: backports.ssl-match-hostname>=3.5

Upgrade backports.ssl_match_hostname with pip:
pip install --upgrade backports.ssl_match_hostname
After backports.ssl_match_hostname is upgraded to 3.5 the problem is solved.

 

 

 

[root@docker1 certs]# docker-compose up
ERROR:
Can't find a suitable configuration file in this directory or any
parent. Are you in the right directory?

 

Supported filenames: docker-compose.yml, docker-compose.yaml

 

Write the docker-compose.yml file

 

[root@docker1 second]# vim docker-compose.yml

mysql:
  image: fengjian/mysql:5.5
  ports:
    - "3306:3306"
  volumes:
    - /var/lib/docker/vfs/dir/dataxc:/var/lib/mysql
  hostname: mydb.server.com

tomcat:
  image: tomcat
  ports:
    - "8080:8080"
  links:
    - mysql:db
  environment:
    - TOMCAT_USER=admin
    - TOMCAT_PASS=admin
  hostname: tomcat.server.com

 

Start in the background, from the directory containing docker-compose.yml

[root@docker1 second]# docker-compose up -d

Check the started containers

[root@docker1 second]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5b844baf351e tomcat "catalina.sh run" 6 minutes ago Up 6 minutes 0.0.0.0:8080->8080/tcp second_tomcat_1
f88ccf720119 fengjian/mysql:5.5 "/scripts/start" 6 minutes ago Up 6 minutes 22/tcp, 0.0.0.0:3306->3306/tcp second_mysql_1

Stop the two containers

[root@docker1 second]# docker-compose stop

Use docker-compose ps to see which containers were started by docker-compose.

[root@docker1 second]# docker-compose ps
Name Command State Ports
----------------------------------------------------
second_mysql_1 /scripts/start Exit 137
second_tomcat_1 catalina.sh run Exit 143

Remove the two containers created by docker-compose.

[root@docker1 second]# docker-compose rm
Name Command State Ports
----------------------------------------------------
second_mysql_1 /scripts/start Exit 137
second_tomcat_1 catalina.sh run Exit 143

 

 

 

Automated builds through the Docker registry (Jenkins)

Build a jenkins image.
Build a maven image:

[root@docker /]# mkdir maven-tar

[root@docker /]# cd maven-tar/

[root@docker maven-tar]# wget http://mirror.bit.edu.cn/apache/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz

 

Docker's greatest strength is deployment; Jenkins' greatest strengths are job scheduling and its plugin system. How do we combine the two?

 

Create a jenkins image

apache-maven-3.3.9-bin.tar.gz  Dockerfile  jdk.tar.gz  jenkins.war  rc.local  settings.xml  supervisor_tomcat.conf  tomcat

The Dockerfile:
FROM       centos7.3:20170204
MAINTAINER fengjian <fengjian@senyint.com>


# Install maven
ADD apache-maven-3.3.9-bin.tar.gz /data/
ADD jdk.tar.gz  /data/
COPY tomcat /data/tomcat
COPY jenkins.war /data/tomcat/webapps/

COPY settings.xml /data/maven/conf/settings.xml
ADD  supervisor_tomcat.conf /etc/supervisor.conf.d/tomcat.conf

 

supervisor configuration file for starting tomcat
[root@docker maven-tar]# vim supervisor_tomcat.conf 

[program:tomcat]
directory=/
command=/data/tomcat/bin/catalina.sh start
user=root
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log

 

tomcat startup file: vim /data/tomcat/bin/catalina.sh, with the following added:

export JENKINS_HOME="/data/jenkins_home"
export JAVA_HOME=/data/jdk
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib

# OS specific support.  $var _must_ be set to either true or false.

 

Build the jenkins image and start it

[root@docker ~]# docker build -t jenkins .
[root@docker ~]# docker run -d -p 8080:80 --name jenkins jenkins

 

 

 

 

Commit the container as an image

1. Stop jenkins
[root@docker ~ ]# docker stop jenkins

root@docker maven-tar]# docker ps -a
CONTAINER ID        IMAGE      COMMAND                   CREATED             STATUS                   PORTS                          NAMES
9174cf36cdfc        jenkins     "/usr/bin/supervis..."   2 hours ago         Up About an hour         22/tcp, 0.0.0.0:8080->80/tcp   jenkins


2. Commit the container as an image
[root@docker ~ ]# docker  commit   9174cf36cdfc   jenkins20170204

3. Check the new jenkins image
[root@docker ~] docker images

  REPOSITORY       TAG     IMAGE ID     CREATED       SIZE
  jenkins20170204    latest    5254a69cb614    41 seconds ago    1.62 GB



 

The jenkins image already contains the docker client command-line tool /usr/bin/docker, so we only need to pass the DOCKER_HOST environment variable or map the docker.sock file into the jenkins container; the container then has full control over Docker, which ties the two together.
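For the DOCKER_HOST variant, a minimal sketch; it assumes the host daemon has been configured to listen on a TCP socket (e.g. -H tcp://0.0.0.0:2375), which is not part of this walkthrough:

docker run -d -p 8080:80 -e DOCKER_HOST=tcp://192.168.20.209:2375 --name jenkins20170204 jenkins20170204:latest
# inside the container, the docker client then talks to the host daemon over TCP instead of the local socket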

Otherwise, start a new jenkins container and map /usr/bin/docker and /var/run/docker.sock into it:

docker run -d -p 8080:80 -v /usr/bin/docker:/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock --name jenkins20170204 jenkins20170204:latest

Log in to the jenkins container and check whether the jenkins process is running

[root@docker ~]# docker exec -it jenkins20170204 /bin/bash

  

 

Test whether docker works inside the container


Fix:

yum install libtool-ltdl-devel

Run it again and check the running containers

 

List the images

 


Open a browser and, in Jenkins, pull build-nginx from git to the Jenkins workspace and build it into an image.

 

Note: $WORKSPACE is the path git checks the project out to, e.g. build-nginx.

 

 


Start the build

 

 

 

 

The image has been built.

 

Java project workflow

        1. Create a new project in Jenkins

        2. Clone the project from the git repository to the local workspace

        3. Build it into a Docker image

        4. Push the image to the registry server

        5. The client pulls the image from the registry and starts a container.

        6. Test and development environments receive the application as images. (A minimal sketch of steps 3-5 follows this list.)
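A minimal sketch of steps 3-5, reusing the registry address and image name from later in these notes purely for illustration:

docker build -t docker.cinyi.com:443/senyint/im-web /data/docker_project/im-web/     # step 3: build the image
docker push docker.cinyi.com:443/senyint/im-web                                      # step 4: push it to the registry
docker pull docker.cinyi.com:443/senyint/im-web && docker run -d docker.cinyi.com:443/senyint/im-web   # step 5: on the client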

 

 Three layers: 1. base image

           2. middleware image

           3. application image

 

To build the Docker image for a Java project, first compile the Java code with maven and build that into an image, then use docker cp to copy the war package out and into the middleware image.

FROM       centos7.3:20170204
MAINTAINER fengjian <fengjian@senyint.com>


# Install maven
ADD apache-maven-3.3.9-bin.tar.gz /data/
ADD jdk.tar.gz  /data
#COPY tomcat /data/tomcat
#COPY jenkins.war /data/tomcat/webapps/

COPY  apache-maven-3.3.9   /data/maven
COPY settings.xml /data/maven/conf/settings.xml
CMD  ["source /etc/profile"]
#ADD  supervisor_tomcat.conf /etc/supervisor.conf.d/tomcat.conf

ADD hello /hello
RUN cd /hello && \
    /data/maven/bin/mvn install package

 

1. Build the maven image; the Java code is compiled by mvn install package during the build
[root@docker ~]# docker build -t senyint/maven:v1 .

2. Create a maven container, but do not start it
[root@docker ~]# docker create --name maven senyint/maven:v1

3. Copy hello.war out of the maven container
[root@docker ~]# docker cp  maven:/hello/target/hello.war  .
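From here, a minimal sketch of dropping the copied war into the middleware (tomcat) image to produce the application image; the image names and webapps path follow the conventions used elsewhere in these notes and are otherwise assumptions:

# Dockerfile for the application image
FROM senyint/tomcat:v1
COPY hello.war /data/tomcat/webapps/

# build and run it
docker build -t senyint/hello:v1 .
docker run -d -p 8081:80 --name hello senyint/hello:v1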

 

 

 
#########################################################################################################################################################################################################
My own summary

 Three layers: 1. base image           centos:7.3      supervisor

                2. middleware image     java  maven    tomcat

                3. application image    Java project war package (jenkins, ...)



1. centos7.3 base image

Dockerfile for the centos7.3 base image

FROM centos:latest
MAINTAINER fengjian <fengjian@senyint.com>
ENV TZ "Asia/Shanghai"
ENV TERM xterm
ADD 1.repo /etc/yum.repos.d/1.repo
ADD aliyun-mirror.repo /etc/yum.repos.d/CentOS-Base.repo
ADD aliyun-epel.repo /etc/yum.repos.d/epel.repo
RUN yum install -y curl openssl* wget libtool-ltdl-devel tar bzip2 unzip passwd sudo yum-utils hostname net-tools && \
    yum install -y gcc gcc-c++ git make automake cmake patch logrotate python-devel libpng-devel libjpeg-devel && \
    yum install -y --enablerepo=epel pwgen python-pip && \
    yum clean all
RUN pip install -i https://pypi.tuna.tsinghua.edu.cn/simple supervisor
ADD supervisord.conf /etc/supervisord.conf
RUN mkdir -p /etc/supervisor.conf.d && \
    mkdir -p /var/log/supervisor
ENTRYPOINT ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisord.conf"]
The supervisord.conf configuration file

[root@docker centos7]# cat supervisord.conf 
[unix_http_server]
file=/var/run/supervisor.sock ; (the path to the socket file)
chmod=0700              ; socket file mode (default 0700)

[supervisord]
logfile=/var/log/supervisor/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB
logfile_backups=10
loglevel=info
pidfile=/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=true           ; (Start in foreground if true; default false)
minfds=1024                 ; (min. avail startup file descriptors;default 1024)
minprocs=200                ; (min. avail process descriptors;default 200)

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///var/run/supervisor.sock ; use a unix:// URL  for a unix socket

[include]
files = /etc/supervisor.conf.d/*.conf

 

2. Middleware image

[root@docker jdk]# ls
Dockerfile jdk.tar.gz  maven.tar.gz  profile  supervisor_tomcat.conf  tomcat.tar.gz

[root@docker jdk]# vim profile    (append the environment variables at the bottom)

export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$JAVA_HOME:$PATH

MAVEN_HOME=/data/maven
export MAVEN_HOME
export PATH=${PATH}:${MAVEN_HOME}/bin
[root@docker jdk]# vim supervisor_tomcat.conf    (used to start tomcat)

[program:tomcat]
directory=/
command=/data/tomcat/bin/catalina.sh start
user=root
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
[root@docker jdk]# vim Dockerfile

FROM senyint/centos7.3
MAINTAINER fengjian <fengjian@senyint.com.com>

ENV JAVA_HOME /data/jdk
ENV JRE_HOME ${JAVA_HOME}/jre
ENV CLASSPATH .:${JAVA_HOME}/lib:${JRE_HOME}/lib

ENV MAVEN_HOME /data/maven
ENV PATH ${PATH}:${MAVEN_HOME}/bin:$JAVA_HOME/bin:$JRE_HOME/bin:$JAVA_HOME:$PATH

RUN mkdir -p /data/webserver
ADD maven.tar.gz /data
ADD jdk.tar.gz /data
ADD tomcat.tar.gz /data
ADD profile /etc
#ADD env.sh /etc/profile.d/

ADD supervisord.conf /etc/supervisord.conf
ADD supervisor_tomcat.conf /etc/supervisor.conf.d/tomcat.conf

RUN mkdir -p /etc/supervisor.conf.d && \
mkdir -p /var/log/supervisor

ENTRYPOINT ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisord.conf"]

 
 

FROM docker.cinyi.com:443/centos7.3

# maintainer
MAINTAINER fengjian <fengjian@senyint.com>
# set a timezone environment variable
ENV TZ "Asia/Shanghai"
# virtual terminal
ENV TERM xterm

ENV JAVA_HOME /data/jdk
ENV JRE_HOME ${JAVA_HOME}/jre
ENV CLASSPATH .:${JAVA_HOME}/lib:${JRE_HOME}/lib

ENV MAVEN_HOME /data/maven
ENV PATH ${PATH}:${MAVEN_HOME}/bin:$JAVA_HOME/bin:$JRE_HOME/bin:$JAVA_HOME:$PATH

RUN mkdir -p /data/webserver
ADD jdk.tar.gz /data
ADD tomcat.tar.gz /data
Add host.sh /data

ADD profile /etc
RUN chmod +x /data/host.sh ; /data/host.sh


EXPOSE 80

ENTRYPOINT ["/data/tomcat/bin/catalina.sh", "run" ]

 

 

[root@docker jdk]# docker build -t senyint/tomcat:v1 .

Start a container and test the java environment variables
[root@docker jdk]# docker run -d -p 11112:80 --name tomcat1 senyint/tomcat:v1
[root@docker jdk]# docker exec -it tomcat1 /bin/bash
[root@docker jdk]# java -version     (shows the java version)
[root@docker jdk]# mvn -version      (shows the maven version)

 

 

 

3. Build the applications

(1) Build jenkins

The jenkins Dockerfile
[root@docker jenkins]# vim Dockerfile 

 
 

  FROM senyint/java1.8:latest

  MAINTAINER fengjian <fengjian@senyint.com.com>

  ENV JENKINS_HOME /data/jenkins_home

  ADD profile /etc/

  ADD jenkins.war /data/webserver/

  RUN unzip /data/webserver/jenkins.war -d /data/webserver && \
  rm /data/webserver/jenkins.war

  VOLUME /data/jenkins_home


Build the jenkins image
[root@docker jenkins]# docker build -t senyint/jenkins .

Start jenkins and log in to the container
[root@docker jenkins]# docker run -d -p 11111:80 -v /usr/bin/docker:/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock -v /docker_project:/docker_project --name jenkins  senyint/jenkins

-v /docker_project:/docker_project maps the /docker_project directory into the jenkins container; after the jenkins container has compiled the war package, it is copied with cp into /docker_project/<java project directory>/, and the Java project image is then built from the senyint/tomcat image.
[root@docker jenkins]# docker exec -it jenkins /bin/bash

From here the docker commands can be used inside the container.

 

 

Below is the Jenkins configuration: Docker compiles the im-web project, builds the im-web image and pushes it to the registry.

Pull the Java project code from the git repository.

 

 

 

 

The shell build step used during the build:

 

 

 

 

registry="docker.cinyi.com:443"
# extract the project directory name
javadir=`echo $WORKSPACE | awk -F'/' '{print $5}'`
# extract the war package name
javaname=`ls $WORKSPACE/target/*war | awk -F'/' '{print $7}' | cut -d . -f 1`

mkdir -p /data/docker_project/$javadir
rm /data/docker_project/$javadir/$javaname.war -rf
mv $WORKSPACE/target/$javaname.war /data/docker_project/$javadir

# /data/docker_project contains a Dockerfile template; substitute the war name into it to produce this project's Dockerfile
sed "s/jenkins/$javaname/g" /data/docker_project/Dockerfile >/data/docker_project/$javadir/Dockerfile

if docker images | grep $javaname ; then
docker rmi -f docker.cinyi.com:443/senyint/$javaname
fi

docker build -t docker.cinyi.com:443/senyint/$javaname /data/docker_project/$javadir/
docker push docker.cinyi.com:443/senyint/$javaname


# define the namespace as test:

k8s_apicurl="curl --cacert /root/ca.pem"
k8s_url="https://192.168.20.227:6443"

# create the namespace
if ! `$k8s_apicurl -H "Authorization: Bearer 199e9c8d4ce99c61" -X GET $k8s_url/api/v1/namespaces | grep test >/dev/null` ;then
$k8s_apicurl -H "Authorization: Bearer 199e9c8d4ce99c61" -H "content-Type: application/yaml" -X POST $k8s_url/api/v1/namespaces -d "$(cat /root/namespaces.yaml)"
fi

 

# create the service
if ! `$k8s_apicurl -H "Authorization: Bearer 199e9c8d4ce99c61" -X GET $k8s_url/api/v1/namespaces/test/services | grep "im-web" >/dev/null` ; then
$k8s_apicurl -H "Authorization: Bearer 199e9c8d4ce99c61" -H "content-Type: application/yaml" -X POST $k8s_url/api/v1/namespaces/test/services -d "$(cat /root/im-web_service.yaml)"
fi

# create the deployment
if ! `$k8s_apicurl -H "Authorization: Bearer 199e9c8d4ce99c61" -X GET $k8s_url/apis/extensions/v1beta1/namespaces/test/deployments | grep "im-web" >/dev/null` ; then
$k8s_apicurl -H "Authorization: Bearer 199e9c8d4ce99c61" -H "content-Type: application/yaml" -X POST $k8s_url/apis/extensions/v1beta1/namespaces/test/deployments/ -d "$(cat /root/im-web_deployment.yaml)"
fi

 

Mount the docker binary and docker.sock directly into the container so that it can build and package images.

docker run -d -p 80:80 --restart=always -v /usr/bin/docker:/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock -v /data/docker_project:/data/docker_project  -v /data/jenkins_home:/data/jenkins_home -v /etc/sysconfig/docker:/etc/sysconfig/docker  senyint/jenkins

 

Log in to the container; a docker-client error appears (the docker version is 1.12.6)

[root@6882772021f0 /]# docker ps -a
You don't have either docker-client or docker-client-latest installed. Please install either one and retry.

 

Version 1.13 does not seem to have this problem; with 1.12.6, install docker-client inside the container with yum:

[root@6882772021f0 /]# yum -y install docker-client

 

###################################################################################

Docker resource isolation uses Linux container (LXC) technology, mainly kernel namespaces.

Kernel namespaces (resource isolation) are divided into:
1. PID: pid isolation; a container has its own process table and its own PID 1
2. net: network isolation; a container has its own network stack
3. ipc: inter-process communication; IPC carries extra information to identify the namespace a process belongs to
4. mnt: similar to chroot; every container has its own mount table
5. uts: lets a container have its own hostname and domain name
6. user: a container can have its own users and groups   (see the sketch after this list)
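A quick way to see these namespaces from the host (the image name is one built earlier in these notes; output abbreviated): every process's namespaces appear as symlinks under /proc/<pid>/ns, and the entries of a containerized process differ from the host's, which is what gives the container its own process table, network stack, hostname and so on (user stays shared unless user namespaces are enabled).

[root@docker ~]# docker run -d --name ns-demo centos7.3:20170204
[root@docker ~]# docker exec ns-demo ls -l /proc/1/ns     # ipc, mnt, net, pid, uts, user links
[root@docker ~]# ls -l /proc/1/ns                         # compare with the host's namespaces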


Docker network modes

1. NAT (network address translation)

2. Host 
Create a container on the host network, sharing the host's network stack
[root@docker data]# docker run -d --name centos7-host --net=host centos7.3:20170204


3. Other container

Containers that talk to each other very frequently can use this mode. Characteristics of the container network mode:

1. Isolated from the host network namespace

2. Containers share a single network namespace

3. Suitable when containers communicate with each other heavily.

[root@docker data]# docker run -d --name centos7-nat centos7.3:20170204   (NAT mode)
[root@docker data]# docker run -d --name centos-container --net=container:centos7-nat  centos7.3:20170204

The centos-container container has the same IP address as centos7-nat.

 

 


4. none: the container gets no network configuration; you can configure it yourself.
[root@docker data]# docker run -d --name centos-none --net=none  centos7.3:20170204

After logging in to the container there is no eth0.

 

5. Overlay

 

 

Characteristics of an overlay network:

1. Cross-host communication

2. No port mapping to manage

3. No need to worry about IP conflicts

 

About Consul

Consul provides a service-discovery and configuration solution for distributed systems. It is implemented in Go and its source is published on GitHub. Consul also includes an implementation of a distributed consensus protocol, health checking and a management UI.

 

Consul Agent: server and client modes

 

The consul agent command runs as a long-lived daemon on every node of a consul cluster, in either server or client mode, and exposes HTTP and DNS interfaces;

it is responsible for running checks and keeping services in sync. Agents in server mode maintain the cluster state, answer RPC queries, and exchange WAN gossip with other data centres. Client nodes are

relatively stateless; their only activity is forwarding requests to the server nodes, which keeps latency and resource consumption low.
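A minimal sketch of running an agent in each mode with Docker; the official consul image and these flags are assumptions, not part of the setup described in these notes:

# server-mode agent with the web UI
docker run -d --name consul-server -p 8500:8500 consul agent -server -bootstrap-expect=1 -ui -client=0.0.0.0

# client-mode agent that joins the server
docker run -d --name consul-client consul agent -join=<consul-server-ip>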

 

 

 

 

 

Open vSwitch configuration for Docker

Test environment:  ens32: 192.168.20.209   docker0: 10.0.1.1/24
                   ens32: 192.168.20.135   docker0: 10.0.2.1/24
                   ens32: 192.168.20.223   docker0: 10.0.3.1/24
                   ens32: 192.168.20.224   docker0: 10.0.4.1/24

 

On the 192.168.20.209 server
[root@docker ~]# rpm -ivh openvswitch-2.5.0-2.el7.x86_64.rpm
warning: openvswitch-2.5.0-2.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID fac8d3c0: NOKEY
error: Failed dependencies:
libatomic.so.1()(64bit) is needed by openvswitch-2.5.0-2.el7.x86_64

[root@docker ~]# yum -y install libatomic


[root@docker ~]# systemctl start openvswitch.service

[root@docker ~]# systemctl status openvswitch.service
openvswitch.service - Open vSwitch
Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; disabled; vendor preset: disabled)
Active: active (exited) since Sat 2017-02-11 10:00:11 CST; 17s ago
Process: 3854 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 3854 (code=exited, status=0/SUCCESS)

Feb 11 10:00:11 docker systemd[1]: Starting Open vSwitch...
Feb 11 10:00:11 docker systemd[1]: Started Open vSwitch.

[root@docker ~]# yum -y install bridge-utils


[root@docker ~]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242fabe521c no

# create a new bridge
[root@docker ~]# ovs-vsctl add-br br0

# add a GRE tunnel port to the bridge
[root@docker ~]# ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre option:remote_ip=192.168.20.135


The configuration now includes the following:
[root@docker ~]# ovs-vsctl show
6fde4aed-708a-4ecc-882a-a415b3b3ac3d
Bridge "br0"
Port "br0"
Interface "br0"
type: internal
Port "gre0"
Interface "gre0"
type: gre
options: {remote_ip="192.168.20.135"}
ovs_version: "2.5.0"


#[root@docker ~]# ovs-vsctl del-br br0


# attach br0 to the local docker0 so that container traffic goes out through the Open vSwitch tunnel
[root@docker ~]# brctl addif docker0 br0
[root@docker ~]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242fabe521c no br0


[root@docker ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:50:56:84:42:8d brd ff:ff:ff:ff:ff:ff
inet 192.168.20.209/24 brd 192.168.20.255 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe84:428d/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:fa:be:52:1c brd ff:ff:ff:ff:ff:ff
inet 10.0.1.1/24 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:faff:febe:521c/64 scope link
valid_lft forever preferred_lft forever
422: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether c6:82:46:70:bc:d1 brd ff:ff:ff:ff:ff:ff
424: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN
link/gre 0.0.0.0 brd 0.0.0.0
425: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN qlen 1000
link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
426: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master docker0 state DOWN
link/ether 8e:63:55:ec:3b:41 brd ff:ff:ff:ff:ff:ff

# bring up the docker0 and br0 interfaces
[root@docker ~]# ip link set dev br0 up
[root@docker ~]# ip link set dev docker0 up

Add a route so that all traffic to 10.0.0.0/8 leaves through docker0
[root@docker ~]# ip route add 10.0.0.0/8 dev docker0

Start a container
[root@docker ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
centos7.3 20170204 1d6f132807d0 6 days ago 530 MB

[root@docker ~]# docker run -d --name 209test centos7.3:20170204
Log in to the container
[root@docker ~]# docker exec -it 209test /bin/bash
Check the IP address
[root@464241f535e2 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.1.2 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::42:aff:fe00:102 prefixlen 64 scopeid 0x20<link>
ether 02:42:0a:00:01:02 txqueuelen 0 (Ethernet)
RX packets 720 bytes 68496 (66.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 693 bytes 65706 (64.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

# ping the IP address of a container on the 192.168.20.135 server
[root@464241f535e2 /]# ping 10.0.2.2

PING 10.0.2.2 (10.0.2.2) 56(84) bytes of data.
64 bytes from 10.0.2.2: icmp_seq=1 ttl=63 time=0.451 ms
64 bytes from 10.0.2.2: icmp_seq=2 ttl=63 time=0.493 ms
From 10.0.1.1 icmp_seq=3 Redirect Host(New nexthop: 10.0.2.2)
From 10.0.1.1: icmp_seq=3 Redirect Host(New nexthop: 10.0.2.2)

The redirect messages above appear at first; after a short while the ping works normally.


##############################################################################################


On the 192.168.20.135 server (the same steps are repeated on 223 and 224 below)
[root@registry ~]# rpm -ivh openvswitch-2.5.0-2.el7.x86_64.rpm
warning: openvswitch-2.5.0-2.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID fac8d3c0: NOKEY
error: Failed dependencies:
libatomic.so.1()(64bit) is needed by openvswitch-2.5.0-2.el7.x86_64

[root@registry ~]# yum -y install libatomic


[root@registry ~]# systemctl start openvswitch.service

[root@registry ~]# systemctl status openvswitch.service
● openvswitch.service - Open vSwitch
Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; disabled; vendor preset: disabled)
Active: active (exited) since Sat 2017-02-11 10:00:11 CST; 17s ago
Process: 3854 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 3854 (code=exited, status=0/SUCCESS)

Feb 11 10:00:11 registry systemd[1]: Starting Open vSwitch...
Feb 11 10:00:11 registry systemd[1]: Started Open vSwitch.

[root@registry ~]# yum -y install bridge-utils


[root@registry ~]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242fabe521c no

# create a new bridge
[root@registry ~]# ovs-vsctl add-br br0

# add a GRE tunnel port to the bridge
[root@registry ~]# ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre option:remote_ip=192.168.20.209


The configuration now includes the following; note the ovs-vsctl show output differs from the one on 192.168.20.209:
[root@registry ~]# ovs-vsctl show
19baf011-40aa-426c-a2b9-568101390834
Bridge "br0"
Port "gre0"
Interface "gre0"
type: gre
options: {remote_ip="192.168.20.209"}
Port "br0"
Interface "br0"
type: internal
ovs_version: "2.5.0"

#### to delete br0: [root@registry ~]# ovs-vsctl del-br br0


# attach br0 to the local docker0 so that container traffic goes out through the Open vSwitch tunnel
[root@registry ~]# brctl addif docker0 br0
[root@registry ~]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242fabe521c no br0


[root@registry ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:50:56:84:2b:fc brd ff:ff:ff:ff:ff:ff
inet 192.168.20.135/24 brd 192.168.20.255 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe84:2bfc/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:5e:5d:06:3f brd ff:ff:ff:ff:ff:ff
inet 10.0.2.1/24 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:5eff:fe5d:63f/64 scope link
valid_lft forever preferred_lft forever
28: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether f2:fd:f4:39:e2:20 brd ff:ff:ff:ff:ff:ff
29: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state DOWN
link/ether 0a:29:1e:93:37:41 brd ff:ff:ff:ff:ff:ff
inet6 fe80::829:1eff:fe93:3741/64 scope link
valid_lft forever preferred_lft forever


# bring up the docker0 and br0 interfaces
[root@registry ~]# ip link set dev br0 up
[root@registry ~]# ip link set dev docker0 up

Add a route so that all traffic to 10.0.0.0/8 leaves through docker0
[root@registry ~]# ip route add 10.0.0.0/8 dev docker0

 

 

#########################################################################################

 

On the 192.168.20.223 server
[root@docker223~]# rpm -ivh openvswitch-2.5.0-2.el7.x86_64.rpm 
warning: openvswitch-2.5.0-2.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID fac8d3c0: NOKEY
error: Failed dependencies:
libatomic.so.1()(64bit) is needed by openvswitch-2.5.0-2.el7.x86_64

[root@docker223~]# yum -y install libatomic


[root@docker223~]# systemctl start openvswitch.service

[root@docker223~]# systemctl status openvswitch.service 
● openvswitch.service - Open vSwitch
Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; disabled; vendor preset: disabled)
Active: active (exited) since Sat 2017-02-11 10:00:11 CST; 17s ago
Process: 3854 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 3854 (code=exited, status=0/SUCCESS)

Feb 11 10:00:11 docker223systemd[1]: Starting Open vSwitch...
Feb 11 10:00:11 docker223systemd[1]: Started Open vSwitch.

[root@docker223~]# yum -y install bridge-utils


[root@docker223~]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242fabe521c no

# create a new bridge
[root@docker223~]# ovs-vsctl add-br br0

# add a GRE tunnel port to the bridge
[root@docker223~]# ovs-vsctl add-port br0 gre1 -- set interface gre1 type=gre option:remote_ip=192.168.20.209


The configuration now includes the following; note the ovs-vsctl show output differs from the one on 192.168.20.209:
[root@docker223~]# ovs-vsctl show

8256b14a-1da6-4781-b9aa-7c6612ce7ebf
Bridge "br0"
Port "gre1"
Interface "gre1"
type: gre
options: {remote_ip="192.168.20.209"}
Port "br0"
Interface "br0"
type: internal
ovs_version: "2.5.0"

#### to delete br0: [root@docker223~]# ovs-vsctl del-br br0


# attach br0 to the local docker0 so that container traffic goes out through the Open vSwitch tunnel
[root@docker223~]# brctl addif docker0 br0
[root@docker223~]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242fabe521c no br0


[root@docker223~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host 
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:50:56:84:2b:fc brd ff:ff:ff:ff:ff:ff
inet 192.168.20.223/24 brd 192.168.20.255 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe84:2bfc/64 scope link 
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state DOWN 
link/ether 02:42:5e:5d:06:3f brd ff:ff:ff:ff:ff:ff
inet 10.0.3.1/24 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:5eff:fe5d:63f/64 scope link 
valid_lft forever preferred_lft forever
28: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
link/ether f2:fd:f4:39:e2:20 brd ff:ff:ff:ff:ff:ff
29: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state DOWN 
link/ether 0a:29:1e:93:37:41 brd ff:ff:ff:ff:ff:ff
inet6 fe80::829:1eff:fe93:3741/64 scope link 
valid_lft forever preferred_lft forever


# bring up the docker0 and br0 interfaces
[root@docker223~]# ip link set dev br0 up
[root@docker223~]# ip link set dev docker0 up

Add a route so that all traffic to 10.0.0.0/8 leaves through docker0
[root@docker223~]# ip route add 10.0.0.0/8 dev docker0

 

#########################################################################################

 

On the 192.168.20.224 server
[root@docker224~]# rpm -ivh openvswitch-2.5.0-2.el7.x86_64.rpm 
warning: openvswitch-2.5.0-2.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID fac8d3c0: NOKEY
error: Failed dependencies:
libatomic.so.1()(64bit) is needed by openvswitch-2.5.0-2.el7.x86_64

[root@docker224~]# yum -y install libatomic


[root@docker224~]# systemctl start openvswitch.service

[root@docker224~]# systemctl status openvswitch.service 
● openvswitch.service - Open vSwitch
Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; disabled; vendor preset: disabled)
Active: active (exited) since Sat 2017-02-11 10:00:11 CST; 17s ago
Process: 3854 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 3854 (code=exited, status=0/SUCCESS)

Feb 11 10:00:11 docker223systemd[1]: Starting Open vSwitch...
Feb 11 10:00:11 docker223systemd[1]: Started Open vSwitch.

[root@docker223~]# yum -y install bridge-utils


[root@docker224~]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242fabe521c no

# create a new bridge
[root@docker224~]# ovs-vsctl add-br br0

# add a GRE tunnel port to the bridge
[root@docker224~]# ovs-vsctl add-port br0 gre2 -- set interface gre2 type=gre option:remote_ip=192.168.20.209


The configuration now includes the following; note the ovs-vsctl show output differs from the one on 192.168.20.209:
[root@docker224~]# ovs-vsctl show

8256b14a-1da6-4781-b9aa-7c6612ce7ebf
Bridge "br0"
Port "gre2"
Interface "gre2"
type: gre
options: {remote_ip="192.168.20.209"}
Port "br0"
Interface "br0"
type: internal
ovs_version: "2.5.0"

#### to delete br0: [root@docker224~]# ovs-vsctl del-br br0


# attach br0 to the local docker0 so that container traffic goes out through the Open vSwitch tunnel
[root@docker224~]# brctl addif docker0 br0
[root@docker224~]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242fabe521c no br0


[root@docker224~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host 
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:50:56:84:2b:fc brd ff:ff:ff:ff:ff:ff
inet 192.168.20.224/24 brd 192.168.20.255 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fe84:2bfc/64 scope link 
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state DOWN 
link/ether 02:42:5e:5d:06:3f brd ff:ff:ff:ff:ff:ff
inet 10.0.4.1/24 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:5eff:fe5d:63f/64 scope link 
valid_lft forever preferred_lft forever
28: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN 
link/ether f2:fd:f4:39:e2:20 brd ff:ff:ff:ff:ff:ff
29: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state DOWN 
link/ether 0a:29:1e:93:37:41 brd ff:ff:ff:ff:ff:ff
inet6 fe80::829:1eff:fe93:3741/64 scope link 
valid_lft forever preferred_lft forever


# bring up the docker0 and br0 interfaces
[root@docker224~]# ip link set dev br0 up
[root@docker224~]# ip link set dev docker0 up

Add a route so that all traffic to 10.0.0.0/8 leaves through docker0
[root@docker224~]# ip route add 10.0.0.0/8 dev docker0

 

 Summary: on 192.168.20.209, br0 carries gre0, gre1 and gre2 with the mapping below, while 192.168.20.135 adds gre0 to its br0, 192.168.20.223 adds gre1, and 192.168.20.224 adds gre2. Once containers are started on each host, the container subnets on the different hosts can ping each other.

  gre0  192.168.20.135
  gre1  192.168.20.223
  gre2  192.168.20.224

 

Autostart script for 192.168.20.209

#!/bin/bash

systemctl start openvswitch.service

systemctl enable openvswitch.service

ovs-vsctl add-br br0
ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre option:remote_ip=192.168.20.135
ovs-vsctl add-port br0 gre1 -- set interface gre1 type=gre option:remote_ip=192.168.20.223
ovs-vsctl add-port br0 gre2 -- set interface gre2 type=gre option:remote_ip=192.168.20.224

ovs-vsctl show

brctl show

brctl addif docker0 br0

ip link set dev br0 up

ip link set dev docker0 up

ip route add 10.0.0.0/8 dev docker0

 

Autostart script for 192.168.20.135

systemctl start openvswitch.service

systemctl enable openvswitch.service

ovs-vsctl add-br br0
ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre option:remote_ip=192.168.20.209

ovs-vsctl show

brctl show

brctl addif docker0 br0

ip link set dev br0 up

ip link set dev docker0 up

ip route add 10.0.0.0/8 dev docker0

 

Autostart script for 192.168.20.223

systemctl start openvswitch.service

systemctl enable openvswitch.service

ovs-vsctl add-br br0
ovs-vsctl add-port br0 gre1 -- set interface gre1 type=gre option:remote_ip=192.168.20.209

ovs-vsctl show

brctl show

brctl addif docker0 br0

ip link set dev br0 up

ip link set dev docker0 up

ip route add 10.0.0.0/8 dev docker0

 

Autostart script for 192.168.20.224

systemctl start openvswitch.service

systemctl enable openvswitch.service

ovs-vsctl add-br br0
ovs-vsctl add-port br0 gre2 -- set interface gre2 type=gre option:remote_ip=192.168.20.209

ovs-vsctl show

brctl show

brctl addif docker0 br0

ip link set dev br0 up

ip link set dev docker0 up

ip route add 10.0.0.0/8 dev docker0
