Chinese documentation: docs.docker-cn.com/html

Image: a read-only template. Images are used to create Docker containers, and a single image can create many containers.
The relationship between containers and images is similar to that between objects and classes in object-oriented programming.

Container: Docker uses containers to run one application or a group of applications in isolation. A container is a running instance created from an image.
It can be started, stopped, and deleted. Containers are isolated from one another, giving you a secure platform.
You can think of a container as a stripped-down Linux environment. The definition of a container is almost identical to that of an image: it is also a unified view over a stack of layers. The only difference is that the topmost layer of a container is readable and writable.

Repository: a repository is a place where image files are stored centrally. A repository is not the same thing as a registry: a registry usually hosts many repositories, each repository contains multiple images, and each image carries a different tag.
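To make the image/container relationship concrete, here is a minimal sketch; the nginx image and the names web1/web2 are just examples:

docker pull nginx                  # fetch the read-only template once
docker run -d --name web1 nginx    # first container created from that image
docker run -d --name web2 nginx    # second, fully isolated container from the same image
docker images                      # the image is listed once
docker ps                          # both containers are running instances of it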
1. yum install -y epel-release
2. yum install -y docker-io
3. Configuration file after installation: /etc/sysconfig/docker
4. Start the Docker daemon: service docker start
5. Verify with: docker version
Official installation steps: docs.docker.com/install/lin…
1. Official Chinese installation reference manual
https://docs.docker-cn.com/engine/installation/linux/docker-ce/centos/#prerequisites
2. Make sure you are running CentOS 7 or later
cat /etc/redhat-release
3. Install gcc via yum (the CentOS 7 host needs internet access)
yum -y install gcc
yum -y install gcc-c++
4. Remove older versions (command per the official docs, March 2018)
yum -y remove docker docker-common docker-selinux docker-engine
5. Install the required packages
yum install -y yum-utils device-mapper-persistent-data lvm2
6. Set up the stable repository
Big pitfall (very slow from mainland China):
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Recommended instead:
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
7. Update the yum package index
yum makecache fast
8. Install Docker CE
yum -y install docker-ce
9. Start Docker
systemctl start docker
10. Test
docker version
docker run hello-world
11. Configure a registry mirror (accelerator)
mkdir -p /etc/docker
vim /etc/docker/daemon.json
systemctl daemon-reload
systemctl restart docker
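Step 11 opens /etc/docker/daemon.json without showing its contents. A minimal sketch of what goes into it; the mirror URL is a placeholder for your own accelerator address:

tee /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://xxxx.mirror.aliyuncs.com"]
}
EOF

Then run the two systemctl commands above so the daemon picks up the change.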
12. Uninstall
systemctl stop docker
yum -y remove docker-ce
rm -rf /var/lib/docker
Websites: dev.aliyun.com/search.html
cr.console.aliyun.com/cn-hangzhou…
1. Obtain your accelerator address
2. Configure the local Docker daemon to use the registry mirror
Given network conditions in mainland China, pulling Docker images later can be very slow, so we configure an accelerator to fix this.
I use the mirror address tied to my own Alibaba Cloud account (you need to register to get one of your own): https://xxxx.mirror.aliyuncs.com
* vim /etc/sysconfig/docker
Add the accelerator address from your own account to the configuration:
other_args="--registry-mirror=https://your-own-accelerator-id.mirror.aliyuncs.com"
3. Restart the Docker daemon: service docker restart
4. Run ps -ef | grep docker; output like the following means it is configured:
[root@Hadoop1 Desktop]# ps -ef | grep docker
root 5224 1 1 11:07 pts/0 00:00:00 /usr/bin/docker -d --registry-mirror=https://n13vjvek.mirror.aliyuncs.com
root 5292 3821 0 11:07 pts/0 00:00:00 grep docker
1. How does Docker work?
Docker is a client-server system. The Docker daemon runs on the host, and the client talks to it over a socket connection; the daemon accepts commands from the client and manages the containers running on the host. A container is a runtime environment: the "shipping container" we mentioned earlier.
2. Why is Docker faster than a VM?
(1) Docker has fewer abstraction layers than a virtual machine. Because Docker needs no hypervisor to virtualize hardware resources, programs running in a Docker container use the physical machine's hardware directly, which gives Docker a clear efficiency advantage in CPU and memory utilization.
(2) Docker uses the host's kernel and needs no Guest OS. When a new container is created, Docker therefore does not have to load an operating-system kernel the way a virtual machine does, avoiding the slow, resource-heavy process of locating and booting a kernel. Creating a new virtual machine means the virtualization software must load a Guest OS, a minute-level process; Docker uses the host operating system directly and skips this step, so creating a Docker container takes only seconds.
| | Docker container | Virtual machine (VM) |
|---|---|---|
| Hardware affinity | Geared toward software developers | Geared toward hardware developers |
| Deployment speed | Fast, second-level | Slower |
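A rough, hedged way to see second-level container startup on your own machine (timings vary; this assumes the centos image is already pulled):

time docker run --rm centos /bin/true    # typically completes in about a second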
Linux help: run man <command> to see detailed usage information for that command.
[root@Hadoop1 Desktop]# docker --help
Usage: docker [OPTIONS] COMMAND [arg...]
Options:
  --api-cors-header=            Set CORS headers in the remote API
  -b, --bridge=                 Attach containers to a network bridge
  --bip=                        Specify the network bridge IP
  -D, --debug=false             Enable debug mode
  -d, --daemon=false            Enable daemon mode
  --default-gateway=            Container default gateway IPv4 address
  --default-gateway-v6=         Container default gateway IPv6 address
  --default-ulimit=[]           Set default ulimits for containers
  --dns=[]                      DNS servers to use
  --dns-search=[]               DNS search domains to use
  -e, --exec-driver=            Exec driver to use
  --exec-opt=[]                 Set exec driver options
  --exec-root=/var/run/docker   Root directory of the docker exec driver
  --fixed-cidr=                 IPv4 subnet for fixed IPs
  --fixed-cidr-v6=              IPv6 subnet for fixed IPs
  -G, --group=                  Group for the unix socket
  -g, --graph=/var/lib/docker   Root directory of the docker runtime
  -H, --host=[]                 Daemon socket(s) to connect to
  -h, --help=false              Print usage
  --icc=true                    Enable inter-container communication
  --insecure-registry=[]        Enable insecure registry communication
  --ip=0.0.0.0                  Default IP when binding container ports
  --ip-forward=true             Enable net.ipv4.ip_forward
  --ip-masq=true                Enable IP masquerading
  --iptables=true               Enable addition of iptables rules
  --ipv6=false                  Enable IPv6 networking
  -l, --log-level=info          Set the logging level
  --label=[]                    Set key=value labels to the daemon
  --log-driver=json-file        Default driver for container logs
  --log-opt=map[]               Set log driver options
  --mtu=0                       Set the container network MTU
  -p, --pidfile=/var/run/docker.pid  Path to use for the daemon PID file
  --registry-mirror=[]          Preferred Docker registry mirror
  -s, --storage-driver=         Storage driver to use
  --selinux-enabled=false       Enable selinux support
  --storage-opt=[]              Set storage driver options
  --tls=false                   Use TLS; implied by --tlsverify
  --tlscacert=~/.docker/ca.pem  Trust only certs signed by this CA
  --tlscert=~/.docker/cert.pem  Path to TLS certificate file
  --tlskey=~/.docker/key.pem    Path to TLS key file
  --tlsverify=false             Use TLS and verify the remote
  --userland-proxy=true         Use userland proxy for loopback traffic
  -v, --version=false           Print version information and quit
Commands:
attach Attach to a running container
build Build an image from a Dockerfile
  commit    Create a new image from a container's changes
  cp        Copy files/folders from a container's filesystem to the host path
create Create a new container
  diff      Inspect changes on a container's filesystem
  events    Get real time events from the server
  exec      Run a command in a running container
  export    Stream the contents of a container as a tar archive
  history   Show the history of an image
  images    List images
  import    Create a new filesystem image from the contents of a tarball
  info      Display system-wide information
  inspect   Return low-level information on a container or image
  kill      Kill a running container
  load      Load an image from a tar archive
  login     Register or log in to a Docker registry server
  logout    Log out from a Docker registry server
  logs      Fetch the logs of a container
  pause     Pause all processes within a container
  port      Lookup the public-facing port that is NAT-ed to PRIVATE_PORT
  ps        List containers
  pull      Pull an image or a repository from a Docker registry server
  push      Push an image or a repository to a Docker registry server
  rename    Rename an existing container
  restart   Restart a running container
  rm        Remove one or more containers
  rmi       Remove one or more images
  run       Run a command in a new container
  save      Save an image to a tar archive
  search    Search for an image on the Docker Hub
  start     Start a stopped container
  stats     Display a stream of a containers' resource usage statistics
stop Stop a running container
tag Tag an image into a repository
top Lookup the running processes of a container
unpause Unpause a paused container
version Show the Docker version information
wait Block until a container stops, then print its exit code
docker images [OPTIONS]
OPTIONS:
-a : list all local images (including intermediate image layers)
-q : show only image IDs
--digests : show image digest information
--no-trunc : show the full image information
Website
https://hub.docker.com
Command
docker search [OPTIONS] image-name
OPTIONS:
--no-trunc : show the full image description
-s : list only images with at least the specified number of stars
--automated : list only automated-build images
Pull an image
docker pull image-name[:TAG]
Remove images
Remove one:
docker rmi -f image-ID
Remove several:
docker rmi -f image-name1:TAG image-name2:TAG
Remove all:
docker rmi -f $(docker images -qa)
docker pull centos
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
OPTIONS (commonly used; some take one dash, some two):
--name="new-container-name": assign a name to the container
-d: run the container in the background and print the container ID, i.e. start a detached daemon container
-i: run the container in interactive mode, usually used together with -t
-t: allocate a pseudo-terminal for the container, usually used together with -i
-P: publish exposed ports to random host ports
-p: publish a specific port mapping, in one of four formats (see the sketch after this list):
ip:hostPort:containerPort
ip::containerPort
hostPort:containerPort
containerPort
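A hedged sketch of the four -p formats; nginx is used purely as an example image:

docker run -d -p 127.0.0.1:8080:80 nginx    # ip:hostPort:containerPort
docker run -d -p 127.0.0.1::80 nginx        # ip::containerPort (random host port on that IP)
docker run -d -p 8080:80 nginx              # hostPort:containerPort
docker run -d -p 80 nginx                   # containerPort (random host port)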
Start an interactive container:
# use the centos:latest image to start a container in interactive mode and run /bin/bash inside it
docker run -it centos /bin/bash
docker ps [OPTIONS]
OPTIONS (commonly used):
-a : list all currently running containers plus those that ran in the past
-l : show the most recently created container
-n : show the n most recently created containers
-q : quiet mode, show only container IDs
--no-trunc : do not truncate output
Two ways to leave a container
exit
  stop the container and exit
Ctrl+P+Q
  exit without stopping the container
docker start container-ID-or-name
docker restart container-ID-or-name
docker stop container-ID-or-name
docker kill container-ID-or-name
docker rm container-ID
Remove several containers at once:
docker rm -f $(docker ps -a -q)
docker ps -a -q | xargs docker rm
docker run -d image-name
# use the centos:latest image to start a container in detached (background) mode
docker run -d centos
Problem: checking with docker ps -a shows the container has already exited.
A very important point: for a Docker container to keep running in the background, it must have a foreground process.
If the command the container runs is not one that stays in the foreground (such as top or tail), the container exits on its own.
This is simply how Docker works. Take a web container, nginx for example: normally we would just start the service, e.g.
service nginx start
But started this way nginx runs as a background daemon, which leaves Docker with no foreground application,
so the container kills itself right after starting, because it decides it has nothing left to do.
The best solution is therefore to run the program you care about as a foreground process.
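A small sketch of the difference using the centos image; the echo loop is just an illustrative long-lived foreground process:

# exits immediately: the command completes and no foreground process remains
docker run -d centos /bin/sh -c "echo started"
# keeps running: the loop stays in the foreground
docker run -d centos /bin/sh -c "while true; do echo hello zzyy; sleep 2; done"

The second container is also a handy target for the docker logs command below.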
docker logs -f -t --tail <number> container-ID
* -t adds timestamps
* -f follows the latest log output
* --tail <number> shows only the last N lines
docker top container-ID
docker inspect container-ID
docker exec -it container-ID /bin/bash    # enter the container first, then work inside it
docker exec -it container-ID ls -l /tmp   # run the command from the host directly, without entering the container's shell
Re-enter a container: docker attach container-ID
Difference between the two:
attach attaches directly to the terminal of the container's start command and does not start a new process
exec opens a new terminal inside the container and can start new processes
docker cp container-ID:path-inside-container host-destination-path
Common commands
attach    Attach to a running container from the current shell
build     Build an image from a Dockerfile
commit    Create a new image from a container's changes
cp        Copy files/folders from a container's filesystem to the host path
create    Create a new container (same as run, but without starting it)
diff      Inspect changes on a container's filesystem
events    Get real time events from the server
exec      Run a command in an existing container
export    Stream the contents of a container as a tar archive [counterpart of import]
history   Show the history of an image
images    List images
import    Create a new filesystem image from the contents of a tarball [counterpart of export]
info      Display system-wide information
inspect   Return low-level information on a container
kill      Kill a running container
load      Load an image from a tar archive [counterpart of save]
login     Register or log in to a Docker registry server
logout    Log out from a Docker registry server
logs      Fetch the logs of a container
port      Lookup the public-facing port which is NAT-ed to PRIVATE_PORT
pause     Pause all processes within a container
ps        List containers
pull      Pull an image or a repository from a Docker registry server
push      Push an image or a repository to a Docker registry server
restart   Restart a running container
rm        Remove one or more containers
rmi       Remove one or more images [an image can only be removed when no container uses it, unless forced with -f]
run       Run a command in a new container
save      Save an image to a tar archive [counterpart of load]
search    Search for an image on the Docker Hub
start     Start a stopped container
stop      Stop a running container
tag       Tag an image into a repository
top       Lookup the running processes of a container
unpause   Unpause a paused container
version   Show the docker version information
wait      Block until a container stops, then print its exit code
An image is a lightweight, executable, standalone software package that bundles a software runtime environment with the software developed for it. It contains everything needed to run the software: code, runtime, libraries, environment variables, and configuration files.

UnionFS (union file system): UnionFS is a layered, lightweight, high-performance file system in which changes are stacked layer by layer as individual commits, and different directories can be mounted under a single virtual file system (unite several directories into a single virtual filesystem). Union file systems are the foundation of Docker images. Images can inherit through layering: starting from a base image (one with no parent), you can build all kinds of concrete application images.

Characteristics: several file systems are loaded at once, but from the outside only one is visible; union mounting stacks the layers so the final file system contains all of the underlying files and directories.

A Docker image is in fact made up of layered file systems, and this layered file system is UnionFS. bootfs (boot file system) mainly contains the bootloader and the kernel; the bootloader's job is to load the kernel. When Linux starts it loads the bootfs, and bootfs is the bottom layer of every Docker image. This layer is the same as in a typical Linux/Unix system: the boot loader plus the kernel. Once booting finishes, the whole kernel is in memory; ownership of that memory passes from bootfs to the kernel, and the system unmounts the bootfs.

rootfs (root file system) sits on top of bootfs. It contains the standard directories and files of a typical Linux system: /dev, /proc, /bin, /etc and so on. The rootfs is what differs between Linux distributions, e.g. Ubuntu or CentOS. The CentOS we normally install into a virtual machine is several GB, so why is Docker's only about 200 MB?

For a slimmed-down OS, the rootfs can be very small: it only needs the most basic commands, tools, and libraries, because the underlying layer uses the host's kernel directly and the image only has to supply the rootfs. It follows that bootfs is essentially identical across Linux distributions while rootfs differs, so different distributions can share the same bootfs.

Take pull as an example: during the download you can see that a Docker image appears to be downloaded layer by layer.

The biggest benefit is shared resources.
For example, if many images are built from the same base image, the host only needs to keep one copy of the base image on disk and load one copy into memory to serve all containers, and every layer of an image can be shared.

Docker images are all read-only. When a container starts, a new writable layer is loaded on top of the image. This layer is usually called the "container layer", and everything beneath it is called the "image layers".
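A small sketch that makes the writable container layer visible; the centos image and the file name are just examples:

docker run -d --name layer-demo centos /bin/sh -c "touch /newfile; sleep 60"
docker diff layer-demo    # lists "A /newfile": the change lives only in the top writable layer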
docker commit submits a copy of a container, turning it into a new image.
docker commit -m="commit message" -a="author" container-ID target-image-name:[tag]
Case study:
Pull the tomcat image from the Hub and run it successfully:
docker run -it -p 8080:8080 tomcat
-p host-port:container-port
-P publish to random ports
-i: interactive
-t: terminal
Deliberately delete the documentation from the tomcat container produced in the previous step,
so that the current running tomcat instance is a container without the docs content.
Commit it as a new image without docs, atguigu/tomcat02. Start the new image and compare it with the original: atguigu/tomcat02 has no docs, while a freshly started original tomcat still does.
In one sentence: it is a bit like the rdb and aof files in Redis.
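A hedged sketch of the whole case; the container ID placeholder and the docs path inside the tomcat image are illustrative:

docker run -d -p 8080:8080 tomcat
docker exec <tomcat-container-id> rm -rf webapps/docs          # delete the documentation app
docker commit -m="tomcat without docs" -a="zzyy" <tomcat-container-id> atguigu/tomcat02:1.2
docker run -d -p 7777:8080 atguigu/tomcat02:1.2                # :7777/docs is gone; the original tomcat on :8080 still serves it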
First, recall Docker's philosophy:
* package the application together with its runtime environment into a container; the container may come and go, but we want the data to be persistent
* containers should be able to share data
If the data a Docker container produces is not saved into a new image via docker commit, making it part of an image,
then once the container is deleted, the data is gone too.
To persist data in Docker, we use volumes.
What volumes give us:
persistence of container data
inheritance and data sharing between containers
Adding a volume inside a container
(1) Directly with a command
Command:
docker run -it -v /absolute/path/on/host:/path/in/container image-name
Check whether the data volume mounted successfully:
docker inspect container-ID
Data is shared between the container and the host.
If the host changes the data after the container has stopped and exited, is it synchronized when the container runs again? Yes, it is.
Command (with permission control):
docker run -it -v /absolute/path/on/host:/path/in/container:ro image-name
With :ro the container can only read the volume, not modify it.
(2) Via a Dockerfile
Create a mydocker folder under the root directory and enter it.
In a Dockerfile, the VOLUME instruction adds one or more data volumes to the image:
VOLUME ["/dataVolumeContainer","/dataVolumeContainer2","/dataVolumeContainer3"]
Note:
For portability and sharing reasons, the -v /host/dir:/container/dir form cannot be used directly in a Dockerfile,
because host directories are host-specific and cannot be guaranteed to exist on every host.
Dockerfile contents:
# volume test
FROM centos
VOLUME ["/dataVolumeContainer1","/dataVolumeContainer2"]
CMD echo "finished,--------success1"
CMD /bin/bash
Build the image:
docker build -f /mydocker/dockerfile2 -t zzyy/centos .
This produces a new image, zzyy/centos.
Run a container from it.
After the steps above, the volume directories inside the container are known.
What about the corresponding directory on the host? The host gets a default path, which you can look up with docker inspect container-ID.
Note: if accessing a mounted host directory through Docker fails with "cannot open directory .: Permission denied",
the fix is to append the --privileged=true parameter when mounting the directory.
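A hedged way to read the host-side paths of those anonymous volumes; the -f/.Mounts form is what newer Docker versions expose (older daemons show a Volumes field instead):

docker inspect -f '{{ json .Mounts }}' <container-id>
# each entry shows Source (host path, typically under /var/lib/docker/volumes/...) and Destination (container path)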
A named container that mounts a data volume, with other containers sharing the data by mounting from it (the parent container), is called a data-volume container.
Use the zzyy/centos image built in the previous step as the template and run containers dc01/dc02/dc03.
They already have the container volumes:
/dataVolumeContainer1
/dataVolumeContainer2
Start a parent container dc01 first:
docker run -it --name dc01 zzyy/centos
Add some content under dataVolumeContainer2.
dc02 and dc03 inherit from dc01 with the --volumes-from option:
docker run -it --name dc02 --volumes-from dc01 zzyy/centos
docker run -it --name dc03 --volumes-from dc01 zzyy/centos
dc02 and dc03 each add their own content under dataVolumeContainer2.
Back in dc01, you can see the content added by dc02 and dc03 is shared.
Delete dc01; after dc02 makes further changes, dc03 can still read them.
Delete dc02; dc03 can still access the data.
Going one step further: create dc04 inheriting from dc03, then delete dc03; the data is still accessible.
Conclusion: configuration passes between containers, and a data volume's lifetime lasts until no container uses it anymore.
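A compact sketch of the whole experiment; file names and contents are illustrative:

docker run -it --name dc01 zzyy/centos
# inside dc01: echo "written by dc01" > /dataVolumeContainer2/dc01.txt, then Ctrl+P+Q
docker run -it --name dc02 --volumes-from dc01 zzyy/centos
# inside dc02: /dataVolumeContainer2/dc01.txt is visible; add dc02.txt, then Ctrl+P+Q
docker rm -f dc01
docker exec -it dc02 ls /dataVolumeContainer2    # both files survive: the volume outlives dc01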
Dockerfile basics
1: every instruction keyword must be upper case and must be followed by at least one argument
2: instructions execute from top to bottom, in order
3: # marks a comment
4: each instruction creates a new image layer and commits the image

Roughly how Docker executes a Dockerfile:
(1) docker runs a container from the base image
(2) it executes one instruction and modifies the container
(3) it performs a docker commit-like operation to commit a new image layer
(4) docker then runs a new container based on the image just committed
(5) it executes the next instruction in the Dockerfile, repeating until all instructions are done
In summary:
From the point of view of application software, the Dockerfile, the Docker image, and the Docker container represent three different stages of the software:
* the Dockerfile is the raw material of the software
* the Docker image is the deliverable
* the Docker container can be regarded as the software at runtime
The Dockerfile faces development, the Docker image is the delivery standard, and the Docker container covers deployment and operations; none can be spared, and together they form the cornerstone of the Docker system.
1. Dockerfile: you define a Dockerfile, which defines everything the process needs. It covers things like the code or files to execute, environment variables, dependencies, the runtime, dynamic-link libraries, the OS distribution, service processes and kernel processes (when the application process has to interact with system services and kernel processes, you need to think about how to design namespace permission controls), and so on;
2. Docker image: once the Dockerfile is defined, docker build produces a Docker image; when the image is run, it actually starts providing the service;
3. Docker container: the container is what directly provides the service.
FROM
  the base image, i.e. which image the new image is built on.
MAINTAINER
  the image maintainer's name and email address.
RUN
  commands that need to run while the image is being built.
EXPOSE
  the port the container exposes to the outside.
WORKDIR
  the working directory a terminal lands in by default after the container is created; a landing point.
ENV
  sets environment variables during the image build.
  ENV MY_PATH /usr/mytest
  This environment variable can be used in any subsequent RUN instruction, just as if the variable prefix had been put in front of the command;
  it can also be used directly in other instructions,
  e.g.: WORKDIR $MY_PATH
ADD
  copies files from the host directory into the image; ADD also handles URLs and automatically unpacks tar archives.
COPY
  like ADD: copies files and directories into the image.
  Copies the file/directory at <src> in the build-context directory to <dest> inside a new image layer.
  COPY src dest
  COPY ["src", "dest"]
VOLUME
  container data volume, used for saving and persisting data.
CMD
  specifies a command to run when the container starts.
  A Dockerfile can contain multiple CMD instructions, but only the last one takes effect, and CMD is replaced by any arguments given after docker run.
ENTRYPOINT
  also specifies a command to run when the container starts.
  ENTRYPOINT serves the same purpose as CMD: specifying the container's startup program and its arguments.
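A minimal sketch of how CMD and ENTRYPOINT interact with docker run arguments; the entrypoint-demo tag is made up:

cat > Dockerfile <<'EOF'
FROM centos
ENTRYPOINT ["ls"]
CMD ["-a"]
EOF
docker build -t entrypoint-demo .
docker run entrypoint-demo       # runs: ls -a  (CMD supplies the default arguments)
docker run entrypoint-demo -l    # runs: ls -l  (the run arguments replace CMD; ENTRYPOINT stays)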
ONBUILD
  runs a command when this image is used as the base of another build; the parent image's ONBUILD trigger fires when a child image inherits from it.
1. Write it
What the default CentOS image from the Hub lacks, and what we want our custom mycentos image to have:
  a custom default login path
  the vim editor
  ifconfig support for checking the network configuration
Prepare the Dockerfile:
mycentos Dockerfile contents:
FROM centos
MAINTAINER zzyy<zzyy167@126.com>
ENV MYPATH /usr/local
WORKDIR $MYPATH
RUN yum -y install vim
RUN yum -y install net-tools
EXPOSE 80
CMD echo $MYPATH
CMD echo "success--------------ok"
CMD /bin/bash
2. Build
docker build -t new-image-name:TAG .
Note the trailing "." at the end of the docker build command: it denotes the current directory.
3. Run
docker run -it new-image-name:TAG
You can see that our new image supports the vim and ifconfig commands; the extension succeeded.
4. List the change history of the image
docker history image-name
1. mkdir -p /zzyyuse/mydockerfile/tomcat9
2. In that directory: touch c.txt
3. Copy the JDK and Tomcat installation archives into the directory from the previous step:
apache-tomcat-9.0.8.tar.gz
jdk-8u171-linux-x64.tar.gz
4. Create a new Dockerfile in /zzyyuse/mydockerfile/tomcat9
Dockerfile contents:
FROM centos
MAINTAINER houyachao<hyc@qq.com>
# copy c.txt from the host build context into the container at /usr/local/
COPY c.txt /usr/local/cincontainer.txt
# add Java and Tomcat to the container; ADD unpacks the archives automatically
ADD jdk-8u171-linux-x64.tar.gz /usr/local/
ADD apache-tomcat-9.0.8.tar.gz /usr/local/
# install the vim editor
RUN yum -y install vim
# set the WORKDIR path, the login landing point
ENV MYPATH /usr/local
WORKDIR $MYPATH
# configure the Java and Tomcat environment variables
ENV JAVA_HOME /usr/local/jdk1.8.0_171
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.8
ENV CATALINA_BASE /usr/local/apache-tomcat-9.0.8
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib:$CATALINA_HOME/bin
# port the container listens on at runtime
EXPOSE 8080
# run Tomcat at startup
#ENTRYPOINT ["/usr/local/apache-tomcat-9.0.8/bin/startup.sh"]
#CMD ["/usr/local/apache-tomcat-9.0.8/bin/catalina.sh","run"]
CMD /usr/local/apache-tomcat-9.0.8/bin/startup.sh && tail -F /usr/local/apache-tomcat-9.0.8/logs/catalina.out
5. Build
docker build -t zzyytomcat9 .   # run inside the Dockerfile's directory, so -f Dockerfile can be omitted
Build complete.
6. Run
docker run -d -p 9080:8080 --name myt9 -v /zzyyuse/mydockerfile/tomcat9/test:/usr/local/apache-tomcat-9.0.8/webapps/test -v /zzyyuse/mydockerfile/tomcat9/tomcat9logs/:/usr/local/apache-tomcat-9.0.8/logs --privileged=true zzyytomcat9
Notes:
If accessing a mounted host directory through Docker fails with "cannot open directory .: Permission denied",
the fix is to append the --privileged=true parameter when mounting the directory.
7. Verify
Publish the test web service via the container volumes described above.
Overall layout of the test app:
web.xml
a.jsp
Test it in the browser.
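A hedged sketch of that test webapp; the JSP body and the minimal web.xml are illustrative:

mkdir -p /zzyyuse/mydockerfile/tomcat9/test/WEB-INF
cat > /zzyyuse/mydockerfile/tomcat9/test/WEB-INF/web.xml <<'EOF'
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="4.0"/>
EOF
cat > /zzyyuse/mydockerfile/tomcat9/test/a.jsp <<'EOF'
<%= "hello from the mounted test webapp" %>
EOF
# browse to http://<host-ip>:9080/test/a.jsp; since test/ is a mounted volume, host edits show up without rebuilding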
Search for the image
Pull the image
Check the image
Run the image (start a container)
Stop the container
Remove the container
1. Find the tomcat image on Docker Hub
docker search tomcat
2. Pull the tomcat image from Docker Hub to the local machine
docker pull tomcat
3. Use docker images to verify that tomcat was pulled
4. Create a container from the tomcat image (i.e. run the image)
docker run -it -p 8080:8080 tomcat
-p host-port:container-port
-P publish to random ports
-i: interactive
-t: terminal
1. Find the mysql image on Docker Hub
2. Pull the mysql image tagged 5.6 from Docker Hub (via the Alibaba Cloud accelerator) to the local machine
3. Create a container from the mysql:5.6 image (i.e. run the image)
Using the mysql image:
docker run -p 12345:3306 --name mysql -v /zzyyuse/mysql/conf:/etc/mysql/conf.d -v /zzyyuse/mysql/logs:/logs -v /zzyyuse/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 -d mysql:5.6
Explanation of the command:
-p 12345:3306: maps host port 12345 to container port 3306.
--name mysql: the name of the running service.
-v /zzyyuse/mysql/conf:/etc/mysql/conf.d: mounts the host directory /zzyyuse/mysql/conf (e.g. containing my.cnf) at the container's /etc/mysql/conf.d.
-v /zzyyuse/mysql/logs:/logs: mounts the host directory /zzyyuse/mysql/logs at the container's /logs.
-v /zzyyuse/mysql/data:/var/lib/mysql: mounts the host directory /zzyyuse/mysql/data at the container's /var/lib/mysql.
-e MYSQL_ROOT_PASSWORD=123456: initializes the root user's password.
-d mysql:5.6: runs mysql 5.6 as a background service.
docker exec -it <mysql-container-id> /bin/bash
The external Windows 10 machine can also connect to the mysql service running on Docker.
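A hedged sketch of connecting from outside; the host IP is a placeholder and a mysql client must be installed there:

mysql -h 192.168.x.x -P 12345 -u root -p
# enter 123456 (the MYSQL_ROOT_PASSWORD set above); note the port is 12345, the mapped host port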
A small data-backup test (optional):
docker exec <mysql-container-id> sh -c 'exec mysqldump --all-databases -uroot -p"123456"' > /zzyyuse/all-databases.sql
1. Pull the redis image tagged 3.2 from Docker Hub (via the Alibaba Cloud accelerator) to the local machine
2. Create a container from the redis:3.2 image (i.e. run the image)
Using the image:
docker run -p 6379:6379 -v /zzyyuse/myredis/data:/data -v /zzyyuse/myredis/conf/redis.conf:/usr/local/etc/redis/redis.conf -d redis:3.2 redis-server /usr/local/etc/redis/redis.conf --appendonly yes
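Before the configuration file below, a quick hedged smoke test that the mounted config and AOF work; the container ID and key are illustrative:

docker exec -it <redis-container-id> redis-cli
# 127.0.0.1:6379> set k1 v1
# 127.0.0.1:6379> get k1     -> "v1"
# with --appendonly yes, an appendonly.aof file appears under /zzyyuse/myredis/data on the host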
Create a new redis.conf file in the host directory /zzyyuse/myredis/conf/redis.conf:
# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
#
# ./redis-server /path/to/redis.conf
# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.
################################## INCLUDES ###################################
# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf
################################## NETWORK #####################################
# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#bind 127.0.0.1
# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
#
# When protected mode is on and if:
#
# 1) The server is not binding explicitly to a set of addresses using the
# "bind" directive.
# 2) No password is configured.
#
# The server only accepts connections from clients connecting from the
# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
# sockets.
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured, nor a specific set of interfaces
# are explicitly listed using the "bind" directive.
protected-mode yes
# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379
# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511
# Unix socket.
#
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700
# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0
# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
# equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
tcp-keepalive 300
################################# GENERAL #####################################
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
#daemonize no
# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
# supervised no - no supervision interaction
# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# supervised auto - detect upstart or systemd method based on
# UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous liveness pings back to your supervisor.
supervised no
# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
#
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/var/run/redis.pid".
#
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
pidfile /var/run/redis_6379.pid
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""
# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no
# Specify the syslog identity.
# syslog-ident redis
# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16
################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""
save 120 1
save 300 10
save 60 10000
# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes
# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes
# The filename where to dump the DB
dbfilename dump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./
################################# REPLICATION #################################
# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
# 1) Redis replication is asynchronous, but you can configure a master to
# stop accepting writes if it appears to be not connected with at least
# a given number of slaves.
# 2) Redis slaves are able to perform a partial resynchronization with the
# master if the replication link is lost for a relatively small amount of
# time. You may want to configure the replication backlog size (see the next
# sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
# network partition slaves automatically try to reconnect to masters
# and resynchronize with them.
#
# slaveof <masterip> <masterport>
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>
# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
# an error "SYNC with master in progress" to all the kind of commands
# but to INFO and SLAVEOF.
#
slave-serve-stale-data yes
# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
slave-read-only yes
# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New slaves and reconnecting slaves that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the slaves.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
# file on disk. Later the file is transferred by the parent
# process to the slaves incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
# RDB file to slave sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more slaves
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new slaves arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple slaves
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no
# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the slaves.
#
# This is important since once the transfer starts, it is not possible to serve
# new slaves arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more slaves arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5
# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#
# repl-ping-slave-period 10
# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# repl-timeout 60
# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no
# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
# repl-backlog-size 1mb
# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600
# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
slave-priority 100
# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.
# A Redis master is able to list the address and port of the attached
# slaves in different ways. For example the "INFO replication" section
# offers this information, which is used, among other tools, by
# Redis Sentinel in order to discover slave instances.
# Another place where this info is available is in the output of the
# "ROLE" command of a masteer.
#
# The listed IP and address normally reported by a slave is obtained
# in the following way:
#
# IP: The address is auto detected by checking the peer address
# of the socket used by the slave to connect with the master.
#
# Port: The port is communicated by the slave during the replication
# handshake, and is normally the port that the slave is using to
# listen for connections.
#
# However when port forwarding or Network Address Translation (NAT) is
# used, the slave may be actually reachable via different IP and port
# pairs. The following two options can be used by a slave in order to
# report to its master a specific set of IP and port, so that both INFO
# and ROLE will report those values.
#
# There is no need to use both the options if you need to override just
# the port or the IP address.
#
# slave-announce-ip 5.5.5.5
# slave-announce-port 1234
################################## SECURITY ###################################
# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared
# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.
################################### LIMITS ####################################
# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 10000
# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
#
# maxmemory <bytes>
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction
# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs a bit more CPU. 3 is very fast but not very accurate.
#
# maxmemory-samples 5
############################## APPEND ONLY MODE ###############################
# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
appendonly no
# The name of the append only file (default: "appendonly.aof")
appendfilename "appendonly.aof"
# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".
# appendfsync always
appendfsync everysec
# appendfsync no
# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
no-appendfsync-on-rewrite no
# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes
################################ LUA SCRIPTING ###############################
# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000
################################ REDIS CLUSTER ###############################
#
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
# cluster-enabled yes
# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf
# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
# cluster-node-timeout 15000
# A slave of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a slave to actually have a exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple slaves able to failover, they exchange messages
# in order to try to give an advantage to the slave with the best
# replication offset (more data from the master processed).
# Slaves will try to get their rank by offset, and apply to the start
# of the failover a delay proportional to their rank.
#
# 2) Every single slave computes the time of the last interaction with
# its master. This can be the last ping or command received (if the master
# is still in the "connected" state), or the time that elapsed since the
# disconnection with the master (if the replication link is currently down).
# If the last interaction is too old, the slave will not try to failover
# at all.
#
# The point "2" can be tuned by user. Specifically a slave will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
# (node-timeout * slave-validity-factor) + repl-ping-slave-period
#
# So for example if node-timeout is 30 seconds, and the slave-validity-factor
# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the
# slave will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large slave-validity-factor may allow slaves with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a slave at all.
#
# For maximum availability, it is possible to set the slave-validity-factor
# to a value of 0, which means, that slaves will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-slave-validity-factor 10
# Cluster slaves are able to migrate to orphaned masters, that are masters
# that are left without working slaves. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working slaves.
#
# Slaves migrate to orphaned masters only if there are still at least a
# given number of other working slaves for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a slave
# will migrate only if there is at least 1 other working slave for its master
# and so forth. It usually reflects the number of slaves you want for every
# master in your cluster.
#
# Default is 1 (slaves migrate only if their masters remain with at least
# one slave). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1
# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes
# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.
################################## SLOW LOG ###################################
# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.
# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000
# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
################################ LATENCY MONITOR ##############################
# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0
############################# EVENT NOTIFICATION ##############################
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
# K Keyspace events, published with __keyspace@<db>__ prefix.
# E Keyevent events, published with __keyevent@<db>__ prefix.
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""
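As a quick sketch of the expired-keys example above, assuming database 0 and two redis-cli sessions:
CONFIG SET notify-keyspace-events Ex   # session A: enable keyevent notifications for expirations
SUBSCRIBE __keyevent@0__:expired       # session B: listen for the events
SET foo bar EX 1                       # session A: about a second later, session B receives "foo"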
############################### ADVANCED CONFIG ###############################
# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# For a fixed maximum size, use -5 through -1, meaning:
# -5: max size: 64 Kb <-- not recommended for normal workloads
# -4: max size: 32 Kb <-- not recommended
# -3: max size: 16 Kb <-- probably not recommended
# -2: max size: 8 Kb <-- good
# -1: max size: 4 Kb <-- good
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
list-max-ziplist-size -2
# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression. The head and tail of the list
# are always uncompressed for fast push/pop operations. Settings are:
# 0: disable all list compression
# 1: depth 1 means "don't start compressing until after 1 node into the list,
# going from either the head or tail"
# So: [head]->node->node->...->node->[tail]
# [head], [tail] will always be uncompressed; inner nodes will compress.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
# 2 here means: don't compress head or head->next or tail->prev or tail,
# but compress all nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
# etc.
list-compress-depth 0
# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512
# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
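OBJECT ENCODING reports which internal representation a key currently uses, so the thresholds above can be checked directly; the key names below are illustrative:
HSET myhash f v
OBJECT ENCODING myhash    # "ziplist" while under the hash-max-ziplist-* limits
SADD myset 1 2 3
OBJECT ENCODING myset     # "intset" for a small all-integer set
ZADD myzset 1 a
OBJECT ENCODING myzset    # "ziplist" while under the zset-max-ziplist-* limits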
# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When a HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000
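The sparse-to-dense switch is transparent to the commands themselves; for instance:
PFADD visitors u1 u2 u3   # stays in the sparse encoding while small
PFCOUNT visitors          # approximate cardinality either way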
# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs lazy rehashing: the more operations you run against a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# By default Redis spends this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes
# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# slave -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can be read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Either the hard or the soft limit can be disabled by setting it to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
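A single class can also be adjusted at runtime without touching the others; the values here are illustrative:
CONFIG SET client-output-buffer-limit "pubsub 64mb 16mb 120"
CONFIG GET client-output-buffer-limit   # confirm the limits for all three classes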
# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10
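Both "hz" and "activerehashing" can be changed at runtime; a quick sketch:
CONFIG SET activerehashing no   # trade some memory for steadier latencies
CONFIG SET hz 100               # only for environments needing very low latency
CONFIG GET hz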
# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes
vim /zzyyuse/myredis/conf/redis.conf/redis.conf
Test that redis-cli can connect:
docker exec -it <ID of the container running the Redis service> redis-cli
Then verify that the persistence files are generated.
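A hedged walk-through of that test, assuming the container was started with the name myredis and its data directory mounted at /zzyyuse/myredis/data (adjust both to match your own run command):
docker exec -it myredis redis-cli    # attach a redis-cli inside the container
set k1 v1                            # write something so there is data to persist
exit
ls /zzyyuse/myredis/data             # on the host: look for dump.rdb / appendonly.aof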
Publishing a local image to Alibaba Cloud
1. How images are generated
Either from a Dockerfile, as covered earlier,
or from a container, by creating a new image with docker commit:
docker commit [OPTIONS] <containerID> [REPOSITORY[:TAG]]
OPTIONS:
-a : author of the committed image
-m : commit message (a sketch follows below)
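A sketch of committing a running container; the ID, author, and tag below are placeholders:
docker commit -a "yourname" -m "redis with custom conf" <containerID> myredis:1.1
docker images   # the new myredis:1.1 should now be listed locally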
2. Push the local image to Alibaba Cloud
1. Prepare the local image to be published
2. Open the Alibaba Cloud developer platform:
https://dev.aliyun.com/search.html
3. Create an image repository, specifying a
namespace and a
repository name
4. Push the image to the registry (see the command sketch below)
5. The image can then be found on the public cloud
6. View its details
3. Download the image from Alibaba Cloud to the local machine
Pull it back down locally:
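Pulling is symmetrical, with the same placeholders as above:
docker pull registry.cn-hangzhou.aliyuncs.com/<namespace>/<repo>:<tag>
docker images   # verify the image is back on the local machine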