Docker Learning Notes

Introduction to Docker

 What is it

The question: why did Docker come about?

A product goes from development to production: from the operating system, to the runtime environment, to the application configuration. In the collaboration between development and operations there is a great deal to take care of; this is a problem many internet companies have to face, and after several version iterations, keeping the environments of different versions compatible is a real test for operations staff.
Docker has grown so fast precisely because it offers a standardized solution to this.
Environment setup is so troublesome that moving to another machine means doing it all over again, wasting time and effort. Many people asked: can the problem be solved at the root — can software be installed together with its environment? That is, at install time, replicate the original environment exactly. With Docker, developers can eliminate the "works on my machine" problem when collaborating on code.
 
 
Previously, configuring an application's runtime environment on a server meant installing all kinds of software. Take the environment of the 尚硅谷 e-commerce project as an example: Java/Tomcat/MySQL/JDBC driver packages and so on. Not to mention how troublesome installing and configuring all of this is, it is not portable across platforms either. If we installed these environments on Windows, we would have to reinstall on Linux. And even without switching operating systems, migrating the application to another server running the same OS is still very troublesome.
 
Traditionally, the output of software development/testing was the program itself, or compilable/executable binary bytecode (taking Java as an example). For those programs to run smoothly, the development team also had to prepare complete deployment documents so that the operations team could deploy the application; development had to tell the ops/deployment team clearly about every configuration file plus the entire software environment. Even so, deployment failures still happened frequently. The design of Docker images lets Docker break the old notion of "program = application": through images, everything needed to run an application except the OS kernel is packaged from the bottom up as a system environment, achieving seamless cross-platform operation of applications.

The Docker philosophy

Docker is an open-source cloud project implemented in Go.
Docker's main goal is "Build, Ship and Run Any App, Anywhere": by managing the lifecycle of application components — packaging, distribution, deployment, running — a user's APP (a web application, a database application, etc.) together with its runtime environment can be "packaged once, run anywhere".

The emergence of Linux container technology solved exactly this problem, and Docker developed on top of it. Run the application in a Docker container, and the Docker container behaves identically on any operating system, which achieves cross-platform, cross-server portability. Configure the environment once, and on another machine it can be deployed with one click, greatly simplifying operations.

In one sentence

A software container that solves runtime-environment and configuration problems: a container virtualization technology that facilitates continuous integration and contributes to overall releases.

What can it do

Virtual machine technology, the previous approach

A virtual machine is a solution that ships the environment together with the installation.
It can run one operating system inside another, for example running Linux inside Windows. Applications are unaware of this, because the virtual machine looks exactly like a real system; and to the underlying host, the VM is just an ordinary file that can be deleted when no longer needed without affecting anything else. This kind of VM runs a complete second system, keeping the logic between the application, the operating system and the hardware unchanged.

Drawbacks of virtual machines:
1. Heavy resource usage    2. Many redundant steps    3. Slow startup

Container virtualization technology

 
Because of the VM drawbacks above, Linux developed another virtualization technology: Linux Containers (LXC).
Linux containers do not emulate a complete operating system; instead they isolate processes. With containers, all the resources the software needs to run can be packaged into one isolated container. Unlike a virtual machine, a container does not need to bundle an entire OS, only the libraries and settings the software requires. The system thus becomes efficient and lightweight, and it guarantees that software deployed in any environment runs consistently.
 
Differences between Docker and traditional virtualization:
* Traditional VM technology virtualizes a set of hardware, runs a complete operating system on it, and then runs the required application processes on that system;
* Processes inside a container run directly on the host's kernel; the container has no kernel of its own, and no hardware virtualization takes place. Containers are therefore much lighter than traditional VMs.
* Containers are isolated from one another; each container has its own filesystem; processes in different containers do not affect each other, and compute resources can be partitioned.

Development/Operations (DevOps)

Build once, run anywhere

Faster application delivery and deployment

Traditionally, once an application was developed, you had to provide a pile of installers and configuration documents, and after installation perform tedious configuration according to those documents before it would run. After Dockerization, you only need to deliver a small number of container image files; load and run the image in the production environment, and the application's installation and configuration are already baked into the image, saving a great deal of deployment, configuration and test-verification time.

More convenient upgrades and scaling

With the growth of microservice architecture and Docker, many applications are architected as microservices; building an application becomes like assembling Lego bricks, with each Docker container as one "brick", and upgrading the application becomes very easy. When the existing containers can no longer handle the load, new containers can be started quickly from the image to scale out, shrinking scale-out times from days to minutes or even seconds.

Simpler system operations

Once an application runs containerized, the application in production can be kept highly consistent with the development and test environments. The container fully encapsulates the application's environment and state, so inconsistencies in the underlying infrastructure or operating system will not affect the application or produce new bugs. When a program error occurs, the identical container in the test environment can be used to locate and fix it quickly.

More efficient use of compute resources

Docker is kernel-level virtualization; unlike traditional virtualization technologies it needs no extra hypervisor support, so a single physical machine can run many container instances, greatly improving the CPU and memory utilization of physical servers.

Where to download it

Official sites
Docker official site: http://www.docker.com
Docker Chinese site: https://www.docker-cn.com/
Registry
Docker Hub official site: https://hub.docker.com/

Installing Docker

Prerequisites

CentOS Docker installation
Docker supports the following CentOS versions:
CentOS 7 (64-bit)
CentOS 6.5 (64-bit) or higher
 
Prerequisites
Currently, only the kernels shipped in CentOS release versions support Docker.
Running Docker on CentOS 7 requires a 64-bit system with kernel version 3.10 or above.
Running Docker on CentOS 6.5 or a higher CentOS version requires a 64-bit system with kernel version 2.6.32-431 or higher.
 
Check your own kernel
The uname command prints information about the current system (kernel version number, hardware architecture, hostname, operating system type, etc.).

Check the installed CentOS version information (the command exists on CentOS 6.8; CentOS 7 does not have it)
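A quick sketch of both checks; lsb_release is an assumption here — it fits the "present on CentOS 6.8, absent on CentOS 7" note above, since CentOS 7 does not ship it by default:
uname -r          # kernel version and architecture
lsb_release -a    # distribution release details (CentOS 6.8)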

 

The basic components of Docker

Image
A Docker image is a read-only template. Images are used to create Docker containers; a single image can create many containers.
Container
 
Docker uses containers to run an application or group of applications independently. A container is a running instance created from an image.
It can be started, stopped, and deleted. Each container is isolated from the others — a platform that guarantees safety.
You can think of a container as a stripped-down Linux environment (root privileges, process space, user space, network space, etc.) plus the application running inside it.
The definition of a container is almost identical to that of an image — also a unified view over a stack of layers — the only difference being that the topmost layer of a container is read-write.

 

Repository
 
A repository is a place where image files are stored centrally.
A repository (Repository) is not the same as a registry (Registry). A registry often hosts many repositories; each repository contains many images, and each image has a different tag.
 
Repositories come in two forms: public and private.
The largest public registry is Docker Hub (https://hub.docker.com/),
which hosts a huge number of images for users to download. Public registries in China include Alibaba Cloud, NetEase Cloud and others.

 

In summary:
The concepts repository / image / container need to be understood correctly:
 
Docker itself is a container runtime, or call it a management engine. We package the application together with its configuration and dependencies into a deliverable runtime environment; this packaged runtime environment is the image file. Docker containers can only be created through this image file; the image file can be seen as the container's template.
Docker creates container instances from an image file; the same image file can spawn multiple container instances running at the same time.
*  A container instance created from an image file is itself also a file, called a container file.   *  One container runs one service; when we need it, we create a corresponding running instance through the docker client — that is our container.   *  As for the repository, it is simply the place where a pile of images is kept; we can publish images to a repository and pull them down again when we need them.
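A minimal sketch of how the three concepts cooperate (the nginx image is just an illustrative choice):
docker pull nginx                  # pull one image from a repository on the registry
docker run -d --name web1 nginx    # container instance 1 built from that image
docker run -d --name web2 nginx    # container instance 2 from the very same image
docker ps                          # both running containers trace back to the single nginx image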

 

Installation steps

 Installing Docker on CentOS 6.8

yum install -y epel-release
Docker is distributed via EPEL, so on RHEL-family OSes first make sure the EPEL repository is present; otherwise check the OS version and install the corresponding EPEL package.

 

yum install -y docker-io

 

Configuration file after installation: /etc/sysconfig/docker

  

Start the Docker daemon: service docker start

 

Verify with docker version

 Installing Docker on CentOS 7

https://docs.docker.com/install/linux/docker-ce/centos/

 

 Installation steps

Official installation reference manual (Chinese)

Make sure you are on CentOS 7 or above

cat /etc/redhat-release
Install gcc-related packages via yum

yum -y install gcc

yum -y install gcc-c++

 

Remove old versions

yum -y remove docker docker-common docker-selinux docker-engine


March 2018 official version:


yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-selinux \
                  docker-engine-selinux \
                  docker-engine

 

Install the required packages

yum install -y yum-utils device-mapper-persistent-data lvm2

 

Set up the stable repository

Big pitfall:
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

 
Errors you may hit:
1   [Errno 14] curl#35 - TCP connection reset by peer
 

2   [Errno 12] curl#35 - Timeout

Recommended instead:
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

 

Update the yum package index

yum makecache fast

 

Install Docker CE

yum -y install docker-ce

Start Docker

systemctl start docker

Test

docker version

 

docker run hello-world

Configure a registry mirror (accelerator)

mkdir -p /etc/docker


vim  /etc/docker/daemon.json


 
 # NetEase Cloud mirror
{"registry-mirrors": ["http://hub-mirror.c.163.com"] }


systemctl restart docker

Uninstall

systemctl stop docker 
yum -y remove docker-ce
rm -rf /var/lib/docker

Common Docker Commands

Help commands

docker version
docker info
docker --help

Image commands

docker images

List images on the local host

 
Column descriptions:
REPOSITORY: the image's repository source
TAG: the image's tag
IMAGE ID: the image's ID
CREATED: when the image was created
SIZE: the image's size
The same repository source can hold multiple TAGs, representing different versions of that repository source; we use REPOSITORY:TAG to identify a specific image.
If you don't specify a version tag — e.g. you just use ubuntu — docker defaults to the ubuntu:latest image.
 
 

OPTIONS:

-a: list all local images (including intermediate image layers)

-q: show image IDs only.

--digests: show image digest information

--no-trunc: show complete image information
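For example, -q and -a are often combined to feed other commands (see the batch delete under docker rmi below):
docker images -qa     # IDs of every local image, one per line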
docker search some-image-name
Website:
https://hub.docker.com

docker search [OPTIONS] image-name

OPTIONS:
--no-trunc: show the complete image description
-s: list only images whose star count is at least the given value.
--automated: list only automated-build images;

 

docker pull some-image-name

Download an image
docker pull image-name[:TAG]
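If no TAG is given, latest is implied, e.g.:
docker pull tomcat    # equivalent to docker pull tomcat:latest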

docker rmi some-image-name-or-ID

Delete images
Delete one: docker rmi -f image-ID
Delete several: docker rmi -f image-name1:TAG image-name2:TAG
Delete all: docker rmi -f $(docker images -qa)

Container commands

A container can only be created from an image — that is the fundamental prerequisite (download a CentOS image for the demo)

 

docker pull centos
Create and start a container
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
OPTIONS (commonly used) — some take one dash, some take two:
 
--name="new container name": assign a name to the container;
-d: run the container in the background and print the container ID, i.e. start a daemon container;
-i: run the container in interactive mode, usually used together with -t;
-t: allocate a pseudo-terminal for the container, usually used together with -i;
-P: random port mapping;
-p: explicit port mapping, in one of these four formats:
      ip:hostPort:containerPort
      ip::containerPort
      hostPort:containerPort
      containerPort
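Two quick illustrations of the port options (tomcat here is just an example image):
docker run -d -P tomcat              # container ports mapped to random host ports
docker run -d -p 8888:8080 tomcat    # host port 8888 -> container port 8080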

Start an interactive container

# Use the image centos:latest to start a container in interactive mode and run /bin/bash inside it.
docker run -it centos /bin/bash 

 

List all currently running containers

docker ps [OPTIONS]
OPTIONS (commonly used):
 
-a: list all currently running containers plus all that have run in the past
-l: show the most recently created container.
-n: show the last n created containers.
-q: quiet mode, show only container IDs.
--no-trunc: do not truncate the output.

 

Exit a container

Two ways to exit
exit

the container stops and you exit

ctrl+P+Q
exit without stopping the container

Start a container

docker start container-ID-or-name

Restart a container

docker restart container-ID-or-name

Stop a container

docker stop container-ID-or-name

Force-stop a container

docker kill container-ID-or-name

Remove stopped containers

docker rm container-ID
Remove multiple containers at once:
docker rm -f $(docker ps -a -q)
docker ps -a -q | xargs docker rm

Important

Starting daemon containers

 
# Use the image centos:latest to start a container in background mode
docker run -d centos
 
Problem: docker ps -a then shows the container has already exited.
A very important point: for a Docker container to keep running in the background, there must be a foreground process.
If the command the container runs is not one that stays in the foreground (such as top or tail), the container exits automatically.
 
This is a consequence of Docker's design. Take a web container, nginx for example: normally we would just start the corresponding service, e.g.
service nginx start
But that way nginx runs as a background daemon, which means no application runs in Docker's foreground;
such a container kills itself right after its background start, because it thinks it has nothing left to do.
So the best solution is to run the program you want as a foreground process.
docker run -d image-name

 

View container logs

docker logs -f -t --tail number-of-lines container-ID

*   -t adds a timestamp

*   -f follows the newest log output

*   --tail number shows only the last N lines


 
 docker run -d centos /bin/sh -c "while true;do echo hello zzyy;sleep 2;done"
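With the container above printing a line every 2 seconds, the log options can be exercised like this:
docker logs -t -f --tail 3 container-ID    # last 3 lines with timestamps, then keep following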
 

  

View the processes running inside a container

docker top container-ID

Inspect the container's internals

docker inspect container-ID

 

Enter a running container and interact with it on a command line

docker exec -it container-ID bashShell


Re-enter: docker attach container-ID


The difference between the two:
attach goes straight into the terminal of the container's startup command and does not start a new process
exec opens a new terminal inside the container and can start new processes
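Both in action (the path and command are only illustrative):
docker attach container-ID                # back to the terminal of the original startup command
docker exec -t container-ID ls -l /tmp    # run a command inside without entering the container
docker exec -it container-ID /bin/bash    # open a brand-new interactive shell in the container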

Copy files from inside a container to the host

docker cp container-ID:path-inside-container destination-host-path
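For example (the file name is hypothetical):
docker cp container-ID:/tmp/yum.log /root/    # copies /tmp/yum.log out of the container into the host's /root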

Summary

attach    Attach to a running container                 # attach to the specified running container from the current shell
build     Build an image from a Dockerfile              # build a custom image from a Dockerfile
commit    Create a new image from a container changes   # commit the current container as a new image
cp        Copy files/folders from the containers filesystem to the host path   # copy a file or directory from a container to the host
create    Create a new container                        # create a new container, like run, but without starting it
diff      Inspect changes on a container's filesystem   # inspect changes to a docker container's filesystem
events    Get real time events from the server          # get real-time container events from the docker server
exec      Run a command in an existing container        # run a command in an existing container
export    Stream the contents of a container as a tar archive   # export a container's contents as a tar archive [counterpart of import]
history   Show the history of an image                  # show how an image was built up
images    List images                                   # list the images currently on the system
import    Create a new filesystem image from the contents of a tarball # create a new filesystem image from a tarball's contents [counterpart of export]
info      Display system-wide information               # display system-wide information
inspect   Return low-level information on a container   # view detailed container information
kill      Kill a running container                      # kill the specified docker container
load      Load an image from a tar archive              # load an image from a tar archive [counterpart of save]
login     Register or Login to the docker registry server    # register with or log in to a docker registry server
logout    Log out from a Docker registry server          # log out from the current Docker registry
logs      Fetch the logs of a container                 # print the logs of the given container
port      Lookup the public-facing port which is NAT-ed to PRIVATE_PORT    # look up the container-internal source port behind a mapped public port
pause     Pause all processes within a container        # pause a container
ps        List containers                               # list containers
pull      Pull an image or a repository from the docker registry server   # pull the given image or repository from a docker registry server
push      Push an image or a repository to the docker registry server    # push the given image or repository to a docker registry server
restart   Restart a running container                   # restart a running container
rm        Remove one or more containers                 # remove one or more containers
rmi       Remove one or more images             # remove one or more images [only removable when no container uses the image; otherwise remove the related containers first, or force with -f]
run       Run a command in a new container              # create a new container and run a command in it
save      Save an image to a tar archive                # save an image as a tar archive [counterpart of load]
search    Search for an image on the Docker Hub         # search for an image on Docker Hub
start     Start a stopped container                     # start a container
stop      Stop a running container                      # stop a container
tag       Tag an image into a repository                # tag an image in a repository
top       Lookup the running processes of a container   # view the processes running in a container
unpause   Unpause a paused container                    # unpause a paused container
version   Show the docker version information           # show the docker version
wait      Block until a container stops, then print its exit code   # block until a container stops, then print its exit code

 

Docker Images

 What is it:

 
An image is a lightweight, executable, self-contained software package that bundles a software runtime environment and the software developed on top of it; it contains everything needed to run that software, including code, runtime, libraries, environment variables and configuration files.
 
 

 

 UnionFS (Union File System)

UnionFS (Union File System): a layered, lightweight, high-performance filesystem in which modifications to the filesystem are stacked layer by layer as commits, and different directories can be mounted under a single virtual filesystem (unite several directories into a single virtual filesystem). The Union filesystem is the foundation of Docker images. Images can be inherited through layering: starting from a base image (one with no parent image), all kinds of concrete application images can be built.
 
Key property: multiple filesystems are loaded at once, but from the outside only one filesystem is visible; the union mount stacks the layers so that the final filesystem contains all the underlying files and directories.

 How Docker images are loaded

How Docker images are loaded:
A docker image is actually composed of layer upon layer of filesystems; this kind of layered filesystem is UnionFS.
bootfs (boot file system) mainly contains the bootloader and the kernel; the bootloader's job is to load the kernel. When Linux starts, it loads the bootfs filesystem; bootfs is the bottom layer of a Docker image. This layer is the same as in a typical Linux/Unix system, containing the boot loader plus the kernel. Once boot loading completes, the whole kernel is in memory; ownership of memory passes from bootfs to the kernel, and at that point the system unmounts bootfs.
 
rootfs (root file system) sits above bootfs. It contains the standard directories and files of a typical Linux system: /dev, /proc, /bin, /etc and so on. The rootfs is what distinguishes the various OS distributions, such as Ubuntu, CentOS, etc.

The CentOS we usually install into a virtual machine is several GB; why is docker's only about 200 MB??

For a slimmed-down OS, the rootfs can be very small: it only needs the most basic commands, tools and program libraries, because the underlying layer uses the host's kernel directly and the image only has to supply the rootfs. It follows that bootfs is essentially identical across Linux distributions while rootfs differs, so different distributions can share the bootfs.

Layered images

Take pull as an example: during a download we can see that a docker image appears to be downloaded layer by layer.

 

Why do Docker images use this layered structure?

 
 
The biggest benefit: shared resources.
 
For instance, if multiple images are built from the same base image, the host only needs to keep one copy of the base image on disk,
and only one copy needs to be loaded into memory to serve all containers. And every layer of an image can be shared.

特色

Docker鏡像都是隻讀的
當容器啓動時,一個新的可寫層被加載到鏡像的頂部。
這一層一般被稱做「容器層」,「容器層」之下的都叫「鏡像層」。
View Code

Docker鏡像commit操做補充

docker commit提交容器副本使之成爲一個新的鏡像

docker commit -m=「提交的描述信息」 -a=「做者」 容器ID 要建立的目標鏡像名:[標籤名]

 

Case demo

Download the tomcat image from Hub and run it locally
docker run -it -p 8080:8080 tomcat

Deliberately delete the docs from the tomcat container produced by the image in the previous step

That is, the currently running tomcat instance is now a container without the docs content;
using it as the template, commit a new docs-less tomcat image atguigu/tomcat02

Start our new image and compare it with the original
start atguigu/tomcat02 — it has no docs
start a fresh original tomcat — it has docs
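A sketch of the whole sequence; the docs path matches the tomcat image layout of that time, and container-ID stands for the ID shown by docker ps:
docker exec -it container-ID /bin/bash
rm -rf /usr/local/tomcat/webapps/docs      # delete the documentation app inside the container
exit
docker commit -a="zzyy" -m="tomcat without docs" container-ID atguigu/tomcat02:1.2
docker run -it -p 7777:8080 atguigu/tomcat02:1.2    # the new instance serves no /docs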

Docker Container Data Volumes

What is it

In one sentence: a bit like the rdb and aof files in Redis

 

 
First, recall Docker's philosophy:
*  package the application together with its runtime environment into a container to run; the running state can accompany the container, but we want the data to be persistent
*  containers should be able to share data with one another
 
 
If the data a Docker container produces is not preserved as part of an image via docker commit,
then once the container is deleted, the data is naturally gone as well.


To preserve data in docker, we use volumes.

What can it do

 
 
A volume is a directory or file that lives in one or more containers and is mounted into the container by docker, but does not belong to the union filesystem; it can therefore bypass the Union File System and provide features for persisting and sharing data:
 
Volumes are designed for data persistence and are completely independent of the container's lifecycle, so Docker does not delete a mounted data volume when its container is deleted


Characteristics:
1: data volumes can share or reuse data between containers
2: changes in a volume take effect directly
3: changes in a data volume are not included when the image is updated
4: a data volume's lifecycle lasts until no container uses it anymore

Persistence for containers

Inheritance + data sharing between containers

Data volumes

Adding one inside a container

Adding directly on the command line
 docker run -it -v /host-directory:/container-directory centos /bin/bash
 docker run -it -v /absolute-host-directory:/container-directory      image-name

Check whether the data volume mounted successfully:
docker inspect container-ID

Data is shared between the container and the host
After the container stops and exits, data modified on the host is still synchronized

Command (with permissions):
 docker run -it -v /absolute-host-directory:/container-directory:ro image-name
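For instance, with hypothetical host/container paths (:ro makes the volume read-only from inside the container):
docker run -it -v /myDataVolume:/dataVolumeContainer centos /bin/bash
docker run -it -v /myDataVolume:/dataVolumeContainer:ro centos /bin/bash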
Adding one via a DockerFile
Create a new mydocker folder under the root directory and enter it
The VOLUME instruction can be used in a Dockerfile to give the image one or more data volumes:

  VOLUME["/dataVolumeContainer","/dataVolumeContainer2","/dataVolumeContainer3"]
 
  Note:
 
  For portability and sharing reasons, the -v host-directory:container-directory form cannot be used directly in a Dockerfile,
  because host directories are specific to a particular host and cannot be guaranteed to exist on every host.

Build the file

  # volume test
  FROM centos
  VOLUME ["/dataVolumeContainer1","/dataVolumeContainer2"]
  CMD echo "finished,--------success1"
  CMD /bin/bash

Build to produce the image
 obtaining a new image zzyy/centos
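A sketch of the build invocation, assuming the file above was saved as Dockerfile in the current directory:
docker build -f ./Dockerfile -t zzyy/centos .    # the trailing . is the build context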

Run the container

  After the steps above, the volume directory paths inside the container are known
But which host directories do they correspond to??

Default corresponding host directories

 

Note:
If accessing a host directory mounted by Docker fails with cannot open directory .: Permission denied,
the fix is to add the --privileged=true parameter after the mounted directory

Data volume containers

What is it

A named container with a data volume mounted; other containers share the data by mounting this (parent) container. The container that mounts the data volume is called a data volume container.

Overview

Use the image zzyy/centos built in the previous step as the template and run containers dc01/dc02/dc03
They already carry the container volumes:
/dataVolumeContainer1
/dataVolumeContainer2

 

Passing and sharing between containers (--volumes-from)

First start a parent container dc01 and add content in dataVolumeContainer2

 

dc02/dc03 inherit from dc01
--volumes-from
Command:
docker run -it --name dc02 --volumes-from dc01 zzyy/centos
dc02/dc03 each add their own content in dataVolumeContainer2
Back in dc01 you can see that everything 02/03 added is shared
Delete dc01 — after dc02 makes changes, can dc03 still access them?
Delete dc02 — can dc03 still access the data? Going a step further:
create a new dc04 inheriting from dc03, then delete dc03
Conclusion: configuration passes between containers — the data volume's lifecycle lasts until no container uses it anymore
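The last step of that chain, sketched with the container names above:
docker run -it --name dc04 --volumes-from dc03 zzyy/centos
docker rm -f dc03    # dc04 still sees the shared volume — it lives on while any container uses it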

Dissecting the DockerFile

What is it

A Dockerfile is the build file used to construct a Docker image: a script made up of a series of commands and parameters.
Three steps to build:

Write the Dockerfile
docker build
docker run

What does the file look like???
Take our familiar CentOS as an example
https://hub.docker.com/_/centos/

How a DockerFile build works

Dockerfile content basics

1: every reserved-word instruction must be in uppercase letters and be followed by at least one argument
2: instructions execute in order, from top to bottom
3: # marks a comment
4: every instruction creates a new image layer and commits the image

Docker's rough flow when executing a Dockerfile

(1) docker starts a container from the base image
(2) executes one instruction and modifies the container
(3) performs an operation like docker commit to commit a new image layer
(4) docker then starts a new container from the just-committed image
(5) executes the next instruction in the dockerfile, until all instructions are done
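A minimal hypothetical Dockerfile to make the flow concrete — during docker build, each of the two instructions after FROM gets its own run-modify-commit round and thus its own layer:
FROM centos
RUN yum -y install vim
CMD /bin/bash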

Summary

 
From the perspective of application software, the Dockerfile, the Docker image and the Docker container represent three different stages of the software:
*  the Dockerfile is the software's raw material
*  the Docker image is the software's deliverable
*  the Docker container can be regarded as the software's running state.
The Dockerfile faces development, the Docker image becomes the delivery standard, and the Docker container covers deployment and operations; none of the three can be spared, and together they form the cornerstone of the Docker system.

1 Dockerfile: you define a Dockerfile, and the Dockerfile defines everything the process needs. What a Dockerfile touches includes the code or files to execute, environment variables, dependency packages, the runtime, dynamic link libraries, the OS distribution, service processes and kernel processes (when the application process has to interact with system services and kernel processes, that is when to think about how to design the namespace permission control), and so on;
 
2 Docker image: after defining a file with a Dockerfile, docker build produces a Docker image; when the Docker image is run, it actually begins to provide services;
 
3 Docker container: the container is what directly provides the service.

DockerFile structure (reserved-word instructions)

FROM

The base image: which image the new one builds on

MAINTAINER

The image maintainer's name and email address

RUN

Commands that need to run while the container image is being built

EXPOSE

The port the resulting container exposes to the outside

WORKDIR

The working directory a terminal lands in by default after the container is created — a landing point

ENV

 
ENV MY_PATH /usr/mytest
This environment variable can be used in any subsequent RUN instruction, just as if the variable prefix were written before the command;
it can also be used directly in other instructions,
 
e.g.: WORKDIR $MY_PATH
Used to set environment variables during the image build

ADD

Copies files from the host directory into the image; ADD also handles URLs and unpacks tar archives automatically

COPY

Like ADD: copies files and directories into the image.
Copies files/directories at <source path> in the build context into the image's new layer at <target path>

VOLUME

Container data volumes, for saving and persisting data

CMD

Specifies a command to run when the container starts
A Dockerfile can contain multiple CMD instructions, but only the last one takes effect; CMD is replaced by the arguments after docker run

ENTRYPOINT 

Specifies a command to run when the container starts
ENTRYPOINT serves the same purpose as CMD: to specify the container's startup program and its arguments

ONBUILD

Runs a command when this image is used as the base of another build: the parent image's ONBUILD triggers fire after the parent is inherited by a child

Cases

The base image (scratch)

99% of the images on Docker Hub are built by installing and configuring the required software on top of a base image

Custom image mycentos

Write it
What the default Hub CentOS image is like
         Goals for the custom mycentos, so that our own image provides:
         a default path after login
         the vim editor
         ifconfig support for viewing the network configuration
Prepare and write the DockerFile
mycentos DockerFile content:
          FROM centos
          MAINTAINER zzyy<zzyy167@126.com>
          ENV MYPATH /usr/local
          WORKDIR $MYPATH
          RUN yum -y install vim
          RUN yum -y install net-tools
          EXPOSE 80
          CMD echo $MYPATH
          CMD echo "success--------------ok"
          CMD /bin/bash
Build
      docker build -t new-image-name:TAG .            note that the docker build command ends with a .            the . means the current directory
Run
      docker run -it new-image-name:TAG               you can see that our new image supports the vim/ifconfig commands; the extension succeeded.
List the image's change history
      docker history image-name

CMD/ENTRYPOINT image cases

Both specify a command to run when a container starts

CMD

A Dockerfile can contain multiple CMD instructions, but only the last one takes effect; CMD is replaced by the arguments after docker run

Case: demonstration with tomcat: docker run -it -p 8888:8080 tomcat ls -l (here ls -l replaces the image's default CMD, so instead of starting tomcat the container just lists the directory)

ENTRYPOINT 

Arguments after docker run are passed to ENTRYPOINT as parameters, forming a new combined command
Case:
Make a CMD-version container that queries IP information:
FROM centos
RUN yum install -y curl
CMD [ "curl", "-s", "http://ip.cn" ]


Explanation of the curl command:
 
curl can perform downloads, send all kinds of HTTP requests, set HTTP headers, and so on.
If the system has no curl, install it with yum install curl, or download and install it.
curl writes the downloaded file to stdout
 
Usage: curl http://www.baidu.com
After running it, the html of www.baidu.com is printed on the screen
 
That is the simplest usage. The command fetches the page the URL points to; likewise, if the URL points to a file or an image, it can be downloaded directly to the local machine. If what is downloaded is an HTML document, by default only the document body is shown; to display the HTTP header as well, add the -i parameter

Problem:
If we want to display the HTTP header information, we need to add the -i parameter
WHY:
 
 
We see the executable-not-found error: executable file not found.
As said before, what follows the image name is the command, which at run time replaces CMD's default value.
So the -i here replaced the original CMD instead of being appended after curl -s http://ip.cn. And -i is not a command at all, so naturally it is not found.
 
If we want to add the -i parameter, we must retype the complete command:
 
$ docker run myip curl -s http://ip.cn -i

Make an ENTRYPOINT-version container that queries IP information:
FROM centos
RUN yum install -y curl
ENTRYPOINT [ "curl", "-s", "http://ip.cn" ]
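With the ENTRYPOINT version, run-time arguments are appended instead of replacing anything (assuming the image was built and tagged myip):
docker run myip        # runs: curl -s http://ip.cn
docker run myip -i     # runs: curl -s http://ip.cn -i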

Custom Tomcat9 image

mkdir -p /zzyyuse/mydockerfile/tomcat9
touch c.txt in the directory above


Copy the jdk and tomcat installation tarballs into the directory from the previous step
apache-tomcat-9.0.8.tar.gz
jdk-8u171-linux-x64.tar.gz
FROM         centos
MAINTAINER    zzyy<zzyybs@126.com>
# copy c.txt from the host's current build context into the container at /usr/local/
COPY c.txt /usr/local/cincontainer.txt
# add java and tomcat into the container
ADD jdk-8u171-linux-x64.tar.gz /usr/local/
ADD apache-tomcat-9.0.8.tar.gz /usr/local/
# install the vim editor
RUN yum -y install vim
# set the WORKDIR path, the landing point for login access
ENV MYPATH /usr/local
WORKDIR $MYPATH
# configure the java and tomcat environment variables
ENV JAVA_HOME /usr/local/jdk1.8.0_171
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV CATALINA_HOME /usr/local/apache-tomcat-9.0.8
ENV CATALINA_BASE /usr/local/apache-tomcat-9.0.8
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib:$CATALINA_HOME/bin
# port the container listens on at runtime
EXPOSE  8080
# run tomcat on startup
# ENTRYPOINT ["/usr/local/apache-tomcat-9.0.8/bin/startup.sh" ]
# CMD ["/usr/local/apache-tomcat-9.0.8/bin/catalina.sh","run"]
CMD /usr/local/apache-tomcat-9.0.8/bin/startup.sh && tail -F /usr/local/apache-tomcat-9.0.8/logs/catalina.out
Dockerfile

Build
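A sketch of the build step; the tag matches the run command below:
docker build -t zzyytomcat9 .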

Build finished

run

 
 docker run -d -p 9080:8080 --name myt9 -v /zzyyuse/mydockerfile/tomcat9/test:/usr/local/apache-tomcat-9.0.8/webapps/test -v /zzyyuse/mydockerfile/tomcat9/tomcat9logs/:/usr/local/apache-tomcat-9.0.8/logs --privileged=true zzyytomcat9
 
 

Note

If accessing a host directory mounted by Docker fails with cannot open directory .: Permission denied,
the fix is to add the --privileged=true parameter after the mounted directory

Using the container volume described above, publish the test web service "test"

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns="http://java.sun.com/xml/ns/javaee"
  xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
  id="WebApp_ID" version="2.5">
  
  <display-name>test</display-name>
 
</web-app>
web.xml
<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <title>Insert title here</title>
  </head>
  <body>
    -----------welcome------------
    <%="i am in docker tomcat self "%>
    <br>
    <br>
    <% System.out.println("=============docker tomcat self");%>
  </body>
</html>
 
a.jsp

Common Software Installs with Docker

Overall steps

search image --> pull image --> view image --> run image --> stop container --> remove container

Installing tomcat

Find the tomcat image on docker hub
docker search tomcat
docker pull tomcat
docker images to check the pulled tomcat
Create a container from the tomcat image (also called running the image)
docker run -it -p 8080:8080 tomcat
-p host-port:docker-container-port
-P assign ports randomly
i: interactive
t: terminal

Installing mysql

Find the mysql image on docker hub
Pull the mysql image tagged 5.6 from docker hub (via the Alibaba Cloud accelerator) to the local machine
Create a container from the mysql 5.6 image (also called running the image)
 
 docker run -p 12345:3306 --name mysql -v /zzyyuse/mysql/conf:/etc/mysql/conf.d -v /zzyyuse/mysql/logs:/logs -v /zzyyuse/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 -d mysql:5.6

 Explanation of the command:
 -p 12345:3306: map host port 12345 to the docker container's port 3306
 --name mysql: the name of the running service
 -v /zzyyuse/mysql/conf:/etc/mysql/conf.d: mount conf/my.cnf under the host's /zzyyuse/mysql into the container's /etc/mysql/conf.d
 -v /zzyyuse/mysql/logs:/logs: mount the logs directory under the host's /zzyyuse/mysql into the container's /logs
 -v /zzyyuse/mysql/data:/var/lib/mysql: mount the data directory under the host's /zzyyuse/mysql into the container's /var/lib/mysql
 -e MYSQL_ROOT_PASSWORD=123456: initialize the root user's password
 -d mysql:5.6: run mysql 5.6 as a background program
 
 docker exec -it <container ID of the running MySQL> /bin/bash
 
 
Use the mysql image
The external Win10 machine also connects to the mysql service running on docker
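For example, with any mysql client on Win10 (the host IP is a placeholder; host port 12345 maps to the container's 3306):
mysql -h 192.168.x.x -P 12345 -u root -p123456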
A small data-backup test (optional):
 
docker exec <mysql service container ID> sh -c ' exec mysqldump --all-databases -uroot -p"123456" ' > /zzyyuse/all-databases.sql

Installing redis

Pull the redis image tagged 3.2 from docker hub (via the Alibaba Cloud accelerator) to the local machine
Create a container from the redis 3.2 image (also called running the image)
Using the image:
 docker run -p 6379:6379 -v /zzyyuse/myredis/data:/data -v /zzyyuse/myredis/conf/redis.conf:/usr/local/etc/redis/redis.conf  -d redis:3.2 redis-server /usr/local/etc/redis/redis.conf --appendonly yes
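To verify, a quick sketch: open redis-cli inside the running container, then try a simple set/get:
docker exec -it <redis container ID> redis-cli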






 

Create a new redis.conf file under the host directory /zzyyuse/myredis/conf/redis.conf

# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
#
# ./redis-server /path/to/redis.conf
 
# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.
################################## INCLUDES ###################################
 
# Include one or more other config files here.  This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings.  Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf
 
################################## NETWORK #####################################
 
# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 lookback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#bind 127.0.0.1
 
# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
#
# When protected mode is on and if:
#
# 1) The server is not binding explicitly to a set of addresses using the
#    "bind" directive.
# 2) No password is configured.
#
# The server only accepts connections from clients connecting from the
# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
# sockets.
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured, nor a specific set of interfaces
# are explicitly listed using the "bind" directive.
protected-mode yes
 
# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379
 
# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511
 
# Unix socket.
#
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700
 
# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0
 
# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
#    equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
tcp-keepalive 300
 
################################# GENERAL #####################################
 
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
#daemonize no
 
# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
#   supervised no      - no supervision interaction
#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode
#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
#   supervised auto    - detect upstart or systemd method based on
#                        UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
#       They do not enable continuous liveness pings back to your supervisor.
supervised no
 
# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
#
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/var/run/redis.pid".
#
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
pidfile /var/run/redis_6379.pid
 
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
 
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""
 
# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no
 
# Specify the syslog identity.
# syslog-ident redis
 
# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0
 
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16
 
################################ SNAPSHOTTING  ################################
#
# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving completely by commenting out all "save" lines.
#
#   It is also possible to remove all the previously configured save
#   points by adding a save directive with a single empty string argument
#   like in the following example:
#
#   save ""
 
save 120 1
save 300 10
save 60 10000
 
# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes
 
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes
 
# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes
 
# The filename where to dump the DB
dbfilename dump.rdb
 
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./
 
################################# REPLICATION #################################
 
# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
# 1) Redis replication is asynchronous, but you can configure a master to
#    stop accepting writes if it appears to be not connected with at least
#    a given number of slaves.
# 2) Redis slaves are able to perform a partial resynchronization with the
#    master if the replication link is lost for a relatively small amount of
#    time. You may want to configure the replication backlog size (see the next
#    sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
#    network partition slaves automatically try to reconnect to masters
#    and resynchronize with them.
#
# slaveof <masterip> <masterport>
 
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>
 
# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
#    still reply to client requests, possibly with out of date data, or the
#    data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
#    an error "SYNC with master in progress" to all the kind of commands
#    but to INFO and SLAVEOF.
#
slave-serve-stale-data yes
 
# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
slave-read-only yes
 
# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New slaves and reconnecting slaves that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the slaves.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
#                 file on disk. Later the file is transferred by the parent
#                 process to the slaves incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
#              RDB file to slave sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more slaves
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new slaves arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple slaves
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no
 
# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the slaves.
#
# This is important since once the transfer starts, it is not possible to serve
# new slaves arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more slaves arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5
 
# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#
# repl-ping-slave-period 10
 
# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# repl-timeout 60
 
# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no
 
# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
# repl-backlog-size 1mb
 
# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600
 
# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
slave-priority 100
 
# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.
 
# A Redis master is able to list the address and port of the attached
# slaves in different ways. For example the "INFO replication" section
# offers this information, which is used, among other tools, by
# Redis Sentinel in order to discover slave instances.
# Another place where this info is available is in the output of the
# "ROLE" command of a master.
#
# The listed IP and address normally reported by a slave is obtained
# in the following way:
#
#   IP: The address is auto detected by checking the peer address
#   of the socket used by the slave to connect with the master.
#
#   Port: The port is communicated by the slave during the replication
#   handshake, and is normally the port that the slave is using to
#   list for connections.
#
# However when port forwarding or Network Address Translation (NAT) is
# used, the slave may be actually reachable via different IP and port
# pairs. The following two options can be used by a slave in order to
# report to its master a specific set of IP and port, so that both INFO
# and ROLE will report those values.
#
# There is no need to use both the options if you need to override just
# the port or the IP address.
#
# slave-announce-ip 5.5.5.5
# slave-announce-port 1234
 
################################## SECURITY ###################################
 
# Require clients to issue AUTH <PASSWORD> before processing any other
# commands.  This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared
 
# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.
 
################################### LIMITS ####################################
 
# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 10000
 
# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
#
# maxmemory <bytes>
 
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with any of the above policies, Redis will return an error on write
#       operations, when there are no suitable keys for eviction.
#
#       At the date of writing these commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction
 
# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs a bit more CPU. 3 is very fast but not very accurate.
#
# maxmemory-samples 5
 
############################## APPEND ONLY MODE ###############################
 
# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
 
appendonly no
 
# The name of the append only file (default: "appendonly.aof")
 
appendfilename "appendonly.aof"
 
# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".
 
# appendfsync always
appendfsync everysec
# appendfsync no
 
# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
 
no-appendfsync-on-rewrite no
 
# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
 
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
 
# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes
 
################################ LUA SCRIPTING  ###############################
 
# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000
 
################################ REDIS CLUSTER  ###############################
#
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
# cluster-enabled yes
 
# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf
 
# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
# cluster-node-timeout 15000
 
# A slave of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a slave to actually have a exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple slaves able to failover, they exchange messages
#    in order to try to give an advantage to the slave with the best
#    replication offset (more data from the master processed).
#    Slaves will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
#
# 2) Every single slave computes the time of the last interaction with
#    its master. This can be the last ping or command received (if the master
#    is still in the "connected" state), or the time that elapsed since the
#    disconnection with the master (if the replication link is currently down).
#    If the last interaction is too old, the slave will not try to failover
#    at all.
#
# The point "2" can be tuned by user. Specifically a slave will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
#   (node-timeout * slave-validity-factor) + repl-ping-slave-period
#
# So for example if node-timeout is 30 seconds, and the slave-validity-factor
# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the
# slave will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large slave-validity-factor may allow slaves with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a slave at all.
#
# For maximum availability, it is possible to set the slave-validity-factor
# to a value of 0, which means, that slaves will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-slave-validity-factor 10
 
# Cluster slaves are able to migrate to orphaned masters, that are masters
# that are left without working slaves. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working slaves.
#
# Slaves migrate to orphaned masters only if there are still at least a
# given number of other working slaves for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a slave
# will migrate only if there is at least 1 other working slave for its master
# and so forth. It usually reflects the number of slaves you want for every
# master in your cluster.
#
# Default is 1 (slaves migrate only if their masters remain with at least
# one slave). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1
 
# By default Redis Cluster nodes stop accepting queries if they detect that
# at least one hash slot is uncovered (no available node is serving it).
# This way, if the cluster is partially down (for example a range of hash
# slots is no longer covered), the whole cluster eventually becomes
# unavailable. It automatically becomes available again as soon as all the
# slots are covered.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes
 
# In order to set up your cluster, make sure to read the documentation
# available at the http://redis.io web site.
 
################################## SLOW LOG ###################################
 
# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.
 
# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000
 
# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
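#
# As a quick usage sketch (standard redis-cli commands), the slow log can
# be inspected and cleared at runtime:
#
#   SLOWLOG GET 10    query the 10 most recent slow entries
#   SLOWLOG LEN       number of entries currently stored
#   SLOWLOG RESET     clear the log and reclaim its memory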
 
################################ LATENCY MONITOR ##############################
 
# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0
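#
# As a usage sketch (standard redis-cli commands; the 100 ms threshold is
# just an example value):
#
#   CONFIG SET latency-monitor-threshold 100
#   LATENCY LATEST    latest and maximum sample for each logged event
#   LATENCY RESET     discard the collected samples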
 
############################# EVENT NOTIFICATION ##############################
 
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
#  K     Keyspace events, published with __keyspace@<db>__ prefix.
#  E     Keyevent events, published with __keyevent@<db>__ prefix.
#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
#  $     String commands
#  l     List commands
#  s     Set commands
#  h     Hash commands
#  z     Sorted set commands
#  x     Expired events (events generated every time a key expires)
#  e     Evicted events (events generated when a key is evicted for maxmemory)
#  A     Alias for g$lshzxe, so that the "AKE" string means all the events.
#
#  The "notify-keyspace-events" takes as argument a string that is composed
#  of zero or multiple characters. The empty string means that notifications
#  are disabled.
#
#  Example: to enable list and generic events, from the point of view of the
#           event name, use:
#
#  notify-keyspace-events Elg
#
#  Example 2: to get the stream of the expired keys subscribing to channel
#             name __keyevent@0__:expired use:
#
#  notify-keyspace-events Ex
#
#  By default all notifications are disabled because most users don't need
#  this feature and the feature has some overhead. Note that if you don't
#  specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""
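#
# A minimal sketch of consuming expired-key events on database 0 (standard
# redis-cli commands; "foo" is just an example key):
#
#   CONFIG SET notify-keyspace-events Ex
#   SUBSCRIBE __keyevent@0__:expired      (in a second client)
#   SET foo bar PX 100                    "foo" is published on the channel
#                                         about 100 milliseconds later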
 
############################### ADVANCED CONFIG ###############################
 
# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
 
# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# For a fixed maximum size, use -5 through -1, meaning:
# -5: max size: 64 Kb  <-- not recommended for normal workloads
# -4: max size: 32 Kb  <-- not recommended
# -3: max size: 16 Kb  <-- probably not recommended
# -2: max size: 8 Kb   <-- good
# -1: max size: 4 Kb   <-- good
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
list-max-ziplist-size -2
 
# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression.  The head and tail of the list
# are always uncompressed for fast push/pop operations.  Settings are:
# 0: disable all list compression
# 1: depth 1 means "don't start compressing until after 1 node into the list,
#    going from either the head or tail"
#    So: [head]->node->node->...->node->[tail]
#    [head], [tail] will always be uncompressed; inner nodes will compress.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
#    2 here means: don't compress head or head->next or tail->prev or tail,
#    but compress all nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
# etc.
list-compress-depth 0
 
# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512
 
# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
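#
# The encoding in use can be checked with OBJECT ENCODING (standard command;
# key names are illustrative, and the reported names vary by Redis version):
#
#   HSET myhash f v
#   OBJECT ENCODING myhash    -> "ziplist" while below the limits above,
#                                "hashtable" once a limit is exceeded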
 
# HyperLogLog sparse representation bytes limit. The limit includes the
# 16-byte header. When a HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000
 
# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operations you run against a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing never completes and some extra memory remains
# used by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes
 
# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# slave  -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can be read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
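#
# These limits can also be changed at runtime with CONFIG SET; the values
# below are illustrative, not a recommendation:
#
#   CONFIG SET client-output-buffer-limit "pubsub 64mb 16mb 90"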
 
# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10
 
# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes
redis.conf

 

Test connecting with redis-cli:

 docker exec -it <ID of the container running the Redis service> redis-cli

 


 

Test that the persistence files are generated
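
A minimal check, assuming the container was started with a host directory mounted to the container's /data (for example -v /myredis/data:/data; the path and commands below are an illustrative sketch):

 docker exec -it <container ID> redis-cli
 set k1 v1
 SAVE
 exit
 ls /myredis/data     (dump.rdb appears; appendonly.aof too if AOF is enabled)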

Publishing a local image to Alibaba Cloud (Aliyun)

Ways to create an image

The DockerFile approach covered earlier

Create a new image from a container:
docker commit [OPTIONS] <container ID> [REPOSITORY[:TAG]]


OPTIONS:
-a : author of the committed image;
-m : commit message;
View Code
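
For example, committing a running container into a tagged image (the author, message, container ID, and image name below are all illustrative):

 docker commit -a "myname" -m "redis with custom conf" f1b6f0a3a372 myname/myredis:1.1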

Push the local image to Alibaba Cloud

Prepare the local image that will serve as the source
Alibaba Cloud developer platform:
https://dev.aliyun.com/search.html
Create an image repository: namespace
                            repository name
Push the image to the registry
It can then be found on the public cloud
View the details
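
The sequence shown on the details page typically looks like this (region, account, namespace, and repository name are placeholders for your own values):

 docker login --username=<your Aliyun account> registry.cn-shanghai.aliyuncs.com
 docker tag <image ID> registry.cn-shanghai.aliyuncs.com/<namespace>/<repository>:<tag>
 docker push registry.cn-shanghai.aliyuncs.com/<namespace>/<repository>:<tag>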

Pull the image on Alibaba Cloud down to the local machine

Log in

https://cr.console.aliyun.com/cn-shanghai/instances/repositories

 

Create a repository:

 

 

 

The details page shows the relevant help commands
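
Pulling mirrors the push, with the same placeholder conventions:

 docker login --username=<your Aliyun account> registry.cn-shanghai.aliyuncs.com
 docker pull registry.cn-shanghai.aliyuncs.com/<namespace>/<repository>:<tag>
 docker images     (verify the image is now local)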

 

Maven build error

caught when processing request to {}->unix://localhost:80: Permission denied

 

Solution: change the permissions on the Docker socket

chmod 777 /var/run/docker.sock
or
usermod -a -G docker jenkins
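
If the usermod route is used, the new group membership only takes effect in a new session, so restart Jenkins afterwards, e.g. (assuming Jenkins runs as a systemd service):

 systemctl restart jenkins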