Redis: Common Commands and the Configuration File

Common commands

Command keywords are case-insensitive; see the Redis command reference (the Chinese Redis documentation site) for the full manual.

Commonly used Redis data types: string, hash (map), list, set, sorted set

Common commands

String

set key value, get key, del key (the set command covers both creating and updating a key)

list

lpush key value, rpush key value, lrange key start end, lpop key, rpop key

hash (note: "field" below is the field name)

hset key field value, hget key field, hdel key field, hgetall key

set (an unordered collection with no duplicate elements; the key names a set, and one set can hold many elements)

sadd key value, smembers key, srem key value

sorted set (the score determines the ordering, ascending by default)

zadd key score value, zrange key start end [withscores], zrem key value
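For example, a short redis-cli session (the key name rank and the member names are made up) showing how the score drives the ordering:

zadd rank 100 alice, zadd rank 80 bob, zadd rank 95 carol

zrange rank 0 -1 withscores then returns bob 80, carol 95, alice 100 (ascending by score), and zrem rank bob removes bob again.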

Global commands

keys *, type key, del key

select dbId - dbId is the database index (0-15 by default)

dbsize - number of keys in the current database

flushdb - empty the current database

flushall - empty all databases

EXPIRE key seconds - set a time-to-live on the key (in seconds)

PEXPIRE key milliseconds - set a time-to-live on the key (in milliseconds)

TTL key / PTTL key - return the remaining time to live (the former in seconds, the latter in milliseconds); -1 if the key never expires, -2 if the key does not exist or has already expired

PERSIST key - remove the expiration and make the key permanent; returns 1 on success, 0 if the key does not exist or was already permanent

SETEX key seconds value - equivalent to SET followed by EXPIRE, except that SETEX is a single command and executes atomically, so there is no race between setting the value and setting the expiration
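A worked example tying the expiration commands together (the key name token is made up; the returned numbers are approximate):

set token abc123 followed by expire token 60: ttl token now returns about 60 and pttl token about 60000

persist token: ttl token now returns -1, meaning the key no longer expires

setex token 60 abc123: sets the value and the 60-second expiration in a single atomic command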

Configuration file

Below is a lightly translated and annotated Redis 5.0.5 configuration file, reposted from another blog post.

# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
# To use a custom configuration file, pass its path (absolute or relative) as the first argument to "./redis-server", e.g.:
# ./redis-server /path/to/redis.conf

# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
# Notes on memory-size units: 1k means 1000 bytes while 1kb means 1024 bytes, and so on; units are case-insensitive, so 1k and 1K both mean 1000 bytes.
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.

################################## INCLUDES ###################################

# Include one or more other config files here.  This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings.  Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
# 
# You can extract shared settings into a template and pull it into the main config with "include"; the include can sit at the beginning or at the end of the file.
# If it sits at the beginning, directives in the main file override those pulled in by "include": for example, if the included file sets port 6380 while the main file sets port 6379, Redis will listen on 6379 after startup.
# If it sits at the end, then in the same example Redis will listen on 6380 after startup.
# In a cluster this feature can reduce duplicated configuration; with a shared file (e.g. over NFS) a single edit takes effect everywhere, via the shared disk rather than by distributing files.
# include /path/to/local.conf
# include /path/to/other.conf
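# A small sketch of the override rule above (the file name and port numbers are made up):
#   /path/to/common.conf contains:  port 6380
#   redis.conf:
#     include /path/to/common.conf   <- include placed first: the "port 6379" below wins
#     port 6379
#   placing the include after "port 6379" instead would make 6380 win.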

################################## MODULES #####################################

# Load modules at startup. If the server is not able to load modules
# it will abort. It is possible to use multiple loadmodule directives.
# Load modules written by the Redis team or the community; see https://redis.io/modules for the list, for example
# redis-cell (funnel-based rate limiting), RedisBloom (Bloom filter), RediSearch (full-text search), rediSQL (SQL over Redis), and so on.

# loadmodule /path/to/my_module.so
# loadmodule /path/to/other_module.so

################################## NETWORK #####################################

# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
# If no "bind" directive is given, Redis listens on every network interface and any machine can connect; with "bind" you make it listen only on one or more selected interfaces.
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
# As the examples show, several addresses can be listed. Note that "bind" selects local interfaces, not client addresses: binding to a LAN IP means any host that can reach that interface (e.g. the whole subnet) can still connect.
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
# If the machine is directly exposed to the internet, binding to all interfaces is dangerous and must be avoided on production systems.
# By default "bind" therefore points at the local IPv4 loopback address, so only programs running on the same host can reach this Redis server.

# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bind 192.168.10.20
# Even though this is the host's own LAN address, other machines such as 192.168.10.12 can still connect; any host that can reach 192.168.10.20 (the whole 192.168.10.x segment here) has access.

# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
# "protected mode" is a security layer that prevents Redis instances accidentally left open on the internet from being accessed and exploited.
#
# When protected mode is on and if:
#
# 1) The server is not binding explicitly to a set of addresses using the
#    "bind" directive.
# 2) No password is configured.
#
# The server only accepts connections from clients connecting from the
# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
# sockets.
# When protected mode is on ("protected-mode yes") and no addresses are explicitly bound and no password is configured, the server only accepts connections from the loopback addresses (127.0.0.1 and ::1) and from Unix domain sockets, i.e. from the local machine.
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured, nor a specific set of interfaces
# are explicitly listed using the "bind" directive.
# Protected mode is enabled by default; disable it only if you are sure you want clients from other hosts to connect even without authentication or an explicit "bind" list.
protected-mode yes

# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
# The port the Redis server listens on.
port 6379

# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
# In high-throughput environments you need a large backlog to avoid slow-client connection issues; the default here is 511. The effective value is silently capped by the Linux kernel parameter somaxconn, whose default is only 128, so even with 511 configured here only 128 may take effect.
# If nobody on the team owns OS tuning, remember to raise these kernel parameters when provisioning new machines, e.g. set somaxconn to 20480.
tcp-backlog 511
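# A sketch of raising the kernel-side limits on Linux (the 20480 value follows the suggestion above; tune it to your environment):
#   cat /proc/sys/net/core/somaxconn              check the current cap
#   sysctl -w net.core.somaxconn=20480            raise it for the running system
#   sysctl -w net.ipv4.tcp_max_syn_backlog=20480
#   add the same keys to /etc/sysctl.conf to persist them across reboots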

# Unix socket.
#
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
# A Unix socket can speed up communication between processes on the same host considerably, but the Redis server and the application servers usually run on separate machines, so the two settings below can normally be ignored.
# unixsocket specifies the file used as the communication endpoint.
# unixsocketperm sets the access permissions (read-write-execute) on that file; if you do use this feature, set it according to the system users involved.
# unixsocket /tmp/redis.sock
# unixsocketperm 700

# Close the connection after a client is idle for N seconds (0 to disable)
# Close a connection after the client has been idle for this many seconds; the default 0 disables the timeout, i.e. idle connections are never closed.
timeout 0

# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
#    equipment in the middle.
# If tcp-keepalive is non-zero, SO_KEEPALIVE is used to send TCP ACKs to clients every "tcp-keepalive" seconds when there is no other traffic.
# This is useful for two reasons:
# 1. It detects dead peers.
# 2. It keeps the connection alive from the point of view of network equipment in between, so clients do not have to keep re-establishing connections.
# 
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
# The default is 300 seconds; personally I find that a little long, as even in a large cluster sending a probe every 60 seconds does not generate much traffic.
tcp-keepalive 300

################################# GENERAL #####################################

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
# With "daemonize yes" Redis runs as a daemon and writes a pid file to /var/run/redis.pid.
daemonize no

# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
#   supervised no      - no supervision interaction
#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode
#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
#   supervised auto    - detect upstart or systemd method based on
#                        UPSTART_JOB or NOTIFY_SOCKET environment variables
# Lets upstart or systemd supervise the Redis process; which one to use depends on the distribution, e.g. systemd on CentOS 7 and upstart on older Ubuntu releases.
# Note: these supervision methods only signal "process is ready."
#       They do not enable continuous liveness pings back to your supervisor.
supervised no

# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
# If a pid file is configured, Redis writes it at startup and removes it on exit.
#
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/var/run/redis.pid".
# When daemonized, a pid file is used even if "pidfile" is not set, defaulting to /var/run/redis.pid; otherwise the configured path is used.
# When not daemonized and "pidfile" is not set, no pid file is created.
#
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
pidfile /var/run/redis_6379.pid

# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
# Log verbosity, one of debug / verbose / notice / warning. debug produces far too much output and is only worth trying in development or for special troubleshooting.
loglevel notice

# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
# When running in the foreground with "logfile" left empty, logs go to standard output (the console); if "logfile" is set, they go to that file.
# When daemonized with "logfile" left empty, logs are sent to /dev/null and are effectively lost, so configure "logfile" in that case.
logfile ""

# The next three options (syslog-enabled / syslog-ident / syslog-facility) rarely need attention; they route log output through the system logger, and the syslog settings can then be tuned to special needs.
# I have not tested them myself and expect they are seldom used; perhaps teams running large Redis fleets use them to customize the log format, collect the logs programmatically and surface them in a UI.
# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# To use the system logger, set "syslog-enabled" to yes.
# syslog-enabled no

# Specify the syslog identity.
# The syslog identity: essentially an arbitrary string that tags Redis entries in the system log.
# syslog-ident redis

# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# The syslog facility (USER or LOCAL0-LOCAL7); it works together with /etc/rsyslog.conf, which maps the facility to a destination file. With syslog enabled the "logfile" setting may no longer matter: route the chosen LOCALx facility to a file via rsyslog.conf instead.
# syslog-facility local0

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
# Number of databases; the default is 16 and it can be raised (I do not know the practical upper limit).
# After connecting, a client selects a database with SELECT <dbid>, where dbid ranges from 0 to databases-1; e.g. "select 14" picks the 15th database.
databases 16

# By default Redis shows an ASCII art logo only when started to log to the
# standard output and if the standard output is a TTY. Basically this means
# that normally a logo is displayed only in interactive sessions.
#
# However it is possible to force the pre-4.0 behavior and always show a
# ASCII art logo in startup logs by setting the following option to yes.
# Whether to print the Redis ASCII-art logo at startup; there is little reason to change this, and seeing the logo at least tells you the server is starting.
always-show-logo yes

################################ SNAPSHOTTING ################################
# Redis offers three persistence approaches: RDB, AOF, and the RDB+AOF hybrid. Briefly:
# RDB: the dataset is dumped to a binary file on disk; snapshots are relatively far apart, so more data can be lost - using RDB alone is not recommended.
# AOF: the commands that modify the dataset (including protocol data) are appended to a text file; depending on configuration at most about one second of data is lost - a reasonable choice.
# Hybrid: the recommended option, available from Redis 4.0 on; it combines RDB's fast recovery with AOF's small data loss and reduces disk overhead. For more detail see:
# Redis設計與實現 - RDB persistence: https://my.oschina.net/u/3049601/blog/3153571
# Redis設計與實現 - AOF persistence: https://my.oschina.net/u/3049601/blog/3153678
# Redis設計與實現 - hybrid persistence: https://my.oschina.net/u/3049601/blog/3158904
#
# Save the DB on disk:
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving completely by commenting out all "save" lines.
#
#   It is also possible to remove all the previously configured save
#   points by adding a save directive with a single empty string argument
#   like in the following example:
#
#   save ""
#   To disable RDB snapshotting entirely, comment out the three save lines below with "#", or equivalently add the single line save "".

save 900 1
save 300 10
save 60 10000

# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
# If RDB snapshots are enabled and the latest BGSAVE failed, Redis by default stops accepting writes; once background saving succeeds again, writes are automatically accepted again.
# If you have proper monitoring of the Redis service and its persistence, you can disable this behavior so Redis keeps serving writes even when the disk or permissions are misbehaving.
# In a cluster with enough replicas and your own monitoring in place, disabling it is a reasonable choice.
stop-writes-on-bgsave-error yes

# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
# By default the dumped dataset is LZF-compressed before being written to the .rdb file. Compression costs some CPU in the saving child; set "rdbcompression no" to save that CPU at the price of a larger file on disk.
rdbcompression yes

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
# Since version 5 of the RDB format, a CRC64 checksum (a fingerprint of the file contents) is placed at the end of the file. It makes the format more resistant to corruption, but costs roughly 10% in performance when saving and loading RDB files, so it can be disabled for maximum performance.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
# With the checksum disabled, RDB files carry a checksum of zero, which tells the loading code to skip the check.
# Unless that last bit of performance really matters, keeping the default is probably the better choice.
rdbchecksum yes

# The filename where to dump the DB
# The RDB filename; naming it after the instance's ip+port makes it easier for operators to tell instances apart, and such files can even be scanned programmatically and shown in a UI.
dbfilename dump.rdb

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
# The directory where the RDB and AOF files are stored; note that it must be a directory, not a file name.
dir ./

################################# REPLICATION #################################

# Master-Replica replication. Use replicaof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP(as soon as possible) about Redis replication.
# Master-replica replication: use "replicaof" to make one Redis instance an exact copy of another. A few points to understand:
#   +------------------+      +---------------+
#   |      Master      | ---> |    Replica    |
#   | (receive writes) |      |  (exact copy) |
#   +------------------+      +---------------+
#
# 1) Redis replication is asynchronous, but you can configure a master to
#    stop accepting writes if it appears to be not connected with at least
#    a given number of replicas.
#    Replication is asynchronous, but via min-replicas-to-write the master can be told to stop accepting writes when fewer than a given number of replicas are connected.
#
# 2) Redis replicas are able to perform a partial resynchronization with the
#    master if the replication link is lost for a relatively small amount of
#    time. You may want to configure the replication backlog size (see the next
#    sections of this file) with a sensible value depending on your needs.
#    Older Redis versions had no partial resynchronization; since 2.8 it is supported (implemented with the replication backlog buffer), avoiding the inefficient, blocking full resyncs that older versions fell back to after every disconnect.
#    If a replica loses the link for a short time it asks for a partial resync; the backlog has a finite size, and sizing it sensibly reduces how often a full resync is needed.
#
# 3) Replication is automatic and does not need user intervention. After a
#    network partition replicas automatically try to reconnect to masters
#    and resynchronize with them.
#    Replication is automatic and needs no human intervention. After a network partition, replicas automatically reconnect to the master and attempt a partial resync; if the needed data has already dropped out of the replication backlog, a full resync is performed instead.
# 
# If you are new to Redis this explanation may be hard to follow; the book 《Redis設計與實現》 (Redis Design and Implementation) provides the background needed to appreciate it.
# replicaof <masterip> <masterport>
# replicaof <master-ip> <master-port>; make sure the replica can actually reach the master over the network, checking both local and third-party firewalls.
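# A minimal sketch of the replica side (the addresses and password are made up), plus a quick check:
#   replicaof 192.168.10.20 6379
#   masterauth mySecretPassword    (only needed if the master sets requirepass)
# afterwards, "redis-cli info replication" on the replica should show role:slave and master_link_status:up.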

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the replica to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the replica request.
# If the master is password protected (via the "requirepass" directive below), set masterauth in the replica's redis.conf to that password before replication starts; otherwise the master will refuse the replica's request.
#
# masterauth <master-password>

# When a replica loses its connection with the master, or when the replication
# is still in progress, the replica can act in two different ways:
# When a replica loses its connection with the master, or while the initial replication is still in progress, the replica can behave in one of two ways:
#
# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will
#    still reply to client requests, possibly with out of date data, or the
#    data set may just be empty if this is the first synchronization.
#    With "replica-serve-stale-data yes" (the default), the replica keeps answering client requests, but the data may be:
#     1. stale, if the link to the master has been lost; or
#     2. empty, if this is the very first synchronization from the master.
#
# 2) if replica-serve-stale-data is set to 'no' the replica will reply with
#    an error "SYNC with master in progress" to all the kind of commands
#    but to INFO, replicaOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,
#    SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB,
#    COMMAND, POST, HOST: and LATENCY.
#    With "replica-serve-stale-data no", the replica answers every command with the error "SYNC with master in progress",
#    except for INFO, REPLICAOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG and the other commands listed above, which still work.
#
replica-serve-stale-data yes

# You can configure a replica instance to accept writes or not. Writing against
# a replica instance may be useful to store some ephemeral data (because data
# written on a replica will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
# Whether a replica accepts writes is configurable. Making a replica writable is strongly discouraged, because anything written there is easily wiped out on the next resync with the master.
# Since Redis 2.6, "replica-read-only" therefore defaults to "yes".
#
# Since Redis 2.6 by default replicas are read-only.
#
# Note: read only replicas are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only replica exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only replicas using 'rename-command' to shadow all the
# administrative / dangerous commands.
# Read-only replicas should not be exposed to untrusted clients on the internet; the read-only flag only protects against accidental misuse of the instance.
# A read-only replica still accepts all administrative commands such as CONFIG and DEBUG; to harden it, use "rename-command" to shadow the administrative / dangerous commands.
replica-read-only yes

# Replication SYNC strategy: disk or socket.
# Replication SYNC strategy: disk-backed or diskless (over the socket).
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
# Unfortunately diskless (socket-based) replication is still experimental at this point.
#
# New replicas and reconnecting replicas that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the replicas.
# The transmission can happen in two different ways:
# New replicas, and reconnecting replicas whose offset is no longer covered by the master's replication backlog, must perform a "full synchronization": an RDB file is transmitted to them in one of two ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
#                 file on disk. Later the file is transferred by the parent
#                 process to the replicas incrementally.
#                 The master forks a child that writes the RDB file to disk; the parent process then streams that file to the replicas incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
#              RDB file to replica sockets, without touching the disk at all.
#              The master forks a child that writes the RDB stream directly to the replicas' sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more replicas
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new replicas arriving will be queued and a new transfer
# will start when the current one terminates.
# With disk-backed replication, while the RDB file is being generated, additional replicas can queue up and be served from the same file as soon as the child finishes it.
# With diskless replication, once a transfer has started, newly arriving replicas must queue and wait for the current transfer to finish before a new one begins.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple replicas
# will arrive and the transfer can be parallelized.
# With diskless replication the master waits a configurable amount of time before starting the transfer, in the hope that several replicas arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
# With slow disks and a fast, high-bandwidth network, diskless replication works better.
repl-diskless-sync no

# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the replicas.
# With "repl-diskless-sync yes", "repl-diskless-sync-delay" controls how long the master waits so that more replication requests can arrive and be served in parallel.
#
# This is important since once the transfer starts, it is not possible to serve
# new replicas arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more replicas arrive.
# Because once a transfer has started, newly arriving replicas are queued until the next one, waiting a little lets more replicas join the same transfer.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
# The delay is in seconds and defaults to 5; setting it to 0 disables the wait and the transfer starts as soon as possible.
repl-diskless-sync-delay 5

# Replicas send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_replica_period option. The default value is 10
# seconds.
# Replicas send PINGs to the master every "repl-ping-replica-period" seconds, which is how the two sides detect a broken link. The default is 10 seconds.
#
# repl-ping-replica-period 10

# The following option sets the replication timeout for:
# "repl-timeout" sets the replication timeout for the following three cases:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of replica.
# 2) Master timeout from the point of view of replicas (data, pings).
# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-replica-period otherwise a timeout will be detected
# every time there is low traffic between the master and the replica.
# Make sure "repl-timeout" is larger than "repl-ping-replica-period", otherwise a timeout will be detected whenever traffic between master and replica is low. The default is 60 seconds.
#
# repl-timeout 60

# Disable TCP_NODELAY on the replica socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to replicas. But this can add a delay for
# the data to appear on the replica side, up to 40 milliseconds with
# Linux kernels using a default configuration.
# With "repl-disable-tcp-nodelay yes", the master uses fewer TCP packets and less bandwidth to send data to replicas,
# but data may show up on the replica up to about 40 ms later (with default Linux settings; tcp_delack_min may influence this). For background on TCP_NODELAY see: https://blog.csdn.net/bytxl/article/details/17677495
#
# If you select "no" the delay for data to appear on the replica side will
# be reduced but more bandwidth will be used for replication.
# With "repl-disable-tcp-nodelay no", replication latency is lower but more bandwidth is used.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and replicas are many hops away, turning this to "yes" may
# be a good idea.
# The default optimizes for low latency ("no"), but under very heavy traffic, or when master and replicas are many network hops apart, switching it to "yes" may be the better choice.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# replica data when replicas are disconnected for some time, so that when a replica
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the replica missed while
# disconnected.
# The backlog is the replication buffer that accumulates the write commands issued while a replica is disconnected. When the replica reconnects, a full resync is usually unnecessary: as long as its offset is still covered by the backlog, a partial resync simply replays the portion of commands it missed.
#
# The bigger the replication backlog, the longer the time the replica can be
# disconnected and later be able to perform a partial resynchronization.
# The bigger the backlog, the longer a replica can stay disconnected and still be able to perform a partial resynchronization afterwards.
#
# The backlog is only allocated once there is at least a replica connected.
# The backlog is only allocated once at least one replica has connected; it starts out empty.
#
# A rule of thumb for sizing it: 2 x average disconnect time (seconds) x write volume per second, so the right value depends heavily on the workload.
# repl-backlog-size 1mb
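# A worked example of that rule of thumb (the traffic figures are assumptions): with an average
# disconnect of about 60 seconds and roughly 0.5 MB/s of write traffic, 2 * 60 * 0.5 MB = 60 MB,
# so something like "repl-backlog-size 64mb" would be a reasonable starting point.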

# After a master has no longer connected replicas for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last replica disconnected, for
# the backlog buffer to be freed.
# After the master has had no connected replicas for the number of seconds given by repl-backlog-ttl (counted from the moment the last replica disconnected), the backlog buffer is freed.
#
# Note that replicas never free the backlog for timeout, since they may be
# promoted to masters later, and should be able to correctly "partially
# resynchronize" with the replicas: hence they should always accumulate backlog.
# Replicas never free their backlog on a timeout: they may later be promoted to master and must still be able to perform a partial resynchronization, so they always keep accumulating it.
#
# A value of 0 means to never release the backlog.
# A value of 0 means the backlog is never released.
#
# repl-backlog-ttl 3600

# The replica priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a replica to promote into a
# master if the master is no longer working correctly.
# "replica-priority" is an integer published in the INFO output. Under Sentinel, once a leader Sentinel has been elected (by majority vote), it picks one of the failed master's replicas to promote; all else being equal, the replica with the lower priority value is chosen.
#
# A replica with a low priority number is considered better for promotion, so
# for instance if there are three replicas with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
# In short: the lower the number, the higher the priority, and the more likely that replica is to become the new master.
#
# However a special priority of 0 marks the replica as not able to perform the
# role of master, so a replica with priority of 0 will never be selected by
# Redis Sentinel for promotion.
# A priority of 0 marks the replica as never eligible for promotion, so it takes no part in the selection at all, which also trims some of the traffic and time spent during failover.
# (Speaking from theory rather than production experience: replicas on better hardware and better network links deserve a higher priority, i.e. a smaller value.)
#
# By default the priority is 100.
replica-priority 100

# It is possible for a master to stop accepting writes if there are less than
# N replicas connected, having a lag less or equal than M seconds.
# The master can be made to stop accepting writes when fewer than N replicas are connected with a lag less than or equal to M seconds.
# In other words, a replica only counts towards N if it is both connected and lagging no more than M seconds; when the number of such replicas drops below N, writes are refused.
#
# The N replicas need to be in "online" state.
# The N replicas must be in the "online" state. (Separately, under Sentinel and Cluster, nodes can also be in the subjectively-down or objectively-down state.)
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the replica, that is usually sent every second.
# The lag, which must be <= the configured value, is computed as the current time minus the time of the last ping received from the replica (normally sent every second).
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough replicas
# are available, to the specified number of seconds.
# This option does not guarantee that N replicas actually receive each write, but it bounds the window of exposure for lost writes when not enough replicas are available.
#
# For example to require at least 3 replicas with a lag <= 10 seconds use:
#
# min-replicas-to-write 3
# min-replicas-max-lag 10
#
# Setting one or the other to 0 disables the feature.
# Setting either of the two options to 0 disables the feature.
#
# By default min-replicas-to-write is set to 0 (feature disabled) and
# min-replicas-max-lag is set to 10.
# By default min-replicas-to-write is 0 (feature disabled) and min-replicas-max-lag is 10 seconds.

# A Redis master is able to list the address and port of the attached
# replicas in different ways. For example the "INFO replication" section
# offers this information, which is used, among other tools, by
# Redis Sentinel in order to discover replica instances.
# Another place where this info is available is in the output of the
# "ROLE" command of a master.
#
# The listed IP and address normally reported by a replica is obtained
# in the following way:
#
#   IP: The address is auto detected by checking the peer address
#   of the socket used by the replica to connect with the master.
#
#   Port: The port is communicated by the replica during the replication
#   handshake, and is normally the port that the replica is using to
#   listen for connections.
#
# However when port forwarding or Network Address Translation (NAT) is
# used, the replica may be actually reachable via different IP and port
# pairs. The following two options can be used by a replica in order to
# report to its master a specific set of IP and port, so that both INFO
# and ROLE will report those values.
#
# There is no need to use both the options if you need to override just
# the port or the IP address.
# In short: "INFO replication" on the master lists every replica's IP and port. Normally the master learns these from the replica's socket, but with port forwarding (Docker, Kubernetes), NAT or a proxy in between, the replica may not be reachable at that address; the two options below let the replica report a specific IP and port to its master, and INFO and ROLE will then show those values.
#
# replica-announce-ip 5.5.5.5
# replica-announce-port 1234

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands.  This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
# Require clients to authenticate with AUTH <password> before running any other command; useful when you cannot trust every client that can reach the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
# If clients and the Redis server run on the same machine, "requirepass" can stay commented out and no password is needed.
# More generally, inside a LAN with adequate network security, running without "requirepass" is acceptable.
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
# Because an outside attacker can try on the order of 150k passwords per second against Redis, any password you set must be very strong or it will be easy to break.
#
# requirepass foobared

# Command renaming.
# Command renaming can protect administrative commands and commands that may stall Redis.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
# In a shared environment, dangerous commands can be renamed to something hard to guess so that ordinary clients cannot use them while internal tools still can.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
# Renaming a command to an empty string disables it completely.
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to replicas may cause problems.
# Because the master records write commands and propagates them to the replicas (and to the AOF), renaming a command on the master without making the same change on the replicas can cause unexpected problems, so be careful here.
# For example, rename SET to MYSET on the master only: after running "myset foo Messi" on the master, the replica will not have the key foo, because it does not recognize the MYSET command.
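# A sketch of that renaming (the obfuscated name is arbitrary); the same lines must go into the
# master's and every replica's redis.conf so the replication stream stays understandable:
#   rename-command CONFIG c0nf1g-9f86d081884c7d65
#   rename-command FLUSHALL ""
#   rename-command FLUSHDB ""
# ordinary clients then get an "unknown command" error for the originals, while internal tools that
# know the new CONFIG name can still use it.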

################################### CLIENTS ####################################

# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
# Maximum number of simultaneously connected clients, 10000 by default. If the process file-descriptor limit is lower, the effective limit becomes that file limit minus 32, the 32 being descriptors Redis reserves for internal use (persistence, cluster bus connections and the like).
#
# maxclients 10000

############################## MEMORY MANAGEMENT ################################

# Set a memory usage limit to the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
# Cap Redis memory usage at the given number of bytes. When the limit is reached, keys are removed according to the configured eviction policy (maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
# If Redis cannot free keys under the chosen policy, or the policy is "noeviction", write commands such as SET and LPUSH start returning errors (a signal for the operator to add memory promptly), while read-only commands such as GET keep working.
#
# This option is usually useful when using Redis as an LRU or LFU cache, or to
# set a hard memory limit for an instance (using the 'noeviction' policy).
# This option is especially useful when Redis is used as an LRU or LFU cache, or when you simply want a hard memory limit on the instance (with the "noeviction" policy).
#
# WARNING: If you have replicas attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the replicas are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of replicas is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
# If a master with maxmemory set has replicas attached, the memory used by the replica output buffers is subtracted from the used-memory count. Otherwise evictions would fill those buffers with DELs, which would count as more used memory and trigger further evictions, looping until the whole dataset was emptied.
# Therefore:
#
# In short... if you have replicas attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for replica
# output buffers (but this is not needed if the policy is 'noeviction').
# In short: with replicas attached, set maxmemory somewhat below the physical memory so there is free RAM left for the replica output buffers (not needed with the "noeviction" policy). Never give Redis all of the machine's RAM; the OS and other processes need room too, and around 3/4 of physical memory is a common choice.
#
# maxmemory <bytes>

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key among the ones with an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
# Good explanations of these policies are easy to find online and are not repeated here for lack of space.
# One recommendation: https://cloud.tencent.com/developer/article/1530553 covers both the principles and practical usage.
#
# LRU means Least Recently Used (evict what has gone unused for the longest time)
# LFU means Least Frequently Used (evict what is used least often)
#
# Both LRU, LFU and volatile-ttl are implemented using approximated randomized algorithms
# LRU, LFU and volatile-ttl are all implemented with approximated, randomized (sampling-based) algorithms.
#
# Note: with any of the above policies, Redis will return an error on write
#       operations, when there are no suitable keys for eviction.
#       With any of the above policies, if no key is suitable for eviction, Redis returns an error on the write commands listed below.
#
#       At the date of writing these commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
#
# The default is:
# (i.e. the default policy is noeviction)
#
# maxmemory-policy noeviction
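# A sketch of a typical cache-style setup following the roughly-3/4-of-RAM advice above (the 8 GB
# machine and the chosen policy are assumptions, not a recommendation for every workload):
#   maxmemory 6gb
#   maxmemory-policy allkeys-lru
#   maxmemory-samples 10     (optional: trade a little CPU for better LRU accuracy)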

# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
# The LRU, LFU and minimal-TTL algorithms are approximations rather than exact algorithms, trading a little accuracy for speed and memory; a neat compromise.
# The sample size can be tuned for speed versus accuracy: a larger "maxmemory-samples" is more accurate but slower and costs more CPU, a smaller one is faster but less accurate.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs more CPU. 3 is faster but not very accurate.
# The default of 5 gives good enough results; 10 approximates true LRU very closely but costs more CPU; 3 is faster but noticeably less accurate.
#
# maxmemory-samples 5

# Starting from Redis 5, by default a replica will ignore its maxmemory setting
# (unless it is promoted to master after a failover or manually). It means
# that the eviction of keys will be just handled by the master, sending the
# DEL commands to the replica as keys evict in the master side.
# Starting with Redis 5, replicas ignore their own maxmemory setting by default (unless promoted to master after a failover, or manually).
# Normally eviction happens only on the master, which sends DEL commands to the replicas for the keys it evicts.
#
# This behavior ensures that masters and replicas stay consistent, and is usually
# what you want, however if your replica is writable, or you want the replica to have
# a different memory setting, and you are sure all the writes performed to the
# replica are idempotent, then you may change this default (but be sure to understand
# what you are doing).
# Keeping "replica-ignore-maxmemory yes" keeps master and replicas consistent; unless you fully understand the side effects of setting it to no (for example with a writable replica or deliberately different memory settings), leave it alone, or it will come back to bite you.
#
# Note that since the replica by default does not evict, it may end using more
# memory than the one set via maxmemory (there are certain buffers that may
# be larger on the replica, or data structures may sometimes take more memory and so
# forth). So make sure you monitor your replicas and make sure they have enough
# memory to never hit a real out-of-memory condition before the master hits
# the configured maxmemory setting.
# Because a replica does not evict on its own, it may end up using more memory than maxmemory (its buffers can be larger, some data structures may take more space, and so on); monitor your replicas and make sure they never hit a real out-of-memory condition before the master reaches its configured maxmemory.
#
# replica-ignore-maxmemory yes

############################# LAZY FREEING ####################################

# Redis has two primitives to delete keys. One is called DEL and is a blocking
# deletion of the object. It means that the server stops processing new commands
# in order to reclaim all the memory associated with an object in a synchronous
# way. If the key deleted is associated with a small object, the time needed
# in order to execute the DEL command is very small and comparable to most other
# O(1) or O(log_N) commands in Redis. However if the key is associated with an
# aggregated value containing millions of elements, the server can block for
# a long time (even seconds) in order to complete the operation.
# Redis has two primitives for deleting keys: the familiar DEL, and UNLINK.
# DEL is a blocking delete: while the memory is being reclaimed, later commands queue behind it. For a small key this is fast and harmless, but for a key holding a huge object it can take a long time (even seconds), blocking later requests in a busy system and, with an unfortunate architecture, even taking the whole business system down.
# UNLINK is non-blocking: the command itself runs in roughly constant time and a background thread incrementally frees the object and reclaims the memory, so later commands are not blocked. FLUSHALL and FLUSHDB likewise have an ASYNC option.
#
# For the above reasons Redis also offers non blocking deletion primitives
# such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and
# FLUSHDB commands, in order to reclaim memory in background. Those commands
# are executed in constant time. Another thread will incrementally free the
# object in the background as fast as possible.
#
# DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled.
# It's up to the design of the application to understand when it is a good
# idea to use one or the other. However the Redis server sometimes has to
# delete keys or flush the whole database as a side effect of other operations.
# Specifically Redis deletes objects independently of a user call in the
# following scenarios:
# Besides user-issued DEL, UNLINK, FLUSHALL and FLUSHDB, the server itself sometimes has to delete keys or flush a whole database as a side effect of other operations; four such cases are listed below.
#
# 1) On eviction, because of the maxmemory and maxmemory policy configurations,
#    in order to make room for new data, without going over the specified
#    memory limit.
#    Eviction: to keep memory under "maxmemory", the server automatically deletes keys according to the configured eviction policy to make room for new data.
#
# 2) Because of expire: when a key with an associated time to live (see the
#    EXPIRE command) must be deleted from memory.
#    Expiration: when a key's time to live has elapsed it must be removed, either lazily when it is next accessed or by the periodic expiry cycle.
#
# 3) Because of a side effect of a command that stores data on a key that may
#    already exist. For example the RENAME command may delete the old key
#    content when it is replaced with another one. Similarly SUNIONSTORE
#    or SORT with STORE option may delete existing keys. The SET command
#    itself removes any old content of the specified key in order to replace
#    it with the specified string.
#    Commands that store data into a possibly existing key delete the old content first, for example SET, RENAME, SUNIONSTORE, or SORT with the STORE option.
#
# 4) During replication, when a replica performs a full resynchronization with
#    its master, the content of the whole database is removed in order to
#    load the RDB file just transferred.
#    Replication: when a replica performs a full resynchronization with its master, its entire dataset is flushed before the freshly transferred RDB file is loaded.
#
# In all the above cases the default is to delete objects in a blocking way,
# like if DEL was called. However you can configure each case specifically
# in order to instead release memory in a non-blocking way like if UNLINK
# was called, using the following configuration directives:
# In all four cases the default is a blocking delete, as if DEL were called; the directives below switch each case to a non-blocking, UNLINK-style delete.

lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
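# A short redis-cli illustration of the blocking versus non-blocking difference these options extend
# to the server's own deletions (the key names are hypothetical):
#   del bigHash1        blocks until the whole object is reclaimed
#   unlink bigHash2     returns at once; a background thread frees the memory
#   flushdb async       same idea for emptying a whole database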

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
# By default Redis dumps the dataset to an RDB file asynchronously (BGSAVE), which is good enough for many applications; but if the process or the machine dies, say after a power outage, the configured "save" points mean several minutes of writes can be lost, which demanding systems cannot tolerate.
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
# The Append Only File is an alternative persistence mode with much better durability: with the default fsync policy, a power outage loses at most about one second of writes, and if only the Redis process crashes while the OS keeps running, at most a single write is lost.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
# AOF and RDB can be enabled at the same time without problems; when AOF is enabled, Redis loads the AOF at startup because it offers the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
# For more information see http://redis.io/topics/persistence
# To enable AOF, set "appendonly" to yes.
appendonly no

# The name of the append only file (default: "appendonly.aof")
# The AOF filename; the file lives in the same directory as the RDB file, as set by "dir".
appendfilename "appendonly.aof"

# If you are unfamiliar with how operating systems buffer file writes, read up on that first: the appendfsync options below decide when data sitting in the OS write buffer is actually flushed to disk.
# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
# The fsync() system call tells the OS to actually write the data to disk instead of leaving it in the output buffer; some operating systems really flush immediately, others merely try to do so as soon as possible.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# Mode 1, "no": never call fsync and let the OS decide when to flush the buffer to disk. Fastest for Redis.
#
# always: fsync after every write to the append only log. Slow, Safest.
# Mode 2, "always": fsync after every write appended to the log. Safest, but slowest.
#
# everysec: fsync only one time every second. Compromise.
# Mode 3, "everysec": fsync once per second; the compromise.
# 
# As an aside, this kind of trade-off shows up throughout Redis: the sampling-based approximate LRU above, sorted sets combining a hash table with a skip list for fast point and range queries, lazy deletion of expired keys, and so on.
# The same mindset is worth applying when designing our own systems and modules, or even in everyday life.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
# The default is "everysec", the usual compromise between speed and safety. If you can live with the data a crash may lose, "no" gives better performance; if you want to lose as little as possible and accept the slowdown, use "always".
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
# More details: http://antirez.com/post/redis-persistence-demystified.html
# (antirez, the original author of Redis, also built the neural-redis module for neural-network training and the Disque distributed job queue.)
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
# With the fsync policy set to "everysec" or "always", a background save or AOF rewrite (a popular interview topic worth reading up on) generates a lot of disk I/O, and on some Linux configurations the fsync() call can then block for a very long time (there is no real fix yet), which in turn blocks the synchronous write(2) calls.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
# To mitigate this, the "no-appendfsync-on-rewrite" option prevents fsync() from being called in the main process while a BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
# Put simply: with "no-appendfsync-on-rewrite" set to yes, while a child process is saving, AOF durability is effectively the same as "appendfsync no", which with default Linux settings means up to about 30 seconds of data could be lost in the worst case.
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
# If you have latency problems and accept the risk described above, set it to yes; otherwise leave it at no, the safest choice from a durability point of view.

no-appendfsync-on-rewrite no

# AOF rewriting is both a common interview topic and a real tuning point; set the thresholds large enough that rewrites do not fire while the log is still small.
# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
# Redis can automatically trigger BGREWRITEAOF when the AOF has grown by the configured percentage and has passed the "auto-aof-rewrite-min-size" threshold.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
# Redis remembers the AOF size after the latest rewrite (or, if no rewrite has happened since restart, the size of the AOF at startup).
# That base size is compared against the settings below to decide when the next rewrite happens.
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
# A rewrite is triggered when the current size exceeds the base size by more than "auto-aof-rewrite-percentage" percent.
# To avoid rewriting an AOF that is still tiny even though the percentage has been reached, a minimum size must also be set.
# For example, with "auto-aof-rewrite-min-size 64mb", a rewrite only happens once the AOF exceeds 64 MB and its growth relative to the last rewrite exceeds "auto-aof-rewrite-percentage".
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
# Setting "auto-aof-rewrite-percentage" to 0 disables automatic AOF rewriting.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
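# A worked example with the values above (the file sizes are illustrative): if the AOF measured
# 80 MB after the last rewrite, the next automatic rewrite triggers once it grows past
# 80 MB + 100% = 160 MB; an AOF that has only ever reached 50 MB never triggers one, because it is
# still below the 64 MB minimum.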

# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
# If the machine running Redis crashes, especially when an ext4 filesystem is mounted without the data=ordered option, the AOF file may end up truncated at the end (this cannot happen when only Redis itself crashes while the OS keeps working).
# When restarting with "aof-load-truncated yes", the truncated AOF is loaded anyway, possibly missing the last writes made just before the crash.
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
# For a truncated AOF there are two possible behaviors at startup:
# 1. exit with an error as soon as the damage is detected; or
# 2. load as much data as possible into memory and start anyway, which is the default.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# With "aof-load-truncated yes", a truncated AOF is loaded and the server emits a log entry so that operators or monitoring can notice the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
# With "aof-load-truncated no", a truncated AOF makes the server abort with an error, and the file must be repaired with the "redis-check-aof" utility before restarting.
# (One might think of recovering from a replica's AOF instead, but that is usually overthinking it: Sentinel, Codis and Cluster deployments fail over automatically, so the manual approach only makes sense for a standalone or plain master-replica setup, and most deployments today run at least Sentinel, with larger ones on Cluster or on Wandoujia's Codis.)
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
# If the AOF is corrupted in the middle rather than truncated at the end, the server still exits with an error even when this option is set to yes.
# The option only applies when Redis tries to read more data from the AOF than is actually there.
aof-load-truncated yes

# Hybrid persistence, introduced in Redis 4.
# When rewriting the AOF file, Redis is able to use an RDB preamble in the
# AOF file for faster rewrites and recoveries. When this option is turned
# on the rewritten AOF file is composed of two different stanzas:
# With "aof-use-rdb-preamble yes", the rewritten AOF consists of an RDB part followed by an AOF tail; this combines RDB's fast loading and compact storage with AOF's small (roughly sub-second) data loss.
# 
#   [RDB file][AOF tail]
#
# When loading Redis recognizes that the AOF file starts with the "REDIS"
# string and loads the prefixed RDB file, and continues loading the AOF
# tail.
# Such an AOF starts with the "REDIS" magic string: Redis loads the RDB preamble first, then continues loading the AOF tail that follows it.

aof-use-rdb-preamble yes

################################ LUA SCRIPTING ###############################
# I have not studied Lua scripting in depth; in short, this option caps the maximum execution time of a Lua script.
# Lua scripts execute atomically, so they can be used for special-purpose atomic operations, but much like Oracle stored procedures they are awkward to maintain and relatively few people know the language well.
# If you genuinely need script-level atomicity, they are an option; just weigh the operational cost and use them sparingly.
# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
# A value of 0 or a negative value means unlimited execution time (and no warnings).
lua-time-limit 5000

################################ REDIS CLUSTER ###############################
# Before reading on, it helps to look up how Redis hash slots work and what the cluster architecture looks like.
#
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# Redis Cluster is considered stable, but it will only be marked "mature" once enough users run it in production... arguably this warning is outdated, since well-known companies around the world already run Redis Cluster in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
# Only with "cluster-enabled yes" can a Redis instance become part of a cluster; and before the cluster can actually serve traffic, all hash slots must additionally be assigned to cluster nodes.
#
# cluster-enabled yes

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
# Every cluster node has its own cluster configuration file, which must not be edited by hand; it is created and updated by the node itself.
# Instances running on the same host must not use overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf

# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
# A node that stays unreachable for longer than "cluster-node-timeout" milliseconds is considered to be in a failure state.
# Note: most other internal time limits in the cluster are multiples of this value.
#
# cluster-node-timeout 15000

# A replica of a failing master will avoid to start a failover if its data
# looks too old.
# A replica of a failing master will not attempt a failover if its data looks too old.
#
# There is no simple way for a replica to actually have an exact measure of
# its "data age", so the following two checks are performed:
# There is no simple way to measure the "data age" exactly, so the following two checks decide whether a replica takes part in the failover:
#
# 1) If there are multiple replicas able to failover, they exchange messages
#    in order to try to give an advantage to the replica with the best
#    replication offset (more data from the master processed).
#    Replicas will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
#    Replicas exchange their replication offsets (the offset carried with the replication stream and stored by each replica), rank themselves by who has the most recent data, and delay their failover attempt in proportion to that rank, so the best-placed candidate goes first.
#
# 2) Every single replica computes the time of the last interaction with
#    its master. This can be the last ping or command received (if the master
#    is still in the "connected" state), or the time that elapsed since the
#    disconnection with the master (if the replication link is currently down).
#    If the last interaction is too old, the replica will not try to failover
#    at all.
#    Every replica tracks the time of its last interaction with the master, e.g. the last ping or command received, or how long ago the link went down.
#    If that last interaction is too far in the past, the replica will not try to fail over at all.
#
# The point "2" can be tuned by user. Specifically a replica will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
# Point 2 can be tuned: a replica will not attempt the failover if the time elapsed since its last interaction with the master exceeds:
#
#   (node-timeout * replica-validity-factor) + repl-ping-replica-period
#
# So for example if node-timeout is 30 seconds, and the replica-validity-factor
# is 10, and assuming a default repl-ping-replica-period of 10 seconds, the
# replica will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
# For example, with cluster-node-timeout 30 s, replica-validity-factor 10 and repl-ping-replica-period 10 s,
# a replica that has not talked to its master for more than 30*10+10 = 310 seconds is considered too stale to fail over.
#
# A large replica-validity-factor may allow replicas with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a replica at all.
# Too large a replica-validity-factor may let replicas with very old data be promoted; too small a value may leave no eligible replica at all and keep the cluster down, so tune it to your situation.
#
# For maximum availability, it is possible to set the replica-validity-factor
# to a value of 0, which means, that replicas will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
# For maximum availability, set cluster-replica-validity-factor to 0: replicas then always try to fail over regardless of when they last talked to the master (though they still delay in proportion to their offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
# Zero is the only value that guarantees the cluster can always continue once all partitions heal.
#
# cluster-replica-validity-factor 10

# Cluster replicas are able to migrate to orphaned masters, that are masters
# that are left without working replicas. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working replicas.
# Without the option below, a cluster master can end up "orphaned", i.e. left without any working replica; if it then fails, there is no candidate to promote and the failover cannot be completed.
#
# Replicas migrate to orphaned masters only if there are still at least a
# given number of other working replicas for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a replica
# will migrate only if there is at least 1 other working replica for its master
# and so forth. It usually reflects the number of replicas you want for every
# master in your cluster.
# To avoid that, replicas can migrate to orphaned masters, but only if their current master would still keep at least "cluster-migration-barrier" working replicas after the move. With the default of 1, a replica migrates only if its master retains at least one other working replica; the value usually reflects how many replicas you want per master.
#
# Default is 1 (replicas migrate only if their masters remain with at least
# one replica). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
# The default is 1. To disable migration altogether, set "cluster-migration-barrier" to a very large value.
# A value of 0 is possible, but it is only useful for debugging and is dangerous in production... no zuo no die.
#
# cluster-migration-barrier 1

# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
# By default, if even a single hash slot is uncovered (no available node is serving it), Redis Cluster stops accepting queries.
# In this mode, once the cluster is partially down (for example a range of hash slots is no longer covered), the whole cluster eventually becomes unavailable; it automatically becomes available again as soon as all slots are covered.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
# If you want the working subset of the cluster to keep accepting queries for the part of the key space that is still covered even while some slots are uncovered, set "cluster-require-full-coverage" to no.
#
# cluster-require-full-coverage yes

# This option, when set to yes, prevents replicas from trying to failover its
# master during master failures. However the master can still perform a
# manual failover, if forced to do so.
# 若是將"cluster-replica-no-failover"設置爲yes,那麼該集羣從節點不會參與自動故障轉移過程,可是能夠手動強制執行故障轉移
#
# This is useful in different scenarios, especially in the case of multiple
# data center operations, where we want one side to never be promoted if not
# in the case of a total DC failure.
# This is useful in several scenarios, especially multi data center setups where the replicas in one data center should never be promoted unless the other data center fails completely.
#
# cluster-replica-no-failover no
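#
# For reference, the manual failover mentioned above is triggered from one of the master's replicas
# rather than from the master itself; a minimal sketch (host and port below are made up):
#
#   redis-cli -h 10.1.1.6 -p 6379 cluster failover force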

# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.

########################## CLUSTER DOCKER/NAT support  ########################

# In certain deployments, Redis Cluster nodes address discovery fails, because
# addresses are NAT-ted or because ports are forwarded (the typical case is
# Docker and other containers).
#
# In order to make Redis Cluster working in such environments, a static
# configuration where each node knows its public address is needed. The
# following two options are used for this scope, and are:
#
# * cluster-announce-ip
# * cluster-announce-port
# * cluster-announce-bus-port
#
# Each instruct the node about its address, client port, and cluster message
# bus port. The information is then published in the header of the bus packets
# so that other nodes will be able to correctly map the address of the node
# publishing the information.
#
# If the above options are not used, the normal Redis Cluster auto-detection
# will be used instead.
#
# Note that when remapped, the bus port may not be at the fixed offset of
# clients port + 10000, so you can specify any port and bus-port depending
# on how they get remapped. If the bus-port is not set, a fixed offset of
# 10000 will be used as usually.
#
# Example:
#
# cluster-announce-ip 10.1.1.5
# cluster-announce-port 6379
# cluster-announce-bus-port 6380
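#
# As a rough sketch for the Docker case (the host address, published ports, mount path and image tag
# below are all made up; adapt them to your environment), a node whose client port 6379 is published
# as 7000 and whose bus port 16379 is published as 17000 would announce the host-side endpoints:
#
#   docker run -d --name redis-node1 -p 7000:6379 -p 17000:16379 \
#     -v /path/to/redis.conf:/usr/local/etc/redis/redis.conf \
#     redis:5.0.5 redis-server /usr/local/etc/redis/redis.conf
#
# and inside that redis.conf:
#
#   cluster-announce-ip 10.1.1.5        (the host's address, not the container's)
#   cluster-announce-port 7000
#   cluster-announce-bus-port 17000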

################################## SLOW LOG 慢日誌 ###################################

# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
# The slow log records queries whose execution time exceeded a given threshold. The measured time is only the time spent actually executing the command (the only stage where the thread is blocked and cannot serve other requests); it does not include client I/O such as reading the request or sending the reply.
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.
# 可使用"slowlog-log-slower-than"指定耗時的閾值(單位是微妙),一旦執行超過這個時間就會記錄日誌到緩衝區
# 可使用"slowlog-max-len 128"指定隊列長度,若是超過隊列,最老的元素會被覆蓋
#
# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
# The unit is microseconds, so 1000000 equals one second. A negative value disables the slow log entirely, while 0 forces every command to be logged.
# 
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# There is no hard upper limit on the length; just keep in mind that it consumes memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
# The memory used by the slow log can be reclaimed with SLOWLOG RESET (see the example below).
slowlog-max-len 128
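
# A minimal runtime sketch of the two options above (thresholds are illustrative):
#
#   redis-cli config set slowlog-log-slower-than 10000   # log commands slower than 10 ms
#   redis-cli config set slowlog-max-len 128
#   redis-cli slowlog get 10                              # show the 10 most recent slow entries
#   redis-cli slowlog len                                 # number of entries currently stored
#   redis-cli slowlog reset                               # drop all entries and reclaim their memory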

################################ LATENCY MONITOR ##############################

# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
# The latency monitoring subsystem samples different operations at runtime to collect data about the possible latency sources of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
# That information can be retrieved through the LATENCY command family, e.g. LATENCY DOCTOR or LATENCY GRAPH, to print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
# The subsystem only logs operations that took at least "latency-monitor-threshold" milliseconds; setting the value to 0 turns latency monitoring off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
# Latency monitoring is disabled by default because it is mostly unneeded and collecting the data has a small performance cost; enable it only when you actually hit latency issues.
# While Redis is running it can be enabled easily with "CONFIG SET latency-monitor-threshold <milliseconds>", as sketched below.
latency-monitor-threshold 0
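
# A small sketch of enabling and querying the monitor at runtime (the threshold value is illustrative):
#
#   redis-cli config set latency-monitor-threshold 100   # record events that took >= 100 ms
#   redis-cli latency latest                              # latest/max latency per sampled event
#   redis-cli latency history command                     # raw samples for the "command" event
#   redis-cli latency doctor                              # human-readable advice
#   redis-cli config set latency-monitor-threshold 0     # turn the monitor off again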

############################# EVENT NOTIFICATION ##############################
# The flags below look complicated at first sight, but they are really just single characters whose meanings are combined.
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
# Redis can publish events happening in the key space (think of it as the key/value pairs in the database's hash table) to Pub/Sub clients.
# See the official documentation for details: http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
# For instance, with keyspace and keyevent notifications enabled, a client running DEL foo against database 0 causes two messages to be published:
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
# The events Redis notifies about are selected by combining the classes below, each identified by a single character:
#  "K" and "E" are the two top-level classes; at least one of them must be present (both can be combined).
#  K means keyspace notifications: the channel is named after the key and the message is the event name.
#  E means keyevent notifications: the channel is named after the event and the message is the key name.
#  If this is still unclear, a more detailed walkthrough is available at http://redisdoc.com/topic/notification.html#id1
#  K     Keyspace events, published with __keyspace@<db>__ prefix.
#  E     Keyevent events, published with __keyevent@<db>__ prefix.
#
#  g covers generic (non type-specific) commands such as DEL, EXPIRE, RENAME; roughly, any generic command that changes a key.
#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
#
#  $ l s h z below stand for the five well-known data types:
#  $     String commands
#  l     List commands
#  s     Set commands
#  h     Hash commands
#  z     Sorted set commands
#
#  x stands for expiration events, e for eviction events (a key evicted because maxmemory was reached):
#  x     Expired events (events generated every time a key expires)
#  e     Evicted events (events generated when a key is evicted for maxmemory)
#
#  A is simply an alias for the combination "g$lshzxe", which keeps the directive readable:
#  A     Alias for g$lshzxe, so that the "AKE" string means all the events.
#
#  The "notify-keyspace-events" takes as argument a string that is composed
#  of zero or multiple characters. The empty string means that notifications
#  are disabled.
#  "notify-keyspace-events" takes a string of zero or more of these characters; an empty string disables the feature.
#
#  Example: to enable list and generic events, from the point of view of the
#           event name, use:
#
#  notify-keyspace-events Elg
#
#  Example 2: to get the stream of the expired keys subscribing to channel
#             name __keyevent@0__:expired use:
#
#  notify-keyspace-events Ex
#
#  By default all notifications are disabled because most users don't need
#  this feature and the feature has some overhead. Note that if you don't
#  specify at least one of K or E, no events will be delivered.
#  All notifications are disabled by default because the feature has some overhead and most users do not need it. Note that unless at least one of K or E is included, no events are delivered. A usage sketch follows the directive below.
notify-keyspace-events ""
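
# A quick sketch of Example 2 above, assuming database 0: one terminal subscribes to the expired-key
# channel while another sets a key with a short TTL.
#
#   redis-cli config set notify-keyspace-events Ex
#   redis-cli subscribe __keyevent@0__:expired     # terminal 1: blocks and prints expiration events
#   redis-cli set foo bar ex 5                     # terminal 2: "foo" is reported roughly 5 s later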

############################### ADVANCED CONFIG 高級配置 ###############################
# The options below require a good understanding of Redis internals, in particular the underlying data structures of the five data types; in short, they control under which conditions each type switches to a compact encoding.
# The most common compact encodings are ziplist and intset, which are only efficient while a collection has few elements and the elements themselves are small.
# Good references on this topic are the books "Redis设计与实现" (deep on internals, but based on a rather old Redis version) and "Redis深度历险" (complements the former, covers Redis 5 and adds a lot of practical material).
#
# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
#
# Hash type: as long as a hash has no more than 512 entries and no entry exceeds 64 bytes, a ziplist is used as its underlying encoding (see the sketch below).
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
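
# A small sketch to observe the hash encoding switch (key names are arbitrary); lowering the entry
# limit to 0 simply forces any new non-empty hash past the threshold:
#
#   redis-cli hset h1 field value
#   redis-cli object encoding h1                       # -> "ziplist" (few, small entries)
#   redis-cli config set hash-max-ziplist-entries 0
#   redis-cli hset h2 field value
#   redis-cli object encoding h2                       # -> "hashtable"
#   redis-cli config set hash-max-ziplist-entries 512  # restore the default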

# Newer Redis versions optimized the list type with a quicklist, essentially a "linked list of ziplists", conceptually similar to how Java's HashMap combines an array with linked lists / red-black trees.
# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# 能夠經過"list-max-ziplist-size"設置鏈表中ziplist的條目數量,其值能夠是條目數量,也能夠最大字節數
# For a fixed maximum size, use -5 through -1, meaning:
# The five possible negative values are listed below; -1 and -2 are the recommended choices unless you have special requirements:
# -5: max size: 64 Kb  <-- not recommended for normal workloads
# -4: max size: 32 Kb  <-- not recommended
# -3: max size: 16 Kb  <-- probably not recommended
# -2: max size: 8 Kb   <-- good
# -1: max size: 4 Kb   <-- good
#
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# A positive value instead means that each list node stores up to exactly that number of entries.
#
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
# -1 and -2 usually give the best performance.
list-max-ziplist-size -2

# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression.  The head and tail of the list
# are always uncompressed for fast push/pop operations.  Settings are:
# 0: disable all list compression
#    i.e. no node is compressed
# 1: depth 1 means "don't start compressing until after 1 node into the list,
#    going from either the head or tail"
#    So: [head]->node->node->...->node->[tail]
#    [head], [tail] will always be uncompressed; inner nodes will compress.
#    i.e. every node except the head and the tail is compressed
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
#    2 here means: don't compress head or head->next or tail->prev or tail,
#    but compress all nodes between them.
#    i.e. everything except the first two and the last two nodes is compressed
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
#    and so on
# etc.
# The default depth is 0, i.e. no compression. The head and tail are never compressed in any case: when the list is used as a queue, compressed nodes would have to be decompressed on every push/pop, hurting performance.
list-compress-depth 0

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# When a set contains only strings that are base-10 integers within the signed 64-bit range, and it has no more than 512 entries, an intset is used as its underlying encoding.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Similar to hashes: if a sorted set has no more than 128 entries and every entry is shorter than 64 bytes, a ziplist is used as its underlying encoding.
# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
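
# The same OBJECT ENCODING check works for sets and sorted sets (keys and members are arbitrary):
#
#   redis-cli sadd s1 1 2 3
#   redis-cli object encoding s1    # -> "intset": all members are small integers
#   redis-cli sadd s1 hello
#   redis-cli object encoding s1    # -> "hashtable": a non-integer member forces the generic encoding
#   redis-cli zadd z1 1 a 2 b
#   redis-cli object encoding z1    # -> "ziplist" while the two limits above are respected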

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
# HyperLogLog is an advanced Redis feature, typically used to count things like a site's UV: it deduplicates and its estimate is close to the real value.
# In short: while the sparse representation of an HLL key stays within "hll-sparse-max-bytes" (the 16-byte header included), Redis keeps the memory-saving sparse encoding; once it crosses that limit the key is converted to the dense representation, which takes about 12 KB per key.
# The default is 3000; setting it above 16000 is completely useless because at that size the dense representation is already more memory efficient.
hll-sparse-max-bytes 3000
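
# A rough way to watch the sparse-to-dense switch (key name and members are arbitrary): an HLL is
# stored as a string, so STRLEN shows its current size; the dense form is roughly 12 KB.
#
#   redis-cli pfadd uv user1 user2 user3
#   redis-cli pfcount uv          # approximate number of distinct elements
#   redis-cli strlen uv           # a few dozen bytes: still the sparse representation
#   (after adding many more distinct elements the size jumps to about 12 KB: dense representation)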

# Streams macro node max size / items. The stream data structure is a radix
# tree of big nodes that encode multiple items inside. Using this configuration
# it is possible to configure how big a single node can be in bytes, and the
# maximum number of items it may contain before switching to a new node when
# appending new stream entries. If any of the following settings are set to
# zero, the limit is ignored, so for instance it is possible to set just a
# max entires limit by setting max-bytes to 0 and max-entries to the desired
# value.
# These set the maximum size in bytes and the maximum number of entries of a single Stream macro node; once either limit is reached, new entries are appended to a new node.
# Setting either option to 0 disables that particular limit.
stream-node-max-bytes 4096
stream-node-max-entries 100
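
# For context, a minimal Stream usage sketch (stream name and fields are arbitrary); the two options
# above only affect how entries are packed into macro nodes internally, not the commands themselves:
#
#   redis-cli xadd mystream '*' sensor 1 temp 20.5
#   redis-cli xlen mystream
#   redis-cli xrange mystream - +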

# The Redis database stores its key/value pairs in a dict that internally holds an array of two hash tables (call it "ht"); rehashing moves all the data from one table (ht[0]) to the other (ht[1]). Unlike Java's HashMap, the rehash is lazy and incremental ("progressive rehashing"): since Redis must keep serving reads and writes quickly, it performs rehash steps when keys are accessed or when the CPU is idle instead of doing everything at once.
# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
# By default this 1 ms step runs 10 times per second to actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
# If you have hard latency requirements and an occasional reply delayed by 2 milliseconds is not acceptable in your environment, set "activerehashing" to no.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
# If you have no such strict requirement, keep "activerehashing" set to yes so that memory is freed as soon as possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
# The client output buffer limits force-disconnect clients that are not reading data from the server fast enough.
# This typically happens with Pub/Sub clients that consume messages slower than the publisher produces them.
#
# The limit can be set differently for the three different classes of clients:
# The limit can be configured separately for three client classes: normal clients, replication clients and Pub/Sub clients.
#
# normal -> normal clients including MONITOR clients
# replica  -> replica clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
# The syntax for each of the three classes is the following:
# 
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# A client is disconnected immediately once its output buffer reaches the "hard limit",
# or once it stays above the "soft limit" continuously for "soft seconds".
#
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
# (The paragraph above is just a worked example of those two rules.)
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
# By default normal clients are not limited: they only receive data after asking for it (request/response), so only asynchronous clients (such as replication and Pub/Sub clients) can pile up data when they read it slower than it is produced.
#
# Instead there is a default limit for pubsub and replica clients, since
# subscribers and replicas receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
# Either the hard or the soft limit can be disabled by setting it to 0; a runtime example follows the directives below.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
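
# The limits can also be changed at runtime; for example (numbers are illustrative), to be more
# tolerant with slow Pub/Sub subscribers:
#
#   redis-cli config set client-output-buffer-limit "pubsub 64mb 16mb 90"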

# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default in order to avoid that a protocol desynchronization (for
# instance due to a bug in the client) will lead to unbound memory usage in
# the query buffer. However you can configure it here if you have very special
# needs, such us huge multi/exec requests or alike.
# Client query buffers accumulate new commands. They are limited to a fixed size by default so that a protocol desynchronization (for instance caused by a client bug) cannot make the query buffer grow without bound.
# If you have very special needs, such as huge MULTI/EXEC requests, the limit can be raised here.
# 
# client-query-buffer-limit 1gb

# In the Redis protocol, bulk requests, that are, elements representing single
# strings, are normally limited ot 512 mb. However you can change this limit
# here.
# Bulk requests, i.e. single strings in the protocol (such as one value sent by a client), are normally limited to 512 MB; "proto-max-bulk-len" changes this limit, although you will rarely ever need to.
# proto-max-bulk-len 512mb

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
# In short: Redis runs internal background tasks (closing timed-out client connections, purging expired keys, ...), and "hz" controls how often it checks whether such tasks need to run; a higher value uses more CPU, a lower value less.
# The range is 1 to 500, but values above 100 are usually a bad idea; most users should keep the default of 10 and only raise it towards 100 when very low latency is required.
hz 10

# Normally it is useful to have an HZ value which is proportional to the
# number of clients connected. This is useful in order, for instance, to
# avoid too many clients are processed for each background task invocation
# in order to avoid latency spikes.
#
# Since the default HZ value by default is conservatively set to 10, Redis
# offers, and enables by default, the ability to use an adaptive HZ value
# which will temporary raise when there are many connected clients.
#
# When dynamic HZ is enabled, the actual configured HZ will be used as
# as a baseline, but multiples of the configured HZ value will be actually
# used as needed once more clients are connected. In this way an idle
# instance will use very little CPU time while a busy instance will be
# more responsive.
# (The English description above is rather long-winded; the short version:)
# The "hz" option above is a fixed value, and with a very large number of connected clients a fixed hz of 10 may lead to higher latency. "dynamic-hz" addresses this: when set to yes, the configured "hz" is used as a baseline and multiples of it are applied automatically as more clients connect, then lowered again when the instance is idle, so a busy instance stays responsive while an idle one uses very little CPU.
# The default is yes; there is normally no need to touch it, nor to tune "hz" manually.
dynamic-hz yes

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
# When a child process rewrites the AOF file, enabling "aof-rewrite-incremental-fsync" makes it call fsync every 32 MB of generated data, so the file is committed to disk incrementally; fewer huge flushes means fewer I/O bursts and lower latency spikes.
aof-rewrite-incremental-fsync yes

# When redis saves RDB file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
# 和"aof-rewrite-incremental-fsync"一個意思,只不過是用在生成RDB文件時用。
# 若是持久化採用的混合方式,即AOF文件是由"RDB部分+AOF部分"組成的話,我想"aof-rewrite-incremental-fsync"和"rdb-save-incremental-fsync"都會使用到
rdb-save-incremental-fsync yes

# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good
# idea to start with the default settings and only change them after investigating
# how to improve the performances and how the keys LFU change over time, which
# is possible to inspect via the OBJECT FREQ command.
#
# There are two tunable parameters in the Redis LFU implementation: the
# counter logarithm factor and the counter decay time. It is important to
# understand what the two parameters mean before changing them.
#
# Redis' LFU implementation has two tunable parameters: the counter logarithm factor and the counter decay time.
# Make sure you understand both before changing them; if you really want to tune them, first investigate with the "OBJECT FREQ" command how the keys' LFU counters change over time.
# The LFU counter is just 8 bits per key, it's maximum value is 255, so Redis
# uses a probabilistic increment with logarithmic behavior. Given the value
# of the old counter, when a key is accessed, the counter is incremented in
# this way:
# (How these two parameters work was already covered in the maxmemory section, so it is not repeated here.)
#
# 1. A random number R between 0 and 1 is extracted.
# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
# 3. The counter is incremented only if R < P.
#
# The default lfu-log-factor is 10. This is a table of how the frequency
# counter changes with a different number of accesses with different
# logarithmic factors:
# "lfu-log-factor"的默認值=10,下表是不一樣對數因子下計數器的改變頻率:
#
# +--------+------------+------------+------------+------------+------------+
# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
# +--------+------------+------------+------------+------------+------------+
# | 0      | 104        | 255        | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 1      | 18         | 49         | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 10     | 10         | 18         | 142        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 100    | 8          | 11         | 49         | 143        | 255        |
# +--------+------------+------------+------------+------------+------------+
#
# NOTE: The above table was obtained by running the following commands:
# The table above can be reproduced with the following commands:
#
#   redis-benchmark -n 1000000 incr foo
#   redis-cli object freq foo
#
# NOTE 2: The counter initial value is 5 in order to give new objects a chance
# to accumulate hits.
# The counter starts at 5 so that new objects get a chance to accumulate hits before being considered for eviction.
#
# The counter decay time is the time, in minutes, that must elapse in order
# for the key counter to be divided by two (or decremented if it has a value
# less <= 10).
# The decay time is the number of minutes that must elapse for a key's counter to be halved (or decremented by one when its value is <= 10).
#
# The default value for the lfu-decay-time is 1. A Special value of 0 means to
# decay the counter every time it happens to be scanned.
# "lfu-decay-time" 的默認值爲 1,0 表示每次都對計數器進行衰減
#
# lfu-log-factor 10
# lfu-decay-time 1
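#
# OBJECT FREQ only works while an LFU maxmemory policy is active; a minimal sketch for inspecting the
# counters (policy choice and key name are illustrative):
#
#   redis-cli config set maxmemory-policy allkeys-lfu
#   redis-cli set foo bar
#   redis-cli get foo
#   redis-cli object freq foo     # returns the logarithmic access-frequency counter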

########################### ACTIVE DEFRAGMENTATION 在線碎片整理 #######################
#
# WARNING THIS FEATURE IS EXPERIMENTAL. However it was stress tested
# even in production and manually tested by multiple engineers for some
# time.
# This is still an experimental feature, but, much like Redis Cluster in its early days, plenty of people already run it; it has been stress tested in production and manually tested by multiple engineers for some time.
# What is active defragmentation?
# -------------------------------
#
# Active (online) defragmentation allows a Redis server to compact the
# spaces left between small allocations and deallocations of data in memory,
# thus allowing to reclaim back memory.
#
# Fragmentation is a natural process that happens with every allocator (but
# less so with Jemalloc, fortunately) and certain workloads. Normally a server
# restart is needed in order to lower the fragmentation, or at least to flush
# away all the data and create it again. However thanks to this feature
# implemented by Oran Agra for Redis 4.0 this process can happen at runtime
# in an "hot" way, while the server is running.
# Active (online) defragmentation lets the Redis server compact the gaps left in memory by small allocations and deallocations and reclaim that memory, much like a disk defragmenter on Windows.
# Fragmentation is a natural by-product of every allocator (fortunately far less so with Jemalloc) and of certain workloads.
# Normally, lowering fragmentation requires restarting the server, or at least flushing all the data and recreating it; thanks to the feature Oran Agra implemented for Redis 4.0, this can now happen "hot", while the server keeps running.
#
# Basically when the fragmentation is over a certain level (see the
# configuration options below) Redis will start to create new copies of the
# values in contiguous memory regions by exploiting certain specific Jemalloc
# features (in order to understand if an allocation is causing fragmentation
# and to allocate it in a better place), and at the same time, will release the
# old copies of the data. This process, repeated incrementally for all the keys
# will cause the fragmentation to drop back to normal values.
# 一般來講當碎片化達到必定程度(查看下面的配置)Redis 會使用Jemalloc建立連續的內存空間,並在此內存空間對現有的值進行拷貝,拷貝完成後會釋放掉舊的數據。
# 這個過程會對全部的致使碎片化的key以增量的形式進行,Redis到處使用漸進式的,真實辛苦設計者了
#
# Important things to understand:
# Three important things to understand:
#
# 1. This feature is disabled by default, and only works if you compiled Redis
#    to use the copy of Jemalloc we ship with the source code of Redis.
#    This is the default with Linux builds.
#    The feature is disabled by default and only works if Redis was compiled against the copy of Jemalloc bundled with its source code (which is the default on Linux builds).
# 2. You never need to enable this feature if you don't have fragmentation
#    issues.
#    If you have no fragmentation problems you never need to enable it.
#
# 3. Once you experience fragmentation, you can enable this feature when
#    needed with the command "CONFIG SET activedefrag yes".
#    Once you do experience fragmentation, it can be enabled when needed with "CONFIG SET activedefrag yes".
#
# The configuration parameters are able to fine tune the behavior of the
# defragmentation process. If you are not sure about what they mean it is
# a good idea to leave the defaults untouched.
# The parameters below fine-tune the behaviour of the defragmentation process; if you are not sure what they mean, it is best to leave the defaults untouched.

# Enabled active defragmentation
# (enable online defragmentation)
# activedefrag yes

# Minimum amount of fragmentation waste to start active defrag
# (minimum amount of wasted bytes before defragmentation starts)
# active-defrag-ignore-bytes 100mb

# Minimum percentage of fragmentation to start active defrag
# (minimum fragmentation percentage before defragmentation starts)
# active-defrag-threshold-lower 10

# Maximum percentage of fragmentation at which we use maximum effort
# (fragmentation percentage at which Redis uses its maximum defragmentation effort)
# active-defrag-threshold-upper 100

# Minimal effort for defrag in CPU percentage
# (minimum percentage of CPU time spent on defragmentation)
# active-defrag-cycle-min 5

# Maximal effort for defrag in CPU percentage
# (maximum percentage of CPU time spent on defragmentation)
# active-defrag-cycle-max 75

# Maximum number of set/hash/zset/list fields that will be processed from
# the main dictionary scan
# (maximum number of set/hash/zset/list fields processed in one pass of the main dictionary scan; collections larger than this are put on a list and handled incrementally later, because defragmenting a huge key in one go would take too long)
# active-defrag-max-scan-fields 1000
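
# If you suspect fragmentation, a typical workflow (assuming a build with the bundled Jemalloc) is to
# check the ratio first and only then enable the feature at runtime:
#
#   redis-cli info memory | grep mem_fragmentation_ratio   # values well above 1 suggest fragmentation
#   redis-cli config set activedefrag yes
#   redis-cli info memory | grep active_defrag_running     # 1 while a defrag cycle is in progress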