4.0.14 Commonly used configuration
bind 127.0.0.1 # bound to localhost by default; if no bind is given, any address can connect
protected-mode yes # protected mode: if no bind address and no password are configured, only local connections are accepted
port 6379 # port number, default 6379
timeout 0 # close a client connection after it has been idle for this many seconds; 0 means the server never closes idle connections; must not be negative
daemonize no # no = run in the foreground, yes = run in the background
pidfile /var/run/redis_6379.pid # path of the pid file
loglevel notice # log level: debug (lots of information, useful for development/testing), verbose (many useful messages, but fewer than debug), notice (moderately verbose, suitable for production), warning (only very important messages)
logfile "" # log file; an empty string sends the log to standard output. For a daemonized Redis, standard output is /dev/null.
databases 16 # number of databases, default 16; DB 0 is used by default and a different one can be selected with the SELECT command
save 900 1 # RDB persistence trigger: save if at least 1 key changed within 900 seconds; the lines below follow the same pattern
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes # whether Redis keeps accepting writes after an RDB save error: yes = stop accepting writes, no = keep working; rdb_last_bgsave_status in INFO shows whether the last RDB save failed
rdbcompression yes # compress the rdb file with the LZF algorithm; yes = compress (costs some CPU), no = don't compress (uses more disk space)
rdbchecksum yes # whether to checksum the rdb file. Since version 5 of the RDB format a CRC64 checksum is appended to the end of the file. This improves corruption resistance, but costs roughly 10% of performance when saving the rdb file, so it can be disabled if maximum performance is required.
dbfilename dump.rdb # rdb file name
dir ./ # working directory; database writes go here, and the rdb and aof files are written here as well
slave-serve-stale-data yes # when a slave loses its connection to the master, or replication is still in progress, the slave can behave in two ways: 1) if slave-serve-stale-data is yes (the default), the slave keeps answering client requests;
# 2) if slave-serve-stale-data is no, every command except INFO and SLAVEOF returns the error "SYNC with master in progress".
slave-read-only yes # the slave is read-only
repl-diskless-sync no # whether to replicate data over a socket. Redis currently offers two replication strategies: disk and socket. If a new slave connects, or a reconnecting slave cannot perform a partial resync, a full resync is performed and the master generates an rdb file.
# There are two ways to transfer it: with disk, the master forks a child process that saves the rdb file to disk, and the file on disk is then sent to the slaves; with socket, the master forks a child process that streams the rdb file directly to the slave sockets.
# With the disk method, several slaves can share one rdb file while it is being produced; with the socket method the slaves are served one after another. With slow disks and fast networks, the socket (diskless) method is recommended.
repl-diskless-sync-delay 5 # delay before starting a diskless transfer; avoid setting it to 0. Once a transfer starts, the node does not accept new slaves until the next rdb transfer, so it is better to wait a while for more slaves to connect.
repl-disable-tcp-nodelay no # whether to disable TCP_NODELAY on the replication link (yes or no). The default is no, i.e. TCP_NODELAY is used. If the master sets this to yes, fewer packets and less bandwidth are used when sending data to slaves,
# but it may also delay the data. The default favors lower latency; with very large replication traffic, yes is recommended.
# requirepass foobared # redis password; commented out by default
# maxclients 10000 # maximum number of client connections (i.e. maximum concurrency); commented out by default
# maxmemory <bytes> # memory limit; when memory is about to exceed it, Redis evicts keys according to the eviction policy; if nothing can be evicted or the limit is already reached, Redis rejects writes and only serves reads
appendonly no # no by default; yes enables AOF
appendfilename "appendonly.aof" # AOF file name
appendfsync everysec # AOF fsync policy: everysec, no, always
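To see how these directives fit together, here is a minimal sketch of a standalone instance; the file name /etc/redis/redis-6379.conf and the log/data paths are illustrative assumptions, not values from the original text.

# minimal redis-6379.conf sketch (assumed paths)
bind 127.0.0.1
protected-mode yes
port 6379
daemonize yes
pidfile /var/run/redis_6379.pid
logfile /var/log/redis_6379.log
dir /var/lib/redis
save 900 1
save 300 10
save 60 10000
appendonly yes
appendfsync everysec
# start the server and verify a few settings at runtime:
# redis-server /etc/redis/redis-6379.conf
# redis-cli CONFIG GET appendfsync
# redis-cli CONFIG SET appendfsync always   (takes effect immediately; run CONFIG REWRITE to persist it back into the file)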
# Memory (eviction) policy
# (1) Set an expiration time on a key with EXPIRE, e.g. EXPIRE andkey 1000 -- the unit is seconds
# (2) Eviction algorithms
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key among the ones with an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
volatile-lru # among keys with an expire set, evict the least recently used
allkeys-lru # evict the least recently used key across all keys; this is the most widely used policy
volatile-lfu # among keys with an expire set, evict the least frequently used
allkeys-lfu # evict the least frequently used key across all keys
volatile-random # evict a random key among those with an expire set
allkeys-random # evict a random key across all keys
volatile-ttl # among keys with an expire set, evict the one closest to expiring (smallest TTL)
noeviction # the default -- evict nothing; once memory is full, return an error on writes
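As an illustration of how the policies above are applied (the 2gb limit, the policy choice, and the key name are assumptions for the example, not recommendations), the memory cap and eviction policy can be set in redis.conf or changed at runtime, and per-key TTLs are set with EXPIRE:

# in redis.conf:
maxmemory 2gb
maxmemory-policy allkeys-lru
# or at runtime:
# redis-cli CONFIG SET maxmemory 2gb
# redis-cli CONFIG SET maxmemory-policy allkeys-lru
# redis-cli INFO memory          (maxmemory and maxmemory_policy show the values in effect)
# per-key expiration:
# redis-cli SET andkey somevalue
# redis-cli EXPIRE andkey 1000
# redis-cli TTL andkey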
# redis cluster parameters
cluster-enabled <yes/no>: if yes, Redis Cluster support is enabled in this instance; otherwise the instance starts as a normal standalone instance.
cluster-config-file <filename>: despite its name, this is not a user-editable configuration file; it is the file where a Redis Cluster node automatically persists the cluster configuration (basically its state) every time it changes, so that it can be reread at startup. The file lists the other nodes in the cluster, their state, persistent variables, and so on. It is often rewritten and flushed to disk as a result of messages being received.
cluster-node-timeout <milliseconds>: the maximum amount of time a node may be unreachable before it is considered failed. If a master is unreachable for longer than this, its slaves will fail it over. This parameter also controls other important things in Redis Cluster; notably, every node that cannot reach the majority of the masters within this time stops accepting queries.
cluster-slave-validity-factor <factor>: if set to zero, a slave will always try to fail over its master, regardless of how long the link between them has been down. If positive, the maximum disconnection time is computed as the node timeout multiplied by this factor, and a slave will not attempt a failover if its link with the master has been down for longer than that. For example, with a node timeout of 5 seconds and a validity factor of 10, a slave disconnected from its master for more than 50 seconds will not try to fail it over. Note that any value other than zero may leave the cluster unavailable after a master failure if no slave is able to fail it over; in that case the cluster only becomes available again when the original master rejoins.
cluster-migration-barrier <count>: the minimum number of slaves a master must keep connected before one of its slaves can migrate to a master that is no longer covered by any slave. See the replica-migration section of the cluster tutorial for details.
cluster-require-full-coverage <yes/no>: if yes (the default), the cluster stops accepting writes whenever some percentage of the key space is not covered by any node. If set to no, the cluster still serves queries even if only a subset of the keys can be handled.
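As a sketch of how these parameters are used (the port numbers, host addresses, and replica count below are assumptions for the example), each cluster node is started from its own file containing the cluster directives, and a minimal cluster needs at least three masters:

# per-node redis.conf fragment (assumed port 7000)
port 7000
cluster-enabled yes
cluster-config-file nodes-7000.conf
cluster-node-timeout 15000
appendonly yes
# with Redis 4.x the cluster is usually created with the bundled redis-trib.rb script
# (from Redis 5 onward, redis-cli --cluster create is the equivalent), e.g.:
# redis-trib.rb create --replicas 1 192.168.1.1:7000 192.168.1.2:7000 192.168.1.3:7000 192.168.1.4:7000 192.168.1.5:7000 192.168.1.6:7000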
Detailed annotated reference (reposted):
#redis.conf # Redis configuration file example. # ./redis-server /path/to/redis.conf ################################## INCLUDES ################################### #這在你有標準配置模板可是每一個redis服務器又須要個性設置的時候頗有用。 # include /path/to/local.conf # include /path/to/other.conf ################################ GENERAL ##################################### #是否在後臺執行,yes:後臺運行;no:不是後臺運行(老版本默認) daemonize yes #3.2裏的參數,是否開啓保護模式,默認開啓。要是配置裏沒有指定bind和密碼。 #開啓該參數後,redis只會本地進行訪問,拒絕外部訪問。要是開啓了密碼和bind, #能夠開啓。不然最好關閉,設置爲no。 protected-mode yes #redis的進程文件 pidfile /var/run/redis/redis-server.pid #redis監聽的端口號。 port 6379 #此參數肯定了TCP鏈接中已完成隊列(完成三次握手以後)的長度, # 固然此值必須不大於Linux系統定義的/proc/sys/net/core/somaxconn值 #,默認是511,而Linux的默認參數值是128。 #當系統併發量大而且客戶端速度緩慢的時候,能夠將這二個參數一塊兒參考設定。 #該內核參數默認值通常是128,對於負載很大的服務程序來講大大的不夠。 #通常會將它修改成2048或者更大。 #在/etc/sysctl.conf中添加:net.core.somaxconn = 2048, #而後在終端中執行sysctl -p。 tcp-backlog 511 #指定 redis 只接收來自於該 IP 地址的請求,若是不進行設置,那麼將處理全部請求 bind 127.0.0.1 #配置unix socket來讓redis支持監聽本地鏈接。 # unixsocket /var/run/redis/redis.sock #配置unix socket使用文件的權限 # unixsocketperm 700 # 此參數爲設置客戶端空閒超過timeout,服務端會斷開鏈接,爲0則服務端不會主動斷開鏈接,不能小於0。 timeout 0 #tcp keepalive參數。若是設置不爲0,就使用配置tcp的SO_KEEPALIVE值,使用keepalive有兩個好處:檢測掛掉的對端。下降中間設備出問題而致使網絡看似鏈接卻已經與對端端口的問題。在Linux內核中,設置了keepalive,redis會定時給對端發送ack。檢測到對端關閉須要兩倍的設置值。 tcp-keepalive 0 #指定了服務端日誌的級別。級別包括:debug(不少信息,方便開發、測試),verbose(許多有用的信息,可是沒有debug級別信息多),notice(適當的日誌級別,適合生產環境),warn(只有很是重要的信息) loglevel notice #指定了記錄日誌的文件。空字符串的話,日誌會打印到標準輸出設備。後臺運行的redis標準輸出是/dev/null。 logfile /var/log/redis/redis-server.log #是否打開記錄syslog功能 # syslog-enabled no #syslog的標識符。 # syslog-ident redis #日誌的來源、設備 # syslog-facility local0 #數據庫的數量,默認使用的數據庫是DB 0。能夠經過」SELECT 「命令選擇一個db databases 16 ################################ SNAPSHOTTING ################################ # 快照配置 # 註釋掉「save」這一行配置項就可讓保存數據庫功能失效 # 設置sedis進行數據庫鏡像的頻率。 # 900秒(15分鐘)內至少1個key值改變(則進行數據庫保存--持久化) # 300秒(5分鐘)內至少10個key值改變(則進行數據庫保存--持久化) # 60秒(1分鐘)內至少10000個key值改變(則進行數據庫保存--持久化) save 900 1 save 300 10 save 60 10000 #當RDB持久化出現錯誤後,是否依然進行繼續進行工做,yes:不能進行工做,no:能夠繼續進行工做,能夠經過info中的rdb_last_bgsave_status瞭解RDB持久化是否有錯誤 stop-writes-on-bgsave-error yes #使用壓縮rdb文件,rdb文件壓縮使用LZF壓縮算法,yes:壓縮,可是須要一些cpu的消耗。no:不壓縮,須要更多的磁盤空間 rdbcompression yes #是否校驗rdb文件。從rdb格式的第五個版本開始,在rdb文件的末尾會帶上CRC64的校驗和。這跟有利於文件的容錯性,可是在保存rdb文件的時候,會有大概10%的性能損耗,因此若是你追求高性能,能夠關閉該配置。 rdbchecksum yes #rdb文件的名稱 dbfilename dump.rdb #數據目錄,數據庫的寫入會在這個目錄。rdb、aof文件也會寫在這個目錄 dir /var/lib/redis ################################# REPLICATION ################################# #複製選項,slave複製對應的master。 # slaveof <masterip> <masterport> #若是master設置了requirepass,那麼slave要連上master,須要有master的密碼才行。masterauth就是用來配置master的密碼,這樣能夠在連上master後進行認證。 # masterauth <master-password> #當從庫同主機失去鏈接或者複製正在進行,從機庫有兩種運行方式:1) 若是slave-serve-stale-data設置爲yes(默認設置),從庫會繼續響應客戶端的請求。2) 若是slave-serve-stale-data設置爲no,除去INFO和SLAVOF命令以外的任何請求都會返回一個錯誤」SYNC with master in progress」。 slave-serve-stale-data yes #做爲從服務器,默認狀況下是隻讀的(yes),能夠修改爲NO,用於寫(不建議)。 slave-read-only yes #是否使用socket方式複製數據。目前redis複製提供兩種方式,disk和socket。若是新的slave連上來或者重連的slave沒法部分同步,就會執行全量同步,master會生成rdb文件。有2種方式:disk方式是master建立一個新的進程把rdb文件保存到磁盤,再把磁盤上的rdb文件傳遞給slave。socket是master建立一個新的進程,直接把rdb文件以socket的方式發給slave。disk方式的時候,當一個rdb保存的過程當中,多個slave都能共享這個rdb文件。socket的方式就的一個個slave順序複製。在磁盤速度緩慢,網速快的狀況下推薦用socket方式。 repl-diskless-sync no #diskless複製的延遲時間,防止設置爲0。一旦複製開始,節點不會再接收新slave的複製請求直到下一個rdb傳輸。因此最好等待一段時間,等更多的slave連上來。 repl-diskless-sync-delay 5 #slave根據指定的時間間隔向服務器發送ping請求。時間間隔能夠經過 repl_ping_slave_period 來設置,默認10秒。 # repl-ping-slave-period 10 
#複製鏈接超時時間。master和slave都有超時時間的設置。master檢測到slave上次發送的時間超過repl-timeout,即認爲slave離線,清除該slave信息。slave檢測到上次和master交互的時間超過repl-timeout,則認爲master離線。須要注意的是repl-timeout須要設置一個比repl-ping-slave-period更大的值,否則會常常檢測到超時。 # repl-timeout 60 #是否禁止複製tcp連接的tcp nodelay參數,可傳遞yes或者no。默認是no,即便用tcp nodelay。若是master設置了yes來禁止tcp nodelay設置,在把數據複製給slave的時候,會減小包的數量和更小的網絡帶寬。可是這也可能帶來數據的延遲。默認咱們推薦更小的延遲,可是在數據量傳輸很大的場景下,建議選擇yes。 repl-disable-tcp-nodelay no #複製緩衝區大小,這是一個環形複製緩衝區,用來保存最新複製的命令。這樣在slave離線的時候,不須要徹底複製master的數據,若是能夠執行部分同步,只須要把緩衝區的部分數據複製給slave,就能恢復正常複製狀態。緩衝區的大小越大,slave離線的時間能夠更長,複製緩衝區只有在有slave鏈接的時候才分配內存。沒有slave的一段時間,內存會被釋放出來,默認1m。 # repl-backlog-size 5mb #master沒有slave一段時間會釋放複製緩衝區的內存,repl-backlog-ttl用來設置該時間長度。單位爲秒。 # repl-backlog-ttl 3600 #當master不可用,Sentinel會根據slave的優先級選舉一個master。最低的優先級的slave,當選master。而配置成0,永遠不會被選舉。 slave-priority 100 #redis提供了可讓master中止寫入的方式,若是配置了min-slaves-to-write,健康的slave的個數小於N,mater就禁止寫入。master最少得有多少個健康的slave存活才能執行寫命令。這個配置雖然不能保證N個slave都必定能接收到master的寫操做,可是能避免沒有足夠健康的slave的時候,master不能寫入來避免數據丟失。設置爲0是關閉該功能。 # min-slaves-to-write 3 #延遲小於min-slaves-max-lag秒的slave才認爲是健康的slave。 # min-slaves-max-lag 10 # 設置1或另外一個設置爲0禁用這個特性。 # Setting one or the other to 0 disables the feature. # By default min-slaves-to-write is set to 0 (feature disabled) and # min-slaves-max-lag is set to 10. ################################## SECURITY ################################### #requirepass配置可讓用戶使用AUTH命令來認證密碼,才能使用其餘命令。這讓redis可使用在不受信任的網絡中。爲了保持向後的兼容性,能夠註釋該命令,由於大部分用戶也不須要認證。使用requirepass的時候須要注意,由於redis太快了,每秒能夠認證15w次密碼,簡單的密碼很容易被攻破,因此最好使用一個更復雜的密碼。 # requirepass foobared #把危險的命令給修改爲其餘名稱。好比CONFIG命令能夠重命名爲一個很難被猜到的命令,這樣用戶不能使用,而內部工具還能接着使用。 # rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52 #設置成一個空的值,能夠禁止一個命令 # rename-command CONFIG "" ################################### LIMITS #################################### # 設置能連上redis的最大客戶端鏈接數量。默認是10000個客戶端鏈接。因爲redis不區分鏈接是客戶端鏈接仍是內部打開文件或者和slave鏈接等,因此maxclients最小建議設置到32。若是超過了maxclients,redis會給新的鏈接發送’max number of clients reached’,並關閉鏈接。 # maxclients 10000 #redis配置的最大內存容量。當內存滿了,須要配合maxmemory-policy策略進行處理。注意slave的輸出緩衝區是不計算在maxmemory內的。因此爲了防止主機內存使用完,建議設置的maxmemory須要更小一些。 # maxmemory <bytes> #內存容量超過maxmemory後的處理策略。 #volatile-lru:利用LRU算法移除設置過過時時間的key。 #volatile-random:隨機移除設置過過時時間的key。 #volatile-ttl:移除即將過時的key,根據最近過時時間來刪除(輔以TTL) #allkeys-lru:利用LRU算法移除任何key。 #allkeys-random:隨機移除任何key。 #noeviction:不移除任何key,只是返回一個寫錯誤。 #上面的這些驅逐策略,若是redis沒有合適的key驅逐,對於寫命令,仍是會返回錯誤。redis將再也不接收寫請求,只接收get請求。寫命令包括:set setnx setex append incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby getset mset msetnx exec sort。 # maxmemory-policy noeviction #lru檢測的樣本數。使用lru或者ttl淘汰算法,從須要淘汰的列表中隨機選擇sample個key,選出閒置時間最長的key移除。 # maxmemory-samples 5 ############################## APPEND ONLY MODE ############################### #默認redis使用的是rdb方式持久化,這種方式在許多應用中已經足夠用了。可是redis若是中途宕機,會致使可能有幾分鐘的數據丟失,根據save來策略進行持久化,Append Only File是另外一種持久化方式,能夠提供更好的持久化特性。Redis會把每次寫入的數據在接收後都寫入 appendonly.aof 文件,每次啓動時Redis都會先把這個文件的數據讀入內存裏,先忽略RDB文件。 appendonly no #aof文件名 appendfilename "appendonly.aof" #aof持久化策略的配置 #no表示不執行fsync,由操做系統保證數據同步到磁盤,速度最快。 #always表示每次寫入都執行fsync,以保證數據同步到磁盤。 #everysec表示每秒執行一次fsync,可能會致使丟失這1s數據。 appendfsync everysec # 在aof重寫或者寫入rdb文件的時候,會執行大量IO,此時對於everysec和always的aof模式來講,執行fsync會形成阻塞過長時間,no-appendfsync-on-rewrite字段設置爲默認設置爲no。若是對延遲要求很高的應用,這個字段能夠設置爲yes,不然仍是設置爲no,這樣對持久化特性來講這是更安全的選擇。設置爲yes表示rewrite期間對新寫操做不fsync,暫時存在內存中,等rewrite完成後再寫入,默認爲no,建議yes。Linux的默認fsync策略是30秒。可能丟失30秒數據。 no-appendfsync-on-rewrite no 
#aof自動重寫配置。當目前aof文件大小超過上一次重寫的aof文件大小的百分之多少進行重寫,即當aof文件增加到必定大小的時候Redis可以調用bgrewriteaof對日誌文件進行重寫。當前AOF文件大小是上第二天志重寫獲得AOF文件大小的二倍(設置爲100)時,自動啓動新的日誌重寫過程。 auto-aof-rewrite-percentage 100 #設置容許重寫的最小aof文件大小,避免了達到約定百分比但尺寸仍然很小的狀況還要重寫 auto-aof-rewrite-min-size 64mb #aof文件可能在尾部是不完整的,當redis啓動的時候,aof文件的數據被載入內存。重啓可能發生在redis所在的主機操做系統宕機後,尤爲在ext4文件系統沒有加上data=ordered選項(redis宕機或者異常終止不會形成尾部不完整現象。)出現這種現象,能夠選擇讓redis退出,或者導入儘量多的數據。若是選擇的是yes,當截斷的aof文件被導入的時候,會自動發佈一個log給客戶端而後load。若是是no,用戶必須手動redis-check-aof修復AOF文件才能夠。 aof-load-truncated yes ################################ LUA SCRIPTING ############################### # 若是達到最大時間限制(毫秒),redis會記個log,而後返回error。當一個腳本超過了最大時限。只有SCRIPT KILL和SHUTDOWN NOSAVE能夠用。第一個能夠殺沒有調write命令的東西。要是已經調用了write,只能用第二個命令殺。 lua-time-limit 5000 ################################ REDIS CLUSTER ############################### #集羣開關,默認是不開啓集羣模式。 # cluster-enabled yes #集羣配置文件的名稱,每一個節點都有一個集羣相關的配置文件,持久化保存集羣的信息。這個文件並不須要手動配置,這個配置文件有Redis生成並更新,每一個Redis集羣節點須要一個單獨的配置文件,請確保與實例運行的系統中配置文件名稱不衝突 # cluster-config-file nodes-6379.conf #節點互連超時的閥值。集羣節點超時毫秒數 # cluster-node-timeout 15000 #在進行故障轉移的時候,所有slave都會請求申請爲master,可是有些slave可能與master斷開鏈接一段時間了,致使數據過於陳舊,這樣的slave不該該被提高爲master。該參數就是用來判斷slave節點與master斷線的時間是否過長。判斷方法是: #比較slave斷開鏈接的時間和(node-timeout * slave-validity-factor) + repl-ping-slave-period #若是節點超時時間爲三十秒, 而且slave-validity-factor爲10,假設默認的repl-ping-slave-period是10秒,即若是超過310秒slave將不會嘗試進行故障轉移 # cluster-slave-validity-factor 10 #master的slave數量大於該值,slave才能遷移到其餘孤立master上,如這個參數若被設爲2,那麼只有當一個主節點擁有2 個可工做的從節點時,它的一個從節點會嘗試遷移。 # cluster-migration-barrier 1 #默認狀況下,集羣所有的slot有節點負責,集羣狀態才爲ok,才能提供服務。設置爲no,能夠在slot沒有所有分配的時候提供服務。不建議打開該配置,這樣會形成分區的時候,小分區的master一直在接受寫請求,而形成很長時間數據不一致。 # cluster-require-full-coverage yes ################################## SLOW LOG ################################### ###slog log是用來記錄redis運行中執行比較慢的命令耗時。當命令的執行超過了指定時間,就記錄在slow log中,slog log保存在內存中,因此沒有IO操做。 #執行時間比slowlog-log-slower-than大的請求記錄到slowlog裏面,單位是微秒,因此1000000就是1秒。注意,負數時間會禁用慢查詢日誌,而0則會強制記錄全部命令。 slowlog-log-slower-than 10000 #慢查詢日誌長度。當一個新的命令被寫進日誌的時候,最老的那個記錄會被刪掉。這個長度沒有限制。只要有足夠的內存就行。你能夠經過 SLOWLOG RESET 來釋放內存。 slowlog-max-len 128 ################################ LATENCY MONITOR ############################## #延遲監控功能是用來監控redis中執行比較緩慢的一些操做,用LATENCY打印redis實例在跑命令時的耗時圖表。只記錄大於等於下邊設置的值的操做。0的話,就是關閉監視。默認延遲監控功能是關閉的,若是你須要打開,也能夠經過CONFIG SET命令動態設置。 latency-monitor-threshold 0 ############################# EVENT NOTIFICATION ############################## #鍵空間通知使得客戶端能夠經過訂閱頻道或模式,來接收那些以某種方式改動了 Redis 數據集的事件。由於開啓鍵空間通知功能須要消耗一些 CPU ,因此在默認配置下,該功能處於關閉狀態。 #notify-keyspace-events 的參數能夠是如下字符的任意組合,它指定了服務器該發送哪些類型的通知: ##K 鍵空間通知,全部通知以 __keyspace@__ 爲前綴 ##E 鍵事件通知,全部通知以 __keyevent@__ 爲前綴 ##g DEL 、 EXPIRE 、 RENAME 等類型無關的通用命令的通知 ##$ 字符串命令的通知 ##l 列表命令的通知 ##s 集合命令的通知 ##h 哈希命令的通知 ##z 有序集合命令的通知 ##x 過時事件:每當有過時鍵被刪除時發送 ##e 驅逐(evict)事件:每當有鍵由於 maxmemory 政策而被刪除時發送 ##A 參數 g$lshzxe 的別名 #輸入的參數中至少要有一個 K 或者 E,不然的話,無論其他的參數是什麼,都不會有任何 通知被分發。詳細使用能夠參考http://redis.io/topics/notifications notify-keyspace-events "" ############################### ADVANCED CONFIG ############################### #數據量小於等於hash-max-ziplist-entries的用ziplist,大於hash-max-ziplist-entries用hash hash-max-ziplist-entries 512 #value大小小於等於hash-max-ziplist-value的用ziplist,大於hash-max-ziplist-value用hash。 hash-max-ziplist-value 64 #數據量小於等於list-max-ziplist-entries用ziplist,大於list-max-ziplist-entries用list。 list-max-ziplist-entries 512 #value大小小於等於list-max-ziplist-value的用ziplist,大於list-max-ziplist-value用list。 list-max-ziplist-value 64 #數據量小於等於set-max-intset-entries用iniset,大於set-max-intset-entries用set。 set-max-intset-entries 512 
#數據量小於等於zset-max-ziplist-entries用ziplist,大於zset-max-ziplist-entries用zset。 zset-max-ziplist-entries 128 #value大小小於等於zset-max-ziplist-value用ziplist,大於zset-max-ziplist-value用zset。 zset-max-ziplist-value 64 #value大小小於等於hll-sparse-max-bytes使用稀疏數據結構(sparse),大於hll-sparse-max-bytes使用稠密的數據結構(dense)。一個比16000大的value是幾乎沒用的,建議的value大概爲3000。若是對CPU要求不高,對空間要求較高的,建議設置到10000左右。 hll-sparse-max-bytes 3000 #Redis將在每100毫秒時使用1毫秒的CPU時間來對redis的hash表進行從新hash,能夠下降內存的使用。當你的使用場景中,有很是嚴格的實時性須要,不可以接受Redis時不時的對請求有2毫秒的延遲的話,把這項配置爲no。若是沒有這麼嚴格的實時性要求,能夠設置爲yes,以便可以儘量快的釋放內存。 activerehashing yes ##對客戶端輸出緩衝進行限制能夠強迫那些不從服務器讀取數據的客戶端斷開鏈接,用來強制關閉傳輸緩慢的客戶端。 #對於normal client,第一個0表示取消hard limit,第二個0和第三個0表示取消soft limit,normal client默認取消限制,由於若是沒有尋問,他們是不會接收數據的。 client-output-buffer-limit normal 0 0 0 #對於slave client和MONITER client,若是client-output-buffer一旦超過256mb,又或者超過64mb持續60秒,那麼服務器就會當即斷開客戶端鏈接。 client-output-buffer-limit slave 256mb 64mb 60 #對於pubsub client,若是client-output-buffer一旦超過32mb,又或者超過8mb持續60秒,那麼服務器就會當即斷開客戶端鏈接。 client-output-buffer-limit pubsub 32mb 8mb 60 #redis執行任務的頻率爲1s除以hz。 hz 10 #在aof重寫的時候,若是打開了aof-rewrite-incremental-fsync開關,系統會每32MB執行一次fsync。這對於把文件寫入磁盤是有幫助的,能夠避免過大的延遲峯值。 aof-rewrite-incremental-fsync yes ———————————————— 版權聲明:本文爲CSDN博主「aijou_karen」的原創文章,遵循 CC 4.0 BY-SA 版權協議,轉載請附上原文出處連接及本聲明。 原文連接:https://blog.csdn.net/digimon100/article/details/92403763
Translation of the original redis.conf, reposted from: http://www.javashuo.com/article/p-zxoaektc-t.html
# Redis configuration file example. # # Note that in order to read the configuration file, Redis must be # started with the file path as first argument: # # ./redis-server /path/to/redis.conf # Note on units: when memory size is needed, it is possible to specify # it in the usual form of 1k 5GB 4M and so forth: # # 1k => 1000 bytes # 1kb => 1024 bytes # 1m => 1000000 bytes # 1mb => 1024*1024 bytes # 1g => 1000000000 bytes # 1gb => 1024*1024*1024 bytes # # units are case insensitive so 1GB 1Gb 1gB are all the same. ################################## INCLUDES ################################# # Include one or more other config files here. This is useful if you # have a standard template that goes to all Redis servers but also need # to customize a few per-server settings. Include files can include # other files, so use this wisely. # # Notice option "include" won't be rewritten by command "CONFIG REWRITE" # from admin or Redis Sentinel. Since Redis always uses the last processed # line as value of a configuration directive, you'd better put includes # at the beginning of this file to avoid overwriting config change at runtime. # # If instead you are interested in using includes to override configuration # options, it is better to use include as the last line. # 在這裏包括一個或多個配置文件。這頗有用#有一個標準的模板,適用於全部Redis服務器, #但也須要爲每一個服務器定製一些設置。包含文件能夠包括#其餘文件,因此要明智地使用它。 #通知選項「include」不會被命令「CONFIG REWRITE」重寫,來自admin或Redis Sentinel。 #由於Redis老是使用最後處理過的做爲配置指令的值,你最好輸入include在此文件的開頭 #以免在運行時覆蓋配置更改。 #若是您感興趣的是使用include覆蓋配置,options最好使用include做爲最後一行。 # include /path/to/local.conf # include /path/to/other.conf ################################## MODULES ##################################### # Load modules at startup. If the server is not able to load modules # it will abort. It is possible to use multiple loadmodule directives. # 在啓動時加載模塊。若是服務器不能加載模塊#將停止。可使用多個loadmodule指令。 # loadmodule /path/to/my_module.so # loadmodule /path/to/other_module.so ################################## NETWORK ##################################### # By default, if no "bind" configuration directive is specified, Redis listens # for connections from all the network interfaces available on the server. # It is possible to listen to just one or multiple selected interfaces using # the "bind" configuration directive, followed by one or more IP addresses. # # 默認狀況下,redis 在 server 上全部有效的網絡接口上監聽客戶端鏈接。 # 你若是隻想讓它在一個網絡接口上監聽,那你就綁定一個IP或者多個IP。 # # 示例,多個IP用空格隔開: # Examples: # # bind 192.168.1.100 10.0.0.1 # bind 127.0.0.1 ::1 # # ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the # internet, binding to all the interfaces is dangerous and will expose the # instance to everybody on the internet. So by default we uncomment the # following bind directive, that will force Redis to listen only into # the IPv4 lookback interface address (this means Redis will be able to # accept connections only from clients running into the same computer it # is running). # # IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES # JUST COMMENT THE FOLLOWING LINE. # 若是您肯定但願實例監聽全部接口只需註釋下面的行。 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # bind 127.0.0.1 # Protected mode is a layer of security protection, in order to avoid that # Redis instances left open on the internet are accessed and exploited. # # When protected mode is on and if: # # 1) The server is not binding explicitly to a set of addresses using the # "bind" directive. # 2) No password is configured. 
# # The server only accepts connections from clients connecting from the # IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain # sockets. # # By default protected mode is enabled. You should disable it only if # you are sure you want clients from other hosts to connect to Redis # even if no authentication is configured, nor a specific set of interfaces # are explicitly listed using the "bind" directive. #是否開啓保護模式,默認開啓。要是配置裏沒有指定bind和密碼。 #開啓該參數後,redis只會本地進行訪問,拒絕外部訪問。要是開啓了密碼和bind,能夠開啓。 #不然最好關閉,設置爲no。 protected-mode yes # Accept connections on the specified port, default is 6379 (IANA #815344). # If port 0 is specified Redis will not listen on a TCP socket. # 端口號 port 6399 # TCP listen() backlog. # # In high requests-per-second environments you need an high backlog in order # to avoid slow clients connections issues. Note that the Linux kernel # will silently truncate it to the value of /proc/sys/net/core/somaxconn so # make sure to raise both the value of somaxconn and tcp_max_syn_backlog # in order to get the desired effect. 此參數肯定了TCP鏈接中已完成隊列(完成三次握手以後)的長度, 固然此值必須不大於Linux系統定義的/proc/sys/net/core/somaxconn值,默認是511, 而Linux的默認參數值是128。當系統併發量大而且客戶端速度緩慢的時候, 能夠將這二個參數一塊兒參考設定。該內核參數默認值通常是128, 對於負載很大的服務程序來講大大的不夠。通常會將它修改成2048或者更大。 在/etc/sysctl.conf中添加:net.core.somaxconn = 2048,而後在終端中執行sysctl -p。 tcp-backlog 511 # Unix socket. # # Specify the path for the Unix socket that will be used to listen for # incoming connections. There is no default, so Redis will not listen # on a unix socket when not specified. #指定 unix socket 的路徑。 # unixsocket /tmp/redis.sock # unixsocketperm 700 # Close the connection after a client is idle for N seconds (0 to disable) 指定在一個 client 空閒多少秒以後關閉鏈接(0 就是無論它) timeout 0 # TCP keepalive. # # If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence # of communication. This is useful for two reasons: # # tcp 心跳包。 # # 若是設置爲非零,則在與客戶端缺少通信的時候使用 SO_KEEPALIVE 發送 tcp acks 給客戶端。 # 1) Detect dead peers. # 2) Take the connection alive from the point of view of network # equipment in the middle. # # On Linux, the specified value (in seconds) is the period used to send ACKs. # Note that to close the connection the double of the time is needed. # On other kernels the period depends on the kernel configuration. # # A reasonable value for this option is 300 seconds, which is the new # Redis default starting with Redis 3.2.1. 這個選項的合理值是300秒,這是新的# Redis默認從Redis 3.2.1開始。 tcp-keepalive 300 ################################# GENERAL ##################################### # By default Redis does not run as a daemon. Use 'yes' if you need it. # Note that Redis will write a pid file in /var/run/redis.pid when daemonized. 默認狀況下 redis 不是做爲守護進程運行的,若是你想讓它在後臺運行,你就把它改爲 yes。 daemonize yes # If you run Redis from upstart or systemd, Redis can interact with your # supervision tree. Options: # supervised no - no supervision interaction # supervised upstart - signal upstart by putting Redis into SIGSTOP mode # supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET # supervised auto - detect upstart or systemd method based on # UPSTART_JOB or NOTIFY_SOCKET environment variables # Note: these supervision methods only signal "process is ready." # They do not enable continuous liveness pings back to your supervisor. 能夠經過upstart和systemd管理Redis守護進程,這個參數是和具體的操做系統相關的。 supervised no # If a pid file is specified, Redis writes it where specified at startup # and removes it at exit. 
# # When the server runs non daemonized, no pid file is created if none is # specified in the configuration. When the server is daemonized, the pid file # is used even if not specified, defaulting to "/var/run/redis.pid". # # Creating a pid file is best effort: if Redis is not able to create it # nothing bad happens, the server will start and run normally. # 當redis做爲守護進程運行的時候,它會把 pid 默認寫到 /var/run/redis.pid 文件裏面, # 可是你能夠在這裏本身制定它的文件位置。 pidfile /var/run/redis_6379.pid # Specify the server verbosity level. # This can be one of: # debug (a lot of information, useful for development/testing) # verbose (many rarely useful info, but not a mess like the debug level) # notice (moderately verbose, what you want in production probably) # warning (only very important / critical messages are logged) # 定義日誌級別。 # 能夠是下面的這些值: # debug (適用於開發或測試階段) # verbose (many rarely useful info, but not a mess like the debug level) # notice (適用於生產環境) # warning (僅僅一些重要的消息被記錄) loglevel notice # Specify the log file name. Also the empty string can be used to force # Redis to log on the standard output. Note that if you use standard # output for logging but daemonize, logs will be sent to /dev/null 指定日誌文件的位置 logfile "" # To enable logging to the system logger, just set 'syslog-enabled' to yes, # and optionally update the other syslog parameters to suit your needs. # syslog-enabled no # Specify the syslog identity. # syslog-ident redis # Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7. # syslog-facility local0 # Set the number of databases. The default database is DB 0, you can select # a different one on a per-connection basis using SELECT <dbid> where # dbid is a number between 0 and 'databases'-1 # 設置數據庫的數目。 # 默認數據庫是 DB 0,你能夠在每一個鏈接上使用 select <dbid> 命令選擇一個不一樣的數據庫, # 可是 dbid 必須是一個介於 0 到 databasees - 1 之間的值 databases 16 # By default Redis shows an ASCII art logo only when started to log to the # standard output and if the standard output is a TTY. Basically this means # that normally a logo is displayed only in interactive sessions. # # However it is possible to force the pre-4.0 behavior and always show a # ASCII art logo in startup logs by setting the following option to yes. 默認狀況下,Redis只在開始登陸時才顯示ASCII藝術標誌標準輸出,若是標準輸出是TTY。 基本上這意味着一般標識只在交互會話中顯示。#然而,強迫4.0以前的行爲並老是顯示a是 可能的# ASCII藝術標誌在啓動日誌經過設置下列選項是 always-show-logo yes ################################ SNAPSHOTTING ################################ # # Save the DB on disk: # # save <seconds> <changes> # # Will save the DB if both the given number of seconds and the given # number of write operations against the DB occurred. # # In the example below the behaviour will be to save: # after 900 sec (15 min) if at least 1 key changed # after 300 sec (5 min) if at least 10 keys changed # after 60 sec if at least 10000 keys changed # # Note: you can disable saving completely by commenting out all "save" lines. # # It is also possible to remove all the previously configured save # points by adding a save directive with a single empty string argument # like in the following example: # # save "" # 存 DB 到磁盤: # # 格式:save <間隔時間(秒)> <寫入次數> # # 根據給定的時間間隔和寫入次數將數據保存到磁盤 # # 下面的例子的意思是: # 900 秒內若是至少有 1 個 key 的值變化,則保存 # 300 秒內若是至少有 10 個 key 的值變化,則保存 # 60 秒內若是至少有 10000 個 key 的值變化,則保存 # # 注意:你能夠註釋掉全部的 save 行來停用保存功能。 # 也能夠直接一個空字符串來實現停用: # save "" save 900 1 save 300 10 save 60 10000 # By default Redis will stop accepting writes if RDB snapshots are enabled # (at least one save point) and the latest background save failed. 
# This will make the user aware (in a hard way) that data is not persisting # on disk properly, otherwise chances are that no one will notice and some # disaster will happen. # # If the background saving process will start working again Redis will # automatically allow writes again. # # However if you have setup your proper monitoring of the Redis server # and persistence, you may want to disable this feature so that Redis will # continue to work as usual even if there are problems with disk, # permissions, and so forth. # 默認狀況下,若是 redis 最後一次的後臺保存失敗,redis 將中止接受寫操做, # 這樣以一種強硬的方式讓用戶知道數據不能正確的持久化到磁盤, # 不然就會沒人注意到災難的發生。 # # 若是後臺保存進程從新啓動工做了,redis 也將自動的容許寫操做。 # # 然而你要是安裝了靠譜的監控,你可能不但願 redis 這樣作,那你就改爲 no 好了。 stop-writes-on-bgsave-error yes # Compress string objects using LZF when dump .rdb databases? # For default that's set to 'yes' as it's almost always a win. # If you want to save some CPU in the saving child set it to 'no' but # the dataset will likely be bigger if you have compressible values or keys. # 是否在 dump .rdb 數據庫的時候使用 LZF 壓縮字符串 # 默認都設爲 yes # 若是你但願保存子進程節省點 cpu ,你就設置它爲 no , # 不過這個數據集可能就會比較大 rdbcompression yes # Since version 5 of RDB a CRC64 checksum is placed at the end of the file. # This makes the format more resistant to corruption but there is a performance # hit to pay (around 10%) when saving and loading RDB files, so you can disable it # for maximum performances. # # RDB files created with checksum disabled have a checksum of zero that will # tell the loading code to skip the check. # 是否校驗rdb文件 rdbchecksum yes # The filename where to dump the DB # 設置 dump 的文件位置 dbfilename dump.rdb # The working directory. # # The DB will be written inside this directory, with the filename specified # above using the 'dbfilename' configuration directive. # # The Append Only File will also be created inside this directory. # # Note that you must specify a directory here, not a file name. # 工做目錄 # 例如上面的 dbfilename 只指定了文件名, # 可是它會寫入到這個目錄下。這個配置項必定是個目錄,而不能是文件名。 dir ./ ################################# REPLICATION ################################# # Master-Slave replication. Use slaveof to make a Redis instance a copy of # another Redis server. A few things to understand ASAP about Redis replication. # # 1) Redis replication is asynchronous, but you can configure a master to # stop accepting writes if it appears to be not connected with at least # a given number of slaves. # 2) Redis slaves are able to perform a partial resynchronization with the # master if the replication link is lost for a relatively small amount of # time. You may want to configure the replication backlog size (see the next # sections of this file) with a sensible value depending on your needs. # 3) Replication is automatic and does not need user intervention. After a # network partition slaves automatically try to reconnect to masters # and resynchronize with them. # # slaveof <masterip> <masterport> # If the master is password protected (using the "requirepass" configuration # directive below) it is possible to tell the slave to authenticate before # starting the replication synchronization process, otherwise the master will # refuse the slave request. 
# # masterauth <master-password> # When a slave loses its connection with the master, or when the replication # is still in progress, the slave can act in two different ways: # # 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will # still reply to client requests, possibly with out of date data, or the # data set may just be empty if this is the first synchronization. # # 2) if slave-serve-stale-data is set to 'no' the slave will reply with # an error "SYNC with master in progress" to all the kind of commands # but to INFO and SLAVEOF. # 主從複製。使用 slaveof 來讓一個 redis 實例成爲另外一個reids 實例的副本。 # 注意這個只須要在 slave 上配置。 # # slaveof <masterip> <masterport> # 若是 master 須要密碼認證,就在這裏設置 # masterauth <master-password> # 當一個 slave 與 master 失去聯繫,或者複製正在進行的時候, # slave 可能會有兩種表現: # # 1) 若是爲 yes ,slave 仍然會應答客戶端請求,但返回的數據多是過期, # 或者數據多是空的在第一次同步的時候 # # 2) 若是爲 no ,在你執行除了 info he salveof 以外的其餘命令時, # slave 都將返回一個 "SYNC with master in progress" 的錯誤, # slave-serve-stale-data yes # You can configure a slave instance to accept writes or not. Writing against # a slave instance may be useful to store some ephemeral data (because data # written on a slave will be easily deleted after resync with the master) but # may also cause problems if clients are writing to it because of a # misconfiguration. # # Since Redis 2.6 by default slaves are read-only. # # Note: read only slaves are not designed to be exposed to untrusted clients # on the internet. It's just a protection layer against misuse of the instance. # Still a read only slave exports by default all the administrative commands # such as CONFIG, DEBUG, and so forth. To a limited extent you can improve # security of read only slaves using 'rename-command' to shadow all the # administrative / dangerous commands. # 你能夠配置一個 slave 實體是否接受寫入操做。 # 經過寫入操做來存儲一些短暫的數據對於一個 slave 實例來講多是有用的, # 由於相對從 master 從新同步數而言,據數據寫入到 slave 會更容易被刪除。 # 可是若是客戶端由於一個錯誤的配置寫入,也可能會致使一些問題。 # # 從 redis 2.6 版起,默認 slaves 都是隻讀的。 # # Note: read only slaves are not designed to be exposed to untrusted clients # on the internet. It's just a protection layer against misuse of the instance. # Still a read only slave exports by default all the administrative commands # such as CONFIG, DEBUG, and so forth. To a limited extent you can improve # security of read only slaves using 'rename-command' to shadow all the # administrative / dangerous commands. # 注意:只讀的 slaves 沒有被設計成在 internet 上暴露給不受信任的客戶端。 # 它僅僅是一個針對誤用實例的一個保護層。 slave-read-only yes # Replication SYNC strategy: disk or socket. # # ------------------------------------------------------- # WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY # ------------------------------------------------------- # # New slaves and reconnecting slaves that are not able to continue the replication # process just receiving differences, need to do what is called a "full # synchronization". An RDB file is transmitted from the master to the slaves. # The transmission can happen in two different ways: # # 1) Disk-backed: The Redis master creates a new process that writes the RDB # file on disk. Later the file is transferred by the parent # process to the slaves incrementally. # 2) Diskless: The Redis master creates a new process that directly writes the # RDB file to slave sockets, without touching the disk at all. # # With disk-backed replication, while the RDB file is generated, more slaves # can be queued and served with the RDB file as soon as the current child producing # the RDB file finishes its work. 
With diskless replication instead once # the transfer starts, new slaves arriving will be queued and a new transfer # will start when the current one terminates. # # When diskless replication is used, the master waits a configurable amount of # time (in seconds) before starting the transfer in the hope that multiple slaves # will arrive and the transfer can be parallelized. # # With slow disks and fast (large bandwidth) networks, diskless replication # works better. #是否使用socket方式複製數據。目前redis複製提供兩種方式,disk和socket。 若是新的slave連上來或者重連的slave沒法部分同步,就會執行全量同步,master會生成rdb文件。 有2種方式:disk方式是master建立一個新的進程把rdb文件保存到磁盤, 再把磁盤上的rdb文件傳遞給slave。socket是master建立一個新的進程, 直接把rdb文件以socket的方式發給slave。disk方式的時候,當一個rdb保存的過程當中, 多個slave都能共享這個rdb文件。socket的方式就的一個個slave順序複製。在磁盤速度緩慢, 網速快的狀況下推薦用socket方式。 repl-diskless-sync no # When diskless replication is enabled, it is possible to configure the delay # the server waits in order to spawn the child that transfers the RDB via socket # to the slaves. # # This is important since once the transfer starts, it is not possible to serve # new slaves arriving, that will be queued for the next RDB transfer, so the server # waits a delay in order to let more slaves arrive. # # The delay is specified in seconds, and by default is 5 seconds. To disable # it entirely just set it to 0 seconds and the transfer will start ASAP. #diskless複製的延遲時間,防止設置爲0。一旦複製開始, 節點不會再接收新slave的複製請求直到下一個rdb傳輸。因此最好等待一段時間,等更多的slave連上來。 repl-diskless-sync-delay 5 # Slaves send PINGs to server in a predefined interval. It's possible to change # this interval with the repl_ping_slave_period option. The default value is 10 # seconds. # # Slaves 在一個預約義的時間間隔內發送 ping 命令到 server 。 # 你能夠改變這個時間間隔。默認爲 10 秒。 # repl-ping-slave-period 10 # The following option sets the replication timeout for: # # 1) Bulk transfer I/O during SYNC, from the point of view of slave. # 2) Master timeout from the point of view of slaves (data, pings). # 3) Slave timeout from the point of view of masters (REPLCONF ACK pings). # # It is important to make sure that this value is greater than the value # specified for repl-ping-slave-period otherwise a timeout will be detected # every time there is low traffic between the master and the slave. # #複製鏈接超時時間。master和slave都有超時時間的設置。 master檢測到slave上次發送的時間超過repl-timeout,即認爲slave離線,清除該slave信息。 slave檢測到上次和master交互的時間超過repl-timeout,則認爲master離線。 須要注意的是repl-timeout須要設置一個比repl-ping-slave-period更大的值,否則會常常檢測到超時。 # repl-timeout 60 # Disable TCP_NODELAY on the slave socket after SYNC? # # If you select "yes" Redis will use a smaller number of TCP packets and # less bandwidth to send data to slaves. But this can add a delay for # the data to appear on the slave side, up to 40 milliseconds with # Linux kernels using a default configuration. # # If you select "no" the delay for data to appear on the slave side will # be reduced but more bandwidth will be used for replication. # # By default we optimize for low latency, but in very high traffic conditions # or when the master and slaves are many hops away, turning this to "yes" may # be a good idea. #是否禁止複製tcp連接的tcp nodelay參數,可傳遞yes或者no。默認是no, 即便用tcp nodelay。若是master設置了yes來禁止tcp nodelay設置, 在把數據複製給slave的時候,會減小包的數量和更小的網絡帶寬。 可是這也可能帶來數據的延遲。默認咱們推薦更小的延遲, 可是在數據量傳輸很大的場景下,建議選擇yes。 repl-disable-tcp-nodelay no # Set the replication backlog size. 
The backlog is a buffer that accumulates # slave data when slaves are disconnected for some time, so that when a slave # wants to reconnect again, often a full resync is not needed, but a partial # resync is enough, just passing the portion of data the slave missed while # disconnected. # # The bigger the replication backlog, the longer the time the slave can be # disconnected and later be able to perform a partial resynchronization. # # The backlog is only allocated once there is at least a slave connected. # #複製緩衝區大小,這是一個環形複製緩衝區,用來保存最新複製的命令。 這樣在slave離線的時候,不須要徹底複製master的數據,若是能夠執行部分同步, 只須要把緩衝區的部分數據複製給slave,就能恢復正常複製狀態。緩衝區的大小越大, slave離線的時間能夠更長,複製緩衝區只有在有slave鏈接的時候才分配內存。 沒有slave的一段時間,內存會被釋放出來,默認1m。 # repl-backlog-size 1mb # After a master has no longer connected slaves for some time, the backlog # will be freed. The following option configures the amount of seconds that # need to elapse, starting from the time the last slave disconnected, for # the backlog buffer to be freed. # # Note that slaves never free the backlog for timeout, since they may be # promoted to masters later, and should be able to correctly "partially # resynchronize" with the slaves: hence they should always accumulate backlog. # # A value of 0 means to never release the backlog. # # master沒有slave一段時間會釋放複製緩衝區的內存, repl-backlog-ttl用來設置該時間長度。單位爲秒。 # repl-backlog-ttl 3600 # The slave priority is an integer number published by Redis in the INFO output. # It is used by Redis Sentinel in order to select a slave to promote into a # master if the master is no longer working correctly. # # A slave with a low priority number is considered better for promotion, so # for instance if there are three slaves with priority 10, 100, 25 Sentinel will # pick the one with priority 10, that is the lowest. # # However a special priority of 0 marks the slave as not able to perform the # role of master, so a slave with priority of 0 will never be selected by # Redis Sentinel for promotion. # # By default the priority is 100. 當master不可用,Sentinel會根據slave的優先級選舉一個master。 最低的優先級的slave,當選master。而配置成0,永遠不會被選舉。 slave-priority 100 # It is possible for a master to stop accepting writes if there are less than # N slaves connected, having a lag less or equal than M seconds. # # The N slaves need to be in "online" state. # # The lag in seconds, that must be <= the specified value, is calculated from # the last ping received from the slave, that is usually sent every second. # # This option does not GUARANTEE that N replicas will accept the write, but # will limit the window of exposure for lost writes in case not enough slaves # are available, to the specified number of seconds. # # For example to require at least 3 slaves with a lag <= 10 seconds use: # #redis提供了可讓master中止寫入的方式,若是配置了min-slaves-to-write, 健康的slave的個數小於N,mater就禁止寫入。master最少得有多少個健康的slave存活才能執行寫命令。 這個配置雖然不能保證N個slave都必定能接收到master的寫操做,可是能避免沒有足夠健康的slave的時候, master不能寫入來避免數據丟失。設置爲0是關閉該功能。 # min-slaves-to-write 3 #延遲小於min-slaves-max-lag秒的slave才認爲是健康的slave。 # min-slaves-max-lag 10 # # Setting one or the other to 0 disables the feature. # # By default min-slaves-to-write is set to 0 (feature disabled) and # min-slaves-max-lag is set to 10. # A Redis master is able to list the address and port of the attached # slaves in different ways. For example the "INFO replication" section # offers this information, which is used, among other tools, by # Redis Sentinel in order to discover slave instances. # Another place where this info is available is in the output of the # "ROLE" command of a master. 
# # The listed IP and address normally reported by a slave is obtained # in the following way: # # IP: The address is auto detected by checking the peer address # of the socket used by the slave to connect with the master. # # Port: The port is communicated by the slave during the replication # handshake, and is normally the port that the slave is using to # list for connections. # # However when port forwarding or Network Address Translation (NAT) is # used, the slave may be actually reachable via different IP and port # pairs. The following two options can be used by a slave in order to # report to its master a specific set of IP and port, so that both INFO # and ROLE will report those values. # # There is no need to use both the options if you need to override just # the port or the IP address. # Redis master可以以不一樣的方式列出所鏈接slave的地址和端口。 例如,「INFO replication」部分提供此信息,除了其餘工具以外,Redis Sentinel還使用該信息來發現slave實例。 此信息可用的另外一個地方在masterser的「ROLE」命令的輸出中。 一般由slave報告的列出的IP和地址,經過如下方式得到: IP:經過檢查slave與master鏈接使用的套接字的對等體地址自動檢測地址。 端口:端口在複製握手期間由slavet通訊,而且一般是slave正在使用列出鏈接的端口。 然而,當使用端口轉發或網絡地址轉換(NAT)時,slave實際上能夠經過(不一樣的IP和端口對)來到達。 slave可使用如下兩個選項,以便向master報告一組特定的IP和端口, 以便INFO和ROLE將報告這些值。 若是你須要僅覆蓋端口或IP地址,則不必使用這兩個選項。 # slave-announce-ip 5.5.5.5 # slave-announce-port 1234 ################################## SECURITY ################################### # Require clients to issue AUTH <PASSWORD> before processing any other # commands. This might be useful in environments in which you do not trust # others with access to the host running redis-server. # # This should stay commented out for backward compatibility and because most # people do not need auth (e.g. they run their own servers). # # Warning: since Redis is pretty fast an outside user can try up to # 150k passwords per second against a good box. This means that you should # use a very strong password otherwise it will be very easy to break. # # 設置認證密碼 requirepass password # Command renaming. # # It is possible to change the name of dangerous commands in a shared # environment. For instance the CONFIG command may be renamed into something # hard to guess so that it will still be available for internal-use tools # but not available for general clients. # # Example: # # rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52 # # It is also possible to completely kill a command by renaming it into # an empty string: # # rename-command CONFIG "" # # Please note that changing the name of commands that are logged into the # AOF file or transmitted to slaves may cause problems. ################################### CLIENTS #################################### # Set the max number of connected clients at the same time. By default # this limit is set to 10000 clients, however if the Redis server is not # able to configure the process file limit to allow for the specified limit # the max number of allowed clients is set to the current file limit # minus 32 (as Redis reserves a few file descriptors for internal uses). # # Once the limit is reached Redis will close all the new connections sending # an error 'max number of clients reached'. # # 一旦達到最大限制,redis 將關閉全部的新鏈接 # 併發送一個‘max number of clients reached’的錯誤。 # maxclients 10000 ############################## MEMORY MANAGEMENT ################################ # Set a memory usage limit to the specified amount of bytes. # When the memory limit is reached Redis will try to remove keys # according to the eviction policy selected (see maxmemory-policy). 
# # If Redis can't remove keys according to the policy, or if the policy is # set to 'noeviction', Redis will start to reply with errors to commands # that would use more memory, like SET, LPUSH, and so on, and will continue # to reply to read-only commands like GET. # # This option is usually useful when using Redis as an LRU or LFU cache, or to # set a hard memory limit for an instance (using the 'noeviction' policy). # # WARNING: If you have slaves attached to an instance with maxmemory on, # the size of the output buffers needed to feed the slaves are subtracted # from the used memory count, so that network problems / resyncs will # not trigger a loop where keys are evicted, and in turn the output # buffer of slaves is full with DELs of keys evicted triggering the deletion # of more keys, and so forth until the database is completely emptied. # # In short... if you have slaves attached it is suggested that you set a lower # limit for maxmemory so that there is some free RAM on the system for slave # output buffers (but this is not needed if the policy is 'noeviction'). # # 最大使用內存 # maxmemory <bytes> # MAXMEMORY POLICY: how Redis will select what to remove when maxmemory # is reached. You can select among five behaviors: # # volatile-lru -> Evict using approximated LRU among the keys with an expire set. # allkeys-lru -> Evict any key using approximated LRU. # volatile-lfu -> Evict using approximated LFU among the keys with an expire set. # allkeys-lfu -> Evict any key using approximated LFU. # volatile-random -> Remove a random key among the ones with an expire set. # allkeys-random -> Remove a random key, any key. # volatile-ttl -> Remove the key with the nearest expire time (minor TTL) # noeviction -> Don't evict anything, just return an error on write operations. # # 最大內存策略,你有 5 個選擇。 # # volatile-lru -> remove the key with an expire set using an LRU algorithm # volatile-lru -> 使用 LRU 算法移除包含過時設置的 key 。 # allkeys-lru -> remove any key accordingly to the LRU algorithm # allkeys-lru -> 根據 LRU 算法移除全部的 key 。 # volatile-random -> remove a random key with an expire set # allkeys-random -> remove a random key, any key # volatile-ttl -> remove the key with the nearest expire time (minor TTL) # noeviction -> don't expire at all, just return an error on write operations # noeviction -> 不讓任何 key 過時,只是給寫入操做返回一個錯誤 # LRU means Least Recently Used # LFU means Least Frequently Used # # Both LRU, LFU and volatile-ttl are implemented using approximated # randomized algorithms. # # Note: with any of the above policies, Redis will return an error on write # operations, when there are no suitable keys for eviction. # # At the date of writing these commands are: set setnx setex append # incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd # sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby # zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby # getset mset msetnx exec sort # # The default is: # LRU的意思是最近最少使用LFU的意思是最不經常使用的#使用近似方法實現了LRU、LFU和volatile-ttl#隨機算法。 #注意:對於上述任何一種策略,Redis都會在寫入時返回一個錯誤操做,當沒有合適的鍵被驅逐時。 #在寫這些命令的時候,設置setnx setex append地址:rpush lpush rpushx lpushx linsert lset rpoplpush sadd燒結商店sunion sunionstore sdiff sdiffstore zadd zincrbyzunionstore zinterstore hset hsetnx hmset hincrby incrby decrby# getset mset msetnx exec排序 ##默認是: # maxmemory-policy noeviction # LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated # algorithms (in order to save memory), so you can tune it for speed or # accuracy. 
For default Redis will check five keys and pick the one that was # used less recently, you can change the sample size using the following # configuration directive. # # The default of 5 produces good enough results. 10 Approximates very closely # true LRU but costs more CPU. 3 is faster but not very accurate. # LRU、LFU和最小TTL算法不是精確的算法,而是近似的#算法(爲了節省內存), 因此您能夠調整它的速度或#準確性。對於默認的Redis,會檢查5個鍵並選擇原來的鍵#最近使用的較少, 您可使用如下方法更改樣本量#配置指令。#5的默認值產生足夠好的結果。 10密切接近真正的LRU,但須要更多的CPU。3更快,但不是很準確。 # maxmemory-samples 5 ############################# LAZY FREEING #################################### # Redis has two primitives to delete keys. One is called DEL and is a blocking # deletion of the object. It means that the server stops processing new commands # in order to reclaim all the memory associated with an object in a synchronous # way. If the key deleted is associated with a small object, the time needed # in order to execute the DEL command is very small and comparable to most other # O(1) or O(log_N) commands in Redis. However if the key is associated with an # aggregated value containing millions of elements, the server can block for # a long time (even seconds) in order to complete the operation. # # For the above reasons Redis also offers non blocking deletion primitives # such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and # FLUSHDB commands, in order to reclaim memory in background. Those commands # are executed in constant time. Another thread will incrementally free the # object in the background as fast as possible. # # DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled. # It's up to the design of the application to understand when it is a good # idea to use one or the other. However the Redis server sometimes has to # delete keys or flush the whole database as a side effect of other operations. # Specifically Redis deletes objects independently of a user call in the # following scenarios: # # 1) On eviction, because of the maxmemory and maxmemory policy configurations, # in order to make room for new data, without going over the specified # memory limit. # 2) Because of expire: when a key with an associated time to live (see the # EXPIRE command) must be deleted from memory. # 3) Because of a side effect of a command that stores data on a key that may # already exist. For example the RENAME command may delete the old key # content when it is replaced with another one. Similarly SUNIONSTORE # or SORT with STORE option may delete existing keys. The SET command # itself removes any old content of the specified key in order to replace # it with the specified string. # 4) During replication, when a slave performs a full resynchronization with # its master, the content of the whole database is removed in order to # load the RDB file just transfered. # # In all the above cases the default is to delete objects in a blocking way, # like if DEL was called. However you can configure each case specifically # in order to instead release memory in a non-blocking way like if UNLINK # was called, using the following configuration directives: lazy free可譯爲惰性刪除或延遲釋放;當刪除鍵的時候,redis提供異步延時釋放key內存的功能, 把key釋放操做放在bio(Background I/O)單獨的子線程處理中,減小刪除big key對redis主線程的阻塞。 有效地避免刪除big key帶來的性能和可用性問題 lazyfree-lazy-eviction no lazyfree-lazy-expire no lazyfree-lazy-server-del no slave-lazy-flush no ############################## APPEND ONLY MODE ############################### # By default Redis asynchronously dumps the dataset on disk. 
This mode is # good enough in many applications, but an issue with the Redis process or # a power outage may result into a few minutes of writes lost (depending on # the configured save points). # # The Append Only File is an alternative persistence mode that provides # much better durability. For instance using the default data fsync policy # (see later in the config file) Redis can lose just one second of writes in a # dramatic event like a server power outage, or a single write if something # wrong with the Redis process itself happens, but the operating system is # still running correctly. # # AOF and RDB persistence can be enabled at the same time without problems. # If the AOF is enabled on startup Redis will load the AOF, that is the file # with the better durability guarantees. # # Please check http://redis.io/topics/persistence for more information. #默認redis使用的是rdb方式持久化,這種方式在許多應用中已經足夠用了。 可是redis若是中途宕機,會致使可能有幾分鐘的數據丟失,根據save來策略進行持久化, Append Only File是另外一種持久化方式,能夠提供更好的持久化特性。 Redis會把每次寫入的數據在接收後都寫入 appendonly.aof 文件, 每次啓動時Redis都會先把這個文件的數據讀入內存裏,先忽略RDB文件。 appendonly no # The name of the append only file (default: "appendonly.aof") #aof文件名 appendfilename "appendonly.aof" # The fsync() call tells the Operating System to actually write data on disk # instead of waiting for more data in the output buffer. Some OS will really flush # data on disk, some other OS will just try to do it ASAP. # # Redis supports three different modes: # # no: don't fsync, just let the OS flush the data when it wants. Faster. # always: fsync after every write to the append only log. Slow, Safest. # everysec: fsync only one time every second. Compromise. # # The default is "everysec", as that's usually the right compromise between # speed and data safety. It's up to you to understand if you can relax this to # "no" that will let the operating system flush the output buffer when # it wants, for better performances (but if you can live with the idea of # some data loss consider the default persistence mode that's snapshotting), # or on the contrary, use "always" that's very slow but a bit safer than # everysec. # # More details please check the following article: # http://antirez.com/post/redis-persistence-demystified.html # # If unsure, use "everysec". # appendfsync always #aof持久化策略的配置 #no表示不執行fsync,由操做系統保證數據同步到磁盤,速度最快。 #always表示每次寫入都執行fsync,以保證數據同步到磁盤。 #everysec表示每秒執行一次fsync,可能會致使丟失這1s數據。 appendfsync everysec # appendfsync no # When the AOF fsync policy is set to always or everysec, and a background # saving process (a background save or AOF log background rewriting) is # performing a lot of I/O against the disk, in some Linux configurations # Redis may block too long on the fsync() call. Note that there is no fix for # this currently, as even performing fsync in a different thread will block # our synchronous write(2) call. # # In order to mitigate this problem it's possible to use the following option # that will prevent fsync() from being called in the main process while a # BGSAVE or BGREWRITEAOF is in progress. # # This means that while another child is saving, the durability of Redis is # the same as "appendfsync none". In practical terms, this means that it is # possible to lose up to 30 seconds of log in the worst scenario (with the # default Linux settings). # # If you have latency problems turn this to "yes". Otherwise leave it as # "no" that is the safest pick from the point of view of durability. 
# 在aof重寫或者寫入rdb文件的時候,會執行大量IO,此時對於everysec和always的aof模式來講, 執行fsync會形成阻塞過長時間,no-appendfsync-on-rewrite字段設置爲默認設置爲no。 若是對延遲要求很高的應用,這個字段能夠設置爲yes,不然仍是設置爲no, 這樣對持久化特性來講這是更安全的選擇。設置爲yes表示rewrite期間對新寫操做不fsync, 暫時存在內存中,等rewrite完成後再寫入,默認爲no,建議yes。Linux的默認fsync策略是30秒。 可能丟失30秒數據。 no-appendfsync-on-rewrite no # Automatic rewrite of the append only file. # Redis is able to automatically rewrite the log file implicitly calling # BGREWRITEAOF when the AOF log size grows by the specified percentage. # # This is how it works: Redis remembers the size of the AOF file after the # latest rewrite (if no rewrite has happened since the restart, the size of # the AOF at startup is used). # # This base size is compared to the current size. If the current size is # bigger than the specified percentage, the rewrite is triggered. Also # you need to specify a minimal size for the AOF file to be rewritten, this # is useful to avoid rewriting the AOF file even if the percentage increase # is reached but it is still pretty small. # # Specify a percentage of zero in order to disable the automatic AOF # rewrite feature. #aof自動重寫配置。當目前aof文件大小超過上一次重寫的aof文件大小的百分之多少進行重寫, 即當aof文件增加到必定大小的時候Redis可以調用bgrewriteaof對日誌文件進行重寫。 當前AOF文件大小是上第二天志重寫獲得AOF文件大小的二倍(設置爲100)時, 自動啓動新的日誌重寫過程。 auto-aof-rewrite-percentage 100 #設置容許重寫的最小aof文件大小,避免了達到約定百分比但尺寸仍然很小的狀況還要重寫 auto-aof-rewrite-min-size 64mb # An AOF file may be found to be truncated at the end during the Redis # startup process, when the AOF data gets loaded back into memory. # This may happen when the system where Redis is running # crashes, especially when an ext4 filesystem is mounted without the # data=ordered option (however this can't happen when Redis itself # crashes or aborts but the operating system still works correctly). # # Redis can either exit with an error when this happens, or load as much # data as possible (the default now) and start if the AOF file is found # to be truncated at the end. The following option controls this behavior. # # If aof-load-truncated is set to yes, a truncated AOF file is loaded and # the Redis server starts emitting a log to inform the user of the event. # Otherwise if the option is set to no, the server aborts with an error # and refuses to start. When the option is set to no, the user requires # to fix the AOF file using the "redis-check-aof" utility before to restart # the server. # # Note that if the AOF file will be found to be corrupted in the middle # the server will still exit with an error. This option only applies when # Redis will try to read more data from the AOF file but not enough bytes # will be found. #aof文件可能在尾部是不完整的,當redis啓動的時候,aof文件的數據被載入內存。 重啓可能發生在redis所在的主機操做系統宕機後, 尤爲在ext4文件系統沒有加上data=ordered選項(redis宕機或者異常終止不會形成尾部不完整現象。) 出現這種現象,能夠選擇讓redis退出,或者導入儘量多的數據。若是選擇的是yes, 當截斷的aof文件被導入的時候,會自動發佈一個log給客戶端而後load。若是是no, 用戶必須手動redis-check-aof修復AOF文件才能夠。 aof-load-truncated yes # When rewriting the AOF file, Redis is able to use an RDB preamble in the # AOF file for faster rewrites and recoveries. When this option is turned # on the rewritten AOF file is composed of two different stanzas: # # [RDB file][AOF tail] # # When loading Redis recognizes that the AOF file starts with the "REDIS" # string and loads the prefixed RDB file, and continues loading the AOF # tail. # # This is currently turned off by default in order to avoid the surprise # of a format change, but will at some point be used as the default. 
# Redis 4.0 adds a mixed RDB-AOF persistence format. When enabled, the file produced by
# an AOF rewrite contains both RDB-format content, recording the existing dataset, and
# AOF-format content, recording the most recent changes. Redis thus combines the
# advantages of RDB and AOF persistence: rewrite files are generated quickly, and data
# loads back quickly when something goes wrong.
aof-use-rdb-preamble no

################################ LUA SCRIPTING ###############################

# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
# If the maximum execution time (in milliseconds) is reached, Redis logs the event and
# starts replying to queries with an error. Once a script exceeds the limit, only
# SCRIPT KILL and SHUTDOWN NOSAVE are available: the first can kill a script that has
# not yet issued write commands; once a write has been issued, the second is the only
# way to stop the server without waiting for the script to finish.
lua-time-limit 5000

################################ REDIS CLUSTER ###############################
#
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
# Cluster switch; cluster mode is disabled by default.
# cluster-enabled yes

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# Name of the cluster configuration file. Every node keeps a cluster-related file in
# which cluster state is persisted. It is not meant to be edited by hand: Redis creates
# and updates it. Each cluster node needs its own file, so make sure the names do not
# collide between instances running on the same system.
# cluster-config-file nodes-6379.conf

# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
# Threshold for node-to-node unreachability; cluster node timeout in milliseconds.
# cluster-node-timeout 15000

# A slave of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a slave to actually have an exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple slaves able to failover, they exchange messages
#    in order to try to give an advantage to the slave with the best
#    replication offset (more data from the master processed).
#    Slaves will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
#
# 2) Every single slave computes the time of the last interaction with
#    its master. This can be the last ping or command received (if the master
#    is still in the "connected" state), or the time that elapsed since the
#    disconnection with the master (if the replication link is currently down).
#    If the last interaction is too old, the slave will not try to failover
#    at all.
# # The point "2" can be tuned by user. Specifically a slave will not perform # the failover if, since the last interaction with the master, the time # elapsed is greater than: # # (node-timeout * slave-validity-factor) + repl-ping-slave-period # # So for example if node-timeout is 30 seconds, and the slave-validity-factor # is 10, and assuming a default repl-ping-slave-period of 10 seconds, the # slave will not try to failover if it was not able to talk with the master # for longer than 310 seconds. # # A large slave-validity-factor may allow slaves with too old data to failover # a master, while a too small value may prevent the cluster from being able to # elect a slave at all. # # For maximum availability, it is possible to set the slave-validity-factor # to a value of 0, which means, that slaves will always try to failover the # master regardless of the last time they interacted with the master. # (However they'll always try to apply a delay proportional to their # offset rank). # # Zero is the only value able to guarantee that when all the partitions heal # the cluster will always be able to continue. # #在進行故障轉移的時候,所有slave都會請求申請爲master, 可是有些slave可能與master斷開鏈接一段時間了,致使數據過於陳舊, 這樣的slave不該該被提高爲master。該參數就是用來判>斷slave節點與master斷線的時間是否過長。 判斷方法是: #比較slave斷開鏈接的時間和(node-timeout * slave-validity-factor) + repl-ping-slave-period #若是節點超時時間爲三十秒, 而且slave-validity-factor爲10,假設默認的repl-ping-slave-period是10秒, 即若是超過310秒slave將不會嘗試進行故障轉移 #可能出現因爲某主節點失聯卻沒有從節點能頂上的狀況,從而致使集羣不能正常工做,在這種狀況下, 只有等到原來的主節點從新迴歸到集羣,集羣才恢復運做 #若是設置成0,則不管從節點與主節點失聯多久,從節點都會嘗試升級成主節 # cluster-slave-validity-factor 10 # Cluster slaves are able to migrate to orphaned masters, that are masters # that are left without working slaves. This improves the cluster ability # to resist to failures as otherwise an orphaned master can't be failed over # in case of failure if it has no working slaves. # # Slaves migrate to orphaned masters only if there are still at least a # given number of other working slaves for their old master. This number # is the "migration barrier". A migration barrier of 1 means that a slave # will migrate only if there is at least 1 other working slave for its master # and so forth. It usually reflects the number of slaves you want for every # master in your cluster. # # Default is 1 (slaves migrate only if their masters remain with at least # one slave). To disable migration just set it to a very large value. # A value of 0 can be set but is useful only for debugging and dangerous # in production. # #master的slave數量大於該值,slave才能遷移到其餘孤立master上,如這個參數若被設爲2, 那麼只有當一個主節點擁有2 個可工做的從節點時,它的一個從節點會嘗試遷移。 #主節點須要的最小從節點數,只有達到這個數,主節點失敗時,它從節點纔會進行遷移。 # cluster-migration-barrier 1 # By default Redis Cluster nodes stop accepting queries if they detect there # is at least an hash slot uncovered (no available node is serving it). # This way if the cluster is partially down (for example a range of hash slots # are no longer covered) all the cluster becomes, eventually, unavailable. # It automatically returns available as soon as all the slots are covered again. # # However sometimes you want the subset of the cluster which is working, # to continue to accept queries for the part of the key space that is still # covered. In order to do so, just set the cluster-require-full-coverage # option to no. 
#
# By default the cluster state is ok, and the cluster serves requests, only when every
# slot is assigned to some node. Set this to no to keep serving requests while some slots
# are unassigned. Enabling that is not recommended: during a partition, the masters in a
# small partition would keep accepting writes, leading to long-lived inconsistency.
# In other words, when the nodes holding part of the keys are unavailable, "yes" (the
# default) makes the whole cluster stop accepting operations, while "no" lets the cluster
# keep serving the keys on the reachable nodes.
# cluster-require-full-coverage yes

# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.

########################## CLUSTER DOCKER/NAT support ########################

# In certain deployments, Redis Cluster nodes address discovery fails, because
# addresses are NAT-ted or because ports are forwarded (the typical case is
# Docker and other containers).
#
# In order to make Redis Cluster working in such environments, a static
# configuration where each node knows its public address is needed. The
# following two options are used for this scope, and are:
# In some deployments Redis Cluster node address discovery fails because addresses are
# NATted or because ports are forwarded (typically Docker and other containers). To make
# Redis Cluster work in such environments, a static configuration in which every node
# knows its public address is needed. The following options are used for this purpose:
#
# * cluster-announce-ip        (the node's externally reachable IP; the author notes that
#                               each node's NIC should get its own IP, with the gateway IP
#                               used as a stand-in for now)
# * cluster-announce-port      (the node's mapped client port)
# * cluster-announce-bus-port  (the node's cluster bus port)
#
# Each instruct the node about its address, client port, and cluster message
# bus port. The information is then published in the header of the bus packets
# so that other nodes will be able to correctly map the address of the node
# publishing the information.
#
# If the above options are not used, the normal Redis Cluster auto-detection
# will be used instead.
#
# Note that when remapped, the bus port may not be at the fixed offset of
# clients port + 10000, so you can specify any port and bus-port depending
# on how they get remapped. If the bus-port is not set, a fixed offset of
# 10000 will be used as usually.
#
# Example:
# Each option tells the node its address, client port and cluster message bus port; the
# information is then published in the bus packet headers so that other nodes can
# correctly map the address of the node publishing it. If these options are not used,
# normal Redis Cluster auto-detection applies. Note that when remapped, the bus port may
# not sit at the fixed clients-port + 10000 offset, so any port and bus-port can be
# specified according to how they are remapped; if the bus-port is not set, the usual
# fixed offset of 10000 is used. Example:
# cluster-announce-ip 10.1.1.5
# cluster-announce-port 6379
# cluster-announce-bus-port 6380

################################## SLOW LOG ###################################

# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.
# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
# The slow log records commands whose execution exceeded the configured time. It is kept
# in memory, so no I/O is involved. Requests slower than slowlog-log-slower-than are
# logged; the unit is microseconds, so 1000000 is one second. A negative value disables
# the slow log, while 0 forces logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
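# Annotation: the slowlog-max-len directive follows below; first, a quick redis-cli
# sketch of working with the slow log configured above (the threshold value is only an
# example):
#
#   redis-cli CONFIG SET slowlog-log-slower-than 10000   # log commands slower than 10000 microseconds (10 ms)
#   redis-cli SLOWLOG GET 10                              # show the 10 most recent slow entries
#   redis-cli SLOWLOG LEN                                 # number of entries currently stored
#   redis-cli SLOWLOG RESET                               # discard the entries and reclaim their memory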
# Length of the slow log. When a new command is logged, the oldest entry is removed.
# There is no hard limit on this length as long as there is enough memory; memory used
# by the slow log can be reclaimed with SLOWLOG RESET.
slowlog-max-len 128

################################ LATENCY MONITOR ##############################

# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
# Latency monitoring tracks operations in the instance that are slow to execute; the
# LATENCY command prints graphs and reports of the collected timings. Only operations
# taking at least the configured number of milliseconds are recorded; 0 turns monitoring
# off. The feature is disabled by default and, if needed, can be enabled at runtime with
# CONFIG SET.
latency-monitor-threshold 0

############################# EVENT NOTIFICATION ##############################

# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
# K Keyspace events, published with __keyspace@<db>__ prefix.
# E Keyevent events, published with __keyevent@<db>__ prefix.
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
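# Annotation: before the summary of these flags below, a hedged example of the expired-key
# notifications described above, using two redis-cli sessions against database 0 (the key
# name demo:key is made up for illustration):
#
#   # terminal 1: enable keyevent notifications for expired events and listen for them
#   redis-cli CONFIG SET notify-keyspace-events Ex
#   redis-cli SUBSCRIBE __keyevent@0__:expired
#
#   # terminal 2: create a key with a short TTL; shortly after it expires, terminal 1
#   # receives a message on __keyevent@0__:expired carrying the key name
#   redis-cli SET demo:key hello EX 1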
# Keyspace notifications let clients subscribe to channels or patterns and receive events
# describing how the Redis dataset was modified. Because the feature costs some CPU, it
# is disabled in the default configuration. The notify-keyspace-events argument is any
# combination of the following characters, selecting which notifications the server sends:
#   K  keyspace notifications, published with the __keyspace@<db>__ prefix
#   E  keyevent notifications, published with the __keyevent@<db>__ prefix
#   g  generic, type-independent commands such as DEL, EXPIRE, RENAME
#   $  string commands
#   l  list commands
#   s  set commands
#   h  hash commands
#   z  sorted set commands
#   x  expired events, sent whenever an expired key is deleted
#   e  evicted events, sent whenever a key is removed by the maxmemory policy
#   A  alias for g$lshzxe
# At least one of K or E must be present, otherwise no notification is delivered no
# matter what the other flags are. See http://redis.io/topics/notifications for details.
notify-keyspace-events ""

############################### ADVANCED CONFIG ###############################

# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
# A hash with at most hash-max-ziplist-entries fields is stored as a ziplist; beyond
# that, a hash table is used.
hash-max-ziplist-entries 512
# Values up to hash-max-ziplist-value bytes keep the ziplist encoding; larger values
# switch the hash to a hash table.
hash-max-ziplist-value 64

# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# For a fixed maximum size, use -5 through -1, meaning:
# -5: max size: 64 Kb <-- not recommended for normal workloads
# -4: max size: 32 Kb <-- not recommended
# -3: max size: 16 Kb <-- probably not recommended
# -2: max size: 8 Kb <-- good
# -1: max size: 4 Kb <-- good
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
list-max-ziplist-size -2

# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression. The head and tail of the list
# are always uncompressed for fast push/pop operations. Settings are:
# 0: disable all list compression
# 1: depth 1 means "don't start compressing until after 1 node into the list,
#    going from either the head or tail"
#    So: [head]->node->node->...->node->[tail]
#    [head], [tail] will always be uncompressed; inner nodes will compress.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
#    2 here means: don't compress head or head->next or tail->prev or tail,
#    but compress all nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
#    etc.
list-compress-depth 0

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
# # The suggested value is ~ 3000 in order to have the benefits of # the space efficient encoding without slowing down too much PFADD, # which is O(N) with the sparse encoding. The value can be raised to # ~ 10000 when CPU is not a concern, but space is, and the data set is # composed of many HyperLogLogs with cardinality in the 0 - 15000 range. hll-sparse-max-bytes 3000 # Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in # order to help rehashing the main Redis hash table (the one mapping top-level # keys to values). The hash table implementation Redis uses (see dict.c) # performs a lazy rehashing: the more operation you run into a hash table # that is rehashing, the more rehashing "steps" are performed, so if the # server is idle the rehashing is never complete and some more memory is used # by the hash table. # # The default is to use this millisecond 10 times every second in order to # actively rehash the main dictionaries, freeing memory when possible. # # If unsure: # use "activerehashing no" if you have hard latency requirements and it is # not a good thing in your environment that Redis can reply from time to time # to queries with 2 milliseconds delay. # # use "activerehashing yes" if you don't have such hard requirements but # want to free memory asap when possible. activerehashing yes # The client output buffer limits can be used to force disconnection of clients # that are not reading data from the server fast enough for some reason (a # common reason is that a Pub/Sub client can't consume messages as fast as the # publisher can produce them). # # The limit can be set differently for the three different classes of clients: # # normal -> normal clients including MONITOR clients # slave -> slave clients # pubsub -> clients subscribed to at least one pubsub channel or pattern # # The syntax of every client-output-buffer-limit directive is the following: # # client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds> # # A client is immediately disconnected once the hard limit is reached, or if # the soft limit is reached and remains reached for the specified number of # seconds (continuously). # So for instance if the hard limit is 32 megabytes and the soft limit is # 16 megabytes / 10 seconds, the client will get disconnected immediately # if the size of the output buffers reach 32 megabytes, but will also get # disconnected if the client reaches 16 megabytes and continuously overcomes # the limit for 10 seconds. # # By default normal clients are not limited because they don't receive data # without asking (in a push way), but just after a request, so only # asynchronous clients may create a scenario where data is requested faster # than it can read. # # Instead there is a default limit for pubsub and slave clients, since # subscribers and slaves receive data in a push fashion. # # Both the hard or the soft limit can be disabled by setting them to zero. client-output-buffer-limit normal 0 0 0 client-output-buffer-limit slave 256mb 64mb 60 client-output-buffer-limit pubsub 32mb 8mb 60 # Client query buffers accumulate new commands. They are limited to a fixed # amount by default in order to avoid that a protocol desynchronization (for # instance due to a bug in the client) will lead to unbound memory usage in # the query buffer. However you can configure it here if you have very special # needs, such us huge multi/exec requests or alike. 
# # client-query-buffer-limit 1gb # In the Redis protocol, bulk requests, that are, elements representing single # strings, are normally limited ot 512 mb. However you can change this limit # here. # # proto-max-bulk-len 512mb # Redis calls an internal function to perform many background tasks, like # closing connections of clients in timeout, purging expired keys that are # never requested, and so forth. # # Not all tasks are performed with the same frequency, but Redis checks for # tasks to perform according to the specified "hz" value. # # By default "hz" is set to 10. Raising the value will use more CPU when # Redis is idle, but at the same time will make Redis more responsive when # there are many keys expiring at the same time, and timeouts may be # handled with more precision. # # The range is between 1 and 500, however a value over 100 is usually not # a good idea. Most users should use the default of 10 and raise this up to # 100 only in environments where very low latency is required. hz 10 # When a child rewrites the AOF file, if the following option is enabled # the file will be fsync-ed every 32 MB of data generated. This is useful # in order to commit the file to the disk more incrementally and avoid # big latency spikes. aof-rewrite-incremental-fsync yes # Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good # idea to start with the default settings and only change them after investigating # how to improve the performances and how the keys LFU change over time, which # is possible to inspect via the OBJECT FREQ command. # # There are two tunable parameters in the Redis LFU implementation: the # counter logarithm factor and the counter decay time. It is important to # understand what the two parameters mean before changing them. # # The LFU counter is just 8 bits per key, it's maximum value is 255, so Redis # uses a probabilistic increment with logarithmic behavior. Given the value # of the old counter, when a key is accessed, the counter is incremented in # this way: # # 1. A random number R between 0 and 1 is extracted. # 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1). # 3. The counter is incremented only if R < P. # # The default lfu-log-factor is 10. This is a table of how the frequency # counter changes with a different number of accesses with different # logarithmic factors: # # +--------+------------+------------+------------+------------+------------+ # | factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits | # +--------+------------+------------+------------+------------+------------+ # | 0 | 104 | 255 | 255 | 255 | 255 | # +--------+------------+------------+------------+------------+------------+ # | 1 | 18 | 49 | 255 | 255 | 255 | # +--------+------------+------------+------------+------------+------------+ # | 10 | 10 | 18 | 142 | 255 | 255 | # +--------+------------+------------+------------+------------+------------+ # | 100 | 8 | 11 | 49 | 143 | 255 | # +--------+------------+------------+------------+------------+------------+ # # NOTE: The above table was obtained by running the following commands: # # redis-benchmark -n 1000000 incr foo # redis-cli object freq foo # # NOTE 2: The counter initial value is 5 in order to give new objects a chance # to accumulate hits. # # The counter decay time is the time, in minutes, that must elapse in order # for the key counter to be divided by two (or decremented if it has a value # less <= 10). # # The default value for the lfu-decay-time is 1. 
A Special value of 0 means to # decay the counter every time it happens to be scanned. # # lfu-log-factor 10 # lfu-decay-time 1 ########################### ACTIVE DEFRAGMENTATION ####################### # # WARNING THIS FEATURE IS EXPERIMENTAL. However it was stress tested # even in production and manually tested by multiple engineers for some # time. # # What is active defragmentation? # ------------------------------- # # Active (online) defragmentation allows a Redis server to compact the # spaces left between small allocations and deallocations of data in memory, # thus allowing to reclaim back memory. # # Fragmentation is a natural process that happens with every allocator (but # less so with Jemalloc, fortunately) and certain workloads. Normally a server # restart is needed in order to lower the fragmentation, or at least to flush # away all the data and create it again. However thanks to this feature # implemented by Oran Agra for Redis 4.0 this process can happen at runtime # in an "hot" way, while the server is running. # # Basically when the fragmentation is over a certain level (see the # configuration options below) Redis will start to create new copies of the # values in contiguous memory regions by exploiting certain specific Jemalloc # features (in order to understand if an allocation is causing fragmentation # and to allocate it in a better place), and at the same time, will release the # old copies of the data. This process, repeated incrementally for all the keys # will cause the fragmentation to drop back to normal values. # # Important things to understand: # # 1. This feature is disabled by default, and only works if you compiled Redis # to use the copy of Jemalloc we ship with the source code of Redis. # This is the default with Linux builds. # # 2. You never need to enable this feature if you don't have fragmentation # issues. # # 3. Once you experience fragmentation, you can enable this feature when # needed with the command "CONFIG SET activedefrag yes". # # The configuration parameters are able to fine tune the behavior of the # defragmentation process. If you are not sure about what they mean it is # a good idea to leave the defaults untouched. # Enabled active defragmentation # activedefrag yes # Minimum amount of fragmentation waste to start active defrag # active-defrag-ignore-bytes 100mb # Minimum percentage of fragmentation to start active defrag # active-defrag-threshold-lower 10 # Maximum percentage of fragmentation at which we use maximum effort # active-defrag-threshold-upper 100 # Minimal effort for defrag in CPU percentage # active-defrag-cycle-min 25 # Maximal effort for defrag in CPU percentage # active-defrag-cycle-max 75
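# Annotation: a hedged redis-cli sketch for inspecting the LFU counters and the
# defragmentation state described above. OBJECT FREQ only answers while an lfu
# maxmemory-policy is active, and activedefrag only works on builds compiled with the
# bundled Jemalloc; the key somekey is made up for illustration:
#
#   redis-cli CONFIG SET maxmemory-policy allkeys-lfu
#   redis-cli SET somekey somevalue
#   redis-cli OBJECT FREQ somekey          # access-frequency counter (starts near 5, saturates at 255)
#   redis-cli CONFIG SET activedefrag yes  # enable online defragmentation once fragmentation is an issue
#   redis-cli INFO memory                  # mem_fragmentation_ratio indicates whether defrag is worthwhile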