Redis keys are binary safe: any binary sequence can be used as a key, from a simple string like "foo" to the contents of a JPEG file. The empty string is also a valid key.
A few rules about keys:
A key such as `user:1000:password` is perfectly fine: it is easier to read than a terse abbreviation, and the extra space it costs is small compared with the key object and value object themselves. That said, nobody stops you from using a shorter key to save a little space.
```
127.0.0.1:6379> set user:01:passwd 99999
OK
127.0.0.1:6379> get user:01:passwd
"99999"
```
This is one of the simplest Redis data types. If you only used this type, Redis would essentially be a persistent memcached server (note: memcached keeps data only in memory, so its data is lost when the server restarts). Of course, Redis offers far more string functionality than memcached. Let's play with the string type:
(1) Regular String values
```
[root@redis01 scripts]# redis-cli -a yunjisuan set work ">9000"
OK
[root@redis01 scripts]# redis-cli -a yunjisuan get work
">9000"
```
- In Redis we normally use SET to store a key/value pair, and GET to retrieve the string value.
- The value can be any kind of string (including binary data); for example, you can store a JPEG image under a key. The value must not exceed 1GB.
- Although strings are the basic value type of Redis, it supports many more operations on them.
(2) The String type can also store numbers, and supports increment and decrement operations.
```
[root@redis01 scripts]# redis-cli -a yunjisuan set counter 1
OK
[root@redis01 scripts]# redis-cli -a yunjisuan incr counter        # increment by 1
(integer) 2
[root@redis01 scripts]# redis-cli -a yunjisuan incr counter
(integer) 3
[root@redis01 scripts]# redis-cli -a yunjisuan get counter
"3"
[root@redis01 scripts]# redis-cli -a yunjisuan incrby counter 2    # increment by a given amount
(integer) 5
[root@redis01 scripts]# redis-cli -a yunjisuan incrby counter 2
(integer) 7
[root@redis01 scripts]# redis-cli -a yunjisuan get counter
"7"
[root@redis01 scripts]# redis-cli -a yunjisuan decr counter        # decrement by 1
(integer) 6
[root@redis01 scripts]# redis-cli -a yunjisuan decr counter
(integer) 5
[root@redis01 scripts]# redis-cli -a yunjisuan get counter
"5"
[root@redis01 scripts]# redis-cli -a yunjisuan decrby counter 2    # decrement by a given amount
(integer) 3
[root@redis01 scripts]# redis-cli -a yunjisuan decrby counter 2
(integer) 1
[root@redis01 scripts]# redis-cli -a yunjisuan get counter
"1"
```
INCR parses the string value as an integer, increments it by one, and stores the result as the new string value. Related commands are listed below:
| Command | Description | Syntax |
|---|---|---|
| incr | increment by 1 | INCR key |
| incrby | increment by a given amount | INCRBY key increment |
| decr | decrement by 1 | DECR key |
| decrby | decrement by a given amount | DECRBY key decrement |
(3) Set a new value for a key and return the old value
Another string operation is GETSET, which does exactly what its name suggests: it sets a new value for a key and returns the old one. What is this useful for? Suppose your system increments a Redis key with INCR every time a new visitor arrives, and you want to collect this figure once an hour. You can GETSET the key to 0 and read back the previous value.
```
[root@redis01 scripts]# redis-cli -a yunjisuan
127.0.0.1:6379> set user01 zhangsan        # set a new key/value pair
OK
127.0.0.1:6379> get user01
"zhangsan"
127.0.0.1:6379> getset user01 wangwu       # set new data and return the old data
"zhangsan"
127.0.0.1:6379> getset user01 liliu        # set new data and return the old data
"wangwu"
127.0.0.1:6379> getset user01 gongli       # set new data and return the old data
"liliu"
127.0.0.1:6379> get user01
"gongli"
```
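The hourly-collection pattern described above can be sketched in a few lines. This is a minimal simulation, not a real Redis client: INCR and GETSET are modeled on an in-memory dict, and the key name `visits` is made up for illustration.

```python
# Simulate the visitor-counter reset pattern: INCR on every visit,
# GETSET("visits", 0) once an hour to read-and-reset atomically.
store = {}

def incr(key):
    """INCR: parse as integer, add one, store back."""
    store[key] = int(store.get(key, 0)) + 1
    return store[key]

def getset(key, value):
    """GETSET: set a new value and return the old one."""
    old = store.get(key)
    store[key] = value
    return old

# Each page view increments the counter:
for _ in range(5):
    incr("visits")

# The hourly job resets the counter and keeps the old value:
count_last_hour = getset("visits", 0)
print(count_last_hour)   # 5
print(store["visits"])   # 0
```

Because GETSET is a single command on a real server, no visits are lost between the read and the reset.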
(4) The String type also supports batch reads and writes
```
127.0.0.1:6379> mset name zhangsan age 44
OK
127.0.0.1:6379> mget name age
1) "zhangsan"
2) "44"
```
(5) The string type also supports modifying and retrieving parts of a value
```
127.0.0.1:6379> set images flower
OK
127.0.0.1:6379> get images
"flower"
127.0.0.1:6379> append images .jpg     # append to the string
(integer) 10
127.0.0.1:6379> get images
"flower.jpg"
127.0.0.1:6379> strlen images
(integer) 10
127.0.0.1:6379> substr images 0 6
"flower."
127.0.0.1:6379> substr images 0 5
"flower"
```
Getting help on commands:
```
# view help for a single command: help <command>
127.0.0.1:6379> help set

  SET key value [EX seconds] [PX milliseconds] [NX|XX]
  summary: Set the string value of a key
  since: 1.0.0
  group: string

127.0.0.1:6379> help mset

  MSET key value [key value ...]
  summary: Set multiple keys to multiple values
  since: 1.0.1
  group: string
```
- To explain the list data type properly, a little theory helps: in the IT world, the word "List" is often misused. For example, "Python Lists" are not what the name suggests (Linked Lists); they are actually arrays (the same data type is called an array in Ruby).
- Generally speaking, a list is just a sequence of ordered elements: 10, 20, 1, 2, 3 is a list. But a list implemented with an array and one implemented with a Linked List have very different properties.
- Redis lists are implemented with Linked Lists. This means that even if a list holds millions of elements, adding an element at its head or tail takes constant time. Adding a new element with LPUSH to a ten-element list is just as fast as adding one to a list of ten million elements.
- So what's the bad news? Accessing an element by index is extremely fast in an array-backed list, but not nearly as fast in a linked-list-backed one.
- Redis Lists are implemented with linked lists because, for a database system, the ability to add elements to a very long list very quickly is crucial. Another important factor, as you will see, is that Redis lists can return a range of constant length in constant time.
Getting started with Redis lists:
The LPUSH command adds a new element to the left side (head) of a list, while RPUSH adds a new element to the right side (tail). Finally, LRANGE extracts a range of elements from the list.
Redis can store data as a list and offers a rich set of operations on it:
```
127.0.0.1:6379> lpush students "zhangsan"      # push "zhangsan" onto the left (head) of the students list
(integer) 1
127.0.0.1:6379> lpush students "wangwu"        # push "wangwu" onto the left of the list
(integer) 2
127.0.0.1:6379> lpush students "liliu"         # push "liliu" onto the left of the list
(integer) 3
127.0.0.1:6379> lrange students 0 2            # list the elements at indexes 0 through 2
1) "liliu"
2) "wangwu"
3) "zhangsan"
127.0.0.1:6379> rpush students "wangyue"       # push "wangyue" onto the right (tail) of the list
(integer) 4
127.0.0.1:6379> lrange students 0 3            # list the elements at indexes 0 through 3
1) "liliu"
2) "wangwu"
3) "zhangsan"
4) "wangyue"
127.0.0.1:6379> llen students                  # count the elements in the list
(integer) 4
127.0.0.1:6379> lpop students                  # pop the leftmost element
"liliu"
127.0.0.1:6379> rpop students                  # pop the rightmost element
"wangyue"
127.0.0.1:6379> lrange students 0 3            # only two elements are left
1) "wangwu"
2) "zhangsan"
127.0.0.1:6379> rpush students zhangsan
(integer) 3
127.0.0.1:6379> rpush students zhangsan
(integer) 4
127.0.0.1:6379> lrange students 0 3
1) "wangwu"
2) "zhangsan"
3) "zhangsan"
4) "zhangsan"
127.0.0.1:6379> lrem students 2 "zhangsan"     # remove up to two occurrences of "zhangsan" (scanning left to right)
(integer) 2
127.0.0.1:6379> lrange students 0 3
1) "wangwu"
2) "zhangsan"
127.0.0.1:6379> rpush students zhangsan
(integer) 3
127.0.0.1:6379> rpush students zhangsan
(integer) 4
127.0.0.1:6379> lrem students 1 "zhangsan"     # remove one occurrence of "zhangsan"
(integer) 1
127.0.0.1:6379> lrange students 0 3
1) "wangwu"
2) "zhangsan"
3) "zhangsan"
127.0.0.1:6379> lrem students 0 "zhangsan"     # remove all remaining "zhangsan" elements
(integer) 2
127.0.0.1:6379> lrange students 0 3
1) "wangwu"
```
Redis also supports many modification operations:
```
# linsert: insert an element before or after a given value in the list
127.0.0.1:6379> lrange students 0 5
1) "wangwu"
127.0.0.1:6379> lpush students a b c d         # push the elements a, b, c, d from the left
(integer) 5
127.0.0.1:6379> lrange students 0 5
1) "d"
2) "c"
3) "b"
4) "a"
5) "wangwu"
127.0.0.1:6379> linsert students before b xxxx # insert the element xxxx before the element b
(integer) 6
127.0.0.1:6379> lrange students 0 9
1) "d"
2) "c"
3) "xxxx"
4) "b"
5) "a"
6) "wangwu"
127.0.0.1:6379> linsert students after b xxxx  # insert the element xxxx after the element b
(integer) 7
127.0.0.1:6379> lrange students 0 9
1) "d"
2) "c"
3) "xxxx"
4) "b"
5) "xxxx"
6) "a"
7) "wangwu"
```
For more operations, see `help @list`.
Uses for lists:
- As you may have guessed from the examples above, lists can be used to implement a chat system, or as a queue for passing messages between processes. The key point is that you always access the data in the order it was added. This requires no SQL ORDER BY, is very fast, and scales easily to millions of items.
- For example, in a ranking system such as the social news site reddit.com, you can add each newly submitted link to a list and use LRANGE to paginate the results simply.
- In a blog engine, you can keep one list per post and push comments onto it, and so on.
- Push IDs into a Redis list rather than the actual data.
- In the examples above we pushed our "objects" (simple messages in this case) directly into a Redis list, but you usually shouldn't: an object may be referenced many times, for example by a list that maintains its chronological order, by a set that records its category, by other lists when necessary, and so on.
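The "store IDs, not objects" layout can be sketched as follows. This is a simulation with plain Python containers, not a Redis client; the key names (`message:<id>`, a timeline list, per-category sets) are made up for illustration.

```python
# One copy of each object body, referenced by ID from several structures.
objects = {}        # stands in for Redis strings: message:<id> -> body
timeline = []       # stands in for a Redis list of IDs (LPUSH order)
by_category = {}    # stands in for Redis sets: category -> set of IDs

def publish(msg_id, body, category):
    objects[f"message:{msg_id}"] = body                  # SET message:<id> <body>
    timeline.insert(0, msg_id)                           # LPUSH timeline <id>
    by_category.setdefault(category, set()).add(msg_id)  # SADD <category> <id>

publish(1, "hello", "greetings")
publish(2, "redis lists", "tech")

# The same ID now appears in several structures without duplicating the body:
latest = timeline[0]
print(objects[f"message:{latest}"])   # redis lists
```

Deleting or editing the object then means touching one key, while every list or set that mentions the ID stays valid.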
- Redis Sets are unordered collections of binary-safe strings. The SADD command adds a new element to a set. Many set-related operations exist, such as testing whether an element is present and computing intersections, unions, and differences. An example is worth a thousand words:
- Redis can store a series of non-repeating values as a set
```
127.0.0.1:6379> sadd users laoda               # add the element "laoda" to the set users
(integer) 1
127.0.0.1:6379> sadd users laoer laosan        # add the elements laoer and laosan to the set users
(integer) 2
127.0.0.1:6379> smembers users                 # list all elements of the set
1) "laosan"                                    # note that set elements are unordered
2) "laoda"
3) "laoer"
# We added three elements and asked Redis to return them all.
# Now let's check whether a given element exists in the set:
127.0.0.1:6379> sismember users laoda          # is laoda a member of the set users?
(integer) 1                                    # yes
127.0.0.1:6379> sismember users laoer          # is laoer a member?
(integer) 1                                    # yes
127.0.0.1:6379> sismember users laosan         # is laosan a member?
(integer) 1                                    # yes
127.0.0.1:6379> sismember users laosi          # is laosi a member?
(integer) 0                                    # no
```
- "laoda" is a member of the set, while "laosi" is not. Sets are especially good at expressing relationships between objects. For example, a tagging feature is easy to implement with Redis sets.
- Here is a simple scheme: for each object you want to tag, associate a set of tag IDs with it; and for each existing tag, associate a set of object IDs with it.
- For example, suppose our news item with ID 1000 is tagged with the four tags 1, 2, 5, and 77. We can set up the following two kinds of sets:
```
[root@redis-master ~]# redis-cli -a yunjisuan sadd news:1000:tags 1
(integer) 1
[root@redis-master ~]# redis-cli -a yunjisuan sadd news:1000:tags 2
(integer) 1
[root@redis-master ~]# redis-cli -a yunjisuan sadd news:1000:tags 5
(integer) 1
[root@redis-master ~]# redis-cli -a yunjisuan sadd news:1000:tags 77
(integer) 1
[root@redis-master ~]# redis-cli -a yunjisuan sadd tag:1:objects 1000
(integer) 1
[root@redis-master ~]# redis-cli -a yunjisuan sadd tag:2:objects 1000
(integer) 1
[root@redis-master ~]# redis-cli -a yunjisuan sadd tag:5:objects 1000
(integer) 1
[root@redis-master ~]# redis-cli -a yunjisuan sadd tag:77:objects 1000
(integer) 1

# To get all the tags of an object, we only need:
[root@redis-master ~]# redis-cli -a yunjisuan smembers news:1000:tags   # all elements of the set news:1000:tags
1) "1"      # tag ID
2) "2"      # tag ID
3) "5"      # tag ID
4) "77"     # tag ID

# To find which objects carry a given tag, we only need:
[root@redis-master ~]# redis-cli -a yunjisuan smembers tag:5:objects    # all elements of the set tag:5:objects
1) "1000"   # object (news) ID
```
Some operations that look less trivial are still easy with the right Redis command. For example, suppose we want the list of objects that carry the tags 1, 2, 5, and 77 at the same time. The SINTER command computes the intersection of several sets, so all we need is:
```
[root@redis-master ~]# redis-cli -a yunjisuan sadd tag:1:objects 500    # add the element "500" to the set tag:1:objects
(integer) 1
[root@redis-master ~]# redis-cli -a yunjisuan smembers tag:1:objects    # list all elements of tag:1:objects
1) "500"
2) "1000"
[root@redis-master ~]# redis-cli -a yunjisuan smembers tag:2:objects    # list all elements of tag:2:objects
1) "1000"
[root@redis-master ~]# redis-cli -a yunjisuan sinter tag:1:objects tag:2:objects tag:5:objects tag:77:objects
# intersection of the sets tag:1:objects ... tag:77:objects
1) "1000"
```
How to get a unique identifier for a string:
- In the tagging example we used tag IDs without saying where they come from. Basically, you have to assign a unique identifier to every tag added to the system, and you also want no races when multiple clients try to add the same tag at the same time. Furthermore, if the tag already exists, you want its existing ID returned; otherwise a new unique ID should be created and associated with the tag.
- Redis 1.4 added the Hash type. With it, associating strings with unique IDs becomes trivial, but how can we solve this reliably with the Redis commands available today?
- Our first attempt (which ends in failure) might look like this:
Suppose we want to get a unique ID for the tag "redis":
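The source omits the code for this naive attempt, so here is a hedged sketch of it, simulated with an in-memory dict instead of a live server; the key names `tag:<tag>:id` and `next.tag.id` follow the pattern used in this section.

```python
# Naive approach: GET the tag's ID; if nil, INCR a global counter and SET it.
store = {}

def incr(key):
    store[key] = int(store.get(key, 0)) + 1
    return store[key]

def naive_tag_id(tag):
    key = f"tag:{tag}:id"
    tag_id = store.get(key)           # GET tag:<tag>:id
    if tag_id is None:                # tag has no ID yet
        tag_id = incr("next.tag.id")  # INCR next.tag.id
        store[key] = tag_id           # SET tag:<tag>:id <new id>
        # RACE: two clients can both see nil above, both run INCR,
        # and one of them overwrites the other and returns a wrong ID.
    return tag_id

print(naive_tag_id("redis"))   # 1
print(naive_tag_id("redis"))   # 1
```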
But there is a problem here: what happens if two clients run this sequence at the same time, both trying to get a unique ID for the tag "redis"? With unlucky timing they will both get nil from the GET operation, both will increment the next.tag.id key, and the key will be incremented twice. One of the clients will return a wrong ID to its caller.
Fortunately, fixing this algorithm is not hard. Here is the sensible version:
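The source omits the fixed code as well; the classic fix uses SETNX (SET if Not eXists), so that only the first client can associate an ID with the tag and everyone then re-reads the winning value. A minimal sketch, again simulated with an in-memory dict:

```python
# Race-free version: INCR a candidate ID, claim it with SETNX, then re-read.
store = {}

def incr(key):
    store[key] = int(store.get(key, 0)) + 1
    return store[key]

def setnx(key, value):
    """SETNX: write only if the key does not exist; True for the first writer."""
    if key in store:
        return False
    store[key] = value
    return True

def tag_id(tag):
    key = f"tag:{tag}:id"
    if key not in store:                 # GET returned nil
        candidate = incr("next.tag.id")  # INCR next.tag.id
        setnx(key, candidate)            # only the first client wins the key
    return store[key]                    # every client re-reads the winner's ID

print(tag_id("redis"))   # 1
print(tag_id("redis"))   # 1
```

Even if two clients INCR concurrently, SETNX guarantees exactly one ID gets bound to the tag; the losing candidate is simply wasted, never returned.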
Sorted Sets are similar to Sets, except that each element in a Sorted Set carries a score, and the elements are kept sorted by that score as they are written.
```
# add elements to a sorted set
127.0.0.1:6379> ZADD days 0 mon        # days is the sorted set, 0 is the score, mon is the member
(integer) 1
127.0.0.1:6379> ZADD days 1 tue
(integer) 1
127.0.0.1:6379> ZADD days 2 wed
(integer) 1
127.0.0.1:6379> ZADD days 3 thu
(integer) 1
127.0.0.1:6379> ZADD days 4 fri
(integer) 1
127.0.0.1:6379> ZADD days 5 sat
(integer) 1
127.0.0.1:6379> ZADD days 6 sun
(integer) 1
127.0.0.1:6379> zrange days 0 6        # list the members at indexes 0 through 6
1) "mon"
2) "tue"
3) "wed"
4) "thu"
5) "fri"
6) "sat"
7) "sun"
# As shown above, ZADD builds a sorted set.

# check the scores of individual members of the sorted set days
127.0.0.1:6379> zscore days mon
"0"
127.0.0.1:6379> zscore days wed
"2"
127.0.0.1:6379> zscore days fri
"4"
127.0.0.1:6379> zcount days 3 6        # count members with scores between 3 and 6
(integer) 4
127.0.0.1:6379> ZRANGEBYSCORE days 3 6 # list members with scores between 3 and 6
1) "thu"
2) "fri"
3) "sat"
4) "sun"
```
- Sets are a heavily used data type, but... for many problems they are a bit too unordered; so Redis 1.2 introduced Sorted Sets. They are very similar to sets: collections of binary-safe strings, but this time with an associated score, and with an LRANGE-like operation that returns the elements in order. This operation only works on sorted sets: it is the ZRANGE command.
- Basically, sorted sets are in some ways the Redis equivalent of indexes in the SQL world. In the reddit.com example above, nothing was said about how to generate the front page from a combination of user votes and time. We will see how sorted sets solve that problem, but it is better to start with something simpler to illustrate how this advanced data type works. Let's add a few hackers, using their year of birth as the "score".
```
127.0.0.1:6379> zadd hackers 1940 "1940-Alan Kay"
(integer) 1
127.0.0.1:6379> zadd hackers 1953 "1953-Richard Stallman"
(integer) 1
127.0.0.1:6379> zadd hackers 1965 "1965-Yukihiro Matsumoto"
(integer) 1
127.0.0.1:6379> zadd hackers 1916 "1916-Claude Shannon"
(integer) 1
127.0.0.1:6379> zadd hackers 1969 "1969-Linus Torvalds"
(integer) 1
127.0.0.1:6379> zadd hackers 1912 "1912-Alan Turing"
(integer) 1
```
For a sorted set, returning these hackers sorted by birth year is trivial, because they are already sorted. Sorted sets are implemented via a dual-ported data structure containing both a skip list and a hash table, so adding an element is O(log(N)). That is acceptable, and when we need to access the elements in order, Redis has no extra work to do: they are already in order.
```
# sorted query with zrange
127.0.0.1:6379> zrange hackers 0 6
1) "1912-Alan Turing"
2) "1916-Claude Shannon"
3) "1940-Alan Kay"
4) "1953-Richard Stallman"
5) "1965-Yukihiro Matsumoto"
6) "1969-Linus Torvalds"
# reverse query with zrevrange
127.0.0.1:6379> zrevrange hackers 0 -1
1) "1969-Linus Torvalds"
2) "1965-Yukihiro Matsumoto"
3) "1953-Richard Stallman"
4) "1940-Alan Kay"
5) "1916-Claude Shannon"
6) "1912-Alan Turing"
```
Redis can store multiple attributes under one key (for example, a user key with uname and passwd fields):
```
# store a hash named test whose field name has the value yunjisuan
127.0.0.1:6379> hset test name yunjisuan
(integer) 1
# store the field age with the value 35 in the hash test
127.0.0.1:6379> hset test age 35
(integer) 1
# store the field sex with the value nan in the hash test
127.0.0.1:6379> hset test sex nan
(integer) 1
# list the values of all fields of the hash test
127.0.0.1:6379> hvals test
1) "yunjisuan"
2) "35"
3) "nan"
# list all fields of the hash test together with their values
127.0.0.1:6379> hgetall test
1) "name"
2) "yunjisuan"
3) "age"
4) "35"
5) "sex"
6) "nan"
```
```
# create the redis data directory
[root@redis-master redis]# cat -n /usr/local/redis/conf/redis.conf | sed -n '187p'
   187  dir ./                          # change this storage path...
[root@redis-master redis]# cat -n /usr/local/redis/conf/redis.conf | sed -n '187p'
   187  dir /usr/local/redis/data/      # ...to this
[root@redis-master redis]# redis-cli -a yunjisuan shutdown              # stop the redis service
[root@redis-master redis]# mkdir /usr/local/redis/data                  # create the data directory
[root@redis-master redis]# redis-server /usr/local/redis/conf/redis.conf &   # start redis in the background

# write data into redis and save it
[root@redis-master redis]# redis-cli -a yunjisuan
127.0.0.1:6379> set name2 yunjisuan
OK
127.0.0.1:6379> save                    # persist the data
[3456] 08 Oct 04:39:05.169 * DB saved on disk
OK
127.0.0.1:6379> quit
[root@redis-master redis]# ll /usr/local/redis/
total 12
drwxr-xr-x. 2 root root 4096 Oct  7 16:53 bin
drwxr-xr-x. 2 root root 4096 Oct  8 04:33 conf
drwxr-xr-x. 2 root root 4096 Oct  8 04:39 data
[root@redis-master redis]# ll /usr/local/redis/data/
total 4
-rw-r--r--. 1 root root 49 Oct  8 04:39 dump.rdb        # the saved RDB file
```
```
# create the data directories for the additional redis instances
[root@redis-master redis]# mkdir -p /data/6380/data
[root@redis-master redis]# mkdir -p /data/6381/data

# create a configuration file for each instance
[root@redis-master redis]# cp /usr/local/redis/conf/redis.conf /data/6380/
[root@redis-master redis]# cp /usr/local/redis/conf/redis.conf /data/6381/

# point each instance's data path at its own directory
[root@redis-master redis]# sed -n '187p' /data/6380/redis.conf
dir /data/6380/data                     # set the storage path like this
[root@redis-master redis]# sed -n '187p' /data/6381/redis.conf
dir /data/6381/data                     # set the storage path like this

# give each instance its own port
[root@redis-master redis]# sed -n '45p' /data/6380/redis.conf
port 6380                               # set the listening port like this
[root@redis-master redis]# sed -n '45p' /data/6381/redis.conf
port 6381                               # set the listening port like this

# give each instance its own pid file
[root@redis-master redis]# sed -n '41p' /data/6380/redis.conf
pidfile /data/6380/redis.pid            # set it like this
[root@redis-master redis]# sed -n '41p' /data/6381/redis.conf
pidfile /data/6381/redis.pid            # set it like this

# enable the append-only persistence log for each instance
[root@redis-master redis]# sed -n '449p' /data/6380/redis.conf
appendonly yes                          # set it like this
[root@redis-master redis]# sed -n '449p' /data/6381/redis.conf
appendonly yes                          # set it like this
```
```
[root@redis-master redis]# redis-server /data/6380/redis.conf &
[root@redis-master redis]# redis-server /data/6381/redis.conf &
```
```
[root@redis-master redis]# netstat -antup | grep redis
tcp        0      0 0.0.0.0:6379        0.0.0.0:*       LISTEN      3456/redis-server *
tcp        0      0 0.0.0.0:6380        0.0.0.0:*       LISTEN      3493/redis-server *
tcp        0      0 0.0.0.0:6381        0.0.0.0:*       LISTEN      3496/redis-server *
tcp        0      0 :::6379             :::*            LISTEN      3456/redis-server *
tcp        0      0 :::6380             :::*            LISTEN      3493/redis-server *
tcp        0      0 :::6381             :::*            LISTEN      3496/redis-server *
```
```
[root@redis-master data]# tree /data
/data
├── 6380                     # startup directory of instance 6380
│   ├── data                 # data directory of instance 6380
│   │   ├── appendonly.aof   # persistence log of instance 6380 (records every write, similar to a binlog)
│   │   └── dump.rdb         # data snapshot of instance 6380
│   └── redis.conf           # configuration file of instance 6380
└── 6381                     # startup directory of instance 6381
    ├── data                 # data directory of instance 6381
    │   ├── appendonly.aof   # persistence log of instance 6381 (records every write, similar to a binlog)
    │   └── dump.rdb         # data snapshot of instance 6381
    └── redis.conf           # configuration file of instance 6381

4 directories, 6 files
```
Some readers may be wondering: what is appendonly.aof for?
Let's look at the file's contents:
```
[root@redis-master data]# cat /data/6380/data/appendonly.aof
*2
$6
SELECT
$1
0
*3
$3
set
$4
name
$9
yunjisuan
*1
$4
save
```
As you can see, appendonly.aof records the modifications we made to the Redis database, similar in spirit to MySQL's binlog.
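The AOF shown above is written in Redis's RESP protocol: `*N` introduces a command with N arguments, and `$L` gives the byte length of the argument on the next line. A tiny parser (a sketch for illustration, not the real Redis loader) makes that structure visible:

```python
# The AOF content from above, with the \r\n line endings RESP uses.
AOF = ("*2\r\n$6\r\nSELECT\r\n$1\r\n0\r\n"
       "*3\r\n$3\r\nset\r\n$4\r\nname\r\n$9\r\nyunjisuan\r\n"
       "*1\r\n$4\r\nsave\r\n")

def parse_aof(text):
    lines = text.split("\r\n")
    commands, i = [], 0
    while i < len(lines) and lines[i].startswith("*"):
        argc = int(lines[i][1:])             # *N -> N arguments follow
        args = []
        i += 1
        for _ in range(argc):
            assert lines[i].startswith("$")  # $L -> length of the next line
            args.append(lines[i + 1])
            i += 2
        commands.append(args)
    return commands

print(parse_aof(AOF))
# [['SELECT', '0'], ['set', 'name', 'yunjisuan'], ['save']]
```

On startup, Redis replays exactly this sequence of commands to rebuild the data set, which is why the file grows with every write until an AOF rewrite compacts it.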
(1) The slave server connects to the master server
(2) The slave server sends a SYNC command
(3) The master server dumps its database to an .rdb file
(4) The master server transfers the .rdb file to the slave server
(5) The slave server loads the .rdb file into its database.
These five steps make up the first stage of synchronization; after that, every command executed on the master is propagated to the slaves via replicationFeedSlaves().
Redis master-slave replication has clear distributed-cache characteristics:
(1) A master can have multiple slaves, and a slave can in turn have multiple slaves of its own
(2) Slaves can connect not only to the master but also to other slaves, forming a tree.
(3) Replication does not block the master, but it does block the slave. That is, while one or more slaves perform their initial sync, the master keeps serving client requests; a slave performing its initial sync, by contrast, is blocked and cannot serve client requests.
(4) Replication improves scalability: we can dedicate multiple slaves to serving client read requests, use them for simple data redundancy, or perform persistence only on the slaves, raising the overall performance of the cluster.
(5) With older versions of Redis, every reconnect resends the full data set.
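The read-scaling idea from point (4) can be sketched as a simple router that sends writes to the master and spreads reads across the slaves. The `Store` class below is an in-memory stand-in for a Redis instance, not a real client, and all names here are made up for illustration:

```python
import itertools

class Store(dict):
    """Stands in for one Redis instance."""

class Router:
    def __init__(self, master, slaves):
        self.master = master
        self.slaves = itertools.cycle(slaves)  # round-robin over read replicas

    def set(self, key, value):
        self.master[key] = value
        # (a real setup relies on Redis replication to copy this to slaves)

    def get(self, key):
        return next(self.slaves).get(key)      # reads never touch the master

master = Store()
slave_a, slave_b = Store(), Store()
router = Router(master, [slave_a, slave_b])

router.set("name", "yunjisuan")
slave_a["name"] = slave_b["name"] = master["name"]   # pretend replication ran
print(router.get("name"))   # yunjisuan
```

Because slaves are read-only by default (see `slave-read-only` below), this split also protects against accidental writes landing on a replica.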
There are two ways to set up master-slave synchronization. Both are done on the slave, specifying which Redis server the slave should connect to (which may itself be a master or a slave).
With simple configuration on the slave only (the master needs no configuration), you get Redis master-slave replication.
We will make the Redis on port 6379 the master and the Redis on port 6380 the slave.
```
# edit the configuration file /data/6380/redis.conf
[root@redis-master ~]# cat -n /data/6380/redis.conf | sed -n '189,215p'
   189  ################################# REPLICATION #################################
   190
   191  # Master-Slave replication. Use slaveof to make a Redis instance a copy of
   192  # another Redis server. Note that the configuration is local to the slave
   193  # so for example it is possible to configure the slave to save the DB with a
   194  # different interval, or to listen to another port, and so on.
   195  #
   196  # slaveof <masterip> <masterport>
   197  slaveof 192.168.0.135 6379          <-- add this line: the master's IP and port
   198  # If the master is password protected (using the "requirepass" configuration
   199  # directive below) it is possible to tell the slave to authenticate before
   200  # starting the replication synchronization process, otherwise the master will
   201  # refuse the slave request.
   202  #
   203  # masterauth <master-password>
   204  masterauth yunjisuan                <-- add this line: the master's auth password
   205  # When a slave loses its connection with the master, or when the replication
   206  # is still in progress, the slave can act in two different ways:
   207  #
   208  # 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
   209  #    still reply to client requests, possibly with out of date data, or the
   210  #    data set may just be empty if this is the first synchronization.
   211  #
   212  # 2) if slave-serve-stale-data is set to 'no' the slave will reply with
   213  #    an error "SYNC with master in progress" to all the kind of commands
   214  #    but to INFO and SLAVEOF.
   215  #
```
Next we restart the Redis process:
```
[root@redis-master ~]# redis-cli -p 6380 -a yunjisuan shutdown      # stop the 6380 redis process
[3558] 08 Oct 09:03:10.218 # User requested shutdown...
[3558] 08 Oct 09:03:10.218 * Calling fsync() on the AOF file.
[3558] 08 Oct 09:03:10.218 * Saving the final RDB snapshot before exiting.
[3558] 08 Oct 09:03:10.220 * DB saved on disk
[3558] 08 Oct 09:03:10.220 # Redis is now ready to exit, bye bye...
[3]+  Done                    redis-server /data/6380/redis.conf  (wd: /data)
(wd now: ~)
[root@redis-master ~]# redis-server /data/6380/redis.conf &         # start it again in the background
```
When the slave starts again, it prints the following:
```
[3616] 08 Oct 09:07:50.955 # Server started, Redis version 2.8.9
[3616] 08 Oct 09:07:50.965 * DB saved on disk
[3616] 08 Oct 09:07:50.965 * DB loaded from append only file: 0.010 seconds
[3616] 08 Oct 09:07:50.965 * The server is now ready to accept connections on port 6380
[3616] 08 Oct 09:07:51.958 * Connecting to MASTER 192.168.0.135:6379            # connect to the master
[3616] 08 Oct 09:07:51.958 * MASTER <-> SLAVE sync started                      # the sync begins
[3616] 08 Oct 09:07:51.958 * Non blocking connect for SYNC fired the event.     # the connect is non-blocking
[3616] 08 Oct 09:07:51.958 * Master replied to PING, replication can continue...  # the master answered PING
[3616] 08 Oct 09:07:51.959 * Partial resynchronization not possible (no cached master)  # partial resync impossible
[3616] 08 Oct 09:07:51.961 * Full resync from master:                           # full resync from the master
933d3b0123f2d72cf106d901434898aab24d2a6e:1
[3616] 08 Oct 09:07:52.052 * MASTER <-> SLAVE sync: receiving 49 bytes from master  # 49 bytes received
[3616] 08 Oct 09:07:52.052 * MASTER <-> SLAVE sync: Flushing old data           # flush old data
[3616] 08 Oct 09:07:52.053 * MASTER <-> SLAVE sync: Loading DB in memory        # load the data into memory
[3616] 08 Oct 09:07:52.053 * MASTER <-> SLAVE sync: Finished with success       # sync finished
[3616] 08 Oct 09:07:52.054 * Background append only file rewriting started by pid 3620  # AOF rewrite starts
[3620] 08 Oct 09:07:52.060 * SYNC append only file rewrite performed
[3620] 08 Oct 09:07:52.060 * AOF rewrite: 6 MB of memory used by copy-on-write
[3616] 08 Oct 09:07:52.159 * Background AOF rewrite terminated with success     # AOF rewrite succeeded
[3616] 08 Oct 09:07:52.159 * Parent diff successfully flushed to the rewritten AOF (0 bytes)
[3616] 08 Oct 09:07:52.159 * Background AOF rewrite finished successfully       # AOF rewrite done
```
```
[root@redis-master ~]# redis-cli -a yunjisuan -p 6380 get name          # read the key name from redis 6380
"benet"
[root@redis-master ~]# redis-cli -a yunjisuan -p 6379 set name xxxxx    # store key=name, value=xxxxx in redis 6379
OK
[root@redis-master ~]# redis-cli -a yunjisuan -p 6380 get name          # read the key name from redis 6380 again
"xxxxx"
```
As shown above, master-slave replication is working.
```
[root@redis-master ~]# cat -n /data/6380/redis.conf | sed -n "189,324p"
   189  ################################# REPLICATION #################################
   190
   191  # Master-Slave replication. Use slaveof to make a Redis instance a copy of
   192  # another Redis server. Note that the configuration is local to the slave
   193  # so for example it is possible to configure the slave to save the DB with a
   194  # different interval, or to listen to another port, and so on.
   195  #
   196  # slaveof <masterip> <masterport>
   197  slaveof 192.168.0.135 6379      <-- the master's IP and port
   198  # If the master is password protected (using the "requirepass" configuration
   199  # directive below) it is possible to tell the slave to authenticate before
   200  # starting the replication synchronization process, otherwise the master will
   201  # refuse the slave request.
   202  #
   203  # masterauth <master-password>
   204  masterauth yunjisuan            <-- if the master requires a password, put it here
   205  # When a slave loses its connection with the master, or when the replication
   206  # is still in progress, the slave can act in two different ways:
   207  #
   208  # 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
   209  #    still reply to client requests, possibly with out of date data, or the
   210  #    data set may just be empty if this is the first synchronization.
   211  #
   212  # 2) if slave-serve-stale-data is set to 'no' the slave will reply with
   213  #    an error "SYNC with master in progress" to all the kind of commands
   214  #    but to INFO and SLAVEOF.
   215  #
   216  slave-serve-stale-data yes      <-- yes: keep serving (possibly stale) data while disconnected;
                                            no: answer everything except INFO and SLAVEOF with an error
   217
   218  # You can configure a slave instance to accept writes or not. Writing against
   219  # a slave instance may be useful to store some ephemeral data (because data
   220  # written on a slave will be easily deleted after resync with the master) but
   221  # may also cause problems if clients are writing to it because of a
   222  # misconfiguration.
   223  #
   224  # Since Redis 2.6 by default slaves are read-only.
   225  #
   226  # Note: read only slaves are not designed to be exposed to untrusted clients
   227  # on the internet. It's just a protection layer against misuse of the instance.
   228  # Still a read only slave exports by default all the administrative commands
   229  # such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
   230  # security of read only slaves using 'rename-command' to shadow all the
   231  # administrative / dangerous commands.
   232  slave-read-only yes             <-- yes: the slave only accepts reads
   233
   234  # Slaves send PINGs to server in a predefined interval. It's possible to change
   235  # this interval with the repl_ping_slave_period option. The default value is 10
   236  # seconds.
   237  #
   238  # repl-ping-slave-period 10
   239
   240  # The following option sets the replication timeout for:
   241  #
   242  # 1) Bulk transfer I/O during SYNC, from the point of view of slave.
   243  # 2) Master timeout from the point of view of slaves (data, pings).
   244  # 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
   245  #
   246  # It is important to make sure that this value is greater than the value
   247  # specified for repl-ping-slave-period otherwise a timeout will be detected
   248  # every time there is low traffic between the master and the slave.
   249  #
   250  # repl-timeout 60
   251
   252  # Disable TCP_NODELAY on the slave socket after SYNC?
   253  #
   254  # If you select "yes" Redis will use a smaller number of TCP packets and
   255  # less bandwidth to send data to slaves. But this can add a delay for
   256  # the data to appear on the slave side, up to 40 milliseconds with
   257  # Linux kernels using a default configuration.
   258  #
   259  # If you select "no" the delay for data to appear on the slave side will
   260  # be reduced but more bandwidth will be used for replication.
   261  #
   262  # By default we optimize for low latency, but in very high traffic conditions
   263  # or when the master and slaves are many hops away, turning this to "yes" may
   264  # be a good idea.
   265  repl-disable-tcp-nodelay no
   266
   267  # Set the replication backlog size. The backlog is a buffer that accumulates
   268  # slave data when slaves are disconnected for some time, so that when a slave
   269  # wants to reconnect again, often a full resync is not needed, but a partial
   270  # resync is enough, just passing the portion of data the slave missed while
   271  # disconnected.
   272  #
   273  # The biggest the replication backlog, the longer the time the slave can be
   274  # disconnected and later be able to perform a partial resynchronization.
   275  #
   276  # The backlog is only allocated once there is at least a slave connected.
   277  #
   278  # repl-backlog-size 1mb         <-- size of the backlog used for partial (incremental) resyncs
   279
   280  # After a master has no longer connected slaves for some time, the backlog
   281  # will be freed. The following option configures the amount of seconds that
   282  # need to elapse, starting from the time the last slave disconnected, for
   283  # the backlog buffer to be freed.
   284  #
   285  # A value of 0 means to never release the backlog.
   286  #
   287  # repl-backlog-ttl 3600         <-- how long the backlog survives after all slaves disconnect
   288
   289  # The slave priority is an integer number published by Redis in the INFO output.
   290  # It is used by Redis Sentinel in order to select a slave to promote into a
   291  # master if the master is no longer working correctly.
   292  #
   293  # A slave with a low priority number is considered better for promotion, so
   294  # for instance if there are three slaves with priority 10, 100, 25 Sentinel will
   295  # pick the one with priority 10, that is the lowest.
   296  #
   297  # However a special priority of 0 marks the slave as not able to perform the
   298  # role of master, so a slave with priority of 0 will never be selected by
   299  # Redis Sentinel for promotion.
   300  #
   301  # By default the priority is 100.
   302  slave-priority 100              <-- the slave's priority (used by Sentinel when promoting)
   303
   304  # It is possible for a master to stop accepting writes if there are less than
   305  # N slaves connected, having a lag less or equal than M seconds.
   306  #
   307  # The N slaves need to be in "online" state.
   308  #
   309  # The lag in seconds, that must be <= the specified value, is calculated from
```