Redis From Beginner to Giving Up (Part 2): Hash


Examples in this article are based on Redis 5.0.4. Hash is one of the most common Redis data structures. Under the hood it is implemented as either a ziplist or a hashtable: a hash starts out as a ziplist by default, and Redis converts it to a hashtable once it grows past certain thresholds.

Previous article: Redis From Beginner to Giving Up (Part 1): String

First, let's look at how to use the Hash type in Redis.

// Set the field `field` of the hash stored at `key` to `value`.
// If key does not exist, a new hash is created and the HSET is performed.
// If field already exists in the hash, its old value is overwritten.
hset key field value

Examples:

// set a field that does not exist yet
>hset user:1 id 1
(integer) 1
// overwrite the existing field
>hset user:1 id 2
(integer) 0
>hget user:1 id
"2"
// get a field that does not exist
>hget user:1 not_exist
(nil)
----------------------------------
// hsetnx key field value
// sets field only if it does not already exist: returns 1 if it was set, 0 otherwise
// (the transcript below assumes user:1 has been deleted first)
> hsetnx user:1 id 1
(integer) 1
> hsetnx user:1 id 1
(integer) 0
> hget user:1 id
"1"
----------------------------------
// hmset key field value [field value ...]
// set multiple field-value pairs in one command
>HMSET user:1 id 1 name "黑搜丶D" wechat "black-search"
OK
----------------------------------
// hget key field
// get the value of the given field in the hash stored at key
>hget user:1 id 
"1"
----------------------------------
// hmget key field [field ...]
// values come back in the same order as the requested fields
>hmget user:1 name wechat id  not_exist
1) "黑搜丶D"
2) "black-search"
3) "1"
4) (nil)
----------------------------------
// hdel key field [field ...]: returns the number of fields actually removed
> hgetall user:1
1) "id"
2) "1"
3) "name"
4) "black-search"
> HDEL user:1 name
(integer) 1
> HDEL user:1 name
(integer) 0
----------------------------------
// HINCRBY key field increment
// increment an integer-valued field by `increment`; returns the value after the increment
> hset user:1 wechat "black-search"
(integer) 1
> HINCRBY user:1 wechat 2
(error) ERR hash value is not an integer
> HINCRBY user:1 id 21
(integer) 22
> hget user:1 id
"22"

That wraps up the basic usage of Redis hashes.


debug object key

As mentioned at the beginning, a hash is created as a ziplist by default and converted to a hashtable at a certain scale. So when exactly does that conversion happen?

# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

From the config above we can see that a hash keeps the ziplist encoding only while both of the following conditions hold; as soon as either one is violated, it is converted to a hashtable:

  1. the hash holds at most 512 field-value pairs (controlled by hash-max-ziplist-entries, default 512)
  2. every field and value is at most 64 bytes long (controlled by hash-max-ziplist-value, default 64)
// test the hash's encoding when it holds exactly 512 entries
@RequestMapping("/")
public void test(){
	redisTemplate.executePipelined(new RedisCallback<Object>() {
		@Override
		public Object doInRedis(RedisConnection redisConnection) throws DataAccessException {
			// executePipelined opens and closes the pipeline itself,
			// so there is no need to call openPipeline() here
			for (int i = 0; i < 512; i++) {
				redisConnection.hSet("key".getBytes(), ("field" + i).getBytes(), "value".getBytes());
			}
			return null; // executePipelined requires the callback to return null
		}
	});
	System.out.println("done");
}
// the encoding here is ziplist
> debug object key
Value at:0xbc6f80 refcount:1 encoding:ziplist serializedlength:2603 lru:14344435 lru_seconds_idle:17
// now bump the loop count to 513 and check again
> debug object key
Value at:0xbc6f80 refcount:1 encoding:hashtable serializedlength:7587 lru:14344656 lru_seconds_idle:4
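The conversion rule demonstrated above can be sketched as a tiny predicate. This is a simplified model for illustration, not Redis's actual C code; the constants mirror the two config directives shown earlier:

```java
// Simplified model of Redis's hash encoding choice (illustration only):
// a hash stays a ziplist only while both thresholds hold.
public class HashEncoding {
    static final int HASH_MAX_ZIPLIST_ENTRIES = 512; // hash-max-ziplist-entries
    static final int HASH_MAX_ZIPLIST_VALUE   = 64;  // hash-max-ziplist-value

    static String encodingFor(int entryCount, int longestValueBytes) {
        if (entryCount <= HASH_MAX_ZIPLIST_ENTRIES
                && longestValueBytes <= HASH_MAX_ZIPLIST_VALUE) {
            return "ziplist";
        }
        return "hashtable"; // once converted, Redis never converts back
    }

    public static void main(String[] args) {
        System.out.println(encodingFor(512, 5)); // ziplist, as in the 512-entry test
        System.out.println(encodingFor(513, 5)); // hashtable, as in the 513-entry test
    }
}
```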

Source code walkthrough

// first, the layout of dict
typedef struct dict {
    dictType *type;
    void *privdata;
    dictht ht[2];
    long rehashidx; /* rehashing not in progress if rehashidx == -1 */
    unsigned long iterators; /* number of iterators currently running */
} dict;
typedef struct dictType {
    uint64_t (*hashFunction)(const void *key);
    void *(*keyDup)(void *privdata, const void *key);
    void *(*valDup)(void *privdata, const void *obj);
    int (*keyCompare)(void *privdata, const void *key1, const void *key2);
    void (*keyDestructor)(void *privdata, void *key);
    void (*valDestructor)(void *privdata, void *obj);
} dictType;
/* This is our hash table structure. Every dictionary has two of this as we
 * implement incremental rehashing, for the old to the new table. */
typedef struct dictht {
    dictEntry **table;
    unsigned long size;
    unsigned long sizemask;
    unsigned long used;
} dictht;
typedef struct dictEntry {
    void *key;
    union {
        void *val;
        uint64_t u64;
        int64_t s64;
        double d;
    } v;
    struct dictEntry *next;
} dictEntry;

From the structs above we can see that a dict holds two hashtables (dictht); normally only one of them is in use. When the dict needs to grow or shrink, a new dictht is allocated and entries are migrated over incrementally; once the migration finishes, the old dictht is freed and only the new one is kept.

How does dict resolve hash collisions? The same way Java's HashMap does: separate chaining, i.e. an array of buckets where each bucket is a linked list.
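To make the chaining idea concrete, here is a minimal hash table sketch in Java. It is illustrative only; the names loosely mirror dictEntry/dictht, and it skips resizing entirely:

```java
// Minimal separate-chaining hash table: an array of buckets, each a linked list.
public class MiniDict<K, V> {
    static class Entry<K, V> {              // analogous to dictEntry
        final K key;
        V val;
        Entry<K, V> next;
        Entry(K key, V val, Entry<K, V> next) { this.key = key; this.val = val; this.next = next; }
    }

    private final Entry<K, V>[] table;      // analogous to dictht.table
    private final int sizemask;             // analogous to dictht.sizemask

    @SuppressWarnings("unchecked")
    MiniDict(int size) {                    // size must be a power of two, as in Redis
        table = new Entry[size];
        sizemask = size - 1;
    }

    void put(K key, V val) {
        int idx = key.hashCode() & sizemask;
        for (Entry<K, V> e = table[idx]; e != null; e = e.next) {
            if (e.key.equals(key)) { e.val = val; return; } // overwrite existing field
        }
        table[idx] = new Entry<>(key, val, table[idx]);     // insert at head, like dictAddRaw
    }

    V get(K key) {
        int idx = key.hashCode() & sizemask;
        for (Entry<K, V> e = table[idx]; e != null; e = e.next)
            if (e.key.equals(key)) return e.val;
        return null;
    }
}
```

New entries are inserted at the head of the chain, matching the comment in dictAddRaw below: recently added entries are assumed to be accessed more often.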

Incremental rehashing

Redis processes commands on a single thread, so rehashing a large dict in one go could take a long time and would stall other requests while it runs. Redis therefore uses incremental rehashing to spread this heavy job over many small steps.

dictEntry *dictAddRaw(dict *d, void *key, dictEntry **existing)
{
    long index;
    dictEntry *entry;
    dictht *ht;
    // if a rehash is in progress, migrate one bucket on each call
    if (dictIsRehashing(d)) _dictRehashStep(d);

    /* Get the index of the new element, or -1 if
     * the element already exists. */
    if ((index = _dictKeyIndex(d, key, dictHashKey(d,key), existing)) == -1)
        return NULL;

    /* Allocate the memory and store the new entry.
     * Insert the element in top, with the assumption that in a database
     * system it is more likely that recently added entries are accessed
     * more frequently. */
    // while rehashing, hang new entries off the new table ht[1]
    ht = dictIsRehashing(d) ? &d->ht[1] : &d->ht[0];
    entry = zmalloc(sizeof(*entry));
    entry->next = ht->table[index];
    ht->table[index] = entry;
    ht->used++;

    /* Set the hash entry fields. */
    dictSetKey(d, entry, key);
    return entry;
}
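The idea behind `_dictRehashStep` can be modeled in a few lines of Java. This is a toy sketch, not the real implementation: each step migrates at most one non-empty bucket from the old table to the new one, so the cost of the rehash is spread across many operations.

```java
import java.util.LinkedList;

// Toy model of incremental rehashing: ht[0] is drained into ht[1] one bucket at a time.
public class IncrementalRehash {
    final LinkedList<String>[] oldTable;    // ht[0]
    final LinkedList<String>[] newTable;    // ht[1]
    int rehashidx = 0;                      // next bucket to migrate

    @SuppressWarnings("unchecked")
    IncrementalRehash(LinkedList<String>[] old, int newSize) { // newSize: power of two
        oldTable = old;
        newTable = new LinkedList[newSize];
        for (int i = 0; i < newSize; i++) newTable[i] = new LinkedList<>();
    }

    boolean rehashing() { return rehashidx < oldTable.length; }

    // One rehash step: skip empty buckets, then move one whole bucket.
    void rehashStep() {
        while (rehashing() && oldTable[rehashidx].isEmpty()) rehashidx++;
        if (!rehashing()) return;
        for (String key : oldTable[rehashidx])
            newTable[key.hashCode() & (newTable.length - 1)].add(key);
        oldTable[rehashidx].clear();
        rehashidx++;
    }
}
```

In real Redis every hset/hdel-style operation triggers one such step, and lookups consult both tables while the rehash is in flight.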

So every dict-touching command (hset, hdel, and so on) checks whether a migration step is needed. But what if clients stop sending requests: could the migration be left unfinished? No. Redis also scans dicts that are mid-rehash from a periodic background task and completes the remaining work there. The code is below.

/* This function handles 'background' operations we are required to do
 * incrementally in Redis databases, such as active key expiring, resizing,
 * rehashing. */
void databasesCron(void) {
    /* Expire keys by random sampling. Not required for slaves
     * as master will synthesize DELs for us. */
    if (server.active_expire_enabled) {
        if (server.masterhost == NULL) {
            activeExpireCycle(ACTIVE_EXPIRE_CYCLE_SLOW);
        } else {
            expireSlaveKeys();
        }
    }

    /* Defrag keys gradually. */
    if (server.active_defrag_enabled)
        activeDefragCycle();

    /* Perform hash tables rehashing if needed, but only if there are no
     * other processes saving the DB on disk. Otherwise rehashing is bad
     * as will cause a lot of copy-on-write of memory pages. */
    if (server.rdb_child_pid == -1 && server.aof_child_pid == -1) {
        /* We use global counters so if we stop the computation at a given
         * DB we'll be able to start from the successive in the next
         * cron loop iteration. */
        static unsigned int resize_db = 0;
        static unsigned int rehash_db = 0;
        int dbs_per_call = CRON_DBS_PER_CALL;
        int j;

        /* Don't test more DBs than we have. */
        if (dbs_per_call > server.dbnum) dbs_per_call = server.dbnum;

        /* Resize */
        for (j = 0; j < dbs_per_call; j++) {
            tryResizeHashTables(resize_db % server.dbnum);
            resize_db++;
        }

        /* Rehash */
        // this is where the incremental rehash happens
        if (server.activerehashing) {
            for (j = 0; j < dbs_per_call; j++) {
                int work_done = incrementallyRehash(rehash_db);
                if (work_done) {
                    /* If the function did some work, stop here, we'll do
                     * more at the next cron loop. */
                    break;
                } else {
                    /* If this db didn't need rehash, we'll try the next one. */
                    rehash_db++;
                    rehash_db %= server.dbnum;
                }
            }
        }
    }
}

Use cases

Storing business data. As we have seen, hset is very simple to use; recall the use case at the end of the previous article.

// the previous article used a string
>set user:1 '{"id":1,"name":"黑搜丶D","wechat":"black-search"}'
// the same data modeled as a hash
> HMSET user:1 id 1 name "黑搜丶D" wechat "black-search"
OK
// get a single field of the key
>hget user:1 wechat
"black-search"
// get all field:value pairs of the key
> HGETALL user:1
1) "id"
2) "1"
3) "name"
4) "\xe9\xbb\x91\xe6\x90\x9c\xe4\xb8\xb6D"
5) "wechat"
6) "black-search"

Compared with the string approach, using a hash to get or set a single field saves a lot of bandwidth.
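A quick way to see the saving, using the example data above (illustrative only; with HGET only the one field value travels over the wire, while GET returns the whole blob):

```java
import java.nio.charset.StandardCharsets;

// Compare the payload returned by GET user:1 (whole JSON blob)
// with the payload returned by HGET user:1 wechat (one field value).
public class PayloadSize {
    public static void main(String[] args) {
        String wholeJson = "{\"id\":1,\"name\":\"黑搜丶D\",\"wechat\":\"black-search\"}";
        String oneField  = "black-search";
        System.out.println(wholeJson.getBytes(StandardCharsets.UTF_8).length);
        System.out.println(oneField.getBytes(StandardCharsets.UTF_8).length);
    }
}
```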

黑搜丶D
