I once measured the access-time distribution of one of our online Redis datasets: roughly 90% of requests only touched data from the last 15 minutes, 99% touched data from the last hour, and fewer than one in a thousand touched data older than one day. We used to keep that data for two days (nearly 500 GB of in-memory data), which, counting master and replica, took more than 120 Redis instances (8 GB each). Simply changing the expiration time from two days to one day freed up more than 60 instances, with no real impact on the business.
Of course, Redis already has an automatic cleanup mechanism for expired data; all I had to do was change the expiration time set when the data is written. But suppose Redis had no expiration mechanism at all: what would we do then? A moment's thought shows it would be a real hassle, and thinking it through carefully there are a lot of details to get right. So I consider data expiration an unglamorous yet very important feature of a caching system: besides saving us work, it can also save a lot of cost. Next, let's look at how Redis implements data expiration.
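For what it's worth, the change itself is nothing more than passing a smaller TTL at write time. Below is a minimal sketch using the hiredis client; the key name, payload, and connection details are made up for illustration:

```c
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379);
    if (c == NULL || c->err) return 1;

    /* Before: keep the data for two days, i.e. EX 172800.           */
    /* After: one day (EX 86400) still covers well over 99.9% of reads. */
    redisReply *reply = redisCommand(c, "SET stats:latest %s EX %d",
                                     "some-payload", 86400);
    if (reply) freeReplyObject(reply);

    redisFree(c);
    return 0;
}
```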
As is well known, Redis's core flow runs in a single thread: it basically finishes processing one request before moving on to the next. Handling requests is not just about responding to commands from users, though; Redis does quite a bit of other work along the way, and one part of that work is expiring data.
Whenever Redis reads or writes a key, it calls expireIfNeeded() to first check whether that key has already expired; if it has, the key is deleted.
```c
int expireIfNeeded(redisDb *db, robj *key) {
    if (!keyIsExpired(db,key)) return 0;

    /* If we are running in the context of a slave, return 1 right away:
     * key expiration on a slave is controlled by the master, which will
     * send the slave a delete command for the expired key.
     *
     * Returning 0 means the key needs no cleanup; returning 1 means the key
     * is considered expired this time. */
    if (server.masterhost != NULL) return 1;
    if (checkClientPauseTimeoutAndReturnIfPaused()) return 1;

    /* Delete the key */
    server.stat_expiredkeys++;
    propagateExpire(db,key,server.lazyfree_lazy_expire);
    notifyKeyspaceEvent(NOTIFY_EXPIRED, "expired",key,db->id);
    int retval = server.lazyfree_lazy_expire ? dbAsyncDelete(db,key) :
                                               dbSyncDelete(db,key);
    if (retval) signalModifiedKey(NULL,db,key);
    return retval;
}
```
The expiration check itself is simple: the absolute expiration timestamp (in milliseconds) is stored as the value of the key's entry in db->expires, so all Redis has to do is compare the current time against that timestamp.
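To make that concrete, here is a simplified sketch of the check; the real logic lives in keyIsExpired() and getExpire() in db.c, and this collapses the two into one hypothetical function:

```c
/* Sketch: db->expires maps a key to its absolute expiration time in
 * milliseconds; the key is logically expired once the current time has
 * passed that timestamp. */
int keyIsExpiredSketch(redisDb *db, robj *key) {
    dictEntry *de = dictFind(db->expires, key->ptr);
    if (de == NULL) return 0;                   /* no TTL set on this key */
    long long when = dictGetSignedIntegerVal(de);
    return mstime() > when;
}
```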
Let's focus on this line:
```c
int retval = server.lazyfree_lazy_expire ? dbAsyncDelete(db,key) :
                                           dbSyncDelete(db,key);
```
lazyfree_lazy_expire is one of Redis's configuration options; it controls whether lazy (asynchronous) deletion is enabled (it is off by default). Obviously, when it is enabled the deletion is performed asynchronously. Let's now look at Redis's lazy deletion in detail.
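For reference, the corresponding switch in redis.conf (part of the lazy-freeing options introduced around Redis 4.0) is simply:

```
lazyfree-lazy-expire yes
```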
What is lazy deletion? Essentially, lazy deletion hands the actual freeing of the data to a separate thread and performs it asynchronously. Why is it needed? As noted, Redis's core flow is single-threaded, so if any single step is particularly slow it directly hurts Redis's performance; deleting a hash key several gigabytes in size, for example, could stall the instance on the spot. For cases like that, the deletion is handed off to another thread so that it does not block Redis's main thread. That said, Redis's dbAsyncDelete() is not entirely asynchronous in practice; let's go straight to the code.
```c
#define LAZYFREE_THRESHOLD 64

int dbAsyncDelete(redisDb *db, robj *key) {
    /* Remove the key from db->expires; this only drops the pointer, it does
     * not free the actual value. */
    if (dictSize(db->expires) > 0) dictDelete(db->expires,key->ptr);

    /* If the value is composed of a few allocations, to free in a lazy way
     * is actually just slower... So under a certain limit we just free
     * the object synchronously. */
    /* Unlink the key from the dict (nothing is really freed yet, the key is
     * just no longer reachable). If the unlinked dictEntry is non-NULL, run
     * the freeing logic below. */
    dictEntry *de = dictUnlink(db->dict,key->ptr);
    if (de) {
        robj *val = dictGetVal(de);

        /* Tells the module that the key has been unlinked from the database. */
        moduleNotifyKeyUnlink(key,val);

        /* lazy free is not fully asynchronous: first estimate how much work
         * freeing the value requires, and if the impact is small just free it
         * in the main thread. */
        size_t free_effort = lazyfreeGetFreeEffort(key,val);

        /* If releasing the object is a lot of work, do it in the background
         * thread. But if the object is shared (refcount > 1) it cannot be
         * freed directly. This is rare, but the Redis core may call
         * incrRefCount() to protect an object and then call dbDelete(); in
         * that case we just fall through to dictFreeUnlinkedEntry(), which
         * is equivalent to calling decrRefCount(). */
        if (free_effort > LAZYFREE_THRESHOLD && val->refcount == 1) {
            atomicIncr(lazyfree_objects,1);
            bioCreateLazyFreeJob(lazyfreeFreeObject,1, val);
            dictSetVal(db->dict,de,NULL);
        }
    }

    /* Release the memory used by the key-value pair. In the lazy-free case
     * val is already NULL, so only the key's memory is freed here. */
    if (de) {
        dictFreeUnlinkedEntry(db->dict,de);
        if (server.cluster_enabled) slotToKeyDel(key->ptr);
        return 1;
    } else {
        return 0;
    }
}
```
Why is the asynchronous delete above not entirely asynchronous? I think we can get a glimpse of the answer from bioCreateLazyFreeJob(), the function that submits the asynchronous job.
```c
void bioCreateLazyFreeJob(lazy_free_fn free_fn, int arg_count, ...) {
    va_list valist;
    /* Allocate memory for the job structure and all required
     * arguments */
    struct bio_job *job = zmalloc(sizeof(*job) + sizeof(void *) * (arg_count));
    job->free_fn = free_fn;

    va_start(valist, arg_count);
    for (int i = 0; i < arg_count; i++) {
        job->free_args[i] = va_arg(valist, void *);
    }
    va_end(valist);
    bioSubmitJob(BIO_LAZY_FREE, job);
}

void bioSubmitJob(int type, struct bio_job *job) {
    job->time = time(NULL);
    /* Multiple threads touch the queue, so take the lock and append the
     * pending job to the tail of the list. */
    pthread_mutex_lock(&bio_mutex[type]);
    listAddNodeTail(bio_jobs[type],job);
    bio_pending[type]++;
    pthread_cond_signal(&bio_newjob_cond[type]);
    pthread_mutex_unlock(&bio_mutex[type]);
}
```
As I understand it, submitting an asynchronous job means taking a lock and appending the job to a queue. If the overhead of the locking and the job submission outweighs the cost of simply freeing the value, asynchronous deletion is actually worse than doing it synchronously, which is why small objects are freed in place.
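That is exactly why dbAsyncDelete() estimates the freeing cost before deciding. Here is a rough sketch of the idea behind lazyfreeGetFreeEffort(); the real function in lazyfree.c handles more types, and the cases below are only illustrative:

```c
/* The "effort" roughly counts the allocations that freeing the value will
 * touch: aggregate types report their node/element count, while flat objects
 * count as a single allocation and end up being freed synchronously. */
size_t freeEffortSketch(robj *val) {
    if (val->type == OBJ_LIST) {
        quicklist *ql = val->ptr;
        return ql->len;                       /* one unit per quicklist node */
    } else if (val->type == OBJ_SET && val->encoding == OBJ_ENCODING_HT) {
        return dictSize((dict*)val->ptr);     /* one unit per hash-table entry */
    }
    return 1; /* strings and other compact encodings: a single allocation */
}
```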
Now consider another question: if a piece of data is never read or written again after being stored, the on-access cleanup above will never reach it, and it would occupy memory forever. To handle this case Redis also implements periodic (active) expiration. Again, since Redis's core flow is single-threaded, it cannot afford to stop for long to work on any one task, so the periodic cleanup also does only a little bit of work at a time.
```c
/* There are two cleanup modes: a fast cycle and a slow cycle. */
void activeExpireCycle(int type) {
    /* Adjust the running parameters according to the configured expire
     * effort. The default effort is 1, and the maximum configurable effort
     * is 10. */
    unsigned long
    effort = server.active_expire_effort-1, /* Rescale from 0 to 9. */
    config_keys_per_loop = ACTIVE_EXPIRE_CYCLE_KEYS_PER_LOOP +
                           ACTIVE_EXPIRE_CYCLE_KEYS_PER_LOOP/4*effort, /* keys sampled per iteration */
    config_cycle_fast_duration = ACTIVE_EXPIRE_CYCLE_FAST_DURATION +
                                 ACTIVE_EXPIRE_CYCLE_FAST_DURATION/4*effort, /* duration of each cleanup run */
    config_cycle_slow_time_perc = ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC +
                                  2*effort, /* max share of CPU cycles */
    config_cycle_acceptable_stale = ACTIVE_EXPIRE_CYCLE_ACCEPTABLE_STALE-
                                    effort; /* acceptable ratio of stale keys */

    /* This function has some global state in order to continue the work
     * incrementally across calls. */
    static unsigned int current_db = 0; /* Last DB tested. */
    static int timelimit_exit = 0;      /* Time limit hit in previous call? */
    static long long last_fast_cycle = 0; /* When last fast cycle ran. */

    int j, iteration = 0;
    int dbs_per_call = CRON_DBS_PER_CALL;
    long long start = ustime(), timelimit, elapsed;

    /* When clients are paused the dataset should be static not just from the
     * POV of clients not being able to write, but also from the POV of
     * expires and evictions of keys not being performed. */
    if (checkClientPauseTimeoutAndReturnIfPaused()) return;

    /* Fast cycle. */
    if (type == ACTIVE_EXPIRE_CYCLE_FAST) {
        /* Don't start a fast cycle if the previous cycle did not exit
         * for time limit, unless the percentage of estimated stale keys is
         * too high. Also never repeat a fast cycle for the same period
         * as the fast cycle total duration itself. */
        /* If the previous run did not hit the time limit, skip this run. */
        if (!timelimit_exit &&
            server.stat_expired_stale_perc < config_cycle_acceptable_stale)
            return;
        /* Never run another fast cycle within two fast-cycle durations. */
        if (start < last_fast_cycle + (long long)config_cycle_fast_duration*2)
            return;
        last_fast_cycle = start;
    }

    /* We usually should test CRON_DBS_PER_CALL per iteration, with
     * two exceptions:
     *
     * 1) Don't test more DBs than we have.
     * 2) If last time we hit the time limit, we want to scan all DBs
     * in this iteration, as there is work to do in some DB and we don't want
     * expired keys to use memory for too much time. */
    if (dbs_per_call > server.dbnum || timelimit_exit)
        dbs_per_call = server.dbnum;

    /* We can use at max 'config_cycle_slow_time_perc' percentage of CPU
     * time per iteration. Since this function gets called with a frequency of
     * server.hz times per second, the following is the max amount of
     * microseconds we can spend in this function.
     * config_cycle_slow_time_perc is the share of CPU cycles the cleanup may
     * use; here it is converted into a concrete time budget. */
    timelimit = config_cycle_slow_time_perc*1000000/server.hz/100;
    timelimit_exit = 0;
    if (timelimit <= 0) timelimit = 1;

    if (type == ACTIVE_EXPIRE_CYCLE_FAST)
        timelimit = config_cycle_fast_duration; /* in microseconds. */

    /* Accumulate some global stats as we expire keys, to have some idea
     * about the number of keys that are already logically expired, but still
     * existing inside the database. */
    long total_sampled = 0;
    long total_expired = 0;

    for (j = 0; j < dbs_per_call && timelimit_exit == 0; j++) {
        /* Expired and checked in a single loop. */
        unsigned long expired, sampled;

        redisDb *db = server.db+(current_db % server.dbnum);

        /* Increment the DB now so we are sure if we run out of time
         * in the current DB we'll restart from the next. This allows to
         * distribute the time evenly across DBs. */
        current_db++;

        /* Continue to expire if at the end of the cycle there are still
         * a big percentage of keys to expire, compared to the number of keys
         * we scanned. The percentage, stored in config_cycle_acceptable_stale
         * is not fixed, but depends on the Redis configured "expire effort". */
        do {
            unsigned long num, slots;
            long long now, ttl_sum;
            int ttl_samples;
            iteration++;

            /* If there is nothing to clean up, stop right away. */
            if ((num = dictSize(db->expires)) == 0) {
                db->avg_ttl = 0;
                break;
            }
            slots = dictSlots(db->expires);
            now = mstime();

            /* If the slot fill ratio is below 1%, sampling is too expensive;
             * skip this run and wait for a better opportunity. */
            if (num && slots > DICT_HT_INITIAL_SIZE &&
                (num*100/slots < 1)) break;

            /* Track how many keys we sample in this run and how many of them
             * turn out to be expired. */
            expired = 0;
            sampled = 0;
            ttl_sum = 0;
            ttl_samples = 0;

            /* Sample at most config_keys_per_loop keys per iteration. */
            if (num > config_keys_per_loop)
                num = config_keys_per_loop;

            /* For performance reasons we access the low-level representation
             * of the hash table directly; this couples the code with dict.c,
             * but that has hardly changed in over a decade.
             *
             * Note: many buckets of the hash table may be empty, so the
             * termination condition also has to account for the number of
             * buckets scanned. Scanning empty buckets is cheap, though, since
             * it is a linear scan over CPU cache lines, so we can afford to
             * scan quite a few of them. */
            long max_buckets = num*20;
            long checked_buckets = 0;

            /* Bounded both by the number of sampled keys and by the number of
             * buckets visited. */
            while (sampled < num && checked_buckets < max_buckets) {
                for (int table = 0; table < 2; table++) {
                    if (table == 1 && !dictIsRehashing(db->expires)) break;

                    unsigned long idx = db->expires_cursor;
                    idx &= db->expires->ht[table].sizemask;
                    dictEntry *de = db->expires->ht[table].table[idx];
                    long long ttl;

                    /* Scan every entry in the current bucket. */
                    checked_buckets++;
                    while(de) {
                        /* Get the next entry now since this entry may get
                         * deleted. */
                        dictEntry *e = de;
                        de = de->next;

                        ttl = dictGetSignedIntegerVal(e)-now;
                        if (activeExpireCycleTryExpire(db,e,now)) expired++;
                        if (ttl > 0) {
                            /* We want the average TTL of keys yet
                             * not expired. */
                            ttl_sum += ttl;
                            ttl_samples++;
                        }
                        sampled++;
                    }
                }
                db->expires_cursor++;
            }
            total_expired += expired;
            total_sampled += sampled;

            /* Update the TTL statistics for this database. */
            if (ttl_samples) {
                long long avg_ttl = ttl_sum/ttl_samples;

                /* Do a simple running average with a few samples.
                 * We just use the current estimate with a weight of 2%
                 * and the previous estimate with a weight of 98%. */
                if (db->avg_ttl == 0) db->avg_ttl = avg_ttl;
                db->avg_ttl = (db->avg_ttl/50)*49 + (avg_ttl/50);
            }

            /* We can't block forever here even if there are many keys to
             * expire. So after a given amount of milliseconds return to the
             * caller waiting for the other active expire cycle.
             * We cannot keep cleaning indefinitely: once the time budget is
             * exceeded, end the cleanup loop. */
            if ((iteration & 0xf) == 0) { /* check once every 16 iterations. */
                elapsed = ustime()-start;
                if (elapsed > timelimit) {
                    timelimit_exit = 1;
                    server.stat_expired_time_cap_reached_count++;
                    break;
                }
            }

            /* If expired keys exceed (10 + effort)% of the sampled keys,
             * there are probably many more stale keys around, so keep looping
             * and do another round of sampling and cleanup. */
        } while (sampled == 0 ||
                 (expired*100/sampled) > config_cycle_acceptable_stale);
    }

    elapsed = ustime()-start;
    server.stat_expire_cycle_time_used += elapsed;
    latencyAddSampleIfNeeded("expire-cycle",elapsed/1000);

    /* Update our estimate of keys existing but yet to be expired.
     * Running average with this sample accounting for 5%. */
    double current_perc;
    if (total_sampled) {
        current_perc = (double)total_expired/total_sampled;
    } else
        current_perc = 0;
    server.stat_expired_stale_perc = (current_perc*0.05)+
                                     (server.stat_expired_stale_perc*0.95);
}
```
The code is fairly long; the details are in the comments above, but roughly speaking each run walks a few databases, repeatedly samples keys from db->expires and deletes the expired ones, and stops once the stale ratio is low enough or the time budget is used up. The main tuning parameters are:
- config_keys_per_loop: the number of keys sampled in each loop iteration
- config_cycle_fast_duration: how long each cleanup run lasts in fast mode
- config_cycle_slow_time_perc: the maximum share of CPU cycles (i.e. the maximum CPU usage) a cleanup run may consume in slow mode
- config_cycle_acceptable_stale: the acceptable proportion of stale keys; if the share of expired keys in a sample drops below this threshold, the current cleanup run ends.
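For completeness, here is roughly where the two modes get triggered; this is a simplified sketch based on server.c (replica handling and unrelated work omitted). The slow cycle is driven by the serverCron timer, server.hz times per second, while the fast cycle runs in beforeSleep() once per event-loop iteration:

```c
/* Sketch: how the two cycle types are scheduled (simplified). */
void databasesCron(void) {                        /* called from serverCron() */
    if (server.active_expire_enabled && server.masterhost == NULL)
        activeExpireCycle(ACTIVE_EXPIRE_CYCLE_SLOW);
    /* ... hash table resizing / incremental rehashing omitted ... */
}

void beforeSleep(struct aeEventLoop *eventLoop) { /* before each event-loop wait */
    if (server.active_expire_enabled && server.masterhost == NULL)
        activeExpireCycle(ACTIVE_EXPIRE_CYCLE_FAST);
    /* ... everything else omitted ... */
}
```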
Redis's data-expiration strategy is fairly simple and the code is not that long, but as always it is shot through with performance considerations. Of course, Redis only provides the mechanism; to use it well you still have to tune the expiration times according to your actual business needs and data, just like the example at the start of this article.
This article is part of my Redis source-code analysis series, which comes with a companion Chinese-annotated copy of the Redis source. If you want to study Redis in depth, stars and follows are welcome.
Annotated Redis source (Chinese comments): https://github.com/xindoo/Redis
Redis source-code analysis column: https://zxs.io/s/1h
If you found this article useful, a like, favorite, and follow would be much appreciated.