Earlier articles analyzed the memblock allocator, the creation of the kernel page tables, and the construction of the memory-management framework. All of that happens inside the x86-specific setup_arch() and is tailored to the processor, with clearly architecture-dependent traits. The initialization that follows in start_kernel(), by contrast, belongs to Linux's generic memory-management framework.
build_all_zonelists() initializes the zone lists inside the memory nodes that the allocator uses; it is preparation work for the memory-management algorithm (the buddy allocator). Its implementation:
【file:/mm/page_alloc.c】
```c
/*
 * Called with zonelists_mutex held always
 * unless system_state == SYSTEM_BOOTING.
 */
void __ref build_all_zonelists(pg_data_t *pgdat, struct zone *zone)
{
	set_zonelist_order();

	if (system_state == SYSTEM_BOOTING) {
		__build_all_zonelists(NULL);
		mminit_verify_zonelist();
		cpuset_init_current_mems_allowed();
	} else {
#ifdef CONFIG_MEMORY_HOTPLUG
		if (zone)
			setup_zone_pageset(zone);
#endif
		/* we have to stop all cpus to guarantee there is no user of
		   zonelist */
		stop_machine(__build_all_zonelists, pgdat, NULL);
		/* cpuset refresh routine should be here */
	}
	vm_total_pages = nr_free_pagecache_pages();
	/*
	 * Disable grouping by mobility if the number of pages in the
	 * system is too low to allow the mechanism to work. It would be
	 * more accurate, but expensive to check per-zone. This check is
	 * made on memory-hotadd so a system can start with mobility
	 * disabled and enable it later
	 */
	if (vm_total_pages < (pageblock_nr_pages * MIGRATE_TYPES))
		page_group_by_mobility_disabled = 1;
	else
		page_group_by_mobility_disabled = 0;

	printk("Built %i zonelists in %s order, mobility grouping %s.  "
		"Total pages: %ld\n",
			nr_online_nodes,
			zonelist_order_name[current_zonelist_order],
			page_group_by_mobility_disabled ? "off" : "on",
			vm_total_pages);
#ifdef CONFIG_NUMA
	printk("Policy zone: %s\n", zone_names[policy_zone]);
#endif
}
```
First, set_zonelist_order():
【file:/mm/page_alloc.c】
```c
static void set_zonelist_order(void)
{
	current_zonelist_order = ZONELIST_ORDER_ZONE;
}
```
This sets the ordering of the zonelist. ZONELIST_ORDER_ZONE denotes the order (-zonetype, [node] distance), while ZONELIST_ORDER_NODE denotes ([node] distance, -zonetype). The distinction only matters in a NUMA environment; on non-NUMA systems the two are identical.
If system_state is SYSTEM_BOOTING (the state only switches to SYSTEM_RUNNING after start_kernel() reaches its last function, rest_init()), initialization continues with the __build_all_zonelists() function:
【file:/mm/page_alloc.c】
```c
/* return values int ....just for stop_machine() */
static int __build_all_zonelists(void *data)
{
	int nid;
	int cpu;
	pg_data_t *self = data;

#ifdef CONFIG_NUMA
	memset(node_load, 0, sizeof(node_load));
#endif

	if (self && !node_online(self->node_id)) {
		build_zonelists(self);
		build_zonelist_cache(self);
	}

	for_each_online_node(nid) {
		pg_data_t *pgdat = NODE_DATA(nid);

		build_zonelists(pgdat);
		build_zonelist_cache(pgdat);
	}

	/*
	 * Initialize the boot_pagesets that are going to be used
	 * for bootstrapping processors. The real pagesets for
	 * each zone will be allocated later when the per cpu
	 * allocator is available.
	 *
	 * boot_pagesets are used also for bootstrapping offline
	 * cpus if the system is already booted because the pagesets
	 * are needed to initialize allocators on a specific cpu too.
	 * F.e. the percpu allocator needs the page allocator which
	 * needs the percpu allocator in order to allocate its pagesets
	 * (a chicken-egg dilemma).
	 */
	for_each_possible_cpu(cpu) {
		setup_pageset(&per_cpu(boot_pageset, cpu), 0);

#ifdef CONFIG_HAVE_MEMORYLESS_NODES
		/*
		 * We now know the "local memory node" for each node--
		 * i.e., the node of the first zone in the generic zonelist.
		 * Set up numa_mem percpu variable for on-line cpus.  During
		 * boot, only the boot cpu should be on-line;  we'll init the
		 * secondary cpus' numa_mem as they come on-line.  During
		 * node/memory hotplug, we'll fixup all on-line cpus.
		 */
		if (cpu_online(cpu))
			set_cpu_numa_mem(cpu, local_memory_node(cpu_to_node(cpu)));
#endif
	}

	return 0;
}
```
The build_zonelists_node() function used by build_zonelists() is implemented as:
【file:/mm/page_alloc.c】
```c
/*
 * Builds allocation fallback zone lists.
 *
 * Add all populated zones of a node to the zonelist.
 */
static int build_zonelists_node(pg_data_t *pgdat, struct zonelist *zonelist,
				int nr_zones)
{
	struct zone *zone;
	enum zone_type zone_type = MAX_NR_ZONES;

	do {
		zone_type--;
		zone = pgdat->node_zones + zone_type;
		if (populated_zone(zone)) {
			zoneref_set_zone(zone,
				&zonelist->_zonerefs[nr_zones++]);
			check_highest_zone(zone_type);
		}
	} while (zone_type);

	return nr_zones;
}
```
populated_zone() checks whether a zone's present_pages member is zero; if it is non-zero, the zone contains pages, so zoneref_set_zone() records the zone in the zonelist's _zonerefs array, while check_highest_zone() is an empty function when NUMA is not enabled. So build_zonelists_node() in fact fills _zonerefs by iterating in the order ZONE_HIGHMEM -> ZONE_NORMAL -> ZONE_DMA, i.e. from the cheapest allocations to the most expensive: this is the fallback order used when allocating memory.
Back in build_zonelists(): its code shows that the local node's zones are sorted into the fallback order first, followed by the zones that are cheaper to allocate from than the local ones, and finally those that are more expensive.
With build_zonelists() analyzed, return to __build_all_zonelists() and look at build_zonelist_cache():
【file:/mm/page_alloc.c】
```c
/* non-NUMA variant of zonelist performance cache - just NULL zlcache_ptr */
static void build_zonelist_cache(pg_data_t *pgdat)
{
	pgdat->node_zonelists[0].zlcache_ptr = NULL;
}
```
This function is tied to CONFIG_NUMA and sets up the zlcache-related members. Since that option is not enabled here, the pointer is simply set to NULL.
Since build_all_zonelists() calls __build_all_zonelists() with a NULL argument, the code that __build_all_zonelists() actually runs is:
```c
	for_each_online_node(nid) {
		pg_data_t *pgdat = NODE_DATA(nid);

		build_zonelists(pgdat);
		build_zonelist_cache(pgdat);
	}
```
This mainly establishes, within each memory node, the allocation fallback order of that node's zones.
__build_all_zonelists() then continues with:
```c
	for_each_possible_cpu(cpu) {
		setup_pageset(&per_cpu(boot_pageset, cpu), 0);

#ifdef CONFIG_HAVE_MEMORYLESS_NODES
		if (cpu_online(cpu))
			set_cpu_numa_mem(cpu, local_memory_node(cpu_to_node(cpu)));
#endif
	}
```
CONFIG_HAVE_MEMORYLESS_NODES is not set here, so the part worth analyzing is setup_pageset():
【file:/mm/page_alloc.c】
```c
static void setup_pageset(struct per_cpu_pageset *p, unsigned long batch)
{
	pageset_init(p);
	pageset_set_batch(p, batch);
}
```
The two functions called by setup_pageset() are fairly simple, so let's go through them quickly. First:
【file:/mm/page_alloc.c】
```c
static void pageset_init(struct per_cpu_pageset *p)
{
	struct per_cpu_pages *pcp;
	int migratetype;

	memset(p, 0, sizeof(*p));

	pcp = &p->pcp;
	pcp->count = 0;
	for (migratetype = 0; migratetype < MIGRATE_PCPTYPES; migratetype++)
		INIT_LIST_HEAD(&pcp->lists[migratetype]);
}
```
pageset_init() initializes the struct per_cpu_pages structure, while pageset_set_batch() configures it. pageset_set_batch() is implemented as:
【file:/mm/page_alloc.c】
```c
/*
 * pcp->high and pcp->batch values are related and dependent on one another:
 * ->batch must never be higher then ->high.
 * The following function updates them in a safe manner without read side
 * locking.
 *
 * Any new users of pcp->batch and pcp->high should ensure they can cope with
 * those fields changing asynchronously (acording the the above rule).
 *
 * mutex_is_locked(&pcp_batch_high_lock) required when calling this function
 * outside of boot time (or some other assurance that no concurrent updaters
 * exist).
 */
static void pageset_update(struct per_cpu_pages *pcp, unsigned long high,
		unsigned long batch)
{
       /* start with a fail safe value for batch */
	pcp->batch = 1;
	smp_wmb();

       /* Update high, then batch, in order */
	pcp->high = high;
	smp_wmb();

	pcp->batch = batch;
}

/* a companion to pageset_set_high() */
static void pageset_set_batch(struct per_cpu_pageset *p, unsigned long batch)
{
	pageset_update(&p->pcp, 6 * batch, max(1UL, 1 * batch));
}
```
The parameter p of setup_pageset() points to a struct per_cpu_pageset, the structure each zone in the kernel uses to manage its per-CPU page cache. This cache holds some pre-allocated pages to satisfy single-page requests issued by the local CPU. The member pcp, of type struct per_cpu_pages, does the concrete page management. Originally each management structure had an array of two pcp members, whose two queues managed cold pages and hot pages separately; in the 3.14.12 kernel analyzed here the two have been merged, managing hot and cold pages on one list, with hot pages at the front of the queue and cold pages at the back. That is enough to remember for now; the details will be analyzed further with the buddy algorithm.
At this point it is clear that __build_all_zonelists() is where the memory-management framework prepares for the page-management algorithm that follows: it lays out the allocation order of the zones and initializes hot/cold page management.
Finally, back to build_all_zonelists(). Since the memory-initialization debug option CONFIG_DEBUG_MEMORY_INIT is not enabled, mminit_verify_zonelist() is an empty function.
With CONFIG_CPUSETS enabled, cpuset_init_current_mems_allowed() is implemented as follows:
【file:/kernel/cpuset.c】
```c
void cpuset_init_current_mems_allowed(void)
{
	nodes_setall(current->mems_allowed);
}
```
Here current points to the task_struct of the currently running task; cpusets use its mems_allowed member to manage which CPUs and memory nodes the tasks in a cgroup may use. mems_allowed is a structure of type nodemask_t:
【file:/include/linux/nodemask.h】
```c
typedef struct { DECLARE_BITMAP(bits, MAX_NUMNODES); } nodemask_t;
```
The structure is really just a bit field with one bit per memory node; a bit set to 1 means that node's memory may be used. nodes_setall() sets every bit in the field to 1.
Lastly, look at the implementation of nr_free_pagecache_pages() in build_all_zonelists():
【file:/mm/page_alloc.c】
```c
/**
 * nr_free_pagecache_pages - count number of pages beyond high watermark
 *
 * nr_free_pagecache_pages() counts the number of pages which are beyond the
 * high watermark within all zones.
 */
unsigned long nr_free_pagecache_pages(void)
{
	return nr_free_zone_pages(gfp_zone(GFP_HIGHUSER_MOVABLE));
}
```
The nr_free_zone_pages() it calls is implemented as:
【file:/mm/page_alloc.c】
```c
/**
 * nr_free_zone_pages - count number of pages beyond high watermark
 * @offset: The zone index of the highest zone
 *
 * nr_free_zone_pages() counts the number of counts pages which are beyond the
 * high watermark within all zones at or below a given zone index.  For each
 * zone, the number of pages is calculated as:
 *     managed_pages - high_pages
 */
static unsigned long nr_free_zone_pages(int offset)
{
	struct zoneref *z;
	struct zone *zone;

	/* Just pick one node, since fallback list is circular */
	unsigned long sum = 0;

	struct zonelist *zonelist = node_zonelist(numa_node_id(), GFP_KERNEL);

	for_each_zone_zonelist(zone, z, zonelist, offset) {
		unsigned long size = zone->managed_pages;
		unsigned long high = high_wmark_pages(zone);
		if (size > high)
			sum += size - high;
	}

	return sum;
}
```
nr_free_zone_pages() walks all the zones and, for each, adds managed_pages minus the high watermark to the total; in essence it counts the pages across all zones that are still available for allocation beyond the high watermark.
Next, build_all_zonelists() checks the number of page frames in the system to decide whether to enable mobility grouping, a mechanism that reduces fragmentation when allocating large memory blocks. It is usually only enabled when memory is large enough; otherwise its overhead would cost more than it saves. pageblock_nr_pages is the number of pages contained in the largest page block of the buddy system.
At this point, the groundwork for the memory-management framework algorithms is essentially complete.