Hugepages

Why Hugepages?

As we know, the default page size is 4 KB, which is not enough for modern big-memory systems. Since process address spaces are virtual, the CPU and the operating system have to remember which page belongs to which process and where it is stored. Obviously, the more pages you have, the more time it takes to find where the memory is mapped.

Most current CPU architectures support bigger pages, which are called Hugepages. With hugepages, the number of pages is cut down, and fewer translations require fewer cycles to access memory. For example, mapping 1 GB requires 262,144 pages of 4 KB but only 512 pages of 2 MB, so far fewer TLB entries cover the same working set. A less obvious benefit is that address translation information is typically stored in the L2 cache; with huge pages, more of that cache space is available for application data, which means fewer cycles are spent accessing main memory.
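If you want to see whether TLB misses actually matter for a given workload, hardware counters can help. A hedged sketch, assuming perf is installed and ./your_app is a placeholder for your own workload (the exact event names vary by CPU and kernel):

# Count data-TLB loads and misses for one run of the workload
perf stat -e dTLB-loads,dTLB-load-misses ./your_app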

How to enable Hugepages?

  1. Check the hugepage size.

# cat /proc/meminfo

The output of "cat /proc/meminfo" will have lines like:

.....

HugePages_Total: vvv

HugePages_Free:  www

HugePages_Rsvd:  xxx

HugePages_Surp:  yyy

Hugepagesize:    zzz kB

 

where:

HugePages_Total  is the size of the pool of huge pages.

HugePages_Free  is the number of huge pages in the pool that are not yet allocated.

HugePages_Rsvd  is the number of huge pages for which a commitment to allocate has been made, but for which no allocation has yet occurred.

HugePages_Surp  is the number of surplus huge pages in the pool above the value in /proc/sys/vm/nr_hugepages.

Hugepagesize    is the size of each huge page; it can be 2 MB, 4 MB, and so on, depending on the architecture.

/proc/sys/vm/nr_hugepages indicates the current number of configured hugetlb pages (HugePages_Total) in the kernel.
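As a quick way to inspect these values, the hugepage counters can be filtered out of /proc/meminfo. A minimal sketch using standard tools:

# Show all hugepage counters at once
grep -i huge /proc/meminfo

# Show the current pool size (the same value as HugePages_Total)
cat /proc/sys/vm/nr_hugepages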

 

Use the following command to dynamically allocate/deallocate default-sized huge pages:

 

            echo 20 > /proc/sys/vm/nr_hugepages

So the total hugepage memory is HugePages_Total * Hugepagesize. For example, 20 pages of 2048 kB each give a 40 MB pool.
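After changing nr_hugepages it is worth verifying the result, since the kernel may allocate fewer pages than requested when physical memory is too fragmented. A minimal sketch (the value 20 is just an example):

echo 20 > /proc/sys/vm/nr_hugepages
grep HugePages_Total /proc/meminfo

# Equivalent via sysctl; adding "vm.nr_hugepages = 20" to /etc/sysctl.conf
# makes the setting persist across reboots
sysctl -w vm.nr_hugepages=20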

 

  2. Mount the hugepage filesystem and set the number of pages.

mount -t hugetlbfs nodev /mnt/huge

echo xx > /proc/sys/vm/nr_hugepages
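A minimal end-to-end sketch (the mount point /mnt/huge and the page count are just examples; mkdir is needed if the directory does not exist yet):

# Create the mount point, mount hugetlbfs, and allocate pages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
echo 20 > /proc/sys/vm/nr_hugepages

To remount automatically at boot, an fstab entry like the following can be used:

nodev /mnt/huge hugetlbfs defaults 0 0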