Uses of /dev/shm on Linux

1. What is /dev/shm on Linux?

/dev/shm/ is a directory on Linux. It does not live on disk but in memory, so using /dev/shm/ is very fast: writes go straight into RAM.

We can verify the performance of /dev/shm with the following two scripts:

[root@db1 oracle]# ls -l linux_11gR2_grid.zip
-rw-r--r-- 1 oracle dba 980831749 Jul 11 20:18 linux_11gR2_grid.zip
[root@db1 oracle]# cat mycp.sh
#!/bin/sh
echo `date`
cp linux_11gR2_grid.zip ..
echo `date`
[root@db1 oracle]# ./mycp.sh
Fri Jul 15 18:44:17 CST 2011
Fri Jul 15 18:44:29 CST 2011

[root@db1 shm]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-lv01
                       97G  9.2G   83G  10% /
/dev/sda1              99M   15M   80M  16% /boot
tmpfs                 2.0G     0  2.0G   0% /dev/shm

[root@db1 oracle]# cat mycp1.sh
#!/bin/sh
echo `date`
cp linux_11gR2_grid.zip /dev/shm
echo `date`
[root@db1 oracle]# ./mycp1.sh
Fri Jul 15 18:44:29 CST 2011
Fri Jul 15 18:44:30 CST 2011
[root@db1 oracle]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-lv01
                       97G  9.2G   83G  10% /
/dev/sda1              99M   15M   80M  16% /boot
tmpfs                 2.0G  937M  1.1G  46% /dev/shm
[root@db1 oracle]#

As you can see, for a copy of a file of nearly 1 GB, there is a large gap between copying it to disk and copying it to /dev/shm (roughly 12 seconds versus 1 second in the runs above).
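For a comparison that is less distorted by the page cache buffering the disk write, a dd-based sketch along these lines can be used; testfile is only a placeholder name and the exact numbers depend on your hardware:

#!/bin/sh
# Write ~1 GB once to the disk-backed current directory and once to /dev/shm.
# GNU dd prints its own throughput summary, so no separate timing is needed.
# "testfile" is a placeholder name; shrink count if your tmpfs limit is smaller.
dd if=/dev/zero of=./testfile bs=1M count=1024 conv=fsync    # disk, flushed to media
dd if=/dev/zero of=/dev/shm/testfile bs=1M count=1024        # tmpfs, RAM only
rm -f ./testfile /dev/shm/testfile                           # clean up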

tmpfs has the following characteristics:
1. tmpfs is a file system, not a block device; you simply mount it and it is ready to use.
2. The file system size is dynamic.
3. Another major benefit of tmpfs is its lightning speed. Because a typical tmpfs file system resides entirely in RAM, reads and writes are nearly instantaneous.
4. tmpfs data does not survive a reboot, because virtual memory is inherently volatile, so it is necessary to have scripts that reload or rebind the data at startup (a sketch follows this list).
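A minimal sketch of such a startup script, assuming a hypothetical /opt/shm-backup directory holds the persistent copy of the data to restore:

#!/bin/sh
# Repopulate /dev/shm after a reboot from a persistent copy kept on disk.
# /opt/shm-backup is a placeholder path; point SRC at your real backup location.
SRC=/opt/shm-backup
DEST=/dev/shm
if [ -d "$SRC" ]; then
    cp -a "$SRC"/. "$DEST"/
fi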

2. Default capacity of /dev/shm on Linux

By default, the maximum capacity of /dev/shm is half of physical RAM, as shown by df -h. However, it does not actually occupy that memory: if there are no files under /dev/shm/, it uses 0 bytes. If its maximum is 1 GB and it holds a 100 MB file, the remaining 900 MB is still available to other applications, but the 100 MB it does occupy will not be reclaimed and reassigned by the system (at most, idle pages may be moved out to swap space); otherwise, who would dare to store files there?

Check the size of /dev/shm with df -h:

[root@db1 shm]# df -h /dev/shm
Filesystem            Size  Used Avail Use% Mounted on
tmpfs                 1.5G     0  1.5G   0% /dev/shm
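To see for yourself that memory is only consumed once files are placed there, a quick check like the one below can be run (demo.dat is a placeholder file name; the exact free output differs between systems):

df -h /dev/shm                                        # empty tmpfs: 0 bytes used
dd if=/dev/zero of=/dev/shm/demo.dat bs=1M count=100  # drop a 100 MB file into it
df -h /dev/shm                                        # used space grows by ~100 MB
free -m                                               # the file shows up as cached memory
rm /dev/shm/demo.dat                                  # deleting the file gives the memory back
df -h /dev/shm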

3. Adjusting the capacity (size) of /dev/shm on Linux

The capacity of /dev/shm can be adjusted. In some situations (such as an Oracle database) the default maximum of half of RAM is not enough, and the default number of inodes is quite low and usually needs to be raised. The mount command can manage both:
mount -o size=1500M -o nr_inodes=1000000 -o noatime,nodiratime -o remount /dev/shm
On a machine with 2 GB of RAM, this raises the maximum capacity to 1.5 GB and the number of inodes to 1,000,000, which means roughly up to a million small files can be stored.
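A quick way to confirm that the remount took effect is to check both limits afterwards:

df -h /dev/shm      # new size limit (1.5G in this example)
df -i /dev/shm      # new inode limit (1000000 in this example)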

You can also change the capacity of /dev/shm via /etc/fstab (just add the size option); after the change, remount it:

[root@db1 shm]# grep tmpfs /etc/fstab
tmpfs                   /dev/shm                tmpfs   defaults,size=2G        0 0
[root@db1 /]# umount /dev/shm
[root@db1 /]# mount /dev/shm
[root@db1 /]# df -h /dev/shm
Filesystem            Size  Used Avail Use% Mounted on
tmpfs                 2.0G     0  2.0G   0% /dev/shm

[root@db1 /]# mount -o remount /dev/shm
[root@db1 /]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/rootvg-lv01
                       97G  9.2G   83G  10% /
/dev/sda1              99M   15M   80M  16% /boot
tmpfs                 2.0G     0  2.0G   0% /dev/shm
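If you want to double-check which options the live mount is actually using, /proc/mounts shows them (the exact option format varies by kernel version):

grep /dev/shm /proc/mounts    # the options column includes the size limit currently in effect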

 

Appendix: tmpfs documentation

Tmpfs is a file system which keeps all files in virtual memory.

Everything in tmpfs is temporary in the sense that no files will be
created on your hard drive. If you unmount a tmpfs instance,
everything stored therein is lost.

tmpfs puts everything into the kernel internal caches and grows and
shrinks to accommodate the files it contains and is able to swap
unneeded pages out to swap space. It has maximum size limits which can
be adjusted on the fly via ‘mount -o remount …’

If you compare it to ramfs (which was the template to create tmpfs)
you gain swapping and limit checking. Another similar thing is the RAM
disk (/dev/ram*), which simulates a fixed size hard disk in physical
RAM, where you have to create an ordinary filesystem on top. Ramdisks
cannot swap and you do not have the possibility to resize them.

Since tmpfs lives completely in the page cache and on swap, all tmpfs
pages currently in memory will show up as cached. It will not show up
as shared or something like that. Further on you can check the actual
RAM+swap use of a tmpfs instance with df(1) and du(1).
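For instance, the /dev/shm mount point from the article above can be inspected with both tools:

df -h /dev/shm     # space accounted to the tmpfs instance as a whole
du -sh /dev/shm    # space used by the files it currently holds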

tmpfs has the following uses:

1) There is always a kernel internal mount which you will not see at
all. This is used for shared anonymous mappings and SYSV shared
memory.

This mount does not depend on CONFIG_TMPFS. If CONFIG_TMPFS is not
set, the user visible part of tmpfs is not built. But the internal
mechanisms are always present.

2) glibc 2.2 and above expects tmpfs to be mounted at /dev/shm for
POSIX shared memory (shm_open, shm_unlink). Adding the following
line to /etc/fstab should take care of this:

tmpfs /dev/shm tmpfs defaults 0 0

Remember to create the directory that you intend to mount tmpfs on
if necessary (/dev/shm is automagically created if you use devfs).

This mount is _not_ needed for SYSV shared memory. The internal
mount is used for that. (In the 2.3 kernel versions it was
necessary to mount the predecessor of tmpfs (shm fs) to use SYSV
shared memory)

3) Some people (including me) find it very convenient to mount it
e.g. on /tmp and /var/tmp and have a big swap partition. But be
aware: loop mounts of tmpfs files do not work due to the internal
design. So mkinitrd shipped by most distributions will fail with a
tmpfs /tmp.

4) And probably a lot more I do not know about :-)

tmpfs has a couple of mount options:

size: The limit of allocated bytes for this tmpfs instance. The
default is half of your physical RAM without swap. If you
oversize your tmpfs instances the machine will deadlock
since the OOM handler will not be able to free that memory.
nr_blocks: The same as size, but in blocks of PAGECACHE_SIZE.
nr_inodes: The maximum number of inodes for this instance. The default
is half of the number of your physical RAM pages.

These parameters accept a suffix k, m or g for kilo, mega and giga and
can be changed on remount.

To specify the initial root directory you can use the following mount
options:

mode: The permissions as an octal number
uid: The user id
gid: The group id

These options do not have any effect on remount. You can change these
parameters with chmod(1), chown(1) and chgrp(1) on a mounted filesystem.

So ‘mount -t tmpfs -o size=10G,nr_inodes=10k,mode=700 tmpfs /mytmpfs’
will give you tmpfs instance on /mytmpfs which can allocate 10GB
RAM/SWAP in 10240 inodes and it is only accessible by root.
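A possible sequence for trying that example on a test machine, reusing the /mytmpfs path from the line above (run as root):

mkdir -p /mytmpfs                                                 # create the mount point
mount -t tmpfs -o size=10G,nr_inodes=10k,mode=700 tmpfs /mytmpfs
df -h /mytmpfs                                                    # shows the 10G limit
ls -ld /mytmpfs                                                   # drwx------ because of mode=700
umount /mytmpfs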

TODOs:

1) give the size option a percent semantic: If you give a mount option
size=50% the tmpfs instance should be able to grow to 50 percent of
RAM + swap. So the instance should adapt automatically if you add
or remove swap space.
2) loop mounts: This is difficult since loop.c relies on the readpage
operation. This operation gets a page from the caller to be filled
with the content of the file at that position. But tmpfs always has
the page and thus cannot copy the content to the given page. So it
cannot provide this operation. The VM had to be changed seriously
to achieve this.
3) Show the number of tmpfs RAM pages. (As shared?)

Author: Christoph Rohland, 1.12.01
