A quick take on LVM [Baidu Baike covers it in detail]. A PHP project I develop takes high-resolution photos, and its 20 GB project disk filled up fast. The solutions I could think of: (1) point a virtual directory at another drive or partition, but that means changing the image storage paths inside the project, which is a hassle; (2) find a spare partition and mount it onto the image directory; (3) use fdisk to delete the existing partition and recreate it larger, where the starting cylinder absolutely must not change, or the data on the original partition is destroyed; (4) and finally today's topic, LVM, which pools the bottom-level physical disks [physical volumes] /dev/sda1, /dev/sdb1, ... into one big logical disk [the volume group], which is then carved up into logical partitions [logical volumes].
With this design, growing or shrinking is purely a logical change; the physical volumes at the bottom are untouched, so the data is not damaged [the filesystem layer only ever sees the logical partition]. When data moves in and out, the smallest unit between a logical partition and the pooled space (the big disk) [analogous to the CHUNK in RAID] is called the LE, the logical extent, and the unit between the pool and the physical volumes is the PE, the physical extent. Within one volume group, LEs and PEs have the same size and map one-to-one. If any one of the underlying physical volumes fails, the logical volume above it becomes unusable, because its LEs can live on every one of the disks underneath [the RAID techniques from earlier can be used to guard against this].
To raise the maximum capacity of a logical partition above [its logical boundary], you must grow the volume group in the middle, which ultimately means growing the underlying group of disks [the physical boundary]. When one of the underlying disks is to be removed, its data is first migrated to the other disks, and only then is the disk detached.
First stop the partitions still held by RAID [left over from the previous chapter], otherwise you may hit "resource busy" / "resource in use" errors.
[root@localhost ~]# mdadm -S /dev/md126
mdadm: stopped /dev/md126
[root@localhost ~]# mdadm -S /dev/md127
mdadm: stopped /dev/md127
Use fdisk to change the partition type to 8e.
[root@localhost ~]# fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): t
Partition number (1-6): 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): t
Partition number (1-6): 2
Hex code (type L to list codes): 8e
Changed system type of partition 2 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd25c91c2

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         132     1060258+  8e  Linux LVM   [changed to the LVM partition type]
/dev/sdb2             133         264     1060290   8e  Linux LVM
/dev/sdb3             265         396     1060290   8e  Linux LVM
/dev/sdb4             397        1305     7301542+   5  Extended
/dev/sdb5             397         528     1060258+  8e  Linux LVM
/dev/sdb6             529         660     1060258+  8e  Linux LVM

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Now turn sdb3 and sdb5 into PVs [physical volumes].
[root@localhost ~]# pvcreate /dev/sdb{3,5}
  Physical volume "/dev/sdb3" successfully created
  Physical volume "/dev/sdb5" successfully created
[root@localhost ~]# pvscan
  PV /dev/sdb3      lvm2 [1.01 GiB]
  PV /dev/sdb5      lvm2 [1.01 GiB]
  Total: 2 [2.02 GiB] / in use: 0 [0   ] / in no VG: 2 [2.02 GiB]
[root@localhost ~]# pvdisplay
  "/dev/sdb3" is a new physical volume of "1.01 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb3
  VG Name
  PV Size               1.01 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0            [the PE count is only known once the PV joins a volume group]
  Allocated PE          0
  PV UUID               U1ndh1-u5pu-v0WF-PrWg-fwuO-1Blw-4Wbz0f

  "/dev/sdb5" is a new physical volume of "1.01 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb5
  VG Name
  PV Size               1.01 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               79gSEo-q0xS-M7Q5-Wlmc-qbCz-tH2Z-ZPMf57
To remove a PV, first migrate its data onto the other PVs, then take it out of the volume group with vgreduce.
[root@localhost ~]# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/sdb3  myvg lvm2 a--  1.01g 1.01g
  /dev/sdb5  myvg lvm2 a--  1.01g 1.01g   [not yet removed]
[root@localhost ~]# pvmove /dev/sdb5
  No data to move for myvg                [my /dev/sdb5 holds no data yet]
[root@localhost ~]# vgreduce myvg /dev/sdb5
  Removed "/dev/sdb5" from volume group "myvg"
[root@localhost ~]# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/sdb3  myvg lvm2 a--  1.01g 1.01g
  /dev/sdb5       lvm2 ---  1.01g 1.01g   [removed from the volume group]
[root@localhost ~]# pvremove /dev/sdb5
  Labels on physical volume "/dev/sdb5" successfully wiped
Usage: vgcreate VG_NAME PV_NAME...
-s #: specify the PE size (4 MiB by default)
[root@localhost ~]# vgcreate myvg /dev/sdb{3,5}
  Volume group "myvg" successfully created
[root@localhost ~]# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  myvg   2   0   0 wz--n- 2.02g 2.02g
vgdisplay shows the VG details, including the PE size; pvdisplay shows the PE counts.
[root@localhost ~]# vgdisplay myvg
  --- Volume group ---
  VG Name               myvg
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               2.02 GiB
  PE Size               4.00 MiB              [PE size]
  Total PE              516
  Alloc PE / Size       0 / 0
  Free  PE / Size       516 / 2.02 GiB
  VG UUID               87U1HI-Nr3b-ytsO-iw3K-lFG6-yVy9-uoZlXM
[root@localhost ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdb3
  VG Name               myvg
  PV Size               1.01 GiB / not usable 3.44 MiB
  Allocatable           yes
  PE Size               4.00 MiB              [PE size]
  Total PE              258
  Free PE               258                   [PE count]
  Allocated PE          0
  PV UUID               U1ndh1-u5pu-v0WF-PrWg-fwuO-1Blw-4Wbz0f
  --- Physical volume ---
  PV Name               /dev/sdb5
  VG Name               myvg
  PV Size               1.01 GiB / not usable 3.41 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              258
  Free PE               258
  Allocated PE          0
  PV UUID               79gSEo-q0xS-M7Q5-Wlmc-qbCz-tH2Z-ZPMf57
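The Total PE figure reported by vgdisplay is just the sum of each PV's extents. A quick sanity check of the numbers above (258 extents per PV comes straight from the pvdisplay output; a few MiB of each PV go to metadata, hence "not usable 3.44 MiB"):

```shell
# Each 1.01 GiB PV yields 258 extents of 4 MiB (metadata eats the rest).
pv_pe=258
total_pe=$(( pv_pe * 2 ))            # two PVs in myvg
vg_mib=$(( total_pe * 4 ))           # 4 MiB per PE
echo "Total PE: $total_pe, VG size: $vg_mib MiB"
```

2064 MiB is the 2.02 GiB that vgdisplay reports for the VG size.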
A VG can be removed while no LV has been created on it yet; once an LV exists and holds data, it must not simply be removed.
[root@localhost ~]# vgremove myvg
  Volume group "myvg" successfully removed
To extend a VG, add another PV.
[root@localhost ~]# vgextend myvg /dev/sdb5
  Physical volume "/dev/sdb5" successfully created
  Volume group "myvg" successfully extended
[root@localhost ~]# pvcreate /dev/sdb6
  Can't open /dev/sdb6 exclusively.  Mounted filesystem?
[root@localhost ~]# cat /proc/mdstat        [how did RAID grab this automatically???]
Personalities : [raid1] [raid0]
md1 : inactive sdb6[2](S)
      1059234 blocks super 1.2
md0 : active raid0 sdb2[1] sdb1[0]
      2117632 blocks super 1.2 512k chunks
unused devices: <none>
[root@localhost ~]# mdadm -S /dev/md0       [stop the arrays]
mdadm: stopped /dev/md0
[root@localhost ~]# mdadm -S /dev/md1
mdadm: stopped /dev/md1
[root@localhost ~]# pvcreate /dev/sdb6      [create the PV]
WARNING: software RAID md superblock detected on /dev/sdb6. Wipe it? [y/n]: y
  Wiping software RAID md superblock on /dev/sdb6.
  Physical volume "/dev/sdb6" successfully created
[root@localhost ~]# vgextend myvg /dev/sdb6 [add it to the myvg volume group]
  Volume group "myvg" successfully extended
[root@localhost ~]# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  myvg   3   0   0 wz--n- 3.02g 3.02g
[root@localhost ~]# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/sdb3  myvg lvm2 a--  1.01g 1.01g
  /dev/sdb5  myvg lvm2 a--  1.01g 1.01g
  /dev/sdb6  myvg lvm2 a--  1.01g 1.01g
Usage: lvcreate -n LV_NAME -L #G VG_NAME
[root@localhost ~]# lvcreate -L 100M -n mylv myvg
  Rounding up size to full physical extent 104.00 MiB
  Logical volume "mylv" created
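The "Rounding up" message is plain integer arithmetic: the requested size is rounded up to a whole number of extents. Note that 104 MiB over 13 LEs (see Current LE in the lvdisplay output below) implies this particular VG was evidently created with 8 MiB extents; a sketch of the rounding under that assumption:

```shell
# Round a requested LV size up to a whole number of physical extents,
# mimicking lvcreate's "Rounding up size to full physical extent".
pe_mib=8            # extent size implied by 104 MiB / 13 LEs in the transcript
req_mib=100         # requested with -L 100M
extents=$(( (req_mib + pe_mib - 1) / pe_mib ))    # ceiling division
echo "$extents extents -> $(( extents * pe_mib )) MiB"
```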
[root@localhost ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/myvg/mylv      [you can also run lvdisplay /dev/myvg/mylv]
  LV Name                mylv
  VG Name                myvg
  LV UUID                8AbwtV-ZJ3c-MbnS-AjUz-3n1A-xRUX-5S2TBr
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2017-04-30 15:55:25 +0800
  LV Status              available
  # open                 0
  LV Size                104.00 MiB
  Current LE             13
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
Create the filesystem.
[root@localhost ~]# mke2fs -j /dev/myvg/mylv
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
26624 inodes, 106496 blocks
5324 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
13 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
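The mke2fs figures are internally consistent: 106496 blocks at 8192 blocks per group give the 13 block groups it reports, and 13 groups at 2048 inodes each give the 26624 inodes:

```shell
# Check mke2fs's block-group and inode arithmetic from the output above.
blocks=106496
per_group=8192
groups=$(( (blocks + per_group - 1) / per_group ))   # ceiling division
inodes=$(( groups * 2048 ))                          # 2048 inodes per group
echo "$groups groups, $inodes inodes"
```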
Mount it.
[root@localhost ~]# mount /dev/myvg/mylv /mnt/test
[root@localhost ~]# ls -l /mnt/test
total 12
drwx------. 2 root root 12288 Apr 30 16:00 lost+found
[root@localhost ~]# mount                   [list the mounted devices]
/dev/sda2 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
/dev/sda5 on /home type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/mapper/myvg-mylv on /mnt/test type ext3 (rw)   [a myvg-mylv shows up mounted here; there is no entry under our device name mylv]
Looking at /dev/mapper/myvg-mylv and /dev/myvg/mylv, both point to the same device, /dev/dm-0. That suggests the real physical space, i.e. the physical volumes, was allocated to dm-0, and mylv and myvg-mylv are both just names for the logical volume, nothing more than symlinks [my own guess; corrections from experts welcome].
[root@localhost ~]# ls -lh /dev/myvg/mylv
lrwxrwxrwx. 1 root root 7 Apr 30 16:00 /dev/myvg/mylv -> ../dm-0
[root@localhost ~]# ls -lh /dev/mapper/myvg-mylv
lrwxrwxrwx. 1 root root 7 Apr 30 16:00 /dev/mapper/myvg-mylv -> ../dm-0
[root@localhost ~]# ls -lh /dev/dm-0
brw-rw----. 1 root disk 253, 0 Apr 30 16:00 /dev/dm-0
To confirm the conclusion above, unmount and then remove the LV through either name; the result shows the logical volume simply has two names, both symlinks.
[root@localhost test]# lvremove /dev/mapper/myvg-mylv
  Logical volume myvg/mylv contains a filesystem in use.
[root@localhost test]# lvremove /dev/myvg/mylv
  Logical volume myvg/mylv contains a filesystem in use.
[root@localhost mnt]# umount /dev/myvg/mylv           [a symlink]
[root@localhost mnt]# lvremove /dev/mapper/myvg-mylv  [a symlink]
Do you really want to remove active logical volume mylv? [y/n]: y
  Logical volume "mylv" successfully removed
[root@localhost mnt]# ls -l /dev/md-0                 [the real device is gone; note the path should have been /dev/dm-0, not /dev/md-0]
ls: cannot access /dev/md-0: No such file or directory
[root@localhost mnt]# lvs                             [the real logical device dm-0 has been removed]
[root@localhost mnt]#
Usage: lvextend -L [+]# /PATH/LV. With a "+", the size is increased BY that amount; without it, the LV is resized TO that absolute size.
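The two forms in plain arithmetic, starting from the 1 GiB LV created in the transcript below:

```shell
# Absolute vs. relative sizing in lvextend's -L option.
cur_g=1                      # current LV size in GiB
abs=2                        # lvextend -L 2G  -> resize TO 2 GiB
rel=$(( cur_g + 2 ))         # lvextend -L +2G -> ADD 2 GiB on top
echo "-L 2G gives ${abs}G, -L +2G gives ${rel}G"
```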
[root@localhost mnt]# lvcreate -L 1G -n mylv myvg     [create a logical volume]
  Logical volume "mylv" created
[root@localhost mnt]# mke2fs -j /dev/myvg/mylv        [create the filesystem]
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
65536 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 26 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@localhost mnt]# mount /mnt/test
mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so
[root@localhost mnt]# mount /dev/myvg/mylv /mnt/test  [mount it]
[root@localhost mnt]# vgs                             [check the VG]
  VG   #PV #LV #SN Attr   VSize VFree
  myvg   3   1   0 wz--n- 3.02g 2.02g
[root@localhost mnt]# lvextend -L 2G /dev/myvg/mylv   [extend]
  Size of logical volume myvg/mylv changed from 1.00 GiB (128 extents) to 2.00 GiB (256 extents).
  Logical volume mylv successfully resized
[root@localhost mnt]# df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             3.9G  3.2G  463M  88% /
tmpfs                 504M   72K  504M   1% /dev/shm
/dev/sda1             190M   26M  155M  15% /boot
/dev/sda5             3.7G   34M  3.5G   1% /home
/dev/mapper/myvg-mylv 1008M   34M  924M   4% /mnt/test   [after extending, the size has not changed]
The resize2fs command grows or shrinks an unmounted ext2/ext3 filesystem. If the filesystem is mounted, it can only grow it, and only if the kernel supports online resizing; Linux kernel 2.6 supports growing a mounted filesystem, but only for ext3. Source: http://man.linuxde.net/resize2fs
-d: enable debugging features;
-p: print a progress bar of the percentage completed;
-f: force the resize, overriding the safety checks;
-F: flush the filesystem device's buffers before starting the resize.
[root@localhost mnt]# resize2fs /dev/myvg/mylv        [grow the filesystem to fill the LV]
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/myvg/mylv is mounted on /mnt/test; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/myvg/mylv to 524288 (4k) blocks.
The filesystem on /dev/myvg/mylv is now 524288 blocks long.
[root@localhost mnt]# df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             3.9G  3.2G  463M  88% /
tmpfs                 504M   72K  504M   1% /dev/shm
/dev/sda1             190M   26M  155M  15% /boot
/dev/sda5             3.7G   34M  3.5G   1% /home
/dev/mapper/myvg-mylv 2.0G   34M  1.9G   2% /mnt/test    [the extension succeeded]
[root@localhost mnt]# lvs
  LV   VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  mylv myvg -wi-ao---- 2.00g
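resize2fs reports the new size in filesystem blocks. With the 4 KiB blocks this filesystem uses, the 524288 blocks it prints is exactly the 2 GiB the LV was extended to:

```shell
# Convert resize2fs's block count into GiB using the 4 KiB block size.
blocks=524288
block_kib=4
gib=$(( blocks * block_kib / 1024 / 1024 ))
echo "$gib GiB"
```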
Notes:
Shrinking cannot be done online; unmount first.
Make sure the shrunken size can still hold the existing data.
Before shrinking, force-check the filesystem so it is in a consistent state [e2fsck -f].
Shrinking a logical volume is exactly the reverse of growing one: run resize2fs first, then lvreduce.
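Before shrinking, it is worth checking that the target size still holds the data. A minimal pre-shrink check; the 34 MiB "used" figure is taken from the df output in the transcript below, and the comment lists the sequence that transcript then follows:

```shell
# Safe shrink order: umount -> e2fsck -f -> resize2fs <target> -> lvreduce -L <target>
used_mib=34          # Used column from: df -m /mnt/test
target_mib=1024      # shrinking the filesystem to 1 GiB
if [ "$target_mib" -ge "$used_mib" ]; then
  verdict="ok to shrink"
else
  verdict="target too small"
fi
echo "$verdict"
```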
[root@localhost mnt]# df -lh                [make sure the shrunken size can still hold the data; df only shows mounted partitions]
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             3.9G  3.2G  463M  88% /
tmpfs                 504M   72K  504M   1% /dev/shm
/dev/sda1             190M   26M  155M  15% /boot
/dev/sda5             3.7G   34M  3.5G   1% /home
/dev/mapper/myvg-mylv 2.0G   34M  1.9G   2% /mnt/test
[root@localhost mnt]# umount /mnt/test      [shrinking cannot be done online; unmount first]
[root@localhost mnt]# mount                 [confirm it is unmounted]
/dev/sda2 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
/dev/sda5 on /home type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
[root@localhost mnt]# e2fsck -f /dev/myvg/mylv        [make sure the filesystem is in a consistent state]
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/myvg/mylv: 11/131072 files (0.0% non-contiguous), 16821/524288 blocks
[root@localhost mnt]# resize2fs /dev/myvg/mylv 1G     [shrink the filesystem to 1G]
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/myvg/mylv to 262144 (4k) blocks.
The filesystem on /dev/myvg/mylv is now 262144 blocks long.
[root@localhost mnt]# lvs
  LV   VG   Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  mylv myvg -wi-a----- 2.00g              [the logical volume itself is not yet shrunk]
[root@localhost mnt]# lvreduce -L 1G /dev/myvg/mylv   [shrink the logical volume]
  WARNING: Reducing active logical volume to 1.00 GiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce mylv? [y/n]: y
  Size of logical volume myvg/mylv changed from 2.00 GiB (256 extents) to 1.00 GiB (128 extents).
  Logical volume mylv successfully resized
[root@localhost mnt]# mount /dev/myvg/mylv /mnt/test  [mount it again]
[root@localhost mnt]# mount
/dev/sda2 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
/dev/sda5 on /home type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/mapper/myvg-mylv on /mnt/test type ext3 (rw)
[root@localhost mnt]# df -lh                [check usage of the mounted partitions]
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             3.9G  3.2G  463M  88% /
tmpfs                 504M   72K  504M   1% /dev/shm
/dev/sda1             190M   26M  155M  15% /boot
/dev/sda5             3.7G   34M  3.5G   1% /home
/dev/mapper/myvg-mylv 1008M   34M  924M   4% /mnt/test
Snapshots are usually quite small. A snapshot stores the files on the origin that are about to change: when I modify a file, the snapshot first saves a copy, the modified file stays on the origin, and the unmodified copy lives in the snapshot. That gives you a data backup; the snapshot volume starts out tiny and grows over time. This backs up the software's data, whereas the RAID 1 / RAID 10 setups earlier back up whole disks. If my disk dies outright, the snapshot will not have saved anything and restoring from it is useless [the snapshot also lives on disk].
Many of a snapshot's files are in fact shared with the origin filesystem, so a snapshot is effectively a second entry point into the same partition. That means the snapshot must live on that partition; in LVM terms, in the same volume group [since PEs exist on every PV beneath the VG].
Requirements:
Its lifetime should cover the whole period the snapshot is needed, and during that period the data changed on the origin filesystem must not exceed the snapshot's size, since every change saves one pre-change copy; at most that grows to the size of the origin filesystem.
A snapshot should be read-only.
It must be in the same volume group as the origin volume.
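The copy-on-write behaviour described above can be modelled in a few lines: before the origin overwrites data, the old contents are saved into the snapshot area; until then, the snapshot simply shares the origin's data. A toy model, not real LVM commands:

```shell
# Toy copy-on-write model of an LVM snapshot.
origin="old contents"
snap_store=""                  # snapshot area: empty until the origin changes
read_snapshot() {              # a read falls through to the origin if nothing was saved
  if [ -n "$snap_store" ]; then echo "$snap_store"; else echo "$origin"; fi
}
before=$(read_snapshot)        # shared: sees the origin's data
snap_store="$origin"           # CoW: save the old data first...
origin="new contents"          # ...then let the write proceed
after=$(read_snapshot)         # still sees the pre-change data
echo "before=$before after=$after"
```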
Usage: lvcreate -L # -n SLV_NAME -s -p r /PATH/LV [the path names the LV the snapshot is taken of]
-s: create a snapshot volume
-p: set the permission, r|w
-L: size of the snapshot volume
# /mnt/test  is where the logical volume mylv is mounted
# /mnt/test1 is where the snapshot volume is mounted
[root@localhost test]# ls
lost+found
[root@localhost test]# touch b.txt          [create b.txt on the origin before taking the snapshot]
[root@localhost test]# lvcreate -L 100M -n mylv-snap -s -p r /dev/myvg/mylv     [take a snapshot of mylv]
  Rounding up size to full physical extent 104.00 MiB
  Logical volume "mylv-snap" created
[root@localhost test]# mount /dev/myvg/mylv-snap /mnt/test1                     [mount it]
mount: block device /dev/mapper/myvg-mylv--snap is write-protected, mounting read-only
[root@localhost test]# ls /mnt/test1        [this b.txt is really the b.txt of /mnt/test; the snapshot is just another entry point]
b.txt  lost+found
[root@localhost test]# vi b.txt             [edit b.txt under /mnt/test; only now does the snapshot back up the original empty b.txt]
[root@localhost test]# cat b.txt
dsadsaaaaa
[root@localhost test]# cat /mnt/test1/b.txt [still empty: the snapshot preserved the pre-change copy]
[root@localhost test]#

# remove the snapshot
[root@localhost test]# umount /mnt/test1
[root@localhost test]# lvremove /dev/myvg/mylv-snap
Do you really want to remove active logical volume mylv-snap? [y/n]: y
  Logical volume "mylv-snap" successfully removed