Creating LVM volumes and snapshots, and using the SSM storage manager

LVM: Logical Volume Manager

Concepts (terminology):
PV: Physical Volume
VG: Volume Group
LV: Logical Volume

The smallest allocation unit in LVM is the PE (Physical Extent).
Summary of allocation units:
Layer        Smallest unit
Disk         sector (512 B)
Filesystem   block (1 KiB or 4 KiB)
RAID         chunk (512 KiB by default; set with mdadm -c)
LVM          PE (4 MiB by default; customizable)
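Every LV size LVM reports is a whole number of PEs, which is why the `lvextend` output later in this article speaks in "extents". A quick arithmetic check, using the default 4 MiB PE size and the 1.5 GiB LV created below:

```shell
# An LV always occupies a whole number of physical extents (PEs).
# With a 4 MiB PE size, a 1.5 GiB LV needs:
pe_size_mib=4
lv_size_mib=1536                 # 1.5 GiB expressed in MiB
extents=$(( (lv_size_mib + pe_size_mib - 1) / pe_size_mib ))  # round up
echo "$extents extents"          # 384, matching the lvextend transcript
```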

Creating LVM volumes:
Prepare the partitions:
# fdisk /dev/sdb    # create three partitions: sdb1, sdb2, sdb3

Create the PVs:
[root@localhost ~]# pvcreate /dev/sdb{1,2}
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdb2" successfully created

Create the VG:
[root@localhost ~]# vgcreate Vg1 /dev/sdb{1,2}
Volume group "Vg1" successfully created

Create the LV:
[root@localhost ~]# lvcreate -n LV1 -L 1.5G Vg1
Logical volume "LV1" created.

-n specifies the LV name; -L specifies its size.

Inspecting each layer:
#pvs #pvscan #pvdisplay
#vgs #vgscan #vgdisplay
#lvs #lvscan #lvdisplay

[root@localhost ~]# pvdisplay
  PV Name               /dev/sdb1
  VG Name               Vg1
  PV Size               1.00 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              255
  Free PE               0
  Allocated PE          255
  PV UUID               CdTAEq-AdQ1-eyoO-ZBZu-EIh0-2J3a-FeMWdv
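The numbers in this pvdisplay output are internally consistent, which makes a handy sanity check when reading PV reports. A sketch of the arithmetic:

```shell
# pvdisplay reports a 1.00 GiB PV with 4.00 MiB "not usable"
# (metadata/alignment overhead) and a 4.00 MiB PE size:
pv_mib=1024
unusable_mib=4
pe_mib=4
total_pe=$(( (pv_mib - unusable_mib) / pe_mib ))
echo "Total PE: $total_pe"   # 255, as shown above
```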

Specify the PE size with -s:
[root@localhost ~]# vgcreate -s 16M VGrm /dev/sdb3
Physical volume "/dev/sdb3" successfully created
Volume group "VGrm" successfully created

Using the LVM volume:
[root@localhost ~]# mkfs.xfs /dev/Vg1/LV1
[root@localhost ~]# mkdir /lv1
[root@localhost ~]# mount /dev/Vg1/LV1 /lv1/

Extending an LV:
First, confirm that the VG has free space available:
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VGrm 1 0 0 wz--n- 1008.00m 1008.00m
Vg1 2 1 0 wz--n- 1.99g 504.00m
rhel 1 2 0 wz--n- 12.00g 4.00m
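This pre-flight check can be scripted. A minimal sketch, using the VFree value copied from the `vgs` output above (on a live system you would capture it with something like `vgs --noheadings -o vg_free --units m Vg1`):

```shell
vg_free="504.00m"    # sample value from the transcript above
need_mib=300         # size of the planned extension
free_mib=${vg_free%%.*}          # strip decimals and unit -> 504
if [ "$free_mib" -ge "$need_mib" ]; then
    echo "OK: ${free_mib}M free, extending by ${need_mib}M fits"
else
    echo "Not enough free space in the VG" >&2
fi
```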

Extend the logical volume:
[root@localhost ~]# lvextend -L +300M /dev/Vg1/LV1
Size of logical volume Vg1/LV1 changed from 1.50 GiB (384 extents) to 1.79 GiB (459 extents).
Logical volume LV1 successfully resized.
[root@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
LV1 Vg1 -wi-ao---- 1.79g
root rhel -wi-ao---- 10.00g
swap rhel -wi-ao---- 2.00g

On RHEL 7 (XFS), grow the filesystem:
[root@localhost ~]# xfs_growfs /dev/Vg1/LV1
On RHEL 6 (ext4), grow the filesystem:
[root@localhost ~]# resize2fs /dev/Vg1/LV1
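Which grow command applies depends on the filesystem type rather than on the RHEL version as such. A sketch that selects the command from the type (the `fstype` value is hard-coded here for illustration; on a live system it would come from something like `lsblk -no FSTYPE /dev/Vg1/LV1`):

```shell
fstype=xfs                       # stand-in for: lsblk -no FSTYPE <device>
case "$fstype" in
    xfs)            grow_cmd="xfs_growfs" ;;   # XFS grows while mounted
    ext2|ext3|ext4) grow_cmd="resize2fs"  ;;   # ext family
    *)              grow_cmd="" ;;             # unknown: do nothing
esac
echo "grow with: $grow_cmd"
```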

Extending a VG:
[root@localhost ~]# vgextend Vg1 /dev/sdb3
Physical volume "/dev/sdb3" successfully created
Volume group "Vg1" successfully extended

[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 rhel lvm2 a-- 12.00g 4.00m
/dev/sdb1 Vg1 lvm2 a-- 1020.00m 0
/dev/sdb2 Vg1 lvm2 a-- 1020.00m 204.00m
/dev/sdb3 VGrm lvm2 a-- 1008.00m 1008.00m

Shrinking LVM volumes:
LVM supports shrinking a volume online, but an XFS filesystem cannot be shrunk at all; Btrfs supports online shrinking.

Extended reading:
A brief introduction to Btrfs
Thanks to its outstanding stability, the ext family (ext2/3) long served as the de facto standard Linux filesystem. In recent years ext2/3 exposed scalability problems, which gave rise to ext4. Theodore Ts'o, the principal developer of ext4, has praised btrfs and expects it to become the next-generation standard Linux filesystem.
Btrfs features:
First, scalability: btrfs's most important design goal is to meet the scalability demands of large machines, so that overall performance does not degrade as system capacity grows.
Second, data integrity.
Third, multi-device management: Btrfs supports snapshots and clones.
Finally, a number of advanced features that markedly improve time/space performance: delayed allocation, optimized storage of small files, directory indexing, and so on.

Shrinking an LV:
Shrink LV1 down to 1 GB:
[root@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
LV1 Vg1 -wi-ao---- 1.79g
root rhel -wi-ao---- 10.00g
swap rhel -wi-ao---- 2.00g

[root@localhost ~]# lvreduce -L 1G /dev/Vg1/LV1
WARNING: Reducing active and open logical volume to 1.00 GiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce LV1? [y/n]: y
Size of logical volume Vg1/LV1 changed from 1.79 GiB (459 extents) to 1.00 GiB (256 extents).
Logical volume LV1 successfully resized.
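Because shrinking can destroy data, order matters: shrink the filesystem before the LV, never after. XFS cannot be shrunk at all, so this sketch assumes an ext4 filesystem; the steps are only printed, not executed, since they are destructive and need root:

```shell
dev=/dev/Vg1/LV1     # illustrative device name
new_size=1G
# The plan, in the safe order (printed, not executed):
plan="umount $dev
e2fsck -f $dev
resize2fs $dev $new_size
lvreduce -L $new_size $dev"
printf '%s\n' "$plan"
n_steps=$(printf '%s\n' "$plan" | wc -l)
```

Recent LVM versions also offer `lvreduce -r` (`--resizefs`), which shrinks the filesystem and the LV in one step.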

Shrinking a VG:
Note: a VG can be shrunk without unmounting an in-use LV, but only PVs that hold no allocated extents can be removed; otherwise you get the following:
[root@localhost ~]# vgreduce Vg1 /dev/sdb1
Physical volume "/dev/sdb1" still in use

Before shrinking, confirm whether the physical volume is in use:
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 rhel lvm2 a-- 12.00g 4.00m
/dev/sdb1 Vg1 lvm2 a-- 1020.00m 0
/dev/sdb2 Vg1 lvm2 a-- 1020.00m 1016.00m
/dev/sdb3 VGrm lvm2 a-- 1008.00m 1008.00m

[root@localhost ~]# vgreduce Vg1 /dev/sdb2
Removed "/dev/sdb2" from volume group "Vg1"

Removing LVM:
The device must be unmounted before removal, or you get the following error:
[root@localhost ~]# lvremove /dev/Vg1/LV1
Logical volume Vg1/LV1 contains a filesystem in use.

Unmount the device:
[root@localhost ~]# umount /dev/Vg1/LV1
Remove the LV:
[root@localhost ~]# lvremove /dev/Vg1/LV1
Do you really want to remove active logical volume LV1? [y/n]: y
Logical volume "LV1" successfully removed
Remove the VG:
[root@localhost ~]# vgremove Vg1
Volume group "Vg1" successfully removed
Remove the PVs:
[root@localhost ~]# pvremove /dev/sdb{1,2}
Labels on physical volume "/dev/sdb1" successfully wiped
Labels on physical volume "/dev/sdb2" successfully wiped

LVM snapshots
LVM snapshots have two main uses. The first is cloning virtual machines: when working with Xen, for example, you can build one complete VM of, say, 10 GB, create a 3 GB snapshot of it, and boot a test VM from that snapshot; after the experiment you simply delete the snapshot, leaving the original VM pristine.
The second use is point-in-time backup: to keep the system consistent, first take a snapshot that freezes the current state. The snapshot's contents stay fixed while the system itself keeps running, so backing up from the snapshot yields a backup without interrupting service.
First, prepare an LV:
[root@localhost ~]# vgcreate vg1 /dev/sdb{1,2}
[root@localhost ~]# lvcreate -n lv1 -L 1.5G vg1

Format lv1 and mount it:
[root@localhost ~]# mkfs.xfs /dev/vg1/lv1
[root@localhost ~]# mount /dev/vg1/lv1 /lv1/
Prepare a test file:
[root@localhost ~]# cp /etc/passwd /lv1/
[root@localhost ~]# ls /lv1/
passwd

Create a 300 MB snapshot of lv1.
Syntax: lvcreate -s -n <snapshot name> -L <snapshot size> <origin device>
[root@localhost ~]# lvcreate -s -n lv1_sp -L 300M vg1/lv1
Logical volume "lv1_sp" created.
[root@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root rhel -wi-ao---- 10.00g
swap rhel -wi-ao---- 2.00g
lv1 vg1 owi-aos--- 1.50g
lv1_sp vg1 swi-a-s--- 300.00m lv1 0.00
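The `Data%` column (0.00 here) is the snapshot's copy-on-write usage; it grows as the origin changes, and a classic LVM snapshot that reaches 100% becomes invalid. A minimal monitoring sketch, with the value copied from the transcript above (a live value would come from `lvs --noheadings -o data_percent vg1/lv1_sp`):

```shell
data_pct="0.00"              # sample value from the lvs output above
pct_int=${data_pct%%.*}      # integer part -> 0
: "${pct_int:=0}"            # guard against an empty value
if [ "$pct_int" -ge 80 ]; then
    echo "WARNING: snapshot ${data_pct}% full -- extend it or back it up now"
else
    echo "snapshot usage: ${data_pct}%"
fi
```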

Using the snapshot
(The first mount below fails because an XFS snapshot carries the same filesystem UUID as its mounted origin; either unmount the origin first, as shown, or mount the snapshot with -o nouuid.)
[root@localhost ~]# mount /dev/vg1/lv1_sp /opt/
mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg1-lv1_sp,
missing codepage or helper program, or other error

In some cases useful info is found in syslog - try
   dmesg | tail or so.

[root@localhost ~]# umount /dev/vg1/lv1
[root@localhost ~]# mount /dev/vg1/lv1_sp /opt/
[root@localhost ~]# ls /opt/
passwd

Using the System Storage Manager
The System Storage Manager (ssm), new in RHEL 7 / CentOS 7, is a unified command-line interface developed by Red Hat for managing a wide variety of storage devices. Three volume-management backends are currently available to ssm: LVM, btrfs, and crypt.

1) Install the ssm system storage manager
[root@localhost ~]# yum -y install system-storage-manager.noarch

View the help for the ssm command:
[root@localhost ~]# ssm --help
usage: ssm [-h] [--version] [-v] [-f] [-b BACKEND] [-n]
{check,resize,create,list,add,remove,snapshot,mount} ...

System Storage Manager

optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
-v, --verbose Show aditional information while executing.
-f, --force Force execution in the case where ssm has some doubts
or questions.
-b BACKEND, --backend BACKEND
Choose backend to use. Currently you can choose from
(lvm,btrfs,crypt).
-n, --dry-run Dry run. Do not do anything, just parse the command
line options and gather system information if
necessary. Note that with this option ssm will not
perform all the check as some of them are done by the
backends themselves. This option is mainly used for
debugging purposes.

Commands:
{check,resize,create,list,add,remove,snapshot,mount}
check Check consistency of the file system on the device.
resize Change or set the volume and file system size.
create Create a new volume with defined parameters.
list List information about all detected, devices, pools,
volumes and snapshots in the system.
add Add one or more devices into the pool.
remove Remove devices from the pool, volumes or pools.
snapshot Take a snapshot of the existing volume.
mount Mount a volume with file system to specified locaion.
List devices:
[root@localhost ~]# ssm list

Device Free Used Total Pool Mount point

/dev/sda 20.00 GB PARTITIONED
/dev/sda1 500.00 MB /boot
/dev/sda2 40.00 MB 19.47 GB 19.51 GB rhel
/dev/sdb 20.00 GB
/dev/sdb1 3.20 GB 1.79 GB 5.00 GB vg1
/dev/sdb2 5.00 GB 0.00 KB 5.00 GB vg1
/dev/sdb3 4.98 GB 0.00 KB 5.00 GB VGrm
/dev/sdc 20.00 GB
/dev/sdd 20.00 GB


## List pools
Pool Type Devices Free Used Total

VGrm lvm 1 4.98 GB 0.00 KB 4.98 GB
rhel lvm 1 40.00 MB 19.47 GB 19.51 GB
vg1 lvm 2 8.20 GB 1.79 GB 9.99 GB


## List volumes
Volume Pool Volume size FS FS size Free Type Mount point

/dev/rhel/root
rhel 17.47 GB xfs 17.46 GB 16.26 GB linear /
/dev/rhel/swap
rhel 2.00 GB linear
/dev/vg1/lv1 vg1 1.50 GB xfs 1.49 GB 1.49 GB linear
/dev/sda1 500.00 MB xfs 496.67 MB 397.52 MB part /boot


## List snapshots
Snapshot Origin Pool Volume size Size Type

/dev/vg1/lv1_sp lv1 vg1 300.00 MB 2.01 MB linear

Add the physical disk sdc to an LVM pool.
Syntax: ssm add -p <pool> <device>
[root@localhost ~]# ssm add -p vg1 /dev/sdc
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VGrm 1 0 0 wz--n- 1008.00m 1008.00m
rhel 1 2 0 wz--n- 12.00g 4.00m
vg1 3 2 1 wz--n- 21.99g 20.20g

Extending an LV (an LV that has a snapshot cannot be extended this way):
[root@localhost ~]# ssm add -p rhel /dev/sdd
[root@localhost ~]# ssm resize -s+10G /dev/rhel/root

Note:
When you extend with ssm, you do not need to grow the filesystem in a separate step.
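The contrast with the traditional workflow can be summarized as command sequences (printed rather than executed; device names follow the transcripts above):

```shell
# Traditional LVM: grow the LV, then grow the filesystem in a second step.
traditional="lvextend -L +10G /dev/rhel/root
xfs_growfs /dev/rhel/root"
# ssm: one command that also grows the filesystem.
with_ssm="ssm resize -s +10G /dev/rhel/root"
printf '%s\n' "$traditional" "$with_ssm"
n_trad=$(printf '%s\n' "$traditional" | wc -l)
```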

Example:
Create a pool named vg2, and on it a 1 GB LVM volume named lv2; format it as xfs and mount it at /lv2.

Create the mount point:
[root@localhost ~]# mkdir /lv2

[root@localhost ~]# ssm create -s 1G -n lv2 --fstype xfs -p vg2 /dev/sde /lv2
File descriptor 7 (/dev/urandom) leaked on lvm invocation. Parent PID 40535: /usr/bin/python
Physical volume "/dev/sde" successfully created
Volume group "vg2" successfully created
File descriptor 7 (/dev/urandom) leaked on lvm invocation. Parent PID 40535: /usr/bin/python
Logical volume "lv2" created.
meta-data=/dev/vg2/lv2 isize=256 agcount=4, agsize=65536 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=262144, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

Check the mount:
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 20G 3.2G 17G 16% /
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 84K 2.0G 1% /dev/shm
tmpfs 2.0G 9.0M 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/sr0 3.8G 3.8G 0 100% /mnt
/dev/sda1 197M 125M 72M 64% /boot
tmpfs 394M 16K 394M 1% /run/user/42
tmpfs 394M 0 394M 0% /run/user/0
/dev/mapper/vg1-lv1_sp 1.5G 33M 1.5G 3% /opt
/dev/mapper/vg2-lv2 1014M 33M 982M 4% /lv2

Create a snapshot:
[root@localhost ~]# ssm snapshot -s 500M -n lv2_sp /dev/vg2/lv2
Use the snapshot:
[root@localhost ~]# umount /lv2/
[root@localhost ~]# mkdir /lv2_snapshot
[root@localhost ~]# mount /dev/vg2/lv2_sp /lv2_snapshot/

Remove an LVM volume:
[root@localhost ~]# ssm remove /dev/vg2/lv2_sp
Device '/dev/vg2/lv2_sp' is mounted on '/lv2_snapshot' Unmount (N/y/q) ? Y
File descriptor 7 (/dev/urandom) leaked on lvm invocation. Parent PID 40749: /usr/bin/python
Do you really want to remove active logical volume lv2_sp? [y/n]: y
Logical volume "lv2_sp" successfully removed

Note: when removing a mounted volume, ssm prompts to unmount it first.
