Linux System Administration -- LVM Logical Volumes and Disk Quota Exercises

1. Add an 80 GB SCSI disk to the host
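If the disk is hot-added to a running virtual machine, the kernel may not notice it until the SCSI bus is rescanned. A minimal sketch, assuming the usual sysfs layout (the host numbers under /sys/class/scsi_host/ vary per system, and the write requires root):

```shell
# Rescan every SCSI host so the kernel probes for newly attached devices;
# "- - -" means all channels, all targets, all LUNs.
for h in /sys/class/scsi_host/host*/scan; do
    echo "- - -" > "$h"
done
lsblk    # the new 80G disk should now appear, e.g. as /dev/sdb
```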

 

 

2. Create three 20 GB primary partitions

[root@localhost chen]# fdisk /dev/sdb

Command (m for help): n

Partition type:

   p   primary (0 primary, 0 extended, 4 free)

   e   extended

Select (default p):

Using default response p    // defaults to a primary partition

Partition number (1-4, default 1):

First sector (2048-167772159, default 2048):

Using default value 2048

Last sector, +sectors or +size{K,M,G} (2048-167772159, default 167772159): +20G

Partition 1 of type Linux and of size 20 GiB is set

[root@localhost chen]# partx /dev/sdb

NR    START       END  SECTORS SIZE NAME UUID

 1     2048  41945087 41943040  20G

 2 41945088  83888127 41943040  20G

 3 83888128 125831167 41943040  20G
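If any partition on the disk is in use when fdisk writes the new table, the kernel can keep using the old one. A hedged sketch of forcing a re-read before moving on (both are standard util-linux/parted tools and need root):

```shell
partprobe /dev/sdb    # ask the kernel to re-read the partition table
partx -a /dev/sdb     # or: register any partitions the kernel missed
```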

3. Convert the three primary partitions into physical volumes (pvcreate) and scan the system's physical volumes

[root@localhost chen]# pvscan    // scan the physical volumes currently on the system

  PV /dev/sda2   VG centos          lvm2 [<39.00 GiB / 4.00 MiB free]

  Total: 1 [<39.00 GiB] / in use: 1 [<39.00 GiB] / in no VG: 0 [0   ]

[root@localhost chen]# pvcreate /dev/sdb[123]     // create physical volumes from partitions 1-3 of /dev/sdb

  Physical volume "/dev/sdb1" successfully created.

  Physical volume "/dev/sdb2" successfully created.

  Physical volume "/dev/sdb3" successfully created.

[root@localhost chen]# pvscan

  PV /dev/sda2   VG centos          lvm2 [<39.00 GiB / 4.00 MiB free]

  PV /dev/sdb1                      lvm2 [20.00 GiB]

  PV /dev/sdb2                      lvm2 [20.00 GiB]

  PV /dev/sdb3                      lvm2 [20.00 GiB]

  Total: 4 [<99.00 GiB] / in use: 1 [<39.00 GiB] / in no VG: 3 [60.00 GiB]

4. Create a volume group named myvg from two of the physical volumes and check its size

[root@localhost chen]# vgscan   // scan the volume groups currently on the system

  Reading volume groups from cache.

  Found volume group "centos" using metadata type lvm2

[root@localhost chen]# vgcreate myvg /dev/sdb[12]   // create volume group myvg from physical volumes /dev/sdb1 and /dev/sdb2

  Volume group "myvg" successfully created

[root@localhost chen]# vgscan

  Reading volume groups from cache.

  Found volume group "centos" using metadata type lvm2

  Found volume group "myvg" using metadata type lvm2

[root@localhost chen]# vgdisplay myvg     // display information about volume group myvg

  --- Volume group ---

  VG Name               myvg

  System ID

  Format                lvm2

  Metadata Areas        2

  Metadata Sequence No  1

  VG Access             read/write

  VG Status             resizable

  MAX LV                0

  Cur LV                0

  Open LV               0

  Max PV                0

  Cur PV                2

  Act PV                2

  VG Size               39.99 GiB

  PE Size               4.00 MiB

  Total PE              10238

  Alloc PE / Size       0 / 0

  Free  PE / Size       10238 / 39.99 GiB

  VG UUID               iIisQE-rwE8-dR41-YRFn-cNmv-eGtt-gbiZtw
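The vgdisplay numbers are internally consistent: each physical extent (PE) is 4 MiB, and each 20 GiB PV contributes 5119 rather than 5120 extents (a little space goes to LVM metadata), giving 10238 in total. The VG size then follows directly:

```shell
# Total PE x PE size = usable VG size
awk 'BEGIN { printf "%d MiB = %.2f GiB\n", 10238 * 4, 10238 * 4 / 1024 }'
# prints "40952 MiB = 39.99 GiB"
```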

 

5. Create a 30 GB logical volume mylv

[root@localhost chen]# lvscan       // scan the logical volumes currently on the system

  ACTIVE            '/dev/centos/swap' [3.00 GiB] inherit

  ACTIVE            '/dev/centos/root' [35.99 GiB] inherit

[root@localhost chen]# lvcreate -L 30G -n mylv myvg      // create logical volume mylv from volume group myvg, with a size of 30 GB

  Logical volume "mylv" created.

[root@localhost chen]# lvscan

  ACTIVE            '/dev/centos/swap' [3.00 GiB] inherit

  ACTIVE            '/dev/centos/root' [35.99 GiB] inherit

  ACTIVE            '/dev/myvg/mylv' [30.00 GiB] inherit

6. Format the logical volume with an xfs filesystem, mount it on /data, and create a file to test it

[root@localhost chen]# mkdir /data

[root@localhost chen]# mkfs.xfs /dev/myvg/mylv     // format the logical volume as xfs

meta-data=/dev/myvg/mylv         isize=512    agcount=4, agsize=1966080 blks

         =                       sectsz=512   attr=2, projid32bit=1

         =                       crc=1        finobt=0, sparse=0

data     =                       bsize=4096   blocks=7864320, imaxpct=25

         =                       sunit=0      swidth=0 blks

naming   =version 2              bsize=4096   ascii-ci=0 ftype=1

log      =internal log           bsize=4096   blocks=3840, version=2

         =                       sectsz=512   sunit=0 blks, lazy-count=1

realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@localhost chen]# mount /dev/myvg/mylv /data   // mount logical volume mylv on /data

[root@localhost chen]# touch /data/cs

[root@localhost chen]# vi /data/cs

[root@localhost chen]# cat /data/cs

fuhcvgfggffffffffffas

asfasfv

tqw

g'

erhfns

nengf

nngf

7. Extend the logical volume to 35 GB

[root@localhost chen]# lvextend -L 35G /dev/myvg/mylv   // extend the LV to 35 GB (an LV can only grow into free space remaining in its volume group)

  Size of logical volume myvg/mylv changed from 30.00 GiB (7680 extents) to 35.00 GiB (8960 extents).

  Logical volume myvg/mylv successfully resized.

[root@localhost chen]# lvdisplay /dev/myvg/mylv    // display information about logical volume mylv

  --- Logical volume ---

  LV Path                /dev/myvg/mylv

  LV Name                mylv

  VG Name                myvg

  LV UUID                3QF5Vv-n2Aq-l89C-Avif-04SP-n8C5-qiSkyf

  LV Write Access        read/write

  LV Creation host, time localhost.localdomain, 2019-08-01 17:22:12 +0800

  LV Status              available

  # open                 1

  LV Size                35.00 GiB

  Current LE             8960

  Segments               2

  Allocation             inherit

  Read ahead sectors     auto

  - currently set to     8192

  Block device           253:2
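Note that lvextend only enlarges the underlying block device; the xfs filesystem inside it is still 30 GiB, so df on /data will not show the extra space yet. A sketch of the missing step (xfs_growfs takes the mount point, and xfs can only grow, never shrink):

```shell
xfs_growfs /data    # grow the xfs filesystem to fill the enlarged LV
df -h /data         # should now report about 35G
# Alternatively, do both in one step:
#   lvextend -r -L 35G /dev/myvg/mylv    # -r/--resizefs also grows the filesystem
```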

8. Edit /etc/fstab to mount the logical volume with disk quota options

[root@localhost chen]# vi /etc/fstab

 

// mount -o defaults,usrquota,grpquota /data (the quota options can also be supplied on the command line with mount -o)
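The transcript does not show the line added to /etc/fstab; a sketch of what it would look like for this lab (the usrquota and grpquota mount options enable user and group quota enforcement at mount time):

```
# /etc/fstab entry for the quota-enabled mount
/dev/myvg/mylv  /data  xfs  defaults,usrquota,grpquota  0 0
```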

9. Create disk quotas: for user crushlinux under /data, a soft limit of 80 MB and a hard limit of 100 MB on disk usage,

and a soft limit of 80 and a hard limit of 100 on the number of files.

[root@localhost chen]# mount -o defaults,usrquota,grpquota /data

[root@localhost chen]# quotacheck -uv /data     // scan the filesystem under /data and build the user (-u) quota records, verbosely (-v)

quotacheck: Skipping /dev/mapper/myvg-mylv [/data]

quotacheck: Cannot find filesystem to check or filesystem not mounted with quota option.

(This is expected: xfs keeps quota accounting inside the filesystem itself, so quotacheck skips it; it is the ext2/3/4 family that needs quotacheck to build aquota files.)

[root@localhost chen]# quotaon /data    // turn quota enforcement on

quotaon: Enforcing group quota already on /dev/mapper/myvg-mylv

quotaon: Enforcing user quota already on /dev/mapper/myvg-mylv

[root@localhost chen]# edquota -u crushlinux        // edit the user's quota limits
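edquota drops you into an editor; the buffer is not shown in the transcript. For this exercise it would be filled in roughly as follows (a sketch: block limits are in 1 KiB units, so 80 M and 100 M become 80000 and 100000, matching the reports below; the blocks and inodes columns are usage counters maintained by the kernel and should be left alone):

```
Disk quotas for user crushlinux (uid 1001):
  Filesystem                   blocks    soft    hard  inodes  soft  hard
  /dev/mapper/myvg-mylv             0   80000  100000       1    80   100
```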

 

[root@localhost chen]# quota -uvs crushlinux     // view user crushlinux's quota report

Disk quotas for user crushlinux (uid 1001):

     Filesystem   space   quota   limit   grace   files   quota   limit   grace

/dev/mapper/myvg-mylv

                     0K  80000K 100000K              1      80     100

[root@localhost chen]# repquota -uvs /data        // view the quota report for the /data filesystem

*** Report for user quotas on device /dev/mapper/myvg-mylv

Block grace time: 7days; Inode grace time: 7days

                        Space limits                File limits

User            used    soft    hard  grace    used  soft  hard  grace

----------------------------------------------------------------------

root      --     16K      0K      0K              4     0     0

crushlinux --      0K  80000K 100000K              1    80   100

#777      --      0K      0K      0K              1     0     0

 

*** Status for user quotas on device /dev/mapper/myvg-mylv

Accounting: ON; Enforcement: ON

Inode: #67 (3 blocks, 3 extents)

 

10. Test under /data with touch and dd

[crushlinux@localhost chen]$ touch /data/a{1..111}   // create files to test the file-count limit

touch: cannot touch '/data/a101': Disk quota exceeded

touch: cannot touch '/data/a102': Disk quota exceeded

touch: cannot touch '/data/a103': Disk quota exceeded

touch: cannot touch '/data/a104': Disk quota exceeded

touch: cannot touch '/data/a105': Disk quota exceeded

touch: cannot touch '/data/a106': Disk quota exceeded

touch: cannot touch '/data/a107': Disk quota exceeded

touch: cannot touch '/data/a108': Disk quota exceeded

touch: cannot touch '/data/a109': Disk quota exceeded

touch: cannot touch '/data/a110': Disk quota exceeded

touch: cannot touch '/data/a111': Disk quota exceeded

[crushlinux@localhost chen]$ dd if=/dev/zero of=/data/qqq bs=1M count=101    // test the disk-usage limit

dd: error writing '/data/qqq': Disk quota exceeded

98+0 records in

97+0 records out

101711872 bytes (102 MB) copied, 0.368447 s, 276 MB/s
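Both failures line up with the limits set above: the hard inode limit of 100 stops touch once 100 files are charged to crushlinux, and the hard block limit of 100000 KiB stops dd mid-run. 100000 KiB is just under 98 MiB, which is why dd wrote 97 full 1 MiB blocks and failed on the 98th:

```shell
# Hard block limit converted from 1 KiB units to MiB
awk 'BEGIN { printf "%.2f MiB\n", 100000 / 1024 }'   # prints "97.66 MiB"
```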

11. Check quota usage: the user's view

[crushlinux@localhost chen]$ quota -uvs crushlinux

Disk quotas for user crushlinux (uid 1001):

     Filesystem   space   quota   limit   grace   files   quota   limit   grace

/dev/mapper/myvg-mylv

                     0K  80000K 100000K               1      80     100

12. Check quota usage: the filesystem's view

[root@localhost chen]# repquota -uvs /data

*** Report for user quotas on device /dev/mapper/myvg-mylv

Block grace time: 7days; Inode grace time: 7days

                        Space limits                File limits

User            used    soft    hard  grace    used  soft  hard  grace

----------------------------------------------------------------------

root      --     16K      0K      0K              4     0     0

crushlinux --      0K  80000K 100000K              1    80   100

#777      --      0K      0K      0K              1     0     0

 

*** Status for user quotas on device /dev/mapper/myvg-mylv

Accounting: ON; Enforcement: ON

Inode: #67 (3 blocks, 3 extents)
