The root administrator can use the disk quota service to limit the maximum amount of disk space or the maximum number of files that a particular user or group may use in a specific directory; once that limit is reached, no further usage is allowed.
Limit type | Description |
---|---|
Soft limit | When the soft limit is reached the user is warned, but may still continue to use space within the allotted quota |
Hard limit | When the hard limit is reached the user is warned and the operation is forcibly terminated |
Configuration steps (see the command sketch right after this list):

1. Enable quota on the storage
2. Create a new user
3. Assign a quota to the user
4. Switch to the user and test
5. Verify the result
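Before walking through each step, here is a condensed sketch of the whole procedure. It assumes the quota is enabled on /boot and the test user is called nice, matching the demo below, so treat the exact values as illustrative:

# 1. Enable user quota: add uquota to the mount options in /etc/fstab, then remount (or reboot)
vim /etc/fstab        # e.g. "UUID=...  /boot  xfs  defaults,uquota  0 0"
# 2. Create the test user and allow it to write to the mount point
useradd nice
chmod -R o+w /boot
# 3. Assign block and inode limits to the user
xfs_quota -x -c 'limit bsoft=3m bhard=6m isoft=3 ihard=6 nice' /boot
# 4./5. Switch to the user and try to exceed the limits
su - nice
dd if=/dev/zero of=/boot/nice bs=15M count=1   # expected to stop at the hard limit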
[root@localhost ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sat Jul 3 11:06:41 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=1253ac5b-eaed-4c4c-808d-09fb4828358f /     xfs  defaults        0 0
UUID=bcc55e4d-0854-44a8-9449-dad12374a6d3 /boot xfs  defaults,uquota 0 0
UUID=45a308e8-b622-4f24-9ea2-d4f473388981 swap  swap defaults        0 0
[root@localhost ~]# mount | grep boot
/dev/sda1 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,usrquota)
[root@localhost ~]# useradd nice
[root@localhost ~]# chmod -R o+w /boot
The xfs_quota command is used to manage disk quotas on a device.
Syntax: "xfs_quota [options] quota filesystem"
[root@localhost ~]# xfs_quota -x -c 'limit bsoft=3m bhard=6m isoft=3 ihard=6 nice' /boot
xfs_quota -x -c report /boot
[root@localhost ~]# xfs_quota -x -c report /boot
User quota on /boot (/dev/sda1)
                               Blocks
User ID          Used       Soft       Hard    Warn/Grace
---------- --------------------------------------------------
root           128472          0          0     00 [--------]
nice                0       3072       6144     00 [--------]
[root@localhost ~]# su - nice
[nice@localhost ~]$ cd /boot
[nice@localhost boot]$ dd if=/dev/zero of=/boot/nice bs=5M count=1
1+0 records in
1+0 records out
5242880 bytes (5.2 MB, 5.0 MiB) copied, 0.0219734 s, 239 MB/s
# Writes beyond the hard limit are refused
[nice@localhost boot]$ dd if=/dev/zero of=/boot/nice bs=15M count=1
dd: error writing '/boot/nice': Disk quota exceeded
1+0 records in
0+0 records out
4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0437072 s, 96.0 MB/s
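The limit command above also set isoft=3 and ihard=6, so the file-count (inode) limit can be tested the same way. A minimal sketch, assuming the nice user is still in /boot (the file names f1…f7 are made up for the test); once the user owns more inodes than the hard limit allows, touch fails with "Disk quota exceeded":

[nice@localhost boot]$ touch f{1..7}    # creation stops once the inode hard limit (6) is reached
touch: cannot touch 'f7': Disk quota exceeded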
The edquota command edits the disk quota of a user or group:

Parameter | Purpose |
---|---|
-u | Set the quota for a user |
-g | Set the quota for a group |
-p | Copy existing rules to a new user/group |
-t | Edit the grace period |
[root@localhost ~]# edquota -u nice
Disk quotas for user nice (uid 1001):
  Filesystem                   blocks       soft       hard     inodes     soft     hard
  /dev/sda1                      4096       3072     102400          1        3        6
[nice@localhost ~]$ dd if=/dev/zero of=/boot/nice bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.0423891 s, 2.5 GB/s
[nice@localhost ~]$ dd if=/dev/zero of=/boot/nice bs=1M count=101
dd: error writing '/boot/nice': Disk quota exceeded
101+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.0421804 s, 2.5 GB/s
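The table above also lists -p (copy rules) and -t (grace period), which are not demonstrated in this session. A minimal sketch, assuming a second account named tom exists (the name is made up):

[root@localhost ~]# edquota -p nice -u tom     # copy nice's quota rules to tom
[root@localhost ~]# edquota -t                 # opens an editor to adjust the block/inode grace periods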
VDO (Virtual Data Optimizer): in short, 10G of physical space can be presented as 100G; when content is duplicated, only a link to the existing data is kept (similar to the "instant upload" feature of Baidu Netdisk).
Summary of compression results for various types of files:
File name | Description | Type | Original size (KB) | Actual space used (KB) |
---|---|---|---|---|
dickens | Collected works of Dickens | English text | 9953 | 9948 |
mozilla | Mozilla 1.0 executable | Executable | 50020 | 33228 |
mr | Medical magnetic resonance image | Image | 9736 | 9272 |
nci | Structured chemistry database | Database | 32767 | 10168 |
ooffice | Open Office.org 1.01 DLL | Executable | 6008 | 5640 |
osdb | Sample MySQL-format database for benchmarking | Database | 9849 | 9824 |
reymont | Books by Władysław Reymont | | 6471 | 6312 |
samba | Samba source code | Source code | 21100 | 11768 |
sao | Star catalog data | Astronomical binary file | 7081 | 7036 |
webster | Unabridged dictionary | HTML | 40487 | 40144 |
xml | XML files | HTML | 5220 | 2180 |
x-ray | Medical X-ray image | Hospital data | 8275 | 8260 |
Steps (a condensed command sketch follows this list):

1. Add a new disk
2. Install the vdo service
3. Create a VDO volume
4. Format the VDO volume
5. Mount it and add it to /etc/fstab
6. Check the result
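In condensed form, the VDO workflow used below looks roughly like this. Package names assume a RHEL 8-style system; the volume name storage and the device /dev/sdc match the demo that follows, so treat this as a sketch rather than the exact session:

dnf install -y vdo kmod-kvdo                                # install the VDO service
vdo create --name=storage --device=/dev/sdc --vdoLogicalSize=100G
mkfs.xfs /dev/mapper/storage                                # format the VDO volume
mkdir /storage && mount /dev/mapper/storage /storage        # mount it
blkid /dev/mapper/storage                                   # get the UUID for /etc/fstab
echo 'UUID=<uuid> /storage xfs defaults,_netdev 0 0' >> /etc/fstab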
[root@localhost ~]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sda3  /dev/sdb  /dev/sdc  /dev/sdd
[root@localhost ~]# systemctl status vdo
● vdo.service - VDO volume services
   Loaded: loaded (/usr/lib/systemd/system/vdo.service; enabled; vendor preset: enabled)
   Active: active (exited) since Wed 2021-07-14 07:18:19 PDT; 45min ago
  Process: 1041 ExecStart=/usr/bin/vdo start --all --confFile /etc/vdoconf.yml (code=exited, statu>
 Main PID: 1041 (code=exited, status=0/SUCCESS)

Jul 14 07:18:18 localhost.localdomain systemd[1]: Starting VDO volume services...
Jul 14 07:18:19 localhost.localdomain systemd[1]: Started VDO volume services.
[root@localhost dev]# vdo create --name=storage --device=/dev/sdc --vdoLogicalSize=100G
Creating VDO storage
Starting VDO storage
Starting compression on VDO storage
VDO instance 0 volume is ready at /dev/mapper/storage
[root@localhost ~]# vdo create --name=nice --device=/dev/sdb --vdoLogicalSize=40G
Creating VDO nice
Starting VDO nice
Starting compression on VDO nice
VDO instance 0 volume is ready at /dev/mapper/nice
[root@localhost ~]# file /dev/mapper/nice
/dev/mapper/nice: symbolic link to ../dm-0
[root@localhost ~]# mkfs.xfs /dev/mapper/nice
meta-data=/dev/mapper/nice       isize=512    agcount=4, agsize=2621440 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=10485760, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

My AMD CPU is not compatible and it keeps getting stuck here; I'll try to find a workaround in a couple of days!
[root@linuxprobe ~]# blkid /dev/mapper/storage
/dev/mapper/storage: UUID="cd4e9f12-e16a-415c-ae76-8de069076713" TYPE="xfs"
[root@linuxprobe ~]# vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Jul 21 05:03:40 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/rhel-root                      /             xfs      defaults          1 1
UUID=812b1f7c-8b5b-43da-8c06-b9999e0fe48b  /boot         xfs      defaults,uquota   1 2
/dev/mapper/rhel-swap                      swap          swap     defaults          0 0
/dev/cdrom                                 /media/cdrom  iso9660  defaults          0 0
/dev/sdb1                                  /newFS        xfs      defaults          0 0
/dev/sdb2                                  swap          swap     defaults          0 0
UUID=cd4e9f12-e16a-415c-ae76-8de069076713  /storage      xfs      defaults,_netdev  0 0
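Step 6 (checking the result) is not shown above. A hedged way to do it is with vdostats, which reports how much physical space the volume really uses and the space-saving percentage, and with vdo status for the volume details:

vdostats --human-readable        # physical size, used space and space saving % for each VDO volume
vdo status --name=storage        # detailed status of the storage volume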
Soft link (symbolic link): also called a symlink; it contains only the name and path of the linked file, like a label that records an address.
Hard link: it can be understood as an extra pointer to the original file's inode; the system does not create a new inode, it simply adds another directory entry that references the same inode and increases its link count.
The ln command creates links between files, in the format "ln [options] target":

Parameter | Purpose |
---|---|
-s | Create a "symbolic link" (without -s, a hard link is created by default) |
-f | Force creation of the link to a file or directory |
-i | Ask before overwriting |
-v | Show the linking process |
# Soft link
[root@localhost ~]# ln -s nice.txt nice1
[root@localhost ~]# ll
total 12
-rw-------. 1 root root 2632 Jul  3 11:13 anaconda-ks.cfg
drwxr-xr-x. 2 root root    6 Jul  3 11:14 Desktop
drwxr-xr-x. 2 root root    6 Jul  3 11:14 Documents
drwxr-xr-x. 2 root root    6 Jul  3 11:14 Downloads
drwxr-xr-x. 2 root root    6 Jul  3 11:14 Music
lrwxrwxrwx. 1 root root    8 Jul 15 08:59 nice1 -> nice.txt
-rw-r--r--. 1 root root    5 Jul 15 08:59 nice.txt
-rw-------. 1 root root 2053 Jul  3 11:13 original-ks.cfg
drwxr-xr-x. 2 root root    6 Jul  3 11:14 Pictures
drwxr-xr-x. 2 root root    6 Jul  3 11:14 Public
drwxr-xr-x. 2 root root    6 Jul  3 11:14 Templates
drwxr-xr-x. 2 root root    6 Jul  3 11:14 Videos
# Hard link
[root@localhost ~]# ln nice.txt nice2
[root@localhost ~]# ls
anaconda-ks.cfg  Documents  Music  nice2     original-ks.cfg  Public     Videos
Desktop          Downloads  nice1  nice.txt  Pictures         Templates
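Before deleting anything, the inode relationship can be confirmed with ls -li (an extra check, not part of the original session): nice2 should share the inode number and link count of nice.txt, while nice1 has its own inode.

[root@localhost ~]# ls -li nice.txt nice1 nice2     # first column is the inode number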
[root@localhost ~]# rm -rf nice.txt
[root@localhost ~]# ls
anaconda-ks.cfg  Documents  Music  nice2            Pictures  Templates
Desktop          Downloads  nice1  original-ks.cfg  Public    Videos
# Read through the soft link
[root@localhost ~]# cat nice1
cat: nice1: No such file or directory
# Read through the hard link
[root@localhost ~]# cat nice2
nice
Once the source file is deleted, the soft link is broken, but the hard link can still be used.
RAID (Redundant Array of Independent Disks)
RAID technology combines multiple hard disks into one array with larger capacity and better reliability. Data is split into segments that are stored on different physical disks, and distributed reads and writes are used to boost the overall performance of the array; at the same time, copies of important data are kept in sync on different physical disks, which provides very good redundancy and backup.
RAID level | Min. disks | Usable capacity | Read/write performance | Safety | Characteristics |
---|---|---|---|---|---|
0 | 2 | n | n | Low | Maximizes capacity and speed; if any one disk fails, all data is lost. |
1 | 2 | n/2 | n | High | Maximizes safety; as long as one disk in the array is usable, the data is unaffected. |
5 | 3 | n-1 | n-1 | Medium | Balances cost against capacity, speed and safety; one disk may fail without affecting the data. |
10 | 4 | n/2 | n/2 | High | Combines the strengths of RAID 1 and RAID 0 for both speed and safety; up to half of the disks may fail (as long as they are not in the same mirror pair) without affecting the data. |
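As a quick sanity check on the capacity column: three 5 GB disks in RAID 5 give (3-1) × 5 GB = 10 GB of usable space, which matches the ~10 GB array size reported by mdadm -D at the end of the RAID 5 walkthrough below.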
The mdadm command is used to create, adjust, monitor, and manage RAID devices.
Syntax: "mdadm [options] disk names"
Parameter | Purpose |
---|---|
-a | Detect the device name |
-n | Specify the number of devices |
-l | Specify the RAID level |
-C | Create an array |
-v | Show the process verbosely |
-f | Simulate a device failure |
-r | Remove a device |
-Q | Show summary information |
-D | Show detailed information |
-S | Stop the RAID array |
Build a RAID 5 array:

1. Add the disks
2. Create the array with mdadm -Cv
[root@localhost ~]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sda3  /dev/sdb  /dev/sdc  /dev/sdd  /dev/sde
[root@localhost ~]# mdadm -Cv /dev/md0 -n 3 -l 5 /dev/sd[b-d]
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 5237760K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@localhost ~]# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=512    agcount=16, agsize=163712 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=2618880, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
5. Mount it to a directory
[root@localhost ~]# mkdir /nice
[root@localhost ~]# mount /dev/md0 /nice/
[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        890M     0  890M   0% /dev
tmpfs           904M     0  904M   0% /dev/shm
tmpfs           904M  9.4M  894M   2% /run
tmpfs           904M     0  904M   0% /sys/fs/cgroup
[root@localhost ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sat Jul 3 11:06:41 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=1253ac5b-eaed-4c4c-808d-09fb4828358f /     xfs  defaults 0 0
UUID=bcc55e4d-0854-44a8-9449-dad12374a6d3 /boot xfs  defaults 0 0
UUID=45a308e8-b622-4f24-9ea2-d4f473388981 swap  swap defaults 0 0
/dev/md0                                  /nice xfs  defaults 0 0
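One extra step that is easy to forget (not part of this session, so treat it as a suggestion): record the array in /etc/mdadm.conf so it is reliably assembled at boot before the fstab entry is processed.

[root@localhost ~]# mdadm -Ds >> /etc/mdadm.conf     # append the array definition (same as --detail --scan)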
[C:\~]$ 

Connecting to 192.168.180.128:22...
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.

Activate the web console with: systemctl enable --now cockpit.socket

Last login: Thu Jul 15 09:12:26 2021 from 192.168.180.1
[root@localhost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        890M     0  890M   0% /dev
tmpfs           904M     0  904M   0% /dev/shm
tmpfs           904M  9.4M  894M   2% /run
tmpfs           904M     0  904M   0% /sys/fs/cgroup
/dev/sda3        18G  4.4G   14G  25% /
/dev/md0         10G  105M  9.9G   2% /nice
/dev/sda1       295M  143M  152M  49% /boot
tmpfs           181M   16K  181M   1% /run/user/42
tmpfs           181M  4.0K  181M   1% /run/user/0
8. View the RAID array
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Jul 15 09:14:19 2021
        Raid Level : raid5
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Thu Jul 15 09:17:32 2021
             State : clean
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 22abd690:678a6f4a:5a5217a9:9f9c4632
            Events : 22

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       3       8       48        2      active sync   /dev/sdd
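The -f, -r and -a options from the table above are not exercised in this session. A hedged sketch of how a disk failure would typically be simulated and repaired on this array (reusing /dev/sdb as the "failed" member):

[root@localhost ~]# mdadm /dev/md0 -f /dev/sdb       # mark /dev/sdb as faulty
[root@localhost ~]# mdadm -D /dev/md0                # the array stays usable in degraded mode
[root@localhost ~]# mdadm /dev/md0 -r /dev/sdb       # remove the faulty disk
[root@localhost ~]# mdadm /dev/md0 -a /dev/sdb       # add it (or a replacement) back; the rebuild starts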