[Original] Linux RAID Experiments

This article reproduces the experiments from the two links below, to reinforce the concepts and build hands-on skills.

Reference: http://www.opsers.org/base/learning-linux-the-day-that-the-system-configuration-in-the-rhel6-disk-array-raid.html

Reference: http://www.cnblogs.com/mchina/p/linux-centos-disk-array-software_raid.html

 

Linux: Configuring a Disk Array (RAID) on CentOS

Test environment

Virtual machine: Oracle VM VirtualBox 5.0.10 r104061
OS platform: CentOS Linux release 7.2.1511 (Core)
mdadm version: mdadm - v3.3.2 - 21st August 2014

RAID stands for Redundant Arrays of Inexpensive Disks: roughly, a redundant array of inexpensive disks. RAID uses a technique (software or hardware) to combine several smaller disks into one larger disk device, and this larger disk not only extends the storage space but also protects the data. Depending on the level chosen, the combined disk gains different capabilities. The common levels are the following:

RAID levels

RAID 0: striping

This mode is normally built from disks of the same model and capacity. A RAID 0 array first divides each disk into equal-sized chunks; when a file is written to the RAID device, it is split according to the chunk size and the pieces are placed on the disks in turn. Because the disks store data in an interleaved fashion, data written to the RAID is spread evenly across all of them.
So the characteristics of RAID 0 are:
1. The more disks, the larger the RAID device.
2. The total capacity is the sum of the capacities of all the disks.
3. The more disks, the higher the write performance.
4. If disks of unequal size are used, once the smaller disk is full, new data is simply written to the disk that still has space.
5. The minimum number of disks is 2, and disk utilization is 100%.
Its weakness: if any one disk has a problem, all the data is lost, because the data is split across the disks.

RAID 1: mirroring

In this mode the same data is kept, in full, on different disks. Because each piece of data must be written separately to the other disks, write performance becomes very poor under heavy writes to a RAID 1 device. With hardware RAID (a RAID controller card), the card duplicates the data itself without using the system I/O bus, so the impact on performance is small; with software RAID, performance drops noticeably.
The characteristics of RAID 1 are:
1. Data safety is guaranteed.
2. The capacity of a RAID 1 device is half the total capacity of all its disks.
3. When several disks form a RAID 1 device, the total capacity is limited by the smallest disk.
4. Read performance improves, because the data exists on several disks: when multiple processes read the same data, the RAID balances the reads across them.
5. The number of disks must be even; disk utilization is 50%.
Its weakness: write performance decreases.

RAID 5: balancing performance and data protection

RAID 5 requires at least three disks. Writes to this kind of array are somewhat like RAID 0, except that during each write cycle a parity chunk is also written to one of the disks; the parity records redundancy information for the other disks and is used for recovery when a disk fails.

特色: 
一、當任何一個磁盤損壞時,都可以經過其餘磁盤的檢查碼來重建本來磁盤內的數據,安全性明顯加強。 
二、因爲有同位檢查碼的存在,所以 RAID 5 的總容量會是整個磁盤數量減一個。 
三、當損毀的磁盤數量大於等於兩顆時,那麼 RAID 5 的資料就損壞了。 由於 RAID 5 預設只能支持一顆磁盤的損壞狀況。 
四、在讀寫效能上與 RAID-0 差很少。 
五、最少磁盤是3塊,磁盤利用率N-1塊 
不足:數據寫入的效能不必定增長,由於要寫入 RAID 5 的數據還得要通過計算校驗碼 (parity)。因此寫入的效能與系統的硬件關係較大。尤爲當使用軟件磁盤陣列時,校驗碼 (parity)是經過 CPU 去計算而非專職的磁盤陣列卡, 所以在數據校驗恢復的時候,硬盤的效能會明顯降低。 
RAID0 RAID1 RAID5三個級別的數據存儲流程,你們能夠參考下圖 

RAID 0 1 5

Image source: http://www.opsers.org/base/learning-linux-the-day-that-the-system-configuration-in-the-rhel6-disk-array-raid.html

RAID 01 and RAID 10

These levels address the strengths and weaknesses above by combining RAID 0 and RAID 1.
RAID 0+1 means:

1. First build RAID 0 arrays.

2. Then mirror them as RAID 1; that is RAID 0+1.

RAID 1+0 means:
1. First build RAID 1 arrays.

2. Then stripe them as RAID 0; that is RAID 1+0.
Characteristics and weaknesses: it inherits the strengths of RAID 0, so performance improves, and the strengths of RAID 1, so the data is protected. But it also inherits the weakness of RAID 1: half the total capacity is spent on the mirror.

The data layout of RAID 10 is illustrated in the figure below.
RAID 10

Image source: http://www.opsers.org/base/learning-linux-the-day-that-the-system-configuration-in-the-rhel6-disk-array-raid.html

Because RAID 5 tolerates only one failed disk, a further level was developed: RAID 6. RAID 6 uses the capacity of two disks to store parity, so the overall capacity is two disks less, but up to two disks may fail: even with two disks broken at the same time, the data can still be recovered. This level requires at least 4 disks, and utilization is N-2.

Spare disk: the hot spare

Its role: when a disk in the array fails, the hot spare immediately takes the failed disk's place, the array starts rebuilding automatically, and all the data is restored. These one or more hot spares are not counted among the disks of the original RAID level; they only come into play when a disk in the array actually fails.

That is all for the theory. Many more combinations can of course be derived, but once you understand the material above the other levels are easy: they are just combinations. From the discussion above you can also see the advantages of building a disk array: 1. data safety clearly improves, 2. read/write performance clearly improves, 3. disk capacity is effectively extended. Don't forget the downside: higher cost. But compared with the value of the data, that cost is usually negligible.

 

Setting up the disks

We simulate adding physical disks in Oracle VM VirtualBox. In this article we will create RAID 0, RAID 1, and RAID 5 arrays: RAID 0 needs two disks, RAID 1 needs two disks, and RAID 5 needs four disks, so eight physical disks were added, 5.00 GB each.

mdadm is short for multiple devices admin; it is the standard software RAID management tool on Linux.

Installation


First install mdadm: yum install mdadm

[root@raid]# rpm -qa | grep mdadm
mdadm-3.3.2-7.el7.x86_64

Check the newly added physical disks

[root@raid]# fdisk -l

Disk /dev/sdb: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdc: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdd: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sde: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdf: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdh: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdi: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdg: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Part 1

RAID 0 experiment: using two disks, /dev/sdb and /dev/sdc

1. Partition the disks

[root@raid ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xd7c6c9b7.

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   g   create a new empty GPT partition table
   G   create an IRIX (SGI) partition table
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-10485759, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759): 
Using default value 10485759
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): L   

 0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris        
 1  FAT12           27  Hidden NTFS Win 82  Linux swap / So c1  DRDOS/sec (FAT-
 2  XENIX root      39  Plan 9          83  Linux           c4  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     85  Linux extended  c7  Syrinx         
 5  Extended        41  PPC PReP Boot   86  NTFS volume set da  Non-FS data    
 6  FAT16           42  SFS             87  NTFS volume set db  CP/M / CTOS / .
 7  HPFS/NTFS/exFAT 4d  QNX4.x          88  Linux plaintext de  Dell Utility   
 8  AIX             4e  QNX4.x 2nd part 8e  Linux LVM       df  BootIt         
 9  AIX bootable    4f  QNX4.x 3rd part 93  Amoeba          e1  DOS access     
 a  OS/2 Boot Manag 50  OnTrack DM      94  Amoeba BBT      e3  DOS R/O        
 b  W95 FAT32       51  OnTrack DM6 Aux 9f  BSD/OS          e4  SpeedStor      
 c  W95 FAT32 (LBA) 52  CP/M            a0  IBM Thinkpad hi eb  BeOS fs        
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a5  FreeBSD         ee  GPT            
 f  W95 Ext'd (LBA) 54  OnTrackDM6      a6  OpenBSD         ef  EFI (FAT-12/16/
10  OPUS            55  EZ-Drive        a7  NeXTSTEP        f0  Linux/PA-RISC b
11  Hidden FAT12    56  Golden Bow      a8  Darwin UFS      f1  SpeedStor      
12  Compaq diagnost 5c  Priam Edisk     a9  NetBSD          f4  SpeedStor      
14  Hidden FAT16 <3 61  SpeedStor       ab  Darwin boot     f2  DOS secondary  
16  Hidden FAT16    63  GNU HURD or Sys af  HFS / HFS+      fb  VMware VMFS    
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE 
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fd  Linux raid auto
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fe  LANstep        
1c  Hidden W95 FAT3 75  PC/IX           be  Solaris boot    ff  BBT            
1e  Hidden W95 FAT1 80  Old Minix      
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): p

Disk /dev/sdb: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xd7c6c9b7

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    10485759     5241856   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Note: partition /dev/sdc the same way. A scripted sketch follows below.
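Repeating the interactive fdisk dialogue for every disk is tedious; the keystrokes can be piped in instead. A minimal sketch, assuming /dev/sdc is blank and gets a single full-size partition of type fd (convenient in a lab, but double-check the target device before running it):

# feed fdisk the answers: new primary partition 1, default start/end,
# type fd (Linux raid autodetect), then write and quit
printf 'n\np\n1\n\n\nt\nfd\nw\n' | fdisk /dev/sdc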

Have the kernel re-read the partition table

[root@raid ~]# partprobe

Check the result

[root@raid ~]# fdisk -l /dev/sdb /dev/sdc

Disk /dev/sdb: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xd7c6c9b7

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    10485759     5241856   fd  Linux raid autodetect

Disk /dev/sdc: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x7fd6e126

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048    10485759     5241856   fd  Linux raid autodetect

Create the RAID 0 array

[root@raid ~]# mdadm -C /dev/md0 -ayes -l0 -n2 /dev/sd[b,c]1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Options:

-C  --create         create an array;
-a  --auto           consent to creating the device node; without this option you would first have to create the RAID device with mknod, so -a yes is recommended to do it in one step;
-l  --level          the array level; supported levels are linear, raid0, raid1, raid4, raid5, raid6, raid10, multipath, faulty, container;
-n  --raid-devices   the number of active disks in the array; this number plus the number of spare disks should equal the total number of disks in the array;
/dev/md0             the device name of the array;
/dev/sd{b,c}1        the disks used to build the array;
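For reference, the same command written with long options is easier to read and is equivalent to the short form above:

mdadm --create /dev/md0 --auto=yes --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1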

Check the RAID status

[root@raid ~]# cat /proc/mdstat
Personalities : [raid0] 
md0 : active raid0 sdc1[1] sdb1[0]
      10475520 blocks super 1.2 512k chunks
      
unused devices: <none>

[root@raid ~]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Dec 28 14:48:12 2015
     Raid Level : raid0
     Array Size : 10475520 (9.99 GiB 10.73 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Dec 28 14:48:12 2015
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : raid:0  (local to host raid)
           UUID : 1100e7ee:d40cbdc2:21c359b3:b6b966b6
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

Notes: Raid Level : the level of the array;
   Array Size : the total capacity of the array;
   Raid Devices : the number of RAID members;
   Total Devices : the total number of devices in the RAID, including spares standing by to be pushed into the array at any time to keep it running;
   State : clean, degraded, recovering; clean means healthy, degraded means there is a problem, recovering means the array is rebuilding or being constructed;
   Active Devices : the number of activated RAID members;
   Working Devices : the number of RAID members working normally;
   Failed Devices : the number of failed RAID members;
   Spare Devices : the number of spare RAID members; when a member fails and another disk or partition is brought in to replace it, the RAID has to rebuild, and until the rebuild completes that member is also counted as a spare;
   UUID : the UUID of the RAID, unique within the system;

Create the RAID configuration file /etc/mdadm.conf. It does not exist by default and has to be created by hand. Its main purpose is to let the system assemble the software RAID automatically at boot, and it also makes later management easier. It is not strictly required, but configuring it is recommended. We do need it here: testing showed that without this file, the md0 we created automatically becomes md127 after a reboot.

/etc/mdadm.conf contains:
a DEVICE line listing all devices used for software RAID, and ARRAY lines giving each array's device name, RAID level, number of active devices, and device UUID.

Create /etc/mdadm.conf

echo DEVICE /dev/sd{b,c}1 >> /etc/mdadm.conf
mdadm -Ds >> /etc/mdadm.conf

The generated /etc/mdadm.conf does not yet match the required format and therefore has no effect; edit it by hand into the following form:

[root@raid ~]# cat /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdc1
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=1100e7ee:d40cbdc2:21c359b3:b6b966b6
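To confirm the file actually works before rebooting, one option is to stop the array and reassemble it from the config. A minimal sketch, assuming /dev/md0 is not mounted:

mdadm -S /dev/md0   # --stop: tear the array down
mdadm -A /dev/md0   # --assemble: bring it back using /etc/mdadm.conf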

Format the array

[root@raid ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
655360 inodes, 2618880 blocks
130944 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done 

Create a mount point and mount the array

[root@raid ~]# mkdir -p /raid0
[root@raid ~]# mount /dev/md0 /raid0

Check disk usage

[root@raid ~]# df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        46G  4.1G   42G   9% /
devtmpfs                devtmpfs  2.0G     0  2.0G   0% /dev
tmpfs                   tmpfs     2.0G  144K  2.0G   1% /dev/shm
tmpfs                   tmpfs     2.0G  8.8M  2.0G   1% /run
tmpfs                   tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/sda1               xfs       497M  140M  358M  29% /boot
tmpfs                   tmpfs     396M   16K  396M   1% /run/user/0
/dev/md0                ext4      9.8G   37M  9.2G   1% /raid0

Add it to /etc/fstab

[root@raid ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Dec 28 11:06:31 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root                    /                       xfs     defaults        0 0
UUID=5ea4bc6c-3846-41c6-9716-8a273e36a0f0  /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap                    swap                    swap    defaults        0 0
/dev/md0                                   /raid0                  ext4    defaults        0 0

Then reboot to verify the array is mounted automatically at boot; RAID 0 creation is complete. (A quicker check is sketched below.)
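Before rebooting, the fstab entry itself can be validated in place: mount -a mounts everything listed in /etc/fstab that is not already mounted, so an error here usually means a bad entry. A minimal sketch:

umount /raid0   # unmount it first
mount -a        # remount everything listed in /etc/fstab
df -h /raid0    # confirm /dev/md0 is back on /raid0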

Disk I/O test. The test file should be larger than RAM, to avoid the write cache. We use dd here; it only gives a rough result, and it measures sequential I/O rather than random I/O. Reads can also be served from the page cache, so flush it before each read test, as sketched below.
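A minimal sketch for flushing the cache between runs, assuming root:

sync                                # flush dirty pages to disk
echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes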

Write test
[root@raid ~]# time dd if=/dev/zero of=/raid0/iotest bs=8k count=655360 conv=fdatasync
655360+0 records in
655360+0 records out
5368709120 bytes (5.4 GB) copied, 26.4606 s, 203 MB/s

real    0m26.466s
user    0m0.425s
sys     0m23.814s

[root@raid ~]# time dd if=/dev/zero of=/iotest bs=8k count=655360 conv=fdatasync
655360+0 records in
655360+0 records out
5368709120 bytes (5.4 GB) copied, 30.9296 s, 174 MB/s

real    0m30.932s
user    0m0.080s
sys     0m3.623s

One run writes to the RAID 0 array, the other to the root filesystem; the speeds are 203 MB/s vs. 174 MB/s and the times 0m26.466s vs. 0m30.932s, so RAID 0 wins on write speed.


Read test
[root@raid]# time dd if=/raid0/iotest of=/dev/null bs=8k count=655360
655360+0 records in
655360+0 records out
5368709120 bytes (5.4 GB) copied, 3.98003 s, 1.3 GB/s

real    0m3.983s
user    0m0.065s
sys     0m3.581s

[root@raid raid0]# time dd if=/iotest of=/dev/null bs=8k count=655360
655360+0 records in
655360+0 records out
5368709120 bytes (5.4 GB) copied, 6.81647 s, 788 MB/s

real    0m6.819s
user    0m0.020s
sys     0m4.975s

One run reads /raid0/iotest, the other /iotest; the speeds are 1.3 GB/s vs. 788 MB/s and the times 0m3.983s vs. 0m6.819s, so RAID 0 reads almost twice as fast as a normal partition.

Read+write (copy) test
[root@raid ~]# time dd if=/raid0/iotest of=/raid0/iotest1 bs=8k count=327680 conv=fdatasync
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 7.04209 s, 381 MB/s

real    0m7.045s
user    0m0.073s
sys     0m3.984s

[root@raid ~]# time dd if=/iotest of=/iotest1 bs=8k count=327680 conv=fdatasync
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 21.2412 s, 126 MB/s

real    0m21.244s
user    0m0.051s
sys     0m2.954s

One run copies /raid0/iotest to /raid0/iotest1, the other /iotest to /iotest1; the speeds are 381 MB/s vs. 126 MB/s and the times 0m7.045s vs. 0m21.244s, so RAID 0 copies at more than twice the speed of a normal partition.

Part 2

RAID 1 experiment: using two disks, /dev/sdd and /dev/sde

Partition the disks and change the partition type

[root@raid ~]# fdisk /dev/sdd
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x686f5801.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-10485759, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759): 
Using default value 10485759
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): L

 0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris        
 1  FAT12           27  Hidden NTFS Win 82  Linux swap / So c1  DRDOS/sec (FAT-
 2  XENIX root      39  Plan 9          83  Linux           c4  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     85  Linux extended  c7  Syrinx         
 5  Extended        41  PPC PReP Boot   86  NTFS volume set da  Non-FS data    
 6  FAT16           42  SFS             87  NTFS volume set db  CP/M / CTOS / .
 7  HPFS/NTFS/exFAT 4d  QNX4.x          88  Linux plaintext de  Dell Utility   
 8  AIX             4e  QNX4.x 2nd part 8e  Linux LVM       df  BootIt         
 9  AIX bootable    4f  QNX4.x 3rd part 93  Amoeba          e1  DOS access     
 a  OS/2 Boot Manag 50  OnTrack DM      94  Amoeba BBT      e3  DOS R/O        
 b  W95 FAT32       51  OnTrack DM6 Aux 9f  BSD/OS          e4  SpeedStor      
 c  W95 FAT32 (LBA) 52  CP/M            a0  IBM Thinkpad hi eb  BeOS fs        
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a5  FreeBSD         ee  GPT            
 f  W95 Ext'd (LBA) 54  OnTrackDM6      a6  OpenBSD         ef  EFI (FAT-12/16/
10  OPUS            55  EZ-Drive        a7  NeXTSTEP        f0  Linux/PA-RISC b
11  Hidden FAT12    56  Golden Bow      a8  Darwin UFS      f1  SpeedStor      
12  Compaq diagnost 5c  Priam Edisk     a9  NetBSD          f4  SpeedStor      
14  Hidden FAT16 <3 61  SpeedStor       ab  Darwin boot     f2  DOS secondary  
16  Hidden FAT16    63  GNU HURD or Sys af  HFS / HFS+      fb  VMware VMFS    
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE 
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fd  Linux raid auto
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fe  LANstep        
1c  Hidden W95 FAT3 75  PC/IX           be  Solaris boot    ff  BBT            
1e  Hidden W95 FAT1 80  Old Minix      
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): p

Disk /dev/sdd: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x686f5801

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048    10485759     5241856   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.


[root@raid ~]# fdisk /dev/sde
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xe0cce225.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-10485759, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759): 
Using default value 10485759
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): L

 0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris        
 1  FAT12           27  Hidden NTFS Win 82  Linux swap / So c1  DRDOS/sec (FAT-
 2  XENIX root      39  Plan 9          83  Linux           c4  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     85  Linux extended  c7  Syrinx         
 5  Extended        41  PPC PReP Boot   86  NTFS volume set da  Non-FS data    
 6  FAT16           42  SFS             87  NTFS volume set db  CP/M / CTOS / .
 7  HPFS/NTFS/exFAT 4d  QNX4.x          88  Linux plaintext de  Dell Utility   
 8  AIX             4e  QNX4.x 2nd part 8e  Linux LVM       df  BootIt         
 9  AIX bootable    4f  QNX4.x 3rd part 93  Amoeba          e1  DOS access     
 a  OS/2 Boot Manag 50  OnTrack DM      94  Amoeba BBT      e3  DOS R/O        
 b  W95 FAT32       51  OnTrack DM6 Aux 9f  BSD/OS          e4  SpeedStor      
 c  W95 FAT32 (LBA) 52  CP/M            a0  IBM Thinkpad hi eb  BeOS fs        
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a5  FreeBSD         ee  GPT            
 f  W95 Ext'd (LBA) 54  OnTrackDM6      a6  OpenBSD         ef  EFI (FAT-12/16/
10  OPUS            55  EZ-Drive        a7  NeXTSTEP        f0  Linux/PA-RISC b
11  Hidden FAT12    56  Golden Bow      a8  Darwin UFS      f1  SpeedStor      
12  Compaq diagnost 5c  Priam Edisk     a9  NetBSD          f4  SpeedStor      
14  Hidden FAT16 <3 61  SpeedStor       ab  Darwin boot     f2  DOS secondary  
16  Hidden FAT16    63  GNU HURD or Sys af  HFS / HFS+      fb  VMware VMFS    
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE 
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fd  Linux raid auto
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fe  LANstep        
1c  Hidden W95 FAT3 75  PC/IX           be  Solaris boot    ff  BBT            
1e  Hidden W95 FAT1 80  Old Minix      
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): p

Disk /dev/sde: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xe0cce225

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1            2048    10485759     5241856   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Create the RAID 1 array

[root@raid ~]# mdadm -C /dev/md1 -ayes -l1 -n2 /dev/sd[d,e]1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

Check the RAID 1 status

[root@raid ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] 
md1 : active raid1 sde1[1] sdd1[0]
      5237760 blocks super 1.2 [2/2] [UU]
      [================>....]  resync = 84.0% (4401920/5237760) finish=0.0min speed=209615K/sec
      
md0 : active raid0 sdb1[0] sdc1[1]
      10475520 blocks super 1.2 512k chunks
      
unused devices: <none>
[root@raid ~]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Mon Dec 28 18:11:06 2015
     Raid Level : raid1
     Array Size : 5237760 (5.00 GiB 5.36 GB)
  Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Dec 28 18:11:33 2015
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : raid:1  (local to host raid)
           UUID : 5ac9846b:2e04aea8:4399404c:5c2b96cb
         Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1

Note: Used Dev Size is the capacity of each RAID member, i.e. of the disks or partitions that make up the array. You can see the RAID 1 array is still resyncing. A way to wait for it is sketched below; after it completes, the status looks like this:
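Rather than re-running cat /proc/mdstat by hand, a minimal sketch for watching the resync, or simply blocking until it finishes:

watch -n 2 cat /proc/mdstat   # refresh the status every 2 seconds
mdadm -W /dev/md1             # --wait: return only when resync is done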

[root@raid ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] 
md1 : active raid1 sde1[1] sdd1[0]
      5237760 blocks super 1.2 [2/2] [UU]
      
md0 : active raid0 sdb1[0] sdc1[1]
      10475520 blocks super 1.2 512k chunks
      
unused devices: <none>
[root@raid ~]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Mon Dec 28 18:11:06 2015
     Raid Level : raid1
     Array Size : 5237760 (5.00 GiB 5.36 GB)
  Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon Dec 28 18:11:33 2015
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : raid:1  (local to host raid)
           UUID : 5ac9846b:2e04aea8:4399404c:5c2b96cb
         Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1

Add RAID 1 to the config file /etc/mdadm.conf and edit it

[root@raid ~]# echo DEVICE /dev/sd{d,e}1 >> /etc/mdadm.conf
[root@raid ~]# mdadm -Ds >> /etc/mdadm.conf

Edit /etc/mdadm.conf into the following form:

[root@raid ~]# cat /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdc1
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=1100e7ee:d40cbdc2:21c359b3:b6b966b6
DEVICE /dev/sdd1 /dev/sde1
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=5ac9846b:2e04aea8:4399404c:5c2b96cb

Format the array

[root@raid ~]# mkfs.ext4 /dev/md1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
327680 inodes, 1309440 blocks
65472 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done 

Create a mount point and mount the array

[root@raid ~]# mkdir -p /raid1
[root@raid ~]# mount /dev/md1 /raid1

Check disk usage

[root@raid ~]# df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        46G  4.1G   42G   9% /
devtmpfs                devtmpfs  2.0G     0  2.0G   0% /dev
tmpfs                   tmpfs     2.0G   88K  2.0G   1% /dev/shm
tmpfs                   tmpfs     2.0G  8.9M  2.0G   1% /run
tmpfs                   tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/md0                ext4      9.8G   37M  9.2G   1% /raid0
/dev/sda1               xfs       497M  140M  358M  29% /boot
tmpfs                   tmpfs     396M   12K  396M   1% /run/user/0
/dev/md1                ext4      4.8G   20M  4.6G   1% /raid1

Add it to /etc/fstab

[root@raid ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Dec 28 11:06:31 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root                     /                       xfs     defaults        0 0
UUID=5ea4bc6c-3846-41c6-9716-8a273e36a0f0   /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap                     swap                    swap    defaults        0 0
/dev/md0                                    /raid0                  ext4    defaults        0 0
/dev/md1                                    /raid1                  ext4    defaults        0 0

Then reboot to verify the array is mounted automatically at boot; RAID 1 creation is complete.
After the reboot, it is mounted automatically:

[root@raid ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   46G  4.1G   42G   9% /
devtmpfs                 2.0G     0  2.0G   0% /dev
tmpfs                    2.0G   88K  2.0G   1% /dev/shm
tmpfs                    2.0G  8.9M  2.0G   1% /run
tmpfs                    2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/md0                 9.8G   37M  9.2G   1% /raid0
/dev/md1                 4.8G   20M  4.6G   1% /raid1
/dev/sda1                497M  140M  358M  29% /boot
tmpfs                    396M   12K  396M   1% /run/user/0

Disk I/O test; again the test file should be larger than RAM, to avoid the write cache. Note that the 2.7 GB test files below are actually smaller than this machine's RAM (the 2.0 GB tmpfs in df suggests about 4 GB), which skews the results that follow.

Write test
[root@raid ~]# time dd if=/dev/zero of=/raid1/iotest bs=8k count=327680 conv=fdatasync
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 6.65744 s, 403 MB/s

real    0m6.667s
user    0m0.086s
sys     0m4.236s

[root@raid ~]# time dd if=/dev/zero of=/iotest bs=8k count=327680 conv=fdatasync
655360+0 records in
655360+0 records out
5368709120 bytes (5.4 GB) copied, 30.9296 s, 174 MB/s

real    0m30.932s
user    0m0.080s
sys     0m3.623s

One run writes to the RAID 1 array, the other to the root filesystem; the speeds are 403 MB/s vs. 174 MB/s and the times 0m6.667s vs. 0m30.932s, so RAID 1 appears to write more than twice as fast as a normal partition. In theory RAID 1 writes should be slower, not faster; the anomaly most likely comes from the unequal test sizes (2.7 GB vs. 5.4 GB) and from running in a VM where all the virtual disks share the same host storage and caches.


Read test
[root@raid ~]# time dd if=/raid1/iotest of=/dev/null bs=8k count=327680
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 0.445192 s, 6.0 GB/s

real    0m0.446s
user    0m0.026s
sys     0m0.420s

[root@raid ~]# time dd if=/iotest of=/dev/null bs=8k count=327680
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 1.52405 s, 1.8 GB/s

real    0m1.534s
user    0m0.036s
sys     0m1.194s

One run reads /raid1/iotest, the other /iotest; the speeds are 6.0 GB/s vs. 1.8 GB/s and the times 0m0.446s vs. 0m1.534s, so RAID 1 appears to read over three times faster than a normal partition. In theory RAID 1 reads should be about the same as a single disk; 6.0 GB/s is far beyond disk speed, so these reads were clearly served from the page cache (the 2.7 GB file fits in memory).

Read+write (copy) test
[root@raid ~]# time dd if=/raid1/iotest of=/raid1/iotest1 bs=8k count=163840 conv=fdatasync
163840+0 records in
163840+0 records out
1342177280 bytes (1.3 GB) copied, 3.47 s, 387 MB/s

real    0m3.472s
user    0m0.036s
sys     0m2.340s

[root@raid ~]# time dd if=/iotest of=/iotest1 bs=8k count=327680 conv=fdatasync
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 21.2412 s, 126 MB/s

real    0m21.244s
user    0m0.051s
sys     0m2.954s

One run copies /raid1/iotest to /raid1/iotest1, the other /iotest to /iotest1; the speeds are 387 MB/s vs. 126 MB/s and the times 0m3.472s vs. 0m21.244s, so RAID 1 appears to copy at more than twice the speed of a normal partition. Again, in theory RAID 1 writes should be slower; the same caching caveats apply.

Part 3

RAID 5 experiment: using four disks, /dev/sdf /dev/sdg /dev/sdh /dev/sdi; three as active disks and one as a hot spare

1. Create the partitions and change the partition type
fdisk /dev/sdf
fdisk /dev/sdg
fdisk /dev/sdh
fdisk /dev/sdi
The detailed steps are the same as above and are omitted here.
The resulting partitions:

[root@raid ~]# fdisk -l /dev/sd[f,g,h,i]

Disk /dev/sdf: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x8a6a4f75

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1            2048    10485759     5241856   fd  Linux raid autodetect

Disk /dev/sdg: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xcd98bef8

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1            2048    10485759     5241856   fd  Linux raid autodetect

Disk /dev/sdh: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xf4d754a4

   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1            2048    10485759     5241856   fd  Linux raid autodetect

Disk /dev/sdi: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x62fb90d1

   Device Boot      Start         End      Blocks   Id  System
/dev/sdi1            2048    10485759     5241856   fd  Linux raid autodetect

Create the RAID 5 array

[root@raid ~]# mdadm -C /dev/md5 -ayes -l5 -n3 -x1 /dev/sd[f,g,h,i]1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.

Note: "-x1" (or "--spare-devices=1") means the array has one hot spare; if there are several hot spares, set "--spare-devices" to the corresponding number. An equivalent long-option form is sketched below.
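For reference, the same command with long options:

mdadm --create /dev/md5 --auto=yes --level=5 --raid-devices=3 --spare-devices=1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1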

Check the RAID 5 status

[root@raid ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] 
md5 : active raid5 sdh1[4] sdi1[3](S) sdg1[1] sdf1[0]
      10475520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      
md1 : active raid1 sde1[1] sdd1[0]
      5237760 blocks super 1.2 [2/2] [UU]
      
md0 : active raid0 sdc1[1] sdb1[0]
      10475520 blocks super 1.2 512k chunks
      
unused devices: <none>
[root@raid ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Mon Dec 28 21:08:44 2015
     Raid Level : raid5
     Array Size : 10475520 (9.99 GiB 10.73 GB)
  Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Dec 28 21:09:11 2015
          State : clean 
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : raid:5  (local to host raid)
           UUID : 1bafff7f:f8993ec9:553cd4f7:31ae4f91
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       81        0      active sync   /dev/sdf1
       1       8       97        1      active sync   /dev/sdg1
       4       8      113        2      active sync   /dev/sdh1

       3       8      129        -      spare   /dev/sdi1

Note: a Rebuild Status field shows the array's build progress; while building, the device list looks something like:
        4 8 113 2 spare rebuilding /dev/sdh1 (a member not yet activated, still being built, receiving data)
        3 8 129 - spare /dev/sdi1 (the hot spare)

Add RAID 5 to the config file /etc/mdadm.conf and edit it

[root@raid ~]# echo DEVICE /dev/sd{f,g,h,i}1 >> /etc/mdadm.conf
[root@raid ~]# mdadm -Ds >> /etc/mdadm.conf
[root@raid ~]# cat /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdc1
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=1100e7ee:d40cbdc2:21c359b3:b6b966b6
DEVICE /dev/sdd1 /dev/sde1
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=5ac9846b:2e04aea8:4399404c:5c2b96cb
DEVICE /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1
ARRAY /dev/md5 level=raid5 num-devices=3 UUID=1bafff7f:f8993ec9:553cd4f7:31ae4f91

Format the array

[root@raid ~]# mkfs.ext4 /dev/md5
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
655360 inodes, 2618880 blocks
130944 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done 

Create a mount point and mount the array

[root@raid ~]# mkdir -p /raid5
[root@raid ~]# mount /dev/md5 /raid5
[root@raid ~]# df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        46G  4.1G   42G   9% /
devtmpfs                devtmpfs  2.0G     0  2.0G   0% /dev
tmpfs                   tmpfs     2.0G   88K  2.0G   1% /dev/shm
tmpfs                   tmpfs     2.0G  8.9M  2.0G   1% /run
tmpfs                   tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/md0                ext4      9.8G   37M  9.2G   1% /raid0
/dev/md1                ext4      4.8G  2.6G  2.1G  56% /raid1
/dev/sda1               xfs       497M  140M  358M  29% /boot
tmpfs                   tmpfs     396M   16K  396M   1% /run/user/0
/dev/md5                ext4      9.8G   37M  9.2G   1% /raid5

Note: the usable size of the RAID 5 array is (3-1) × 5 GB = 10 GB of raw capacity; after the ext4 format, df reports 9.8 GB with 9.2 GB available.

Add it to /etc/fstab

[root@raid ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Dec 28 11:06:31 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=5ea4bc6c-3846-41c6-9716-8a273e36a0f0 /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
/dev/md0                /raid0                  ext4    defaults        0 0
/dev/md1                /raid1                  ext4    defaults        0 0
/dev/md5                /raid5                  ext4    defaults        0 0

Then reboot to verify the array is mounted automatically at boot; RAID 5 creation is complete. After the reboot:

[root@raid ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   46G  4.1G   42G   9% /
devtmpfs                 2.0G     0  2.0G   0% /dev
tmpfs                    2.0G   88K  2.0G   1% /dev/shm
tmpfs                    2.0G  8.9M  2.0G   1% /run
tmpfs                    2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/md0                 9.8G   37M  9.2G   1% /raid0
/dev/md1                 4.8G  2.6G  2.1G  56% /raid1
/dev/md5                 9.8G   37M  9.2G   1% /raid5
/dev/sda1                497M  140M  358M  29% /boot
tmpfs                    396M   12K  396M   1% /run/user/0

Disk I/O test

Write test
[root@raid ~]# time dd if=/dev/zero of=/raid5/iotest bs=8k count=327680 conv=fdatasync
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 10.2333 s, 262 MB/s

real    0m10.236s
user    0m0.049s
sys     0m2.603s

[root@raid ~]# time dd if=/dev/zero of=/iotest bs=8k count=327680 conv=fdatasync
655360+0 records in
655360+0 records out
5368709120 bytes (5.4 GB) copied, 30.9296 s, 174 MB/s

real    0m30.932s
user    0m0.080s
sys     0m3.623s

One run writes to the RAID 5 array, the other to the root filesystem; the speeds are 262 MB/s vs. 174 MB/s and the times 0m10.236s vs. 0m30.932s, so RAID 5 writes are about the same as a normal partition, slightly faster.


Read test
[root@raid ~]# time dd if=/raid5/iotest of=/dev/null bs=8k count=327680
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 0.443526 s, 6.1 GB/s

real    0m0.451s
user    0m0.029s
sys     0m0.416s

[root@raid ~]# time dd if=/iotest of=/dev/null bs=8k count=327680
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 1.52405 s, 1.8 GB/s

real    0m1.534s
user    0m0.036s
sys     0m1.194s

One run reads /raid5/iotest, the other /iotest; the speeds are 6.1 GB/s vs. 1.8 GB/s and the times 0m0.451s vs. 0m1.534s, so RAID 5 appears to read over three times faster than a normal partition; as with RAID 1, these numbers are inflated by the page cache.

Read+write (copy) test
[root@raid ~]# time dd if=/raid5/iotest of=/raid5/iotest1 bs=8k count=163840 conv=fdatasync
163840+0 records in
163840+0 records out
1342177280 bytes (1.3 GB) copied, 5.55382 s, 242 MB/s

real    0m5.561s
user    0m0.041s
sys     0m1.288s

[root@raid ~]# time dd if=/iotest of=/iotest1 bs=8k count=327680 conv=fdatasync
327680+0 records in
327680+0 records out
2684354560 bytes (2.7 GB) copied, 21.2412 s, 126 MB/s

real    0m21.244s
user    0m0.051s
sys     0m2.954s

One run copies /raid5/iotest to /raid5/iotest1, the other /iotest to /iotest1; the speeds are 242 MB/s vs. 126 MB/s and the times 0m5.561s vs. 0m21.244s, so RAID 5 copies slightly faster than a normal partition.

Part 4

RAID maintenance

1. Simulate a disk failure
In practice, when software RAID detects a disk fault, it automatically marks that disk as failed and stops all reads and writes to it. Here we mark /dev/sdh1 as the failed disk:

[root@raid ~]# mdadm /dev/md5 -f /dev/sdh1
mdadm: set /dev/sdh1 faulty in /dev/md5
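In production you would normally want to be notified of such failures rather than discover them in /proc/mdstat. mdadm has a monitor mode for this; a minimal sketch, assuming local mail delivery to root is configured:

mdadm --monitor --scan --daemonise --mail=root   # watch all arrays, mail alerts to root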

Check the rebuild status
When we created the RAID 5 array above we configured a hot spare, so as soon as a disk is marked failed, the hot spare automatically takes its place and the array can rebuild within a short time. The current state of the array can be seen in "/proc/mdstat":

[root@raid ~]# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] 
md1 : active raid1 sdd1[0] sde1[1]
      5237760 blocks super 1.2 [2/2] [UU]
      
md5 : active raid5 sdh1[4](F) sdg1[1] sdf1[0] sdi1[3]
      10475520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [==>...................]    recovery = 11.7% (612748/10475520) finish=2min speed=1854k/sec
      
md0 : active raid0 sdc1[1] sdb1[0]
      10475520 blocks super 1.2 512k chunks
      
unused devices: <none>
[root@raid ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Mon Dec 28 21:08:44 2015
     Raid Level : raid5
     Array Size : 10475520 (9.99 GiB 10.73 GB)
  Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Dec 28 22:14:03 2015
          State : clean 
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : raid:5  (local to host raid)
           UUID : 1bafff7f:f8993ec9:553cd4f7:31ae4f91
         Events : 37

    Number   Major   Minor   RaidDevice State
       0       8       81        0      active sync   /dev/sdf1
       1       8       97        1      active sync   /dev/sdg1
       3       8      129        2      spare  rebuilding   /dev/sdi1

       4       8      113        -      faulty   /dev/sdh1

The output above shows the array rebuilding. When a device fails or is marked failed, (F) is appended after its bracket, as in "sdh1[4](F)". In "[3/2]", the first number is the number of devices the array contains and the second is the number of active devices; since there is currently one failed device, the second number is 2. The array is running in degraded mode: it is still usable, but has no data redundancy. "[UU_]" means the devices currently usable are /dev/sdf1 and /dev/sdg1; if /dev/sdf1 had failed instead, it would read [_UU].

Check whether the test data written earlier is still there

[root@raid raid5]# cat /raid5/1.txt
ldjaflajfdlajf

The data is intact; nothing was lost.

After the rebuild completes, check the array status

[root@raid ~]# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] 
md1 : active raid1 sdd1[0] sde1[1]
      5237760 blocks super 1.2 [2/2] [UU]
      
md5 : active raid5 sdh1[4](F) sdg1[1] sdf1[0] sdi1[3]
      10475520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      
md0 : active raid0 sdc1[1] sdb1[0]
      10475520 blocks super 1.2 512k chunks
      
unused devices: <none>

The RAID device is back to normal.

Remove the failed disk

Remove /dev/sdh1, the disk we just marked as failed:

[root@raid ~]# mdadm /dev/md5 -r /dev/sdh1
mdadm: hot removed /dev/sdh1 from /dev/md5

Check the status of md5 again

[root@raid ~]#  cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] 
md1 : active raid1 sdd1[0] sde1[1]
      5237760 blocks super 1.2 [2/2] [UU]
      
md5 : active raid5 sdg1[1] sdf1[0] sdi1[3]
      10475520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      
md0 : active raid0 sdc1[1] sdb1[0]
      10475520 blocks super 1.2 512k chunks
      
unused devices: <none>
[root@raid ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Mon Dec 28 21:08:44 2015
     Raid Level : raid5
     Array Size : 10475520 (9.99 GiB 10.73 GB)
  Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Mon Dec 28 22:26:24 2015
          State : clean 
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : raid:5  (local to host raid)
           UUID : 1bafff7f:f8993ec9:553cd4f7:31ae4f91
         Events : 38

    Number   Major   Minor   RaidDevice State
       0       8       81        0      active sync   /dev/sdf1
       1       8       97        1      active sync   /dev/sdg1
       3       8      129        2      active sync   /dev/sdi1

/dev/sdh1 has been removed.

Add a new hot spare

In real production, a newly added disk would likewise need to be partitioned first. For convenience, we simply add the disk we just "failed" back into the RAID 5 array.

[root@raid ~]# mdadm /dev/md5 -a /dev/sdh1
mdadm: added /dev/sdh1
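Here the add succeeded directly, but when reusing a disk that came from another array, mdadm may pick up stale metadata; wiping the old superblock before the -a avoids that. A minimal sketch (this destroys the md metadata on the partition):

mdadm --zero-superblock /dev/sdh1   # erase the old md superblock before re-adding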

Check the RAID 5 array status

[root@raid ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Mon Dec 28 21:08:44 2015
     Raid Level : raid5
     Array Size : 10475520 (9.99 GiB 10.73 GB)
  Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Dec 28 22:34:44 2015
          State : clean 
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : raid:5  (local to host raid)
           UUID : 1bafff7f:f8993ec9:553cd4f7:31ae4f91
         Events : 39

    Number   Major   Minor   RaidDevice State
       0       8       81        0      active sync   /dev/sdf1
       1       8       97        1      active sync   /dev/sdg1
       3       8      129        2      active sync   /dev/sdi1

       4       8      113        -      spare   /dev/sdh1

/dev/sdh1 has become a hot spare.

Check the test data

[root@raid ~]# cat /raid5/1.txt

ldjaflajfdlajf

The data is intact; nothing was lost. The failover test is complete.

Part 5

Adding disks to a RAID array

If the space in the RAID we have already built is still not enough, we can add new disks to it to grow the array.

Add a physical disk in the virtual machine: eight disks were already added above, and here we need to simulate a new one, so first shut the VM down, add another 5 GB disk in the storage settings, and then partition it as before; the details are not repeated here (a partition-table copy trick is sketched below).
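Instead of walking through fdisk again, the partition table can be copied from a disk that is already laid out correctly. A minimal sketch, assuming the new disk appears as /dev/sdj (verify with fdisk -l first; this overwrites /dev/sdj's partition table):

sfdisk -d /dev/sdb | sfdisk /dev/sdj   # dump sdb's partition table and replay it onto sdj
partprobe                              # have the kernel re-read the partition tables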

Add the new disk to the RAID

[root@raid ~]# mdadm /dev/md5 -a /dev/sdj1
mdadm: added /dev/sdj1

Check the RAID status now

[root@raid ~]# mdadm -D /dev/md5 
/dev/md5:
        Version : 1.2
  Creation Time : Mon Dec 28 21:08:44 2015
     Raid Level : raid5
     Array Size : 10475520 (9.99 GiB 10.73 GB)
  Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
   Raid Devices : 3
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Mon Dec 28 22:47:33 2015
          State : clean 
 Active Devices : 3
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 512K

           Name : raid:5  (local to host raid)
           UUID : 1bafff7f:f8993ec9:553cd4f7:31ae4f91
         Events : 43

    Number   Major   Minor   RaidDevice State
       0       8       81        0      active sync   /dev/sdf1
       1       8       97        1      active sync   /dev/sdg1
       3       8      129        2      active sync   /dev/sdi1

       4       8      113        -      spare   /dev/sdh1
       5       8      145        -      spare   /dev/sdj1

By default, a disk added to the RAID is treated as a hot spare; to use its capacity we need to promote it into the array's set of active disks, as sketched below.

Convert a hot spare into an active disk
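A minimal sketch of this step, using mdadm's grow mode; assuming we reshape md5 from 3 to 4 active devices and that the ext4 filesystem can then be grown online (some setups also require a --backup-file for the reshape):

mdadm -G /dev/md5 -n4   # --grow: reshape the array to 4 active devices
cat /proc/mdstat        # watch the reshape progress
mdadm -W /dev/md5       # block until the reshape completes
resize2fs /dev/md5      # grow the ext4 filesystem into the new space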
