RAID: Software Disk Arrays
RAID (Redundant Array of Inexpensive Disks) offers high availability and reliability, which makes it suitable for large-scale environments where data needs more protection than usual. RAID combines multiple disks into one large logical disk that provides not only storage but also data protection.
Software RAID arrays are created with the mdadm command.
RAID levels
RAID-0 test
RAID-0 is the striping mode, best built from two disks of the same model and capacity. In this mode each disk is first cut into equal-sized blocks (chunks); when a file needs to be stored, it is split into a number of chunks which are then written to the member disks in turn, as illustrated.
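The round-robin chunk placement described above can be sketched in userspace with ordinary files. This is only an illustration of the idea, not how the md driver works internally; the file names are hypothetical, and a 4 KiB chunk is used for the demo (md's default is 512 KiB):

```shell
set -e
cd "$(mktemp -d)"

# A 4-chunk "file" to be stored on the array.
dd if=/dev/urandom of=original.bin bs=4096 count=4 2>/dev/null

# Split into chunk-sized pieces and deal them round-robin onto two "disks":
# even-numbered chunks go to disk0, odd-numbered chunks to disk1.
split -b 4096 -d original.bin chunk.
i=0
for c in chunk.*; do
    cat "$c" >> "disk$((i % 2)).img"
    i=$((i + 1))
done

# Reading the file back interleaves chunks from both disks again.
for n in 0 1 2 3; do
    dd if="disk$((n % 2)).img" bs=4096 skip=$((n / 2)) count=1 2>/dev/null
done > restored.bin

cmp -s original.bin restored.bin && echo "stripe demo OK"
```

Because consecutive chunks land on different disks, large reads and writes are serviced by both disks in parallel, which is where RAID-0's performance comes from.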
Below is the process of creating a RAID-0 array with the mdadm tool.
### Creating the partitions /dev/vdb10 and /dev/vdb11 is omitted here
[root@zwj ~]# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/vdb10 /dev/vdb11
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@zwj ~]# mdadm -E /dev/vdb10 /dev/vdb11    # examine the md superblock on each member disk
/dev/vdb10:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : d26ff510:82cc5acf:b823eaae:c98c2e8f
           Name : zwj:0  (local to host zwj)
  Creation Time : Sun May  7 18:30:05 2017
     Raid Level : raid0
   Raid Devices : 2
 Avail Dev Size : 2096545 (1023.70 MiB 1073.43 MB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1960 sectors, after=0 sectors
          State : clean
    Device UUID : c645eb97:9dc15ee8:9148e372:0c26b998
    Update Time : Sun May  7 18:30:05 2017
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 402c224f - correct
         Events : 0
     Chunk Size : 512K
    Device Role : Active device 0
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/vdb11:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : d26ff510:82cc5acf:b823eaae:c98c2e8f
           Name : zwj:0  (local to host zwj)
  Creation Time : Sun May  7 18:30:05 2017
     Raid Level : raid0
   Raid Devices : 2
 Avail Dev Size : 2096545 (1023.70 MiB 1073.43 MB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1960 sectors, after=0 sectors
          State : clean
    Device UUID : 54fd15e5:8ed74a79:0b1efd81:7995a016
    Update Time : Sun May  7 18:30:05 2017
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : ab4434b5 - correct
         Events : 0
     Chunk Size : 512K
    Device Role : Active device 1
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
[root@zwj ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun May  7 18:30:05 2017
     Raid Level : raid0
     Array Size : 2096128 (2047.00 MiB 2146.44 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent
    Update Time : Sun May  7 18:30:05 2017
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
     Chunk Size : 512K
           Name : zwj:0  (local to host zwj)
           UUID : d26ff510:82cc5acf:b823eaae:c98c2e8f
         Events : 0
    Number   Major   Minor   RaidDevice State
       0     252       26        0      active sync   /dev/vdb10
       1     252       27        1      active sync   /dev/vdb11
[root@zwj ~]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 vdb11[1] vdb10[0]
      2096128 blocks super 1.2 512k chunks
unused devices: <none>
[root@zwj ~]# mkdir -p /mnt/raid0    # create the mount point
[root@zwj ~]# mkfs.ext3 /dev/md0 >/dev/null
[root@zwj ~]# mount /dev/md0 /mnt/raid0
[root@zwj ~]# df
...
/dev/md0        2.0G  236M  1.7G  13% /mnt/raid0
The same array can be created with the short options: mdadm -C /dev/md0 -l raid0 -n 2 /dev/vdb10 /dev/vdb11, where -C names the array, -l sets the RAID level, and -n sets the number of member disks.
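Without a saved configuration, md may reassemble the array under an automatic name such as /dev/md127 after a reboot (as actually happens later in this test). A sketch of making the setup persistent, assuming a RHEL/CentOS-style layout where the config file is /etc/mdadm.conf:

```shell
# Record the assembled arrays so they keep their names across reboots.
# (On Debian-family systems the file is /etc/mdadm/mdadm.conf instead.)
mdadm --detail --scan >> /etc/mdadm.conf

# Then mount the array automatically via /etc/fstab, for example:
# /dev/md0  /mnt/raid0  ext3  defaults  0 0
```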
RAID-1 test
RAID-1 is the mirroring mode, best built from two disks of the same model and capacity. Its main advantage is that data is duplicated as a backup, but disk utilization is low, as illustrated.
[root@zwj raid1]# fdisk -l |grep "/dev/vdb1[2-9]"    # newly created 200 MB partitions
/dev/vdb12 56186 56592 205096+ 83 Linux
/dev/vdb13 56593 56999 205096+ 83 Linux
[root@zwj raid0]# mdadm -C /dev/md1 -l 1 -n 2 /dev/vdb{12,13}
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@zwj raid0]# mdadm -E /dev/vdb12 /dev/vdb13
/dev/vdb12:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 0855d3d3:502aa3e1:9a09f0e8:abdb1e1a
Name : zwj:1 (local to host zwj)
Creation Time : Sun May 7 19:49:56 2017
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 409905 (200.15 MiB 209.87 MB)
Array Size : 204928 (200.13 MiB 209.85 MB)
Used Dev Size : 409856 (200.13 MiB 209.85 MB)
Data Offset : 288 sectors
Super Offset : 8 sectors
Unused Space : before=200 sectors, after=49 sectors
State : clean
Device UUID : e4909dcf:f2fb8812:0c5a70cb:021dd248
Update Time : Sun May 7 19:49:58 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 4478c6ef - correct
Events : 17
Device Role : Active device 0
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/vdb13:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 0855d3d3:502aa3e1:9a09f0e8:abdb1e1a
Name : zwj:1 (local to host zwj)
Creation Time : Sun May 7 19:49:56 2017
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 409905 (200.15 MiB 209.87 MB)
Array Size : 204928 (200.13 MiB 209.85 MB)
Used Dev Size : 409856 (200.13 MiB 209.85 MB)
Data Offset : 288 sectors
Super Offset : 8 sectors
Unused Space : before=200 sectors, after=49 sectors
State : clean
Device UUID : 170d0094:52cdc3d9:993a58b1:b2f3c341
Update Time : Sun May 7 19:49:58 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : aeefcbc0 - correct
Events : 17
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
[root@zwj ~]# mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Sun May 7 19:49:56 2017
Raid Level : raid1
Array Size : 204928 (200.13 MiB 209.85 MB)
Used Dev Size : 204928 (200.13 MiB 209.85 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sun May 7 19:49:58 2017
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : zwj:1 (local to host zwj)
UUID : 0855d3d3:502aa3e1:9a09f0e8:abdb1e1a
Events : 17
Number Major Minor RaidDevice State
0 252 28 0 active sync /dev/vdb12
1 252 29 1 active sync /dev/vdb13
[root@zwj ~]# mkfs.ext3 /dev/md1 >/dev/null
[root@zwj ~]# mkdir -p /mnt/raid1
[root@zwj ~]# mount /dev/md1 /mnt/raid1
[root@zwj ~]# cd /mnt/raid1/
[root@zwj raid1]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 20G 13G 5.9G 69% /
/dev/vdb1 20G 936M 18G 5% /mydata
/dev/md127 2.0G 236M 1.7G 13% /mnt/raid0
/dev/md1 194M 5.6M 179M 4% /mnt/raid1 # only 200 MB usable, the size of one member
# Remove /dev/vdb13 to simulate a failed mirror member
[root@zwj ~]# mdadm --detail /dev/md126
/dev/md126:
Version : 1.2
Creation Time : Sun May 7 19:49:56 2017
Raid Level : raid1
Array Size : 204928 (200.13 MiB 209.85 MB)
Used Dev Size : 204928 (200.13 MiB 209.85 MB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Sun May 7 20:14:10 2017
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Name : zwj:1 (local to host zwj)
UUID : 0855d3d3:502aa3e1:9a09f0e8:abdb1e1a
Events : 23
Number Major Minor RaidDevice State
0 252 28 0 active sync /dev/vdb12
2 0 0 2 removed
[root@zwj ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 20G 13G 5.9G 69% /
/dev/vdb1 20G 936M 18G 5% /mydata
/dev/md126 194M 5.6M 179M 4% /mnt/raid1
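The degraded mirror above can be repaired by adding a replacement member back into the array, after which md resynchronizes the mirror automatically. A sketch, assuming the device names shown in this test (the array was reassembled as /dev/md126):

```shell
# Add a replacement member to the degraded RAID-1 array;
# md starts resyncing the mirror onto it immediately.
mdadm --manage /dev/md126 --add /dev/vdb13

# Watch the resync progress.
cat /proc/mdstat
```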
RAID-10 test
RAID 10 combines RAID 1 and RAID 0. Setting up RAID 10 requires at least 4 disks. It provides better performance, although half of the total disk capacity is lost to mirroring. Both read and write performance are good, because reads and writes are serviced by several disks in parallel. It handles the heavy disk I/O of database write workloads while still keeping the data safe.
We test with four partitions, /dev/vdb{12..15}, each 200 MB in size.
[root@zwj ~]# mdadm -C /dev/md10 -l 10 -n 4 /dev/vdb{12..15}
mdadm: /dev/vdb13 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Sun May  7 19:49:56 2017
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.
[root@zwj ~]# mdadm -D /dev/md10
/dev/md10:
        Version : 1.2
  Creation Time : Sun May  7 20:31:43 2017
     Raid Level : raid10
     Array Size : 407552 (398.00 MiB 417.33 MB)
  Used Dev Size : 203776 (199.00 MiB 208.67 MB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Sun May  7 20:31:48 2017
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
         Layout : near=2
     Chunk Size : 512K
           Name : zwj:10  (local to host zwj)
           UUID : 3d1b1b2e:daa4597f:bf838bd1:c6b82c78
         Events : 17
    Number   Major   Minor   RaidDevice State
       0     252       28        0      active sync set-A   /dev/vdb12
       1     252       29        1      active sync set-B   /dev/vdb13
       2     252       30        2      active sync set-A   /dev/vdb14
       3     252       31        3      active sync set-B   /dev/vdb15
[root@zwj ~]# mkfs.ext3 /dev/md10 >/dev/null
[root@zwj ~]# mkdir /mnt/raid10;mount /dev/md10 /mnt/raid10
[root@zwj ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 20G 13G 5.9G 69% /
/dev/vdb1 20G 936M 18G 5% /mydata
/dev/md10 386M 11M 356M 3% /mnt/raid10
Besides the method above, there is another way to build RAID-10: first use the four partitions to create two RAID-1 arrays, /dev/md1 and /dev/md2, then use /dev/md1 and /dev/md2 to create a single RAID-0 array, /dev/md10:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/vdb12 /dev/vdb13
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/vdb14 /dev/vdb15
mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md1 /dev/md2
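Because these tests reuse the same partitions (mdadm warned above that /dev/vdb13 "appears to be part of a raid array"), an array should be torn down before its members are recycled. A sketch, using the device names from this test:

```shell
# Unmount the filesystem first, then stop the array.
umount /mnt/raid10
mdadm --stop /dev/md10

# Wipe the md metadata from the members so they can be
# reused in a new array without leftover-superblock warnings.
mdadm --zero-superblock /dev/vdb{12..15}
```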
RAID-5 test
In RAID-5, parity information is distributed across the member disks. For example, with 4 disks (spares not counted), the equivalent of one disk's capacity is used to store the parity for all the disks. If any single disk fails, the original data can be rebuilt from the parity information after the failed disk is replaced.
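The rebuild works because RAID-5 parity is a bitwise XOR of the data blocks, so XOR-ing the surviving blocks with the parity reproduces the lost block. A minimal illustration on single bytes (hypothetical values; real md computes parity per chunk, not per byte):

```shell
# Data bytes on three member disks.
d1=$(( 0x5A ))
d2=$(( 0x3C ))
d3=$(( 0xF0 ))

# The parity byte stored on the remaining disk is the XOR of all data bytes.
p=$(( d1 ^ d2 ^ d3 ))

# Suppose disk 2 fails: XOR-ing the survivors with the parity
# reproduces its byte exactly.
recovered=$(( d1 ^ d3 ^ p ))
printf 'parity=0x%02X recovered=0x%02X\n' "$p" "$recovered"   # recovered equals d2 (0x3C)
```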
特色:
[root@zwj ~]# mdadm --create /dev/md5 -l 5 -n 3 --spare-devices=1 /dev/vdb{12,13,14,15}
mdadm: /dev/vdb12 appears to be part of a raid array:
    level=raid10 devices=4 ctime=Sun May  7 20:31:43 2017
mdadm: /dev/vdb13 appears to be part of a raid array:
    level=raid10 devices=4 ctime=Sun May  7 20:31:43 2017
mdadm: /dev/vdb14 appears to be part of a raid array:
    level=raid10 devices=4 ctime=Sun May  7 20:31:43 2017
mdadm: /dev/vdb15 appears to be part of a raid array:
    level=raid10 devices=4 ctime=Sun May  7 20:31:43 2017
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
[root@zwj ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Sun May  7 21:26:28 2017
     Raid Level : raid5
     Array Size : 407552 (398.00 MiB 417.33 MB)
  Used Dev Size : 203776 (199.00 MiB 208.67 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Sun May  7 21:26:34 2017
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 512K
           Name : zwj:5  (local to host zwj)
           UUID : 27119caa:5e2deb6e:427c4d7f:536669ca
         Events : 18
    Number   Major   Minor   RaidDevice State
       0     252       28        0      active sync   /dev/vdb12
       1     252       29        1      active sync   /dev/vdb13
       4     252       30        2      active sync   /dev/vdb14
       3     252       31        -      spare   /dev/vdb15
[root@zwj ~]# cat /proc/mdstat
Personalities : [raid0] [raid10] [raid6] [raid5] [raid4]
md5 : active raid5 vdb14[4] vdb15[3](S) vdb13[1] vdb12[0]
      407552 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
[root@zwj ~]# mkfs.ext3 /dev/md5 >/dev/null
[root@zwj ~]# mkdir -p /mnt/raid5
[root@zwj ~]# mount /dev/md5 /mnt/raid5
[root@zwj ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        20G   13G  5.9G  69% /
/dev/vdb1        20G  936M   18G   5% /mydata
/dev/md5        386M   11M  356M   3% /mnt/raid5
[root@zwj ~]# cd /mnt/raid5
[root@zwj raid5]# dd if=/dev/zero of=50_M_file bs=1M count=50
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 0.1711 s, 306 MB/s
[root@zwj raid5]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        20G   13G  5.9G  69% /
/dev/vdb1        20G  936M   18G   5% /mydata
/dev/md5        386M   61M  306M  17% /mnt/raid5
[root@zwj raid5]# echo "assume /dev/vdb13 fail---";mdadm --manage /dev/md5 --fail /dev/vdb13;mdadm -D /dev/md5    # simulate failure of /dev/vdb13
assume /dev/vdb13 fail---
mdadm: set /dev/vdb13 faulty in /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Sun May  7 21:26:28 2017
     Raid Level : raid5
     Array Size : 407552 (398.00 MiB 417.33 MB)
  Used Dev Size : 203776 (199.00 MiB 208.67 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Sun May  7 21:37:40 2017
          State : clean, degraded, recovering
 Active Devices : 2    # 2 active disks, because 1 has failed
Working Devices : 3    # 3 working devices, 1 of them a spare
 Failed Devices : 1    # 1 failed
  Spare Devices : 1    # 1 spare
         Layout : left-symmetric
     Chunk Size : 512K
 Rebuild Status : 0% complete
           Name : zwj:5  (local to host zwj)
           UUID : 27119caa:5e2deb6e:427c4d7f:536669ca
         Events : 20
    Number   Major   Minor   RaidDevice State
       0     252       28        0      active sync   /dev/vdb12
       3     252       31        1      spare rebuilding   /dev/vdb15    # the RAID-5 is rebuilding automatically
       4     252       30        2      active sync   /dev/vdb14
       1     252       29        -      faulty   /dev/vdb13    # the faulty /dev/vdb13
[root@zwj raid5]# mdadm -D /dev/md5    # check whether the rebuild has finished
/dev/md5:
        Version : 1.2
  Creation Time : Sun May  7 21:26:28 2017
     Raid Level : raid5
     Array Size : 407552 (398.00 MiB 417.33 MB)
  Used Dev Size : 203776 (199.00 MiB 208.67 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Sun May  7 21:37:47 2017
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0    # no spare left
         Layout : left-symmetric
     Chunk Size : 512K
           Name : zwj:5  (local to host zwj)
           UUID : 27119caa:5e2deb6e:427c4d7f:536669ca
         Events : 39
    Number   Major   Minor   RaidDevice State
       0     252       28        0      active sync   /dev/vdb12
       3     252       31        1      active sync   /dev/vdb15
       4     252       30        2      active sync   /dev/vdb14
       1     252       29        -      faulty   /dev/vdb13
[root@zwj raid5]# cat /proc/mdstat
Personalities : [raid0] [raid10] [raid6] [raid5] [raid4]
md5 : active raid5 vdb14[4] vdb15[3] vdb13[1](F) vdb12[0]
      407552 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
[root@zwj raid5]# mdadm --manage /dev/md5 --remove /dev/vdb13 --add /dev/vdb13;mdadm -D /dev/md5    # remove the failed /dev/vdb13 and add a new partition (here /dev/vdb13 again)
mdadm: hot removed /dev/vdb13 from /dev/md5
mdadm: added /dev/vdb13
/dev/md5:
        Version : 1.2
  Creation Time : Sun May  7 21:26:28 2017
     Raid Level : raid5
     Array Size : 407552 (398.00 MiB 417.33 MB)
  Used Dev Size : 203776 (199.00 MiB 208.67 MB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Sun May  7 21:40:04 2017
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 512K
           Name : zwj:5  (local to host zwj)
           UUID : 27119caa:5e2deb6e:427c4d7f:536669ca
         Events : 41
    Number   Major   Minor   RaidDevice State
       0     252       28        0      active sync   /dev/vdb12
       3     252       31        1      active sync   /dev/vdb15
       4     252       30        2      active sync   /dev/vdb14
       5     252       29        -      spare   /dev/vdb13
[root@zwj raid5]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        20G   13G  5.9G  69% /
/dev/vdb1        20G  936M   18G   5% /mydata
/dev/md5        386M   61M  306M  17% /mnt/raid5