RAID Disk Arrays

RAID levels compared: minimum disk count, space utilization, and the trade-offs of each.

Level    Description                    Min. disks   Utilization   Pros / cons
RAID0    Striped volume                 2+           100%          Fast reads and writes; no fault tolerance
RAID1    Mirrored volume                2            50%           Ordinary read/write speed; fault tolerant
RAID5    Striped volume with parity     3+           (n-1)/n       Fast reads and writes; fault tolerant; survives one failed disk
RAID10   RAID1 safety + RAID0 speed     4            50%           Fast reads and writes; fault tolerant
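The utilization column can be sanity-checked with a little shell arithmetic. A minimal sketch, assuming four 20 GB disks like the /dev/sd{b,c,d,e} lab disks used below (the RAID1 figure assumes the two-disk mirror from the table; the numbers are illustrative only):

```shell
# Usable capacity in GB for n disks of $size GB each (illustrative numbers).
n=4
size=20

raid0=$(( n * size ))          # striping: all raw space is usable (100%)
raid1=$(( size ))              # two-disk mirror: one disk's worth (50%)
raid5=$(( (n - 1) * size ))    # one disk's worth lost to parity: (n-1)/n
raid10=$(( n * size / 2 ))     # mirrored stripes: half the raw space (50%)

echo "RAID0=${raid0}G RAID1=${raid1}G RAID5=${raid5}G RAID10=${raid10}G"
```

With these numbers the line reads `RAID0=80G RAID1=20G RAID5=60G RAID10=40G`; the 40 GB RAID10 figure matches the /dev/md0 size that `df -h` reports in the walkthrough below.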

mdadm Command Reference

Common options explained:

Option   Purpose
-a       In create mode (`-a yes`), auto-create the device file; in manage mode, add a disk
-n       Number of active devices
-l       RAID level
-C       Create an array
-v       Verbose: show what is being done
-f       Mark a device faulty (simulate failure)
-r       Remove a device
-Q       Show summary information
-D       Show detailed information
-S       Stop the RAID array

Building a RAID 10 Array

 

Step 1: list the disks
[root@ken ~]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdc  /dev/sdd  /dev/sde
 
Step 2: install mdadm
[root@ken ~]# yum install mdadm -y
 
Step 3: create the RAID 10 array
[root@ken ~]# mdadm -Cv /dev/md0 -a yes -n 4 -l 10 /dev/sd{b,c,d,e}
(-C: create an array; v: show the creation process; /dev/md0: array name; -a yes: auto-create the device file; -n: number of member devices; -l: RAID level; /dev/sd{b,c,d,e}: the disks used to build the array)
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 20954112K
mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
 
Step 4: format the array as ext4
[root@ken ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
2621440 inodes, 10477056 blocks
523852 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
    4096000, 7962624

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   
 
Step 5: mount the array
[root@ken ~]# mkdir /raid10
[root@ken ~]# mount /dev/md0 /raid10
[root@ken ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   17G  1.2G   16G   7% /
devtmpfs                 224M     0  224M   0% /dev
tmpfs                    236M     0  236M   0% /dev/shm
tmpfs                    236M  5.6M  230M   3% /run
tmpfs                    236M     0  236M   0% /sys/fs/cgroup
/dev/sda1               1014M  130M  885M  13% /boot
tmpfs                     48M     0   48M   0% /run/user/0
/dev/md0                  40G   49M   38G   1% /raid10
 
Step 6: view details of /dev/md0
[root@ken ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Feb 28 19:08:25 2019
        Raid Level : raid10
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Feb 28 19:11:41 2019
             State : clean, resyncing 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

     Resync Status : 96% complete

              Name : ken:0  (local to host ken)
              UUID : c5df1175:a6b1ad23:f3d7e80b:6b56fe98
            Events : 26

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync set-A   /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
 
Step 7: write the mount into the config file
[root@ken ~]# echo "/dev/md0 /raid10 ext4 defaults 0 0" >> /etc/fstab
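A malformed line here can stall the next boot, so it is worth a quick sanity check that the appended entry has fstab's six fields (device, mount point, filesystem, options, dump flag, fsck pass). A small awk sketch:

```shell
line="/dev/md0 /raid10 ext4 defaults 0 0"

# An fstab entry must have exactly six whitespace-separated fields.
echo "$line" | awk 'NF == 6 { print "OK: " $1 " -> " $2 " (" $3 ")" }'
```

This prints `OK: /dev/md0 -> /raid10 (ext4)`; a line with the wrong field count prints nothing, which is the cue to fix it before rebooting.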

 

Disk Array Failure and Recovery

Step 1: simulate a device failure
[root@ken ~]# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
[root@ken ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Feb 28 19:08:25 2019
        Raid Level : raid10
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Feb 28 19:15:59 2019
             State : clean, degraded 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : ken:0  (local to host ken)
              UUID : c5df1175:a6b1ad23:f3d7e80b:6b56fe98
            Events : 30

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde

       0       8       16        -      faulty   /dev/sdb
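In a monitoring script it is handier to extract the failed member than to read the report by eye. A sketch that parses two device-table lines copied from the `mdadm -D` output above (embedded as a sample string, since a test machine may have no array):

```shell
# Sample device-table lines from the mdadm -D report above.
report='       1       8       32        1      active sync set-B   /dev/sdc
       0       8       16        -      faulty   /dev/sdb'

# The device path is the last field of any line marked faulty.
echo "$report" | awk '/faulty/ { print $NF }'    # prints /dev/sdb
```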
 
Step 2: add a new disk
In a RAID 10 array, a single failed disk in one of the RAID 1 pairs does not stop the RAID 10 array from working. Once a new disk has been purchased, simply swap it in with mdadm; in the meantime, files can still be created and deleted in the /raid10 directory as usual. Because the disks here are simulated inside a virtual machine, reboot the system first and then add the new disk to the array.
[root@ken ~]# reboot
[root@ken ~]# umount /raid10
[root@ken ~]# mdadm /dev/md0 -a /dev/sdb
mdadm: added /dev/sdb
[root@ken ~]# mdadm -D  /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Feb 28 19:08:25 2019
        Raid Level : raid10
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Feb 28 19:19:14 2019
             State : clean, degraded, recovering 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 7% complete                                      # rebuild progress shown here

              Name : ken:0  (local to host ken)
              UUID : c5df1175:a6b1ad23:f3d7e80b:6b56fe98
            Events : 35

    Number   Major   Minor   RaidDevice State
       4       8       16        0      spare rebuilding   /dev/sdb    # rebuilding in progress
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
 
Checking again shows the rebuild has completed:
[root@ken ~]# mdadm -D  /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Feb 28 19:08:25 2019
        Raid Level : raid10
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Feb 28 19:20:52 2019
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : ken:0  (local to host ken)
              UUID : c5df1175:a6b1ad23:f3d7e80b:6b56fe98
            Events : 51

    Number   Major   Minor   RaidDevice State
       4       8       16        0      active sync set-A   /dev/sdb
       1       8       32        1      active sync set-B   /dev/sdc
       2       8       48        2      active sync set-A   /dev/sdd
       3       8       64        3      active sync set-B   /dev/sde
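Rebuild progress can also be followed in /proc/mdstat (for example with `watch cat /proc/mdstat`). The sketch below pulls the percentage out of a recovery line; the sample line imitates the md driver's usual format and is embedded here rather than read live:

```shell
# A typical /proc/mdstat recovery line (sample text, not live output).
mdstat='      [=>...................]  recovery =  7.0% (1466880/20954112) finish=2.2min speed=146688K/sec'

# Extract just the completion percentage.
echo "$mdstat" | grep -oE '[0-9]+\.[0-9]+%'    # prints 7.0%
```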

Building a RAID 5 Array with a Spare Disk

 

Step 1: list the disks
[root@ken ~]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdc  /dev/sdd  /dev/sde
 
Step 2: create the RAID 5 array
[root@ken ~]# mdadm  -Cv /dev/md0 -n 3 -l 5 -x 1 /dev/sd{b,c,d,e}
(-x: number of spare disks)
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 20954112K
mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
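The `size set to 20954112K` line lets you predict the final array size before formatting: RAID5 keeps (n-1)/n of the raw space, and the `-x 1` spare adds no capacity. A quick check:

```shell
member_kib=20954112    # per-member size reported by mdadm above
n=3                    # active RAID5 members (-n 3)

echo "expected array size: $(( (n - 1) * member_kib )) KiB"
```

This gives 41908224 KiB, matching the `Array Size : 41908224` figure that `mdadm -D /dev/md0` reports below.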
 
Step 3: format as ext4
[root@ken ~]# mkfs.ext4 /dev/md0 
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
2621440 inodes, 10477056 blocks
523852 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
    4096000, 7962624

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done  
 
Step 4: mount the array
[root@ken ~]# mkdir /raid5
[root@ken ~]# mount /dev/md0 /raid5
[root@ken ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   17G  1.2G   16G   7% /
devtmpfs                 476M     0  476M   0% /dev
tmpfs                    488M     0  488M   0% /dev/shm
tmpfs                    488M  7.7M  480M   2% /run
tmpfs                    488M     0  488M   0% /sys/fs/cgroup
/dev/sda1               1014M  130M  885M  13% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/md0                  40G   49M   38G   1% /raid5
 
Step 5: view the array details
Note that there is one spare disk, /dev/sde:
[root@ken ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Feb 28 19:35:10 2019
        Raid Level : raid5
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Feb 28 19:37:11 2019
             State : active 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1  (the spare disk)

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : ken:0  (local to host ken)
              UUID : b693fe72:4452bd3f:4d995779:ee33bc77
            Events : 76

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       3       8       64        -      spare   /dev/sde
 
Step 6: simulate failure of /dev/sdb
The spare /dev/sde immediately begins rebuilding:
[root@ken ~]# mdadm /dev/md0 -f /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
[root@ken ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Thu Feb 28 19:35:10 2019
        Raid Level : raid5
        Array Size : 41908224 (39.97 GiB 42.91 GB)
     Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Feb 28 19:38:41 2019
             State : active, degraded, recovering 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 2% complete

              Name : ken:0  (local to host ken)
              UUID : b693fe72:4452bd3f:4d995779:ee33bc77
            Events : 91

    Number   Major   Minor   RaidDevice State
       3       8       64        0      spare rebuilding   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd

       0       8       16        -      faulty   /dev/sdb
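A quick scripted health check is to count members in the `active sync` state and compare the count against `Raid Devices`. A sketch over the degraded device table above (embedded as sample text):

```shell
# Device table from the degraded RAID5 report above (sample text).
table='       3       8       64        0      spare rebuilding   /dev/sde
       1       8       32        1      active sync   /dev/sdc
       4       8       48        2      active sync   /dev/sdd
       0       8       16        -      faulty   /dev/sdb'

active=$(echo "$table" | grep -c 'active sync')
echo "active members: $active of 3"    # prints: active members: 2 of 3
```

Once the spare has finished rebuilding, the count returns to 3 and the array is fully redundant again.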