Software RAID (Software Disk Array)

* Build a RAID 5 array from four partitions

* Each partition is 1 GB; ideally all partitions are the same size

* Set aside one additional partition as a spare disk

* The spare disk should be the same size as the other RAID partitions

* Mount the RAID 5 device at /mnt/raid
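Before stepping through it, here is a condensed sketch of the whole flow, assuming the partitions /dev/vdb{2,3,5,6,7} that step 1 below ends up with:

# 1 GB partitions are carved out of /dev/vdb with fdisk first (step 1), then:
mdadm --create --auto=yes /dev/md0 --level=5 --raid-devices=4 --spare-devices=1 /dev/vdb{2,3,5,6,7}
mkfs.ext4 /dev/md0
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid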

1. Partitioning

[root@server3 mnt]# fdisk /dev/vdb

Welcome to fdisk (util-linux 2.23.2).

 

Changes will remain in memory only, until you decide to write them.

Be careful before using the write command.

 

 

Command (m for help): n

Partition type:

   p   primary (0 primary, 0 extended, 4 free)

   e   extended

Select (default p): p

Partition number (1-4, default 1):

First sector (2048-41943039, default 2048):

Using default value 2048

Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): +1G

Partition 1 of type Linux and of size 1 GiB is set

 

 

 

Command (m for help): p

 

Disk /dev/vdb: 21.5 GB, 21474836480 bytes, 41943040 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk label type: dos

Disk identifier: 0x7afa732b

 

   Device Boot      Start         End      Blocks   Id  System

/dev/vdb1            2048     2099199     1048576   83  Linux

/dev/vdb2         2099200     4196351     1048576   83  Linux

/dev/vdb3         4196352     6293503     1048576   83  Linux

/dev/vdb4         6293504    41943039    17824768    5  Extended

/dev/vdb5         6295552     8392703     1048576   83  Linux

/dev/vdb6         8394752    10491903     1048576   83  Linux

 

[root@server3 ~]# partprobe     ** This may print an error; a reboot clears it
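To confirm the kernel actually picked up the new partition table without rebooting, two quick checks (not in the original transcript) are:

lsblk /dev/vdb           # the new vdb1..vdb6 partitions should all be listed
cat /proc/partitions     # the kernel's own view of the partition table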

 

2. Create the RAID with mdadm

[root@server3 ~]# mdadm --create --auto=yes /dev/md0 --level=5 --raid-devices=4 --spare-devices=1 /dev/vdb{2,3,5,6,7}      <== a quirk of my setup: I had re-created an extra partition, hence these partition numbers

mdadm: Defaulting to version 1.2 metadata

mdadm: array /dev/md0 started.

[root@server3 ~]# mdadm --detail /dev/md0

/dev/md0:   <== RAID device file name

        Version : 1.2

  Creation Time : Mon Jan 21 15:41:42 2019    <== creation time

     Raid Level : raid5             <== RAID level

     Array Size : 3142656 (3.00 GiB 3.22 GB)  <== usable capacity of the array

  Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB) <== usable capacity of each device

   Raid Devices : 4             <== number of devices making up the RAID

  Total Devices : 5             <== total number of devices

    Persistence : Superblock is persistent

 

    Update Time : Mon Jan 21 15:41:52 2019

          State : clean, degraded, recovering

 Active Devices : 4                 <== number of active devices

Working Devices : 5                 <== number of working devices

 Failed Devices : 0                 <== number of failed devices

  Spare Devices : 1                 <== number of spare devices

 

         Layout : left-symmetric

     Chunk Size : 512K

 

 Rebuild Status : 12% complete

 

           Name : server3:0  (local to host server3)

           UUID : 4c7f9840:5a192f12:004c417e:29d8c02e

         Events : 2

 

    Number   Major   Minor   RaidDevice State

       0     253       18        0      active sync   /dev/vdb2

       1     253       19        1      active sync   /dev/vdb3

       2     253       21        2      active sync   /dev/vdb5

       5     253       22        3      spare rebuilding   /dev/vdb6

 

       4     253       23        -      spare   /dev/vdb7
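The "spare rebuilding" state above is the initial parity sync. It can be watched live while it runs; a standard way is:

watch -n 1 cat /proc/mdstat     # refreshes the rebuild percentage every second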

 

3. The array's status can also be checked through the following file

[root@server3 ~]# cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]

md0 : active raid5 vdb6[5] vdb7[4](S) vdb5[2] vdb3[1] vdb2[0]

      3142656 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

     

unused devices: <none>
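A note on the notation: [4/4] [UUUU] means all four member devices are up (U = up, _ = down), and the (S) after vdb7 marks it as the standby spare.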

 

Format and mount the RAID

 

[root@server3 ~]# mkfs.ext4  /dev/md0

mke2fs 1.42.9 (28-Dec-2013)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=128 blocks, Stripe width=384 blocks

196608 inodes, 785664 blocks

39283 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=805306368

24 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

    32768, 98304, 163840, 229376, 294912

 

Allocating group tables: done                           

Writing inode tables: done                           

Creating journal (16384 blocks): done

Writing superblocks and filesystem accounting information: done
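The Stride/Stripe width values above show mkfs.ext4 aligning the filesystem to the array geometry, and can be checked by hand: stride = chunk size / block size = 512 KiB / 4 KiB = 128 blocks, and stripe width = stride × data disks = 128 × (4 − 1) = 384 blocks. The equivalent explicit invocation would be:

mkfs.ext4 -E stride=128,stripe-width=384 /dev/md0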

 

[root@server3 ~]# cd /mnt/

[root@server3 mnt]# ls raid/     <== create this directory yourself beforehand (mkdir raid)

[root@server3 mnt]# mount /dev/md0  raid/

[root@server3 mnt]# df

Filesystem     1K-blocks    Used Available Use% Mounted on

/dev/vda3       20243456 3441540  16801916  18% /

devtmpfs          493580       0    493580   0% /dev

tmpfs             508248      84    508164   1% /dev/shm

tmpfs             508248   13564    494684   3% /run

tmpfs             508248       0    508248   0% /sys/fs/cgroup

/dev/vda1         201380  133424     67956  67% /boot

tmpfs             101652      20    101632   1% /run/user/42

tmpfs             101652       0    101652   0% /run/user/0

/dev/md0         3027728    9216   2844996   1% /mnt/raid
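The ~3 GB shown for /dev/md0 matches the RAID 5 capacity formula: usable space = (n − 1) × device size = (4 − 1) × 1 GiB ≈ 3 GiB, since one device's worth of space is consumed by parity.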

 

 

4. Rescuing from a disk failure

    First, mark a disk as faulty

[root@server3 mnt]# cp -a /var/log   raid/    * Copy some data into /mnt/raid

[root@server3 mnt]# df /mnt/raid/ ; du -sm /mnt/raid/

Filesystem     1K-blocks  Used Available Use% Mounted on

/dev/md0         3027728 15404   2838808   1% /mnt/raid

7   /mnt/raid/

 

Assume /dev/vdb5 has failed

[root@server3 mnt]# mdadm --manage /dev/md0 --fail /dev/vdb5

mdadm: set /dev/vdb5 faulty in /dev/md0

[root@server3 mnt]# mdadm --detail /dev/md0

/dev/md0:

        Version : 1.2

  Creation Time : Mon Jan 21 15:41:42 2019

     Raid Level : raid5

     Array Size : 3142656 (3.00 GiB 3.22 GB)

  Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)

   Raid Devices : 4

  Total Devices : 5

    Persistence : Superblock is persistent

 

    Update Time : Mon Jan 21 16:18:46 2019

          State : clean, degraded, recovering

 Active Devices : 3

Working Devices : 4

 Failed Devices : 1      ** one device has failed

  Spare Devices : 1

 

         Layout : left-symmetric

     Chunk Size : 512K

 

 Rebuild Status : 11% complete

 

           Name : server3:0  (local to host server3)

           UUID : 4c7f9840:5a192f12:004c417e:29d8c02e

         Events : 21

 

    Number   Major   Minor   RaidDevice State

       0     253       18        0      active sync   /dev/vdb2

       1     253       19        1      active sync   /dev/vdb3

       4     253       23        2      spare rebuilding   /dev/vdb7

       5     253       22        3      active sync   /dev/vdb6

 

       2     253       21        -      faulty   /dev/vdb5
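Notice that the spare /dev/vdb7 took over automatically and is rebuilding. Rather than discovering failures by hand, mdadm can watch the array and mail on events; a suggested invocation (not part of the original walkthrough):

mdadm --monitor --scan --daemonise --mail=root     # e-mail root on Fail/SpareActive events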


 

 

5. Remove the failed disk and add a new one

    Create a new partition

[root@server3 mnt]# fdisk  /dev/vdb

Welcome to fdisk (util-linux 2.23.2).

 

Changes will remain in memory only, until you decide to write them.

Be careful before using the write command.

 

 

Command (m for help): n

All primary partitions are in use

Adding logical partition 8

First sector (12593152-41943039, default 12593152):

Using default value 12593152

Last sector, +sectors or +size{K,M,G} (12593152-41943039, default 41943039): +1G

Partition 8 of type Linux and of size 1 GiB is set

 

Command (m for help): wq

The partition table has been altered!

 

Calling ioctl() to re-read partition table.

 

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table. The new table will be used at

the next reboot or after you run partprobe(8) or kpartx(8)

Syncing disks.

[root@server3 mnt]# partprobe

    Add the new disk and remove the faulty one

[root@server3 mnt]# mdadm --manage /dev/md0 --add /dev/vdb8 --remove /dev/vdb5

mdadm: added /dev/vdb8

mdadm: hot removed /dev/vdb5 from /dev/md0

 

[root@server3 mnt]# mdadm --detail /dev/md0

/dev/md0:

        Version : 1.2

  Creation Time : Mon Jan 21 15:41:42 2019

     Raid Level : raid5

     Array Size : 3142656 (3.00 GiB 3.22 GB)

  Used Dev Size : 1047552 (1023.00 MiB 1072.69 MB)

   Raid Devices : 4

  Total Devices : 5

    Persistence : Superblock is persistent

 

    Update Time : Mon Jan 21 16:26:07 2019

          State : clean

 Active Devices : 4

Working Devices : 5

 Failed Devices : 0

  Spare Devices : 1

 

         Layout : left-symmetric

     Chunk Size : 512K

 

           Name : server3:0  (local to host server3)

           UUID : 4c7f9840:5a192f12:004c417e:29d8c02e

         Events : 38

 

    Number   Major   Minor   RaidDevice State

       0     253       18        0      active sync   /dev/vdb2

       1     253       19        1      active sync   /dev/vdb3

       4     253       23        2      active sync   /dev/vdb7

       5     253       22        3      active sync   /dev/vdb6

 

       6     253       24        -      spare   /dev/vdb8
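To double-check that the new partition really carries the array's metadata, its md superblock can be examined directly (an extra verification step, not in the original):

mdadm --examine /dev/vdb8     # prints the member superblock, including the array UUID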

 

6. Auto-start the RAID at boot and auto-mount it

[root@server3 mnt]# vim /etc/fstab

/dev/md0        /mnt/raid       ext4            defaults    1 2

[root@server3 mnt]# mount -a     * Verify the new entry mounts cleanly
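One caveat worth adding: with 1.2 metadata the /dev/md0 name is not guaranteed to survive a reboot unless the array is recorded in /etc/mdadm.conf, so it is safer to pin it there and, if desired, mount by filesystem UUID instead:

mdadm --detail --scan >> /etc/mdadm.conf     # records an ARRAY line with the array UUID
blkid /dev/md0                               # shows the filesystem UUID usable in /etc/fstab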

 

 

7. Shut down the software RAID

[root@server3 mnt]# umount /dev/md0

[root@server3 mnt]# vim /etc/fstab    * Delete the line just added

 

[root@server3 mnt]# mdadm  --stop /dev/md0

mdadm: stopped /dev/md0


[root@server3 mnt]# cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]

unused devices: <none>       ** no md devices remain
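To retire the array for good, the md superblocks on the former members should also be wiped, otherwise the kernel may try to re-assemble the array on the next boot (a standard teardown step, added for completeness):

mdadm --zero-superblock /dev/vdb{2,3,6,7,8}     # wipe md metadata from the ex-members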
