【Background theory you should know before reading: 1. What is RAID? 2. What RAID levels are there? 3. Why is RAID needed? 4. Hardware RAID vs. software RAID?】
mdadm: builds a RAID array out of any block devices. In real-world work almost everything is hardware RAID; software RAID is rarely used because it relies on the CPU and software to emulate the functionality【hardware RAID is implemented by a dedicated controller】, distributing data according to the chosen RAID level. Here we use software RAID purely for practice, since the theory is the same as hardware RAID. A real software RAID should also span multiple physical disks: for example, when data is written it is split across two disks at the same time, roughly doubling throughput. In this exercise, however, the RAID is built from two partitions on the same disk, each partition acting as one "disk", so the speed will certainly be no better than without RAID. This is only for learning; there is no need to install extra disks.
First, create two partitions:
[root@localhost ~]# fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305): +1G

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (133-1305, default 133):
Using default value 133
Last cylinder, +cylinders or +size{K,M,G} (133-1305, default 1305): +1G

Command (m for help): t【change the partition type】
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd25c91c2

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         132     1060258+  fd  Linux raid autodetect
/dev/sdb2             133         264     1060290   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: 設備或資源忙.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@localhost ~]# cat /proc/partitions
major minor  #blocks  name

   8        0   10485760 sda
   8        1     204800 sda1
   8        2    4194304 sda2
   8        3    2097152 sda3
   8        4          1 sda4
   8        5    3987456 sda5
   8       16   10485760 sdb
   8       17    1606468 sdb1
   8       18    2104515 sdb2
   8       19    1060290 sdb3
[root@localhost ~]# partprobe /dev/sdb【make the kernel re-read the table; the device is busy because it was left mounted in the previous lesson】
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sdb (設備或資源忙).
As a result, it may not reflect all of your changes until after reboot.
[root@localhost ~]# ls -l /mnt/test
總用量 16
drwx------. 2 root root 16384 4月 24 18:29 lost+found
[root@localhost ~]# umount /mnt/test 【unmount】
[root@localhost ~]# partprobe /dev/sdb【now the kernel can re-read it】
[root@localhost ~]# cat /proc/partitions【check the partition table; it has been updated】
major minor  #blocks  name

   8        0   10485760 sda
   8        1     204800 sda1
   8        2    4194304 sda2
   8        3    2097152 sda3
   8        4          1 sda4
   8        5    3987456 sda5
   8       16   10485760 sdb
   8       17    1060258 sdb1
   8       18    1060290 sdb2
mdadm can turn any block devices into a RAID array; it works in several modes.
It is a mode-driven command:
Create mode: -C. Options available in this mode:
-l: RAID level
-n #: number of devices
-a {yes|no}: whether to create the device file automatically
-c: chunk size, a power of 2, 64K by default. For example, in a two-disk RAID a file is only striped onto the second disk when it is larger than 64K; otherwise a single disk can hold it.
-x #: number of spare (idle) disks
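Putting the create-mode options together, a minimal sketch, assuming two unused RAID-type partitions with the hypothetical names /dev/sdX1 and /dev/sdX2 (not the partitions created below):
# general form: mdadm -C <md device> -a yes -l <level> -n <#devices> [-x <#spares>] [-c <chunk>] <members...>
mdadm -C /dev/md9 -a yes -l 0 -n 2 -c 32 /dev/sdX1 /dev/sdX2   # RAID0, two members, 32K chunk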
Create a 2G RAID0 from two 1G disks, which roughly doubles I/O throughput【you could also use four 512M disks for roughly 4x the I/O】.
Create md0:
[root@localhost ~]# mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sdb{1,2}【create md0】
mdadm: /dev/sdb1 appears to contain an ext2fs file system
    size=1606468K  mtime=Wed Apr 26 18:23:41 2017
Continue creating array? y【the old filesystem held data; confirm it may be overwritten】
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@localhost ~]# cat /proc/mdstat【this file shows which RAID devices are active】
Personalities : [raid0]
md0 : active raid0 sdb2[1] sdb1[0]
      2117632 blocks super 1.2 512k chunks

unused devices: <none>

Create the filesystem:
[root@localhost ~]# mke2fs -j /dev/md0
mke2fs 1.41.12 (17-May-2010)
文件系統標籤=
操做系統:Linux
塊大小=4096 (log=2)
分塊大小=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
132464 inodes, 529408 blocks
26470 blocks (5.00%) reserved for the super user
第一個數據塊=0
Maximum filesystem blocks=545259520
17 block groups
32768 blocks per group, 32768 fragments per group
7792 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912

正在寫入inode表: 完成
Creating journal (16384 blocks): 完成
Writing superblocks and filesystem accounting information: 完成

This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Mount it:
[root@localhost ~]# mount /dev/md0 /mnt/test
[root@localhost ~]# ls -l /mnt/test
總用量 16
drwx------. 2 root root 16384 4月 29 14:58 lost+found
Next, create a RAID1 from two disks of 1G each; one stores the data and the other mirrors it. Because every piece of data has to be written twice, write speed is roughly halved, while read speed roughly doubles.
Create the partitions and set the partition type as before, omitted here【see above】.
Problem getting the kernel to read the new partitions:
[root@localhost ~]# partprobe /dev/sdb
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sdb (設備或資源忙).
As a result, it may not reflect all of your changes until after reboot.
[root@localhost ~]# cat /proc/partitions
major minor  #blocks  name

   8        0   10485760 sda
   8        1     204800 sda1
   8        2    4194304 sda2
   8        3    2097152 sda3
   8        4          1 sda4
   8        5    3987456 sda5
   8       16   10485760 sdb
   8       17    1060258 sdb1
   8       18    1060290 sdb2
   9        0    2117632 md0

Fix (you will run into this on RHEL 6):
[root@localhost ~]# partx -a /dev/sdb3 /dev/sdb
[root@localhost ~]# partx -a /dev/sdb5 /dev/sdb
[root@localhost ~]# partx -a /dev/sdb6 /dev/sdb
[root@localhost ~]# cat /proc/partitions
major minor  #blocks  name

   8        0   10485760 sda
   8        1     204800 sda1
   8        2    4194304 sda2
   8        3    2097152 sda3
   8        4          1 sda4
   8        5    3987456 sda5
   8       16   10485760 sdb
   8       17    1060258 sdb1
   8       18    1060290 sdb2
   8       19    1060290 sdb3
   8       20         31 sdb4
   8       21    1060258 sdb5
   8       22    1060258 sdb6
   9        0    2117632 md0

Create the RAID1:
[root@localhost ~]# mdadm -C /dev/md1 -a yes -l 1 -n 2 /dev/sdb3 /dev/sdb5
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 sdb5[1] sdb3[0]
      1059200 blocks super 1.2 [2/2] [UU]【note this one is 1G】

md0 : active raid0 sdb2[1] sdb1[0]
      2117632 blocks super 1.2 512k chunks【this one is 2G】

unused devices: <none>
[root@localhost ~]# mke2fs -j /dev/md1【create the filesystem】
[root@localhost mnt]# mount /dev/md1 /mnt/test1【mount it】
[root@localhost mnt]# ls -l /mnt/test1
總用量 16
drwx------. 2 root root 16384 4月 29 15:57 lost+found【mounted successfully】
In addition, -c above sets the chunk size, 64K by default, while the filesystem block size defaults to 4K. Normally the system works out how many filesystem blocks each chunk spans as it goes, but with the defaults that is simply the constant 64/4 = 16, so there is no point recalculating it every time; we can specify the value by hand to save that computation and improve performance.
The -E option of mke2fs specifies this stripe parameter: stride = CHUNK size / filesystem block size. Once stride is given explicitly it no longer has to be computed each time:
[root@localhost etc]# mke2fs -j -E stride=16 -b 4096 /dev/md0
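Note that stride=16 corresponds to the 64K default chunk discussed above. The array actually created earlier used a 512K chunk (see the mdstat output), for which the value would be 512/4 = 128, and mke2fs indeed auto-detected Stride=128 in that run. A sketch for that case:
mke2fs -j -E stride=128 -b 4096 /dev/md0    # 512K chunk / 4K block = 128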
--add: short option -a, add a disk
--remove: short option -r, remove a disk
--fail: short option -f, mark a disk as failed (simulate a failure)【e.g. mdadm /dev/md# --fail /dev/sdb3】
[root@localhost media]# mdadm /dev/md1 -f /dev/sdb3【simulate a failure of sdb3】
mdadm: set /dev/sdb3 faulty in /dev/md1
[root@localhost media]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Sat Apr 29 15:44:04 2017
     Raid Level : raid1
     Array Size : 1059200 (1034.55 MiB 1084.62 MB)
  Used Dev Size : 1059200 (1034.55 MiB 1084.62 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Apr 29 17:37:57 2017
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           Name : localhost.localdomain:1  (local to host localhost.localdomain)
           UUID : b19c3436:677585c7:9cf346ce:2dbc4908
         Events : 19

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       21        1      active sync   /dev/sdb5

       0       8       19        -      faulty   /dev/sdb3【sdb3 is now faulty】
[root@localhost media]# ls -l /mnt/test1【check whether it can still be used】
總用量 16
drwx------. 2 root root 16384 4月 29 15:57 lost+found
In a real scenario the failed disk would be physically removed and replaced; since this is only a simulation, we remove it with a command instead.
[root@localhost media]# mdadm /dev/md1 -r /dev/sdb3
mdadm: hot removed /dev/sdb3 from /dev/md1
[root@localhost media]# mdadm -D /dev/md1【check again】
.....
....
           UUID : b19c3436:677585c7:9cf346ce:2dbc4908
         Events : 20

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       21        1      active sync   /dev/sdb5【only one disk left】
增長一塊新盤,要求和原來的磁盤大小同樣;只建立分區指定分區類型就行,不須要建立文件系統;還能夠-a多加一塊盤做爲備用,當raid1中一塊盤壞了自動就會頂上去,備份另外一塊盤的數據。
[root@localhost media]# mdadm /dev/md1 -a /dev/sdb6
mdadm: added /dev/sdb6
[root@localhost media]# cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 sdb6[2] sdb5[1]
      1059200 blocks super 1.2 [2/1] [_U]
      [==============>......]  recovery = 73.9% (783168/1059200) finish=0.0min【resync in progress】
      speed=261056K/sec

md0 : active raid0 sdb2[1] sdb1[0]
      2117632 blocks super 1.2 512k chunks

unused devices: <none>
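The hot-spare idea mentioned above can also be set up at creation time with -x. A minimal sketch, assuming three unused RAID-type partitions with the hypothetical names /dev/sdX1, /dev/sdX2 and /dev/sdX3:
mdadm -C /dev/md2 -a yes -l 1 -n 2 -x 1 /dev/sdX1 /dev/sdX2 /dev/sdX3
# two active mirrors plus one spare; when either mirror fails, the spare is
# pulled in and resynced automatically, much like the manual -a shown above
Note that when -a adds a device to an array that is not degraded, the new device joins as a spare rather than as an active member.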
View detailed information: mdadm -D /dev/md#【-D == --detail】
--scan【mdadm -D --scan】
[root@localhost mnt]# mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Apr 29 09:12:13 2017
     Raid Level : raid0
     Array Size : 2117632 (2.02 GiB 2.17 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Apr 29 09:12:13 2017
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : localhost.localdomain:0  (local to host localhost.localdomain)
           UUID : 8c6017f2:898580a7:f170e4b1:3e15c461
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       18        1      active sync   /dev/sdb2
Stop an array: mdadm -S or --stop /dev/md#【may differ slightly on versions before RHEL 6】. After an array has been stopped, the next boot may bring it back as /dev/md126 and /dev/md127; in that case run mdadm -S /dev/md126 and mdadm -S /dev/md127 so that the partitions they occupy become usable again, otherwise those partitions stay busy【device or resource busy】. After that, the partitions can be used like ordinary partitions.
[root@localhost media]# umount /mnt/test1
[root@localhost media]# mdadm -S /dev/md1
mdadm: stopped /dev/md1
[root@localhost etc]# mdadm -D --scan【check】
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=8c6017f2:898580a7:f170e4b1:3e15c461
-F: rarely used, skipped here
-G: grow mode, also not covered here
-A (assemble). It is mostly used on versions before RHEL 6: run mdadm -D --scan > /etc/mdadm.conf, then stop the array with mdadm -S /dev/md#; next time, mdadm -A /dev/md# reads the config file and assembles the array automatically. Without an mdadm.conf you also have to add the disk by hand, e.g. mdadm /dev/md1 -a /dev/sdb6. On RHEL 6 and later, mdadm -S removes the device outright, so -A is almost never used there.
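A hedged sketch of that workflow, using the md1 array from this article:
mdadm -D --scan > /etc/mdadm.conf   # save the current array definitions
mdadm -S /dev/md1                   # stop the array
mdadm -A /dev/md1                   # later: reassemble it from /etc/mdadm.conf
Keeping /etc/mdadm.conf populated this way is also the usual countermeasure for the md126/md127 renaming mentioned above.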
Reference: http://blog.csdn.net/lileizha...【RHEL 6: md0 becomes md127 after creating a RAID and rebooting】
Reference: http://godben.blog.51cto.com/...【godben's blog: how to fix md0 turning into md127 after reboot on Linux RHEL 6】
After an array is stopped, its partitions may automatically get grabbed again, as in the situation below: Can't open /dev/sdb6 exclusively. Mounted filesystem?. When that happens, just run mdadm -S again to stop it; to keep the partition from being grabbed again later, first mark the member failed with -f and then stop the array (a sketch follows the transcript below).
[root@localhost ~]# pvcreate /dev/sdb6
  Can't open /dev/sdb6 exclusively.  Mounted filesystem?
[root@localhost ~]# cat /proc/mdstat【how did the RAID grab it again automatically???】
Personalities : [raid1] [raid0]
md1 : inactive sdb6[2](S)
      1059234 blocks super 1.2

md0 : active raid0 sdb2[1] sdb1[0]
      2117632 blocks super 1.2 512k chunks

unused devices: <none>
[root@localhost ~]# mdadm -S /dev/md0【stop the array】
mdadm: stopped /dev/md0
[root@localhost ~]# mdadm -S /dev/md1
mdadm: stopped /dev/md1
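A hedged sketch of that fail-then-stop idea, using the md1/sdb6 names from this example and assuming the array is still active enough to accept the commands:
mdadm /dev/md1 -f /dev/sdb6      # mark the member faulty first
mdadm -S /dev/md1                # then stop the array
# if the partition keeps getting auto-assembled, wiping its RAID superblock
# also stops that (destructive to the array metadata; this extra step is an
# assumption here, not taken from the transcript above):
mdadm --zero-superblock /dev/sdb6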
watch: run a given command periodically and display the result full-screen
-n #: run the command once every # seconds, default 2s【watch -n # 'COMMAND'】
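For example, a usage sketch for keeping an eye on the resync progress shown earlier:
watch -n 1 'cat /proc/mdstat'    # refresh the RAID status every second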
You are welcome to visit my blog: http://www.51aixue.cn/2017/04...