IP SAN Shared Storage on Linux: Operation Notes

 

1. Introduction
SAN, the storage area network (storage area network and SAN protocols), is a high-speed network that carries data between computers and storage systems. SANs are commonly divided into FC-SAN and IP-SAN: FC-SAN carries the SCSI protocol over Fibre Channel, while IP-SAN carries the SCSI protocol over TCP/IP, i.e. over ordinary IP networks. The storage device itself is one or more disk units used to hold computer data, typically a disk array, with EMC, Hitachi and others as the main vendors.

iSCSI (internet SCSI), developed by IBM, is a SCSI command set for hardware devices that runs on top of the IP protocol. It allows the SCSI protocol to be carried over IP networks and routed over, for example, high-speed Gigabit Ethernet. iSCSI is a storage technology that combines the existing SCSI interface with Ethernet, so that servers can exchange data with storage devices across an IP network.

iSCSI is a TCP/IP-based protocol used to establish and manage connections between IP storage devices, hosts and clients, and to build storage area networks (SANs). A SAN makes it possible to apply the SCSI protocol to high-speed data transport networks, with transfers performed at the block level between multiple storage networks. The SCSI architecture is based on a client/server model, and its usual environment is devices located close to each other and connected by a SCSI bus.

The main function of iSCSI is to encapsulate and reliably transfer large amounts of data between host systems (initiators) and storage devices (targets) over a TCP/IP network.
The topology of a complete iSCSI system is shown in the original figure (not reproduced here).

Put simply, iSCSI wraps SCSI commands in TCP/IP and transmits them over Ethernet. It lets the SCSI protocol be carried and executed over IP networks, so data can be accessed over, say, high-speed Gigabit Ethernet, giving network-wide data transfer and management. Compared with a Fibre Channel FC-SAN, a SAN built on iSCSI offers a very good price/performance ratio.

iSCSI is an end-to-end, session-layer protocol that defines the mapping of SCSI onto TCP/IP: the initiator encapsulates SCSI commands and data into iSCSI protocol data units and hands them down to the TCP layer, where they are finally packed into IP packets and sent over the IP network; on arrival the target unwraps them back into SCSI commands and data, which the storage controller passes to the specified drive. SCSI commands and data are thus carried transparently over the IP network. iSCSI integrates the existing SCSI storage protocol with the TCP/IP network protocol, merging storage seamlessly into TCP/IP networks. In this post the initiator is referred to as the client and the target as the server, for ease of understanding.

2. Configuration Case

Requirement:
The company previously bought six machines on Alibaba Cloud, with data disks of different sizes. After our own IDC was built, the business was migrated from Alibaba Cloud to the IDC machines. To avoid wasting these Alibaba Cloud machines, the plan is to turn five of them into IP SAN shared storage and have the remaining machine import the SAN storage shared by those five, combine it with its own disk into an LVM logical volume, and finally use the result as a single backup disk!

1) Server information:

IP address          Data disk    Hostname         OS version
192.168.10.17       200G         ipsan-node01     centos7.3
192.168.10.18       500G         ipsan-node02     centos7.3
192.168.10.5        500G         ipsan-node03     centos7.3 
192.168.10.6        200G         ipsan-node04     centos7.3
192.168.10.20       100G         ipsan-node05     centos7.3
192.168.10.10       100G         ipsan-node06     centos7.3

The first five nodes act as the IP-SAN targets (storage servers); the sixth node acts as the client, which imports the IP-SAN storage shared by the first five nodes and then
combines those five shared volumes with its own 100G disk into an LVM logical volume, forming one large storage pool!

First, format the data disk on each of the six nodes (the data disks on all six machines are mounted at /data, so unmount /data first and then format the disk).
Then stop the iptables/firewalld service on each node (if the firewall stays on, port 3260 must be opened; see the sketch below). SELinux must also be disabled!!
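If you prefer to keep the firewall running instead of disabling it, a minimal sketch of opening the iSCSI port with firewalld on CentOS 7 looks like this (run on every target node; the client needs no inbound port):
# firewall-cmd --permanent --add-port=3260/tcp       // allow inbound iSCSI traffic to tgtd
# firewall-cmd --reload                              // apply the permanent rule
# firewall-cmd --list-ports                          // verify that 3260/tcp is listed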

2) Target-side operations (ipsan-node01, ipsan-node02, ipsan-node03, ipsan-node04, ipsan-node05)

Disable the iptables/firewalld firewall
[root@ipsan-node01 ~]# systemctl stop firewalld.service
[root@ipsan-node01 ~]# systemctl disable firewalld.service
 
Disable SELinux
[root@ipsan-node01 ~]# setenforce 0
setenforce: SELinux is disabled
[root@ipsan-node01 ~]# getenforce
Disabled
[root@ipsan-node01 ~]# cat /etc/sysconfig/selinux
.......
SELINUX=disabled
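setenforce 0 only affects the running system (and prints "SELinux is disabled" when SELinux is already off). To make the setting survive a reboot, the config file must say disabled as well; a small sketch:
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux    // persist the setting across reboots
# grep ^SELINUX= /etc/sysconfig/selinux                              // should print SELINUX=disabled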
 
Unmount the data disk previously mounted at /data and reformat it
[root@ipsan-node01 ~]# fdisk -l
 
Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0008d207
 
   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    83886079    41942016   83  Linux
 
Disk /dev/vdb: 214.7 GB, 214748364800 bytes, 419430400 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xda936a6f
 
   Device Boot      Start         End      Blocks   Id  System
/dev/vdb1            2048   419430399   209714176   83  Linux
[root@ipsan-node01 ~]# umount /data
[root@ipsan-node01 ~]# mkfs.ext4 /dev/vdb1
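Since /data will no longer be mounted locally, it is also worth removing its old entry from /etc/fstab so the node does not try to mount the now-exported partition at boot. A sketch, assuming the original mount was recorded in /etc/fstab:
# grep /data /etc/fstab                  // check whether an old /data entry exists
# sed -i.bak '/\/data/d' /etc/fstab      // delete it, keeping a backup copy in /etc/fstab.bak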

Install and configure the iSCSI target service
[root@ipsan-node01 ~]# yum install -y scsi-target-utils

Enable and start the tgtd service; ss -tnl should then show port 3260 listening
[root@ipsan-node01 ~]# systemctl enable tgtd
[root@ipsan-node01 ~]# systemctl start tgtd
[root@ipsan-node01 ~]# systemctl status tgtd
[root@ipsan-node01 ~]# ss -tnl
State       Recv-Q Send-Q                     Local Address:Port                                    Peer Address:Port              
LISTEN      0      128                                    *:22                                                 *:*                  
LISTEN      0      128                                    *:3260                                               *:*                  
LISTEN      0      128                                   :::3260                                              :::* 

Using the target-side management tool tgtadm
Create a target with ID 1 and the name iqn.2018-02.com.node01.san:1
[root@ipsan-node01 ~]# tgtadm -L iscsi -o new -m target -t 1 -T iqn.2018-02.com.node01.san:1

List all targets
[root@ipsan-node01 ~]# tgtadm -L iscsi -o show -m target

Add a new LUN, numbered 1, to the target with ID 1; this LUN is what the initiator will use.
/dev/vdb1 is the path of the backing block device; a RAID or LVM device works as well. LUN 0 is reserved by the system.
[root@ipsan-node01 ~]# tgtadm -L iscsi -o new -m logicalunit -t 1 -l 1 -b /dev/vdb1
[root@ipsan-node01 ~]# tgtadm -L iscsi -o show -m target

Define a host-based access control list for the target; 192.168.10.0/24 is the range of initiator clients allowed to access this target
[root@ipsan-node01 ~]# tgtadm -L iscsi -o bind -m target -t 1 -I 192.168.10.0/24
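Note that tgtadm changes only live in the running tgtd process and are lost when tgtd restarts. If the command-line configuration should survive a restart, it can be dumped into the config file (a sketch; back up the existing file first):
# cp /etc/tgt/targets.conf /etc/tgt/targets.conf.orig   // keep the original file
# tgt-admin --dump > /etc/tgt/targets.conf              // write the current runtime config to disk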

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Tip:

If this node has another block device, /dev/vdb2, to add to the SAN storage, add another LUN, numbered 2, to the target with ID 1:
# tgtadm -L iscsi -o new -m logicalunit -t 1 -l 2 -b /dev/vdb2
# tgtadm -L iscsi -o show -m target

Remove the target's host-based access control list entry
# tgtadm -L iscsi -o unbind -m target -t 1 -I 192.168.10.0/24
# tgtadm -L iscsi -o show -m target

Delete the target (this removes the target together with its LUNs)
# tgtadm -L iscsi -o delete -m target -t 1
# tgtadm -L iscsi -o show -m target
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The other four nodes are configured in exactly the same way:
they likewise need iptables/firewalld and SELinux disabled,
and the data disk previously mounted at /data unmounted and reformatted.

The tgtadm commands on the other four target nodes are as follows:
[root@ipsan-node02 ~]# yum -y install scsi-target-utils
[root@ipsan-node02 ~]# systemctl enable tgtd
[root@ipsan-node02 ~]# systemctl start tgtd
[root@ipsan-node02 ~]# systemctl status tgtd
[root@ipsan-node02 ~]# ss -tnl
[root@ipsan-node02 ~]# tgtadm -L iscsi -o new -m target -t 1 -T iqn.2018-02.com.node02.san:1
[root@ipsan-node02 ~]# tgtadm -L iscsi -o show -m target
[root@ipsan-node02 ~]# tgtadm -L iscsi -o new -m logicalunit -t 1 -l 1 -b /dev/vdb1
[root@ipsan-node02 ~]# tgtadm -L iscsi -o show -m target
[root@ipsan-node02 ~]# tgtadm -L iscsi -o bind -m target -t 1 -I 192.168.10.0/24
[root@ipsan-node02 ~]# tgtadm -L iscsi -o show -m target

[root@ipsan-node03 ~]# yum -y install scsi-target-utils
[root@ipsan-node03 ~]# systemctl enable tgtd
[root@ipsan-node03 ~]# systemctl start tgtd
[root@ipsan-node03 ~]# systemctl status tgtd
[root@ipsan-node03 ~]# ss -tnl
[root@ipsan-node03 ~]# tgtadm -L iscsi -o new -m target -t 1 -T iqn.2018-02.com.node03.san:1
[root@ipsan-node03 ~]# tgtadm -L iscsi -o show -m target
[root@ipsan-node03 ~]# tgtadm -L iscsi -o new -m logicalunit -t 1 -l 1 -b /dev/vdb1
[root@ipsan-node03 ~]# tgtadm -L iscsi -o show -m target
[root@ipsan-node03 ~]# tgtadm -L iscsi -o bind -m target -t 1 -I 192.168.10.0/24
[root@ipsan-node03 ~]# tgtadm -L iscsi -o show -m target

[root@ipsan-node04 ~]# yum -y install scsi-target-utils
[root@ipsan-node04 ~]# systemctl enable tgtd
[root@ipsan-node04 ~]# systemctl start tgtd
[root@ipsan-node04 ~]# systemctl status tgtd
[root@ipsan-node04 ~]# ss -tnl
[root@ipsan-node04 ~]# tgtadm -L iscsi -o new -m target -t 1 -T iqn.2018-02.com.node04.san:1
[root@ipsan-node04 ~]# tgtadm -L iscsi -o show -m target
[root@ipsan-node04 ~]# tgtadm -L iscsi -o new -m logicalunit -t 1 -l 1 -b /dev/vdb1
[root@ipsan-node04 ~]# tgtadm -L iscsi -o show -m target
[root@ipsan-node04 ~]# tgtadm -L iscsi -o bind -m target -t 1 -I 192.168.10.0/24
[root@ipsan-node04 ~]# tgtadm -L iscsi -o show -m target

[root@ipsan-node05 ~]# yum -y install scsi-target-utils
[root@ipsan-node05 ~]# systemctl enable tgtd
[root@ipsan-node05 ~]# systemctl start tgtd
[root@ipsan-node05 ~]# systemctl status tgtd
[root@ipsan-node05 ~]# ss -tnl
[root@ipsan-node05 ~]# tgtadm -L iscsi -o new -m target -t 1 -T iqn.2018-02.com.node05.san:1
[root@ipsan-node05 ~]# tgtadm -L iscsi -o show -m target
[root@ipsan-node05 ~]# tgtadm -L iscsi -o new -m logicalunit -t 1 -l 1 -b /dev/vdb1
[root@ipsan-node05 ~]# tgtadm -L iscsi -o show -m target
[root@ipsan-node05 ~]# tgtadm -L iscsi -o bind -m target -t 1 -I 192.168.10.0/24
[root@ipsan-node05 ~]# tgtadm -L iscsi -o show -m target
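Since the five target nodes differ only in hostname and IQN, the whole sequence can also be pushed out from one admin host in a loop. A rough sketch, assuming password-less SSH as root and that the ipsan-node0X hostnames resolve:
for n in 01 02 03 04 05; do
    ssh root@ipsan-node$n "yum -y install scsi-target-utils && \
        systemctl enable tgtd && systemctl start tgtd && \
        tgtadm -L iscsi -o new -m target -t 1 -T iqn.2018-02.com.node$n.san:1 && \
        tgtadm -L iscsi -o new -m logicalunit -t 1 -l 1 -b /dev/vdb1 && \
        tgtadm -L iscsi -o bind -m target -t 1 -I 192.168.10.0/24"
done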

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Tip:

The five target nodes above were configured from the command line; alternatively, the target can be defined by editing a configuration file.
The method is as follows:
[root@ipsan-node01 ~]# cd /etc/tgt/
[root@ipsan-node01 tgt]# ls targets.conf
targets.conf
[root@ipsan-node01 tgt]# vim targets.conf       // add a definition like the following:
.......
<target iqn.2018-02.com.node01.san:1>          // target name (IQN)
    backing-store /dev/vdb1                    // the device/partition being shared
    initiator-address 192.168.10.0/24          // allowed client IP range (a single IP also works; for several specific IPs, add one initiator-address line per IP)
</target>

If this node has another block device, /dev/vdb2, to add to the SAN storage, add another target definition:
<target iqn.2018-02.com.node01.san:2>
    backing-store /dev/vdb2
    initiator-address 192.168.10.0/24
</target>

Restart tgtd
[root@ipsan-node01 tgt]# systemctl restart tgtd
[root@ipsan-node01 tgt]# tgtadm -L iscsi -o show -m target
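targets.conf can also require CHAP authentication instead of (or in addition to) the IP-based ACL. A sketch with placeholder credentials (backupuser/backuppass are made-up values; the client would then need the matching node.session.auth.* settings in /etc/iscsi/iscsid.conf):
<target iqn.2018-02.com.node01.san:1>
    backing-store /dev/vdb1
    initiator-address 192.168.10.0/24
    # placeholder CHAP username and password
    incominguser backupuser backuppass
</target>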

The other four nodes can be configured in the same way
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

3) Client-side operations (ipsan-node06)

Disable the iptables/firewalld firewall
[root@ipsan-node06 ~]# systemctl stop firewalld.service
[root@ipsan-node06 ~]# systemctl disable firewalld.service
  
Disable SELinux
[root@ipsan-node06 ~]# setenforce 0
setenforce: SELinux is disabled
[root@ipsan-node06 ~]# getenforce
Disabled
[root@ipsan-node06 ~]# cat /etc/sysconfig/selinux
.......
SELINUX=disabled
  
Unmount the data disk previously mounted at /data and reformat it
[root@ipsan-node06 ~]# fdisk -l
Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0008e3b4

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    83886079    41942016   83  Linux

Disk /dev/vdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xf450445d

   Device Boot      Start         End      Blocks   Id  System
/dev/vdb1            2048   209715199   104856576   83  Linux

[root@ipsan-node06 ~]# umount /data
[root@ipsan-node06 ~]# mkfs.ext4 /dev/vdb1

Install the iscsi-initiator-utils package
[root@ipsan-node06 ~]# yum install -y iscsi-initiator-utils  
[root@ipsan-node06 ~]# cat /etc/iscsi/initiatorname.iscsi  
[root@ipsan-node06 ~]# echo "InitiatorName=`iscsi-iname -p iqn.2018-02.com.node01`" > /etc/iscsi/initiatorname.iscsi  
[root@ipsan-node06 ~]# echo "InitiatorAlias=initiator1" >> /etc/iscsi/initiatorname.iscsi  
[root@ipsan-node06 ~]# cat /etc/iscsi/initiatorname.iscsi

[root@ipsan-node06 ~]# systemctl enable iscsi
[root@ipsan-node06 ~]# systemctl start iscsi
[root@ipsan-node06 ~]# systemctl status iscsi
[root@ipsan-node06 ~]# ss -tnl
State       Recv-Q Send-Q                     Local Address:Port                                    Peer Address:Port              
LISTEN      0      128                                    *:22                                                 *:*                  
LISTEN      0      128                        192.168.10.10:3128                                               *:*                  
LISTEN      0      1                              127.0.0.1:32000                                              *:*   

iscsiadm is a mode-based tool; the mode is selected with -m or --mode. Common modes are discovery, node, fw, session, host and iface.
With no extra options, discovery and node list all of their stored records; session shows all active sessions and connections, fw shows all boot firmware values,
host shows all iSCSI hosts, and iface shows all interface definitions under /var/lib/iscsi/ifaces.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Tip:
Common iscsiadm options:
iscsiadm -m discovery [ -d debug_level ] [ -P printlevel ] [ -I iface -t type -p ip:port [ -l ] ]   
iscsiadm -m node [ -d debug_level ] [ -P printlevel ] [ -L all,manual,automatic ] [ -U all,manual,automatic ] [ [ -T targetname -p ip:port -I iface ] [ -l | -u | -R | -s] ] [ [ -o operation ]   
  
  
-d, --debug=debug_level   print debug information, levels 0-8;
-l, --login
-t, --type=type  the type can be sendtargets (abbreviated st), slp, fw or isns; this option is only used in discovery mode, and currently only st, fw and isns are supported; st means each iSCSI target sends a list of available targets to the initiator;
-p, --portal=ip[:port]  specify the IP and port of the target service;
-m, --mode op  available modes are discovery, node, fw, host, iface and session;
-T, --targetname=targetname  specify the target name;
-u, --logout
-o, --op=OPERATION  specify the operation against the discoverydb database; it must be one of new, delete, update, show or nonpersistent;
-I, --interface=[iface]  specify the iSCSI interface to operate on; these interfaces are defined in /var/lib/iscsi/ifaces;
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Discover the SAN storage exported by each target node in turn (at this stage, discovering the devices is all that is needed)
[root@ipsan-node06 ~]# iscsiadm -m discovery -t st -p 192.168.10.17
Starting iscsid:
192.168.10.17:3260,1 iqn.2018-02.com.node01.san:1

[root@ipsan-node06 ~]# iscsiadm -m discovery -t st -p 192.168.10.18
Starting iscsid:
192.168.10.18:3260,1 iqn.2018-02.com.node02.san:1

[root@ipsan-node06 ~]# iscsiadm -m discovery -t st -p 192.168.10.5
Starting iscsid:
192.168.10.5:3260,1 iqn.2018-02.com.node03.san:1

[root@ipsan-node06 ~]# iscsiadm -m discovery -t st -p 192.168.10.6
Starting iscsid:
192.168.10.6:3260,1 iqn.2018-02.com.node04.san:1

[root@ipsan-node06 ~]# iscsiadm -m discovery -t st -p 192.168.10.20
Starting iscsid:
192.168.10.20:3260,1 iqn.2018-02.com.node05.san:1

[root@ipsan-node06 ~]# ls /var/lib/iscsi/send_targets/
192.168.10.10,3260  192.168.10.17,3260  192.168.10.18,3260  192.168.10.20,3260  192.168.10.5,3260  192.168.10.6,3260

Log in to each discovered SAN target in turn
[root@ipsan-node06 ~]# iscsiadm -m node -T iqn.2018-02.com.node01.san:1 -p 192.168.10.17 -l
[root@ipsan-node06 ~]# iscsiadm -m node -T iqn.2018-02.com.node02.san:1 -p 192.168.10.18 -l
[root@ipsan-node06 ~]# iscsiadm -m node -T iqn.2018-02.com.node03.san:1 -p 192.168.10.5 -l
[root@ipsan-node06 ~]# iscsiadm -m node -T iqn.2018-02.com.node04.san:1 -p 192.168.10.6 -l
[root@ipsan-node06 ~]# iscsiadm -m node -T iqn.2018-02.com.node05.san:1 -p 192.168.10.20 -l
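After the logins it is worth confirming that all five sessions are actually established and checking which local sd* device each target was mapped to:
# iscsiadm -m session                                                 // one line per active session; all 5 targets should be listed
# iscsiadm -m session -P 3 | grep -E "Target:|Attached scsi disk"     // show which sdX device belongs to which target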

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Tip:

Disconnecting from the iSCSI targets on the client
To stop iSCSI from automatically reconnecting to a target at boot or when the iscsi service restarts, the target's record must be removed completely on the client.
Log out of the target session, i.e. detach the iSCSI device (for example, the target on ipsan-node01):
# iscsiadm -m node -T iqn.2018-02.com.node01.san:1 -p 192.168.10.17 -u

Delete the target record (for example, the record for the ipsan-node01 target):
# iscsiadm -m node -T iqn.2018-02.com.node01.san:1 -p 192.168.10.17 -o delete

Note in particular:
once the discovered target records have been deleted on the client, no automatic reconnection will take place after a reboot or a service restart.
# ls /var/lib/iscsi/send_targets/
# ls /var/lib/iscsi/
# rm -rf /var/lib/iscsi/*
# ls /var/lib/iscsi/
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Verify that the target devices are visible on the client
[root@ipsan-node06 ~]# fdisk -l /dev/sd[a-z]          // or simply run "fdisk -l"

Disk /dev/vda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0008e3b4

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048    83886079    41942016   83  Linux

Disk /dev/vdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xf450445d

   Device Boot      Start         End      Blocks   Id  System
/dev/vdb1            2048   209715199   104856576   83  Linux

Disk /dev/sda: 214.7 GB, 214747316224 bytes, 419428352 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x149fdfec

Disk /dev/sdb: 536.9 GB, 536869863424 bytes, 1048573952 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xc74ea52c

Disk /dev/sdc: 536.9 GB, 536869863424 bytes, 1048573952 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x6663eaa6

Disk /dev/sdd: 214.7 GB, 214747316224 bytes, 419428352 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x46738964

Disk /dev/sde: 107.4 GB, 107373133824 bytes, 209713152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xe78614cc

As the output above shows, the client node ipsan-node06 can now see the storage devices exported by the other five nodes.
Next, partition each discovered device in turn
[root@ipsan-node06 ~]# fdisk /dev/sda
Enter p -> n -> p -> Enter -> Enter -> Enter -> w in sequence
[root@ipsan-node06 ~]# partprobe 

[root@ipsan-node06 ~]# fdisk /dev/sdb
Enter p -> n -> p -> Enter -> Enter -> Enter -> w in sequence
[root@ipsan-node06 ~]# partprobe 

[root@ipsan-node06 ~]# fdisk /dev/sdc
Enter p -> n -> p -> Enter -> Enter -> Enter -> w in sequence
[root@ipsan-node06 ~]# partprobe 

[root@ipsan-node06 ~]# fdisk /dev/sdd
Enter p -> n -> p -> Enter -> Enter -> Enter -> w in sequence
[root@ipsan-node06 ~]# partprobe 

[root@ipsan-node06 ~]# fdisk /dev/sde
Enter p -> n -> p -> Enter -> Enter -> Enter -> w in sequence
[root@ipsan-node06 ~]# partprobe 
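The same partitioning can be scripted non-interactively with parted instead of answering fdisk prompts by hand. A sketch (this wipes any existing partition table on the listed disks, so double-check the device names first):
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    parted -s "$d" mklabel msdos mkpart primary 1MiB 100%   # one full-size primary partition per disk
done
partprobe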

Check the device partitions again
[root@ipsan-node06 backup]# fdisk -l /dev/sd[a-z]

Disk /dev/sda: 214.7 GB, 214747316224 bytes, 419428352 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x149fdfec

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   419428351   209713152   83  Linux

Disk /dev/sdb: 536.9 GB, 536869863424 bytes, 1048573952 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xc74ea52c

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048  1048573951   524285952   83  Linux

Disk /dev/sdc: 536.9 GB, 536869863424 bytes, 1048573952 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x6663eaa6

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048  1048573951   524285952   83  Linux

Disk /dev/sdd: 214.7 GB, 214747316224 bytes, 419428352 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x46738964

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048   419428351   209713152   83  Linux

Disk /dev/sde: 107.4 GB, 107373133824 bytes, 209713152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xe78614cc

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1            2048   209713151   104855552   83  Linux

On the client node ipsan-node06, create an LVM logical volume from its own data disk /dev/vdb1 and the five SAN devices discovered from the target nodes above.
Create the PVs (if pvcreate and the other LVM commands are missing, install them with "yum install -y lvm2")
[root@ipsan-node06 ~]# pvcreate /dev/{vdb1,sda1,sdb1,sdc1,sdd1,sde1}
[root@ipsan-node06 ~]# pvs                   // or use "pvdisplay"
  PV         VG  Fmt  Attr PSize    PFree 
  /dev/sda1  vg0 lvm2 a--  <200.00g     0 
  /dev/sdb1  vg0 lvm2 a--  <500.00g     0 
  /dev/sdc1  vg0 lvm2 a--  <500.00g     0 
  /dev/sdd1  vg0 lvm2 a--  <200.00g     0 
  /dev/sde1  vg0 lvm2 a--  <100.00g <2.54g
  /dev/vdb1  vg0 lvm2 a--  <100.00g     0

Create the VG
[root@ipsan-node06 ~]# vgcreate vg0 /dev/{vdb1,sda1,sdb1,sdc1,sdd1,sde1}
[root@ipsan-node06 ~]# vgs                // or use "vgdisplay"
  VG  #PV #LV #SN Attr   VSize VFree 
  vg0   6   1   0 wz--n- 1.56t <2.54g

Create the LV
[root@ipsan-node06 ~]# lvcreate -L +1.56t -n lv01 vg0
[root@ipsan-node06 ~]# lvs              // or use "lvdisplay"
  LV   VG  Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv01 vg0 -wi-ao---- 1.56t
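Sizing the LV with an explicit -L +1.56t can fail or leave space unused because of extent rounding (the pvs/vgs output above indeed shows roughly 2.5G left free). An alternative sketch that always consumes whatever the VG has:
# lvcreate -l 100%FREE -n lv01 vg0      // allocate all free extents instead of a fixed size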

Format the LVM logical volume
[root@ipsan-node06 ~]# mkfs.ext4 /dev/vg0/lv01 

Mount the LVM logical volume
[root@ipsan-node06 ~]# mkdir /backup
[root@ipsan-node06 ~]# mount /dev/vg0/lv01 /backup
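If the /backup mount should come back automatically after a reboot, add it to /etc/fstab with the _netdev option so the mount waits for the network and the iSCSI sessions instead of failing early at boot. A sketch:
# echo "/dev/vg0/lv01  /backup  ext4  defaults,_netdev  0 0" >> /etc/fstab
# mount -a      // verify that the fstab entry mounts cleanly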

Check that the logical volume is mounted
[root@ipsan-node06 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda1              40G  3.4G   34G  10% /
devtmpfs              7.8G   78M  7.7G   1% /dev
tmpfs                 7.8G   12K  7.8G   1% /dev/shm
tmpfs                 7.8G  440K  7.8G   1% /run
tmpfs                 7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/vg0-lv01  1.6T   60k  1.6T   0% /backup

Check the device layout with lsblk
[root@ipsan-node06 ~]# lsblk
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda            8:0    0  200G  0 disk 
└─sda1         8:1    0  200G  0 part 
  └─vg0-lv01 252:0    0  1.6T  0 lvm  /backup
sdb            8:16   0  500G  0 disk 
└─sdb1         8:17   0  500G  0 part 
  └─vg0-lv01 252:0    0  1.6T  0 lvm  /backup
sdc            8:32   0  500G  0 disk 
└─sdc1         8:33   0  500G  0 part 
  └─vg0-lv01 252:0    0  1.6T  0 lvm  /backup
sdd            8:48   0  200G  0 disk 
└─sdd1         8:49   0  200G  0 part 
  └─vg0-lv01 252:0    0  1.6T  0 lvm  /backup
sde            8:64   0  100G  0 disk 
└─sde1         8:65   0  100G  0 part 
  └─vg0-lv01 252:0    0  1.6T  0 lvm  /backup
sr0           11:0    1 1024M  0 rom  
vda          253:0    0   40G  0 disk 
└─vda1       253:1    0   40G  0 part /
vdb          253:16   0  100G  0 disk 
└─vdb1       253:17   0  100G  0 part 
  └─vg0-lv01 252:0    0  1.6T  0 lvm  /backup