Configuring the iSCSI service — deploying Apache + iSCSI on a high-availability (HA) cluster

Host environment: redhat 6.5, 64-bit

Experiment environment:

    Server 1:     ip 172.25.29.1  hostname: server1.example.com  (iscsi, apache)

    Server 2:     ip 172.25.29.2  hostname: server2.example.com  (iscsi, apache)

    Management 1: ip 172.25.29.3  hostname: server3.example.com  (scsi target)

Firewall status: disabled

   A previous post already covered building the high-availability cluster, so that is not repeated here. This time, Apache and iSCSI are deployed as example services to test the cluster that was built.

  Before setting up the services, make sure the httpd service is installed on server 1 and server 2.
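For example, on both nodes (assuming the default yum repositories are configured):

[root@server1 ~]# yum install httpd -y    #install Apache; do not start it, the cluster will start it later
[root@server2 ~]# yum install httpd -y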


1. Install and start the SCSI target (management node)

[root@server3 ~]# yum install scsi* -y    #install the scsi target packages

[root@server3 ~]# vim /etc/tgt/targets.conf  #edit the configuration file

<target iqn.2008-09.com.example:server.target1>
    backing-store /dev/vdb              #name of the shared disk
    initiator-address 172.25.29.1       #addresses allowed to connect
    initiator-address 172.25.29.2
</target>

[root@server3 ~]# /etc/init.d/tgtd start         #start tgtd

Starting SCSI target daemon:                               [  OK  ]
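Optionally, to have the target come back after a reboot, the service can also be enabled at boot (an extra step, not part of the original walkthrough):

[root@server3 ~]# chkconfig tgtd on    #start tgtd automatically in the normal runlevels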

[root@server3 ~]# tgt-admin -s        #show the target

Target 1: iqn.2008-09.com.example:server.target1

   System information:

       Driver: iscsi

       State: ready

   I_T nexus information:

   LUN information:

       LUN: 0

           Type: controller

           SCSI ID: IET     00010000

           SCSI SN: beaf10

           Size: 0 MB, Block size: 1

           Online: Yes

           Removable media: No

           Prevent removal: No

           Readonly: No

           Backing store type: null

           Backing store path: None

           Backing store flags:

       LUN: 1

           Type: disk

           SCSI ID: IET     00010001

           SCSI SN: beaf11

           Size: 4295 MB, Block size: 512

           Online: Yes

           Removable media: No

           Prevent removal: No

           Readonly: No

           Backing store type: rdwr

           Backing store path: /dev/sda     #the shared disk

           Backing store flags:

   Account information:

   ACL information:

        172.25.29.1                  #IP addresses allowed by the ACL

       172.25.29.2

 

 

2. Install and start iSCSI, and turn the shared disk into a logical volume (server 1)

[root@server1 ~]# yum install iscsi* -y            #install the iscsi initiator packages

[root@server1 ~]# iscsiadm -m discovery -t st -p 172.25.29.3    #discover the target

Starting iscsid:                                           [  OK  ]

172.25.29.3:3260,1 iqn.2008-09.com.example:server.target1

[root@server1 ~]# iscsiadm -m node -l

Logging in to [iface: default, target: iqn.2008-09.com.example:server.target1, portal: 172.25.29.3,3260] (multiple)

Login to [iface: default, target: iqn.2008-09.com.example:server.target1, portal: 172.25.29.3,3260] successful.
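After the login, the exported LUN shows up as a new local disk; in this setup it appears as /dev/sda, which is worth confirming before it is used for LVM (a quick check, output omitted):

[root@server1 ~]# cat /proc/partitions    #the new 4 GB disk should be listed as sda
[root@server1 ~]# fdisk -l /dev/sda       #show its size and confirm it is the shared disk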

[root@server1 ~]# pvcreate /dev/sda   #create a physical volume on the shared disk

 Physical volume "/dev/sda" successfully created

[root@server1 ~]# pvs

 PV         VG       Fmt Attr PSize PFree

 /dev/sda            lvm2 a-- 4.00g 4.00g

 /dev/vda2  VolGroup lvm2 a--  8.51g   0

 /dev/vdb1  VolGroup lvm2 a--  8.00g   0

[root@server1 ~]# vgcreate clustervg /dev/sda    #create a volume group

 Clustered volume group "clustervg" successfully created
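The -l 1023 used in the next command is a count of physical extents, not a size; with the default 4 MiB extent size, 1023 extents come to roughly 4 GiB, i.e. the whole shared disk. vgdisplay shows how many free extents the new VG really has (a quick check; the exact numbers depend on the disk):

[root@server1 ~]# vgdisplay clustervg    #look at "PE Size", "Total PE" and "Free  PE / Size"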

[root@server1 ~]# lvcreate -l 1023 -n demo clustervg    #create the logical volume

 Logical volume "demo" created

[root@server1 ~]# lvs

 LV      VG        Attr       LSize  Pool Origin Data%  Move LogCpy%Sync Convert

 lv_root VolGroup  -wi-ao----  15.61g                                             

 lv_swap VolGroup  -wi-ao---- 920.00m

 demo    clustervg -wi-a-----   4.00g                                            

[root@server1 ~]# mkfs.ext4 /dev/clustervg/demo    #format as ext4

mke2fs 1.41.12 (17-May-2010)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=0 blocks, Stripe width=0 blocks

262144 inodes, 1047552 blocks

52377 blocks (5.00%) reserved for the superuser

First data block=0

Maximum filesystem blocks=1073741824

32 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

        32768, 98304, 163840, 229376, 294912, 819200, 884736

 

Writing inode tables: done                           

Creating journal (16384 blocks): done

Writing superblocks and filesystem accounting information: done

 

This filesystem will be automatically checked every 32 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@server1 /]# mount /dev/clustervg/demo /mnt/    #mount the volume on /mnt

[root@server1 mnt]# vim index.html          #write a simple test page

server1

[root@server1 /]# umount /mnt/           #unmount


#server 2

[root@server2 ~]# yum install iscsi* -y            #install the iscsi initiator packages

[root@server2 ~]# iscsiadm -m discovery -t st -p 172.25.29.3    #discover the target

Starting iscsid:                                           [  OK  ]

172.25.29.3:3260,1 iqn.2008-09.com.example:server.target1

[root@server2 ~]# iscsiadm -m node -l

Logging in to [iface: default, target: iqn.2008-09.com.example:server.target1, portal: 172.25.29.3,3260] (multiple)

Login to [iface: default, target: iqn.2008-09.com.example:server.target1, portal: 172.25.29.3,3260] successful.

 

Nothing needs to be changed on server 2: the LVM layout created on server 1 is on the shared disk, so it shows up on server 2 automatically. You can verify this with the usual commands, for example:

[root@server2 ~]# lvs

 LV      VG        Attr       LSize  Pool Origin Data%  Move LogCpy%Sync Convert

 lv_root VolGroup  -wi-ao----   7.61g                                            

 lv_swap VolGroup  -wi-ao---- 920.00m

 demo    clustervg -wi-a-----   4.00g              

pvs, vgs, and so on can be checked in the same way.

[root@server3 ~]# /etc/init.d/luci start    #start luci

Starting saslauthd:                                        [  OK  ]

Start luci...                                             [  OK  ]

Point your web browser to https://server3.example.com:8084 (or equivalent) to access luci


3. Add services to the cluster (active/standby), using Apache and iSCSI as the example

1. Add the service (active/standby hot backup is used here)

Log in to https://server3.example.com:8084

(screenshot)


Select Failover Domains and fill in a Name, as shown. The three boxes checked at the top mean: when a node fails, the service can move to another node; the service only runs on the specified nodes; and when a failed node recovers, the service does not move back to it. Ticking Member below means the service runs on the server1.example.com and server2.example.com nodes; the lower the Priority value, the higher the priority. Click Create.

(screenshot)
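Behind this form, luci writes the failover domain into /etc/cluster/cluster.conf on every node. The resulting section looks roughly like the sketch below; the domain name webfail and the priority values are placeholders, only the node names and the three flags come from the choices above:

<failoverdomains>
    <failoverdomain name="webfail" ordered="1" restricted="1" nofailback="1">
        <failoverdomainnode name="server1.example.com" priority="1"/>
        <failoverdomainnode name="server2.example.com" priority="2"/>
    </failoverdomain>
</failoverdomains>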

Select Resources, click Add, and add an IP Address resource as shown. The IP added must be one that is not already in use; 24 is the netmask length and 10 is the wait time in seconds. Click Submit.

(screenshot)

Add a Script resource in the same way: httpd is the name of the service and /etc/init.d/httpd is the path to its init script. Click Submit.

(screenshot)

Then add a Resource of type Filesystem, as shown.

(screenshot)

Select Service Groups and click Add, as shown. apache is the name of the service; the two checkboxes below control starting the service automatically and running it exclusively. Click Add Resource.

(screenshot)

Select 172.25.29.100/24, then click Add Resource again, as shown.

(screenshot)

Select webdata, click Add Resource and select httpd, then click Submit to finish.

(screenshot)
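Put together, the service group ends up in /etc/cluster/cluster.conf roughly as sketched below. The resource parameters (device, mount point, fstype) are assumptions based on the rest of this walkthrough, and the failover domain name webfail is the placeholder used above; the nesting order matters, since the IP is brought up first, then the filesystem, then the httpd script:

<rm>
    <resources>
        <ip address="172.25.29.100/24" sleeptime="10"/>
        <fs name="webdata" device="/dev/clustervg/demo" mountpoint="/var/www/html" fstype="ext4"/>
        <script name="httpd" file="/etc/init.d/httpd"/>
    </resources>
    <service name="apache" domain="webfail" autostart="1">
        <ip ref="172.25.29.100/24"/>
        <fs ref="webdata"/>
        <script ref="httpd"/>
    </service>
</rm>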

 

2. Testing

Before testing, httpd must be installed on server1 and server2. Note: do not start the httpd service yourself; it is started automatically when the service is accessed (if httpd is already running before the access, an error is reported).
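A quick way to check that neither node will start httpd on its own (the cluster's script resource is what starts it):

[root@server1 ~]# /etc/init.d/httpd status    #should report that httpd is stopped
[root@server1 ~]# chkconfig httpd off         #keep it out of the normal runlevels; repeat on server2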

Test 172.25.29.100 (the VIP)

(screenshot)
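The same check works from the command line on any client that can reach the VIP; the response should be the test page written earlier:

curl http://172.25.29.100    #should print "server1" while server1 owns the service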

[root@server1 ~]# ip addr show   #check the addresses

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 52:54:00:94:2f:4f brd ff:ff:ff:ff:ff:ff

    inet 172.25.29.1/24 brd 172.25.29.255 scope global eth0

    inet 172.25.29.100/24 scope global secondary eth0        #the VIP 172.25.29.100 was added automatically

    inet6 fe80::5054:ff:fe94:2f4f/64 scope link

       valid_lft forever preferred_lft forever

[root@server1 ~]# clustat      #check the service

Cluster Status for wen @ Tue Sep 27 18:12:38 2016

Member Status: Quorate

 

 Member Name                             ID   Status

 ------ ----                             ---- ------

 server1.example.com                         1 Online, Local, rgmanager

 server2.example.com                         2 Online, rgmanager

 

 Service Name                   Owner (Last)                   State        

 ------- ----                   ----- ------                   -----        

 service:apache                 server1.example.com            started    #server1 is running the service

[root@server1 ~]# /etc/init.d/network stop    #once the network goes down, fencing powers server1 off and reboots it; the service fails over to server2

 

Test again

(screenshot)

[root@server2 ~]# ip addr show      #check the addresses

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 52:54:00:23:81:98 brd ff:ff:ff:ff:ff:ff

    inet 172.25.29.2/24 brd 172.25.29.255 scope global eth0

    inet 172.25.29.100/24 scope global secondary eth0     #the VIP was added automatically

    inet6 fe80::5054:ff:fe23:8198/64 scope link

       valid_lft forever preferred_lft forever

 

Appendix:

  Convert the iSCSI volume to the GFS2 format, then extend the LVM volume, as follows:

[root@server1 ~]# clustat       #check the service

Cluster Status for wen @ Tue Sep 27 18:22:20 2016

Member Status: Quorate

 Member Name                             ID   Status

 ------ ----                             ---- ------

 server1.example.com                         1 Online, Local, rgmanager

 server2.example.com                         2 Online, rgmanager

 Service Name                   Owner (Last)                   State        

 ------- ----                   ----- ------                   ------        

 service:apache                 server2.example.com            started    #server2 is now running the service

[root@server1 /]# clusvcadm -d apache    #disable the apache service

Local machine disabling service:apache...Success

[root@server1 /]# lvremove /dev/clustervg/demo  #remove the logical volume

Do you really want to remove active clustered logical volume demo? [y/n]: y

 Logical volume "demo" successfully removed

[root@server1 /]# lvcreate -L 2g -n demo clustervg   #recreate the volume with a new size

 Logical volume "demo" created

[root@server1 /]# mkfs.gfs2 -p lock_dlm -t wen:mygfs2 -j 3 /dev/clustervg/demo   #format as GFS2 (lock table wen:mygfs2; -j 3 creates three journals)

This will destroy any data on /dev/clustervg/demo.

It appears to contain: symbolic link to `../dm-2'

 

Are you sure you want to proceed? [y/n] y

 

Device:                    /dev/clustervg/demo

Blocksize:                 4096

Device Size                2.00 GB (524288 blocks)

Filesystem Size:           2.00 GB (524288 blocks)

Journals:                  3

Resource Groups:           8

Locking Protocol:          "lock_dlm"

Lock Table:                "wen:mygfs2"

UUID:                     10486879-ea8c-3244-a2cd-00297f342973

[root@server1 /]# mount /dev/clustervg/demo /mnt/     #mount the volume on /mnt

[root@server1 mnt]# vim index.html                #write a simple test page

www.server.example.com

[root@server2 /]# mount /dev/clustervg/demo /mnt/           #mount the volume on /mnt (server 2)

[root@server2 /]# cd /mnt/

[root@server2 mnt]# ls

index.html

[root@server2 mnt]# cat index.html

www.server.example.com

[root@server2 mnt]# vim index.html

[root@server2 mnt]# cat index.html            #the test page after editing it

www.server2.example.com

 

[root@server1 mnt]# cat index.html     #check on server 1: the change is synced in real time

www.server2.example.com

[root@server1 mnt]# cd ..

[root@server1 /]# umount /mnt/

[root@server1 /]# vim /etc/fstab    #mount it automatically at boot

UUID="10486879-ea8c-3244-a2cd-00297f342973"/var/www/html gfs2 _netdev 0 0

[root@server1 /]# mount -a    #mount everything listed in fstab

[root@server1 /]# df                    #check

Filesystem                   1K-blocks     Used Available Use% Mounted on

/dev/mapper/VolGroup-lv_root  16106940 10258892   5031528 68% /

tmpfs                           961188    31816   929372   4% /dev/shm

/dev/vda1                       495844    33457   436787   8% /boot

/dev/mapper/clustervg-demo     2096912  397152   1699760  19% /var/www/html      #now mounted

[root@server1 /]# clusvcadm -e apache    #enable the apache service

Local machine trying to enable service:apache...Service is already running

Before testing, remove the Filesystem resource that was added to the Service Group above, then run the test (if it is not removed, the system reports an error).

Test

(screenshot)

 

[root@server1 /]# lvextend -l +511 /dev/clustervg/demo   #extend the logical volume (the block device)

 Extending logical volume demo to 4.00 GiB

 Logical volume demo successfully resized

[root@server1 /]# gfs2_grow /dev/clustervg/demo      #grow the GFS2 filesystem to fill the extended volume

FS: Mount Point: /var/www/html

FS: Device:      /dev/dm-2

FS: Size:        524288 (0x80000)

FS: RG size:     65533 (0xfffd)

DEV: Size:       1047552 (0xffc00)

The file system grew by 2044MB.

gfs2_grow complete.

[root@server1 /]# df -lh        #check the size

Filesystem                    Size  Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root   16G 9.8G  4.8G  68% /

tmpfs                         939M   32M 908M   4% /dev/shm

/dev/vda1                     485M   33M 427M   8% /boot

/dev/mapper/clustervg-demo    3.8G 388M  3.4G  11% /var/www/html    #now 3.8G
