Openfiler Study Notes

 

Openfiler is developed on top of rPath Linux and is distributed as a standalone Linux operating system. It is an excellent open-source, free storage-management operating system: storage disks are administered through a web interface, and it supports the popular IP-SAN and NAS network storage technologies as well as the iSCSI (Internet Small Computer System Interface), NFS, SMB/CIFS, and FTP protocols.

1. Installing Openfiler

First download the Openfiler software and install it in our virtual machine. Openfiler is a Linux-based storage platform, so the installation procedure is the same as for any ordinary Linux system.

Download: http://www.openfiler.com/community/download/

The first installation screen:

 

This step configures the disks. I chose manual configuration and gave Openfiler a 40 GB disk: 2 GB for the system, 1 GB for swap, and the remaining space left unallocated.

The screen after installation completes:

 

Here we are told to administer the system through the web interface, and the address is given:

https://192.168.1.1:446/. The default account is openfiler with password password; the password can be changed after logging in.

 

2. Storage-side (target) configuration

Openfiler configuration can follow these Oracle articles:

http://www.oracle.com/technology/global/cn/pub/articles/hunter_rac10gr2_iscsi.html#9

http://www.oracle.com/technetwork/cn/articles/hunter-rac11gr2-iscsi-083834-zhs.html#11

2.1 Start the iSCSI target service

Enable the iSCSI target under Services. Once enabled, the service starts automatically on subsequent reboots.

 

2.2 Configure the iSCSI initiator access IP

Only configured IPs are allowed to access the Openfiler storage. The network access configuration is at the bottom of the System tab; just enter the IP there. Note that the netmask used here is 255.255.255.255.

 

2.3 Create the volume devices

Now we configure the shared device. First turn the unallocated space into an extended partition. It must be an extended partition:

[root@san ~]# fdisk /dev/sda

The number of cylinders for this disk is set to 5221.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n

Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 3

First cylinder (383-5221, default 383):

Using default value 383

Last cylinder or +size or +sizeM or +sizeK (383-5221, default 5221):

Using default value 5221

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.

The kernel still uses the old table.

The new table will be used at the next reboot.

Syncing disks.

[root@san ~]# fdisk -l

Disk /dev/sda: 42.9 GB, 42949672960 bytes

255 heads, 63 sectors/track, 5221 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start       End      Blocks   Id  System
/dev/sda1   *           1       255     2048256   83  Linux
/dev/sda2             256       382     1020127+  82  Linux swap / Solaris
/dev/sda3             383      5221    38869267+   5  Extended
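The interactive dialogue above can also be scripted. The helper below is a sketch (the function name is mine): it prints the same answer sequence — new partition, extended, partition number, default first and last cylinders, write — so it can be piped into fdisk on the Openfiler host.

```shell
# make_extended_part NUM: print the fdisk answer sequence used in the
# dialogue above (n -> e -> NUM -> default cylinders -> w)
make_extended_part() {
    printf 'n\ne\n%s\n\n\nw\n' "$1"
}

# As root on the Openfiler host (destructive; device assumed from above):
# make_extended_part 3 | fdisk /dev/sda
```

As the transcript shows, the kernel may keep using the old table until a reboot (or a partition-table re-read) when the disk is busy.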

Once the partition is created, the disk shows up in the Openfiler web interface; if it is not partitioned, or is partitioned with the wrong type, it cannot be edited.

 

Scroll down the page to the partition-creation section:

 

Create a single partition from all of the space; this becomes a physical volume. The window then shows:

 

After creating it, select Volume Groups.

 

Then enter the VG name, select the corresponding device, and confirm.

 

At this point we have created a volume group named san. But what we actually use in our environment are volumes, so we still need to create a volume inside this volume group.

Click the Add Volume option:

 

Scroll down this page to see the volume-creation options:

 

Here I put all of the space into a single logical volume.

 

Once the logical volume is created, we need to create an iSCSI target and map the logical volume to it, so that servers can connect through the target. Click iSCSI Target and create a Target IQN.

 

Select LUN Mapping and map the logical volume to the iSCSI target.

 

Configure the Network ACL that controls access to the logical volume; the IPs here are the ones defined earlier under the System tab. Multiple IPs can be listed, and you can control which IP may access which logical volume, so several clients can share the storage without interfering with one another.

 

At this point the storage (target) side is fully configured. We created a logical volume and mapped it to an iSCSI target; client servers connect through that target.

The Openfiler target configuration file is /etc/ietd.conf:

[root@san etc]# cat /etc/ietd.conf

##### WARNING!!! - This configuration file generated by Openfiler. DO NOT MANUALLY EDIT. #####

Target iqn.2006-01.com.san
        HeaderDigest None
        DataDigest None
        MaxConnections 1
        InitialR2T Yes
        ImmediateData No
        MaxRecvDataSegmentLength 131072
        MaxXmitDataSegmentLength 131072
        MaxBurstLength 262144
        FirstBurstLength 262144
        DefaultTime2Wait 2
        DefaultTime2Retain 20
        MaxOutstandingR2T 8
        DataPDUInOrder Yes
        DataSequenceInOrder Yes
        ErrorRecoveryLevel 0
        Lun 0 Path=/dev/san/racshare,Type=blockio,ScsiSN=4YMdbG-SGED-jqHA,ScsiId=4YMdbG-SGED-jqHA,IOMode=wt

[root@san etc]#
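Since ietd.conf is machine-generated (note the warning banner), it is better to read fields out of it than to edit it. A small sketch (the helper name is mine) that pulls the backing device out of the Lun line shown above:

```shell
# lun_path: print the Path= value from an ietd.conf "Lun 0" line on stdin
lun_path() {
    sed -n 's/.*Lun 0 Path=\([^,]*\),.*/\1/p'
}

# On the Openfiler host this would print /dev/san/racshare:
# lun_path < /etc/ietd.conf
```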

 

 

 

Make iSCSI Target(s) Available to Client(s)

Every time a new logical volume is added, you will need to restart the associated service on the Openfiler server. In my case, I created a new

iSCSI logical volume so I needed to restart the iSCSI target (iscsi-target) service. This will make the new iSCSI target available to all

clients on the network who have privileges to access it.

To restart the iSCSI target service, use the Openfiler Storage Control Center and navigate to [Services] / [Enable/Disable]. The iSCSI target

service should already be enabled (several sections back). If so, disable the service then enable it again. (See Figure 2)

The same task can be achieved through an SSH session on the Openfiler server:

[root@openfiler1 ~]# service iscsi-target restart

Stopping iSCSI target service: [ OK ]

Starting iSCSI target service: [ OK ]

 

Configure iSCSI Initiator and New Volume

An iSCSI client can be any system (Linux, Unix, MS Windows, Apple Mac, etc.) for which iSCSI support (a driver) is available. In this article, the

client is an Oracle database server (linux3) running CentOS 5.

In this section I will be configuring the iSCSI software initiator on the Oracle database server linux3. Red Hat Enterprise Linux (and CentOS 5)

includes the Open-iSCSI software initiator which can be found in the iscsi-initiator-utils RPM.

This is a change from previous versions of RHEL (4.x) which included the Linux iscsi-sfnet

software driver developed as part of the Linux-iSCSI Project.

All iSCSI management tasks like discovery and logins will use the command-line interface iscsiadm which is included with Open-iSCSI.

The iSCSI software initiator on linux3 will be configured to automatically login to the network storage server (openfiler1) and discover the

iSCSI volume created in the previous section. I will then go through the steps of creating a persistent local SCSI device name (i.e.

/dev/iscsi/linux3-data-1) for the iSCSI target name discovered using udev. Having a consistent local SCSI device name, and knowing which iSCSI target it maps to, is highly recommended in order to distinguish between multiple SCSI devices. Before I can do any of this, however, I

must first install the iSCSI initiator software!

Connecting to an iSCSI Target with Open-iSCSI Initiator using Linux

http://www.idevelopment.info/data/Unix/Linux/LINUX_ConnectingToAniSCSITargetWithOpen-iSCSIInitiatorUsingLinux.shtml[2015/3/31 9:11:34]

Installing the iSCSI (Initiator) Service

With Red Hat Enterprise Linux 5 (and CentOS 5), the Open-iSCSI iSCSI software initiator does not get installed by default. The software is

included in the iscsi-initiator-utils package which can be found on CD #1. To determine if this package is installed (which in most

cases, it will not be), perform the following on the client node (linux3):

[root@linux3 ~]# rpm -qa | grep iscsi-initiator-utils

If the iscsi-initiator-utils package is not installed, load CD #1 into the machine and perform the following:

[root@linux3 ~]# mount -r /dev/cdrom /media/cdrom

[root@linux3 ~]# cd /media/cdrom/CentOS

[root@linux3 ~]# rpm -Uvh iscsi-initiator-utils-6.2.0.865-0.8.el5.i386.rpm

[root@linux3 ~]# cd /

[root@linux3 ~]# eject

Configure the iSCSI (Initiator) Service

After verifying that the iscsi-initiator-utils package is installed, start the iscsid service and enable it to automatically start when the system boots. I will also configure the iscsi service, which logs in to any iSCSI targets needed at system startup, to start automatically as well.

[root@linux3 ~]# service iscsid start

Turning off network shutdown. Starting iSCSI daemon: [ OK ]

[ OK ]

[root@linux3 ~]# chkconfig iscsid on

[root@linux3 ~]# chkconfig iscsi on

Now that the iSCSI service is started, use the iscsiadm command-line interface to discover all available targets on the network storage server:

[root@linux3 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1

If no iSCSI targets can be discovered, edit /etc/initiators.deny on the Openfiler server and comment out all of its lines.
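When several targets are published, the discovery output (one "portal,tpgt iqn" line per target) can be parsed and logged in to in a loop. A sketch, with the parsing separated into a helper (name is mine) so it can be tested offline:

```shell
# parse_portals: turn "portal,tpgt iqn" discovery lines into "portal iqn"
parse_portals() {
    awk '{ split($1, a, ","); print a[1], $2 }'
}

# On linux3, discovery plus login for every reported target would be:
# iscsiadm -m discovery -t sendtargets -p openfiler1 | parse_portals | \
# while read -r portal iqn; do
#     iscsiadm -m node -T "$iqn" -p "$portal" --login
# done
```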

 

Manually Login to iSCSI Target(s)

At this point the iSCSI initiator service has been started and the client node was able to discover the available target(s) from the network storage

server. The next step is to manually login to the available target(s) which can be done using the iscsiadm command-line interface. Note that I

had to specify the IP address and not the host name of the network storage server (openfiler1-san) - I believe this is required given the

discovery (above) shows the targets using the IP address.

[root@linux3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 -p

192.168.2.195 --login

Logging in to [iface: default, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal:

192.168.2.195,3260]

Login to [iface: default, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal:

192.168.2.195,3260]: successful

Configure Automatic Login

The next step is to make certain the client will automatically login to the target(s) listed above when the machine is booted (or the iSCSI initiator

service is started/restarted):

[root@linux3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 -p

192.168.2.195 --op update -n node.startup -v automatic
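The node records printed by iscsiadm use "name = value" lines, so the change can be verified by filtering for node.startup. A sketch (the helper name is mine):

```shell
# get_param NAME: print NAME's value from "name = value" record lines on stdin
get_param() {
    awk -v k="$1" '$1 == k && $2 == "=" { print $3 }'
}

# iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 \
#     -p 192.168.2.195 | get_param node.startup      # expected: automatic
```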

Create Persistent Local SCSI Device Names

In this section, I will go through the steps to create a persistent local SCSI device name (/dev/iscsi/linux3-data-1) which will be mapped

to the new iSCSI target name. This will be done using udev. Having a consistent local SCSI device name (for example /dev/mydisk1 or

/dev/mydisk2) is highly recommended in order to distinguish between multiple SCSI devices (/dev/sda or /dev/sdb) when the node is

booted or the iSCSI initiator service is started/restarted.

When the database server node boots and the iSCSI initiator service is started, it will automatically login to the target(s) configured in a random

fashion and map them to the next available local SCSI device name. For example, the target iqn.2006-01.com.openfiler:scsi.linux3-

data-1 may get mapped to /dev/sda when the node boots. I can actually determine the current mappings for all targets (if there were multiple

targets) by looking at the /dev/disk/by-path directory:

[root@linux3 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10 " "

$11}')

ip-192.168.2.195:3260-iscsi-iqn.2006-01.com.openfiler:scsi.linux3-data-1 -> ../../sda

Using the output from the above listing, we can establish the following current mappings:

Current iSCSI Target Name to local SCSI Device Name Mappings

 

Ok, so I only have one target discovered which maps to /dev/sda. But what if there were multiple targets configured (say, iqn.2006-

01.com.openfiler:scsi.linux3-data-2) or better yet, I had multiple removable SCSI devices on linux3? This mapping could change

every time the node is rebooted. For example, if I had a second target discovered on linux3 (i.e. iqn.2006-

01.com.openfiler:scsi.linux3-data-2), after a reboot it may be determined that the second iSCSI target iqn.2006-

01.com.openfiler:scsi.linux3-data-2 gets mapped to the local SCSI device /dev/sda and iqn.2006-

01.com.openfiler:scsi.linux3-data-1 gets mapped to the local SCSI device /dev/sdb or vice versa.

As you can see, it is impractical to rely on using the local SCSI device names like /dev/sda or /dev/sdb given there is no way to predict the

iSCSI target mappings after a reboot.

What we need is a consistent device name we can reference like /dev/iscsi/linux3-data-1 that will always point to the appropriate iSCSI

target through reboots. This is where the Dynamic Device Management tool named udev comes in. udev provides a dynamic device directory

using symbolic links that point to the actual device using a configurable set of rules. When udev receives a device event (for example, the client

logging in to an iSCSI target), it matches its configured rules against the available device attributes provided in sysfs to identify the device. Rules

that match may provide additional device information or specify a device node name and multiple symlink names and instruct udev to run

additional programs (a SHELL script for example) as part of the device event handling process.

The first step is to create a new rules file. This file will be named /etc/udev/rules.d/55-openiscsi.rules and contain only a single line

of name=value pairs used to receive events we are interested in. It will also define a call-out SHELL script

(/etc/udev/scripts/iscsidev.sh) to handle the event.

Create the following rules file /etc/udev/rules.d/55-openiscsi.rules on the client node linux3:

# /etc/udev/rules.d/55-openiscsi.rules

KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/iscsidev.sh %b", SYMLINK+="iscsi/%c/part%n"

Next, create the UNIX SHELL script that will be called when this event is received. Let's first create a separate directory on the linux3 node

where udev scripts can be stored:

[root@linux3 ~]# mkdir -p /etc/udev/scripts

Finally, create the UNIX shell script /etc/udev/scripts/iscsidev.sh:

#!/bin/sh

# FILE: /etc/udev/scripts/iscsidev.sh

BUS=${1}
HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
target_name=$(cat ${file})

# This is not an open-iscsi drive
if [ -z "${target_name}" ]; then
    exit 1
fi

# Check if QNAP drive
check_qnap_target_name=${target_name%%:*}
if [ "${check_qnap_target_name}" = "iqn.2004-04.com.qnap" ]; then
    target_name=`echo "${target_name%.*}"`
fi

echo "${target_name##*.}"

After creating the UNIX SHELL script, change it to executable:

[root@linux3 ~]# chmod 755 /etc/udev/scripts/iscsidev.sh
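The script stands or falls on POSIX parameter expansion, so it is worth sanity-checking the expansions it uses against the target name from this article:

```shell
# sample values from this article; the expansions mirror iscsidev.sh
target_name="iqn.2006-01.com.openfiler:scsi.linux3-data-1"
BUS="3:0:0:0"

echo "${BUS%%:*}"           # prefix before the first ':' -> SCSI host number: 3
echo "${target_name%%:*}"   # prefix before the first ':' -> iqn.2006-01.com.openfiler
echo "${target_name##*.}"   # suffix after the last '.'   -> linux3-data-1
```

The last expansion is exactly how the /dev/iscsi/linux3-data-1 directory name is derived from the target name.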

Now that udev is configured, restart the iSCSI initiator service

[root@linux3 ~]# service iscsi stop

Logging out of session [sid: 3, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal:

192.168.2.195,3260]

Logout of [sid: 3, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal:

192.168.2.195,3260]: successful

Stopping iSCSI daemon: /etc/init.d/iscsi: line 33: 5143 Killed

/etc/init.d/iscsid stop

[root@linux3 ~]# service iscsi start

iscsid dead but pid file exists

Turning off network shutdown. Starting iSCSI daemon: [ OK ]

[ OK ]

Setting up iSCSI targets: Logging in to [iface: default, target: iqn.2006-

01.com.openfiler:scsi.linux3-data-1, portal: 192.168.2.195,3260]

Login to [iface: default, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal:

192.168.2.195,3260]: successful

[ OK ]

Let's see if our hard work paid off:

[root@linux3 ~]# ls -l /dev/iscsi/

total 0

drwxr-xr-x 2 root root 60 Apr 7 01:57 linux3-data-1

[root@linux3 ~]# ls -l /dev/iscsi/linux3-data-1/

total 0

lrwxrwxrwx 1 root root 9 Apr 7 01:57 part -> ../../sda

The listing above shows that udev did the job it was supposed to do! We now have a consistent set of local device name(s) that can be used to

reference the iSCSI targets through reboots. For example, we can safely assume that the device name /dev/iscsi/linux3-data-1/part

will always reference the iSCSI target iqn.2006-01.com.openfiler:scsi.linux3-data-1. We now have a consistent iSCSI target name to local device name mapping which is described in the following table:

 

Create Primary Partition on iSCSI Volume

I now need to create a single primary partition on the new iSCSI volume that spans the entire size of the volume. The fdisk command is used

in Linux for creating (and removing) partitions. You can use the default values when creating the primary partition as the default action is to use

the entire disk. You can safely ignore any warnings that may indicate the device does not contain a valid DOS partition (or Sun, SGI or OSF

disklabel).

[root@linux3 ~]# fdisk /dev/iscsi/linux3-data-1/part

Command (m for help): n

Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1

First cylinder (1-36864, default 1): 1

Last cylinder or +size or +sizeM or +sizeK (1-36864, default 36864): 36864

Command (m for help): p

Disk /dev/iscsi/linux3-data-1/part: 38.6 GB, 38654705664 bytes

64 heads, 32 sectors/track, 36864 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

                        Device Boot      Start         End      Blocks   Id  System
/dev/iscsi/linux3-data-1/part1               1       36864    37748720   83  Linux

Command (m for help): w

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.
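As a quick sanity check, the geometry fdisk reports can be multiplied out to confirm the stated disk size:

```shell
# 36864 cylinders x 2048 sectors/cylinder x 512 bytes/sector
echo $((36864 * 2048 * 512))   # -> 38654705664, the "38.6 GB" fdisk reports
```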

Create File System on new iSCSI Volume / Partition

The next step is to create an ext3 file system on the new partition. Provided with the RHEL distribution is a script named /sbin/mkfs.ext3

which makes the task of creating an ext3 file system seamless. Here is an example session of using the mkfs.ext3 script on linux3:

[root@linux3 ~]# mkfs.ext3 -b 4096 /dev/iscsi/linux3-data-1/part1


Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

4718592 inodes, 9437180 blocks

471859 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=0

288 block groups

32768 blocks per group, 32768 fragments per group

16384 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,

4096000, 7962624

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 26 mounts or

180 days, whichever comes first. Use tune2fs -c or -i to override.
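The mkfs output is internally consistent: the superuser reservation is 5.00% of the block count, which can be confirmed with shell arithmetic:

```shell
# 5% of the 9437180 filesystem blocks reported above
echo $((9437180 * 5 / 100))   # -> 471859 reserved blocks, as reported
```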

Mount the New File System

Now that the new iSCSI volume is partitioned and formatted, the final step is to mount the new volume. For this example, I will be mounting the

new volume on the directory /u03.

Create the /u03 directory before attempting to mount the new volume:

[root@linux3 ~]# mkdir -p /u03

Next, edit the /etc/fstab on linux3 and add an entry for the new volume:

/dev/VolGroup00/LogVol00 / ext3 defaults 1 1

LABEL=/boot /boot ext3 defaults 1 2

tmpfs /dev/shm tmpfs defaults 0 0

devpts /dev/pts devpts gid=5,mode=620 0 0

sysfs /sys sysfs defaults 0 0

proc /proc proc defaults 0 0

/dev/VolGroup00/LogVol01 swap swap defaults 0 0
/dev/iscsi/linux3-data-1/part1 /u03 ext3 _netdev 0 0

cartman:SHARE2 /cartman nfs defaults 0 0

domo:Public /domo nfs defaults 0 0

After making the new entry in the /etc/fstab file, it is now just a matter of mounting the new iSCSI volume:

[root@linux3 ~]# mount /u03

[root@linux3 ~]# df -k

Filesystem 1K-blocks Used Available Use% Mounted on

/dev/mapper/VolGroup00-LogVol00

56086828 21905480 31286296 42% /

/dev/hda1 101086 19160 76707 20% /boot

tmpfs 1037056 0 1037056 0% /dev/shm

cartman:SHARE2 306562280 8448 306247272 1% /cartman

domo:Public ... 18% /domo
/dev/sda1 ... 1% /u03
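A script that depends on the volume can confirm the mount by scanning /proc/mounts (mountpoint(1) is not guaranteed to exist on CentOS 5). A sketch, with a helper name of my own:

```shell
# is_mounted DIR: succeed if DIR appears as a mount point in /proc/mounts
is_mounted() {
    awk -v m="$1" '$2 == m { found = 1 } END { exit !found }' /proc/mounts
}

# is_mounted /u03 && echo "iSCSI volume is mounted"
```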

 

Logout and Remove an iSCSI Target from a Linux Client

It is my hope that this article has provided valuable insight into how you can take advantage of networked storage and the iSCSI configuration

process. As you can see, the process is fairly straightforward. Just as simple as it was to configure the Open-iSCSI Initiator on Linux, it is just as

easy to remove it and that is the subject of this section.

1. Unmount the File System

[root@linux3 ~]# cd

[root@linux3 ~]# umount /u03

After unmounting the file system, remove (or comment out) its related entry from the /etc/fstab file:

# /dev/iscsi/linux3-data-1/part1 /u03 ext3 _netdev 0 0

2. Manually Logout of iSCSI Target(s)

[root@linux3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 -p

192.168.2.195 --logout

Logging out of session [sid: 4, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1,

portal: 192.168.2.195,3260]

Logout of [sid: 4, target: iqn.2006-01.com.openfiler:scsi.linux3-data-1, portal:

192.168.2.195,3260]: successful

Verify we are logged out of the iSCSI target by looking at the /dev/disk/by-path directory. If no other iSCSI targets exist on the client

node, then after logging out from the iSCSI target, the mappings for all targets should be gone and the following command should not

find any files or directories:

[root@linux3 ~]# (cd /dev/disk/by-path; ls -l *openfiler* | awk '{FS=" "; print $9 " " $10

" " $11}')

ls: *openfiler*: No such file or directory

3. Delete Target and Disable Automatic Login

Update the record entry on the client node to disable automatic logins to the iSCSI target:

[root@linux3 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:scsi.linux3-data-1 -p

192.168.2.195 --op update -n node.startup -v manual

Delete the iSCSI target:

[root@linux3 ~]# iscsiadm -m node --op delete --targetname iqn.2006-

01.com.openfiler:scsi.linux3-data-1

4. Remove udev Rules Files

If the iSCSI target being removed is the only remaining target and you don't plan on adding any further iSCSI targets in the future, then it

is safe to remove the iSCSI rules file and its call-out script:

[root@linux3 ~]# rm /etc/udev/rules.d/55-openiscsi.rules

[root@linux3 ~]# rm /etc/udev/scripts/iscsidev.sh

5. Disable the iSCSI (Initiator) Service

If the iSCSI target being removed is the only remaining target and you don't plan on adding any further iSCSI targets in the future, then it

is safe to disable the iSCSI Initiator Service:

[root@linux3 ~]# service iscsid stop

[root@linux3 ~]# chkconfig iscsid off

[root@linux3 ~]# chkconfig iscsi off
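For repeatability, the five teardown steps above can be collected into one script. This is a hedged sketch (the function name is mine; target, portal, and mountpoint as used throughout this article), to be run as root on linux3:

```shell
# teardown values used throughout this article
TARGET=iqn.2006-01.com.openfiler:scsi.linux3-data-1
PORTAL=192.168.2.195

# teardown_iscsi: unmount, logout, disable auto-login, delete the node
# record, remove the udev rule/script, and disable the initiator services
teardown_iscsi() {
    umount /u03 &&
    iscsiadm -m node -T "$TARGET" -p "$PORTAL" --logout &&
    iscsiadm -m node -T "$TARGET" -p "$PORTAL" --op update -n node.startup -v manual &&
    iscsiadm -m node --op delete --targetname "$TARGET" &&
    rm -f /etc/udev/rules.d/55-openiscsi.rules /etc/udev/scripts/iscsidev.sh &&
    service iscsid stop &&
    chkconfig iscsid off &&
    chkconfig iscsi off
}

# as root on linux3: teardown_iscsi
```

Remember to also remove (or comment out) the /u03 entry in /etc/fstab, as shown in step 1.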

About the Author

Jeffrey Hunter is an Oracle Certified Professional, Java Development Certified Professional, Author, and an Oracle ACE. Jeff currently works as

a Senior Database Administrator for The DBA Zone, Inc. located in Pittsburgh, Pennsylvania. His work includes advanced performance tuning,

Java and PL/SQL programming, developing high availability solutions, capacity planning, database security, and physical / logical database

design in a UNIX / Linux server environment. Jeff's other interests include mathematical encryption theory, tutoring advanced mathematics,

programming language processors (compilers and interpreters) in Java and C, LDAP, writing web-based database administration tools, and of

course Linux. He has been a Sr. Database Administrator and Software Engineer for over 20 years and maintains his own website at:

http://www.iDevelopment.info. Jeff graduated from Stanislaus State University in Turlock, California, with a Bachelor's degree in Computer

Science and Mathematics.
