This document is intended to provide a step-by-step guide to setting up GlusterFS for the first time. For the purposes of this guide, it is required to use Fedora 26 (or higher) virtual machine instances.
After you deploy GlusterFS by following these steps, we recommend that you read the GlusterFS Admin Guide to learn how to administer GlusterFS and how to select a volume type that fits your needs. Read the GlusterFS Install Guide for a more detailed explanation of the steps we took here. We want you to be successful in as short a time as possible.
If you would like a more detailed walkthrough with instructions for installing using different methods (in local virtual machines, EC2 and baremetal) and different distributions, then have a look at the Install guide.
If you are already an Ansible user, and are more comfortable with setting up distributed systems with Ansible, we recommend you skip these steps and move over to the gluster-ansible repository, which provides most of the details needed to get your systems running quickly.
While the GlusterD2 project continues to be under active development, contributors can start by setting up a cluster to understand the aspects of peer and volume management. Please refer to the GD2 quick start guide here. Feedback on the new CLI and the REST APIs is welcome at gluster-users@gluster.org and gluster-devel@gluster.org.
To deploy GlusterFS using scripted methods, please read this article.
Note: GlusterFS stores its dynamically generated configuration files at /var/lib/glusterd. If at any point in time GlusterFS is unable to write to these files (for example, when the backing filesystem is full), it will at minimum cause erratic behavior for your system, or worse, take your system offline completely. It is recommended to create separate partitions for directories such as /var/log to reduce the chances of this happening.
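A quick way to keep an eye on this (a minimal check, not part of the original steps) is to confirm how much free space is left on the filesystems backing these directories:

# Show free space on the filesystems holding Gluster's state and logs
df -h /var/lib/glusterd /var/log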
Perform this step on all the nodes, "server{1,2,3}"
Note: We are going to use the XFS filesystem for the backend bricks, but Gluster is designed to work on top of any filesystem that supports extended attributes.
The following examples assume that the brick will reside on /dev/sdb1.
mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /data/brick1
echo '/dev/sdb1 /data/brick1 xfs defaults 1 2' >> /etc/fstab
mount -a && mount
You should now see sdb1 mounted at /data/brick1
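If you want an explicit check of the mount (not part of the original steps, and assuming the util-linux findmnt tool is available, as it is on Fedora):

# Show the mounted filesystem backing the brick directory
findmnt /data/brick1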
Install the software
yum install glusterfs-server
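On Fedora 26 and later, dnf is the default package manager, so the equivalent command is:

# dnf is the default package manager on Fedora 26+
dnf install -y glusterfs-server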
Start the GlusterFS management daemon:
service glusterd start
service glusterd status

glusterd.service - LSB: glusterfs server
       Loaded: loaded (/etc/rc.d/init.d/glusterd)
       Active: active (running) since Mon, 13 Aug 2012 13:02:11 -0700; 2s ago
      Process: 19254 ExecStart=/etc/rc.d/init.d/glusterd start (code=exited, status=0/SUCCESS)
       CGroup: name=systemd:/system/glusterd.service
           ├ 19260 /usr/sbin/glusterd -p /run/glusterd.pid
           ├ 19304 /usr/sbin/glusterfsd --xlator-option georep-server.listen-port=24009 -s localhost...
           └ 19309 /usr/sbin/glusterfs -f /var/lib/glusterd/nfs/nfs-server.vol -p /var/lib/glusterd/...
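On a systemd-based distribution such as Fedora 26, you can manage the daemon with systemctl instead (the unit is named glusterd in the Fedora packages):

# Start glusterd now and enable it at boot
systemctl enable --now glusterd
systemctl status glusterd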
The gluster processes on the nodes need to be able to communicate with each other. To simplify this setup, configure the firewall on each node to accept all traffic from the other nodes.
iptables -I INPUT -p all -s <ip-address> -j ACCEPT
where ip-address is the address of the other node; run the command once for each peer.
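If your nodes use firewalld (the Fedora default) instead of plain iptables, a rough equivalent sketch is to place each peer's address in the trusted zone; adapt the addresses to your environment:

# Accept all traffic from a peer node; repeat for each peer's IP address
firewall-cmd --permanent --zone=trusted --add-source=<ip-address>
firewall-cmd --reload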
From "server1"
gluster peer probe server2
gluster peer probe server3
Note: When using hostnames, the first server needs to be probed from one other server to set its hostname.
From "server2"
gluster peer probe server1
Note: Once this pool has been established, only trusted members may probe new servers into the pool. A new server cannot probe the pool, it must be probed from the pool.
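To see which servers are currently members of the trusted pool, you can run the following from any member:

# List all members of the trusted storage pool, including the local node
gluster pool list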
Check the peer status on server1
gluster peer status
You should see something like this (the UUID will differ)
Number of Peers: 2

Hostname: server2
Uuid: f0e7b138-4874-4bc0-ab91-54f20c7068b4
State: Peer in Cluster (Connected)

Hostname: server3
Uuid: f0e7b138-4532-4bc0-ab91-54f20c701241
State: Peer in Cluster (Connected)
On all servers:
mkdir -p /data/brick1/gv0
From any single server:
gluster volume create gv0 replica 3 server1:/data/brick1/gv0 server2:/data/brick1/gv0 server3:/data/brick1/gv0
gluster volume start gv0
Confirm that the volume shows "Started":
gluster volume info
You should see something like this (the Volume ID will differ):
Volume Name: gv0
Type: Replicate
Volume ID: f25cc3d8-631f-41bd-96e1-3e22a4c6f71f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: server1:/data/brick1/gv0
Brick2: server2:/data/brick1/gv0
Brick3: server3:/data/brick1/gv0
Options Reconfigured:
transport.address-family: inet
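To additionally confirm that each brick process is online and listening, you can run:

# Show the per-brick processes and ports for the volume
gluster volume status gv0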
Note: If the volume does not show "Started", the log file at /var/log/glusterfs/glusterd.log should be checked in order to debug and diagnose the situation. These logs can be inspected on one or all of the configured servers.
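For example, a quick look at the most recent entries on any of the servers:

# Show the last 50 lines of the management daemon's log
tail -n 50 /var/log/glusterfs/glusterd.log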
For this step, we will use one of the servers to mount the volume. Typically, you would do this from an external machine, known as a "client". Since using this method would require additional packages to be installed on the client machine, we will use one of the servers as a simple place to test first, as if it were that "client".
mount -t glusterfs server1:/gv0 /mnt
for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
First, check the client mount point:
ls -lA /mnt/copy* | wc -l
You should see 100 files returned. Next, check the GlusterFS brick mount points on each server:
ls -lA /data/brick1/gv0/copy*
You should see 100 files on each server using the method we listed here. Without replication, in a distribute-only volume (not detailed here), you should see about 33 files on each one.
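If you prefer a count rather than a listing, the same check can be expressed like this on each server:

# Count the test files that landed on this server's brick
ls -lA /data/brick1/gv0/copy* | wc -l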