GlusterFS Storage Setup on Two CentOS 7 Servers and Client

GlusterFS is a simple, easy-to-configure, scalable network-attached storage system. It is a distributed storage solution consisting of nodes (servers contributing storage bricks) that export their local file systems as volumes. Volumes can be mounted on client machines as network storage using NFS or the GlusterFS FUSE client. GlusterFS provides failover, redundancy, and anti-split-brain mechanisms that together form a high-availability system, which makes it similar in many respects to well-known clustering software such as Veritas Cluster Suite.

In this tutorial we will install GlusterFS on two CentOS 7 based nodes. We use KVM virtual machines to make things easier and faster, but the steps are exactly the same on physical hardware.

Environment used:

GlusterFS KVM node 1:
hostname: glusterfs1
IP: 192.168.2.35
OS: CentOS 7.2
OS disk: /dev/vda (50GB)
GlusterFS disk: /dev/vdb (10GB)

GlusterFS KVM node 2:
hostname: glusterfs2
IP: 192.168.2.36
OS: CentOS 7.2
OS disk: /dev/vda (50GB)
GlusterFS disk: /dev/vdb (10GB)

GlusterFS KVM client machine:
hostname: tuxfixer
IP: 192.168.2.9
OS: CentOS 7.2

Steps:

1. Prepare GlusterFS disks

Create an XFS file system on the /dev/vdb disk on both nodes:

[root@glusterfs1 ~]# mkfs -t xfs -L brick1 /dev/vdb

Verify the created file system on both nodes:

[root@glusterfs1 ~]# blkid | grep /dev/vdb
/dev/vdb: LABEL="brick1" UUID="9ab0dcae-78e7-49a0-931f-885d56b48292" TYPE="xfs"

Create mount points for GlusterFS bricks on both nodes:

[root@glusterfs1 ~]# mkdir -p /glusterfs/brick1

2. Mount GlusterFS disks

Edit /etc/fstab file on both nodes and add the following line:

/dev/vdb   /glusterfs/brick1   xfs   defaults   1 2
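
Alternatively, since we created the file system with the label brick1 in step 1, the fstab entry can reference the label instead of the device name; this variant keeps working even if device names change:

LABEL=brick1   /glusterfs/brick1   xfs   defaults   1 2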

Mount the bricks on both GlusterFS nodes:

[root@glusterfs1 ~]# mount -a

Verify mount point for /dev/vdb disk on both nodes:

[root@glusterfs1 ~]# df -hT | grep /dev/vdb
/dev/vdb                xfs        10G   33M   10G   1% /glusterfs/brick1

/dev/vdb should be mounted at /glusterfs/brick1.

3. Install GlusterFS software

Install the EPEL repository on both nodes (it may be needed for some dependencies):

[root@glusterfs1 ~]# yum install epel-release

Install wget package on both nodes:

[root@glusterfs1 ~]# yum install wget

Download the GlusterFS repository file on both nodes:

[root@glusterfs1 ~]# wget -P /etc/yum.repos.d https://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo

Install GlusterFS server package on both nodes:

[root@glusterfs1 ~]# yum install glusterfs-server

4. Start and enable GlusterFS service

Start GlusterFS service on both nodes:

[root@glusterfs1 ~]# systemctl start glusterd.service

Verify GlusterFS service on both nodes:

[root@glusterfs1 yum.repos.d]# systemctl status glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2016-01-26 22:06:08 CET; 8s ago
  Process: 2678 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 2679 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─2679 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Jan 26 22:06:08 glusterfs1 systemd[1]: Starting GlusterFS, a clustered file-system server...
Jan 26 22:06:08 glusterfs1 systemd[1]: Started GlusterFS, a clustered file-system server.

Enable the GlusterFS service on both nodes (so that it starts automatically after reboot):

[root@glusterfs1 ~]# systemctl enable glusterd.service
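
Optionally, confirm that the service is enabled:

[root@glusterfs1 ~]# systemctl is-enabled glusterd.service
enabled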

5. Open GlusterFS ports on firewall

GlusterFS by default uses the following ports:

  • 24007/TCP – Gluster Daemon
  • 24008/TCP – Gluster Management
  • 49152/TCP – Brick port (from GlusterFS version 3.7 onwards, each new brick uses the next port in sequence: 49153, 49154, etc.)
  • 38465-38469/TCP – Gluster NFS service
  • 111/TCP/UDP – Portmapper
  • 2049/TCP – NFS Service

Add appropriate firewalld rules to open ports on both nodes:

[root@glusterfs1 /]# firewall-cmd --zone=public --add-port=24007-24008/tcp --permanent
[root@glusterfs1 /]# firewall-cmd --zone=public --add-port=49152/tcp --permanent
[root@glusterfs1 /]# firewall-cmd --zone=public --add-port=38465-38469/tcp --permanent
[root@glusterfs1 /]# firewall-cmd --zone=public --add-port=111/tcp --permanent
[root@glusterfs1 /]# firewall-cmd --zone=public --add-port=111/udp --permanent
[root@glusterfs1 /]# firewall-cmd --zone=public --add-port=2049/tcp --permanent
[root@glusterfs1 ~]# firewall-cmd --reload
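
For convenience, the same rules can be added in a single shell loop (a sketch covering exactly the ports listed above):

[root@glusterfs1 ~]# for port in 24007-24008/tcp 49152/tcp 38465-38469/tcp 111/tcp 111/udp 2049/tcp; do firewall-cmd --zone=public --permanent --add-port=$port; done
[root@glusterfs1 ~]# firewall-cmd --reload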

…or just stop and disable the firewalld service on both nodes:

[root@glusterfs1 ~]# systemctl stop firewalld.service
[root@glusterfs1 ~]# systemctl disable firewalld.service

6. Configure GlusterFS trusted pool

Probe each GlusterFS node from the other:

[root@glusterfs1 ~]# gluster peer probe 192.168.2.36
peer probe: success. 
[root@glusterfs2 ~]# gluster peer probe 192.168.2.35
peer probe: success. Host 192.168.2.35 port 24007 already in peer list

Note: once the pool has been established, only trusted members may probe new nodes into the pool. A new node cannot probe the pool; it must be probed from the pool.

Verify peer status on each node:

[root@glusterfs1 ~]# gluster peer status
Number of Peers: 1

Hostname: 192.168.2.36
Uuid: b56cf21a-9f49-45b5-b7f0-e76c7d4fddfa
State: Peer in Cluster (Connected)
[root@glusterfs2 ~]# gluster peer status
Number of Peers: 1

Hostname: 192.168.2.35
Uuid: 4d4b6d42-486f-4631-bd62-eb8328e0a26c
State: Peer in Cluster (Connected)
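
If you prefer to reference the nodes by hostname rather than IP address, you can add both hostnames to /etc/hosts on each node (and on the client) and probe by name instead; a minimal sketch using this tutorial's addresses:

192.168.2.35   glusterfs1
192.168.2.36   glusterfs2

[root@glusterfs1 ~]# gluster peer probe glusterfs2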

7. Set up a GlusterFS volume

Create volume directory on both nodes:

[root@glusterfs1 ~]# mkdir /glusterfs/brick1/gluster_volume_0

Create volume from any single node:

[root@glusterfs1 ~]# gluster volume create gluster_volume_0 replica 2 192.168.2.35:/glusterfs/brick1/gluster_volume_0 192.168.2.36:/glusterfs/brick1/gluster_volume_0
volume create: gluster_volume_0: success: please start the volume to access data

Start volume from any single GlusterFS node:

[root@glusterfs1 ~]# gluster volume start gluster_volume_0
volume start: gluster_volume_0: success

Verify volume from any single node:

[root@glusterfs1 ~]# gluster volume info
 
Volume Name: gluster_volume_0
Type: Replicate
Volume ID: b88771d3-bf16-4526-b00c-b8a2bd5d1a3f
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.2.35:/glusterfs/brick1/gluster_volume_0
Brick2: 192.168.2.36:/glusterfs/brick1/gluster_volume_0
Options Reconfigured:
performance.readdir-ahead: on
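
You can also verify from any single node that both bricks are online, along with the ports they listen on (compare with the brick port range from step 5):

[root@glusterfs1 ~]# gluster volume status gluster_volume_0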

8. Mount GlusterFS volume on client machine

Prepare a client machine on which to mount and test the GlusterFS volume.
Install the glusterfs-fuse package on the client machine:

[root@tuxfixer ~]# yum install glusterfs-fuse

Create GlusterFS mount point directory:

[root@tuxfixer ~]# mkdir -p /mnt/volume

Mount the volume using the first GlusterFS node IP (192.168.2.35):

[root@tuxfixer ~]# mount -t glusterfs 192.168.2.35:/gluster_volume_0 /mnt/volume

Verify mount point:

[root@tuxfixer ~]# df -hT | grep /mnt/volume
192.168.2.35:/gluster_volume_0 fuse.glusterfs   10G   33M   10G   1% /mnt/volume
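
To confirm that replication works, create a test file on the client and check that it shows up on both bricks (testfile is just an illustrative name):

[root@tuxfixer ~]# touch /mnt/volume/testfile
[root@glusterfs1 ~]# ls /glusterfs/brick1/gluster_volume_0/
testfile

To make the client mount persistent across reboots, an /etc/fstab entry like the following can be used on the client; backupvolfile-server is an optional glusterfs-fuse mount option that lets the client fall back to the second node if the first one is unreachable at mount time (a sketch based on this tutorial's addresses):

192.168.2.35:/gluster_volume_0   /mnt/volume   glusterfs   defaults,_netdev,backupvolfile-server=192.168.2.36   0 0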


4 thoughts on “GlusterFS Storage Setup on Two CentOS 7 Servers and Client”

  1. Alexandre June 23, 2016 at 15:43

    Hi Grzegorz Juszczak

    Is it possible to use GlusterFS in OpenStack? Do I need block storage, or should I use Cinder?

    Thanks for your help.

  2. Grzegorz Juszczak June 26, 2016 at 20:53

    Hi Alexandre
    Of course it's possible; in fact, you can use GlusterFS as a backend for Cinder instead of the LVM cinder-volumes volume group.
    There is a parameter in the answer file:

    # A single or comma-separated list of Red Hat Storage (gluster)
    # volume shares to mount. Example: 'ip-address:/vol-name', 'domain
    # :/vol-name'
    CONFIG_CINDER_GLUSTER_MOUNTS=

    I am planning to make a tutorial soon about OpenStack installation with GlusterFS as the Cinder backend.

  3. Alexandre July 5, 2016 at 22:28

    I appreciate it, hehe.
