Install and Configure GlusterFS on CentOS 7 / RHEL 7

In this tutorial, we will see what GlusterFS is and how to install and configure it on CentOS 7 and RHEL 7: how to set up a GlusterFS server and a GlusterFS client, and how to test GlusterFS high availability. We will cover all of these points step by step.

What is GlusterFS?

GlusterFS is an open-source, scalable network file system suitable for data-intensive workloads such as media streaming, cloud storage, and CDNs (Content Delivery Networks). Gluster clients can access the storage as if it were local storage. Whenever a user creates data on a gluster volume, the data is mirrored and distributed to the other storage nodes. The primary purpose of GlusterFS is to keep data highly available to applications and users.


These are the important terminologies that we are going to use throughout this article.

Brick:- A brick is the basic unit of storage (a directory) on a server in the trusted storage pool, e.g. /test/glusterfs.

Volume:- A volume is a logical collection of bricks.

Cluster:- A cluster is a group of linked computers working together as a single system. If one computer goes down, another takes over its services.

Distributed File System:- A file system in which data is spread across multiple storage nodes and can be accessed by clients over a network.

Client:- The machine on which the gluster file system is mounted.

Server:- The machine that hosts the actual file system and stores the data.

Replicate:- Making multiple copies of data to achieve high redundancy.

Fuse:- A loadable kernel module that lets non-privileged users create their own file systems without editing kernel code.

Glusterd:- The daemon that runs on all servers in the trusted storage pool.


A volume is a collection of bricks, and most gluster operations, such as reading and writing, happen on the volume. GlusterFS supports different types of volumes depending on the requirements.

In this article, we are going to configure a replicated GlusterFS volume on CentOS 7 / RHEL 7.

A replicated GlusterFS volume is like RAID 1: the volume maintains exact copies of the data on all bricks. We decide the number of replicas when creating the volume, so we need at least two bricks to create a volume with two replicas, or three bricks to create a volume with three replicas.

Prerequisites:-

Here, we are going to configure a GlusterFS volume with two replicas. First of all, make sure you have two 64-bit systems (virtual machines or physical servers) with a minimum of 1GB of memory and one spare hard disk on each server.

Host Name    OS         Memory   Disk             Purpose
tzgluster1   CentOS 7   1GB      /dev/sdb (2GB)   Storage Node 1
tzgluster2   CentOS 7   1GB      /dev/sdb (2GB)   Storage Node 2
tzclient     CentOS 7   N/A      N/A              Client Machine

Configure DNS:-

GlusterFS components use DNS for name resolution, so configure either DNS or, if there is no DNS in the environment, hosts entries as shown below. Here I am going to use the /etc/hosts file for name resolution between the gluster servers and the gluster client machine.

[[email protected] ~]# cat /etc/hosts
localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
tzgluster1 tzgluster2 tzclient
[[email protected] ~]#
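The transcript above has the addresses stripped out; a sketch of what the hosts entries could look like on all three machines — the 192.168.1.x addresses are placeholders, substitute your own:

```
192.168.1.10   tzgluster1
192.168.1.11   tzgluster2
192.168.1.12   tzclient
```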

Add GlusterFS Repository:-

First of all, we need to configure the GlusterFS repository on both storage nodes in order to install the glusterfs packages. Follow the instructions below to add the repository to our systems.

On RHEL 7:-

Add Gluster repository on RHEL 7.

[[email protected] ~]# vi /etc/yum.repos.d/Gluster.repo

name=Gluster 3.8
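The repo file above is truncated; a complete file would look roughly like the following — the baseurl is an assumption based on the download.gluster.org layout for the 3.8 series, so verify the current path before using it:

```
[gluster38]
name=Gluster 3.8
baseurl=https://download.gluster.org/pub/gluster/glusterfs/3.8/LATEST/EPEL.repo/epel-$releasever/$basearch/
gpgcheck=0
enabled=1
```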

On CentOS 7:-

Install the centos-release-gluster package; it automatically provides the required YUM repository files. This RPM is available from CentOS Extras.

[[email protected] ~]# yum install -y centos-release-gluster

Install GlusterFS:-

Once we have added the repository to our systems, we can install GlusterFS using the command below.

[[email protected] ~]# yum install -y glusterfs-server

Start the glusterd service on all gluster nodes after installation.

[[email protected] ~]# systemctl start glusterd

Verify that the glusterd service is running using the command below.

[[email protected] ~]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2018-09-08 19:12:48 CEST; 46s ago
  Process: 2628 ExecStart=/usr/sbin/glusterd -p /var/run/ --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 2629 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─2629 /usr/sbin/glusterd -p /var/run/ --log-level INFO

Sep 08 19:12:48 tzgluster1 systemd[1]: Starting GlusterFS, a clustered file-system server...
Sep 08 19:12:48 tzgluster1 systemd[1]: Started GlusterFS, a clustered file-system server.
[[email protected] ~]#

Enable glusterd service to start automatically on system boot.

[[email protected] ~]# systemctl enable glusterd
Created symlink from /etc/systemd/system/ to /usr/lib/systemd/system/glusterd.service.
[[email protected] ~]#

Configure Firewall:-

We need to configure the firewall so that the glusterfs services can communicate properly between server and client. We can either disable the firewall or configure it to allow all connections within the cluster.

By default, glusterd listens on TCP/24007, but opening that port alone is not enough on the gluster nodes. Each time we add a brick, gluster opens a new port (visible with “gluster volume status”).

# Disable FirewallD
[[email protected] ~]# systemctl stop firewalld
[[email protected] ~]# systemctl disable firewalld


We can run the command below on a node to accept all traffic coming from a given source IP.

[[email protected] ~]# firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="" accept'
[[email protected] ~]# firewall-cmd --reload
[[email protected] ~]#
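If you prefer opening specific ports instead of whitelisting a source address, a sketch — the brick port range shown is an assumption (bricks use ports from 49152 upwards on recent GlusterFS releases), so verify the actual ports with “gluster volume status”:

```shell
# Open the glusterd management port and a range for brick ports,
# both for the running firewall and permanently.
firewall-cmd --zone=public --add-port=24007-24008/tcp --permanent
firewall-cmd --zone=public --add-port=49152-49251/tcp --permanent
firewall-cmd --reload
```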

Add Storage:-

I am assuming that we have one spare hard disk on each machine; /dev/sdb is the one we will use for a brick. We need to create a single partition on the spare hard disk as shown below.

We need to perform the below steps on both nodes.

[[email protected] ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x5b54b761.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4188133, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-4188133, default 4188133):
Using default value 4188133
Partition 1 of type Linux and of size 2 GiB is set

Command (m for help): p

Disk /dev/sdb: 2144 MB, 2144324608 bytes, 4188134 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x5b54b761

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     4188133     2093043   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[[email protected] ~]#
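The fdisk dialog above can also be scripted. A non-interactive sketch using sfdisk: the script line "type=83" with no start or size creates one Linux partition spanning the whole disk. It is demonstrated here against a scratch image file so it can be tried safely — point it at /dev/sdb to partition the real spare disk.

```shell
# Scratch image standing in for /dev/sdb (sfdisk works on image
# files as well as block devices).
DISK=$(mktemp)
truncate -s 16M "$DISK"          # give the image some size to partition

# One primary Linux (type 83) partition covering the whole disk.
echo 'type=83' | sfdisk "$DISK"

# Dump the resulting partition table to confirm.
sfdisk -d "$DISK"
```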

Format the created partition with the file system of your choice.

[[email protected] ~]# mkfs.ext4 /dev/sdb1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
130816 inodes, 523260 blocks
26163 blocks (5.00%) reserved for the super user
First data block=0
Maximum file system blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8176 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and file system accounting information: done

[[email protected] ~]#

Now we can mount the disk on a directory like /test/glusterfs.

[[email protected] ~]# mkdir -p /test/glusterfs
[[email protected] ~]# mount /dev/sdb1 /test/glusterfs
[[email protected] ~]#

Now we need to add an entry to /etc/fstab to keep the mount persistent across reboots.

[[email protected] ~]# echo "/dev/sdb1 /test/glusterfs ext4 defaults 0 0" | tee --append /etc/fstab
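The `tee --append` above adds the line unconditionally, so re-running the setup would duplicate it. A small idempotent variant — `FSTAB` points at a scratch file here so it can be tried safely; use /etc/fstab on the node:

```shell
# Scratch file standing in for /etc/fstab in this demonstration.
FSTAB=$(mktemp)
ENTRY="/dev/sdb1 /test/glusterfs ext4 defaults 0 0"

# Append the line only if it is not already present (exact match).
grep -qxF "$ENTRY" "$FSTAB" || echo "$ENTRY" >> "$FSTAB"
grep -qxF "$ENTRY" "$FSTAB" || echo "$ENTRY" >> "$FSTAB"   # second run is a no-op

cat "$FSTAB"
```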

Configure GlusterFS on CentOS 7:-

Before creating a volume, we need to create a trusted storage pool by adding tzgluster2. GlusterFS configuration commands can be run on any one server in the cluster, and gluster applies the change to all the other servers.

Here we will run all GlusterFS commands on the tzgluster1 node.

[[email protected] ~]# gluster peer probe tzgluster2
peer probe: success.
[[email protected] ~]#

Now we can verify the status of the trusted storage pool using the command below.

[[email protected] ~]# gluster peer status
Number of Peers: 1

Hostname: tzgluster2
Uuid: ceed9138-a3f3-40ed-94df-37b57b17de4a
State: Peer in Cluster (Connected)
[[email protected] ~]#

We can list the storage pool using the command below.

[[email protected] ~]# gluster pool list
UUID                                    Hostname        State
ceed9138-a3f3-40ed-94df-37b57b17de4a    tzgluster2      Disconnected
269e06ee-5ef2-40cf-ad87-34b8eebe6d71    localhost       Connected
[[email protected] ~]#

Setup GlusterFS Volume:-

Now we need to create a brick (directory) called “tzclouds1” in the mounted file system on both nodes.

[[email protected] ~]# mkdir -p /test/glusterfs/tzclouds1
[[email protected] ~]#

As we know, we are going to use a replicated volume, so we need to create the volume named “tzclouds1” with two replicas.

[[email protected] ~]# gluster volume create tzclouds1 replica 2 tzgluster1:/test/glusterfs/tzclouds1 tzgluster2:/test/glusterfs/tzclouds1
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See:
Do you still want to continue?
 (y/n) y
volume create: tzclouds1: success: please start the volume to access data
[[email protected] ~]#
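As the warning in the transcript notes, replica-2 volumes are prone to split-brain. With a third node available, an arbiter brick avoids this without tripling the storage — a sketch, where `tzgluster3` is a hypothetical third node not part of this setup:

```shell
# The arbiter brick stores only file metadata, so it acts as a quorum
# tie-breaker while using very little disk space.
gluster volume create tzclouds1 replica 3 arbiter 1 \
  tzgluster1:/test/glusterfs/tzclouds1 \
  tzgluster2:/test/glusterfs/tzclouds1 \
  tzgluster3:/test/glusterfs/tzclouds1
```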

Now we can start the volume using the command below.

[[email protected] ~]# gluster volume start tzclouds1
volume start: tzclouds1: success
[[email protected] ~]#

We can check the status of the created volume using the command below.

[[email protected] ~]# gluster volume info tzclouds1

Volume Name: tzclouds1
Type: Replicate
Volume ID: 5c5f8450-9621-459d-9ae5-4b57713b61b5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Brick1: tzgluster1:/test/glusterfs/tzclouds1
Brick2: tzgluster2:/test/glusterfs/tzclouds1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[[email protected] ~]#

Install and Configure GlusterFS Client:-

We install the glusterfs-client package to support mounting GlusterFS file systems. We need to run all commands as the root user.

First of all, we need to install the repository package in order to install the glusterfs-client package.

[[email protected] ~]# yum install -y centos-release-gluster

On CentOS 7 and RHEL 7, we can use the command below to install the glusterfs-client package.

[[email protected] ~]# yum install -y glusterfs-client

Now we need to create a directory on the client server on which to mount the GlusterFS file system.

[[email protected] ~]# mkdir -p /client/glusterfs

Now, mount the GlusterFS file system to /client/glusterfs using the following command.

[[email protected] ~]# mount -t glusterfs tzgluster1:/tzclouds1 /client/glusterfs
[[email protected] ~]#

If you get an error like the one below:

WARNING: getfattr not found, certain checks will be skipped..
Mount failed. Please check the log file for more details.

consider adding firewall rules on the gluster nodes (tzgluster1 and tzgluster2) to allow connections from the client machine (tzclient). Run the command below on both gluster nodes.

[[email protected] ~]# firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="" accept'
[[email protected] ~]# firewall-cmd --reload
[[email protected] ~]#

You can also use tzgluster2 instead of tzgluster1 in the above mount command.
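The server named in the mount command is only used to fetch the volume layout; after that, the client talks to all bricks directly. A sketch of giving the initial mount a fallback server — the option name may vary by GlusterFS version (newer releases also accept `backup-volfile-servers`):

```shell
# If tzgluster1 is unreachable at mount time, the client falls back
# to tzgluster2 to retrieve the volume information.
mount -t glusterfs -o backupvolfile-server=tzgluster2 \
  tzgluster1:/tzclouds1 /client/glusterfs
```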

We can verify the mounted GlusterFS file system using the command below.

[[email protected] ~]# df -h /client/glusterfs
Filesystem             Size  Used Avail Use% Mounted on
tzgluster1:/tzclouds1  2.0G   26M  1.9G   2% /client/glusterfs
[[email protected] ~]#

We can also verify the GlusterFS entry in /proc/mounts.

[[email protected] ~]# grep glusterfs /proc/mounts
tzgluster1:/tzclouds1 /client/glusterfs fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0
[[email protected] ~]#

Add the entry below to /etc/fstab to mount the file system automatically at system boot.

[[email protected] ~]# echo "tzgluster1:/tzclouds1 /client/glusterfs glusterfs  defaults,_netdev 0 0" | tee --append /etc/fstab
tzgluster1:/tzclouds1 /client/glusterfs glusterfs  defaults,_netdev 0 0
[[email protected] ~]# cat /etc/fstab

# /etc/fstab
# Created by anaconda on Tue Sep 26 12:44:35 2017
# Accessible file systems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=9b1b6c8c-a702-4654-8b65-3ea79c368a84 /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
tzgluster1:/tzclouds1 /client/glusterfs glusterfs  defaults,_netdev 0 0
[[email protected] ~]#

Test GlusterFS Replication and High-Availability:-

GlusterFS Server Side:-

To check replication, we mount the created GlusterFS volume on the storage nodes themselves.

First of all, we need to create a directory on both glusterfs servers, e.g. /testHA.

[[email protected] ~]# mkdir /testHA
[[email protected] ~]#
[[email protected] ~]# mkdir /testHA
[[email protected] ~]#
[[email protected] ~]# mount -t glusterfs tzgluster2:/tzclouds1 /testHA
[[email protected] ~]# 
[[email protected] ~]# mount -t glusterfs tzgluster1:/tzclouds1 /testHA
[[email protected] ~]#

Data inside the /testHA directory of both nodes will always be same (replication).

GlusterFS Client Side:-

Let’s create some files on the mounted file system on the client.

[[email protected] ~]# touch /client/glusterfs/file1
[[email protected] ~]# touch /client/glusterfs/file2
[[email protected] ~]#

We can verify the created files using the command below.

[[email protected] ~]# ls -l /client/glusterfs/
total 0
-rw-r--r--. 1 root root 0 Sep  9 06:56 file1
-rw-r--r--. 1 root root 0 Sep  9 06:56 file2
[[email protected] ~]#

Check that both GlusterFS nodes have the same data inside /testHA.

[[email protected] ~]# ls -l /testHA/
total 0
-rw-r--r-- 1 root root 0 Sep  9 06:56 file1
-rw-r--r-- 1 root root 0 Sep  9 06:56 file2
[[email protected] ~]#
[[email protected] ~]# ls -l /testHA/
total 0
-rw-r--r--. 1 root root 0 Sep  9 06:56 file1
-rw-r--r--. 1 root root 0 Sep  9 06:56 file2
[[email protected] ~]#

As you know, we mounted the GlusterFS volume from tzgluster1 on the client; now it is time to test the high availability of the volume by shutting that node down.

[[email protected] ~]# poweroff

Now we test the availability of the files: we can still see the files we created recently even though the node is down.

[[email protected] ~]# ls -l /client/glusterfs/
total 0
-rw-r--r--. 1 root root 0 Sep  9 06:56 file1
-rw-r--r--. 1 root root 0 Sep  9 06:56 file2
[[email protected] ~]#

Create some more files on the GlusterFS file system to check the replication.

[[email protected] ~]# touch /client/glusterfs/file3
[[email protected] ~]# touch /client/glusterfs/file4
[[email protected] ~]#

Verify the file count after creating them.

[[email protected] ~]# ls -l /client/glusterfs/
total 0
-rw-r--r--. 1 root root 0 Sep  9 06:56 file1
-rw-r--r--. 1 root root 0 Sep  9 06:56 file2
-rw-r--r--. 1 root root 0 Sep  9 07:41 file3
-rw-r--r--. 1 root root 0 Sep  9 07:41 file4
[[email protected] ~]#

Since tzgluster1 is down, all data is now written to tzgluster2 thanks to high availability. Now we need to power tzgluster1 back on.

Check /testHA on tzgluster1: we can see all four files in the directory, which confirms that file replication is working as expected.

[[email protected] ~]# mount -t glusterfs tzgluster1:/tzclouds1 /testHA
[[email protected] ~]# ls -l /testHA/
total 0
-rw-r--r-- 1 root root 0 Sep  9 06:56 file1
-rw-r--r-- 1 root root 0 Sep  9 06:56 file2
-rw-r--r-- 1 root root 0 Sep  9 07:41 file3
-rw-r--r-- 1 root root 0 Sep  9 07:41 file4
[[email protected] ~]#
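When a node rejoins the pool, gluster's self-heal daemon copies the missing files back to it. A sketch of checking that process, assuming the volume name used above:

```shell
# Lists entries on each brick that still need healing; an empty list
# means both bricks are back in sync.
gluster volume heal tzclouds1 info
```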

That’s all. We have completed GlusterFS server and GlusterFS client installation and configuration. We have tested GlusterFS High Availability in this tutorial.
