How to install and configure a GlusterFS server on CentOS

We are going to set up a GlusterFS storage server with four nodes.
1. On all four nodes, install the GlusterFS and XFS packages, then enable and start the glusterd service:

wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
yum install glusterfs-server xfsprogs
chkconfig glusterd on      # start glusterd automatically at boot
service glusterd start
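
To confirm the daemon is up on each node before continuing, a quick check (using the service installed above) is:

glusterfs --version        # confirm the installed GlusterFS release
service glusterd status    # should report that glusterd is running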

2. On all of your cluster nodes, create a new 2 GB LV called brick1 in the vgsrv VG and format it with an XFS filesystem with 512-byte inodes. Mount it under the node's export directory (shown here for server1; use /server2_export1 on server2, and so on):

lvcreate -L 2G -n brick1 vgsrv              # 2 GB logical volume for the brick
mkfs.xfs -i size=512 /dev/vgsrv/brick1      # 512-byte inodes leave room for GlusterFS extended attributes
mkdir /server1_export1
echo "/dev/vgsrv/brick1 /server1_export1 xfs defaults 0 1" >> /etc/fstab
mount -a
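
As a quick sanity check (a minimal sketch; the mount point shown is server1's and differs per node), verify the inode size and the mount:

xfs_info /server1_export1 | grep isize      # should show isize=512
df -h /server1_export1                      # confirm the brick filesystem is mounted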

3. From server1, add the other three nodes as trusted peers.


[root@proxy ~]# gluster peer probe server2{ip}
[root@proxy ~]# gluster peer probe server3{ip}
[root@proxy ~]# gluster peer probe server4{ip}
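
The same probes can also be scripted in one loop (a minimal sketch; it assumes the hostnames server2, server3 and server4 resolve from server1, e.g. via /etc/hosts or DNS):

for node in server2 server3 server4; do
    gluster peer probe "$node"
done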

[root@proxy ~]# gluster peer status
Number of Peers: 3

Hostname: server2
Uuid: a381532b-81a0-41c7-9adb-cd29f9f38158
State: Peer in Cluster (Connected)

Hostname: server3
Uuid: b289f724-1de9-47fb-8913-41ce82237f65
State: Peer in Cluster (Connected)

Hostname: server4
Uuid: 9ac6df3e-441e-495b-84cb-2b9d50a6099c
State: Peer in Cluster (Connected)

There are three basic volume types (a volume is a cluster of bricks); example create syntax for each is sketched after this list:
1. Distributed
2. Striped
3. Replicated
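
The create syntax differs only in the type keyword. The commands below are an illustrative sketch with hypothetical volume names and brick paths (the striped type exists only on the older 3.x releases used here):

gluster volume create distvol server1:/export/brick server2:/export/brick               # distributed (default)
gluster volume create stripevol stripe 2 server1:/export/brick server2:/export/brick    # striped
gluster volume create replvol replica 2 server1:/export/brick server2:/export/brick     # replicated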

4. We are going to create a new replicated GlusterFS volume using bricks from server1 & server2.

On server1,

[root@proxy ~]# gluster volume create newvol replica 2 server1:/server1_export1/brick server2:/server2_export1/brick
volume create: newvol: success: please start the volume to access data

Note: The brick directory should ideally be a sub-directory of a mount point (and not a mount point directory itself) for ease of administration.

[root@proxy ~]# gluster volume start newvol
volume start: newvol: success

[root@proxy ~]# gluster volume status newvol
Status of volume: newvol
Gluster process                        Port    Online    Pid
------------------------------------------------------------------------------
Brick server1:/server1_export1/brick        49152    Y    1277
Brick server2:/server2_export1/brick        49152    Y    1300
NFS Server on localhost                    N/A    N    N/A
Self-heal Daemon on localhost                N/A    Y    1298
NFS Server on server2                N/A    N    N/A
Self-heal Daemon on server2            N/A    Y    1321
NFS Server on server4                N/A    N    N/A
Self-heal Daemon on server4            N/A    Y    1287
NFS Server on server3                N/A    N    N/A
Self-heal Daemon on server3            N/A    Y    1315

Task Status of Volume newvol
------------------------------------------------------------------------------
There are no active volume tasks

[root@proxy ~]# gluster volume info newvol

Volume Name: newvol
Type: Replicate
Volume ID: af11dc1a-1536-40a0-8739-7213b462d6c2
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: server1:/server1_export1/brick
Brick2: server2:/server2_export1/brick

5. Configure a storage client on your local desktop machine,

wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
yum -y install glusterfs-fuse
mkdir /newvol
mount -t glusterfs server1:/newvol /newvol
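
If the client should mount the volume automatically at boot, an fstab entry like the following can be added (a sketch assuming the same mount point; _netdev delays the mount until the network is up):

echo "server1:/newvol /newvol glusterfs defaults,_netdev 0 0" >> /etc/fstab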


~]# mount
/dev/xvda1 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
server1:/newvol on /newvol type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

Create some test files under the mount point /newvol and examine the contents of the /serverX_export1/brick directory on server1 & server2.

~]# touch /newvol/file{0..9}

[root@proxy brick]# ls -la /server1_export1/brick/
total 0
drwxr-xr-x  3 root root 143 Apr  9 10:04 .
drwxr-xr-x  3 root root  18 Apr  9 09:53 ..
-rw-r--r--  2 root root   0 Apr  9 10:04 file0
-rw-r--r--  2 root root   0 Apr  9 10:04 file1
-rw-r--r--  2 root root   0 Apr  9 10:04 file2
-rw-r--r--  2 root root   0 Apr  9 10:04 file3
-rw-r--r--  2 root root   0 Apr  9 10:04 file4
-rw-r--r--  2 root root   0 Apr  9 10:04 file5
-rw-r--r--  2 root root   0 Apr  9 10:04 file6
-rw-r--r--  2 root root   0 Apr  9 10:04 file7
-rw-r--r--  2 root root   0 Apr  9 10:04 file8
-rw-r--r--  2 root root   0 Apr  9 10:04 file9
drw------- 16 root root 170 Apr  9 10:04 .glusterfs

[root@sara2 ~]# ls -la /server2_export1/brick/
total 0
drwxr-xr-x  3 root root 143 Apr  9 10:04 .
drwxr-xr-x  3 root root  18 Apr  9 09:53 ..
-rw-r--r--  2 root root   0 Apr  9 10:04 file0
-rw-r--r--  2 root root   0 Apr  9 10:04 file1
-rw-r--r--  2 root root   0 Apr  9 10:04 file2
-rw-r--r--  2 root root   0 Apr  9 10:04 file3
-rw-r--r--  2 root root   0 Apr  9 10:04 file4
-rw-r--r--  2 root root   0 Apr  9 10:04 file5
-rw-r--r--  2 root root   0 Apr  9 10:04 file6
-rw-r--r--  2 root root   0 Apr  9 10:04 file7
-rw-r--r--  2 root root   0 Apr  9 10:04 file8
-rw-r--r--  2 root root   0 Apr  9 10:04 file9
drw------- 16 root root 170 Apr  9 10:04 .glusterfs

Yes, the brick directories on both nodes have the same contents, because the volume is replicated.
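
To see the replication metadata GlusterFS stores on each brick, you can inspect the extended attributes of a brick file directly (a minimal sketch; getfattr comes from the attr package):

getfattr -d -m . -e hex /server1_export1/brick/file0    # shows the trusted.gfid and trusted.afr.* attributes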

-Power off your server2,

[root@sara2 ~]# poweroff

Then create a 512 MB file from the client mount and verify it under /server1_export1/brick/ on server1:

~]# dd if=/dev/zero of=/newvol/bigfile bs=1M count=512

[root@proxy brick]# ls -lh /server1_export1/brick/bigfile
-rw-r--r-- 2 root root 512M Apr  9 10:13 /server1_export1/brick/bigfile

-Start your server2,

Once it is back, examine the contents of /server2_export1/brick.
Initially you cannot see the "bigfile", which means the gluster service has not yet run the self-healing procedure.
Hence, examine the list of files that still need to be healed, on server1:

[root@proxy brick]# gluster volume heal newvol info
Brick proxy.sara.com:/server1_export1/brick/
Number of entries: 2
/
/bigfile

Brick sara2.test.com:/server2_export1/brick/
Number of entries: 0

Alternatively, you can run the command below to trigger the self-heal manually:

[root@proxy brick]# gluster volume heal newvol
Launching heal operation to perform index self heal on volume newvol has been successful
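
Two related heal queries can also be useful here (both are standard subcommands on the GlusterFS 3.x releases used in this article):

gluster volume heal newvol info healed         # entries that have already been healed
gluster volume heal newvol info split-brain    # entries that need manual intervention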

6. Expand a volume on GlusterFS storage,

Here we are going to expand the "newvol" volume from a two-brick replica (server1 & server2) to a 2x2 distributed-replicated volume across all four nodes.

First, from server1, add the bricks on server3 & server4:

[root@proxy brick]# gluster volume add-brick newvol \
> server3:/server3_export1/brick \
> server4:/server4_export1/brick
volume add-brick: success

[root@proxy brick]# gluster volume info newvol

Volume Name: newvol
Type: Distributed-Replicate
Volume ID: af11dc1a-1536-40a0-8739-7213b462d6c2
Status: Started
Number of Bricks: 2 x 2 = 4   --> two replica sets of two bricks each: replica set 1 (server1 & server2) and replica set 2 (server3 & server4)
Transport-type: tcp
Bricks:
Brick1: server1:/server1_export1/brick
Brick2: server2:/server2_export1/brick
Brick3: server3:/server3_export1/brick
Brick4: server4:/server4_export1/brick

Examine the contents of the brick directory on server3 & server4; it is still empty apart from the .glusterfs metadata directory.

[root@sara4 ~]# ls -la /server4_export1/brick/
total 0
drwxr-xr-x 3 root root 23 Apr  9 10:25 .
drwxr-xr-x 3 root root 18 Apr  9 09:53 ..
drw------- 6 root root 80 Apr  9 10:25 .glusterfs

Hence, from server1, start rebalancing the files on "newvol" across all bricks:

[root@proxy brick]# gluster volume rebalance newvol start
volume rebalance: newvol: success: Initiated rebalance on volume newvol.
Execute "gluster volume rebalance <volume-name> status" to check status.
ID: 8aefeaaa-697a-4efe-9017-18a52e4f07ab

[root@proxy brick]# gluster volume rebalance newvol status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                1       512.0MB            11             0             0            completed               1.00
                                 server2                0        0Bytes            11             0             0            completed               0.00
                                 server3                0        0Bytes            11             0             0            completed               0.00
                                 server4                0        0Bytes            11             0             0            completed               0.00
volume rebalance: newvol: success:
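
For larger volumes the rebalance takes a while; its progress can simply be polled, for example:

watch -n 10 gluster volume rebalance newvol status    # re-runs the status command every 10 seconds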

Now examine the brick contents on all nodes.
The files have been distributed between the two replica sets: each file is stored either on server1 & server2 or on server3 & server4.

7. Replacing a Brick on GlusterFS,

Here we are going to replace the (assumed) failed brick /server4_export1 with a new brick /server4_export2. On server4, prepare the new brick:

lvcreate -L 2G -n brick2 vgsrv
mkfs.xfs -i size=512 /dev/vgsrv/brick2
mkdir /server4_export2
echo "/dev/vgsrv/brick2 /server4_export2 xfs defaults 0 1" >> /etc/fstab
mount -a

[root@sara4 brick]# mount
/dev/xvda1 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
/dev/mapper/vgsrv-brick1 on /server4_export1 type xfs (rw)
/dev/mapper/vgsrv-brick2 on /server4_export2 type xfs (rw)

Start the replacement,

On server4,
[root@sara4 brick]# gluster volume replace-brick newvol \
>  server4:/server4_export1/brick \
>  server4:/server4_export2/brick start
All replace-brick commands except commit force are deprecated. Do you want to continue? (y/n) y
volume replace-brick: success: replace-brick started successfully
ID: a1a82d01-13d5-44cf-acf3-b5965482d7de

[root@sara4 brick]# gluster volume replace-brick newvol  server4:/server4_export1/brick  server4:/server4_export2/brick status
All replace-brick commands except commit force are deprecated. Do you want to continue? (y/n) y
volume replace-brick: success: Number of files migrated = 11    Migration complete

Once the migration is complete, commit the change:

[root@sara4 brick]# gluster volume replace-brick newvol  server4:/server4_export1/brick  server4:/server4_export2/brick commit
All replace-brick commands except commit force are deprecated. Do you want to continue? (y/n) y
volume replace-brick: success: replace-brick commit successful
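
Note that on newer GlusterFS releases the start/status/commit sequence has been removed; there the usual equivalent is a one-step commit force followed by a full self-heal (a sketch only, not needed on the release used above):

gluster volume replace-brick newvol \
    server4:/server4_export1/brick \
    server4:/server4_export2/brick commit force
gluster volume heal newvol full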

Then examine the newly added brick. The entries below with a trailing T in the mode bits are GlusterFS-internal DHT link files rather than full copies; their data resides on the other replica set.

[root@sara4 brick]# ls -la /server4_export2/brick/
total 128
drwxr-xr-x  3 root root       157 Apr  9 11:03 .
drwxr-xr-x  3 root root        18 Apr  9 11:02 ..
---------T  2 root root 536870912 Apr  9 10:34 bigfile
-rw-r--r--  2 root root         0 Apr  9 10:04 file0
-rw-r--r--  2 root root         0 Apr  9 10:04 file1
-rw-r--r--  2 root root         0 Apr  9 10:04 file2
---------T  2 root root         0 Apr  9 10:29 file3
---------T  2 root root         0 Apr  9 10:29 file4
-rw-r--r--  2 root root         0 Apr  9 10:04 file5
-rw-r--r--  2 root root         0 Apr  9 10:04 file6
---------T  2 root root         0 Apr  9 10:29 file7
-rw-r--r--  2 root root         0 Apr  9 10:04 file8
---------T  2 root root         0 Apr  9 10:29 file9
drw------- 17 root root       179 Apr  9 11:04 .glusterfs

