A translation of this article was prepared ahead of the start of the "Linux Administrator. Virtualization and Clustering" course.



DRBD (Distributed Replicated Block Device) is a distributed, flexible and versatile replicated storage solution for Linux. It mirrors the contents of block devices such as hard drives, partitions and logical volumes between servers. It keeps copies of the data on two storage devices, so that if one of them fails, the data on the other can still be used.

You could say it is something like a network RAID 1 configuration with disks mirrored across different servers. However, it works quite differently from RAID (even network RAID).

Initially, DRBD was used mainly in high-availability (HA) computer clusters, but starting with version 9 it can also be used to deploy cloud storage solutions.

In this article, we will describe how to install DRBD on CentOS and briefly demonstrate how to use it to replicate storage (a partition) across two servers. This is the perfect article to get started with DRBD on Linux.

Test environment


We will use a two-node cluster for this setup.

  • Node 1: 192.168.56.101 - tecmint.tecmint.lan
  • Node 2: 192.168.56.102 - server1.tecmint.lan

Step 1: Install DRBD Packages


DRBD is implemented as a Linux kernel module. It is a driver for a virtual block device, so it is located at the very bottom of the system I/O stack.

DRBD can be installed from ELRepo or EPEL. Let's start by importing the ELRepo package signing key and enabling the repository on both nodes, as shown below.

# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm


Then you need to install the DRBD kernel module and utilities on both nodes using:

# yum install -y kmod-drbd84 drbd84-utils 
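The drbd kernel module is normally loaded automatically the first time a resource is brought up, but if you want to confirm the installation right away, an optional sanity check is to load and list it manually:

# modprobe drbd
# lsmod | grep drbd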

If SELinux is enabled, you need to adjust the policies to exempt the DRBD processes from SELinux control.

# semanage permissive -a drbd_t 
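If the semanage command is not available on a minimal CentOS 7 install, it is provided by the policycoreutils-python package, which you can install first:

# yum install -y policycoreutils-python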

In addition, if your system runs a firewall (firewalld), you need to allow DRBD port 7789 so that data can be synchronized between the two nodes.

Run these commands for the first node:

# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.56.102" port port="7789" protocol="tcp" accept'
# firewall-cmd --reload

Then run these commands for the second node:

# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.56.101" port port="7789" protocol="tcp" accept'
# firewall-cmd --reload
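To double-check that the rules were applied, you can list the active rich rules on each node:

# firewall-cmd --list-rich-rules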

Step 2. Preparing the low-level storage


Now that we have DRBD installed on both cluster nodes, we need to prepare a storage area of roughly the same size on each of them. This can be a hard disk partition (or an entire physical hard disk), a software RAID device, an LVM logical volume, or any other type of block device present on your system.

For this article, we will use the dd command to zero out a 2 GB test partition (this wipes any existing data and filesystem signatures on it).

# dd if=/dev/zero of=/dev/sdb1 bs=2048k count=1024 

Suppose this is an unused partition (/dev/sdb1) on a second block device (/dev/sdb) connected to both nodes.
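If /dev/sdb1 does not exist yet, you can create it first. The sketch below is one way to do it, assuming /dev/sdb is an empty second disk (fdisk would work just as well); the lsblk call simply verifies the result:

# parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 100%
# lsblk /dev/sdb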

Step 3. Configure DRBD


The main DRBD configuration file is /etc/drbd.conf, and additional configuration files can be found in the /etc/drbd.d directory.

To replicate the storage, we need to add the required configuration to the /etc/drbd.d/global_common.conf file, which contains the global and common sections of the DRBD configuration, and define the resources in *.res files.

Make a backup copy of the original file on both nodes, and then open a new file for editing (use a text editor of your liking).

# mv /etc/drbd.d/global_common.conf /etc/drbd.d/global_common.conf.orig
# vim /etc/drbd.d/global_common.conf

Add the following lines to the file on both nodes:

global {
        usage-count yes;
}
common {
        net {
                protocol C;
        }
}

Save the file, and then close the editor.

Let's briefly dwell on the protocol C line. DRBD supports three different replication modes (that is, three degrees of replication synchronicity), namely:

  • protocol A: asynchronous replication protocol; most commonly used in long distance replication scenarios.
  • protocol B: semi-synchronous replication protocol, also called the memory-synchronous protocol.
  • protocol C: commonly used for nodes in networks with short distances; this is by far the most commonly used replication protocol in DRBD settings.

Important: The choice of replication protocol affects two deployment factors: data protection and latency. Throughput, by contrast, is largely independent of the selected replication protocol.
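If a particular resource ever needs a different protocol, it can also be set in that resource's own net section instead of the common section. A hedged illustration only (we stick with protocol C in this article); the resource name test matches the one defined in the next step:

resource test {
        net {
                protocol A;   # per-resource override, for illustration only
        }
        # ... device, disk, meta-disk and per-host sections as in Step 4
}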

Step 4. Adding a resource


A resource is the collective term that refers to all aspects of a particular replicated dataset. We will define our resource in the /etc/drbd.d/test.res file.

Add the following to the file on both nodes (remember to replace the variables with the actual values for your environment).

Pay attention to the host names; we need to specify the network host name, which can be obtained with the uname -n command.

resource test {
        on tecmint.tecmint.lan {
                device /dev/drbd0;
                disk /dev/sdb1;
                meta-disk internal;
                address 192.168.56.101:7789;
        }
        on server1.tecmint.lan {
                device /dev/drbd0;
                disk /dev/sdb1;
                meta-disk internal;
                address 192.168.56.102:7789;
        }
}

where:

  • on hostname : the on section names the host to which the nested configuration statements apply.
  • test : this is the name of the new resource.
  • device /dev/drbd0 : indicates the new virtual block device managed by DRBD.
  • disk /dev/sdb1 : this is the block device partition that backs the DRBD device.
  • meta-disk : defines where DRBD stores its metadata. internal means that DRBD stores its metadata on the same physical low-level device as the actual production data.
  • address : indicates the IP address and port number of the corresponding host.

Also note that if the parameters have the same values on both hosts, you can specify them directly in the resource section.

For example, the above configuration may be restructured to:

resource test {
        device /dev/drbd0;
        disk /dev/sdb1;
        meta-disk internal;
        on tecmint.tecmint.lan {
                address 192.168.56.101:7789;
        }
        on server1.tecmint.lan {
                address 192.168.56.102:7789;
        }
}

Step 5. Initializing and starting the resource


To interact with DRBD, we will use the following administration tools (which interact with the kernel module to configure and administer DRBD resources):

  • drbdadm : the high-level DRBD administration tool.
  • drbdsetup : a lower-level administration tool for attaching DRBD devices to their backing devices, setting up DRBD device pairs to mirror their backing devices, and inspecting the configuration of running DRBD devices.
  • drbdmeta : a metadata management tool.
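As a quick sanity check after editing the configuration, drbdadm can also parse and print the effective configuration, which helps catch syntax errors in the .res files:

# drbdadm dump test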

After adding all the initial resource configuration, we must initialize the resource metadata. Run the following on both nodes:

# drbdadm create-md test 

Initializing the metadata storage

Next, we bring the resource up, which attaches it to its backing device, sets the replication parameters, and connects the resource to its peer:

# drbdadm up test 

Now, if you run the lsblk command, you will notice that the DRBD device/volume drbd0 is attached to the backing device /dev/sdb1:

# lsblk 

Block device list

To disable a resource, run:

# drbdadm down test 

To check the status of the resource, run the following command (note that at this stage the Inconsistent/Inconsistent disk state is expected):

# drbdadm status test
OR
# drbdsetup status test --verbose --statistics    # for a more detailed status

Checking the resource status on the node
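With the 8.4 module used here, the same information is also exposed through the kernel's status file, if you prefer the classic view:

# cat /proc/drbd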


Step 6: Set the primary resource as the source for the initial device synchronization


At this point, DRBD is ready to go. Now we need to specify which node should be used as the source for the initial device synchronization.

Run the following command on only one node to start the initial full synchronization:

# drbdadm primary --force test
# drbdadm status test

Setting the primary node as the initial synchronization source
After synchronization is complete, the state of both disks should be UpToDate.
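The initial synchronization of a 2 GB device is quick, but on larger volumes you may want to follow its progress; one simple way is to re-run the status command periodically, for example:

# watch -n2 drbdadm status test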

Step 7: Testing the DRBD setup


Finally, we need to check that the DRBD device actually works for storing replicated data. Remember that we used an empty disk volume, so we need to create a file system on the device and mount it to verify that we can use it to store data.

We need to create a file system on the device with the following command, run on the node where we started the initial full synchronization (the node where the resource has the primary role):

# mkfs -t ext4 /dev/drbd0

Creating a file system on the DRBD volume

Then mount it as shown (you can give the mount point a suitable name):

# mkdir -p /mnt/DRDB_PRI/
# mount /dev/drbd0 /mnt/DRDB_PRI/

Now copy or create some files at the mount point above and produce a long listing with the ls command.
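For example, a few test files could be created like this (the file names are purely illustrative):

# touch /mnt/DRDB_PRI/file{1..4}.txt

Then change into the mount point and list its contents: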

# cd /mnt/DRDB_PRI/
# ls -l

Listing the contents of the primary DRBD volume

Next, unmount the device (make sure nothing is still using the mount point, and change out of the directory after unmounting to avoid errors) and change the node's role from primary to secondary:

# umount /mnt/DRDB_PRI/
# cd
# drbdadm secondary test

Make the other node (where the resource has the secondary role) primary, then mount the device on it and produce a long listing of the mount point. If the setup works correctly, all the files stored on the volume should be there:

# drbdadm primary test
# mkdir -p /mnt/DRDB_SEC/
# mount /dev/drbd0 /mnt/DRDB_SEC/
# cd /mnt/DRDB_SEC/
# ls -l

Verifying the DRBD setup running on the secondary node
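If you want to return the cluster to its original layout after the test, you can reverse the roles in the same way (a sketch of the reverse procedure). On the second node:

# cd
# umount /mnt/DRDB_SEC/
# drbdadm secondary test

And back on the first node:

# drbdadm primary test
# mount /dev/drbd0 /mnt/DRDB_PRI/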

For more information, see the administration tool man pages:

# man drbdadm
# man drbdsetup
# man drbdmeta

See also: the DRBD User's Guide.

Summary


DRBD is extremely flexible and versatile, making it a storage replication solution suitable for adding HA to virtually any application. In this article, we showed how to install DRBD on CentOS 7, and briefly demonstrated how to use it to replicate storage. Feel free to share your thoughts with us using the feedback form below.




Source