Please use this content only as a guideline. For a detailed installation and configuration guide, please read the official DRBD documentation. At the time of this writing, the latest DRBD version is 8. The same DRBD packages must be installed on both nodes.
Published (Last): 19 April 2016
Writes to the primary node are transferred to the lower-level block device and simultaneously propagated to the secondary node(s). The secondary node(s) then transfer the data to their corresponding lower-level block devices. When the failed ex-primary node returns, the system may or may not promote it to primary again after device data resynchronization. DRBD is often deployed together with the Pacemaker or Heartbeat cluster resource managers, although it also integrates with other cluster management frameworks.
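As an illustration of this write path, a minimal two-node resource definition in DRBD 8 configuration syntax might look like the sketch below. The hostnames, backing devices, and addresses are placeholder assumptions, not values taken from this article:

```
resource r0 {
  protocol C;              # synchronous replication: a write completes only
                           # after it reaches both nodes' lower-level devices
  on alpha {               # hypothetical hostname of the first node
    device    /dev/drbd0;  # the replicated block device presented to users
    disk      /dev/sdb1;   # lower-level backing block device (assumed)
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  on bravo {               # hypothetical hostname of the second node
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
```

Protocol C is the fully synchronous option; DRBD also offers protocols A and B, which acknowledge writes earlier at the cost of weaker replication guarantees.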
It integrates with virtualization solutions such as Xen, and may be used both below and on top of the Linux LVM stack.

Shared cluster storage comparison

Conventional computer cluster systems typically use some sort of shared storage for data being used by cluster resources.
In DRBD, read overhead is reduced because all read operations are carried out locally. Shared storage is not necessarily highly available. For example, a single storage area network accessed by multiple virtualization hosts is considered shared storage, but is not considered highly available at the storage level: if that single storage area network fails, neither host within the cluster can access the shared storage.
DRBD allows for a storage target that is both shared and highly available. A disadvantage is the added write latency: a write that must also be replicated through the other node takes longer than a write made directly to a shared storage device.
In RAID, the redundancy exists in a layer transparent to the storage-using application. While there are two storage devices, there is only one instance of the application, and the application is not aware of the multiple copies. When the application reads, the RAID layer chooses which storage device to read from.
When a storage device fails, the RAID layer reads from the other, without the application instance knowing of the failure.
In contrast, with DRBD there are two instances of the application, and each can read only from one of the two storage devices. Should one storage device fail, the application instance tied to that device can no longer read the data.
Consequently, in that case the application instance shuts down and the other application instance, tied to the surviving copy of the data, takes over. Conversely, in RAID, if the single application instance fails, the information on the two storage devices is effectively unusable, but in DRBD the other application instance can take over. A DRBD device can be used as the basis of:
- a conventional file system (this is the canonical example),
- another logical block device (as used in LVM, for example),
- any application requiring direct access to a block device.
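As a sketch of the canonical case, bringing up a resource and putting a file system on it might look like the following. The resource name r0 and the mount point are assumptions; the commands require root privileges, a configured DRBD resource, and a loaded DRBD kernel module:

```shell
# On both nodes: initialize DRBD metadata and bring the resource up
drbdadm create-md r0
drbdadm up r0

# On the node chosen as primary: force the initial sync direction and promote
drbdadm primary --force r0

# On the primary only: create a file system on the replicated device and mount it
mkfs.ext4 /dev/drbd0
mount /dev/drbd0 /mnt/data
```

Note that with a conventional (non-cluster) file system, only the primary node may mount the device; the secondary holds a block-level replica that becomes usable after promotion.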
DRBD-based clusters are often employed for adding synchronous replication and high availability to file servers, relational databases such as MySQL, and many other workloads.
The guide is constantly being updated. This guide assumes, throughout, that you are using DRBD version 8. If you are using a pre-8 release, please refer to the documentation matching your version. Please use the drbd-user mailing list to submit comments.