MariaDB Galera Cluster is a virtually synchronous multi-master cluster that runs on Linux only. Its Enterprise version is MariaDB Enterprise Cluster (powered by Galera).
MariaDB Enterprise Cluster is a solution designed to handle high workloads exceeding the capacity of a single server. It is based on Galera Cluster technology integrated with MariaDB Enterprise Server and includes features like data-at-rest encryption for added security. This multi-primary replication alternative is ideal for maintaining data consistency across multiple servers, providing enhanced reliability and scalability.
MariaDB Enterprise Cluster, powered by Galera, is available with MariaDB Enterprise Server. MariaDB Galera Cluster is available with MariaDB Community Server.
To handle increasing load, especially when that load exceeds what a single server can process, it is best practice to deploy multiple MariaDB Enterprise Servers with a replication solution to maintain data consistency between them. MariaDB Enterprise Cluster is a multi-primary replication solution that serves as an alternative to single-primary MariaDB Replication.
MariaDB Enterprise Cluster is built on MariaDB Enterprise Server with Galera Cluster and MariaDB MaxScale. In MariaDB Enterprise Server 10.5 and later, it features enterprise-specific options, such as data-at-rest encryption for the write-set cache, that are not available in other Galera Cluster implementations.
As a multi-primary replication solution, any MariaDB Enterprise Server can operate as a Primary Server. This means that changes made to any node in the cluster replicate to every other node in the cluster, using certification-based replication and global ordering of transactions for the InnoDB storage engine.
Note: MariaDB Enterprise Cluster is only available for Linux operating systems.
There are a few things to consider when planning the hardware, virtual machines, or containers for MariaDB Enterprise Cluster.
MariaDB Enterprise Cluster architecture involves deploying MariaDB MaxScale with multiple instances of MariaDB Enterprise Server. The Servers are configured to use multi-primary replication to maintain consistency between themselves while MariaDB MaxScale routes reads and writes between them.
The application establishes a client connection to MariaDB MaxScale. MaxScale then routes statements to one of the MariaDB Enterprise Servers in the cluster. Writes made to any node in this cluster replicate to all the other nodes of the cluster.
When MariaDB Enterprise Servers start in a cluster:
Each Server attempts to establish network connectivity with the other Servers in the cluster
Groups of connected Servers form a component
When a Server establishes network connectivity with the Primary Component, it synchronizes its local database with that of the cluster
As a member of the Primary Component, the Server becomes operational — able to accept read and write queries from clients
During startup, the Primary Component is the Server that was bootstrapped as the first node of the cluster. Once the cluster is online, the Primary Component is any combination of Servers that includes more than half the total number of Servers.
A Server or group of Servers that loses network connectivity with the majority of the cluster becomes non-operational.
In planning the number of systems to provision for MariaDB Enterprise Cluster, it is important to keep cluster operation in mind: ensure that each Server has enough disk space and that the cluster is able to maintain a Primary Component in the event of outages.
Each Server requires the minimum amount of disk space needed to store the entire database. The upper storage limit for MariaDB Enterprise Cluster is that of the smallest disk in use.
Each switch in use should have an odd number of Servers, with a minimum of three.
In a cluster that spans multiple switches, each data center in use should have an odd number of switches, with a minimum of three.
In a cluster that spans multiple data centers, use an odd number of data centers, with a minimum of three.
Each data center in use should have at least one Server dedicated to backup operations. This can be another cluster node or a separate Replica Server kept in sync using MariaDB Replication.
Whether you are planning Servers per switch, switches per data center, or data centers in the cluster, this model helps preserve the Primary Component. A minimum of three in use means that a single Server or switch can fail without taking down the cluster.
Using an odd number above three reduces the risk of a split-brain situation (that is, a case where two separate groups of Servers believe that they are part of the Primary Component and remain operational).
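As a worked example of the quorum arithmetic: a five-node cluster needs more than half its members connected, so any three nodes form a Primary Component and the cluster survives the loss of two nodes. With four nodes, an even 2/2 network split leaves neither side with more than half, and both sides become non-operational; an odd node count makes such a tie impossible.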
Nodes in MariaDB Enterprise Cluster are individual MariaDB Enterprise Servers configured to perform multi-primary cluster replication. This configuration is set using a series of system variables in the configuration file.
[mariadb]
# General Configuration
bind_address = 0.0.0.0
innodb_autoinc_lock_mode = 2
# Cluster Configuration
wsrep_cluster_name = "accounting_cluster"
wsrep_cluster_address = "gcomm://192.0.2.1,192.0.2.2,192.0.2.3"
# wsrep Provider
wsrep_provider = /usr/lib/galera/libgalera_enterprise_smm.so
wsrep_provider_options = "evs.suspect_timeout=PT10S"
Additional information on system variables is available in the Reference chapter.
The innodb_autoinc_lock_mode system variable must be set to a value of 2 to enable interleaved lock mode. MariaDB Enterprise Cluster does not support other lock modes.
Ensure also that the bind_address system variable is properly set to allow MariaDB Enterprise Server to listen for TCP/IP connections:
bind_address = 0.0.0.0
innodb_autoinc_lock_mode = 2
MariaDB Enterprise Cluster requires that you set a name for your cluster using the wsrep_cluster_name system variable. When nodes connect to each other, they check the cluster name to ensure that they've connected to the correct cluster before replicating data. All Servers in the cluster must have the same value for this system variable.
Using the wsrep_cluster_address system variable, you can define the back-end protocol (always gcomm) and a comma-separated list of the IP addresses or domain names of the other nodes in the cluster.
wsrep_cluster_name = "accounting_cluster"
wsrep_cluster_address = "gcomm://192.0.2.1,192.0.2.2,192.0.2.3"
It is best practice to list all nodes in this system variable, as this is the list the node searches when attempting to reestablish network connectivity with the Primary Component.
Note: In certain environments, such as deployments in the cloud, you may also need to set the wsrep_node_address system variable, so that MariaDB Enterprise Server properly informs other Servers how to reach it.
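For example, a minimal sketch for the node at 192.0.2.1, reusing the example addresses above:
wsrep_node_address = 192.0.2.1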
MariaDB Enterprise Server connects to other Servers and replicates data from the cluster through a wsrep Provider called the Galera Replicator plugin. In order to enable clustering, specify the path to the relevant .so file using the wsrep_provider system variable.
MariaDB Enterprise Server 10.4 and later installations use an enterprise build of the Galera Enterprise 4 plugin. This includes all the features of Galera Cluster 4 as well as enterprise features like GCache encryption.
To enable MariaDB Enterprise Cluster, use the libgalera_enterprise_smm.so library:
wsrep_provider = /usr/lib/galera/libgalera_enterprise_smm.so
Earlier MariaDB Enterprise Server installations use the older community release of the Galera 3 plugin. This is set using the libgalera_smm.so library:
wsrep_provider = /usr/lib/galera/libgalera_smm.so
In addition to system variables, there is a set of options that you can pass to the wsrep Provider to configure or otherwise adjust its operations. This is done through the wsrep_provider_options system variable:
wsrep_provider_options = "evs.suspect_timeout=PT10S"
Additional information is available in the Reference chapter.
MariaDB Enterprise Cluster implements a multi-primary replication solution.
When you write to a table on a node, the node collects the write into a write-set transaction, which it then replicates to the other nodes in the cluster.
Your application can write to any node in the cluster. Each node certifies the replicated write-set. If the transaction has no conflicts, the nodes apply it. If the transaction does have conflicts, it is rejected and all of the nodes revert the changes.
The first node you start in MariaDB Enterprise Cluster bootstraps the Primary Component. Each subsequent node that establishes a connection joins and synchronizes with the Primary Component. A cluster achieves a quorum when more than half the nodes are joined to the Primary Component.
When a component forms that has less than half the nodes in the cluster, it becomes non-operational, since it believes there is a running Primary Component to which it has lost network connectivity.
These quorum requirements, combined with the recommended odd number of nodes, avoid a split-brain situation, in which two separate components each believe they are the Primary Component.
In cases where the cluster goes down and your nodes become non-operational, you can dynamically bootstrap the cluster.
First, find the most up-to-date node (that is, the node with the highest value for the wsrep_last_committed status variable):
SHOW STATUS LIKE 'wsrep_last_committed';
Once you determine the node with the most recent transaction, you can designate it as the Primary Component by running the following on it:
SET GLOBAL wsrep_provider_options="pc.bootstrap=YES";
The node bootstraps the Primary Component onto itself. Other nodes in the cluster with network connectivity then submit state transfer requests to this node to bring their local databases into sync with what's available on this node.
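To confirm that the bootstrap succeeded, you can check the wsrep_cluster_status status variable on the bootstrapped node, which should report Primary:
SHOW STATUS LIKE 'wsrep_cluster_status';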
From time to time a node can fall behind the cluster. This can occur due to expensive operations being issued to it or due to network connectivity issues that lead to write-sets backing up in the queue. Whatever the cause, when a node finds that it has fallen too far behind the cluster, it attempts to initiate a state transfer.
In a state transfer, the node connects to another node in the cluster and attempts to bring its local database back in sync with the cluster. There are two types of state transfers:
Incremental State Transfer (IST)
State Snapshot Transfer (SST)
When the donor node receives a state transfer request, it checks its write-set cache (that is, the GCache) to see if it has enough saved write-sets to bring the joiner into sync. If the donor node has the intervening write-sets, it performs an IST operation, where the donor node only sends the missing write-sets to the joiner. The joiner applies these write-sets following the global ordering to bring its local databases into sync with the cluster.
When the donor does not have enough write-sets cached for an IST, it runs an SST operation. In an SST, the donor uses a backup solution, like MariaDB Enterprise Backup, to copy its data directory to the joiner. When the joiner completes the SST, it begins to process the write-sets that came in during the transfer. Once it's in sync with the cluster, it becomes operational.
ISTs provide the best performance for state transfers, so the size of the GCache may need adjustment to facilitate their use.
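The GCache size is adjusted through the wsrep Provider options. A minimal sketch, combining it with the timeout option used above; the 2G figure is illustrative, not a recommendation:
wsrep_provider_options = "evs.suspect_timeout=PT10S;gcache.size=2G"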
MariaDB Enterprise Server uses Flow Control to sometimes throttle transactions and ensure that all nodes work equitably.
Write-sets that replicate to a node are collected by the node in its received queue. The node then processes the write-sets according to global ordering. Large transactions, expensive operations, or simple hardware limitations can lead to the received queue backing up over time.
When a node's received queue grows beyond certain limits, the node initiates Flow Control. In Flow Control, the node pauses replication to work through the write-sets it already has. Once it has worked the received queue down to a certain size, it re-initiates replication.
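You can gauge how often a node pauses for Flow Control from its status variables; a persistently high wsrep_flow_control_paused value suggests the node is a bottleneck:
SHOW GLOBAL STATUS LIKE 'wsrep_flow_control%';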
A node is removed, or evicted, from the cluster if it becomes non-responsive.
In MariaDB Enterprise Cluster, each node monitors network connectivity and response times from every other node. MariaDB Enterprise Cluster evaluates network performance using the EVS Protocol.
When a node finds that another node has poor network connectivity, it adds an entry for that node to its delayed list. If the node becomes active again and its network performance improves for a certain amount of time, entries for it are removed from the delayed list. That is, the longer a node has network problems, the longer it must perform well to be cleared from the delayed list.
If the number of entries for a node in the delayed list exceeds a threshold established for the cluster, the EVS Protocol evicts the node from the cluster.
Evicted nodes become non-operational components. They cannot rejoin the cluster until you restart MariaDB Enterprise Server.
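The delayed list and the eviction threshold are tunable through wsrep Provider options. A minimal sketch, assuming you want automatic eviction after five delayed-list entries (evs.auto_evict defaults to 0, which disables automatic eviction):
wsrep_provider_options = "evs.auto_evict=5;evs.delayed_margin=PT1S"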
Under normal operation, huge transactions and long-running transactions are difficult to replicate. MariaDB Enterprise Cluster rejects conflicting transactions and rolls back the changes. A transaction that takes several minutes or longer to run can encounter issues if a small transaction is run on another node and attempts to write to the same table. The large transaction fails because it encounters a conflict when it attempts to replicate.
MariaDB Enterprise Server 10.4 and later support streaming replication for MariaDB Enterprise Cluster. In streaming replication, huge transactions are broken into transactional fragments, which are replicated and applied as the operation runs. This makes it more difficult for intervening sessions to introduce conflicts.
To initiate streaming replication, set the wsrep_trx_fragment_unit and wsrep_trx_fragment_size system variables. You can set the unit to BYTES, ROWS, or STATEMENTS:
SET SESSION wsrep_trx_fragment_unit='STATEMENTS';
SET SESSION wsrep_trx_fragment_size=5;
Then, run your transaction.
Streaming replication works best with very large transactions where you don't expect to encounter conflicts. If the statement does encounter a conflict, the rollback operation is much more expensive than usual. As such, it's best practice to enable streaming replication at the session level and to disable it by setting the wsrep_trx_fragment_size system variable to 0 when it's not needed.
SET SESSION wsrep_trx_fragment_size=0;
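Putting the pieces together, a session might enable fragmenting around one large statement and then turn it off again; the table and date here are hypothetical:
SET SESSION wsrep_trx_fragment_unit='ROWS';
SET SESSION wsrep_trx_fragment_size=10000;
DELETE FROM archive_rows WHERE created < '2020-01-01';
SET SESSION wsrep_trx_fragment_size=0;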
Deployments on mixed hardware can introduce issues where some MariaDB Enterprise Servers perform better than others. A Server in one part of the world might perform more reliably or be physically closer to most users than others. In cases where a particular MariaDB Enterprise Server holds logical significance for your cluster, you can weight its value in quorum calculations.
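The weight is set through the pc.weight wsrep Provider option, which defaults to 1. A minimal sketch giving a node two votes in quorum calculations:
wsrep_provider_options = "pc.weight=2"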
Galera Arbitrator is a separate process that runs alongside MariaDB Enterprise Server. While the Arbitrator does not take part in replication, whenever the cluster performs quorum calculations it gives the Arbitrator a vote as though it were another MariaDB Enterprise Server. In effect this means that the system has the vote of MariaDB Enterprise Server plus any running Arbitrators in determining whether it's part of the Primary Component.
Bear in mind that the Galera Arbitrator is a separate package, galera-arbitrator-4, which is not installed by default with MariaDB Enterprise Server.
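As a sketch, the Arbitrator daemon (garbd) is started with the cluster name and member addresses used in the earlier configuration example:
garbd --group=accounting_cluster --address="gcomm://192.0.2.1,192.0.2.2" --daemon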
MariaDB Enterprise Servers that join a cluster attempt to connect to the IP addresses provided to the wsrep_cluster_address system variable. This variable adjusts itself at runtime to include the addresses of all connected nodes.
To scale out MariaDB Enterprise Cluster, start new MariaDB Enterprise Servers with the appropriate wsrep_cluster_address list and the same wsrep_cluster_name value. The new nodes establish network connectivity with the running cluster and request a state transfer to bring their local databases into sync with the cluster.
Once the MariaDB Enterprise Server reports itself as being in sync with the cluster, MariaDB MaxScale can begin including it in the load distribution for the cluster.
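A node's synchronization state is visible in the wsrep_local_state_comment status variable; a value of Synced indicates the node is ready to serve traffic:
SHOW STATUS LIKE 'wsrep_local_state_comment';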
Being a multi-primary replication solution means that any MariaDB Enterprise Server in the cluster can handle write operations, but write scale-out is minimal as every Server in the cluster needs to apply the changes.
MariaDB Enterprise Cluster does not provide failover capabilities on its own. MariaDB MaxScale is used to route client connections to MariaDB Enterprise Server.
Unlike a traditional load balancer, MariaDB MaxScale is aware of changes in the node and cluster states.
MaxScale takes nodes out of the distribution that initiate a blocking SST operation or Flow Control or otherwise go down, which allows them to recover or catch up without stopping service to the rest of the cluster.
With MariaDB Enterprise Cluster, each node contains a replica of all the data in the cluster. As such, you can run MariaDB Enterprise Backup on any node to back up the available data. The process for backing up a node is the same as for a single MariaDB Enterprise Server.
MariaDB Enterprise Server supports data-at-rest encryption to secure data on disk, and data-in-transit encryption to secure data on the network.
MariaDB Enterprise Server supports data-at-rest encryption of the GCache, the file used by Galera systems to cache write-sets. Encrypting the GCache ensures the Server encrypts both the data it temporarily caches from the cluster and the data it permanently stores in tablespaces.
For data-in-transit, MariaDB Enterprise Cluster supports encryption the same as MariaDB Server and additionally provides data-in-transit encryption for Galera replication traffic and for State Snapshot Transfer (SST) traffic.
MariaDB Enterprise Server 10.6 encrypts Galera replication and SST traffic using the server's TLS configuration by default. With the wsrep_ssl_mode system variable, you can configure the node to use the TLS configuration of the wsrep Provider options.
MariaDB Enterprise Server 10.5 and earlier support encrypting Galera replication and SST traffic through wsrep Provider options.
TLS encryption is only available when used by all nodes in the cluster.
To encrypt data-at-rest such as the GCache, stop the server, set encrypt_binlog=ON within the MariaDB Enterprise Server configuration file, and restart the server. This variable also controls encryption of the binary log and the relay log when used.
[mariadb]
...
# Controls Binary Log, Relay Log, and GCache Encryption
encrypt_binlog=ON
To stop using encryption on the GCache file, stop the server, set encrypt_binlog=OFF within the MariaDB Enterprise Server configuration file, and restart the server. This variable also controls encryption of the binary log and the relay log when used.
[mariadb]
...
# Controls Binary Log, Relay Log, and GCache Encryption
encrypt_binlog=OFF
MariaDB Galera Cluster is a Linux-exclusive, multi-primary cluster designed for MariaDB, offering features such as active-active topology, read/write capabilities on any node, automatic membership and node joining, true parallel replication at the row level, and direct client connections, with an emphasis on the native MariaDB experience.
MariaDB Galera Cluster is a virtually synchronous multi-primary cluster for MariaDB. It is available on Linux only and only supports the InnoDB storage engine (although there is experimental support for MyISAM and, from MariaDB 10.6, Aria; see the wsrep_replicate_myisam system variable or, from MariaDB 10.6, the wsrep_mode system variable).
Active-active multi-primary topology
Read and write to any cluster node
Automatic membership control: failed nodes drop from the cluster
Automatic node joining
True parallel replication, on row level
Direct client connections, native MariaDB look & feel
The above features yield several benefits for a DBMS clustering solution, including:
No replica lag
No lost transactions
Read scalability
Smaller client latencies
The Getting Started with MariaDB Galera Cluster page has instructions on how to get up and running with MariaDB Galera Cluster.
A great resource for Galera users is Codership on Google Groups (codership-team 'at' googlegroups (dot) com). If you use Galera, it is recommended you subscribe.
MariaDB Galera Cluster is powered by MariaDB Server.
The functionality of MariaDB Galera Cluster can be obtained by installing the standard MariaDB Server packages and the Galera wsrep provider library package. The following Galera version corresponds to each MariaDB Server version:
In MariaDB 10.4 and later, MariaDB Galera Cluster uses Galera 4. This means that the wsrep API version is 26 and the Galera wsrep provider library is version 4.X.
In MariaDB 10.3 and before, MariaDB Galera Cluster uses Galera 3. This means that the wsrep API is version 25 and the Galera wsrep provider library is version 3.X.
See Deciphering Galera Version Numbers for more information about how to interpret these version numbers.
The following versions of the Galera 4 wsrep provider are available. If you would like to install Galera 4 using yum, apt, or zypper, then the package is called galera-4.
Galera 4 wsrep provider versions: 26.4.2, 26.4.1, 26.4.0
Codership on Google Groups (codership-team 'at' googlegroups (dot) com) - A great mailing list for Galera users.
In MariaDB Cluster, transactions are replicated using the wsrep API, synchronously ensuring consistency across nodes. Synchronous replication offers high availability and consistency but is complex and potentially slower compared to asynchronous replication. Due to these challenges, asynchronous replication is often preferred for database performance and scalability, as seen in popular systems like MySQL and PostgreSQL, which typically favor asynchronous or semi-synchronous solutions.
In MariaDB Cluster, the server replicates a transaction at commit time by broadcasting the write set associated with the transaction to every node in the cluster. The client connects directly to the DBMS and experiences behavior that is similar to native MariaDB in most cases. The wsrep API (write set replication API) defines the interface between Galera replication and MariaDB.
The basic difference between synchronous and asynchronous replication is that "synchronous" replication guarantees that if a change happened on one node in the cluster, then the change will happen on other nodes in the cluster "synchronously," or at the same time. "Asynchronous" replication gives no guarantees about the delay between applying changes on the "master" node and the propagation of changes to "slave" nodes. The delay with "asynchronous" replication can be short or long. This also implies that if a master node crashes in an "asynchronous" replication topology, then some of the latest changes may be lost.
Theoretically, synchronous replication has several advantages over asynchronous replication:
Clusters utilizing synchronous replication are always highly available. If one of the nodes crashed, then there would be no data loss. Additionally, all cluster nodes are always consistent.
Clusters utilizing synchronous replication allow transactions to be executed on all nodes in parallel.
Clusters utilizing synchronous replication can guarantee causality across the whole cluster. This means that if a SELECT is executed on one cluster node after a transaction is committed on another cluster node, it sees the effects of that transaction.
However, in practice, synchronous database replication has traditionally been implemented via the so-called "2-phase commit" or distributed locking, which proved to be very slow. Low performance and complexity of implementation of synchronous replication led to a situation where asynchronous replication remains the dominant means for database performance scalability and availability. Widely adopted open-source databases such as MySQL or PostgreSQL offer only asynchronous or semi-synchronous replication solutions.
Galera's replication is not completely synchronous. It is sometimes called virtually synchronous replication.
An alternative approach to synchronous replication that uses group communication and transaction ordering techniques was suggested by a number of researchers, and prototype implementations have shown a lot of promise. We combined our experience in synchronous database replication and the latest research in the field to create the Galera Replication library and the wsrep API.
Galera replication is a highly transparent, scalable, and virtually synchronous replication solution for database clustering to achieve high availability and improved performance. Galera-based clusters are:
Highly available
Highly transparent
Highly scalable (near-linear scalability may be reached depending on the application)
Galera replication functionality is implemented as a shared library and can be linked with any transaction processing system that implements the wsrep API hooks.
The Galera replication library is a protocol stack providing functionality for preparing, replicating, and applying transaction write sets. It consists of:
wsrep API: specifies the interface and responsibilities for the DBMS and the replication provider
wsrep hooks: the wsrep integration in the DBMS engine
Galera provider: implements the wsrep API for the Galera library
certification layer: takes care of preparing write sets and performing certification
replication: manages the replication protocol and provides total ordering capabilities
GCS framework: provides a plugin architecture for group communication systems
many GCS implementations can be adapted; we have experimented with Spread and our in-house implementations: vsbes and Gemini
Many components in the Galera replication library were redesigned and improved with the introduction of Galera 4.
Although the Galera provider certifies the write set associated with a transaction at commit time on each node in the cluster, this write set is not necessarily applied on that cluster node immediately. Instead, the write set is placed in the node's receive queue, and it is eventually applied by one of the node's Galera slave threads.
The number of Galera slave threads can be configured with the wsrep_slave_threads system variable.
The Galera slave threads are able to determine which write sets are safe to apply in parallel. However, if your cluster nodes seem to have frequent consistency problems, then setting the value to 1 will probably fix the problem.
When a cluster node's state, as seen by wsrep_local_state_comment, is JOINED, then increasing the number of slave threads may help the cluster node catch up with the cluster more quickly. In this case, it may be useful to set the number of threads to twice the number of CPUs on the system.
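The variable can be changed at runtime, so you can raise it temporarily while a node catches up. For example, on a system with eight CPUs:
SET GLOBAL wsrep_slave_threads = 16;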
Streaming replication was introduced in Galera 4.
In older versions of MariaDB Cluster, there was a 2GB limit on the size of the transaction you could run. The node waits until the transaction commits before performing replication and certification. With large transactions, long-running writes, and changes to huge datasets, there was a greater possibility of a conflict forcing a rollback of an expensive operation.
Using streaming replication, the node breaks huge transactions up into smaller and more manageable fragments; it then replicates these fragments to the cluster as it works instead of waiting for the commit. Once certified, the fragment can no longer be aborted by conflicting transactions. As this can have performance consequences both during execution and in the event of rollback, it is recommended that you only use it with large transactions that are unlikely to experience conflict.
For more information on streaming replication, see the Galera documentation.
Group Commit support for MariaDB Cluster was introduced in Galera 4.
With Group Commit, groups of transactions are flushed together to disk to improve performance. In previous versions of MariaDB, this feature was not available in MariaDB Cluster, as it interfered with the global ordering of transactions for replication. MariaDB Cluster can now take advantage of Group Commit.
For more information on Group Commit, see the Galera documentation.
Get started quickly with MariaDB Galera Cluster using these guides. Follow step-by-step instructions to deploy and configure a highly available, multi-master cluster for your applications.
MariaDB Galera Cluster quickstart guide
MariaDB Galera Cluster provides a multi-primary (active-active) cluster solution for MariaDB, enabling high availability, read/write scalability, and true synchronous replication. This means any node can handle read and write operations, with changes instantly replicated to all other nodes, ensuring no replica lag and no lost transactions. It's exclusively available on Linux.
Before starting, ensure you have:
At least three nodes: For redundancy and avoiding split-brain scenarios (bare-metal or virtual machines).
Linux Operating System: A compatible Debian-based (e.g., Ubuntu, Debian) or RHEL-based (e.g., CentOS, Fedora) distribution.
Synchronized Clocks: All nodes should have NTP configured for time synchronization.
SSH Access: Root or sudo access to all nodes for installation and configuration.
Network Connectivity: All nodes must be able to communicate with each other over specific ports (see Firewall section). Low latency between nodes is ideal.
rsync: Install rsync on all nodes, as it's commonly used for State Snapshot Transfers (SST). Use sudo apt install rsync (Debian/Ubuntu) or sudo yum install rsync (RHEL/CentOS).
Install MariaDB Server and the Galera replication provider on all nodes of your cluster.
a. Add MariaDB Repository:
It's recommended to install from the official MariaDB repositories to get the latest stable versions. Use the MariaDB Repository Configuration Tool (search "MariaDB Repository Generator") to get specific instructions for your OS and MariaDB version.
Example for Debian/Ubuntu (MariaDB 10.11):
sudo apt update
sudo apt install dirmngr software-properties-common apt-transport-https ca-certificates curl -y
curl -LsS https://r.mariadb.com/downloads/mariadb_repo_setup | sudo bash
sudo apt update
b. Install MariaDB Server and Galera:
sudo apt install mariadb-server mariadb-client galera-4 -y # For MariaDB 10.4 and later, galera-4 is the provider.
# For older versions (e.g., 10.3), use galera-3.
c. Secure MariaDB Installation:
Run the security script on each node to set the root password and remove insecure defaults.
sudo mariadb-secure-installation
Set a strong root password.
Answer Y to remove anonymous users, disallow remote root login, remove the test database, and reload privilege tables.
Open the necessary ports on each node's firewall to allow inter-node communication.
# Example for UFW (Ubuntu)
sudo ufw allow 3306/tcp # MariaDB client connections
sudo ufw allow 4567/tcp # Galera replication (multicast and unicast)
sudo ufw allow 4567/udp # Galera replication (multicast)
sudo ufw allow 4568/tcp # Incremental State Transfer (IST)
sudo ufw allow 4444/tcp # State Snapshot Transfer (SST)
sudo ufw reload
sudo ufw enable # If firewall is not already enabled
Adjust for your firewall system (e.g., firewalld for RHEL-based systems).
Configure Galera (galera.cnf on each node): Create a configuration file (e.g., /etc/mysql/conf.d/galera.cnf) on each node. The content will be largely identical, with specific changes for each node's name and address.
Example galera.cnf content:
[mysqld]
# Basic MariaDB settings
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
bind-address=0.0.0.0 # Binds to all network interfaces. Adjust if you have a specific private IP for cluster traffic.
# Galera Provider Configuration
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so # Adjust path if different (e.g., /usr/lib64/galera-4/libgalera_smm.so)
# Galera Cluster Configuration
wsrep_cluster_name="my_galera_cluster" # A unique name for your cluster
# IP addresses of ALL nodes in the cluster, comma-separated.
# Use private IPs if available for cluster communication.
wsrep_cluster_address="gcomm://node1_ip_address,node2_ip_address,node3_ip_address"
# This node's specific configuration
wsrep_node_name="node1" # Must be unique for each node (e.g., node1, node2, node3)
wsrep_node_address="node1_ip_address" # This node's own IP address
Important:
wsrep_cluster_address: List the IP addresses of all nodes in the cluster on every node.
wsrep_node_name: Must be unique for each node (e.g., node1, node2, node3).
wsrep_node_address: Set to the specific IP address of the node you are configuring.
a. Bootstrapping the First Node:
Start MariaDB on the first node with the --wsrep-new-cluster option. This tells it to form a new cluster. Do this only once for the initial node of a new cluster.
sudo systemctl stop mariadb # Ensure it's stopped
sudo galera_new_cluster # This wrapper starts the MariaDB service with the --wsrep-new-cluster option
# Alternatively, start mariadbd directly with the --wsrep-new-cluster option
b. Starting Subsequent Nodes:
For the second and third nodes, start the MariaDB service normally. They will discover and join the existing cluster using the wsrep_cluster_address specified in their configuration.
sudo systemctl start mariadb
After all nodes are started, verify that they have joined the cluster.
a. Check Cluster Size (on any node):
Connect to MariaDB on any node and check the cluster status:
sudo mariadb -u root -p
Inside the MariaDB shell:
SHOW STATUS LIKE 'wsrep_cluster_size';
The Value should match the number of nodes in your cluster (e.g., 3).
b. Test Replication:
On node1, create a new database and a table:
CREATE DATABASE test_db;
USE test_db;
CREATE TABLE messages (id INT AUTO_INCREMENT PRIMARY KEY, text VARCHAR(255));
INSERT INTO messages (text) VALUES ('Hello from node1!');
On node2 (or node3), connect to MariaDB and check for the new database and table:
SHOW DATABASES; -- test_db should appear
USE test_db;
SELECT * FROM messages; -- 'Hello from node1!' should appear
Insert data from node2:
INSERT INTO messages (text) VALUES ('Hello from node2!');
Verify on node1 that the new data is present:
USE test_db;
SELECT * FROM messages; -- 'Hello from node2!' should appear
This confirms synchronous replication is working.
MariaDB Galera Cluster Replication quickstart guide
Galera Replication is a core technology enabling MariaDB Galera Cluster to provide a highly available and scalable database solution. It is characterized by its virtually synchronous replication, ensuring strong data consistency across all cluster nodes.
Galera Replication is a multi-primary replication solution for database clustering. Unlike traditional asynchronous or semi-synchronous replication, Galera ensures that transactions are committed on all nodes (or fail on all) before the client receives a success confirmation. This mechanism eliminates data loss and minimizes replica lag, making all nodes active and capable of handling read and write operations.
The core of Galera Replication revolves around the concept of write sets and the wsrep API:
Write Set Broadcasting: When a client commits a transaction on any node in the cluster, the originating node captures the changes (the "write set") associated with that transaction. This write set is then broadcast to all other nodes in the cluster.
Certification and Application: Each receiving node performs a "certification" test to ensure that the incoming write set does not conflict with any concurrent transactions being committed locally.
If the write set passes certification, it is applied to the local database, and the transaction is committed on that node.
If a conflict is detected, the conflicting transaction (usually the one that was executed locally) is aborted, ensuring data consistency across the cluster.
Virtually Synchronous: The term "virtually synchronous" means that while the actual data application might happen slightly after the commit on the initiating node, the commit order is globally consistent, and all successful transactions are guaranteed to be applied on all active nodes. A transaction is not truly considered committed until it has passed certification on all nodes.
wsrep API: This API defines the interface between the Galera replication library (the "wsrep provider") and the database server (MariaDB). It allows the database to expose hooks for Galera to capture and apply transaction write sets.
Multi-Primary (Active-Active): All nodes in a Galera Cluster can be simultaneously used for both read and write operations.
Synchronous Replication (Virtual): Data is consistent across all nodes at all times, preventing data loss upon node failures.
Automatic Node Provisioning (SST/IST): When a new node joins or an existing node rejoins, Galera automatically transfers the necessary state to bring it up to date.
State Snapshot Transfer (SST): A full copy of the database is transferred from an existing node to the joining node.
Incremental State Transfer (IST): Only missing write sets are transferred if the joining node is not too far behind.
Automatic Membership Control: Nodes automatically detect and manage cluster membership changes (nodes joining or leaving).
Galera Replication essentially transforms a set of individual MariaDB servers into a robust, highly available, and consistent distributed database system.
MariaDB Galera Cluster usage guide
This guide provides essential information for effectively using and interacting with a running MariaDB Galera Cluster. It covers connection methods, operational considerations, monitoring, and best practices for applications.
Since Galera Cluster is multi-primary, any node can accept read and write connections.
a. Using a Load Balancer (Recommended for Production):
Deploying a load balancer or proxy (like MariaDB MaxScale, ProxySQL, or HAProxy) is the recommended approach.
MariaDB MaxScale: Provides intelligent routing (e.g., readwritesplit, router), connection pooling, and advanced cluster awareness (e.g., binlogrouter for replication clients, switchover for failover).
Other Load Balancers: Configure them to distribute connections across your Galera nodes, typically using health checks on port 3306 or other cluster-specific checks.
b. Direct Connection:
You can connect directly to any individual node's IP address or hostname using standard MariaDB client tools or connectors (e.g., mariadb command-line client, MariaDB Connector/J, Connector/Python).
Example (command line):
mariadb -h <node_ip_address> -u <username> -p
While simple, this method lacks automatic failover; your application would need to handle connection retries and failover logic.
Active-Active: You can perform both read and write operations on any node in the cluster. All successful write operations are synchronously replicated to all other nodes.
Transactions: Standard SQL transactions (START TRANSACTION, COMMIT, ROLLBACK) work as expected. Galera handles the replication of committed transactions.
DDL operations (like CREATE TABLE, ALTER TABLE, DROP TABLE) require special attention in a synchronous multi-primary cluster to avoid conflicts and outages.
Total Order Isolation (TOI) - Default:
This is Galera's default DDL method.
The DDL statement is executed on all nodes in the same order, and it temporarily blocks other transactions on all nodes while it applies.
It ensures consistency but can cause brief pauses in application activity, especially on busy clusters.
Best Practice: Execute DDL during maintenance windows or low-traffic periods.
Rolling Schema Upgrade (RSU) / Percona's pt-online-schema-change:
For large tables or critical production systems, use tools like pt-online-schema-change (from Percona Toolkit), which performs DDL without blocking writes.
This tool works by creating a new table, copying data, applying changes, and then swapping the tables. It's generally preferred for minimizing downtime for ALTER TABLE operations.
wsrep_OSU_method:
This system variable controls how DDL operations are executed; see the sketch after this list.
TOI (default): Total Order Isolation.
RSU: Rolling Schema Upgrade (requires manual steps with pt-online-schema-change).
NBO (Non-Blocking Operations): A newer method allowing non-blocking DDL for some operations, but not fully implemented for all DDL types. Use with caution and test thoroughly.
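For example, a session could switch to RSU for a schema change and then return to the default. Note that under RSU the statement is not replicated, so it must be repeated on each node in turn; the table here is hypothetical:
SET SESSION wsrep_OSU_method='RSU';
ALTER TABLE app_settings ADD COLUMN notes TEXT;
SET SESSION wsrep_OSU_method='TOI';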
Regularly monitor your Galera Cluster to ensure its health and consistency. Key status variables are listed below; a combined query follows the list.
wsrep_cluster_size: Number of nodes currently in the Primary Component.
SHOW STATUS LIKE 'wsrep_cluster_size';
Expected value: the total number of nodes configured (e.g., 3).
wsrep_local_state_comment / wsrep_local_state: The state of the current node.
Synced (4): Node is fully synchronized and operational.
Donor/Desync (2): Node is transferring state to another node.
Joining (1): Node is in the process of joining the cluster.
Donor/Stalled (1): Node is stalled.
wsrep_incoming_addresses: List of incoming connections from other cluster nodes.
wsrep_cert_deps_distance: Average distance between the lowest and highest sequence numbers that can potentially be applied in parallel; a rough measure of how much parallelism the workload allows.
wsrep_flow_control_paused: Fraction of time the node was paused due to flow control. High values indicate a bottleneck.
wsrep_local_recv_queue / wsrep_local_send_queue: Size of the receive/send queue. Ideally, these should be close to 0. Sustained high values indicate replication lag or node issues.
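A minimal combined check, using only the status variables documented above:
SHOW GLOBAL STATUS WHERE Variable_name IN
('wsrep_cluster_size', 'wsrep_local_state_comment',
 'wsrep_flow_control_paused', 'wsrep_local_recv_queue');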
Galera Cluster is designed for automatic recovery, but understanding the process is key.
Node Failure: If a node fails, the remaining nodes continue to operate as the Primary Component. The failed node will automatically attempt to rejoin when it comes back online.
Split-Brain Scenarios: If the network partitions the cluster, nodes will try to form a "Primary Component." The partition with the majority of nodes forms the new Primary Component. If no majority can be formed (e.g., a 2-node cluster splits), the cluster will become inactive. A 3-node or higher cluster is recommended to avoid this.
Manual Bootstrapping (Last Resort): If the entire cluster goes down or a split-brain occurs where no Primary Component forms, you might need to manually "bootstrap" a new Primary Component from one of the healthy nodes.
Choose the node that was most up-to-date (see the sketch after these steps).
Stop MariaDB on that node.
Start it with sudo galera_new_cluster (or start mariadbd directly with the --wsrep-new-cluster option).
Start other nodes normally; they will rejoin the bootstrapped component.
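When every node is already stopped, one way to find the most up-to-date node is to compare the Galera state file on each node; a sketch assuming the default data directory:
cat /var/lib/mysql/grastate.dat # compare seqno across nodes; the highest wins
If seqno is -1 after an unclean shutdown, starting mariadbd with the --wsrep-recover option logs the recovered position instead.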
Use Connection Pooling: Essential for managing connections efficiently in high-traffic applications.
Short Transactions: Keep transactions as short and concise as possible to minimize conflicts and improve throughput. Long-running transactions increase the risk of rollbacks due to certification failures.
Primary Keys: All tables should have a primary key. Galera relies on primary keys for efficient row-level replication. Tables without primary keys can cause performance degradation and issues.
Retry Logic: Implement retry logic in your application for failed transactions (e.g., due to certification failures, deadlock, or temporary network issues).
Connect to a Load Balancer: Always direct your application's connections through a load balancer or proxy to leverage automatic failover and intelligent routing.
By following these guidelines, you can effectively manage and operate your MariaDB Galera Cluster for high availability and performance.
Galera Management in MariaDB handles synchronous multi-master replication, ensuring high availability, data consistency, failover, and seamless node provisioning across clusters.
The instructions on this page were used to create the galera package on the Fedora Linux distribution. This package contains the wsrep provider for MariaDB Galera Cluster.
The following versions of the Galera 4 wsrep provider are available. If you would like to install Galera 4 using yum, apt, or zypper, then the package is called galera-4.
Galera 4 wsrep provider versions: 26.4.2, 26.4.1, 26.4.0
The following versions of the Galera 3 wsrep provider are available. If you would like to install Galera 3 using yum, apt, or zypper, then the package is called galera.
Galera 3 wsrep provider versions: 25.3.37, 25.3.35, 25.3.34, 25.3.33, 25.3.32, 25.3.27, 25.3.25, 25.3.24, 25.3.23, 25.3.22, 25.3.21, 25.3.20, 25.3.19, 25.3.18, 25.3.16, 25.3.15, 25.3.12, 25.3.11, 25.3.10, 25.3.8, 25.3.7, 25.3.6, 25.3.4, 25.3.3
Earlier versions of the Galera 2 wsrep provider were also released with MariaDB.
For convenience, a galera package containing the preferred wsrep provider is included in the MariaDB YUM and APT repositories.
See also Deciphering Galera Version Numbers.
Install the prerequisites:
sudo yum update
sudo yum -y install boost-devel check-devel glibc-devel openssl-devel scons
Clone galera.git from github.com/mariadb and checkout the mariadb-3.x branch:
git init repo
cd repo
git clone -b mariadb-3.x https://github.com/MariaDB/galera.git
Build the packages by executing the build.sh script under the scripts/ directory with the -p switch:
cd galera
./scripts/build.sh -p
When finished, you will have an RPM package containing the Galera library, arbitrator, and related files in the current directory. Note: The same set of instructions can be applied to other RPM-based platforms to generate the Galera package.
The instructions on this page were used to create the galera package on the Ubuntu and Debian Linux distributions. This package contains the wsrep provider for MariaDB Galera Cluster.
The version of the wsrep provider is 25.3.5. We also provide 25.2.9 for those that need or want it. Prior to that, the wsrep version was 23.2.7.
Install prerequisites:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get -y install check debhelper libasio-dev libboost-dev libboost-program-options-dev libssl-dev scons
Clone galera.git from github.com/mariadb and check out the mariadb-3.x branch:
git init repo
cd repo
git clone -b mariadb-3.x https://github.com/MariaDB/galera.git
Build the packages by executing build.sh under the scripts/ directory with the -p switch:
cd galera
./scripts/build.sh -p
When finished, you will have the Debian packages for galera library and arbitrator in the parent directory.
If you want to run the galera test suite (mysql-test-run --suite=galera), you need to install the galera library as either /usr/lib/galera/libgalera_smm.so or /usr/lib64/galera/libgalera_smm.so.
A number of options need to be set in order for Galera Cluster to work when using MariaDB. These should be set in the MariaDB option file.
Several options are mandatory, which means that they must be set in order for Galera Cluster to be enabled or to work properly with MariaDB. The mandatory options are:
wsrep_provider — Path to the Galera library
wsrep_on=ON — Enable wsrep replication
default_storage_engine=InnoDB — This is the default value. Alternately, set wsrep_replicate_myisam=1 (before MariaDB 10.6) or wsrep_mode=REPLICATE_ARIA,REPLICATE_MYISAM (MariaDB 10.6 and later)
innodb_doublewrite=1 — This is the default value, and should not be changed.
These are optional optimizations that can be made to improve performance.
innodb_flush_log_at_trx_commit=0 — This is not usually recommended in the case of standard MariaDB. However, it is a safer, recommended option with Galera Cluster, since inconsistencies can always be fixed by recovering from another node.
innodb_autoinc_lock_mode=2 — This tells InnoDB to use the interleaved lock mode. Interleaved is the fastest and most scalable lock mode, and should be used when BINLOG_FORMAT is set to ROW. By setting the InnoDB auto-increment lock mode to interleaved, you allow slave threads to operate in parallel.
wsrep_slave_threads=4 — This makes state transfers quicker for new nodes. You should start with four slave threads per CPU core. The logic here is that, in a balanced system, four slave threads can typically saturate a CPU core. However, I/O performance can increase this figure several times over. For example, a single-core ThinkPad R51 with a 4200 RPM drive can use thirty-two slave threads. The value should not be set higher than wsrep_cert_deps_distance.
Like with MariaDB replication, write sets that are received by a node with Galera Cluster's certification-based replication are not written to the binary log by default. If you would like a node to write its replicated write sets to the binary log, then you will have to set log_slave_updates=ON. This is especially helpful if the node is a replication master. See Using MariaDB Replication with MariaDB Galera Cluster: Configuring a Cluster Node as a Replication Master.
Like with MariaDB replication, replication filters can be used to filter write sets from being replicated by Galera Cluster's certification-based replication. However, they should be used with caution because they may not work as you'd expect.
The following replication filters are honored for InnoDB DML, but not DDL:
The following replication filters are honored for DML and DDL for tables that use both the InnoDB and MyISAM storage engines:
However, it should be kept in mind that if replication filters cause inconsistencies that lead to replication errors, then nodes may abort.
See also MDEV-421 and MDEV-6229.
Galera Cluster needs access to the following ports:
Standard MariaDB Port (default: 3306) - For MySQL client connections and State Snapshot Transfers that use the mysqldump method. This can be changed by setting port.
Galera Replication Port (default: 4567) - For Galera Cluster replication traffic, multicast replication uses both UDP transport and TCP on this port. Can be changed by setting wsrep_node_address.
Galera Replication Listening Interface (default: 0.0.0.0:4567) needs to be set using gmcast.listen_addr, either in wsrep_provider_options: wsrep_provider_options='gmcast.listen_addr=tcp://<IP_ADDR>:<PORT>;' or in wsrep_cluster_address.
IST Port (default: 4568) - For Incremental State Transfers. Can be changed by setting ist.recv_addr in wsrep_provider_options.
SST Port (default: 4444) - For all State Snapshot Transfer methods other than mysqldump. Can be changed by setting wsrep_sst_receive_address.
If you want to run multiple Galera Cluster instances on one server, then you can do so by starting each instance with mysqld_multi, or if you are using systemd, then you can use the relevant systemd method for interacting with multiple MariaDB instances.
You need to ensure that each instance is configured with a different datadir.
You also need to ensure that each instance is configured with different network ports.
URLs in Galera take a particular format:
<schema>://<cluster_address>[?option1=value1[&option2=value2]]
gcomm - This is the option to use for a working implementation.
dummy - Used for running tests and profiling; it does not do any actual replication, and all following parameters are ignored.
The cluster address shouldn't be empty, as in gcomm://. An empty address should never be hardcoded into any configuration files.
To connect the node to an existing cluster, the cluster address should contain the address of any member of the cluster you want to join.
The cluster address can also contain a comma-separated list of multiple members of the cluster. It is good practice to list all possible members of the cluster, for example: gcomm://<node1 name or ip>,<node2 name or ip>,<node3 name or ip>
Alternately, if multicast is used, put the multicast address instead of the list of nodes. Each member address or multicast address can specify <node name or ip>:<port> if a non-default port is used.
The wsrep_provider_options variable is used to set a list of options. These parameters can also be provided (and overridden) as part of the URL. Unlike options provided in a configuration file, they will not endure and need to be resubmitted with each connection.
A useful option to set is pc.wait_prim=no to ensure the server will start running even if it can't determine a primary node. This is useful if all members go down at the same time.
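For example, a cluster address listing three members, one on a non-default port, with a provider option passed in the URL (addresses are illustrative):
gcomm://192.0.2.1,192.0.2.2:5678,192.0.2.3?pc.wait_prim=no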
By default, gcomm listens on all interfaces. The port is either provided in the cluster address or will default to 4567 if not set.
To facilitate development and QA, we have created some test repositories for the Galera wsrep provider.
These are test repositories. There will be periods when they do not work at all, or work incorrectly, or possibly cause earthquakes, typhoons, and tornadoes. You have been warned.
Replace ${dist} in the code below with the YUM-based distribution you are testing. Valid distributions are:
centos5-amd64
centos5-x86
centos6-amd64
centos6-x86
centos7-amd64
rhel5-amd64
rhel5-x86
rhel6-amd64
rhel6-x86
rhel6-ppc64
rhel7-amd64
rhel7-ppc64
rhel7-ppc64le
fedora22-amd64
fedora22-x86
fedora23-amd64
fedora23-x86
fedora24-amd64
fedora24-x86
opensuse13-amd64
opensuse13-x86
sles11-amd64
sles11-x86
sles12-amd64
sles12-ppc64le
# Place this code block in a file at /etc/yum.repos.d/galera.repo
[galera-test]
name = galera-test
baseurl = http://yum.mariadb.org/galera/repo/rpm/${dist}
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
Replace ${dist} in the code below with the APT-based distribution you are testing. Valid ones are:
wheezy
jessie
sid
precise
trusty
xenial
# run the following command:
sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xcbcb082a1bb943db 0xF1656F24C74CD1D8
# Add the following line to your /etc/apt/sources.list file:
deb http://yum.mariadb.org/galera/repo/deb ${dist} main
The current version of the Galera wsrep provider library is 26.4.21 (Galera 4). For convenience, packages containing this library are included in the MariaDB YUM and APT repositories.
Currently, MariaDB Galera Cluster only supports the InnoDB storage engine (although there is experimental support for MyISAM and, from MariaDB 10.6, Aria).
A great resource for Galera users is the mailing list run by the developers at Codership. It can be found at Codership on Google Groups. If you use Galera, then it is recommended you subscribe.
MariaDB Galera Cluster is powered by:
MariaDB Server.
The MySQL-wsrep patch for MySQL Server and MariaDB Server developed by Codership. The patch currently supports only Unix-like operating systems.
The MySQL-wsrep patch has been merged into MariaDB Server. This means that the functionality of MariaDB Galera Cluster can be obtained by installing the standard MariaDB Server packages and the Galera wsrep provider library package. The following Galera version corresponds to each MariaDB Server version:
MariaDB Galera Cluster uses Galera 4. This means that the MySQL-wsrep patch is version 26 and the Galera wsrep provider library is version 4.
See Deciphering Galera Version Numbers for more information about how to interpret these version numbers.
See What is MariaDB Galera Cluster: Galera Versions for more information about which specific Galera version is included in each release of MariaDB Server.
In supported builds, Galera Cluster functionality can be enabled by setting some configuration options that are mentioned below. Galera Cluster functionality is not enabled in a standard MariaDB Server installation unless explicitly enabled with these configuration options.
During normal operation, a MariaDB Galera node consumes no more memory than a regular MariaDB server. Additional memory is consumed for the certification index and uncommitted write sets, but normally, this should not be noticeable in a typical application. There is one exception, though:
Writeset caching during state transfer. When a node is receiving a state transfer, it cannot process and apply incoming writesets because it has no state to apply them to yet. Depending on the state transfer mechanism (e.g., mysqldump), the node that sends the state transfer may not be able to apply writesets either. Thus, both nodes need to cache those writesets for a catch-up phase. Currently, the writesets are cached in memory and, if the system runs out of memory, either the state transfer will fail or the cluster will block waiting for the state transfer to end.
To control memory usage for writeset caching, check the Galera parameters gcs.recv_q_hard_limit, gcs.recv_q_soft_limit, and gcs.max_throttle.
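As an illustrative sketch (the values here are examples, not tuned recommendations), these parameters can be set through wsrep_provider_options in a server option group:
[mariadb]
...
wsrep_provider_options="gcs.recv_q_hard_limit=2G;gcs.recv_q_soft_limit=0.25;gcs.max_throttle=0.25"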
Before using MariaDB Galera Cluster, we would recommend reading through the known limitations, so you can be sure that it is appropriate for your application.
To use MariaDB Galera Cluster, there are two primary packages that you need to install:
A MariaDB Server version that supports Galera Cluster.
The Galera wsrep provider library.
As mentioned in the previous section, Galera Cluster support is included in the standard MariaDB Server packages. That means that installing the MariaDB Galera Cluster packages is the same as installing the standard MariaDB Server packages in those versions. However, you will also have to install an additional package to obtain the Galera wsrep provider library.
Some SST methods may also require additional packages to be installed. The mariadb-backup SST method is generally the best option for large clusters that expect a heavy load.
MariaDB Galera Cluster can be installed via a package manager on Linux. In order to do so, your system needs to be configured to install from one of the MariaDB repositories.
You can configure your package manager to install it from MariaDB Corporation's MariaDB Package Repository by using the MariaDB Package Repository setup script.
You can also configure your package manager to install it from MariaDB Foundation's MariaDB Repository by using the MariaDB Repository Configuration Tool.
On RHEL, CentOS, Fedora, and other similar Linux distributions, it is highly recommended to install the relevant RPM packages from MariaDB's repository using yum or dnf. Starting with RHEL 8 and Fedora 22, yum has been replaced by dnf, which is the next major version of yum. However, yum commands still work on many systems that use dnf.
To install MariaDB Galera Cluster with yum or dnf, follow the instructions at Installing MariaDB Galera Cluster with yum.
On Debian, Ubuntu, and other similar Linux distributions, it is highly recommended to install the relevant DEB packages from MariaDB's repository using apt-get.
To install MariaDB Galera Cluster with apt-get, follow the instructions at Installing MariaDB Galera Cluster with apt-get.
On SLES, OpenSUSE, and other similar Linux distributions, it is highly recommended to install the relevant RPM packages from MariaDB's repository using zypper.
To install MariaDB Galera Cluster with zypper, follow the instructions at Installing MariaDB Galera Cluster with ZYpp.
To install MariaDB Galera Cluster with a binary tarball, follow the instructions at Installing MariaDB Binary Tarballs.
To make the location of the libgalera_smm.so library in binary tarballs more similar to its location in other packages, the library is now found at lib/galera/libgalera_smm.so in the binary tarballs, and there is a symbolic link in the lib directory that points to it.
To install MariaDB Galera Cluster by compiling it from source, you will have to compile both MariaDB Server and the Galera wsrep provider library. For some information on how to do this, see the pages at Installing Galera From Source. The pages at Compiling MariaDB From Source and Galera Cluster Documentation: Building Galera Cluster for MySQL may also be helpful.
A number of options need to be set in order for Galera Cluster to work when using MariaDB. See Configuring MariaDB Galera Cluster for more information.
The first node of a new cluster needs to be bootstrapped by starting mariadbd on that node with the --wsrep-new-cluster option. This option tells the node that there is no existing cluster to connect to. The node will create a new UUID to identify the new cluster.
Do not use the --wsrep-new-cluster option when connecting to an existing cluster. Restarting a node with this option set will cause the node to create a new UUID to identify the cluster again, and the node won't reconnect to the old cluster. See the next section about how to reconnect to an existing cluster.
For example, if you were manually starting mariadbd on a node, then you could bootstrap it by executing the following:
$ mariadbd --wsrep-new-cluster
However, keep in mind that most users are not going to be starting mariadbd manually. Instead, most users will use a service manager to start mariadbd. See the following sections on how to bootstrap a node with the most common service managers.
On operating systems that use systemd, a node can be bootstrapped in the following way:
$ galera_new_cluster
This wrapper uses systemd to run mariadbd with the --wsrep-new-cluster option.
If you are using the systemd service that supports the systemd service's method for interacting with multiple MariaDB Server processes, then you can bootstrap a specific instance by specifying the instance name as a suffix. For example:
$ galera_new_cluster mariadb@node1
Systemd support and the galera_new_cluster script were added.
On operating systems that use SysVinit, a node can be bootstrapped in the following way:
$ service mysql bootstrap
This runs mariadbd with the --wsrep-new-cluster option.
Once you have a cluster running and you want to add/reconnect another node to it, you must supply an address of one or more of the existing cluster members in the wsrep_cluster_address option. For example, if the first node of the cluster has the address 192.168.0.1, then you could add a second node to the cluster by setting the following option in a server option group in an option file:
[mariadb]
...
wsrep_cluster_address=gcomm://192.168.0.1 # DNS names work as well, IP is preferred for performance
The new node only needs to connect to one of the existing cluster nodes. Once it connects to one of the existing cluster nodes, it will be able to see all of the nodes in the cluster. However, it is generally better to list all nodes of the cluster in wsrep_cluster_address, so that any node can join a cluster by connecting to any of the other cluster nodes, even if one or more of the cluster nodes are down. It is even OK to list a node's own IP address in wsrep_cluster_address, since Galera Cluster is smart enough to ignore it.
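For example, a three-node cluster with illustrative addresses could use the following on every node:
[mariadb]
...
wsrep_cluster_address=gcomm://192.168.0.1,192.168.0.2,192.168.0.3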
Once all members agree on the membership, the cluster's state will be exchanged. If the new node's state is different from that of the cluster, then it will request an IST or SST to make itself consistent with the other nodes.
If you shut down all nodes at the same time, then you have effectively terminated the cluster. Of course, the cluster's data still exists, but the running cluster no longer exists. When this happens, you'll need to bootstrap the cluster again.
If the cluster is not bootstrapped and mariadbd on the first node is just started normally, then the node will try to connect to at least one of the nodes listed in the wsrep_cluster_address option. If no nodes are currently running, then this will fail. Bootstrapping the first node solves this problem.
In some cases Galera will refuse to bootstrap a node if it detects that it might not be the most advanced node in the cluster. Galera makes this determination if the node was not the last one in the cluster to be shut down or if the node crashed. In those cases, manual intervention is needed.
If you know for sure which node is the most advanced, you can edit the grastate.dat file in the datadir and set safe_to_bootstrap=1 on that node.
You can determine which node is the most advanced by checking grastate.dat on each node and looking for the node with the highest seqno. If the node crashed and seqno=-1, then you can find the most advanced node by recovering the seqno on each node with the wsrep_recover option. For example:
$ mariadbd --wsrep_recover
On operating systems that use systemd, the position of a node can be recovered by running the galera_recovery script. For example:
$ galera_recovery
If you are using the systemd service that supports the systemd service's method for interacting with multiple MariaDB Server processes, then you can recover the position of a specific instance by specifying the instance name as a suffix. For example:
$ galera_recovery mariadb@node1
The galera_recovery script recovers the position of a node by running mariadbd with the wsrep_recover option.
When the galera_recovery script runs mariadbd, it does not write to the error log. Instead, it redirects mariadbd log output to a file named with the format /tmp/wsrep_recovery.XXXXXX, where XXXXXX is replaced with random characters.
When Galera is enabled, MariaDB's systemd service automatically runs the galera_recovery script prior to starting MariaDB, so that MariaDB starts with the proper Galera position.
Support for systemd and the galera_recovery script were added.
In a State Snapshot Transfer (SST), the cluster provisions nodes by transferring a full data copy from one node to another. When a new node joins the cluster, the new node initiates a State Snapshot Transfer to synchronize its data with a node that is already part of the cluster.
See Introduction to State Snapshot Transfers (SSTs) for more information.
In an Incremental State Transfer (IST), the cluster provisions nodes by transferring a node's missing writesets from one node to another. When a node rejoins the cluster after a short absence, it can initiate an Incremental State Transfer to synchronize its data with a node that is already part of the cluster.
If a node has only been out of a cluster for a little while, then an IST is generally faster than an SST.
MariaDB Galera Cluster supports Data at Rest Encryption. See SSTs and Data at Rest Encryption for some disclaimers on how SSTs are affected when encryption is configured.
Some data still cannot be encrypted:
The disk-based Galera gcache is not encrypted (MDEV-8072).
Galera Cluster's status variables can be queried with the standard SHOW STATUS command. For example:
SHOW GLOBAL STATUS LIKE 'wsrep_%';
The cluster nodes can be configured to invoke a command when cluster membership or node status changes. This mechanism can also be used to communicate the event to some external monitoring agent. This is configured by setting wsrep_notify_cmd. See Galera Cluster documentation: Notification Command for more information.
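As a minimal sketch, assuming a hypothetical script at /usr/local/bin/wsrep_notify.sh, the option could be set as follows, with a script that simply logs each event:
[mariadb]
...
wsrep_notify_cmd=/usr/local/bin/wsrep_notify.sh

#!/bin/sh
# Hypothetical notification script: append the arguments passed by the
# server (such as --status and --members) to a log file for inspection.
echo "$(date) wsrep notification: $*" >> /var/log/mysql/wsrep_notify.log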
This page is licensed: CC BY-SA / Gnu FDL
There are binary installation packages available for RPM and Debian-based distributions, which will pull in all required Galera dependencies.
If these are not available, you will need to build Galera from source.
The wsrep API for Galera Cluster is included by default. Follow the usual Compiling MariaDB From Source instructions.
make cannot manage dependencies for the build process, so the following packages need to be installed first:
RPM-based:
yum-builddep MariaDB-server
Debian-based:
apt-get build-dep mariadb-server
If running on an alternative system, or if the commands are not available, the following packages are required. You will need to check the repositories for the correct package names on your distribution; these may differ between distributions or require additional packages:
Git, CMake (on Fedora, both cmake and cmake-fedora are required), GCC and GCC-C++, Automake, Autoconf, and Bison, as well as development releases of libaio and ncurses.
You can use Git to download the source code, as MariaDB source code is available through GitHub. Clone the repository:
git clone https://github.com/mariadb/server mariadb
Check out the branch (e.g., 10.5-galera or 11.1-galera), for example:
cd mariadb
git checkout 10.5-galera
The standard and Galera Cluster database servers are the same, except that for Galera Cluster, the wsrep API patch is included. Enable the patch with the CMake configuration options WITH_WSREP and WITH_INNODB_DISALLOW_WRITES. To build the database server, run the following commands:
cmake -DWITH_WSREP=ON -DWITH_INNODB_DISALLOW_WRITES=ON .
make
make install
There are also some build scripts in the BUILD/ directory, which may be more convenient to use. For example, the following pre-configures the build options discussed above:
./BUILD/compile-pentium64-wsrep
There are several others as well, so you can select the most convenient.
Besides the server with Galera support, you will also need a Galera provider.
make cannot manage dependencies itself, so the following packages need to be installed first:
apt-get install -y scons check
If running on an alternative system, or if the commands are not available, the following packages are required. You will need to check the repositories for the correct package names on your distribution; these may differ between distributions or require additional packages:
SCons, as well as development releases of Boost (libboost_program_options, libboost_headers1), Check and OpenSSL.
Run:
git clone -b mariadb-4.x https://github.com/MariaDB/galera.git
After this, the source files for the Galera provider will be in the galera directory.
The Galera Replication Plugin both implements the wsrep API and operates as the database server's wsrep Provider. To build, cd into the galera/ directory and do:
git submodule init
git submodule update
./scripts/build.sh
mkdir /usr/lib64/galera
cp libgalera_smm.so /usr/lib64/galera
The path to libgalera_smm.so needs to be defined in the my.cnf configuration file.
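A minimal sketch of the relevant my.cnf lines, assuming the provider library was copied to /usr/lib64/galera as shown above:
[mariadb]
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so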
Building the Galera Replication Plugin from source on FreeBSD runs into issues due to Linux dependencies. To overcome these, either install the binary package (pkg install galera) or use the ports build available at /usr/ports/databases/galera.
After building, a number of other steps are necessary:
Create the database server user and group:
groupadd mysql
useradd -g mysql mysql
Install the database (the path may be different if you specified CMAKE_INSTALL_PREFIX):
cd /usr/local/mysql
./scripts/mariadb-install-db --user=mysql
If you want to install the database in a location other than /usr/local/mysql/data, use the --basedir or --datadir options.
Change the user and group permissions for the base directory.
chown -R mysql /usr/local/mysql
chgrp -R mysql /usr/local/mysql
Create a system unit for the database server.
cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysql
chmod +x /etc/init.d/mysql
chkconfig --add mysql
Galera Cluster can now be started using the service command and is set to start at boot.
This page is licensed: CC BY-SA / Gnu FDL
Get MariaDB Galera on IBM Cloud
You should have an IBM Cloud account; otherwise, you can register here. At the end of the tutorial, you will have a cluster with MariaDB up and running. IBM Cloud uses Bitnami charts to deploy MariaDB Galera with Helm.
We will provision a new Kubernetes cluster for you. If you already have one, skip to step 2.
We will deploy the IBM Cloud Block Storage plug-in; if you already have it, skip to step 3
MariaDB Galera deployment
Click the Catalog button on the top
Select Service from the catalog
Search for Kubernetes Service and click on it
You are now at the Kubernetes deployment page; you need to specify some details about the cluster
Choose a standard or free plan; the free plan only has one worker node and no subnet. To provision a standard cluster, you will need to upgrade your account to Pay-As-You-Go.
To upgrade to a Pay-As-You-Go account, complete the following steps:
In the console, go to Manage > Account.
Select Account settings, and click Add credit card.
Enter your payment information, click Next, and submit your information.
Choose classic or VPC, read the docs, and choose the most suitable type for yourself
Now choose your location settings; for more information, please visit Locations
Choose Geography (continent)
Choose Single or Multizone. In a single zone, your data is only kept in one datacenter; with Multizone, it is distributed to multiple zones, making it safer in the event of an unforeseen zone failure.
Choose a Worker Zone if using Single zones or Metro if Multizone
If you wish to use Multizone, please set up your account with VRF or enable VLAN spanning.
If there is no available VLAN at your current location selection, a new VLAN will be created for you.
Choose a Worker node setup or use the preselected one; set the Worker node amount per zone.
Choose the Master Service Endpoint. In VRF-enabled accounts, you can choose private-only to make your master accessible on the private network or via VPN tunnel. Choose public-only to make your master publicly accessible. When you have a VRF-enabled account, your cluster is set up by default to use both private and public endpoints. For more information, visit endpoints.
Give the cluster a name.
Give desired tags to your cluster; for more information, visit tags
Click create
Wait for your cluster to be provisioned.
Your cluster is ready for usage
The Block Storage plug-in is a persistent, high-performance iSCSI storage that you can add to your apps by using Kubernetes Persistent Volumes (PVs).
Click the Catalog button on the top
Select Software from the catalog
Search for IBM Cloud Block Storage plug-in and click on it
On the application page, click the dot next to the cluster you wish to use.
Click on Enter or Select Namespace and choose the default Namespace or use a custom one (if you get an error, please wait 30 minutes for the cluster to finalize).
Give a name to this workspace
Click install and wait for the deployment
We will deploy MariaDB on our cluster
Click the Catalog button on the top
Select Software from the catalog
Search for MariaDB and click on it
On the application page, click the dot next to the cluster you wish to use.
Click on Enter or Select Namespace and choose the default Namespace or use a custom one
Give a unique name to the workspace, one which you can easily recognize.
Select which resource group you want to use; it's for access control and billing purposes. For more information, please visit resource groups.
Give tags to your MariaDB Galera, for more information visit tags
Click on Parameters with default values. You can set deployment values or use the default ones.
Please set the MariaDB Galera root password in the parameters
After finishing everything, tick the box next to the agreements and click install
The MariaDB Galera workspace will start installing, wait a couple of minutes
Your MariaDB Galera workspace has been successfully deployed
Go to Resources in your browser
Click on Clusters
Click on your Cluster
Now you are at your clusters overview, here Click on Actions and Web terminal from the dropdown menu
Click install and wait a couple of minutes.
Click on Actions
Click Web terminal, and a terminal will open up
Type in the terminal; please change NAMESPACE to the namespace you chose at the deployment setup:
$ kubectl get ns
$ kubectl get pod -n NAMESPACE -o wide
$ kubectl get service -n NAMESPACE
Enter your pod with bash; please replace PODNAME with your mariadb pod's name
$ kubectl exec --stdin --tty PODNAME -n NAMESPACE -- /bin/bash
After you are in your pod, please verify that MariaDB is running on your pod's cluster. Please enter the root password after the prompt:
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
You have successfully deployed MariaDB Galera on IBM Cloud!
This page is licensed: CC BY-SA / Gnu FDL
This article contains information on known problems and limitations of MariaDB Galera Cluster.
Currently, replication works only with the InnoDB storage engine. Any writes to tables of other types, including system (mysql) tables, are not replicated (this limitation excludes DDL statements such as CREATE USER, which implicitly modify the mysql tables; those are replicated). There is, however, experimental support for MyISAM; see the wsrep_replicate_myisam system variable.
Unsupported explicit locking includes LOCK TABLES, FLUSH TABLES {explicit table list} WITH READ LOCK, and lock functions (GET_LOCK(), RELEASE_LOCK(), ...). Using transactions properly should be able to overcome these limitations. Global locking operators like FLUSH TABLES WITH READ LOCK are supported.
All tables should have a primary key (multi-column primary keys are supported). DELETE operations are unsupported on tables without a primary key. Also, rows in tables without a primary key may appear in a different order on different nodes.
The general query log and the slow query log cannot be directed to a table. If you enable these logs, then you must forward the log to a file by setting log_output=FILE.
Transaction size. While Galera does not explicitly limit the transaction size, a write set is processed as a single memory-resident buffer, and as a result, extremely large transactions (e.g. LOAD DATA) may adversely affect node performance. To avoid that, the wsrep_max_ws_rows and wsrep_max_ws_size system variables limit transaction rows to 128K and the transaction size to 2GB by default. If necessary, users may want to increase those limits. Future versions will add support for transaction fragmentation.
If you are using mysqldump for state transfer, and it fails for whatever reason (e.g., you do not have the database account it attempts to connect with, or it does not have the necessary permissions), you will see an SQL SYNTAX error in the server error log. Don't let it fool you, this is just a fancy way to deliver a message (the pseudo-statement inside the bogus SQL will contain the error message).
Do not use transactions of any substantial size. Just to insert 100K rows, the server might require an additional 200-300 MB. In a less fortunate scenario, it can be 1.5 GB for 500K rows, or 3.5 GB for 1M rows. See MDEV-466 for some numbers (you'll see that it's closed, but not because it was fixed).
Locking is lax when DDL is involved. For example, if your DML transaction uses a table, and a parallel DDL statement is started, in the normal MySQL setup, it would have waited for the metadata lock, but in the Galera context, it will be executed right away. It happens even if you are running a single node, as long as you have configured it as a cluster node. See also MDEV-468. This behavior might cause various side effects; the consequences have not been investigated yet. Try to avoid such parallelism.
Do not rely on auto-increment values to be sequential. Galera uses a mechanism based on autoincrement increment to produce unique non-conflicting sequences, so on every single node, the sequence will have gaps. See Managing Auto Increments with Multi Masters
A command may fail with ER_UNKNOWN_COM_ERROR, producing the 'WSREP has not yet prepared node for application use' (or 'Unknown command' in older versions) error message. It happens when a cluster is suspected to be split and the node is in the smaller part, for example, during a network glitch, when nodes temporarily lose each other. It can also occur during state transfer. The node takes this measure to prevent data inconsistency. It's usually a temporary state; it can be detected by checking the wsrep_ready value. The node, however, allows the SHOW and SET commands during this period.
After a temporary split, if the 'good' part of the cluster is still reachable and its state was modified, resynchronization occurs. As a part of it, nodes of the 'bad' part of the cluster drop all client connections. It might be quite unexpected, especially if the client was idle and did not even know anything was happening. Please also note that after the connection to the isolated node is restored, if there is a flow on the node, it takes a long time for it to synchronize, during which the "good" node says that the cluster is already of the normal size and synced, while the rejoining node says it's only joined (but not synced). The connections keep getting 'unknown command'. It should pass eventually.
While binlog_format is checked on startup and can only be ROW (see Binary Log Formats), it can still be changed at runtime. Do NOT change binlog_format at runtime: it is likely not only to cause replication failure, but to make all other nodes crash.
If you are using rsync for state transfer, and a node crashes before the state transfer is over, the rsync process might hang forever, occupying the port and preventing the node from restarting. The problem will show up as 'port in use' in the server error log. Find the orphaned rsync process and kill it manually.
Performance: By design, the performance of the cluster cannot be higher than the performance of the slowest node; however, even if you have only one node, its performance can be considerably lower compared to running the same server in standalone mode (without a wsrep provider). It is particularly true for big enough transactions (even those which are well within the current limitations on transaction size quoted above).
Windows is not supported.
Replication filters: When using a Galera cluster, replication filters should be used with caution. See Configuring MariaDB Galera Cluster: Replication Filters for more details. See also MDEV-421 and MDEV-6229.
Flashback isn't supported in Galera due to an incompatible binary log format.
FLUSH PRIVILEGES is not replicated.
The query cache needed to be disabled by setting query_cache_size=0 prior to MariaDB Galera Cluster 5.5.40, MariaDB Galera Cluster 10.0.14, and MariaDB 10.1.2.
In an asynchronous replication setup where a master replicates to a Galera node acting as a slave, parallel replication (slave-parallel-threads > 1) on the slave is currently not supported (see MDEV-6860).
The disk-based Galera gcache is not encrypted (MDEV-8072).
Nodes may have different table definitions, especially temporarily during rolling schema upgrade operations, but the same schema compatibility restrictions apply as they do for row-based replication.
This page is licensed: CC BY-SA / Gnu FDL
These topics will be discussed in more detail below.
Dear Schema Designer:
InnoDB only, always have PK.
Dear Developer:
Check for errors, even after COMMIT.
Moderate sized transactions.
Don't make assumptions about AUTO_INCREMENT values.
Handling of "critical reads" is quite different (arguably better).
Read/Write split is not necessary, but is still advised in case the underlying structure changes in the future.
Dear DBA:
Building the machines is quite different. (Not covered here)
ALTERs are handled differently.
TRIGGERs and EVENTs may need checking.
Tricks in replication (eg, BLACKHOLE) may not work.
Several variables need to be set differently.
(This overview is valid even for same-datacenter nodes, but the issues of latency vanish.)
Cross-colo latency is 'different' than with traditional replication, but not necessarily better or worse with Galera. The latency happens at a very different time for Galera.
In 'traditional' replication, these steps occur:
Client talks to Master. If Client and Master are in different colos, this has a latency hit.
Each SQL to Master is another latency hit, including(?) the COMMIT (unless using autocommit).
Replication to Slave(s) is asynchronous, so this does not impact the client writing to the Master.
Since replication is asynchronous, a Client (same or subsequent) cannot be guaranteed to see that data on the Slave. This is a "critical read". The async Replication delay forces apps to take some evasive action.
In Galera-based replication:
Client talks to any Master -- possibly with cross-colo latency. Or you could arrange to have Galera nodes co-located with clients to avoid this latency.
At COMMIT time (or end of statement, in case of autocommit=1), galera makes one roundtrip to other nodes.
The COMMIT usually succeeds, but could fail if some other node is messing with the same rows. (Galera retries on autocommit failures.)
Failure of the COMMIT is reported to the Client, who should simply replay the SQL statements from the BEGIN.
Later, the whole transaction will be applied (with possibility of conflict) on the other nodes.
Critical Read -- details below
For an N-statement transaction: In a typical 'traditional' replication setup:
0 or N (N+2?) latency hits, depending on whether the Client is co-located with the Master.
Replication latencies and delays lead to issues with "Critical Reads".
In Galera:
0 latency hits (assuming Client is 'near' some node)
1 latency hit for the COMMIT.
0 (usually) for Critical Read (details below)
Bottom line: Depending on where your Clients are, and whether you clump statements into BEGIN...COMMIT transactions, Galera may be faster or slower than traditional replication in a WAN topology.
By using wsrep_auto_increment_control = ON, the values of auto_increment_increment and auto_increment_offset will be automatically adjusted as nodes come/go.
If you are building a Galera cluster by starting with one node as a Slave to an existing non-Galera system, and if you have multi-row INSERTs that depend on AUTO_INCREMENTs, then read this Percona blog.
Bottom line: There may be gaps in AUTO_INCREMENT values. Consecutive rows, even on one connection, will not have consecutive ids.
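To see the values Galera has chosen on a given node, you can inspect the variables directly (purely a sanity check):
SHOW VARIABLES LIKE 'auto_increment%';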
Beware of Proxies that try to implement a "read/write split". In some situations, a reference to LAST_INSERT_ID() will be sent to a "Slave".
For effective replication of data, you must use only InnoDB. This eliminates:
FULLTEXT index (until 5.6)
SPATIAL index
MyISAM's PK as second column
You can use MyISAM and MEMORY for data that does not need to be replicated.
Also, you should use "START TRANSACTION READONLY" wherever appropriate.
Check for errors after issuing COMMIT. A "deadlock" can occur due to writes on other node(s).
Possible exception (could be useful for legacy code without such checks): Treat the system as single-Master, plus Slaves. By writing only to one node, COMMIT should always succeed(?)
What about autocommit = 1? wsrep_retry_autocommit tells Galera to retry a failed autocommitted statement up to N times. So, there is still a chance (very slim) of getting a deadlock on such a statement. The default setting of "1" retry is probably good.
"Row Based Replication" will be used; this requires a PK on every table. A non-replicated table (eg, MyISAM) does not have to have a PK.
(This section assumes you have Galera nodes in multiple colos.) Because of some of the issues discussed, it is wise to group your write statements into moderate sized BEGIN...COMMIT transactions. There is one latency hit per COMMIT or autocommit. So, combining statements will decrease those hits. On the other hand, it is unwise (for other reasons) to make huge transactions, such as inserting/modifying millions of rows in a single transaction.
To deal with failure on COMMIT, design your code so you can redo the SQL statements in the transaction without messing up other data. For example, move "normalization" statements out of the main transaction; there is arguably no compelling reason to roll them back if the main code rolls back.
In any case, doing what is "right" for the business logic overrides other considerations.
Galera's tx_isolation is between Serializable and Repeatable Read. The tx_isolation variable is ignored.
Set wsrep_log_conflicts to get errors put in the regular MySQL mysqld.err.
XA transactions cannot be supported. (Galera is already doing a form of XA in order to do its thing.)
Here is a 'simple' (but not 'free') way to assure that a read-after-write, even from a different connection, will see the updated data.
SET SESSION wsrep_sync_wait = 1;
SELECT ...
SET SESSION wsrep_sync_wait = 0;
For non-SELECTs, use a different bit set for the first select. (TBD: Would 0xffff always work?) (Before Galera 3.6, it was wsrep_causal_reads = ON.) Doc for wsrep_sync_wait
This setting stalls the SELECT until all current updates have been applied to the node. That is sufficient to guarantee that a previous write will be visible. The time cost is usually zero. However, a large UPDATE could lead to a delay. Because of RBR and parallel application, delays are likely to be less than on traditional replication. Zaitsev's blog
It may be more practical (for a web app) to simply set wsrep_sync_wait right after connecting.
As said above, use InnoDB only. However, here is more info on the MyISAM (and hence FULLTEXT, SPATIAL, etc) issues. MyISAM and MEMORY tables are not replicated.
Having MyISAM not replicated can be a big benefit -- you can "CREATE TEMPORARY TABLE ... ENGINE=MyISAM" and have it exist on only one node. RBR assures that any data transferred from that temp table into a 'real' table can still be replicated.
GRANTs and related operations act on the MyISAM tables in the database mysql. The GRANT statements will(?) be replicated, but the underlying tables will not.
Many DDL changes on Galera can be achieved without downtime, even if they take a long time.
Rolling Schema Upgrade (RSU): manually execute the DDL on each node in the cluster. The node will desync while executing the DDL.
Total Order Isolation (TOI): Galera automatically replicates the DDL to each node in the cluster, and it synchronizes each node so that the statement is executed at same time (in the replication sequence) on all nodes.
Caution: Since there is no way to synchronize the clients with the DDL, you must make sure that the clients are happy with either the old or the new schema. Otherwise, you will probably need to take down the entire cluster while simultaneously switching over both the schema and the client code.
Fast DDL operations can usually be executed in TOI mode:
DDL operations that support the NOCOPY and INSTANT algorithms are usually very fast.
DDL operations that support the INPLACE algorithm may be fast or slow, depending on whether the table needs to be rebuilt.
DDL operations that only support the COPY algorithm are usually very slow.
For a list of which operations support which algorithms, see InnoDB Online DDL.
If you need to use RSU mode, then do the following separately for each node:
SET SESSION wsrep_OSU_method='RSU';
ALTER TABLE tab <alter options here>;
SET SESSION wsrep_OSU_method='TOI';
More discussion of RSU procedures
You can 'simulate' Master + Slaves by having clients write only to one node.
No need to check for errors after COMMIT.
Lose the latency benefits.
Remove node from cluster; back it up; put it back in. Syncup is automatic.
Remove node from cluster; use it for testing, etc; put it back in. Syncup is automatic.
Rolling hardware/software upgrade: Remove; upgrade; put back in. Repeat.
auto_increment_increment - If you are writing to multiple nodes, and you use AUTO_INCREMENT, then auto_increment_increment will automatically be equal to the current number of nodes.
binlog_format - ROW is required for Galera.
innodb_doublewrite - ON: when an IST occurs, you want there to be no torn pages. (With FusionIO or other drives that guarantee atomicity, OFF is better.)
innodb_flush_log_at_trx_commit - 2 or 0. IST or SST will recover from loss if you have 1.
query_cache_size - 0
query_cache_type - 0: The Query cache cannot be used in a Galera context.
wsrep_auto_increment_control - Normally want ON
wsrep_on - ON
wsrep_provider_options - Various settings may need tuning if you are using a WAN.
wsrep_slave_threads - use for parallel replication
wsrep_sync_wait (previously wsrep_causal_reads) - used transiently to deal with "critical reads".
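A consolidated sketch of the variables discussed above, placed in a server option group (values are illustrative and should be tuned for your workload):
[mariadb]
binlog_format=ROW
innodb_flush_log_at_trx_commit=2
query_cache_size=0
query_cache_type=0
wsrep_on=ON
wsrep_auto_increment_control=ON
wsrep_slave_threads=4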
Until recently, FOREIGN KEYs were buggy.
LOAD DATA is auto chunked. That is, it is passed to other nodes piecemeal, not all at once.
MariaDB's known issues with Galera
DROP USER may not replicate?
A slight difference in ROLLBACK for conflict: InnoDB rolls back smaller transaction; Galera rolls back last.
SET GLOBAL wsrep_debug = 1; leads to a lot of debug info in the error log.
Large UPDATEs / DELETEs should be broken up. This admonition is valid for all databases, but there are additional issues in Galera.
WAN: May need to increase (from the defaults) wsrep_provider_options = evs...
MySQL/Percona 5.6 or MariaDB 10 is recommended when going to Galera.
See Using MariaDB GTIDs with MariaDB Galera Cluster.
If all the servers are in the same 'vulnerability zone' -- e.g., rack or data center -- have an odd number (at least 3) of nodes.
When spanning colos, you need 3 (or more) data centers in order to be 'always' up, even during a colo failure. With only 2 data centers, Galera can automatically recover from one colo outage, but not the other. (You pick which.)
If you use 3 or 4 colos, these number of nodes per colo are safe:
3 nodes: 1+1+1 (1 node in each of 3 colos)
4 nodes: 1+1+1+1 (4 nodes won't work in 3 colos)
5 nodes: 2+2+1, 2+1+1+1 (5 nodes spread 'evenly' across the colos)
6 nodes: 2+2+2, 2+2+1+1
7 nodes: 3+2+2, 3+3+1, 2+2+2+1, 3+2+1+1 There may be a way to "weight" the nodes differently; that would allow a few more configurations. With "weighting", give each colo the same weight; then subdivide the weight within each colo evenly. Four nodes in 3 colos: (1/6+1/6) + 1/3 + 1/3 That way, any single colo failure cannot lead to "split brain".
Posted 2013; VARIABLES: 2015; Refreshed Feb. 2016
Rick James graciously allowed us to use this article in the documentation.
Rick James' site has other useful tips, how-tos, optimizations, and debugging tips.
Original source: galera
This page is licensed: CC BY-SA / Gnu FDL
State Snapshot Transfers (SSTs) in MariaDB Galera Cluster copy the full dataset from a donor node to a new or recovering joiner node, ensuring data consistency before the joiner joins replication.
In a State Snapshot Transfer (SST), the cluster provisions nodes by transferring a full data copy from one node to another. When a new node joins the cluster, the new node initiates a State Snapshot Transfer to synchronize its data with a node that is already part of the cluster.
There are two conceptually different ways to transfer a state from one MariaDB server to another:
Logical: The only SST method of this type is the mysqldump SST method, which uses the mysqldump utility to get a logical dump of the donor. This SST method requires the joiner node to be fully initialized and ready to accept connections before the transfer. This method is, by definition, blocking, in that it blocks the donor node from modifying its state for the duration of the transfer. It is also the slowest of all, and that might be an issue in a cluster under heavy load.
Physical: SST methods of this type physically copy the data files from the donor node to the joiner node. This requires that the joiner node be initialized after the transfer. The mariadb-backup SST method and a few other SST methods fall into this category. These SST methods are much faster than the mysqldump SST method, but they have certain limitations. For example, they can be used only on server startup, and the joiner node must be configured very similarly to the donor node (e.g., innodb_file_per_table should be the same, and so on). Some of the SST methods in this category are non-blocking on the donor node, meaning that the donor node is still able to process queries while donating the SST (e.g., the mariadb-backup SST method is non-blocking).
SST methods are supported via a scriptable interface. New SST methods could potentially be developed by creating new SST scripts. The scripts usually have names of the form wsrep_sst_<method>, where <method> is one of the SST methods listed below.
You can choose your SST method by setting the wsrep_sst_method system variable. It can be changed dynamically with SET GLOBAL on the node that you intend to be an SST donor. For example:
SET GLOBAL wsrep_sst_method='mariadb-backup';
It can also be set in a server option group in an option file prior to starting up a node:
[mariadb]
...
wsrep_sst_method = mariadb-backup
For an SST to work properly, the donor and joiner node must use the same SST method. Therefore, it is recommended to set wsrep_sst_method to the same value on all nodes, since any node will usually be a donor or joiner node at some point.
MariaDB Galera Cluster comes with the following built-in SST methods:
This SST method uses the mariadb-backup utility for performing SSTs. It is one of the two non-locking methods. This is the recommended SST method if you require the ability to run queries on the donor node during the SST. Note that if you use the mariadb-backup SST method, then you also need to have socat installed on the server. This is needed to stream the backup from the donor to the joiner. This is a limitation inherited from the xtrabackup-v2 SST method.
This SST method supports GTID.
This SST method supports Data at Rest Encryption.
This SST method is available from MariaDB 10.1.26 and MariaDB 10.2.10.
With this SST method, it is impossible to upgrade the cluster between some major versions; see MDEV-27437.
See mariadb-backup SST method for more information.
rsync is the default method. This method uses the rsync utility to create a snapshot of the donor node. rsync should be available by default on all modern Linux distributions. The donor node is blocked with a read lock during the SST. This is the fastest SST method, especially for large datasets, since it copies binary data. Because of that, this is the recommended SST method if you do not need to allow the donor node to execute queries during the SST.
The rsync method runs rsync in --whole-file mode, assuming that nodes are connected by fast local network links, so that the default delta transfer mode would consume more processing time than it may save on data transfer bandwidth. When running a distributed cluster with slow links between nodes, the rsync_wan method runs rsync in the default delta transfer mode, which may reduce data transfer time substantially when an older datadir state is already present on the joiner node. Both methods are actually implemented by the same script; wsrep_sst_rsync_wan is just a symlink to the wsrep_sst_rsync script, and the actual rsync mode to use is determined by the name the script was called by.
This SST method supports GTID.
This SST method supports Data at Rest Encryption.
The rsync SST method does not support tables created with the DATA DIRECTORY or INDEX DIRECTORY clause. Use the mariadb-backup SST method as an alternative to support this feature.
Use of this SST method could result in data corruption when using innodb_use_native_aio (the default).
wsrep_sst_method=rsync is a reliable way to upgrade the cluster to a newer major version.
stunnel can be used to encrypt data over the wire. Be sure to have stunnel installed. You will also need to generate certificates and keys. See the stunnel documentation for information on how to do that. Once you have the keys, you will need to add the tkey and tcert options to the [sst] option group in your MariaDB configuration file, such as:
[sst]
tkey = /etc/my.cnf.d/certificates/client-key.pem
tcert = /etc/my.cnf.d/certificates/client-cert.pem
You also need to run the certificate directory through openssl rehash.
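For example, using the certificate directory from the snippet above:
sudo openssl rehash /etc/my.cnf.d/certificates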
For the mysqldump SST method described below, stunnel cannot be used to encrypt data over the wire.
This SST method runs mysqldump on the donor node and pipes the output to the mariadb client connected to the joiner node. The mysqldump SST method needs a username/password pair set in the wsrep_sst_auth variable in order to get the dump. The donor node is blocked with a read lock during the SST. This is the slowest SST method.
This SST method supports GTID.
This SST method supports Data at Rest Encryption.
Percona XtraBackup is not supported in MariaDB. mariadb-backup is the recommended backup method to use instead of Percona XtraBackup. See Percona XtraBackup Overview: Compatibility with MariaDB for more information.
This SST method uses the Percona XtraBackup utility for performing SSTs. It is one of the two non-blocking methods. Note that if you use the xtrabackup-v2 SST method, you also need to have socat installed on the server. Since Percona XtraBackup is a third-party product, this SST method requires an additional installation and some additional configuration. Please refer to Percona's xtrabackup SST documentation for information from the vendor.
This SST method does not support GTID.
This SST method does not support Data at Rest Encryption.
This SST method is available from MariaDB Galera Cluster 5.5.37 and MariaDB Galera Cluster 10.0.10.
See xtrabackup-v2 SST method for more information.
Percona XtraBackup is not supported in MariaDB. mariadb-backup is the recommended backup method to use instead of Percona XtraBackup. See Percona XtraBackup Overview: Compatibility with MariaDB for more information.
This SST method is an older SST method that uses the Percona XtraBackup utility for performing SSTs. The xtrabackup-v2 SST method should be used instead of the xtrabackup SST method starting from MariaDB 5.5.33.
This SST method does not support GTID.
This SST method does not support Data at Rest Encryption.
All SST methods except rsync require authentication via username and password. You can tell the client what username and password to use by setting the wsrep_sst_auth system variable. It can be changed dynamically with SET GLOBAL on the node that you intend to be an SST donor. For example:
SET GLOBAL wsrep_sst_auth = 'mariadb-backup:password';
It can also be set in a server option group in an option file prior to starting up a node:
[mariadb]
...
wsrep_sst_auth = mariadb-backup:password
Some authentication plugins do not require a password. For example, the unix_socket and gssapi authentication plugins do not require a password. If you are using a user account that does not require a password in order to log in, then you can just leave the password component of wsrep_sst_auth empty. For example:
[mariadb]
...
wsrep_sst_auth = mariadb-backup:
See the relevant description or page for each SST method to find out what privileges need to be granted to the user and whether the privileges are needed on the donor node or joiner node for that method.
MariaDB's systemd unit file has a default startup timeout of about 90 seconds on most systems. If an SST takes longer than this default startup timeout on a joiner node, then systemd will assume that mysqld has failed to start up, which causes systemd to kill the mysqld process on the joiner node. To work around this, you can reconfigure the MariaDB systemd unit to have an infinite timeout, such as by executing one of the following commands:
If you are using systemd 228 or older, then you can execute the following to set an infinite timeout:
sudo tee /etc/systemd/system/mariadb.service.d/timeoutstartsec.conf <<EOF
[Service]
TimeoutStartSec=0
EOF
sudo systemctl daemon-reload
Systemd 229 added the infinity option, so if you are using systemd 229 or later, then you can execute the following to set an infinite timeout:
sudo tee /etc/systemd/system/mariadb.service.d/timeoutstartsec.conf <<EOF
[Service]
TimeoutStartSec=infinity
EOF
sudo systemctl daemon-reload
See Configuring the Systemd Service Timeout for more details.
Note that systemd 236 added the EXTEND_TIMEOUT_USEC environment variable that allows services to extend the startup timeout during long-running processes. Starting with MariaDB 10.1.35, MariaDB 10.2.17, and MariaDB 10.3.8, on systems with systemd versions that support it, MariaDB uses this feature to extend the startup timeout during long SSTs. Therefore, if you are using systemd 236 or later, then you should not need to manually override TimeoutStartSec, even if your SSTs run for longer than the configured value. See MDEV-15607 for more information.
An SST failure generally renders the joiner node unusable. Therefore, when an SST failure is detected, the joiner node will abort.
Restarting a node after a mysqldump SST failure may require manual restoration of the administrative tables.
Look at the description of each SST method to determine which methods support Data at Rest Encryption.
For logical SST methods like mysqldump, each node should be able to have different encryption keys. For physical SST methods, all nodes need to have the same encryption keys, since the donor node will copy encrypted data files to the joiner node, and the joiner node will need to be able to decrypt them.
In order to avoid a split-brain condition, the minimum recommended number of nodes in a cluster is 3.
When using an SST method that blocks the donor, there is yet another reason to require a minimum of 3 nodes. In a 3-node cluster, if one node is acting as an SST joiner and one other node is acting as an SST donor, then there is still one more node to continue executing queries.
In some cases, if Galera Cluster's automatic SSTs repeatedly fail, then it can be helpful to perform a "manual SST". See the following pages on how to do that:
SST scripts can't currently read the mysqld<#> option groups in an option file that are read by instances managed by mysqld_multi.
See MDEV-18863 for more information.
This page is licensed: CC BY-SA / Gnu FDL
Sometimes it can be helpful to perform a "manual SST" when Galera's normal SSTs fail. This can be especially useful when the cluster's datadir is very large, since a normal SST can take a long time to fail in that case.
A manual SST essentially consists of taking a backup of the donor, loading the backup on the joiner, and then manually editing the cluster state on the joiner node. This page will show how to perform this process with mariadb-backup.
Check that the donor and joiner nodes have the same mariadb-backup version.
mariadb-backup --version
Create a backup directory on the donor node.
MYSQL_BACKUP_DIR=/mysql_backup
mkdir $MYSQL_BACKUP_DIR
Take a full backup of the donor node with mariadb-backup. The --galera-info option should also be provided, so that the node's cluster state is also backed up.
DB_USER=sstuser
DB_USER_PASS=password
mariadb-backup --backup --galera-info \
--target-dir=$MYSQL_BACKUP_DIR \
--user=$DB_USER \
--password=$DB_USER_PASS
Verify that the MariaDB Server process is stopped on the joiner node. This will depend on your service manager.
For example, on systemd systems, you can execute:
systemctl status mariadb
Create the backup directory on the joiner node.
MYSQL_BACKUP_DIR=/mysql_backup
mkdir $MYSQL_BACKUP_DIR
Copy the backup from the donor node to the joiner node.
OS_USER=dba
JOINER_HOST=dbserver2.mariadb.com
rsync -av $MYSQL_BACKUP_DIR/* ${OS_USER}@${JOINER_HOST}:${MYSQL_BACKUP_DIR}
Prepare the backup on the joiner node.
mariadb-backup --prepare \
--target-dir=$MYSQL_BACKUP_DIR
Get the Galera Cluster version ID from the donor node's grastate.dat file.
MYSQL_DATADIR=/var/lib/mysql
cat $MYSQL_DATADIR/grastate.dat | grep version
For example, a very common version number is "2.1".
Get the node's cluster state from the xtrabackup_galera_info file in the backup that was copied to the joiner node.
cat $MYSQL_BACKUP_DIR/xtrabackup_galera_info
The file contains the values of the wsrep_local_state_uuid and wsrep_last_committed status variables.
The values are written in the following format:
wsrep_local_state_uuid:wsrep_last_committed
For example:
d38587ce-246c-11e5-bcce-6bbd0831cc0f:1352215
Create a grastate.dat file in the backup directory of the joiner node. The Galera Cluster version ID, the cluster uuid, and the seqno from the previous steps will be used to fill in the relevant fields.
For example, with the example values from the last two steps, we could do:
sudo tee $MYSQL_BACKUP_DIR/grastate.dat <<EOF
# GALERA saved state
version: 2.1
uuid: d38587ce-246c-11e5-bcce-6bbd0831cc0f
seqno: 1352215
safe_to_bootstrap: 0
EOF
Remove the existing contents of the datadir on the joiner node.
MYSQL_DATADIR=/var/lib/mysql
rm -Rf $MYSQL_DATADIR/*
Copy the contents of the backup directory to the datadir on the joiner node.
mariadb-backup --copy-back \
--target-dir=$MYSQL_BACKUP_DIR
Make sure the permissions of the datadir are correct on the joiner node.
chown -R mysql:mysql $MYSQL_DATADIR/
Start the MariaDB Server process on the joiner node. This will depend on your service manager.
For example, on systemd systems, you can execute:
systemctl start mariadb
Watch the MariaDB error log on the joiner node and verify that the node does not need to perform a normal SST due to the manual SST.
tail -f /var/log/mysql/mysqld.log
The mariabackup SST method uses the mariadb-backup utility for performing SSTs. It is one of the methods that does not block the donor node. mariadb-backup was originally forked from Percona XtraBackup, and similarly, the mariabackup SST method was originally forked from the xtrabackup-v2 SST method.
Note that if you use the mariadb-backup SST method, then you also need to have socat installed on the server. This is needed to stream the backup from the donor node to the joiner node. This is a limitation that was inherited from the xtrabackup-v2 SST method.
To use the mariadb-backup SST method, you must set wsrep_sst_method=mariabackup on both the donor and joiner node. It can be changed dynamically with SET GLOBAL on the node that you intend to be an SST donor. For example:
SET GLOBAL wsrep_sst_method='mariabackup';
It can be set in a server option group in an option file prior to starting up a node:
[mariadb]
...
wsrep_sst_method = mariabackup
For an SST to work properly, the donor and joiner node must use the same SST method. Therefore, it is recommended to set wsrep_sst_method to the same value on all nodes, since any node will usually be a donor or joiner node at some point.
The InnoDB redo log format has been changed in MariaDB 10.5 and MariaDB 10.8 in a way that does not allow crash recovery or the preparation of a backup from an older major version. Because of this, the mariabackup SST method cannot be used for some major-version upgrades, unless you temporarily edit the wsrep_sst_mariabackup script so that the --prepare step on the newer-major-version joiner is executed using the older-major-version mariadb-backup tool.
The default method wsrep_sst_method=rsync works for major-version upgrades; see MDEV-27437.
The mariabackup SST method is configured by placing options in the [sst] section of a MariaDB configuration file (e.g., /etc/my.cnf.d/server.cnf). These settings are parsed by the wsrep_sst_mariabackup and wsrep_sst_common scripts.
The command-line utility is mariadb-backup; this tool was previously called mariabackup. The SST method itself retains the original name mariabackup (as in wsrep_sst_method=mariabackup).
These options control the core data transfer mechanism.
streamfmt (default: mbstream): Specifies the backup streaming format. mbstream is the native format for mariadb-backup.
transferfmt (default: socat): Defines the network utility for data transfer.
sockopt: A string of socket options passed to the socat utility.
rlimit: Throttles the data transfer rate in bytes per second. Supports K, M, and G suffixes.
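A sketch of how these options might be combined in the [sst] option group (values are illustrative):
[sst]
streamfmt=mbstream
transferfmt=socat
# Throttle the transfer to roughly 80 MB/s
rlimit=80M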
These options configure on-the-fly compression to reduce network bandwidth.
compressor: The command-line string for compressing the data stream on the donor (e.g., "lz4 -z").
decompressor: The command-line string for decompressing the data stream on the joiner (e.g., "lz4 -d").
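For example, using the lz4 commands mentioned above (lz4 must be installed on both the donor and the joiner):
[sst]
compressor="lz4 -z"
decompressor="lz4 -d"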
These options manage user authentication and stream encryption.
wsrep-sst-auth: The authentication string in user:password format. The user requires the RELOAD, PROCESS, LOCK TABLES, and REPLICATION CLIENT privileges.
tcert: Path to the TLS certificate file for securing the transfer.
tkey: Path to the TLS private key file.
tca: Path to the TLS Certificate Authority (CA) file.
progress: Set to 1 to show transfer progress (requires the pv utility).
sst-initial-timeout (default: 300): Timeout in seconds for the initial connection.
sst-log-archive (default: 1): Set to 1 to archive the previous SST log.
cpat: A space-separated list of extra files/directories to copy from donor to joiner.
mariadb-backup Options
This feature allows mariadb-backup-specific options to be passed through the SST script.
use-extra (default: 0): Must be set to 1 to enable pass-through functionality.
[sst]
# Enable pass-through functionality
use-extra=1
# mariadb-backup native options
encrypt=AES256
encrypt-key-file=/etc/mysql/encrypt/keyfile.key
compress-threads=4
To use the mariadb-backup SST method, mariadb-backup needs to be able to authenticate locally on the donor node, so that it can create a backup to stream to the joiner. You can tell the donor node what username and password to use by setting the wsrep_sst_auth system variable. It can be changed dynamically with SET GLOBAL on the node that you intend to be an SST donor:
SET GLOBAL wsrep_sst_auth = 'mariadbbackup:mypassword';
It can also be set in a server option group in an option file prior to starting up a node:
[mariadb]
...
wsrep_sst_auth = mariadbbackup:mypassword
Some authentication plugins do not require a password. For example, the unix_socket and gssapi authentication plugins do not require a password. If you are using a user account that does not require a password in order to log in, then you can just leave the password component of wsrep_sst_auth empty. For example:
[mariadb]
...
wsrep_sst_auth = mariadbbackup:
The user account that performs the backup for the SST needs the same privileges as mariadb-backup: the RELOAD, PROCESS, LOCK TABLES, BINLOG MONITOR, and REPLICA MONITOR global privileges. To be safe, ensure that these privileges are set on each node in your cluster. mariadb-backup connects locally on the donor node to perform the backup, so the following user should be sufficient:
CREATE USER 'mariadbbackup'@'localhost' IDENTIFIED BY 'mypassword';
GRANT RELOAD, PROCESS, LOCK TABLES,
BINLOG MONITOR ON *.* TO 'mariadbbackup'@'localhost';
It is possible to use the unix_socket authentication plugin for the user account that performs SSTs. This provides the benefit of not needing to configure a plain-text password in wsrep_sst_auth.
The user account must have the same name as the operating system user account that runs the mysqld process. On many systems, this is the user account configured via the user option, and it tends to default to mysql.
For example, if the unix_socket authentication plugin is already installed, then you could execute the following to create the user account:
CREATE USER 'mysql'@'localhost' IDENTIFIED VIA unix_socket;
GRANT RELOAD, PROCESS, LOCK TABLES,
REPLICATION CLIENT ON *.* TO 'mysql'@'localhost';
To configure wsrep_sst_auth, set the following in a server option group in an option file prior to starting up a node:
[mariadb]
...
wsrep_sst_auth = mysql:
It is possible to use the gssapi authentication plugin for the user account that performs SSTs. This provides the benefit of not needing to configure a plain-text password in wsrep_sst_auth.
The following steps would need to be done beforehand:
You need a KDC running MIT Kerberos or Microsoft Active Directory.
You will need to create a keytab file for the MariaDB server.
You will need to install the package containing the gssapi authentication plugin.
You will need to install the plugin in MariaDB, so that the gssapi authentication plugin is available to use.
You will need to configure the plugin.
You will need to create a user account that authenticates via gssapi.
For example, you could execute the following to create the user account in MariaDB:
CREATE USER 'mariadbbackup'@'localhost' IDENTIFIED VIA gssapi;
GRANT RELOAD, PROCESS, LOCK TABLES,
BINLOG MONITOR ON *.* TO 'mariadbbackup'@'localhost';
To configure wsrep_sst_auth, set the following in a server option group in an option file prior to starting up a node:
[mariadb]
...
wsrep_sst_auth = mariadbbackup:
When mariadb-backup is used to create the backup for the SST on the donor node, mariadb-backup briefly requires a system-wide lock at the end of the backup. This is done with BACKUP STAGE BLOCK_COMMIT.
If a specific node in your cluster is acting as the primary node by receiving all of the application's write traffic, then this node should not usually be used as the donor node, because the system-wide lock could interfere with the application. In this case, you can define one or more preferred donor nodes by setting the wsrep_sst_donor system variable.
For example, let's say that we have a 5-node cluster with the nodes node1, node2, node3, node4, and node5, and let's say that node1 is acting as the primary node. The preferred donor nodes for node2 could be configured by setting the following in a server option group in an option file prior to starting up a node:
[mariadb]
...
wsrep_sst_donor=node3,node4,node5,
The trailing comma tells the server to allow any other node as donor when the preferred donors are not available. Therefore, if node1 is the only node left in the cluster, the trailing comma allows it to be used as the donor node.
During the SST process, the donor node uses socat to stream the backup to the joiner node. Then the joiner node prepares the backup before restoring it. The socat utility must be installed on both the donor node and the joiner node in order for this to work. Otherwise, the MariaDB error log will contain an error like:
WSREP_SST: [ERROR] socat not found in path: /usr/sbin:/sbin:/usr//bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin (20180122 14:55:32.993)
On RHEL/CentOS, socat can be installed from the Extra Packages for Enterprise Linux (EPEL) repository.
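For example, on CentOS the following typically suffices (package and repository setup can vary by release; on RHEL, the EPEL release package must instead be installed from the Fedora project):
sudo yum install epel-release
sudo yum install socat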
This SST method supports two different TLS methods. The specific method can be selected by setting the encrypt option in the [sst] section of the MariaDB configuration file. The options are:
TLS using OpenSSL encryption built into socat (encrypt=2)
TLS using OpenSSL encryption with Galera-compatible certificates and keys (encrypt=3)
Note that encrypt=1 refers to a TLS encryption method that has been deprecated and removed, and encrypt=4 refers to a TLS encryption method in xtrabackup-v2 that has not yet been ported to mariadb-backup; see MDEV-18050.
To generate keys compatible with the socat-based encryption method (encrypt=2), follow these directions.
First, generate the keys and certificates:
FILENAME=sst
openssl genrsa -out $FILENAME.key 1024
openssl req -new -key $FILENAME.key -x509 -days 3653 -out $FILENAME.crt
cat $FILENAME.key $FILENAME.crt >$FILENAME.pem
chmod 600 $FILENAME.key $FILENAME.pem
On some systems, you may also have to add dhparams to the certificate:
openssl dhparam -out dhparams.pem 2048
cat dhparams.pem >> sst.pem
Next, copy the certificate and keys to all nodes in the cluster.
When done, configure the following on all nodes in the cluster:
[sst]
encrypt=2
tca=/etc/my.cnf.d/certificates/sst.crt
tcert=/etc/my.cnf.d/certificates/sst.pem
Make sure to replace the paths with whatever is relevant on your system. This should allow your SSTs to be encrypted.
To generate keys compatible with the Galera-compatible encryption method (encrypt=3), follow these directions.
First, generate the keys and certificates:
# CA
openssl genrsa 2048 > ca-key.pem
openssl req -new -x509 -nodes -days 365000 \
-key ca-key.pem -out ca-cert.pem
# server1
openssl req -newkey rsa:2048 -days 365000 \
-nodes -keyout server1-key.pem -out server1-req.pem
openssl rsa -in server1-key.pem -out server1-key.pem
openssl x509 -req -in server1-req.pem -days 365000 \
-CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 \
-out server1-cert.pem
Next, copy the certificate and keys to all nodes in the cluster.
When done, configure the following on all nodes in the cluster:
[sst]
encrypt=3
tkey=/etc/my.cnf.d/certificates/server1-key.pem
tcert=/etc/my.cnf.d/certificates/server1-cert.pem
Make sure to replace the paths with whatever is relevant on your system. This should allow your SSTs to be encrypted.
The mariadb-backup SST method has its own logging outside of the MariaDB Server logging. Logging for mariadb-backup SSTs works the following way:
By default, on the donor node, it logs to mariadb-backup.backup.log. This log file is located in the datadir.
By default, on the joiner node, it logs to mariadb-backup.prepare.log and mariadb-backup.move.log. These log files are also located in the datadir.
By default, before a new SST is started, existing mariadb-backup SST log files are compressed and moved to /tmp/sst_log_archive. This behavior can be disabled by setting sst-log-archive=0 in the [sst] option group in an option file. Similarly, the archive directory can be changed by setting sst-log-archive-dir:
[sst]
sst-log-archive=1
sst-log-archive-dir=/var/log/mysql/sst/
See MDEV-17973 for more information.
You can redirect the SST logs to the syslog instead by setting the following in the [sst] option group in an option file:
[sst]
sst-syslog=1
You can also redirect the SST logs to the syslog by setting the following in the [mysqld_safe] option group in an option file:
[mysqld_safe]
syslog
If you are performing mariadb-backup SSTs with IPv6 addresses, then the socat utility needs to be passed the pf=ip6 option. This can be done by setting the sockopt option in the [sst] option group in an option file:
[sst]
sockopt=",pf=ip6"
See MDEV-18797 for more information.
If Galera Cluster's automatic SSTs repeatedly fail, it can be helpful to perform a "manual SST"; see Manual SST of Galera Cluster node with mariadb-backup.
This page is licensed: CC BY-SA / Gnu FDL
Articles on upgrading between MariaDB versions with Galera Cluster
MariaDB starting with 10.1
Since MariaDB 10.1, the MySQL-wsrep patch has been merged into MariaDB Server. Therefore, in MariaDB 10.1 and above, the functionality of MariaDB Galera Cluster can be obtained by installing the standard MariaDB Server packages and the Galera wsrep provider library package.
Beginning in MariaDB 10.1, Galera Cluster ships with the MariaDB Server. Upgrading a Galera Cluster node is very similar to upgrading a server from MariaDB 10.3 to MariaDB 10.4. For more information on that process as well as incompatibilities between versions, see the Upgrade Guide.
The following steps can be used to perform a rolling upgrade from MariaDB 10.3 to MariaDB 10.4 when using Galera Cluster. In a rolling upgrade, each node is upgraded individually, so the cluster is always operational. There is no downtime from the application's perspective.
First, before you get started:
First, take a look at Upgrading from MariaDB 10.3 to MariaDB 10.4 to see what has changed between the major versions.
Check whether any system variables or options have been changed or removed. Make sure that your server's configuration is compatible with the new MariaDB version before upgrading.
Check whether replication has changed in the new MariaDB version in any way that could cause issues while the cluster contains upgraded and non-upgraded nodes.
Check whether any new features have been added to the new MariaDB version. If a new feature in the new MariaDB version cannot be replicated to the old MariaDB version, then do not use that feature until all cluster nodes have been upgraded to the new MariaDB version.
Next, make sure that the Galera version numbers are compatible.
If you are upgrading from the most recent MariaDB 10.3 release to MariaDB 10.4, then the versions will be compatible. MariaDB 10.3 uses Galera 3 (i.e. Galera wsrep provider versions 25.3.x), and MariaDB 10.4 uses Galera 4 (i.e. Galera wsrep provider versions 26.4.x). This means that upgrading to MariaDB 10.4 also upgrades the system to Galera 4. However, Galera 3 and Galera 4 should be compatible for the purposes of a rolling upgrade, as long as you are using Galera 26.4.2 or later.
See What is MariaDB Galera Cluster?: Galera wsrep provider Versions for information on which MariaDB releases use which Galera wsrep provider versions.
Ideally, you want to have a large enough gcache to avoid a State Snapshot Transfer (SST) during the rolling upgrade. The gcache size can be configured by setting gcache.size, for example: wsrep_provider_options="gcache.size=2G" (see the sketch after this list).
Before you upgrade, it would be best to take a backup of your database. This is always a good idea to do before an upgrade. We would recommend mariadb-backup.
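As a sketch, the gcache size mentioned above can be set in a server option group in an option file (2G is an illustrative value, not a recommendation):
[mariadb]
...
wsrep_provider_options="gcache.size=2G"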
Then, for each node, perform the following steps:
Modify the repository configuration, so the system's package manager installs MariaDB 10.4. For example,
On Debian, Ubuntu, and other similar Linux distributions, see Updating the MariaDB APT repository to a New Major Release for more information.
On RHEL, CentOS, Fedora, and other similar Linux distributions, see Updating the MariaDB YUM repository to a New Major Release for more information.
On SLES, OpenSUSE, and other similar Linux distributions, see Updating the MariaDB ZYpp repository to a New Major Release for more information.
If you use a load balancing proxy such as MaxScale or HAProxy, make sure to drain the server from the pool so it does not receive any new connections.
Uninstall the old version of MariaDB and the Galera wsrep provider.
On Debian, Ubuntu, and other similar Linux distributions, execute the following: sudo apt-get remove mariadb-server galera
On RHEL, CentOS, Fedora, and other similar Linux distributions, execute the following: sudo yum remove MariaDB-server galera
On SLES, OpenSUSE, and other similar Linux distributions, execute the following: sudo zypper remove MariaDB-server galera
Install the new version of MariaDB and the Galera wsrep provider.
On Debian, Ubuntu, and other similar Linux distributions, see Installing MariaDB Packages with APT for more information.
On RHEL, CentOS, Fedora, and other similar Linux distributions, see Installing MariaDB Packages with YUM for more information.
On SLES, OpenSUSE, and other similar Linux distributions, see Installing MariaDB Packages with ZYpp for more information.
Make any desired changes to configuration options in option files, such as my.cnf. This includes removing any system variables or options that are no longer supported.
On Linux distributions that use systemd, you may need to increase the service startup timeout, as the default timeout of 90 seconds may not be sufficient. See Systemd: Configuring the Systemd Service Timeout for more information.
Run mysql_upgrade with the --skip-write-binlog option.
mysql_upgrade does two things:
Ensures that the system tables in the mysql database are fully compatible with the new version.
Does a very quick check of all tables and marks them as compatible with the new version of MariaDB.
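For example, on each node you might run (assuming the client can authenticate with the server's root credentials):
$ mysql_upgrade --skip-write-binlog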
When this process is done for one node, move on to the next node.
Note that when upgrading the Galera wsrep provider, sometimes the Galera protocol version can change. The Galera wsrep provider should not start using the new protocol version until all cluster nodes have been upgraded to the new version, so this is not generally an issue during a rolling upgrade. However, this can cause issues if you restart a non-upgraded node in a cluster where the rest of the nodes have been upgraded.
This page is licensed: CC BY-SA / Gnu FDL
Galera Cluster ships with the MariaDB Server. Upgrading a Galera Cluster node is very similar to upgrading a server from MariaDB 10.4 to MariaDB 10.5. For more information on that process as well as incompatibilities between versions, see the Upgrade Guide.
The following steps can be used to perform a rolling upgrade from MariaDB 10.4 to MariaDB 10.5 when using Galera Cluster. In a rolling upgrade, each node is upgraded individually, so the cluster is always operational. There is no downtime from the application's perspective.
First, before you get started:
First, take a look at Upgrading from MariaDB 10.4 to MariaDB 10.5 to see what has changed between the major versions.
Check whether any system variables or options have been changed or removed. Make sure that your server's configuration is compatible with the new MariaDB version before upgrading.
Check whether replication has changed in the new MariaDB version in any way that could cause issues while the cluster contains upgraded and non-upgraded nodes.
Check whether any new features have been added to the new MariaDB version. If a new feature in the new MariaDB version cannot be replicated to the old MariaDB version, then do not use that feature until all cluster nodes have been upgraded to the new MariaDB version.
Next, make sure that the Galera version numbers are compatible.
If you are upgrading from the most recent MariaDB 10.4 release to MariaDB 10.5, then the versions will be compatible.
See What is MariaDB Galera Cluster?: Galera wsrep provider Versions for information on which MariaDB releases use which Galera wsrep provider versions.
You want to have a large enough gcache to avoid a State Snapshot Transfer (SST) during the rolling upgrade. The gcache size can be configured by setting gcache.size, for example: wsrep_provider_options="gcache.size=2G"
Before you upgrade, it would be best to take a backup of your database. This is always a good idea to do before an upgrade. We would recommend mariadb-backup.
Then, for each node, perform the following steps:
Modify the repository configuration, so the system's package manager installs MariaDB 10.5. For example,
On Debian, Ubuntu, and other similar Linux distributions, see Updating the MariaDB APT repository to a New Major Release for more information.
On RHEL, CentOS, Fedora, and other similar Linux distributions, see Updating the MariaDB YUM repository to a New Major Release for more information.
On SLES, OpenSUSE, and other similar Linux distributions, see Updating the MariaDB ZYpp repository to a New Major Release for more information.
If you use a load balancing proxy such as MaxScale or HAProxy, make sure to drain the server from the pool so it does not receive any new connections.
Uninstall the old version of MariaDB and the Galera wsrep provider.
On Debian, Ubuntu, and other similar Linux distributions, execute the following: sudo apt-get remove mariadb-server galera
On RHEL, CentOS, Fedora, and other similar Linux distributions, execute the following: sudo yum remove MariaDB-server galera
On SLES, OpenSUSE, and other similar Linux distributions, execute the following: sudo zypper remove MariaDB-server galera
Install the new version of MariaDB and the Galera wsrep provider.
On Debian, Ubuntu, and other similar Linux distributions, see Installing MariaDB Packages with APT for more information.
On RHEL, CentOS, Fedora, and other similar Linux distributions, see Installing MariaDB Packages with YUM for more information.
On SLES, OpenSUSE, and other similar Linux distributions, see Installing MariaDB Packages with ZYpp for more information.
Make any desired changes to configuration options in option files, such as my.cnf. This includes removing any system variables or options that are no longer supported.
On Linux distributions that use systemd, you may need to increase the service startup timeout, as the default timeout of 90 seconds may not be sufficient. See Systemd: Configuring the Systemd Service Timeout for more information.
Run mariadb-upgrade with the --skip-write-binlog option.
mariadb-upgrade does two things:
Ensures that the system tables in the mysql database are fully compatible with the new version.
Does a very quick check of all tables and marks them as compatible with the new version of MariaDB.
When this process is done for one node, move on to the next node.
Note that when upgrading the Galera wsrep provider, sometimes the Galera protocol version can change. The Galera wsrep provider should not start using the new protocol version until all cluster nodes have been upgraded to the new version, so this is not generally an issue during a rolling upgrade. However, this can cause issues if you restart a non-upgraded node in a cluster where the rest of the nodes have been upgraded.
This page is licensed: CC BY-SA / Gnu FDL
Galera Cluster ships with the MariaDB Server. Upgrading a Galera Cluster node is very similar to upgrading a server from MariaDB 10.5 to MariaDB 10.6. For more information on that process as well as incompatibilities between versions, see the Upgrade Guide.
The following steps can be used to perform a rolling upgrade from MariaDB 10.5 to MariaDB 10.6 when using Galera Cluster. In a rolling upgrade, each node is upgraded individually, so the cluster is always operational. There is no downtime from the application's perspective.
First, before you get started:
First, take a look at Upgrading from MariaDB 10.5 to MariaDB 10.6 to see what has changed between the major versions.
Check whether any system variables or options have been changed or removed. Make sure that your server's configuration is compatible with the new MariaDB version before upgrading.
Check whether replication has changed in the new MariaDB version in any way that could cause issues while the cluster contains upgraded and non-upgraded nodes.
Check whether any new features have been added to the new MariaDB version. If a new feature in the new MariaDB version cannot be replicated to the old MariaDB version, then do not use that feature until all cluster nodes have been upgraded to the new MariaDB version.
Next, make sure that the Galera version numbers are compatible.
If you are upgrading from the most recent MariaDB 10.5 release to MariaDB 10.6, then the versions will be compatible.
See What is MariaDB Galera Cluster?: Galera wsrep provider Versions for information on which MariaDB releases use which Galera wsrep provider versions.
You want to have a large enough gcache to avoid a State Snapshot Transfer (SST) during the rolling upgrade. The gcache size can be configured by setting gcache.size, for example: wsrep_provider_options="gcache.size=2G"
Before you upgrade, it would be best to take a backup of your database. This is always a good idea to do before an upgrade. We would recommend mariadb-backup.
Then, for each node, perform the following steps:
Modify the repository configuration, so the system's package manager installs MariaDB 10.6. For example,
On Debian, Ubuntu, and other similar Linux distributions, see Updating the MariaDB APT repository to a New Major Release for more information.
On RHEL, CentOS, Fedora, and other similar Linux distributions, see Updating the MariaDB YUM repository to a New Major Release for more information.
On SLES, OpenSUSE, and other similar Linux distributions, see Updating the MariaDB ZYpp repository to a New Major Release for more information.
If you use a load balancing proxy such as MaxScale or HAProxy, make sure to drain the server from the pool so it does not receive any new connections.
Uninstall the old version of MariaDB and the Galera wsrep provider.
On Debian, Ubuntu, and other similar Linux distributions, execute the following: sudo apt-get remove mariadb-server galera-4
On RHEL, CentOS, Fedora, and other similar Linux distributions, execute the following: sudo yum remove MariaDB-server galera-4
On SLES, OpenSUSE, and other similar Linux distributions, execute the following: sudo zypper remove MariaDB-server galera-4
Install the new version of MariaDB and the Galera wsrep provider.
On Debian, Ubuntu, and other similar Linux distributions, see Installing MariaDB Packages with APT for more information.
On RHEL, CentOS, Fedora, and other similar Linux distributions, see Installing MariaDB Packages with YUM for more information.
On SLES, OpenSUSE, and other similar Linux distributions, see Installing MariaDB Packages with ZYpp for more information.
Make any desired changes to configuration options in option files, such as my.cnf. This includes removing any system variables or options that are no longer supported.
On Linux distributions that use systemd, you may need to increase the service startup timeout, as the default timeout of 90 seconds may not be sufficient. See Systemd: Configuring the Systemd Service Timeout for more information.
Run mariadb-upgrade with the --skip-write-binlog option.
mariadb-upgrade does two things:
Ensures that the system tables in the mysql database are fully compatible with the new version.
Does a very quick check of all tables and marks them as compatible with the new version of MariaDB.
When this process is done for one node, move on to the next node.
Note that when upgrading the Galera wsrep provider, sometimes the Galera protocol version can change. The Galera wsrep provider should not start using the new protocol version until all cluster nodes have been upgraded to the new version, so this is not generally an issue during a rolling upgrade. However, this can cause issues if you restart a non-upgraded node in a cluster where the rest of the nodes have been upgraded.
This page is licensed: CC BY-SA / Gnu FDL
Galera Cluster ships with the MariaDB Server. Upgrading a Galera Cluster node is very similar to upgrading a server from MariaDB 10.6 to MariaDB 10.11. For more information on that process as well as incompatibilities between versions, see the Upgrade Guide.
Supported upgrade methods are:
Stopping all nodes, upgrading all nodes, then starting the nodes
Rolling upgrade with IST (however, see MDEV-33263)
Note that a rolling upgrade with SST does not work.
The following steps can be used to perform a rolling upgrade from MariaDB 10.6 to MariaDB 10.11 when using Galera Cluster. In a rolling upgrade, each node is upgraded individually, so the cluster is always operational. There is no downtime from the application's perspective.
First, before you get started:
First, take a look at Upgrading from MariaDB 10.6 to MariaDB 10.11 to see what has changed between the major versions.
Check whether any system variables or options have been changed or removed. Make sure that your server's configuration is compatible with the new MariaDB version before upgrading.
Check whether replication has changed in the new MariaDB version in any way that could cause issues while the cluster contains upgraded and non-upgraded nodes.
Check whether any new features have been added to the new MariaDB version. If a new feature in the new MariaDB version cannot be replicated to the old MariaDB version, then do not use that feature until all cluster nodes have been upgraded to the new MariaDB version.
Next, make sure that the Galera version numbers are compatible.
If you are upgrading from the most recent MariaDB 10.6 release to MariaDB 10.11, then the versions will be compatible.
See What is MariaDB Galera Cluster?: Galera wsrep provider Versions for information on which MariaDB releases use which Galera wsrep provider versions.
You want to have a large enough gcache to avoid a State Snapshot Transfer (SST) during the rolling upgrade. The gcache size can be configured by setting gcache.size, for example: wsrep_provider_options="gcache.size=2G"
Before you upgrade, it would be best to take a backup of your database. This is always a good idea to do before an upgrade. We would recommend mariadb-backup.
Then, for each node, perform the following steps:
Modify the repository configuration, so the system's package manager installs MariaDB 10.11. For example,
On Debian, Ubuntu, and other similar Linux distributions, see Updating the MariaDB APT repository to a New Major Release for more information.
On RHEL, CentOS, Fedora, and other similar Linux distributions, see Updating the MariaDB YUM repository to a New Major Release for more information.
On SLES, OpenSUSE, and other similar Linux distributions, see Updating the MariaDB ZYpp repository to a New Major Release for more information.
If you use a load balancing proxy such as MaxScale or HAProxy, make sure to drain the server from the pool so it does not receive any new connections.
Uninstall the old version of MariaDB and the Galera wsrep provider.
On Debian, Ubuntu, and other similar Linux distributions, execute the following: sudo apt-get remove mariadb-server galera-4
On RHEL, CentOS, Fedora, and other similar Linux distributions, execute the following: sudo yum remove MariaDB-server galera-4
On SLES, OpenSUSE, and other similar Linux distributions, execute the following: sudo zypper remove MariaDB-server galera-4
Install the new version of MariaDB and the Galera wsrep provider.
On Debian, Ubuntu, and other similar Linux distributions, see Installing MariaDB Packages with APT for more information.
On RHEL, CentOS, Fedora, and other similar Linux distributions, see Installing MariaDB Packages with YUM for more information.
On SLES, OpenSUSE, and other similar Linux distributions, see Installing MariaDB Packages with ZYpp for more information.
Make any desired changes to configuration options in option files, such as my.cnf. This includes removing any system variables or options that are no longer supported.
On Linux distributions that use systemd, you may need to increase the service startup timeout, as the default timeout of 90 seconds may not be sufficient. See Systemd: Configuring the Systemd Service Timeout for more information.
With wsrep off, run mariadb-upgrade with the --skip-write-binlog option.
mariadb-upgrade does two things:
Ensures that the system tables in the mysql database are fully compatible with the new version.
Does a very quick check of all tables and marks them as compatible with the new version of MariaDB.
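For example (assuming the client can authenticate with the server's root credentials):
$ mariadb-upgrade --skip-write-binlog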
Restart the server with wsrep on.
When this process is done for one node, move on to the next node.
Note that when upgrading the Galera wsrep provider, sometimes the Galera protocol version can change. The Galera wsrep provider should not start using the new protocol version until all cluster nodes have been upgraded to the new version, so this is not generally an issue during a rolling upgrade. However, this can cause issues if you restart a non-upgraded node in a cluster where the rest of the nodes have been upgraded.
This page is licensed: CC BY-SA / Gnu FDL
MariaDB Galera security encrypts replication/SST traffic and ensures integrity through firewalls, secure credentials, and network isolation.
By default, Galera Cluster replicates data between each node without encrypting it. This is generally acceptable when the cluster nodes run on the same host or in networks where security is guaranteed through other means. However, in cases where the cluster nodes exist on separate networks or they are in a high-risk network, the lack of encryption does introduce security concerns, as a malicious actor could potentially eavesdrop on the traffic or get a complete copy of the data by triggering an SST.
To mitigate this concern, Galera Cluster allows you to encrypt data in transit as it is replicated between each cluster node using the Transport Layer Security (TLS) protocol. TLS was formerly known as Secure Sockets Layer (SSL), but strictly speaking the SSL protocol is a predecessor to TLS, and that version of the protocol is now considered insecure. The documentation still often uses the term SSL, and for compatibility reasons TLS-related server system and status variables still use the prefix ssl_, but internally, MariaDB only supports its secure successors.
In order to secure connections between the cluster nodes, you need to ensure that all servers were compiled with TLS support. See Secure Connections Overview to determine how to check whether a server was compiled with TLS support.
For each cluster node, you also need a certificate, private key, and the Certificate Authority (CA) chain to verify the certificate. If you want to use self-signed certificates that are created with OpenSSL, then see Certificate Creation with OpenSSL for information on how to create those.
In order to enable TLS for Galera Cluster's replication traffic, there are a number of wsrep_provider_options that you need to set, such as:
You need to set the path to the server's certificate by setting the socket.ssl_cert wsrep_provider_option.
You need to set the path to the server's private key by setting the socket.ssl_key wsrep_provider_option.
You need to set the path to the certificate authority (CA) chain that can verify the server's certificate by setting the socket.ssl_ca wsrep_provider_option.
If you want to restrict the server to certain ciphers, then you also need to set the socket.ssl_cipher wsrep_provider_option.
It is also a good idea to set MariaDB Server's regular TLS-related system variables, so that TLS will be enabled for regular client connections as well. See Securing Connections for Client and Server for information on how to do that.
For example, to set these variables for the server, add the system variables to a relevant server option group in an option file:
[mariadb]
...
ssl_cert = /etc/my.cnf.d/certificates/server-cert.pem
ssl_key = /etc/my.cnf.d/certificates/server-key.pem
ssl_ca = /etc/my.cnf.d/certificates/ca.pem
wsrep_provider_options="socket.ssl_cert=/etc/my.cnf.d/certificates/server-cert.pem;socket.ssl_key=/etc/my.cnf.d/certificates/server-key.pem;socket.ssl_ca=/etc/my.cnf.d/certificates/ca.pem"
And then restart the server to make the changes persistent.
By setting both MariaDB Server's TLS-related system variables and Galera Cluster's TLS-related wsrep_provider_options, the server can secure both external client connections and Galera Cluster's replication traffic.
The method that you would use to enable TLS for State Snapshot Transfers (SSTs) depends on the value of wsrep_sst_method:
mariabackup: See mariadb-backup SST Method: TLS for more information.
xtrabackup-v2: See xtrabackup-v2 SST Method: TLS for more information.
mysqldump: This SST method simply uses the mariadb-dump (previously mysqldump) utility, so TLS would be enabled by following the guide at Securing Connections for Client and Server: Enabling TLS for MariaDB Clients.
rsync: This SST method supports encryption in transit via stunnel. See Introduction to State Snapshot Transfers (SSTs): rsync for more information.
This page is licensed: CC BY-SA / Gnu FDL
WSREP stands for Write-Set Replication.
MariaDB Enterprise Cluster, powered by Galera, adds some security features:
New TLS Modes have been implemented, which can be used to configure mandatory TLS and X.509 certificate verification for Enterprise Cluster:
WSREP TLS Modes have been implemented for Enterprise Cluster replication traffic.
SST TLS Modes have been implemented for SSTs that use MariaDB Enterprise Backup or Rsync.
Cluster name verification checks that a Joiner node belongs to the cluster prior to performing a State Snapshot Transfer (SST) or an Incremental State Transfer (IST).
Certificate expiration warnings are written to the MariaDB error log when the node's X.509 certificate is close to expiration.
TLS can be enabled without downtime for Enterprise Cluster replication traffic.
MariaDB Enterprise Cluster, powered by Galera, adds the wsrep_ssl_mode system variable, which configures the WSREP TLS Mode used for Enterprise Cluster replication traffic.
The following WSREP TLS Modes are supported:
PROVIDER: TLS is optional for Enterprise Cluster replication traffic. Each node obtains its TLS configuration from the wsrep_provider_options system variable. When the provider is not configured to use TLS on a node, the node will connect to the cluster without TLS. The Provider WSREP TLS Mode is backward compatible with ES 10.5 and earlier; when performing a rolling upgrade from ES 10.5 and earlier, the Provider WSREP TLS Mode can be configured on the upgraded nodes.
SERVER: TLS is mandatory for Enterprise Cluster replication traffic, but X.509 certificate verification is not performed. Each node obtains its TLS configuration from the node's MariaDB Enterprise Server configuration. When MariaDB Enterprise Server is not configured to use TLS on a node, the node will fail to connect to the cluster. The Server WSREP TLS Mode is the default in ES 10.6.
SERVER_X509: TLS and X.509 certificate verification are mandatory for Enterprise Cluster replication traffic. Each node obtains its TLS configuration from the node's MariaDB Enterprise Server configuration. When MariaDB Enterprise Server is not configured to use TLS on a node, the node will fail to connect to the cluster.
MariaDB Enterprise Cluster supports the Provider WSREP TLS Mode, which is equivalent to Enterprise Cluster's TLS implementation in earlier versions of MariaDB Server. The Provider WSREP TLS Mode is primarily intended for backward compatibility, and it is most useful for users who need to perform a rolling upgrade to Enterprise Server 10.6.
The Provider WSREP TLS Mode can be configured by setting the wsrep_ssl_mode system variable to PROVIDER.
TLS is optional in the Provider WSREP TLS Mode. When the provider is not configured to use TLS on a node, the node will connect to the cluster without TLS.
Each node obtains its TLS configuration from the wsrep_provider_options system variable. The following options are used:
socket.ssl: Set this option to true to enable TLS.
socket.ssl_ca: Set this option to the path of the CA chain file.
socket.ssl_cert: Set this option to the path of the node's X.509 certificate file.
socket.ssl_key: Set this option to the path of the node's private key file.
For example:
[mariadb]
...
wsrep_ssl_mode = PROVIDER
wsrep_provider_options = "socket.ssl=true;socket.ssl_cert=/certs/server-cert.pem;socket.ssl_ca=/certs/ca-cert.pem;socket.ssl_key=/certs/server-key.pem"
MariaDB Enterprise Cluster adds the Server and Server X.509 WSREP TLS Modes for users who require mandatory TLS.
The Server WSREP TLS Mode can be configured by setting the wsrep_ssl_mode system variable to SERVER. In the Server WSREP TLS Mode, TLS is mandatory, but X.509 certificate verification is not performed. The Server WSREP TLS Mode is the default.
The Server X.509 WSREP TLS Mode can be configured by setting the wsrep_ssl_mode system variable to SERVER_X509. In the Server X.509 WSREP TLS Mode, TLS and X.509 certificate verification are mandatory.
TLS is mandatory in both the Server and Server X.509 WSREP TLS Modes. When MariaDB Enterprise Server is not configured to use TLS on a node, the node will fail to connect to the cluster.
Each node obtains its TLS configuration from the node's MariaDB Enterprise Server configuration. The following system variables are used:
ssl_ca: Set this system variable to the path of the CA chain file.
ssl_capath: Optionally set this system variable to the path of the CA chain directory. The directory must have been processed by openssl rehash. When your CA chain is stored in a single file, use the ssl_ca system variable instead.
ssl_cert: Set this system variable to the path of the node's X.509 certificate file.
ssl_key: Set this system variable to the path of the node's private key file.
For example:
[mariadb]
...
wsrep_ssl_mode = SERVER_X509
ssl_ca = /certs/ca-cert.pem
ssl_cert = /certs/server-cert.pem
ssl_key = /certs/server-key.pem
MariaDB Enterprise Cluster, powered by Galera, adds the ssl-mode option, which configures the SST TLS Mode for State Snapshot Transfers (SSTs). The ssl-mode option is supported by the mariabackup and rsync SST methods, which can be selected using the wsrep_sst_method system variable.
The following SST TLS Modes are supported:
DISABLED (or not set): TLS is optional for SST traffic. Each node obtains its TLS configuration from the tca, tcert, and tkey options. When the SST is not configured to use TLS on a node, the node will connect during the SST without TLS. This Backward Compatible SST TLS Mode is backward compatible with ES 10.5 and earlier, so it is suitable for rolling upgrades, and it is the default in ES 10.6.
REQUIRED: TLS is mandatory for SST traffic, but X.509 certificate verification is not performed. Each node obtains its TLS configuration from the node's MariaDB Enterprise Server configuration. When MariaDB Enterprise Server is not configured to use TLS on a node, the node will fail to connect during an SST.
VERIFY_CA or VERIFY_IDENTITY: TLS and X.509 certificate verification are mandatory for SST traffic. Each node obtains its TLS configuration from the node's MariaDB Enterprise Server configuration. When MariaDB Enterprise Server is not configured to use TLS on a node, the node will fail to connect during an SST. Prior to the state transfer, the Donor node will verify the Joiner node's X.509 certificate, and the Joiner node will verify the Donor node's X.509 certificate.
In MariaDB Enterprise Server 10.6, MariaDB Enterprise Cluster adds the Backward Compatible SST TLS Mode for SSTs that use MariaDB Enterprise Backup or Rsync. The Backward Compatible SST TLS Mode is primarily intended for backward compatibility with ES 10.5 and earlier, and it is most useful for users who need to perform a rolling upgrade to ES 10.6.
The Backward Compatible SST TLS Mode is the default, but it can also be configured by setting the ssl_mode option to DISABLED in a configuration file in the [sst] group.
TLS is optional in the Backward Compatible SST TLS Mode. When the SST is not configured to use TLS, the SST will occur without TLS.
Each node obtains its TLS configuration from a configuration file in the [sst] group. The following options are used:
tca: Set this option to the path of the CA chain file.
tcert: Set this option to the path of the node's X.509 certificate file.
tkey: Set this option to the path of the node's private key file.
For example:
[mariadb]
...
wsrep_sst_method = mariabackup
wsrep_sst_auth = mariabackup:mypassword
[sst]
ssl_mode = DISABLED
tca = /certs/ca-cert.pem
tcert = /certs/server-cert.pem
tkey = /certs/server-key.pem
MariaDB Enterprise Cluster adds the Server and Server X.509 SST TLS Modes for SSTs that use MariaDB Enterprise Backup or Rsync. The Server and Server X.509 SST TLS Modes are intended for users who require mandatory TLS.
The Server SST TLS Mode can be configured by setting the ssl_mode option to REQUIRED in a configuration file in the [sst] group. In the Server SST TLS Mode, TLS is mandatory, but X.509 certificate verification is not performed.
The Server X.509 SST TLS Mode can be configured by setting the ssl_mode option to VERIFY_CA or VERIFY_IDENTITY in a configuration file in the [sst] group. In the Server X.509 SST TLS Mode, TLS and X.509 certificate verification are mandatory. Prior to the state transfer, the Donor node will verify the Joiner node's X.509 certificate, and the Joiner node will verify the Donor node's X.509 certificate.
TLS is mandatory in both the Server and Server X.509 SST TLS Modes. When MariaDB Enterprise Server is not configured to use TLS on a node, the node will fail to connect during an SST.
Each node obtains its TLS configuration from the node's MariaDB Enterprise Server configuration. The same ssl_ca (or ssl_capath), ssl_cert, and ssl_key system variables described above are used.
For example:
[mariadb]
...
wsrep_sst_method = mariabackup
wsrep_sst_auth = mariabackup:mypassword
ssl_ca = /certs/ca-cert.pem
ssl_cert = /certs/server-cert.pem
ssl_key = /certs/server-key.pem
[sst]
ssl_mode = VERIFY_CA
When the backward-compatible TLS parameters in the [sst] group are configured, the Server and Server X.509 SST TLS Modes use those parameters instead of the MariaDB Enterprise Server system variables. In that case, the following message will be written to the MariaDB error log:
new ssl configuration options (ssl-ca, ssl-cert and ssl-key) are ignored by SST due to presence of the tca, tcert and/or tkey in the [sst] section
MariaDB Enterprise Cluster, powered by Galera, adds cluster name verification for Joiner nodes, which ensures that the Joiner node does not perform a State Snapshot Transfer (SST) or an Incremental State Transfer (IST) for the wrong cluster.
Prior to performing a State Snapshot Transfer (SST) or Incremental State Transfer (IST), the Donor node checks the wsrep_cluster_name value configured by the Joiner node to verify that the node belongs to the cluster.
MariaDB Enterprise Cluster, powered by Galera, can be configured to write certificate expiration warnings to the MariaDB Error Log when the node's X.509 certificate is close to expiration.
Certificate expiration warnings can be configured using the wsrep_certificate_expiration_hours_warning system variable:
When the wsrep_certificate_expiration_hours_warning system variable is set to 0, certificate expiration warnings are not printed to the MariaDB Error Log.
When the wsrep_certificate_expiration_hours_warning system variable is set to a value N greater than 0, certificate expiration warnings are printed to the MariaDB Error Log when the node's certificate expires in N hours or less.
For example:
[mariadb]
...
# warn 3 days before certificate expiration
wsrep_certificate_expiration_hours_warning=72
MariaDB Enterprise Cluster, powered by Galera, adds new capabilities that allow TLS to be enabled for Enterprise Cluster replication traffic without downtime.
Enabling TLS without downtime relies on two new options implemented for the wsrep_provider_options system variable:
socket.dynamic (not dynamic; default: false): When set to true, the node will allow TLS and non-TLS communications at the same time.
socket.ssl_reload (dynamic; no default): When set to true with the SET GLOBAL statement, Enterprise Cluster dynamically re-initializes its TLS context. This is most useful if you need to replace a certificate that is about to expire without restarting the server. The paths to the certificate and key files cannot be changed dynamically, so the updated certificates and keys must be placed at the same paths defined by the relevant TLS variables.
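For example, after replacing the certificate and key files in place at their existing paths, the TLS context could be reloaded with a statement along these lines (a sketch; the value true follows the description above):
SET GLOBAL wsrep_provider_options = 'socket.ssl_reload=true';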
MariaDB Galera Cluster ensures high availability and disaster recovery through synchronous multi-master replication. It's ideal for active-active setups, providing strong consistency and automatic failover, perfect for critical applications needing continuous uptime.
Common use cases for Galera replication include:
Read Master
Traditional MariaDB master-slave topology, but with Galera all "slave" nodes are capable masters at all times - it is just the application that treats them as slaves. Galera replication can guarantee zero slave lag for such installations and, due to parallel slave applying, much better throughput for the cluster.
WAN Clustering
Synchronous replication works fine over the WAN network. There will be a delay, which is proportional to the network round trip time (RTT), but it only affects the commit operation.
Disaster Recovery
Disaster recovery is a sub-class of WAN replication. Here, one data center is passive and only receives replication events, but does not process any client transactions. Such a remote data center will be up to date at all times, and no data loss can happen. During recovery, the spare site is simply nominated as primary, and the application can continue as normal with a minimal failover delay.
Latency Eraser
With a WAN replication topology, cluster nodes can be located close to clients, so all read and write operations are fast over the local node connection. The RTT-related delay is experienced only at commit time, and even then it is generally acceptable to end users: the usual kill-joy for the end-user experience is slow browsing response time, and read operations remain as fast as they possibly can be.
This page is licensed: CC BY-SA / Gnu FDL
MariaDB ensures high availability with Replication for async/semi-sync data copying and Galera Cluster for sync multi-master with failover and zero data loss.
Galera Load Balancer is a simple load balancer specifically designed for Galera Cluster. Like Galera, it only runs on Linux. Galera Load Balancer is developed and maintained by Codership. Documentation is available on fromdual.com.
Galera Load Balancer is inspired by Pen, which is a generic TCP load balancer. However, since Pen is a generic TCP connection load balancer, the techniques it uses are not well-suited to the particular use case of database servers. Galera Load Balancer is optimized for this type of workload.
Several balancing policies are supported. Each node can be assigned a different weight. Nodes with a higher weight are preferred. Depending on the selected policy, other nodes can even be ignored until the preferred nodes crash.
A lightweight daemon called glbd receives the connections from clients, and it redirects them to nodes. No specific client exists for this daemon: a generic TCP client, such as nc, can be used to send administrative commands and read the usage statistics.
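For illustration only, assuming glbd was started with its control interface listening on 127.0.0.1:4444 (the address, port, and getinfo command are assumptions based on glbd's admin interface):
echo "getinfo" | nc -q 1 127.0.0.1 4444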
This page is licensed: CC BY-SA / Gnu FDL
MariaDB Galera Cluster provides high availability with synchronous replication, while adding asynchronous replication boosts redundancy for disaster recovery or reporting.
MariaDB replication can be used to replicate between MariaDB Galera Cluster and MariaDB Server. This article will discuss how to do that.
Before we set up replication, we need to ensure that the cluster is configured properly. This involves the following steps:
Set log_slave_updates=ON on all nodes in the cluster. See Configuring MariaDB Galera Cluster: Writing Replicated Write Sets to the Binary Log and Using MariaDB Replication with MariaDB Galera Cluster: Configuring a Cluster Node as a Replication Master for more information on why this is important. It is also needed to enable wsrep GTID mode.
Set server_id to the same value on all nodes in the cluster. See Using MariaDB Replication with MariaDB Galera Cluster: Setting server_id on Cluster Nodes for more information on what this means.
If you want to use GTID replication, then you also need to configure some things to enable wsrep GTID mode. For example:
wsrep_gtid_mode=ON needs to be set on all nodes in the cluster.
wsrep_gtid_domain_id needs to be set to the same value on all nodes in the cluster so that each cluster node uses the same domain when assigning GTIDs for Galera Cluster's write sets.
log_slave_updates needs to be enabled on all nodes in the cluster. See MDEV-9855 about that.
And as an extra safety measure:
gtid_domain_id should be set to a different value on all nodes in a given cluster, and each of these values should be different than the configured wsrep_gtid_domain_id value. This is to prevent a node from using the same domain used for Galera Cluster's write sets when assigning GTIDs for non-Galera transactions, such as DDL executed with wsrep_sst_method=RSU set or DML executed with wsrep_on=OFF set.
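A minimal sketch combining the cluster-side settings above (all values are illustrative):
[mariadb]
...
log_slave_updates = ON
server_id = 100              # same value on every node of this cluster
wsrep_gtid_mode = ON
wsrep_gtid_domain_id = 1     # same value on every node of this cluster
gtid_domain_id = 11          # unique per node, and different from wsrep_gtid_domain_id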
Before we set up replication, we also need to ensure that the MariaDB Server replica is configured properly. This involves the following steps:
Set server_id to a different value than the one that the cluster nodes are using.
Set gtid_domain_id to a value that is different than the wsrep_gtid_domain_id and gtid_domain_id values that the cluster nodes are using.
Set log_bin and log_slave_updates=ON if you want the replica to log the transactions that it replicates.
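And a matching sketch for the MariaDB Server replica (again, all values are illustrative):
[mariadb]
...
server_id = 200          # different from the cluster's server_id
gtid_domain_id = 20      # different from the cluster's wsrep_gtid_domain_id and gtid_domain_id values
log_bin
log_slave_updates = ON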
Our process to set up replication is going to be similar to the process described at Setting up a Replication Slave with mariadb-backup, but it will be modified a bit to work in this context.
The very first step is to start the nodes in the first cluster. The first node will have to be bootstrapped. The other nodes can be started normally.
Once the nodes are started, you need to pick a specific node that will act as the replication primary for the MariaDB Server.
The first step is to simply take and prepare a fresh full backup of the node that you have chosen to be the replication primary. For example:
$ mariadb-backup --backup \
--target-dir=/var/mariadb/backup/ \
--user=mariadb-backup --password=mypassword
And then you would prepare the backup as you normally would. For example:
$ mariadb-backup --prepare \
--target-dir=/var/mariadb/backup/
Once the backup is done and prepared, you can copy it to the MariaDB Server that will be acting as replica. For example:
$ rsync -avrP /var/mariadb/backup dc2-dbserver1:/var/mariadb/backup
At this point, you can restore the backup to the datadir, as you normally would. For example:
$ mariadb-backup --copy-back \
--target-dir=/var/mariadb/backup/
And adjusting file permissions, if necessary:
$ chown -R mysql:mysql /var/lib/mysql/
Now that the backup has been restored to the MariaDB Server replica, you can start the MariaDB Server process.
Before the MariaDB Server replica can begin replicating from the cluster's primary, you need to create a user account on the primary that the replica can use to connect, and you need to grant the user account the REPLICATION SLAVE privilege. For example:
CREATE USER 'repl'@'dc2-dbserver1' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'dc2-dbserver1';
At this point, you need to get the replication coordinates of the primary from the original backup.
The coordinates will be in the xtrabackup_binlog_info file.
mariadb-backup dumps replication coordinates in two forms: GTID strings and binary log file and position coordinates, like the ones you would normally see from SHOW MASTER STATUS output. In this case, it is probably better to use the GTID coordinates.
For example:
mariadb-bin.000096 568 0-1-2
Regardless of the coordinates you use, you will have to set up the primary connection using CHANGE MASTER TO and then start the replication threads with START SLAVE.
If you want to use GTIDs, then you will have to first set gtid_slave_pos to the GTID coordinates that we pulled from the xtrabackup_binlog_info file, and we would set MASTER_USE_GTID=slave_pos in the CHANGE MASTER TO command. For example:
SET GLOBAL gtid_slave_pos = "0-1-2";
CHANGE MASTER TO
MASTER_HOST="c1dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_USE_GTID=slave_pos;
START SLAVE;
If you want to use the binary log file and position coordinates, then you would set MASTER_LOG_FILE and MASTER_LOG_POS in the CHANGE MASTER TO command to the file and position coordinates that we pulled from the xtrabackup_binlog_info file. For example:
CHANGE MASTER TO
MASTER_HOST="c1dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_LOG_FILE='mariadb-bin.000096',
MASTER_LOG_POS=568;
START SLAVE;
You should be done setting up the replica now, so you should check its status with SHOW SLAVE STATUS. For example:
SHOW SLAVE STATUS\G
Now that the MariaDB Server is up, ensure that it does not start accepting writes yet if you want to set up circular replication between the cluster and the MariaDB Server.
You can also set up circular replication between the cluster and MariaDB Server, which means that the MariaDB Server replicates from the cluster, and the cluster also replicates from the MariaDB Server.
Before circular replication can begin, you also need to create a user account on the MariaDB Server, since it will be acting as the replication primary to the cluster's replica, and you need to grant the user account the REPLICATION SLAVE privilege. For example:
CREATE USER 'repl'@'c1dbserver1' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'c1dbserver1';
How this is done would depend on whether you want to use the GTID coordinates or the binary log file and position coordinates.
Regardless, you need to ensure that the second cluster is not accepting any writes other than those that it replicates from the cluster at this stage.
To get the GTID coordinates on the MariaDB server, you can check gtid_current_pos by executing:
SHOW GLOBAL VARIABLES LIKE 'gtid_current_pos';
Then, on the node acting as a replica in the cluster, you can set up replication by setting gtid_slave_pos to the GTID that was returned and then executing CHANGE MASTER TO:
SET GLOBAL gtid_slave_pos = "0-1-2";
CHANGE MASTER TO
MASTER_HOST="c2dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_USE_GTID=slave_pos;
START SLAVE;
To get the binary log file and position coordinates on the MariaDB server, you can execute SHOW MASTER STATUS:
SHOW MASTER STATUS;
Then, on the node acting as a replica in the cluster, you would set master_log_file and master_log_pos in the CHANGE MASTER TO command. For example:
CHANGE MASTER TO
MASTER_HOST="c2dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_LOG_FILE='mariadb-bin.000096',
MASTER_LOG_POS=568;
START SLAVE;
You should be done setting up the circular replication on the node in the first cluster now, so you should check its status with SHOW SLAVE STATUS. For example:
SHOW SLAVE STATUS\G
This page is licensed: CC BY-SA / Gnu FDL
MariaDB replication can be used for replication between two MariaDB Galera Clusters. This article will discuss how to do that.
Before we set up replication, we need to ensure that the clusters are configured properly. This involves the following steps:
Set log_slave_updates=ON on all nodes in both clusters. See Configuring MariaDB Galera Cluster: Writing Replicated Write Sets to the Binary Log and Using MariaDB Replication with MariaDB Galera Cluster: Configuring a Cluster Node as a Replication Master for more information on why this is important. This is also needed to enable wsrep GTID mode.
Set server_id to the same value on all nodes in a given cluster, but be sure to use a different value in each cluster. See Using MariaDB Replication with MariaDB Galera Cluster: Setting server_id on Cluster Nodes for more information on what this means.
If you want to use GTID replication, then you also need to configure some things to enable wsrep GTID mode. For example:
wsrep_gtid_mode=ON needs to be set on all nodes in each cluster.
wsrep_gtid_domain_id needs to be set to the same value on all nodes in a given cluster so that each cluster node uses the same domain when assigning GTIDs for Galera Cluster's write sets. Each cluster should have this set to a different value so that each cluster uses different domains when assigning GTIDs for their write sets.
log_slave_updates needs to be enabled on all nodes in the cluster. See MDEV-9855 about that.
And as an extra safety measure:
gtid_domain_id should be set to a different value on all nodes in a given cluster, and each of these values should be different than the configured wsrep_gtid_domain_id value. This is to prevent a node from using the same domain used for Galera Cluster's write sets when assigning GTIDs for non-Galera transactions, such as DDL executed with wsrep_sst_method=RSU set or DML executed with wsrep_on=OFF set.
Our process to set up replication is going to be similar to the process described at Setting up a Replication Slave with mariadb-backup, but it will be modified a bit to work in this context.
The very first step is to start the nodes in the first cluster. The first node will have to be bootstrapped. The other nodes can be started normally.
Once the nodes are started, you need to pick a specific node that will act as the replication primary for the second cluster.
The first step is to simply take and prepare a fresh full backup of the node that you have chosen to be the replication primary. For example:
$ mariadb-backup --backup \
--target-dir=/var/mariadb/backup/ \
--user=mariadb-backup --password=mypassword
And then you would prepare the backup as you normally would. For example:
$ mariadb-backup --prepare \
--target-dir=/var/mariadb/backup/
Once the backup is done and prepared, you can copy it to the node in the second cluster that will be acting as replica. For example:
$ rsync -avrP /var/mariadb/backup c2dbserver:/var/mariadb/backup
At this point, you can restore the backup to the datadir, as you normally would. For example:
$ mariadb-backup --copy-back \
--target-dir=/var/mariadb/backup/
And adjusting file permissions, if necessary:
$ chown -R mysql:mysql /var/lib/mysql/
Now that the backup has been restored to the second cluster's replica, you can start the server by bootstrapping the node.
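On systemd-based systems, bootstrapping is typically done with the galera_new_cluster script (a minimal sketch; your platform's procedure may differ):
$ galera_new_cluster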
Before the second cluster's replica can begin replicating from the first cluster's primary, you need to create a user account on the primary that the replica can use to connect, and you need to grant the user account the REPLICATION SLAVE privilege. For example:
CREATE USER 'repl'@'c2dbserver1' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'c2dbserver1';
At this point, you need to get the replication coordinates of the primary from the original backup.
The coordinates will be in the xtrabackup_binlog_info file.
mariadb-backup dumps replication coordinates in two forms: GTID strings and binary log file and position coordinates, like the ones you would normally see from SHOW MASTER STATUS output. In this case, it is probably better to use the GTID coordinates.
For example:
mariadb-bin.000096 568 0-1-2
Regardless of the coordinates you use, you will have to set up the primary connection using CHANGE MASTER TO and then start the replication threads with START SLAVE.
If you want to use GTIDs, then you will have to first set gtid_slave_pos to the GTID coordinates that you pulled from the xtrabackup_binlog_info file, and set MASTER_USE_GTID=slave_pos in the CHANGE MASTER TO command. For example:
SET GLOBAL gtid_slave_pos = "0-1-2";
CHANGE MASTER TO
MASTER_HOST="c1dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_USE_GTID=slave_pos;
START SLAVE;
If you want to use the binary log file and position coordinates, then you would set MASTER_LOG_FILE and MASTER_LOG_POS in the CHANGE MASTER TO command to the file and position coordinates that you pulled from the xtrabackup_binlog_info file. For example:
CHANGE MASTER TO
MASTER_HOST="c1dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_LOG_FILE='mariadb-bin.000096',
MASTER_LOG_POS=568;
START SLAVE;
You should be done setting up the replica now, so check its status with SHOW SLAVE STATUS. For example:
SHOW SLAVE STATUS\G
If the replica is replicating normally, then the next step would be to start the MariaDB Server process on the other nodes in the second cluster.
You can also set up circular replication between the two clusters, which means that the second cluster replicates from the first cluster, and the first cluster also replicates from the second cluster.
If you want to set up circular replication, then now that the second cluster is up, ensure that it does not start accepting writes yet.
Before circular replication can begin, you also need to create a user account on the second cluster's primary that the first cluster's replica can use to connect, and you need to grant the user account the REPLICATION SLAVE privilege. For example:
CREATE USER 'repl'@'c1dbserver1' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'c1dbserver1';
How this is done would depend on whether you want to use the GTID coordinates or the binary log file and position coordinates.
Regardless, you need to ensure that the second cluster is not accepting any writes other than those that it replicates from the first cluster at this stage.
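One illustrative way to keep the second cluster from accepting client writes during this stage is to make its nodes read-only; this does not affect the replica's SQL thread or the Galera applier threads, though users with the SUPER privilege can still write:
SET GLOBAL read_only = ON;
-- once circular replication is running, re-enable writes:
SET GLOBAL read_only = OFF;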
To get the GTID coordinates on the second cluster, you can check gtid_current_pos by executing:
SHOW GLOBAL VARIABLES LIKE 'gtid_current_pos';
Then on the first cluster, you can set up replication by setting gtid_slave_pos to the GTID that was returned and then executing CHANGE MASTER TO:
SET GLOBAL gtid_slave_pos = "0-1-2";
CHANGE MASTER TO
MASTER_HOST="c2dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_USE_GTID=slave_pos;
START SLAVE;
To get the binary log file and position coordinates on the second cluster, you can execute SHOW MASTER STATUS:
SHOW MASTER STATUS;
Then on the first cluster, you would set MASTER_LOG_FILE and MASTER_LOG_POS in the CHANGE MASTER TO command. For example:
CHANGE MASTER TO
MASTER_HOST="c2dbserver1",
MASTER_PORT=3310,
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_LOG_FILE='mariadb-bin.000096',
MASTER_LOG_POS=568;
START SLAVE;
You should now be done setting up circular replication on the node in the first cluster, so check its status with SHOW SLAVE STATUS. For example:
SHOW SLAVE STATUS\G
This page is licensed: CC BY-SA / Gnu FDL
MariaDB's global transaction IDs (GTIDs) are very useful when used with MariaDB replication, which is primarily what that feature was developed for. Galera Cluster, on the other hand, was developed by Codership for all MySQL and MariaDB variants, and the initial development of the technology pre-dated MariaDB's GTID implementation. As a side effect, MariaDB Galera Cluster (at least until MariaDB 10.5.1) only partially supports MariaDB's GTID implementation.
Galera Cluster has its own certification-based replication method that is substantially different from MariaDB replication. However, it would still be beneficial if MariaDB Galera Cluster was able to associate a Galera Cluster write set with a GTID that is globally unique but that is also consistent for that write set on each cluster node.
Before MariaDB 10.5.1, MariaDB Galera Cluster did not replicate the original GTID with the write set except in cases where the transaction was originally applied by a slave SQL thread. Each node independently generated its own GTID for each write set in most cases. See MDEV-20720.
MariaDB has a feature called wsrep GTID mode, controlled by the wsrep_gtid_mode system variable. When this mode is enabled, MariaDB uses some tricks to try to associate each Galera Cluster write set with a GTID that is globally unique, but that is also consistent for that write set on each cluster node. These tricks work in some cases, but GTIDs can still become inconsistent among cluster nodes.
Several things need to be configured for wsrep GTID mode to work:
wsrep_gtid_mode=ON needs to be set on all nodes in the cluster.
wsrep_gtid_domain_id needs to be set to the same value on all nodes in a given cluster, so that each cluster node uses the same domain when assigning GTIDs for Galera Cluster's write sets. When replicating between two clusters, each cluster should have this set to a different value, so that each cluster uses different domains when assigning GTIDs for their write sets.
log_slave_updates needs to be enabled on all nodes in the cluster. See MDEV-9855.
And as an extra safety measure:
gtid_domain_id should be set to a different value on all nodes in a given cluster, and each of these values should be different than the configured wsrep_gtid_domain_id value. This is to prevent a node from using the same domain used for Galera Cluster's write sets when assigning GTIDs for non-Galera transactions, such as DDL executed with wsrep_OSU_method=RSU set or DML executed with wsrep_on=OFF set.
If you want to prevent transactions from accidentally being assigned local GTIDs, you can set wsrep_mode=DISALLOW_LOCAL_GTID.
In this case, such statements fail with an error: ERROR 4165 (HY000): Galera replication not supported
You can temporarily override this for a session with SET sql_log_bin = 0;
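For illustration, a session on a node running with wsrep_mode=DISALLOW_LOCAL_GTID might look like this (the table here is hypothetical):
INSERT INTO test.t1 VALUES (1);   -- t1 is a MyISAM table, so this write is not replicated by Galera
-- ERROR 4165 (HY000): Galera replication not supported
SET SESSION sql_log_bin = 0;      -- skip binary logging for this session only
INSERT INTO test.t1 VALUES (1);   -- now succeeds without assigning a local GTID
SET SESSION sql_log_bin = 1;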
For information on setting server_id, see Using MariaDB Replication with MariaDB Galera Cluster: Setting server_id on Cluster Nodes.
Until MariaDB 10.5.1, there were known cases where GTIDs could become inconsistent across the cluster nodes.
A known issue (fixed in MariaDB 10.5.1) is:
Implicitly dropped temporary tables can make GTIDs inconsistent. See MDEV-14153 and MDEV-20720.
This does not necessarily imply that wsrep GTID mode works perfectly in all other situations. If you discover any other issues with it, please report a bug.
If a Galera Cluster node is also a replication slave, then that node's slave SQL thread will be applying transactions that it replicates from its replication master. If the node has log_slave_updates=ON set, then each transaction that the slave SQL thread applies will also generate a Galera Cluster write set that is replicated to the rest of the nodes in the cluster.
In MariaDB 10.1.30 and earlier, the node acting as slave would apply the transaction with the original GTID that it received from the master, and the other Galera Cluster nodes would generate their own GTIDs for the transaction when they replicated the write set.
In MariaDB 10.1.31 and later, the node acting as slave will include the transaction's original Gtid_Log_Event in the replicated write set, so all nodes should associate the write set with its original GTID. See MDEV-13431 about that.
This page is licensed: CC BY-SA / Gnu FDL
MariaDB replication and MariaDB Galera Cluster can be used together. However, there are some things that have to be taken into account.
If you want to use MariaDB replication and MariaDB Galera Cluster together, then the following tutorials may be useful:
If a Galera Cluster node is also a replication master, then some additional configuration may be needed.
Like with MariaDB replication, write sets that are received by a node with Galera Cluster's certification-based replication are not written to the binary log by default.
If the node is a replication master, then its replication slaves only replicate transactions that are in the binary log, so this means that the transactions that correspond to Galera Cluster write-sets would not be replicated by any replication slaves by default. If you would like a node to write its replicated write sets to the binary log, then you will have to set log_slave_updates=ON. If the node has any replication slaves, then this would also allow those slaves to replicate the transactions that corresponded to those write sets.
See Configuring MariaDB Galera Cluster: Writing Replicated Write Sets to the Binary Log for more information.
If a Galera Cluster node is also a replication slave, then some additional configuration may be needed.
If the node is a replication slave, then the node's slave SQL thread will be applying transactions that it replicates from its replication master. Transactions applied by the slave SQL thread will only generate Galera Cluster write-sets if the node has log_slave_updates=ON set. Therefore, in order to replicate these transactions to the rest of the nodes in the cluster, log_slave_updates=ON must be set.
If the node is a replication slave, then it is probably also a good idea to enable wsrep_restart_slave. When this is enabled, the node will restart its slave threads whenever it rejoins the cluster.
Both MariaDB replication and MariaDB Galera Cluster support replication filters, so extra caution must be taken when using all of these features together. See Configuring MariaDB Galera Cluster: Replication Filters for more details on how MariaDB Galera Cluster interprets replication filters.
It is most common to set server_id to the same value on each node in a given cluster. Since MariaDB Galera Cluster uses a virtually synchronous certification-based replication, all nodes should have the same data, so in a logical sense, a cluster can be considered in many cases a single logical server for purposes related to MariaDB replication. The binary logs of each cluster node might even contain roughly the same transactions and GTIDs if log_slave_updates=ON is set and if wsrep GTID mode is enabled and if non-Galera transactions are not being executed on any nodes.
There are cases when it might make sense to set a different server_id value on each node in a given cluster. For example, if log_slave_updates=OFF is set and if another cluster or a standard MariaDB Server is using multi-source replication to replicate transactions from each cluster node individually, then it would be required to set a different server_id value on each node for this to work.
Keep in mind that if replication is set up in a scenario where each cluster node has a different server_id value, and if the replication topology is set up in such a way that a cluster node can replicate the same transactions through Galera and through MariaDB replication, then you may need to configure the cluster node to ignore these transactions when setting up MariaDB replication. You can do so by setting IGNORE_SERVER_IDS to the server IDs of all nodes in the same cluster when executing CHANGE MASTER TO. For example, this might be required when circular replication is set up between two separate clusters, and each cluster node has a different server_id value, and each cluster has log_slave_updates=ON set.
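As a sketch, if the other nodes in the replica's own cluster use the hypothetical server IDs 2 and 3, the CHANGE MASTER TO statement might include:
CHANGE MASTER TO
MASTER_HOST="c2dbserver1",
MASTER_USER="repl",
MASTER_PASSWORD="password",
MASTER_USE_GTID=slave_pos,
IGNORE_SERVER_IDS=(2,3);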
This page is licensed: CC BY-SA / Gnu FDL
Galera Cluster for MariaDB offers synchronous multi-master replication with high availability, no data loss, and simplified, consistent scaling.
Galera status variables can be viewed with the SHOW STATUS statement.
SHOW STATUS LIKE 'wsrep%';
See also the Full list of MariaDB options, system and status variables.
MariaDB Galera Cluster has the following status variables:
wsrep_applier_thread_count
Description: The current number of applier threads, i.e. how many slave threads of this type there are.
wsrep_apply_oooe
Description: How often write sets have been applied out of order, an indicator of parallelization efficiency.
wsrep_apply_oool
Description: How often write sets with a higher sequence number were applied before ones with a lower sequence number, implying slow write sets.
wsrep_apply_window
Description: Average distance between highest and lowest concurrently applied seqno.
wsrep_cert_deps_distance
Description: Average distance between the highest and the lowest sequence numbers that can possibly be applied in parallel, or the potential degree of parallelization.
wsrep_cert_index_size
Description: The number of entries in the certification index.
wsrep_cert_interval
Description: Average number of transactions received while a transaction replicates.
wsrep_cluster_capabilities
Description:
wsrep_cluster_conf_id
Description: Total number of cluster membership changes that have taken place.
wsrep_cluster_size
Description: Number of nodes currently in the cluster.
wsrep_cluster_state_uuid
Description: UUID state of the cluster. If it matches the value in wsrep_local_state_uuid, the local and cluster nodes are in sync.
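For example, to check whether the local node is in sync with the cluster, you can compare the two values (a minimal sketch):
SHOW GLOBAL STATUS WHERE Variable_name IN
('wsrep_cluster_state_uuid', 'wsrep_local_state_uuid');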
wsrep_cluster_status
Description: Cluster component status. Possible values are PRIMARY (primary group configuration, quorum present), NON_PRIMARY (non-primary group configuration, quorum lost), or DISCONNECTED (not connected to group, retrying).
wsrep_cluster_weight
Description: The total weight of the current members in the cluster. The value is counted as a sum of pc.weight of the nodes in the current primary component.
wsrep_commit_oooe
Description: How often a transaction was committed out of order.
wsrep_commit_oool
Description: This variable currently has no meaning.
wsrep_commit_window
Description: Average distance between highest and lowest concurrently committed seqno.
wsrep_connected
Description: Whether or not MariaDB is connected to the wsrep provider. Possible values are ON or OFF.
wsrep_desync_count
Description: Returns the number of operations in progress that require the node to temporarily desync from the cluster.
wsrep_evs_delayed
Description: Provides a comma separated list of all the nodes this node has registered on its delayed list.
wsrep_evs_evict_list
Description: Lists the UUIDs of all nodes evicted from the cluster. Evicted nodes cannot rejoin the cluster until you restart their mysqld processes.
wsrep_evs_repl_latency
Description: This status variable provides figures for the replication latency on group communication. It measures latency (in seconds) from the time point when a message is sent out to the time point when a message is received. As replication is a group operation, this essentially gives you the slowest ACK and longest RTT in the cluster. The format is min/avg/max/stddev
wsrep_evs_state
Description: Shows the internal state of the EVS protocol.
wsrep_flow_control_paused
Description: The fraction of time since the last FLUSH STATUS command that replication was paused due to flow control.
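Because the value is measured relative to the last FLUSH STATUS, a typical sampling pattern is (a sketch):
FLUSH STATUS;
-- wait for the measurement interval, then:
SHOW STATUS LIKE 'wsrep_flow_control_paused';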
wsrep_flow_control_paused_ns
Description: The total time spent in a paused state measured in nanoseconds.
wsrep_flow_control_recv
Description: Number of FC_PAUSE events received as well as sent since the most recent status query.
wsrep_flow_control_sent
Description: Number of FC_PAUSE events sent since the most recent status query
wsrep_gcomm_uuid
Description: The UUID assigned to the node.
wsrep_incoming_addresses
Description: Comma-separated list of incoming server addresses in the cluster component.
wsrep_last_committed
Description: Sequence number of the most recently committed transaction.
wsrep_local_bf_aborts
Description: Total number of local transactions aborted by slave transactions while being executed
wsrep_local_cached_downto
Description: The lowest sequence number, or seqno, in the write-set cache (GCache).
wsrep_local_cert_failures
Description: Total number of local transactions that failed the certification test.
wsrep_local_commits
Description: Total number of local transactions committed on the node.
wsrep_local_index
Description: The node's index in the cluster. The index is zero-based.
wsrep_local_recv_queue
Description: Current length of the receive queue, which is the number of write sets waiting to be applied.
wsrep_local_recv_queue_avg
Description: Average length of the receive queue since the most recent status query. If this value is noticeably larger than zero, the node is likely to be overloaded and cannot apply the write sets as quickly as they arrive, resulting in replication throttling.
wsrep_local_recv_queue_max
Description: The maximum length of the recv queue since the last FLUSH STATUS command.
wsrep_local_recv_queue_min
Description: The minimum length of the recv queue since the last FLUSH STATUS command.
wsrep_local_replays
Description: Total number of transaction replays due to asymmetric lock granularity.
wsrep_local_send_queue
Description: Current length of the send queue, which is the number of write sets waiting to be sent.
wsrep_local_send_queue_avg
Description: Average length of the send queue since the most recent status query. If this value is noticeably larger than zero, there are most likely network throughput or replication throttling issues.
wsrep_local_send_queue_max
Description: The maximum length of the send queue since the last FLUSH STATUS command.
wsrep_local_send_queue_min
Description: The minimum length of the send queue since the last FLUSH STATUS command.
wsrep_local_state
Description: Internal Galera Cluster FSM state number.
wsrep_local_state_comment
Description: Human-readable explanation of the state.
wsrep_local_state_uuid
Description: The node's UUID state. If it matches the value in wsrep_cluster_state_uuid, the local and cluster nodes are in sync.
wsrep_open_connections
Description: The number of open connection objects inside the wsrep provider.
wsrep_open_transactions
Description: The number of locally running transactions that have been registered inside the wsrep provider. This means transactions that have made operations that have caused write set population to happen. Transactions that are read-only are not counted.
wsrep_protocol_version
Description: The wsrep protocol version being used.
wsrep_provider_name
Description: The name of the provider. The default is "Galera".
wsrep_provider_vendor
Description: The vendor string.
wsrep_provider_version
Description: The version number of the Galera wsrep provider
wsrep_ready
Description: Whether or not the Galera wsrep provider is ready. Possible values are ON or OFF.
wsrep_received
Description: Total number of write sets received from other nodes.
wsrep_received_bytes
Description: Total size in bytes of all write sets received from other nodes.
wsrep_repl_data_bytes
Description: Total size of data replicated.
wsrep_repl_keys
Description: Total number of keys replicated.
wsrep_repl_keys_bytes
Description: Total size of keys replicated.
wsrep_repl_other_bytes
Description: Total size of other bits replicated.
wsrep_replicated
Description: Total number of write sets replicated to other nodes.
wsrep_replicated_bytes
Description: Total size in bytes of all write sets replicated to other nodes.
wsrep_rollbacker_thread_count
Description: The current number of rollbacker threads, i.e. how many slave threads of this type there are.
wsrep_thread_count
Description: Total number of wsrep (applier/rollbacker) threads.
This page is licensed: CC BY-SA / Gnu FDL
This page documents system variables related to Galera Cluster. For options that are not system variables, see Galera Options.
See Server System Variables for a complete list of system variables and instructions on setting them.
Also see the Full list of MariaDB options, system and status variables.
wsrep_allowlist
Description: Allowed IP addresses, comma delimited.
Note that when setting gmcast.listen_addr=tcp://[::]:4567 on a dual-stack system (e.g. Linux with net.ipv6.bindv6only = 0), IPv4 addresses need to be allowlisted using the IPv4-mapped IPv6 address (e.g. ::ffff:1.2.3.4).
Commandline: --wsrep-allowlist=value1[,value2...]
Scope: Global
Dynamic: No
Data Type: String
Default Value: None
Introduced: MariaDB 10.10
wsrep_auto_increment_control
Description: If set to 1 (the default), will automatically adjust the auto_increment_increment and auto_increment_offset variables according to the size of the cluster, and when the cluster size changes. This avoids replication conflicts due to auto_increment. In a primary-replica environment, can be set to OFF.
Commandline: --wsrep-auto-increment-control[={0|1}]
Scope: Global
Dynamic: Yes
Data Type: Boolean
Default Value: ON
wsrep_causal_reads
Description: If set to ON (OFF is default), enforces read-committed characteristics across the cluster. In the case that a primary applies an event more quickly than a replica, the two could briefly be out-of-sync. With this variable set to ON, the replica will wait for the event to be applied before processing further queries. Setting to ON also results in larger read latencies. Deprecated by wsrep_sync_wait=1.
Commandline: --wsrep-causal-reads[={0|1}]
Scope: Session
Dynamic: Yes
Data Type: Boolean
Default Value: OFF
Removed: MariaDB 11.3.0
wsrep_certificate_expiration_hours_warning
wsrep_certification_rules
Description: Certification rules to use in the cluster. Possible values are:
strict: Stricter rules that could result in more certification failures. For example, with foreign keys, certification failure could result if different nodes receive non-conflicting insertions at about the same time that point to the same row in a parent table.
optimized: Relaxed rules that allow more concurrency and cause fewer certification failures.
Commandline: --wsrep-certification-rules
Scope: Global
Dynamic: Yes
Data Type: Enumeration
Default Value: strict
Valid Values: strict, optimized
wsrep_certify_nonPK
Description: When set to ON (the default), Galera will still certify transactions for tables with no primary key. However, this can still cause undefined behavior in some circumstances. It is recommended to define primary keys for every InnoDB table when using Galera.
Commandline: --wsrep-certify-nonPK[={0|1}]
Scope: Global
Dynamic: Yes
Data Type: Boolean
Default Value: ON
wsrep_cluster_address
Description: The addresses of cluster nodes to connect to when starting up.
Good practice is to specify all possible cluster nodes, in the form gcomm://<node1 or ip:port>,<node2 or ip2:port>,<node3 or ip3:port>.
Specifying an empty IP (gcomm://) will cause the node to start a new cluster (which should not be done in the my.cnf file, as after each restart the server will not rejoin the current cluster).
The variable can be changed at runtime in some configurations, and will result in the node closing the connection to any current cluster, and connecting to the new address.
If specifying a port, note that this is the Galera port, not the MariaDB port.
For example:
gcomm://192.168.0.1,192.168.0.2,192.168.0.3
gcomm://192.168.0.1:1234,192.168.0.2:1234,192.168.0.3:1234?gmcast.listen_addr=tcp://0.0.0.0:1234
See also gmcast.listen_addr
Commandline: --wsrep-cluster-address=value
Scope: Global
Dynamic: No
Data Type: String
wsrep_cluster_name
Description: The name of the cluster. Nodes cannot connect to clusters with a different name, so it needs to be identical on all nodes in the same cluster. The variable can be set dynamically, but note that doing so may be unsafe and cause an outage, as the wsrep provider is unloaded and reloaded.
Commandline: --wsrep-cluster-name=value
Scope: Global
Dynamic: Yes
Data Type: String
Default Value: my_wsrep_cluster
wsrep_convert_LOCK_to_trx
Description: Converts LOCK/UNLOCK TABLES statements to BEGIN and COMMIT. Used mainly to get older applications working with a multi-primary setup. Use carefully, as it can result in extremely large writesets.
Commandline: --wsrep-convert-LOCK-to-trx[={0|1}]
Scope: Global
Dynamic: Yes
Data Type: Boolean
Default Value: OFF
wsrep_data_home_dir
Description: Directory where wsrep provider will store its internal files.
Commandline: --wsrep-data-home-dir=value
Scope: Global
Dynamic: No
Data Type: String
Default Value: The datadir variable value.
wsrep_dbug_option
Description: Unused. The mechanism to pass the DBUG options to the wsrep provider hasn't been implemented.
Commandline: --wsrep-dbug-option=value
Scope: Global
Dynamic: Yes
Data Type: String
wsrep_debug
Description: WSREP debug level logging.
Before MariaDB 10.6.1, DDL was only logged on the originating node. From MariaDB 10.6.1, it is logged on other nodes as well.
It is an enum. Valid values are:
0: NONE - Off (default)
1: SERVER - MariaDB server code contains WSREP_DEBUG log writes, and these will be added to the server error log
2: TRANSACTION - Logging from wsrep-lib transactions is added to the error log
3: STREAMING - Logging from streaming transactions in wsrep-lib is added to the error log
4: CLIENT - Logging from wsrep-lib client state is added to the error log
Commandline:
--wsrep-debug[={NONE|SERVER|TRANSACTION|STREAMING|CLIENT}]
Scope: Global
Dynamic: Yes
Data Type: Enumeration
Default Value: NONE
Valid Values: NONE, SERVER, TRANSACTION, STREAMING, CLIENT
wsrep_desync
Description: When a node receives more write-sets than it can apply, the transactions are placed in a received queue. If the node's received queue has too many write-sets waiting to be applied (as defined by the gcs.fc_limit WSREP provider option), then the node would usually engage Flow Control. However, when this option is set to ON, Flow Control will be disabled for the desynced node. The desynced node works through the received queue until it reaches a more manageable size. The desynced node continues to receive write-sets from the other nodes in the cluster. The other nodes in the cluster do not wait for the desynced node to catch up, so the desynced node can fall even further behind the other nodes in the cluster. You can check if a node is desynced by checking if the wsrep_local_state_comment status variable is equal to Donor/Desynced.
Commandline: --wsrep-desync[={0|1}]
Scope: Global
Dynamic: Yes
Data Type: Boolean
Default Value: OFF
wsrep_dirty_reads
Description: By default, when not synchronized with the group (wsrep_ready=OFF), a node will reject all queries other than SET and SHOW. If wsrep_dirty_reads is set to 1, queries that do not change data, like SELECT queries (dirty reads), creation of prepared statements, etc., will be accepted by the node.
Commandline: --wsrep-dirty-reads[={0|1}]
Scope: Global,Session
Dynamic: Yes
Data Type: Boolean
Default Value: OFF
Valid Values: ON, OFF
wsrep_drupal_282555_workaround
Description: If set to ON, a workaround for Drupal/MySQL/InnoDB bug #282555 is enabled. This is a bug where, in some cases, when inserting a DEFAULT value into an AUTO_INCREMENT column, a duplicate key error may be returned.
Commandline: --wsrep-drupal-282555-workaround[={0|1}]
Scope: Global
Dynamic: Yes
Data Type: Boolean
Default Value: OFF
wsrep_forced_binlog_format
Description: A binary log format that will override any session binlog format settings.
Commandline: --wsrep-forced-binlog-format=value
Scope: Global
Dynamic: Yes
Default Value: NONE
Data Type: Enum
Valid Values: STATEMENT, ROW, MIXED or NONE (which resets the forced binlog format state).
wsrep_gtid_domain_id
Description: This system variable defines the GTID domain ID that is used for wsrep GTID mode.
When wsrep_gtid_mode is set to ON, wsrep_gtid_domain_id is used in place of gtid_domain_id for all Galera Cluster write sets.
When wsrep_gtid_mode is set to OFF, wsrep_gtid_domain_id is simply ignored to allow for backward compatibility.
There are some additional requirements that need to be met in order for this mode to generate consistent GTIDs. For more information, see Using MariaDB GTIDs with MariaDB Galera Cluster.
Commandline: --wsrep-gtid-domain-id=#
Scope: Global
Dynamic: Yes
Data Type: numeric
Default Value: 0
Range: 0 to 4294967295
wsrep_gtid_mode
Description: Wsrep GTID mode attempts to keep GTIDs consistent for Galera Cluster write sets on all cluster nodes. GTID state is initially copied to a joiner node during an SST. If you are planning to use Galera Cluster with MariaDB replication, then wsrep GTID mode can be helpful.
When wsrep_gtid_mode is set to ON, wsrep_gtid_domain_id is used in place of gtid_domain_id for all Galera Cluster write sets.
When wsrep_gtid_mode is set to OFF, wsrep_gtid_domain_id is simply ignored to allow for backward compatibility.
There are some additional requirements that need to be met in order for this mode to generate consistent GTIDs. For more information, see Using MariaDB GTIDs with MariaDB Galera Cluster.
Commandline: --wsrep-gtid-mode[={0|1}]
Scope: Global
Dynamic: Yes
Data Type: boolean
Default Value: OFF
wsrep_gtid_seq_no
Description: Internal server usage, manually set WSREP GTID seqno.
Commandline: None
Scope: Session only
Dynamic: Yes
Data Type: numeric
Range: 0 to 18446744073709551615
Introduced: MariaDB 10.5.1
wsrep_ignore_apply_errors
Description: Bitmask determining whether errors are ignored, or reported back to the provider.
0: No errors are skipped.
1: Ignore some DDL errors (DROP DATABASE, DROP TABLE, DROP INDEX, ALTER TABLE).
2: Skip DML errors (Only ignores DELETE errors).
4: Ignore all DDL errors.
Commandline: --wsrep-ignore-apply-errors
Scope: Global
Dynamic: Yes
Data Type: Numeric
Default Value: 7
Range: 0 to 7
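Since the value is a bitmask, settings can be combined. For example, to ignore the listed DDL errors and the DELETE DML errors (1 + 2), but not all DDL errors:
SET GLOBAL wsrep_ignore_apply_errors = 3;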
wsrep_load_data_splitting
Description: If set to ON, LOAD DATA INFILE supports big data files by introducing transaction splitting. The setting has been deprecated in Galera 4, and defaults to OFF.
Commandline: --wsrep-load-data-splitting[={0|1}]
Scope: Global
Dynamic: Yes
Data Type: Boolean
Default Value: OFF
Deprecated: MariaDB 10.4.2
Removed: MariaDB 11.5
wsrep_log_conflicts
Description: If set to ON (OFF is default), details of conflicting MDL as well as InnoDB locks in the cluster will be logged.
Commandline: --wsrep-log-conflicts[={0|1}]
Scope: Global
Dynamic: Yes
Data Type: Boolean
Default Value: OFF
wsrep_max_ws_rows
Description: Maximum permitted number of rows per writeset. For backward compatibility, the default value is 0, which essentially allows writesets to be any size.
Commandline: --wsrep-max-ws-rows=#
Scope: Global
Dynamic: Yes
Data Type: Numeric
Default Value: 0
Range: 0 to 1048576
wsrep_max_ws_size
Description: Maximum permitted size in bytes per write set. Writesets exceeding 2GB will be rejected.
Commandline: --wsrep-max-ws-size=#
Scope: Global
Dynamic: Yes
Data Type: Numeric
Default Value: 2147483647 (2GB)
Range: 1024 to 2147483647
wsrep_mode
Description: Turns on WSREP features which are not part of default behavior.
BINLOG_ROW_FORMAT_ONLY: Only ROW binlog format is supported.
DISALLOW_LOCAL_GTID: Nodes can have GTIDs for local transactions in a number of scenarios. If DISALLOW_LOCAL_GTID is set, these operations produce an error ERROR HY000: Galera replication not supported. Scenarios include:
A DDL statement is executed with wsrep_OSU_method=RSU set.
A DML statement writes to a non-InnoDB table.
A DML statement writes to an InnoDB table with wsrep_on=OFF set.
REPLICATE_ARIA: Whether or not DML updates for Aria tables will be replicated. This functionality is experimental and should not be relied upon in production systems.
REPLICATE_MYISAM: Whether or not DML updates for MyISAM tables will be replicated. This functionality is experimental and should not be relied upon in production systems.
REQUIRED_PRIMARY_KEY: Tables must have a PRIMARY KEY defined.
STRICT_REPLICATION: Same as the old wsrep_strict_ddl setting.
Commandline: --wsrep-mode=value
Scope: Global
Dynamic: Yes
Data Type: Enumeration
Default Value: (Empty)
Valid Values: BINLOG_ROW_FORMAT_ONLY, DISALLOW_LOCAL_GTID, REQUIRED_PRIMARY_KEY, REPLICATE_ARIA, REPLICATE_MYISAM and STRICT_REPLICATION
Introduced: MariaDB 10.6.0
wsrep_mysql_replication_bundle
Description: Determines the number of replication events that are grouped together. Experimental implementation aimed to assist with bottlenecks when a single replica faces a large commit time delay. If set to 0 (the default), there is no grouping.
Commandline: --wsrep-mysql-replication-bundle=#
Scope: Global
Dynamic: No
Data Type: Numeric
Default Value: 0
Range: 0 to 1000
wsrep_node_address
Description: Specifies the node's network address, in the format ip address[:port]. It supports IPv6. The default behavior is for the node to pull the address of the first network interface on the system and the default Galera port. This autoguessing can be unreliable, particularly in the following cases:
cloud deployments
container deployments
servers with multiple network interfaces
servers running multiple nodes
network address translation (NAT)
clusters with nodes in more than one region
Commandline: --wsrep-node-address=value
Scope: Global
Dynamic: No
Data Type: String
Default Value: Primary network address, usually eth0 with a default port of 4567, or 0.0.0.0 if no IP address.
wsrep_node_incoming_address
Description: This is the address from which the node listens for client connections. If an address is not specified or it's set to AUTO (the default), mysqld uses either --bind-address or --wsrep-node-address, or tries to get one from the list of available network interfaces, in the same order. See also wsrep_provider_options -> gmcast.listen_addr.
Commandline: --wsrep-node-incoming-address=value
Scope: Global
Dynamic: No
Data Type: String
Default Value: AUTO
wsrep_node_name
Description: Name of this node. This name can be used in wsrep_sst_donor as a preferred donor. Note that multiple nodes in a cluster can have the same name.
Commandline: --wsrep-node-name=value
Scope: Global
Dynamic: Yes
Data Type: String
Default Value: The server's hostname.
wsrep_notify_cmd
Description: Command to be executed each time the node state or the cluster membership changes. Can be used for raising an alarm, configuring load balancers and so on. See the Codership Notification Script page for more details.
Commandline: --wsrep-notify-cmd=value
Scope: Global
Dynamic:
No (>= MariaDB 10.5.9)
Yes (<= MariaDB 10.5.8)
Data Type: String
Default Value: Empty
wsrep_on
Description: Whether or not wsrep replication is enabled. If the global value is set to OFF, it is not possible to load the provider and join the node in the cluster. If only the session value is set to OFF, the operations from that particular session are not replicated in the cluster, but other sessions and applier threads will continue as normal. The session value of the variable does not affect the node's membership and thus, regardless of its value, the node keeps receiving updates from other nodes in the cluster. It is set to OFF by default and must be turned on to enable Galera replication.
Commandline: --wsrep-on[={0|1}]
Scope: Global, Session
Dynamic: Yes
Data Type: Boolean
Default Value: OFF
Valid Values: ON, OFF
wsrep_OSU_method
Description: Online schema upgrade method. The default is TOI; specifying the setting without the optional parameter will set it to RSU.
TOI: Total Order Isolation. On each cluster node, DDL is processed in the same order with regard to other transactions, guaranteeing data consistency. However, affected parts of the database will be locked for the whole cluster.
RSU: Rolling Schema Upgrade. DDL processing is only done locally on the node, and the user needs to perform the changes manually on each node. The node is desynced from the rest of the cluster while the processing takes place, to avoid blocking other nodes. Schema changes must be backwards compatible in the same way as for ROW-based replication, to avoid breaking replication when the DDL processing is complete on the single node and replication recommences.
Commandline: --wsrep-OSU-method[=value]
Scope: Global, Session
Dynamic: Yes
Data Type: Enum
Default Value: TOI
Valid Values: TOI, RSU
wsrep_patch_version
Description: Wsrep patch version, for example wsrep_25.10.
Commandline: None
Scope: Global
Dynamic: No
Data Type: String
Default Value: None
wsrep_provider
Description: Location of the wsrep library, usually /usr/lib/libgalera_smm.so on Debian and Ubuntu, and /usr/lib64/libgalera_smm.so on Red Hat/CentOS.
Commandline: --wsrep-provider=value
Scope: Global
Dynamic:
No (>= MariaDB 10.5.9)
Yes (<= MariaDB 10.5.8)
Data Type: String
Default Value: None
wsrep_provider_options
Description: Semicolon (;) separated list of wsrep options (see wsrep_provider_options).
Commandline: --wsrep-provider-options=value
Scope: Global
Dynamic: No
Data Type: String
Default Value: Empty
wsrep_recover
Description: If set to ON when the server starts, the server will recover the sequence number of the most recent write set applied by Galera, and it will be output to stderr, which is usually redirected to the error log. At that point, the server will exit. This sequence number can be provided to the wsrep_start_position system variable.
Commandline: --wsrep-recover[={0|1}]
Scope: Global
Dynamic: No
Data Type: Boolean
Default Value: OFF
wsrep_reject_queries
Description: Variable to set to reject queries from client connections, useful for maintenance. The node continues to apply write-sets, but an Error 1047: Unknown command error is generated by a client query.
NONE - Not set. Queries will be processed as normal.
ALL - All queries from client connections will be rejected, but existing client connections will be maintained.
ALL_KILL - All queries from client connections will be rejected, and existing client connections, including the current one, will be immediately killed.
Commandline: --wsrep-reject-queries[=value]
Scope: Global
Dynamic: Yes
Data Type: Enum
Default Value: NONE
Valid Values: NONE, ALL, ALL_KILL
wsrep_replicate_myisam
Description: Whether or not DML updates for MyISAM tables will be replicated. This functionality is still experimental and should not be relied upon in production systems. Deprecated in MariaDB 10.6, and removed in MariaDB 10.7, use wsrep_mode instead.
Commandline: --wsrep-replicate-myisam[={0|1}]
Scope: Global
Dynamic: Yes
Default Value: OFF
Data Type: Boolean
Valid Values: ON, OFF
Deprecated: MariaDB 10.6.0
Removed: MariaDB 10.7.0
wsrep_restart_slave
Description: If set to ON, the replica threads are restarted automatically when the node rejoins the cluster.
Commandline: --wsrep-restart-slave[={0|1}]
Scope: Global
Dynamic: Yes
Default Value: OFF
Data Type: Boolean
wsrep_retry_autocommit
Description: Number of times autocommitted queries will be retried due to cluster-wide conflicts before returning an error to the client. If set to 0, no retries will be attempted, while a value of 1 (the default) or more specifies the number of retries attempted. Can be useful to assist applications using autocommit to avoid deadlocks.
Commandline: --wsrep-retry-autocommit=value
Scope: Global
Dynamic: No
Data Type: Numeric
Default Value: 1
Range: 0 to 10000
wsrep_slave_FK_checks
Description: If set to ON (the default), the applier replica thread performs foreign key constraint checks.
Commandline: --wsrep-slave-FK-checks[={0|1}]
Scope: Global
Dynamic: Yes
Data Type: Boolean
Default Value: ON
wsrep_slave_threads
Description: Number of replica threads used to apply Galera write sets in parallel. The Galera replica threads are able to determine which write sets are safe to apply in parallel. However, if your cluster nodes seem to have frequent consistency problems, then setting the value to 1 will probably fix the problem. See About Galera Replication: Galera Replica Threads for more information.
Commandline: --wsrep-slave-threads=#
Scope: Global
Dynamic: Yes
Data Type: Numeric
Default Value: 1
Range: 1 to 512
wsrep_slave_UK_checks
Description: If set to ON, the applier replica thread performs secondary index uniqueness checks.
Commandline: --wsrep-slave-UK-checks[={0|1}]
Scope: Global
Dynamic: Yes
Data Type: Boolean
Default Value: OFF
wsrep_sr_store
Description: Storage for streaming replication fragments.
Commandline: --wsrep-sr-store=val
Scope: Global
Dynamic: No
Data Type: Enum
Default Value: table
Valid Values: table, none
wsrep_ssl_mode
wsrep_sst_auth
Description: Username and password of the user to use for replication. Unused if wsrep_sst_method is set to rsync, while for other methods it should be in the format <user>:<password>. The contents are masked in logs and when querying the value with SHOW VARIABLES. See Introduction to State Snapshot Transfers (SSTs) for more information.
Commandline: --wsrep-sst-auth=value
Scope: Global
Dynamic: Yes
Data Type: String
Default Value: (Empty)
wsrep_sst_donor
Description: Comma-separated list (from 5.5.33) or name (as per wsrep_node_name) of the servers to use as donors, i.e. the source of the state transfer, in order of preference. The donor-selection algorithm, in general, prefers a donor capable of transferring only the missing transactions (IST) to the joiner node, instead of the complete state (SST). Thus, it starts by looking for an IST-capable node in the given donor list, followed by the rest of the nodes in the cluster. In case multiple candidate nodes are found outside the specified donor list, the node in the same segment (gmcast.segment) as the joiner is preferred. If none of the existing nodes in the cluster can serve the missing transactions through IST, the algorithm moves on to look for a suitable node to transfer the entire state (SST). It first looks at the nodes specified in the donor list (irrespective of their segment). If no suitable donor is still found, the rest of the nodes are checked for suitability only if the donor list has a terminating comma. Note that a stateless node (the Galera arbitrator) can never be a donor. See Introduction to State Snapshot Transfers (SSTs) for more information.
Note: although the variable is dynamic, the node will not use the new value unless the node requiring SST or IST disconnects from the cluster. To force this, set wsrep_cluster_address to an empty string and back to the nodes list. After setting this variable dynamically, on startup the value from the configuration file will be used again.
Commandline: --wsrep-sst-donor=value
Scope: Global
Dynamic: Yes (read note above)
Data Type: String
Default Value: (Empty)
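For illustration, with hypothetical node names, a donor list ending in a comma lets the rest of the cluster be considered if none of the listed donors is suitable:
[mariadb]
wsrep_sst_donor = "node1,node2,"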
wsrep_sst_donor_rejects_queries
Description: If set to ON (OFF is default), the donor node will reject incoming queries, returning an UNKNOWN COMMAND error code. Can be used for informing load balancers that a node is unavailable.
Commandline: --wsrep-sst-donor-rejects-queries[={0|1}]
Scope: Global
Dynamic: Yes
Data Type: Boolean
Default Value: OFF
wsrep_sst_method
Description: Method used for taking the state snapshot transfer (SST). See Introduction to State Snapshot Transfers (SSTs): SST Methods for more information.
Commandline: --wsrep-sst-method=value
Scope: Global
Dynamic: Yes
Data Type: String
Default Value: rsync
Valid Values: rsync, mysqldump, xtrabackup, xtrabackup-v2, mariadb-backup
wsrep_sst_receive_address
Description: This is the address where other nodes (donors) in the cluster connect to in order to send the state-transfer updates. If an address is not specified or it's set to AUTO (the default), mysqld uses --wsrep_node_address's value as the receiving address. However, if --wsrep_node_address is not set, it uses the address from either --bind-address or tries to get one from the list of available network interfaces, in the same order. Note: setting it to localhost will make it impossible for nodes running on other hosts to reach this node. See Introduction to State Snapshot Transfers (SSTs) for more information.
Commandline: --wsrep-sst-receive-address=value
Scope: Global
Dynamic: Yes
Data Type: String
Default Value: AUTO
wsrep_start_position
Description: The start position that the node should use, in the format UUID:seq_no. The proper value to use for this position can be recovered with wsrep_recover.
Commandline: --wsrep-start-position=value
Scope: Global
Dynamic: Yes
Data Type: String
Default Value: 00000000-0000-0000-0000-000000000000:-1
wsrep_status_file
Description: wsrep status output filename.
Commandline: --wsrep-status-file=value
Scope: Global
Dynamic: No
Data Type: String
Default Value: None
Introduced: MariaDB 10.9
wsrep_strict_ddl
Description: If set, reject DDL statements on affected tables that do not support Galera replication. This is done by checking if the table is InnoDB, which is the only storage engine currently fully supporting Galera replication. MyISAM tables will not trigger the error if the experimental wsrep_replicate_myisam setting is ON. If set, it should be set on all nodes in the cluster. Affected DDL statements include: CREATE TABLE (e.g. CREATE TABLE t1 (a INT) ENGINE=Aria), ALTER TABLE, TRUNCATE TABLE, CREATE VIEW, CREATE TRIGGER, CREATE INDEX, DROP INDEX, RENAME TABLE, and DROP TABLE.
Statements in procedures, events, and functions are permitted, as the affected tables are only known at execution. Furthermore, the various USER, ROLE, SERVER and DATABASE statements are also allowed, as they do not have an affected table. Deprecated in MariaDB 10.6.0 and removed in MariaDB 10.7. Use wsrep_mode=STRICT_REPLICATION instead.
Commandline: --wsrep-strict-ddl[={0|1}]
Scope: Global
Dynamic: Yes
Data Type: boolean
Default Value: OFF
Introduced: MariaDB 10.5.1
Deprecated: MariaDB 10.6.0
Removed: MariaDB 10.7.0
wsrep_sync_wait
Description: Setting this variable ensures causality checks will take place before executing an operation of the type specified by the value, ensuring that the statement is executed on a fully synced node. While the check is taking place, new queries are blocked on the node to allow the server to catch up with all updates made in the cluster up to the point where the check was begun. Once reached, the original query is executed on the node. This can result in higher latency. Note that when wsrep_dirty_reads is ON, values of wsrep_sync_wait become irrelevant.
Sample usage (for a critical read that must have the most up-to-date data):
SET SESSION wsrep_sync_wait=1;
SELECT ...;
SET SESSION wsrep_sync_wait=0;
0 - Disabled (default)
1 - READ (SELECT and BEGIN/START TRANSACTION). This is the same as wsrep_causal_reads=1.
2 - UPDATE and DELETE
3 - READ, UPDATE and DELETE
4 - INSERT and REPLACE
5 - READ, INSERT and REPLACE
6 - UPDATE, DELETE, INSERT and REPLACE
7 - READ, UPDATE, DELETE, INSERT and REPLACE
8 - SHOW
9 - READ and SHOW
10 - UPDATE, DELETE and SHOW
11 - READ, UPDATE, DELETE and SHOW
12 - INSERT, REPLACE and SHOW
13 - READ, INSERT, REPLACE and SHOW
14 - UPDATE, DELETE, INSERT, REPLACE and SHOW
15 - READ, UPDATE, DELETE, INSERT, REPLACE and SHOW
Commandline: --wsrep-sync-wait=#
Scope: Session
Dynamic: Yes
Data Type: Numeric
Default Value: 0
Range: 0 to 15
wsrep_trx_fragment_size
Description: Size of transaction fragments for streaming replication (measured in units as specified by wsrep_trx_fragment_unit)
Commandline: --wsrep-trx-fragment-size=#
Scope: Session
Dynamic: Yes
Data Type: numeric
Default Value: 0
Range: 0 to 2147483647
wsrep_trx_fragment_unit
Description: Unit for streaming replication transaction fragments' size:
bytes: the transaction's binlog events buffer size in bytes
rows: number of rows affected by the transaction
statements: number of SQL statements executed in the multi-statement transaction
Commandline: --wsrep-trx-fragment-unit=value
Scope: Session
Dynamic: Yes
Data Type: enum
Default Value: bytes
Valid Values: bytes, rows or statements
This page is licensed: CC BY-SA / Gnu FDL
Prints a warning if the X509 certificate used for wsrep connections is due to expire within the number of hours given as the value. If the value is 0, warnings are not printed.
The wsrep_certificate_expiration_hours_warning system variable can be set in a configuration file:
[mariadb]
...
# warn 3 days before certificate expiration
wsrep_certificate_expiration_hours_warning=72
The global value of the wsrep_certificate_expiration_hours_warning system variable can also be set dynamically at runtime by executing SET GLOBAL:
SET GLOBAL wsrep_certificate_expiration_hours_warning=72;
When the wsrep_certificate_expiration_hours_warning system variable is set dynamically at runtime, its value will be reset the next time the server restarts. To make the value persist on restart, set it in a configuration file too.
The wsrep_certificate_expiration_hours_warning system variable can be used to configure certificate expiration warnings for MariaDB Enterprise Cluster, powered by Galera:
When the wsrep_certificate_expiration_hours_warning system variable is set to 0, certificate expiration warnings are not printed to the MariaDB Error Log.
When the wsrep_certificate_expiration_hours_warning system variable is set to a value N, which is greater than 0, certificate expiration warnings are printed to the MariaDB Error Log when the node's certificate expires in N hours or less.
Command-line: --wsrep_certificate_expiration_hours_warning=#
Configuration file: Supported
Dynamic: Yes
Scope: Global
Data Type: BIGINT UNSIGNED
Minimum Value: 0
Maximum Value: 18446744073709551615
Product Default Value: 0
Name for the cluster.
This system variable specifies the logical name of the cluster. Every Cluster Node that connects to each other must have the same logical name in order to form a component or join the Primary Component.
Command-line: --wsrep_cluster_name=arg
Configuration file: Supported
Dynamic: Yes
Scope: Global
Data Type: VARCHAR
Product Default Value: my_wsrep_cluster
Set the cluster name using an options file:
[mariadb]
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name = example_cluster
wsrep_cluster_address = gcomm://192.0.2.1,192.0.2.2,192.0.2.3
To view the current cluster name, use the SHOW VARIABLES statement:
SHOW VARIABLES LIKE "wsrep_cluster_name";
+--------------------+-----------------+
| Variable_name | Value |
+--------------------+-----------------+
| wsrep_cluster_name | example_cluster |
+--------------------+-----------------+
The following options can be set as part of the Galera wsrep_provider_options variable. Dynamic options can be changed while the server is running.
Options need to be provided as a semicolon (;) separated list on a single line. Options that are not explicitly set are set to their default value.
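For example, several options might be set on one line in the configuration file (the values here are only illustrative):
[mariadb]
wsrep_provider_options = "gcache.size=2G;gcs.fc_limit=64;evs.send_window=512"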
Note that before Galera 3, the repl tag was named replicator.
base_dir
Description: Specifies the data directory
base_host
Description: For internal use. Should not be manually set.
Default: 127.0.0.1 (detected network address)
base_port
Description: For internal use. Should not be manually set.
Default: 4567
cert.log_conflicts
Description: Certification failure log details.
Dynamic: Yes
Default: no
cert.optimistic_pa
Description: Controls parallel application of actions on the replica. If set, the full range of parallelization as determined by the certification algorithm is permitted. If not set, the parallel applying window will not exceed that seen on the primary, and applying will start no sooner than after all actions it has seen on the master are committed.
Dynamic: Yes
Default: yes
debug
Description: Enable debugging.
Dynamic: Yes
Default: no
evs.auto_evict
Description: Number of entries the node permits for a given delayed node before triggering the Auto Eviction protocol. An entry is added to a delayed list for each delayed response from a node. If set to 0 (the default), the Auto Eviction protocol is disabled for this node. See Auto Eviction for more.
Dynamic: No
Default: 0
evs.causal_keepalive_period
Description: Used by the developers only, and not manually serviceable.
Dynamic: No
Default: The evs.keepalive_period.
evs.debug_log_mask
Description: Controls EVS debug logging. Only effective when wsrep_debug is on.
Dynamic: Yes
Default: 0x1
evs.delay_margin
Description: Time that response times can be delayed before this node adds an entry to the delayed list. See evs.auto_evict. Must be set to a higher value than the round-trip delay time between nodes.
Dynamic: No
Default: PT1S
evs.delayed_keep_period
Description: Time that this node requires a previously delayed node to remain responsive before being removed from the delayed list. See evs.auto_evict.
Dynamic: No
Default: PT30S
evs.evict
Description: When set to the gcomm UUID of a node, that node is evicted from the cluster. When set to an empty string, the eviction list is cleared on the node where it is set. See evs.auto_evict.
Dynamic: No
Default: Empty string
evs.inactive_check_period
Description: Frequency of checks for peer inactivity (looking for nodes with delayed responses), after which nodes may be added to the delayed list, and later evicted.
Dynamic: No
Default: PT0.5S
evs.inactive_timeout
Description: Time limit that a node can be inactive before being pronounced as dead.
Dynamic: No
Default: PT15S
evs.info_log_mask
Description: Controls extra EVS info logging. Bits:
0x1 – extra view change information
0x2 – extra state change information
0x4 – statistics
0x8 – profiling (only available in builds with profiling enabled)
Dynamic: No
Default: 0
evs.install_timeout
Description: Timeout on waits for install message acknowledgments. Replaces evs.consensus_timeout.
Dynamic: Yes
Default: PT7.5S
evs.join_retrans_period
Description: How often retransmission of EVS join messages should occur when forming cluster membership.
Dynamic: Yes
Default: PT1S
evs.keepalive_period
Description: How often keepalive signals should be transmitted when there's no other traffic.
Dynamic: Yes
Default: PT1S
evs.max_install_timeouts
Description: Number of membership install rounds to attempt before timing out. The total rounds will be this value plus two.
Dynamic: No
Default: 3
evs.send_window
Description: Maximum number of packets that can be replicated at a time. Must be more than evs.user_send_window, which applies to data packets only (double is recommended). In WAN environments it can be set much higher than the default, for example 512.
Dynamic: Yes
Default: 4
evs.stats_report_period
Description: Reporting period for EVS statistics.
Dynamic: No
Default: PT1M
evs.suspect_timeout
Description: A node will be suspected to be dead after this period of inactivity. If all nodes agree, the node is dropped from the cluster before evs.inactive_timeout is reached.
Dynamic: No
Default: PT5S
evs.use_aggregate
Description: If set to true (the default), small packets will be aggregated into one where possible.
Dynamic: No
Default: true
evs.user_send_window
Description: Maximum number of data packets that can be replicated at a time. Must be smaller than evs.send_window (half is recommended). In WAN environments it can be set much higher than the default, for example 512.
Dynamic: Yes
Default: 2
evs.version
Description: EVS protocol version. Defaults to 0 for backward compatibility. Certain EVS features (e.g. auto eviction) require more recent versions.
Dynamic: No
Default: 0
evs.view_forget_timeout
Description: Time after which past views will be dropped from the view history.
Dynamic: No
Default: P1D
gcache.dir
Description: Directory where GCache files are placed.
Dynamic: No
Default: The working directory
gcache.keep_pages_size
Description: Total size of the page storage pages for caching. One page is always present if only page storage is enabled.
Dynamic: No
Default: 0
gcache.mem_size
Description: Maximum size of the malloc() store for setups that have spare RAM.
Dynamic: No
Default: 0
gcache.name
Description: Gcache ring buffer storage file name. By default it is placed in the working directory; moving it to another location or partition can reduce disk I/O.
Dynamic: No
Default: ./galera.cache
gcache.page_size
Description: Size of the page storage page files. These are prefixed by gcache.page. Can be set as large as the disk can handle.
Dynamic: No
Default: 128M
gcache.recover
Description: Whether or not gcache recovery takes place when the node starts up. If it is possible to recover gcache, the node can then provide IST to other joining nodes, which assists when the whole cluster is restarted.
Dynamic: No
Default: no
Introduced: MariaDB 10.1.20, MariaDB Galera 10.0.29, MariaDB Galera 5.5.54
gcache.size
Description: Gcache ring buffer storage size (the space the node uses for caching write sets), preallocated on startup.
Dynamic: No
Default: 128M
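For example, a node with a dedicated data disk for the write-set cache might relocate and enlarge the GCache (the path and size here are hypothetical):
[mariadb]
...
# Hypothetical layout: keep the ring buffer off the datadir, enlarge it,
# and let the node recover its GCache on startup so it can serve IST.
wsrep_provider_options = "gcache.dir=/data/galera;gcache.size=2G;gcache.recover=yes"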
gcomm.thread_prio
Description: Gcomm thread policy and priority, in the format policy:priority. Priority is an integer, while policy can be one of:
fifo: First-in, first-out scheduling. Always preempts other, batch, or idle threads, and can only be preempted by fifo threads of a higher priority or blocked by an I/O request.
rr: Round-robin scheduling. Always preempts other, batch, or idle threads. Runs for a fixed period of time, after which the thread is stopped and moved to the end of the list, replaced by another round-robin thread of the same priority. Otherwise runs until preempted by rr threads of a higher priority or blocked by an I/O request.
other: Default scheduling on Linux. Threads run until preempted by a thread of a higher priority or a superior scheduling designation, or blocked by an I/O request.
Dynamic: No
Default: Empty string
gcs.fc_debug
Description: If set to a value greater than zero (the default 0 disables it), debug statistics about replication flow control are logged after every that many writesets.
Dynamic: No
Default: 0
gcs.fc_factor
Description: Fraction of gcs.fc_limit below which the recv queue must drop for replication to resume.
Dynamic: Yes
Default: 1.0
gcs.fc_limit
Description: If the recv queue exceeds this many writesets, replication is paused. Can be increased significantly in master-slave setups. Replication resumes according to the gcs.fc_factor setting.
Dynamic: Yes
Default: 16
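As a worked example of the two flow-control settings together: with gcs.fc_limit=100 and gcs.fc_factor=0.9, replication pauses once more than 100 writesets are queued and resumes when the queue drops below 100 × 0.9 = 90. A minimal sketch with these illustrative values:
[mariadb]
...
# Illustrative: pause replication above 100 queued writesets,
# resume once the queue falls below 100 * 0.9 = 90.
wsrep_provider_options = "gcs.fc_limit=100;gcs.fc_factor=0.9"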
gcs.fc_master_slave
Description: Whether to assume that the cluster only contains one master. Deprecated since Galera 4.10 (MariaDB 10.8.1, MariaDB 10.7.2, MariaDB 10.6.6, MariaDB 10.5.14, MariaDB 10.4.22); see gcs.fc_single_primary.
Dynamic: No
Default: no
gcs.fc_single_primary
Description: Defines whether there is more than one source of replication. As the number of nodes in the cluster grows, the calculated gcs.fc_limit grows with it. At the same time, the number of writes from the nodes increases. When this parameter is set to NO (multi-primary), the gcs.fc_limit parameter is dynamically adjusted to give each node more margin to fall slightly behind in applying writes. The gcs.fc_limit parameter is scaled by the square root of the cluster size, that is, in a four-node cluster it is two times higher than the base value. This is done to compensate for the increasing replication rate noise.
Dynamic: No
Default: no
gcs.max_packet_size
Description: Maximum packet size, after which writesets become fragmented.
Dynamic: No
Default: 64500
gcs.max_throttle
Description: How much the replication rate can be throttled during state transfer, to avoid running out of memory. Set it to 0.0 if stopping replication is acceptable for the sake of completing state transfer.
Dynamic: No
Default: 0.25
gcs.recv_q_hard_limit
Description: Maximum size of the recv queue. If exceeded, the server aborts. Half of available RAM plus swap is a recommended size.
Dynamic: No
Default: LLONG_MAX
gcs.recv_q_soft_limit
Description: Fraction of gcs.recv_q_hard_limit after which the replication rate is throttled. Throttling increases linearly from zero (the regular, varying replication rate) at and below gcs.recv_q_soft_limit to one (full throttling) at gcs.recv_q_hard_limit.
Dynamic: No
Default: 0.25
gcs.sync_donor
Description: Whether or not the rest of the cluster should stay in sync with the donor. If set to YES (the default is NO), the whole cluster is blocked while the donor is blocked by state transfer.
Dynamic: No
Default: no
gmcast.listen_addr
Description: Address on which Galera listens for connections from other nodes. Can be used to override the default listening port, which is otherwise obtained from the connection address.
Dynamic: No
Default: tcp://0.0.0.0:4567
gmcast.mcast_addr
Description: Not set by default, but if set, UDP multicast will be used for replication. Must be identical on all nodes. For example, gmcast.mcast_addr=239.192.0.11.
Dynamic: No
Default: None
gmcast.mcast_ttl
Description: Multicast packet TTL (time to live) value.
Dynamic: No
Default: 1
gmcast.peer_timeout
Description: Connection timeout for initiating message relaying.
Dynamic: No
Default: PT3S
gmcast.segment
Description: Defines the segment to which the node belongs. By default, all nodes are placed in the same segment (0). Usually, you would place all nodes in the same datacenter in the same segment. Galera protocol traffic is only redirected to one node in each segment and then relayed to other nodes in that same segment, which saves cross-datacenter network traffic at the expense of some extra latency. State transfers are also, preferably but not exclusively, taken from the same segment. If there are no nodes available in the same segment, state transfer will be taken from a node in another segment.
Dynamic: No
Default: 0
Range: 0 to 255
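For example, a hypothetical two-datacenter deployment would assign one segment per datacenter:
# On every node in datacenter A:
wsrep_provider_options = "gmcast.segment=0"
# On every node in datacenter B:
wsrep_provider_options = "gmcast.segment=1"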
gmcast.time_wait
Description: Waiting time before allowing a peer that was declared outside of the stable view to reconnect.
Dynamic: No
Default: PT5S
gmcast.version
Description: Deprecated option. Gmcast version.
Dynamic: No
Default: 0
ist.recv_addr
Description: Address on which the node listens for Incremental State Transfers (IST).
Dynamic: No
Default: <port+1> from wsrep_node_address
ist.recv_bind
Description: Address the node binds to for receiving Incremental State Transfers, for setups (such as NAT) where the bind address differs from the address other nodes use to reach it.
Dynamic: No
Default: Empty string
Introduced: MariaDB 10.1.17, MariaDB Galera 10.0.27, MariaDB Galera 5.5.51
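As an illustration of the two IST options together, a node behind NAT (all addresses hypothetical) could advertise its public address while binding its listener to the private interface:
[mariadb]
...
# Hypothetical NAT setup: peers connect to the public address for IST,
# while the node itself binds to its private interface.
wsrep_provider_options = "ist.recv_addr=203.0.113.10:4568;ist.recv_bind=10.0.0.10"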
pc.announce_timeout
Description: Period of time for which cluster joining announcements are sent every 1/2 second.
Dynamic: No
Default: PT3S
pc.checksum
Description: Indicates whether to checksum replicated messages at the PC level, for debugging purposes. Defaults to false (true in earlier releases). Safe to turn off.
Dynamic: No
Default: false
pc.ignore_quorum
Description: Whether to ignore quorum calculations. For example, when a master splits from several slaves, it remains operational if this is set to true (the default is false). Use with care, however: in master-slave setups, slaves will not automatically reconnect to the master if set.
Dynamic: Yes
Default: false
pc.ignore_sb
Description: Whether to permit updates to be processed even in the case of split brain (when a node is disconnected from its remaining peers). Safe in master-slave setups, but could lead to data inconsistency in a multi-master setup.
Dynamic: Yes
Default: false
pc.linger
Description: Time that the PC protocol waits for EVS termination.
Dynamic: No
Default: PT20S
pc.npvo
Description: If set to true (default false), when there are conflicting primary components, the most recent component overrides the older one.
Dynamic: No
Default: false
pc.recovery
Description: If set to true (the default), the Primary Component state is stored on disk, and in the case of a full cluster crash (e.g., power outages) automatic recovery is then possible. Subsequent graceful full cluster restarts will require explicit bootstrapping for a new Primary Component.
Dynamic: No
Default: true
pc.version
Description: Deprecated option. PC protocol version.
Dynamic: No
Default: 0
pc.wait_prim
Description: When set to true (the default), the node waits for a primary component for the period of time specified by pc.wait_prim_timeout. Used to bring up non-primary components and make them primary using pc.bootstrap.
Dynamic: No
Default: true
pc.wait_prim_timeout
Description: Time to wait for a primary component. See pc.wait_prim.
Dynamic: No
Default: PT30S
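If the timeout expires and the node is left in a non-primary component, the pc.bootstrap option referenced above can be used to promote that component; a hedged example, to be run only on the node chosen as the new primary:
SET GLOBAL wsrep_provider_options = 'pc.bootstrap=YES';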
pc.weight
Description: Node weight, used for quorum calculation. See the Codership article Weighted Quorum.
Dynamic: Yes
Default: 1
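As a worked example of weighted quorum: in a hypothetical three-node cluster where one node carries pc.weight=3 and the others keep the default of 1, the total weight is 5, so the heavier node alone retains quorum (3 > 5/2) while the other two combined (weight 2) do not:
# On the favored node only; the other nodes keep the default pc.weight=1.
wsrep_provider_options = "pc.weight=3"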
protonet.backend
Description: Deprecated option. Transport backend to use. Only ASIO is supported currently.
Dynamic: No
Default: asio
protonet.version
Description: Deprecated option. Protonet version.
Dynamic: No
Default: 0
repl.causal_read_timeout
Description: Timeout period for causal reads.
Dynamic: Yes
Default: PT30S
repl.commit_order
Description: Whether out-of-order committing is permitted, and under what conditions. By default it is not permitted, but setting this can improve parallel performance.
0 (BYPASS): No commit order monitoring is done (useful for measuring the performance penalty).
1 (OOOC): Out-of-order committing is permitted for all transactions.
2 (LOCAL_OOOC): Out-of-order committing is permitted for local transactions only.
3 (NO_OOOC): Out-of-order committing is not permitted at all.
Dynamic: No
Default: 3
repl.key_format
Description: Format for key replication. Can be one of:
FLAT8: shorter key, with a higher probability of false positives when matching
FLAT16: longer key, with a lower probability of false positives when matching
FLAT8A: as FLAT8, but includes annotations for debugging purposes
FLAT16A: as FLAT16, but includes annotations for debugging purposes
Dynamic: Yes
Default: FLAT8
repl.max_ws_size
Description: Maximum permitted writeset size, in bytes.
Dynamic:
Default: 2147483647
repl.proto_max
Description: Maximum replication protocol version to use.
Dynamic:
Default: 9
socket.checksum
Description: Method used for generating checksum. Note: If Galera 25.2.x and 25.3.x are both being used in the cluster, MariaDB with Galera 25.3.x must be started with wsrep_provider_options='socket.checksum=1' in order to make it backward compatible with Galera v2. Galera wsrep providers other than 25.3.x or 25.2.x are not supported.
Dynamic: No
Default: 2
socket.dynamic
Description: Allow both encrypted and unencrypted connections between nodes. Typically this should be set to false (the default). When set to true, encrypted connections are still preferred, but the node falls back to unencrypted connections when encryption is not possible, e.g., when it is not yet enabled on all nodes. Needs to be true on all nodes when enabling or disabling encryption via a rolling restart. As this option cannot be changed at runtime, a rolling restart to enable or disable encryption may need three restarts per node in total: one to enable socket.dynamic on each node, one to change the actual encryption settings on each node, and a final round to change socket.dynamic back to false.
Dynamic: No
Default: false
Introduced: MariaDB 10.4.19, MariaDB 10.5.10, MariaDB 10.6.0
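A sketch of the three rounds described above, reusing the hypothetical /certs/ paths from the TLS examples elsewhere on this page (each line represents the setting applied before one rolling restart of every node):
# Round 1: allow mixed encrypted/unencrypted connections.
wsrep_provider_options = "socket.dynamic=true"
# Round 2: enable encryption while fallback is still permitted.
wsrep_provider_options = "socket.dynamic=true;socket.ssl=YES;socket.ssl_ca=/certs/ca-cert.pem;socket.ssl_cert=/certs/server-cert.pem;socket.ssl_key=/certs/server-key.pem"
# Round 3: require encryption again.
wsrep_provider_options = "socket.ssl=YES;socket.ssl_ca=/certs/ca-cert.pem;socket.ssl_cert=/certs/server-cert.pem;socket.ssl_key=/certs/server-key.pem;socket.dynamic=false"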
socket.recv_buf_size
Description: Size in bytes of the receive buffer used on the network sockets between nodes, passed on to the kernel via the SO_RCVBUF socket option.
Dynamic: No
Default:
MariaDB 10.3.23, MariaDB 10.2.32, MariaDB 10.1.45 and later: Auto
Earlier versions: 212992
socket.send_buf_size
Description: Size in bytes of the send buffer used on the network sockets between nodes, passed on to the kernel via the SO_SNDBUF socket option.
Dynamic: No
Default: Auto
Introduced: MariaDB 10.3.23, MariaDB 10.2.32, MariaDB 10.1.45
socket.ssl
Description: Explicitly enables TLS usage by the wsrep Provider.
Dynamic: No
Default: NO
socket.ssl_ca
Description: Path to Certificate Authority (CA) file. Implicitly enables the socket.ssl option.
Dynamic: No
socket.ssl_cert
Description: Path to TLS certificate. Implicitly enables the socket.ssl option.
Dynamic: No
socket.ssl_cipher
Description: TLS cipher to use. Implicitly enables the socket.ssl option. Since MariaDB 10.2.18 defaults to the value of the ssl_cipher system variable.
Dynamic: No
Default: the system default; before MariaDB 10.2.18, AES128-SHA.
socket.ssl_compression
Description: Compression to use on TLS connections. Implicitly enables the socket.ssl option.
Dynamic: No
socket.ssl_key
Description: Path to TLS key file. Implicitly enables the socket.ssl option.
Dynamic: No
socket.ssl_password_file
Description: Path to password file to use in TLS connections. Implicitly enables the socket.ssl option.
Dynamic: No
Select which SSL implementation is used for wsrep provider communications: PROVIDER (the wsrep provider's internal SSL implementation), SERVER (use the server-side SSL implementation), or SERVER_X509 (as SERVER, but requiring a valid X509 certificate).
The wsrep_ssl_mode system variable is used to configure the WSREP TLS Mode used by MariaDB Enterprise Cluster, powered by Galera.
When set to SERVER or SERVER_X509, MariaDB Enterprise Cluster uses the TLS configuration for MariaDB Enterprise Server:
[mariadb]
...
wsrep_ssl_mode = SERVER_X509
ssl_ca = /certs/ca-cert.pem
ssl_cert = /certs/server-cert.pem
ssl_key = /certs/server-key.pem
When set to PROVIDER, MariaDB Enterprise Cluster obtains its TLS configuration from the wsrep_provider_options system variable:
[mariadb]
...
wsrep_ssl_mode = PROVIDER
wsrep_provider_options = "socket.ssl=true;socket.ssl_cert=/certs/server-cert.pem;socket.ssl_ca=/certs/ca-cert.pem;socket.ssl_key=/certs/server-key.pem"
The wsrep_ssl_mode system variable configures the WSREP TLS Mode. The following WSREP TLS Modes are supported:
Provider (PROVIDER)
TLS is optional for Enterprise Cluster replication traffic. Each node obtains its TLS configuration from the wsrep_provider_options system variable. When the provider is not configured to use TLS on a node, the node will connect to the cluster without TLS. The Provider WSREP TLS Mode is backward compatible with ES 10.5 and earlier; when performing a rolling upgrade from ES 10.5 and earlier, the Provider WSREP TLS Mode can be configured on the upgraded nodes.
Server (SERVER)
TLS is mandatory for Enterprise Cluster replication traffic, but X509 certificate verification is not performed. Each node obtains its TLS configuration from the node's MariaDB Enterprise Server configuration. When MariaDB Enterprise Server is not configured to use TLS on a node, the node will fail to connect to the cluster. The Server WSREP TLS Mode is the default in ES 10.6.
Server X509 (SERVER_X509)
TLS and X509 certificate verification are mandatory for Enterprise Cluster replication traffic. Each node obtains its TLS configuration from the node's MariaDB Enterprise Server configuration. When MariaDB Enterprise Server is not configured to use TLS on a node, the node will fail to connect to the cluster.
When the wsrep_ssl_mode system variable is set to PROVIDER, each node obtains its TLS configuration from the wsrep_provider_options system variable. The following options are used:
socket.ssl: Set this option to true to enable TLS.
socket.ssl_ca: Set this option to the path of the CA chain file.
socket.ssl_cert: Set this option to the path of the node's X509 certificate file.
socket.ssl_key: Set this option to the path of the node's private key file.
When the wsrep_ssl_mode system variable is set to SERVER or SERVER_X509, each node obtains its TLS configuration from the node's MariaDB Enterprise Server configuration. The following system variables are used:
ssl_ca: Set this system variable to the path of the CA chain file.
ssl_capath: Optionally set this system variable to the path of the CA chain directory. The directory must have been processed by openssl rehash. When your CA chain is stored in a single file, use the ssl_ca system variable instead.
ssl_cert: Set this system variable to the path of the node's X509 certificate file.
ssl_key: Set this system variable to the path of the node's private key file.
Command-line: --wsrep_ssl_mode={PROVIDER|SERVER|SERVER_X509}
Configuration file: Supported
Dynamic: No
Scope: Global
Data Type: ENUM (PROVIDER, SERVER, SERVER_X509)
Product Default Value: SERVER
wsrep_sst_common Variables
The wsrep_sst_common script provides shared functionality used by various State Snapshot Transfer (SST) methods in Galera Cluster. It centralizes the handling of common configurations such as authentication credentials, SSL/TLS encryption parameters, and other security-related settings. This ensures consistent and secure communication between cluster nodes during the SST process.
The wsrep_sst_common script parses the following options:
WSREP_SST_OPT_AUTH (wsrep-sst-auth)
Description: Defines the authentication credentials used by the State Snapshot Transfer (SST) process, typically formatted as user:password. These credentials are essential for authenticating the SST user on the donor node, ensuring that only authorized joiner nodes can initiate and receive data during the SST operation. Proper configuration of this variable is critical to maintain the security and integrity of the replication process between Galera cluster nodes.
tcert (tca)
Description: Specifies the Certificate Authority (CA) certificate file used for SSL/TLS encryption during State Snapshot Transfers (SSTs). When encryption is enabled, this certificate allows the joining node (client) to authenticate the identity of the donor node, ensuring secure and trusted communication between them.
tcap (tcapath)
Description: Specifies the path to a directory that contains a collection of trusted Certificate Authority (CA) certificates. Instead of providing a single CA certificate file, this option allows the use of multiple CA certificates stored in separate files within the specified directory. It is useful in environments where trust needs to be established with multiple certificate authorities.
tpem (tcert)
Description: This variable stores the path to the TLS/SSL certificate file for the specific node. The certificate, typically in PEM format, is used by the node to authenticate itself to other nodes during secure SST operations. It is derived from the tcert option in the [sst] section.
tkey (tkey)
Description: Represents the private key file that corresponds to the public key certificate specified by tpem. This private key is essential for decrypting data and establishing a secure connection during State Snapshot Transfer (SST). It enables the receiving node to authenticate encrypted information and participate in secure replication within the cluster.
wsrep_sst_mariabackup Variables
The wsrep_sst_mariabackup script handles the actual data transfer and processing during an SST. The variables it reads from the [sst] group control aspects of the backup format, compression, transfer mechanism, and logging.
The wsrep_sst_mariabackup script parses the following options:
sfmt (streamfmt)
Default: mbstream
Description: Defines the streaming format used by mariabackup for the SST. mbstream indicates that mariabackup will output a continuous stream of data. Other potential values (though not explicitly shown as defaults) might be related to different backup methods or tools.
tfmt (transferfmt)
Default: socat
Description: Specifies the transfer format or utility used to move the data stream from the donor to the joiner node. socat is a common command-line tool for data transfer, often used for setting up various network connections.
sockopt (socket options)
Description: Allows additional socket options to be passed to the underlying network communication. This could include settings for TCP buffers, keep-alives, or other network-related tunables to optimize the transfer performance.
progress
Description: Likely controls whether progress information about the SST is displayed or logged. Setting this could enable visual indicators or detailed log entries about the transfer's advancement.
ttime (time)
Default: 0
Description: Possibly a timeout value in seconds for certain operations during the SST, or a flag related to timing the transfer. A value of 0 might indicate no timeout or that timing is handled elsewhere.
cpat
Description: Appears to be related to a "copy pattern" or specific path handling during the SST. Its exact function would depend on how the wsrep_sst_mariabackup
script uses this pattern for file or directory management.
scomp (compressor)
Description: Specifies the compression utility to be used on the data stream before transfer. Common values include gzip, pigz, lz4, or qpress, which reduce the data size for faster transmission over the network.
sdecomp (decompressor)
Description: Specifies the decompression utility to be used on the receiving end (joiner node) to decompress the data stream that was compressed by scomp. It should correspond to the scomp setting.
rlimit (resource limit)
Description: Potentially sets resource limits for the mariabackup
process during the SST. This could include limits on CPU usage, memory, or file descriptors, preventing the SST from consuming excessive resources and impacting the server's performance.
uextra (use-extra)
Default: 0
Description: A boolean flag (0 or 1) that likely indicates whether to use extra or advanced features/parameters during the SST. The specific "extra" features would be determined by the mariabackup implementation.
speciald (sst-special-dirs)
Default: 1
Description: A boolean flag (0 or 1) that likely controls whether mariabackup should handle special directories (e.g., innodb_log_group_home_dir, datadir) in a specific way during the SST, rather than just copying them as regular files. This is important for maintaining data consistency.
stimeout (sst-initial-timeout)
Default: 300
Description: Sets an initial timeout in seconds for the SST process. If the SST doesn't make progress or complete within this initial period, it might be aborted.
ssyslog (sst-syslog)
Default: 0
Description: A boolean flag (0 or 1) that likely controls whether SST-related messages should be logged to syslog. This can be useful for centralized logging and monitoring of Galera cluster events.
sstlogarchive (sst-log-archive)
Default: 1
Description: A boolean flag (0 or 1) that likely determines whether SST logs should be archived. Archiving logs helps in post-mortem analysis and troubleshooting of SST failures.
sstlogarchivedir (sst-log-archive-dir)
Description: Specifies the directory where SST logs should be archived if sstlogarchive is enabled.
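A hedged example [sst] group pulling several of these options together (all values and paths are illustrative, not recommendations):
[sst]
streamfmt = mbstream
transferfmt = socat
compressor = "gzip"
decompressor = "gzip -dc"
sst-log-archive = 1
sst-log-archive-dir = /var/log/mysql/sst-archive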
Explicitly enables TLS usage by the wsrep provider.
The wsrep_provider_options system variable applies to MariaDB Enterprise Cluster, powered by Galera, and to Galera Cluster available with MariaDB Community Server. This page relates specifically to the socket.ssl option of wsrep_provider_options.
The socket.ssl option is used to specify whether SSL encryption should be used.
Option Name: socket.ssl
Default Value: NO
Dynamic: No
Debug: No
wsrep_provider_options defines optional settings the node passes to the wsrep provider.
To display current wsrep_provider_options values:
SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options';
The expected output will display the option and the value. Options with no default value, for example SSL options, will not be displayed in the output.
When changing a setting for a wsrep_provider_options in the config file, you must list EVERY option that is to have a value other than the default value. Options that are not explicitly listed are reset to the default value.
Options are set in the my.cnf configuration file. Use the ; delimiter to set multiple options.
The configuration file must be updated on each node. A restart of each node is needed for changes to take effect.
Use a quoted string that includes every option where you want to override the default value. Options that are not in the list will reset to their default value.
To set the option in the configuration file:
wsrep_provider_options='socket.ssl=YES;gcs.fc_limit=32;evs.keepalive_period=PT3S'
The socket.ssl option cannot be set dynamically. It can only be set in the configuration file.
Trying to change a non-dynamic option with SET results in an error:
ERROR 1210 (HY000): Incorrect arguments to SET
Defines the path to the SSL Certificate Authority (CA) file.
The wsrep_provider_options system variable applies to MariaDB Enterprise Cluster, powered by Galera, and to Galera Cluster available with MariaDB Community Server. This page relates specifically to the socket.ssl_ca option of wsrep_provider_options.
The node uses the CA file to verify the signature on the certificate. You can use either an absolute path or one relative to the working directory. The file must use PEM format.
Option Name: socket.ssl_ca
Default Value: "" (an empty string)
Dynamic: No
Debug: No
To set the option in the configuration file:
wsrep_provider_options='socket.ssl_ca=/path/to/ca-cert.pem;gcs.fc_limit=32;evs.keepalive_period=PT3S'
The socket.ssl_ca option cannot be set dynamically. It can only be set in the configuration file.
Trying to change a non-dynamic option with SET results in an error:
ERROR 1210 (HY000): Incorrect arguments to SET
Defines the path to the SSL certificate.
The wsrep_provider_options system variable applies to MariaDB Enterprise Cluster, powered by Galera, and to Galera Cluster available with MariaDB Community Server. This page relates specifically to the socket.ssl_cert option of wsrep_provider_options.
The node uses the certificate as a self-signed public key in encrypting replication traffic over SSL. You can use either an absolute path or one relative to the working directory. The file must use PEM format.
Option Name: socket.ssl_cert
Default Value: "" (an empty string)
Dynamic: No
Debug: No
To set the option in the configuration file:
wsrep_provider_options='socket.ssl_cert=/path/to/server-cert.pem;gcs.fc_limit=32;evs.keepalive_period=PT3S'
The socket.ssl_cert option cannot be set dynamically. It can only be set in the configuration file.
Trying to change a non-dynamic option with SET results in an error:
ERROR 1210 (HY000): Incorrect arguments to SET
Defines the path to the SSL certificate key.
The wsrep_provider_options system variable applies to MariaDB Enterprise Cluster, powered by Galera, and to Galera Cluster available with MariaDB Community Server. This page relates specifically to the socket.ssl_key option of wsrep_provider_options.
The node uses the certificate key, a self-signed private key, in encrypting replication traffic over SSL. You can use either an absolute path or one relative to the working directory. The file must use PEM format.
Option Name: socket.ssl_key
Default Value: "" (an empty string)
Dynamic: No
Debug: No
To set the option in the configuration file:
wsrep_provider_options='socket.ssl_key=/path/to/server-key.pem;gcs.fc_limit=32;evs.keepalive_period=PT3S'
The socket.ssl_key option cannot be set dynamically. It can only be set in the configuration file.
Trying to change a non-dynamic option with SET results in an error:
ERROR 1210 (HY000): Incorrect arguments to SET