Multi-Node

MariaDB Enterprise Server features several multi-node implementations, which can provide high availability, load balancing, and scalability.

Overview

Transactional Replication Technologies Comparison

| Multi-Node Implementation | High Availability | Data Protection | Scaling | Node Onboarding | Availability |
|---|---|---|---|---|---|
| Asynchronous MariaDB Replication | Yes | No | Read | Manual | ES 10.2+ |
| Semi-Synchronous MariaDB Replication | Yes | Yes | Read | Manual | ES 10.2+ |
| Enterprise Cluster | Yes | Yes | Read | Automatic | ES 10.2+ |
| Distributed SQL with MariaDB Xpand | Yes | Yes with Rebalancing | Read and Write | Automatic | ES 10.5+ |

Special Purpose Technologies Comparison

| Multi-Node Implementation | Use Case | Availability |
|---|---|---|
| MaxScale | Database Proxy with Automatic Failover, Read-Write Splitting, Query Routing, Binlog Server, SQL Firewall | ES 10.2+ |
| Multi-Node Enterprise ColumnStore | Analytics with High Availability | ES 10.4+ |
| S3 | Archive Read-only Tables in the Cloud | ES 10.3+ |
| Spider | Sharding Tables, Interlinking Databases | ES 10.3+ |
| Delayed MariaDB Replication | Configuring Replica Servers that lag behind the Primary Server | ES 10.2+ |

Technology Compatibility

| Technology | MariaDB Replication | Galera Cluster | Multi-Node Enterprise ColumnStore | MariaDB Xpand | S3 | Spider |
|---|---|---|---|---|---|---|
| Included with MariaDB Enterprise Server | Yes | Yes | Yes | Yes | Yes | Yes |
| Included with MariaDB Community Server | Yes | Yes | No | No | Yes | Yes |
| MaxScale Monitor | MariaDB Monitor (mariadbmon) | Galera Monitor (galeramon) | MariaDB Monitor (mariadbmon) | Clustrix Monitor (clustrixmon) | N/A | N/A |
| Supported for InnoDB tables | Yes | Yes | No | No | No | Yes |
| Supported for ColumnStore tables | Yes | No | Yes | No | No | Yes |
| Supported for MariaDB Xpand tables | No | No | No | Yes | No | Yes |

Examples

Basic MariaDB Replication Configuration

[mariadb]

# Server Configuration
log_error                 = mariadbd.err
innodb_buffer_pool_size   = 1G

# Replication Configuration (Primary Server)
log_bin          = mariadb-bin
server_id        = 1
binlog_format    = ROW
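
A corresponding replica server would use a similar configuration. As a sketch (the server_id value and relay log name are illustrative; each replica needs a server_id that is unique within the deployment):

```ini
[mariadb]

# Server Configuration
log_error                 = mariadbd.err
innodb_buffer_pool_size   = 1G

# Replication Configuration (Replica Server)
server_id        = 2
relay_log        = mariadb-relay-bin
read_only        = ON
```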

Basic Galera Cluster Configuration

[mariadb]

# Server Configuration
log_error                 = mariadbd.err
innodb_buffer_pool_size   = 1G

# Cluster Configuration
wsrep_provider           = /usr/lib64/galera-enterprise-4/libgalera_enterprise_smm.so
wsrep_cluster_address    = gcomm://192.0.2.1,192.0.2.2,192.0.2.3
wsrep_cluster_name       = TestCluster
wsrep_on                 = ON

binlog_format            = ROW
innodb_autoinc_lock_mode = 2
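
Once the nodes have joined, cluster membership and sync state can be verified from any node using the standard Galera status variables:

```sql
-- Check how many nodes are currently part of the cluster
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';

-- Confirm that this node is fully synced with the cluster
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';
```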

FAQ

How can MaxScale enhance my multi-node deployment?

MariaDB MaxScale allows applications and clients to connect to a multi-node deployment as they would to a single database server, abstracting away the complexity of the underlying database infrastructure. Key capabilities include automatic failover, read-write splitting, query routing, binlog serving, and SQL firewalling.
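
As an illustration (the server, monitor, service, and listener names here are hypothetical, and the address is a placeholder), a minimal MaxScale configuration fronting a replication deployment with read-write splitting and automatic failover might look like:

```ini
# Backend server (address is a placeholder)
[server1]
type     = server
address  = 192.0.2.1
port     = 3306

# MariaDB Monitor performs automatic failover for replication deployments
[Replication-Monitor]
type          = monitor
module        = mariadbmon
servers       = server1
user          = maxscale_user
password      = maxscale_pw
auto_failover = true

# Read/Write Split Router sends writes to the primary, reads to replicas
[Split-Service]
type     = service
router   = readwritesplit
servers  = server1
user     = maxscale_user
password = maxscale_pw

[Split-Listener]
type    = listener
service = Split-Service
port    = 3307
```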

Should I use an HTAP deployment for my analytics, or should I use a purely ColumnStore deployment?

MariaDB's Hybrid Transactional-Analytical Processing (HTAP) deployments combine the power of ColumnStore's columnar storage engine with the power of MariaDB's row-based engines, such as InnoDB and MyRocks.

An HTAP deployment is suitable for cases where the following criteria are met:

  • Big data is regularly bulk loaded into tables that use ColumnStore.

  • Transactional data is frequently inserted into, updated in, or deleted from tables that use InnoDB or MyRocks, which function as a system of record.

  • Your applications or reporting systems need to query both the analytical and transactional data.
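
For example (table and column names here are hypothetical), a single query in an HTAP deployment can join a transactional InnoDB table to an analytical ColumnStore table:

```sql
-- orders uses InnoDB (system of record); order_history uses ColumnStore (analytics)
SELECT o.customer_id,
       COUNT(*)            AS open_orders,
       SUM(h.total_amount) AS lifetime_spend
FROM orders AS o
JOIN order_history AS h
  ON h.customer_id = o.customer_id
GROUP BY o.customer_id;
```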

How many nodes should my multi-node deployment have?

Many factors can affect the optimal number of nodes in a deployment.

Guidance on the required number of nodes is available in the deployment documentation for each supported topology.

The best option is to discuss your specific requirements with MariaDB's Enterprise Architects (EAs). Please contact support for more information about MariaDB's EA services.

How do I get causal reads in a multi-node deployment?

Causal reads guarantee that read-only queries on replica servers still return the most current data, even if the replica server is lagging behind the primary server.

In some multi-node deployments, causal reads are always available. For example, MariaDB Xpand replicates data in a strongly consistent manner, so all queries on all nodes will return the most current data.

In some other multi-node deployments, read-only queries executed on replica servers can return stale data due to replication lag if the queries are not executed in a causally consistent manner. Causal consistency is the act of ensuring that interdependent operations maintain consistency by performing them in the same order on all servers. When causal consistency is applied to read-only operations, the operation is typically called a "causal read."

MariaDB Enterprise has many different ways to implement causal reads, depending on the specific components and technologies involved:

| Multi-Node Implementation | MaxScale Component | Enabling Causal Reads |
|---|---|---|
| MariaDB Replication | Read/Write Split Router (readwritesplit) | MaxScale's Read/Write Split Router (readwritesplit) has a built-in option to enable causal reads for MariaDB Replication. For more information, see Ensuring Causal Consistency with MaxScale's Read/Write Split Router. |
| Galera Cluster | Any | Galera Cluster supports causal reads. To enable it, set the wsrep_sync_wait system variable to 1. Its default value is 0. |
| MariaDB Xpand | Any | MariaDB Xpand executes read-only queries in a causally consistent manner by design, so no configuration is necessary to enable causal reads. |
| Any | Consistent Critical Read Filter (ccrfilter) | MaxScale's Consistent Critical Read Filter (ccrfilter) can enable causal reads when used in tandem with MaxScale's Hint Filter (hintfilter) and any MaxScale router that supports routing hints, such as the Read/Write Split Router (readwritesplit) or the Hint Router (hintrouter). |
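
A sketch of the MaxScale side of the first and last approaches (service and filter names are illustrative): the first stanza enables readwritesplit's built-in causal_reads option, while the second defines a ccrfilter that could instead be attached to a hint-aware router.

```ini
# Read/Write Split service with built-in causal reads enabled
[Split-Service]
type         = service
router       = readwritesplit
servers      = server1
user         = maxscale_user
password     = maxscale_pw
causal_reads = true

# Consistent Critical Read filter, for use with a hint-supporting router;
# after a write, reads are routed to the primary for the configured time window
[CCR-Filter]
type   = filter
module = ccrfilter
time   = 10
```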