Group Changes with MariaDB Xpand 5.3 and 6


MariaDB Xpand performs a group change when the cluster detects a change in cluster membership:

  • Xpand maintains a set of all nodes that can currently communicate with each other, which is referred to as the Group

  • Xpand performs a group change when a node is added to or removed from the group

  • Group changes have many causes, including scale up/down operations, node or zone failures, or failed heartbeat checks

  • The duration of a group change is typically a few seconds

  • Xpand performs many types of actions during a group change, such as recalculating quorum and ensuring fault tolerance

  • Group changes can impact transactions, queries, and connections


Information provided here applies to:

  • MariaDB Xpand 5.3

  • MariaDB Xpand 6

What is a Group Change?

Xpand uses a distributed group membership protocol to maintain the set of all nodes known to the cluster and to verify that those nodes maintain active communication with each other. Xpand refers to this set as a group.

When the set of nodes changes, there is a change in the group and a group change occurs. During a group change, Xpand performs tasks to ensure:

  • Data consistency

  • Data availability

  • Effective query distribution

When Does a Cluster Experience a Group Change?

When Node(s) are Added to a Cluster

A group change occurs in conjunction with Scaling Out. Following the group change, the Rebalancer will work in the background to move slices to the new node(s). Performance can degrade slightly during that time.

When Node(s) Leave the Cluster

When reducing a deployment's capacity with the Scale-In procedure, the ALTER CLUSTER REFORM command used to remove nodes invokes a group change.

A deployment also undergoes a group change if node(s) are dropped using the emergency ALTER CLUSTER DROP procedure.
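As an illustrative sketch of these two removal paths (verify the exact syntax against the documentation for your Xpand version):

```sql
-- Graceful scale-in: mark node 5 as soft-failed so the Rebalancer
-- moves its data elsewhere before the node is removed, then remove
-- it with a single group change.
ALTER CLUSTER SOFTFAIL 5;
-- ...wait for the softfail operation to complete, then:
ALTER CLUSTER REFORM;

-- Emergency removal of an unreachable node. Because the node was
-- not soft-failed first, the Rebalancer must reprotect its data
-- after the group change.
ALTER CLUSTER DROP 5;
```

The node id (5) is a placeholder for illustration only.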

Additionally, several unscheduled events can cause a deployment to undergo a group change:

  • A deployment experiences unexpected node failure(s) due to hardware failure, network failure, or kernel panic.

  • One or more nodes cannot be reached during a regular heartbeat check of the deployment.

Following a node loss, if the node was not previously soft-failed, the Rebalancer will automatically work to reprotect all data and ensure all data has sufficient copies throughout the deployment. Performance can degrade slightly during that time.

What Happens During a Group Change?

If Xpand detects a change in its group, it recovers automatically as long as a quorum of nodes is available. Your deployment will experience a brief period during which transactions are suspended while the group is reformed and the consistency of the database is verified. Connections from applications to surviving nodes remain open, but transactions and queries on those connections are temporarily paused.

The deployment can recover from multiple simultaneous node failures if the total number of failed nodes does not exceed the value configured for MAX_FAILURES.
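For example, to allow the deployment to tolerate up to two simultaneous node failures (this sketch assumes the ALTER CLUSTER SET MAX_FAILURES syntax; confirm it against your Xpand version's documentation):

```sql
-- Raise the configured failure tolerance. Xpand then maintains
-- enough replicas to survive up to 2 simultaneous node failures.
ALTER CLUSTER SET MAX_FAILURES = 2;
```

Raising MAX_FAILURES increases storage overhead, since more replicas of each slice must be kept.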

Details of a Group Change

Group changes are relatively short (generally measured in seconds), though the duration of each group change depends on factors such as the number of containers, workload, and deployment size. The underlying steps of a group change are the same, regardless of deployment size and workload.

Cluster Pauses Processing and Performs Internal Operations

When there is a group change, the deployment pauses all processing and determines whether a quorum of nodes is available. If a quorum is available, Xpand performs a series of internal operations in preparation for the new group. Together, these operations may take a few seconds, or tens of seconds, depending on the size of the deployment, the size of the database, and how many transactions were in process when the group change occurred. These steps guarantee the consistency of the database despite the change in group membership.

  • Initializing subsystems such as flow control and the Rebalancer.

  • Synchronizing global deployment state, including internal system catalogs and global variables.

  • Resolving (or re-resolving) in-process transactions, including rolling back transactions that were interrupted by the group change.

  • Invalidating or rebuilding internal caches, such as the Query Plan Cache.

  • Creating recovery queues for downed replicas, or "flipping" queues that are no longer needed.

  • Performing checks for licensing and Resiliency.

  • Resizing device files if necessary.

Cluster Forms New Group

Once the deployment is ready to resume operations, a new group is formed and the clustrix.log will contain an informational message that includes details of the new group:

[INFO] Node 1 has new group effffe: { 1-4 down: 5 }

This example shows a deployment that has re-grouped without node 5. The database then resumes its operations.
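To inspect which nodes are in the current group after a group change, you can query the system tables. The system.membership table shown here is an assumption based on common Xpand deployments; the table and column names may differ by version:

```sql
-- List known nodes and their current membership status
-- (hypothetical table name; check your version's system catalog).
SELECT * FROM system.membership;
```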

What Happens to Processes That Were Running?

If any of the following processes were running when a group change occurred, they will be impacted as shown:

Queries (DML and DDL)

If a transaction or statement is interrupted by a group change before it has a chance to commit, the application receives an error.

If the autoretry global variable is enabled and the transaction was submitted with autocommit enabled, the database automatically retries the statement. If the retried statement still cannot be executed successfully, the application receives an error.
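You can inspect and change this behavior through the autoretry global variable named above (the default value and exact variable semantics depend on your Xpand version):

```sql
-- Inspect the current setting.
SHOW GLOBAL VARIABLES LIKE 'autoretry';

-- Enable automatic retry of autocommit transactions
-- that are interrupted by a group change.
SET GLOBAL autoretry = true;
```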


Replication

Replication processes will automatically restart at the proper binlog location following a group change.

Other Connections

Connections to nodes that are still in quorum will be maintained and will not experience any errors. Connections to non-communicative nodes will be lost.

Special Considerations

In-Memory tables may be impacted by a group change. For additional information, see "In-Memory Tables with MariaDB Xpand".