MaxScale's Xpand Monitor (xpandmon) monitors nodes in Xpand deployments.
Prior to MaxScale 6, the Clustrix Monitor (clustrixmon) was used instead.
The Xpand Monitor (xpandmon) supports:
Monitoring Xpand deployments
This page is: Copyright © 2025 MariaDB. All rights reserved.
In MariaDB MaxScale, monitors perform the following tasks:
Deciding whether a server is up or down.
Deciding whether a server is the primary server or a replica server.
Performing automatic failover when the primary server fails (for certain kinds of deployments).
MaxScale supports different monitors for different kinds of deployments:
MariaDB replication: MariaDB Monitor (mariadbmon)
ColumnStore 1.2: ColumnStore Monitor (csmon)
Xpand: Xpand Monitor (xpandmon)
Amazon Aurora: Aurora Monitor (auroramon)
This monitor is designed specifically for Amazon Aurora clusters. It identifies the writer and reader instances, allowing MaxScale to route queries and manage high availability in an AWS environment.
MaxScale's Aurora Monitor (auroramon) monitors the status of Aurora cluster replicas.
Learn to use the MariaDB Monitor to automate cluster management. This guide covers how to configure server monitoring, automatic failover, switchover, and other HA features.
Designing for MaxScale's ColumnStore Monitor
Understanding MaxScale's ColumnStore Monitor
MaxScale's ColumnStore Monitor (csmon) monitors ColumnStore deployments.
The ColumnStore Monitor (csmon) supports:
Monitoring ColumnStore deployments
Query-based load balancing with the Read/Write Split Router (readwritesplit)
Connection-based load balancing with the Read Connection Router (readconnroute)
Learn about the MariaDB Monitor in MaxScale 21.06. This module tracks primary and replica servers, enabling automatic failover, switchover, and other essential high-availability features.
MaxScale's MariaDB Monitor (mariadbmon) monitors MariaDB replication deployments.
This page contains topics that need to be considered when designing applications that use the MariaDB Monitor.
MaxScale's MariaDB Monitor (mariadbmon) monitors MariaDB replication deployments.
When the primary server fails, MariaDB Monitor can promote a replica server to be the new primary server automatically.
When automatic failover is enabled for MariaDB Monitor, it does the following:
It selects the replica server with the latest GTID position to be the new primary server.
If the new primary server has unprocessed relay logs, then it cancels and restarts the failover process after a short wait.
It prepares the new primary server:
It stops its replica threads by executing STOP REPLICA and RESET REPLICA ALL.
It configures it to allow writes by setting read_only to OFF.
If the handle_events parameter is true, then it enables events that were previously enabled on the old primary server.
If the promotion_sql_file parameter is set, then the script referred to by the parameter is executed.
It redirects all replica servers to replicate from the new primary server:
It stops their replica threads by executing STOP REPLICA and RESET REPLICA ALL.
It configures replication by executing CHANGE MASTER TO and START REPLICA.
It checks that all replicas are replicating properly.
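As an illustrative sketch (not MaxScale's actual implementation), the promotion step above can be modeled as picking the candidate replica with the highest GTID position while honoring an exclusion list like servers_no_promotion. The server names, dictionary layout, and single-domain GTID simplification below are all hypothetical:

```python
# Hypothetical sketch of failover target selection.
# GTIDs are simplified to (domain, server_id, sequence) triples; real
# MariaDB GTID comparison handles multiple replication domains.

def choose_new_primary(replicas, excluded=()):
    """Pick the replica with the highest GTID sequence, skipping any
    server listed in the exclusion list (cf. servers_no_promotion)."""
    candidates = [r for r in replicas if r["name"] not in excluded]
    if not candidates:
        return None
    return max(candidates, key=lambda r: r["gtid"][2])

replicas = [
    {"name": "server2", "gtid": (0, 1, 120)},
    {"name": "server3", "gtid": (0, 1, 118)},
]
print(choose_new_primary(replicas)["name"])                        # server2
print(choose_new_primary(replicas, excluded=("server2",))["name"])  # server3
```

If the selected server still has unprocessed relay logs, the real monitor waits and retries, as described in the steps above.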
Configure automatic failover by configuring several parameters for the MariaDB Monitor in maxscale.cnf.
For example:
Restart the MaxScale instance.
MaxScale's MariaDB Monitor (mariadbmon) monitors MariaDB replication deployments.
The MariaDB Monitor (mariadbmon) supports:
Monitoring MariaDB replication deployments
Query-based load balancing with the Read/Write Split Router (readwritesplit)
Connection-based load balancing with the Read Connection Router (readconnroute)
Deploy MaxScale with MariaDB Monitor and Read/Write Split Router
Deploy MaxScale with MariaDB Monitor and Read Connection Router
MaxScale's Aurora Monitor (auroramon) monitors the status of Aurora cluster replicas.
The Aurora Monitor (auroramon) supports:
Monitoring replicas in Amazon Aurora deployments
If there is an external master, then it configures that replication by executing CHANGE MASTER TO and START REPLICA.
replication_password
• This parameter is used by the monitor to set the MASTER_PASSWORD option when executing the CHANGE MASTER TO statement.
• If this parameter is not set, then the monitor uses the monitor user's password.
replication_master_ssl
• This parameter is used by the monitor to set the MASTER_SSL option when executing the CHANGE MASTER TO statement.
• If this parameter is not set, then the monitor does not enable TLS.
failover_timeout
• This parameter defines the maximum amount of time allowed to perform a failover.
• If failover times out, then a message is logged to the MaxScale log, and automatic failover is disabled.
switchover_timeout
• This parameter defines the maximum amount of time allowed to perform a switchover.
• If switchover times out, then a message is logged to the MaxScale log, and automatic failover is disabled.
verify_master_failure
• When this parameter is enabled, if the monitor detects that the primary server failed, it queries the replica servers to verify that they have also detected the failure.
• If a replica has received an event within the master_failure_timeout duration, the primary is not considered down when deciding whether to fail over, even if the monitor cannot connect to the primary.
master_failure_timeout
• This parameter defines the timeout for verify_master_failure.
• The default value is 10 seconds.
servers_no_promotion
• This parameter defines a comma-separated list of servers that should not be chosen to be the primary server.
promotion_sql_file
• This parameter defines an SQL script that should be executed on the new primary server during failover or switchover.
demotion_sql_file
• This parameter defines an SQL script that should be executed on the old primary server during failover or switchover when it is demoted to be a replica server.
• The script is also executed when a server is automatically added to the cluster due to the auto_rejoin parameter.
handle_events
• When this parameter is enabled, the monitor enables events on the new primary server that were previously enabled on the old primary server.
• The monitor also disables the events on the old primary server.
failcount
• This parameter defines the number of monitoring checks that must fail before a primary server is considered to be down.
• The default value is 5.
• The total wait time can be calculated as: (monitor_interval + backend_connect_timeout) * failcount
auto_failover
• When this parameter is enabled, the monitor automatically fails over to a new primary server if the primary server fails.
• When this parameter is disabled, the monitor does not automatically fail over, so failover must be performed manually.
• This parameter is disabled by default.
auto_rejoin
• When this parameter is enabled, the monitor attempts to automatically configure new replica servers to replicate from the primary server when they come online.
• When this parameter is disabled, new replica servers must be configured manually.
• This parameter is disabled by default.
switchover_on_low_disk_space
• When this parameter is enabled, the monitor automatically switches over to a new primary server if the primary server is low on disk space.
• When this parameter is disabled, switchover must be performed manually.
• This parameter requires the disk_space_threshold parameter to be set for the server or the monitor.
• This parameter requires the disk_space_check_interval parameter to be set for the monitor.
• This parameter is disabled by default.
enforce_simple_topology
• When this parameter is enabled, the monitor assumes that the topology of the cluster consists of a single primary server with multiple replica servers.
• When this parameter is disabled, the monitor does not make assumptions about the topology of the cluster.
• This parameter implicitly sets the assume_unique_hostnames, auto_failover, and auto_rejoin parameters.
replication_user
• This parameter is used by the monitor to set the MASTER_USER option when executing the CHANGE MASTER TO statement.
• If this parameter is not set, then the monitor uses the monitor user.
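The failcount wait-time formula above can be checked with a quick calculation. The values below (monitor_interval of 2 seconds, backend_connect_timeout of 3 seconds, failcount of 5) are illustrative examples, not required settings:

```python
# Worked example of: (monitor_interval + backend_connect_timeout) * failcount
# All values are illustrative, expressed in seconds.
monitor_interval = 2
backend_connect_timeout = 3
failcount = 5

total_wait = (monitor_interval + backend_connect_timeout) * failcount
print(total_wait)  # 25
```

With these settings, a primary server would be treated as down roughly 25 seconds after its first failed check.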
[repl-cluster]
type = monitor
module = mariadbmon
...
auto_failover = true
auto_rejoin = true
replication_user = repl
replication_password = passwd
replication_master_ssl = true

$ sudo systemctl restart maxscale

MaxScale's Galera Monitor (galeramon) monitors Galera Cluster deployments.
This page contains topics that need to be considered when designing applications that use the Galera Monitor.
MaxScale's Galera Monitor (galeramon) monitors Galera Cluster deployments.
What Does the Galera Monitor Support?
The Galera Monitor (galeramon) supports:
Monitoring Galera Cluster deployments
Query-based load balancing with the Read/Write Split Router (readwritesplit)
Connection-based load balancing with the Read Connection Router (readconnroute)
Deploy MaxScale with Galera Monitor and Read/Write Split Router
Deploy MaxScale with Galera Monitor and Read Connection Router
MaxScale's Galera Monitor (galeramon) monitors Galera Cluster deployments.
By default, when a node is chosen as a donor for a State Snapshot Transfer (SST), Galera Monitor does not route any queries to it. However, some SST methods are non-blocking on the donor, so this default behavior is not always desired.
A cluster's SST method is defined by the wsrep_sst_method system variable. When this system variable is set to mariadb-backup, the cluster uses MariaDB Backup to perform the SST. MariaDB Backup is a non-blocking backup method, so Galera Cluster allows the node to execute queries while acting as the SST donor.
Configure the availability of SST donors by configuring the available_when_donor parameter for the Galera Monitor in maxscale.cnf.
For example:
Restart the MaxScale instance.
[galera-cluster]
type = monitor
module = galeramon
...
available_when_donor = true

$ sudo systemctl restart maxscale

MaxScale's MariaDB Monitor (mariadbmon) monitors MariaDB replication deployments.
When multiple MaxScale instances are used in a highly available deployment, MariaDB Monitor needs to ensure that only one MaxScale instance performs automatic failover operations at a given time. It does this by using cooperative locks on the back-end servers.
When cooperative locking is enabled for MariaDB Monitor, it tries to acquire locks on the back-end servers with the GET_LOCK() function. If a specific MaxScale instance is able to acquire the lock on a majority of servers, then it is considered the primary MaxScale instance, which means that it can handle automatic failover.
Configure cooperative locking by configuring the cooperative_monitoring_locks parameter for the MariaDB Monitor in maxscale.cnf. It has several possible values.
For example:
Restart the MaxScale instance.
none
Do not use any cooperative locking. This is the default value.
majority_of_all
Primary monitor requires locks on a majority of servers, even those which are down.
majority_of_running
Primary monitor requires locks on a majority of running servers.
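The majority decision behind the two non-default values can be sketched as follows. This is an illustrative model only, not MaxScale's implementation; the function name, the lock-result encoding, and the server names are all hypothetical:

```python
# Hypothetical sketch of the cooperative-locking majority decision.
# lock_results maps server name -> "held", "not_held", or "down".

def is_primary_monitor(lock_results, mode="majority_of_running"):
    """Return True if this MaxScale instance holds locks on a majority
    of the relevant servers for the given cooperative_monitoring_locks mode."""
    if mode == "majority_of_running":
        # Only servers that are up count toward the majority.
        relevant = [s for s, r in lock_results.items() if r != "down"]
    else:  # "majority_of_all": down servers count too
        relevant = list(lock_results)
    held = sum(1 for s in relevant if lock_results[s] == "held")
    return held > len(relevant) / 2

locks = {"server1": "held", "server2": "held", "server3": "down"}
print(is_primary_monitor(locks))                     # True (2 of 2 running)
print(is_primary_monitor(locks, "majority_of_all"))  # True (2 of 3 total)
```

Note how a down server weakens an instance's claim under majority_of_all but is simply ignored under majority_of_running.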
[repl-cluster]
type = monitor
module = mariadbmon
...
cooperative_monitoring_locks = majority_of_running

$ sudo systemctl restart maxscale