Explore the latest updates in MariaDB MaxScale 24.02. This release features binlog compression, enhanced REST API security, and significant improvements to the MaxGUI interface.
Secure your MaxScale deployment by configuring authenticators. These modules manage client authentication against backend MariaDB servers, supporting various mechanisms for robust security.
MariaDB MaxScale is a database proxy that forwards database statements to one or more database servers.
The forwarding is performed using rules based on the semantic understanding of the database statements and on the roles of the servers within the backend cluster of databases.
MariaDB MaxScale is designed to provide, transparently to applications, load balancing and high availability functionality. MariaDB MaxScale has a scalable and flexible architecture, with plugin components to support different protocols and routing approaches.
MariaDB MaxScale makes extensive use of the asynchronous I/O capabilities of the Linux operating system, combined with a fixed number of worker threads. epoll is used to provide the event driven framework for the input and output via sockets.
Many of the services provided by MariaDB MaxScale are implemented as external shared object modules loaded at runtime. These modules support a fixed interface, communicating the entry points via a structure consisting of a set of function pointers. This structure is called the "module object". Additional modules can be created to work with MariaDB MaxScale.
Commonly used module types are protocol, router and filter. Protocol modules implement the communication between clients and MariaDB MaxScale, and between MariaDB MaxScale and backend servers. Routers inspect the queries from clients and decide the target backend. The decisions are usually based on routing rules and backend server status. Filters work on data as it passes through MariaDB MaxScale. Filter are often used for logging queries or modifying server responses.
A Google Group exists for MariaDB MaxScale. The Group is used to discuss ideas and issues and to communicate with the MariaDB MaxScale community. Send email to the group or use its web interface.
Bugs can be reported in the MariaDB Jira.
Information about installing MariaDB MaxScale, either from a repository or by building from source code, is included in the installation guide.
The same guide also provides basic information on running MariaDB MaxScale. More detailed information about configuring MariaDB MaxScale can be found in the Configuration Guide.
This page is licensed: CC BY-SA / Gnu FDL
Filters are powerful modules that intercept and process database traffic in MaxScale. Use them to log, transform, block, or reroute queries to add control, security, and monitoring.
This is your starting point for MariaDB MaxScale 24.02. Find essential guides for installation, learn the basics of configuration, and explore tutorials to get up and running quickly.
Monitors are essential for high availability. They track backend server status, detect failures, promote replicas, and perform automatic failovers, ensuring service continuity.
Protocol modules interpret client-server communication for MaxScale. This section covers the available modules, including the standard MariaDB protocol, NoSQL, and Change Data Capture (CDC).
Manage MaxScale programmatically using the REST API. This interface allows for the dynamic administration and monitoring of resources like servers, services, listeners, and filters.
Routers are the core of a MaxScale service, intelligently directing database traffic. This section details available routers, from read-write splitting to sharding and routing hints.
The main tutorial for MariaDB MaxScale consists of setting up MariaDB MaxScale for the environment you are using with either a connection-based or a read/write-based configuration.
These tutorials are for specific use cases and module combinations.
Here are tutorials on monitoring and managing MariaDB MaxScale in cluster environments.
The routing module is the core of a MariaDB MaxScale service. The router documentation contains all module specific configuration options and detailed explanations of their use.
The following routers are only available in MaxScale Enterprise.
Here are detailed documents about the filters MariaDB MaxScale offers. They contain configuration guides and example use cases. Before reading these, you should have read the filter tutorial so that you know how they work and how to configure them.
The following filters are only available in MaxScale Enterprise.
Common options for all monitor modules.
Module specific documentation.
Documentation for MaxScale protocol modules.
The MaxScale CDC Connector provides a C++ API for consuming data from a CDC system.
A short description of the authentication module type can be found in the Authentication Modules document.
This page is licensed: CC BY-SA / Gnu FDL
Refer to the documentation for the MaxScale hint syntax.
This page is licensed: CC BY-SA / Gnu FDL
Explore connectors for MariaDB MaxScale 24.02. This section details the CDC Connector, a C++ API enabling applications to connect and consume a real-time stream of database change events.
Access detailed technical information for MariaDB MaxScale 24.02. This section is your complete reference for configuration settings, command-line tools, hint syntax, and more.
Get hands-on experience with MariaDB MaxScale 24.02. These tutorials provide step-by-step instructions for common tasks like setting up read-write splitting, failover, and sharding.
The MariaDBAuth-module implements the client and backend authentication for the server plugin mysql_native_password. This is the default authentication plugin used by both MariaDB and MySQL.
The following settings may be given in the authenticator_options of the listener.
clear_pw_passthrough
Type: boolean
Default: false
Activates passthrough mode. In this mode, MaxScale does not check client credentials at all and defers authentication to the backend server. This feature is primarily meant to be used with Xpand LDAP authentication, although it may be useful in any situation where MaxScale can neither check the existence of the client user account nor authenticate the client.
When a client connects to a listener with this setting enabled, MaxScale will change authentication method to "mysql_clear_password", causing the client to send their cleartext password to MaxScale. MaxScale will then attempt to use the password to authenticate to backends. The authentication result of the first backend to respond will be sent to the client. The backend may ask MaxScale for either cleartext password or standard ("mysql_native_password") authentication token. MaxScale can work with both backend plugins since it has the original password.
This feature is incompatible with service setting lazy_connect. Either leave
it unspecified or set lazy_connect=false in the linked service. Also,
multiple client authenticators are not allowed on the listener when
passthrough-mode is on.
Because passwords are sent in cleartext, the listener should be configured for ssl.
log_password_mismatch
Type: boolean
Mandatory: No
Dynamic: No
Default: false
The service setting log_auth_warnings must also be enabled for this setting to have effect. When both settings are enabled, password hashes are logged if a client gives a wrong password. This feature may be useful when diagnosing authentication issues. It should only be enabled on a secure system as the logging of password hashes may be a security risk.
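As a sketch of how these two settings fit together (the object names and the rest of the service definition are placeholders, patterned after the other examples in this document), the relevant configuration fragments could look like:
[MyService]
type=service
router=readwritesplit
servers=server1
user=myuser
password=mypasswd
log_auth_warnings=true
[MyListener]
type=listener
service=MyService
authenticator=mariadbauth
authenticator_options=log_password_mismatch=true
port=3306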
cache_dir: Deprecated and ignored.
inject_service_user: Deprecated and ignored.
This page is licensed: CC BY-SA / Gnu FDL
Change Data Capture (CDC) is a new MaxScale protocol that allows compatible clients to authenticate and register for Change Data Capture events. The protocol must be used in conjunction with the AVRO router, which currently converts MariaDB binlog events into AVRO records. Clients connect to a CDC listener and authenticate using credentials provided in a format described in the CDC Protocol documentation.
Note: If no users are found in that file or if it doesn't exist, the only available user will be the service user:
[avro-service]
type=service
router=avrorouter
source=replication-service
user=cdc_user
password=cdc_password
Starting with MaxScale 2.1, users can also be created through maxctrl:
maxctrl call command cdc add_user <service> <name> <password>
The <service> should be the service name where the user is created. Older versions of MaxScale should use the cdc_users.py script.
The output of this command should be appended to the cdcusers file at /var/lib/maxscale/<service name>/.
Users can be deleted by removing the related rows in the cdcusers file. For more details on the format of the cdcusers file, read the CDC protocol documentation.
This page is licensed: CC BY-SA / Gnu FDL
GSSAPI is an authentication protocol that is commonly implemented with Kerberos on Unix or Active Directory on Windows. This document describes GSSAPI authentication in MaxScale. The authentication module name in MaxScale is GSSAPIAuth.
For Unix systems, the usual GSSAPI implementation is Kerberos. This is a short guide on how to set up Kerberos for MaxScale.
The first step is to configure MariaDB to use GSSAPI authentication. The MariaDB documentation for the GSSAPI authentication plugin is a good example of how to set it up.
The next step is to copy the keytab file from the server where MariaDB is
installed to the server where MaxScale is located. The keytab file must be
placed in the configured default location, which almost always is /etc/krb5.keytab.
The mariadbprotocol module implements the MariaDB client-server protocol.
The legacy protocol names mysqlclient, mariadb and mariadbclient are all
aliases to mariadbprotocol.
Before upgrading to MariaDB MaxScale 24.02, it's critical to review the changes. This guide outlines new features, altered parameters, and deprecated functionality to ensure a smooth transition.
Note: The password encryption format changed in MaxScale 2.5. All encrypted passwords created with MaxScale 2.4 or older need to be re-encrypted.
There are two options for representing the password, either plain text or
encrypted passwords may be used. In order to use encrypted passwords a set of
keys must be generated that will be used by the encryption and decryption
process. To generate the keys, use the maxkeys command.
By default the key file will be generated in /var/lib/maxscale. If a different
directory is required, it can be given as the first argument to the program. For
more information, see maxkeys --help.
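For example (the custom directory shown here is only an illustration), generating the key file in the default location or in a custom directory could look like:
maxkeys
maxkeys /home/user/maxscale-keys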
Once the keys have been created the maxpasswd command can be used to generate
the encrypted password.
The username and password, either encrypted or plain text, are stored in the
service section using the user and password parameters.
This document describes how to configure a MariaDB primary-replica cluster monitor to be used with MaxScale.
Define the monitor that monitors the servers.
The mandatory parameters are the object type, the monitor module to use, the
list of servers to monitor and the username and password to use when connecting
to the servers. The monitor_interval parameter controls how long the monitor waits between monitoring loops.
bash$ cdc_users.py [-h] USER PASSWORD
bash$ cdc_users.py user1 pass1 >> /var/lib/maxscale/avro-service/cdcusers
With the comment filter it is possible to define comments that are injected before the actual statements. These comments appear as SQL comments when they are received by the server.
The Comment filter requires one mandatory parameter to be defined.
inject
Type: string
Mandatory: Yes
Dynamic: Yes
A parameter that contains the comment that is injected before the statements. There is also a predefined variable, $IP, that can be used to include the IP address of the client in the injected comment. Variables must be written in all caps.
The following configuration adds the IP address of the client to the comment.
In this example, when MaxScale receives a statement like SELECT user FROM people;, it will look like /* IP=::ffff:127.0.0.1 */SELECT user FROM people; when received by the server.
This page is licensed: CC BY-SA / Gnu FDL
Protocol level parameters are defined in the listeners. They must be defined using the scoped parameter syntax where the protocol name is used as the prefix.
For the MariaDB protocol module, the prefix is always mariadbprotocol.
allow_replication
Type: boolean
Mandatory: No
Dynamic: Yes
Default: true
Whether the use of the replication protocol is allowed through this listener. If
disabled with mariadbprotocol.allow_replication=false, all attempts to start
replication will be rejected with an ER_FEATURE_DISABLED error (error number
1289).
This page is licensed: CC BY-SA / Gnu FDL
If a custom location was used for the key file, give it as the first argument to maxpasswd and pass the password to be encrypted as the second argument. For
more information, see maxpasswd --help.
Here is an example configuration that uses an encrypted password.
If the key file is not in the default location, the datadir parameter must be set to the directory that contains it.
This page is licensed: CC BY-SA / Gnu FDL
maxkeys
maxpasswd plainpassword
96F99AA1315BDC3604B006F427DD9484
[My-Service]
type=service
router=readconnroute
router_options=master
servers=dbserv1, dbserv2, dbserv3
user=maxscale
password=96F99AA1315BDC3604B006F427DD9484
The monitor user requires the REPLICATION CLIENT privilege to do basic monitoring. To create a user with the proper grants, execute the following SQL.
Note: If the automatic failover of the MariaDB Monitor will be used, the user will require additional grants. Execute the following SQL to grant them.
This page is licensed: CC BY-SA / Gnu FDL
[Replication-Monitor]
type=monitor
module=mariadbmon
servers=dbserv1, dbserv2, dbserv3
user=monitor_user
password=my_password
monitor_interval=2000ms
CREATE USER 'monitor_user'@'%' IDENTIFIED BY 'my_password';
GRANT REPLICATION CLIENT ON *.* TO 'monitor_user'@'%';
GRANT SUPER, RELOAD ON *.* TO 'monitor_user'@'%';
[MyListener]
type=listener
authenticator=mariadbauth
authenticator_options=clear_pw_passthrough=true
ssl=true
<other options>
[MyComment]
type=filter
module=comment
inject="Comment to be injected"
[MyService]
type=service
router=readwritesplit
servers=server1
user=myuser
password=mypasswd
filters=MyComment
[IPComment]
type=filter
module=comment
inject="IP=$IP"
[MyService]
type=service
router=readwritesplit
servers=server1
user=myuser
password=mypasswd
filters=IPComment
SELECT user FROM people;
/* IP=::ffff:127.0.0.1 */SELECT user FROM people;
[MyListener]
type=listener
service=MyService
protocol=mariadbprotocol
mariadbprotocol.allow_replication=false
port=3306
The location of the keytab file can be changed with the KRB5_KTNAME environment variable.
To take GSSAPI authentication into use, add the following to the listener.
The principal name should be the same as on the MariaDB servers.
principal_name
Type: string
Mandatory: No
Dynamic: No
Default: mariadb/localhost.localdomain
The service principal name to send to the client. This parameter is a string parameter which is used by the client to request the token.
This parameter must be the same as the principal name that the backend MariaDB server uses.
gssapi_keytab_path
Type: path
Mandatory: No
Dynamic: No
Default: Kerberos Default
Keytab file location. This should be an absolute path to the file containing the
keytab. If not defined, Kerberos will search a default location, usually /etc/krb5.keytab. This path is set via an environment variable, which means that
multiple listeners with GSSAPIAuth will override each other. If using multiple
GSSAPI authenticators, either do not set this option or use the same value for
all listeners.
Read the Authentication Modules document for more details on how authentication modules work in MaxScale.
The GSSAPI plugin authentication starts when the database server sends the
service principal name in the AuthSwitchRequest packet. The principal name will
usually be in the form service@REALM.COM.
The client searches its local cache for a token for the service or may request it from the GSSAPI server. If found, the client sends the token to the database server. The database server verifies the authenticity of the token using its keytab file and sends the final OK packet to the client.
The GSSAPI authenticator modules require the GSSAPI development libraries (krb5-devel on CentOS 7).
This page is licensed: CC BY-SA / Gnu FDL
The C++ connector for the MariaDB MaxScale CDC system.
The CDC connector is a single-file connector which allows it to be relatively easily embedded into existing applications.
To start using the connector, either download it from the MariaDB website or configure the MaxScale repository
and install the maxscale-cdc-connector package.
A CDC connection object is prepared by instantiating the CDC::Connection
class. To create the actual connection, call the CDC::Connection::connect
method of the class.
After the connection has been created, call the CDC::Connection::read method
to get a row of data. The CDC::Row::length method tells how many values a row
has and CDC::Row::value is used to access that value. The field name of a
value can be extracted with the CDC::Row::key method and the current GTID of a
row of data is retrieved with the CDC::Row::gtid method.
To close the connection, destroy the instantiated object.
Example source code demonstrating basic usage of the MaxScale CDC Connector is available.
The CDC connector depends on:
OpenSSL
To build and package the connector as a library, follow MaxScale build
instructions with the exception of adding -DTARGET_COMPONENT=devel to the
CMake call.
This page is licensed: CC BY-SA / Gnu FDL
CDC is a new protocol that allows compatible clients to authenticate and register for Change Data Capture events. The protocol must be used in conjunction with the AVRO router, which currently converts MariaDB binlog events into AVRO records. The Change Data Capture protocol is used by clients to interact with the stored AVRO files and also allows registered clients to be notified of new events coming from a MariaDB 10.0/10.1 database.
The users and their hashed passwords are stored in /var/cache/maxscale/<service name>/cdcusers where <service name> is the name of the service.
For example, the following service entry will look into /var/cache/maxscale/CDC-Service/ for a file called cdcusers. If that file is found, the users in that file will be used for authentication.
If the cdcusers file cannot be found, the service user (maxuser:maxpwd in the example) can be used to connect through the CDC protocol.
For more details, refer to the .
Client connects to MaxScale CDC protocol listener.
Send the authentication message which includes the user and the SHA1 of the password
In the future, optional flags could be implemented.
Sending UUID
Specify the output format (AVRO or JSON) for data retrieval.
Send CDC commands to retrieve router statistics or to query for data events
The authentication starts when the client sends the hexadecimal representation
of the username concatenated with a colon (:) and the SHA1 of the password.
bin2hex(username + ':' + SHA1(password))
For example, the user foobar with a password of foopasswd should send the following hexadecimal string:
Server returns OK on success and ERR on failure.
REGISTER
REGISTER UUID=UUID, TYPE={JSON | AVRO}
Register as a client to the service.
Example:
Server returns OK on success and ERR on failure.
REQUEST-DATA
REQUEST-DATA DATABASE.TABLE[.VERSION] [GTID]
This command fetches data from the specified table in a database and returns the output in the requested format (AVRO or JSON). Data records are sent to clients and, if new AVRO versions are found (e.g. mydb.mytable.0000002.avro), the new schema and data will be sent as well.
The data will be streamed until the client closes the connection.
Clients should continue reading from the network in order to automatically get new events.
Example:
MaxScale includes an example CDC client application written in Python 3. Its source code is included in the MaxScale source distribution.
This page is licensed: CC BY-SA / Gnu FDL
Introduced in MaxScale 2.1, the module commands are special, module-specific commands. They allow the modules to expand beyond the capabilities of the module API. Currently, only MaxCtrl implements an interface to the module commands.
All registered module commands can be shown with maxctrl list commands and
they can be executed with maxctrl call command <module> <name> ARGS..., where <module> is the name of the module and <name> is the name of the command. ARGS is a command-specific list of arguments.
The module command API is defined in the modulecmd.h header. It consists of various functions to register and call module commands. Read the function documentation in the header for more details.
The following example registers the module command my_command for the module my_module.
The array my_args of type modulecmd_arg_type_t is used to tell what kinds of arguments the command expects. The first argument is a boolean and the second argument is an optional string.
Arguments are passed to the parsing function as an array of void pointers. They are interpreted as the types the command expects.
When the module command is executed, the argv parameter of my_simple_cmd contains the parsed arguments received from the caller of the command.
This page is licensed: CC BY-SA / Gnu FDL
HintRouter was introduced in 2.2 and is still beta.
The HintRouter module is a simple router intended to operate in conjunction with the NamedServerFilter. The router looks at the hints embedded in a packet buffer and attempts to route the packet according to the hint. The user can also set a default action to be taken when a query has no hints or when the hints could not be applied.
If a packet has multiple hints attached, the router will read them in order and attempt routing. Any successful routing ends the process and any further hints are ignored for the packet.
The HintRouter is a rather simple router and only accepts a few configuration settings.
default_action: This setting defines what happens when a query has no routing hint or when applying the routing hint(s) fails. If the default action also fails, the routing ends in an error and the session closes. The different values are:
Note that setting default action to anything other than all means that session
variable write commands are by default not routed to all backends.
default_server: Defines the default backend name if default_action=named. <server-name> must be a valid backend name.
max_slaves: <limit> should be an integer, -1 by default. Defines how many backend replica servers a session should attempt to connect to. Having fewer replicas defined in the service, or fewer successful connections during session creation, is not an error. The router will attempt to distribute replicas evenly between sessions by assigning them in a round-robin fashion. The session will always try to connect to a primary regardless of this setting, although not finding one is not an error.
Negative values activate default mode, in which case this value is set to the number of backends in the service - 1, so that the sessions are connected to all replicas.
If the hints or the default_action point to a named server, this setting is
probably best left to default to ensure that the specific server is connected to
at session creation. The router will not attempt to connect to additional
servers after session creation.
A minimal configuration doesn't require any parameters as all settings have reasonable defaults.
If packets should be routed to the primary server by default and only a few connections are required, the configuration might be as follows.
This page is licensed: CC BY-SA / Gnu FDL
This document describes how to configure a Galera cluster monitor.
Define the monitor that monitors the servers.
[Galera-Monitor]
type=monitor
module=galeramon
servers=dbserv1, dbserv2, dbserv3
user=monitor_user
password=my_password
monitor_interval=2000ms
The mandatory parameters are the object type, the monitor module to use, the
list of servers to monitor and the username and password to use when connecting
to the servers. The monitor_interval parameter controls how long the monitor waits between monitoring loops.
This monitor module will assign one node within the Galera Cluster as the
current primary and the other nodes as replicas. Only those nodes that are active
members of the cluster are considered when making the choice of primary node. The
primary node will be the node with the lowest value of wsrep_local_index.
The monitor user does not require any special grants to monitor a Galera cluster. To create a user for the monitor, execute the following SQL.
This page is licensed: CC BY-SA / Gnu FDL
The cat router is a special router that concatenates result sets.
Note: This module is experimental and must be built from source. The module is deprecated in MaxScale 23.08 and might be removed in a future release.
The router has no special parameters. To use it, define a service with router=cat and add the servers you want to use.
The order in which the servers are defined is the order in which they are queried. This means that the results are ordered based on the servers parameter of the service. The result is only complete once all servers have executed the query.
All commands executed via this router will be executed on all servers. This
means that an INSERT through the cat router will send it to all servers. In
the case of commands that do not return resultsets, the response of the last
server is sent to the client. This means that if one of the earlier servers
returns a different result, the client will not see it.
As the intended use-case of the router is mainly to reduce multiple result sets into one, it has no mechanisms to prevent writes from being executed on slave servers (which would cause data corruption or replication failure). Take great care when performing administrative operations through this router.
If a connection to one of the servers is lost, the client connection will also be closed.
Here is a simple example service definition that uses the servers and credentials from the tutorials.
This page is licensed: CC BY-SA / Gnu FDL
Review the official release notes for MariaDB MaxScale 24.02. This section details new features, bug fixes, and functional changes for each point release to ensure a smooth upgrade.
This filter was added in MariaDB MaxScale 2.3
We recommend installing MaxScale on a separate server, to ensure that there is no competition for resources between MaxScale and a MariaDB Server that it manages.
The recommended approach is to use the MariaDB package repository to install MaxScale. After enabling the repository by following the instructions, MaxScale can be installed with the following commands.
For RHEL/Rocky Linux/Alma Linux, use dnf install maxscale
The goal of this tutorial is to configure a system that appears to the client as a single database. MariaDB MaxScale will split the statements such that write statements are sent to the primary server and read statements are balanced across the replica servers.
This tutorial is a part of the MaxScale Tutorial. Please read it and follow the instructions. Return here once the basic setup is complete.
The first step is to define the servers that make up the cluster. These servers will be used by the services and are monitored by the monitor.
The address and port parameters tell where the server is located.
To enable encryption for the MaxScale-to-MariaDB communication, add ssl=true
to the server section. To enable server certificate verification, add ssl_verify_peer_certificate=true.
authenticator=GSSAPIAuth
authenticator_options=principal_name=mariadb/localhost.localdomain@EXAMPLE.COM
authenticator_options=principal_name=mymariadb@EXAMPLE.COM,gssapi_keytab_path=/home/user/mymariadb.keytab
#include <maxscale/modulecmd.hh>
bool my_simple_cmd(const MODULECMD_ARG *argv)
{
printf("%d arguments given\n", argv->argc);
}
int main(int argc, char **argv)
{
modulecmd_arg_type_t my_args[] =
{
{MODULECMD_ARG_BOOLEAN, "This is a boolean parameter"},
{MODULECMD_ARG_STRING | MODULECMD_ARG_OPTIONAL, "This is an optional string parameter"}
};
// Register the command
modulecmd_register_command("my_module", "my_command", my_simple_cmd, 2, my_args);
// Find the registered command
const MODULECMD *cmd = modulecmd_find_command("my_module", "my_command");
// Parse the arguments for the command
const void *arglist[] = {"true", "optional string"};
MODULECMD_ARG *arg = modulecmd_arg_parse(cmd, arglist, 2);
// Call the module command
modulecmd_call_command(cmd, arg);
// Free the parsed arguments
modulecmd_arg_free(arg);
return 0;
}
master: Route to the primary server.
slave: Route to any single replica server.
named: Route to a named server. The name is given in the default_server setting.
all: Default value. Route to all connected servers.
CREATE USER 'monitor_user'@'%' IDENTIFIED BY 'my_password';
[concat-service]
type=service
router=cat
servers=dbserv1,dbserv2,dbserv3
user=maxscale
password=maxscale_pw
The ssl and ssl_verify_peer_certificate parameters are similar to the --ssl and --ssl-verify-server-cert options of the mysql command line
client.
For more information about TLS, refer to the Configuration Guide.
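As an illustrative sketch (reusing the tutorial's example server), a server section with encryption and certificate verification enabled would look like:
[dbserv1]
type=server
address=192.168.2.1
port=3306
ssl=true
ssl_verify_peer_certificate=true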
This page is licensed: CC BY-SA / Gnu FDL
[dbserv1]
type=server
address=192.168.2.1
port=3306
[dbserv2]
type=server
address=192.168.2.2
port=3306
[dbserv3]
type=server
address=192.168.2.3
port=3306
The throttle filter is used to limit the maximum query frequency (QPS - queries per second) of a database session to a configurable value. The main use cases are to prevent a rogue session (client side error) and a DoS attack from overloading the system.
The throttling is dynamic. The query frequency is not limited to an absolute value. Depending on the configuration the throttle will allow some amount of high frequency queries, or especially short bursts with no frequency limitation.
This configuration states that the query frequency will be throttled to around 500 qps, and that the time limit a query is allowed to stay at the maximum frequency is 60 seconds. All values involving time are configured in milliseconds. With the basic configuration the throttling will be nearly immediate, i.e. a session will only be allowed very short bursts of high frequency querying.
When a session has been continuously throttled for throttling_duration
milliseconds, or 60 seconds in this example, MaxScale will disconnect the
session.
The two parameters max_qps and sampling_duration together define how a
session is throttled.
Suppose max qps is 400 qps and sampling duration is 10 seconds. Since QPS is not an instantaneous measure, but one could say it has a granularity of 10 seconds, we see that over the 10 seconds 10*400 = 4000 queries are allowed before throttling kicks in.
With these values, a fresh session can start off with a speed of 2000 qps, and maintain that speed for 2 seconds before throttling starts.
If the client continues to query at high speed and throttling_duration is set to 10 seconds, MaxScale will disconnect the session 12 seconds after it started.
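A sketch of the filter section used in this worked example (durations are given in milliseconds, following the document's convention; only the filter section is shown):
[Throttle]
type = filter
module = throttlefilter
max_qps = 400
sampling_duration = 10000
throttling_duration = 10000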
max_qps
Type: number
Mandatory: Yes
Dynamic: Yes
Maximum queries per second.
This is the frequency to which a session will be limited over a given time
period. QPS is not measured as an instantaneous value but over a configurable
sampling duration (see sampling_duration).
throttling_duration
Type: duration
Mandatory: Yes
Dynamic: Yes
This defines how long a session is allowed to be throttled before MaxScale disconnects the session.
sampling_duration
Type: duration
Mandatory: No
Dynamic: Yes
Default: 250ms
Sampling duration defines the window of time over which QPS is measured. This parameter directly affects the amount of time that high frequency queries are allowed before throttling kicks in.
The lower this value is, the more strict throttling becomes. Conversely, the longer this time is, the longer the bursts of high frequency querying that are allowed.
continuous_duration
Type: duration
Mandatory: No
Dynamic: Yes
Default: 2s
This value defines what continuous throttling means. Continuous throttling
starts as soon as the filter throttles the frequency. Continuous throttling ends
when no throttling has been performed in the past continuous_duration time.
This page is licensed: CC BY-SA / Gnu FDL
For Debian and Ubuntu, run apt update followed by apt install maxscale.
For SLES, use zypper install maxscale.
Download the correct MaxScale package for your CPU architecture and operating system from the MariaDB Downloads page. MaxScale can be installed with the following commands.
For RHEL/Rocky Linux/Alma Linux, use dnf install /path/to/maxscale-*.rpm
For Debian and Ubuntu, use apt install /path/to/maxscale-*.deb.
For SLES, use zypper install /path/to/maxscale-*.rpm.
MaxScale can also be installed using a tarball. That may be required if you are using a Linux distribution for which no installation package exists or if you want to install many different MaxScale versions side by side. For instructions on how to do that, please refer to Install MariaDB MaxScale using a Tarball.
Alternatively you may download the MariaDB MaxScale source and build your own binaries. To do this, refer to the separate document Building MariaDB MaxScale from Source Code
MaxScale assumes that memory allocations always succeed and in general does
not check for memory allocation failures. This assumption is compatible with
the Linux kernel parameter vm.overcommit_memory
having the value 0, which is also the default on most systems.
With vm.overcommit_memory being 0, memory allocations made by an
application never fail, but instead the application may be killed by the
so-called OOM (out-of-memory) killer if, by the time the application
actually attempts to use the allocated memory, there is not available
free memory on the system.
If the value is 2, then a memory allocation made by an application may
fail and unless the application is prepared for that possibility, it will
likely crash with a SIGSEGV. As MaxScale is not prepared to handle memory
allocation failures, it will crash in this situation.
The current value of vm.overcommit_memory can be checked with
or
The MaxScale Tutorial covers the first steps in configuring your MariaDB MaxScale installation. Follow this tutorial to learn how to configure and start using MaxScale.
For a detailed list of all configuration parameters, refer to the Configuration Guide and the module specific documents listed in the Documentation Contents.
Read the Encrypting Passwords section of the configuration guide to set up password encryption for the configuration file.
There are various administration tasks that may be done with MariaDB MaxScale. A command line tool, maxctrl, is available that interacts with a running MariaDB MaxScale, allows the status of MariaDB MaxScale to be monitored and gives some control of the MariaDB MaxScale functionality.
The administration tutorial covers the common administration tasks that need to be done with MariaDB MaxScale.
The main configuration file for MaxScale is in /etc/maxscale.cnf and
additional user-created configuration files are in /etc/maxscale.cnf.d/. Objects created or modified at runtime are stored in /var/lib/maxscale/maxscale.cnf.d/. Some modules also store internal data in /var/lib/maxscale/, named after the module or the configuration object.
The simplest way to back up the configuration and runtime data of a MaxScale installation is to create an archive from the following files and directories:
/etc/maxscale.cnf
/etc/maxscale.cnf.d/
/var/lib/maxscale/
This can be done with the following command:
If MaxScale is configured to store data in custom locations, these should be included in the backup as well.
This page is licensed: CC BY-SA / Gnu FDL
After configuring the servers and the monitor, we create a read-write-splitter service configuration. Create the following section in your configuration file. The section name is also the name of the service and should be meaningful. For this tutorial, we use the name Splitter-Service.
router defines the routing module used. Here we use readwritesplit for query-level read-write-splitting.
A service needs a list of servers where queries will be routed to. The server names must match the names of server sections in the configuration file and not the hostnames or addresses of the servers.
The user and password parameters define the credentials the service uses to populate user authentication data. These users were created at the start of the MaxScale Tutorial.
For increased security, see password encryption.
To allow network connections to a service, a network port must be associated with it. This is done by creating a separate listener section in the configuration file. A service may have multiple listeners but for this tutorial one is enough.
The service parameter tells which service the listener connects to. For the Splitter-Listener we set it to Splitter-Service.
A listener must define the network port to listen on.
The optional address parameter defines the local address the listener should bind to.
This may be required when the host machine has multiple network interfaces. The
default behavior is to listen on all network interfaces (the IPv6 address ::).
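As an illustration (the address shown is a placeholder for one of the host machine's interfaces), a listener bound to a single interface could look like:
[Splitter-Listener]
type=listener
service=Splitter-Service
address=192.168.2.10
port=3306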
For the last steps, please return to MaxScale Tutorial.
This page is licensed: CC BY-SA / Gnu FDL
sudo yum -y install epel-release
sudo yum -y install jansson openssl-devel cmake make gcc-c++ git
sudo apt-get update
sudo apt-get -y install libjansson-dev libssl-dev cmake make g++ git
sudo apt-get update
sudo apt-get -y install libjansson-dev libssl-dev cmake make g++ git
sudo zypper install -y libjansson-devel openssl-devel cmake make gcc-c++ git
[CDC-Service]
type=service
router=avrorouter
user=maxuser
password=maxpwd
foobar:SHA1(foopasswd) -> 666f6f6261723a3137336363643535253331
REGISTER UUID=11ec2300-2e23-11e6-8308-0002a5d5c51b, TYPE=AVRO
REQUEST-DATA db1.table1
REQUEST-DATA dbi1.table1.000003
REQUEST-DATA db2.table4 0-11-345default_action=<master|slave|named|all>default_server=<server-name>max_slaves=<limit>[Routing-Service]
type=service
router=hintrouter
servers=replica1,replica2,replica3
[Routing-Service]
type=service
router=hintrouter
servers=MyPrimary, replica1,replica2,replica3,replica4,replica5,replica6,replica7
default_action=master
max_slaves=2
[Throttle]
type = filter
module = throttlefilter
max_qps = 500
throttling_duration = 60000
...
[Routing-Service]
type = service
filters = Throttle
sysctl vm.overcommit_memory
cat /proc/sys/vm/overcommit_memory
tar -caf maxscale-backup.tar.gz /etc/maxscale.cnf /etc/maxscale.cnf.d/ /var/lib/maxscale/
[Splitter-Service]
type=service
router=readwritesplit
servers=dbserv1, dbserv2, dbserv3
user=maxscale
password=maxscale_pw
[Splitter-Listener]
type=listener
service=Splitter-Service
port=3306
The Maxrows filter is capable of restricting the number of rows that a SELECT, a prepared statement or a stored procedure can return to the client application.
If a resultset from a backend server has more rows than the configured limit or the resultset size exceeds the configured size, an empty result will be sent to the client.
The Maxrows filter is easy to configure and to add to any existing service.
The Maxrows filter has no mandatory parameters. Optional parameters are:
max_resultset_rows
Type: number
Mandatory: No
Dynamic: Yes
Default: (no limit)
Specifies the maximum number of rows a resultset can have in order to be returned to the user.
If a resultset is larger than this an empty result will be sent instead.
max_resultset_size
Type: size
Mandatory: No
Dynamic: Yes
Default: 64Ki
Specifies the maximum size a resultset can have in order to be sent to the client. A resultset larger than this will not be sent: an empty resultset will be sent instead.
max_resultset_return
Type: enum
Mandatory: No
Dynamic: Yes
Values: empty, error, ok
Default: empty
Specifies what the filter sends to the client when the rows or size limit is hit, possible values:
empty: an empty result set
error: an error packet with the input SQL
ok: an OK packet
Example output with ERR packet:
debug
Type: number
Mandatory: No
Dynamic: Yes
Default: 0
An integer value that controls the level of debug logging done by the Maxrows filter. The value is a bitfield, with different bits denoting different types of logging.
0 (0b00000) No logging is made.
1 (0b00001) A decision to handle data from the server is logged.
2 (0b00010) Reaching max_resultset_rows or max_resultset_size is logged.
To log everything, give debug a value of 3.
Here is an example of a filter configuration where the maximum number of returned rows is 10000 and the maximum allowed resultset size is 256KB.
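A sketch of such a configuration (the section name is a placeholder; 256KB is written as 256Ki in MaxScale's size notation):
[MaxRows]
type=filter
module=maxrows
max_resultset_rows=10000
max_resultset_size=256Ki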
This page is licensed: CC BY-SA / Gnu FDL
This document lists known issues and limitations in MariaDB MaxScale and its plugins. Since limitations are related to specific plugins, this document is divided into several sections.
In versions 2.1.2 and earlier, the configuration files are limited to 1024 characters per line. This limitation was increased to 16384 characters in MaxScale 2.1.3. MaxScale 2.3.0 increased this limit to 16777216 characters.
In versions 2.2.12 and earlier, the section names in the configuration files were limited to 49 characters. This limitation was increased to 1023 characters in MaxScale 2.2.13.
Starting with MaxScale 2.4.0, on systems with Linux kernel 3.9 or newer, it is possible for multiple MaxScale instances to listen on the same network port due to the addition of SO_REUSEPORT support, provided the directories used by the instances are completely separate and there are no conflicts. This can cause unexpected splitting of connections, but it will only happen if users explicitly tell MaxScale to ignore the default directories and will not happen in normal use.
The parser of MaxScale correctly parses WITH statements, but fails to
collect columns, functions and tables used in the SELECT defining the WITH clause.
Consequently, the database firewall will not block WITH statements
where the SELECT of the WITH clause refers to forbidden columns.
MaxScale assumes that certain configuration parameters in MariaDB are set to their default values. These include but are not limited to:
autocommit: Autocommit is enabled for all new connections.
tx_read_only: Transactions use READ WRITE permissions by default.
If a module in MaxScale requires tracking of transaction boundaries but does not
require query classification, a custom parser is used to detect them. Currently
the only situation in which this parser is used is when a readconnroute
service uses the cache filter.
The custom parser detects a subset of the full SQL syntax used to start
transactions. This means that more complex statements will not be fully parsed
and will cause the transaction state to not match the real state on the
database. For example, SET @my_var = (SELECT 1), autocommit = 0 is not parsed
by the custom parser and causes the autocommit modification to not be noticed.
MaxScale will treat statements executed after XA START and before XA END as
if they were executed in a normal read-write transaction started with START TRANSACTION. This means that only XA transactions in the ACTIVE state will be
routed as transactions and all statements after XA END are routed normally.
XA transactions and normal transactions are mutually exclusive in MariaDB. This
means that a START TRANSACTION command will fail if the connection already has
an open XA transaction. MaxScale currently only inspects the SQL and deduces the
transaction state from that. If a transaction fails to start due to an open XA
transaction, the state in MaxScale and in MariaDB can be different and MaxScale
will keep routing statements as if they were inside of a transaction. However,
as this is an unlikely scenario, usually no action needs to be taken.
For its proper functioning, MaxScale needs in general to be aware of the transaction state and autocommit mode. In order to be that, MaxScale parses statements going through it.
However, if a transaction is committed or rolled back, or the autocommit mode is changed using a prepared statement, MaxScale will miss that and its internal state will be incorrect, until the transaction state or autocommit mode is changed using an explicit statement.
For instance, after the following sequence of commands, MaxScale will still think autocommit is on:
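A minimal sketch of such a sequence (assuming the autocommit mode is changed through a prepared statement):
PREPARE stmt FROM 'SET autocommit = 0';
EXECUTE stmt;
-- MaxScale does not parse the prepared statement, so it still assumes autocommit is on.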
To ensure that MaxScale functions properly, do not commit or rollback a transaction or change the autocommit mode using a prepared statement.
Compression is not included in the server handshake.
If a KILL [CONNECTION] <ID> statement is executed, MaxScale will intercept it. If the ID matches a MaxScale session ID, the session will be closed by sending modified KILL commands of the same type to all backend servers to which the session in question is connected. This results in behavior that is similar to how MariaDB does it. If the KILL CONNECTION USER <user> form is given, all connections with a matching username will be closed instead.
MariaDB MaxScale does not support KILL QUERY ID <query_id>.
MySQL old style passwords are not supported. MySQL versions 4.1 and newer use a new authentication protocol which does not support pre-4.1 style passwords.
When users have different passwords based on the host from which they connect, MariaDB MaxScale is unable to determine which password it should use to connect to the backend database. This results in failed connections and unusable usernames in MariaDB MaxScale.
The Tee filter does not support binary protocol prepared statements. The execution of prepared statements through a service that uses the tee filter is not guaranteed to succeed on the service the filter branches to in the same way as it does on the original service.
This possibility exists due to the fact that the binary protocol prepared statements are identified by a server-generated ID. The ID sent to the client from the main service is not guaranteed to be the same that is sent by the branch service.
A server can only be monitored by one monitor. Two or more monitors monitoring the same server is considered an error.
The default master selection is based only on MIN(wsrep_local_index). This can be influenced with the server priority mechanic described in the manual.
Refer to individual router documentation for a list of their limitations.
The ETL feature in MaxScale always uses the MariaDB Connector/ODBC driver to perform the data loading into MariaDB. The recommended minimum version of the connector is 3.1.18. Older versions of the driver suffer from problems that may manifest as crashes or memory leaks. The driver must be installed on the system in order for the ETL feature to work.
The data loading into MariaDB is done with autocommit, unique_checks and foreign_key_checks disabled inside of a single transaction. This is done to
leverage the optimizations done for InnoDB that allows faster insertions into
empty tables. When loading data into MariaDB versions 10.5 or older, this can
translate into long rollback times in case the ETL operation fails.
For ETL operations that migrate data from PostgreSQL, we recommend using the official PostgreSQL ODBC driver. Use of other PostgreSQL ODBC drivers is possible but not recommended: correct configuration of the driver is necessary to prevent the driver from consuming too much memory.
Triggers on tables are not migrated automatically.
Check constraints are defined using the native PostgreSQL syntax. Incompatibilities must be manually fixed.
All indexes specific to PostgreSQL will be converted into normal indexes in MariaDB.
The GEOMETRY type is assumed to be the type provided by PostGIS. It is converted into a MariaDB GEOMETRY type.
It is the responsibility of the end-user to correctly configure the ODBC driver. Some drivers read the whole resultset into memory by default, which will result in MaxScale running out of memory.
ETL operations that operate on more than one catalog are not supported.
This page is licensed: CC BY-SA / Gnu FDL
This filter was introduced in MariaDB MaxScale 2.3.0.
The binlogfilter can be combined with a binlogrouter service to selectively
replicate the binary log events to replica servers.
The filter uses two settings, match and exclude, to determine which events are replicated. If a binlog event does not match or is excluded, the event is replaced with an empty data event. The empty event is always 35 bytes which translates to a space reduction in most cases.
When statement-based replication is used, any query events that are filtered out are replaced with a SQL comment. This causes the query event to do nothing and thus the event will not modify the contents of the database. The GTID position of the replicating database will still advance which means that downstream servers replicating from it keep functioning correctly.
The filter works with both row based and statement based replication but we recommend using row based replication with the binlogfilter. This guarantees that there are no ambiguities in the event filtering.
match
Type: regex
Mandatory: No
Dynamic: Yes
Default: None
Include queries that match the regex. See next entry, exclude, for more information.
exclude
Type: regex
Mandatory: No
Dynamic: Yes
Default: None
Exclude queries that match the regex.
If neither match nor exclude is defined, the filter does nothing and all events are replicated. This filter does not accept regular expression options as a separate setting; such settings must be defined in the patterns themselves. See the regular expression documentation for more information.
The two settings are matched against the database and table name concatenated
with a period. For example, the string the patterns are matched against for the
database test and table t1 is test.t1.
For statement based replication, the pattern is matched against all the tables in the statements. If any of the tables matches the match pattern, the event is replicated. If any of the tables matches the exclude pattern, the event is not replicated.
rewrite_src
Type: regex
Mandatory: No
Dynamic: Yes
Default: None
See the next entry, rewrite_dest, for more information.
rewrite_dest
Type: string
Mandatory: No
Dynamic: Yes
Default: None
rewrite_src and rewrite_dest control the statement rewriting of the binlogfilter.
The rewrite_src setting is a PCRE2 regular expression that is matched against
the default database and the SQL of statement based replication events (query
events). rewrite_dest is the replacement string which supports the normal
PCRE2 backreferences (e.g. the first capture group is $1, the second is $2,
etc.).
Both rewrite_src and rewrite_dest must be defined to enable statement rewriting.
When statement rewriting is enabled, GTID-based replication must be used. The filter will disallow replication for all replicas that attempt to replicate with traditional file-and-position based replication.
The replacement is done both on the default database as well as the SQL statement in the query event. This means that great care must be taken when defining the rewriting rules. To prevent accidental modification of the SQL into a form that is no longer valid, use database and table names that never occur in the inserted data and is never used as a constant value.
With the following configuration, only events belonging to database customers
are replicated. In addition to this, events for the table orders are excluded
and thus are not replicated.
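A sketch of such a filter configuration (the section name and exact patterns are illustrative; patterns are matched against database.table as described above):
[BinlogFilter]
type=filter
module=binlogfilter
match=/customers[.]/
exclude=/[.]orders/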
For more information about the binlogrouter and how to use it, refer to the binlogrouter documentation.
This page is licensed: CC BY-SA / Gnu FDL
MariaDB MaxScale can be built on any system that meets the requirements. The main requirements are as follows:
CMake version 3.16 or later (Packaging requires CMake 3.25.1 or later)
GCC version 4.9 or later
OpenSSL version 1.0.1 or later
GNUTLS
Node.js 14 or newer for building MaxCtrl and the GUI (webpack), Node.js 10 or newer for running MaxCtrl
PAM
SASL2 (cyrus-sasl)
SQLite3 version 3.3 or later
Tcl
git
jansson
libatomic
libcurl
libmicrohttpd
libuuid
libxml2
libssh
pcre2
zstd
This is the minimum set of requirements that must be met to build the MaxScale core package. Some modules in MaxScale require optional extra dependencies.
libuuid (binlogrouter)
boost (binlogrouter)
Bison 2.7 or later (dbfwfilter)
Flex 2.5.35 or later (dbfwfilter)
Some of these dependencies are not available on all operating systems and are
downloaded automatically during the build step. To skip the building of modules
that need automatic downloading of the dependencies, use -DBUNDLE=N when
configuring CMake.
This installs MaxScale as if it was installed from a package. Install git before running the following commands.
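A minimal sketch of those commands (the repository URL and install prefix are assumptions; adjust them to the version and layout you want):
git clone https://github.com/mariadb-corporation/MaxScale
mkdir MaxScale/build
cd MaxScale/build
cmake .. -DCMAKE_INSTALL_PREFIX=/usr
make
sudo make install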
For a definitive list of packages, consult the build dependency installation script in the MaxScale source tree.
The tests and other parts of the build can be controlled via CMake arguments.
Here is a small table with the names of the most common parameters and what
they control. These should all be given as parameters to the -D switch in NAME=VALUE format (e.g. -DBUILD_TESTS=Y).
Note: Running cmake -LH in the build directory lists the available CMake variables and their descriptions.
To run the MaxScale unit test suite, configure the build with -DBUILD_TESTS=Y,
compile and then run the make test command.
If you wish to build packages, just add -DPACKAGE=Y to the CMake invocation and build the package with make package instead of installing MaxScale with make install. This process will create an RPM/DEB package depending on your system.
To build a tarball, add -DTARBALL=Y to the cmake invocation. This will create
a maxscale-x.y.z.tar.gz file where x.y.z is the version number.
Some Debian and Ubuntu systems suffer from a bug where make package fails with errors from dpkg-shlibdeps. This can be fixed by running make before make package and adding the path to the libmaxscale-common.so library to the LD_LIBRARY_PATH environment variable.
The MaxScale build system is split into multiple components. The main component is the core MaxScale package, which contains MaxScale and all the modules. This is the default component that is built, installed and packaged. There is also the experimental component, which contains all experimental modules that are not considered part of the core MaxScale package and are either alpha or beta quality modules.
To build the experimental modules along with the MaxScale core components,
invoke CMake with -DTARGET_COMPONENT=core,experimental.
This page is licensed: CC BY-SA / Gnu FDL
MariaDB MaxScale is also made available as a tarball, which is named like maxscale-x.y.z.OS.tar.gz, where x.y.z is the same as the corresponding version and OS identifies the operating system, e.g. maxscale-2.5.6.centos.7.tar.gz.
In order to use the tarball, the following libraries are required:
libcurl
libaio
OpenSSL
gnutls
libatomic
unixODBC
The tarball has been built with the assumption that it will be installed in /usr/local. However, it is possible to install it in any directory, but in that case MariaDB MaxScale must be invoked with the --basedir flag.
/usr/local
If you have root access to the system you probably want to install MariaDB MaxScale under
the user and group maxscale.
The required steps are as follows:
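A sketch of those steps (the exact tarball name depends on the version and operating system):
sudo groupadd maxscale
sudo useradd -g maxscale maxscale
cd /usr/local
sudo tar -xzvf maxscale-x.y.z.OS.tar.gz
sudo ln -s maxscale-x.y.z.OS maxscale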
Creating the symbolic link is necessary, since MariaDB MaxScale has been built
with the assumption that the plugin directory is /usr/local/maxscale/lib/maxscale.
The symbolic link also makes it easy to switch between different versions of
MariaDB MaxScale that have been installed side by side in /usr/local;
just make the symbolic link point to another installation.
In addition, the first time you install MariaDB MaxScale from a tarball you need to create the following directories:
and make maxscale the owner of them:
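A sketch of the commands (the exact set of directories is an assumption based on the runtime paths mentioned elsewhere in this document):
sudo mkdir /var/log/maxscale /var/lib/maxscale /var/run/maxscale /var/cache/maxscale
sudo chown maxscale /var/log/maxscale /var/lib/maxscale /var/run/maxscale /var/cache/maxscale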
The following step is to create the MariaDB MaxScale configuration file /etc/maxscale.cnf.
The file etc/maxscale.cnf.template can be used as a base.
Please refer to the Configuration Guide for details.
When the configuration file has been created, MariaDB MaxScale can be started.
The -d flag causes maxscale not to turn itself into a daemon,
which is advisable the first time MariaDB MaxScale is started, as it makes it easier to spot problems.
If you want to place the configuration file somewhere other than /etc
you can invoke MariaDB MaxScale with the --config flag,
for instance, --config=/usr/local/maxscale/etc/maxscale.cnf.
Note also that if you want to keep everything under /usr/local/maxscale
you can invoke MariaDB MaxScale using the flag --basedir.
That will cause MariaDB MaxScale to look for its configuration file in /usr/local/maxscale/etc and to store all runtime files under /usr/local/maxscale/var.
Enter a directory where you have the right to create a subdirectory. Then do as follows.
The next step is to create the MaxScale configuration file maxscale-x.y.z/etc/maxscale.cnf.
The file maxscale-x.y.z/etc/maxscale.cnf.template can be used as a base.
Please refer to for details.
When the configuration file has been created, MariaDB MaxScale can be started.
With the flag --basedir, MariaDB MaxScale is told where the lib, etc and var
directories are found. Unless it is specified, MariaDB MaxScale assumes
the lib directory is found in /usr/local/maxscale,
and the var and etc directories in /.
It is also possible to specify the directories and the location of the configuration file individually. Invoke MaxScale like
to find out the appropriate flags.
This page is licensed: CC BY-SA / Gnu FDL
MaxGUI is a browser-based interface for MaxScale REST-API and query execution.
To enable MaxGUI in a testing mode, add admin_host=0.0.0.0 and admin_secure_gui=false under the [maxscale] section of the MaxScale
configuration file. Once enabled, MaxGUI will be available on port 8989: http://127.0.0.1:8989/
To make MaxGUI secure, set admin_secure_gui=true and configure both the admin_ssl_key and admin_ssl_cert parameters.
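As a sketch, the corresponding [maxscale] settings might look like the following (the certificate and key paths are illustrative):

[maxscale]
admin_host=0.0.0.0
admin_secure_gui=true
admin_ssl_key=/certs/maxscale-key.pem
admin_ssl_cert=/certs/maxscale-cert.pem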
See and for instructions on how to harden your MaxScale installation for production use.
MaxGUI uses the same credentials as maxctrl. The default username is admin
with mariadb as the password.
Internally, MaxGUI uses as the authentication method for persisting the user's session. If the Remember me checkbox is ticked, the session will persist for 24 hours. Otherwise, the session will expire as soon as MaxGUI is closed.
To log out, simply click the username section in the top right corner of the page header to access the logout menu.
This page provides an overview of MaxScale configuration which includes Monitors, Servers, Services, Sessions, Listeners, and Filters.
By default, the refresh interval is 10 seconds.
This page shows information on each MaxScale object and allows editing its parameters and relationships, as well as performing other manipulation operations.
Access this page by clicking on the MaxScale object name on the
This page visualizes MaxScale configuration and clusters.
Configuration: Visualizing MaxScale configuration.
Cluster: Visualizes a replication cluster as a tree graph and provides
manual cluster manipulation operations such as switchover, reset-replication, release-locks, failover, and rejoin. At the
moment, it supports only servers monitored by a monitor using the module.
Access this page by clicking the graph icon on the sidebar navigation.
This page shows and allows editing of MaxScale parameters.
Access this page by clicking the gear icon on the sidebar navigation.
Realtime MaxScale logs can be accessed by clicking the logs icon on the sidebar navigation.
The "Workspace" page offers a versatile set of tools for effectively managing data and database interactions. It includes the following key tasks:
Execute queries on various servers, services, or listeners to retrieve data and perform database operations. Visualize query results using different graph types such as line, bar, or scatter graphs. Export query results in formats like CSV or JSON for further analysis and sharing.
The "Data Migration" feature facilitates seamless transitions from PostgreSQL to MariaDB. Transfer data and database structures between the two systems while ensuring data integrity and consistency throughout the process.
Generate Entity-Relationship Diagrams (ERDs) to gain insight into data structures and optimize database design for both efficiency and clarity.
This page is licensed: CC BY-SA / Gnu FDL
The goal of this tutorial is to configure a system that has two ports available, one for write connections and another for read connections. The read connections are load-balanced across replica servers.
This tutorial is a part of the MariaDB MaxScale Tutorial. Please read it and follow the instructions. Return here once basic setup is complete.
We want two services and ports to which the client application can connect. One service routes client connections to the primary server, the other load balances between replica servers. To achieve this, we need to define two services in the configuration file.
Create the following two sections in your configuration file. The section names are the names of the services and should be meaningful. For this tutorial, we use the names Write-Service and Read-Service.
router defines the routing module used. Here we use readconnroute for connection-level routing.
A service needs a list of servers to route queries to. The server names must match the names of server sections in the configuration file and not the hostnames or addresses of the servers.
The router_options-parameter tells the readconnroute-module which servers it should
route a client connection to. For the write service we use the master-type and for the
read service the slave-type.
The user and password parameters define the credentials the service uses to populate user authentication data. These users were created at the start of the .
For increased security, see .
To allow network connections to a service, a network port must be associated with it. This is done by creating a separate listener section in the configuration file. A service may have multiple listeners but for this tutorial one per service is enough.
The service parameter tells which service the listener connects to. For the Write-Listener we set it to Write-Service and for the Read-Listener we set it to Read-Service.
A listener must define the network port to listen on.
The optional address-parameter defines the local address the listener should bind to.
This may be required when the host machine has multiple network interfaces. The
default behavior is to listen on all network interfaces (the IPv6 address ::).
For the last steps, please return to .
This page is licensed: CC BY-SA / Gnu FDL
This filter was introduced in MariaDB MaxScale 2.1.
This document provides an overview of the readconnroute router module and its intended use case scenarios. It also displays all router configuration parameters with their descriptions.
[MaxRows]
type=filter
module=maxrows
[MaxRows-Routing-Service]
type=service
...
filters=MaxRows

max_resultset_rows=1000

max_resultset_size=128Ki

MariaDB [(test)]> select * from test.t4;
ERROR 1415 (0A000): Row limit/size exceeded for query: select * from test.t4

debug=2

[MaxRows]
type=filter
module=maxrows
max_resultset_rows=10000
max_resultset_size=256000

Any KILL commands executed using a prepared statement are ignored by
MaxScale. If any are executed, it is highly likely that the wrong connection
ends up being killed.
If a KILL connection kills a session that is connected to a readwritesplit
service that has transaction_replay or delayed_retry enabled, it is
possible that the query is retried even if the connection is killed. To avoid
this, use KILL QUERY instead.
A KILL on one service can cause a connection from another service to be
closed even if it uses a different protocol.
The change user command (COM_CHANGE_USER) only works with standard authentication.
If a COM_CHANGE_USER succeeds on MaxScale but fails on the server, the session ends up in an inconsistent state. This can happen if the password of the target user is changed and MaxScale uses old user account data when processing the change user. In such a situation, MaxScale and the server will disagree on the current user. This can affect e.g. reconnections.
ST_AsText
memcached (storage_memcached for the cache filter)
hiredis (storage_redis for the cache filter)
CMAKE_INSTALL_PREFIX: Location where MariaDB MaxScale will be installed to. Set this to /usr if you want MariaDB MaxScale installed into the same place the packages are installed.
BUILD_TESTS: Build unit tests.
WITH_SCRIPTS: Install systemd and init.d scripts.
PACKAGE: Enable building of packages.
TARGET_COMPONENT: Which component to install; the default is the 'core' package. Other targets are 'experimental', which installs experimental packages, and 'all', which installs all components.
TARBALL: Build tar.gz packages; requires PACKAGE=Y.
The readconnroute router provides simple and lightweight load balancing across a set of servers. The router can also be configured to balance connections based on a weighting parameter defined in the server's section.
Note that readconnroute balances connections and not statements. When a client connects, the router selects a server based upon the router configuration and current server load, but the single created connection is fixed and will not be changed for the duration of the session. If the connection between MaxScale and the server breaks, the connection cannot be re-established and the session will be closed. The fact that the server is fixed when the client connects also means that routing hints are ignored.
Warning: readconnroute will not prevent writes from being done even if you
define router_options=slave. The client application is responsible for
making sure that it only performs read-only queries in such
cases. readconnroute is simple by design: it selects a server for each
client connection and routes all queries there. If something more complex is
required, the readwritesplit router is usually the right
choice.
For more details about the standard service parameters, refer to the Configuration Guide.
Type: enum_mask
Mandatory: No
Dynamic: Yes
Values: master, slave, synced, running
Default: running
router_options can contain a comma separated list of valid server
roles. These roles are used as the valid types of servers the router will
form connections to when new sessions are created.
Examples:
Here is a list of all possible values for the router_options.
master
A server assigned as a primary by one of MariaDB MaxScale monitors. Depending on the monitor implementation, this could be a primary server of a Primary-Replica replication cluster or a Write-Primary of a Galera cluster.
slave
A server assigned as a replica of a primary. If all replicas are down, but the primary is still available, then the router will use the primary.
synced
A Galera cluster node which is in a synced state with the cluster.
running
A server that is up and running. All servers that MariaDB MaxScale can connect to are labeled as running.
If no router_options parameter is configured in the service definition,
the router will use the default value of running. This means that it will
load balance connections across all running servers defined in the servers
parameter of the service.
When a connection is being created and the candidate server is being chosen, the list of servers is processed in order from first entry to last. This means that if two servers with equal weight and status are found, the one that's listed first in the servers parameter for the service is chosen.
Type: boolean
Mandatory: No
Dynamic: Yes
Default: true
This option can be used to prevent queries from being sent to the current primary.
If router_options does not contain master, the readconnroute instance is
usually meant for reading. Setting master_accept_reads=false excludes the primary
from server selection (and thus from receiving reads).
If router_options contains master, the setting of master_accept_reads has no effect.
By default master_accept_reads=true.
Type: duration
Mandatory: No
Dynamic: Yes
Default: 0s
The maximum acceptable replication lag. The value is in seconds and is specified
as documented here. The
default value is 0s, which means that the lag is ignored.
The replication lag of a server must be less than the configured value in order
for it to be used for routing. To configure the router to not allow any lag, use
the smallest duration larger than 0, that is, max_replication_lag=1s.
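As an illustration, a hedged sketch of a read-only service that combines the parameters described above (server names and credentials are placeholders):

[Read-Service]
type=service
router=readconnroute
router_options=slave
master_accept_reads=false
max_replication_lag=30s
servers=dbserv1, dbserv2, dbserv3
user=maxscale
password=maxscale_pw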
The most common use for the readconnroute is to provide either a read or write port for an application. This provides a more lightweight routing solution than the more complex readwritesplit router but requires the application to be able to use distinct write and read ports.
To configure a read-only service that tolerates primary failures, we first need to add a new section into the configuration file.
Here the router_options designates replicas as the only valid server
type. With this configuration, the queries are load balanced across the
replica servers.
For more complex examples of the readconnroute router, take a look at the examples in the Tutorials folder.
The router_diagnostics output for readconnroute has the following fields.
queries: Number of queries executed through this service.
Sending of binary data with LOAD DATA LOCAL INFILE is not supported.
The router will never reconnect to the server it initially connected to.
This page is licensed: CC BY-SA / Gnu FDL
set autocommit=1
PREPARE hide_autocommit FROM "set autocommit=0"
EXECUTE hide_autocommit

git clone https://github.com/mariadb-corporation/MaxScale
mkdir build
cd build
../MaxScale/BUILD/install_build_deps.sh
cmake ../MaxScale -DCMAKE_INSTALL_PREFIX=/usr
make
sudo make install
sudo ./postinst

make
LD_LIBRARY_PATH=$PWD/server/core/ make package

$ sudo groupadd maxscale
$ sudo useradd -g maxscale maxscale
$ cd /usr/local
$ sudo tar -xzvf maxscale-x.y.z.OS.tar.gz
$ sudo ln -s maxscale-x.y.z.OS maxscale
$ cd maxscale
$ sudo chown -R maxscale var

$ sudo mkdir /var/log/maxscale
$ sudo mkdir /var/lib/maxscale
$ sudo mkdir /var/run/maxscale
$ sudo mkdir /var/cache/maxscale

$ sudo chown maxscale /var/log/maxscale
$ sudo chown maxscale /var/lib/maxscale
$ sudo chown maxscale /var/run/maxscale
$ sudo chown maxscale /var/cache/maxscale

$ sudo bin/maxscale --user=maxscale -d

$ sudo bin/maxscale --user=maxscale --basedir=/usr/local/maxscale -d

$ tar -xzvf maxscale-x.y.z.OS.tar.gz
$ cd maxscale-x.y.z.OS
$ bin/maxscale -d --basedir=.

$ bin/maxscale --help

[Write-Service]
type=service
router=readconnroute
router_options=master
servers=dbserv1, dbserv2, dbserv3
user=maxscale
password=maxscale_pw
[Read-Service]
type=service
router=readconnroute
router_options=slave
servers=dbserv1, dbserv2, dbserv3
user=maxscale
password=maxscale_pw

[Write-Listener]
type=listener
service=Write-Service
port=3306
[Read-Listener]
type=listener
service=Read-Service
port=3307

router_options=slave
router_options=master,slave

[Read-Service]
type=service
router=readconnroute
servers=replica1,replica2,replica3
router_options=slave

Ed25519 is a highly secure authentication method based on public key cryptography. It is used with the auth_ed25519-plugin of MariaDB Server.
When a client authenticates via ed25519, MaxScale first sends them a random message. The client signs the message using their password as private key and sends the signature back. MaxScale then checks the signature using the public key fetched from the mysql.user-table. The client password or an equivalent token is never exposed. For more information, see .
The security of this authentication scheme presents a problem for a proxy such as MaxScale since MaxScale needs to log in to backend servers on behalf of the client. Since each server will generate their own random messages, MaxScale cannot simply forward the original signature. Either the real password is required, or a different authentication scheme must be used between MaxScale and backends. The MaxScale ed25519auth-plugin supports both alternatives.
To begin, add "ed25519auth" to the list of authenticators for a listener.
MaxScale will now authenticate incoming clients with ed25519 if their user account has plugin set to "ed25519" in the mysql.user-table. However, routing queries will fail since MaxScale cannot authenticate to backends. To continue, either use a mapping file or enable sha256-mode. Sha256-mode is enabled with the following settings.
This setting defines the authentication mode used. Two values are supported:
ed25519 (default) Digital signature based authentication. Requires mapping
for backend support.
sha256 Authenticate client with caching_sha2_password-plugin instead.
Requires either SSL or configured RSA-keys.
Defines the RSA-keys used for encrypting the client password if SSL is not in use. Should point to files with the private and public keys.
To enable MaxScale to authenticate to backends, user mapping can be used. The mapping and backend passwords are given in a json-file. The client can map to an identical username or to another user, and the backend authentication scheme can be something other than ed25519.
The following example maps user "alpha" to "beta" and MaxScale then uses standard authentication to log into backends as "beta". User "alpha" authenticates to MaxScale using whatever method is configured in the server. User "gamma" does not map to another user; only the password is given.
MaxScale configuration:
/home/joe/mapping.json:
The mapping-based solution requires the DBA to maintain a file with user passwords, which has security and upkeep implications. To avoid this, MaxScale can instead use the caching_sha2_password-plugin to authenticate the client. This authentication scheme transmits the client password to MaxScale in full, allowing MaxScale to log into backends using ed25519. MaxScale effectively lies to the client about its authentication plugin and then uses the correct plugin with the backends. Enable sha256-authentication by setting authentication option ed_mode to "sha256".
sha256-authentication is best used with encrypted connections. The example below shows a listener configured for sha256-mode and SSL.
If SSL is not in use, caching_sha2_password transmits the password using RSA-encryption. In this case, MaxScale needs the public and private RSA-keys. MaxScale sends the public key to the client if they don't already have it and the client uses it to encrypt the password. MaxScale then uses the private key to decrypt the password. The example below shows a listener configured for sha256-mode without SSL.
The keyfiles can be generated with OpenSSL using the following commands.
This page is licensed: CC BY-SA / Gnu FDL
The Consistent Critical Read (CCR) filter allows consistent critical reads to be done through MaxScale while still allowing scaleout of non-critical reads.
When the filter detects a statement that would modify the database, it attaches a routing hint to all following statements done by that connection. This routing hint guides the routing module to route the statement to the primary server where data is guaranteed to be in an up-to-date state. Writes from one session do not, by default, propagate to other sessions.
Note: This filter does not work with prepared statements. Only text protocol queries are handled by this filter.
The triggering of the filter can be limited further by adding MaxScale supported comments to queries and/or by using regular expressions. The query comments take precedence: if a comment is found it is obeyed even if a regular expression parameter might give a different result. Even a comment cannot cause a SELECT-query to trigger the filter. Such a comment is considered an error and ignored.
The comments must follow the MaxScale hint syntax
and the HintFilter needs to be in the filter chain before the CCR-filter. If a
query has a MaxScale supported comment line which defines the parameter ccr,
that comment is caught by the CCR-filter. Parameter values match and ignore
are supported, causing the filter to trigger (match) or not trigger (ignore)
on receiving the write query. For example, the query
would normally cause the filter to trigger, but does not because of the
comment. The match-comment typically has no effect, since write queries by
default trigger the filter anyway. It can be used to override an ignore-type
regular expression that would otherwise prevent triggering.
The CCR filter has no mandatory parameters.
Type: duration
Mandatory: No
Dynamic: Yes
Default: 60s
The time window during which queries are routed to the primary. The duration can be specified as documented here but the value will always be rounded to the nearest second. If no explicit unit has been specified, the value is interpreted as seconds in MaxScale 2.4. In subsequent versions a value without a unit may be rejected. The default value for this parameter is 60 seconds.
When a data modifying SQL statement is processed, a timer is set to the value of time. Once the timer has elapsed, all statements are routed normally. If a new data modifying SQL statement is processed within the time window, the timer is reset to the value of time.
Enabling this parameter in combination with the count parameter causes both the time window and number of queries to be inspected. If either of the two conditions are met, the query is re-routed to the primary.
Type: count
Mandatory: No
Dynamic: Yes
Default: 0
The number of SQL statements to route to primary after detecting a data modifying SQL statement. This feature is disabled by default.
After processing a data modifying SQL statement, a counter is set to the value of count and all statements are routed to the primary. Each executed statement after a data modifying SQL statement causes the counter to be decremented. Once the counter reaches zero, the statements are routed normally. If a new data modifying SQL statement is processed, the counter is reset to the value of count.
Type: regex
Mandatory: No
Dynamic: No
Default: ""
These regular expression settings control which statements trigger statement re-routing. Only non-SELECT statements are inspected. For CCRFilter, the exclude-parameter is instead named ignore, yet works similarly.
Type: enum
Mandatory: No
Dynamic: No
Values: ignorecase, case, extended
Default: ignorecase
Regular expression options for match and ignore.
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
global is a boolean parameter that when enabled causes writes from one
connection to propagate to all other connections. This can be used to work
around cases where one connection writes data and another reads it, expecting
the write done by the other connection to be visible.
This parameter only works with the time parameter. The use of global and count at the same time is not allowed and will be treated as an error.
Here is a minimal filter configuration for the CCRFilter which should solve most problems with critical reads after writes.
With this configuration, whenever a connection does a write, all subsequent reads done by that connection will be forced to the primary for 5 seconds.
This prevents read scaling until the modifications have been replicated to the replicas. For best performance, the value of time should be slightly greater than the actual replication lag between the primary and its replicas. If the number of critical read statements is known, the count parameter could be used to control the number of reads that are sent to the primary.
This page is licensed: CC BY-SA / Gnu FDL
The ldi (LOAD DATA INFILE) filter was introduced in MaxScale 23.08.0 and it
extends the MariaDB LOAD DATA INFILE syntax to support loading data from any
object storage that supports the S3 API. This includes cloud offerings like AWS
S3 and Google Cloud Storage as well as locally run services like Minio.
If the filename starts with either S3:// or gs://, the path is interpreted
as an S3 object file. The prefix is case-insensitive. For example, the following
command would load the file my-data.csv from the bucket my-bucket into the
table t1.
Here is a minimal configuration for the filter that can be used to load data from AWS S3:
The first step is to move the file to be loaded into the same region that MaxScale and the MariaDB servers are in. One factor in the speed of the upload is the network latency and minimizing it by moving the source and the destination closer improves the data loading speed.
The next step is to connect to MaxScale and prepare the session for an upload by providing the service account access and secret keys.
Once the credentials are configured, the data loading can be started:
This feature has been removed in MaxScale 24.02.
If you are using self-hosted object storage programs like Minio, a common problem is that they do not necessarily support the newer virtual-hosted-style requests that is used by AWS. This usually manifests as an error either about a missing file or a missing bucket.
If the host parameter is set to a hostname, it's assumed that the object
storage supports the newer virtual-hosted-style requests. If this is not the case,
the filter must be configured with protocol_version=1.
Conversely, if the host parameter is set to a plain IP address, it is assumed
that it does not support the newer virtual-hosted-style requests. If the host
does support it, the filter must be configured with protocol_version=2.
Type: string
Mandatory: No
Dynamic: Yes
The S3 access key used to perform all requests to it.
This must be either configured in the MaxScale configuration file or set with SET @maxscale.ldi.s3_key='<key>' before starting the data load.
Type: string
Mandatory: No
Dynamic: Yes
The S3 secret key used to perform all requests to it.
This must be either configured in the MaxScale configuration file or set with SET @maxscale.ldi.s3_secret='<secret>' before starting the data load.
Type: string
Mandatory: No
Dynamic: Yes
Default: us-east-1
The S3 region where the data is located.
The value can be overridden with SET @maxscale.ldi.s3_region='<region>' before
starting the data load.
Type: string
Mandatory: No
Dynamic: Yes
Default: s3.amazonaws.com
The location of the S3 object storage. By default the original AWS S3 host is
used. The corresponding value for Google Cloud Storage is storage.googleapis.com.
The value can be overridden with SET @maxscale.ldi.s3_host='<host>' before
starting the data load.
Type: integer
Mandatory: No
Dynamic: Yes
Default: 0
The port on which the S3 object storage is listening. If unset or set to the value of 0, the default S3 port is used.
The value can be overridden with SET @maxscale.ldi.s3_port=<port> before
starting the data load. Note that unlike the other values, the value for this
variable must be an SQL integer and not an SQL string.
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
If set to true, TLS certificate verification for the object storage is skipped.
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
If set to true, communication with the object storage is done unencrypted using HTTP instead of HTTPS.
Type: integer
Mandatory: No
Dynamic: Yes
Default: 0
Values: 0, 1, 2
Which protocol version to use. By default the protocol version is derived from
the value of host but this automatic protocol version deduction will not
always produce the correct result. For the legacy path-style requests used by
older S3 storage buckets, the value must be set to 1. All new buckets use the
protocol version 2.
For object storage programs like Minio, the value must be set to 1 as the bucket name cannot be resolved via the subdomain like it is done for object stores in the cloud.
This parameter has been removed in MaxScale 24.02.
This parameter has been removed in MaxScale 24.02.
This page is licensed: CC BY-SA / Gnu FDL
This filter adds routing hints to a service. The filter has no parameters.
Note: If a query has more than one comment only the first comment is processed. Always place any MaxScale related comments first before any other comments that might appear in the query.
The client connection will need to have comments enabled. For example, the mariadb and mysql command line clients have comments disabled by default and
they need to be enabled by passing the --comments or -c option to them. Most,
if not all, connectors keep all comments intact in executed queries.
For comment types, use either -- (notice the whitespace after the double
hyphen) or # after the semicolon or /* ... */ before the semicolon.
Inline comment blocks, i.e. /* .. */, do not require a whitespace character
after the start tag or before the end tag but adding the whitespace is advised.
All hints must start with the maxscale tag.
The hints have two types, ones that define a server type and others that contain name-value pairs.
These hints will instruct the router to route a query to a certain type of a server.
Route to primary
A master value in a routing hint will route the query to a primary server. This
can be used to direct read queries to a primary server for an up-to-date result
with no replication lag.
Route to replica
A slave value will route the query to a replica server. Please note that the
hints will override any decisions taken by the routers which means that it is
possible to force writes to a replica server.
Route to named server
A server value will route the query to a named server. The value of <server name> needs to be the same as the server section name in
maxscale.cnf. If the server is not used by the service, the hint is ignored.
Route to last used server
A last value will route the query to the server that processed the last
query. This hint can be used to force certain queries to be grouped to the same
server.
Name-value hints
These control the behavior and affect the routing decisions made by the
router. Currently the only accepted parameter is the readwritesplit parameter max_slave_replication_lag. This will route the query to a server with a lower
replication lag than this parameter's value.
Hints can be either single-use hints, which makes them affect only one query, or named hints, which can be pushed on and off a stack of active hints.
Defining named hints:
Pushing a hint onto the stack:
Popping the topmost hint off the stack:
You can define and activate a hint in a single command using the following:
You can also push anonymous hints onto the stack which are only used as long as they are on the stack:
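As a sketch of the comment syntax described above (hint names and contents are illustrative; verify the exact forms against the hintfilter documentation for your MaxScale version):

-- maxscale route to master
-- maxscale route to slave
-- maxscale route to server server2
-- maxscale route to last
-- maxscale max_slave_replication_lag=10

-- maxscale named_hint prepare route to master
-- maxscale named_hint begin
-- maxscale end
-- maxscale named_hint begin route to master
-- maxscale begin route to master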
The hintfilter supports routing hints in prepared statements for both the PREPARE and EXECUTE SQL commands as well as the binary protocol prepared
statements.
With binary protocol prepared statements, a routing hint in the prepared statement is applied to the execution of the statement but not the preparation of it. The preparation of the statement is routed normally and is sent to all servers.
For example, when the following prepared statement is prepared with the MariaDB
Connector-C function mariadb_stmt_prepare and then executed with mariadb_stmt_execute, the result is always returned from the primary:
Support for binary protocol prepared statements was added in MaxScale 6.0 ().
The protocol commands that the routing hints are applied to are:
COM_STMT_EXECUTE
COM_STMT_BULK_EXECUTE
COM_STMT_SEND_LONG_DATA
COM_STMT_FETCH
Support for direct execution of prepared statements was added in MaxScale
6.2.0. For example, the MariaDB Connector-C uses direct execution when mariadb_stmt_execute_direct is used.
Text protocol prepared statements (i.e. the PREPARE and EXECUTE SQL
commands) behave differently. If a PREPARE command has a routing hint, it will
be routed according to the routing hint. Any subsequent EXECUTE command will
not be affected by the routing hint in the PREPARE statement. This means they
must have their own routing hints.
The following example is the recommended method of executing text protocol prepared statements with hints:
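A hedged sketch of this pattern (the table and values are illustrative):

PREPARE my_ps FROM 'SELECT user FROM accounts WHERE id = ?';
SET @id = 1;
EXECUTE my_ps USING @id; -- maxscale route to master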
The PREPARE is routed normally and will be sent to all servers. The EXECUTE will be routed to the primary as a result of it having the route to master hint.
SELECT queries to primary

In this example, MariaDB MaxScale is configured with the readwritesplit router and the hint filter.
Behind MariaDB MaxScale is a primary server and a replica server. If there is replication lag between the primary and the replica, read queries sent to the replica might return old data. To guarantee up-to-date data, we can add a routing hint to the query.
The first INSERT query will be routed to the primary. The following SELECT query would normally be routed to the replica but with the added routing hint it will be routed to the primary. This way we can do an INSERT and a SELECT right after it and still get up-to-date data.
This page is licensed: CC BY-SA / Gnu FDL
The Regex filter is a filter module for MariaDB MaxScale that is able to rewrite query content using regular expression matches and text substitution. The regular expressions use the .
PCRE2 library uses a different syntax than POSIX to refer to capture
groups in the replacement string. The main difference is the usage of the dollar
character instead of the backslash character for references, e.g. $1 instead of \1. For more details about the replacement string differences, please read the
chapter in the PCRE2 manual.
The following demonstrates a minimal configuration.
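A minimal sketch of such a configuration (the pattern and replacement are illustrative):

[MyRegexFilter]
type=filter
module=regexfilter
match=some_string
replace=replacement_string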
The Regex filter has two mandatory parameters: match and replace.
match
Type: regex
Mandatory: Yes
Dynamic: Yes
Defines the text in the SQL statements that is replaced.
options
Type: enum
Mandatory: No
Dynamic: Yes
Values: ignorecase, case, extended
The options-parameter affects how the patterns are compiled as .
replace
Type: string
Mandatory: Yes
Dynamic: Yes
This is the text that should replace the part of the SQL-query matching the pattern defined in match.
source
Type: string
Mandatory: No
Dynamic: Yes
Default: None
The optional source parameter defines an address that is used to match against the address from which the client connection to MariaDB MaxScale originates. Only sessions that originate from this address will have the match and replacement applied to them.
user
Type: string
Mandatory: No
Dynamic: Yes
Default: None
The optional user parameter defines a username that is used to match against the user from which the client connection to MariaDB MaxScale originates. Only sessions that are connected using this username will have the match and replacement applied to them.
log_file
Type: string
Mandatory: No
Dynamic: Yes
Default: None
The optional log_file parameter defines a log file in which the filter writes all queries that are not matched and matching queries with their replacement queries. All sessions will log to this file so this should only be used for diagnostic purposes.
log_trace
Type: string
Mandatory: No
Dynamic: Yes
Default: None
The optional log_trace parameter toggles the logging of non-matching and matching queries with their replacements into the log file on the info level. This is the preferred method of diagnosing the matching of queries since the log level can be changed at runtime. For more details about logging levels and session specific logging, please read the .
MySQL 5.1 used the parameter TYPE = to set the storage engine that should be used for a table. In later versions this changed to be ENGINE =. Imagine you have an application that you cannot change for some reason, but you wish to migrate to a newer version of MySQL. The regexfilter can be used to transform the create table statements into the form that could be used by MySQL 5.5
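A hedged sketch of such a filter definition (the regular expression is illustrative and may need tuning for your statements):

[ChangeTypeToEngine]
type=filter
module=regexfilter
match=TYPE\s*=
replace=ENGINE =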
This page is licensed: CC BY-SA / Gnu FDL
Note: This module is experimental and must be built from source. The module is deprecated in MaxScale 23.08 and might be removed in a future release.
The Transaction Performance Monitoring (TPM) filter is a filter module for MaxScale that monitors every SQL statement that passes through the filter. The filter groups a series of SQL statements into a transaction by detecting 'commit' or 'rollback' statements. It logs all committed transactions with necessary information, such as timestamp, client, SQL statements, latency, etc., which can be used later for transaction performance analysis.
The configuration block for the TPM filter requires the minimal filter options in its section within the maxscale.cnf file, stored in /etc/maxscale.cnf.
The TPM filter does not support any filter options currently.
The TPM filter accepts a number of optional parameters.
The name of the output file created for performance logging. The default filename is tpm.log.
The optional source parameter defines an address that is used
to match against the address from which the client connection
to MaxScale originates. Only sessions that originate from this
address will be logged.
The optional user parameter defines a user name that is used
to match against the user from which the client connection to
MaxScale originates. Only sessions that are connected using
this username are logged.
The optional delimiter parameter defines a delimiter that is used to
distinguish columns in the log. The default delimiter is :::.
The optional query_delimiter defines a delimiter that is used to
distinguish different SQL statements in a transaction.
The default query delimiter is @@@.
named_pipe is the path to a named pipe, which TPM filter uses to
communicate with 3rd-party applications (e.g., ).
Logging is enabled when the router receives the character '1' and logging is
disabled when the router receives the character '0' from this named pipe.
The default named pipe is /tmp/tpmfilter and logging is disabled by default.
For example, the following command enables the logging:
Similarly, the following command disables the logging:
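Assuming the default named pipe location, the commands might look like this (adjust the path if named_pipe has been changed):

$ echo '1' > /tmp/tpmfilter   # enable logging
$ echo '0' > /tmp/tpmfilter   # disable logging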
For each transaction, the TPM filter prints its log in the following format:
<timestamp> | <server_name> | <user_name> | <latency of the transaction> | <latencies of individual statements in the transaction> (delimited by 'query_delimiter') | <actual SQL statements>
You want to log every transaction with its SQL statements and latency for future transaction performance analysis.
Add a filter with the following definition:
After the filter reads the character '1' from its named pipe, the following is an example log that is generated from the above TPM filter with the above configuration:
Note that 3 and 6 are latencies of each transaction in milliseconds, while 0.165 and 0.123 are latencies of the first statement of each transaction in milliseconds.
This page is licensed: CC BY-SA / Gnu FDL
There are five main components that you need to make sure are completed before you go into production:
Encrypting Plaintext Passwords
Securing the GUI Interface
Managing Users & Passwords
Enabling Audit Logging
Encrypting Database Connections
Ensuring the security of your MaxScale setup involves stringent control over the key file permissions. Utilizing maxkeys is an effective approach to generating a secure key file.
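A minimal sketch of generating the key file with maxkeys:

$ sudo maxkeys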
This generates a keyfile in /var/lib/maxscale
See
for more information about maxkeys.
Once generated, this key file can be relocated to a secure location. This key file serves a dual purpose: it enables the encryption of passwords and facilitates MaxScale in decrypting those encrypted passwords.
To maintain confidentiality, it is crucial to adjust the ownership and
permissions of the key file appropriately using chown. This step ensures
that the key file remains secure and inaccessible to unauthorized users.
Following the secure setup of the key file, you can proceed to encrypt the plaintext passwords of users already created in your databases.
These encrypted passwords can then replace the plaintext passwords in your MaxScale configuration (CNF) files. This enhances the overall security of your database system by reducing the risk that passwords are accidentally shared.
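For example, a hedged sketch of encrypting one password with maxpasswd (the plaintext value is illustrative; the command prints the encrypted form, which then replaces the plaintext value in the password= settings of your configuration):

$ maxpasswd plainpassword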
To enhance the security of your MaxScale environment, it’s crucial to
configure the GUI host address properly. The default setting, 0.0.0.0,
allows unrestricted access from any network, which poses a significant
security risk. Instead, you should set the admin_host to a more secure
address. Additionally, you can change the default port (8989) to another
port for added security. For example, you can restrict access to the
localhost by setting:
Alternatively, you can specify an internal network IP address to limit access within your internal network, such as:
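As sketches of the two approaches (the internal address is illustrative):

[maxscale]
# Restrict the GUI and REST API to the localhost
admin_host=127.0.0.1

# Or bind only to an internal network address
# admin_host=192.168.1.10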
If you need to allow external access, ensure that the network is adequately secured and that only authorized users can access the MaxScale interface. Consult with your network administrator to determine the most appropriate and secure configuration.
To further secure your MaxScale setup, enable TLS encryption for data in transit. Follow these steps to configure SSL:
1. Set Up SSL Keys and Certificates:
Generate SSL keys and certificates. See
Add them to the MaxScale configuration file.
2. Update the MaxScale Configuration:
Enable secure connections by setting admin_secure_gui to true.
Specify the paths to the SSL certificate and key files in your CNF file:
3. Verify Encryption:
Use the Maxctrl command to verify that TLS encryption is functioning correctly:
4. Update Default Credentials:
It’s essential to change the default admin passwords. Create a new user with a strong password and remove the default admin user for enhanced security.
MaxScale allows you to manage user access to its GUI, offering different
permission levels to suit various operational needs. Currently, MaxScale
supports two primary roles: admin and basic. This functionality is
particularly useful for organizations with hierarchical structures or
distinct departments, enabling you to grant status view access without
allowing execution or manipulation capabilities.
To create or delete users in the MaxScale GUI, you can use the maxctrl
command. Here’s an example of creating a user with administrative
privileges:
To remove an existing user, such as the default admin user, you can use the following command:
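A hedged sketch of both operations (the username and password are illustrative):

maxctrl create user app_admin s3cret --type=admin
maxctrl destroy user admin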
The MaxScale GUI also provides functionality to manage user access and update the admin password. Through the GUI, you can:
Add Users: Create users with basic or admin access.
Modify User Permissions: Change roles as needed to adapt to evolving security requirements.
Update Admin Password: Enhance security by regularly updating the admin password.
By leveraging these features, you can ensure that your MaxScale environment remains secure and that user access is appropriately managed according to your organization’s needs.
Turn on admin auditing to log all login, connection, and configuration changes. Choose an audit file location and set up log rotation.
Admin auditing in MaxScale provides comprehensive tracking of all administrative activities, including logins, connections, and modifications. These activities are recorded in an audit file for enhanced security and traceability.
To enable admin auditing, add the following configuration to your MaxScale configuration file:
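A sketch, assuming the admin_audit and admin_audit_file parameters available in recent MaxScale releases (the file path is illustrative):

[maxscale]
admin_audit=true
admin_audit_file=/var/log/maxscale/admin_audit.csv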
This configuration activates auditing and specifies the location of the audit file. Ensure that the specified directory exists before restarting MaxScale.
1. Enable Auditing:
Add the configuration lines to your MaxScale configuration file.
Verify the directory specified in admin_audit_file exists.
2. Audit File Management:
Implement log rotation to manage the size and number of audit files. This can be achieved using standard Linux log rotation tools. See
For manual log rotation, you can use the following MaxCtrl command:
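For example:

maxctrl rotate logs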
Configuring SSL Encryption for MaxScale with an Encrypted MariaDB Server
If you have already implemented encryption on your MariaDB server, it’s crucial to extend this encryption configuration to MaxScale to ensure secure communication. Once encryption is enabled on your MariaDB server, follow these steps to configure MaxScale to utilize SSL.
Steps to Configure SSL in MaxScale:
Add ssl=true to each server section in your MaxScale configuration file.
Add ssl_verify_peer_certificate=true to ensure that MaxScale verifies
the server’s SSL certificates, providing an additional layer of security.
Your MaxScale configuration file should look something like this:
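A hedged sketch of one server section (the address and port are illustrative):

[dbserv1]
type=server
address=192.168.0.10
port=3306
ssl=true
ssl_verify_peer_certificate=true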
These settings instruct MaxScale to use SSL for connections to the MariaDB server and to verify peer certificates, enhancing the security of data in transit.
This page is licensed: CC BY-SA / Gnu FDL
This document is designed as a quick introduction to setting up MariaDB MaxScale.
The installation and configuration of the MariaDB Server is not covered in this document. See the following MariaDB documentation articles for more information on setting up a primary-replica-cluster or a Galera-cluster: and .
This tutorial assumes that one of the standard MaxScale binary distributions is used and that MaxScale is installed using default options.
Building from source code in GitHub is covered in Building from Source.
The precise installation process varies from one distribution to another. Details on package installation can be found in the .
MaxScale checks that incoming clients are valid. To do this, MaxScale needs to retrieve user authentication information from the backend databases. Create a special user account for this purpose by executing the following SQL commands on the primary server of your database cluster. The following tutorials will use these credentials.
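A hedged sketch of such an account (the password is illustrative, and the exact set of grants depends on your MariaDB and MaxScale versions; consult the authentication documentation for the definitive list):

CREATE USER 'maxscale'@'%' IDENTIFIED BY 'maxscale_pw';
GRANT SELECT ON mysql.user TO 'maxscale'@'%';
GRANT SELECT ON mysql.db TO 'maxscale'@'%';
GRANT SELECT ON mysql.tables_priv TO 'maxscale'@'%';
GRANT SELECT ON mysql.roles_mapping TO 'maxscale'@'%';
GRANT SHOW DATABASES ON *.* TO 'maxscale'@'%';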
MariaDB versions 10.2.2 to 10.2.10 also require GRANT SELECT ON mysql.* TO 'maxscale'@'%';
Because MariaDB MaxScale sits between the clients and the backend databases, the backend databases will see all clients as if they were connecting from MaxScale's address. This usually means that two sets of grants for each user are required.
For example, assume that the user 'jdoe'@'client-host' exists and MaxScale is located at maxscale-host. If 'jdoe'@'client-host' needs to be able to connect through MaxScale, another user, 'jdoe'@'maxscale-host', must be created. The second user must have the same password and similar grants as 'jdoe'@'client-host'.
The quickest way to do this is to first create the new user:
Then do a SHOW GRANTS query:
Then copy the same grants to the 'jdoe'@'maxscale-host' user.
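A hedged sketch of these steps (the grants shown are placeholders; copy whatever SHOW GRANTS actually reports for the original user):

CREATE USER 'jdoe'@'maxscale-host' IDENTIFIED BY 'my_secret_password';
SHOW GRANTS FOR 'jdoe'@'client-host';
-- Copy each reported grant to the new user, for example:
GRANT SELECT, INSERT, UPDATE, DELETE ON *.* TO 'jdoe'@'maxscale-host';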
An alternative to generating two separate accounts is to use one account with a wildcard host ('jdoe'@'%') which covers both hosts. This is more convenient but less secure than having specific user accounts as it allows access from all hosts.
MaxScale reads its configuration from /etc/maxscale.cnf. A template configuration is provided with the MaxScale installation.
A global maxscale section is included in every MaxScale configuration file. This section sets the values of various global parameters, such as the number of threads MaxScale uses to handle client requests. To set the thread count to the number of available CPU cores, set the following.
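A minimal sketch:

[maxscale]
threads=auto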
Read the mini-tutorial for server configuration instructions.
The type of monitor used depends on the type of cluster used. For a primary-replica cluster read . For a Galera cluster read .
This part is covered in two different tutorials. For a fully automated read-write-splitting setup, read the . For a simple connection based setup, read the .
After configuration is complete, MariaDB MaxScale is ready to start. For systems that use systemd, use the systemctl command.
For older SysV systems, use the service command.
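The commands, as a sketch:

# systemd
sudo systemctl start maxscale

# SysV
sudo service maxscale start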
If MaxScale fails to start, check the error log in /var/log/maxscale/maxscale.log to see if any errors are detected in the configuration file.
The maxctrl-command can be used to confirm that MaxScale is running and the services, listeners and servers have been correctly configured. The following shows expected output when using a read-write-splitting configuration.
MariaDB MaxScale is now ready to start accepting client connections and route queries to the backend cluster.
More options can be found in the , and .
For more information about MaxCtrl and how to secure it, see the .
This page is licensed: CC BY-SA / Gnu FDL
This document describes general MySQL protocol authentication in MaxScale. For REST-api authentication, see the and the .
Similar to the MariaDB Server, MaxScale uses authentication plugins to implement different authentication schemes for incoming clients. The same plugins also handle authenticating the clients to backend servers. The authentication plugins available in MaxScale are , and .
Most of the authentication processing is performed on the protocol level, before handing it over to one of the plugins. This shared part is described in this document. For information on an individual plugin, see its documentation.
Every MaxScale service with a MariaDB protocol listener requires knowledge of the user accounts defined on the backend databases. The service maintains this information in an internal component called the
The luafilter is a filter that calls a set of functions in a Lua script.
Read the for information on how to write Lua scripts.
Note: This module is experimental and must be built from source. The module is deprecated in MaxScale 23.08 and might be removed in a future release.
This tutorial is a short introduction to the , how to set it up and how it interacts with the binlogrouter.
The first part configures the services and sets them up for the binary log to Avro file conversion. The second part of this tutorial uses the client listener interface for the avrorouter and shows how to communicate with the service over the network.
The filter mechanism in MariaDB MaxScale is a means by which processing can be inserted into the flow of requests and responses between the client connection to MariaDB MaxScale and the MariaDB MaxScale connection to the backend database servers. The path from the client side of MariaDB MaxScale out to the actual database servers can be considered a pipeline, filters can then be placed in that pipeline to monitor, modify, copy or block the content that flows through that pipeline.
[Read-Write-Listener]
type=listener
address=::
service=Read-Write-Service
authenticator=ed25519auth

authenticator_options=ed_mode=sha256

authenticator_options=ed_mode=sha256,
ed_rsa_privkey_path=/tmp/sha_private_key.pem,
ed_rsa_pubkey_path=/tmp/sha_public_key.pem

[Read-Write-Listener]
type=listener
address=::
service=Read-Write-Service
authenticator=ed25519auth,mariadbauth
user_mapping_file=/home/joe/mapping.json

{
"user_map": [
{
"original_user": "alpha",
"mapped_user": "beta"
},
{
"original_user": "gamma",
"mapped_user": "gamma"
}
],
"server_credentials": [
{
"mapped_user": "beta",
"password": "hunter2",
"plugin": "mysql_native_password"
},
{
"mapped_user": "gamma",
"password": "letmein",
"plugin": "ed25519"
}
]
}

[Read-Write-Listener]
type=listener
address=::
service=Read-Write-Service
authenticator=ed25519auth
authenticator_options=ed_mode=sha256
ssl=true
ssl_key=/tmp/my-key.pem
ssl_cert=/tmp/my-cert.pem
ssl_ca=/tmp/myCA.pem

[Read-Write-Listener]
type=listener
address=::
service=Read-Write-Service
authenticator=ed25519auth
authenticator_options=ed_mode=sha256,
ed_rsa_privkey_path=/tmp/sha_private_key.pem,
ed_rsa_pubkey_path=/tmp/sha_public_key.pem

openssl genrsa -out sha_private_key.pem 2048
openssl rsa -in sha_private_key.pem -pubout -out sha_public_key.pem

INSERT INTO departments VALUES ('d1234', 'NewDepartment'); -- maxscale ccr=ignore

match=.*INSERT.*
ignore=.*UPDATE.*
options=case,extended

[CCRFilter]
type=filter
module=ccrfilter
time=5

LOAD DATA INFILE 'S3://my-bucket/my-data.csv' INTO TABLE t1
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

[LDI-Filter]
type=filter
module=ldi
host=s3.amazonaws.com
region=us-east-1

SET @maxscale.ldi.s3_key='<my-access-key>', @maxscale.ldi.s3_secret='<my-secret-key>';

LOAD DATA INFILE 'S3://my-bucket/my-data.csv' INTO TABLE t1;

[BinlogFilter]
type=filter
module=binlogfilter
match=/customers[.]/
exclude=/[.]orders/
[BinlogServer]
type=service
router=binlogrouter
server_id=33
filters=BinlogFilter
[BinlogListener]
type=listener
service=BinlogServer
port=4000

Default: ignorecase
The luafilter has two parameters. They control which scripts will be called by
the filter. Both parameters are optional but at least one should be defined. If
both global_script and session_script are defined, the entry points in both
scripts will be called.
The global Lua script. The parameter value is a path to a readable Lua script which will be executed.
This script will always be called with the same global Lua state and it can be used to build a global view of the whole service.
The session level Lua script. The parameter value is a path to a readable Lua script which will be executed once for each session.
Each session will have its own Lua state meaning that each session can have a unique Lua environment. Use this script to do session specific tasks.
The entry points for the Lua script expect the following signatures:
nil createInstance(name) - global script only, called when the script is first loaded
When the global script is loaded, it first executes on a global level before the luafilter calls the createInstance function in the Lua script with the filter's name as its argument.
nil newSession(string, string) - new session is created
After the session script is loaded, the newSession function in the Lua scripts is called. The first parameter is the username of the client and the second parameter is the client's network address.
nil closeSession() - session is closed
The closeSession function in the Lua scripts will be called.
(nil | bool | string) routeQuery() - query is being routed
The Luafilter calls the routeQuery functions of both the session and the
global script. The query is passed as a string parameter to the
routeQuery Lua function and the return values of the session specific
function, if any were returned, are interpreted. If the first value is
bool, it is interpreted as a decision whether to route the query or to
send an error packet to the client. If it is a string, the current query
is replaced with the return value and the query will be routed. If nil is
returned, the query is routed normally.
nil clientReply() - reply to a query is being routed
This function is called with the name of the server that returned the response.
string diagnostic() - global script only, print diagnostic information
If the Lua function returns a string that is valid JSON, it will be decoded as JSON and displayed as such in the REST API. If the object does not decode into JSON, it will be stored as a JSON string.
These functions, if found in the script, will be called whenever a call to the matching entry point is made.
Script Template
Here is a script template that can be used to try out the luafilter. Copy it
into a file and add global_script=<path to script> into the filter
configuration. Make sure the file is readable by the maxscale user.
The luafilter exposes the following functions that can be called inside the Lua script entry points. The callback function in which each can be called is documented after the function signature. If a function is called outside of the correct callback function, it raises a Lua error.
string mxs_get_sql() (use: routeQuery)
Returns the SQL of the query being executed. This returns an empty string for any query that is not a text protocol query (COM_QUERY). Support for prepared statements is not yet implemented.
string mxs_get_type_mask() (use: routeQuery)
Returns the type of the current query being executed as a string. The values
are the string versions of the query types defined in query_classifier.h,
separated by vertical bars (|).
This function can only be called from the routeQuery entry point.
string mxs_get_operation() (use: routeQuery)
Returns the current operation type as a string. The values are defined in
query_classifier.h.
This function can only be called from the routeQuery entry point.
string mxs_get_canonical() (use: routeQuery)
Returns the canonical version of a query by replacing all user-defined constant values with question marks.
This function can only be called from the routeQuery entry point.
number mxs_get_session_id() (use: newSession, routeQuery, clientReply, closeSession)
This function returns the session ID of the current session. Inside the createInstance and diagnostic endpoints this function will always return
the value 0.
string mxs_get_db() (use: newSession, routeQuery, clientReply, closeSession)
Returns the current default database used by the connection.
string mxs_get_user() (use: newSession, routeQuery, clientReply, closeSession)
Returns the username of the client connection.
string mxs_get_host() (use: newSession, routeQuery, clientReply, closeSession)
Returns the address of the client connection.
string mxs_get_replier() (use: clientReply)
Returns the target that returned the result to the latest query.
Here is a minimal configuration entry for a luafilter definition.
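A hedged sketch of such a definition (the script path is illustrative):

[MyLuaFilter]
type=filter
module=luafilter
global_script=/path/to/script.lua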
And here is a script that opens a file in /tmp/ and logs output to it.
mxs_get_sql() and mxs_get_canonical() do not work with queries done with
the binary protocol.
The Lua code is not restricted in any way which means excessively slow execution of it can cause the MaxScale process to become slower or to be aborted due to a SystemD watchdog timeout.
This page is licensed: CC BY-SA / Gnu FDL
SmartRouter is the query router of the SmartQuery framework. Based on the type of the query, each query is routed to the server or cluster that can best handle it.
For workloads where both transactional and analytical queries are needed, SmartRouter unites the Transactional (OLTP) and Analytical (OLAP) workloads into a single entry point in MaxScale. This allows a MaxScale client to freely mix transactional and analytical queries using the same connection. This is known as Hybrid Transactional and Analytical Processing, HTAP.
SmartRouter is configured as a service that either routes to other MaxScale routers or plain servers. Although one can configure SmartRouter to use a plain server directly, we refer to the configured "servers" as clusters.
For details about the standard service parameters, refer to the Configuration Guide.
Type: target
Mandatory: Yes
Dynamic: No
One of the clusters must be designated as the master. All writes go to the
primary cluster, which for all practical purposes should be a primary-replica
ReadWriteSplit. This document does not go into details about setting up
primary-replica clusters, but suffice it to say that when setting up the ColumnStore
servers, they should be configured to be replicas of a MariaDB server running the
InnoDB engine.
The ReadWriteSplit documentation has more on primary-replica setup.
Example
Suppose we have a Transactional service like
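the following (reproduced from the configuration snippets later on this page; the server names are placeholders):

[RWS-Row]
type = service
router = readwritesplit
servers = row_server_1, row_server_2, ...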
for which we have defined the listener
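shown below (also from the configuration snippets later on this page):

[RWS-Row-Listener]
type=listener
service=RWS-Row
socket=/tmp/rws-row.sock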
That is, that service can be accessed using the socket /tmp/rws-row.sock.
The Analytical service could look like this
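(the following blocks are reproduced from the configuration snippets later on this page):

[RWS-Column]
type = service
router = readwritesplit
servers = column_server_1, column_server_2, ...

[RWS-Column-Listener]
type = listener
service = RWS-Column
socket = /tmp/rws-col.sock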
Then we can define the SmartQuery service as follows
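(again reproduced from the configuration snippets later on this page; the listener port is left as a placeholder):

[SmartQuery]
type = service
router = smartrouter
targets = RWS-Row, RWS-Column
master = RWS-Row

[SmartQuery-Listener]
type = listener
service = SmartQuery
port = <port>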
Note that the SmartQuery listener listens on a port, while the Row and Column service listeners listen on Unix domain sockets. The reason is that there is a significant performance benefit when SmartRouter accesses the services over a Unix domain socket compared to accessing them over a TCP/IP socket.
A complete configuration example can be found at the end of this document.
SmartRouter keeps track of the performance, or the execution time, of queries to the clusters. Measurements are stored with the canonical of a query as the key. The canonical of a query is the sql with all user-defined constants replaced with question marks. When SmartRouter sees a read-query whose canonical has not been seen before, it will send the query to all clusters. The first response from a cluster will designate that cluster as the best one for that canonical. Also, when the first response is received, the other queries are cancelled. The response is sent to the client once all clusters have responded to the query or the cancel.
There is obviously overhead when a new canonical is seen. This means that queries after a MaxScale start will be slightly slower than normal. The execution time of a query depends on the database engine, and on the contents of the tables being queried. As a result, MaxScale will periodically re-measure queries.
The performance behavior of queries under dynamic conditions, and their effect on different storage engines is being studied at MariaDB. As we learn more, we will be able to better categorize queries and move that knowledge into SmartRouter.
LOAD DATA LOCAL INFILE is not supported.
The performance data is not persisted. The measurements will be performed anew after each startup.
This page is licensed: CC BY-SA / Gnu FDL
Filters can be divided into a number of categories
Logging filters do not in any way alter the statement or results of the statements that are passed through MariaDB MaxScale. They merely log some information about some or all of the statements and/or result sets.
Two examples of logging filters are included with MariaDB MaxScale: a filter that will log all statements and another that will log only a number of statements, based on the duration of the execution of the query.
Statement rewriting filters modify the statements that are passed through the filter. This allows a filter to be used as a mechanism to alter the statements that are seen by the database, an example of the use of this might be to allow an application to remain unchanged when the underlying database changes or to compensate for the migration from one database schema to another.
A result set manipulation filter is very similar to a statement rewriting filter but applies to the result set returned rather than the statement executed. An example of this may be obfuscating the values in a column.
Routing hint filters are filters that embed hints in the request that can be used by the router onto which the query is passed. These hints include suggested destinations as well as metrics that may be used by the routing process.
A firewall filter is a mechanism that allows queries to be blocked within MariaDB MaxScale before they are sent on to the database server for execution. They allow constructs or individual queries to be intercepted and give a level of access control that is more flexible than the traditional database grant mechanism.
A pipeline filter is one that has an effect on how the requests are routed within the internal MariaDB MaxScale components. The most obvious version of this is the ability to add a "tee" connector in the pipeline, duplicating the request and sending it to a second MariaDB MaxScale service for processing.
Filters are defined in the configuration file, typically maxscale.cnf, using a section for each filter instance. The content of the filter sections in the configuration file varies from filter to filter; however, two entries are always present for every filter: the type and the module.
The type is used by the configuration manager within MariaDB MaxScale to determine what this section is defining and the module is the name of the plugin that implements the filter.
When a filter is used within a service in MariaDB MaxScale the entry filters= is added to the service definition in the ini file section for the service. Multiple filters can be defined using a syntax akin to the Linux shell pipe syntax.
The names used in the filters= parameter are the names of the filter definition sections in the ini file. The same filter definition can be used in multiple services and the same filter module can have multiple instances, each with its own section in the ini file.
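As an illustration, the following service definition from the configuration snippets later on this page chains two filters, named hints and top10, using the pipe syntax:

[Split-Service]
type=service
router=readwritesplit
servers=dbserver1,dbserver2,dbserver3,dbserver4
user=massi
password=6628C50E07CCE1F0392EDEEB9D1203F3
filters=hints | top10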
The filters that are bundled with the MariaDB MaxScale are documented separately, in this section a short overview of how these might be used for some simple tasks will be discussed. These are just examples of how these filters might be used, other filters may also be easily added that will enhance the MariaDB MaxScale functionality still further.
The top filter can be used to measure the execution time of every statement within a connection and log the details of the longest running statements.
The first thing to do is to define a filter entry in the ini file for the top filter. In this case we will call it "top30". The type is filter and the module that implements the filter is called topfilter.
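The definition, which also appears in the configuration snippets later on this page, looks like this:

[top30]
type=filter
module=topfilter
count=30
filebase=/var/log/DBSessions/top30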
In the definition above we have defined two filter-specific parameters: the count of statements to be logged and a filebase that is used to define where to log the information. This filename is a stem to which a session id is added for each new connection that uses the filter.
The filter keeps track of every statement that is executed, monitors the time it takes for a response to come back and uses this as the measure of execution time for the statement. If the time is longer than that of the other statements that have been recorded, the statement is added to the ordered list within the filter. Once 30 statements have been recorded, the statements with the shortest execution times are discarded from the list. The result is that, at any time, the filter holds a list of the 30 longest running statements in each session.
When the session ends, a report will be written for the session into the logfile defined. That report will include the top 30 longest running statements, plus summary data for the session:
The time the connection was opened.
The host the connection was from.
The username used in the connection.
The duration of the connection.
The total number of statements executed in the connection.
The average execution time for a statement in this connection.
The scenario we are using in this example is one in which you have an online gaming application that is designed to work with a MariaDB database. The database schema includes a high score table which you would like to have access to in a Cassandra cluster. The application is already using MariaDB MaxScale to connect to a MariaDB Galera cluster, using a service named BubbleGame. The definition of that service is as follows
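(the block below is reproduced from the configuration snippets later on this page):

[BubbleGame]
type=service
router=readwritesplit
servers=dbbubble1,dbbubble2,dbbubble3,dbbubble4,dbbubble5
user=maxscale
password=6628C50E07CCE1F0392EDEEB9D1203F3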
The table you wish to store in Cassandra is called HighScore and will contain the same columns in both the MariaDB table and the Cassandra table. The first step is to install a MariaDB instance with the Cassandra storage engine to act as a bridge server between the relational database and Cassandra. In this bridge server add a table definition for the HighScore table with the engine type set to Cassandra. See Cassandra Storage Engine Overview for details. Add this server into the MariaDB MaxScale configuration and create a service that will connect to this server.
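For example, the server and service definitions could look like the following (these blocks also appear in the configuration snippets later on this page):

[CassandraDB]
type=server
address=192.168.4.28
port=3306

[Cassandra]
type=service
router=readconnroute
router_options=running
servers=CassandraDB
user=maxscale
password=6628C50E07CCE1F0392EDEEB9D1203F3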
Next add a filter definition for the tee filter that will duplicate insert statements that are destined for the HighScore table to this new service.
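The filter definition, as it also appears in the configuration snippets later on this page:

[HighScores]
type=filter
module=teefilter
match=insert.*HighScore.*values
service=Cassandra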
The above filter definition will cause all statements that match the regular expression insert.*HighScore.*values to be duplicated and sent not just to the original destination, via the router, but also to the service named Cassandra.
The final step is to add the filter to the BubbleGame service to enable the use of the filter.
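The updated service definition, also shown in the configuration snippets later on this page:

[BubbleGame]
type=service
router=readwritesplit
servers=dbbubble1,dbbubble2,dbbubble3,dbbubble4,dbbubble5
user=maxscale
password=6628C50E07CCE1F0392EDEEB9D1203F3
filters=HighScores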
This page is licensed: CC BY-SA / Gnu FDL
# The --comments flag is needed for the command line client
mariadb --comments -u my-user -psecret -e "SELECT @@hostname -- maxscale route to server db1"
-- maxscale <hint body>
-- maxscale route to [master | slave | server <server name>]
-- maxscale route to master
-- maxscale route to slave
-- maxscale route to server <server name>
-- maxscale route to last
-- maxscale <param>=<value>
-- maxscale <hint name> prepare <hint content>
-- maxscale <hint name> begin
-- maxscale end
-- maxscale <hint name> begin <hint content>
-- maxscale begin <hint content>
SELECT user FROM accounts WHERE id = ? -- maxscale route to master
PREPARE my_ps FROM 'SELECT user FROM accounts WHERE id = ?';
EXECUTE my_ps USING 123; -- maxscale route to master
[ReadWriteService]
type=service
router=readwritesplit
servers=server1,server2
user=maxuser
password=maxpwd
filters=Hint
[Hint]
type=filter
module=hintfilter
INSERT INTO table1 VALUES ("John","Doe",1);
SELECT * FROM table1; -- maxscale route to master
[MyRegexFilter]
type=filter
module=regexfilter
match=some string
replace=replacement string
[MyService]
type=service
router=readconnroute
servers=server1
user=myuser
password=mypasswd
filters=MyRegexFilter
match=TYPE[ ]*=
options=case
replace=ENGINE =
source=127.0.0.1
user=john
log_file=/tmp/regexfilter.log
log_trace=true
[CreateTableFilter]
type=filter
module=regexfilter
options=ignorecase
match=TYPE\s*=
replace=ENGINE=
[MyService]
type=service
router=readconnroute
servers=server1
user=myuser
password=mypasswd
filters=CreateTableFilter
[MyLogFilter]
type=filter
module=tpmfilter
[MyService]
type=service
router=readconnroute
servers=server1
user=myuser
password=mypasswd
filters=MyLogFilter
filename=/tmp/SqlQueryLog
source=127.0.0.1
user=john
delimiter=:::
query_delimiter=@@@
named_pipe=/tmp/tpmfilter
$ echo '1' > /tmp/tpmfilter
$ echo '0' > /tmp/tpmfilter
[PerformanceLogger]
type=filter
module=tpmfilter
delimiter=:::
query_delimiter=@@@
filename=/var/logs/tpm/perf.log
named_pipe=/tmp/tpmfilter
[Product-Service]
type=service
router=readconnroute
servers=server1
user=myuser
password=mypasswd
filters=PerformanceLogger1484086477::::server1::::root::::3::::0.165@@@@0.108@@@@0.102@@@@0.092@@@@0.121@@@@0.122@@@@0.110@@@@2.081::::UPDATE WAREHOUSE SET W_YTD = W_YTD + 3630.48 WHERE W_ID = 2 @@@@SELECT W_STREET_1, W_STREET_2, W_CITY, W_STATE, W_ZIP, W_NAME FROM WAREHOUSE WHERE W_ID = 2@@@@UPDATE DISTRICT SET D_YTD = D_YTD + 3630.48 WHERE D_W_ID = 2 AND D_ID = 9@@@@SELECT D_STREET_1, D_STREET_2, D_CITY, D_STATE, D_ZIP, D_NAME FROM DISTRICT WHERE D_W_ID = 2 AND D_ID = 9@@@@SELECT C_FIRST, C_MIDDLE, C_LAST, C_STREET_1, C_STREET_2, C_CITY, C_STATE, C_ZIP, C_PHONE, C_CREDIT, C_CREDIT_LIM, C_DISCOUNT, C_BALANCE, C_YTD_PAYMENT, C_PAYMENT_CNT, C_SINCE FROM CUSTOMER WHERE C_W_ID = 2 AND C_D_ID = 9 AND C_ID = 1025@@@@UPDATE CUSTOMER SET C_BALANCE = 1007749.25, C_YTD_PAYMENT = 465215.47, C_PAYMENT_CNT = 203 WHERE C_W_ID = 2 AND C_D_ID = 9 AND C_ID = 1025@@@@INSERT INTO HISTORY (H_C_D_ID, H_C_W_ID, H_C_ID, H_D_ID, H_W_ID, H_DATE, H_AMOUNT, H_DATA) VALUES (9,2,1025,9,2,'2017-01-10 17:14:37',3630.48,'locfljbe xtnfqn')
1484086477::::server1::::root::::6::::0.123@@@@0.087@@@@0.091@@@@0.098@@@@0.078@@@@0.106@@@@0.094@@@@0.074@@@@0.089@@@@0.073@@@@0.098@@@@0.073@@@@0.088@@@@0.072@@@@0.087@@@@0.071@@@@0.085@@@@0.078@@@@0.088@@@@0.098@@@@0.081@@@@0.076@@@@0.082@@@@0.073@@@@0.077@@@@0.070@@@@0.105@@@@0.093@@@@0.088@@@@0.089@@@@0.087@@@@0.087@@@@0.086@@@@1.883::::SELECT C_DISCOUNT, C_LAST, C_CREDIT, W_TAX FROM CUSTOMER, WAREHOUSE WHERE W_ID = 2 AND C_W_ID = 2 AND C_D_ID = 10 AND C_ID = 1267@@@@SELECT D_NEXT_O_ID, D_TAX FROM DISTRICT WHERE D_W_ID = 2 AND D_ID = 10 FOR UPDATE@@@@UPDATE DISTRICT SET D_NEXT_O_ID = D_NEXT_O_ID + 1 WHERE D_W_ID = 2 AND D_ID = 10@@@@INSERT INTO OORDER (O_ID, O_D_ID, O_W_ID, O_C_ID, O_ENTRY_D, O_OL_CNT, O_ALL_LOCAL) VALUES (286871, 10, 2, 1267, '2017-01-10 17:14:37', 7, 1)@@@@INSERT INTO NEW_ORDER (NO_O_ID, NO_D_ID, NO_W_ID) VALUES ( 286871, 10, 2)@@@@SELECT I_PRICE, I_NAME , I_DATA FROM ITEM WHERE I_ID = 24167@@@@SELECT S_QUANTITY, S_DATA, S_DIST_01, S_DIST_02, S_DIST_03, S_DIST_04, S_DIST_05, S_DIST_06, S_DIST_07, S_DIST_08, S_DIST_09, S_DIST_10 FROM STOCK WHERE S_I_ID = 24167 AND S_W_ID = 2 FOR UPDATE@@@@SELECT I_PRICE, I_NAME , I_DATA FROM ITEM WHERE I_ID = 96982@@@@SELECT S_QUANTITY, S_DATA, S_DIST_01, S_DIST_02, S_DIST_03, S_DIST_04, S_DIST_05, S_DIST_06, S_DIST_07, S_DIST_08, S_DIST_09, S_DIST_10 FROM STOCK WHERE S_I_ID = 96982 AND S_W_ID = 2 FOR UPDATE@@@@SELECT I_PRICE, I_NAME , I_DATA FROM ITEM WHERE I_ID = 40679@@@@SELECT S_QUANTITY, S_DATA, S_DIST_01, S_DIST_02, S_DIST_03, S_DIST_04, S_DIST_05, S_DIST_06, S_DIST_07, S_DIST_08, S_DIST_09, S_DIST_10 FROM STOCK WHERE S_I_ID = 40679 AND S_W_ID = 2 FOR UPDATE@@@@SELECT I_PRICE, I_NAME , I_DATA FROM ITEM WHERE I_ID = 31459@@@@SELECT S_QUANTITY, S_DATA, S_DIST_01, S_DIST_02, S_DIST_03, S_DIST_04, S_DIST_05, S_DIST_06, S_DIST_07, S_DIST_08, S_DIST_09, S_DIST_10 FROM STOCK WHERE S_I_ID = 31459 AND S_W_ID = 2 FOR UPDATE@@@@SELECT I_PRICE, I_NAME , I_DATA FROM ITEM WHERE I_ID = 6143@@@@SELECT S_QUANTITY, S_DATA, S_DIST_01, S_DIST_02, S_DIST_03, S_DIST_04, S_DIST_05, S_DIST_06, S_DIST_07, S_DIST_08, S_DIST_09, S_DIST_10 FROM STOCK WHERE S_I_ID = 6143 AND S_W_ID = 2 FOR UPDATE@@@@SELECT I_PRICE, I_NAME , I_DATA FROM ITEM WHERE I_ID = 12001@@@@SELECT S_QUANTITY, S_DATA, S_DIST_01, S_DIST_02, S_DIST_03, S_DIST_04, S_DIST_05, S_DIST_06, S_DIST_07, S_DIST_08, S_DIST_09, S_DIST_10 FROM STOCK WHERE S_I_ID = 12001 AND S_W_ID = 2 FOR UPDATE@@@@SELECT I_PRICE, I_NAME , I_DATA FROM ITEM WHERE I_ID = 40407@@@@SELECT S_QUANTITY, S_DATA, S_DIST_01, S_DIST_02, S_DIST_03, S_DIST_04, S_DIST_05, S_DIST_06, S_DIST_07, S_DIST_08, S_DIST_09, S_DIST_10 FROM STOCK WHERE S_I_ID = 40407 AND S_W_ID = 2 FOR UPDATE@@@@INSERT INTO ORDER_LINE (OL_O_ID, OL_D_ID, OL_W_ID, OL_NUMBER, OL_I_ID, OL_SUPPLY_W_ID, OL_QUANTITY, OL_AMOUNT, OL_DIST_INFO) VALUES (286871,10,2,1,24167,2,7,348.31998,'btdyjesowlpzjwnmxdcsion')@@@@INSERT INTO ORDER_LINE (OL_O_ID, OL_D_ID, OL_W_ID, OL_NUMBER, OL_I_ID, OL_SUPPLY_W_ID, OL_QUANTITY, OL_AMOUNT, OL_DIST_INFO) VALUES (286871,10,2,2,96982,2,1,4.46,'kudpnktydxbrbxibbsyvdiw')@@@@INSERT INTO ORDER_LINE (OL_O_ID, OL_D_ID, OL_W_ID, OL_NUMBER, OL_I_ID, OL_SUPPLY_W_ID, OL_QUANTITY, OL_AMOUNT, OL_DIST_INFO) VALUES (286871,10,2,3,40679,2,7,528.43,'nhcixumgmosxlwgabvsrcnu')@@@@INSERT INTO ORDER_LINE (OL_O_ID, OL_D_ID, OL_W_ID, OL_NUMBER, OL_I_ID, OL_SUPPLY_W_ID, OL_QUANTITY, OL_AMOUNT, OL_DIST_INFO) VALUES (286871,10,2,4,31459,2,9,341.82,'qbglbdleljyfzdpfbyziiea')@@@@INSERT INTO ORDER_LINE (OL_O_ID, OL_D_ID, OL_W_ID, OL_NUMBER, OL_I_ID, 
OL_SUPPLY_W_ID, OL_QUANTITY, OL_AMOUNT, OL_DIST_INFO) VALUES (286871,10,2,5,6143,2,3,152.67,'tmtnuupaviimdmnvmetmcrc')@@@@INSERT INTO ORDER_LINE (OL_O_ID, OL_D_ID, OL_W_ID, OL_NUMBER, OL_I_ID, OL_SUPPLY_W_ID, OL_QUANTITY, OL_AMOUNT, OL_DIST_INFO) VALUES (286871,10,2,6,12001,2,5,304.3,'ufytqwvkqxtmalhenrssfon')@@@@INSERT INTO ORDER_LINE (OL_O_ID, OL_D_ID, OL_W_ID, OL_NUMBER, OL_I_ID, OL_SUPPLY_W_ID, OL_QUANTITY, OL_AMOUNT, OL_DIST_INFO) VALUES (286871,10,2,7,40407,2,1,30.32,'hvclpfnblxchbyluumetcqn')@@@@UPDATE STOCK SET S_QUANTITY = 65 , S_YTD = S_YTD + 7, S_ORDER_CNT = S_ORDER_CNT + 1, S_REMOTE_CNT = S_REMOTE_CNT + 0 WHERE S_I_ID = 24167 AND S_W_ID = 2@@@@UPDATE STOCK SET S_QUANTITY = 97 , S_YTD = S_YTD + 1, S_ORDER_CNT = S_ORDER_CNT + 1, S_REMOTE_CNT = S_REMOTE_CNT + 0 WHERE S_I_ID = 96982 AND S_W_ID = 2@@@@UPDATE STOCK SET S_QUANTITY = 58 , S_YTD = S_YTD + 7, S_ORDER_CNT = S_ORDER_CNT + 1, S_REMOTE_CNT = S_REMOTE_CNT + 0 WHERE S_I_ID = 40679 AND S_W_ID = 2@@@@UPDATE STOCK SET S_QUANTITY = 28 , S_YTD = S_YTD + 9, S_ORDER_CNT = S_ORDER_CNT + 1, S_REMOTE_CNT = S_REMOTE_CNT + 0 WHERE S_I_ID = 31459 AND S_W_ID = 2@@@@UPDATE STOCK SET S_QUANTITY = 86 , S_YTD = S_YTD + 3, S_ORDER_CNT = S_ORDER_CNT + 1, S_REMOTE_CNT = S_REMOTE_CNT + 0 WHERE S_I_ID = 6143 AND S_W_ID = 2@@@@UPDATE STOCK SET S_QUANTITY = 13 , S_YTD = S_YTD + 5, S_ORDER_CNT = S_ORDER_CNT + 1, S_REMOTE_CNT = S_REMOTE_CNT + 0 WHERE S_I_ID = 12001 AND S_W_ID = 2@@@@UPDATE STOCK SET S_QUANTITY = 44 , S_YTD = S_YTD + 1, S_ORDER_CNT = S_ORDER_CNT + 1, S_REMOTE_CNT = S_REMOTE_CNT + 0 WHERE S_I_ID = 40407 AND S_W_ID = 2
...function createInstance(name)
end
function newSession(user, host)
end
function closeSession()
end
function routeQuery()
end
function clientReply()
end
function diagnostic()
end
[MyLuaFilter]
type=filter
module=luafilter
global_script=/path/to/script.lua
f = io.open("/tmp/test.log", "a+")
function createInstance(name)
f:write("createInstance for " .. name .. "\n")
end
function newSession(user, host)
f:write("newSession for: " .. user .. "@" .. host .. "\n")
end
function closeSession()
f:write("closeSession\n")
end
function routeQuery()
f:write("routeQuery: " .. mxs_get_sql() .. " -- type: " .. mxs_qc_get_type_mask() .. " operation: " .. mxs_qc_get_operation() .. "\n")
end
function clientReply()
f:write("clientReply: " .. mxs_get_replier() .. "\n")
end
function diagnostic()
f:write("diagnostics\n")
return "Hello from Lua!"
end
[RWS-Row]
type=service
router=readwritesplit
servers = row_server_1, row_server_2, ...
[RWS-Row-Listener]
type=listener
service=RWS-Row
socket=/tmp/rws-row.sock
[RWS-Column]
type = service
router = readwritesplit
servers = column_server_1, column_server_2, ...
[RWS-Column-Listener]
type = listener
service = RWS-Column
socket = /tmp/rws-col.sock
[SmartQuery]
type = service
router = smartrouter
targets = RWS-Row, RWS-Column
master = RWS-Row
[SmartQuery-Listener]
type = listener
service = SmartQuery
port = <port>
[maxscale]
[row_server_1]
type = server
address = <ip>
port = <port>
[row_server_2]
type = server
address = <ip>
port = <port>
[Row-Monitor]
type = monitor
module = mariadbmon
servers = row_server_1, row_server_2
user = <user>
password = <password>
monitor_interval = 2000ms
[column_server_1]
type = server
address = <ip>
port = <port>
[Column-Monitor]
type = monitor
module = csmon
servers = column_server_1
user = <user>
password = <password>
monitor_interval = 2000ms
# Row Read write split
[RWS-Row]
type = service
router = readwritesplit
servers = row_server_1, row_server_2
user = <user>
password = <password>
[RWS-Row-Listener]
type = listener
service = RWS-Row
socket = /tmp/rws-row.sock
# Columnstore Read write split
[RWS-Column]
type = service
router = readwritesplit
servers = column_server_1
user = <user>
password = <password>
[RWS-Column-Listener]
type = listener
service = RWS-Column
socket = /tmp/rws-col.sock
# Smart Query router
[SmartQuery]
type = service
router = smartrouter
targets = RWS-Row, RWS-Column
master = RWS-Row
user = <user>
password = <password>
[SmartQuery-Listener]
type = listener
service = SmartQuery
port = <port>
[MyFilter]
type=filter
module=xxxfilter
[Split-Service]
type=service
router=readwritesplit
servers=dbserver1,dbserver2,dbserver3,dbserver4
user=massi
password=6628C50E07CCE1F0392EDEEB9D1203F3
filters=hints | top10
[top30]
type=filter
module=topfilter
count=30
filebase=/var/log/DBSessions/top30
[BubbleGame]
type=service
router=readwritesplit
servers=dbbubble1,dbbubble2,dbbubble3,dbbubble4,dbbubble5
user=maxscale
password=6628C50E07CCE1F0392EDEEB9D1203F3
[CassandraDB]
type=server
address=192.168.4.28
port=3306
[Cassandra]
type=service
router=readconnroute
router_options=running
servers=CassandraDB
user=maxscale
password=6628C50E07CCE1F0392EDEEB9D1203F3
[HighScores]
type=filter
module=teefilter
match=insert.*HighScore.*values
service=Cassandra
[BubbleGame]
type=service
router=readwritesplit
servers=dbbubble1,dbbubble2,dbbubble3,dbbubble4,dbbubble5
user=maxscale
password=6628C50E07CCE1F0392EDEEB9D1203F3
filters=HighScores
The service uses the stored data when authenticating clients, checking their passwords and database access rights. This results in an authentication process very similar to the MariaDB Server itself. Unauthorized users are generally detected already at the MaxScale level instead of the backend servers. This may not apply in some cases, for example if MaxScale is using old user account data.
If authentication fails, the UAM updates its data from a backend. MaxScale may attempt authenticating the client again with the refreshed data without communicating the first failure to the client. This transparent user data update does not always work, in which case the client should try to log in again.
As the UAM is shared between all listeners of a service, its settings are defined in the service configuration. For more information, search the configuration guide for users_refresh_time, users_refresh_interval and auth_all_servers. Other settings which affect how the UAM connects to backends are the global settings auth_connect_timeout and local_address, and the various server-level SSL settings.
To properly fetch user account information, the MaxScale service user must be able to read from various tables in the mysql database: user, db, tables_priv, columns_priv, procs_priv, proxies_priv and roles_mapping. The user should also have the SHOW DATABASES grant.
If using MariaDB ColumnStore, the following grant is required:
When a client logs in to MaxScale, MaxScale sees the client's IP address. When
MaxScale then connects the client to backends (using the client's username and
password), the backends see the connection coming from the IP address of
MaxScale. If the client user account has a wildcard host ('alice'@'%'), this
is not an issue. If the host is restricted ('alice'@'123.123.123.123'),
authentication to backends will fail.
There are two primary ways to deal with this:
Duplicate user accounts. For every user account with a restricted hostname an
equivalent user account for MaxScale is added ('alice'@'maxscale-ip').
Use proxy protocol.
Option 1 limits the passwords for user accounts with shared usernames. Such accounts must use the same password since they will effectively share the MaxScale-to-backend user account. Option 2 requires server support.
See MaxScale Troubleshooting for additional information on how to solve authentication issues.
MaxScale supports the wildcards _ and % for database-level grants. As with
MariaDB Server, grant select on test_.* to 'alice'@'%'; gives access to test_ as well as test1, test2 and so on. If the GRANT command escapes the
wildcard (grant select on test\_.* to 'alice'@'%';), both MaxScale and the
MariaDB Server interpret it as only allowing access to test_. _ and %
are only interpreted as wildcards when the grant is to a database: grant select on test_.t1 to 'alice'@'%'; only grants access to the test_.t1 table, not to test1.t1.
The listener configuration defines authentication options which only affect the listener. authenticator defines the authentication plugins to use. authenticator_options sets various options. These options may affect an individual authentication plugin or the authentication as a whole. The latter are explained below. Multiple options can be given as a comma-separated list.
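As a minimal sketch, the options could be set on a listener like this (the listener name, service and port are hypothetical; the option values themselves are the ones shown in the configuration snippets later on this page):

[MyListener]
# Hypothetical listener; the line of interest is authenticator_options
type=listener
service=MyService
port=4006
authenticator_options=skip_authentication=true,lower_case_table_names=1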
Type: boolean
Mandatory: No
Dynamic: No
Default: false
If enabled, MaxScale will not check the passwords of incoming clients and just assumes that they are correct. Wrong passwords are instead detected when MaxScale tries to authenticate to the backend servers.
This setting is mainly meant for failure tolerance in situations where the password check is performed outside of MaxScale. If, for example, MaxScale cannot use an LDAP-server but the backend databases can, enabling this setting allows clients to log in. Even with this setting enabled, a user account matching the incoming client username and IP must exist on the backends for MaxScale to accept the client.
This setting is incompatible with the standard MariaDB/MySQL authentication plugin (MariaDBAuth in MaxScale). If enabled, MaxScale cannot authenticate clients to backend servers using standard authentication.
Type: boolean
Mandatory: No
Dynamic: No
Default: true
If disabled, MaxScale does not require that a valid user account entry for incoming clients exists on the backends. Specifically, only the client username needs to match a user account, hostname/IP is ignored.
This setting may be used to force clients to connect through MaxScale. Normally, creating the user jdoe@% will allow the user jdoe to connect from any IP-address. By disabling match_host and replacing the user with jdoe@maxscale-IP, the user can still connect from any client IP but will be forced to go through MaxScale.
Type: number
Mandatory: No
Dynamic: No
Default: 0
Controls database name matching for authentication when an incoming client logs in to a non-empty database. The setting functions similarly to the corresponding MariaDB Server setting and should be set to the value used by the backends.
The setting accepts the values 0, 1 or 2:
0: case-sensitive matching (default)
1: convert the requested database name to lower case before using case-insensitive
matching. Assumes that database names on the server are stored in lower case.
2: use case-insensitive matching.
true and false are also accepted for backwards compatibility. These map to 1 and 0, respectively.
The identifier names are converted using an ASCII-only function. This means that non-ASCII characters will retain their case-sensitivity.
Starting with MaxScale versions 2.5.25, 6.4.6, 22.08.5 and 23.02.2, the behavior
of lower_case_table_names=1 is identical with how the MariaDB server
behaves. In older releases the comparisons were done in a case-sensitive manner
after the requested database name was converted into lowercase. Using lower_case_table_names=2 will behave identically in all versions, which makes
it a safe alternative to use when a mix of older and newer MaxScale versions is
being used.
This page is licensed: CC BY-SA / Gnu FDL
The namedserverfilter is a MariaDB MaxScale filter module able to route queries to servers based on regular expression (regex) matches. Since it is a filter instead of a router, the NamedServerFilter only sets routing suggestions. It requires a compatible router to be effective. Currently, both readwritesplit and hintrouter take advantage of routing hints in the data packets. This filter uses the PCRE2 library for regular expression matching.
The filter accepts settings in two modes: legacy and indexed. Only one of
the modes may be used for a given filter instance. The legacy mode is meant for
backwards compatibility and allows only one regular expression and one server
name in the configuration. In indexed mode, up to 25 regex-server pairs are
allowed in the form match01 - target01, match02 - target02 and so on.
Also, in indexed mode, the server names (targets) may contain a list of names or
special tags ->master or ->slave.
All parameters except the deprecated match and target parameters can
be modified at runtime. Any modifications to the filter configuration will
only affect sessions created after the change has completed.
Below is a configuration example for the filter in indexed-mode. The legacy mode is not recommended and may be removed in a future release. In the example, a SELECT on TableOne (match01) results in routing hints to two named servers, while a SELECT on TableTwo is suggested to be routed to the primary server of the service. Whether a list of server names is interpreted as a route-to-any or route-to-all is up to the attached router. The HintRouter sees a list as a suggestion to route-to-any. For additional information on hints and how they can also be embedded into SQL-queries, see Hint-Syntax.
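The example configuration referred to above (it is also included in the configuration snippets later on this page):

[NamedServerFilter]
type=filter
module=namedserverfilter
match01=^Select.*TableOne$
target01=server2,server3
match22=^SELECT.*TableTwo$
target22=->master

[MyService]
type=service
router=readwritesplit
servers=server1,server2,server3
user=myuser
password=mypasswd
filters=NamedServerFilter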
NamedServerFilter requires at least one matchXY - targetXY pair.
Type: regex
Mandatory: No
Dynamic: Yes
Default: None
matchXY defines a PCRE2 regular expression against which the incoming SQL query is matched. XY must be a number in the range 01 - 25. Each match-setting pairs with a similarly indexed target-setting. If one is defined, the other must be defined as well. If a query matches the pattern, the filter attaches a routing hint defined by the target-setting to the query. The options parameter affects how the patterns are compiled.
options
Type: enum
Mandatory: No
Dynamic: Yes
Values: ignorecase, case, extended
Default: ignorecase
Regular expression options
for matchXY.
Type: string
Mandatory: No
Dynamic: Yes
Default: None
The hint which is attached to the queries matching the regular expression defined by matchXY. If a compatible router is used in the service, the query will be routed accordingly. The target can be one of the following:
a server or service name (adds a HINT_ROUTE_TO_NAMED_SERVER hint)
a list of server names, comma-separated (adds several HINT_ROUTE_TO_NAMED_SERVER hints)
->master (adds a HINT_ROUTE_TO_MASTER hint)
->slave (adds a HINT_ROUTE_TO_SLAVE hint)
->all (adds a HINT_ROUTE_TO_ALL hint)
The support for service names was added in MaxScale 6.3.2. Older
versions of MaxScale did not accept service names in the target
parameters.
Type: string
Mandatory: No
Dynamic: Yes
Default: None
This optional parameter defines an IP address or mask which a connecting client's IP address is matched against. Only sessions whose address matches this setting will have this filter active and performing the regex matching. Traffic from other client IPs is simply left as is and routed straight through.
Since MaxScale 2.1 it's also possible to use % wildcards:
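(the forms below also appear in the configuration snippets later on this page)

source=192.%.%.%
source=192.168.%.%
source=192.168.10.%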
Note that using source=% to match any IP is not allowed.
Since MaxScale 2.3 it's also possible to specify multiple addresses separated by comma. Incoming client connections are subsequently checked against each.
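For example, the following (also from the configuration snippets later on this page) matches one exact address plus a wildcard range:

source=192.168.21.3,192.168.10.%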
Type: string
Mandatory: No
Dynamic: Yes
Default: None
This optional parameter defines a username the connecting client username is matched against. Only sessions that are connected using this username will have the match and routing hints applied to them. Traffic from other users is simply left as is and routed straight through.
The maximum number of accepted match - target pairs is 25.
In the configuration file, the indexed match and target settings may be in any order and may skip numbers. During SQL-query matching, however, the regexes are tested in ascending order: match01, match02, match03 and so on. As soon as a match is found for a given query, the routing hints are written and the packet is forwarded to the next filter or router. Any remaining match regexes are ignored. This means the match - target pairs should be indexed in priority order, or, if priority is not a factor, in order of decreasing match probability.
Binary-mode prepared statements (COM_STMT_PREPARE) are handled by matching the prepared SQL against the match-parameters. If a match is found, the routing hints are attached to any execution of that prepared statement. Text-mode prepared statements are not supported in this way. To divert them, use regular expressions which match the specific "EXECUTE"-query.
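The following example configuration, which is also included in the configuration snippets later on this page, defines a single match - target pair:

[NamedServerFilter]
type=filter
module=namedserverfilter
match02= *from *users
target02=server2

[MyService]
type=service
router=readwritesplit
servers=server1,server2
user=myuser
password=mypasswd
filters=NamedServerFilter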
This will route all queries matching the regular expression *from *users to
the server named server2. The filter will ignore character case in queries.
A query like SELECT * FROM users would be routed to server2, whereas a query
like SELECT * FROM accounts would be routed according to the normal rules of
the router.
This page is licensed: CC BY-SA / Gnu FDL
The primary server that we will be replicating from needs to have binary logging
enabled, binlog_format set to row and binlog_row_image set to full. These can be enabled by adding the following two lines to the my.cnf
file of the primary.
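The two lines in question (they also appear in the configuration snippets later on this page):

binlog_format=row
binlog_row_image=full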
You can find out more about replication formats from the
We start by adding two new services into the configuration file. The first service is the binlogrouter service which will read the binary logs from the primary server. The second service will read the binlogs as they are streamed from the primary and convert them into Avro format files.
The source parameter in the avro-service points to the replication-service
we defined before. This service will be the data source for the avrorouter. The filestem is the prefix in the binlog files and start_index is the binlog
number to start from. With these parameters, the avrorouter will start reading
events from binlog binlog.000015.
Note that the filestem and start_index must point to the file that is the
first binlog that the binlogrouter will replicate. For example, if the first
file you are replicating is my-binlog-file.001234, set the parameters to filestem=my-binlog-file and start_index=1234.
For more information on the avrorouter options, read the Avrorouter Documentation.
Before starting the MaxScale process, we need to make sure that the binary logs
of the primary server contain the DDL statements that define the table
layouts. What this means is that the CREATE TABLE statements need to be in the
binary logs before the conversion process is started.
If the binary logs contain data modification events for tables that aren't created in the binary logs, the Avro schema of the table needs to be manually created. There are multiple ways to do this:
Dump the database to a replica, configure it to replicate from the primary and point MaxScale to this replica (this is the recommended method as it requires no extra steps)
Use the cdc_schema Go utility and copy the generated .avsc files to the avrodir
Use the Python version of the schema generator and copy the generated .avsc files to the avrodir
If you used the schema generator scripts, all Avro schema files for tables that
are not created in the binary logs need to be in the location pointed to by the avrodir parameter. The files use the following naming: <database>.<table>.<schema_version>.avsc. For example, the schema file name of
the test.t1 table would be test.t1.0000001.avsc.
The next step is to start MariaDB MaxScale and set up the binlogrouter. We do that by connecting to the MySQL listener of the replication_router service and executing a few commands.
NOTE: GTID replication is not currently supported and file-and-position replication must be used.
This will start the replication of binary logs from the primary server at
172.18.0.1 listening on port 3000. The first file that the binlogrouter
replicates is binlog.000015. This is the same file that was configured as the
starting file in the avrorouter.
For more details about the SQL commands, refer to the Binlogrouter documentation.
After the binary log streaming has started, the avrorouter will automatically start processing the binlogs.
Next, create a simple test table and populate it with some data by executing the following statements.
To use the cdc.py command line client to connect to the CDC service, we must first create a user. This can be done via maxctrl by executing the following command.
This will create the maxuser:maxpwd credentials which can then be used to
request a JSON data stream of the test.t1 table that was created earlier.
The output is a stream of JSON events describing the changes done to the database.
The first record is always the JSON format schema for the table describing the types and names of the fields. All records that follow it represent the changes that have happened on the database.
This page is licensed: CC BY-SA / Gnu FDL

Admin users represent administrative users that are able to query and change MaxScale's configuration.
Get a single network user. The :name in the URI must be a valid network user name.
Response
Status: 200 OK
Get all network users.
Response
Status: 200 OK
Note: This endpoint has been deprecated and does nothing.
Note: This endpoint has been deprecated and does nothing.
Get all administrative users.
Response
Status: 200 OK
Create a new network user. The request body must define at least the following fields.
data.id
The username
data.type
Type of the object, must be inet
Only admin accounts can perform POST, PUT, DELETE and PATCH requests. If a basic
account performs one of the aforementioned requests, the REST API will respond
with a 401 Unauthorized error.
Here is an example request body defining the network user my-user with the password my-password that is allowed to execute only read-only operations.
Response
This enables an existing UNIX account on the system for administrative operations. The request body must define at least the following fields.
data.id
The username
data.type
Type of the object, must be unix
Here is an example request body enabling the UNIX account jdoe for read-only operations.
Response
The :name part of the URI must be a valid user name.
Response
The :name part of the URI must be a valid user name.
Response
Update network user. Currently, only the password can be updated. This
means that the request body must define the data.attributes.password
field.
Here is an example request body that updates the password.
Response
This page is licensed: CC BY-SA / Gnu FDL
A filter resource represents an instance of a filter inside MaxScale. Multiple services can use the same filter and a single service can use multiple filters.
The :name in all of the URIs must be the name of a filter in MaxScale.
Get a single filter.
Response
Status: 200 OK
Get all filters.
Response
Status: 200 OK
Create a new filter. The posted object must define at least the following fields.
data.id
Name of the filter
data.type
Type of the object, must be filters
All of the filter parameters should be defined at creation time in the data.attributes.parameters object.
As the service to filter relationship is ordered (filters are applied in the order they are listed), filter to service relationships cannot be defined at creation time.
The following example defines a request body which creates a new filter.
Response
Filter is created:
Status: 204 No Content
Filter parameters can be updated at runtime if the module supports it. Refer to the individual module documentation for more details on whether it supports runtime configuration and which parameters can be updated.
The following example modifies a filter by changing the match parameter to .*users.*.
Response
Filter is modified:
Status: 204 No Content
The :filter in the URI must map to the name of the filter to be destroyed.
A filter can only be destroyed if no service uses it. This means that the data.relationships object for the filter must be empty. Note that the service
→ filter relationship cannot be modified from the filters resource and must be
done via the services resource.
This endpoint also supports the force=yes parameter that will unconditionally
delete the filter by first removing it from all services that use it.
Response
Filter is destroyed:
Status: 204 No Content
This page is licensed: CC BY-SA / Gnu FDL
The rewrite filter allows modification of sql queries on the fly. Reasons for modifying queries can be to rewrite a query for performance, or to change a specific query when the client query is incorrect and cannot be changed in a timely manner.
The examples will use Rewrite Filter file format. See below.
The MariaDB Monitor is not only capable of monitoring the state of a MariaDB primary-replica cluster but is also capable of performing failover and switchover. In addition, in some circumstances it is capable of rejoining a primary that has gone down and later reappears.
Note that the failover (and switchover and rejoin) functionality is only supported in conjunction with GTID-based replication and initially only for simple topologies, that is, 1 primary and several replicas.
The failover, switchover and rejoin functionality are inherent parts of the MariaDB Monitor, but neither automatic failover nor automatic rejoin are enabled by default.
The following examples have been written with the assumption that there
are four servers - server1, server2, server3 and server4 - of
which server1 is the initial primary.
Sharding is the method of splitting a single logical database server into separate physical databases. This tutorial describes a very simple way of sharding. Each schema is located on a different database server and MariaDB MaxScale's schemarouter module is used to combine them into a single logical database server.
This tutorial was written for Ubuntu 22.04, MaxScale 23.08 and . In addition to the MaxScale server, you'll need two MariaDB servers which will be used for the sharding. The installation of MariaDB is not covered by this tutorial.
CREATE USER 'maxscale'@'maxscalehost' IDENTIFIED BY 'maxscale-password';
GRANT SELECT ON mysql.user TO 'maxscale'@'maxscalehost';
GRANT SELECT ON mysql.db TO 'maxscale'@'maxscalehost';
GRANT SELECT ON mysql.tables_priv TO 'maxscale'@'maxscalehost';
GRANT SELECT ON mysql.columns_priv TO 'maxscale'@'maxscalehost';
GRANT SELECT ON mysql.procs_priv TO 'maxscale'@'maxscalehost';
GRANT SELECT ON mysql.proxies_priv TO 'maxscale'@'maxscalehost';
GRANT SELECT ON mysql.roles_mapping TO 'maxscale'@'maxscalehost';
GRANT SHOW DATABASES ON *.* TO 'maxscale'@'maxscalehost';
GRANT ALL ON infinidb_vtable.* TO 'maxscale'@'maxscalehost';
authenticator_options=skip_authentication=true,lower_case_table_names=1
authenticator_options=skip_authentication=true
authenticator_options=match_host=false
authenticator_options=lower_case_table_names=0
[NamedServerFilter]
type=filter
module=namedserverfilter
match01=^Select.*TableOne$
target01=server2,server3
match22=^SELECT.*TableTwo$
target22=->master
[MyService]
type=service
router=readwritesplit
servers=server1,server2,server3
user=myuser
password=mypasswd
filters=NamedServerFilter
match01=^SELECT
options=case,extended
target01=MyServer2
source=127.0.0.1
source=192.%.%.%
source=192.168.%.%
source=192.168.10.%
source=192.168.21.3,192.168.10.%
user=john
[NamedServerFilter]
type=filter
module=namedserverfilter
match02= *from *users
target02=server2
[MyService]
type=service
router=readwritesplit
servers=server1,server2
user=myuser
password=mypasswd
filters=NamedServerFilter
binlog_format=row
binlog_row_image=full
# The Replication Proxy service
[replication-service]
type=service
router=binlogrouter
server_id=4000
master_id=3000
filestem=binlog
user=maxuser
password=maxpwd
# The Avro conversion service
[avro-service]
type=service
router=avrorouter
source=replication-service
filestem=binlog
start_index=15
# The listener for the replication-service
[replication-listener]
type=listener
service=replication-service
port=3306
# The client listener for the avro-service
[avro-listener]
type=listener
service=avro-service
protocol=CDC
port=4001
CHANGE MASTER TO MASTER_HOST='172.18.0.1',
MASTER_PORT=3000,
MASTER_LOG_FILE='binlog.000015',
MASTER_LOG_POS=4,
MASTER_USER='maxuser',
MASTER_PASSWORD='maxpwd';
START SLAVE;
CREATE TABLE test.t1 (id INT);
INSERT INTO test.t1 VALUES (1), (2), (3), (4), (5), (6), (7), (8), (9), (10);
maxctrl call command cdc add_user avro-service maxuser maxpwd
cdc.py -u maxuser -p maxpwd -h 127.0.0.1 -P 4001 test.t1
{"namespace": "MaxScaleChangeDataSchema.avro", "type": "record", "name": "ChangeRecord", "fields": [{"name": "domain", "type": "int"}, {"name": "server_id", "type": "int"}, {"name": "sequence", "type": "int"}, {"name": "event_number", "type": "int"}, {"name": "timestamp", "type": "int"}, {"name": "event_type", "type": {"type": "enum", "name": "EVENT_TYPES", "symbols": ["insert", "update_before", "update_after", "delete"]}}, {"name": "id", "type": "int", "real_type": "int", "length": -1}]}
{"domain": 0, "server_id": 3000, "sequence": 11, "event_number": 1, "timestamp": 1537429419, "event_type": "insert", "id": 1}
{"domain": 0, "server_id": 3000, "sequence": 11, "event_number": 2, "timestamp": 1537429419, "event_type": "insert", "id": 2}
{"domain": 0, "server_id": 3000, "sequence": 11, "event_number": 3, "timestamp": 1537429419, "event_type": "insert", "id": 3}
{"domain": 0, "server_id": 3000, "sequence": 11, "event_number": 4, "timestamp": 1537429419, "event_type": "insert", "id": 4}
{"domain": 0, "server_id": 3000, "sequence": 11, "event_number": 5, "timestamp": 1537429419, "event_type": "insert", "id": 5}
{"domain": 0, "server_id": 3000, "sequence": 11, "event_number": 6, "timestamp": 1537429419, "event_type": "insert", "id": 6}
{"domain": 0, "server_id": 3000, "sequence": 11, "event_number": 7, "timestamp": 1537429419, "event_type": "insert", "id": 7}
{"domain": 0, "server_id": 3000, "sequence": 11, "event_number": 8, "timestamp": 1537429419, "event_type": "insert", "id": 8}
{"domain": 0, "server_id": 3000, "sequence": 11, "event_number": 9, "timestamp": 1537429419, "event_type": "insert", "id": 9}
{"domain": 0, "server_id": 3000, "sequence": 11, "event_number": 10, "timestamp": 1537429419, "event_type": "insert", "id": 10}$ maxkeys$ chown maxscale:maxscale /var/lib/maxscale/.secrets$ maxpasswd plaintextpassword
96F99AA1315BDC3604B006F427DD9484
[MariaDB-Service]
type=service
router=readwritesplit
servers=MariaDB1,MariaDB2,MariaDB3
user=maxscale-user
password=96F99AA1315BDC3604B006F427DD9484
[maxscale]
admin_host=127.0.0.1
admin_port=2222
[maxscale]
admin_host=10.0.0.3
admin_port=2222
[maxscale]
admin_secure_gui=true
admin_ssl_key=/certs/maxscale-key.pem
admin_ssl_cert=/certs/maxscale-cert.pem
admin_ssl_ca_cert=/certs/ca-cert.pem
$ maxctrl --user=my_user --password=my_password --secure --tls-ca-cert=/certs/ca-cert.pem --tls-verify-server-cert=false show maxscale
$ maxctrl create user my_user my_password --type=admin
$ maxctrl destroy user admin
[maxscale]
admin_audit = true
admin_audit_file = /var/log/maxscale/audit_files/audit.csv
$ maxctrl rotate logs
[MariaDB-Server1]
type=server
ssl=true
ssl_verify_peer_certificate=true
CREATE USER 'maxscale'@'%' IDENTIFIED BY 'maxscale_pw';
GRANT SELECT ON mysql.user TO 'maxscale'@'%';
GRANT SELECT ON mysql.db TO 'maxscale'@'%';
GRANT SELECT ON mysql.tables_priv TO 'maxscale'@'%';
GRANT SELECT ON mysql.columns_priv TO 'maxscale'@'%';
GRANT SELECT ON mysql.procs_priv TO 'maxscale'@'%';
GRANT SELECT ON mysql.proxies_priv TO 'maxscale'@'%';
GRANT SELECT ON mysql.roles_mapping TO 'maxscale'@'%';
GRANT SHOW DATABASES ON *.* TO 'maxscale'@'%';
CREATE USER 'jdoe'@'maxscale-host' IDENTIFIED BY 'my_secret_password';
MariaDB [(none)]> SHOW GRANTS FOR 'jdoe'@'client-host';
+-----------------------------------------------------------------------+
| Grants for jdoe@client-host |
+-----------------------------------------------------------------------+
| GRANT SELECT, INSERT, UPDATE, DELETE ON *.* TO 'jdoe'@'client-host' |
+-----------------------------------------------------------------------+
1 row in set (0.01 sec)
GRANT SELECT, INSERT, UPDATE, DELETE ON *.* TO 'jdoe'@'maxscale-host';
[maxscale]
threads=auto
sudo systemctl start maxscale
sudo service maxscale start
% sudo maxctrl list services
┌──────────────────┬────────────────┬─────────────┬───────────────────┬───────────────────────────┐
│ Service │ Router │ Connections │ Total Connections │ Servers │
├──────────────────┼────────────────┼─────────────┼───────────────────┼───────────────────────────┤
│ Splitter-Service │ readwritesplit │ 1 │ 1 │ dbserv1, dbserv2, dbserv3 │
└──────────────────┴────────────────┴─────────────┴───────────────────┴───────────────────────────┘
% sudo maxctrl list servers
┌─────────┬─────────────┬──────┬─────────────┬─────────────────┬───────────┐
│ Server │ Address │ Port │ Connections │ State │ GTID │
├─────────┼─────────────┼──────┼─────────────┼─────────────────┼───────────┤
│ dbserv1 │ 192.168.2.1 │ 3306 │ 0 │ Master, Running │ 0-3000-62 │
├─────────┼─────────────┼──────┼─────────────┼─────────────────┼───────────┤
│ dbserv2 │ 192.168.2.2 │ 3306 │ 0 │ Slave, Running │ 0-3000-62 │
├─────────┼─────────────┼──────┼─────────────┼─────────────────┼───────────┤
│ dbserv3 │ 192.168.2.3 │ 3306 │ 0 │ Slave, Running │ 0-3000-62 │
└─────────┴─────────────┴──────┴─────────────┴─────────────────┴───────────┘
% sudo maxctrl list listeners Splitter-Service
┌───────────────────┬──────┬──────┬─────────┐
│ Name │ Port │ Host │ State │
├───────────────────┼──────┼──────┼─────────┤
│ Splitter-Listener │ 3306 │ │ Running │
└───────────────────┴──────┴──────┴─────────┘
data.attributes.password
The password for this user
data.attributes.account
Set to admin for administrative users and basic to read-only users
data.attributes.account
Set to admin for administrative users and basic to read-only users
data.attributes.module
The filter module to use
Native syntax
Rewriter native syntax uses placeholders to grab and replace parts of text.
Placeholders
The syntax for a plain placeholder is @{N} where N is a positive integer.
The syntax for a placeholder regex is @{N:regex}. It allows more control
when needed.
The below is a valid entry in rf format. For demonstration, all options are set. This entry is a do-nothing entry, but illustrates placeholders.
If the input sql is select id, name from my_table where id = 42
then @{2} = "id, name" and @{3} = "42". Since the replace template
is identical to the match template the end result is that the output sql
will be the same as the input sql.
Placeholders can be used as forward references:
@{1:^}select @{2}, count(*) from @{3} group by @{2}
For a match, the two @{2} text grabs must be equal.
Match template
The match template is used to match against the sql to be rewritten.
The match template can be partial, for example from mytable. But the actual underlying
regex match is always for the whole sql. If the match template does not
start or end with a placeholder, placeholders are automatically added so
that the above becomes @{1}from mytable@{2}. The automatically added
placeholders cannot be used in the replace template.
Matching the whole input also means that Native syntax does not support
(and is not intended to support) scan and replace. Only the first occurrence
of the above from mytable can be modified in the replace template.
However, one can selectively choose to modify e.g. the first through
third occurrence of from mytable by writing from mytable @{1} from mytable @{2} from mytable @{3}.
For scan and replace use a different regex_grammar (see below).
Replace template
The replace template uses the placeholders from the match template to rewrite sql.
An important option for smooth matching is ignore_whitespace, which
is on (true) by default. It creates the match regex in such a way that
the amount and kind of whitespace does not affect matching. However,
to make ignore_whitespace always work, it is important to add
whitespace where allowed. If "id=42" is in the match template then
only the exact "id=42" can match. But if "id = 42" is used, andignore_whitespace is on, both "id=42" and "id = 42" will match.
Another example, and what not to do:
That works, but because the match lacks specific detail about the
expected sql, things are likely to break. In this case show indexes from my_table would no longer work.
The minimum detail in this case could be:
but if more detail is known, like something specific in the where clause, that too should be added.
Placeholder Regex
Syntax: @{N:regex}
In a placeholder regex the character } must be escaped to \}
(for literal matching). Plain parenthesis "()" indicate capturing
groups, which are internally used by the Native grammar.
Thus plain parentheses in a placeholder regex will break matching.
However, non-capturing groups can be used: e.g. @{1:(?:Jane|Joe)}.
To match a literal parenthesis use an escape, e.g. \(.
Suppose an application is misbehaving after an upgrade and a quick fix is needed.
This query select zip from address_book where str_id = "AZ-124" is correct,
but if the id is an integer the where clause should be id = 1234.
Using plain regular expressions
For scan and replace the regex_grammar must be set to something else than Native. An example will illustrate the usage.
Replace all occurrences of "wrong_table_name" with "correct_table_name". Further, if the replacement was made then replace all occurrences of wrong_column_name with correct_column_name.
Adding a rewrite filter.
template_file
Type: string
Mandatory: Yes
Dynamic: Yes
Default: No default value
Path to the template file.
regex_grammar
Type: string
Mandatory: No
Dynamic: Yes
Default: Native
Values: Native, ECMAScript, Posix, EPosix, Awk, Grep, EGrep
Default regex_grammar for templates
case_sensitive
Type: boolean
Mandatory: No
Dynamic: Yes
Default: true
Default case sensitivity for templates
log_replacement
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
Log replacements at NOTICE level.
regex_grammar
Type: string
Values: Native, ECMAScript, Posix, EPosix, Awk, Grep, EGrep
Default: From maxscale.cnf
Overrides the global regex_grammar of a template.
case_sensitive
Type: boolean
Default: From maxscale.cnf
Overrides the global case sensitivity of a template.
ignore_whitespace
Type: boolean
Default: true
Ignore whitespace differences in the match template and input sql.
continue_if_matched
Type: boolean
Default: false
If a template matches and the replacement is done, continue to the next template and apply it to the result of the previous rewrite.
what_if
Type: boolean
Default: false
Do not make the replacement, only log what would have been replaced (NOTICE level).
The rf format for an entry is:
The character # starts a single line comment when it is the
first character on a line.
Empty lines are ignored.
The rf format does not need any additional escaping to what the basic format requires (see Placeholder Regex).
Options are specified as follows:
The colon must stick to the option name.
The separators % and %% must be the exact content of
their respective separator lines.
The templates can span multiple lines. Whitespace does not
matter as long as ignore_whitespace = true. Always use space
where space is allowed to maximize the utility of ignore_whitespace.
Example
The json file format is harder to read and edit manually. It will be needed if support for editing of rewrite templates is added to the GUI.
All double quotes and escape characters have to be escaped in json, i.e '"' and '\'.
The same example as above is:
The configuration is re-read if any dynamic value is updated even if the value does not change.
ECMAScript ECMAScript
Posix V1_chap09.html#tag_09_03
EPosix V1_chap09.html#tag_09_04
Grep Same as Posix with the addition of newline '\n' as an alternation separator.
EGrep Same as EPosix with the addition of newline '\n' as an alternation separator in addition to '|'.
This page is licensed: CC BY-SA / Gnu FDL
The tee filter is a "plumbing" fitting in the MariaDB MaxScale filter toolkit. It can be used in a filter pipeline of a service to make copies of requests from the client and send the copies to another service within MariaDB MaxScale.
Please Note: Starting with MaxScale 2.2.0, any client that connects to a
service which uses a tee filter will require a grant for the loopback address,
i.e. 127.0.0.1.
The configuration block for the TEE filter requires the minimal filter parameters in its section within the MaxScale configuration file. The service to send the duplicates to must be defined.
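A minimal sketch of such a filter section might look like the following; the filter and service names are hypothetical, and the module name is written here as tee (an older example earlier on this page uses teefilter):

[DuplicateTraffic]
# Hypothetical tee filter section: duplicates queries to the Archive-Service service
type=filter
module=tee
target=Archive-Service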
The tee filter requires a mandatory parameter to define the service to replicate statements to and accepts a number of optional parameters.
Type: target
Mandatory: No
Dynamic: Yes
Default: none
The target where the filter will duplicate all queries. The target can be either a service or a server. The duplicate connection that is created to this target will be referred to as the "branch target" in this document.
Type: service
Mandatory: No
Dynamic: Yes
Default: none
The service where the filter will duplicate all queries. This parameter is
deprecated in favor of the target parameter and will be removed in a future
release. Both target and service cannot be defined.
match
Type: regex
Mandatory: No
Dynamic: Yes
Default: None
Only queries matching this regular expression are duplicated to the branch target.
exclude
Type: regex
Mandatory: No
Dynamic: Yes
Default: None
Queries matching this regular expression are not duplicated to the branch target.
options
Type: enum
Mandatory: No
Dynamic: Yes
Values: ignorecase, case, extended
Default: ignorecase
How regular expressions should be interpreted.
source
Type: string
Mandatory: No
Dynamic: Yes
Default: None
The optional source parameter defines an address that is used to match against the address from which the client connection to MariaDB MaxScale originates. Only sessions that originate from this address will be replicated.
user
Type: string
Mandatory: No
Dynamic: Yes
Default: None
The optional user parameter defines a user name that is used to match against the user from which the client connection to MariaDB MaxScale originates. Only sessions that are connected using this username are replicated.
sync
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
Enable synchronous routing mode. When configured with sync=true, the filter
will queue new queries until the response from both the main and the branch
target has been received. This means that for n executed queries, n - 1
queries are guaranteed to be synchronized. Adding one extra statement
(e.g. SELECT 1) to a batch of statements guarantees that all previous SQL
statements have been successfully executed on both targets.
In the synchronous routing mode, a failure of the branch target will cause the client session to be closed.
All statements that are executed on the branch target are done in an
asynchronous manner. This means that when the client receives the response
there is no guarantee that the statement has completed on the branch
target. The sync feature provides some synchronization guarantees that can
be used to verify successful execution on both targets.
Any errors on the branch target will cause the connection to it to be
closed. If target is a service, it is up to the router to decide whether the
connection is closed. For direct connections to servers, any network errors
cause the connection to be closed. When the connection is closed, no new
queries will be routed to the branch target.
With sync=true, a failure of the branch target will cause the whole session
to be closed.
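A sketch of a tee filter section with synchronous routing enabled (the filter and target names match the example later in this document):
[ReplicateOrders]
type=filter
module=tee
target=DataMart
sync=true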
Read the Module Commands documentation for details about module commands.
The tee filter supports the following module commands.
disable: This command disables a tee filter instance. A disabled tee filter will not send any queries to the target service.
enable: Enables a disabled tee filter. This resumes the sending of queries to the target service.
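For example, assuming a tee filter instance named ReplicateOrders as in the example below, it could be disabled and re-enabled with:
maxctrl call command tee disable ReplicateOrders
maxctrl call command tee enable ReplicateOrders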
Assume an order processing system that has a table called orders. You also have another database server, the datamart server, that requires all inserts into orders to be replicated to it. Deletes and updates are not, however, required.
Set up a service in MariaDB MaxScale, called Orders, to communicate with the order processing system, with the tee filter applied to it. Also set up a service, called DataMart, to talk to the datamart server. The tee filter would have the DataMart service as its target; adding a match parameter of "insert into orders" would then result in all requests being sent to the order processing system, with insert statements that touch the orders table additionally sent to the datamart server.
This page is licensed: CC BY-SA / Gnu FDL
Somewhat simplified, the MaxScale configuration file would look like:
If everything is in order, the state of the cluster will look something like this:
If the primary now for any reason goes down, then the cluster state will look like this:
Note that the status for server1 is Down.
Since failover is by default not enabled, the failover mechanism must be invoked manually:
There are quite a few arguments, so let's look at each one separately:
call command indicates that it is a module command that is to be invoked,
mariadbmon indicates the module whose command we want to invoke (that is, the MariaDB Monitor),
failover is the command we want to invoke, and
TheMonitor is the first and only argument to that command, the name of the monitor as specified in the configuration file.
The MariaDB Monitor will now autonomously deduce which replica is the most appropriate one to be promoted to primary, promote it to primary and modify the other replicas accordingly.
If we now check the cluster state we will see that one of the remaining replicas has been made into primary.
If server1 now reappears, it will not be rejoined to the cluster, as
shown by the following output:
Had auto_rejoin=true been specified in the monitor section, then an
attempt to rejoin server1 would have been made.
In MaxScale 2.2.1, rejoining cannot be initiated manually, but in a subsequent version a command to that effect will be provided.
To enable automatic failover, simply add auto_failover=true to the
monitor section in the configuration file.
When everything is running fine, the cluster state looks as follows:
If server1 now goes down, failover will automatically be performed and
an existing replica promoted to new primary.
If you are continuously monitoring the server states, you may notice for a
brief period that the state of server1 is Down and the state of server2 is still Slave, Running.
To enable automatic rejoin, simply add auto_rejoin=true to the
monitor section in the configuration file.
When automatic rejoin is enabled, the MariaDB Monitor will attempt to rejoin a failed primary as a replica, if it reappears.
When everything is running fine, the cluster state looks as follows:
Assuming auto_failover=true has been specified in the configuration
file, when server1 goes down for some reason, failover will be performed
and we end up with the following cluster state:
If server1 now reappears, the MariaDB Monitor will detect that and
attempt to rejoin the old primary as a replica.
Whether rejoining will succeed depends upon the actual state of the old primary. For instance, if the old primary was modified and the changes had not been replicated to the new primary, before the old primary went down, then automatic rejoin will not be possible.
If rejoining can be performed, then the cluster state will end up looking like:
Switchover is for cases when you explicitly want to move the primary role from one server to another.
If we continue from the cluster state at the end of the previous example
and want to make server1 primary again, then we must issue the following
command:
There are quite a few arguments, so let's look at each one separately:
call command indicates that it is a module command that is to be invoked,
mariadbmon indicates the module whose command we want to invoke,
switchover is the command we want to invoke,
TheMonitor is the first argument to the command, the name of the monitor as specified in the configuration file,
server1 is the second argument to the command, the name of the server we want to make into the primary, and
server2 is the third argument to the command, the name of the current primary.
If the command executes successfully, we will end up with the following cluster state:
This page is licensed: CC BY-SA / Gnu FDL
The easiest way to install MaxScale is to use the MariaDB repositories.
This tutorial uses a broader set of grants than is required for the sake of brevity and backwards compatibility. For the minimal set of grants, refer to the MaxScale Configuration Guide.
All MaxScale configurations require at least two accounts: one for reading authentication data and another for monitoring the state of the database. Services will use the first one and monitors will use the second one. In addition to this, we want to have a separate account that our application will use.
All of the users must be created on both of the MariaDB servers.
Each server will hold one unique schema which contains the data of one specific customer. We'll also create a shared schema that is present on all shards that the shard-local tables can be joined into.
Create the tables on the first server:
Create the tables on the second server:
The MaxScale configuration is stored in /etc/maxscale.cnf.
First, we configure two servers we will use to shard our database. The db-01
server has the customer_01 schema and the db-02 server has the customer_02
schema.
The next step is to configure the service which the users connect to. This
section defines which router to use, which servers to connect to and the
credentials to use. For sharding, we use schemarouter router and the
service_user credentials we defined earlier. By default the schemarouter warns
if two or more nodes have duplicate schemas, so we need to ignore them with ignore_tables_regex=.*.
After this we configure a listener for the service. The listener is the actual port that the user connects to. We will use the port 4000.
The final step is to configure a monitor which will monitor the state of the
servers. The monitor will notify MariaDB MaxScale if the servers are down. We
add the two servers to the monitor and use the monitor_user credentials. For
the sharding use-case, the galeramon module is suitable even if we're not
using a Galera cluster. The schemarouter is only interested in whether the
server is in the Running state or in the Down state.
After this we have a fully working configuration and the contents of /etc/maxscale.cnf should look like this.
Then you're ready to start MaxScale.
MariaDB MaxScale is now ready to start accepting client connections and routing them. Queries are routed to the right servers based on the database they target and switching between the shards is seamless since MariaDB MaxScale keeps the session state intact between servers.
To test, we query the schema that's located on the local shard and join it to the shared table.
The sharding also works even if no default database is selected.
One limitation of this sort of simple sharding is that cross-shard joins are not possible.
In most multi-tenant situations, this is an acceptable limitation. If you do need cross-shard joins, the Spider storage engine can provide them.
This page is licensed: CC BY-SA / Gnu FDL
GET /v1/users/inet/:name
{
"data": {
"attributes": {
"account": "admin",
"created": "Fri, 05 Jan 2024 07:23:54 GMT",
"last_login": "Fri, 05 Jan 2024 07:24:11 GMT",
"last_update": null,
"name": "admin"
},
"id": "admin",
"links": {
"self": "http://localhost:8989/v1/users/inet/admin/"
},
"type": "inet"
},
"links": {
"self": "http://localhost:8989/v1/users/inet/admin/"
}
}
GET /v1/users/inet
{
"data": [
{
"attributes": {
"account": "admin",
"created": "Fri, 05 Jan 2024 07:23:54 GMT",
"last_login": "Fri, 05 Jan 2024 07:24:11 GMT",
"last_update": null,
"name": "admin"
},
"id": "admin",
"links": {
"self": "http://localhost:8989/v1/users/inet/admin/"
},
"type": "inet"
}
],
"links": {
"self": "http://localhost:8989/v1/users/inet/"
}
}
GET /v1/users/unix/:name
GET /v1/users/unix
GET /v1/users
{
"data": [
{
"attributes": {
"account": "admin",
"created": "Fri, 05 Jan 2024 07:23:54 GMT",
"last_login": "Fri, 05 Jan 2024 07:24:11 GMT",
"last_update": null,
"name": "admin"
},
"id": "admin",
"links": {
"self": "http://localhost:8989/v1/users/inet/admin/"
},
"type": "inet"
}
],
"links": {
"self": "http://localhost:8989/v1/users/inet/"
}
}
POST /v1/users/inet
{
"data": {
"id": "my-user", // The user to create
"type": "inet", // The type of the user
"attributes": {
"password": "my-password", // The password to use for the user
"account": "basic" // The type of the account
}
}
}
Status: 204 No Content
POST /v1/users/unix
{
"data": {
"id": "jdoe", // Account name
"type": "unix" // Account type
"attributes": {
"account": "basic" // Type of the user account in MaxScale
}
}
}
Status: 204 No Content
DELETE /v1/users/inet/:name
Status: 204 No Content
DELETE /v1/users/unix/:name
Status: 204 No Content
PATCH /v1/users/inet/:name
{
"data": {
"attributes": {
"password": "new-password"
}
}
}
Status: 204 No Content
GET /v1/filters/:name
{
"data": {
"attributes": {
"filter_diagnostics": null,
"module": "qlafilter",
"parameters": {
"append": false,
"duration_unit": "ms",
"exclude": null,
"filebase": "/tmp/qla.log",
"flush": true,
"log_data": "date,user,query",
"log_type": "unified",
"match": null,
"module": "qlafilter",
"newline_replacement": " ",
"options": "",
"separator": ",",
"source": null,
"source_exclude": null,
"source_match": null,
"use_canonical_form": false,
"user": null,
"user_exclude": null,
"user_match": null
},
"source": {
"file": "/etc/maxscale.cnf",
"type": "static"
}
},
"id": "QLA",
"links": {
"self": "http://localhost:8989/v1/filters/QLA/"
},
"relationships": {
"services": {
"data": [
{
"id": "Read-Connection-Router",
"type": "services"
}
],
"links": {
"related": "http://localhost:8989/v1/services/",
"self": "http://localhost:8989/v1/filters/QLA/relationships/services/"
}
}
},
"type": "filters"
},
"links": {
"self": "http://localhost:8989/v1/filters/QLA/"
}
}
GET /v1/filters
{
"data": [
{
"attributes": {
"filter_diagnostics": null,
"module": "qlafilter",
"parameters": {
"append": false,
"duration_unit": "ms",
"exclude": null,
"filebase": "/tmp/qla.log",
"flush": true,
"log_data": "date,user,query",
"log_type": "unified",
"match": null,
"module": "qlafilter",
"newline_replacement": " ",
"options": "",
"separator": ",",
"source": null,
"source_exclude": null,
"source_match": null,
"use_canonical_form": false,
"user": null,
"user_exclude": null,
"user_match": null
},
"source": {
"file": "/etc/maxscale.cnf",
"type": "static"
}
},
"id": "QLA",
"links": {
"self": "http://localhost:8989/v1/filters/QLA/"
},
"relationships": {
"services": {
"data": [
{
"id": "Read-Connection-Router",
"type": "services"
}
],
"links": {
"related": "http://localhost:8989/v1/services/",
"self": "http://localhost:8989/v1/filters/QLA/relationships/services/"
}
}
},
"type": "filters"
},
{
"attributes": {
"module": "hintfilter",
"parameters": {
"module": "hintfilter"
},
"source": {
"file": "/etc/maxscale.cnf",
"type": "static"
}
},
"id": "Hint",
"links": {
"self": "http://localhost:8989/v1/filters/Hint/"
},
"relationships": {
"services": {
"data": [
{
"id": "Read-Connection-Router",
"type": "services"
}
],
"links": {
"related": "http://localhost:8989/v1/services/",
"self": "http://localhost:8989/v1/filters/Hint/relationships/services/"
}
}
},
"type": "filters"
}
],
"links": {
"self": "http://localhost:8989/v1/filters/"
}
}
POST /v1/filters
{
"data": {
"id": "test-filter", // Name of the filter
"type": "filters",
"attributes": {
"module": "qlafilter", // The filter uses the qlafilter module
"parameters": { // Filter parameters
"filebase": "/tmp/qla.log"
}
}
}
}
PATCH /v1/filters/:name
{
"data": {
"attributes": {
"parameters": {
"match": ".*users.*"
}
}
}
}
DELETE /v1/filters/:filter
%%
# options
regex_grammar: Native
case_sensitive: true
what_if: false
continue_if_matched: false
ignore_whitespace: true
%
# match template
@{1:^}select @{2} from my_table where id = @{3}
%
# replace template
select @{2} from my_table where id = @{3}
%%
# use default options by leaving this blank
%
@{1:^}select count(distinct @{2}) from @{3}
%
select count(*) from (select distinct @{1} from @{2}) as t123
Input: select count(distinct author) from books where entity != "AI"
Rewritten: select count(*) from (select distinct author from books where entity != "AI") as t123
%%
%
from mytable
%
from mytable force index (myindex)
Input: select name from mytable where id=42
Rewritten: select name from mytable force index (myindex) where id=42
%%
%
@{1:^}select @{2} from mytable
%
select @{2} from mytable force index (myindex)
%%
%
@{1:^}select zip_code from address_book where str_id = @{1:["]}@{2:[[:digit:]]+}@{3:["]}
%
select zip_code from address_book where id = @{2}
Input: select zip_code from address_book where str_id = "1234"
Rewritten: select zip_code from address_book where id = 1234
%%
regex_grammar: EPosix
continue_if_matched: true
%
wrong_table_name
%
correct_table_name
%%
regex_grammar: EPosix
%
wrong_column_name
%
correct_column_name
[Rewrite]
type = filter
module = rewritefilter
template_file = /path/to/template_file.rf
...
[Router]
type=service
...
filters=Rewrite
%%
options
%
match template
%
replace template
case_sensitive: true
%%
case_sensitive: false
%
@{1:^}select @{2}
from mytable
where user = @{3}
%
select @{2} from mytable where user = @{3}
and @{3} in (select user from approved_users)
{ "templates" :
[
{
"case_sensitive" : false,
"match_template" : "@{1:^}select @{2} from mytable where user = @{3}",
"replace_template" : "select @{2} from mytable where user = @{3}
and @{3} in (select user from approved_users)"
}
]
}
maxctrl alter filter Rewrite log_replacement=false
[DataMartFilter]
type=filter
module=tee
target=DataMart
[Data-Service]
type=service
router=readconnroute
servers=server1
user=myuser
password=mypasswd
filters=DataMartFilter
match=/insert.*into.*order*/
exclude=/select.*from.*t1/
options=case,extended
source=127.0.0.1
user=john
[Orders]
type=service
router=readconnroute
servers=server1, server2, server3, server4
user=massi
password=6628C50E07CCE1F0392EDEEB9D1203F3
filters=ReplicateOrders
[ReplicateOrders]
type=filter
module=tee
target=DataMart
match=insert[ ]*into[ ]*orders
[DataMart]
type=service
router=readconnroute
servers=datamartserver
user=massi
password=6628C50E07CCE1F0392EDEEB9D1203F3
filters=QLA-DataMart
[QLA-DataMart]
type=filter
module=qlafilter
options=/var/log/DataMart/InsertsLog
[Orders-Listener]
type=listener
target=Orders
port=4011
[DataMart-Listener]
type=listener
target=DataMart
port=4012
[server1]
type=server
address=192.168.121.51
port=3306
[server2]
...
[server3]
...
[server4]
...
[TheMonitor]
type=monitor
module=mariadbmon
servers=server1,server2,server3,server4
...
$ maxctrl list servers
┌─────────┬─────────────────┬──────┬─────────────┬─────────────────┐
│ Server │ Address │ Port │ Connections │ State │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server1 │ 192.168.121.51 │ 3306 │ 0 │ Master, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server2 │ 192.168.121.190 │ 3306 │ 0 │ Slave, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server3 │ 192.168.121.112 │ 3306 │ 0 │ Slave, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server4 │ 192.168.121.201 │ 3306 │ 0 │ Slave, Running │
└─────────┴─────────────────┴──────┴─────────────┴─────────────────┘
$ maxctrl list servers
┌─────────┬─────────────────┬──────┬─────────────┬────────────────┐
│ Server │ Address │ Port │ Connections │ State │
├─────────┼─────────────────┼──────┼─────────────┼────────────────┤
│ server1 │ 192.168.121.51 │ 3306 │ 0 │ Down │
├─────────┼─────────────────┼──────┼─────────────┼────────────────┤
│ server2 │ 192.168.121.190 │ 3306 │ 0 │ Slave, Running │
├─────────┼─────────────────┼──────┼─────────────┼────────────────┤
│ server3 │ 192.168.121.112 │ 3306 │ 0 │ Slave, Running │
├─────────┼─────────────────┼──────┼─────────────┼────────────────┤
│ server4 │ 192.168.121.201 │ 3306 │ 0 │ Slave, Running │
└─────────┴─────────────────┴──────┴─────────────┴────────────────┘
$ maxctrl call command mariadbmon failover TheMonitor
OK
$ maxctrl list servers
┌─────────┬─────────────────┬──────┬─────────────┬─────────────────┐
│ Server │ Address │ Port │ Connections │ State │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server1 │ 192.168.121.51 │ 3306 │ 0 │ Down │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server2 │ 192.168.121.190 │ 3306 │ 0 │ Master, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server3 │ 192.168.121.112 │ 3306 │ 0 │ Slave, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server4 │ 192.168.121.201 │ 3306 │ 0 │ Slave, Running │
└─────────┴─────────────────┴──────┴─────────────┴─────────────────┘
$ maxctrl list servers
┌─────────┬─────────────────┬──────┬─────────────┬─────────────────┐
│ Server │ Address │ Port │ Connections │ State │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server1 │ 192.168.121.51 │ 3306 │ 0 │ Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server2 │ 192.168.121.190 │ 3306 │ 0 │ Master, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server3 │ 192.168.121.112 │ 3306 │ 0 │ Slave, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server4 │ 192.168.121.201 │ 3306 │ 0 │ Slave, Running │
└─────────┴─────────────────┴──────┴─────────────┴─────────────────┘
[TheMonitor]
type=monitor
module=mariadbmon
servers=server1,server2,server3,server4
auto_failover=true
...
$ maxctrl list servers
┌─────────┬─────────────────┬──────┬─────────────┬─────────────────┐
│ Server │ Address │ Port │ Connections │ State │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server1 │ 192.168.121.51 │ 3306 │ 0 │ Master, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server2 │ 192.168.121.190 │ 3306 │ 0 │ Slave, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server3 │ 192.168.121.112 │ 3306 │ 0 │ Slave, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server4 │ 192.168.121.201 │ 3306 │ 0 │ Slave, Running │
└─────────┴─────────────────┴──────┴─────────────┴─────────────────┘
$ maxctrl list servers
┌─────────┬─────────────────┬──────┬─────────────┬────────────────────────┐
│ Server │ Address │ Port │ Connections │ State │
├─────────┼─────────────────┼──────┼─────────────┼────────────────────────┤
│ server1 │ 192.168.121.51 │ 3306 │ 0 │ Down │
├─────────┼─────────────────┼──────┼─────────────┼────────────────────────┤
│ server2 │ 192.168.121.190 │ 3306 │ 0 │ Master, Slave, Running │
├─────────┼─────────────────┼──────┼─────────────┼────────────────────────┤
│ server3 │ 192.168.121.112 │ 3306 │ 0 │ Slave, Running │
├─────────┼─────────────────┼──────┼─────────────┼────────────────────────┤
│ server4 │ 192.168.121.201 │ 3306 │ 0 │ Slave, Running │
└─────────┴─────────────────┴──────┴─────────────┴────────────────────────┘
[TheMonitor]
type=monitor
module=mariadbmon
servers=server1,server2,server3,server4
auto_rejoin=true
...
$ maxctrl list servers
┌─────────┬─────────────────┬──────┬─────────────┬─────────────────┐
│ Server │ Address │ Port │ Connections │ State │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server1 │ 192.168.121.51 │ 3306 │ 0 │ Master, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server2 │ 192.168.121.190 │ 3306 │ 0 │ Slave, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server3 │ 192.168.121.112 │ 3306 │ 0 │ Slave, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server4 │ 192.168.121.201 │ 3306 │ 0 │ Slave, Running │
└─────────┴─────────────────┴──────┴─────────────┴─────────────────┘
$ maxctrl list servers
┌─────────┬─────────────────┬──────┬─────────────┬─────────────────┐
│ Server │ Address │ Port │ Connections │ State │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server1 │ 192.168.121.51 │ 3306 │ 0 │ Down │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server2 │ 192.168.121.190 │ 3306 │ 0 │ Master, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server3 │ 192.168.121.112 │ 3306 │ 0 │ Slave, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server4 │ 192.168.121.201 │ 3306 │ 0 │ Slave, Running │
└─────────┴─────────────────┴──────┴─────────────┴─────────────────┘
$ maxctrl list servers
┌─────────┬─────────────────┬──────┬─────────────┬─────────────────┐
│ Server │ Address │ Port │ Connections │ State │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server1 │ 192.168.121.51 │ 3306 │ 0 │ Slave, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server2 │ 192.168.121.190 │ 3306 │ 0 │ Master, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server3 │ 192.168.121.112 │ 3306 │ 0 │ Slave, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server4 │ 192.168.121.201 │ 3306 │ 0 │ Slave, Running │
└─────────┴─────────────────┴──────┴─────────────┴─────────────────┘
$ maxctrl call command mariadbmon switchover TheMonitor server1 server2
OK
$ maxctrl list servers
┌─────────┬─────────────────┬──────┬─────────────┬─────────────────┐
│ Server │ Address │ Port │ Connections │ State │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server1 │ 192.168.121.51 │ 3306 │ 0 │ Master, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server2 │ 192.168.121.190 │ 3306 │ 0 │ Slave, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server3 │ 192.168.121.112 │ 3306 │ 0 │ Slave, Running │
├─────────┼─────────────────┼──────┼─────────────┼─────────────────┤
│ server4 │ 192.168.121.201 │ 3306 │ 0 │ Slave, Running │
└─────────┴─────────────────┴──────┴─────────────┴─────────────────┘
# Install MaxScale
apt update
apt -y install sudo curl
curl -LsS https://r.mariadb.com/downloads/mariadb_repo_setup | sudo bash
apt -y install maxscale
-- Create the user for the service
-- https://mariadb.com/kb/en/mariadb-maxscale-2308-authentication-modules/#required-grants
CREATE USER 'service_user'@'%' IDENTIFIED BY 'secret';
GRANT SELECT ON mysql.* TO 'service_user'@'%';
GRANT SHOW DATABASES ON *.* TO 'service_user'@'%';
-- Create the user for the monitor
-- https://mariadb.com/kb/en/mariadb-maxscale-2308-galera-monitor/#required-grants
CREATE USER 'monitor_user'@'%' IDENTIFIED BY 'secret';
GRANT REPLICATION CLIENT ON *.* TO 'monitor_user'@'%';
-- Create the application user
-- https://mariadb.com/kb/en/mariadb-maxscale-2308-authentication-modules/#limitations-and-troubleshooting
CREATE USER app_user@'%' IDENTIFIED BY 'secret';
GRANT SELECT, INSERT, UPDATE, DELETE ON *.* TO app_user@'%';
CREATE DATABASE IF NOT EXISTS customer_01;
CREATE TABLE IF NOT EXISTS customer_01.accounts(id INT, account_type INT, account_name VARCHAR(255));
INSERT INTO customer_01.accounts VALUES (1, 1, 'foo');
-- The shared schema that's on all shards
CREATE DATABASE IF NOT EXISTS shared_info;
CREATE TABLE IF NOT EXISTS shared_info.account_types(account_type INT, type_name VARCHAR(255));
INSERT INTO shared_info.account_types VALUES (1, 'admin'), (2, 'user');
CREATE DATABASE IF NOT EXISTS customer_02;
CREATE TABLE IF NOT EXISTS customer_02.accounts(id INT, account_type INT, account_name VARCHAR(255));
INSERT INTO customer_02.accounts VALUES (2, 2, 'bar');
-- The shared schema that's on all shards
CREATE DATABASE IF NOT EXISTS shared_info;
CREATE TABLE IF NOT EXISTS shared_info.account_types(account_type INT, type_name VARCHAR(255));
INSERT INTO shared_info.account_types VALUES (1, 'admin'), (2, 'user');
[db-01]
type=server
address=192.168.0.102
port=3306
[db-02]
type=server
address=192.168.0.103
port=3306
[Sharded-Service]
type=service
router=schemarouter
targets=db-02,db-01
user=service_user
password=secret
ignore_tables_regex=.*
[Sharded-Service-Listener]
type=listener
service=Sharded-Service
port=4000
[Shard-Monitor]
type=monitor
module=galeramon
servers=db-02,db-01
user=monitor_user
password=secret
[db-01]
type=server
address=192.168.0.102
port=3306
[db-02]
type=server
address=192.168.0.103
port=3306
[Sharded-Service]
type=service
router=schemarouter
targets=db-02,db-01
user=service_user
password=secret
ignore_tables_regex=.*
[Sharded-Service-Listener]
type=listener
service=Sharded-Service
protocol=MariaDBClient
port=4000
[Shard-Monitor]
type=monitor
module=galeramon
servers=db-02,db-01
user=monitor_user
password=secret
systemctl start maxscale.service
$ mariadb -A -u app_user -psecret -h 127.0.0.1 -P 4000
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 10.11.7-MariaDB-1:10.11.7+maria~ubu2004-log mariadb.org binary distribution
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> USE customer_01;
Database changed
MariaDB [customer_01]> SELECT c.account_name, c.account_type, s.type_name FROM accounts c
-> JOIN shared_info.account_types s ON (c.account_type = s.account_type);
+--------------+--------------+-----------+
| account_name | account_type | type_name |
+--------------+--------------+-----------+
| foo | 1 | admin |
+--------------+--------------+-----------+
1 row in set (0.001 sec)
MariaDB [customer_01]> USE customer_02;
Database changed
MariaDB [customer_02]> SELECT c.account_name, c.account_type, s.type_name FROM accounts c
-> JOIN shared_info.account_types s ON (c.account_type = s.account_type);
+--------------+--------------+-----------+
| account_name | account_type | type_name |
+--------------+--------------+-----------+
| bar | 2 | user |
+--------------+--------------+-----------+
1 row in set (0.000 sec)
MariaDB [(none)]> SELECT c.account_name, c.account_type, s.type_name FROM customer_01.accounts c
-> JOIN shared_info.account_types s ON (c.account_type = s.account_type);
+--------------+--------------+-----------+
| account_name | account_type | type_name |
+--------------+--------------+-----------+
| foo | 1 | admin |
+--------------+--------------+-----------+
1 row in set (0.001 sec)
MariaDB [(none)]> SELECT c.account_name, c.account_type, s.type_name FROM customer_02.accounts c
-> JOIN shared_info.account_types s ON (c.account_type = s.account_type);
+--------------+--------------+-----------+
| account_name | account_type | type_name |
+--------------+--------------+-----------+
| bar | 2 | user |
+--------------+--------------+-----------+
1 row in set (0.001 sec)
MariaDB [(none)]> SELECT * FROM customer_01.accounts UNION SELECT * FROM customer_02.accounts;
ERROR 1146 (42S02): Table 'customer_01.accounts' doesn't exist
MariaDB [(none)]> USE customer_01;
Database changed
MariaDB [customer_01]> SELECT * FROM customer_01.accounts UNION SELECT * FROM customer_02.accounts;
ERROR 1146 (42S02): Table 'customer_02.accounts' doesn't exist
MariaDB [customer_01]> USE customer_02;
Database changed
MariaDB [customer_02]> SELECT * FROM customer_01.accounts UNION SELECT * FROM customer_02.accounts;
ERROR 1146 (42S02): Table 'customer_01.accounts' doesn't exist
The top filter is a filter module for MariaDB MaxScale that monitors every SQL statement that passes through the filter. It measures the duration of each statement, i.e. the time between the statement being sent and the first result being returned. The top N times are kept along with the SQL text itself, and a list sorted by execution time is written to a file when the client session is closed.
Example minimal configuration:
The top filter has one mandatory parameter, filebase, and a number of optional
parameters.
filebase
Type: string
Mandatory: Yes
Dynamic: Yes
The basename of the output file created for each session. The session ID is added to the filename for each file written. This is a mandatory parameter.
The filebase may also be set as a filter option, but that mechanism is superseded by the parameter. If both are set, the parameter setting is used and the filter option is ignored.
count
Type: number
Mandatory: No
Dynamic: Yes
Default: 10
The number of SQL statements to store and report upon.
match
Type: regex
Mandatory: No
Dynamic: Yes
Default: None
Only queries matching this regular expression are logged.
exclude
Type: regex
Mandatory: No
Dynamic: Yes
Default: None
Queries matching this regular expression are excluded from logging.
options
Type: enum
Mandatory: No
Dynamic: No
Values: ignorecase, case, extended
Default: case
Regular expression options
for match and exclude.
source
Type: string
Mandatory: No
Dynamic: Yes
Default: None
Defines an address that is used to match against the address from which the client connection to MariaDB MaxScale originates. Only sessions that originate from this address will be logged.
user
Type: string
Mandatory: No
Dynamic: Yes
Default: None
Defines a username that is used to match against the user from which the client connection to MariaDB MaxScale originates. Only sessions that are connected using this username will generate output.
You have an order system and believe the updates of the PRODUCTS table are causing some performance issues for the rest of your application. You would like to know which of the many updates in your application is causing the issue.
Add a filter with the following definition:
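A sketch of such a definition (the filter name, regular expressions and file path are illustrative):
[ProductsUpdateTop]
type=filter
module=topfilter
count=20
match=UPDATE.*PRODUCTS.*
exclude=UPDATE.*PRODUCTS_STOCK.*
filebase=/var/log/ProductsUpdate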
Note the exclude entry; it prevents updates to the PRODUCTS_STOCK table from being included in the report.
One of your application servers is slower than the rest. You believe it is related to database access, but you are not sure what is taking the time.
Add a filter with the following definition:
In order to produce a comparison with an unaffected application server you can also add a second filter as a control.
In the service definition, add both filters (see the sketch below).
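A sketch combining these steps (names, addresses and paths are illustrative):
[SlowAppTop]
type=filter
module=topfilter
count=20
source=192.168.0.32
filebase=/var/log/SlowAppServer

[ControlAppTop]
type=filter
module=topfilter
count=20
source=192.168.0.42
filebase=/var/log/ControlAppServer

[App-Service]
type=service
...
filters=SlowAppTop | ControlAppTop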
You will then have two sets of log files written, one profiling the top 20 queries of the slow application server and another giving the top 20 queries of your control application server. These two sets of files can then be compared to determine what, if anything, differs between the two.
The following is an example report for a number of fictitious queries executed against the employees example database available for MySQL.
This page is licensed: CC BY-SA / Gnu FDL
The Galera Monitor is a monitoring module for MaxScale that monitors a Galera cluster. It detects whether nodes are a part of the cluster and if they are in sync with the rest of the cluster. It can also assign primary and replica roles inside MaxScale, allowing Galera clusters to be used with modules designed for traditional primary-replica clusters.
By default, the Galera Monitor will choose the node with the lowest wsrep_local_index value as the primary. This means that two MaxScales
running on different servers will choose the same server as the primary.
The following WSREP variables are inspected by galeramon to see whether a node is
usable. If the node is not usable, it loses the Master and Slave labels and
will be in the Running state.
If wsrep_ready=0, the WSREP system is not yet ready and the Galera node
cannot accept queries.
If wsrep_desync=1 is set, the node is desynced and is not participating in
the Galera replication.
If wsrep_reject_queries=[ALL|ALL_KILL] is set, queries are refused and the
node is unusable.
With wsrep_sst_donor_rejects_queries=1, donor nodes reject
queries. Galeramon treats this the same as if wsrep_reject_queries=ALL was
set.
If wsrep_local_state is not 4 (or 2 with available_when_donor=true), the
node is not in the correct state and is not used.
MaxScale 2.4.0 added support for replicas replicating off of Galera nodes. If a
non-Galera server monitored by galeramon is replicating from a Galera node also
monitored by galeramon, it will be assigned the Slave, Running status as long
as the replication works. This allows read-scaleout with Galera servers without
increasing the size of the Galera cluster.
The Galera Monitor requires the REPLICA MONITOR grant to work:
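A sketch of the grant, assuming a hypothetical monitoring account monitor_user connecting from the MaxScale host:
GRANT REPLICA MONITOR ON *.* TO 'monitor_user'@'maxscalehost';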
With MariaDB Server 10.4 and earlier, REPLICATION CLIENT is required instead.
If set_donor_nodes is configured, the SUPER grant is required:
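Again as a sketch for the same hypothetical account:
GRANT SUPER ON *.* TO 'monitor_user'@'maxscalehost';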
A minimal configuration for a monitor requires a set of servers for monitoring and a username and a password to connect to these servers. The user requires the REPLICATION CLIENT privilege to successfully monitor the state of the servers.
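A minimal monitor section sketch (server names and credentials are illustrative):
[Galera-Monitor]
type=monitor
module=galeramon
servers=node-1,node-2,node-3,node-4
user=monitor_user
password=monitor_pw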
For a list of optional parameters that all monitors support, read the Monitor Common document.
These are optional parameters specific to the Galera Monitor.
disable_master_failback
Type: boolean
Default: false
Dynamic: Yes
If a node marked as primary inside MaxScale happens to fail and the primary
status is assigned to another node MaxScale will normally return the primary
status to the original node after it comes back up. With this option enabled, if
the primary status is assigned to a new node it will not be reassigned to the
original node for as long as the new primary node is running. In this case the Master Stickiness status bit is set, which will be visible in the maxctrl list servers output.
available_when_donor
Type: boolean
Default: false
Dynamic: Yes
This option allows Galera nodes to be used normally when they are donors in an
SST operation when the SST method is non-blocking
(e.g. wsrep_sst_method=mariadb-backup).
Normally when an SST is performed, both participating nodes lose their Synced, Master or Slave statuses. When this option is enabled, the donor is treated as
if it was a normal member of the cluster (i.e. wsrep_local_state = 4). This is
especially useful if the cluster drops down to one node and an SST is required
to increase the cluster size.
The current list of non-blocking SST
methods are xtrabackup, xtrabackup-v2 and mariadb-backup. Read the
documentation for more details.
disable_master_role_setting
Type: boolean
Default: false
Dynamic: Yes
This disables the assignment of primary and replica roles to the Galera cluster nodes. If this option is enabled, Synced is the only status assigned by this monitor.
use_priority
Type: boolean
Default: false
Dynamic: Yes
Enable interaction with server priorities. This will allow the monitor to deterministically pick the write node for the monitored Galera cluster and will allow for controlled node replacement.
root_node_as_master
Type: boolean
Default: false
Dynamic: Yes
This option controls whether the write primary Galera node requires a wsrep_local_index value of 0. This option was introduced in MaxScale 2.1.0 and it is disabled by default in versions 2.1.5 and newer. In versions 2.1.4 and older, the option was enabled by default.
A Galera cluster will always have a node which has a wsrep_local_index value of 0. Based on this information, multiple MaxScale instances can always pick the same node for writes.
If the root_node_as_master option is disabled for galeramon, the node with the
lowest index will always be chosen as the primary. If it is enabled, only the
node with a wsrep_local_index value of 0 can be chosen as the primary.
This parameter can work with disable_master_failback but using them together
is not advisable: the intention of root_node_as_master is to make sure that
all MaxScale instances that are configured to use the same Galera cluster will
send writes to the same node. If disable_master_failback is enabled, this is
no longer true if the Galera cluster reorganizes itself in a way that a
different node gets the node index 0, writes would still be going to the old
node that previously had the node index 0. A restart of one of the MaxScales or
a new MaxScale joining the cluster will cause writes to be sent to the wrong
node, thus increasing the rate of deadlock errors and causing sub-optimal performance.
set_donor_nodes
Type: boolean
Default: false
Dynamic: Yes
This option controls whether the global variable wsrep_sst_donor should be set in each cluster node with 'slave' status. The variable contains a list of replica servers, automatically sorted, with possible primary candidates at its end.
The sorting is based either on wsrep_local_index or node server priority depending on the value of use_priority option. If no server has priority defined the sorting switches to wsrep_local_index. Node names are collected by fetching the result of the variable wsrep_node_name.
Example of variable being set in all replica nodes, assuming three nodes:
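A rough sketch of the statement galeramon issues on a replica node, assuming hypothetical node names with the primary candidate last:
SET GLOBAL wsrep_sst_donor = "node-2,node-1";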
Note: in order to set the global variable wsrep_sst_donor, proper privileges are required for the monitor user that connects to cluster nodes. This option is disabled by default and was introduced in MaxScale 2.1.0.
If the use_priority option is set and a server is configured with the priority=<int> parameter, galeramon will use that as the basis on which the
primary node is chosen. This requires the disable_master_role_setting to be
undefined or disabled. The server with the lowest positive value of priority
will be chosen as the primary node when a replacement Galera node is promoted to
a primary server inside MaxScale. If all candidate servers have the same
priority, the order of the servers in the servers parameter dictates which is
chosen as the primary.
Nodes with a negative value (priority < 0) will never be chosen as the
primary. This allows you to mark some servers as permanent replicas by assigning a
non-positive value into priority. Nodes with the default priority of 0 are
only selected if no nodes with higher priority are present and the normal node
selection rules apply to them (i.e. selection is based on wsrep_local_index).
Here is an example.
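A sketch of the server definitions used in this example (addresses are hypothetical):
[node-1]
type=server
address=192.168.122.101
port=3306
priority=1

[node-2]
type=server
address=192.168.122.102
port=3306
priority=3

[node-3]
type=server
address=192.168.122.103
port=3306
priority=2

[node-4]
type=server
address=192.168.122.104
port=3306
priority=-1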
In this example node-1 is always used as the primary if available. If node-1
is not available, then the next node with the highest priority rank is used. In
this case it would be node-3. If both node-1 and node-3 were down, then node-2 would be used. Because node-4 has a value of -1 in priority, it
will never be the primary. Nodes without priority parameter are considered as
having a priority of 0 and will be used only if all nodes with a positive priority value are not available.
With priority ranks you can control the order in which MaxScale chooses the primary node. This will allow for a controlled failure and replacement of nodes.
This page is licensed: CC BY-SA / Gnu FDL
The KafkaImporter module reads messages from Kafka and streams them into a MariaDB server. The messages are inserted into a table designated by either the topic name or the message key (see table_name_in for details). By default the table will be automatically created with the following SQL:
The payload of the message is inserted into the data field from which the id
field is calculated. The payload must be a valid JSON object whose _id field, if present, must either be unique or a JSON
null value. This is similar to the MongoDB document format where the _id field is
the primary key of the document collection.
If a message is read from Kafka and the insertion into the table fails due to a
violation of one of the constraints, the message is ignored. Similarly, messages
with duplicate _id value are also ignored: this is done to avoid inserting the
same document multiple times whenever the connection to either Kafka or MariaDB
is lost.
The limitations on the data can be removed by either creating the table before
the KafkaImporter is started, in which case the CREATE TABLE IF NOT EXISTS
does nothing, or by altering the structure of the existing table. The minimum
requirement that must be met is that the table contains the data field into
which string values can be inserted.
The database server where the data is inserted is chosen from the set of servers
available to the service. The first server labeled as the Master with the best
rank will be chosen. This means that a monitor must be configured for the
MariaDB server where the data is to be inserted.
In MaxScale versions 21.06.18, 22.08.15, 23.02.12, 23.08.8, 24.02.4 and 24.08.2
the _id field is not required to be present. Older versions of MaxScale used
the following SQL where the _id field was mandatory:
The user defined by the user parameter of the service must have INSERT and CREATE privileges on all tables that are created.
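A minimal service sketch (broker addresses, the monitored server name and the credentials are illustrative):
[Kafka-Importer]
type=service
router=kafkaimporter
servers=server1
user=maxuser
password=maxpwd
bootstrap_servers=kafka-1:9092,kafka-2:9092
topics=my_db.my_table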
bootstrap_servers
Type: string
Mandatory: Yes
Dynamic: Yes
The list of Kafka brokers as a CSV list in host:port format.
topics
Type: stringlist
Mandatory: Yes
Dynamic: Yes
The comma separated list of topics to subscribe to.
batch_size
Type: count
Mandatory: No
Dynamic: Yes
Default: 100
Maximum number of uncommitted records. The KafkaImporter will buffer records into batches and commit them once either enough records are gathered (controlled by this parameter) or when the KafkaImporter goes idle. Any uncommitted records will be read again if a reconnection to either Kafka or MariaDB occurs.
kafka_sasl_mechanism
Type: enum
Mandatory: No
Dynamic: Yes
Values: PLAIN, SCRAM-SHA-256, SCRAM-SHA-512
Default: PLAIN
SASL mechanism to use. The Kafka broker must be configured with the same authentication scheme.
kafka_sasl_user
Type: string
Mandatory: No
Dynamic: Yes
Default: ""
SASL username used for authentication. If this parameter is defined, kafka_sasl_password must also be provided.
kafka_sasl_password
Type: string
Mandatory: No
Dynamic: Yes
Default: ""
SASL password for the user. If this parameter is defined, kafka_sasl_user must
also be provided.
kafka_ssl
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
Enable SSL for Kafka connections.
kafka_ssl_ca
Type: path
Mandatory: No
Dynamic: Yes
Default: ""
SSL Certificate Authority file in PEM format. If this parameter is not defined, the system default CA certificate is used.
kafka_ssl_cert
Type: path
Mandatory: No
Dynamic: Yes
Default: ""
SSL public certificate file in PEM format. If this parameter is defined, kafka_ssl_key must also be provided.
kafka_ssl_key
Type: path
Mandatory: No
Dynamic: Yes
Default: ""
SSL private key file in PEM format. If this parameter is defined, kafka_ssl_cert must also be provided.
table_name_in
Type: enum
Mandatory: No
Dynamic: Yes
Values: topic, key
Default: topic
The Kafka message part that is used to locate the table to insert the data into.
Enumeration Values:
topic: The topic name is used as the fully qualified table name.
key: The message key is used as the fully qualified table name. If the Kafka
message does not have a key, the message is ignored.
For example, all messages with a fully qualified table name of my_db.my_table
will be inserted into the table my_table located in the my_db database. If
the table or database names have special characters that must be escaped to make
them valid identifiers, the name must also contain those escape characters. For
example, to insert into a table named my table in the database my database,
the name would be:
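Assuming standard MariaDB backtick quoting of identifiers, that would be:
`my database`.`my table`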
timeout
Type: duration
Mandatory: No
Dynamic: Yes
Default: 5000ms
Timeout for both Kafka and MariaDB network communication.
engine
Type: string
Default: InnoDB
Mandatory: No
Dynamic: Yes
The storage engine used for tables that are created by the KafkaImporter.
This defines the ENGINE table option and must be the name of a valid storage
engine in MariaDB. When the storage engine is something other than InnoDB, the
table is created without the generated column and the check constraints.
This is done to avoid conflicts where the custom engine does not support all the features that InnoDB supports.
The backend servers used by this service must be MariaDB version 10.2 or newer.
This page is licensed: CC BY-SA / Gnu FDL
Pluggable authentication module (PAM) is a general purpose authentication API.
An application using PAM can authenticate a user without knowledge about the
underlying authentication implementation. The actual authentication scheme is
defined in the operating system PAM config (e.g. /etc/pam.d/), and can be
quite elaborate. MaxScale supports a very limited form of the PAM protocol,
which this document details.
The MaxScale PAM module requires little configuration. All that is required is to change the listener authenticator module to "PAMAuth".
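A sketch of such a listener (the service name and port are illustrative):
[PAM-Listener]
type=listener
service=My-Service
port=4006
authenticator=PAMAuth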
MaxScale uses the PAM authenticator plugin to authenticate users with plugin set to "pam" in the mysql.user-table. The PAM service name of a user is read from the authentication_string-column. The matching PAM service in the operating system PAM config is used for authenticating the user. If the authentication_string for a user is empty, the fallback service "mysql" is used.
PAM service configuration is out of the scope of this document, see for more information. A simple service definition used for testing this module is below.
pam_use_cleartext_plugin
Type: boolean
Mandatory: No
Dynamic: No
Default: false
If enabled, MaxScale communicates with the client as if using mysql_clear_password. This setting has no effect on MaxScale-to-backend communication, which adapts to either "dialog" or "mysql_clear_password", depending on which one the backend suggests. This setting is meant to be used with the similarly named MariaDB Server setting.
pam_mode
Type: enum
Mandatory: No
Dynamic: No
Values: password, password_2FA, suid
This setting defines the authentication mode used. The following values are supported:
password Normal password-based authentication
password_2FA Password + 2FA-code based authentication
suid Authenticate using suid sandbox subprocess
If set to password_2FA, any users authenticating via PAM will be asked two passwords ("Password" and "Verification code") during login. MaxScale uses the normal password when either the local PAM api or a backend asks for "Password". MaxScale answers any other password prompt (e.g. "Verification code") with the second password. See the two-factor authentication section below for more details. Two-factor mode is incompatible with pam_use_cleartext_plugin.
If set to suid, MaxScale will launch a separate subprocess for every client to
handle pam authentication. This subprocess runs the binary maxscale_pam_auth_tool (installed in the binary directory), which calls the
system pam libraries. The binary is installed with the SUID bit set, which means
that it runs with root-privileges regardless of the user launching it. This
should bypass any file grant issues (e.g. reading etc/shadow) that may arise
with the password or password_2FA options. The suid-option may also
perform faster if many clients authenticate with pam simultaneously due
to better separation of clients. It may also resist buggy pam plugins crashing,
as the crash would be limited to the subprocess only. The MariaDB Server uses
a similar pam authentication scheme. suid-mode supports two-factor
authentication.
pam_backend_mapping
Type: enum
Mandatory: No
Dynamic: No
Values: none, mariadb
Defines backend authentication mapping, i.e. switch of authentication method between client-to-MaxScale and MaxScale-to-backend. Supported values:
none No mapping
mariadb Map users to normal MariaDB accounts
If set to "mariadb", MaxScale will authenticate clients to backends using
standard MariaDB authentication. Authentication to MaxScale itself still uses
PAM. MaxScale asks the local PAM system if the client username was mapped
to another username during authentication, and use the mapped username when
logging in to backends. Passwords for the mapped users can be given in a file,
see pam_mapped_pw_file below. If passwords are not given, MaxScale will try to
authenticate without a password. Because of this, normal PAM users and mapped
users cannot be used on the same listener.
Because the client still needs to authenticate to MaxScale normally, an anonymous user may be required. If the backends do not allow such a user, one can be manually added using the service setting .
To map usernames, the PAM service needs to use a module such as pam_user_map.so. This module is not a standard Linux component and needs to be installed separately. It is included in recent MariaDB Server packages and can also be compiled from source; see its documentation for more information on how to configure the module. If the goal is to only map users from PAM to MariaDB in MaxScale, then configuring user mapping on just the machine running MaxScale is enough.
Instead of using pam_backend_mapping, consider using the listener setting ,
as it is easier to configure. pam_backend_mapping should only be used when
the user mapping needs to be defined by pam.
pam_mapped_pw_file
Type: path
Mandatory: No
Dynamic: No
Default: None
Path to a json-text file with user passwords. Default value is empty, which disables the feature.
This feature only works together with pam_backend_mapping=mariadb. The file is
only read during listener creation (typically MaxScale start) or when a listener
is modified during runtime. The file should contain passwords for the mapped
users. When a client is authenticating, MaxScale searches the password data for a
matching username. If one is found, MaxScale uses the supplied password when
logging in to backends. Otherwise, MaxScale tries to authenticate without a
password.
One array, "users_and_passwords", is read from the file. Each array element in the array must define the following fields:
"user": String. Mapped client username.
"password": String. Backend server password. Can be encrypted with maxpasswd.
An example file is below.
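A sketch of such a file with one hypothetical mapped user (the password could also be encrypted with maxpasswd):
{
    "users_and_passwords": [
        {
            "user": "mapped_mariadb_user",
            "password": "s3cret"
        }
    ]
}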
When backend authenticator mapping is not in use
(authenticator_options=pam_backend_mapping=none), the PAM authenticator
supports a limited version of anonymous user mapping.
It requires less configuration but is also less accurate than proper mapping.
Anonymous mapping is enabled in MaxScale if a user with the following properties exists:
Empty username (e.g. ''@'%' or ''@'myhost.com')
plugin = 'pam'
Proxy grant is on (The query SHOW GRANTS FOR user@host; returns at least one
row with GRANT PROXY ON ...)
When the authenticator detects such users, anonymous account mapping is enabled
for the hosts of the anonymous users. To verify this, enable the info log
(log_info=1 in MaxScale config file). When a client is logging in using the
anonymous user account, MaxScale will log a message starting with "Found
matching anonymous user ...".
When mapping is on, the MaxScale PAM authenticator does not require client
accounts to exist in the mysql.user-table received from the backend. MaxScale
only requires that the hostname of the incoming client matches the host field of
one of the anonymous users (comparison performed using LIKE). If a match is
found, MaxScale attempts to authenticate the client to the local machine with
the username and password supplied. The PAM service used for authentication is
read from the authentication_string-field of the anonymous user. If
authentication was successful, MaxScale then uses the username and password to
log in to the backends.
Anonymous mapping is only attempted if the client username is not found in the mysql.user-table, as explained earlier. This means,
that if a user is found and the authentication fails, anonymous authentication
is not attempted even when it could use a different PAM service with a different
outcome.
Setting up PAM group mapping for the MariaDB server is a more involved process as the server requires details on which Unix user or group is mapped to which MariaDB user. See for more details. Performing all the steps in the guide also on the MaxScale machine is not required, as the MaxScale PAM plugin only checks that the client host matches an anonymous user and that the client (with the username and password it provided) can log into the local PAM configuration. If using normal password authentication, simply generating the Unix user and password should be enough.
The general PAM authentication scheme is difficult for a proxy such as MaxScale. An application using the PAM interface needs to define a conversation function to allow the OS PAM modules to communicate with the client, possibly exchanging multiple messages. This works when a client logs in to a normal server, but not with MaxScale since it needs to autonomously log into multiple backends. For MaxScale to successfully log into the servers, the messages and answers need to be predefined. The passwords given to MaxScale need to work as is when MaxScale logs into the backends. This requirement prevents the use of one-time passwords.
The MaxScale PAM authentication module supports two password modes. In normal mode, client authentication begins with MaxScale sending an AuthSwitchRequest packet. In addition to the command, the packet contains the client plugin name ("dialog" or "mysql_clear_password"), a message type byte (4) and the message "Password: ". In the next packet, the client should send the password, which MaxScale will forward to the PAM api running on the local machine. If the password is correct, an OK packet is sent to the client. If the local PAM api asks for additional credentials as is typical in two-factor authentication schemes, authentication fails. Informational messages such as password expiration notifications are allowed. These are simply printed to the log.
On the backend side, MaxScale expects the servers to act as MaxScale did towards the client. The servers should send an AuthSwitchRequest packet as defined above, MaxScale responds with the password received by the client authenticator and finally backend replies with OK. Informational messages from backends are only printed to the info-log.
MaxScale supports a limited form of two-factor authentication with the pam_mode=password_2FA option. Since MaxScale uses the 2FA-code given by the
client to log in to the local PAM api as well as all the backends, the code must
be reusable. This prevents the use of any kind of centrally checked one-use
codes. Time-based codes work, assuming the backends are checking the codes
independently of each other. Automatic reconnection features (e.g.
readwritesplit-router) will not work, as the code has likely changed since
original authentication.
Optionally, the PAM configuration on the backend servers can be weakened so that the servers only ask for the normal password. This way, MaxScale will check the 2FA-code of the incoming client, while MaxScale logs into the backends using only the password.
Due to technical reasons, MaxScale does not forward the password prompts from the PAM api to the client. MaxScale will always ask for "Password" and "Verification code", even if the PAM api asks for other items. This prevents the use of authentication schemes where a specific question must be answered (e.g. "Input code Nr. 5"). This is not a significant limitation, as such schemes would not work with backend servers anyway.
The MaxScale binary directory contains the test_pam_login executable. This simple program asks for a username, password and PAM service and then uses the given credentials to log in to the given service. test_pam_login uses the same code as MaxScale itself to communicate with the OS PAM interface and may be useful for diagnosing PAM login issues.
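A session with test_pam_login might look roughly like the following. The installation path and the exact prompts are assumptions; the program simply asks for the three values in order and reports whether the local PAM login succeeded.

# Hypothetical run of test_pam_login on the MaxScale host. The binary location
# depends on how MaxScale was installed.
sudo /usr/bin/test_pam_login
# The program then asks, roughly, for:
#   username:     appuser
#   password:     app_password
#   PAM service:  mysql
# and prints whether authentication against the local PAM configuration succeeded.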
This page is licensed: CC BY-SA / Gnu FDL
The mirror router is designed for data consistency and database behavior
verification during system upgrades. It allows statement duplication to multiple
servers in a manner similar to that of the , combined with the export of collected query metrics.
For each executed query the router exports a JSON object that describes the query results and has the following fields:
The objects in the results array describe an individual query result and have
the following fields:
main
Type: target
Mandatory: Yes
Dynamic: Yes
The main target from which results are returned to the client. This is a
mandatory parameter and must define one of the targets configured in the
targets parameter of the service.
If the connection to the main target cannot be created or is lost mid-session, the client connection will be closed. Connection failures to other targets are not fatal errors and any open connections to them will be closed. The router does not create new connections after the initial connections are created.
exporter
Type: enum
Mandatory: Yes
Dynamic: Yes
Values: log, file, kafka
The exporter where the data is exported. This is a mandatory parameter. Possible values are:
log
Exports metrics to MaxScale log on INFO level. No configuration parameters.
file
Exports metrics to a file. Configured with the file parameter.
file
Type: string
Default: No default value
Mandatory: No
Dynamic: Yes
The output file where the metrics will be written. The file must be writable by
the user that is running MaxScale, usually the maxscale user.
When the file parameter is altered at runtime, the old file is closed before
the new file is opened. This makes it a convenient way of rotating the file
where the metrics are exported. Note that the alteration must actually change
the file name for it to take effect.
This is a mandatory parameter when configured with exporter=file.
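Because file is dynamic, the rotation described above can be driven with maxctrl. A minimal sketch, assuming a mirror service named Mirror-Router as in the example configuration on this page; the target path is illustrative.

# Rotate the exported metrics file by altering the dynamic "file" parameter.
# The value must actually change for the old file to be closed.
maxctrl alter service Mirror-Router file=/tmp/Mirror-Router.$(date +%F).log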
kafka_broker
Type: string
Default: No default value
Mandatory: No
Dynamic: Yes
The Kafka broker list. Must be given as a comma-separated list of broker hosts
with optional ports in host:port format.
This is a mandatory parameter when configured with exporter=kafka.
kafka_topic
Type: string
Default: No default value
Mandatory: No
Dynamic: Yes
The Kafka topic where the metrics are sent.
This is a mandatory parameter when configured with exporter=kafka.
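As an illustration of the Kafka exporter parameters, the following sketch appends a mirror service that sends its metrics to Kafka. The broker addresses and topic name are assumptions; the target and credential names follow the example configuration on this page.

# A minimal sketch of a mirror service using the Kafka exporter.
# kafka1:9092, kafka2:9092 and the topic name are illustrative placeholders.
cat <<'EOF' | sudo tee -a /etc/maxscale.cnf
[Mirror-Kafka-Router]
type=service
router=mirror
user=maxuser
password=maxpwd
targets=server1,server2
main=server1
exporter=kafka
kafka_broker=kafka1:9092,kafka2:9092
kafka_topic=maxscale_mirror
EOF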
on_error
Type: enum
Default: ignore
Mandatory: No
Dynamic: Yes
What to do when a backend network connection fails. Accepted values are:
ignore
Ignore the failing backend if it's not the backend that the main parameter
points to.
close

Close the client connection if any backend connection fails.

This parameter was added in MaxScale 6.0. Older versions always ignored failing backends.
report
Type: enum
Default: always
Mandatory: No
Dynamic: Yes
When to report the result of the queries. Accepted values are:
always
Always report the result for all queries.
on_conflict
Only report when one or more backends return a conflicting result.
This parameter was added in MaxScale 6.0. Older versions always reported the result.
Broken network connections are not recreated.
Prepared statements are not supported.
Contents of non-SQL statements are not added to the exported metrics.
Data synchronization in dynamic environments (e.g. when replication is in use) is not guaranteed. This means that result mismatches can be reported when the data is only eventually consistent.
This page is licensed: CC BY-SA / Gnu FDL
A listener resource represents a listener of a service in MaxScale. All listeners point to a service in MaxScale.
A session is an abstraction of a client connection, any number of related backend connections, a router module session and possibly filter module sessions. Each session is created on a service and each service can have multiple sessions.
[MyLogFilter]
type=filter
module=topfilter
[Service]
type=service
router=readconnroute
servers=server1
user=myuser
password=mypasswd
filters=MyLogFilter

filebase=/tmp/SqlQueryLog
count=30
match=select.*from.*customer.*where
exclude=where
options=case,extended
source=127.0.0.1
user=john

[ProductsUpdateTop20]
type=filter
module=topfilter
count=20
match=UPDATE.*PRODUCTS.*WHERE
exclude=UPDATE.*PRODUCTS_STOCK.*WHERE
filebase=/var/logs/top/ProductsUpdate

[SlowAppServer]
type=filter
module=topfilter
count=20
source=192.168.0.32
filebase=/var/logs/top/SlowAppServer

[ControlAppServer]
type=filter
module=topfilter
count=20
source=192.168.0.42
filebase=/var/logs/top/ControlAppServer

[App-Service]
type=service
router=readconnroute
servers=server1
user=myuser
password=mypasswd
filters=SlowAppServer | ControlAppServer

-bash-4.1$ cat /var/logs/top/Employees-top-10.137
Top 10 longest running queries in session.
==========================================
Time (sec) | Query
-----------+-----------------------------------------------------------------
22.985 | select sum(salary), year(from_date) from salaries s, (select distinct year(from_date) as y1 from salaries) y where (makedate(y.y1, 1) between s.from_date and s.to_date) group by y.y1
5.304 | select d.dept_name as "Department", y.y1 as "Year", count(*) as "Count" from departments d, dept_emp de, (select distinct year(from_date) as y1 from dept_emp order by 1) y where d.dept_no = de.dept_no and (makedate(y.y1, 1) between de.from_date and de.to_date) group by y.y1, d.dept_name order by 1, 2
2.896 | select year(now()) - year(birth_date) as age, gender, avg(salary) as "Average Salary" from employees e, salaries s where e.emp_no = s.emp_no and ("1988-08-01" between from_date AND to_date) group by year(now()) - year(birth_date), gender order by 1,2
2.160 | select dept_name as "Department", sum(salary) / 12 as "Salary Bill" from employees e, departments d, dept_emp de, salaries s where e.emp_no = de.emp_no and de.dept_no = d.dept_no and ("1988-08-01" between de.from_date AND de.to_date) and ("1988-08-01" between s.from_date AND s.to_date) and s.emp_no = e.emp_no group by dept_name order by 1
0.845 | select dept_name as "Department", avg(year(now()) - year(birth_date)) as "Average Age", gender from employees e, departments d, dept_emp de where e.emp_no = de.emp_no and de.dept_no = d.dept_no and ("1988-08-01" between from_date AND to_date) group by dept_name, gender
0.668 | select year(hire_date) as "Hired", d.dept_name, count(*) as "Count" from employees e, departments d, dept_emp de where de.emp_no = e.emp_no and de.dept_no = d.dept_no group by d.dept_name, year(hire_date)
0.249 | select moves.n_depts As "No. of Departments", count(moves.emp_no) as "No. of Employees" from (select de1.emp_no as emp_no, count(de1.emp_no) as n_depts from dept_emp de1 group by de1.emp_no) as moves group by moves.n_depts order by 1
0.245 | select year(now()) - year(birth_date) as age, gender, count(*) as "Count" from employees group by year(now()) - year(birth_date), gender order by 1,2
0.179 | select year(hire_date) as "Hired", count(*) as "Count" from employees group by year(hire_date)
0.160 | select year(hire_date) - year(birth_date) as "Age", count(*) as Count from employees group by year(hire_date) - year(birth_date) order by 1
-----------+-----------------------------------------------------------------
Session started Wed Jun 18 18:41:03 2014
Connection from 127.0.0.1
Username massi
Total of 24 statements executed.
Total statement execution time 35.701 seconds
Average statement execution time 1.488 seconds
Total connection time 46.500 seconds
-bash-4.1$

CREATE USER 'maxscale'@'maxscalehost' IDENTIFIED BY 'maxscale-password';
GRANT REPLICA MONITOR ON *.* TO 'maxscale-user'@'maxscalehost';
GRANT REPLICATION CLIENT ON *.* TO 'maxscale-user'@'maxscalehost';
GRANT SUPER ON *.* TO 'maxscale'@'maxscalehost';

[Galera-Monitor]
type=monitor
module=galeramon
servers=server1,server2,server3
user=myuser
password=mypwd

SET GLOBAL wsrep_sst_donor = "galera001,galera000"

[node-1]
type=server
address=192.168.122.101
port=3306
priority=1
[node-2]
type=server
address=192.168.122.102
port=3306
priority=3
[node-3]
type=server
address=192.168.122.103
port=3306
priority=2
[node-4]
type=server
address=192.168.122.104
port=3306
priority=-1

CREATE TABLE IF NOT EXISTS my_table (
data JSON NOT NULL,
id VARCHAR(1024) AS (JSON_EXTRACT(data, '$._id')) UNIQUE KEY
);

CREATE TABLE IF NOT EXISTS my_table (
data LONGTEXT CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL,
id VARCHAR(1024) AS (JSON_EXTRACT(data, '$._id')) UNIQUE KEY,
CONSTRAINT data_is_json CHECK(JSON_VALID(data)),
CONSTRAINT id_is_not_null CHECK(JSON_EXTRACT(data, '$._id') IS NOT NULL)
);

`my database`.`my table`

CREATE TABLE IF NOT EXISTS my_table (data JSON NOT NULL);

kafka
Exports metrics to a Kafka broker. Configured with the kafka_broker and kafka_topic parameters.
Values: ignore, close
Values: always, on_conflict
query
The executed SQL if an SQL statement was executed
command
The SQL command
session
The connection ID of the session that executed the query
query_id
Query sequence number, starts from 1
results
Array of query result objects
target
The target where the query was executed
checksum
The CRC32 checksum of the result
rows
Number of returned rows
warnings
Number of returned warnings
duration
Query duration in milliseconds
type
Result type, one of ok, error or resultset
Get a single listener. The :name in the URI must be the name of a listener in MaxScale.
Response
Status: 200 OK
Get all listeners.
Response
Status: 200 OK
Creates a new listener. The request body must define the following fields.
data.id
Name of the listener
data.type
Type of the object, must be listeners
data.attributes.parameters.port OR data.attributes.parameters.socket
The TCP port or UNIX Domain Socket the listener listens on. Only one of the fields can be defined.
data.relationships.services.data
The service relationships data, must define a JSON object with an id value
that defines the service to use and a type value set to services.
The following is the minimal required JSON object for defining a new listener.
Refer to the Configuration Guide for a full list of listener parameters.
Response
Listener is created:
Status: 204 No Content
The request body must be a JSON object which represents a set of new definitions for the listener.
All parameters marked as modifiable at runtime can be modified. Currently, all
TLS/SSL parameters and the connection_init_sql_file and sql_mode parameters
can be modified at runtime.
Parameters that affect the network address or the port the listener listens on cannot be modified at runtime. To modify these parameters, recreate the listener, as shown in the sketch below.
Response
Listener is modified:
Status: 204 No Content
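One way to perform the recreation mentioned above is with maxctrl. In the following sketch the listener and service names as well as the new port are assumptions.

# Recreate a listener to change its port; address and port parameters cannot
# be modified in place.
maxctrl destroy listener RW-Split-Listener
maxctrl create listener RW-Split-Router RW-Split-Listener 4007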
The :name must be a valid listener name. When a listener is destroyed, the network port it listens on is available for reuse.
Response
Listener is destroyed:
Status: 204 No Content
Listener cannot be deleted:
Status: 400 Bad Request
Stops a started listener. When a listener is stopped, new connections are no longer accepted and are queued until the listener is started again.
Parameters
This endpoint supports the following parameters:
force=yes
Close all existing connections that were created through this listener.
Response
Listener is stopped:
Status: 204 No Content
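For example, a listener could be stopped over the REST API roughly as follows; the listener name and the default credentials are assumptions.

# Stop a listener and force-close the connections created through it.
curl -u admin:mariadb -X PUT '127.0.0.1:8989/v1/listeners/RW-Split-Listener/stop?force=yes'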
Starts a stopped listener.
Response
Listener is started:
Status: 204 No Content
This page is licensed: CC BY-SA / Gnu FDL
Get a single session. :id must be a valid session ID. The session ID is the same that is exposed to the client as the connection ID.
This endpoint also supports the rdns=true parameter, which instructs MaxScale to
perform reverse DNS on the client IP address. As this requires communicating with
an external server, the operation may be expensive.
Response
Status: 200 OK
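A request using the rdns parameter might look roughly like this; the session ID and the default credentials are assumptions.

# Fetch a single session and ask MaxScale to reverse-resolve the client IP.
curl -u admin:mariadb '127.0.0.1:8989/v1/sessions/1?rdns=true'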
Get all sessions.
Response
Status: 200 OK
The request body must be a JSON object which represents the new configuration of
the session. The :id must be a valid session ID that is active.
The log_debug, log_info, log_notice, log_warning and log_error boolean
parameters control whether the associated logging level is enabled:
The filters that a session uses can be updated by re-defining the filter relationship of the session. This causes new filter sessions to be opened immediately. The old filter sessions are closed and replaced with the new filter sessions the next time the session is idle. The order in which the filters are defined in the request body is the order in which the filters are installed, similar to how the filter relationship for services behaves.
Response
Session is modified:
Status: 204 No Content
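As a sketch, info-level logging could be enabled for a single session as follows; the session ID and the default credentials are assumptions.

# Enable info-level logging for session 1 by patching its parameters.
curl -u admin:mariadb -X PATCH -H 'Content-Type: application/json' \
  -d '{"data":{"attributes":{"parameters":{"log_info":true}}}}' \
  '127.0.0.1:8989/v1/sessions/1'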
This endpoint causes the session to re-read the configuration from the service. As a result of this, all backend connections will be closed and then opened again. All router and filter sessions will be created again which means that for modules that perform something whenever a new module session is opened, this behaves as if a new session was started.
This endpoint can be used to apply configuration changes that were done after the session was started. This can be useful for situations where the client connections live for a long time and connections are not recycled often enough.
Response
Session was restarted:
Status: 204 No Content
This endpoint does the same thing as the /v1/sessions/:id/restart endpoint
except that it applies to all sessions.
Response
Session was restarted:
Status: 204 No Content
This endpoint causes the session to be forcefully closed.
Request Parameters
This endpoint supports the following request parameters.
ttl
The time after which the session is killed. If this parameter is not given, the session is killed immediately. This can be used to give the session time to finish the work it is performing before the connection is closed.
Response
Session was killed:
Status: 204 No Content
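A sketch of a delayed kill, assuming the ttl value is given in seconds and using the default credentials:

# Kill session 1 after a grace period; the ttl unit is assumed to be seconds.
curl -u admin:mariadb -X DELETE '127.0.0.1:8989/v1/sessions/1?ttl=60'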
This page is licensed: CC BY-SA / Gnu FDL
[server1]
type=server
address=127.0.0.1
port=3000
[server2]
type=server
address=127.0.0.1
port=3001
[MariaDB-Monitor]
type=monitor
module=mariadbmon
servers=server1,server2
user=maxuser
password=maxpwd
monitor_interval=2s
[Mirror-Router]
type=service
router=mirror
user=maxuser
password=maxpwd
targets=server1,server2
main=server1
exporter=file
file=/tmp/Mirror-Router.log
[Mirror-Listener]
type=listener
service=Mirror-Router
port=3306

GET /v1/listeners/:name

{
"data": {
"attributes": {
"parameters": {
"MariaDBProtocol": {
"allow_replication": true
},
"address": "::",
"authenticator": null,
"authenticator_options": null,
"connection_init_sql_file": null,
"connection_metadata": [
"character_set_client=auto",
"character_set_connection=auto",
"character_set_results=auto",
"max_allowed_packet=auto",
"system_time_zone=auto",
"time_zone=auto",
"tx_isolation=auto",
"maxscale=auto"
],
"port": 4006,
"protocol": "MariaDBProtocol",
"proxy_protocol_networks": null,
"service": "RW-Split-Router",
"socket": null,
"sql_mode": "default",
"ssl": false,
"ssl_ca": null,
"ssl_cert": null,
"ssl_cert_verify_depth": 9,
"ssl_cipher": null,
"ssl_crl": null,
"ssl_key": null,
"ssl_verify_peer_certificate": false,
"ssl_verify_peer_host": false,
"ssl_version": "MAX",
"type": "listener",
"user_mapping_file": null
},
"source": {
"file": "/etc/maxscale.cnf",
"type": "static"
},
"state": "Running"
},
"id": "RW-Split-Listener",
"relationships": {
"services": {
"data": [
{
"id": "RW-Split-Router",
"type": "services"
}
],
"links": {
"related": "http://localhost:8989/v1/services/",
"self": "http://localhost:8989/v1/listeners/RW-Split-Listener/relationships/services/"
}
}
},
"type": "listeners"
},
"links": {
"self": "http://localhost:8989/v1/listeners/RW-Split-Listener/"
}
}

GET /v1/listeners

{
"data": [
{
"attributes": {
"parameters": {
"MariaDBProtocol": {
"allow_replication": true
},
"address": "::",
"authenticator": null,
"authenticator_options": null,
"connection_init_sql_file": null,
"connection_metadata": [
"character_set_client=auto",
"character_set_connection=auto",
"character_set_results=auto",
"max_allowed_packet=auto",
"system_time_zone=auto",
"time_zone=auto",
"tx_isolation=auto",
"maxscale=auto"
],
"port": 4006,
"protocol": "MariaDBProtocol",
"proxy_protocol_networks": null,
"service": "RW-Split-Router",
"socket": null,
"sql_mode": "default",
"ssl": false,
"ssl_ca": null,
"ssl_cert": null,
"ssl_cert_verify_depth": 9,
"ssl_cipher": null,
"ssl_crl": null,
"ssl_key": null,
"ssl_verify_peer_certificate": false,
"ssl_verify_peer_host": false,
"ssl_version": "MAX",
"type": "listener",
"user_mapping_file": null
},
"source": {
"file": "/etc/maxscale.cnf",
"type": "static"
},
"state": "Running"
},
"id": "RW-Split-Listener",
"relationships": {
"services": {
"data": [
{
"id": "RW-Split-Router",
"type": "services"
}
],
"links": {
"related": "http://localhost:8989/v1/services/",
"self": "http://localhost:8989/v1/listeners/RW-Split-Listener/relationships/services/"
}
}
},
"type": "listeners"
},
{
"attributes": {
"parameters": {
"MariaDBProtocol": {
"allow_replication": true
},
"address": "::",
"authenticator": null,
"authenticator_options": null,
"connection_init_sql_file": null,
"connection_metadata": [
"character_set_client=auto",
"character_set_connection=auto",
"character_set_results=auto",
"max_allowed_packet=auto",
"system_time_zone=auto",
"time_zone=auto",
"tx_isolation=auto",
"maxscale=auto"
],
"port": 4008,
"protocol": "MariaDBProtocol",
"proxy_protocol_networks": null,
"service": "Read-Connection-Router",
"socket": null,
"sql_mode": "default",
"ssl": false,
"ssl_ca": null,
"ssl_cert": null,
"ssl_cert_verify_depth": 9,
"ssl_cipher": null,
"ssl_crl": null,
"ssl_key": null,
"ssl_verify_peer_certificate": false,
"ssl_verify_peer_host": false,
"ssl_version": "MAX",
"type": "listener",
"user_mapping_file": null
},
"source": {
"file": "/etc/maxscale.cnf",
"type": "static"
},
"state": "Running"
},
"id": "Read-Connection-Listener",
"relationships": {
"services": {
"data": [
{
"id": "Read-Connection-Router",
"type": "services"
}
],
"links": {
"related": "http://localhost:8989/v1/services/",
"self": "http://localhost:8989/v1/listeners/Read-Connection-Listener/relationships/services/"
}
}
},
"type": "listeners"
}
],
"links": {
"self": "http://localhost:8989/v1/listeners/"
}
}

POST /v1/listeners

{
"data": {
"id": "my-listener",
"type": "listeners",
"attributes": {
"parameters": {
"port": 3306
}
},
"relationships": {
"services": {
"data": [
{"id": "RW-Split-Router", "type": "services"}
]
}
}
}
}

PATCH /v1/listeners/:name

DELETE /v1/listeners/:name

PUT /v1/listeners/:name/stop

PUT /v1/listeners/:name/start

GET /v1/sessions/:id

{
"data": {
"attributes": {
"client": {
"cipher": "",
"connection_attributes": {
"_client_name": "libmariadb",
"_client_version": "3.3.4",
"_os": "Linux",
"_pid": "502300",
"_platform": "x86_64",
"_server_host": "127.0.0.1"
},
"sescmd_history_len": 1,
"sescmd_history_stored_metadata": 0,
"sescmd_history_stored_responses": 1
},
"connected": "Fri, 05 Jan 2024 07:24:06 GMT",
"connections": [
{
"cipher": "",
"connection_id": 129,
"server": "server1"
}
],
"idle": 5.2000000000000002,
"io_activity": 16,
"log": [],
"memory": {
"connection_buffers": {
"backends": {
"server1": {
"misc": 662,
"readq": 0,
"total": 662,
"writeq": 0
}
},
"client": {
"misc": 654,
"readq": 65536,
"total": 66190,
"writeq": 0
},
"total": 66852
},
"exec_metadata": 0,
"last_queries": 0,
"sescmd_history": 485,
"total": 67337,
"variables": 0
},
"parameters": {
"log_debug": false,
"log_error": false,
"log_info": false,
"log_notice": false,
"log_warning": false
},
"port": 40664,
"queries": [],
"remote": "127.0.0.1",
"seconds_alive": 5.209291554,
"state": "Session started",
"thread": 2,
"user": "maxuser"
},
"id": "1",
"links": {
"self": "http://localhost:8989/v1/sessions/1/"
},
"relationships": {
"services": {
"data": [
{
"id": "RW-Split-Router",
"type": "services"
}
],
"links": {
"related": "http://localhost:8989/v1/services/",
"self": "http://localhost:8989/v1/sessions/1/relationships/services/"
}
}
},
"type": "sessions"
},
"links": {
"self": "http://localhost:8989/v1/sessions/1/"
}
}

GET /v1/sessions

{
"data": [
{
"attributes": {
"client": {
"cipher": "",
"connection_attributes": {
"_client_name": "libmariadb",
"_client_version": "3.3.4",
"_os": "Linux",
"_pid": "502300",
"_platform": "x86_64",
"_server_host": "127.0.0.1"
},
"sescmd_history_len": 1,
"sescmd_history_stored_metadata": 0,
"sescmd_history_stored_responses": 1
},
"connected": "Fri, 05 Jan 2024 07:24:06 GMT",
"connections": [
{
"cipher": "",
"connection_id": 129,
"server": "server1"
}
],
"idle": 5.2000000000000002,
"io_activity": 16,
"log": [],
"memory": {
"connection_buffers": {
"backends": {
"server1": {
"misc": 662,
"readq": 0,
"total": 662,
"writeq": 0
}
},
"client": {
"misc": 654,
"readq": 65536,
"total": 66190,
"writeq": 0
},
"total": 66852
},
"exec_metadata": 0,
"last_queries": 0,
"sescmd_history": 485,
"total": 67337,
"variables": 0
},
"parameters": {
"log_debug": false,
"log_error": false,
"log_info": false,
"log_notice": false,
"log_warning": false
},
"port": 40664,
"queries": [],
"remote": "127.0.0.1",
"seconds_alive": 5.2105843680000001,
"state": "Session started",
"thread": 2,
"user": "maxuser"
},
"id": "1",
"links": {
"self": "http://localhost:8989/v1/sessions/1/"
},
"relationships": {
"services": {
"data": [
{
"id": "RW-Split-Router",
"type": "services"
}
],
"links": {
"related": "http://localhost:8989/v1/services/",
"self": "http://localhost:8989/v1/sessions/1/relationships/services/"
}
}
},
"type": "sessions"
}
],
"links": {
"self": "http://localhost:8989/v1/sessions/"
}
}

PATCH /v1/sessions/:id

{
"data": {
"attributes": {
"parameters": {
"log_info": true
}
}
}
}

{
"data": {
"attributes": {
"relationships": {
"filters": {
"data": [
{ "id": "my-cache-filter" },
{ "id": "my-log-filter" }
]
}
}
}
}
}

POST /v1/sessions/:id/restart

POST /v1/sessions/restart

DELETE /v1/sessions/:id

Default: password
Default: none
[Read-Write-Listener]
type=listener
address=::
service=Read-Write-Service
authenticator=PAMAuth
[Primary-Server]
type=server
address=123.456.789.10
port=12345

auth required pam_unix.so
account required pam_unix.so

authenticator_options=pam_use_cleartext_plugin=1

authenticator_options=pam_mode=password_2FA

authenticator_options=pam_backend_mapping=mariadb

authenticator_options=pam_mapped_pw_file=/home/root/passwords.json,pam_backend_mapping=mariadb

{
"users_and_passwords": [
{
"user": "my_mapped_user1",
"password": "my_mapped_pw1"
},
{
"user": "my_mapped_user2",
"password": "A6D4C53619FFFF4DF252A0E595EDB0A12CA44E16AF154D0ED08F687E81604BFF42218B4EBA9F3EF8D907CF35E74ABDAA"
}
]
}

The SchemaRouter provides an easy and manageable sharding solution by building a single logical database server from multiple separate ones. Each database is shown to the client and queries targeting unique databases are routed to their respective servers. In addition to providing simple database-based sharding, the schemarouter also enables cross-node session variable usage by routing all queries that modify the session to all nodes.
By default the SchemaRouter assumes that each database and table is only located on one server. If it finds the same database or table on multiple servers, it will close the session with the following error:
The exceptions to this rule are the system tables mysql, information_schema, performance_schema and sys, which are never treated as duplicates.
If duplicate tables are expected, use the ignore_tables_regex parameter to control which
duplicate tables are allowed. To disable the duplicate database detection, use ignore_tables_regex=.*.
Schemarouter compares table and database names case-insensitively. This means
that the tables test.t1 and test.T1 are assumed to refer to the same table.
The main limitation of SchemaRouter is that aside from session variable writes and some specific queries, a query can only target one server. This means that queries which depend on results from multiple servers give incorrect results. See for more information.
From 2.3.0 onwards, SchemaRouter is capable of limited table family sharding.
Older versions of MaxScale required that the auth_all_servers parameter was
enabled in order for the schemarouter services to load the authentication data
from all servers instead of just one server. Starting with MaxScale 24.02, the
schemarouter automatically fetches the authentication data from all servers and
joins it together. At the same time, the auth_all_servers parameter has been
deprecated and is ignored if present in the configuration.
If a command modifies the session state by modifying any session or user
variables, the query is routed to all nodes. These statements include SET
statements as well as any other statements that modify the behavior of the
client.
If a client changes the default database after connecting, either with a USE <db> query or a COM_INIT_DB command, the query is routed to all servers
that contain the database. This same logic applies when a client connects with
a default database: the default database is set only on servers that actually
contain it.
If a query targets one or more tables that the schemarouter has discovered during the database mapping phase, the query is only routed if a server is found that contains all of the tables that the query uses. If no such server is found, the query is routed to the server that was previously used or to the first available backend if none have been used. If a query uses a table but doesn't define the database it is in, it is assumed to be located on the default database of the connection.
This means that all administrative commands, replication related commands as well as certain transaction control statements (XA transactions) are routed to the first available server in certain cases. To avoid problems, use routing hints to direct these statements to the intended server.
Starting with MaxScale 6.4.5, transaction control commands (BEGIN, COMMIT
and ROLLBACK) are routed to all nodes. Older versions of MaxScale routed the
queries to the first available backend. This means that cross-shard
transactions are technically possible but, without external synchronization,
the transactions are not guaranteed to be globally consistent.
LOAD DATA LOCAL INFILE commands are routed to the first available server
that contains the tables listed in the query.
To check how databases and tables map to servers, execute the special query SHOW SHARDS. The query does not support any modifiers such as LIKE.
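The special query is sent through MaxScale like any other statement, for example with the command line client. The listener port and the credentials below are assumptions; connect to the schemarouter listener, not directly to a backend.

# Ask the schemarouter for its current shard map.
mariadb -h 127.0.0.1 -P 4006 -u myuser -pmypwd -e 'SHOW SHARDS;'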
The schemarouter will also intercept the SHOW DATABASES command and generate
it based on its internal data. This means that newly created databases will not
show up immediately and will only be visible when the cached data has been
updated.
The schemarouter maps each of the servers to know where each database and table
is located. As each user has access to a different set of tables and databases,
the result is unique to the username and the set of servers that the service
uses. These results are cached by the schemarouter. The lifetime of the cached
result is controlled by the refresh_interval parameter.
When a server needs to be mapped, the schemarouter will route a query to each of the servers using the client's credentials. While this query is being executed, all other sessions that would otherwise share the cached result will wait for the update to complete. This waiting functionality was added in MaxScale 2.4.19, older versions did not wait for existing updates to finish and would perform parallel database mapping queries.
Here is an example configuration of the schemarouter:
The module generates the list of databases based on the servers parameter using the connecting client's credentials. The user and password parameters define the credentials that are used to fetch the authentication data from the database servers. The credentials used only require the same grants as mentioned in the configuration documentation.
The list of databases is built by sending a SHOW DATABASES query to all the servers. This requires the user to have at least USAGE and SELECT grants on the databases that need to be sharded.
If you are connecting directly to a database or have different users on some
of the servers, you need to get the authentication data from all the
servers. You can control this with the auth_all_servers parameter. With
this parameter, MariaDB MaxScale forms a union of all the users and their
grants from all the servers. By default, the schemarouter will fetch the
authentication data from all servers.
For example, if two servers have the database shard and the following
rights are granted only on one server, all queries targeting the database shard would be routed to the server where the grants were given.
This would in effect allow the user 'john' to only see the database 'shard' on this server. Take notice that these grants are matched against MariaDB MaxScale's hostname instead of the client's hostname. Only user authentication uses the client's hostname and all other grants use MariaDB MaxScale's hostname.
ignore_tables
Type: stringlist
Mandatory: No
Dynamic: Yes
Default: ""
List of full table names (e.g. db1.t1) to ignore when checking for duplicate tables. By default no tables are ignored.
This parameter was once called ignore_databases.
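Since the parameter is dynamic, it can also be adjusted at runtime. A sketch, assuming the Shard-Router service and table names from the examples on this page:

# Allow the listed duplicate tables at runtime.
maxctrl alter service Shard-Router ignore_tables=db1.t1,db2.t1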
ignore_tables_regex
Type: regex
Mandatory: No
Dynamic: No
Default: ""
A regular expression that is matched against database names when checking for duplicate databases. By default no tables are ignored.
The following configuration ignores duplicate tables in the databases db1 and db2,
and all tables starting with "t" in db3.
This parameter was once called ignore_databases_regex.
max_sescmd_history

This parameter has been moved to in MaxScale 6.0.
disable_sescmd_history

This parameter has been moved to in MaxScale 6.0.
refresh_databases
Type: bool
Mandatory: No
Dynamic: No
Default: false
Enable database map refreshing mid-session. These refreshes are triggered by a
failure to change the database, i.e. by failed USE ... queries. This feature is disabled by default.
Before MaxScale 6.2.0, this parameter did nothing. Starting with the 6.2.0 release of MaxScale this parameter now works again but it is disabled by default to retain the same behavior as in older releases.
refresh_interval
Type: duration
Mandatory: No
Dynamic: Yes
Default: 300s
The minimum interval between database map refreshes in seconds. The default value is 300 seconds.
The interval is specified as documented . If no explicit unit is provided, the value is interpreted as seconds in MaxScale 2.4. In subsequent versions a value without a unit may be rejected. Note that since the granularity of the interval is seconds, a timeout specified in milliseconds will be rejected, even if the duration is longer than a second.
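As refresh_interval is dynamic, it can be changed at runtime, for example to lengthen the cache lifetime. The service name in the sketch is an assumption.

# Increase the database map cache lifetime to 10 minutes.
maxctrl alter service Shard-Router refresh_interval=600s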
max_staleness
Type: duration
Mandatory: No
Dynamic: Yes
Default: 150s
How long stale database map entries can be used while an update is in
progress. When a database map entry goes stale, the next connection to be
created will start an update of the database map. While this update is ongoing,
other connections can use the stale entry for up to max_staleness seconds. If
this limit is exceeded and the update still hasn't completed, new connections
will instead block and wait for the update to finish.
This feature was added in MaxScale 23.08.0. Older versions of MaxScale always waited for the update to complete when the database map entry went stale.
This functionality was introduced in 2.3.0.
If the same database exists on multiple servers, but the database contains different tables in each server, SchemaRouter is capable of routing queries to the right server, depending on which table is being addressed.
As an example, suppose the database db exists on servers server1 and server2, but
that the database on server1 contains the table tbl1 and on server2 contains the
table tbl2. The query SELECT * FROM db.tbl1 will be routed to server1 and the query SELECT * FROM db.tbl2 will be routed to server2. As in the example queries, the table
names must be qualified with the database names for table-level sharding to work.
Specifically, the query series below is not supported.
The router_diagnostics output for a schemarouter service contains the
following fields.
shard_map_hits: Cache hits for the shard map cache.
shard_map_misses: Cache misses for the shard map cache.
In MaxScale 24.02, the sescmd_percentage, longest_sescmd_chain, times_sescmd_limit_exceeded, longest_session, shortest_session and average_session statistics have been replaced by core service statistics.
Read documentation for details about module commands.
The schemarouter supports the following module commands.
invalidate SERVICE

Invalidates the database map cache of the given service. This can be used to schedule
the updates to the database maps to happen at off-peak hours by configuring a
high value for refresh_interval and invalidating the cache externally.
clear SERVICE

Clears the database map cache of the given service. This forces new connections to use a freshly retrieved entry.
If the set of databases and tables in each shard is very large, the update can
take some time. If there are stale cache entries and max_staleness is
configured to be higher than the time it takes to update the database map, the
invalidation will only slow down one client connection that ends up doing the
update. When the cache is cleared completely, all clients will have to wait for
the update to complete. In general, cache invalidation should be preferred over
cache clearing.
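Module commands are invoked through maxctrl (or the REST API). A sketch of both commands, assuming a schemarouter service named Shard-Router:

# Invalidate the cache (preferred): stale entries can still serve connections
# until the next refresh completes.
maxctrl call command schemarouter invalidate Shard-Router
# Clear the cache completely: new connections wait for a fresh database map.
maxctrl call command schemarouter clear Shard-Router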
Cross-database queries (e.g. SELECT column FROM database1.table UNION select column FROM database2.table) are not properly supported. Such queries are routed either to the
first explicit database in the query, the current database in use or to the first
available database, depending on which succeeds.
Without a default database, queries that do not use fully qualified table
names and which do not modify the session state (e.g. SELECT * FROM t1) will
be routed to the first available server. This includes queries such as explicit
transaction commands (BEGIN, COMMIT, ROLLBACK), all non-table CREATE
commands (such as CREATE DATABASE and CREATE SEQUENCE).
is a small tutorial on how to set up a sharded database.
This page is licensed: CC BY-SA / Gnu FDL
This tutorial is a quick overview of what the MaxScale REST API offers, how it
can be used to inspect the state of MaxScale and how to use it to modify the
runtime configuration of MaxScale. The tutorial uses the curl command line
client to demonstrate how the API is used.
The MaxScale REST API listens on port 8989 on the local host. The admin_port
and admin_host parameters control which port and address the REST API listens
on. Note that for security reasons the API only listens for local connections
with the default configuration. It is critical that the default credentials are
changed and TLS/SSL encryption is configured before exposing the REST API to a
network.
The default user for the REST API is admin and the password is mariadb. The
easiest way to secure the REST API is to use the maxctrl command line client
to create a new admin user and delete the default one. To do this, run the
following commands:
This will create the user my_user with the password my_password that is an
administrative account. After this account is created, the default admin
account is removed with the next command.
The next step is to enable TLS encryption. To do this, you need a CA
certificate, a private key and a public certificate file all in PEM format. Add
the following three parameters under the [maxscale] section of the MaxScale
configuration file and restart MaxScale.
Use maxctrl to verify that the TLS encryption is enabled. In this tutorial our
server certificates are self-signed so the --tls-verify-server-cert=false
option is required.
If no errors are raised, this means that the communication via the REST API is now secure and can be used across networks.
Note: For the sake of brevity, the rest of this tutorial will omit the
TLS/SSL options from the curl command line. For more information, refer to the curl manpage.
The most basic task to do with the REST API is to see whether MaxScale is up and
running. To do this, we do a HTTP request on the root resource (the -i option
shows the HTTP headers).
curl -i 127.0.0.1:8989/v1/
To query a resource collection endpoint, append it to the URL. The /v1/filters/
endpoint shows the list of filters configured in MaxScale. This is a resource
collection endpoint: it contains the list of all resources of a particular
type.
curl 127.0.0.1:8989/v1/filters
The data holds the actual list of resources: the Hint and Logger
filters. Each object has the id field which is the unique name of that
object. It is the same as the section name in maxscale.cnf.
Each resource in the list has a relationships object. This shows the
relationship links between resources. In our example, the Hint filter is used
by a service named RW-Split-Hint-Router and the Logger is not currently in
use.
To request an individual resource, we add the object name to the resource
collection URL. For example, if we want to get only the Logger filter we
execute the following command.
curl 127.0.0.1:8989/v1/filters/Logger
Note that this time the data member holds an object instead of an array of
objects. All other parts of the response are similar to what was shown in the
previous example.
One of the uses of the REST API is to create new objects in MaxScale at runtime. This allows new servers, services, filters, monitors and listeners to be created without restarting MaxScale.
For example, to create a new server in MaxScale the JSON definition of a server
must be sent to the REST API at the /v1/servers/ endpoint. The request body
defines the server name as well as the parameters for it.
To create objects with curl, first write the JSON definition into a file.
To send the data, use the following command.
The -d option takes a file name prefixed with a @ as an argument. Here we
have @new_server.txt which is the name of the file where the JSON definition
was stored. The -X option defines the HTTP verb to use and to create a new
object we must use the POST verb.
To verify the data request the newly created object.
The easiest way to modify an object is to first request it, store the result in a file, edit it and then send the updated object back to the REST API.
Let's say we want to modify the port that the server we created earlier listens on. First we request the current object and store the result in a file.
After that we edit the file and change the port from 3003 to 3306. Next the modified JSON object is sent to the REST API as a PATCH command. To do this, execute the following command.
To verify that the data was updated correctly, request the updated object.
To continue with our previous example, we add the updated server to a
service. To do this, the relationships object of the server must be modified
to include the service we want to add the server to.
To define a relationship between a server and a service, the data member must
have the relationships field and it must contain an object with the services
field (some fields omitted for brevity).
The data.relationships.services.data field contains a list of objects that
define the id and type fields. The id is the name of the object (a service
or a monitor for servers) and the type tells which type it is. Only services
type objects should be present in the services object.
In our example we are linking the server1 server to the RW-Split-Router
service. As was seen with the previous example, the easiest way to do this is to
store the result, edit it and then send it back with a HTTP PATCH.
If we want to remove a server from all services and monitors, we can set the data member of the services and monitors relationships to an empty array:
This is useful if you want to delete the server which can only be done if it has no relationships to other objects.
To delete an object, simply execute a HTTP DELETE request on the resource you
want to delete. For example, to delete the server1 server, execute the
following command.
In order to delete an object, it must not have any relationships to other objects.
The full list of all available endpoints in MaxScale can be found in the .
The maxctrl command line client is self-documenting and the maxctrl help
command is a good tool for exploring the various commands that are available in
it. The maxctrl api get command can be useful way to explore the REST API as
it provides a way to easily extract values out of the JSON data generated by the
REST API.
There is a multitude of REST API clients readily available and most of them are
far more convenient to use than curl. We recommend investigating what you need
and how you intend to either integrate or use the MaxScale REST API. Most modern
languages either have a built-in HTTP library or there exists a de facto
standard library.
The MaxScale REST API follows the JSON API specification and there are libraries built specifically for these sorts of APIs.
This page is licensed: CC BY-SA / Gnu FDL
ERROR 5000 (DUPDB): Error: duplicate tables found on two different shards.

maxctrl create user my_user my_password --type=admin
maxctrl destroy user admin

admin_ssl_key=/certs/server-key.pem
admin_ssl_cert=/certs/server-cert.pem
admin_ssl_ca_cert=/certs/ca-cert.pem

maxctrl --user=my_user --password=my_password --secure --tls-ca-cert=/certs/ca-cert.pem --tls-verify-server-cert=false show maxscale

HTTP/1.1 200 OK
Connection: Keep-Alive
Content-Length: 0
Last-Modified: Mon, 04 Mar 2019 08:23:09 GMT
ETag: "0"
Date: Mon, 04 Mar 19 08:29:41 GMT

{
"links": {
"self": "http://127.0.0.1:8989/v1/filters/"
},
"data": [
{
"id": "Hint",
"type": "filters",
"relationships": {
"services": {
"links": {
"self": "http://127.0.0.1:8989/v1/services/"
},
"data": [
{
"id": "RW-Split-Hint-Router",
"type": "services"
}
]
}
},
"attributes": {
"module": "hintfilter",
"parameters": {}
},
"links": {
"self": "http://127.0.0.1:8989/v1/filters/Hint"
}
},
{
"id": "Logger",
"type": "filters",
"relationships": {
"services": {
"links": {
"self": "http://127.0.0.1:8989/v1/services/"
},
"data": []
}
},
"attributes": {
"module": "qlafilter",
"parameters": {
"match": null,
"exclude": null,
"user": null,
"source": null,
"filebase": "/tmp/log",
"options": "ignorecase",
"log_type": "session",
"log_data": "date,user,query",
"newline_replacement": "\" \"",
"separator": ",",
"flush": false,
"append": false
},
"filter_diagnostics": {
"separator": ",",
"newline_replacement": "\" \""
}
},
"links": {
"self": "http://127.0.0.1:8989/v1/filters/Logger"
}
}
]
}

{
"links": {
"self": "http://127.0.0.1:8989/v1/filters/Logger"
},
"data": {
"id": "Logger",
"type": "filters",
"relationships": {
"services": {
"links": {
"self": "http://127.0.0.1:8989/v1/services/"
},
"data": []
}
},
"attributes": {
"module": "qlafilter",
"parameters": {
"match": null,
"exclude": null,
"user": null,
"source": null,
"filebase": "/tmp/log",
"options": "ignorecase",
"log_type": "session",
"log_data": "date,user,query",
"newline_replacement": "\" \"",
"separator": ",",
"flush": false,
"append": false
},
"filter_diagnostics": {
"separator": ",",
"newline_replacement": "\" \""
}
},
"links": {
"self": "http://127.0.0.1:8989/v1/filters/Logger"
}
}
}

{
"data": {
"id": "server1",
"type": "servers",
"attributes": {
"parameters": {
"address": "127.0.0.1",
"port": 3003
}
}
}
}

curl -X POST -d @new_server.txt 127.0.0.1:8989/v1/servers

curl 127.0.0.1:8989/v1/servers/server1

curl 127.0.0.1:8989/v1/servers/server1 > server1.txt

curl -X PATCH -d @server1.txt 127.0.0.1:8989/v1/servers/server1

curl 127.0.0.1:8989/v1/servers/server1

{
"data": {
"id": "server1",
"type": "servers",
"relationships": {
"services": {
"data": [
{
"id": "RW-Split-Router",
"type": "services"
}
]
}
},
"attributes": ...
}
}

{
"data": {
"relationships": {
"services": {
"data": []
},
"monitors": {
"data": []
}
}
}
}

curl -X DELETE 127.0.0.1:8989/v1/servers/server1

If a query targets a table or a database that is present on all nodes
(e.g. information_schema) and the connection is using a default database,
the query is routed based on the default database. This makes it possible to
control where queries that do match a specific node are routed. If the
connection is not using a default database, the query is routed based solely
on the tables it contains.
If a query uses a table that is unknown to the schemarouter or executes a command that doesn't target a table, the query is routed to a server that contains the current active default database. If the connection does not have a default database, the query is routed to the backend that was last used or to the first available backend if none have been used. If the query contains a routing hint that directs it to a server, the query is routed there.
CREATE DATABASE

CREATE SEQUENCE

SELECT

CREATE

SET autocommit=1

SET autocommit=0

SET autocommit=1

SELECT queries that modify session variables are not supported because uniform results cannot be guaranteed. If such a query is executed, the behavior of the router is undefined. To work around this limitation, the query must be executed in separate parts.
If a query targets a database the SchemaRouter has not mapped to a server, the query will be routed to the first available server. This possibly returns an error about database rights instead of a missing database.
Prepared statement support is limited. PREPARE, EXECUTE and DEALLOCATE are routed to the
correct backend if the statement is known and only requires one backend server. EXECUTE
IMMEDIATE is not supported and is routed to the first available backend and may give
wrong results. Similarly, preparing a statement from a variable (e.g. PREPARE stmt FROM @a) is not supported and may be routed incorrectly.
SHOW DATABASES is handled by the router instead of routed to a server. The router only
answers correctly to the basic version of the query. Any modifiers such as LIKE are
ignored. Starting with MaxScale 22.08, the database names will always be in lowercase.
SHOW TABLES is routed to the server with the current database. If using
table-level sharding, the results will be incomplete. Similarly, SHOW TABLES FROM db1 is routed to the server with database db1, ignoring table
sharding. Use SHOW SHARDS to get results from the router itself. Starting with
MaxScale 22.08, the database names will always be in lowercase.
USE db1 is routed to the server with db1. If the database is divided to multiple
servers, only one server will get the command.
The Query Log All (QLA) filter logs query content. Logs are written to a file in CSV format. Log elements are configurable and include the time submitted and the SQL statement text, among others.
A minimal configuration is below.
The qlafilter logs can be rotated by executing the maxctrl rotate logs
command. This will cause the log files to be reopened when the next message is
written to the file. This applies to both unified and session type logging.
The QLA filter has one mandatory parameter, filebase, and a number of optional
parameters. These were introduced in the 1.0 release of MariaDB MaxScale.
Type: string
Mandatory: Yes
Dynamic: No
The basename of the output file created for each session. A session index is added to the filename for each written session file. For unified log files, .unified is appended.
Type: regex
Mandatory: No
Dynamic: Yes
Default: None
Include queries that match the regex.
Type: regex
Mandatory: No
Dynamic: Yes
Default: None
Exclude queries that match the regex.
Type: enum_mask
Mandatory: No
Dynamic: Yes
Values: case, ignorecase, extended
Default: case
The extended option enables PCRE2 extended regular expressions.
Type: string
Mandatory: No
Dynamic: Yes
Default: ""
Limit logging to sessions with this user.
Type: string
Mandatory: No
Dynamic: Yes
Default: ""
Limit logging to sessions with this client source address.
Type: regex
Mandatory: No
Dynamic: Yes
Only log queries from users that match this pattern. If the user parameter is
used, the value of user_match is ignored.
Here is an example pattern that matches the users alice and bob:
Type: regex
Mandatory: No
Dynamic: Yes
Exclude all queries from users that match this pattern. If the user parameter
is used, the value of user_exclude is ignored.
Here is an example pattern that excludes the users alice and bob:
Type: regex
Mandatory: No
Dynamic: Yes
Only log queries from hosts that match this pattern. If the source parameter
is used, the value of source_match is ignored.
Here is an example pattern that matches the loopback interface as well as the
address 192.168.0.109:
Type: regex
Mandatory: No
Dynamic: Yes
Exclude all queries from hosts that match this pattern. If the source
parameter is used, the value of source_exclude is ignored.
Here is an example pattern that excludes the loopback interface as well as the
address 192.168.0.109:
Type: enum_mask
Mandatory: No
Dynamic: Yes
Values: session, unified, stdout
Default: session
The type of log file to use.
session
Write to session-specific files
unified
Use one file for all sessions
stdout
Same as unified, but to stdout
Type: enum_mask
Mandatory: No
Dynamic: Yes
Values: service, session, date, user, reply_time, total_reply_time, query, default_db, num_rows, reply_size, transaction, transaction_time, num_warnings, error_msg
Default: date, user, query
Type of data to log in the log files.
service
Service name
session
Unique session id (ignored for session files)
date
Timestamp
user
User and hostname of client
reply_time
Duration from client query to first server reply
total_reply_time
Duration from client query to last server reply (v6.2)
The durations reply_time and total_reply_time are in milliseconds by default, but can be specified in another unit using duration_unit.
The log entry is written when the last reply from the server is received. Prior to version 6.2 the entry was written when the query was received from the client, or if reply_time was specified, on first reply from the server.
NOTE: The error_msg is the raw message from the server. Even if use_canonical_form is set, the error message may contain user defined constants. For example:
Starting with MaxScale 24.02, the query parameter now correctly logs
the execution of binary protocol commands as SQL
(MXS-4959). The execution of
batched statements (COM_STMT_BULK_LOAD) used by some connectors is not
logged.
Type: string
Mandatory: No
Dynamic: Yes
Default: milliseconds
The unit for logging a duration. The unit can be milliseconds or microseconds.
The abbreviations ms for milliseconds and us for microseconds are also valid.
This option is available as of MaxScale version 6.2.
Type: bool
Mandatory: No
Dynamic: Yes
Default: false
When this option is true the canonical form of the query is logged. In the canonical form all user defined constants are replaced with question marks. This option is available as of MaxScale version 6.2.
Type: bool
Mandatory: No
Dynamic: Yes
Default: false
Flush log files after every write.
Type: bool
Mandatory: No
Dynamic: Yes
Default: true
Type: string
Mandatory: No
Dynamic: Yes
Default: ","
Defines the separator string between elements of log entries. The value should be enclosed in quotes.
Type: string
Mandatory: No
Dynamic: Yes
Default: " "
Default value is " " (one space). SQL-queries may include line breaks, which, if
printed directly to the log, may break automatic parsing. This parameter defines
what should be written in the place of a newline sequence (\r, \n or \r\n). If
this is set as the empty string, then newlines are not replaced and printed as
is to the output. The value should be enclosed in quotes.
Trailing parts of SQL queries that are larger than 16MiB are not logged. This means that the log output might contain truncated SQL.
Batched execution using COM_STMT_BULK_EXECUTE is not converted into their textual form. This is done due to the large volumes of data that are usually involved with batched execution.
Imagine you have observed an issue with a particular table and you want to determine if there are queries that are accessing that table but not using the primary key of the table. Let's assume the table name is PRODUCTS and the primary key is called PRODUCT_ID. Add a filter with the following definition:
The result of using this filter with the service used by the application would
be a log file of all select queries querying PRODUCTS without using the
PRODUCT_ID primary key in the predicates of the query. Executing SELECT * FROM PRODUCTS would log the following into /var/logs/qla/SelectProducts:
This page is licensed: CC BY-SA / Gnu FDL
The purpose of this tutorial is to introduce the MariaDB MaxScale Administrator to a few of the common administration tasks. This is intended to be an introduction for administrators who are new to MariaDB MaxScale and not a reference to all the tasks that may be performed.
show shards;
Database |Server |
---------|-------------|
db1.t1 |MyServer1 |
db1.t2 |MyServer1 |
db2.t1   |MyServer2    |

[Shard-Router]
type=service
router=schemarouter
servers=server1,server2
user=myuser
password=mypwd

# Execute this on both servers
CREATE USER 'john'@'%' IDENTIFIED BY 'password';
# Execute this only on the server where you want the queries to go
GRANT SELECT,USAGE ON shard.* TO 'john'@'%';

[Shard-Router]
type=service
router=schemarouter
servers=server1,server2
user=myuser
password=mypwd
ignore_tables_regex=^db1|^db2|^db3\.t

USE db;
SELECT * FROM tbl1; // May be routed to an incorrect backend if using table sharding.

[MyLogFilter]
type=filter
module=qlafilter
filebase=/tmp/SqlQueryLog
[MyService]
type=service
router=readconnroute
servers=server1
user=myuser
password=mypasswd
filters=MyLogFilter

filebase=/tmp/SqlQueryLog

user_match=/(^alice$)|(^bob$)/

user_exclude=/(^alice$)|(^bob$)/

source_match=/(^127[.]0[.]0[.]1)|(^192[.]168[.]0[.]109)/

source_exclude=/(^127[.]0[.]0[.]1)|(^192[.]168[.]0[.]109)/

MariaDB [test]> select secret from T where x password="clear text pwd";
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
that corresponds to your MariaDB server version for the right syntax to
use near 'password="clear text pwd"' at line 1

newline_replacement=" NL "

[ProductsSelectLogger]
type=filter
module=qlafilter
match=SELECT.*from.*PRODUCTS .*
exclude=WHERE.*PRODUCT_ID.*
filebase=/var/logs/qla/SelectProducts
[Product-Service]
type=service
router=readconnroute
servers=server1
user=myuser
password=mypasswd
filters=ProductsSelectLogger

07:12:56.324 7/01/2016, SELECT * FROM PRODUCTS

query
The SQL of the query if it contains it
default_db
The default (current) database
num_rows
Number of rows in the result set (v6.2)
reply_size
Number of bytes received from the server (v6.2)
transaction
BEGIN, COMMIT and ROLLBACK (v6.2)
transaction_time
The duration of a transaction (v6.2)
num_warnings
Number of warnings in the server reply (v6.2)
error_msg
Error message from the server (if any) (v6.2)
server
The server where the query was routed (if any) (v22.08)
command
The protocol command that was executed (v24.02)
The REST API calls that MaxCtrl and MaxGui issue to MaxScale can be logged by enabling admin_audit.
The generated file is a CSV file that can be opened in most spreadsheet programs.
Rotating Log Files (see below) also applies to the audit file. Unlike the regular log file, the admin audit file is never overwritten as a result of a rotate (in the case where a rotate is issued but the file has not been renamed or moved). There is also the option to change the audit file name, which effectively rotates it independently of the regular log file.
For example: maxctrl alter maxscale admin_audit_file=/var/log/maxscale/admin_audit.march.csv.
MaxScale uses systemd for managing the process. This means that normal
systemctl commands can be used to start and stop MaxScale. To start MaxScale,
use systemctl start maxscale. To stop it, use systemctl stop maxscale.
The systemd service file for MaxScale is located in
/lib/systemd/system/maxscale.service.
Additional command line options and other systemd configuration options
can be given to MariaDB MaxScale by creating a drop-in file for the
service unit file. You can do this with the systemctl edit maxscale.service
command. For more information about systemd drop-in
files, refer to the systemctl man page
and the systemd documentation.
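For illustration, a minimal sketch of a drop-in created with systemctl edit maxscale.service; the override below only raises the open file limit and uses standard systemd directives rather than anything MaxScale-specific:

# /etc/systemd/system/maxscale.service.d/override.conf
[Service]
LimitNOFILE=65535

If the file is created manually instead of via systemctl edit, run systemctl daemon-reload afterwards and restart MaxScale for the change to take effect.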
It is possible to use the maxctrl command to obtain statistics about the
services that are running within MaxScale. The maxctrl command list services
will give very basic information regarding services. This command may be either
run in interactive mode or passed on the maxctrl command line.
Network listeners count as users of the service, so there will always be one user per network port on which the service listens. More details can be obtained by using the "show service" command.
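For example, to inspect one of the services in more detail (the service name here is taken from the example output shown further below):

maxctrl show service RW-Split-Router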
To determine which clients are currently connected to MariaDB MaxScale, you can
use the list sessions command within maxctrl. This will give you the IP address
and the ID of the session for each connection. As with any maxctrl
command, this can be passed on the command line or typed interactively in
maxctrl.
Log rotation applies to the MaxScale log file, admin audit file and qlafilter files.
MariaDB MaxScale logs messages of different priority into a single log file. With the exception of error messages, which are always logged, whether messages of a particular priority are logged can be enabled via the maxctrl interface or in the configuration file. By default, MaxScale keeps writing to the same log file. To prevent the file from growing indefinitely, the administrator must take action.
The name of the log file is maxscale.log. When the log is rotated, MaxScale closes the current log file and opens a new one using the same name.
Log file rotation is achieved by use of the rotate logs command
in maxctrl.
As there currently is only the maxscale log, that is the only one that will be rotated.
This may be integrated into the Linux logrotate mechanism by adding a configuration file to the /etc/logrotate.d directory. If we assume we want to rotate the log files once per month and wish to keep 5 log files worth of history, the configuration file would look as follows.
MariaDB MaxScale will also rotate all of its log files if it receives the USR1 signal. Using this the logrotate configuration script can be rewritten as
In older versions MaxScale renamed the log file, behavior which is not fully compliant with the assumptions of logrotate and may lead to issues, depending on the used logrotate configuration file. From version 2.1 onward, MaxScale will not itself rename the log file, but when the log is rotated, MaxScale will simply close and reopen the same log file. That will make the behavior fully compliant with logrotate.
MariaDB MaxScale supports the concept of maintenance mode for servers within a cluster. This allows for planned, temporary removal of a database from the cluster without the need to change the MariaDB MaxScale configuration.
To achieve this, you can use the set server command in maxctrl to set the
maintenance mode flag for the server. This may be done interactively within
maxctrl or by passing the command on the command line.
This will cause MariaDB MaxScale to stop routing any new requests to the server,
however if there are currently requests executing on the server these will not
be interrupted. Connections to servers in maintenance mode are closed as soon as
the next request arrives. To close them immediately, use the --force option
for maxctrl set server.
Clearing the maintenance mode for a server will bring it back into use. If multiple MariaDB MaxScale instances are configured to use the node then maintenance mode must be set within each MariaDB MaxScale instance.
Services can be stopped to temporarily halt their use. Stopping a service will cause it to stop accepting new connections until it is started again. New connections are not refused while the service is stopped; they are queued instead. This means that connecting clients will wait until the service is started again.
Starting a service will cause it to accept all queued connections that were created while it was stopped.
Stopping a monitor will cause it to stop monitoring the state of the servers
assigned to it. This is useful when the state of the servers is assigned
manually with maxctrl set server.
Starting a monitor will make it resume monitoring of the servers. Any manually assigned states will be overwritten by the monitor.
The MaxScale configuration can be changed at runtime by using the create, alter and destroy commands of maxctrl. These commands either create,
modify or destroy objects (servers, services, monitors etc.) inside
MaxScale. The exact syntax for each of the commands and any additional options
that they take can be seen with maxctrl --help <command>.
Not all parameters can be modified at runtime. Refer to the module documentation for more information on which parameters can be modified at runtime. If a parameter cannot be modified at runtime, the object can be destroyed and recreated in order to change it.
All runtime changes are persisted in files stored by default in /var/lib/maxscale/maxscale.cnf.d/. This means that any changes done at runtime
persist through restarts. Any changes done to objects in the main configuration
file are ignored if a persisted entry is found for it.
For example, if the address of a server is modified with maxctrl alter server db-server-1 address 192.168.0.100, the file /var/lib/maxscale/maxscale.cnf.d/db-server-1.cnf is created with the complete
configuration for the object. To remove all runtime changes for all objects,
remove all files found in /var/lib/maxscale/maxscale.cnf.d.
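As a sketch, one way to wipe all persisted runtime changes, assuming MaxScale is stopped first so the running instance does not re-create the files:

systemctl stop maxscale
rm /var/lib/maxscale/maxscale.cnf.d/*.cnf
systemctl start maxscale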
Modify global MaxScale parameters:
Some global parameters cannot be modified at runtime. Refer to the Configuration Guide for a full list of parameters that can be modified at runtime.
Create a new server
Modify a Server
Destroy a Server
A server can only be destroyed if it is not used by any services or monitors. To
automatically remove the server from the services and monitors that use it, use
the --force flag.
Drain a Server
When a server is set into the drain state, no new connections to it are
created. Unlike the maintenance state, which immediately stops all new
requests and closes all connections if used with the --force option, the
drain state allows existing connections to keep routing requests to the server
so that they can be closed gracefully once the client disconnects.
To remove the drain state, use the clear server command:
Servers with the Master state cannot be drained. To drain them, first perform
a switchover to another node and then drain the server.
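A sketch of that sequence, assuming the mariadbmon monitor module and hypothetical object names (db-monitor, db-server-1 as the current primary, db-server-2 as the promotion target):

maxctrl call command mariadbmon switchover db-monitor db-server-2 db-server-1
maxctrl set server db-server-1 drain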
Create a new Monitor
Modify a Monitor
Add Server to a Monitor
Remove a Server from a Monitor
Destroy a Monitor
A monitor can only be destroyed if it is not monitoring any servers. To
automatically remove the servers from the monitor, use the --force flag.
Create a New Service
Modify a Service
Add Servers to a Service
Any servers added to services will only be used by new sessions. Existing sessions will use the servers that were available when they connected.
Remove Servers from a Service
Similarly to adding servers, removing servers from a service will only affect new sessions. Existing sessions keep using the servers even if they are removed from a service.
Change the Filters of a Service
The order of the filters is significant: the first filter will be the first to receive the query. The new set of filters will only be used by new sessions. Existing sessions will keep using the filters that were configured when they connected.
Destroy a Service
The service can only be destroyed if it uses no servers or clusters and has no
listeners associated with it. To force destruction of a service even if it does
use servers or has listeners, use the --force flag. This will also destroy any
listeners associated with the service.
Create a New Filter
Destroy a Filter
A filter can only be destroyed if it is not used by any services. To
automatically remove the filter from all services using it, use the --force
flag.
Filters cannot be altered at runtime in MaxScale 2.5. To modify the parameters of a filter, destroy it and recreate it with the modified parameters.
Create a New Listener
Destroy a Listener
Destroying a listener will close the network socket and stop it from accepting new connections. Existing connections that were created through it will keep displaying it as the originating listener.
Listeners cannot be moved from one service to another. In order to do this, the listener must be destroyed and then recreated with the new service.
MaxCtrl uses the same credentials as the MaxScale REST API. These users can be managed via MaxCtrl.
By default new users are only allowed to read data. To make the account an
administrative account, add the --type=admin option to the command:
Administrative accounts are allowed to use all MaxCtrl commands and modify any parts of MaxScale.
This page is licensed: CC BY-SA / Gnu FDL
This filter was introduced in MariaDB MaxScale 2.1.
With the masking filter it is possible to obfuscate the returned value of a particular column.
For instance, suppose there is a table person that, among other columns, contains the column ssn where the social security number of a person is stored.
With the masking filter it is possible to specify that when the ssn field is queried, a masked value is returned unless the user making the query is a specific one. That is, when making the query
instead of getting the real result, as in
the ssn would be masked, as in
Note that the masking filter should be viewed as a best-effort solution intended for protecting against accidental misuse rather than malicious attacks.
From MaxScale 2.3 onwards, the masking filter will reject statements that use functions in conjunction with columns that should be masked. Allowing function usage provides a way for circumventing the masking, unless a firewall filter is separately configured and installed.
Please see the configuration parameter prevent_function_usage for how to change the default behaviour.
From MaxScale 2.3.5 onwards, the masking filter will check the definition of user variables and reject statements that define a user variable using a statement that refers to columns that should be masked.
Please see the configuration parameter check_user_variables for how to change the default behaviour.
From MaxScale 2.3.5 onwards, the masking filter will examine unions and if the second or subsequent SELECT refer to columns that should be masked, the statement will be rejected.
Please see the configuration parameter check_unions for how to change the default behaviour.
From MaxScale 2.3.5 onwards, the masking filter will examine subqueries and if a subquery refers to columns that should be masked, the statement will be rejected.
Please see the configuration parameter check_subqueries for how to change the default behaviour.
Note that in order to ensure that it is not possible to get access to masked data, the privileges of the users should be minimized. For instance, if a user can create tables and perform inserts, he or she can execute something like
to get access to the cleartext version of a masked field ssn.
From MaxScale 2.3.5 onwards, the masking filter will, if any of the prevent_function_usage, check_user_variables, check_unions or check_subqueries parameters is set to true, block statements that
cannot be fully parsed.
Please see the configuration parameter require_fully_parsed for how to change the default behaviour.
From MaxScale 2.3.7 onwards, the masking filter will treat any strings
passed to functions as if they were fields. The reason is that as the
MaxScale query classifier is not aware of whether ANSI_QUOTES is
enabled or not, it is possible to bypass the masking by turning that
option on.
Before this change, the content of the field ssn would have been
returned in clear text even if the column should have been masked.
Note that this change will mean that there may be false positives
if ANSI_QUOTES is not enabled and a string argument happens to
be the same as the name of a field to be masked.
Please see the configuration parameter treat_string_arg_as_field for how to change the default behaviour.
The masking filter can only be used for masking columns of the following
types: BINARY, VARBINARY, CHAR, VARCHAR, BLOB, TINYBLOB, MEDIUMBLOB, LONGBLOB, TEXT, TINYTEXT, MEDIUMTEXT, LONGTEXT, ENUM and SET.
Currently, the masking filter can only work on packets whose payload is less
than 16MB. If the masking filter encounters a packet whose payload is exactly
that size, which indicates that the payload is delivered in multiple
packets, the value of the parameter large_payload specifies how the masking
filter should handle the situation.
The masking filter is taken into use with the following kind of configuration setup.
The masking filter has one mandatory parameter - rules.
rules
Type: path
Mandatory: Yes
Dynamic: Yes
Specifies the path of the file where the masking rules are stored. A relative path is interpreted relative to the module configuration directory of MariaDB MaxScale. The default module configuration directory is /etc/maxscale.modules.d.
warn_type_mismatch
Type: enum
Mandatory: No
Dynamic: Yes
Values: never, always
Default: never
With this optional parameter the masking filter can be instructed to log a warning if a masking rule matches a column that is not of one of the allowed types.
large_payload
Type: enum
Mandatory: No
Dynamic: Yes
Values: ignore, abort
Default: abort
This optional parameter specifies how the masking filter should treat
payloads larger than 16MB, that is, payloads that are delivered in
multiple MySQL protocol packets.
The values that can be used are ignore, which means that columns in
such payloads are not masked, and abort, which means that if such
payloads are encountered, the client connection is closed. The default
is abort.
Note that the aborting behaviour is applied only to resultsets that contain columns that should be masked. There are no limitations on resultsets that do not contain such columns.
prevent_function_usage
Type: boolean
Mandatory: No
Dynamic: Yes
Default: true
This optional parameter specifies how the masking filter should behave if a column that should be masked, is used in conjunction with some function. As the masking filter works only on the basis of the information in the returned result-set, if the name of a column is not present in the result-set, then the masking filter cannot mask a value. This means that the masking filter basically can be bypassed with a query like:
If the value of prevent_function_usage is true, then all
statements that contain functions referring to masked columns will
be rejected. Since this also rejects queries using potentially
harmless functions, such as LENGTH(masked_column), the feature
can be turned off. In that case, the firewall filter should be
set up to allow or reject the use of certain functions.
require_fully_parsed
Type: boolean
Mandatory: No
Dynamic: Yes
Default: true
This optional parameter specifies how the masking filter should
behave if any of prevent_function_usage, check_user_variables,
check_unions or check_subqueries is true and it encounters a
statement that cannot be fully parsed.
If true, then statements that cannot be fully parsed (due to a parser limitation) will be blocked.
Note that if this parameter is set to false, then prevent_function_usage,
check_user_variables, check_unions and check_subqueries are rendered
less effective, as a statement that cannot be fully parsed may make it
possible to bypass the protection that they are intended to provide.
treat_string_arg_as_field
Type: boolean
Mandatory: No
Dynamic: Yes
Default: true
This optional parameter specifies how the masking filter should treat
strings used as arguments to functions. If true, they will be handled
as fields, which will cause fields to be masked even if ANSI_QUOTES has
been enabled and " is used instead of backtick.
check_user_variables
Type: boolean
Mandatory: No
Dynamic: Yes
Default: true
This optional parameter specifies how the masking filter should behave with respect to user variables. If true, then a statement like
will be rejected if ssn is a column that should be masked.
check_unions
Type: boolean
Mandatory: No
Dynamic: Yes
Default: true
This optional parameter specifies how the masking filter should behave with respect to UNIONs. If true, then a statement like
will be rejected if b is a column that should be masked.
check_subqueries
Type: boolean
Mandatory: No
Dynamic: Yes
Default: true
This optional parameter specifies how the masking filter should behave with respect to subqueries. If true, then a statement like
will be rejected if a is a column that should be masked.
The masking rules are expressed as a JSON object.
The top-level object is expected to contain a key rules whose
value is an array of rule objects.
Each rule in the rules array is a JSON object, expected to
contain the keys replace, with, applies_to and exempted. The
first two are obligatory and the latter two optional.
replace
The value of this key is an object that specifies the column
whose values should be masked. The object must contain the key
column and may contain the keys table and database. The
value of these keys must be a string.
If only column is specified, then a column with that name
matches irrespective of the table and database. If table
is specified, then the column matches only if it is in a table
with the specified name, and if database is specified, then
the column matches only if it is in a database with the
specified name.
NOTE If a rule contains a table/database then if the resultset
does not contain table/database information, it will always be
considered a match if the column matches. For instance, given the
rule above, if there is a table person2, also containing an ssn
field, then a query like
will not return masked values, but a query like
will only return masked values, even if the ssn values from person2 in principle should not be masked. The same effect is
observed even with a nonsensical query like
even if nothing from person2 should be masked. The reason is that
as the resultset contains no table information, the values must be
masked if the column name matches, as otherwise the masking could
easily be circumvented with a query like
The optional key match makes partial replacement of the original
value possible: only the matched part would be replaced
with the fill character.
The match value must be a valid pcre2 regular expression.
obfuscate
The obfuscate rule allows the obfuscation of the value by passing it through an obfuscation algorithm. The current implementation uses a non-reversible obfuscation approach.
However, note that although it is in principle impossible to obtain the original value from the obfuscated one, if the range of possible original values is limited, it is straightforward to figure out the possible original values by running all possible values through the obfuscation algorithm and then comparing the results.
The minimal configuration is:
Example output where the value of the name field in the database is 'remo':
with
The value of this key is an object that specifies what the value of the matched
column should be replaced with for the replace rule. Currently, the object
is expected to contain either the key value or the key fill.
The value of both must be a string with length greater than zero.
If both keys are specified, value takes precedence.
If fill is not specified, the default X is used as its value.
If value is specified, then its value is used to replace the actual value
verbatim and the length of the specified value must match the actual returned
value (from the server) exactly. If the lengths do not match, the value of
fill is used to mask the actual value.
When the value of fill (fill-value) is used for masking the returned value,
the fill-value is used as many times as necessary to match the length of the
return value. If required, only a part of the fill-value may be used at the end
of the masked value to get the lengths to match.
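For illustration, a hypothetical rule fragment and the mask it would produce for an 11-character value such as 721-07-4426:

"with": {
    "fill": "X#"
}

masked output: X#X#X#X#X#X   (five full repetitions of "X#" plus its first character)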
applies_to
With this optional key, whose value must be an array of strings,
it can be specified to which users the rule is applied. Each string
should be a MariaDB account string, that is, % is a wildcard.
If this key is not specified, then the masking is performed for all
users, except the ones exempted using the key exempted.
exempted
With this optional key, whose value must be an array of strings,
it can be specified to which users the rule is not applied. Each
string should be a MariaDB account string, that is, % is a wildcard.
Read the module commands documentation for details.
The masking filter supports the following module commands.
reload
Reload the rules from the rules file. The new rules are taken into use only if the loading succeeds without any errors.
MyMaskingFilter refers to a particular filter section in the
MariaDB MaxScale configuration file.
In the following we configure a masking filter MyMasking that should always log a
warning if a masking rule matches a column that is of a type that cannot be masked,
and that should abort the client connection if a resultset package is larger than
16MB. The rules for the masking filter are in the file masking_rules.json.
masking_rules.json
The rules specify that the data of a column whose name is ssn should
be replaced with the string 012345-ABCD. If the length of the data is
not exactly the same as the length of the replacement value, then the
data should be replaced with as many X characters as needed.
This page is licensed: CC BY-SA / Gnu FDL
A monitor resource represents a monitor inside MaxScale that monitors one or more servers.
The :name in all of the URIs must be the name of a monitor in MaxScale.
Get a single monitor.
Response
Status: 200 OK
Get all monitors.
Response
Status: 200 OK
Create a new monitor. The request body must define at least the following fields.
data.id
Name of the monitor
data.type
Type of the object, must be monitors
All monitor parameters can be defined at creation time.
The following example defines a request body which creates a new monitor and assigns two servers to be monitored by it. It also defines a custom value for the monitor_interval parameter.
Response
Monitor is created:
Status: 204 No Content
The request body must be a valid JSON document representing the modified monitor.
The following standard server parameter can be modified.
In addition to these standard parameters, the monitor specific parameters can also be modified. Refer to the monitor module documentation for details on these parameters.
Response
Monitor is modified:
Status: 204 No Content
Invalid request body:
Status: 400 Bad Request
The request body must be a JSON object that defines only the data field. The value of the data field must be an array of relationship objects that define the id and type fields of the relationship. This object will replace the existing relationships of the monitor.
The following is an example request and request body that defines a single server relationship for a monitor.
All relationships for a monitor can be deleted by sending an empty array as the
data field value. The following example removes all servers from a monitor.
Response
Monitor relationships modified:
Status: 204 No Content
Invalid JSON body:
Status: 400 Bad Request
Destroy a created monitor. The monitor must not have relationships to any servers in order to be destroyed.
This endpoint also supports the force=yes parameter that will unconditionally
delete the monitor by first unlinking it from all servers that it uses.
Response
Monitor is deleted:
Status: 204 No Content
Monitor could not be deleted:
Status: 400 Bad Request
Stops a started monitor.
Response
Monitor is stopped:
Status: 204 No Content
Starts a stopped monitor.
Response
Monitor is started:
Status: 204 No Content
This page is licensed: CC BY-SA / Gnu FDL
$ maxctrl list services
┌────────────────────────┬────────────────┬─────────────┬───────────────────┬────────────────────────────────────┐
│ Service │ Router │ Connections │ Total Connections │ Servers │
├────────────────────────┼────────────────┼─────────────┼───────────────────┼────────────────────────────────────┤
│ CLI │ cli │ 1 │ 1 │ │
├────────────────────────┼────────────────┼─────────────┼───────────────────┼────────────────────────────────────┤
│ RW-Split-Router │ readwritesplit │ 1 │ 1 │ server1, server2, server3, server4 │
├────────────────────────┼────────────────┼─────────────┼───────────────────┼────────────────────────────────────┤
│ RW-Split-Hint-Router │ readwritesplit │ 1 │ 1 │ server1, server2, server3, server4 │
├────────────────────────┼────────────────┼─────────────┼───────────────────┼────────────────────────────────────┤
│ SchemaRouter-Router │ schemarouter │ 1 │ 1 │ server1, server2, server3, server4 │
├────────────────────────┼────────────────┼─────────────┼───────────────────┼────────────────────────────────────┤
│ Read-Connection-Router │ readconnroute │ 1 │ 1 │ server1 │
└────────────────────────┴────────────────┴─────────────┴───────────────────┴────────────────────────────────────┘

$ maxctrl list sessions
┌────┬─────────┬──────────────────┬──────────────────────────┬──────┬─────────────────┐
│ Id │ User │ Host │ Connected │ Idle │ Service │
├────┼─────────┼──────────────────┼──────────────────────────┼──────┼─────────────────┤
│ 6 │ maxuser │ ::ffff:127.0.0.1 │ Thu Aug 27 10:39:16 2020 │ 4 │ RW-Split-Router │
└────┴─────────┴──────────────────┴──────────────────────────┴──────┴─────────────────┘

maxctrl rotate logs

/var/log/maxscale/maxscale.log {
monthly
rotate 5
missingok
nocompress
sharedscripts
postrotate
# run if maxscale is running
if test -n "`ps acx|grep maxscale`"; then
/usr/bin/maxctrl rotate logs
fi
endscript
}

/var/log/maxscale/maxscale.log {
monthly
rotate 5
missingok
nocompress
sharedscripts
postrotate
kill -USR1 `cat /var/run/maxscale/maxscale.pid`
endscript
}

maxctrl set server db-server-3 maintenance
maxctrl clear server db-server-3 maintenance
maxctrl stop service db-service
maxctrl start service db-service
maxctrl stop monitor db-monitor
maxctrl start monitor db-monitor
maxctrl alter maxscale auth_connect_timeout 5s
maxctrl create server db-server-1 192.168.0.100 3306
maxctrl alter server db-server-1 port 3307
maxctrl destroy server db-server-1
maxctrl set server db-server-1 drain
maxctrl clear server db-server-1 drain
maxctrl create monitor db-monitor mariadbmon user=db-user password=db-password
maxctrl alter monitor db-monitor monitor_interval 1000
maxctrl link monitor db-monitor db-server-1
maxctrl unlink monitor db-monitor db-server-1
maxctrl destroy monitor db-monitor
maxctrl create service db-service readwritesplit user=db-user password=db-password
maxctrl alter service db-service user new-db-user
maxctrl link service db-service db-server1
maxctrl unlink service db-service db-server1
maxctrl alter service-filters db-service my-regexfilter my-qlafilter
maxctrl destroy service db-service
maxctrl create filter regexfilter match=ENGINE=MyISAM replace=ENGINE=InnoDB
maxctrl destroy filter my-regexfilter
maxctrl create listener db-listener db-service 4006
maxctrl destroy listener db-listener
maxctrl create user basic-user basic-password
maxctrl create user admin-user admin-password --type=admin
maxctrl alter user admin-user new-admin-password
maxctrl destroy user basic-user
data.attributes.module
The monitor module to use
data.attributes.parameters.user
The user to use
data.attributes.parameters.password
The password to use
> SELECT name, ssn FROM person;

+-------+-------------+
+ name | ssn |
+-------+-------------+
| Alice | 721-07-4426 |
| Bob | 435-22-3267 |
...

+-------+-------------+
+ name | ssn |
+-------+-------------+
| Alice | XXX-XX-XXXX |
| Bob | XXX-XX-XXXX |
...

CREATE TABLE cheat (revealed_ssn TEXT);
INSERT INTO cheat SELECT ssn FROM users;
SELECT revealed_ssn FROM cheat;

mysql> set @@sql_mode = 'ANSI_QUOTES';
mysql> select concat("ssn") from managers;

[Mask-SSN]
type=filter
module=masking
rules=...
[SomeService]
type=service
...
filters=Mask-SSN

rules=/path/to/rules-file

warn_type_mismatch=always

large_payload=ignore

SELECT CONCAT(masked_column) FROM tbl;

prevent_function_usage=false

require_fully_parsed=false

treat_string_arg_as_field=false

set @a = (select ssn from customer where id = 1);

check_user_variables=false

SELECT a FROM t1 UNION SELECT b FROM t2;

check_unions=false

SELECT * FROM (SELECT a AS b FROM t1) AS t2;

check_subqueries=false

{
"rules": [ ... ]
}{
"rules": [
{
"replace": { ... },
"with": { ... },
"applies_to": [ ... ],
"exempted": [ ... ]
}
]
}

{
"rules": [
{
"replace": {
"database": "db1",
"table": "person",
"column": "ssn"
},
"with": { ... },
"applies_to": [ ... ],
"exempted": [ ... ]
}
]
}

SELECT ssn FROM person2;

SELECT ssn FROM person UNION SELECT ssn FROM person2;

SELECT ssn FROM person2 UNION SELECT ssn FROM person2;

SELECT ssn FROM person UNION SELECT ssn FROM person;

"replace": {
"column": "ssn",
"match": "(123)"
},
"with": {
"fill": "X#"
}"obfuscate": {
"column": "name"
}

SELECT name from db1.tbl1;
+------+
| name |
+------+
| $-~) |
+------+

{
"rules": [
{
"replace": {
"column": "ssn"
},
"with": {
"value": "XXX-XX-XXXX"
},
"applies_to": [ ... ],
"exempted": [ ... ]
},
{
"replace": {
"column": "age"
},
"with": {
"fill": "*"
},
"applies_to": [ ... ],
"exempted": [ ... ]
},
{
"replace": {
"column": "creditcard"
},
"with": {
"value": "1234123412341234",
"fill": "0"
},
"applies_to": [ ... ],
"exempted": [ ... ]
},
]
}

{
"rules": [
{
"replace": { ... },
"with": { ... },
"applies_to": [ "'alice'@'host'", "'bob'@'%'" ],
"exempted": [ ... ]
}
]
}

{
"rules": [
{
"replace": { ... },
"with": { ... },
"applies_to": [ ... ],
"exempted": [ "'admin'" ]
}
]
}

MaxScale> call command masking reload MyMaskingFilter

[MyMasking]
type=filter
module=masking
warn_type_mismatch=always
large_payload=abort
rules=masking_rules.json
[MyService]
type=service
...
filters=MyMasking

{
"rules": [
{
"replace": {
"column": "ssn"
},
"with": {
"value": "012345-ABCD",
"fill": "X"
}
}
]
}

GET /v1/monitors/:name

{
"data": {
"attributes": {
"module": "mariadbmon",
"monitor_diagnostics": {
"master": "server1",
"master_gtid_domain_id": 0,
"primary": null,
"server_info": [
{
"gtid_binlog_pos": "0-3000-5",
"gtid_current_pos": "0-3000-5",
"lock_held": null,
"master_group": null,
"name": "server1",
"read_only": false,
"server_id": 3000,
"slave_connections": [],
"state_details": null
},
{
"gtid_binlog_pos": "0-3000-5",
"gtid_current_pos": "0-3000-5",
"lock_held": null,
"master_group": null,
"name": "server2",
"read_only": false,
"server_id": 3001,
"slave_connections": [
{
"connection_name": "",
"gtid_io_pos": "",
"last_io_error": "",
"last_sql_error": "",
"master_host": "127.0.0.1",
"master_port": 3000,
"master_server_id": 3000,
"master_server_name": "server1",
"seconds_behind_master": 0,
"slave_io_running": "Yes",
"slave_sql_running": "Yes",
"using_gtid": "No"
}
],
"state_details": null
}
],
"state": "Idle"
},
"parameters": {
"assume_unique_hostnames": true,
"auto_failover": false,
"auto_rejoin": false,
"backend_connect_attempts": 1,
"backend_connect_timeout": "3000ms",
"backend_read_timeout": "3000ms",
"backend_write_timeout": "3000ms",
"backup_storage_address": null,
"backup_storage_path": null,
"cooperative_monitoring_locks": "none",
"cs_admin_api_key": null,
"cs_admin_base_path": "/cmapi/0.4.0",
"cs_admin_port": 8640,
"demotion_sql_file": null,
"disk_space_check_interval": "0ms",
"disk_space_threshold": null,
"enforce_read_only_slaves": false,
"enforce_simple_topology": false,
"enforce_writable_master": false,
"events": "all,master_down,master_up,slave_down,slave_up,server_down,server_up,synced_down,synced_up,donor_down,donor_up,lost_master,lost_slave,lost_synced,lost_donor,new_master,new_slave,new_synced,new_donor",
"failcount": 5,
"failover_timeout": "90000ms",
"handle_events": true,
"journal_max_age": "28800000ms",
"maintenance_on_low_disk_space": true,
"mariadb-backup_parallel": 1,
"mariadb-backup_use_memory": "1G",
"master_conditions": "primary_monitor_master",
"master_failure_timeout": "10000ms",
"module": "mariadbmon",
"monitor_interval": "1000ms",
"password": "*****",
"promotion_sql_file": null,
"rebuild_port": 4444,
"replication_custom_options": null,
"replication_master_ssl": false,
"replication_password": "*****",
"replication_user": "maxuser",
"script": null,
"script_max_replication_lag": -1,
"script_timeout": "90000ms",
"servers_no_promotion": null,
"slave_conditions": "",
"ssh_check_host_key": true,
"ssh_keyfile": null,
"ssh_port": 22,
"ssh_timeout": "10000ms",
"ssh_user": null,
"switchover_on_low_disk_space": false,
"switchover_timeout": "90000ms",
"type": "monitor",
"user": "maxuser",
"verify_master_failure": true
},
"source": {
"file": "/etc/maxscale.cnf",
"type": "static"
},
"state": "Running",
"ticks": 12
},
"id": "MariaDB-Monitor",
"links": {
"self": "http://localhost:8989/v1/monitors/MariaDB-Monitor/"
},
"relationships": {
"servers": {
"data": [
{
"id": "server1",
"type": "servers"
},
{
"id": "server2",
"type": "servers"
}
],
"links": {
"related": "http://localhost:8989/v1/servers/",
"self": "http://localhost:8989/v1/monitors/MariaDB-Monitor/relationships/servers/"
}
},
"services": {
"data": [
{
"id": "RW-Split-Router",
"type": "services"
}
],
"links": {
"related": "http://localhost:8989/v1/services/",
"self": "http://localhost:8989/v1/monitors/MariaDB-Monitor/relationships/services/"
}
}
},
"type": "monitors"
},
"links": {
"self": "http://localhost:8989/v1/monitors/MariaDB-Monitor/"
}
}

GET /v1/monitors

{
"data": [
{
"attributes": {
"module": "mariadbmon",
"monitor_diagnostics": {
"master": "server1",
"master_gtid_domain_id": 0,
"primary": null,
"server_info": [
{
"gtid_binlog_pos": "0-3000-5",
"gtid_current_pos": "0-3000-5",
"lock_held": null,
"master_group": null,
"name": "server1",
"read_only": false,
"server_id": 3000,
"slave_connections": [],
"state_details": null
},
{
"gtid_binlog_pos": "0-3000-5",
"gtid_current_pos": "0-3000-5",
"lock_held": null,
"master_group": null,
"name": "server2",
"read_only": false,
"server_id": 3001,
"slave_connections": [
{
"connection_name": "",
"gtid_io_pos": "",
"last_io_error": "",
"last_sql_error": "",
"master_host": "127.0.0.1",
"master_port": 3000,
"master_server_id": 3000,
"master_server_name": "server1",
"seconds_behind_master": 0,
"slave_io_running": "Yes",
"slave_sql_running": "Yes",
"using_gtid": "No"
}
],
"state_details": null
}
],
"state": "Idle"
},
"parameters": {
"assume_unique_hostnames": true,
"auto_failover": false,
"auto_rejoin": false,
"backend_connect_attempts": 1,
"backend_connect_timeout": "3000ms",
"backend_read_timeout": "3000ms",
"backend_write_timeout": "3000ms",
"backup_storage_address": null,
"backup_storage_path": null,
"cooperative_monitoring_locks": "none",
"cs_admin_api_key": null,
"cs_admin_base_path": "/cmapi/0.4.0",
"cs_admin_port": 8640,
"demotion_sql_file": null,
"disk_space_check_interval": "0ms",
"disk_space_threshold": null,
"enforce_read_only_slaves": false,
"enforce_simple_topology": false,
"enforce_writable_master": false,
"events": "all,master_down,master_up,slave_down,slave_up,server_down,server_up,synced_down,synced_up,donor_down,donor_up,lost_master,lost_slave,lost_synced,lost_donor,new_master,new_slave,new_synced,new_donor",
"failcount": 5,
"failover_timeout": "90000ms",
"handle_events": true,
"journal_max_age": "28800000ms",
"maintenance_on_low_disk_space": true,
"mariadb-backup_parallel": 1,
"mariadb-backup_use_memory": "1G",
"master_conditions": "primary_monitor_master",
"master_failure_timeout": "10000ms",
"module": "mariadbmon",
"monitor_interval": "1000ms",
"password": "*****",
"promotion_sql_file": null,
"rebuild_port": 4444,
"replication_custom_options": null,
"replication_master_ssl": false,
"replication_password": "*****",
"replication_user": "maxuser",
"script": null,
"script_max_replication_lag": -1,
"script_timeout": "90000ms",
"servers_no_promotion": null,
"slave_conditions": "",
"ssh_check_host_key": true,
"ssh_keyfile": null,
"ssh_port": 22,
"ssh_timeout": "10000ms",
"ssh_user": null,
"switchover_on_low_disk_space": false,
"switchover_timeout": "90000ms",
"type": "monitor",
"user": "maxuser",
"verify_master_failure": true
},
"source": {
"file": "/etc/maxscale.cnf",
"type": "static"
},
"state": "Running",
"ticks": 12
},
"id": "MariaDB-Monitor",
"links": {
"self": "http://localhost:8989/v1/monitors/MariaDB-Monitor/"
},
"relationships": {
"servers": {
"data": [
{
"id": "server1",
"type": "servers"
},
{
"id": "server2",
"type": "servers"
}
],
"links": {
"related": "http://localhost:8989/v1/servers/",
"self": "http://localhost:8989/v1/monitors/MariaDB-Monitor/relationships/servers/"
}
},
"services": {
"data": [
{
"id": "RW-Split-Router",
"type": "services"
}
],
"links": {
"related": "http://localhost:8989/v1/services/",
"self": "http://localhost:8989/v1/monitors/MariaDB-Monitor/relationships/services/"
}
}
},
"type": "monitors"
}
],
"links": {
"self": "http://localhost:8989/v1/monitors/"
}
}

POST /v1/monitors

{
data: {
"id": "test-monitor", // Name of the monitor
"type": "monitors",
"attributes": {
"module": "mariadbmon", // The monitor uses the mariadbmon module
"parameters": { // Monitor parameters
"monitor_interval": 1000,
"user": "maxuser,
"password": "maxpwd"
}
},
"relationships": { // List of server relationships that this monitor uses
"servers": {
"data": [ // This monitor uses two servers
{
"id": "server1",
"type": "servers"
},
{
"id": "server2",
"type": "servers"
}
]
}
}
}
}

PATCH /v1/monitors/:name

PATCH /v1/monitors/:name/relationships/servers

PATCH /v1/monitors/my-monitor/relationships/servers
{
data: [
{ "id": "my-server", "type": "servers" }
]
}

PATCH /v1/monitors/my-monitor/relationships/servers
{
data: []
}

DELETE /v1/monitors/:name

PUT /v1/monitors/:name/stop

PUT /v1/monitors/:name/start

The KafkaCDC module reads data changes in MariaDB via replication and converts them into JSON objects that are then streamed to a Kafka broker.
DDL events (CREATE TABLE, ALTER TABLE) are streamed as JSON objects in the
following format (example created by CREATE TABLE test.t1(id INT)):
The domain, server_id and sequence fields contain the GTID that this event
belongs to. The event_number field is the sequence number of events inside the
transaction starting from 1. The timestamp field is the UNIX timestamp when
the event occurred. The event_type field contains the type of the event, one
of:
insert: the event is the data that was added to MariaDB
delete: the event is the data that was removed from MariaDB
update_before: the event contains the data before an update statement modified it
update_after: the event contains the data after an update statement modified it
All remaining fields contain data from the table. In the example event this
would be the fields id and data.
The sending of these schema objects is optional and can be disabled using send_schema=false.
DML events (INSERT, UPDATE, DELETE) are streamed as JSON objects that
follow the format specified in the DDL event. The objects are in the following
format (example created by INSERT INTO test.t1 VALUES (1)):
The table_name and table_schema fields were added in MaxScale 2.5.3. These
contain the table name and schema the event targets.
The router stores table metadata in the MaxScale data directory. The
default value is /var/lib/maxscale/<service name>. If data for a table
is replicated before a DDL event for it is replicated, the CREATE TABLE
will be queried from the primary server.
During shutdown, the Kafka event queue is flushed. This can take up to 60 seconds if the network is slow or there are network problems.
In order for kafkacdc to work, the binary logging on the source server must
be configured to use row-based replication and the row image must be set to
full by configuring binlog_format=ROW and binlog_row_image=FULL.
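For reference, a minimal sketch of the corresponding settings on the MariaDB source server (the section name and file location depend on the distribution; log_bin and server_id are included only because binary logging must be enabled for any of this to work):

[mariadb]
log_bin
server_id=3000
binlog_format=ROW
binlog_row_image=FULL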
The servers parameter defines the set of servers where the data is
replicated from. The replication will be done from the first primary server
that is found.
The user and password of the service will be used to connect to the
primary. This user requires the REPLICATION SLAVE grant.
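A sketch of creating such a user on the MariaDB server; the account name and password below simply match the user and password used in the example service configuration:

CREATE USER 'maxuser'@'%' IDENTIFIED BY 'maxpwd';
GRANT REPLICATION SLAVE ON *.* TO 'maxuser'@'%';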
The KafkaCDC service must not be configured to use listeners. If a listener is configured, all attempts to start a session will fail.
Type: string
Mandatory: Yes
Dynamic: No
The list of Kafka brokers to use in host:port format. Multiple values
can be separated with commas. This is a mandatory parameter.
Type: string
Mandatory: Yes
Dynamic: No
The Kafka topic where the replicated events will be sent. This is a mandatory parameter.
Type: boolean
Mandatory: No
Dynamic: No
Default: false
Enable idempotent producer mode. This feature requires Kafka version 0.11 or newer to work and is disabled by default.
When enabled, the Kafka producer enters a strict mode which avoids event duplication due to broker outages or other network errors. In HA scenarios where there are more than two MaxScale instances, event duplication can still happen as there is no synchronization between the MaxScale instances.
The Kafka C library, librdkafka, describes the parameter as follows:
When set to true, the producer will ensure that messages are successfully produced exactly once and in the original produce order. The following configuration properties are adjusted automatically (if not modified by the user) when idempotence is enabled: max.in.flight.requests.per.connection=5 (must be less than or equal to 5), retries=INT32_MAX (must be greater than 0), acks=all, queuing.strategy=fifo.
Type: duration
Mandatory: No
Dynamic: Yes
Default: 10s
The connection and read timeout for the replication stream.
Type: string
Mandatory: No
Dynamic: No
Default: ""
The initial GTID position from where the replication is started. By default the replication is started from the beginning. The value of this parameter is only used if no previously replicated events with GTID positions can be retrieved from Kafka.
Starting in MaxScale 24.02, the special values newest and oldest can be
used:
newest uses the current value of @@gtid_binlog_pos as the GTID where the
replication is started from.
oldest uses the oldest binlog available in SHOW BINARY LOGS and
then extracts the oldest GTID from it with SHOW BINLOG EVENTS.
Once the replication has started and a GTID position has been recorded, this
parameter will be ignored. To reset the recorded GTID position, delete the current_gtid.txt file located in /var/lib/maxscale/<SERVICE>/ where <SERVICE> is the name of the KafkaCDC service.
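As a sketch, assuming a service named Kafka-CDC as in the example configuration further below, the recorded position could be reset like this (with MaxScale stopped so the file is not immediately rewritten):

systemctl stop maxscale
rm /var/lib/maxscale/Kafka-CDC/current_gtid.txt
systemctl start maxscale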
Type: number
Mandatory: No
Dynamic: No
Default: 1234
The server ID used when replicating from the primary in direct replication mode. The default value is 1234. This parameter was added in MaxScale 2.5.7.
Type: regex
Mandatory: No
Dynamic: Yes
Default: ""
Only include data from tables that match this pattern.
For example, if configured with match=accounts[.].*, only data from the accounts database is sent to Kafka.
The pattern is matched against the combined database and table name separated by
a period. This means that the event for the table t1 in the test database
would appear as test.t1. The behavior is the same even if the database or the
table name contains a period. This means that an event for the test.table
table in the my.data database would appear as my.data.test.table.
Here is an example configuration that only sends events for tables from the db1 database. The accounts and users tables in the db1 database are
filtered out using the exclude parameter.
Type: regex
Mandatory: No
Dynamic: Yes
Default: ""
Exclude data from tables that match this pattern.
For example, if configured with exclude=mydb[.].*, all data from the tables in
the mydb database is not sent to Kafka.
The pattern matching works the same way for both of the exclude and match
parameters. See match for an explanation on how the patterns are
matched against the database and table names.
Type: boolean
Mandatory: No
Dynamic: No
Default: false
Controls whether multiple instances cooperatively replicate from the same cluster. This is a boolean parameter and is disabled by default. It was added in MaxScale 6.0.
When this parameter is enabled and the monitor pointed to by the cluster
parameter supports cooperative monitoring (currently only mariadbmon), the
replication is only active if the monitor owns the cluster it is monitoring.
Whenever an instance that does not own the cluster gains ownership of the cluster, the replication will continue from the latest GTID that was delivered to Kafka.
This means that multiple MaxScale instances can replicate from the same set of
servers and the event is only processed once. This feature does not provide
exactly-once semantics for the Kafka event delivery. However, it does provide
high-availability for the kafkacdc instances which allows automated failover
between multiple MaxScale instances.
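A sketch of what this could look like in the service section; the cooperative_replication parameter name is assumed here, and the cluster parameter points the service at the monitor (as mentioned above) instead of listing servers directly:

[Kafka-CDC]
type=service
router=kafkacdc
cluster=MariaDB-Monitor
cooperative_replication=true
user=maxuser
password=maxpwd
bootstrap_servers=127.0.0.1:9092
topic=my-cdc-topic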
Type: boolean
Mandatory: No
Dynamic: Yes
Default: true
Send a JSON schema object into the stream whenever the table schema changes. These events, described in the DDL event format above, can be used to detect when the format of the data being sent changes.
If the information in these schema change events is not needed, or the code that processes the Kafka stream cannot handle them, they can be disabled with this parameter.
Type: boolean
Mandatory: No
Dynamic: No
Default: true
On startup, the latest GTID is by default read from the Kafka cluster. This makes it possible to recover the replication position stored by another MaxScale. Sometimes this is not desirable and the GTID should only be read from the local file or started anew. Examples of these are when the GTIDs are reset or the replication topology has changed.
Type: boolean
Mandatory: No
Dynamic: No
Default: false
Enable SSL for Kafka connections. This is a boolean parameter and is disabled by default.
Type: path
Mandatory: No
Dynamic: No
Default: ""
Path to the certificate authority file in PEM format. If this is not provided, the default system certificates will be used.
Type: path
Mandatory: No
Dynamic: No
Default: ""
Path to the public certificate in PEM format.
The client must provide a certificate if the Kafka server performs authentication of the client certificates. This feature is enabled by default in Kafka and is controlled by ssl.endpoint.identification.algorithm.
If kafka_ssl_cert is provided, kafka_ssl_key must also be provided.
Type: path
Mandatory: No
Dynamic: No
Default: ""
Path to the private key in PEM format.
If kafka_ssl_key is provided, kafka_ssl_cert must also be provided.
Type: string
Mandatory: No
Dynamic: No
Default: ""
Username for SASL authentication.
If kafka_sasl_user is provided, kafka_sasl_password must also be provided.
Type: string
Mandatory: No
Dynamic: No
Default: ""
Password for SASL authentication.
If kafka_sasl_password is provided, kafka_sasl_user must also be provided.
Type: enum
Mandatory: No
Dynamic: No
Values: PLAIN, SCRAM-SHA-256, SCRAM-SHA-512
Default: PLAIN
The SASL mechanism used. The default value is PLAIN which uses plaintext
authentication. It is recommended to enable SSL whenever plaintext
authentication is used.
Allowed values are:
PLAIN
SCRAM-SHA-256
SCRAM-SHA-512
The value that should be used depends on the SASL mechanism used by the Kafka broker.
The following configuration defines the minimal setup for streaming replication events from MariaDB into Kafka as JSON:
The KafkaCDC module provides at-least-once semantics for the generated events. This means that each replication event is delivered to Kafka at least once, but there can be duplicate events in case of failures.
This page is licensed: CC BY-SA / Gnu FDL
{
"namespace": "MaxScaleChangeDataSchema.avro",
"type": "record",
"name": "ChangeRecord",
"table": "t2", // name of the table
"database": "test", // the database the table is in
"version": 1, // schema version, incremented when the table format changes
"gtid": "0-3000-14", // GTID that created the current version of the table
"fields": [
{
"name": "domain", // First part of the GTID
"type": "int"
},
{
"name": "server_id", // Second part of the GTID
"type": "int"
},
{
"name": "sequence", // Third part of the GTID
"type": "int"
},
{
"name": "event_number", // Sequence number of the event inside the GTID
"type": "int"
},
{
"name": "timestamp", // UNIX timestamp when the event was created
"type": "int"
},
{
"name": "event_type", // Event type
"type": {
"type": "enum",
"name": "EVENT_TYPES",
"symbols": [
"insert", // The row that was inserted
"update_before", // The row before it was updated
"update_after", // The row after it was updated
"delete" // The row that was deleted
]
}
},
{
"name": "id", // Field name
"type": [
"null",
"long"
],
"real_type": "int", // Field type
"length": -1, // Field length, if found
"unsigned": false // Whether the field is unsigned
}
]
}

{
"domain": 0,
"server_id": 3000,
"sequence": 20,
"event_number": 1,
"timestamp": 1580485945,
"event_type": "insert",
"id": 1,
"table_name": "t2",
"table_schema": "test"
}

[Kafka-CDC]
type=service
router=kafkacdc
servers=server1
user=maxuser
password=maxpwd
bootstrap_servers=127.0.0.1:9092
topic=my-cdc-topic
match=db1[.]
exclude=db1[.](accounts|users)

# The server we're replicating from
[server1]
type=server
address=127.0.0.1
port=3306
# The monitor for the server
[MariaDB-Monitor]
type=monitor
module=mariadbmon
servers=server1
user=maxuser
password=maxpwd
monitor_interval=5s
# The MariaDB-to-Kafka CDC service
[Kafka-CDC]
type=service
router=kafkacdc
servers=server1
user=maxuser
password=maxpwd
bootstrap_servers=127.0.0.1:9092
topic=my-cdc-topic

This tutorial is an overview of what the MaxGUI offers as an alternative solution to MaxCtrl.
MaxScale object, i.e. Service, Server, Monitor, Filter, or Listener (clicking on it navigates to its detail page).
Create a new MaxScale object.
Dashboard Tab Navigation.
Search Input. This can be used as a quick way to search for a keyword in tables.
Dashboard graphs. Refresh interval is 10 seconds.
SESSIONS graph illustrates the total number of current sessions.
CONNECTIONS graph shows the servers' current connections.
LOAD graph shows the thread load over the last second.
Log out of the app.
Sidebar navigation menu. Access to the following pages: Dashboard, Visualization, Settings, Logs Archive, Query Editor
Click the Create New button (annotation 2) to open a dialog for creating a new object.
The replication status of a server monitored by MariaDB-Monitor can be viewed by mousing over the server name. A tooltip will be displayed with the following information: replication_state, seconds_behind_master, slave_io_running, slave_sql_running.
A session can be killed easily on the "Current Sessions" list which can be found on the Dashboard, Server detail, and Service detail page.
Kill session button. This button is shown on the mouse hover.
Confirm killing the session dialog.
This page shows information on each MaxScale object and allows editing its parameters and relationships and performing other manipulation operations. Most of the control buttons are shown on mouse hover. Below is a screenshot of a Monitor Detail page; other Detail pages have a similar layout, so this one is used for illustration purposes.
Settings option. Clicking on the gear icon shows icons for different operations depending on the type of the Detail page:
On the Monitor Detail page, there are icons to Stop, Start, and Destroy the monitor.
On the Service Detail page, there are icons to Stop, Start, and Destroy the service.
On the Server Detail page, there are icons to Set maintenance mode, Clear server state, Drain, and Delete the server.
On the Filter and Listener Detail pages, there is a delete icon to delete the object.
Switchover button. This button is shown on mouse hover and allows swapping the running primary server with a designated secondary server.
Edit parameters button. This button is shown on mouse hover and allows editing the MaxScale object's parameters. Clicking on it enables editable mode on the table. After finishing editing the parameters, simply click the Done Editing button.
A Detail page has tables showing the relationships with other MaxScale objects. The "unlink" icon is shown on mouse hover and allows removing the relationship between two objects.
This button is used to link other MaxScale objects to the relationship.
This page visualizes MaxScale configuration and clusters.
This page visualizes MaxScale configuration as shown in the figure below.
A MaxScale object (a node graph). The position of the node in the graph can be changed by dragging and dropping it.
Anchor link. The detail page of each MaxScale object can be accessed by clicking on the name of the node.
Filter visualization button. By default, if the number of filters used by a service is larger than 3, filter nodes aren't visualized as shown in the figure. Clicking this button will visualize them.
Hide filter visualization button.
Refresh rate dropdown. The frequency with which the data is refreshed.
Create a new MaxScale object button.
This page shows all monitor clusters using mariadbmon module in a card-like view. Clicking on the card will visualize the cluster into a tree graph as shown in the figure below.
Drag a secondary server on top of a primary server to promote the secondary server as the new primary server.
Server manipulation operations button. Showing a dropdown with the following operations:
Set maintenance mode: Setting a server to a maintenance mode.
Clear server state: Clear current server state.
Drain server: Drain the server of connections.
Quick access to query editor button. Opening the Query Editor page for
this server. If the connection is already created for that server, it'll use
it. Otherwise, it creates a blank worksheet and shows a connection dialog to
connect to that server.
Carousel navigation button. Viewing more information about the server in the next slide.
Collapse the carousel.
Anchor link of the server. Opening the detail page of the server in a new tab.
Collapse its children nodes.
Rejoin node. When the auto_rejoin parameter is disabled, the node can be
manually rejoined by dragging it on top of the primary server.
Monitor manipulation operations button. Showing a dropdown with the following operations:
Stop monitor.
Start monitor.
Reset Replication.
Release Locks.
Master failover. Manually performing a primary failover. This option is
visible only when the auto_failover parameter is disabled.
Refresh rate dropdown. The frequency with which the data is refreshed.
Create a new MaxScale object button.
This page shows and allows editing of MaxScale parameters.
Edit parameters button. This button is shown on mouse hover and allows editing the MaxScale parameters. Clicking on it enables editable mode on the table.
Editable parameters are visible, as illustrated in the screenshot.
After finishing editing the parameters, simply click the Done Editing button.
This page shows real-time MaxScale logs with filter options.
Filter by dropdown. All log types are selected to be shown by default.
Uncheck the box to disable showing a particular log type.
On this page, you may add numerous worksheets, each of which can be used for "Run queries", "Data migration" or "Create an ERD" task.
Clicking on the "Run Queries" card will open a dialog, providing options to establish a connection to different MaxScale object types, including "Listener, Server, Service".
The Query Editor worksheet will be rendered in the active worksheet after correctly connecting.
There are various features in the Query Editor worksheet, the most notable ones are listed below.
Create a new connection
If the connection of the Query Editor expires, or if you wish to make a new connection for the active worksheet, simply click on the button located on the right side of the query tabs navigation bar, which features a server icon and an active connection name as its label. This will open the connection dialog and allow you to create a new connection.
Schemas objects sidebar
Set the current database
There are two ways to set the current database:
Double-click on the name of the database.
Right-click on the name of the database to show the context menu, then select
the Use database option.
Preview table data of the top 1000 rows
There are two ways to preview data of a table:
Click on the name of the table.
Right-click on the name of the table to show the context menu, then select
the Preview Data (top 1000) option.
Describe table
Right-click on the name of the table to show the context menu, then select the View Details option.
Alter/Drop/Truncate table
Right-click on the name of the table to show the context menu, then select the desired option.
Quickly insert an object into the editor
There are two ways to quickly insert an object to the editor:
Drag the object and drop it in the desired position in the editor.
Right-click on the object to show the context menu, then hover over
the Place to Editor option and select the desired insert option.
Show object creation statement and insights info
To view the statement that creates a given object in the Schemas objects sidebar, right-click on a schema or table node and
select the View Insights option. For other objects such as views, stored
procedures, functions and triggers, select the Show Create option.
Editor
The editor is powered by Monaco editor, therefore, its features are similar to those of Visual Studio Code.
To see the command palette, press F1 while the cursor is active on the editor.
The editor also comes with various options to assist your querying tasks. To see available options, right-click on the editor to show the context menu.
Re-execute old queries
Every executed query will be saved in the browser's storage (IndexedDB).
Query history can be seen in the History/Snippets tab.
To re-execute a query, follow the same step to insert an object into the editor
and click the execute query button in the editor.
Create query snippet
Press CTRL/CMD + D to save the current SQL in the editor to the snippets storage. A snippet is created with a prefix keyword, so when that keyword is typed in the editor, it will be suggested in the "code completion" menu.
Generate an ERD
To initiate the process, either right-click on the schema name and select the Generate ERD option, or click on the icon button that resembles a line graph,
located on the schemas sidebar. This will open a dialog for selecting the
tables for the diagram.
Clicking on the "Data Migration" card will open a dialog, providing an option
to name the task. The Data Migration worksheet will be rendered in the active
worksheet after clicking the Create button in the dialog.
MaxScale uses ODBC for extracting data from the source and loading it into a server in MaxScale. Before starting a migration, ensure that you have set up the necessary configurations on the MaxScale server. Instructions can be found here and limitations here.
Connections
Source connection shows the most common parameter inputs for creating
an ODBC connection. For extra parameters, enable the Advanced mode
to manually edit the Connection String input.
After successfully connecting to both source and destination servers,
click on the Select objects to migrate to navigate to the next stage.
Objects Selection
Select the objects you wish to migrate to the MariaDB server.
After selecting the desired objects, click on the Prepare Migration Script to
navigate to the next stage. The migration scripts will be generated
differently based on the value selected for the Create mode input. Hover over
the question icon for additional information on the modes.
Migration
As shown in the screenshot, you can quickly modify the script for each object by selecting the corresponding object in the table and using the editors on the right-hand side to make any necessary changes.
After clicking the Start Migration button, the script for each object will be
executed in parallel.
Migration report
If errors are reported for certain objects, review the output messages and
adjust the script accordingly. Then, click the Manage button and select Restart.
To migrate additional objects, click the Manage button and select Migrate other objects. Doing so will replace the current migration
report with a new one.
To retain the report and terminate open connections after migration, click the Manage button, then select Disconnect, and finally delete the worksheet.
Deleting the worksheet will not delete the migration task. To clean up
everything after migration, click the Manage button, then select Delete.
There are various features in the ERD worksheet, the most notable ones are listed below.
From an empty new worksheet, clicking on the "Create an ERD" card will open a connection dialog. After connecting successfully, the ERD worksheet will be rendered in the active worksheet. The connection is required to retrieve essential metadata, such as engines, character sets, and collation.
Generate an ERD from the existing databases
Click on the icon button featured as a line graph, located on the top toolbar next to the connection button. This will open a dialog for selecting the tables for the diagram.
Create a new ERD
New tables can be created by using either of the following methods:
Click on the icon button that resembles a line graph, located on the top toolbar.
Right-click on the diagram board and select the Create Table
option.
Entity options
Two options are available: Edit Table and Remove from Diagram. These
options can be accessed using either of the following methods:
Right-click on the entity and choose the desired option.
Hover over the entity, click the gear icon button, and select the desired option.
To quickly edit or view the table definition, double-click on the entity. The entity editor will be shown at the bottom of the worksheet.
Foreign keys quick common options
Edit Foreign Key, this opens an editor for viewing/editing foreign keys.
Remove Foreign Key.
Change to One To One or Change to One To Many. Toggles the uniqueness
of the foreign key column.
Set FK Column Mandatory or Set FK Column Optional. Toggles the NOT NULL option of the foreign key column.
Set Referenced Column Mandatory or Set Referenced Column Optional.
Toggles the NOT NULL option of the referenced column.
To show the above foreign key common options, perform a right-click on the link within the diagram.
Viewing foreign key constraint SQL
Hover over the link in the diagram, the constraint SQL of that foreign key will be shown in a tooltip.
Quickly draw a foreign key link
As shown in the screenshot, a foreign key can be quickly established by performing the following actions:
Click on the entity that will have the foreign key.
Click on the connecting point of the desired foreign key column and drag it over the desired referenced column.
Entity editor
Table columns, foreign keys and indexes can be modified via the entity editor which can be accessed quickly by double-clicking on the entity.
Export options
Three options are available: Copy script to clipboard, Export script and Export as jpeg. These options can be accessed using either of the following
methods:
Right-click on the diagram board and choose the desired option.
Click the export icon button, and select the desired option.
Applying the script
Click the icon button resembling a play icon to execute the generated script for all tables in the diagram. This action will prompt a confirmation dialog for script execution. If needed, the script can be manually edited using the editor within the dialog.
Visual Enhancement options
In the first section of the top toolbar, there are options to improve the visuals of the diagram:
Change the shape of links
Drawing foreign keys to columns
Auto-arrange entities
Highlight relationship
Zoom control
This page is licensed: CC BY-SA / Gnu FDL
The avrorouter is a MariaDB 10.0 binary log to Avro file converter. It consumes binary logs from a local directory and transforms them into a set of Avro files. These files can then be queried by clients for various purposes.
This router is intended to be used in tandem with the Binlog Server. The Binlog Server can connect to a primary server and request binlog records. These records can then be consumed by the avrorouter directly from the binlog cache of the Binlog Server. This allows MariaDB MaxScale to automatically transform binlog events on the primary into local Avro format files.
The avrorouter can also consume binary logs straight from the primary. This will remove the need to configure the Binlog Server but it will increase the disk space requirement on the primary server by at least a factor of two.
The converted Avro files can be requested with the CDC protocol. This protocol should be used to communicate with the avrorouter and currently it is the only supported protocol. The clients can request either Avro or JSON format data streams from a database table.
MaxScale 2.4.0 added a direct replication mode that connects the avrorouter directly to a MariaDB server. This mode is an improvement over the binlogrouter based replication as it provides a more space-efficient and faster conversion process. This is the recommended method of using the avrorouter as it is faster, more efficient and less prone to errors caused by missing DDL events.
To enable the direct replication mode, add either the servers or the cluster
parameter to the avrorouter service. The avrorouter will then use one of the
servers as the replication source.
Here is a minimal avrorouter direct replication configuration:
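A minimal sketch is shown below; the server, service and credential values are placeholders:
[server1]
type=server
address=127.0.0.1
port=3306
[avro-service]
type=service
router=avrorouter
servers=server1
user=maxuser
password=maxpwd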
In direct replication mode, the avrorouter stores the latest replicated GTID in
the current_gtid.txt file located in the avrodir (defaults to /var/lib/maxscale). To reset the replication process, stop MaxScale and remove
the file.
Additionally, the avrorouter will attempt to automatically create any missing schema files for tables that have data events for them but the DDL for those tables is not contained in the binlogs.
For information about common service parameters, refer to the .
gtid_start_pos
Type: string
Mandatory: No
Dynamic: No
Default: ""
The GTID where avrorouter starts the replication from in direct replication mode. The parameter value must be in the MariaDB GTID format e.g. 0-1-123 where the first number is the replication domain, the second the server_id value of the server and the last is the GTID sequence number.
This parameter has no effect in the traditional mode. If this parameter is not defined, the replication will start from the implicit GTID that the primary first serves.
Starting in MaxScale 24.02, the special values newest and oldest can be
used:
newest uses the current value of @@gtid_binlog_pos as the GTID where the
replication is started from.
oldest uses the oldest binlog that's available in SHOW BINARY LOGS and
then extracts the oldest GTID from it with SHOW BINLOG EVENTS.
Once the replication has started and a GTID position has been recorded, this
parameter will be ignored. To reset the recorded GTID position, delete the current_gtid.txt file located in /var/lib/maxscale/<SERVICE>/ where <SERVICE> is the name of the Avrorouter service.
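For example (the GTID value below is illustrative):
gtid_start_pos=0-1-123
With MaxScale 24.02 or later, the special values can also be used, for example:
gtid_start_pos=newest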
server_id
Type: number
Mandatory: No
Dynamic: No
Default: 1234
The server ID used when replicating from the primary in direct replication mode.
codec
Type: enum
Mandatory: No
Dynamic: No
Values: null, deflate
Default: null
The compression codec to use. By default, the avrorouter does not use compression.
This parameter takes one of the following two values: null or deflate. These are the mandatory compression algorithms required by the Avro specification. For more information about the compression types, refer to the .
match and exclude
Type: regex
Mandatory: No
Dynamic: No
Default: ""
These parameters filter events for processing based on table names. The avrorouter does not support the options parameter for regular expressions.
To prevent excessive matching of similarly named tables, surround each table
name with the ^ and $ tokens. For example, to match the test.clients table
but not test.clients_old table use match=^test[.]clients$. For multiple
tables, surround each table in parentheses and add a pipe character between
them: match=(^test[.]t1$)|(^test[.]t2$).
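For example, a sketch that captures only tables in the test database while skipping test.clients_old (the names are illustrative):
match=^test[.]
exclude=^test[.]clients_old$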
binlogdir
Type: path
Mandatory: No
Dynamic: No
Default: /var/lib/maxscale/
The location of the binary log files. This parameter defines where the module will read binlog files from. Read access to this directory is required.
avrodir
Type: path
Mandatory: No
Dynamic: No
Default: /var/lib/maxscale/
The location where the Avro files are stored. This directory is used to store the Avro data files, plain-text Avro schemas and other files needed by the avrorouter. The user running MariaDB MaxScale will need both read and write access to this directory.
The avrorouter will also use the avrodir to store various internal files. These files are named avro.index and avro-conversion.ini. By default, the default data directory, /var/lib/maxscale/, is used. Before version 2.1 of MaxScale, the value of binlogdir was used as the default value for avrodir.
filestem
Type: string
Mandatory: No
Dynamic: No
Default: mysql-bin
The base name of the binlog files. The binlog files are assumed to follow the naming scheme <filestem>.<N> where <N> is the binlog number and <filestem> is the value of this router option.
For example, with the following parameters:
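(A sketch of the referenced parameters, reconstructed from the resulting file name below:)
filestem=mybin
binlogdir=/var/lib/mysql/binlogs/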
The first binlog file the avrorouter would look for is /var/lib/mysql/binlogs/mybin.000001.
start_index
Type: number
Mandatory: No
Dynamic: No
Default: 1
The starting index number of the binlog file. The default value is 1. For the binlog mysql-bin.000001 the index would be 1, for mysql-bin.000005 the index would be 5.
If you need to start from a binlog file other than 1, you need to set the value of this option to the correct index. The avrorouter will always start from the beginning of the binary log file.
cooperative_replication
Type: boolean
Mandatory: No
Dynamic: No
Default: false
Controls whether multiple instances cooperatively replicate from the same cluster. This is a boolean parameter and is disabled by default. It was added in MaxScale 6.0.
When this parameter is enabled and the monitor pointed to by the cluster
parameter supports cooperative monitoring (currently only mariadbmon),
the replication is only active if the monitor owns the cluster it is
monitoring.
With this feature, multiple MaxScale instances can replicate from the same set of servers and only one of them actively processes the replication stream. This allows the avrorouter instances to be made highly-available without having to have them all process the events at the same time.
Whenever an instance that does not own the cluster gains ownership of the cluster, the replication will continue from the latest GTID processed by that instance. This means that if the instance hasn't replicated events that have been purged from the binary logs, the replication cannot continue.
Avro File Related Parameters
These options control how large the Avro file data blocks can get. Increasing or lowering the block size could have a positive effect depending on your use case. For more information about the Avro file format and how it organizes data, refer to the .
The avrorouter will flush a block and start a new one when either group_trx
transactions or group_rows row events have been processed. Changing these
options will also allow more frequent updates to stored data but this
will cause a small increase in file size and search times.
It is highly recommended to keep the block sizes relatively large to allow larger chunks of memory to be flushed to disk at one time. This will make the conversion process noticeably faster.
group_trx
Type: number
Mandatory: No
Dynamic: No
Default: 1
Controls the number of transactions that are grouped into a single Avro data block.
group_rows
Type: number
Mandatory: No
Dynamic: No
Default: 1000
Controls the number of row events that are grouped into a single Avro data block.
block_size
Type: size
Mandatory: No
Dynamic: Yes
Default: 16KiB
The Avro data block size in bytes. The default is 16 kilobytes. Increase this value if individual events in the binary logs are very large. The value is a size type parameter which means that it can also be defined with an SI suffix. Refer to the for more details about size type parameters and how to use them.
max_file_size
Type: size
Mandatory: No
Dynamic: No
Default: 0
If the size of a single Avro data file exceeds this limit, the avrorouter will rotate to a new file. This is done by closing the existing file and creating a new one with the next version number. By default the avrorouter does not rotate files based on their size. Setting the value to 0 disables file rotation based on size.
This uses the size of the file as reported by the operating system. The check for the file size is done after a transaction has been processed which means that large transactions can still cause the file size to exceed the given limit.
File rotation only works with the direct replication mode. The legacy file based replication mode does not support this.
max_data_age
Type: duration
Mandatory: No
Dynamic: No
Default: 0s
When enabled, the avrorouter will automatically purge any files that only have data that is older than the given limit. This means that all data files with at least one event that is newer than the configured limit will not be removed, even if the age of all the other events is above the limit. The purge operation is only done when a file rotation takes place (either manual or automatic) or when a schema change is detected.
This parameter is best combined with max_file_size to provide automatic
removal of stale data.
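For example, a sketch that rotates data files once they exceed 100 MiB and purges files whose events are all older than 24 hours (the values are illustrative):
max_file_size=100MiB
max_data_age=24h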
Automatic file purging only works with the direct replication mode. The legacy file based replication mode does not support this.
Example configuration
Read documentation for details about module commands.
The avrorouter supports the following module commands.
avrorouter::convert SERVICE {start | stop}
Start or stop the binary log to Avro conversion. The first parameter is the name of the service and the second parameter tells whether to start the conversion process or to stop it.
avrorouter::purge SERVICE
This command will delete all files created by the avrorouter. This includes all .avsc schema files and .avro data files as well as the internal state tracking files. Use this to completely reset the conversion process.
Note: Once the command has completed, MaxScale must be restarted to restart
the conversion process. Issuing a convert start command will not work.
WARNING: You will lose any and all converted data when this command is executed.
The avrorouter creates two files in the location pointed to by avrodir: avro.index and avro-conversion.ini. The avro.index file is used to store the locations of the GTIDs in the .avro files. The avro-conversion.ini contains the last converted position and GTID in the binlogs. If you need to reset the conversion process, delete these two files and restart MaxScale.
To reset the binlog conversion process, issue the purge module command by
executing it via MaxCtrl and stop MaxScale. If manually created schema files
were used, they need to be recreated once MaxScale is stopped. After stopping
MaxScale and optionally creating the schema files, the conversion process can be
started by starting MaxScale.
The safest way to stop the avrorouter when used with the binlogrouter is to follow these steps:
Issue STOP SLAVE on the binlogrouter
Wait for the avrorouter to process all files
Stop MaxScale with systemctl stop maxscale
This guarantees that the conversion process halts at a known good position in the latest binlog file.
The avrorouter comes with an example client program, cdc.py, written in Python 3. This client can connect to a MaxScale configured with the CDC protocol and the avrorouter.
Before using this client, you will need to install the Python 3 interpreter and add users to the service with the cdc_users.py script. For more details about the user creation, please refer to the and documentation.
Read the output of cdc.py --help for a full list of supported options
and a short usage description of the client program.
The avrorouter needs to have access to the CREATE TABLE statement for all tables for which there are data events in the binary logs. If the CREATE TABLE statements for the tables aren't present in the current binary logs, the schema files must be created.
In the direct replication mode, avrorouter will automatically create the missing
schema files by connecting to the database and executing a SHOW CREATE TABLE
statement. If a connection cannot be made or the service user lacks the
permission, an error will be logged and the data events for that table will not
be processed.
For the legacy binlog mode, the files must be generated with a schema file generator. There are currently two methods to generate the .avsc schema files.
The cdc_one_schema.py generates a schema file for a single table by reading a
tab separated list of field and type names from the standard input. This is the
recommended schema generation tool as it does not directly communicate with the
database thus making it more flexible.
The only requirement to run the script is that a Python interpreter is installed.
To use this script, pipe the output of the mysql command line client into the cdc_one_schema.py script:
Replace the <user>, <host>, <port>, <database> and <table> with
appropriate values and run the command. Note that the -ss parameter is
mandatory as that will generate the tab separated output instead of the default
pretty-printed output.
An .avsc file named after the database and table name will be generated in the
current working directory. Copy this file to the location pointed to by the avrodir parameter of the avrorouter.
Alternatively, you can also copy the output of the mysql command to a file and
feed it into the script if you cannot execute the SQL command directly:
If you want to use a specific Python interpreter instead of the one found in the
search path, you can modify the first line of the script from #!/usr/bin/env python to #!/path/to/python where /path/to/python is the absolute path to
the Python interpreter (both Python 2 and Python 3 can be used).
The cdc_schema.py executable is installed as a part of MaxScale. This is a Python 3 script that generates Avro schema files from an existing database.
The script will generate the .avsc schema files into the current directory. Run
the script for all required databases and copy the generated .avsc files to the
directory where the avrorouter stores the .avro files (the value of avrodir).
The cdc_schema.go example Go program is provided with MaxScale. This file
can be used to create Avro schemas for the avrorouter by connecting to a
database and reading the table definitions. You can find the file in MaxScale's
share directory in /usr/share/maxscale/.
You'll need to install the Go compiler and run go get to resolve Go
dependencies before you can use the cdc_schema program. After resolving the
dependencies you can run the program with go run cdc_schema.go. The program
will create .avsc files in the current directory. These files should be moved
to the location pointed by the avrodir option of the avrorouter if they are
to be used by the router itself.
Read the output of go run cdc_schema.go -help for more information on how
to run the program.
The shows you how the Avrorouter works with the Binlog Server to convert binlogs from a primary server into easy to process Avro data.
Here is a simple configuration example which reads binary logs locally from /var/lib/mysql/ and stores them as Avro files in /var/lib/maxscale/avro/.
The service has one listener listening on port 4001 for CDC protocol clients.
Here is an example how you can query for data in JSON format using the cdc.py Python script. It queries the table test.mytable for all change records.
You can then combine it with the cdc_kafka_producer.py to publish these change records to a Kafka broker.
For more information on how to use these scripts, see the output of cdc.py -h
and cdc_kafka_producer.py -h.
To build the avrorouter from source, you will need the Avro C library, liblzma, and sqlite3 development
headers. When configuring MaxScale with CMake, you will need to add -DBUILD_CDC=Y to build the CDC module set.
The Avro C library needs to be built with position independent code enabled. You can do this by adding the following flags to the CMake invocation when configuring the Avro C library.
For more details about building MaxScale from source, please refer to the document.
The router_diagnostics output for an avrorouter service contains the following
fields.
infofile: File where the avrorouter stores the conversion process state.
avrodir: Directory where avro files are stored
binlogdir: Directory where binlog files are read from
binlog_name: Current binlog name
binlog_pos: Current binlog position
gtid: Current GTID
gtid_timestamp: Current GTID timestamp
gtid_event_number: Current GTID event number
The avrorouter does not support the following data types, conversions or SQL statements:
BIT
Fields CAST from integer types to string types
The avrorouter does not do any crash recovery. This means that the avro files need to be removed or truncated to valid block lengths before starting the avrorouter.
This page is licensed: CC BY-SA / Gnu FDL
This document describes the settings supported by all monitors. These should be defined in the monitor section of the configuration file.
module
Type: string
Mandatory: Yes
Dynamic: No
The monitor module this monitor should use. Typically mariadbmon or galeramon.
user
Type: string
Mandatory: Yes
Dynamic: Yes
Username used by the monitor to connect to the backend servers. If a server defines
the monitoruser parameter, that value will be used instead.
password
Type: string
Mandatory: Yes
Dynamic: Yes
Password for the user defined with the user parameter. If a server defines
the monitorpw parameter, that value will be used instead.
Note: In older versions of MaxScale this parameter was called passwd. The
use of passwd was deprecated in MaxScale 2.3.0.
servers
Type: string
Mandatory: Yes
Dynamic: Yes
A comma-separated list of servers the monitor should monitor.
monitor_interval
Type: duration
Mandatory: No
Dynamic: Yes
Default: 2s
Defines how often the monitor updates the status of the servers. Choose a lower
value if servers should be queried more often. The smallest possible value is
100 milliseconds. If querying the servers takes longer than monitor_interval,
the effective update rate is reduced.
The interval is specified as documented . If no explicit unit is provided, the value is interpreted as milliseconds in MaxScale 2.4. In subsequent versions a value without a unit may be rejected.
backend_connect_timeout
Type: duration
Mandatory: No
Dynamic: Yes
Default: 3s
This parameter controls the timeout for connecting to a monitored server. The interval is specified as documented . If no explicit unit is provided, the value is interpreted as seconds in MaxScale 2.4. In subsequent versions a value without a unit may be rejected. Note that since the granularity of the timeout is seconds, a timeout specified in milliseconds will be rejected, even if the duration is longer than a second. The minimum value is 1 second.
backend_write_timeout
Type: duration
Mandatory: No
Dynamic: Yes
Default: 3s
This parameter controls the timeout for writing to a monitored server. The timeout is specified as documented . If no explicit unit is provided, the value is interpreted as seconds in MaxScale 2.4. In subsequent versions a value without a unit may be rejected. Note that since the granularity of the timeout is seconds, a timeout specified in milliseconds will be rejected, even if the duration is longer than a second. The minimum value is 1 second.
backend_read_timeout
Type: duration
Mandatory: No
Dynamic: Yes
Default: 3s
This parameter controls the timeout for reading from a monitored server. The timeout is specified as documented . If no explicit unit is provided, the value is interpreted as seconds in MaxScale 2.4. In subsequent versions a value without a unit may be rejected. Note that since the granularity of the timeout is seconds, a timeout specified in milliseconds will be rejected, even if the duration is longer than a second. The minimum value is 1 second.
backend_connect_attempts
Type: number
Mandatory: No
Dynamic: Yes
Default: 1
This parameter defines the maximum number of times a backend connection is attempted in each
monitoring loop. Each attempt may take up to backend_connect_timeout seconds
to perform. If none of the attempts are successful, the backend is considered to
be unreachable and down.
disk_space_threshold
Type: string
Mandatory: No
Dynamic: Yes
Default: None
This parameter duplicates the server-level disk_space_threshold parameter.
If the parameter has not been specified for a server, then the one specified
for the monitor is applied.
NOTE: Since MariaDB 10.4.7, MariaDB 10.3.17 and MariaDB 10.2.26, the
information will be available only if the monitor user has the FILE
privilege.
That is, if the disk configuration is the same on all servers monitored by the monitor, it is sufficient (and more convenient) to specify the disk space threshold in the monitor section, but if the disk configuration is different on all or some servers, then the disk space threshold can be specified individually for each server.
For example, suppose server1, server2 and server3 are identical
in all respects. In that case we can specify disk_space_threshold
in the monitor.
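For example (a sketch; the monitor name, mount point and limit are illustrative):
[TheMonitor]
type=monitor
module=mariadbmon
servers=server1,server2,server3
disk_space_threshold=/data:80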
However, if the servers are heterogeneous with the disk used for the data directory mounted on different paths, then the disk space threshold must be specified separately for each server.
If most of the servers have the data directory disk mounted on the same path, then the disk space threshold can be specified on the monitor and separately on the server with a different setup.
Above, server1 has the disk used for the data directory mounted
at /DbData while both server2 and server3 have it mounted on/data and thus the setting in the monitor covers them both.
disk_space_check_interval
Type: duration
Mandatory: No
Dynamic: Yes
Default: 0s
This parameter specifies the minimum amount of time between disk space checks. The interval is specified as documented . If no explicit unit is provided, the value is interpreted as milliseconds in MaxScale 2.4. In subsequent versions a value without a unit may be rejected. The default value is 0, which means that by default the disk space will not be checked.
Note that as the checking is made as part of the regular monitor interval
cycle, the disk space check interval is affected by the value of monitor_interval. In particular, even if the value of disk_space_check_interval is smaller than that of monitor_interval,
the checking will still take place at monitor_interval intervals.
script
Type: string
Mandatory: No
Dynamic: Yes
Default: None
This command will be executed on a server state change. The parameter should be an absolute path to a command or the command should be in the executable path. The user running MaxScale should have execution rights to the file itself and the directory it resides in. The script may have placeholders which MaxScale will substitute with useful information when launching the script.
The placeholders and their substitution results are:
$INITIATOR -> IP and port of the server which initiated the event
$EVENT -> event description, e.g. "server_up"
$LIST -> list of IPs and ports of all servers
$NODELIST -> list of IPs and ports of all running servers
$MASTERLIST -> list of IPs and ports of all primary servers
$SLAVELIST -> list of IPs and ports of all replica servers
$SYNCEDLIST -> list of IPs and ports of all synced Galera nodes
$PARENT -> IP and port of the parent of the server which initiated the event. For primary-replica setups, this will be the primary if the initiating server is a replica.
$CHILDREN -> list of IPs and ports of the child nodes of the server which initiated the event. For primary-replica setups, this will be a list of replica servers if the initiating server is a primary.
The expanded variable value can be an empty string if no servers match the
variable's requirements. For example, if no primaries are available $MASTERLIST
will expand into an empty string. The list-type substitutions will only contain
servers monitored by the current monitor.
The above script could be executed as:
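(A sketch, assuming the monitor section contains script=/home/user/myscript.sh initiator=$INITIATOR event=$EVENT live_nodes=$NODELIST; the expanded invocation could then look like this:)
/home/user/myscript.sh initiator=[192.168.0.10]:3306 event=master_down live_nodes=[192.168.0.201]:3306,[192.168.0.121]:3306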
See section below for an example script.
Any output by the executed script will be logged into the MaxScale log. Each outputted line will be logged as a separate log message.
The log level on which the messages are logged depends on the format of the
messages. If the first word in the output line is one of alert:, error:, warning:, notice:, info: or debug:, the message will be logged on the
corresponding level. If the message is not prefixed with one of the keywords,
the message will be logged on the notice level. Whitespace before, after or
between the keyword and the colon is ignored and the matching is
case-insensitive.
Currently, the script must not execute any of the following MaxCtrl calls as they cause a deadlock:
alter monitor to the monitor executing the script
stop monitor to the monitor executing the script
call command to a MariaDB-Monitor that is executing the script
script_timeout
Type: duration
Mandatory: No
Dynamic: Yes
Default: 90s
The timeout for the executed script. The interval is specified as documented . If no explicit unit is provided, the value is interpreted as seconds in MaxScale 2.4. In subsequent versions a value without a unit may be rejected. Note that since the granularity of the timeout is seconds, a timeout specified in milliseconds will be rejected, even if the duration is longer than a second.
If the script execution exceeds the configured timeout, it is stopped by sending a SIGTERM signal to it. If the process does not stop, a SIGKILL signal will be sent to it once the execution time is greater than twice the configured timeout.
events
Type: enum
Mandatory: No
Dynamic: Yes
Values: master_down, master_up, slave_down, slave_up, server_down, server_up, lost_master, lost_slave, new_master, new_slave
Default: All events
A list of event names which cause the script to be executed. If this option is not defined, all events cause the script to be executed. The list must contain a comma separated list of event names.
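For example, to run the script only when a primary or replica goes down (a sketch):
events=master_down,slave_down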
The following table contains all the possible event types and their descriptions.
master_down: A Primary server has gone down
master_up: A Primary server has come up
slave_down: A Replica server has gone down
slave_up: A Replica server has come up
server_down: A server with no assigned role has gone down
server_up: A server with no assigned role has come up
lost_master: A server lost Primary status
lost_slave: A server lost Replica status
new_master: A new Primary was detected
new_slave: A new Replica was detected
journal_max_age
Type: duration
Mandatory: No
Dynamic: Yes
Default: 28800s
The maximum journal file age. The interval is specified as documented . If no explicit unit is provided, the value is interpreted as seconds in MaxScale 2.4. In subsequent versions a value without a unit may be rejected. Note that since the granularity of the max age is seconds, a max age specified in milliseconds will be rejected, even if the duration is longer than a second.
When the monitor starts, it reads any stored journal files. If the journal file is older than the value of journal_max_age, it will be removed and the monitor starts with no prior knowledge of the servers.
Starting with MaxScale 2.2.0, the monitor modules keep an on-disk journal of the latest server states. This change makes the monitors crash-safe when options that introduce states are used. It also allows the monitors to retain stateful information when MaxScale is restarted.
For MySQL monitor, options that introduce states into the monitoring process are
the detect_stale_master and detect_stale_slave options, both of which are
enabled by default. Galeramon has the disable_master_failback parameter which
introduces a state.
The default location for the server state journal is in /var/lib/maxscale/<monitor name>/monitor.dat where <monitor name> is the
name of the monitor section in the configuration file. If MaxScale crashes or is
shut down in an uncontrolled fashion, the journal will be read when MaxScale is
started. To skip the recovery process, manually delete the journal file before
starting MaxScale.
Below is an example monitor configuration which launches a script with all supported substitutions. The example script reads the arguments, writes the result to a file and sends it as email.
File "maxscale_monitor_alert_script.sh":
#!/usr/bin/env bash

initiator=""
parent=""
children=""
event=""
node_list=""
list=""
master_list=""
slave_list=""
synced_list=""

process_arguments()
{
    while [ "$1" != "" ]; do
        if [[ "$1" =~ ^--initiator=.* ]]; then
            initiator=${1#'--initiator='}
        elif [[ "$1" =~ ^--parent.* ]]; then
            parent=${1#'--parent='}
        elif [[ "$1" =~ ^--children.* ]]; then
            children=${1#'--children='}
        elif [[ "$1" =~ ^--event.* ]]; then
            event=${1#'--event='}
        elif [[ "$1" =~ ^--node_list.* ]]; then
            node_list=${1#'--node_list='}
        elif [[ "$1" =~ ^--list.* ]]; then
            list=${1#'--list='}
        elif [[ "$1" =~ ^--master_list.* ]]; then
            master_list=${1#'--master_list='}
        elif [[ "$1" =~ ^--slave_list.* ]]; then
            slave_list=${1#'--slave_list='}
        elif [[ "$1" =~ ^--synced_list.* ]]; then
            synced_list=${1#'--synced_list='}
        fi
        shift
    done
}

process_arguments $@

read -r -d '' MESSAGE << EOM
A server has changed state. The following information was provided:

Initiator: $initiator
Parent: $parent
Children: $children
Event: $event
Node list: $node_list
List: $list
Primary list: $master_list
Replica list: $slave_list
Synced list: $synced_list
EOM

echo "$MESSAGE" > /path/to/script_output.txt
echo "$MESSAGE" | mail -s "MaxScale received $event event for initiator $initiator." mariadb_admin@domain.com
This page is licensed: CC BY-SA / Gnu FDL
The diff-router, hereafter referred to as Diff, compares the
behaviour of one MariaDB server version to that of another.
Diff will send the workload both to the server currently being used - called main - and to another server - called other - whose behaviour needs to be assessed.
The responses from main are returned to the client, without waiting for the responses from other. While running, Diff collects latency histogram data that later can be used for evaluating the behaviour of main and other.
Although Diff is a normal MaxScale router that can be configured manually, typically it is created using commands provided by the router itself. As its only purpose is to compare the behaviour of different servers, it is only meaningful to start it provided certain conditions are fulfilled and those conditions are easily ensured using the router itself.
Diff collects latency information separately for each canonical
statement, which simply means a statement where all literals have been
replaced with question marks. For instance, the canonical statement of SELECT f FROM t WHERE f = 10 and SELECT f FROM t WHERE f = 20 is
in both cases SELECT f FROM t WHERE f = ?. The latency information
of both of those statements will be collected under the same canonical
statement.
Before starting to register histogram data, Diff will collect samples from main that will be used for defining the edges and the number of bins of the histogram.
The responses from main and other are considered to be different
if the checksums of the responses from main and other differ, or
if the response time of other is outside the boundaries of the histogram edges calculated from the samples from main.
A difference in the response time of individual queries is not a meaningful criterion, as for varying reasons (e.g. network traffic) there can be a significant amount of variance in the results. It would only cause a large number of false positives.
When a discrepancy is detected, an EXPLAIN statement will be executed if the query was a DELETE, SELECT, INSERT or UPDATE. The EXPLAIN will be executed using the same connection that was used for executing the original statement. In the normal case, the EXPLAIN will be executed immediately after the original statement, but if the client is streaming requests, another statement may have been executed in between.
EXPLAINs are not always executed; the frequency is controlled by explain_entries and explain_period. The EXPLAIN results are included in the output of Diff.
While running, Diff will also collect QPS information over a sliding window whose size is defined by qps_window.
Diff produces two kinds of output:
Output that is generated when Diff terminates or upon request. That output can be visualized as explained below.
Diff can continuously report queries whose responses from main and other differ, as described below.
When Diff starts it will create a directory diff in MaxScale's
data directory (typically /var/lib/maxscale). Under that it
will create a directory whose name is the same as that of the
service specified in the service parameter. The output files are created
in that directory.
The behaviour and usage of Diff is most easily explained using an example.
Consider the following simple configuration that only includes the very essential.
There is a service MyService that uses a single server MyServer1,
which, for this example, is assumed to run MariaDB 10.5.
Suppose that the server should be upgraded to 11.2 and we want to find out whether there would be some issues with that.
In order to use Diff for comparing the behaviour of MariaDB 10.5 and MariaDB 11.2, the following steps must be taken.
Install MariaDB 11.2 on a host that performance wise is similar to the host on which MariaDB 10.5 is running.
Configure the MariaDB 11.2 server to replicate from the MariaDB 10.5 server.
Create a server entry for the MariaDB 11.2 server in the MaxScale configuration.
The created entry could be something like:
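A sketch (the address is illustrative):
[MariaDB_112]
type=server
address=192.168.1.3
port=3306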
With these steps Diff is ready to be used.
Diff is controlled using a number of module commands.
Create
Syntax: create new-service existing-service used-server new-server
Where:
new-service: The name of the service using the Diff router, to be
created.
existing-service: The name of an existing service in whose context
the new server is to be evaluated.
used-server: A server used by existing-service
new-server: The server that should be compared to used-server.
With this command, preparations for comparing the server MariaDB_112
against the server MyServer1 of the service MyService will be made.
At this point it will be checked in what kind of replication relationship MariaDB_112 is with respect to MyServer1. If the steps above were followed, it will be detected that MariaDB_112 replicates from MyServer1.
If everything seems to be in order, the service DiffMyService will be
created. Settings such as user and password that are needed by the
service DiffMyService will be copied from MyService.
Using maxctrl we can check that the service indeed has been created.
Now the comparison can be started.
Start
Syntax: start diff-service
Where:
diff-service: The name of the service created in the create step.
When Diff is started, it performs the following steps:
All sessions of MyService are suspended.
In the MyService service, the server target MyServer1
is replaced with DiffMyService.
The replication from MyServer1 to MariaDB_112 is stopped.
The sessions are restarted, which will cause existing connections
to MyServer1 to be closed and new ones to be created, via Diff,
to both MyServer1 and MariaDB_112.
The sessions are resumed, which means that the client traffic will continue.
In the first step, all sessions that are idle will immediately be suspended, which simply means that nothing is read from the client socket. Sessions that are waiting for a response from the server and sessions that have an active transaction continue to run. Immediately when a session becomes idle, it is suspended.
Once all sessions have been suspended, the service is rewired.
In the case of MyService above, it means that the target MyServer1 is replaced with DiffMyService. That is, requests
that earlier were sent to MyServer1, will, once the sessions
are resumed, be sent to DiffMyService, which sends them forward
to both MyServer1 and MariaDB_112.
Restarting the sessions means that the direct connections to MyServer1 will be closed and equivalent ones created via the
service DiffMyService, which will also create connections
to MariaDB_112.
When the sessions are resumed, client requests will again be
processed, but they will now be routed via DiffMyService.
With maxctrl we can check that MyService has been rewired.
The target of MyService is DiffMyService instead of MyServer1
that it used to be.
The output object returned by create tells the current state.
The sessions object shows how many sessions there are in total
and how many that currently are suspended. Since there were no
existing sessions in this example, they are both 0.
The state shows what Diff is currently doing. synchronizing
means that it is in the process of changing MyService to use DiffMyService. sync_state shows that it is currently in the
process of suspending sessions.
Status
Syntax: status diff-service
Where:
diff-service: The name of the service created in the create step.
When Diff has been started, its current status can be checked with the
command status. The output is the same as what was returned when
Diff was started.
The state is now comparing, which means that everything is ready
and clients can connect in normal fashion.
Summary
Syntax: summary diff-service
Where:
diff-service: The name of the service created in the create step.
While Diff is running, it is possible at any point to request a summary.
The summary consists of two files, one for the main server and
one for the other server. The files are written to a subdirectory
with the same name as the Diff service, which is created in the
subdirectory diff in the data directory of MaxScale.
Assuming the data directory is the default /var/lib/maxscale,
the directory would in this example be/var/lib/maxscale/diff/DiffMyService.
The names of the files will be the server name, concatenated with a timestamp. In this example, the names of the files could be:
The visualization of the results is done using the maxvisualize program.
Stop
Syntax: stop diff-service
Where:
diff-service: The name of the service created in the create step.
The comparison can be stopped with the command stop.
Stopping Diff reverses the effect of starting it:
All sessions are suspended.
In the service, 'DiffMyService' is replaced with 'MyServer1'.
The sessions are restarted.
The sessions are resumed.
As the sessions have to be suspended, it may take a while before the operation has completed. The status can be checked with the 'status' command.
Destroy
Syntax: destroy diff-service
Where:
diff-service: The name of the service created in the create step.
As the final step, the command destroy can be called to
destroy the service.
The visualization of the data is done with the maxvisualize program,
which is part of the Capture functionality. The visualization will
open up a browser window to show the visualization.
If no browser opens up, the visualization URL is also printed to the command line.
In the case of the example above, the directory where the output files
are created would be /var/lib/maxscale/diff/MyService. And the files
to be used when visualizing would be called something like MyServer1_2024-05-07_140323.json and MariaDB_112_2024-05-07_140323.json. The timestamp will be different
every time summary is executed.
The order is significant; the first argument is the baseline and the second argument the results compared to the baseline.
If the value of report is something other than never, Diff
will continuously log results to a file whose name is the concatenation
of the main and other server names followed by a timestamp. In the example
above, the name would be something like MyServer1_MariaDB_112_2024-02-15_152838.json.
Each line (here expanded for readability) in the file will look like:
The meaning of the fields are as follows:
id: Running number, increases for each query, but will not be in strict increasing order if a statement needed to be EXPLAINed and the following did not.
session: The session id.
command: The protocol packet type.
query: The SQL of the query.
results: Array of results, one for each target. Each result has the following fields:
target: The server the result relates to.
checksum: The checksum of the result.
rows: How many rows were returned.
warnings: The number of warnings.
duration: The execution duration in nanoseconds.
type: The type of the result: resultset, ok or error.
explain: The result of the EXPLAIN FORMAT=JSON statement.
Instead of an explain object, there may be an explained_by array,
containing the ids of similar statements (i.e. their canonical
statement is the same) that were EXPLAINed.
Diff can run in a read-only or read-write mode and the mode is deduced from the replication relationship between main and other.
If other replicates from main, it is assumed that main is the primary. In this case Diff will, when started, stop the replication from main to other. When the comparison ends Diff will, depending on the value of reset_replication, either reset the replication from main to other or leave the situation as it is.
If other and main replicate from a third server, it is assumed that main is a replica. In this case, Diff will, when started, leave the replication as it is and do nothing when the comparison ends.
If the replication relationship between main and other is anything else, Diff will refuse to start.
main
Type: server
Mandatory: Yes
Dynamic: No
The main target from which results are returned to the client. Must be a server and must be one of the servers listed in .
If the connection to the main target cannot be created or is lost mid-session, the client connection will be closed.
service
Type: service
Mandatory: Yes
Dynamic: No
Specifies the service Diff will modify.
explain
Type: enum
Mandatory: No
Dynamic: Yes
Values: none, other, both
Default: both
Specifies whether a request should be EXPLAINed on only other, both other and main or neither.
explain_entries
Type: non-negative integer
Mandatory: No
Dynamic: Yes
Default: 2
Specifies how many times at most a particular canonical statement is EXPLAINed during the period specified by explain_period.
explain_period
Type: duration
Mandatory: No
Dynamic: Yes
Default: 15m
Specifies the length of the period during which at most explain_entries EXPLAINs are executed for a statement.
max_request_lag
Type: non-negative integer
Mandatory: No
Dynamic: Yes
Default: 10
Specifies the maximum number of requests other may be lagging behind main before the execution of SELECTs against other are skipped to bring it back in line with main.
on_error
Type: enum
Mandatory: No
Dynamic: Yes
Values: close, ignore
Default: ignore
Specifies whether an error from other will cause the session to be closed. By default it will not.
percentile
Type: count
Mandatory: No
Dynamic: Yes
Min: 1
Default: 99
Specifies the percentile of samples that will be considered when calculating the width and number of bins of the histogram.
qps_window
Type: duration
Mandatory: No
Dynamic: No
Default: 15m
Specifies the size of the sliding window during which QPS is calculated and stored. When a summary is requested, the QPS information will also be saved.
report
Type: enum
Mandatory: No
Dynamic: Yes
Values: always, on_discrepancy, never
Default: on_discrepancy
Specifies when the results of executing a statement on other and main should be logged: always, only when there is a significant difference (on_discrepancy), or never.
reset_replication
Type: boolean
Mandatory: No
Dynamic: Yes
Default: true
If Diff has started in read-write mode and the value of reset_replication is true, when the comparison ends
it will execute the following on other:
RESET SLAVE
START SLAVE
If Diff has started in read-only mode, the value of reset_replication
will be ignored.
Note that since Diff writes updates directly to both main and other, there is no guarantee that it will be possible to simply
start the replication. Especially not if gtid_strict_mode
is on.
retain_faster_statements
Type: non-negative integer
Mandatory: No
Dynamic: Yes
Default: 5
Specifies the number of faster statements that are retained in memory. The statements will be saved in the summary when the comparison ends, or when Diff is explicitly instructed to do so.
retain_slower_statements
Type: non-negative integer
Mandatory: No
Dynamic: Yes
Default: 5
Specifies the number of slower statements that are retained in memory. The statements will be saved in the summary when the comparison ends, or when Diff is explicitly instructed to do so.
samples
Type: count
Mandatory: No
Dynamic: Yes
Min: 100
Specifies the number of samples that will be collected in order to define the edges and number of bins of the histograms.
Diff is currently not capable of adapting to any changes made in the cluster configuration. For instance, if Diff starts up in read-only mode and main is subsequently made primary, Diff will not sever the replication from main to other. The result will be that other receives the same writes twice; once via the replication from the server it is replicating from and once when Diff executes the same writes.
This page is licensed: CC BY-SA / Gnu FDL
[maxscale]
threads=auto
[server1]
type=server
address=127.0.0.1
port=3306
[cdc-service]
type=service
router=avrorouter
servers=server1
user=maxuser
password=maxpwd
[cdc-listener]
type=listener
service=cdc-service
protocol=CDC
port=4001
filestem=mybin
binlogdir=/var/lib/mysql/binlogs/
[replication-router]
type=service
router=binlogrouter
router_options=server-id=4000,binlogdir=/var/lib/mysql,filestem=binlog
user=maxuser
password=maxpwd
[avro-router]
type=service
router=avrorouter
binlogdir=/var/lib/mysql
filestem=binlog
avrodir=/var/lib/maxscale
mysql -ss -u <user> -p -h <host> -P <port> -e 'DESCRIBE `<database>`.`<table>`' | ./cdc_one_schema.py <database> <table>
# On the database server
mysql -ss -u <user> -p -h <host> -P <port> -e 'DESCRIBE `<database>`.`<table>`' > schema.tsv
# On the MaxScale server
./cdc_one_schema.py <database> <table> < schema.tsv
usage: cdc_schema.py [--help] [-h HOST] [-P PORT] [-u USER] [-p PASSWORD] DATABASE
[avro-converter]
type=service
router=avrorouter
user=myuser
password=mypasswd
router_options=binlogdir=/var/lib/mysql/,
filestem=binlog,
avrodir=/var/lib/maxscale/avro/
[avro-listener]
type=listener
service=avro-converter
protocol=CDC
port=4001
cdc.py --user=myuser --password=mypasswd --host=127.0.0.1 --port=4001 test.mytable
cdc.py --user=myuser --password=mypasswd --host=127.0.0.1 --port=4001 test.mytable |
cdc_kafka_producer.py --kafka-broker 127.0.0.1:9092 --kafka-topic test.mytable
-DCMAKE_C_FLAGS=-fPIC -DCMAKE_CXX_FLAGS=-fPIC
servers=MyServer1,MyServer2
monitor_interval=2s
backend_connect_timeout=3s
backend_write_timeout=3s
backend_read_timeout=3s
backend_connect_attempts=1
[server1]
type=server
...
[server2]
type=server
...
[server3]
type=server
...
[monitor]
type=monitor
servers=server1,server2,server3
disk_space_threshold=/data:80
...
[server1]
type=server
disk_space_threshold=/data:80
...
[server2]
type=server
disk_space_threshold=/Data:80
...
[server3]
type=server
disk_space_threshold=/DBData:80
...
[monitor]
type=monitor
servers=server1,server2,server3
...
[server1]
type=server
disk_space_threshold=/DbData:80
...
[server2]
type=server
...
[server3]
type=server
...
[monitor]
type=monitor
servers=server1,server2,server3
disk_space_threshold=/data:80
...
script=/home/user/myscript.sh initiator=$INITIATOR event=$EVENT live_nodes=$NODELIST
/home/user/myscript.sh initiator=[192.168.0.10]:3306 event=master_down live_nodes=[192.168.0.201]:3306,[192.168.0.121]:3306
events=master_down,slave_down
[MyMonitor]
type=monitor
module=mariadbmon
servers=C1N1,C1N2,C1N3
user=maxscale
password=password
monitor_interval=10s
script=/path/to/maxscale_monitor_alert_script.sh --initiator=$INITIATOR --parent=$PARENT --children=$CHILDREN --event=$EVENT --node_list=$NODELIST --list=$LIST --master_list=$MASTERLIST --slave_list=$SLAVELIST --synced_list=$SYNCEDLIST
[MyServer1]
type=server
address=192.168.1.2
port=3306
[MyService]
type=service
router=readwritesplit
servers=MyServer1
...
[MariaDB_112]
type=server
address=192.168.1.3
port=3306
protocol=mariadbbackend
maxctrl call command diff create DiffMyService MyService MyServer1 MariaDB_112
{
"status": "Diff service 'DiffMyService' created. Server 'MariaDB_112' ready to be evaluated."
}
maxctrl list services
┌───────────────┬────────────────┬─────────────┬───────────────────┬────────────────────────┐
│ Service │ Router │ Connections │ Total Connections │ Targets │
├───────────────┼────────────────┼─────────────┼───────────────────┼────────────────────────┤
│ MyService │ readwritesplit │ 0 │ 0 │ MyServer1 │
├───────────────┼────────────────┼─────────────┼───────────────────┼────────────────────────┤
│ DiffMyService │ diff │ 0 │ 0 │ MyServer1, MariaDB_112 │
└───────────────┴────────────────┴─────────────┴───────────────────┴────────────────────────┘
maxctrl call command diff start DiffMyService
{
"sessions": {
"suspended": 0,
"total": 0
},
"state": "synchronizing",
"sync_state": "suspending_sessions"
}
maxctrl list services
┌───────────────┬────────────────┬─────────────┬───────────────────┬────────────────────────┐
│ Service │ Router │ Connections │ Total Connections │ Targets │
├───────────────┼────────────────┼─────────────┼───────────────────┼────────────────────────┤
│ MyService │ readwritesplit │ 0 │ 0 │ DiffMyService │
├───────────────┼────────────────┼─────────────┼───────────────────┼────────────────────────┤
│ DiffMyService │ diff │ 0 │ 0 │ MyServer1, MariaDB_112 │
└───────────────┴────────────────┴─────────────┴───────────────────┴────────────────────────┘
{
"sessions": {
"suspended": 0,
"total": 0
},
"state": "synchronizing",
"sync_state": "suspending_sessions"
}
maxctrl call command diff status DiffMyService
{
"sessions": {
"suspended": 0,
"total": 0
},
"state": "comparing",
"sync_state": "not_applicable"
}
maxctrl call command diff summary DiffMyService
OK
MyServer1_2024-05-07_140323.json
MariaDB_112_2024-05-07_140323.json
maxctrl call command diff stop DiffMyService
{
"sessions": {
"suspended": 0,
"total": 0
},
"state": "stopping",
"sync_state": "suspending_sessions"
}

maxctrl call command diff destroy DiffMyService

OK

maxvisualize MyServer1_2024-05-07_140323.json MariaDB_112_2024-05-07_140323.json

{
"id": 1,
"session": 1,
"command": "COM_QUERY",
"query": "select @@version_comment limit 1",
"results": [
{
"target": "MyServer1",
"checksum": "0f491b37",
"rows": 1,
"warnings": 0,
"duration": 257805,
"type": "resultset",
"explain": { ... }
},
{
"target": "MariaDB_112",
"checksum": "0f491b37",
"rows": 1,
"warnings": 0,
"duration": 170043,
"type": "resultset",
"explain": { ... }
}
]
}

RESET SLAVE
START SLAVE

This document describes version 1 of the MaxScale REST API.
Although JSON does not define a syntax for comments, some of the JSON examples
have C-style inline comments in them. These comments use // to mark the start
of the comment and extend to the end of the current line.
Read the REST API section of the configuration guide for more details on how to configure the REST API.
The MaxScale REST API uses HTTP Basic Access
authentication with the MaxScale administrative interface users. The default
user is admin:mariadb.
It is highly recommended to enable HTTPS on the MaxScale REST API to make the communication between the client and MaxScale secure. Without it, the passwords can be intercepted from the network traffic. Refer to the Configuration Guide for more details on how to enable HTTPS for the MaxScale REST API.
For more details on how administrative interface users are created and managed, refer to the MaxCtrl documentation as well as the documentation of the users resource.
MaxScale supports authentication via JSON Web Tokens.
The /v1/auth endpoint can be used to generate new tokens which are returned in
the following form.
Note that by default the /auth endpoint requires the connection to be
encrypted (HTTPS) and attempts to use it without encryption will be treated as
an error. To allow use of the /auth endpoint without encryption, use admin_secure_gui=false.
If the token is used to authenticate users in a web browser, the token can be
optionally stored in cookies. This can be enabled with the persist=yes
parameter in the request:
GET /v1/auth?persist=yes
When the token is stored in the cookies, it will be stored in the token_sig
cookie using the SameSite=Strict and HttpOnly cookie options. This means the
JavaScript context of the browser will not have access to it. This is done to
prevent CSRF attacks.
By default, the generated tokens are valid for 8 hours. The token validity
period can be set with the max-age request parameter:
GET /v1/auth?max-age=28800
When max-age is combined with persist, the Max-Age cookie option is
also set to the same value.
The maximum lifetime of the tokens is controlled by the admin_jwt_max_age
parameter. If the configured value is less than 8 hours, the default token
lifetime is also lowered to it. A request for a token with a lifetime longer
than admin_jwt_max_age will be accepted but the token will have its lifetime
set to admin_jwt_max_age.
To use the token for authentication, the generated token must be presented in the Authorization header with the Bearer authentication scheme. For example, the token above would be used in the following manner:
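As a minimal curl sketch, assuming the default REST API address of localhost:8989 and a shortened token:

# -k may be needed if the REST API uses a self-signed TLS certificate
curl -k -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9..." https://localhost:8989/v1/servers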
If MaxScale is restarted, all generated tokens are invalidated.
/auth Request Parameters
The /auth endpoint supports the following request parameters that must be
given in the HTTP query string.
max-age
Sets the token maximum age in seconds. The default is max-age=28800. Only
positive values between 1 and 2147483646 are accepted and if a non-positive
or a non-integer value is found, the parameter is ignored.
persist
Store the generated token in cookies instead of returning it as the response body.
This parameter expects only one value, yes, as its argument. When persist=yes is set, the token is stored in the token_sig cookie and the
response is 204 No Content instead of 200 OK.
The token_sig cookie contains the JWT and is stored as a HttpOnly cookie
which prevents access to it from JavaScript. This is done to mitigate any
attacks that might leak the token.
The MaxScale REST API provides the following resources. All resources conform to the JSON API specification.
In addition to the named resources, the REST API will respond with a HTTP 200 OK
response to GET requests on the root resource (/) as well as the namespace
root resource (/v1/). These can be used for HTTP health checks to determine
whether MaxScale is running.
All of the current resources are in the /v1/ namespace of the MaxScale REST
API. Further additions can be made to the namespace without breaking backwards
compatibility of any existing resources. What this means in practice is that:
No resources or URLs will be removed
The API will be JSON API compliant
Note that this means that the contents of individual resources can change. New fields can be added, old ones can be removed and the meaning of existing fields can change. The aim is to be as backwards compatible as reasonably possible without sacrificing the clarity and functionality of the API.
Since MaxScale 2.4.0, the use of the version prefix /v1/ is optional: if the
prefix is not used, the latest API version is used.
All resources return complete JSON objects. The returned objects can have a relationships field that represents any relations the object has to other objects. This closely resembles the JSON API definition of links.
In the relationships objects, all resources have a self link that points to the resource itself. This allows easy access to the objects pointed by the relationships as the reply URL is included in the response itself.
To create a relationship between two objects, define it in the initial POST
request. To modify the relationships of existing objects, perform a PATCH
request with the new definition of the relevant relationship. To completely
remove all relationships from an object, the data field of the corresponding
relationship object must be set to an empty array.
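As an illustrative sketch (object and relationship names are hypothetical), clearing the servers relationship of a service could be done with a PATCH body such as:

{
    "data": {
        "relationships": {
            "servers": {
                "data": [] // an empty array removes all server relationships
            }
        }
    }
}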
The following lists the resources and the types of links each resource can have in addition to the self link. Examples of these relationships can be seen in the resource documentation.
services - Service resource
servers
List of servers used by the service
services
List of services used by the service
filters
List of filters used by the service
NOTE: This is an ordered relationship where the order of the filters
defines the order in which they process queries.
listeners
List of listeners used by the service
monitors - Monitor resource
servers
List of servers used by the monitor
filters - Filter resource
services
List of services that use this filter
NOTE: This is a one-way relationship that can only be modified from the services resource.
servers - Server resource
services
List of services that use this server
monitors
List of monitors that use this server
listeners - Listener resource
services
The service that the listener points to
All parameters that use boolean values use the same rules that are used for the boolean values in the
MaxScale configuration. For example, both pretty=off and pretty=false
disable the pretty option.
All the resources that return JSON content also support the following
parameters. Parameters are given in the HTTP query string, for example: https://localhost:8989/v1/servers?pretty=true&fields[servers]=state.
pretty
Pretty-print output.
If this parameter is set to true then the returned objects are formatted
in a more human readable format. If the parameter is set to false then the
returned objects are formatted in a compact format. All resources support
this parameter. The default value for this parameter is true.
fields[TYPE]=field1,field2...
Return a subset of the object's fields.
This parameter controls which fields are returned in the REST API
response. The TYPE value in the fields parameter must be the resource
type that is being retrieved (i.e. the servers in /v1/servers and /v1/servers/server1). The value of the parameter must be a comma-separated
list of JSON Pointers that mark which
fields of the object to return. Only fields in objects in the attributes
and relationships objects are inspected. This means that if the path
marked by the JSON Pointer contains an array in it, it will not advance past
this array.
For example, to return only the server state output from the /servers
endpoint, the fields[servers]=state parameter can be used. This would
return only the data.attributes.state field of each server. A field deeper in the object can be selected with a JSON Pointer, e.g. fields[servers]=statistics/connections returns the data.attributes.statistics.connections field.
filter=json_ptr=expr
Filter the output of the result
This parameter controls which rows are returned in a REST API response that
returns an array in the data member (i.e. a request to a resource
collection). Requests to individual resources are not filtered.
The argument to the filter parameter must be a key-value pair with a valid JSON Pointer as the key and either a
valid JSON value as the value or a filter expression. The comparison is done for each
individual object in the data array of the result. If given only a JSON
value, the stored value is compared for equality. If an expression is used,
the expression is evaluated and only rows that match are returned.
For example, if the object stored in data[0] has a value pointed by the
given JSON pointer and that value compares equal to the given JSON value,
the array row is kept in the result. Examples of filtering expressions can
be found in the Filter Expression Examples section below.
A practical use for this parameter is to return only sessions for a
particular service. For example, to return sessions for the RW-Split-Router service, the filter=/relationships/services/data/0/id="RW-Split-Router" parameter can
be used. Note the double quotes around the "RW-Split-Router", they are
required to correctly convert strings into JSON values.
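A curl sketch of that request, assuming the default admin credentials and REST API address; the single quotes preserve the inner double quotes:

curl -k -u admin:mariadb 'https://localhost:8989/v1/sessions?filter=/relationships/services/data/0/id="RW-Split-Router"'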
filter[json_path]=expr
Filter based on a JSONPath and a filtering expression.
Similar to the filter parameter that takes a JSON Pointer, this version of
the filter controls which rows are returned in a REST API response for a
resource collection. Requests to individual resources are never filtered.
The value inside the brackets must be a valid JSONPath
expression that MaxScale supports. The currently supported syntax is:
dot notation: $.store.book
bracket notation: $['store']['book']
The root object being optional is an extension to the JSONPath specification
that MaxScale implements.
The expr value must be a filter-expression. Similarly to the other filter
parameter, the comparison is done for each individual object in the data
array of the result.
If the JSONPath expression returns multiple objects, the comparison is done
for each element and if any of them matches, the object is considered to
match. In other words, wildcard JSONPath expressions are ORed together.
If multiple filter[json_path]=expr parameters are found in the request,
all returned values must match all of them. In other words, the filter
parameters are all combined into an AND expression. For example, the
following filter will return only the values whose id field is "srv1" and
the attributes.parameters.port field is 3306: filter[id]=eq("srv1")&filter[attributes.parameters.port]=eq(3306)
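A curl sketch of the same combined filter, assuming default credentials; the -g option disables curl's URL globbing so the brackets are passed through unmodified:

curl -k -g -u admin:mariadb 'https://localhost:8989/v1/servers?filter[id]=eq("srv1")&filter[attributes.parameters.port]=eq(3306)'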
page[size]
The number of elements that are returned for resource collections. By
default all elements in the resource collection are returned. The value must
be a valid positive integer, otherwise the parameter is ignored. If
pagination is used, the links object will have pagination links to the
next page if more elements are available.
page[number]
How many pages of results to skip. The first page of results starts from 0
and each page has no more than page[size] elements. If defined, page[size] must also be defined, otherwise this parameter is ignored. This
should be considered pseudo-pagination as the results are not guaranteed to
be consistent between requests.
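For example, a sketch that requests the first page of ten servers, assuming default credentials:

curl -k -g -u admin:mariadb 'https://localhost:8989/v1/servers?page[size]=10&page[number]=0'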
sync
Control configuration synchronization.
If this parameter is set to false then the configuration synchronization
is disabled for this request. This can be used to perform configuration
changes when MaxScale is unable to reach the cluster used to synchronize the
configuration. The modifications to the local configuration will be
overwritten when the next modification to the cluster's configuration is
done which means this should only be used to perform temporary fixes.
MaxScale 24.02 added support for expressions in the filter request
parameter. Each resource in a resource collection that evaluates to a true value
will be kept in the returned result. All rows that evaluate to false are
removed.
Equality and inequality is defined for all JSON types but ordering is defined only for numbers and strings. The logical operators allow one or more sub-expressions.
The following table lists the supported operations in the filter expressions. In
it, the stored value is marked as S and the literal JSON value in the
expression as V. For the logical operators, the sub-expression is marked as expr.
eq(json): S == V (all JSON types)
ne(json): S != V (all JSON types)
lt(json): S < V (numbers and strings)
le(json): S <= V (numbers and strings)
Filter Expression Examples
Ranges of values can be defined using an and() expression with the range
limits defined with the ordering operators ge() and le(). To filter the
sessions in MaxScale to ones that have an ID between 50 and 100, the following
filtering expression can be used:
filter=id=and(ge(50),le(100))
Limiting the result to only the given values can be done with an or()
expression that uses eq() expressions to select the rows to return. To only
return sessions with IDs 1, 5, 10 the following filtering expression can be
used:
filter=id=or(eq(1),eq(5),eq(10))
Similarly, excluding certain rows from the result can be done by simply
replacing or() with not(). This expression would exclude sessions with the
IDs 1, 5 and 10 from the result:
filter=id=not(eq(1),eq(5),eq(10))
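As a usage sketch, such an expression is simply appended to the query string of a collection endpoint (default credentials and address assumed):

curl -k -u admin:mariadb 'https://localhost:8989/v1/sessions?filter=id=and(ge(50),le(100))'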
REST makes use of the HTTP protocols in its aim to provide a natural way to understand the workings of an API. The following request headers are understood by this API.
Authorization
Credentials for authentication. This header should consist of a HTTP Basic
Access authentication type payload which is the base64 encoded value of the
username and password joined by a colon e.g. Base64("maxuser:maxpwd").
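A minimal sketch with curl, assuming the default admin:mariadb credentials; the -u option makes curl build the same header automatically, so the two forms below are equivalent:

curl -k -u admin:mariadb https://localhost:8989/v1/servers
curl -k -H "Authorization: Basic $(echo -n 'admin:mariadb' | base64)" https://localhost:8989/v1/servers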
Content-Type
All PUT and POST requests must use the Content-Type: application/json media
type and the request body must be a complete and valid JSON representation of a
resource. All PATCH requests must use the Content-Type: application/json media
type and the request body must be a JSON document containing a partial
definition of the modified resource.
Host
The address and port of the server.
If-Match
The request is performed only if the provided ETag value matches the one on the server. This field should be used with PATCH requests to prevent concurrent updates to the same resource.
The value of this header must be a value from the ETag header retrieved from
the same resource at an earlier point in time.
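A hedged sketch of the pattern; the resource name, ETag value and payload are illustrative:

# 1. Read the resource and note the ETag response header, e.g. ETag: "42"
curl -k -i -u admin:mariadb https://localhost:8989/v1/servers/server1
# 2. Apply the change only if the resource has not been modified in the meantime
curl -k -u admin:mariadb -X PATCH -H 'If-Match: "42"' -H 'Content-Type: application/json' \
     -d '{"data":{"attributes":{"parameters":{"port":3307}}}}' \
     https://localhost:8989/v1/servers/server1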
If-Modified-Since
If the content has not changed the server responds with a 304 status code. If the content has changed the server responds with a 200 status code and the requested resource.
The value of this header must be a date value in the "HTTP-date" format.
If-None-Match
If the content has not changed the server responds with a 304 status code. If the content has changed the server responds with a 200 status code and the requested resource.
The value of this header must be a value from the ETag header retrieved from
the same resource at an earlier point in time.
If-Unmodified-Since
The request is performed only if the requested resource has not been modified since the provided date.
The value of this header must be a date value in the "HTTP-date" format.
X-HTTP-Method-Override
Some clients only support GET and PUT requests. By providing the string value of
the intended method in the X-HTTP-Method-Override header, a client can, for
example, perform a POST, PATCH or DELETE request with the PUT method
(e.g. X-HTTP-Method-Override: PATCH).
If this header is defined in the request, the current method of the request is replaced with the one in the header. The HTTP method must be in uppercase and it must be one of the methods that the requested resource supports.
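A minimal sketch that issues a PATCH through a PUT request; the resource and payload are illustrative:

curl -k -u admin:mariadb -X PUT -H 'X-HTTP-Method-Override: PATCH' \
     -H 'Content-Type: application/json' \
     -d '{"data":{"attributes":{"parameters":{"port":3307}}}}' \
     https://localhost:8989/v1/servers/server1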
Allow
All resources return the Allow header with the supported HTTP methods. For
example, the resource /services will always return the Allow: GET, PATCH, PUT
header.
Accept-Patch
All PATCH capable resources return the Accept-Patch: application/json-patch
header.
Date
Returns the RFC 1123 standard form date when the reply was sent. The date is in English and it uses the server's local timezone.
ETag
An identifier for a specific version of a resource. The value of this header changes whenever a resource is modified via the REST API. It will not change if an internal MaxScale event (e.g. server changing state or statistics being updated) causes a change.
When the client sends the If-Match or If-None-Match header, the provided
value should be the value of the ETag header of an earlier GET.
Last-Modified
The date when the resource was last modified in "HTTP-date" format.
Location
If an out of date resource location is requested, a HTTP return code of 3XX with
the Location header is returned. The value of the header contains the new
location of the requested resource as a relative URI.
WWW-Authenticate
The requested authentication method. For example, WWW-Authenticate: Basic
would require basic HTTP authentication.
Mxs-Warning
This header is used for sending generic warnings to clients about actions that were successful and valid but could cause problems in the future. Currently these are used to indicate when a configuration change was made to a static object and an overriding configuration is created or when a static object is being deleted at runtime.
The content of the header is the human-readable warning that should be displayed to a user.
Every HTTP response starts with a line with a return code which indicates the outcome of the request. The API uses some of the standard HTTP values:
200 OK
Successful HTTP requests, response has a body.
201 Created
A new resource was created.
202 Accepted
The request has been accepted for processing, but the processing has not been completed.
204 No Content
Successful HTTP requests, response has no body.
This class of status code indicates the client must take additional action to complete the request.
301 Moved Permanently
This and all future requests should be directed to the given URI.
302 Found
The response to the request can be found under another URI using the same method as in the original request.
303 See Other
The response to the request can be found under another URI using a GET method.
304 Not Modified
Indicates that the resource has not been modified since the version specified by the request headers If-Modified-Since or If-None-Match.
307 Temporary Redirect
The request should be repeated with another URI but future requests should use the original URI.
308 Permanent Redirect
The request and all future requests should be repeated using another URI.
The 4xx class of status code is used when the client seems to have erred. Except when responding to a HEAD request, the body of the response MAY contain a JSON representation of the error.
The error field contains a short error description and the description field contains a more detailed version of the error message.
400 Bad Request
The server cannot or will not process the request due to client error.
401 Unauthorized
Authentication is required. The response includes a WWW-Authenticate header.
403 Forbidden
The request was a valid request, but the client does not have the necessary permissions for the resource.
404 Not Found
The requested resource could not be found.
405 Method Not Allowed
A request method is not supported for the requested resource.
406 Not Acceptable
The requested resource is capable of generating only content not acceptable according to the Accept headers sent in the request.
409 Conflict
Indicates that the request could not be processed because of a conflict in the request, such as an edit conflict between multiple simultaneous updates.
411 Length Required
The request did not specify the length of its content, which is required by the requested resource.
412 Precondition Failed
The server does not meet one of the preconditions that the requester put on the request.
413 Payload Too Large
The request is larger than the server is willing or able to process.
414 URI Too Long
The URI provided was too long for the server to process.
415 Unsupported Media Type
The request entity has a media type which the server or resource does not support.
422 Unprocessable Entity
The request was well-formed but was unable to be followed due to semantic errors.
423 Locked
The resource that is being accessed is locked.
428 Precondition Required
The origin server requires the request to be conditional. This error code is
returned when none of the Modified-Since or Match type headers are used.
431 Request Header Fields Too Large
The server is unwilling to process the request because either an individual header field, or all the header fields collectively, are too large.
The server failed to fulfill an apparently valid request.
500 Internal Server Error
A generic error message, given when an unexpected condition was encountered and no more specific message is suitable.
501 Not Implemented
The server either does not recognize the request method, or it lacks the ability to fulfill the request.
502 Bad Gateway
The server was acting as a gateway or proxy and received an invalid response from the upstream server.
503 Service Unavailable
The server is currently unavailable (because it is overloaded or down for maintenance). Generally, this is a temporary state.
504 Gateway Timeout
The server was acting as a gateway or proxy and did not receive a timely response from the upstream server.
505 HTTP Version Not Supported
The server does not support the HTTP protocol version used in the request.
506 Variant Also Negotiates
Transparent content negotiation for the request results in a circular reference.
507 Insufficient Storage
The server is unable to store the representation needed to complete the request.
508 Loop Detected
The server detected an infinite loop while processing the request (sent in lieu of 208 Already Reported).
510 Not Extended
Further extensions to the request are required for the server to fulfil it.
The following response headers are not currently in use. Future versions of the API could return them.
206 Partial Content
The server is delivering only part of the resource (byte serving) due to a range header sent by the client.
300 Multiple Choices
Indicates multiple options for the resource from which the client may choose (via agent-driven content negotiation).
407 Proxy Authentication Required
The client must first authenticate itself with the proxy.
408 Request Timeout
The server timed out waiting for the request. According to HTTP specifications: "The client did not produce a request within the time that the server was prepared to wait. The client MAY repeat the request without modifications at any later time."
410 Gone
Indicates that the resource requested is no longer available and will not be available again.
416 Range Not Satisfiable
The client has asked for a portion of the file (byte serving), but the server cannot supply that portion.
417 Expectation Failed
The server cannot meet the requirements of the Expect request-header field.
421 Misdirected Request
The request was directed at a server that is not able to produce a response.
424 Failed Dependency
The request failed due to failure of a previous request.
426 Upgrade Required
The client should switch to a different protocol such as TLS/1.0, given in the Upgrade header field.
429 Too Many Requests
The user has sent too many requests in a given amount of time. Intended for use with rate-limiting schemes.
This page is licensed: CC BY-SA / Gnu FDL
The binlogrouter is a router that acts as a replication proxy for MariaDB primary-replica replication. The router connects to a primary, retrieves the binary logs and stores them locally. Replica servers can connect to MaxScale like they would connect to a normal primary server. If the primary server goes down, replication between MaxScale and the replicas can still continue up to the latest point to which the binlogrouter replicated. The primary can be changed without disconnecting the replicas and without them noticing that the primary server has changed. This allows for a more highly available replication setup.
In addition to the high availability benefits, the binlogrouter creates only one connection to the primary whereas with normal replication each individual replica will create a separate connection. This reduces the amount of work the primary database has to do which can be significant if there are a large number of replicating replicas.
File purge and archive are mutually exclusive. Archiving simply means that a binlog is moved to another directory. That directory can be mounted to another file system for backups or, for example, a locally mounted S3 bucket.
If archiving is started from a primary that still has all its history intact, a full copy of the primary can be archived.
File compression preserves disk space and makes archiving faster. All binlogs except the very last one, which is the one being logged to, can be compressed. The overhead of reading from a compressed binlog is small, and is typically only needed when a replica goes down, reconnects and is far enough behind the current GTID that an older file needs to be opened.
There is no automated way as of yet for the binlogrouter to use archived files, but should the need arise files can be copied from the archive to the binlog directory. See .
The related configuration options, which are explained in more detail in the
configuration section, are:
expiration_mode: Select purge or archive.
datadir: Directory where binlog files are stored (the default is usually fine).
archivedir: Directory to which files are archived. This directory must exist when MaxScale is started.
expire_log_minimum_files: The minimum number of binlogs to keep before purge or archive is allowed.
expire_log_duration: Duration from the last file modification until the binlog is eligible for purge or archive.
compression_algorithm: Select a compression algorithm or none for no compression. Currently only zstandard is supported.
number_of_noncompressed_files: The minimum number of binlogs not to compress.
Following are example settings where it is expected that a replica is down for no more than 24 hours.
There is usually no reason to modify the contents of the binlog directory. Changing the contents can cause failures if not done correctly. Never make any changes if running a version prior to 23.08, except when a bootstrap is needed.
A binlog file has the name <basename>.<sequence_number>. The basename is decided by the primary server. The sequence number increases by one for each file and is six digits long. The first file has the name <basename>.000001.
The file binlog.index contains the view of the current state of the binlogs
as a list of file names ordered by the file sequence number. binlog.index is
automatically generated any time the contents of datadir changes.
Older files can be manually deleted and should be deleted in the order they were created, lowest to highest sequence number. Prefer to use purge configuration.
Archived files can be copied back to datadir, but care should be taken to copy
them back in the reverse order they were created, highest to lowest sequence number.
The copied over files will be re-archived once expire_log_duration time has
passed.
Never leave a gap in the sequence numbers, and always preserve the name of a binlog file if copied. Do not copy binlog files on top of existing binlog files.
As of version 24.02 any binlog except the latest one can be manually compressed, e.g.:
zstd --rm -z binlog.001234
The binlogrouter supports a subset of the SQL constructs that the MariaDB server supports. The following commands are supported:
CHANGE MASTER TO
The binlogrouter supports the same syntax as the MariaDB server but only the following values are allowed:
MASTER_HOST
NOTE: MASTER_LOG_FILE and MASTER_LOG_POS are not supported
as binlogrouter only supports GTID based replication.
STOP SLAVE
Stops replication, same as MariaDB.
START SLAVE
Starts replication, same as MariaDB.
If the server from which the binlogrouter replicates from is using semi-sync replication, the binlogrouter will acknowledge the replicated events.
The binlogrouter is configured similarly to how normal routers are configured in MaxScale. It requires at least one listener where clients can connect to and one server from which the database user information can be retrieved. An example configuration can be found in the section of this document.
datadir
Type: path
Mandatory: No
Dynamic: No
Default: /var/lib/maxscale/binlogs
Directory where binary log files are stored.
archivedir
Type: string
Mandatory: Yes
Default: No
Dynamic: No
Mandatory if expiration_mode=archive
The directory to which files are archived. This is presumably a directory mounted to a remote file system or an S3 bucket. Ensure that the user running MaxScale (typically "maxscale") has sufficient privileges on the archive directory. S3 buckets mounted with s3fs may require setting permissions manually, for example:
s3fs my-bucket /home/joe/S3_bucket_mount/ -o umask=0077
The directory must exist when MaxScale starts.
server_id
Type: count
Mandatory: No
Dynamic: No
Default: 1234
The server ID that MaxScale uses when connecting to the primary and when serving binary logs to the replicas.
net_timeout
Type: duration
Mandatory: No
Dynamic: No
Default: 10s
Network connection and read timeout for the connection to the primary.
select_master
Type: boolean
Mandatory: No
Dynamic: No
Default: false
Automatically select the primary server to replicate from.
When this feature is enabled, the primary which binlogrouter will replicate
from will be selected from the servers defined by a monitor cluster=TheMonitor.
Alternatively servers can be listed in servers. The servers should be monitored
by a monitor. Only servers with the Master status are used. If multiple primary
servers are available, the first available primary server will be used.
If a CHANGE MASTER TO command is received while select_master is on, the
command will be honored and select_master turned off until the next reboot.
This allows the Monitor to perform failover, and more importantly, switchover.
It also allows the user to manually redirect the Binlogrouter. The current
primary is "sticky", meaning that the same primary will be chosen on reboot.
NOTE: Do not use the mariadbmon auto_rejoin parameter if the monitor is
monitoring a binlogrouter. The binlogrouter does not support all the SQL
commands that the monitor will send and the rejoin will fail. This restriction
will be lifted in a future version.
The GTID the replication will start from, will be based on the latest replicated
GTID. If no GTID has been replicated, the router will start replication from the
start. Manual configuration of the GTID can be done by first configuring the
replication manually with CHANGE MASTER TO.
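A minimal sketch of this manual configuration, executed through the binlogrouter listener (hosts, ports, credentials and the GTID are placeholders):

CHANGE MASTER TO master_host="primary-IP", master_port=3306,
    master_user="USER", master_password="PASSWORD", master_use_gtid=slave_pos;
SET @@global.gtid_slave_pos = "0-1000-1234,1-1001-5678";
START SLAVE;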
expiration_mode
Type: enum
Dynamic: No
Values: purge, archive
Default: purge
Choose whether expired logs should be purged or archived.
expire_log_duration
Type: duration
Mandatory: No
Dynamic: No
Default: 0s
Duration after which a binary log file expires, i.e. becomes eligible for purge or archive. This is similar to the expire_logs_days server system variable.
A value of 0s turns off purging.
The duration is measured from the last modification of the log file. Files are
purged in the order they were created. The automatic purge works in a similar
manner to PURGE BINARY LOGS TO <filename> in that it will stop the purge if
an eligible file is in active use, i.e. being read by a replica.
expire_log_minimum_files
Type: number
Mandatory: No
Dynamic: No
Default: 2
The minimum number of log files the automatic purge keeps. At least one file is always kept.
compression_algorithm
Type: enum
Mandatory: No
Dynamic: No
Values: none, zstandard
number_of_noncompressed_files
Type: count
Mandatory: No
Dynamic: No
Default: 2
The minimum number of log files that are not compressed. At least one file is not compressed.
ddl_only
Type: boolean
Mandatory: No
Dynamic: No
Default: false
When enabled, only DDL events are written to the binary logs. This means that CREATE, ALTER and DROP events are written but INSERT, UPDATE and DELETE events are not.
This mode can be used to keep a record of all the schema changes that occur on a
database. As only the DDL events are stored, it becomes very easy to set up an
empty server with no data in it by simply pointing it at a binlogrouter instance
that has ddl_only enabled.
encryption_key_id
Type: string
Mandatory: No
Dynamic: No
Default: ""
Encryption key ID used to encrypt the binary logs. If configured, an encryption key manager must also be configured and it must contain the key with the given ID. If the encryption key manager supports versioning, new binary logs will be encrypted using the latest encryption key. Old binlogs will remain encrypted with older key versions and remain readable as long as the key versions used to encrypt them are available.
Once binary log encryption has been enabled, the encryption key ID cannot be changed and the key must remain available to MaxScale in order for replication to work. If an encryption key is not available or the key manager fails to retrieve it, the replication from the currently selected primary server will stop. If the replication is restarted manually, the encryption key retrieval is attempted again.
Re-encryption of binlogs using another encryption key is not possible. However, this is possible if the data is replicated to a second MaxScale server that uses a different encryption key. The same approach can also be used to decrypt binlogs.
encryption_cipher
Type: enum
Mandatory: No
Dynamic: No
Values: AES_CBC, AES_CTR, AES_GCM
The encryption cipher to use. The encryption key size also affects which mode is used: only 128, 192 and 256 bit encryption keys are currently supported.
Possible values are AES_GCM (the default), AES_CBC and AES_CTR.
rpl_semi_sync_slave_enabled
Type: boolean
Mandatory: No
Default: false
Dynamic: Yes
Enable semi-synchronous replication when replicating from a MariaDB server. If enabled, the binlogrouter will send an acknowledgment for each received event. Note that the rpl_semi_sync_master_enabled parameter must be enabled in the MariaDB server where the replication is done from for semi-synchronous replication to take place.
Configure and start MaxScale.
If you have not configured select_master=true (automatic
primary selection), issue a CHANGE MASTER TO command to binlogrouter.
Redirect each replica to replicate from Binlogrouter
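A minimal sketch of these steps with the mariadb command-line client (hosts, ports and credentials are placeholders):

# Point the binlogrouter at the primary (skip if select_master=true)
mariadb -u USER -pPASSWORD -h maxscale-IP -P binlog-PORT
CHANGE MASTER TO master_host="primary-IP", master_port=3306, master_user="USER",
    master_password="PASSWORD", master_use_gtid=slave_pos;
START SLAVE;

# Redirect each replica to the binlogrouter
mariadb -u USER -pPASSWORD -h replica-IP -P replica-PORT
STOP SLAVE;
CHANGE MASTER TO master_host="maxscale-IP", master_port=binlog-PORT, master_user="USER",
    master_password="PASSWORD", master_use_gtid=slave_pos;
START SLAVE;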
Binlogrouter does not read any of the data that a version prior to 2.5 has saved. By default binlogrouter will request the replication stream from the blank state (from the start of time), which is basically meant for new systems. If a system is live, the entire replication data probably does not exist, and if it does, it is not necessary for binlogrouter to read and store all the data.
Binlogrouter uses inotify, which has three kernel limits.
While Binlogrouter uses a modest number of inotify instances, the limit max_user_instances applies to the total
number of instances for the user and has a low default value on many systems. A double or triple of the
default value should suffice. The other two limits, max_queued_events and max_user_watches are usually high
enough, but it is sensible to double (triple) them if max_user_instances was doubled (tripled).
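A hedged sketch of checking and raising the limits with sysctl; the values are illustrative and should be persisted in /etc/sysctl.conf or a file under /etc/sysctl.d to survive reboots:

# Check the current inotify limits
sysctl fs.inotify.max_user_instances fs.inotify.max_queued_events fs.inotify.max_user_watches
# Raise the per-user instance limit, e.g. to double a common default of 128
sudo sysctl -w fs.inotify.max_user_instances=256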
Note that binlogrouter only supports GTID based replication.
The method described here causes the least downtime. Assuming you have configured MaxScale version 2.5 or newer, and it is ready to go:
Redirect each replica that replicates from Binlogrouter to replicate from the primary.
Stop the old version of MaxScale, and start the new one. Verify routing functionality.
Issue a CHANGE MASTER TO command, or use select_master=true.
Run maxctrl list servers. Make sure all your servers are accounted for.
Pick the lowest gtid state (e.g. 0-1000-1234,1-1001-5678) on display and
issue this command to Binlogrouter:
SET @@global.gtid_slave_pos = "0-1000-1234,1-1001-5678";
NOTE: Even with select_master=true you have to set @@global.gtid_slave_pos
if any binlog files have been purged on the primary. The server will only stream
from the start of time if the first binlog file is present.
See .
Redirect each replica to replicate from Binlogrouter.
If for any reason you need to "bootstrap" the binlogrouter you can change
the datadir or delete the entire binlog directory (datadir) when
MaxScale is NOT running. This could be necessary if files are accidentally
deleted or the file system becomes corrupt.
No changes are required to the attached replicas.
If select_master is set to true and the primary contains
the entire binlog history, a simple restart of MaxScale suffices.
In the normal case, the primary does not have the entire history and you will need to set the GTID position to a starting value, usually the earliest gtid state of all replicas. Once MaxScale has been restarted connect to the binlogrouter from the command line.
If select_master is set to true, issue:
else
When replicating from a Galera cluster, select_master must be set to true, and the servers must be monitored by the Galera Monitor. Configuring binlogrouter is the same as described above.
The Galera cluster must be configured to use wsrep GTID mode.
The MariaDB version must be 10.5.1 or higher. The required GTID related server settings for MariaDB/Galera to work with Binlogrouter are listed here:
The following is a small configuration file with automatic primary selection. With it, the service will accept connections on port 3306.
Old-style replication with binlog name and file offset is not supported and the replication must be started by setting up the GTID to replicate from.
Only replication from MariaDB servers (including Galera) is supported.
Old encrypted binary logs are not re-encrypted with newer key versions.
The MariaDB server where the replication is done from must be configured with binlog_checksum=CRC32.
This page is licensed: CC BY-SA / Gnu FDL
GET /v1/auth

{
"meta": {
"token": "eyJhbGciOiJIUzI1NiJ9.eyJhY2NvdW50IjoiYWRtaW4iLCJhdWQiOiJhZG1pbiIsImV4cCI6MTY4OTk1MDgwNCwiaWF0IjoxNjg5OTIyMDA0LCJpc3MiOiJtYXhzY2FsZSIsInN1YiI6ImFkbWluIn0.LRFeXaFAhYNBm7kLIosUpR2nOgd5H-gv3MpuLaCpPvk"
}
}

GET /v1/auth?persist=yes

GET /v1/auth?max-age=28800

Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJhdWQiOiJhZG1pbiIsImV4cCI6MTU4MzI1NDE1MSwiaWF0IjoxNTgzMjI1MzUxLCJpc3MiOiJtYXhzY2FsZSJ9.B1BqhjjKaCWKe3gVXLszpOPfeu8cLiwSb4CMIJAoyqw

filter=id=and(ge(50),le(100))

filter=id=or(eq(1),eq(5),eq(10))

filter=id=not(eq(1),eq(5),eq(10))

{
"error": {
"detail" : "The new `/servers/` resource is missing the `port` parameter"
}
}

data.attributes.state

data.attributes.statistics.connections

fields[servers]=statistics/connections

array values: $.store.book[0]
multiple array values: $.store.book[0,1,2]
array wildcards: $.store.book[*].price
object wildcards: $.store.bicycle.*
optional root object: store.book
ge(json): S >= V (numbers and strings)
gt(json): S > V (numbers and strings)
and(expr...): expr && expr (expressions)
or(expr...): expr || expr (expressions)
not(expr...): !expr (expressions)
MASTER_PORT
MASTER_USER
MASTER_PASSWORD
MASTER_USE_GTID
MASTER_SSL
MASTER_SSL_CA
MASTER_SSL_CAPATH
MASTER_SSL_CERT
MASTER_SSL_CRL
MASTER_SSL_CRLPATH
MASTER_SSL_KEY
MASTER_SSL_CIPHER
MASTER_SSL_VERIFY_SERVER_CERT
RESET SLAVE
Resets replication. Note that the RESET SLAVE ALL form that is supported
by MariaDB isn't supported by the binlogrouter.
SHOW BINARY LOGS
Lists the current files and their sizes. These will be different from the ones listed by the original primary where the binlogrouter is replicating from.
PURGE { BINARY | MASTER } LOGS TO <filename>
Purges binary logs up to but not including the given file. The file name
must be one of the names shown in SHOW BINARY LOGS. The version of this
command which accepts a timestamp is not currently supported.
Automatic purging is supported using the configuration
parameter expire_log_duration.
The files are purged in the order they were created. If a file to be purged
is detected to be in use, the purge stops. This means that the purge will
stop at the oldest file that a replica is still reading.
NOTE: You should still take precaution not to purge files that a potential
replica will need in the future. MaxScale can only detect that a file is
in active use when a replica is connected, and requesting events from it.
SHOW MASTER STATUS
Shows the name and position of the file to which the binlogrouter will write the next replicated data. The name and position do not correspond to the name and position in the primary.
SHOW SLAVE STATUS
Shows the replica status information similar to what a normal MariaDB replica server shows. Some of the values are replaced with constant values that never change. The following values are not constant:
Slave_IO_State: Set to Waiting for primary to send event when
replication is ongoing.
Master_Host: Address of the current primary.
Master_User: The user used to replicate.
Master_Port: The port the primary is listening on.
Master_Log_File: The name of the latest file that the binlogrouter is
writing to.
Read_Master_Log_Pos: The current position where the last event was
written in the latest binlog.
Slave_IO_Running: Set to Yes if replication is running and No if it's
not.
Slave_SQL_Running: Set to Yes if replication is running and No if it's
not.
Exec_Master_Log_Pos: Same as Read_Master_Log_Pos.
Gtid_IO_Pos: The latest replicated GTID.
SELECT { Field } ...
The binlogrouter implements a small subset of the MariaDB SELECT syntax as it is mainly used by the replicating replicas to query various parameters. If a field queried by a client is not known to the binlogrouter, the value will be returned back as-is. The following list of functions and variables are understood by the binlogrouter and are replaced with actual values:
@@gtid_slave_pos, @@gtid_current_pos or @@gtid_binlog_pos: All of
these return the latest GTID replicated from the primary.
version() or @@version: The version string returned by MaxScale when
a client connects to it.
UNIX_TIMESTAMP(): The current timestamp.
@@version_comment: Always pinloki.
@@global.gtid_domain_id: Always 0.
@master_binlog_checksum: Always CRC32.
@@session.auto_increment_increment: Always 1
@@character_set_client: Always utf8
@@character_set_connection: Always utf8
@@character_set_results: Always utf8
@@character_set_server: Always utf8mb4
@@collation_server: Always utf8mb4_general_ci
@@collation_connection: Always utf8_general_ci
@@init_connect: Always an empty string
@@interactive_timeout: Always 28800
@@license: Always BSL
@@lower_case_table_names: Always 0
@@max_allowed_packet: Always 16777216
@@net_write_timeout: Always 60
@@performance_schema: Always 0
@@query_cache_size: Always 1048576
@@query_cache_type: Always OFF
@@sql_mode: Always an empty string
@@system_time_zone: Always UTC
@@time_zone: Always SYSTEM
@@tx_isolation: Always REPEATABLE-READ
@@wait_timeout: Always 28800
SET
@@global.gtid_slave_pos: Set the position from which binlogrouter should
start replicating. E.g. SET @@global.gtid_slave_pos="0-1000-1234,1-1001-5678"
SHOW VARIABLES LIKE '...'
Shows variables matching a string. The LIKE operator in SHOW VARIABLES
is mandatory for the binlogrouter. This means that a plain SHOW VARIABLES
is not currently supported. In addition, the LIKE operator in
binlogrouter only supports exact matches.
Currently the only variables that are returned are gtid_slave_pos, gtid_current_pos and gtid_binlog_pos which return the current GTID
coordinates of the binlogrouter. In addition to these, the server_id
variable will return the configured server ID of the binlogrouter.
Default: none
Default: AES_GCM
AES_CTR
If the primary contains binlogs from the blank state, and there is a large amount of data, consider purging old binlogs. See .
The SQL resource represents a database connection.
expiration_mode=archive
archivedir = /mnt/binlog-s3
expire_log_minimum_files=3
expire_log_duration=24h
compression_algorithm=zstandard
number_of_noncompressed_files=2

zstd --rm -z binlog.001234

s3fs my-bucket /home/joe/S3_bucket_mount/ -o umask=0077

mariadb -u USER -pPASSWORD -h maxscale-IP -P binlog-PORT
CHANGE MASTER TO master_host="primary-IP", master_port=PRIMARY-PORT, master_user=USER, master_password="PASSWORD", master_use_gtid=slave_pos;
START SLAVE;

mariadb -u USER -pPASSWORD -h replica-IP -P replica-PORT
STOP SLAVE;
CHANGE MASTER TO master_host="maxscale-IP", master_port=binlog-PORT,
master_user="USER", master_password="PASSWORD", master_use_gtid=slave_pos;
START SLAVE;
SHOW SLAVE STATUS \G

mariadb -u USER -pPASSWORD -h replica-IP -P replica-PORT
STOP SLAVE;
CHANGE MASTER TO master_host="master-IP", master_port=master-PORT,
master_user="USER", master_password="PASSWORD", master_use_gtid=slave_pos;
START SLAVE;
SHOW SLAVE STATUS \G

mariadb -u USER -pPASSWORD -h maxscale-IP -P binlog-PORT
CHANGE MASTER TO master_host="primary-IP", master_port=primary-PORT,
master_user=USER, master_password="PASSWORD", master_use_gtid=slave_pos;

STOP SLAVE
SET @@global.gtid_slave_pos = "0-1000-1234,1-1001-5678";
START SLAVE

mariadb -u USER -pPASSWORD -h replica-IP -P replica-PORT
STOP SLAVE;
CHANGE MASTER TO master_host="maxscale-IP", master_port=binlog-PORT,
master_user="USER", master_password="PASSWORD",
master_use_gtid=slave_pos;
START SLAVE;
SHOW SLAVE STATUS \G

mariadb -u USER -pPASSWORD -h maxscale-IP -P binlog-PORT
STOP SLAVE;
SET @@global.gtid_slave_pos = "gtid_state";
START SLAVE;

mariadb -u USER -pPASSWORD -h maxscale-IP -P binlog-PORT
CHANGE MASTER TO master_host="primary-IP", master_port=PRIMARY-PORT, master_user=USER, master_password="PASSWORD", master_use_gtid=slave_pos;
SET @@global.gtid_slave_pos = "gtid_state";
START SLAVE;

[mariadb]
log_slave_updates = ON
log_bin = pinloki # binlog file base name. Must be the same on all servers
gtid_domain_id = 1001 # Must be different for each galera server
binlog_format = ROW
[galera]
wsrep_on = ON
wsrep_gtid_mode = ON
wsrep_gtid_domain_id = 42 # Must be the same for all servers

[server1]
type=server
address=192.168.0.1
port=3306
[server2]
type=server
address=192.168.0.2
port=3306
[MariaDB-Monitor]
type=monitor
module=mariadbmon
servers=server1, server2
user=maxuser
password=maxpwd
monitor_interval=10s
[Replication-Proxy]
type=service
router=binlogrouter
cluster=MariaDB-Monitor
select_master=true
expiration_mode=archive
archivedir=/mnt/somedir
expire_log_minimum_files=3
expire_log_duration=24h
compression_algorithm=zstandard
number_of_noncompressed_files=2
user=maxuser
password=maxpwd
[Replication-Listener]
type=listener
service=Replication-Proxy
port=3306

The following endpoints provide a simple REST API interface for executing SQL queries on servers and services in MaxScale.
This endpoint also supports executing SQL queries using an ODBC driver. The results returned by connections that use ODBC drivers can differ from the ones returned by normal SQL connections to objects in MaxScale.
This document uses the :id value in the URL to represent a connection ID and
the :query_id to represent a query ID. These values do not need to be manually
added as the relevant links are returned in the request body of each endpoint.
The endpoints use JSON Web Tokens to uniquely identify open SQL connections. A
connection token can be acquired with a POST /v1/sql request and can be used
with the POST /v1/sql/:id/query, GET /v1/sql/:id/results/:query_id and DELETE /v1/sql endpoints. All of these endpoints accept a connection token in
the token parameter of the request:
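As an illustrative sketch (connection ID and token shortened), the parameter is simply appended to the query string:

POST /v1/sql/5/query?token=eyJhbGciOiJIUzI1NiJ9...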
In addition to request parameters, the token can be stored in cookies in which
case they are automatically used by the REST API. For more information about
token storage in cookies, see the documentation for POST /v1/sql.
All of the endpoints that operate on a single connection support the following
request parameters. The GET /v1/sql and GET /v1/sql/:id endpoints are an
exception as they ignore the current connection token.
token
The connection token to use for the request. If provided, the value is unconditionally used even if a cookie with a valid token exists.
Response
Response contains the requested resource.
Status: 200 OK
Response
Response contains a resource collection with all the open SQL connections.
Status: 200 OK
The request body must be a JSON object consisting of the following fields:
target
The object to connect to. This is a mandatory value and the
given value must be the name of a valid server, service or listener in
MaxScale or the value odbc if an ODBC connection is being made.
user
The username to use when creating the connection. This is a mandatory value when connecting to an object in MaxScale.
password
The password for the user. This is a mandatory value when connecting to an object in MaxScale.
db
The default database for the connection. By default the connection will have no default database. This is ignored by ODBC connections.
timeout
Connection timeout in seconds. The default connection timeout is 10 seconds. This controls how long the SQL connection creation can take before an error is returned. This is accepted by all connection types.
connection_string
Connection string that defines the ODBC connection. This is a required value for ODBC type connections and is ignored by all other connection types.
Here is an example request body:
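A minimal sketch with placeholder credentials, assuming a server named server1 is configured in MaxScale:

{
    "target": "server1",
    "user": "dbuser",
    "password": "dbpassword",
    "db": "test",
    "timeout": 10
}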
And here is an example request that uses an ODBC driver to connect to a remote server:
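A hedged sketch of an ODBC connection; the exact connection string depends on the driver configured in odbcinst.ini, and the driver name, host and credentials below are placeholders:

{
    "target": "odbc",
    "connection_string": "DRIVER=MariaDB ODBC Driver;SERVER=192.168.0.100;PORT=3306;UID=dbuser;PWD=dbpassword",
    "timeout": 10
}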
The response will contain the new connection with the token stored at meta.token. If the request uses the persist=yes request parameter, the token
is stored in cookies instead of the metadata object and the response body will
not contain the token.
The location of the newly created connection will be stored at links.self in
the response body as well as in the Location header.
The token must be given to all subsequent requests that use the connection. It
must be either given in the token parameter of a request or it must be stored
in the cookies. If both a token parameter and a cookie exist at the same time,
the token parameter will be used instead of the cookie.
Request Parameters
This endpoint supports the following request parameters.
persist
Store the connection token in cookies instead of returning it as the response body.
This parameter expects only one value, yes, as its argument. When persist=yes is set, the token is stored in the conn_id_sig_<id> cookie
where the <id> part is replaced by the ID of the connection.
max-age
Sets the connection token maximum age in seconds. The default is max-age=28800. Only positive values are accepted and if a non-positive or
a non-integer value is found, the parameter is ignored. Once the token age
exceeds the configured maximum value, the token can no longer be used and a
new connection must be created.
Response
Connection was opened:
Status: 201 Created
Missing or invalid payload:
Status: 400 Bad Request
Response
Connection was closed:
Status: 204 No Content
Missing or invalid connection token:
Status: 400 Bad Request
Reconnects an existing connection. This can also be used if the connection to the backend server was lost due to a network error.
The connection will use the same credentials that were passed to the POST /v1/sql endpoint. The new connection will still have the same ID in the REST
API but will be treated as a new connection by the database. A reconnection
re-initializes the connection and resets the session state. Reconnections cannot
take place while a transaction is open.
Response
Reconnection was successful:
Status: 204 No Content
Reconnection failed or connection is already in use:
Status: 503 Service Unavailable
Missing or invalid connection token:
Status: 400 Bad Request
Clones an existing connection. This is done by opening a new connection using the credentials and configuration from the given connection.
Request Parameters
This endpoint supports the same request parameters as the POST /v1/sql
endpoint.
Response
The response is identical to the one in the POST /v1/sql endpoint. In
addition, this endpoint can return the following responses.
Connection is already in use:
Status: 503 Service Unavailable
Missing or invalid connection token:
Status: 400 Bad Request
The request body must be a JSON object with the value of the sql field set to
the SQL to be executed:
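A minimal illustrative body:

{
    "sql": "SELECT 1"
}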
The request body must be a JSON object consisting of the following fields:
sql
The SQL to be executed. If the SQL contain multiple statements, multiple results are returned in the response body.
max_rows
The maximum number of rows returned in the response. By default this is 1000 rows. Setting the value to 0 means no limit. Any extra rows in the result will be discarded.
By default, the complete result is returned in the response body. If the SQL
query returns more than one result, the results array will contain all the
results. If the async=true request option is used, the query is queued for
execution.
The results array can have three types of objects: resultsets, errors, and OK
responses.
A resultset consists of the data field with the result data stored as a two
dimensional array. The names of the fields are stored in an array in the fields field and the field types and other metadata are stored in the metadata field. These types of results will be returned for any operation
that returns rows (i.e. SELECT statements)
An error consists of an object with the errno field set to the MariaDB error
code, the message field set to the human-readable error message and the sqlstate field set to the current SQLSTATE of the connection.
An OK response is returned for any result that completes successfully but does not
return rows (e.g. an INSERT or UPDATE statement). The affected_rows
field contains the number of rows affected by the operation, the last_insert_id field contains the auto-generated ID and the warnings field
contains the number of warnings raised by the operation.
It is also possible for the fields of the error response to be present in the resultset response if the result ended with an error but still generated some data. Usually this happens when query execution is interrupted but a partial result was generated by the server.
Request Parameters
async
If set to true, the query is queued for asynchronous execution and the
results must be retrieved later from the URL stored in links.self field
of the response. The HTTP response code is set to HTTP 202 Accepted if the
query was successfully queued for execution.
Response
Query successfully executed:
Status: 201 Created
Query queued for execution:
Status: 202 Accepted
Invalid payload or missing connection token:
Status: 400 Bad Request
Fatal connection error:
Status: 503 Service Unavailable
If the API returns this response, the connection to the database server was
lost. The only valid action to take at this point is to close it with the DELETE /v1/sql/:id endpoint.
The results are only available if a POST /v1/sql/:id/queries was executed with
the async field set to true. The result of any asynchronous query can be
read multiple times. Only the latest result is stored: executing a new query
will cause the latest result to be erased. Results can be explicitly erased
with a DELETE request.
Response
Query successfully executed:
Status: 201 Created
Query not yet complete:
Status: 202 Accepted
No asynchronous results expected, invalid payload or missing connection token:
Status: 400 Bad Request
Fatal connection error:
Status: 503 Service Unavailable
If the API returns this response, the connection to the database server was
lost. The only valid action to take at this point is to close it with the DELETE /v1/sql/:id endpoint.
Erases the latest result of an asynchronously executed query. All asynchronous results are erased when the connection that owns them is closed.
Response
Result erased:
Status: 200 OK
Connection is busy or it was not found:
Status: 503 Service Unavailable
Missing connection token:
Status: 400 Bad Request
This endpoint cancels the current query being executed by this connection. If no query is being done and the connection is idle, no action is taken.
If the connection is busy but it is not executing a query, an attempt to cancel
is still made: in this case the results of this operation are undefined for ODBC
connections, for MariaDB connections this will cause a KILL QUERY command to
be executed.
Response
Query was canceled:
Status: 200 OK
Connection was not found:
Status: 503 Service Unavailable
Missing connection token:
Status: 400 Bad Request
Get the list of configured ODBC drivers found by the driver manager. The list of
drivers includes all drivers known to the driver manager for which an installed
library was found (i.e. Driver or Driver64 in /etc/odbcinst.ini points to
a file).
Response
The response contains a resource collection with all available drivers.
Status: 200 OK
The ETL operation requires two connections: an ODBC connection to a remote server (source connection) and a connection to a server in MaxScale (destination connection). All ETL operations must be done on the ODBC connection.
The ETL operations require that the MariaDB ODBC driver is installed on the MaxScale server. This driver is often available in the package manager of your operating system but it can also be downloaded from the MariaDB website. Installation instructions for installing the driver manually can be found here.
The request body must be a JSON object consisting of the following fields:
target
The target connection ID that defines the destination server. This must be the
ID of a connection (i.e. data.attributes.id) to a server in MaxScale
created via the POST /v1/sql/ endpoint.
type
The type of the ETL data source. The value must be a string with one of the following values:
tables
An array of objects, each of which must define a table and a schema
field. The table field defines the name of the table to be imported and the schema field defines the schema it is in. If the objects contain a value for create, select or insert, the SQL generation for that part is skipped.
connection_string
Extra connection string that is appended to the destination server's connection string. This connection will always use the MariaDB ODBC driver. The list of supported options can be found here.
threads
The number of parallel connections used during the ETL operation. By default the ETL operation will use up to 16 connections.
timeout
The connection and query timeout in seconds. By default a timeout of 30 seconds is used.
create_mode
A string with either normal, ignore or replace that controls how tables
that already exist on the destination server are handled.
If left undefined or set to normal, the tables are created using a normal CREATE TABLE statement. This causes an error to be reported if the table
already exists and prevents the ETL from proceeding past the object
creation stage.
If set to ignore the tables are created with CREATE TABLE IF NOT EXISTS
which will ignore any existing tables and assume that they are compatible with
the rest of the ETL operation. This mode can be used to continuously load data
from an external source into MariaDB.
If set to replace, the tables are created with CREATE OR REPLACE TABLE
which will cause existing tables to be dropped if they exist. This is not a
reversible process so caution should be taken when this mode is used.
catalog
The catalog for the tables. This is only used when type is set to generic. In all other cases this value is ignored.
Here is an example payload that prepares the table test.t1 for extraction from
a MariaDB server.
The token for the source connection is provided the same way it is
provided for all other /v1/sql endpoints: in the token request
parameter or in the cookies. The destination connection token is
provided either in the target_token request parameter or as a cookie.
This endpoint supports the following additional request parameters.
target_token
The connection token for the destination connection. If provided, the value is unconditionally used even if a cookie with a valid token exists for the destination connection.
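As an illustration, both tokens can be passed as request parameters (the token values here are placeholders):

POST /v1/sql/:id/etl/prepare?token=<source-connection-token>&target_token=<destination-connection-token>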
Response
ETL operation prepared:
Status: 202 Accepted
Once complete, the /v1/sql/:id/queries/:query_id endpoint will return the
following result.
Invalid payload or missing connection token:
Status: 400 Bad Request
The behavior of this endpoint is identical to the preparation but instead of preparing the operation, this endpoint will execute the prepared ETL and load the data into MariaDB.
The intended way of doing an ETL operation is to first prepare it using the /v1/sql/:id/etl/prepare endpoint to retrieve the SQL statements that define
the ETL operation. Then, if the ETL preparation is successful, the data.attributes.results.tables value from the response is used as the tables
value for the ETL start operation, done on the /v1/sql/:id/etl/start endpoint.
If any of the create, select or insert fields for a table in the tables
list have not been defined, the SQL will be automatically generated by querying
the source server, similarly to how the preparation generates the SQL
statements. This means that the preparation step is optional if there is no need
to adjust the automatically generated SQL.
The ETL operation can be canceled using the /v1/sql/:id/cancel endpoint which
will roll back the ETL operation. Any tables that were created during the ETL
operation are not deleted on the destination server and must be manually cleaned
up.
If the ETL fails to extract the SQL from the source server or an error was
encountered during table creation, the response will have the "ok" field set
to false. Both the top-level object and the individual tables can have
the "error" field set to the error message. This field is filled during the
ETL operation which means errors are visible even if the ETL is still ongoing.
During the ETL operation, tables that have been fully processed will have an "execution_time" field. This field contains the total time in seconds it took to
execute the data loading step.
Response
ETL operation started:
Status: 202 Accepted
Once complete, the /v1/sql/:id/queries/:query_id endpoint will return the
following result.
Invalid payload or missing connection token:
Status: 400 Bad Request
This page is licensed: CC BY-SA / Gnu FDL
POST /v1/sql/query?token=eyJhbGciOiJIUzI1NiJ9.eyJhdWQiOiJhZG1pbiIsImV4cCI6MTU4MzI1NDE1MSwiaWF0IjoxNTgzMjI1MzUxLCJpc3MiOiJtYXhzY2FsZSJ9.B1BqhjjKaCWKe3gVXLszpOPfeu8cLiwSb4CMIJAoyqwGET /v1/sql/:id{
"data": {
"attributes": {
"seconds_idle": 0.0013705639999999999,
"sql": null,
"target": "server1",
"thread_id": 10
},
"id": "96be0ffe-10fb-4ed1-8e66-a17ef1eea0fe",
"links": {
"related": "http://localhost:8989/v1/sql/96be0ffe-10fb-4ed1-8e66-a17ef1eea0fe/queries/",
"self": "http://localhost:8989/v1/sql/96be0ffe-10fb-4ed1-8e66-a17ef1eea0fe/"
},
"type": "sql"
},
"links": {
"self": "http://localhost:8989/v1/sql/96be0ffe-10fb-4ed1-8e66-a17ef1eea0fe/"
}
}GET /v1/sql{
"data": [
{
"attributes": {
"seconds_idle": 0.0010341230000000001,
"sql": null,
"target": "server1",
"thread_id": 12
},
"id": "90761656-3352-420b-83e7-0dcef691552a",
"links": {
"related": "http://localhost:8989/v1/sql/90761656-3352-420b-83e7-0dcef691552a/queries/",
"self": "http://localhost:8989/v1/sql/90761656-3352-420b-83e7-0dcef691552a/"
},
"type": "sql"
},
{
"attributes": {
"seconds_idle": 0.002397377,
"sql": null,
"target": "server1",
"thread_id": 11
},
"id": "98a8b5c5-3632-4f0f-98bb-0dc440a3409a",
"links": {
"related": "http://localhost:8989/v1/sql/98a8b5c5-3632-4f0f-98bb-0dc440a3409a/queries/",
"self": "http://localhost:8989/v1/sql/98a8b5c5-3632-4f0f-98bb-0dc440a3409a/"
},
"type": "sql"
}
],
"links": {
"self": "http://localhost:8989/v1/sql/"
}
}POST /v1/sql{
"user": "jdoe",
"password": "my-s3cret",
"target": "server1",
"db": "test",
"timeout": 15
}{
"target": "odbc",
"connection_string": "Driver=MariaDB;SERVER=127.0.0.1;UID=maxuser;PWD=maxpwd"
}{
"data": {
"attributes": {
"seconds_idle": 7.6394000000000001e-5,
"sql": null,
"target": "server1",
"thread_id": 13
},
"id": "f4e38d96-99b4-479e-ac36-5f3b437aff99",
"links": {
"related": "http://localhost:8989/v1/sql/f4e38d96-99b4-479e-ac36-5f3b437aff99/queries/",
"self": "http://localhost:8989/v1/sql/f4e38d96-99b4-479e-ac36-5f3b437aff99/"
},
"type": "sql"
},
"links": {
"self": "http://localhost:8989/v1/sql/f4e38d96-99b4-479e-ac36-5f3b437aff99/"
},
"meta": {
"token": "eyJhbGciOiJIUzI1NiJ9.eyJhdWQiOiJmNGUzOGQ5Ni05OWI0LTQ3OWUtYWMzNi01ZjNiNDM3YWZmOTkiLCJleHAiOjE2ODk5NTA4MDQsImlhdCI6MTY4OTkyMjAwNCwiaXNzIjoibXhzLXF1ZXJ5Iiwic3ViIjoiZjRlMzhkOTYtOTliNC00NzllLWFjMzYtNWYzYjQzN2FmZjk5In0.gCKYl7XwwnMLjJbQT6UShDuK8aJ6gessmredQ1i0On4"
}
}DELETE /v1/sql/:idPOST /v1/sql/:id/reconnectPOST /v1/sql/:id/clonePOST /v1/sql/:id/queries{
"sql": "SELECT * FROM test.t1",
"max_rows": 1000
}{
"data": {
"attributes": {
"execution_time": 0.00028492799999999999,
"results": [
{
"complete": true,
"data": [
[
1
],
[
2
],
[
3
]
],
"fields": [
"id"
],
"metadata": [
{
"catalog": "def",
"decimals": 0,
"length": 11,
"name": "id",
"schema": "test",
"table": "t1",
"type": "LONG"
}
]
}
],
"sql": "SELECT id FROM test.t1"
},
"id": "b7243d92-5bc6-4814-80fb-6772831ead4b.1",
"type": "queries"
},
"links": {
"self": "http://localhost:8989/v1/sql/b7243d92-5bc6-4814-80fb-6772831ead4b/queries/b7243d92-5bc6-4814-80fb-6772831ead4b.1/"
}
}{
"data": {
"attributes": {
"execution_time": 0.00012686699999999999,
"results": [
{
"errno": 1064,
"message": "You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'TABLE test.t1' at line 1",
"sqlstate": "42000"
}
],
"sql": "SELECT syntax_error FROM TABLE test.t1"
},
"id": "621bacd9-48fd-436c-afda-b4e4d0d7b228.1",
"type": "queries"
},
"links": {
"self": "http://localhost:8989/v1/sql/621bacd9-48fd-436c-afda-b4e4d0d7b228/queries/621bacd9-48fd-436c-afda-b4e4d0d7b228.1/"
}
}{
"data": {
"attributes": {
"execution_time": 0.000474659,
"results": [
{
"affected_rows": 0,
"last_insert_id": 0,
"warnings": 0
}
],
"sql": "CREATE TABLE test.my_table(id INT)"
},
"id": "60005d40-c034-4aa3-94de-b15c14d9c91c.1",
"type": "queries"
},
"links": {
"self": "http://localhost:8989/v1/sql/60005d40-c034-4aa3-94de-b15c14d9c91c/queries/60005d40-c034-4aa3-94de-b15c14d9c91c.1/"
}
}{
"data": {
"attributes": {
"execution_time": 0.00014767200000000001,
"results": [
{
"complete": true,
"data": [
[
1
]
],
"fields": [
"1"
],
"metadata": [
{
"catalog": "def",
"decimals": 0,
"length": 1,
"name": "1",
"schema": "",
"table": "",
"type": "LONG"
}
]
}
],
"sql": "SELECT 1"
},
"id": "5999b711-d190-4f0e-8322-db3ce3bd97a2.1",
"type": "queries"
},
"links": {
"self": "http://localhost:8989/v1/sql/5999b711-d190-4f0e-8322-db3ce3bd97a2/queries/5999b711-d190-4f0e-8322-db3ce3bd97a2.1/"
}
}{
"data": {
"attributes": {
"execution_time": 0.0,
"sql": "SELECT 1"
},
"id": "3d23f7e0-6a83-4282-94a5-8a1089d56f72.1",
"type": "queries"
},
"links": {
"self": "http://localhost:8989/v1/sql/3d23f7e0-6a83-4282-94a5-8a1089d56f72/queries/3d23f7e0-6a83-4282-94a5-8a1089d56f72.1/"
}
}GET /v1/sql/:id/queries/:query_id{
"data": {
"attributes": {
"execution_time": 0.00011945,
"results": [
{
"complete": true,
"data": [
[
1
]
],
"fields": [
"1"
],
"metadata": [
{
"catalog": "def",
"decimals": 0,
"length": 1,
"name": "1",
"schema": "",
"table": "",
"type": "LONG"
}
]
}
],
"sql": "SELECT 1"
},
"id": "63ec5e96-2bfa-40a9-b631-425b4e3e993c.1",
"type": "queries"
},
"links": {
"self": "http://localhost:8989/v1/sql/63ec5e96-2bfa-40a9-b631-425b4e3e993c/queries/63ec5e96-2bfa-40a9-b631-425b4e3e993c.1/"
}
}DELETE /v1/sql/:id/queries/:query_idPOST /v1/sql/:id/cancelGET /v1/sql/odbc/drivers{
"data": [
{
"attributes": {
"description": "ODBC for MariaDB",
"driver": "/usr/lib/libmaodbc.so",
"driver64": "/usr/lib64/libmaodbc.so",
"fileusage": "1"
},
"id": "MariaDB",
"type": "drivers"
}
],
"links": {
"self": "http://localhost:8989/v1/sql/odbc/drivers/"
}
POST /v1/sql/:id/etl/prepare
- `mariadb`
Extract data from a MariaDB database.
- `postgresql`
Extract data from a PostgreSQL database. This requires that the PostgreSQL
ODBC driver is installed on the MaxScale server. This driver is often
available in the package manager of your operating system.
- `generic`
Extract data from a generic ODBC source. This uses the ODBC catalog
functions to determine the table layout. The results provided by this are
not as accurate as the specialized versions but it can serve as a good
starting point from which manual modifications to the SQL can be done.
This ETL type requires that the table catalog is provided at the top level
with the `catalog` field. The meaning of the catalog differs between
database implementations.
{
"type": "mariadb",
"target": "e2a56d2f-6514-4926-8dba-dca0c4ae3a86",
"tables": [
{
"table": "t1",
"schema": "test"
}
]
}{
"data": {
"attributes": {
"execution_time": 0.0062226729999999997,
"results": {
"ok": true,
"stage": "prepare",
"tables": [
{
"create": "CREATE DATABASE IF NOT EXISTS `test`;\nUSE `test`;\nCREATE TABLE `t1` (\n `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,\n `data` varchar(255) DEFAULT NULL,\n UNIQUE KEY `id` (`id`)\n) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci",
"insert": "INSERT INTO `test`.`t1` (`id`,`data`) VALUES (?,?)",
"schema": "test",
"select": "SELECT `id`,`data` FROM `test`.`t1`",
"table": "t1"
}
]
},
"sql": "ETL"
},
"id": "31dc09b7-ec09-4e6d-b098-e925f706233c.1",
"type": "queries"
},
"links": {
"self": "http://localhost:8989/v1/sql/31dc09b7-ec09-4e6d-b098-e925f706233c/queries/31dc09b7-ec09-4e6d-b098-e925f706233c.1/"
}
}POST /v1/sql/:id/etl/start{
"data": {
"attributes": {
"execution_time": 0.0094386039999999997,
"results": {
"ok": true,
"stage": "load",
"tables": [
{
"create": "CREATE DATABASE IF NOT EXISTS `test`;\nUSE `test`;\nCREATE TABLE `t1` (\n `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,\n `data` varchar(255) DEFAULT NULL,\n UNIQUE KEY `id` (`id`)\n) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci",
"execution_time": 0.0033923809999999999,
"insert": "INSERT INTO `test`.`t1` (`id`,`data`) VALUES (?,?)",
"rows": 1,
"schema": "test",
"select": "SELECT `id`,`data` FROM `test`.`t1`",
"table": "t1"
}
]
},
"sql": "ETL"
},
"id": "1391b67e-58a7-4be3-b686-2498cb3a0e06.1",
"type": "queries"
},
"links": {
"self": "http://localhost:8989/v1/sql/1391b67e-58a7-4be3-b686-2498cb3a0e06/queries/1391b67e-58a7-4be3-b686-2498cb3a0e06.1/"
}
}
A server resource represents a backend database server.
The :name in all of the URIs must be the name of a server in MaxScale.
Get a single server.
Response
Status: 200 OK
Response
Response contains a resource collection with all servers.
Status: 200 OK
Create a new server by defining the resource. The posted object must define at least the following fields.
data.id
Name of the server
data.type
Type of the object, must be servers
The following is the minimal required JSON object for defining a new server.
The relationships of a server can also be defined at creation time. This allows new servers to be created and immediately taken into use.
Refer to the documentation for a full list of server parameters.
Response
Server created:
Status: 204 No Content
Invalid JSON body:
Status: 400 Bad Request
The request body must be a valid JSON document representing the modified server.
In addition to the server parameters, the services and monitors fields of the relationships object can be modified. Removal, addition and modification of the links will change which services and monitors use this server.
For example, removing the first value in the services list in the relationships object from the following JSON document will remove server1 from the service RW-Split-Router.
Removing a service from a server is analogous to removing the server from the service. Both unlink the two objects from each other.
Request for PATCH /v1/servers/server1 that modifies the address of the server:
Request for PATCH /v1/servers/server1 that modifies the server relationships:
If parts of the resource are not defined (e.g. the attributes field in the
above example), those parts of the resource are not modified. All parts that are
defined are interpreted as the new definition of those parts of the resource. In
the above example, the relationships of the resource are completely redefined.
Response
Server modified:
Status: 204 No Content
Invalid JSON body:
Status: 400 Bad Request
The :type in the URI must be either services, for service relationships, or monitors, for monitor relationships.
The request body must be a JSON object that defines only the data field. The value of the data field must be an array of relationship objects that define the id and type fields of the relationship. This object will replace the existing relationships of the particular type from the server.
The following is an example request and request body that defines a single service relationship for a server.
All relationships for a server can be deleted by sending an empty array as the data field value. The following example removes the server from all services.
Response
Server relationships modified:
Status: 204 No Content
Invalid JSON body:
Status: 400 Bad Request
A server can only be deleted if it is not used by any services or monitors.
This endpoint also supports the force=yes parameter that will unconditionally
delete the server by first unlinking it from all services and monitors that use
it.
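For example, the following request (the server name is illustrative) unconditionally deletes a server even if it is still linked to services or monitors:

DELETE /v1/servers/server1?force=yes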
Response
Server is destroyed:
Status: 204 No Content
Server is in use:
Status: 400 Bad Request
This endpoint requires that the state parameter is passed with the
request. The value of state must be one of the following values.
For example, to set the server db-server-1 into maintenance mode, a request to the following URL must be made:
This endpoint also supports the force=yes parameter that will cause all
connections to the server to be closed if state=maintenance is also set. By
default setting a server into maintenance mode will cause connections to be
closed only after the next request is sent.
The following example forcefully closes all connections to server db-server-1 and sets it into maintenance mode:
Response
Server state modified:
Status: 204 No Content
Missing or invalid parameter:
Status: 400 Bad Request
This endpoint requires that the state parameter is passed with the
request. The value of state must be one of the values defined in the set endpoint documentation.
Response
Server state modified:
Status: 204 No Content
Missing or invalid parameter:
Status: 400 Bad Request
This page is licensed: CC BY-SA / Gnu FDL
A service resource represents a service inside MaxScale. A service is a collection of network listeners, filters, a router and a set of backend servers.
The :name in all of the URIs must be the name of a service in MaxScale.
Get a single service.
Response
Status: 200 OK
Get all services.
Response
Status: 200 OK
Create a new service by defining the resource. The posted object must define at least the following fields.
data.id
Name of the service
data.type
Type of the object, must be services
The data.attributes.parameters object is used to define router and service
parameters. All configuration parameters that can be defined in the
configuration file can also be added to the parameters object. The exceptions to
this are the type, router, servers and filters parameters which must not
be defined.
As with other REST API resources, the data.relationships field defines the
relationships of the service to other resources. Services can have two types of
relationships: servers and filters relationships.
If the request body defines a valid relationships object, the service is
linked to those resources. For servers, this is equivalent to adding the list of
server names into the parameter. For
filters, this is equivalent to adding the filters in the data.relationships.filters.data array to the parameter in the
order they appear. For other services, this is equivalent to adding the list of
server names into the parameter.
The following example defines a new service with both a server and a filter relationship.
Response
Service is created:
Status: 204 No Content
A service can only be destroyed if the service uses no servers or filters and
all the listeners pointing to the service have been destroyed. This means that
the data.relationships must be an empty object and data.attributes.listeners
must be an empty array in order for the service to qualify for destruction.
If there are open client connections that use the service when it is destroyed, they are allowed to gracefully close before the service is destroyed. This means that the destruction of a service can be acknowledged via the REST API before the destruction process has fully completed.
To find out whether a service is still in use after it has been destroyed, the resource should be used. If a session for the service is still open, it has not yet been destroyed.
This endpoint also supports the force=yes parameter that will unconditionally
delete the service by first unlinking it from all servers and filters that it
uses.
Response
Service is destroyed:
Status: 204 No Content
The request body must be a JSON object which represents a set of new definitions for the service.
All standard service parameters can be modified. Refer to the documentation for the details of these parameters.
In addition to the standard service parameters, router parameters can be updated at runtime if the router module supports it. Refer to the individual router documentation for more details on whether the router supports it and which parameters can be updated at runtime.
The following example modifies a service by changing the user parameter to admin.
Response
Service is modified:
Status: 204 No Content
The :type in the URI must be either servers, services or filters, depending on which relationship is being modified.
The request body must be a JSON object that defines only the data field. The value of the data field must be an array of relationship objects that define the id and type fields of the relationship. This object will replace the existing relationships of this type for the service.
Note: The order of the values in the filters relationship will define the
order the filters are set up in. The order in which the filters appear in the
array will be the order in which the filters are applied to each query. Refer
to the parameter
for more details.
The following is an example request and request body that defines a single
server relationship for a service that is equivalent to a servers=my-server
parameter.
All relationships for a service can be deleted by sending an empty array as the data field value. The following example removes all servers from a service.
Response
Service relationships modified:
Status: 204 No Content
Invalid JSON body:
Status: 400 Bad Request
Stops a started service.
Parameters
This endpoint supports the following parameters:
force=yes
Close all existing connections that were created through this service.
Response
Service is stopped:
Status: 204 No Content
Starts a stopped service.
Response
Service is started:
Status: 204 No Content
Reloads the list of database users used for authentication.
Response
Users are reloaded:
Status: 204 No Content
This endpoint is deprecated, use the listeners endpoint instead.
This endpoint is deprecated, use the listeners endpoint instead.
This endpoint is deprecated, use the listeners endpoint instead.
This endpoint is deprecated, use the listeners endpoint instead.
This page is licensed: CC BY-SA / Gnu FDL
data.attributes.parameters.address OR data.attributes.parameters.socket
data.attributes.parameters.port
The port to use. Needs
to be defined if the address field is defined.
master
Server is a Master
slave
Server is a Slave
maintenance
Server is put into maintenance
running
Server is up and running
synced
Server is a Galera node
drain
Server is drained of connections
data.attributes.router
The router module to use
data.attributes.parameters.user
The user to use
data.attributes.parameters.password
The password to use
GET /v1/servers/:name{
"data": {
"attributes": {
"gtid_binlog_pos": "0-3000-8",
"gtid_current_pos": "0-3000-8",
"last_event": "master_up",
"lock_held": null,
"master_group": null,
"master_id": -1,
"name": "server1",
"node_id": 3000,
"parameters": {
"address": "127.0.0.1",
"disk_space_threshold": null,
"extra_port": 0,
"max_routing_connections": 0,
"monitorpw": null,
"monitoruser": null,
"persistmaxtime": "0ms",
"persistpoolmax": 0,
"port": 3000,
"priority": 0,
"private_address": null,
"proxy_protocol": false,
"rank": "primary",
"replication_custom_options": null,
"socket": null,
"ssl": false,
"ssl_ca": null,
"ssl_cert": null,
"ssl_cert_verify_depth": 9,
"ssl_cipher": null,
"ssl_key": null,
"ssl_verify_peer_certificate": false,
"ssl_verify_peer_host": false,
"ssl_version": "MAX"
},
"read_only": false,
"replication_lag": 0,
"server_id": 3000,
"slave_connections": [],
"source": {
"file": "/etc/maxscale.cnf",
"type": "static"
},
"state": "Master, Running",
"state_details": null,
"statistics": {
"active_operations": 0,
"adaptive_avg_select_time": "0ns",
"connection_pool_empty": 0,
"connections": 1,
"failed_auths": 0,
"max_connections": 1,
"max_pool_size": 0,
"persistent_connections": 0,
"response_time_distribution": {
"read": {
"distribution": [
{
"count": 0,
"time": "0.000001",
"total": 0.0
},
{
"count": 0,
"time": "0.000010",
"total": 0.0
},
{
"count": 0,
"time": "0.000100",
"total": 0.0
},
{
"count": 0,
"time": "0.001000",
"total": 0.0
},
{
"count": 0,
"time": "0.010000",
"total": 0.0
},
{
"count": 0,
"time": "0.100000",
"total": 0.0
},
{
"count": 0,
"time": "1.000000",
"total": 0.0
},
{
"count": 0,
"time": "10.000000",
"total": 0.0
},
{
"count": 0,
"time": "100.000000",
"total": 0.0
},
{
"count": 0,
"time": "1000.000000",
"total": 0.0
},
{
"count": 0,
"time": "10000.000000",
"total": 0.0
},
{
"count": 0,
"time": "100000.000000",
"total": 0.0
}
],
"operation": "read",
"range_base": 10
},
"write": {
"distribution": [
{
"count": 0,
"time": "0.000001",
"total": 0.0
},
{
"count": 0,
"time": "0.000010",
"total": 0.0
},
{
"count": 1,
"time": "0.000100",
"total": 9.0147000000000003e-5
},
{
"count": 3,
"time": "0.001000",
"total": 0.00131908
},
{
"count": 0,
"time": "0.010000",
"total": 0.0
},
{
"count": 0,
"time": "0.100000",
"total": 0.0
},
{
"count": 0,
"time": "1.000000",
"total": 0.0
},
{
"count": 0,
"time": "10.000000",
"total": 0.0
},
{
"count": 0,
"time": "100.000000",
"total": 0.0
},
{
"count": 0,
"time": "1000.000000",
"total": 0.0
},
{
"count": 0,
"time": "10000.000000",
"total": 0.0
},
{
"count": 0,
"time": "100000.000000",
"total": 0.0
}
],
"operation": "write",
"range_base": 10
}
},
"reused_connections": 0,
"routed_packets": 4,
"total_connections": 1
},
"triggered_at": "Fri, 05 Jan 2024 07:23:54 GMT",
"uptime": 2372,
"version_string": "10.6.15-MariaDB-1:10.6.15+maria~ubu2004-log"
},
"id": "server1",
"links": {
"self": "http://localhost:8989/v1/servers/server1/"
},
"relationships": {
"monitors": {
"data": [
{
"id": "MariaDB-Monitor",
"type": "monitors"
}
],
"links": {
"related": "http://localhost:8989/v1/monitors/",
"self": "http://localhost:8989/v1/servers/server1/relationships/monitors/"
}
},
"services": {
"data": [
{
"id": "RW-Split-Router",
"type": "services"
},
{
"id": "Read-Connection-Router",
"type": "services"
}
],
"links": {
"related": "http://localhost:8989/v1/services/",
"self": "http://localhost:8989/v1/servers/server1/relationships/services/"
}
}
},
"type": "servers"
},
"links": {
"self": "http://localhost:8989/v1/servers/server1/"
}
}GET /v1/servers{
"data": [
{
"attributes": {
"gtid_binlog_pos": "0-3000-8",
"gtid_current_pos": "0-3000-8",
"last_event": "master_up",
"lock_held": null,
"master_group": null,
"master_id": -1,
"name": "server1",
"node_id": 3000,
"parameters": {
"address": "127.0.0.1",
"disk_space_threshold": null,
"extra_port": 0,
"max_routing_connections": 0,
"monitorpw": null,
"monitoruser": null,
"persistmaxtime": "0ms",
"persistpoolmax": 0,
"port": 3000,
"priority": 0,
"private_address": null,
"proxy_protocol": false,
"rank": "primary",
"replication_custom_options": null,
"socket": null,
"ssl": false,
"ssl_ca": null,
"ssl_cert": null,
"ssl_cert_verify_depth": 9,
"ssl_cipher": null,
"ssl_key": null,
"ssl_verify_peer_certificate": false,
"ssl_verify_peer_host": false,
"ssl_version": "MAX"
},
"read_only": false,
"replication_lag": 0,
"server_id": 3000,
"slave_connections": [],
"source": {
"file": "/etc/maxscale.cnf",
"type": "static"
},
"state": "Master, Running",
"state_details": null,
"statistics": {
"active_operations": 0,
"adaptive_avg_select_time": "0ns",
"connection_pool_empty": 0,
"connections": 1,
"failed_auths": 0,
"max_connections": 1,
"max_pool_size": 0,
"persistent_connections": 0,
"response_time_distribution": {
"read": {
"distribution": [
{
"count": 0,
"time": "0.000001",
"total": 0.0
},
{
"count": 0,
"time": "0.000010",
"total": 0.0
},
{
"count": 0,
"time": "0.000100",
"total": 0.0
},
{
"count": 0,
"time": "0.001000",
"total": 0.0
},
{
"count": 0,
"time": "0.010000",
"total": 0.0
},
{
"count": 0,
"time": "0.100000",
"total": 0.0
},
{
"count": 0,
"time": "1.000000",
"total": 0.0
},
{
"count": 0,
"time": "10.000000",
"total": 0.0
},
{
"count": 0,
"time": "100.000000",
"total": 0.0
},
{
"count": 0,
"time": "1000.000000",
"total": 0.0
},
{
"count": 0,
"time": "10000.000000",
"total": 0.0
},
{
"count": 0,
"time": "100000.000000",
"total": 0.0
}
],
"operation": "read",
"range_base": 10
},
"write": {
"distribution": [
{
"count": 0,
"time": "0.000001",
"total": 0.0
},
{
"count": 0,
"time": "0.000010",
"total": 0.0
},
{
"count": 1,
"time": "0.000100",
"total": 9.0147000000000003e-5
},
{
"count": 3,
"time": "0.001000",
"total": 0.00131908
},
{
"count": 0,
"time": "0.010000",
"total": 0.0
},
{
"count": 0,
"time": "0.100000",
"total": 0.0
},
{
"count": 0,
"time": "1.000000",
"total": 0.0
},
{
"count": 0,
"time": "10.000000",
"total": 0.0
},
{
"count": 0,
"time": "100.000000",
"total": 0.0
},
{
"count": 0,
"time": "1000.000000",
"total": 0.0
},
{
"count": 0,
"time": "10000.000000",
"total": 0.0
},
{
"count": 0,
"time": "100000.000000",
"total": 0.0
}
],
"operation": "write",
"range_base": 10
}
},
"reused_connections": 0,
"routed_packets": 4,
"total_connections": 1
},
"triggered_at": "Fri, 05 Jan 2024 07:23:54 GMT",
"uptime": 2372,
"version_string": "10.6.15-MariaDB-1:10.6.15+maria~ubu2004-log"
},
"id": "server1",
"links": {
"self": "http://localhost:8989/v1/servers/server1/"
},
"relationships": {
"monitors": {
"data": [
{
"id": "MariaDB-Monitor",
"type": "monitors"
}
],
"links": {
"related": "http://localhost:8989/v1/monitors/",
"self": "http://localhost:8989/v1/servers/server1/relationships/monitors/"
}
},
"services": {
"data": [
{
"id": "RW-Split-Router",
"type": "services"
},
{
"id": "Read-Connection-Router",
"type": "services"
}
],
"links": {
"related": "http://localhost:8989/v1/services/",
"self": "http://localhost:8989/v1/servers/server1/relationships/services/"
}
}
},
"type": "servers"
},
{
"attributes": {
"gtid_binlog_pos": "0-3001-12",
"gtid_current_pos": "0-3001-12",
"last_event": "lost_slave",
"lock_held": null,
"master_group": null,
"master_id": 3000,
"name": "server2",
"node_id": 3001,
"parameters": {
"address": "127.0.0.1",
"disk_space_threshold": null,
"extra_port": 0,
"max_routing_connections": 0,
"monitorpw": null,
"monitoruser": null,
"persistmaxtime": "0ms",
"persistpoolmax": 0,
"port": 3001,
"priority": 0,
"private_address": null,
"proxy_protocol": false,
"rank": "primary",
"replication_custom_options": null,
"socket": null,
"ssl": false,
"ssl_ca": null,
"ssl_cert": null,
"ssl_cert_verify_depth": 9,
"ssl_cipher": null,
"ssl_key": null,
"ssl_verify_peer_certificate": false,
"ssl_verify_peer_host": false,
"ssl_version": "MAX"
},
"read_only": false,
"replication_lag": -1,
"server_id": 3001,
"slave_connections": [
{
"connection_name": "",
"gtid_io_pos": "",
"last_io_error": "",
"last_sql_error": "",
"master_host": "127.0.0.1",
"master_port": 3000,
"master_server_id": 3000,
"seconds_behind_master": null,
"slave_io_running": "No",
"slave_sql_running": "No",
"using_gtid": "No"
}
],
"source": {
"file": "/etc/maxscale.cnf",
"type": "static"
},
"state": "Running",
"state_details": null,
"statistics": {
"active_operations": 0,
"adaptive_avg_select_time": "0ns",
"connection_pool_empty": 0,
"connections": 0,
"failed_auths": 0,
"max_connections": 1,
"max_pool_size": 0,
"persistent_connections": 0,
"response_time_distribution": {
"read": {
"distribution": [
{
"count": 0,
"time": "0.000001",
"total": 0.0
},
{
"count": 0,
"time": "0.000010",
"total": 0.0
},
{
"count": 0,
"time": "0.000100",
"total": 0.0
},
{
"count": 1,
"time": "0.001000",
"total": 0.00037632399999999998
},
{
"count": 0,
"time": "0.010000",
"total": 0.0
},
{
"count": 0,
"time": "0.100000",
"total": 0.0
},
{
"count": 0,
"time": "1.000000",
"total": 0.0
},
{
"count": 0,
"time": "10.000000",
"total": 0.0
},
{
"count": 0,
"time": "100.000000",
"total": 0.0
},
{
"count": 0,
"time": "1000.000000",
"total": 0.0
},
{
"count": 0,
"time": "10000.000000",
"total": 0.0
},
{
"count": 0,
"time": "100000.000000",
"total": 0.0
}
],
"operation": "read",
"range_base": 10
},
"write": {
"distribution": [
{
"count": 0,
"time": "0.000001",
"total": 0.0
},
{
"count": 0,
"time": "0.000010",
"total": 0.0
},
{
"count": 0,
"time": "0.000100",
"total": 0.0
},
{
"count": 0,
"time": "0.001000",
"total": 0.0
},
{
"count": 0,
"time": "0.010000",
"total": 0.0
},
{
"count": 0,
"time": "0.100000",
"total": 0.0
},
{
"count": 0,
"time": "1.000000",
"total": 0.0
},
{
"count": 0,
"time": "10.000000",
"total": 0.0
},
{
"count": 0,
"time": "100.000000",
"total": 0.0
},
{
"count": 0,
"time": "1000.000000",
"total": 0.0
},
{
"count": 0,
"time": "10000.000000",
"total": 0.0
},
{
"count": 0,
"time": "100000.000000",
"total": 0.0
}
],
"operation": "write",
"range_base": 10
}
},
"reused_connections": 0,
"routed_packets": 1,
"total_connections": 1
},
"triggered_at": "Fri, 05 Jan 2024 07:24:07 GMT",
"uptime": 2372,
"version_string": "10.6.15-MariaDB-1:10.6.15+maria~ubu2004-log"
},
"id": "server2",
"links": {
"self": "http://localhost:8989/v1/servers/server2/"
},
"relationships": {
"monitors": {
"data": [
{
"id": "MariaDB-Monitor",
"type": "monitors"
}
],
"links": {
"related": "http://localhost:8989/v1/monitors/",
"self": "http://localhost:8989/v1/servers/server2/relationships/monitors/"
}
},
"services": {
"data": [
{
"id": "RW-Split-Router",
"type": "services"
},
{
"id": "Read-Connection-Router",
"type": "services"
}
],
"links": {
"related": "http://localhost:8989/v1/services/",
"self": "http://localhost:8989/v1/servers/server2/relationships/services/"
}
}
},
"type": "servers"
}
],
"links": {
"self": "http://localhost:8989/v1/servers/"
}
}POST /v1/servers{
"data": {
"id": "server3",
"type": "servers",
"attributes": {
"parameters": {
"address": "127.0.0.1",
"port": 3003
}
}
}
}{
"data": {
"id": "server4",
"type": "servers",
"attributes": {
"parameters": {
"address": "127.0.0.1",
"port": 3002
}
},
"relationships": {
"services": {
"data": [
{
"id": "RW-Split-Router",
"type": "services"
},
{
"id": "Read-Connection-Router",
"type": "services"
}
]
},
"monitors": {
"data": [
{
"id": "MySQL-Monitor",
"type": "monitors"
}
]
}
}
}
}PATCH /v1/servers/:name{
"data": {
"attributes": {
"parameters": {
"address": "192.168.0.123"
}
}
}
}{
"data": {
"relationships": {
"services": {
"data": [
{ "id": "Read-Connection-Router", "type": "services" }
]
},
"monitors": {
"data": [
{ "id": "MySQL-Monitor", "type": "monitors" }
]
}
}
}
}PATCH /v1/servers/:name/relationships/:typePATCH /v1/servers/my-db-server/relationships/services
{
data: [
{ "id": "my-rwsplit-service", "type": "services" }
]
}PATCH /v1/servers/my-db-server/relationships/services
{
data: []
}DELETE /v1/servers/:namePUT /v1/servers/:name/setPUT /v1/servers/db-server-1/set?state=maintenancePUT /v1/servers/db-server-1/set?state=maintenance&force=yesPUT /v1/servers/:name/clearGET /v1/services/:name{
"data": {
"attributes": {
"connections": 0,
"listeners": [
{
"attributes": {
"parameters": {
"MariaDBProtocol": {
"allow_replication": true
},
"address": "::",
"authenticator": null,
"authenticator_options": null,
"connection_init_sql_file": null,
"connection_metadata": [
"character_set_client=auto",
"character_set_connection=auto",
"character_set_results=auto",
"max_allowed_packet=auto",
"system_time_zone=auto",
"time_zone=auto",
"tx_isolation=auto",
"maxscale=auto"
],
"port": 4008,
"protocol": "MariaDBProtocol",
"proxy_protocol_networks": null,
"service": "Read-Connection-Router",
"socket": null,
"sql_mode": "default",
"ssl": false,
"ssl_ca": null,
"ssl_cert": null,
"ssl_cert_verify_depth": 9,
"ssl_cipher": null,
"ssl_crl": null,
"ssl_key": null,
"ssl_verify_peer_certificate": false,
"ssl_verify_peer_host": false,
"ssl_version": "MAX",
"type": "listener",
"user_mapping_file": null
},
"source": {
"file": "/etc/maxscale.cnf",
"type": "static"
},
"state": "Running"
},
"id": "Read-Connection-Listener",
"relationships": {
"services": {
"data": [
{
"id": "Read-Connection-Router",
"type": "services"
}
],
"links": {
"related": "http://localhost:8989/v1/services/",
"self": "http://localhost:8989/v1/listeners/Read-Connection-Listener/relationships/services/"
}
}
},
"type": "listeners"
}
],
"parameters": {
"auth_all_servers": false,
"connection_keepalive": "300000ms",
"disable_sescmd_history": false,
"enable_root_user": false,
"force_connection_keepalive": false,
"idle_session_pool_time": "-1ms",
"localhost_match_wildcard_host": true,
"log_auth_warnings": true,
"log_debug": false,
"log_info": false,
"log_notice": false,
"log_warning": false,
"master_accept_reads": true,
"max_connections": 0,
"max_replication_lag": "0ms",
"max_sescmd_history": 50,
"multiplex_timeout": "60000ms",
"net_write_timeout": "0ms",
"password": "*****",
"prune_sescmd_history": true,
"rank": "primary",
"retain_last_statements": -1,
"router": "readconnroute",
"router_options": "master",
"session_trace": false,
"strip_db_esc": true,
"type": "service",
"user": "maxuser",
"user_accounts_file": null,
"user_accounts_file_usage": "add_when_load_ok",
"version_string": null,
"wait_timeout": "0ms"
},
"router": "readconnroute",
"router_diagnostics": {
"queries": 0,
"server_query_statistics": []
},
"source": {
"file": "/etc/maxscale.cnf",
"type": "static"
},
"started": "Fri, 05 Jan 2024 07:23:54 GMT",
"state": "Started",
"statistics": {
"active_operations": 0,
"avg_sescmd_history_length": 0.0,
"avg_session_lifetime": 0.0,
"connections": 0,
"failed_auths": 0,
"max_connections": 0,
"max_sescmd_history_length": 0,
"max_session_lifetime": 0,
"routed_packets": 0,
"total_connections": 0
},
"total_connections": 0,
"users": [
{
"default_role": "",
"global_priv": false,
"host": "localhost",
"plugin": "mysql_native_password",
"proxy_priv": false,
"ssl": false,
"super_priv": false,
"user": "mariadb.sys"
},
{
"default_role": "",
"global_priv": true,
"host": "127.0.0.1",
"plugin": "mysql_native_password",
"proxy_priv": false,
"ssl": false,
"super_priv": true,
"user": "maxuser"
},
{
"default_role": "",
"global_priv": true,
"host": "%",
"plugin": "mysql_native_password",
"proxy_priv": false,
"ssl": false,
"super_priv": true,
"user": "maxuser"
},
{
"default_role": "",
"global_priv": true,
"host": "localhost",
"plugin": "mysql_native_password",
"proxy_priv": false,
"ssl": false,
"super_priv": true,
"user": "root"
},
{
"default_role": "",
"global_priv": true,
"host": "%",
"plugin": "mysql_native_password",
"proxy_priv": false,
"ssl": false,
"super_priv": true,
"user": "root"
}
],
"users_last_update": "Fri, 05 Jan 2024 07:23:55 GMT"
},
"id": "Read-Connection-Router",
"links": {
"self": "http://localhost:8989/v1/services/Read-Connection-Router/"
},
"relationships": {
"filters": {
"data": [
{
"id": "QLA",
"type": "filters"
},
{
"id": "Hint",
"type": "filters"
}
],
"links": {
"related": "http://localhost:8989/v1/filters/",
"self": "http://localhost:8989/v1/services/Read-Connection-Router/relationships/filters/"
}
},
"listeners": {
"data": [
{
"id": "Read-Connection-Listener",
"type": "listeners"
}
],
"links": {
"related": "http://localhost:8989/v1/listeners/",
"self": "http://localhost:8989/v1/services/Read-Connection-Router/relationships/listeners/"
}
},
"servers": {
"data": [
{
"id": "server1",
"type": "servers"
},
{
"id": "server2",
"type": "servers"
}
],
"links": {
"related": "http://localhost:8989/v1/servers/",
"self": "http://localhost:8989/v1/services/Read-Connection-Router/relationships/servers/"
}
}
},
"type": "services"
},
"links": {
"self": "http://localhost:8989/v1/services/Read-Connection-Router/"
}
}GET /v1/services{
"data": [
{
"attributes": {
"connections": 1,
"listeners": [
{
"attributes": {
"parameters": {
"MariaDBProtocol": {
"allow_replication": true
},
"address": "::",
"authenticator": null,
"authenticator_options": null,
"connection_init_sql_file": null,
"connection_metadata": [
"character_set_client=auto",
"character_set_connection=auto",
"character_set_results=auto",
"max_allowed_packet=auto",
"system_time_zone=auto",
"time_zone=auto",
"tx_isolation=auto",
"maxscale=auto"
],
"port": 4006,
"protocol": "MariaDBProtocol",
"proxy_protocol_networks": null,
"service": "RW-Split-Router",
"socket": null,
"sql_mode": "default",
"ssl": false,
"ssl_ca": null,
"ssl_cert": null,
"ssl_cert_verify_depth": 9,
"ssl_cipher": null,
"ssl_crl": null,
"ssl_key": null,
"ssl_verify_peer_certificate": false,
"ssl_verify_peer_host": false,
"ssl_version": "MAX",
"type": "listener",
"user_mapping_file": null
},
"source": {
"file": "/etc/maxscale.cnf",
"type": "static"
},
"state": "Running"
},
"id": "RW-Split-Listener",
"relationships": {
"services": {
"data": [
{
"id": "RW-Split-Router",
"type": "services"
}
],
"links": {
"related": "http://localhost:8989/v1/services/",
"self": "http://localhost:8989/v1/listeners/RW-Split-Listener/relationships/services/"
}
}
},
"type": "listeners"
}
],
"parameters": {
"auth_all_servers": false,
"causal_reads": "none",
"causal_reads_timeout": "10000ms",
"connection_keepalive": "300000ms",
"delayed_retry": false,
"delayed_retry_timeout": "10000ms",
"disable_sescmd_history": false,
"enable_root_user": false,
"force_connection_keepalive": false,
"idle_session_pool_time": "-1ms",
"lazy_connect": false,
"localhost_match_wildcard_host": true,
"log_auth_warnings": true,
"log_debug": false,
"log_info": false,
"log_notice": false,
"log_warning": false,
"master_accept_reads": false,
"master_failure_mode": "fail_on_write",
"master_reconnection": true,
"max_connections": 0,
"max_replication_lag": "0ms",
"max_sescmd_history": 50,
"max_slave_connections": 255,
"multiplex_timeout": "60000ms",
"net_write_timeout": "0ms",
"optimistic_trx": false,
"password": "*****",
"prune_sescmd_history": true,
"rank": "primary",
"retain_last_statements": -1,
"retry_failed_reads": true,
"reuse_prepared_statements": false,
"router": "readwritesplit",
"session_trace": false,
"slave_connections": 255,
"slave_selection_criteria": "least_current_operations",
"strict_multi_stmt": false,
"strict_sp_calls": false,
"strict_tmp_tables": true,
"strip_db_esc": true,
"transaction_replay": false,
"transaction_replay_attempts": 5,
"transaction_replay_checksum": "full",
"transaction_replay_max_size": 1048576,
"transaction_replay_retry_on_deadlock": false,
"transaction_replay_retry_on_mismatch": false,
"transaction_replay_safe_commit": true,
"transaction_replay_timeout": "30000ms",
"type": "service",
"use_sql_variables_in": "all",
"user": "maxuser",
"user_accounts_file": null,
"user_accounts_file_usage": "add_when_load_ok",
"version_string": null,
"wait_timeout": "0ms"
},
"router": "readwritesplit",
"router_diagnostics": {
"queries": 4,
"replayed_transactions": 0,
"ro_transactions": 1,
"route_all": 1,
"route_master": 3,
"route_slave": 0,
"rw_transactions": 0,
"server_query_statistics": [
{
"avg_selects_per_session": 0,
"avg_sess_duration": "0ns",
"id": "server1",
"read": 1,
"total": 4,
"write": 3
},
{
"avg_selects_per_session": 0,
"avg_sess_duration": "0ns",
"id": "server2",
"read": 1,
"total": 1,
"write": 0
}
],
"trx_max_size_exceeded": 0
},
"source": {
"file": "/etc/maxscale.cnf",
"type": "static"
},
"started": "Fri, 05 Jan 2024 07:23:54 GMT",
"state": "Started",
"statistics": {
"active_operations": 0,
"avg_sescmd_history_length": 0.0,
"avg_session_lifetime": 0.0,
"connections": 1,
"failed_auths": 0,
"max_connections": 1,
"max_sescmd_history_length": 0,
"max_session_lifetime": 0,
"routed_packets": 4,
"total_connections": 1
},
"total_connections": 1,
"users": [
{
"default_role": "",
"global_priv": false,
"host": "localhost",
"plugin": "mysql_native_password",
"proxy_priv": false,
"ssl": false,
"super_priv": false,
"user": "mariadb.sys"
},
{
"default_role": "",
"global_priv": true,
"host": "127.0.0.1",
"plugin": "mysql_native_password",
"proxy_priv": false,
"ssl": false,
"super_priv": true,
"user": "maxuser"
},
{
"default_role": "",
"global_priv": true,
"host": "%",
"plugin": "mysql_native_password",
"proxy_priv": false,
"ssl": false,
"super_priv": true,
"user": "maxuser"
},
{
"default_role": "",
"global_priv": true,
"host": "localhost",
"plugin": "mysql_native_password",
"proxy_priv": false,
"ssl": false,
"super_priv": true,
"user": "root"
},
{
"default_role": "",
"global_priv": true,
"host": "%",
"plugin": "mysql_native_password",
"proxy_priv": false,
"ssl": false,
"super_priv": true,
"user": "root"
}
],
"users_last_update": "Fri, 05 Jan 2024 07:23:55 GMT"
},
"id": "RW-Split-Router",
"links": {
"self": "http://localhost:8989/v1/services/RW-Split-Router/"
},
"relationships": {
"listeners": {
"data": [
{
"id": "RW-Split-Listener",
"type": "listeners"
}
],
"links": {
"related": "http://localhost:8989/v1/listeners/",
"self": "http://localhost:8989/v1/services/RW-Split-Router/relationships/listeners/"
}
},
"monitors": {
"data": [
{
"id": "MariaDB-Monitor",
"type": "monitors"
}
],
"links": {
"related": "http://localhost:8989/v1/monitors/",
"self": "http://localhost:8989/v1/services/RW-Split-Router/relationships/monitors/"
}
}
},
"type": "services"
},
{
"attributes": {
"connections": 0,
"listeners": [
{
"attributes": {
"parameters": {
"MariaDBProtocol": {
"allow_replication": true
},
"address": "::",
"authenticator": null,
"authenticator_options": null,
"connection_init_sql_file": null,
"connection_metadata": [
"character_set_client=auto",
"character_set_connection=auto",
"character_set_results=auto",
"max_allowed_packet=auto",
"system_time_zone=auto",
"time_zone=auto",
"tx_isolation=auto",
"maxscale=auto"
],
"port": 4008,
"protocol": "MariaDBProtocol",
"proxy_protocol_networks": null,
"service": "Read-Connection-Router",
"socket": null,
"sql_mode": "default",
"ssl": false,
"ssl_ca": null,
"ssl_cert": null,
"ssl_cert_verify_depth": 9,
"ssl_cipher": null,
"ssl_crl": null,
"ssl_key": null,
"ssl_verify_peer_certificate": false,
"ssl_verify_peer_host": false,
"ssl_version": "MAX",
"type": "listener",
"user_mapping_file": null
},
"source": {
"file": "/etc/maxscale.cnf",
"type": "static"
},
"state": "Running"
},
"id": "Read-Connection-Listener",
"relationships": {
"services": {
"data": [
{
"id": "Read-Connection-Router",
"type": "services"
}
],
"links": {
"related": "http://localhost:8989/v1/services/",
"self": "http://localhost:8989/v1/listeners/Read-Connection-Listener/relationships/services/"
}
}
},
"type": "listeners"
}
],
"parameters": {
"auth_all_servers": false,
"connection_keepalive": "300000ms",
"disable_sescmd_history": false,
"enable_root_user": false,
"force_connection_keepalive": false,
"idle_session_pool_time": "-1ms",
"localhost_match_wildcard_host": true,
"log_auth_warnings": true,
"log_debug": false,
"log_info": false,
"log_notice": false,
"log_warning": false,
"master_accept_reads": true,
"max_connections": 0,
"max_replication_lag": "0ms",
"max_sescmd_history": 50,
"multiplex_timeout": "60000ms",
"net_write_timeout": "0ms",
"password": "*****",
"prune_sescmd_history": true,
"rank": "primary",
"retain_last_statements": -1,
"router": "readconnroute",
"router_options": "master",
"session_trace": false,
"strip_db_esc": true,
"type": "service",
"user": "maxuser",
"user_accounts_file": null,
"user_accounts_file_usage": "add_when_load_ok",
"version_string": null,
"wait_timeout": "0ms"
},
"router": "readconnroute",
"router_diagnostics": {
"queries": 0,
"server_query_statistics": []
},
"source": {
"file": "/etc/maxscale.cnf",
"type": "static"
},
"started": "Fri, 05 Jan 2024 07:23:54 GMT",
"state": "Started",
"statistics": {
"active_operations": 0,
"avg_sescmd_history_length": 0.0,
"avg_session_lifetime": 0.0,
"connections": 0,
"failed_auths": 0,
"max_connections": 0,
"max_sescmd_history_length": 0,
"max_session_lifetime": 0,
"routed_packets": 0,
"total_connections": 0
},
"total_connections": 0,
"users": [
{
"default_role": "",
"global_priv": false,
"host": "localhost",
"plugin": "mysql_native_password",
"proxy_priv": false,
"ssl": false,
"super_priv": false,
"user": "mariadb.sys"
},
{
"default_role": "",
"global_priv": true,
"host": "127.0.0.1",
"plugin": "mysql_native_password",
"proxy_priv": false,
"ssl": false,
"super_priv": true,
"user": "maxuser"
},
{
"default_role": "",
"global_priv": true,
"host": "%",
"plugin": "mysql_native_password",
"proxy_priv": false,
"ssl": false,
"super_priv": true,
"user": "maxuser"
},
{
"default_role": "",
"global_priv": true,
"host": "localhost",
"plugin": "mysql_native_password",
"proxy_priv": false,
"ssl": false,
"super_priv": true,
"user": "root"
},
{
"default_role": "",
"global_priv": true,
"host": "%",
"plugin": "mysql_native_password",
"proxy_priv": false,
"ssl": false,
"super_priv": true,
"user": "root"
}
],
"users_last_update": "Fri, 05 Jan 2024 07:23:55 GMT"
},
"id": "Read-Connection-Router",
"links": {
"self": "http://localhost:8989/v1/services/Read-Connection-Router/"
},
"relationships": {
"filters": {
"data": [
{
"id": "QLA",
"type": "filters"
},
{
"id": "Hint",
"type": "filters"
}
],
"links": {
"related": "http://localhost:8989/v1/filters/",
"self": "http://localhost:8989/v1/services/Read-Connection-Router/relationships/filters/"
}
},
"listeners": {
"data": [
{
"id": "Read-Connection-Listener",
"type": "listeners"
}
],
"links": {
"related": "http://localhost:8989/v1/listeners/",
"self": "http://localhost:8989/v1/services/Read-Connection-Router/relationships/listeners/"
}
},
"servers": {
"data": [
{
"id": "server1",
"type": "servers"
},
{
"id": "server2",
"type": "servers"
}
],
"links": {
"related": "http://localhost:8989/v1/servers/",
"self": "http://localhost:8989/v1/services/Read-Connection-Router/relationships/servers/"
}
}
},
"type": "services"
}
],
"links": {
"self": "http://localhost:8989/v1/services/"
}
}POST /v1/services{
"data": {
"id": "my-service",
"type": "services",
"attributes": {
"router": "readwritesplit",
"parameters": {
"user": "maxuser",
"password": "maxpwd"
}
},
"relationships": {
"filters": {
"data": [
{
"id": "QLA",
"type": "filters"
}
]
},
"servers": {
"data": [
{
"id": "server1",
"type": "servers"
}
]
}
}
}
}DELETE /v1/services/:namePATCH /v1/services/:name{
"data": {
"attributes": {
"parameters": {
"user": "admin"
}
}
}
}PATCH /v1/services/:name/relationships/:typePATCH /v1/services/my-rw-service/relationships/servers
{
data: [
{ "id": "my-server", "type": "servers" }
]
}PATCH /v1/services/my-rw-service/relationships/servers
{
data: []
}PUT /v1/services/:name/stopPUT /v1/services/:name/startPOST /v1/services/:name/reloadGET /v1/services/:name/listenersGET /v1/services/:name/listeners/:listenerPOST /v1/services/:name/listenersDELETE /v1/services/:service/listeners/:nameThis document provides a short overview of the readwritesplit router module and its intended use case scenarios. It also displays all router configuration parameters with their descriptions. A list of current limitations of the module is included and use examples are provided.
The readwritesplit router is designed to increase the read-only processing capability of a cluster while maintaining consistency. This is achieved by splitting the query load into read and write queries. Read queries, which do not modify data, are spread across multiple nodes while all write queries will be sent to a single node.
The router is designed to be used with a traditional Primary-Replica replication cluster. It automatically detects changes in the primary server and will use the current primary server of the cluster. With a Galera cluster, one can achieve a resilient setup and easy primary failover by using one of the Galera nodes as a Write-Primary node, where all write queries are routed, and spreading the read load over all the nodes.
Maintenance and Draining state
When a server that readwritesplit uses is put into maintenance mode, any ongoing requests are allowed to finish before the connection is closed. If the server that is put into maintenance mode is a primary, open transactions are allowed to complete before the connection is closed. Note that this means neither idle sessions nor long-running transactions will be closed by readwritesplit. To forcefully close the connections, use the following command:
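For example, a command of the following form (a sketch, assuming a server named server1) puts the server into maintenance mode and forcefully closes its connections:

maxctrl set server server1 maintenance --force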
If a server is put into the Draining state while a connection is open, the
connection will be used normally. Whenever a new connection needs to be created,
whether that is due to a network error or to a new session being opened, only
servers that are neither Draining nor Drained will be used.
Readwritesplit router-specific settings are specified in the configuration file of MariaDB MaxScale in its specific section. The section can be freely named but the name is used later as a reference in a listener section.
For more details about the standard service parameters, refer to the .
Starting with 2.3, all router parameters can be configured at runtime. Use maxctrl alter service to modify them. The changed configuration will only be
taken into use by new sessions.
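For example (a sketch; the service name matches the earlier REST API examples), a single router parameter can be changed at runtime with:

maxctrl alter service RW-Split-Router max_slave_connections=2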
max_slave_connections
Type: integer
Mandatory: No
Dynamic: Yes
Default: 255
max_slave_connections sets the maximum number of replicas a router session uses
at any moment. The default is to use at most 255 replica connections per client
connection. In older versions the default was to use all available replicas with
no limit.
For MaxScale 2.5.12 and newer, the minimum value is 0.
For MaxScale versions 2.5.11 and older, the minimum value is 1. These versions suffer from a bug () that causes the parameter to accept any value but to only work when a value greater than one is given.
Starting with MaxScale 2.5.0, the use of percentage values in max_slave_connections is deprecated. The support for percentages will be
removed in a future release.
For example, if you have configured MaxScale with one primary and three replicas
and set max_slave_connections=2, for each client connection a connection to
the primary and two replica connections would be opened. The read query load
balancing is then done between these two replicas and writes are sent to the
primary.
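A minimal configuration sketch of such a service (server names and credentials are illustrative):

[RW-Split-Router]
type=service
router=readwritesplit
servers=primary1,replica1,replica2,replica3
user=maxuser
password=maxpwd
max_slave_connections=2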
By tuning this parameter, you can control how dynamic the load balancing is at
the cost of extra created connections. With a lower value of max_slave_connections, fewer connections per session are created and the set of
possible replica servers is smaller. With a higher value of max_slave_connections, more connections are created, which requires more
resources, but load balancing will almost always give the best single query
response time and performance. Longer sessions are less affected by a high max_slave_connections as the relative cost of opening a connection is lower.
Behavior of max_slave_connections=0
When readwritesplit is configured with max_slave_connections=0, readwritesplit
will behave slightly differently in that it will route all reads to the current
master server. This is a convenient way to force all of the traffic to go to a
single node while still being able to leverage the replay and reconnection
features of readwritesplit.
In this mode, the behavior of master_failure_mode=fail_on_write also changes
slightly. If the current Master server fails and a read is done when there's
no other Master server available, the connection will be closed. This is done
to prevent an extra slave connection from being opened that would not be closed
if a new Master server were to appear.
slave_connections
Type: integer
Mandatory: No
Dynamic: Yes
Default: 255
This parameter controls how many replica connections each new session starts
with. The default value is 255, which is the same as the default value of max_slave_connections.
In contrast to max_slave_connections, slave_connections serves as a
soft limit on how many replica connections are created. The number of replica
connections can exceed slave_connections if the load balancing algorithm
finds an unconnected replica server better than all other replicas.
Setting this parameter to 1 allows faster connection creation and improved
resource usage due to the smaller number of initial backend
connections. It is recommended to use slave_connections=1 when the
lifetime of the client connections is short.
max_replication_lag
Type:
Mandatory: No
Dynamic: Yes
Default: 0s
NOTE Up until 23.02, this parameter was called max_slave_replication_lag,
which has been deprecated but still works as an alias for max_replication_lag.
Specify how many seconds a replica is allowed to be behind the primary. The lag of
a replica must be less than the configured value in order for it to be used for
routing. If set to 0s (the default value), the feature is disabled.
The replica lag must be less than max_replication_lag. This means that it
is possible to define, with max_replication_lag=1s, that all replicas must
be up to date in order for them to be used for routing.
Note that this feature does not guarantee that writes done on the primary are visible for reads done on the replica. This is mainly due to the method of replication lag measurement. For a feature that guarantees this, refer to .
The lag is specified as documented . Note that since the granularity of the lag is seconds, a lag specified in milliseconds will be rejected, even if the duration is longer than a second.
The Readwritesplit-router does not detect the replication lag itself. A monitor such as the MariaDB-monitor for a Primary-Replica cluster is required. This option only affects Primary-Replica clusters. Galera clusters do not have a concept of replica lag even if the application of write sets might have lag. When a server is disqualified from routing because of replication lag, a warning is logged. Similarly, when the server has caught up enough to be a valid routing target, another warning is logged. These messages are only logged when a query is being routed and the replication state changes.
Starting with MaxScale versions 23.08.7, 24.02.3 and 24.08.1, readwritesplit
will discard connections to any servers that have excessive replication lag. The
connection will be discarded if a server is lagging behind by more than twice
the amount of max_replication_lag and the server is behind by more than 300
seconds (replication lag > MAX(300, 2 * max_replication_lag)).
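For example, with max_replication_lag=30s a connection is discarded only once the replica is more than 300 seconds behind, while with max_replication_lag=200s the threshold becomes 400 seconds.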
use_sql_variables_in
Type: enum
Mandatory: No
Dynamic: Yes
Values: master, all
Default: all
This parameter controls how SELECT statements that use SQL user variables are
handled. Here is an example of such a query that uses it to return an increasing
row number for a resultset:
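For illustration, such a statement pair could look as follows (the table name t1 is hypothetical):

```sql
SET @rownum := 0;
SELECT (@rownum := @rownum + 1) AS row_num, t1.* FROM t1;
```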
By default MaxScale will route both the SET and SELECT statements to all
nodes. Any future reads of the user variables can also be performed on any node.
The possible values for this parameter are:
all (default)
Modifications to user variables inside SELECT statements as well as reads
of user variables are routed to all servers.
Versions before MaxScale 22.08 returned an error if a user variable was
modified inside of a SELECT statement when use_sql_variables_in=all was
used. MaxScale 22.08 will instead route the query to all servers and discard
the extra results.
master
Modifications to user variables inside SELECT statements as well as reads of user variables are routed to the primary server. This forces more of the traffic onto the primary server but it reduces the amount of data that is discarded for any SELECT statement that also modifies a user variable. With this mode, the state of user variables is not deterministic if they are modified inside of a SELECT statement. SET statements that modify user variables are still routed to all servers.
DML statements, such as INSERT, UPDATE or DELETE, that modify SQL user variables are still treated as writes and are only routed to the primary server. For example, after the following query the value of @myid is no longer the same on all servers and the SELECT statement can return different values depending where it ends up being executed:
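For illustration, a hypothetical pair of statements showing the effect (table t1 is only an example):

```sql
INSERT INTO t1 (id) VALUES (@myid := 5);  -- DML that modifies @myid, routed only to the primary
SELECT @myid;                             -- may return different values depending on the server
```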
master_reconnection
Type: boolean
Mandatory: No
Dynamic: Yes
Default: true (>= MaxScale 24.02), false (<= MaxScale 23.08)
Allow the primary server to change mid-session. This feature requires that disable_sescmd_history is not used.
Starting with MaxScale 24.02, if disable_sescmd_history is enabled, master_reconnection will be automatically disabled.
When a readwritesplit session starts, it will pick a primary server as the
current primary server of that session. When master_reconnection is disabled,
when this primary server is lost or changes to another server, the connection
will be closed.
When master_reconnection is enabled, readwritesplit can sometimes recover a
lost connection to the primary server. This largely depends on the value of master_failure_mode.
With master_failure_mode=fail_instantly, the primary server is only allowed to
change to another server. This change must happen without a loss of the primary
server.
With master_failure_mode=fail_on_write, the loss of the primary server is no
longer a fatal error: if a replacement primary server appears before any write
queries are received, readwritesplit will transparently reconnect to the new
primary server.
In both cases the change in the primary server can only take place if prune_sescmd_history is enabled or max_sescmd_history has not yet been exceeded and the session does not have an open transaction.
The recommended configuration is to use master_reconnection=true and master_failure_mode=fail_on_write. This provides improved fault tolerance without any risk to the consistency of the database.
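As a sketch, the relevant part of a readwritesplit service section using these recommended settings might look as follows (service, server and credential names are placeholders):

```
[Split-Service]
type=service
router=readwritesplit
servers=server1,server2,server3
user=maxuser
password=maxpwd
master_reconnection=true
master_failure_mode=fail_on_write
```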
slave_selection_criteria
Type: enum
Mandatory: No
Dynamic: Yes
Values: least_current_operations, adaptive_routing, least_behind_master, least_router_connections, least_global_connections
Default: least_current_operations
This option controls how the readwritesplit router chooses the replicas it
connects to and how the load balancing is done. The default behavior is to route
read queries to the replica server with the lowest amount of ongoing queries, i.e. least_current_operations.
The option syntax:
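In the service section of the configuration this takes the form:

```
slave_selection_criteria=<criteria>
```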
Where <criteria> is one of the following values.
least_current_operations (default), the replica with least active operations
adaptive_routing, based on server average response times.
least_behind_master, the replica with smallest replication lag
least_global_connections, the replica with least connections from MariaDB MaxScale
least_router_connections, the replica with least connections from this service
least_current_operations uses the current number of active operations
(i.e. SQL queries) as the load balancing metric and it optimizes for maximal
query throughput. Each query gets routed to the server with the least active
operations which results in faster servers processing more traffic.
adaptive_routing uses the server response time and current estimated server
load as the load balancing metric. The server that is estimated to finish an
additional query first is chosen. A modified average response time for each
server is continuously updated to allow slow servers at least some traffic and
quickly react to changes in server load conditions. This selection criterion is
designed for heterogeneous clusters: servers of differing hardware, differing
network distances, or when other loads are running on the servers (including a
backup). If the servers are queried by other clients than MaxScale, the load
caused by them is indirectly taken into account.
least_behind_master uses the measured replication lag as the load balancing
metric. This means that servers that are more up-to-date are favored which
increases the likelihood of the data being read being up-to-date. However, this
is not as effective as causal_reads would be as there's no guarantee that
writes done by the same connection will be routed to a server that has
replicated those changes. The recommended approach is to use LEAST_CURRENT_OPERATIONS or ADAPTIVE_ROUTING in combination with causal_reads.
NOTE: least_global_connections and least_router_connections should not
be used, they are legacy options that exist only for backwards
compatibility. Using them will result in skewed load balancing as the algorithm
uses a metric that's too coarse (number of connections) to load balance
something that's finer (individual SQL queries).
The least_global_connections and least_router_connections use the
connections from MariaDB MaxScale to the server, not the amount of connections
reported by the server itself.
Starting with MaxScale versions 2.5.29, 6.4.11, 22.08.9, 23.02.5 and 23.08.1,
lowercase versions of the values are also accepted. For example, slave_selection_criteria=LEAST_CURRENT_OPERATIONS and slave_selection_criteria=least_current_operations are both accepted as valid
values.
Starting with MaxScale 23.08.1, the legacy uppercase values have been deprecated. All runtime modifications of the parameter will now be persisted in lowercase. The uppercase values are still accepted but will be removed in a future MaxScale release.
master_accept_reads
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
Enables the primary server to be used for reads. This is a useful option to enable if you are using a small number of servers and wish to use the primary for reads as well, provided that the extra read load does not reduce the write throughput of the cluster.
By default, no reads are sent to the primary as long as there is a valid replica
server available. If no replicas are available, reads are sent to the primary
regardless of the value of master_accept_reads.
strict_multi_stmt
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
When a client executes a multi-statement query, it will be treated as if it were a DML statement and routed to the primary. If the option is enabled, all queries after a multi-statement query will be routed to the primary to guarantee a consistent session state.
If the feature is disabled, queries are routed normally after a multi-statement query.
Warning: Enable the strict mode only if you know that the clients will send statements that cause inconsistencies in the session state.
strict_sp_calls
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
Similar to strict_multi_stmt, this option allows all queries after a CALL
operation on a stored procedure to be routed to the primary.
All warnings and restrictions that apply to strict_multi_stmt also apply to strict_sp_calls.
strict_tmp_tables
Type: boolean
Mandatory: No
Dynamic: Yes
Default: true (>= MaxScale 24.02), false (<= MaxScale 23.08)
When strict_tmp_tables is disabled, all temporary tables are lost when a
reconnection of the primary node occurs. This means that if a reconnection to
the primary takes place, temporary tables might appear to disappear in the
middle of a connection.
When strict_tmp_tables is enabled, reconnections are prevented as long as temporary tables exist. In this case, if the primary node is lost and temporary tables exist, the session is closed. If a session creates temporary tables but
does not drop them, this behavior will effectively disable reconnections until
the session is closed.
master_failure_mode
Type: enum
Mandatory: No
Dynamic: Yes
Values: fail_instantly, fail_on_write, error_on_write
Default: fail_on_write (MaxScale 23.08: fail_instantly)
This option controls how the failure of a primary server is handled.
The following table describes the values for this option and how they treat the loss of a primary server.
Value | Description
fail_instantly | When the failure of the primary server is detected, the connection will be closed immediately.
fail_on_write | The client connection is closed if a write query is received when no primary is available.
error_on_write | If no primary is available and a write query is received, an error is returned stating that the connection is in read-only mode.
These also apply to new sessions created after the primary has failed. This means
that in fail_on_write or error_on_write mode, connections are accepted as
long as replica servers are available.
When configured with fail_on_write or error_on_write, sessions that are idle
will not be closed even if all backend connections for that session have
failed. This is done in the hopes that before the next query from the idle
session arrives, a reconnection to one of the replicas is made. However, this can
leave idle connections around unless the client application actively closes
them. To prevent this, use the
parameter.
Note: If master_failure_mode is set to error_on_write and the connection
to the primary is lost, by default, clients will not be able to execute write
queries without reconnecting to MariaDB MaxScale once a new primary is
available. If is enabled, the
session can recover if one of the replicas is promoted as the primary.
retry_failed_reads
Type: boolean
Mandatory: No
Dynamic: Yes
Default: true
This option controls whether autocommit selects are retried in case of failure.
When a simple autocommit select is being executed outside of a transaction and the replica server where the query is being executed fails, readwritesplit can retry the read on a replacement server. This makes the failure of a replica transparent to the client.
If a part of the result was already delivered to the client, the query will not
be retried. The retrying of queries with partially delivered results is only
possible when transaction_replay is enabled.
delayed_retry
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
Retry queries over a period of time.
When this feature is enabled, a failure to route a query due to a connection problem will not immediately result in an error. The routing of the query is delayed until either a valid candidate server is available or the retry timeout is reached. If a candidate server becomes available before the timeout is reached, the query is routed normally and no connection error is returned. If no candidates are found and the timeout is exceeded, the router returns to normal behavior and returns an error.
When combined with the master_reconnection parameter, failures of writes done
outside of transactions can be hidden from the client connection. This allows a
primary to be replaced while writes are being sent.
Starting with MaxScale 21.06.18, 22.08.15, 23.02.12, 23.08.8, 24.02.4 and
24.08.1, delayed_retry will no longer attempt to retry a query if it was
already sent to the database. If a query is received while a valid target server
is not available, the execution of the query is delayed until a valid target is
found or the delayed retry timeout is hit. If a query was already sent, it will
not be replayed to prevent duplicate execution of statements.
In older versions of MaxScale, duplicate execution of a statement can occur if
the connection to the server is lost or the server crashes but the server comes
back up before the timeout for the retrying is exceeded. At this point, if the
server managed to read the client's statement, it will be executed. For this
reason, it is recommended to only enable delayed_retry for older versions of
MaxScale when the possibility of duplicate statement execution is an acceptable
risk.
delayed_retry_timeout
Type: duration
Mandatory: No
Dynamic: Yes
Default: 10s
The duration to wait until an error is returned to the client when delayed_retry is enabled.
The timeout is specified as documented . If no explicit unit is provided, the value is interpreted as seconds in MaxScale 2.4. In subsequent versions a value without a unit may be rejected. Note that since the granularity of the timeout is seconds, a timeout specified in milliseconds will be rejected, even if the duration is longer than a second.
transaction_replay
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
Replay interrupted transactions.
Enabling this parameter enables both delayed_retry and master_reconnection
and sets master_failure_mode to fail_on_write, thereby overriding any
configured values for these parameters.
When the server where the transaction is in progress fails, readwritesplit can migrate the transaction to a replacement server. This can completely hide the failure of a primary node without any visible effects to the client.
If no replacement node becomes available, the client connection is closed.
To control how long a transaction replay can take, use transaction_replay_timeout.
Please refer to the section for a more detailed explanation of what should and should not be done with transaction replay.
transaction_replay_max_size
Type: size
Mandatory: No
Dynamic: Yes
Default: 1 MiB
The limit on transaction size for transaction replay in bytes. Any transaction
that exceeds this limit will not be replayed. The default value is 1 MiB. This
limit applies at a session level which means that the total peak memory
consumption can be transaction_replay_max_size times the number of client
connections.
The amount of memory needed to store a particular transaction will be slightly larger than the length in bytes of the SQL used in the transaction. If the limit is ever exceeded, a message will be logged at the info level.
The number of times that this limit has been exceeded is shown in maxctrl show service as trx_max_size_exceeded.
Read for more details on size type parameters in MaxScale.
transaction_replay_attempts
Type: integer
Mandatory: No
Dynamic: Yes
Default: 5
The upper limit on how many times a transaction replay is attempted before giving up.
A transaction replay failure can happen if the server where the transaction is being replayed fails while the replay is in progress. In practice this parameter controls how many server and network failures a single transaction replay tolerates. If a transaction is replayed successfully, the counter for failed attempts is reset.
transaction_replay_timeout
Type: duration
Mandatory: No
Dynamic: Yes
Default: 30s (>= MaxScale 24.02), 0s (<= MaxScale 23.08)
How long the replay of a transaction is attempted before giving up. To explicitly disable this feature, set the value to 0 seconds.
The timeout is specified as documented and the value must include a unit for the duration.
When transaction_replay_timeout is enabled, the time a transaction replay can
take is controlled solely by this parameter. This is a more convenient and
predictable method of controlling how long a transaction replay can be attempted
before the connection is closed.
If delayed_retry_timeout is less than transaction_replay_timeout, it is set
to the same value.
Without transaction_replay_timeout, the time for which a transaction can be retried is controlled by delayed_retry_timeout and transaction_replay_attempts. This can result in a maximum replay time limit of delayed_retry_timeout multiplied by transaction_replay_attempts, which by default is 50 seconds. The minimum replay time limit can be as low as transaction_replay_attempts seconds (5 seconds by default) in cases where the connection fails right after it was created. Usually this happens due to problems like the max_connections limit being hit on the database server.
transaction_replay_timeout is the recommended method of controlling the
timeouts for transaction replay and is by default set to 30 seconds in MaxScale
24.02.
transaction_replay_retry_on_deadlock
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
Enable automatic retrying of transactions that end up in a deadlock.
If this feature is enabled and a transaction returns a deadlock error
(e.g. SQLSTATE 40001: Deadlock found when trying to get lock; try restarting transaction),
the transaction is automatically retried. If the retrying of the transaction
results in another deadlock error, it is retried until it either succeeds or a
transaction checksum error is encountered.
transaction_replay_safe_commit
Type: boolean
Mandatory: No
Dynamic: Yes
Default: true
If a transaction is ending and the COMMIT statement at the end of it is
interrupted, there is a risk of duplicating the transaction if it is
replayed. This parameter prevents the retrying of transactions that are about to
commit.
This parameter was added in MaxScale 23.08.0 and is enabled by default. The older version of MaxScale always attempted to replay the transaction even if there was a risk of duplicating the transaction.
If the data that is about to be modified is read before it is modified and it is
locked in an appropriate manner (e.g. with SELECT ... FOR UPDATE or with the SERIALIZABLE isolation level), it is safe to replay a transaction that was
about to commit. This is because the checksum of the transaction will mismatch
if the original transaction ended up committing on the server. Disabling this
feature can enable more robust delivery of transactions but it requires that the
SQL is correctly formed and compatible with this behavior.
transaction_replay_retry_on_mismatch
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
Retry transactions that end in checksum mismatch.
When enabled, any replayed transactions that end with a checksum mismatch are retried until they either succeed or one of the transaction replay limits is reached (delayed_retry_timeout, transaction_replay_timeout or transaction_replay_attempts).
transaction_replay_checksum
Type: enum
Mandatory: No
Dynamic: Yes
Values: full, result_only, no_insert_id
Default: full
Selects which transaction checksum method is used to verify the result of the replayed transaction.
Note that only transaction_replay_checksum=full is guaranteed to retain the
consistency of the replayed transaction.
Possible values are:
full (default)
All responses from the server are included in the checksum. This retains the full consistency guarantee of the replayed transaction as it must match exactly the one that was already returned to the client.
result_only
Only resultsets and errors are included in the checksum. OK packets (i.e. successful queries that do not return results) are ignored. This mode is intended to be used in cases where the extra information (auto-generated ID, warnings etc.) returned in the OK packet is not used by the application. This mode is safe to use only if the auto-generated ID is not actually used by any following queries. An example of such behavior would be a transaction that ends with an INSERT into a table with an AUTO_INCREMENT column.
no_insert_id
The same as result_only but results from queries that use LAST_INSERT_ID() are also ignored. This mode is safe to use only if the result of the query is not used by any subsequent statement in the transaction.
optimistic_trx
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
Enable optimistic transaction execution. This parameter controls whether normal
transactions (i.e. START TRANSACTION or BEGIN) are load balanced across
replicas. This feature is disabled by default and enabling it implicitly enables the transaction_replay, delayed_retry and master_reconnection parameters.
When this mode is enabled, all transactions are first attempted on replica servers. If the transaction contains no statements that modify data, it is completed on the replica. If the transaction contains statements that modify data, it is rolled back on the replica server and restarted on the primary. The rollback is initiated the moment a data modifying statement is intercepted by readwritesplit so only read-only statements are executed on replica servers.
As with transaction_replay and transactions that are replayed, if the results
returned by the primary server are not identical to the ones returned by the
replica up to the point where the first data modifying statement was executed, the
connection is closed. If the execution of ROLLBACK statement on the replica fails,
the connection to that replica is closed.
All limitations that apply to transaction_replay also apply tooptimistic_trx.
causal_reads
Type: enum
Mandatory: No
Dynamic: Yes
Values: none, local, global, fast, fast_global, universal, fast_universal
Default: none
Enable causal reads. This feature requires MariaDB 10.2.16 or newer to function.
If a client connection modifies the database and causal_reads is enabled, any
subsequent reads performed on replica servers will be done in a manner that
prevents replication lag from affecting the results.
The following table contains a comparison of the modes. Read the for more information on what a sync consists of and why minimizing the number of them is important.
Mode | Writes visible to | Cost of synchronization
local | Session | Low, one sync per write.
fast | Session | None, no sync at all.
global | Service | Medium, one sync per read.
fast_global | Service | None, no sync at all.
universal | Cluster | High, one sync per read plus a roundtrip to the primary.
fast_universal | Cluster | Low, one roundtrip to the primary.
The fast, fast_global and fast_universal modes should only be used when
low latency is more important than proper distribution of reads. These modes
should only be used when the workload is mostly read-only with only occasional
writes. If used with a mixed or a write-heavy workload, the traffic will end up
being routed almost exclusively to the primary server.
Note: This feature also enables multi-statement execution of SQL in the
protocol. This is equivalent to using allowMultiQueries=true in
or using CLIENT_MULTI_STATEMENTS and CLIENT_MULTI_RESULTS in the
Connector/C. The Implementation of causal_reads section explains why this is
necessary.
The possible values for this parameter are:
none (default)
Read causality is disabled.
local
Writes are locally visible. Writes are guaranteed to be visible only to the connection that does it. Unrelated modifications done by other connections are not visible. This mode improves read scalability at the cost of latency and reduces the overall load placed on the primary server without breaking causality guarantees.
global
Writes are globally visible. If one connection writes a value, all connections to the same service will see it. In general this mode is slower than the local mode due to the extra synchronization it has to do. This guarantees global happens-before ordering of reads when all transactions are inside a single GTID domain. This mode gives similar benefits as the local mode in that it improves read scalability at the cost of latency.
With MaxScale versions 2.5.14 and older, multi-domain use of causal_reads could cause non-causal reads to occur. Starting with MaxScale 2.5.15, this was fixed and all the GTID coordinates are passed alongside all requests which makes multi-domain GTIDs safe to use. However, this does mean that the GTID coordinates will never be reset: if replication is reset and GTID coordinates go "backwards", readwritesplit will not consider these as being newer than the ones already stored. To reset the stored GTID coordinates in readwritesplit, MaxScale must be restarted.
MaxScale 6.4.11 added the new reset-gtid module command to readwritesplit. This allows the global GTID state used by causal_reads=global to be reset without having to restart MaxScale.
fast
This mode is similar to the local mode where it will only affect the connection that does the write but where the local mode waits for a replica server to catch up, the fast mode will only use servers that are known to have replicated the write. This means that if no replica has replicated the write, the primary where the write was done will be used. The value of causal_reads_timeout is ignored in this mode. Currently the replication state is only updated by the mariadbmon monitor whenever the servers are monitored. This means that a smaller monitor_interval provides faster replication state updates and possibly better overall usage of servers.
This mode is the inverse of the local mode in the sense that it improves read latency at the cost of read scalability while still retaining the causality guarantees for reads. This functionality can also be considered an improved version of the functionality that the CCRFilter module provides.
fast_global
This mode is identical to the fast mode except that it uses the global GTID instead of the session local one. This is similar to how the local and global modes differ from each other. The value of causal_reads_timeout is ignored in this mode. Currently the replication state is only updated by the mariadbmon monitor whenever the servers are monitored. This means that a smaller monitor_interval provides faster replication state updates and possibly better overall usage of servers.
universal
The universal mode guarantees that all SELECT statements always see the latest observable transaction state on a database cluster. The basis of this is the @@gtid_current_pos variable which is read from the current primary server before each read. This guarantees that if a transaction was visible at the time the read is received by readwritesplit, the transaction is guaranteed to be complete on the replica server where the read is done.
This mode is the most consistent of all the modes. It provides consistency regardless of where a write originated from but it comes at the cost of increased latency. For every read, a round trip to the current primary server is done. This means that the latency of any given SELECT statement increases by roughly twice the network latency between MaxScale and the database cluster. In addition, an extra SELECT statement is always executed on the primary which places some load on the server.
fast_universal
A mix of fast and universal. This mode guarantees that all SELECT statements always see the latest observable transaction state but unlike the universal mode that waits on the server to catch up, this mode behaves like fast and routes the query to the current primary if no replicas are available that have caught up.
This mode provides the same consistency guarantees of universal with a constant latency overhead of one extra roundtrip. However, this also puts the most load on the primary node as even a moderate write load can cause the GTIDs of replicas to lag too far behind.
SQL like INSERT ... RETURNING that commits a transaction and returns a resultset will only work with causal reads if the connector supports the DEPRECATE_EOF protocol feature. The following table contains a list of MariaDB connectors and whether they support the protocol feature.
Connector | Supports DEPRECATE_EOF | Version
Connector/C | No | 3.4.4
Connector/C++ | No | 1.1.5
Connector/ODBC | No | 3.2.5
Connector/J | Yes | 3.5.2
Connector/Node.js | Yes | 3.4.0
Connector/R2DBC | Yes | 1.3.0
Before MaxScale 2.5.0, the causal_reads parameter was a boolean
parameter. False values translated to none and true values translated to local. The use of boolean parameters is deprecated but still accepted in
MaxScale 2.5.0.
Implementation of causal_reads
This feature is based on the MASTER_GTID_WAIT function and the tracking of
server-side status variables. By tracking the latest GTID that each statement
generates, readwritesplit can then perform a synchronization operation with the
help of the MASTER_GTID_WAIT function.
If the replica has not caught up to the primary within the configured time, as specified by , it will be retried on the primary.
The exception to this rule is the fast mode which does not do any
synchronization at all. This can be done as any reads that would go to
out-of-date servers will be re-routed to the current primary.
Normal SQL
A practical example can be given by the following set of SQL commands executed
with autocommit=1.
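For illustration, a representative pair of statements (table t1 is hypothetical) would be:

```sql
INSERT INTO t1 (id) VALUES (1);
SELECT * FROM t1 WHERE id = 1;
```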
As the statements are not executed inside a transaction, from the load balancer's point of view, the latter statement can be routed to a replica server. The problem with this is that if the value that was inserted on the primary has not yet replicated to the server where the SELECT statement is being performed, it can appear as if the value we just inserted is not there.
By prefixing these types of SELECT statements with a command that guarantees consistent results for the reads, read scalability can be improved without sacrificing consistency.
The set of example SQL above will be translated by MaxScale into the following statements.
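As a sketch, the rewritten SQL is roughly of the following form; the GTID position, the timeout and the name of the helper variable are illustrative and the exact statement MaxScale injects is an internal detail:

```sql
SET @maxscale_secret_variable=(
    SELECT CASE
        WHEN MASTER_GTID_WAIT('0-3000-8', 10) = 0 THEN 1
        ELSE (SELECT 1 FROM INFORMATION_SCHEMA.ENGINES)
    END);
SELECT * FROM t1 WHERE id = 1;
```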
The SET command will synchronize the replica to a certain logical point in the
replication stream (see for more
details). If the synchronization fails, the query will not run and it will be
retried on the server where the transaction was originally done.
Prepared Statements
Binary protocol prepared statements are handled in a different manner. Instead of adding the synchronization SQL into the original SQL query, it is sent as a separate packet before the prepared statement is executed.
We'll use the same example SQL but use a binary protocol prepared statement for the SELECT:
The SQL that MaxScale executes will be the following:
Both the synchronization query and the execution of the prepared statement are sent at the same time. This is done to remove the need to wait for the result of the synchronization query before routing the execution of the prepared statement. This keeps the performance of causal_reads for prepared statements the same as it is for normal SQL queries.
As a result of this, each time the synchronization query times out, the
connection will be killed by the KILL statement and readwritesplit will retry
the query on the primary. This is done to prevent the execution of the prepared
statement that follows the synchronization query from being processed by the
MariaDB server.
It is recommended that the session command history is enabled whenever prepared
statements are used with causal_reads. This allows new connections to be
created whenever a causal read times out.
A failed causal read inside of a read-only transaction started with START TRANSACTION READ ONLY will return the following error:
Older versions of MaxScale attempted to retry the command on the current primary server which would cause the connection to be closed and a warning to be logged.
Limitations of Causal Reads
Starting with MaxScale 24.02.5, the fast modes fast, fast_global and fast_universal work with Galera clusters. In older versions, none of the causal_reads modes worked with Galera. The non-fast modes that rely on the
function still do not work with Galera. This is because Galera does not
implement a mechanism that allows a client to wait for a particular GTID.
If the combination of the original SQL statement and the modifications added to it by readwritesplit exceed the maximum packet size (16777213 bytes), the causal read will not be attempted and a non-causal read is done instead. This applies only to text protocol queries as the binary protocol queries use a different synchronization mechanism.
causal_reads_timeout
Type: duration
Mandatory: No
Dynamic: Yes
Default: 10s
The timeout for the replica synchronization done by causal_reads.
The timeout is specified as documented . If no explicit unit is provided, the value is interpreted as seconds in MaxScale 2.4. In subsequent versions a value without a unit may be rejected. Note that since the granularity of the timeout is seconds, a timeout specified in milliseconds will be rejected, even if the duration is longer than a second.
lazy_connect
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
Lazy connection creation causes connections to backend servers to be opened only when they are needed. This reduces the load that is placed on the backend servers when the client connections are short.
Normally readwritesplit opens as many connections as it can when the session is
first opened. This makes the execution of the first query faster when all
available connections are already created. When lazy_connect is enabled, this
initial connection creation is skipped. If the client executes only read
queries, no connection to the primary is made. If only write queries are made,
only the primary connection is used.
In MaxScale 23.08.2, if a
is received as the first command, the default behavior is to execute it on a
replica. If is enabled, the query is
executed on the primary server, if one is available. In practice this means that
workloads which are mostly reads with infrequent writes should disable master_accept_reads if they also use lazy_connect.
Older versions of MaxScale always tried to execute all session commands on the primary node if one was available.
reuse_prepared_statements
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
Reuse identical prepared statements inside the same client connection. This feature only applies to binary protocol prepared statements.
When this parameter is enabled and the connection prepares an identical prepared statement multiple times, instead of preparing it on the server the existing prepared statement handle is reused. This also means that whenever prepared statements are closed by the client, they will be left open by readwritesplit.
Enabling this feature will increase memory usage of a session. The amount of memory stored per prepared statement is proportional to the length of the prepared SQL statement and the number of parameters the statement has.
The router_diagnostics output for a readwritesplit service contains the
following fields.
queries: Number of queries executed through this service.
route_master: Number of writes routed to primary.
route_slave: Number of reads routed to replicas.
route_all: Number of session commands routed to all servers.
rw_transactions: Number of explicit read-write transactions.
ro_transactions: Number of explicit read-only transactions.
replayed_transactions: Number of replayed transactions.
server_query_statistics: Statistics for each configured and used server consisting of the following fields.
id: Name of the server
total: Total number of queries.
read: Total number of reads.
write: Total number of writes.
avg_sess_duration: Average duration of a client session to this server.
avg_sess_active_pct: Average percentage of time client sessions were active. 0% means connections were opened but never used.
avg_selects_per_session: Average number of selects per session.
The general rule with server ranks is that primary servers will be used before secondary servers. Readwritesplit is an exception to this rule. The following rules govern how readwritesplit behaves with servers that have different ranks.
Sessions will use the current primary server as long as possible. This means that sessions with a secondary primary will not use the main primary as long as the secondary primary is available.
All replica connections will use the same rank as the primary connection. Any stale connections with a different rank than the primary will be discarded.
If no primary connection is available and master_reconnection is enabled, a connection to the best primary is created. If the new primary has a different priority than existing connections have, the connections with a different rank will be discarded.
If no open connections exist, the servers with the best rank will be used.
The readwritesplit router supports routing hints. For a detailed guide on hint syntax and functionality, please read document.
Note: Routing hints will always have the highest priority when a routing decision is made. This means that it is possible to cause inconsistencies in the session state and the actual data in the database by adding routing hints to DDL/DML statements which are then directed to replica servers. Only use routing hints when you are sure that they can cause no harm.
An exception to this rule is transaction_replay: when it is enabled, all
routing hints inside transaction are ignored. This is done to prevent changes
done inside a re-playable transaction from affecting servers outside of the
transaction. This behavior was added in MaxScale 6.1.4. Older versions allowed
routing hints to override the transaction logic.
If a SELECT statement with a maxscale route to slave hint is received
while autocommit is disabled, the query will be routed to a replica server. This
causes some metadata locks to be acquired on the database in question which
will block DDL statements on the server until either the connection is closed
or autocommit is enabled again.
The readwritesplit router implements the following module commands.
reset-gtid
The command resets the global GTID state in the router. It can be used with causal_reads=global to reset the state. This can be useful when the cluster is
reverted to an earlier state and the GTIDs recorded in MaxScale are no longer
valid.
The first and only argument to the command is the router name. For example, to
reset the GTID state of a readwritesplit named My-RW-Router, the following
MaxCtrl command should be used:
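Assuming the standard maxctrl syntax for module commands, the call would be:

```
maxctrl call command readwritesplit reset-gtid My-RW-Router
```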
Examples of the readwritesplit router in use can be found in the folder.
Here is a small explanation which shows what kinds of queries are routed to which type of server.
Routing to primary is important for data consistency and because the majority of writes are written to the binlog and thus become replicated to replicas.
The following operations are routed to primary:
DML statements (INSERT, UPDATE, DELETE etc.)
DDL statements (DROP, CREATE, ALTER etc.)
Stored procedure calls
User-defined function calls
Queries that use sequences (NEXT VALUE FOR seq, NEXTVAL(seq) or seq.nextval)
Statements that use any of the following functions:
LAST_INSERT_ID()
GET_LOCK()
RELEASE_LOCK()
IS_USED_LOCK()
IS_FREE_LOCK()
Statements that use any of the following variables:
@@last_insert_id
@@identity
In addition to these, if the readwritesplit service is configured with the max_replication_lag parameter, and if all replicas suffer from too much
replication lag, then statements will be routed to the primary. (There might be
other similar configuration parameters in the future which limit the number of
statements that will be routed to replicas.)
Transaction Isolation Level Tracking
If either session_track_transaction_info=CHARACTERISTICS or session_track_system_variables=tx_isolation is configured for the MariaDB
server, readwritesplit will track the transaction isolation level and lock the
session to the primary when the isolation level is set to serializable. This
retains the correctness of the isolation level which can otherwise cause
problems.
Starting with MaxScale 23.08, once the transaction isolation level is set to
something other than SERIALIZABLE, the session is no longer locked to the
primary and returns to its normal state. Older versions of MaxScale remain
locked to the primary even if the session goes out of the SERIALIZABLE
isolation level.
The ability to route some statements to replicas is important because it also decreases the load targeted to primary. Moreover, it is possible to have multiple replicas to share the load in contrast to single primary.
Queries which can be routed to replicas must be auto-committed and belong to one of the following groups:
Read-only statements (i.e. SELECT) that only use read-only built-in functions
All statements within an explicit read-only transaction (START TRANSACTION READ ONLY)
SHOW statements except SHOW MASTER STATUS
The list of supported built-in functions can be found .
A third class of statements includes those which modify session data, such as session system variables, user-defined variables, the default database, etc. We call them session commands, and they must be replicated as they affect the future results of read and write operations. They must be executed on all servers that could execute statements on behalf of this client.
Session commands include for example:
Commands that modify the session state (SET, USE, CHANGE USER)
Text protocol PREPARE statements
Binary protocol prepared statements
Other miscellaneous commands (COM_QUIT, COM_PING etc.)
NOTE: if variable assignment is embedded in a write statement it is routed
to primary only. For example, INSERT INTO t1 values(@myvar:=5, 7) would be
routed to primary only.
The router stores all of the executed session commands so that in case of a
replica failure, a replacement replica can be chosen and the session command history
can be repeated on that new replica. This means that the router stores each
executed session command for the duration of the session. Applications that use
long-running sessions might cause MariaDB MaxScale to consume a growing amount
of memory unless the sessions are closed. This can be solved by adjusting the
value of max_sescmd_history.
In the following cases, a query is routed to the same server where the previous query was executed. If no previous target is found, the query is routed to the current primary.
If a query uses the FOUND_ROWS() function, it will be routed to the server
where the last query was executed. This is done with the assumption that a
query with SQL_CALC_FOUND_ROWS was previously executed.
COM_STMT_FETCH_ROWS will always be routed to the same server where the COM_STMT_EXECUTE was routed.
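For example, the FOUND_ROWS() case described above relies on the two statements landing on the same server (table t1 is hypothetical):

```sql
SELECT SQL_CALC_FOUND_ROWS * FROM t1 LIMIT 10;
SELECT FOUND_ROWS();  -- must be answered by the server that ran the previous query
```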
Read queries are routed to the primary server in the following situations:
Query is executed inside an open read-write transaction
Statement includes a stored procedure or an UDF call
If there are multiple statements inside one query e.g.INSERT INTO ... ; SELECT LAST_INSERT_ID();
If a prepared statement targets a temporary table on the primary, the replica servers will fail to execute it. This will cause all replica connections to be closed (MXS-1816).
When transaction replay is enabled, readwritesplit calculates a checksum of the server responses for each transaction. This is used to determine whether a replayed transaction was identical to the original transaction. Starting with MaxScale 23.08, a 128-bit xxHash checksum is stored for each statement that is in the transaction. Older versions of MaxScale used a single 160-bit SHA1 checksum for the whole transaction.
If the results from the replacement server are not identical when the
transaction is replayed, the client connection is closed. This means that any
transaction with a server specific result (e.g. NOW(), @@server_id) cannot
be replayed successfully but it will still be attempted.
If a transaction reads data before updating it, the rows should be locked by
using SELECT ... FOR UPDATE. This will prevent overlapping transactions when
multiple transactions are being replayed that modify the same set of rows.
If the connection to the server where the transaction is being executed is
lost when the final COMMIT is being executed, it is impossible to know
whether the transaction was successfully committed. This means that there
is a possibility for duplicate transaction execution which can result in
data duplication in certain cases.
In MaxScale 23.08, the transaction_replay_safe_commit variable controls
whether a replay is attempted or not whenever a COMMIT is interrupted. By
default the transaction will not be replayed. Older versions of MaxScale always
replayed the transaction.
Data duplication can happen if the transaction consists of the following statement types:
INSERT of rows into a table that does not have an auto-increment primary key
A "blind update" of one or more rows e.g. UPDATE t SET c = c + 1 WHERE id = 123
A "blind delete" e.g. DELETE FROM t LIMIT 100
This is not an exhaustive list and any operations that do not check the row contents before performing the operation on them might face this problem.
In all cases the problem of duplicate transaction execution can be avoided by
including a SELECT ... FOR UPDATE in the statement. This will guarantee that
in the case that the transaction fails when it is being committed, the row is
only modified if it matches the expected contents.
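A sketch of this pattern with a hypothetical accounts table:

```sql
BEGIN;
SELECT balance FROM accounts WHERE id = 123 FOR UPDATE;  -- lock and read the current value
UPDATE accounts SET balance = balance - 10 WHERE id = 123;
COMMIT;
```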
Similarly, a connection loss during COMMIT can also result in transaction
replay failure. This happens due to the same reason as duplicate transaction
execution but the retried transaction will not be committed. This can be
considered a success case as the transaction replay detected that the results of
the two transactions are different. In these cases readwritesplit will abort the
transaction and close the client connection.
Statements that result in an implicit commit do not reset the transaction when
transaction_replay is enabled. This means that if the transaction is replayed,
the transaction will be committed twice due to the implicit commit being
present. The exception to this are the transaction management statements such as BEGIN and START TRANSACTION: they are detected and will cause the
transaction to be correctly reset.
In older versions of MaxScale, if a connection to a server is lost while a
statement is being executed and the result was partially delivered to the
client, readwritesplit would immediately close the session without attempting to
replay the failing statement. Starting with MaxScale 23.08, this limitation no
longer applies if the statement was done inside of a transaction and transaction_replay is enabled
().
If the connection to the server where a transaction is being executed is lost
while a ROLLBACK is being executed, readwritesplit will still attempt to
replay the transaction in the hopes that the real response can be delivered to
the client. However, this does mean that it is possible that a rolled back
transaction which gets replayed ends up with a conflict and is reported as a
replay failure when in reality a rolled back transaction could be safely
ignored.
Limitations in Session State Modifications
Any changes to the session state (e.g. autocommit state, SQL mode) done inside a transaction will remain in effect even if the connection to the server where the transaction is being executed fails. When readwritesplit creates a new connection to a server to replay the transaction, it will first restore the session state by executing all session commands that were executed. This means that if the session state is changed mid-transaction in a way that affects the results, transaction replay will fail.
The following partial transaction demonstrates the problem by using SET SQL_MODE inside a transaction.
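For illustration, a fragment with this structure (table t1 is hypothetical) could be:

```sql
BEGIN;
SELECT "hello";              -- "hello" is a string literal under the default SQL_MODE
SET SQL_MODE='ANSI_QUOTES';  -- session state changed in the middle of the transaction
UPDATE t1 SET f1 = 1;
```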
If this transaction has to be replayed the actual SQL that gets executed is the following.
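With the sketch above, the replayed SQL would be roughly:

```sql
SET SQL_MODE='ANSI_QUOTES';  -- the session state is restored first
BEGIN;
SELECT "hello";              -- now parsed as an identifier and the statement fails
UPDATE t1 SET f1 = 1;
```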
First the session state is restored by executing all commands that changed the state after which the actual transaction is replayed. Due to the fact that the SQL_MODE was changed mid-transaction, one of the queries will now return an error instead of the result we expected leading to a transaction replay failure.
Limitations in Service-to-Service Routing
In a service-to-service configuration (i.e. a service using another service in
its targets list ), if the topmost service starts a transaction, all
lower-level readwritesplit services will also behave as if a transaction is
open. If a connection to a backend database fails during this, it can result in
unnecessary transaction replays which in turn can end up with checksum
conflicts. The recommended approach is to not use any commands inside a
transaction that would be routed to more than one node.
Limitations in multi-statement handling
When a multi-statement query is executed through the readwritesplit router, it will always be routed to the primary. See for more details.
If the multi-statement query creates a temporary table, it will not be detected and reads to this table can be routed to replica servers. To prevent this, always execute the temporary table creation as an individual statement.
Limitations in client session handling
Some of the queries that a client sends are routed to all backends instead of
just to one. These queries include USE <db name> and SET autocommit=0, among
many others. Readwritesplit sends a copy of these queries to each backend server
and forwards the primary's reply to the client. Below is a list of MySQL commands
which are classified as session commands.
Prior to MaxScale 2.3.0, session commands that were 2²⁴ - 1 bytes or longer were not supported and caused the session to be closed.
There is a possibility for misbehavior. If USE mytable is executed in one of
the replicas and fails, it may be due to replication lag rather than the database
not existing. Thus, the same command may produce different result in different
backend servers. The replicas which fail to execute a session command will be
dropped from the active list of replicas for this session to guarantee a
consistent session state across all the servers used by the session. In
addition, the server will not be used again for routing for the duration of the
session.
The above-mentioned behavior for user variables can be partially controlled with
the configuration parameter use_sql_variables_in:
WARNING
If a SELECT query modifies a user variable when the use_sql_variables_in
parameter is set to all, it will not be routed and the client will receive an
error. A log message is written into the log further explaining the reason for
the error. Here is an example use of a SELECT query which modifies a user
variable and how MariaDB MaxScale responds to it.
Allow user variable modification in SELECT queries by setting use_sql_variables_in=master. This will route all queries that use user
variables to the primary.
This page is licensed: CC BY-SA / Gnu FDL
This filter was introduced in MariaDB MaxScale 2.1.
From MaxScale version 2.2.11 onwards, the cache filter is no longer considered experimental. The following changes to the default behaviour have also been made:
The default value of cached_data is now thread_specific (used to be shared).
The default value of selects is now assume_cacheable (used to be verify_cacheable).
The cache filter is a simple cache that is capable of caching the result of SELECTs, so that subsequent identical SELECTs are served directly by MaxScale, without the queries being routed to any server.
By default the cache will be used and populated in the following circumstances:
There is no explicit transaction active, that is, autocommit is used,
there is an explicitly read-only transaction (that is, START TRANSACTION READ ONLY) active, or
there is a transaction active and no statement that modifies the database has been performed.
In practice, the last bullet point basically means that if a transaction has
been started with BEGIN, START TRANSACTION or START TRANSACTION READ WRITE, then the cache will be used and populated until the first UPDATE, INSERT or DELETE statement is encountered.
That is, in default mode the cache effectively causes the system to behave
as if the isolation level were READ COMMITTED, irrespective of what
the isolation level of the backends actually is.
The default behaviour can be altered using the configuration parameter .
By default it is assumed that all SELECT statements are cacheable, which
means that also statements like SELECT LOCALTIME are cached. Please check for how to change the default behaviour.
All of these limitations may be addressed in forthcoming releases.
Resultsets of prepared statements are not cached.
Multi-statements are always sent to the backend and their result is not cached.
The cache is not aware of grants.
The implication is that unless the cache has been explicitly configured as to whom the caching should apply, the presence of the cache may provide a user with access to data they should not have access to.
Please read the section for more detailed information.
However, from 2.5 onwards it is possible to configure the cache to cache the data of each user separately, which effectively means that there can be no unintended sharing. Please see for how to change the default behaviour.
information_schema
When is enabled, SELECTs targeting tables
in information_schema are not cached. The reason is that as the content
of the tables changes as the side-effect of something else, the cache would
not know when to invalidate the cache-entries.
Since MaxScale 2.5, the cache is capable of invalidating entries in the cache when a modification (UPDATE, INSERT or DELETE) that may affect those entries is made.
The cache invalidation works on the table-level, that is, a modification made to a particular table will cause all cache entries that refer to that table to be invalidated, irrespective of whether the modification actually has an impact on the cache entries or not. For instance, suppose the result of the following SELECT has been cached
An insert like
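for instance:

```sql
INSERT INTO tbl VALUES (1, 2);
```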
will cause the cache entry containing the result of that SELECT to be invalidated even if the INSERT actually does not affect it. Please see for how to enable the invalidation.
When invalidation has been enabled MaxScale must be able to completely parse a SELECT statement for its results to be stored in the cache. The reason is that in order to be able to invalidate cache entries, MaxScale must know what tables a SELECT statement depends upon. Consequently, if (and only if) invalidation has been enabled and MaxScale fails to parse a statement, the result of that particular statement will not be cached.
When invalidation has been enabled, MaxScale will also parse all UPDATE, INSERT and DELETE statements, in order to find out what tables are modified. If that parsing fails, MaxScale will by default clear the entire cache. The reason is that unless MaxScale can completely parse the statement it cannot know what tables are modified and hence not what cache entries should be invalidated. Consequently, to prevent stale data from being returned, the entire cache is cleared. The default behaviour can be changed using the configuration parameter .
Note that what threading approach is used has a big impact on the invalidation. Please see for how the threading approach affects the invalidation.
Note also that since the invalidation may not, depending on how the cache has been configured, be visible to all sessions of all users, it is still important to configure a reasonable and TTL.
The invalidation offered by the MaxScale cache can be said to be of best efforts quality. The reason is that ensuring that the cache in all circumstances reflects the state in the actual database would require that the operations involving the cache and the MariaDB server are synchronized, which would cause an unacceptable overhead.
What best efforts means in this context is best illustrated using an example.
Suppose a client executes the statement SELECT * FROM tbl and that the result
is cached. Next time that or any other client executes the same statement, the
result is returned from the cache and the MariaDB server will not be accessed
at all.
If a client now executes the statement INSERT INTO tbl VALUES (...), the
cached value for the SELECT statement above and all other statements that are
dependent upon tbl will be invalidated. That is, the next time someone executes
the statement SELECT * FROM tbl the result will again be fetched from the
MariaDB server and stored to the cache.
However, suppose some client executes the statement SELECT COUNT(*) FROM tbl
at the same time someone else executes the INSERT ... statement. A possible
chain of events is as follows:
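One possible interleaving consistent with the description below is:
1. Client A sends SELECT COUNT(*) FROM tbl and the server computes the result.
2. Client B sends INSERT INTO tbl VALUES (1, 2); the server executes it and the cache invalidates all entries that depend on tbl.
3. The result of the SELECT then arrives at the cache and is stored, after the invalidation has already happened.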
That is, the SELECT is performed in the database server before the INSERT. However, since the timelines are proceeding independently of
each other, the events may be re-ordered as far as the cache is concerned.
That is, the cached value for SELECT COUNT(*) FROM tbl will reflect the
situation before the insert and will thus not be correct.
The stale result will be returned until the value has reached its time-to-live or its invalidation is caused by some update operation.
The cache is simple to add to any existing service. However, some experimentation may be required in order to find the configuration settings that provide the maximum benefit.
Each configured cache filter uses a storage of its own. That is, if there are two services, each configured with a specific cache filter, then, even if queries target the very same servers the cached data will not be shared.
Two services can use the same cache filter, but then either the services should use the very same servers or a completely different set of servers, where the used table names are different. Otherwise there can be unintended sharing.
The cache filter has no mandatory parameters but a range of optional ones.
Note that it is advisable to specify max_size to prevent the cache from
using up all memory there is, in case there is very little overlap among the
queries.
storage
Type: string
Mandatory: No
Dynamic: No
Default: storage_inmemory
The name of the module that provides the storage for the cache. That
module will be loaded and provided with the value of storage_options as
argument. For instance:
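A minimal sketch (server address is illustrative; from 23.02 onwards the nested-parameter form shown later in this document is preferred over storage_options):

```
[Cache]
type=filter
module=cache
storage=storage_memcached
storage_options="server=192.168.1.31"
```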
See for what storage modules are available.
storage_options
Type: string
Mandatory: No
Dynamic: No
Default:
NOTE Deprecated in 23.02.
A string that is provided verbatim to the storage module specified in storage,
when the module is loaded. Note that the needed arguments and their format depend
upon the specific module.
From 23.02 onwards, the storage module configuration should be provided using nested parameters.
hard_ttl
Type:
Mandatory: No
Dynamic: No
Default: 0s (no limit)
Hard time to live; the maximum amount of time the cached result is used before it is discarded and the result is fetched from the backend (and cached). See also soft_ttl.
soft_ttl
Type:
Mandatory: No
Dynamic: No
Default: 0s (no limit)
Soft time to live; the amount of time - in seconds - the cached result is
used before it is refreshed from the server. When soft_ttl has passed, the
result will be refreshed when the first client requests the value.
However, as long as hard_ttl has not passed, all other clients
requesting the same value will use the result from the cache while it is being
fetched from the backend. That is, as long as soft_ttl but not hard_ttl
has passed, even if several clients request the same value at the same time,
there will be just one request to the backend.
If the value of soft_ttl is larger than hard_ttl it will be adjusted
down to the same value.
max_resultset_rows
Type: count
Mandatory: No
Dynamic: No
Default: 0 (no limit)
Specifies the maximum number of rows a resultset can have in order to be stored in the cache. A resultset larger than this will not be stored.
max_resultset_size
Type:
Mandatory: No
Dynamic: No
Default: 0 (no limit)
Specifies the maximum size of a resultset for it to be stored in the cache. A resultset larger than this will not be stored. The size can be specified using the usual size suffixes (e.g. 128Ki or 100Mi).
Note that the value of max_resultset_size should not be larger than the
value of max_size.
max_count
Type: count
Mandatory: No
Dynamic: No
Default: 0 (no limit)
The maximum number of items the cache may contain. If the limit has been reached and a new item should be stored, then an older item will be evicted.
Note that if cached_data is thread_specific then this limit will be
applied to each cache separately. That is, if a thread specific cache
is used, then the total number of cached items is #threads * the value
of max_count.
max_size
Type:
Mandatory: No
Dynamic: No
Default: 0 (no limit)
The maximum size the cache may occupy. If the limit has been reached and a new item should be stored, then some older item(s) will be evicted to make space.
Note that if cached_data is thread_specific then this limit will be
applied to each cache separately. That is, if a thread specific cache
is used, then the total size is #threads * the value of max_size.
rules
Type: path
Mandatory: No
Dynamic: Yes
Default: "" (no rules)
Specifies the path of the file where the caching rules are stored. A relative path is interpreted relative to the data directory of MariaDB MaxScale.
Note that the rules will be reloaded, and applied if different, every time a dynamic configuration change is made. Thus, to cause a reloading of the rules, alter the rules parameter to the same value it has.
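For example, assuming the filter is named MyCache (name and path illustrative), re-setting rules to its current value forces a reload:

```
maxctrl alter filter MyCache rules='/path/to/rules-file'
```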
cached_data
Type:
Mandatory: No
Dynamic: No
Values: shared, thread_specific
Default: thread_specific
An enumeration option specifying how data is shared between threads. The allowed values are:
shared: The cached data is shared between threads. On the one hand
it implies that there will be synchronization between threads, on
the other hand that all threads will use data fetched by any thread.
thread_specific: The cached data is specific to a thread. On the
one hand it implies that no synchronization is needed between threads,
on the other hand that the very same data may be fetched and stored
multiple times.
Default is thread_specific. See max_count and max_size for the implications of changing this setting to shared.
selects
Type:
Mandatory: No
Dynamic: Yes
Values: assume_cacheable, verify_cacheable
Default: assume_cacheable
An enumeration option specifying what approach the cache should take with
respect to SELECT statements. The allowed values are:
assume_cacheable: The cache can assume that all SELECT statements,
without exceptions, are cacheable.
verify_cacheable: The cache can not assume that all SELECT
statements are cacheable, but must verify that.
Default is assume_cacheable. In this case, all SELECT statements are
assumed to be cacheable and will be parsed only if some specific rule
requires that.
If verify_cacheable is specified, then all SELECT statements will be
parsed and only those that are safe for caching - e.g. do not call any
non-cacheable functions or access any non-cacheable variables - will be
subject to caching.
If verify_cacheable has been specified, the cache will not be used in
the following circumstances:
The SELECT uses any of the following functions: BENCHMARK, CONNECTION_ID, CONVERT_TZ, CURDATE, CURRENT_DATE, CURRENT_TIMESTAMP, CURTIME, DATABASE, ENCRYPT, FOUND_ROWS, GET_LOCK, IS_FREE_LOCK, IS_USED_LOCK, LAST_INSERT_ID, LOAD_FILE, LOCALTIME, LOCALTIMESTAMP, MASTER_POS_WAIT, NOW, RAND, RELEASE_LOCK, SESSION_USER, SLEEP, SYSDATE, SYSTEM_USER, UNIX_TIMESTAMP, USER, UUID, UUID_SHORT.
The SELECT accesses any of the following fields: CURRENT_DATE, CURRENT_TIMESTAMP, LOCALTIME, LOCALTIMESTAMP.
The SELECT uses system or user variables.
Note that parsing all SELECT statements carries a performance
cost. Please read for more details.
cache_in_transactions
Type:
Mandatory: No
Dynamic: No
Values: never, read_only_transactions, all_transactions
Default: all_transactions
An enumeration option specifying how the cache should behave when there are active transactions:
never: When there is an active transaction, no data will be returned
from the cache, but all requests will always be sent to the backend.
The cache will be populated inside explicitly read-only transactions.
Inside transactions that are not explicitly read-only, the cache will
be populated until the first non-SELECT statement.
read_only_transactions: The cache will be used and populated inside
explicitly read-only transactions. Inside transactions that are not
explicitly read-only, the cache will be populated, but not used
until the first non-SELECT statement.
all_transactions: The cache will be used and populated inside
explicitly read-only transactions. Inside transactions that are not
explicitly read-only, the cache will be used and populated until the
first non-SELECT statement.
Default is all_transactions.
The values read_only_transactions and all_transactions have roughly the
same effect as changing the isolation level of the backend to read_committed.
debug
Type: number
Mandatory: No
Dynamic: Yes
Default: 0
An integer value, using which the level of debug logging made by the cache can be controlled. The value is actually a bitfield with different bits denoting different logging.
0 (0b00000) No logging is made.
1 (0b00001) A matching rule is logged.
2 (0b00010) A non-matching rule is logged.
4 (0b00100) A decision to use data from the cache is logged.
8 (0b01000) A decision not to use data from the cache is logged.
16 (0b10000) Higher level decisions are logged.
Default is 0. To log everything, give debug a value of 31.
enabled
Type:
Mandatory: No
Dynamic: No
Default: true
Specifies whether the cache is initially enabled or disabled.
The value affects the initial state of the MaxScale user variables using which the behaviour of the cache can be modified at runtime. Please see for details.
invalidate
Type:
Mandatory: No
Dynamic: No
Values: never, current
Default: never
An enumeration option specifying how the cache should invalidate cache entries. The allowed values are:
never: No invalidation is performed. This is the default.
current: When a modification is made, entries in the cache used by the current session are invalidated. Other sessions that use the same cache will also be affected, but sessions that use another cache will not.
The effect of current depends upon the value of cached_data. If the value
is shared, that is, all threads share the same cache, then the effect of an
invalidation is immediately visible to all sessions, as there is just one cache.
However, if the value is thread_specific, then an invalidation will affect only
the cache that the session happens to be using.
If it is important and sufficient that an application immediately sees a change
that it itself has caused, then a combination of invalidate=current
and cached_data=thread_specific can be used.
If it is important that an application immediately sees all changes, irrespective
of who has caused them, then a combination of invalidate=current
and cached_data=shared must be used.
clear_cache_on_parse_errors
Type:
Mandatory: No
Dynamic: No
Default: true
This boolean option specifies how the cache should behave in case of parsing errors when invalidation has been enabled.
true: If the cache fails to parse an UPDATE/INSERT/DELETE
statement then all cached data will be cleared.
false: A failure to parse an UPDATE/INSERT/DELETE statement
is ignored and no invalidation will take place due that statement.
The default value is true.
Changing the value to false may mean that stale data is returned from
the cache, if an UPDATE/INSERT/DELETE cannot be parsed and the statement
affects entries in the cache.
users
Type:
Mandatory: No
Dynamic: No
Values: mixed, isolated
Default: mixed
An enumeration option specifying how the cache should cache data for different users. The allowed values are:
mixed: The data of different users is stored in the same cache. This is the default and may cause that a user can access data he should not have access to.
isolated: Each user has a unique cache and there can be no unintended sharing.
Note that if isolated has been specified, then each user will
conceptually have a cache of his own, which is populated
independently from each other. That is, if two users make the
same query, then the data will be fetched twice and also stored
twice. So, an isolated cache will in general use more memory and
cause more traffic to the backend compared to a mixed cache.
timeout
Type:
Mandatory: No
Dynamic: No
Default: 5s
The timeout used when performing operations to distributed storages such as redis or memcached.
The cache filter can be configured at runtime by executing SQL commands. If there is more than one cache filter in a service, only the first cache filter will be able to process the variables. The remaining filters will not see them and thus configuring them at runtime is not possible.
@maxscale.cache.populate
Using the variable @maxscale.cache.populate it is possible to specify at
runtime whether the cache should be populated or not. Its initial value is
the value of the configuration parameter enabled. That is, by default the
value is true.
The purpose of this variable is to make it possible for an application to decide, statement by statement, whether the cache should be populated.
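For instance, a typical usage sketch (table and column names are illustrative):

```
SET @maxscale.cache.populate=TRUE;
SELECT a, b FROM tbl;
SET @maxscale.cache.populate=FALSE;
SELECT a, b FROM tbl;
```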
In the example above, the first SELECT will always be sent to the
server and the result will be cached, provided the actual cache rules
specifies that it should be. The second SELECT may be served from the
cache, depending on the value of @maxscale.cache.use (and the cache
rules).
The value of @maxscale.cache.populate can be queried
but only after it has been explicitly set once.
@maxscale.cache.use
Using the variable @maxscale.cache.use it is possible to specify at
runtime whether the cache should be used or not. Its initial value is
the value of the configuration parameter enabled. That is, by default the
value is true.
The purpose of this variable is to make it possible for an application to decide, statement by statement, whether the cache should be used.
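For instance (table and column names are illustrative):

```
SET @maxscale.cache.use=TRUE;
SELECT a, b FROM tbl;
SET @maxscale.cache.use=FALSE;
SELECT a, b FROM tbl;
```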
The first SELECT will be served from the cache, provided that the rules
specify that the statement should be cached, the cache indeed contains
the result and the data is not stale (as specified by the TTL).
If the data is stale, the SELECT will be sent to the server and
the cache entry will be updated, irrespective of the value of @maxscale.cache.populate.
If @maxscale.cache.use is true but the result is not found in the
cache, and the result is subsequently fetched from the server, the
result will not be added to the cache, unless @maxscale.cache.populate is also true.
The value of @maxscale.cache.use can be queried
but only after it has explicitly been set once.
@maxscale.cache.soft_ttl
Using the variable @maxscale.cache.soft_ttl it is possible at runtime
to specify in seconds what soft ttl should be applied. Its initial
value is the value of the configuration parameter soft_ttl. That is,
by default the value is 0.
The purpose of this variable is to make it possible for an application to decide, statement by statement, what soft ttl should be applied.
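For instance (table names are illustrative):

```
SET @maxscale.cache.soft_ttl=600;
SELECT a, b FROM unimportant;
SET @maxscale.cache.soft_ttl=60;
SELECT c, d FROM important;
```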
When data is SELECTed from the unimportant table unimportant, the data
will be returned from the cache provided it is no older than 10 minutes,
but when data is SELECTed from the important table important, the
data will be returned from the cache provided it is no older than 1 minute.
Note that @maxscale.cache.hard_ttl overrules @maxscale.cache.soft_ttl
in the sense that if the former is less than the latter, then soft ttl
will, when used, be adjusted down to the value of hard ttl.
The value of @maxscale.cache.soft_ttl can be queried
but only after it has explicitly been set once.
@maxscale.cache.hard_ttl
Using the variable @maxscale.cache.hard_ttl it is possible at runtime
to specify in seconds what hard ttl should be applied. Its initial
value is the value of the configuration parameter hard_ttl. That is,
by default the value is 0.
The purpose of this variable is to make it possible for an application to decide, statement by statement, what hard ttl should be applied.
Note that as @maxscale.cache.hard_ttl overrules @maxscale.cache.soft_ttl,
it is important to ensure that the former is at least as large as the latter
and for best overall performance that it is larger.
The value of @maxscale.cache.hard_ttl can be queried
but only after it has explicitly been set once.
Client Driven Caching
With @maxscale.cache.populate and @maxscale.cache.use it is possible
to make the caching completely client driven.
Provide no rules file, which means that all SELECT statements are
subject to caching and that all users receive data from the cache. Set
the startup mode of the cache to disabled.
Now, in order to mark statements that should be cached, set @maxscale.cache.populate to true, and perform those SELECTs.
Note that those SELECTs must return something in order for the
statement to be marked for caching.
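A sketch of the marking phase (table names are illustrative):

```
SET @maxscale.cache.populate=TRUE;
SELECT a, b FROM tbl1;
SELECT c, d FROM tbl2;
SELECT e, f FROM tbl3;
SET @maxscale.cache.populate=FALSE;
```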
After this, the value of @maxscale.cache.use will decide whether
or not the cache is considered.
With @maxscale.cache.use being true, the cache is considered
and the result returned from there, if not stale. If it is stale,
the result is fetched from the server and the cached entry is updated.
By setting a very long TTL it is possible to prevent the cache from ever considering an entry to be stale and instead manually cause the cache to be updated when needed.
What caching approach is used and how different users are treated has a significant impact on the behaviour of the cache. The following table summarizes the implications of the different combinations.

| cached_data \ users | mixed | isolated |
| --- | --- | --- |
| thread_specific | No thread contention. Data/work duplicated across threads. May cause unintended sharing. | No thread contention. Data/work duplicated across threads and users. No unintended sharing. Requires the most amount of memory. |
| shared | Thread contention under high load. No duplicated data/work. May cause unintended sharing. Requires the least amount of memory. | Thread contention under high load. Data/work duplicated across users. No unintended sharing. |
Invalidation takes place only in the current cache, so how visible the invalidation is depends upon the configuration value of cached_data.
cached_data=thread_specific
The invalidation is visible only to the sessions that are handled by the same worker thread where the invalidation occurred. Sessions of the same or other users that are handled by different worker threads will not see the new value before the TTL causes the value to be refreshed.
cached_data=shared
The invalidation is immediately visible to all sessions of all users.
The caching rules are expressed as a JSON object or as an array of JSON objects.
There are two decisions to be made regarding the caching; in what circumstances should data be stored to the cache and in what circumstances should the data in the cache be used.
Expressed in JSON this looks as follows
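A sketch of the overall structure:

```
{
    store: [ ... ],
    use: [ ... ]
}
```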
or, in case an array is used, as
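A sketch of the array form:

```
[
    {
        store: [ ... ],
        use: [ ... ]
    },
    { ... }
]
```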
The store field specifies in what circumstances data should be stored to
the cache and the use field specifies in what circumstances the data in
the cache should be used. In both cases, the value is a JSON array containing
objects.
If an array of rule objects is specified, then, when looking for a rule that
matches, the store field of each object is evaluated in sequential order
until a match is found. Then, the use field of that object is used when
deciding whether data in the cache should be used.
By default, if no rules file has been provided or if the store field is
missing from the object, the results of all queries will be stored to the
cache, subject to max_resultset_rows and max_resultset_size cache filter
parameters.
By providing a store field in the JSON object, the decision whether to
store the result of a particular query to the cache can be controlled in
a more detailed manner. The decision to cache the results of a query can
depend upon
the database,
the table,
the column, or
the query itself.
Each entry in the store array is an object containing three fields,
where,
the attribute can be database, table, column or query,
the op can be =, !=, like or unlike, and
the value a string.
If op is = or != then value is used as a string; if it is like
or unlike, then value is interpreted as a pcre2 regular expression.
Note though that if attribute is database, table or column, then
the string is interpreted as a name, where a dot . denotes qualification
or scoping.
The objects in the store array are processed in order. If the result
of a comparison is true, no further processing will be made and the
result of the query in question will be stored to the cache.
If the result of the comparison is false, then the next object is processed. The process continues until the array is exhausted. If there is no match, then the result of the query is not stored to the cache.
Note that as the query itself is used as the key, although the following queries
and
target the same table and produce the same results, they will be cached separately. The same holds for queries like
and
as well. Although they conceptually are identical, there will be two cache entries.
Note that if a column has been specified in a rule, then a statement will match irrespective of where that particular column appears. For instance, if a rule specifies that the result of statements referring to the column a should be cached, then the following statement will match
and so will
Qualified Names
When using = or != in the rule object in conjunction with database, table and column, the provided string is interpreted as a name, that is,
dot (.) denotes qualification or scope.
In practice that means that if attribute is database then value may
not contain a dot, if attribute is table then value may contain one
dot, used for separating the database and table names respectively, and
if attribute is column then value may contain one or two dots, used
for separating table and column names, or database, table and column names.
Note that if a qualified name is used as a value, then all parts of the
name must be available for a match. Currently MariaDB MaxScale may not
always be capable of deducing in what table a particular column is. If
that is the case, then a value like tbl.field may not necessarily
be a match even if the field is field and the table actually is tbl.
Implication of the default database
If a rule concerns the database, the default database is considered only if the statement does not refer to any specific database.
Regexp Matching
The string used for matching the regular expression contains as much information as there is available. For instance, in a situation like
the string matched against the regular expression will be somedb.tbl.fld.
Examples
Cache all queries targeting a particular database.
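A sketch of such a rule (database name illustrative):

```
{
    "store": [
        {
            "attribute": "database",
            "op": "=",
            "value": "db1"
        }
    ]
}
```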
Cache all queries not targeting a particular table
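A sketch (table name illustrative):

```
{
    "store": [
        {
            "attribute": "table",
            "op": "!=",
            "value": "tbl1"
        }
    ]
}
```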
That will exclude queries targeting table tbl1 irrespective of which database it is in. To exclude a table in a particular database, specify the table name using a qualified name.
Cache all queries containing a WHERE clause
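A sketch of such a rule:

```
{
    "store": [
        {
            "attribute": "query",
            "op": "like",
            "value": ".*WHERE.*"
        }
    ]
}
```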
Note that this will actually cause all queries that contain WHERE anywhere, to be cached.
By default, if no rules file has been provided or if the use field is
missing from the object, all users may be returned data from the cache.
By providing a use field in the JSON object, the decision whether to use
data from the cache can be controlled in a more detailed manner. The decision
to use data from the cache can depend upon
the user.
Each entry in the use array is an object containing three fields,
where,
the attribute can be user,
the op can be =, !=, like or unlike, and
the value a string.
If op is = or != then value is interpreted as a MariaDB account string, that is, % means a wildcard, but if op is like or unlike it is simply assumed that value is a pcre2 regular expression.
For instance, the following are equivalent:
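A sketch of two equivalent rules (user name illustrative):

```
{
    "attribute": "user",
    "op": "=",
    "value": "'bob'@'%'"
}

{
    "attribute": "user",
    "op": "like",
    "value": "bob@.*"
}
```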
Note that if op is = or != then the usual assumptions apply,
that is, a value of bob is equivalent with 'bob'@'%'. If like
or unlike is used, then no assumptions apply, but the string is
used verbatim as a regular expression.
The objects in the use array are processed in order. If the result
of a comparison is true, no further processing will be made and the
data in the cache will be used, subject to the value of ttl.
If the result of the comparison is false, then the next object is processed. The process continues until the array is exhausted. If there is no match, then data in the cache will not be used.
Note that use is relevant only if the query is subject to caching,
that is, if all queries are cached or if a query matches a particular
rule in the store array.
Examples
Use data from the cache for all users except admin (actually 'admin'@'%'),
regardless of what host the admin user comes from.
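A sketch of such a rule:

```
{
    "use": [
        {
            "attribute": "user",
            "op": "!=",
            "value": "admin"
        }
    ]
}
```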
As the cache is not aware of grants, unless the cache has been explicitly configured as to whom the caching should apply, the presence of the cache may provide a user with access to data he should not have access to.
Note that the following applies only if users=mixed has been configured.
If users=isolated has been configured, then there can never be any
unintended sharing between users.
Suppose there is a table access that the user alice has access to,
but the user bob does not. If bob tries to access the table, he will
get an error as reply:
If we now setup caching for the table, using the simplest possible rules file, bob will get access to data from the table, provided he executes a select identical with one alice has executed.
For instance, suppose the rules look as follows:
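A sketch of such rules (table name matching the example):

```
{
    "store": [
        {
            "attribute": "table",
            "op": "=",
            "value": "access"
        }
    ]
}
```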
If alice now queries the table, she will get the result, which also will be cached:
If bob now executes the very same query, and the result is still in the cache, it will be returned to him.
That can be prevented, by explicitly declaring in the rules that the caching should be applied to alice only.
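A sketch of such rules:

```
{
    "store": [
        {
            "attribute": "table",
            "op": "=",
            "value": "access"
        }
    ],
    "use": [
        {
            "attribute": "user",
            "op": "=",
            "value": "'alice'@'%'"
        }
    ]
}
```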
With these rules in place, bob is again denied access, since queries
targeting the table access will in his case not be served from the cache.
There are two types of storages that can be used; local and shared.
The only local storage implementation is storage_inmemory that simply
stores the cache values in memory. The storage is not persistent and is
destroyed when MaxScale terminates. Since the storage exists in the MaxScale
process, it is very fast and provides almost always a performance benefit.
Currently there are two shared storages, storage_memcached and storage_redis, implemented on top of memcached and Redis, respectively.
The shared storages are accessed across the network and consequently it is not self-evident that their use will provide any performance benefit. Irrespective of whether the data is fetched from the cache or from the server, there will be a network hop, and often that network hop is, as far as performance goes, what costs the most.
The presence of a shared cache may provide a performance benefit:
if the network between MaxScale and the storage server (memcached or Redis) is faster than the network between MaxScale and the database server,
if the used SELECT statements are heavy (that is, take a significant amount of time) to process for the database server, or
if the presence of the cache reduces the overall load of an otherwise overloaded database server.
As a general rule a shared storage should not be used without first assessing its value using a realistic workload.
storage_inmemory
This simple storage module uses the standard memory allocator for storing the cached data.
This storage module takes no arguments.
storage_memcached
This storage module uses memcached for storing the cached data.
Multiple MaxScale instances can share the same memcached server and items cached by one MaxScale instance will be used by the other. Note that all MaxScale instances should have exactly the same configuration, as otherwise there can be unintended sharing.
storage_memcached has the following parameters:
server
Type: The Memcached server address specified as host[:port]
Mandatory: Yes
Dynamic: No
If no port is provided, then the default port 11211 will be used.
max_value_size
Type:
Mandatory: No
Dynamic: No
Default: 1Mi
By default, the maximum size of a value stored to memcached is 1MiB, but that can be configured to something else, in which case this parameter should be set accordingly.
The value of max_value_size will be used for capping max_resultset_size,
that is, if memcached has been configured to allow larger values than 1MiB
but max_value_size has not been set accordingly, only resultsets up to 1MiB
in size will be cached.
Example
From MaxScale 23.02 onwards, the storage configuration should be provided as nested parameters.
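A sketch (server address and size are illustrative):

```
[Cache-Filter]
type=filter
module=cache
storage=storage_memcached
storage_memcached.server=192.168.1.31
storage_memcached.max_value_size=10M
```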
Although deprecated in 23.02, the configuration can also be provided
using storage_options:
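For example (same illustrative values):

```
storage_options="server=192.168.1.31,max_value_size=10M"
```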
Limitations
Invalidation is not supported.
Configuration values given to max_size and max_count are ignored.
Security
Neither the data in the memcached server nor the traffic between MaxScale and the memcached server is encrypted. Consequently, anybody with access to the memcached server or to the network have access to the cached data.
storage_redis
This storage module uses Redis for storing the cached data.
Note that Redis should be configured with no idle timeout or with a timeout that is very large. Otherwise MaxScale may have to repeatedly connect to Redis, which will hurt both the functionality and the performance.
Multiple MaxScale instances can share the same redis server and items cached by one MaxScale instance will be used by the other. Note that all MaxScale instances should have exactly the same configuration, as otherwise there can be unintended sharing.
If storage_redis cannot connect to the Redis server, caching will silently be disabled and a new connection attempt will be made after a timeout interval.
If a timeout error occurs during an operation, reconnecting will be attempted
after a delay, which will be an increasing multiple of timeout. For example,
if timeout is the default 5 seconds, then reconnection attempts will first
be made after 10 seconds, then after 15 seconds, then 20 and so on. However,
once 60 seconds have been reached, the delay will no longer be increased but
the delay will stay at one minute. Note that each time a reconnection attempt
is made, unless the reason for the timeout has disappeared, the client will be
stalled for timeout seconds.
storage_redis has the following parameters:
server
Type: The Redis server address specified as host[:port]
Mandatory: Yes
Dynamic: No
If no port is provided, then the default port 6379 will be used.
username
Type: string
Mandatory: No
Dynamic: No
Default: ""
Please see for more information.
password
Type: string
Mandatory: No
Dynamic: No
Default: ""
Please see for more information.
ssl
Type:
Mandatory: No
Dynamic: No
Default: false
Please see for more information.
ssl_cert
Type: Path to existing readable file.
Mandatory: No
Dynamic: No
Default: ""
The SSL client certificate that MaxScale should use with the Redis
server. The certificate must match the key defined in ssl_key.
Please see for more information.
ssl_key
Type: Path to existing readable file.
Mandatory: No
Dynamic: No
Default: ""
The SSL client private key MaxScale should use with the Redis server.
Please see for more information.
ssl_ca
Type: Path to existing readable file.
Mandatory: No
Dynamic: No
Default: ""
The Certificate Authority (CA) certificate for the CA that signed the
certificate specified with ssl_cert.
Please see for more information.
Authentication
If password is provided, MaxScale will authenticate against Redis when a connection
has been created. The authentication is performed using the AUTH command, with only the password as argument,
if no username was provided in the configuration, or username and password as
arguments, if both were.
Note that if the authentication has been specified in the Redis configuration file using requirepass, then only the password should be provided.
If the Redis server version is 6 or higher and the Redis ACL system is used,
then both username and password must be provided.
SSL
If ssl_key, ssl_cert and ssl_ca are provided, then SSL/TLS will be used
in the communication with the Redis server, if ssl is set to true.
Note that the SSL/TLS support is only available in Redis from version 6 onwards and that the support is not built into Redis by default, but has to be specifically enabled at compile time.
Example
From MaxScale 23.02 onwards, the storage configuration should be provided as nested parameters.
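A sketch (server address and credentials are illustrative):

```
[Cache-Filter]
type=filter
module=cache
storage=storage_redis
storage_redis.server=192.168.1.31
storage_redis.username=hello
storage_redis.password=world
```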
Although deprecated in 23.02, the configuration can also be provided
using storage_options:
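For example (same illustrative values):

```
storage_options="server=192.168.1.31,username=hello,password=world"
```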
Limitations
There is no distinction between soft and hard ttl, but only hard ttl is used.
Configuration values given to max_size and max_count are ignored.
Invalidation
storage_redis supports invalidation, but the caveats documented above are of greater significance, since the communication between the cache and the cache storage is also asynchronous and takes place over the network.
NOTE If invalidation is turned on after caching has been used (in non-invalidation mode), redis must be flushed as otherwise there will be entries in the cache that will not be affected by the invalidation.
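For example, assuming a default local Redis instance, flushing can be done with:

```
redis-cli flushall
```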
Security
The data in the redis server is not encrypted. Consequently, anybody with access to the redis server has access to the cached data.
Unless has been enabled, anybody with access to the network has access to the cached data.
In the following we define a cache MyCache that uses the cache storage module storage_inmemory and whose soft ttl is 30 seconds and whose hard ttl is 45 seconds. The cached data is shared between all threads and the maximum size of the cached data is 50 mebibytes. The rules for the cache are in the file cache_rules.json.
cache_rules.json
The rules specify that the data of the table sbtest should be cached.
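A sketch of the configuration and the rules file (service details abbreviated):

```
[MyCache]
type=filter
module=cache
storage=storage_inmemory
soft_ttl=30
hard_ttl=45
cached_data=shared
max_size=50Mi
rules=cache_rules.json

[MyService]
type=service
...
filters=MyCache
```

and the rules file cache_rules.json:

```
{
    "store": [
        {
            "attribute": "table",
            "op": "=",
            "value": "sbtest"
        }
    ]
}
```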
When the cache filter was introduced, the most significant factor affecting
the performance of the cache was whether the statements needed to be parsed.
Initially, all statements were parsed in order to exclude SELECT statements
that use non-cacheable functions, access non-cacheable variables or refer
to system or user variables. Later, the default value of the selects parameter
was changed to assume_cacheable, to maximize the default performance.
With the default configuration, the cache itself will not cause the statements
to be parsed. However, even with assume_cacheable configured, a rule referring
specifically to a database, table or column will still cause the
statement to be parsed.
For instance, a simple rule like
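A sketch (database name illustrative):

```
{
    "store": [
        {
            "attribute": "database",
            "op": "=",
            "value": "db1"
        }
    ]
}
```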
cannot be fulfilled without parsing the statement.
If the rule is instead expressed using a regular expression
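A sketch of an equivalent regular-expression rule:

```
{
    "store": [
        {
            "attribute": "query",
            "op": "like",
            "value": "FROM db1\\..*"
        }
    ]
}
```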
then the statement will not be parsed.
However, when the query classifier cache was introduced, the parsing cost was significantly reduced and currently the cost for parsing and regular expression matching is roughly the same.
In the following is a table with numbers giving a rough picture of the relative cost of different approaches.
In the table, regexp match means that the cacheable statements were picked out using a rule like
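A sketch of such a rule (the pattern is chosen so that it never excludes anything):

```
{
    "attribute": "query",
    "op": "unlike",
    "value": "FROM nomatch"
}
```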
while exact match means that the cacheable statements were picked out using a rule like
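A sketch of such a rule:

```
{
    "attribute": "database",
    "op": "!=",
    "value": "nomatch"
}
```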
The exact match rule requires all statements to be parsed.
As the purpose of the test is to illustrate the overhead of different approaches, the rules were formulated so that all SELECT statements would match.

| selects | rules | qps |
| --- | --- | --- |
| assume_cacheable | none | 100 |
| assume_cacheable | regexp match | 83 |
| assume_cacheable | exact match | 83 |
| verify_cacheable | none | 80 |
| verify_cacheable | regexp match | 80 |
| verify_cacheable | exact match | 80 |

Note that these figures were obtained by running sysbench, MaxScale and the server on the same computer, so they are only indicative.
For comparison, without caching, the qps is 33.
As can be seen, due to the query classifier cache there is no difference between exact and regex based matching.
For maximum performance:
Arrange the situation so that the default selects=assume_cacheable
can be used, and use no rules.
Otherwise it is mostly a personal preference whether exact or regex based rules are used. However, one should always test with real data and real queries before choosing one over the other.
This page is licensed: CC BY-SA / Gnu FDL
maxctrl set server <server> maintenance --forceSET @rownum := 0;
SELECT @rownum := @rownum + 1 AS rownum, user, host FROM mysql.user;SET @myid := 0;
INSERT INTO test.t1 VALUES (@myid := @myid + 1);
SELECT @myid; -- Might return 1 or 0slave_selection_criteria=<criteria># Use the primary for reads
master_accept_reads=true# Enable strict multi-statement mode
strict_multi_stmt=trueINSERT INTO test.t1 (id) VALUES (1);
SELECT * FROM test.t1 WHERE id = 1;INSERT INTO test.t1 (id) VALUES (1);
-- These are executed as one multi-query
SET @maxscale_secret_variable=(
SELECT CASE
WHEN MASTER_GTID_WAIT('0-3000-8', 10) = 0 THEN 1
ELSE (SELECT 1 FROM INFORMATION_SCHEMA.ENGINES)
END); SELECT * FROM test.t1 WHERE id = 1;COM_QUERY: INSERT INTO test.t1 (id) VALUES (1);
COM_STMT_PREPARE: SELECT * FROM test.t1 WHERE id = ?;
COM_STMT_EXECUTE: ? = 123COM_QUERY: INSERT INTO test.t1 (id) VALUES (1);
COM_STMT_PREPARE: SELECT * FROM test.t1 WHERE id = ?;
COM_QUERY: IF (MASTER_GTID_WAIT('0-3000-8', 10) <> 0) THEN KILL (SELECT CONNECTION_ID()); END IF
COM_STMT_EXECUTE: ? = 123Error: 1792
SQLSTATE: 25006
Message: Causal read timed out while in a read-only transaction, cannot retry command.maxctrl call command readwritesplit reset-gtid My-RW-RouterSET SQL_MODE=''; -- A session command
BEGIN;
SELECT "hello world"; -- Returns the string "hello world"
SET SQL_MODE='ANSI_QUOTES'; -- A session command
SELECT 'hello world'; -- Returns the string "hello world"SET SQL_MODE=''; -- Replayed session command
SET SQL_MODE='ANSI_QUOTES'; -- Replayed session command
BEGIN;
SELECT "hello world"; -- Returns an error
SELECT 'hello world'; -- Returns the string "hello world"COM_INIT_DB (USE <db name> creates this)
COM_CHANGE_USER
COM_STMT_CLOSE
COM_STMT_SEND_LONG_DATA
COM_STMT_RESET
COM_STMT_PREPARE
COM_QUIT (no response, session is closed)
COM_REFRESH
COM_DEBUG
COM_PING
SQLCOM_CHANGE_DB (USE ... statements)
SQLCOM_DEALLOCATE_PREPARE
SQLCOM_PREPARE
SQLCOM_SET_OPTION
SELECT ..INTO variable|OUTFILE|DUMPFILE
SET autocommit=1|0use_sql_variables_in=[master|all] (default: all)MySQL [(none)]> set @id=1;
Query OK, 0 rows affected (0.00 sec)
MySQL [(none)]> SELECT @id := @id + 1 FROM test.t1;
ERROR 1064 (42000): Routing query to backend failed. See the error log for further details.
[Cache]
type=filter
module=cache
hard_ttl=30
soft_ttl=20
rules=...
...
[Cached-Routing-Service]
type=service
...
filters=Cache

storage=storage_redis

hard_ttl=60s
soft_ttl=60s

max_resultset_rows=1000

max_resultset_size=128Ki

max_count=1000

max_size=100Mi

rules=/path/to/rules-file

maxctrl alter filter MyCache rules='/path/to/rules-file'

cached_data=shared

selects=verify_cacheable

cache_in_transactions=never

debug=31

enabled=false

timeout=7000ms

SET @maxscale.cache.populate=TRUE;
SELECT a, b FROM tbl;
SET @maxscale.cache.populate=FALSE;
SELECT a, b FROM tbl;SELECT @maxscale.cache.populate;SET @maxscale.cache.use=TRUE;
SELECT a, b FROM tbl;
SET @maxscale.cache.use=FALSE;
SELECT a, b FROM tbl;SELECT @maxscale.cache.use;SET @maxscale.cache.soft_ttl=600;
SELECT a, b FROM unimportant;
SET @maxscale.cache.soft_ttl=60;
SELECT c, d FROM important;SELECT @maxscale.cache.soft_ttl;SET @maxscale.cache.soft_ttl=600, @maxscale.cache.hard_ttl=610;
SELECT a, b FROM unimportant;
SET @maxscale.cache.soft_ttl=60, @maxscale.cache.hard_ttl=65;
SELECT c, d FROM important;SELECT @maxscale.cache.hard_ttl;[TheCache]
type=filter
module=cache
enabled=falseSET @maxscale.cache.populate=TRUE;
SELECT a, b FROM tbl1;
SELECT c, d FROM tbl2;
SELECT e, f FROM tbl3;
SET @maxscale.cache.populate=FALSE;SET @maxscale.cache.use=TRUE;
SELECT a, b FROM tbl1;
SET @maxscale.cache.use=FALSE;UPDATE tbl1 SET a = ...;
SET @maxscale.cache.populate=TRUE;
SELECT a, b FROM tbl1;
SET @maxscale.cache.populate=FALSE;{
store: [ ... ],
use: [ ... ]
}[
{
store: [ ... ],
use: [ ... ]
},
{ ... }
]{
"attribute": <string>,
"op": <string>
"value": <string>
}SELECT * FROM db1.tblUSE db1;
SELECT * FROM tblSELECT * FROM tbl WHERE a = 2 AND b = 3;SELECT * FROM tbl WHERE b = 3 AND a = 2;SELECT a FROM tbl;SELECT b FROM tbl WHERE a > 5;USE somedb;
SELECT fld FROM tbl;{
"store": [
{
"attribute": "database",
"op": "=",
"value": "db1"
}
]
}{
"store": [
{
"attribute": "table",
"op": "!=",
"value": "tbl1"
}
]
}{
"store": [
{
"attribute": "table",
"op": "!=",
"value": "db1.tbl1"
}
]
}{
"store": [
{
"attribute": "query",
"op": "like",
"value": ".*WHERE.*"
}
]
}{
"attribute": <string>,
"op": <string>
"value": <string>
}{
"attribute": "user",
"op": "=",
"value": "'bob'@'%'"
}
{
"attribute": "user",
"op": "like",
"value": "bob@.*"
}{
"use": [
{
"attribute": "user",
"op": "!=",
"value": "admin"
}
]
}MySQL [testdb]> select * from access;
ERROR 1142 (42000): SELECT command denied to user 'bob'@'localhost' for table 'access'{
"store": [
{
"attribute": "table",
"op": "=",
"value": "access"
}
]
}MySQL [testdb]> select * from access;
+------+------+
| a | b |
+------+------+
| 47 | 11 |
+------+------+MySQL [testdb]> select current_user();
+----------------+
| current_user() |
+----------------+
| bob@127.0.0.1 |
+----------------+
1 row in set (0.00 sec)
MySQL [testdb]> select * from access;
+------+------+
| a | b |
+------+------+
| 47 | 11 |
+------+------+{
"store": [
{
"attribute": "table",
"op": "=",
"value": "access"
}
],
"use": [
{
"attribute": "user",
"op": "=",
"value": "'alice'@'%'"
}
]
}storage=storage_inmemorystorage=storage_memcached[Cache-Filter]
type=filter
module=cache
storage=storage_memcached
storage_memcached.server=192.168.1.31
storage_memcached.max_value_size=10Mstorage_options="server=192.168.1.31,max_value_size=10M"storage=storage_redis[Cache-Filter]
type=filter
module=cache
storage=storage_redis
storage_redis.server=192.168.1.31
storage_redis.username=hello
storage_redis.password=worldstorage_options="server=192.168.1.31,username=hello,password=world"$ redis-cli flushall[MyCache]
type=filter
module=cache
storage=storage_inmemory
soft_ttl=30
hard_ttl=45
cached_data=shared
max_size=50Mi
rules=cache_rules.json
[MyService]
type=service
...
filters=MyCache{
"store": [
{
"attribute": "table",
"op": "=",
"value": "sbtest"
}
]
}{
"store": [
{
"attribute": "database",
"op": "=",
"value": "db1"
}
]
}{
"store": [
{
"attribute": "query",
"op": "like",
"value": "FROM db1\\..*"
}
]
}{
"attribute": "query",
"op": "unlike",
"value": "FROM nomatch"
}{
"attribute": "database",
"op": "!=",
"value": "nomatch"
}MariaDB Monitor monitors a Primary-Replica replication cluster. It probes the state of the backends and assigns server roles such as primary and replica, which are used by the routers when deciding where to route a query. It can also modify the replication cluster by performing failover, switchover and rejoin. Backend server versions older than MariaDB/MySQL 5.5 are not supported. Failover and other similar operations require MariaDB 10.4 or later.
Up until MariaDB MaxScale 2.2.0, this monitor was called MySQL Monitor.
The monitor user requires the following grant:
In MariaDB Server versions 10.5.0 to 10.5.8, the monitor user instead requires REPLICATION SLAVE ADMIN:
In MariaDB Server 10.5.9 and later, REPLICA MONITOR is required:
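Hedged sketches of these grants (user and host names are illustrative):

```
-- MariaDB Server before 10.5
GRANT REPLICATION CLIENT ON *.* TO 'maxscale'@'maxscalehost';
-- MariaDB Server 10.5.0 to 10.5.8
GRANT REPLICATION SLAVE ADMIN ON *.* TO 'maxscale'@'maxscalehost';
-- MariaDB Server 10.5.9 and later
GRANT REPLICA MONITOR ON *.* TO 'maxscale'@'maxscalehost';
```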
If the monitor needs to query server disk space (i.e. disk_space_threshold is
set), then the FILE-grant is required with MariaDB Server versions 10.4.7,
10.3.17, 10.2.26 and 10.1.41 and later.
MariaDB Server 10.5.2 introduces CONNECTION ADMIN. This is recommended since it allows the monitor to log in even if server connection limit has been reached.
If are used, the following additional grants are required:
MariaDB 10.5.2 and later require read access to mysql.global_priv:
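A sketch of that grant (user and host names are illustrative):

```
GRANT SELECT ON mysql.global_priv TO 'maxscale'@'maxscalehost';
```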
As of MariaDB Server 11.0.1, the SUPER-privilege no longer contains several of its former sub-privileges. These must be given separately.
If a separate replication user is defined (with replication_user andreplication_password), it requires the following grant:
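A sketch, assuming the standard replication privilege is what is required (user name illustrative):

```
GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'%';
```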
Only one backend can be primary at any given time. A primary must be running
(successfully connected to by the monitor) and its read_only-setting must be
off. A primary may not be replicating from another server in the monitored
cluster unless the primary is part of a multiprimary group. Primary selection
prefers to select the server with the most replicas, possibly in multiple
replication layers. Only replicas reachable by a chain of running relays or
directly connected to the primary count. When multiple servers are tied for
primary status, the server which appears earlier in the servers-setting of the
monitor is selected.
Servers in a cyclical replication topology (multiprimary group) are interpreted as having all the servers in the group as replicas. Even from a multiprimary group only one server is selected as the overall primary.
After a primary has been selected, the monitor prefers to stick with the choice even if other potential primaries with more replica servers are available. Only if the current primary is clearly unsuitable does the monitor try to select another primary. An existing primary turns invalid if:
It is unwritable (read_only is on).
It has been down for more than failcount monitor passes and has no running replicas. Running replicas behind a downed relay count. A replica in this context is any server with at least a partially running replication connection (either io or sql thread is running). The replicas must also be down for more than failcount monitor passes to allow new master selection.
It did not previously replicate from another server in the cluster but it is now replicating.
It was previously part of a multiprimary group but is no longer, or the multiprimary group is replicating from a server not in the group.
Cases 1 and 2 cover the situations in which the DBA, an external script or even another MaxScale has modified the cluster such that the old primary can no longer act as primary. Cases 3 and 4 are less severe. In these cases the topology has changed significantly and the primary should be re-selected, although the old primary may still be the best choice.
The primary change described above is different from failover and switchover described in section . A primary change only modifies the server roles inside MaxScale but does not modify the cluster other than changing the targets of read and write queries. Failover and switchover perform a primary change on their own.
As a general rule, it's best to avoid situations where the cluster has multiple standalone servers, separate primary-replica pairs or separate multiprimary groups. Due to primary invalidation rule 2, a standalone primary can easily lose the primary status to another valid primary if it goes down. The new primary probably does not have the same data as the previous one. Non-standalone primaries are less vulnerable, as a single running replica or multiprimary group member will keep the primary valid even when down.
A minimal configuration for a monitor requires a set of servers for monitoring and a username and a password to connect to these servers.
From MaxScale 2.2.1 onwards, the module name is mariadbmon instead ofmysqlmon. The old name can still be used.
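A minimal sketch of such a monitor section (server, user and interval values are illustrative):

```
[Replication-Monitor]
type=monitor
module=mariadbmon
servers=server1,server2,server3
user=maxscale
password=maxscale-password
monitor_interval=2s
```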
The grants required by user depend on which monitor features are used. A full
list of the grants can be found in the
section.
For a list of optional parameters that all monitors support, read the document.
These are optional parameters specific to the MariaDB Monitor. Failover, switchover and rejoin-specific parameters are listed in their own . Rebuild-related parameters are described in the . ColumnStore parameters are described in the .
assume_unique_hostnames
Type:
Mandatory: No
Dynamic: Yes
Default: true
When active, the monitor assumes that server hostnames and
ports are consistent between the server definitions in the MaxScale
configuration file and the "SHOW ALL SLAVES STATUS" outputs of the servers
themselves. Specifically, the monitor assumes that if server A is replicating
from server B, then A must have a replica connection with Master_Host and Master_Port equal to B's address and port in the configuration file. If this
is not the case, e.g. an IP is used in the server while a hostname is given in
the file, the monitor may misinterpret the topology. The monitor attempts name
resolution on the addresses if a simple string comparison
does not find a match. Using exact matching addresses is, however, more
reliable. In MaxScale 24.02.0, an alternative IP or hostname for a server can be
given in .
This setting must be ON to use any cluster operation features such as failover or switchover, because MaxScale uses the addresses and ports in the configuration file when issuing "CHANGE MASTER TO"-commands.
If the network configuration is such that the addresses MaxScale uses to connect
to backends are different from the ones the servers use to connect to each
other and private_address is not used, assume_unique_hostnames should be
set to OFF. In this mode, MaxScale uses the server IDs it queries from
the servers and the Master_Server_Id fields of the replica connections
to deduce which server is replicating from which. This is not perfect though,
since MaxScale doesn't know the IDs of servers it has
never connected to (e.g. server has been down since MaxScale was started). Also,
the Master_Server_Id-field may have an incorrect value if the replica connection
has not been established. MaxScale will only trust the value if the monitor has
seen the replica connection IO thread connected at least once. If this is not the
case, the replica connection is ignored.
private_address
Type: string
This is an optional server setting, yet documented here since it is only used by MariaDB Monitor. If not set, the normal server address setting is used.
Defines an alternative IP-address or hostname for the server for use with replication. Whenever MaxScale modifies replication (e.g. during switchover), the private address is given as Master_Host to "CHANGE MASTER TO"-commands. Also, when detecting replication, any Master_Host-values from "SHOW SLAVE STATUS"-queries are compared to the private addresses of configured servers if the normal address doesn't match.
This setting is useful if replication and application traffic are separated to different network interfaces.
master_conditions
Type:
Mandatory: No
Dynamic: Yes
Values: none, connecting_slave, connected_slave, running_slave, primary_monitor_master, disk_space_ok
Designate additional conditions for Master-status, i.e. qualified for read and write queries.
Normally, if a suitable primary candidate server is found as described in , MaxScale designates it Master. master_conditions sets additional conditions for a primary server. This setting is an enum_mask, allowing multiple conditions to be set simultaneously. Conditions 2, 3 and 4 refer to replica servers. A single replica must fulfill all of the given conditions for the primary to be viable.
If the primary candidate fails master_conditions but fulfills slave_conditions, it may be designated Slave instead.
The available conditions are:
none : No additional conditions
connecting_slave : At least one immediate replica (not behind relay) is attempting to replicate or is replicating from the primary (Slave_IO_Running is 'Yes' or 'Connecting', Slave_SQL_Running is 'Yes'). A replica with incorrect replication credentials does not count. If the replica is currently down, results from the last successful monitor tick are used.
connected_slave : Same as above, with the difference that the replication connection must be up (Slave_IO_Running is 'Yes'). If the replica is currently down, results from the last successful monitor tick are used.
running_slave : Same as connecting_slave, with the addition that the replica must also be Running.
The default value of this setting is master_conditions=primary_monitor_master,disk_space_ok to ensure that both monitors use the same primary server when cooperating and that the primary is not out of disk space.
For example, to require that the primary must have a replica which is both connected and running, set
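```
# a sketch, combining the two conditions listed above
master_conditions=connected_slave,running_slave
```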
slave_conditions
Type:
Mandatory: No
Dynamic: Yes
Values: none, linked_master, running_master, writable_master
Designate additional conditions for Slave-status, i.e. qualified for read queries.
Normally, a server is Slave if it is at least attempting to replicate from the primary candidate or a relay (Slave_IO_Running is 'Yes' or 'Connecting', Slave_SQL_Running is 'Yes', valid replication credentials). The primary candidate does not necessarily need to be writable, e.g. if it fails its master_conditions. slave_conditions sets additional conditions for a replica server. This setting is an enum_mask, allowing multiple conditions to be set simultaneously.
The available conditions are:
none : No additional conditions. This is the default value.
linked_master : The replica must be connected to the primary (Slave_IO_Running and Slave_SQL_Running are 'Yes') and the primary must be Running. The same applies to any relays between the replica and the primary.
running_master : The primary must be running. Relays may be down.
writable_master : The primary must be writable, i.e. labeled Master.
For example, to require that the primary server of the cluster must be running and writable for any servers to have Slave-status, set
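```
# a sketch, combining the two conditions listed above
slave_conditions=running_master,writable_master
```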
failcount
Type: number
Mandatory: No
Dynamic: Yes
Default: 5
Number of consecutive monitor passes a primary server must be down before it is
considered failed. If automatic failover is enabled (auto_failover=true), it
may be performed at this time. A value of 0 or 1 enables immediate failover.
If automatic failover is not possible, the monitor will try to search for another server to fulfill the primary role. See section for more details. Changing the primary may break replication as queries could be routed to a server without previous events. To prevent this, avoid having multiple valid primary servers in the cluster.
The worst-case delay between the primary failure and the start of the failover
can be estimated by summing up the timeout values and monitor_interval and
multiplying that by failcount:
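```
# A rough sketch only: the exact terms are the monitor's connection and
# query timeouts, which depend on the monitor configuration.
worst_case_delay ~= (monitor_interval + connection_and_query_timeouts) * failcount
```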
enforce_writable_master
Type:
Mandatory: No
Dynamic: Yes
Default: false
If set to ON, the monitor attempts to disable the read_only-flag on the primary when seen. The flag is checked every monitor tick. The monitor user requires the SUPER-privilege for this feature to work.
Typically, the primary server should never be in read-only-mode. Such a situation may arise due to misconfiguration or accident, or perhaps if MaxScale crashed during switchover.
When this feature is enabled, setting the primary manually to read_only will no longer cause the monitor to search for another primary. The primary will instead for a moment lose its [Master]-status (no writes), until the monitor again enables writes on the primary. When starting from scratch, the monitor still prefers to select a writable server as primary if possible.
enforce_read_only_slaves
Type:
Mandatory: No
Dynamic: Yes
Default: false
If set to ON, the monitor attempts to enable the read_only-flag on any writable replica server. The flag is checked every monitor tick. The monitor user requires the SUPER-privilege (or READ_ONLY ADMIN) for this feature to work. While the read_only-flag is ON, only users with the SUPER-privilege (or READ_ONLY ADMIN) can write to the backend server. If temporary write access is required, this feature should be disabled before attempting to disable read_only manually. Otherwise, the monitor will quickly re-enable it.
read_only won't be enabled on the primary server, even if it has lost its [Master]-status and is marked [Slave].
enforce_read_only_servers
Type:
Mandatory: No
Dynamic: Yes
Default: false
Works similarly to enforce_read_only_slaves, except that read_only is set on any writable server that is not the primary and not in maintenance (a superset of the servers altered by enforce_read_only_slaves).
The monitor user requires the SUPER-privilege (or READ_ONLY ADMIN) for this feature to work. If the cluster has no valid primary or primary candidate, read_only is not set on any server as it is unclear which servers should be altered.
maintenance_on_low_disk_space
Type:
Mandatory: No
Dynamic: Yes
Default: true
If a running server that is not the primary or a relay primary is out of disk space the server is set to maintenance mode. Such servers are not used for router sessions and are ignored when performing a failover or other cluster modification operation. See the general monitor parameters and on how to enable disk space monitoring.
Once a server has been put to maintenance mode, the disk space situation of that server is no longer updated. The server will not be taken out of maintenance mode even if more disk space becomes available. The maintenance flag must be removed manually:
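```
# e.g. (server name illustrative)
maxctrl clear server server1 maintenance
```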
cooperative_monitoring_locks
Type:
Mandatory: No
Dynamic: Yes
Values: none, majority_of_all, majority_of_running
Using this setting is recommended when multiple MaxScales are monitoring the same backend cluster. When enabled, the monitor attempts to acquire exclusive locks on the backend servers. The monitor considers itself the primary monitor if it has a majority of locks. The majority can be either over all configured servers or just over running servers. See for more details on how this feature works and which value to use.
Allowed values:
none Default value, no locking.
majority_of_all Primary monitor requires a majority of locks, even counting
servers which are [Down].
majority_of_running Primary monitor requires a majority of locks over
[Running] servers.
This setting is separate from the global MaxScale setting passive. If passive is set to true, cluster operations are disabled even if the monitor has acquired the locks. Generally, it's best not to mix cooperative monitoring with passive. Either set passive=false or do not set it at all.
script_max_replication_lag
Type: number
Mandatory: No
Dynamic: Yes
Default: -1
Defines a replication lag limit in seconds for launching the monitor script configured in the script parameter. If the replication lag of a server goes above this limit, the script is run with the $EVENT placeholder replaced by "rlag_above". If the lag goes back below the limit, the script is run again with the replacement "rlag_below".
Negative values disable this feature. For more information on monitor scripts, see .
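A hypothetical sketch pairing this limit with a monitor script; the script path and lag limit are examples, and $EVENT is the placeholder described above:
# in the monitor section
script=/usr/local/bin/rlag_alert.sh event=$EVENT
script_max_replication_lag=120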
Starting with MaxScale 2.2.1, MariaDB Monitor supports replication cluster modification. The operations implemented are:
failover, which replaces a failed primary with a replica
switchover, which swaps a running primary with a replica
switchover-force, which swaps a running primary with a replica, ignoring most errors. Can break replication.
async-switchover, which schedules a switchover and returns immediately
See for more information on the implementation of the commands.
The cluster operations require that the monitor user (user) has the following
privileges:
SUPER, to modify replica connections, set globals such as read_only and kill connections from other super-users
REPLICATION CLIENT (REPLICATION SLAVE ADMIN in MariaDB Server 10.5), to list replica connections
RELOAD, to flush binary logs
PROCESS, to check if the event_scheduler process is running
A list of the grants can be found in the section.
The privilege system was changed in MariaDB Server 10.5. The effects of this on the MaxScale monitor user are minor, as the SUPER-privilege contains many of the required privileges and is still required to kill connections from other super-users.
In MariaDB Server 11.0.1 and later, SUPER no longer contains all the required grants. The monitor requires:
READ_ONLY ADMIN, to set read_only
REPLICA MONITOR and REPLICATION SLAVE ADMIN, to view and manage replication connections
RELOAD, to flush binary logs
PROCESS, to check if the event_scheduler process is running
In addition, the monitor needs to know which username and password a replica should use when starting replication. These are given in replication_user and replication_password.
The user can define files with SQL statements which are executed on any server being demoted or promoted by cluster manipulation commands. See the sections on promotion_sql_file and demotion_sql_file for more information.
The monitor can manipulate scheduled server events when promoting or demoting a
server. See the section on handle_events for more information.
All cluster operations can be activated manually through MaxCtrl. See below for more details.
See for information on possible issues with failover and switchover.
Failover
Failover replaces a failed primary with a running replica. It does the following:
Select the most up-to-date replica of the old primary to be the new primary. The selection criteria are as follows, in descending priority:
gtid_IO_pos (latest event in relay log)
gtid_current_pos (most processed events)
log_slave_updates is on
Failover is considered successful if steps 1 to 3 succeed, as the cluster then has at least a valid primary server.
Switchover
Switchover swaps a running primary with a running replica. It does the following:
Prepare the old primary for demotion:
If backend_read_timeout is short, extend it and reconnect.
Stop any external replication.
Enable the read_only-flag to stop writes from normal users.
Similar to failover, switchover is considered successful if the new primary was successfully promoted.
Switchover-force
Switchover-force performs the same steps as a normal switchover but ignores any errors on the old primary. Switchover-force also does not expect the new primary to reach the gtid-position of the old, as the old primary could be receiving more events constantly. Thus, switchover-force may lose events and replication can break on multiple (or even all) replicas. This is an unsafe command and should only be used as a last resort.
Rejoin
Rejoin joins a standalone server to the cluster or redirects a replica replicating from a server other than the primary. A standalone server is joined by:
Run the commands in demotion_sql_file.
Enable the read_only-flag.
Disable scheduled server events (if event handling is on).
Start replication: CHANGE MASTER TO and START SLAVE.
A server which is replicating from the wrong primary is redirected simply with STOP SLAVE, RESET SLAVE, CHANGE MASTER TO and START SLAVE commands.
Reset Replication
Reset-replication (added in MaxScale 2.3.0) deletes binary logs and resets gtid:s. This destructive command is meant for situations where the gtid:s in the cluster are out of sync while the actual data is known to be in sync. The operation proceeds as follows:
Reset gtid:s and delete binary logs on all servers:
Stop (STOP SLAVE) and delete (RESET SLAVE ALL) all replica connections.
Enable the read_only-flag.
Disable scheduled server events (if event handling is on).
Cluster operations can be activated manually through the REST API or MaxCtrl. The commands are only performed when MaxScale is in active mode. The commands generally match their automatic versions. The exception is rejoin, in which the manual command allows rejoining even when the joining server has empty gtid:s. This rule allows the user to force a rejoin on a server without binary logs.
All commands require the monitor instance name as the first parameter. Failover selects the new primary server automatically and does not require additional parameters. Rejoin requires the name of the joining server as second parameter. Replication reset accepts the name of the new primary server as second parameter. If not given, the current primary is selected.
Switchover takes one to three parameters. If only the monitor name is given, switchover will autoselect both the replica to promote and the current primary as the server to be demoted. If two parameters are given, the second parameter is interpreted as the replica to promote. If three parameters are given, the third parameter is interpreted as the current primary. The user-given current primary is compared to the primary server currently deduced by the monitor and if the two are unequal, an error is given.
Example commands are below:
The commands follow the standard module command syntax. All require the monitor configuration name (MyMonitor) as the first parameter. For switchover, the last two parameters define the server to promote (NewPrimaryServ) and the server to demote (OldPrimaryServ). For rejoin, the server to join (OldPrimaryServ) is required. Replication reset requires the server to promote (NewPrimaryServ).
It is safe to perform manual operations even with automatic failover, switchover or rejoin enabled since automatic operations cannot happen simultaneously with manual ones.
When a cluster modification is initiated via the REST-API, the URL path is of the form:
<operation> is the name of the command e.g. failover, switchover,
rejoin or reset-replication.
<monitor-name> is the monitor name from the MaxScale configuration file.
<server-name1> and <server-name2> are server names as described
above for MaxCtrl. Only switchover accepts both, failover doesn't need any
and both rejoin and reset-replication accept one.
Given a MaxScale configuration file like
with the assumption that server2 is the current primary, then the URL
path for making server4 the new primary would be:
Example REST-API paths for other commands are listed below.
Queued switchover
Most cluster modification commands wait until the operation either succeeds or fails. async-switchover is an exception, as it returns immediately. Otherwise async-switchover works identically to a normal switchover command. Use the module command fetch-cmd-result to view the result of the queued command. fetch-cmd-result returns the status or result of the latest manual command, whether queued or not.
Failover can activate automatically if auto_failover is on. The activation
begins when the primary has been down at least failcount monitor iterations.
Before modifying the cluster, the monitor checks that all prerequisites for the
failover are fulfilled. If the cluster does not seem ready, an error is printed
and the cluster is rechecked during the next monitor iteration.
Switchover can also activate automatically with the switchover_on_low_disk_space setting. The operation begins if the primary
server is low on disk space but otherwise the operating logic is quite similar
to automatic failover.
Rejoin stands for starting replication on a standalone server or redirecting a replica replicating from the wrong primary (any server that is not the cluster primary). The rejoined servers are directed to replicate from the current cluster primary server, forcing the replication topology to a 1-primary-N-replicas configuration.
A server is categorized as standalone if the server has no replica connections, not even stopped ones. A server is replicating from the wrong primary if the replica IO thread is connected but the primary server id seen by the replica does not match the cluster primary id. Alternatively, the IO thread may be stopped or connecting but the primary server host or port information differs from the cluster primary info. These criteria mean that a STOP SLAVE does not yet set a replica as standalone.
With auto_rejoin active, the monitor will try to rejoin any servers matching
the above requirements. Rejoin does not obey failcount and will attempt to
rejoin any valid servers immediately. When activating rejoin manually, the
user-designated server must fulfill the same requirements.
Switchover and failover are meant for simple topologies (one primary and several replicas). Using these commands with complicated topologies (multiple primaries, relays, circular replication) may give unpredictable results and should be tested before use on a production system.
The server cluster is assumed to be well-behaving with no significant
replication lag (within failover_timeout/switchover_timeout) and all
commands that modify the cluster (such as "STOP SLAVE", "CHANGE MASTER",
"START SLAVE") complete in a few seconds (faster than backend_read_timeout
and backend_write_timeout).
The backends must all use GTID-based replication, and the domain id should not change during a switchover or failover. Replicas should not have extra local events so that GTIDs are compatible across the cluster.
Failover cannot be performed if MaxScale was started only after the primary
server went down. This is because MaxScale needs reliable information on the
gtid domain of the cluster and the replication topology in general to properly
select the new primary. enforce_simple_topology=1 relaxes this requirement.
Failover may lose events. If a primary goes down before sending new events to at least one replica, those events are lost when a new primary is chosen. If the old primary comes back online, the other servers have likely moved on with a diverging history and the old primary can no longer join the replication cluster.
To reduce the chance of losing data, use semisynchronous replication. In semisynchronous mode, the primary waits for a replica to receive an event before returning an acknowledgement to the client. This does not yet guarantee a clean failover. If the primary fails after preparing a transaction but before receiving replica acknowledgement, it will still commit the prepared transaction as part of its crash recovery. If the replicas never saw this transaction, the old primary has diverged from the cluster. See for more information. This situation is much less likely in MariaDB Server 10.6.2 and later, as the improved crash recovery logic will delete such transactions.
Even a controlled shutdown of the primary may lose events. The server does not by default wait for all data to be replicated to the replicas when shutting down and instead simply closes all connections. Before shutting down the primary with the intention of having a replica promoted, run switchover first to ensure that all data is replicated. For more information on server shutdown, see .
Switchover requires that the cluster is "frozen" for the duration of the operation. This means that no data modifying statements such as INSERT or UPDATE are executed and the GTID position of the primary server is stable. When switchover begins, the monitor sets the global read_only flag on the old primary backend to stop any updates. read_only does not affect users with the SUPER-privilege, so any such user can issue writes during a switchover. These writes have a high chance of breaking replication, because the write may not be replicated to all replicas before they switch to the new primary. To prevent this, any users who commonly do updates should NOT have the SUPER-privilege. For even more security, the only SUPER-user session during a switchover should be the MaxScale monitor user. This also applies to users running scheduled server events. Although the monitor by default disables events on the server being demoted, an event may already be executing. If the event definer has the SUPER-privilege, the event can write to the database even through read_only.
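To review which accounts hold the SUPER privilege before a switchover, a query along the following lines can be used (this only covers mysql.user; on 10.5 and later, READ_ONLY ADMIN grants are stored in mysql.global_priv):
SELECT user, host FROM mysql.user WHERE Super_priv = 'Y';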
When mixing rejoin with failover/switchover, the backends should have log_slave_updates on. The rejoining server is likely lagging behind the rest of the cluster. If the current cluster primary does not have binary logs from the moment the rejoining server lost connection, the rejoining server cannot continue replication. This is an issue if the primary has changed and the new primary does not have log_slave_updates on.
If an automatic cluster operation such as auto-failover or auto-rejoin fails,
all cluster modifying operations are disabled for failcount monitor iterations,
after which the operation may be retried. Similar logic applies if the cluster is
unsuitable for such operations, e.g. replication is not using GTID.
The monitor detects if a server in the cluster is replicating from an external primary (a server that is not monitored by the monitor). If the replicating server is the cluster primary server, then the cluster itself is considered to have an external primary.
If a failover/switchover happens, the new primary server is set to replicate from
the cluster external primary server. The username and password for the replication
are defined in replication_user and replication_password. The address and
port used are the ones shown by SHOW ALL SLAVES STATUS on the old cluster
primary server. In the case of switchover, the old primary also stops replicating
from the external server to preserve the topology.
After failover the new primary is replicating from the external primary. If the failed old primary comes back online, it is also replicating from the external server. To normalize the situation, either have auto_rejoin on or manually execute a rejoin. This will redirect the old primary to the current cluster primary.
auto_failover
Type: bool
Mandatory: No
Dynamic: Yes
Default: false
Enable automatic primary failover. When automatic failover is enabled, MaxScale will elect a new primary server for the cluster if the old primary goes down. A server is assumed Down if it cannot be connected to, even if this is caused by incorrect credentials. Failover triggers if the primary stays down for failcount monitor intervals. Failover will not take place if MaxScale is set to passive mode.
As failover alters replication, it requires more privileges than normal monitoring. See for a list of grants.
Failover is designed to be used with simple primary-replica topologies. More complicated topologies, such as multilayered or circular replication, are not guaranteed to always work correctly. Test before using failover with such setups.
auto_rejoin
Type: bool
Mandatory: No
Dynamic: Yes
Default: false
Enable automatic joining of servers to the cluster. When enabled, MaxScale will attempt to direct servers to replicate from the current cluster primary if they are not currently doing so. Replication will be started on any standalone servers. Servers that are replicating from another server will be redirected. This effectively enforces a 1-primary-N-replicas topology. The current primary itself is not redirected, so it can continue to replicate from an external primary. Rejoin is also not performed on any server that is replicating from multiple sources, as this indicates a complicated topology (this rule is overridden by enforce_simple_topology).
This feature is often paired with auto_failover to redirect the former primary when it comes back online. Sometimes this kind of rejoin will fail, as the old primary may have transactions that were never replicated to the current one. See for more information.
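A minimal sketch enabling both automatic operations in the monitor section; the failcount value is only an example:
auto_failover=true
auto_rejoin=true
failcount=5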
As an example, consider the following series of events:
Replica A goes down
Primary goes down and a failover is performed, promoting Replica B
Replica A comes back
Old primary comes back
Replica A is still trying to replicate from the downed primary, since it wasn't online during failover. If auto_rejoin is on, Replica A will quickly be redirected to Replica B, the current primary. The old primary will also rejoin the cluster if possible.
switchover_on_low_disk_space
Type: bool
Mandatory: No
Dynamic: Yes
Default: false
If enabled, the monitor will attempt to switchover a primary server low on disk space with a replica. The switch is only done if a replica without disk space issues is found. If maintenance_on_low_disk_space is also enabled, the old primary (now a replica) will be put to maintenance during the next monitor iteration.
For this parameter to have any effect, disk_space_threshold must be specified for the monitor or the individual servers. Also, disk_space_check_interval must be defined for the monitor.
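A configuration sketch with illustrative values for the related disk space parameters:
# in the monitor section
switchover_on_low_disk_space=true
disk_space_threshold=/var/lib/mysql:90
disk_space_check_interval=10s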
enforce_simple_topology
Type: bool
Mandatory: No
Dynamic: Yes
Default: false
This setting tells the monitor to assume that the servers should be arranged in a 1-primary-N-replicas topology and the monitor should try to keep it that way. If enforce_simple_topology is enabled, the settings assume_unique_hostnames, auto_failover and auto_rejoin are also activated regardless of their individual settings.
By default, mariadbmon will not rejoin servers with more than one replication
stream configured into the cluster. Starting with MaxScale 6.2.0, when enforce_simple_topology is enabled, all servers will be rejoined into the
cluster and any extra replication sources will be removed. This is done to make
automated failover with multi-source external replication possible.
This setting also allows the monitor to perform a failover to a cluster where the primary server has not been seen [Running]. This is usually the case when the primary goes down before MaxScale is started. When using this feature, the monitor will guess the GTID domain id of the primary from the replicas. For reliable results, the GTID:s of the cluster should be simple.
replication_user and replication_password
Type: string
Mandatory: No
Dynamic: Yes
Default: None
The username and password of the replication user. These are given as the values
for MASTER_USER and MASTER_PASSWORD whenever a CHANGE MASTER TO command is
executed.
Both replication_user and replication_password parameters must be defined if a custom replication user is used. If neither of the parameters is defined, the CHANGE MASTER TO-command will use the monitor credentials for the replication user.
The credentials used for replication must have the REPLICATION SLAVE
privilege.
replication_password uses the same encryption scheme as other password
parameters. If password encryption is in use, replication_password must be
encrypted with the same key to avoid erroneous decryption.
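A sketch of defining a dedicated replication user in the monitor section (the account name and password are placeholders):
replication_user=repl
replication_password=repl-pw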
replication_master_ssl
Type: bool
Mandatory: No
Dynamic: Yes
Default: false
If set to ON, any CHANGE MASTER TO-command generated will set MASTER_SSL=1 to enable encryption for the replication stream. This setting should only be enabled if the backend servers are configured for ssl. This typically means setting ssl_ca, ssl_cert and ssl_key in the server configuration file. Additionally, credentials for the replication user should require an encrypted connection (e.g. ALTER USER repl@'%' REQUIRE SSL;).
If the setting is left OFF, MASTER_SSL is not set at all, which will preserve existing settings when redirecting a replica connection.
replication_custom_options
Type: string
A custom string added to "CHANGE MASTER TO"-commands sent by the monitor whenever setting up replication (e.g. during switchover). Useful for defining ssl certificates or other specialized replication options. MaxScale does not check the contents of the string, so care should be taken to ensure that only valid options are set and that the contents do not interfere with the options MaxScale sets on its own (e.g. MASTER_HOST). This setting can also be configured for an individual server. If configured for both the monitor and a server, the server setting takes priority.
failover_timeout and switchover_timeout
Type: duration
Mandatory: No
Dynamic: Yes
Default: 90s
Time limit for failover and switchover operations. The default
values are 90 seconds for both. switchover_timeout is also used as the time
limit for a rejoin operation. Rejoin should rarely time out, since it is a
faster operation than switchover.
The timeouts are specified as documented . If no explicit unit is provided, the value is interpreted as seconds in MaxScale 2.4. In subsequent versions a value without a unit may be rejected. Note that since the granularity of the timeouts is seconds, a timeout specified in milliseconds will be rejected, even if the duration is longer than a second.
If no successful failover/switchover takes place within the configured time period, a message is logged and automatic failover is disabled. This prevents further automatic modifications to the misbehaving cluster.
verify_master_failure
Type: bool
Mandatory: No
Dynamic: Yes
Default: true
Enable additional primary failure verification for automatic failover. verify_master_failure enables this feature and master_failure_timeout defines the timeout.
The primary failure timeout is specified as documented . If no explicit unit is provided, the value is interpreted as seconds in MaxScale 2.4. In subsequent versions a value without a unit may be rejected. Note that since the granularity of the timeout is seconds, a timeout specified in milliseconds will be rejected, even if the duration is longer than a second.
Failure verification is performed by checking whether the replica servers are
still connected to the primary and receiving events. An event is either a change
in the Gtid_IO_Pos-field of the SHOW SLAVE STATUS output or a heartbeat
event. Effectively, if a replica has received an event within master_failure_timeout duration, the primary is not considered down when
deciding whether to failover, even if MaxScale cannot connect to the primary. master_failure_timeout should be longer than the Slave_heartbeat_period of
the replica connection to be effective.
If every replica loses its connection to the primary (Slave_IO_Running is not "Yes"), primary failure is considered verified regardless of timeout. This allows faster failover when the primary properly disconnects.
For automatic failover to activate, the failcount requirement must also be
met.
master_failure_timeout
Type: duration
Mandatory: No
Dynamic: Yes
Default: 10s
master_failure_timeout is specified as documented . If no explicit unit
is provided, the value is interpreted as seconds in MaxScale 2.4. In subsequent
versions a value without a unit may be rejected. Note that since the granularity
of the timeout is seconds, a timeout specified in milliseconds will be rejected,
even if the duration is longer than a second.
servers_no_promotion
Type: string
Mandatory: No
Dynamic: Yes
Default: None
This is a comma-separated list of server names that will not be chosen for primary promotion during a failover or autoselected for switchover. This does not affect switchover if the user selects the server to promote. Using this setting can disrupt new primary selection for failover such that a non-optimal server is chosen. At worst, this will cause replication to break. Alternatively, failover may fail if all valid promotion candidates are in the exclusion list.
As of MaxScale 24.02.4 and 24.08.1, this setting also affects primary
server selection during MaxScale startup or due to replication topology
changes. A server listed in servers_no_promotion will thus not be
selected as primary unless manually designated in a switchover-command.
promotion_sql_file and demotion_sql_file
Type: string
Mandatory: No
Dynamic: Yes
Default: None
These optional settings are paths to text files with SQL statements in them. During promotion or demotion, the contents are read line-by-line and executed on the backend. Use these settings to execute custom statements on the servers to complement the built-in operations.
Empty lines or lines starting with '#' are ignored. Any results returned by the statements are ignored. All statements must succeed for the failover, switchover or rejoin to continue. The monitor user may require additional privileges and grants for the custom commands to succeed.
When promoting a replica to primary during switchover or failover, the promotion_sql_file is read and executed on the new primary server after its read-only flag is disabled. The commands are run before starting replication from an external primary, if one exists.
demotion_sql_file is run on the old primary during demotion to replica, before the old primary starts replicating from the new primary. The file is also run before rejoining a standalone server to the cluster, as the standalone server is typically a former primary server. When redirecting a replica replicating from the wrong primary, the sql-file is not executed.
Since the queries in the files are run during operations which modify replication topology, care is required. If promotion_sql_file contains data modification (DML) queries, the new primary server may not be able to successfully replicate from an external primary. demotion_sql_file should never contain DML queries, as these may not replicate to the replica servers before replica threads are stopped, breaking replication.
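As an illustration only, a promotion file might contain administrative statements such as the following; the actual contents are entirely site-specific and, as noted above, should not include DML:
# example promotion.sql (hypothetical contents)
SET GLOBAL expire_logs_days = 10;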
handle_events
Type: bool
Mandatory: No
Dynamic: Yes
Default: true
If enabled, the monitor continuously queries the servers for enabled scheduled events and uses this information when performing cluster operations, enabling and disabling events as appropriate.
When a server is being demoted, any events with "ENABLED" status are set to "SLAVESIDE_DISABLED". When a server is being promoted to primary, events that are either "SLAVESIDE_DISABLED" or "DISABLED" are set to "ENABLED" if the same event was also enabled on the old primary server last time it was successfully queried. Events are considered identical if they have the same schema and name. When a standalone server is rejoined to the cluster, its events are also disabled since it is now a replica.
The monitor does not check whether the same events were disabled and enabled during a switchover or failover/rejoin. All events that meet the criteria above are altered.
The monitor does not enable or disable the event scheduler itself. For the events to run on the new primary server, the scheduler should be enabled by the admin. Enabling it in the server configuration file is recommended.
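For example, the scheduler can be switched on in the MariaDB Server configuration file (this is a server option, not a MaxScale setting):
[mariadb]
event_scheduler=ON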
Events running at high frequency may cause replication to break in a failover scenario. If an old primary which was failed over restarts, its event scheduler will be on if set in the server configuration file. Its events will also remember their "ENABLED"-status and run when scheduled. This may happen before the monitor rejoins the server and disables the events. This should only be an issue for events running more often than the monitor interval or events that run immediately after the server has restarted.
As of MaxScale 2.5, MariaDB-Monitor supports cooperative monitoring. This means that multiple monitors (typically in different MaxScale instances) can monitor the same backend server cluster and only one will be the primary monitor. Only the primary monitor may perform switchover, failover or rejoin operations. The primary also decides which server is the primary. Cooperative monitoring is enabled with the cooperative_monitoring_locks-setting. Even with this setting, only one monitor per server per MaxScale is allowed. This limitation can be circumvented by defining multiple copies of a server in the configuration file.
Cooperative monitoring uses server locks for coordinating between monitors. When cooperating, the monitor regularly checks the status of a lock named maxscale_mariadbmonitor on every server and acquires it if free. If the monitor acquires a majority of locks, it is the primary. If a monitor cannot claim majority locks, it is a secondary monitor.
The primary monitor of a cluster also acquires the lock maxscale_mariadbmonitor_master on the primary server. Secondary monitors check which server this lock is taken on and only accept that server as the primary. This arrangement is required so that multiple monitors can agree on which server is the primary regardless of replication topology. If a secondary monitor does not see the primary-lock taken, then it won't mark any server as [Master], causing writes to fail.
The lock-setting defines how many locks are required for primary status. Setting cooperative_monitoring_locks=majority_of_all means that the primary monitor
needs n_servers/2 + 1 (rounded down) locks. For example, a cluster of three
servers needs two locks for majority, a cluster of four needs three, and a
cluster of five needs three.
This scheme is resistant against split-brain situations in the sense
that multiple monitors cannot be primary simultaneously. However, a split may
cause both monitors to consider themselves secondary, in which case a primary
server won't be detected.
Even without a network split, cooperative_monitoring_locks=majority_of_all
will lead to neither monitor claiming lock majority once too many servers go
down. Consider a scenario where only two out of four servers
are running when three are needed for majority. Although both MaxScales see both
running servers, neither is certain they have majority and the cluster stays in
read-only mode. If the primary server is down, no failover is performed either.
Setting cooperative_monitoring_locks=majority_of_running changes the way n_servers is calculated. Instead of using the total number of servers, only
servers currently [Running] are considered. This scheme adapts to multiple
servers going down, ensuring that claiming lock majority is always possible.
However, it can lead to multiple monitors claiming primary status in a
split-brain situation. As an example, consider a cluster with servers 1 to 4
with MaxScales A and B. MaxScale A can connect to
servers 1 and 2 (and claim their locks) but not to servers 3 and 4 due to
a network split. MaxScale A thus assumes servers 3 and 4 are down. MaxScale B
does the opposite, claiming servers 3 and 4 and assuming 1 and 2 are down.
Both MaxScales claim two locks out of two available and assume that they have
lock majority. Both MaxScales may then promote their own primaries and route
writes to different servers.
The recommended strategy depends on which failure scenario is more likely and/or more destructive. If it's unlikely that multiple servers are ever down simultaneously, then majority_of_all is likely the safer choice. On the other hand, if split-brain is unlikely but multiple servers may be down simultaneously, then majority_of_running would keep the cluster operational.
To check if a monitor is primary, fetch monitor diagnostics with maxctrl show monitors or the REST API. The boolean field primary indicates whether the
monitor has lock majority on the cluster. If cooperative monitoring is disabled,
the field value is null. Lock information for individual servers is listed in
the server-specific field lock_held. Again, null indicates that locks are
not in use or the lock status is unknown.
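For example, assuming a monitor named MyMonitor, the diagnostics containing the primary and lock_held fields can be inspected with:
maxctrl show monitor MyMonitor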
If a MaxScale instance tries to acquire the locks but fails to get majority (perhaps another MaxScale was acquiring locks simultaneously), it will release any acquired locks and try again after a random number of monitor ticks. This prevents multiple MaxScales from fighting over the locks continuously, as one MaxScale will eventually wait less time than the others. Conflict probability can be further decreased by configuring each monitor with a different monitor_interval.
The flowchart below illustrates the lock handling logic.
Monitor cooperation depends on the server locks. The locks are connection-specific. The owning connection can manually release a lock, allowing another connection to claim it. Also, if the owning connection closes, the MariaDB Server process releases the lock. How quickly a lost connection is detected affects how quickly the primary monitor status moves from one monitor and MaxScale to another.
If the primary MaxScale or its monitor is stopped normally, the monitor connections are properly closed, releasing the locks. This allows the secondary MaxScale to quickly claim the locks. However, if the primary simply vanishes (broken network), the connection may just look idle. In this case, the MariaDB Server may take a long time before it considers the monitor connection lost. This time ultimately depends on TCP keepalive settings on the machines running MariaDB Server.
On MariaDB Server 10.3.3 and later, the TCP keepalive settings can be configured for just the server process. See for information on the settings tcp_keepalive_interval, tcp_keepalive_probes and tcp_keepalive_time. These settings can also be set on the operating system level, as described .
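A sketch of the server-side keepalive settings; the values are examples only:
[mariadb]
tcp_keepalive_time=120
tcp_keepalive_interval=30
tcp_keepalive_probes=4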
As of MaxScale 6.4.16, 22.08.13, 23.02.10, 23.08.6 and 24.02.2, configuring TCP keepalive is no longer necessary, as the monitor sets the session wait_timeout variable when acquiring a lock. This causes the MariaDB Server to close the monitor connection if the connection appears idle for too long. The value of wait_timeout used depends on the monitor interval and connection timeout settings, and is logged at MaxScale startup.
A monitor can also be ordered to manually release its locks via the module command release-locks. This is useful for manually changing the primary monitor. After running the release-command, the monitor will not attempt to reacquire the locks for one minute, even if it wasn't the primary monitor to begin with. This command can cause the cluster to become temporarily unusable by MaxScale. Only use it when there is another monitor ready to claim the locks.
Backup operations manipulate the contents of a MariaDB Server, saving it or overwriting it. MariaDB-Monitor supports three backup operations:
rebuild-server: Replace the contents of a database server with the contents of another.
create-backup: Copy the contents of a database server to a storage location.
restore-from-backup: Overwrite the contents of a database server with a backup.
These operations do not modify server config files; only files in the data directory (typically /var/lib/mysql) are affected.
All of these operations are monitor commands and best launched with MaxCtrl.
The operations are asynchronous, which means MaxCtrl won't wait for the
operation to complete and instead immediately returns "OK". To see the current
status of an operation, either check MaxScale log or use the
fetch-cmd-result-command
(e.g. maxctrl call command mariadbmon fetch-cmd-result MyMonitor).
To perform backup operations, MaxScale requires ssh-access on all affected machines. The ssh_user and ssh_keyfile-settings define the SSH credentials MaxScale uses to access the servers. MaxScale must be able to run commands with sudo on both the source and target servers. See below for more information.
The following tools need to be installed on the backends:
mariadb-backup. Backs up and restores MariaDB Server contents. Installed e.g.
with yum install MariaDB-backup. See for more
information.
pigz. Compresses and decompresses the backup stream. Installed e.g. with yum install pigz.
socat. Streams data from one machine to another. Is likely already
installed. If not, can be installed e.g. with yum install socat.
mariadb-backup needs server credentials to log in and authenticate to the MariaDB Server being copied from. For this, MaxScale uses the monitor user. The monitor user may thus require additional privileges. See for more details.
The rebuild server-operation replaces the contents of a database server with the contents of another server. The source server is effectively cloned and all data on the target server is lost. This is useful when a replica server has diverged from the primary server, or when adding a new server to the cluster. MaxScale performs this operation by running mariadb-backup on both the source and target servers.
When launched, the rebuild operation proceeds as below. If any step fails, the operation is stopped and the target server will be left in an unspecified state.
Log in to both servers with ssh and check that the tools listed above are
present (e.g. mariadb-backup -v should succeed).
Check that the port used for transferring the backup is free on the source server. If not, kill the process holding it. This requires running lsof and kill.
Test the connection by streaming a short message from the source host to the target.
Launch mariadb-backup on the source machine, compress the stream and listen for an incoming connection. This is performed with a command like
The rebuild-operation is a monitor module command and takes four arguments:
Monitor name, e.g. MyMonitor.
Target server name, e.g. MyTargetServer.
Source server name, e.g. MySourceServer. This parameter is optional.
If not specified, the monitor prefers to autoselect an up-to-date replica
server to avoid increasing load on the primary server. Due to the --safe-slave-backup option, the replica will stop
replicating until the backup data has been transferred.
Data directory on target server. This parameter is optional. If not specified, the monitor will ask the target server. If target server is not running, monitor will assume /var/lib/mysql. Thus, this only needs to be defined with non-standard directory setups.
The following example rebuilds MyTargetServer with contents of MySourceServer.
The following example uses a custom data directory on the target.
The operation does not launch if the target server is already replicating or if the source server is not a primary or replica.
Steps 6 and 8 can take a long time depending on the size of the database and if writes are ongoing. During these steps, the monitor will continue monitoring the cluster normally. After each monitor tick the monitor checks if the rebuild-operation can proceed. No other monitor operations, either manual or automatic, can run until the rebuild completes.
The create backup-operation copies the contents of a database server to the backup storage. The source server is not modified but may slow down during backup creation. MaxScale performs this operation by running mariadb-backup on both the source and storage servers. The storage location is defined by the backup_storage_address and backup_storage_path settings. Normal ssh-settings are used to access the storage server. The backup storage machine does not need to have a MariaDB Server installed.
Backup creation runs somewhat similar to rebuild-server. The main difference is that the backup data is simply saved to a directory and not prepared or used to start a MariaDB Server. If any step fails, the operation is stopped and the backup storage directory will be left in an unspecified state.
Init. See rebuild-server.
Check listen port on backup storage machine. See rebuild-server.
Check that the backup storage main directory exists. Check that it does not contain a backup with the same name as the one being created. Create the final backup directory.
Test the connection by streaming a short message from the source host to the backup storage.
Backup creation is a monitor module command and takes three arguments: the monitor name, source server name and backup name. Backup name defines the subdirectory where the backup is saved and should be a valid directory name. The command
would save the backup of MySourceServer to <backup_storage_path>/wednesday_161122 on the host defined in backup_storage_address. ssh_user needs to have read and write access
to the main storage directory. The source server must be a primary or replica.
Similar to rebuild-server, the monitor will continue monitoring the servers while the backup is transferred.
The restore-operation is the reverse of create-backup. It overwrites the contents of an existing MariaDB Server with a backup from the backup storage. The backup is not removed and can be used again. MaxScale performs this operation by transferring the backup contents as a tar archive and overwriting the target server data directory. The backup storage is defined in monitor settings similar to create-backup.
The restore-operation runs somewhat similar to rebuild-server. The main difference is that the backup data is copied with tar instead of mariadb-backup. If any step fails, the operation is stopped and the target server will be left in an unspecified state.
Init. See rebuild-server.
Check listen port on target machine. See rebuild-server.
Check that the backup storage main directory exists and that it contains a backup with the name requested.
Test the connection by streaming a short message from the backup storage to the target machine.
Server restoration is a monitor module command and takes four arguments.
Monitor name, e.g. MyMonitor.
Target server name, e.g. MyNewServer.
Backup name. This parameter defines the subdirectory where the backup is read from and should be an existing directory on the backup storage host.
Data directory on target server. This parameter is optional. If not specified, the monitor will ask the target server. If target server is not running, monitor will assume /var/lib/mysql. Thus, this only needs to be defined with non-standard directory setups.
The command
would erase the contents of MyTargetServer and replace them with the backup
contained in <backup_storage_path>/wednesday_161122 on the host defined in backup_storage_address. ssh_user needs to have read access
to the main storage directory and the backup. The target server must not be
a primary or replica.
The following example uses a custom data directory on the target.
Similar to rebuild-server, the monitor will continue monitoring the servers while the backup is transferred and prepared.
ssh_user
Type: string
Mandatory: No
Dynamic: Yes
Default: None
Ssh username. Used when logging in to backend servers to run commands.
ssh_keyfile
Type: path
Mandatory: No
Dynamic: Yes
Default: None
Path to file with an ssh private key. Used when logging in to backend servers to run commands.
ssh_check_host_key
Type: bool
Mandatory: No
Dynamic: Yes
Default: true
When logging in to backends, require that the server is already listed in the known_hosts-file of the user running MaxScale.
ssh_timeout
Type: duration
Mandatory: No
Dynamic: Yes
Default: 10s
The rebuild operation consists of multiple ssh commands. Most of the commands are assumed to complete quickly. If these commands take more than ssh_timeout to complete, the operation fails. Adjust this setting if rebuild fails due to ssh commands timing out. This setting does not affect steps 5 and 6, as these are assumed to take significant time.
ssh_port
Type: number
Mandatory: No
Dynamic: Yes
Default: 22
SSH port. Used for running remote commands on servers.
rebuild_port
Type: number
Mandatory: No
Dynamic: Yes
Default: 4444
The port which the source server listens on for a connection. The port must not be blocked by a firewall or listened on by any other program. If another process is listening on the port when rebuild is starting, MaxScale will attempt to kill the process.
mariadb-backup_use_memory
String, default: "1G". Given as is tomariadb-backup --prepare --use-memory=<mariadb-backup_use_memory>. If set to empty,
no --use-memory is set and mariadb-backup will use its internal default. See for more
information.
mariadb-backup_parallel
Numeric, default: 1. Given as is to mariadb-backup --backup --parallel=<val>.
Defines the number of threads used for parallel data file transfer. See for more
information.
backup_storage_address
Type: string
Mandatory: No
Dynamic: Yes
Default: None
Address of the backup storage. Does not need to have MariaDB Server running or be monitored by the monitor. Connected to with ssh. Must have enough disk space to store all backups.
backup_storage_path
Type: path
Mandatory: No
Dynamic: Yes
Default: None
Path to main backup storage directory on backup storage host. ssh_user needs to have full access to this directory to save and read backups.
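A combined sketch of the SSH and backup storage settings in the monitor section (all values are placeholders):
ssh_user=maxscale_ssh
ssh_keyfile=/home/maxscale_ssh/.ssh/id_rsa
backup_storage_address=backup-host.example.com
backup_storage_path=/var/backups/maxscale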
If giving MaxScale general sudo-access is out of the question, MaxScale must be
allowed to run the specific commands required by the backup operations. This can
be achieved by creating a file with the commands in the /etc/sudoers.d directory. In the example below, the user johnny is given the
power to run commands as root. The contents of the file may need to be tweaked
due to changes in install locations.
Since MaxScale version 22.08, MariaDB Monitor can run ColumnStore administrative commands against a ColumnStore cluster. The commands interact with the ColumnStore REST-API present in recent ColumnStore versions and have been tested with MariaDB-Server 10.6 running the ColumnStore plugin version 6.2. None of the commands affect monitor configuration or replication topology. MariaDB Monitor simply relays the commands to the backend cluster.
MariaDB Monitor can fetch cluster status, add and remove nodes, start and stop
the cluster, and set cluster read-only or readwrite. MaxScale only communicates
with the first server in the servers-list.
Most of the commands are asynchronous, i.e. they do not wait for the operation to complete on the ColumnStore backend before returning to the command prompt. MariaDB Monitor itself, however, runs the command in the background and does not perform normal monitoring until the operation completes or fails. After an operation has started the user should use fetch-cmd-result to check its status. The examples below show how to run the commands using MaxCtrl. If a command takes a timeout-parameter, the timeout can be given in seconds (s), minutes (m) or hours (h).
ColumnStore command settings are listed below. At least cs_admin_api_key must be set.
Fetch cluster status. Returns the result as is. Status fetching has an automatic timeout of ten seconds.
Examples:
Add or remove a node to/from the ColumnStore cluster.
<node-host> is the hostname or IP of the node being added or removed.
Examples:
Examples:
Examples:
cs_admin_port
Numeric, default: 8640. The REST-API port on the ColumnStore nodes. All nodes are assumed to listen on the same port.
cs_admin_api_key
String. The API-key MaxScale sends to the ColumnStore nodes when making a REST-API request. Should match the value configured on the ColumnStore nodes.
cs_admin_base_path
String, default: /cmapi/0.4.0. Base path sent with the REST-API request.
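A sketch of the ColumnStore settings in the monitor section; the API key is a placeholder and must match the key configured on the ColumnStore nodes:
cs_admin_port=8640
cs_admin_api_key=somekey1234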
fetch-cmd-result
Fetches the result of the last manual command. Requires monitor name as parameter. Most commands only return a generic success message or an error description. ColumnStore commands may return more data. Scheduling another command clears a stored result.
cancel-cmd
Cancels the latest operation, whether manual or automatic, if possible. Requires monitor name as parameter. A scheduled manual command is simply canceled before it can run. If a command is already running, it stops as soon as possible. The cancel-cmd itself does not wait for a running operation to stop. Use fetch-cmd-result or check the log to see if the operation has truly completed. Canceling is most useful for stopping a stalled rebuild operation.
See the .
Before performing failover or switchover, the monitor checks that prerequisites are fulfilled, printing any errors and warnings found. This should catch and explain most issues with failover or switchover not working. If the operations are attempted and still fail, then most likely one of the commands the monitor issued to a server failed or timed out. The log should explain which query failed.
A typical failure reason is that a command such as STOP SLAVE takes longer than the backend_read_timeout of the monitor, causing the connection to break. As of 2.3, the
monitor will retry most such queries if the failure was caused by a timeout. The retrying
continues until the total time for a failover or switchover has been spent. If the log
shows warnings or errors about commands timing out, increasing the backend timeout
settings of the monitor should help. Other settings to look at are query_retries and query_retry_timeout. These are general MaxScale settings described in the . Setting query_retries to 2 is a reasonable first try.
If switchover causes the old primary (now replica) to fail replication, then most
likely a user or perhaps a scheduled event performed a write while monitor
had set read_only=1. This is possible if the user performing the write has
"SUPER" or "READ_ONLY ADMIN" privileges. The switchover-operation tries to kick
out SUPER-users but this is not certain to succeed. Remove these privileges
from any users that regularly do writes to prevent them from interfering with
switchover.
The server configuration files should have log-slave-updates=1 to ensure that
a newly promoted primary has binary logs of previous events. This allows the new
primary to replicate past events to any lagging replicas.
To print out all queries sent to the servers, start MaxScale with --debug=enable-statement-logging. This setting prints all queries sent to the
backends by monitors and authenticators. The printed queries may include
usernames and passwords.
If a replica is shown in maxctrl as "Slave of External Server" instead of
"Slave", the reason is likely that the "Master_Host"-setting of the replication connection
does not match the MaxScale server definition. As of 2.3.2, the MariaDB Monitor by default
assumes that the replica connections (as shown by SHOW ALL SLAVES STATUS) use the exact
same "Master_Host" as used the MaxScale configuration file server definitions. This is
controlled by the setting assume_unique_hostnames.
Since MaxScale 2.2 it's possible to detect a replication setup which includes Binlog Server: the required action is to add the binlog server to the list of servers only if master_id identity is set.
This page is licensed: CC BY-SA / Gnu FDL
running_slave
primary_monitor_master
disk_space_ok
Default: primary_monitor_master, disk_space_ok
primary_monitor_master : If this MaxScale is cooperating with another MaxScale and this is the secondary MaxScale, require that the candidate primary is selected also by the primary MaxScale.
disk_space_ok : The candidate primary must not be low on disk space. This option only takes effect if disk space check is enabled. Added in MaxScale 23.08.5.
writable_master
primary_monitor_master
Default: none
primary_monitor_master : If this MaxScale is cooperating with another MaxScale and this is the secondary MaxScale, require that the candidate primary is selected also by the primary MaxScale.
disk_space_ok : The replica must not be low on disk space. This option only takes effect if disk space check is enabled. Added in MaxScale 23.08.5.
Default: none
rejoin, which directs servers to replicate from the primary
reset-replication (added in MaxScale 2.3.0), which deletes binary logs and resets gtid:s
SHOW DATABASES and EVENT, to list and modify server events
SELECT on mysql.user, to see which users have SUPER
SELECT on mysql.global_priv, to see which users have READ_ONLY ADMIN
SHOW DATABASES, EVENT and SET USER, to list and modify server events
BINLOG ADMIN, to delete binary logs (during reset-replication)
CONNECTION ADMIN, to kill connections
SELECT on mysql.user, to see which users have SUPER
SELECT on mysql.global_priv, to see which users have READ_ONLY ADMIN
disk space is not low
If the new primary has unprocessed relay log items, cancel and try again later.
Prepare the new primary:
Remove the replica connection the new primary used to replicate from the old primary.
Disable the read_only-flag.
Enable scheduled server events (if event handling is on). Only events that were enabled on the old primary are enabled.
Run the commands in promotion_sql_file.
Start replication from external primary if one existed.
Redirect all other replicas to replicate from the new primary:
STOP SLAVE
CHANGE MASTER TO
START SLAVE
Check that all replicas are replicating.
Kill connections from super and read-only admin users since read_only does not affect them. During this step, all writes are blocked with "FLUSH TABLES WITH READ LOCK".
Disable scheduled server events (if event handling is on).
Run the commands in demotion_sql_file.
Flush the binary log ("flush logs") so that all events are on disk.
Wait a moment to check that gtid is stable.
Wait for the new primary to catch up with the old primary.
Promote new primary and redirect replicas as in failover steps 3 and 4. Also redirect the demoted old primary.
Check that all replicas are replicating.
Set the sequence number of gtid_slave_pos to zero. This also affects gtid_current_pos.
Prepare new primary:
Disable the read_only-flag.
Enable scheduled server events (if event handling is on). Events are only enabled if the cluster had a primary server when starting the reset-replication operation. Only events that were enabled on the previous primary are enabled on the new.
Direct other servers to replicate from the new primary as in the other operations.
mariadb-backup --backup --safe-slave-backup --stream=xbstream --parallel=1 | pigz -c | socat - TCP-LISTEN:<port>
Ask the target server what its data directory is (select @@datadir;). Stop
MariaDB Server on the target machine and delete all contents of the data
directory.
On the target machine, connect to the source machine, read the backup stream,
decompress it and write to the data directory. This is performed with a command
like socat -u TCP:<host>:<port> STDOUT | pigz -dc | mbstream -x. This step can
take a long time if there is much data to transfer.
Check that the data directory on the target machine is not empty, i.e. that the transfer at least appears to have succeeded.
Prepare the backup on the target server with a command like mariadb-backup --use-memory=1G --prepare. This step can also take some time if
the source server performed writes during data transfer.
On the target server, change ownership of datadir contents to the mysql-user and start MariaDB-server.
Read gtid from the data directory. Have the target server start replicating from the primary if it is not one already.
Serve backup on source. Similar to rebuild-server step 4.
Transfer backup directly to the final storage directory. Similar to rebuild-server step 5.
Check that the copied backup data looks ok.
On the backup storage machine, compress the backup with tar and serve it
with socat, listening for an incoming connection. This is performed with
a command like tar -zc -C <backup_dir> . | socat - TCP-LISTEN:<port>.
Ask the target server what its data directory is (select @@datadir;). Stop
MariaDB Server on the target machine and delete all contents of the data
directory.
On the target machine, connect to the source machine, read the backup stream,
decompress it and write to the data directory. This is performed with a command
like socat -u TCP:<host>:<port> STDOUT | sudo tar -xz -C /var/lib/mysql/.
This step can take a long time if there is much data to transfer.
From here on, the operation proceeds as from rebuild-server step 7.


CREATE USER 'maxscale'@'maxscalehost' IDENTIFIED BY 'maxscale-password';
GRANT REPLICATION CLIENT ON *.* TO 'maxscale'@'maxscalehost';
GRANT REPLICATION SLAVE ADMIN ON *.* TO 'maxscale'@'maxscalehost';
GRANT REPLICA MONITOR ON *.* TO 'maxscale'@'maxscalehost';
GRANT FILE ON *.* TO 'maxscale'@'maxscalehost';
GRANT CONNECTION ADMIN ON *.* TO 'maxscale'@'maxscalehost';
GRANT SUPER, RELOAD, PROCESS, SHOW DATABASES, EVENT ON *.* TO 'maxscale'@'maxscalehost';
GRANT SELECT ON mysql.user TO 'maxscale'@'maxscalehost';
GRANT SELECT ON mysql.global_priv TO 'maxscale'@'maxscalehost';
GRANT RELOAD, PROCESS, SHOW DATABASES, EVENT, SET USER, READ_ONLY ADMIN ON *.* TO 'maxscale'@'maxscalehost';
GRANT REPLICATION SLAVE ADMIN, BINLOG ADMIN, CONNECTION ADMIN ON *.* TO 'maxscale'@'maxscalehost';
GRANT SELECT ON mysql.user TO 'maxscale'@'maxscalehost';
GRANT SELECT ON mysql.global_priv TO 'maxscale'@'maxscalehost';
CREATE USER 'replication'@'replicationhost' IDENTIFIED BY 'replication-password';
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'replicationhost';
[MyMonitor]
type=monitor
module=mariadbmon
servers=server1,server2,server3
user=myuser
password=mypwd
master_conditions=connected_slave,running_slave
slave_conditions=running_master,writable_master
(monitor_interval + backend_connect_timeout) * failcount
maxctrl clear server server2 Maint
call command mariadbmon failover MONITOR
call command mariadbmon switchover MONITOR [NEW_PRIMARY] [OLD_PRIMARY]
call command mariadbmon switchover-force MONITOR [NEW_PRIMARY] [OLD_PRIMARY]
call command mariadbmon rejoin MONITOR OLD_PRIMARY
maxctrl call command mariadbmon reset-replication MONITOR [NEW_PRIMARY]
maxctrl call command mariadbmon failover MyMonitor
maxctrl call command mariadbmon rejoin MyMonitor OldPrimaryServ
maxctrl call command mariadbmon reset-replication MyMonitor
maxctrl call command mariadbmon reset-replication MyMonitor NewPrimaryServ
maxctrl call command mariadbmon switchover MyMonitor
maxctrl call command mariadbmon switchover MyMonitor NewPrimaryServ
maxctrl call command mariadbmon switchover MyMonitor NewPrimaryServ OldPrimaryServ
maxctrl call command mariadbmon switchover-force MyMonitor NewPrimaryServ
/v1/maxscale/modules/mariadbmon/<operation>?<monitor-name>&<server-name1>&<server-name2>
[Cluster1]
type=monitor
module=mariadbmon
servers=server1, server2, server3, server4
...
/v1/maxscale/modules/mariadbmon/switchover?Cluster1&server4&server2
/v1/maxscale/modules/mariadbmon/failover?Cluster1
/v1/maxscale/modules/mariadbmon/rejoin?Cluster1&server3
/v1/maxscale/modules/mariadbmon/reset-replication?Cluster1&server3
maxctrl call command mariadbmon async-switchover Cluster1
OK
maxctrl call command mariadbmon fetch-cmd-result Cluster1
{
"links": {
"self": "http://localhost:8989/v1/maxscale/modules/mariadbmon/fetch-cmd-result"
},
"meta": "switchover completed successfully."
}
switchover_on_low_disk_space=true
enforce_simple_topology=true
replication_custom_options=MASTER_SSL_CERT = '/tmp/certs/client-cert.pem',
MASTER_SSL_KEY = '/tmp/certs/client-key.pem',
MASTER_SSL_CA = '/tmp/certs/ca.pem',
MASTER_SSL_VERIFY_SERVER_CERT=0
servers_no_promotion=backup_dc_server1,backup_dc_server2
promotion_sql_file=/home/root/scripts/promotion.sql
demotion_sql_file=/home/root/scripts/demotion.sql
maxctrl call command mariadbmon release-locks MyMonitor1
maxctrl call command mariadbmon async-rebuild-server MyMonitor MyTargetServer MySourceServer
maxctrl call command mariadbmon async-rebuild-server MyMonitor MyTargetServer MySourceServer /my_datadir
maxctrl call command mariadbmon async-create-backup MyMonitor MySourceServer wednesday_161122
maxctrl call command mariadbmon async-restore-from-backup MyMonitor MyTargetServer wednesday_161122
maxctrl call command mariadbmon async-restore-from-backup MyMonitor MyTargetServer wednesday_161122 /my_datadir
mariadb-backup_use_memory=2G
mariadb-backup_parallel=2
backup_storage_address=192.168.1.11
backup_storage_path=/home/maxscale_ssh_user/backup_storage
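Taken together, the backup-related settings above would sit in the monitor section roughly as in this sketch, reusing the example values from this page (the section name is a placeholder):
[MyMonitor]
...
mariadb-backup_use_memory=2G
mariadb-backup_parallel=2
backup_storage_address=192.168.1.11
backup_storage_path=/home/maxscale_ssh_user/backup_storage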
johnny ALL= NOPASSWD: /bin/systemctl stop mariadb
johnny ALL= NOPASSWD: /bin/systemctl start mariadb
johnny ALL= NOPASSWD: /usr/sbin/lsof
johnny ALL= NOPASSWD: /bin/kill
johnny ALL= NOPASSWD: /usr/bin/mariadb-backup
johnny ALL= NOPASSWD: /bin/mbstream
johnny ALL= NOPASSWD: /bin/rm -rf /var/lib/mysql/*
johnny ALL= NOPASSWD: /bin/chown -R mysql\:mysql /var/lib/mysql
johnny ALL= NOPASSWD: /bin/cat /var/lib/mysql/xtrabackup_binlog_info
johnny ALL= NOPASSWD: /bin/tar -xz -C /var/lib/mysql/
maxctrl call command mariadbmon cs-get-status <monitor-name>
maxctrl call command mariadbmon async-cs-get-status <monitor-name>
maxctrl call command mariadbmon cs-get-status MyMonitor
{
"mcs1": {
"cluster_mode": "readwrite",
"dbrm_mode": "master",
<snip>
maxctrl call command mariadbmon async-cs-get-status MyMonitor
OK
maxctrl call command mariadbmon fetch-cmd-result MyMonitor
{
"mcs1": {
"cluster_mode": "readwrite",
"dbrm_mode": "master",
<snip>
maxctrl call command mariadbmon async-cs-add-node <monitor-name> <node-host> <timeout>
maxctrl call command mariadbmon async-cs-remove-node <monitor-name> <node-host> <timeout>
maxctrl call command mariadbmon async-cs-add-node MyMonitor mcs3 1m
OK
maxctrl call command mariadbmon fetch-cmd-result MyMonitor
{
"node_id": "mcs3",
"timestamp": "2022-05-05 08:07:51.518268"
}
maxctrl call command mariadbmon async-cs-remove-node MyMonitor mcs3 1m
OK
maxctrl call command mariadbmon fetch-cmd-result MyMonitor
{
"node_id": "mcs3",
"timestamp": "2022-05-05 10:46:46.506947"
}
maxctrl call command mariadbmon async-cs-start-cluster <monitor-name> <timeout>
maxctrl call command mariadbmon async-cs-stop-cluster <monitor-name> <timeout>
maxctrl call command mariadbmon async-cs-start-cluster MyMonitor 1m
OK
maxctrl call command mariadbmon fetch-cmd-result MyMonitor
{
"timestamp": "2022-05-05 09:41:57.140732"
}
maxctrl call command mariadbmon async-cs-stop-cluster MyMonitor 1m
OK
maxctrl call command mariadbmon fetch-cmd-result MyMonitor
{
"mcs1": {
"timestamp": "2022-05-05 09:45:33.779837"
},
<snip>
maxctrl call command mariadbmon async-cs-set-readonly <monitor-name> <timeout>
maxctrl call command mariadbmon async-cs-set-readwrite <monitor-name> <timeout>
maxctrl call command mariadbmon async-cs-set-readonly MyMonitor 30s
OK
maxctrl call command mariadbmon fetch-cmd-result MyMonitor
{
"cluster-mode": "readonly",
"timestamp": "2022-05-05 09:49:18.365444"
}
maxctrl call command mariadbmon async-cs-set-readwrite MyMonitor 30s
OK
maxctrl call command mariadbmon fetch-cmd-result MyMonitor
{
"cluster-mode": "readwrite",
"timestamp": "2022-05-05 09:50:30.718972"
}
cs_admin_port=8641
cs_admin_api_key=somekey123
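Since these two settings are monitor parameters, a ColumnStore monitoring section might look roughly like the following sketch (server names and credentials are placeholders):
[CSMonitor]
type=monitor
module=mariadbmon
servers=mcs1,mcs2,mcs3
user=myuser
password=mypwd
cs_admin_port=8641
cs_admin_api_key=somekey123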
"switchover completed successfully."maxctrl call command mariadbmon cancel-cmd MariaDB-Monitor
OK
MaxCtrl is a command line administrative client for MaxScale which uses the MaxScale REST API for communication. It replaces the legacy MaxAdmin command line client, which is no longer supported or included.
By default, the MaxScale REST API listens on port 8989 on the local host. The
default credentials for the REST API are admin:mariadb. The users used by the
REST API are the same as those used by the MaxAdmin network interface. This
means that any users created for the MaxAdmin network interface should also
work with the MaxScale REST API and MaxCtrl.
For more information about the MaxScale REST API, refer to the REST API documentation and the Configuration Guide.
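For example, with the defaults described above, the following invocation lists the configured servers over the REST API (adjust the user, password and host to match your setup):
maxctrl --user=admin --password=mariadb --hosts=127.0.0.1:8989 list servers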
MaxCtrl does not work when used from a SystemD unit with MemoryDenyWriteExecute=true.
If the file ~/.maxctrl.cnf exists, maxctrl will use any values in the
section [maxctrl] as defaults for command line arguments. For instance,
to avoid having to specify the user and password on the command line,
create the file .maxctrl.cnf in your home directory, with the following
content:
[maxctrl]
u = my-name
p = my-password
Note that all access rights to the file must be removed from everyone other than the owner (for example, with chmod 600 ~/.maxctrl.cnf). MaxCtrl refuses to use the file unless these rights have been removed.
Another file from which to read the defaults can be specified with the -c
flag.
This page is licensed: CC BY-SA / Gnu FDL
Usage: list servers
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
List all servers in MaxScale.
Field | Description
----- | -----------
Server | Server name
Address | Address where the server listens
Port | The port on which the server listens
Connections | Current connection count
State | Server state
GTID | Current value of @@gtid_current_pos
Monitor | The monitor for this server
Usage: list services
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
List all services and the servers they use.
Field | Description
----- | -----------
Service | Service name
Router | Router used by the service
Connections | Current connection count
Total Connections | Total connection count
Targets | Targets that the service uses
Usage: list listeners [service]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
List listeners of all services. If a service is given, only listeners for that service are listed.
Field | Description
----- | -----------
Name | Listener name
Port | The port where the listener listens
Host | The address or socket where the listener listens
State | Listener state
Service | Service that this listener points to
Usage: list monitors
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
List all monitors in MaxScale.
Field | Description
----- | -----------
Monitor | Monitor name
State | Monitor state
Servers | The servers that this monitor monitors
Usage: list sessions
Options:
--rdns Perform a reverse DNS lookup on client IPs [boolean] [default: false]
--version Show version number [boolean]
--help Show help [boolean]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
List all client sessions.
Field | Description
----- | -----------
Id | Session ID
User | Username
Host | Client host address
Connected | Time when the session started
Idle | How long the session has been idle, in seconds
Service | The service where the session connected
Memory | Memory usage (not exhaustive)
Usage: list filters
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
List all filters in MaxScale.
Field | Description
----- | -----------
Filter | Filter name
Service | Services that use the filter
Module | The module that the filter uses
Usage: list modules
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
List all currently loaded modules.
Field | Description
----- | -----------
Module | Module name
Type | Module type
Version | Module version
Usage: list threads
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
List all worker threads.
Field | Description
----- | -----------
Id | Thread ID
Current FDs | Current number of managed file descriptors
Total FDs | Total number of managed file descriptors
Load (1s) | Load percentage over the last second
Load (1m) | Load percentage over the last minute
Load (1h) | Load percentage over the last hour
Usage: list users
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
List the network users that can be used to connect to the MaxScale REST API.
Field | Description
----- | -----------
Name | User name
Type | User type
Privileges | User privileges
Created | When the user was created
Last Updated | The last time the account password was updated
Last Login | The last time the user logged in
Usage: list commands
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
List all available module commands.
Field | Description
----- | -----------
Module | Module name
Commands | Available commands
Usage: list queries
List queries options:
-l, --max-length Maximum SQL length to display. Use --max-length=0 for no limit. [number] [default: 120]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
List all active queries being executed through MaxScale. In order for this command to work, MaxScale must be configured with 'retain_last_statements' set to a value greater than 0.
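A minimal way to satisfy that prerequisite is to set the global parameter in maxscale.cnf and then run the command; the value below is arbitrary:
[maxscale]
retain_last_statements=20

maxctrl list queries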
Usage: show server <server>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Show detailed information about a server. The `Parameters` field contains the currently configured parameters for this server. See `--help alter server` for more details about altering server parameters.
Field | Description
----- | -----------
Server | Server name
Source | File where the object is stored in
Address | Address where the server listens
Port | The port on which the server listens
State | Server state
Version | Server version
Uptime | Server uptime in seconds
Last Event | The type of the latest event
Triggered At | Time when the latest event was triggered at
Services | Services that use this server
Monitors | Monitors that monitor this server
Master ID | The server ID of the master
Node ID | The node ID of this server
Slave Server IDs | List of slave server IDs
Current Connections | Current connection count
Total Connections | Total cumulative connection count
Max Connections | Maximum number of concurrent connections ever seen
Statistics | Server statistics
Parameters | Server parameters
Usage: show servers
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Show detailed information about all servers.
Field | Description
----- | -----------
Server | Server name
Source | File where the object is stored in
Address | Address where the server listens
Port | The port on which the server listens
State | Server state
Version | Server version
Uptime | Server uptime in seconds
Last Event | The type of the latest event
Triggered At | Time when the latest event was triggered at
Services | Services that use this server
Monitors | Monitors that monitor this server
Master ID | The server ID of the master
Node ID | The node ID of this server
Slave Server IDs | List of slave server IDs
Current Connections | Current connection count
Total Connections | Total cumulative connection count
Max Connections | Maximum number of concurrent connections ever seen
Statistics | Server statistics
Parameters | Server parameters
Usage: show service <service>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Show detailed information about a service. The `Parameters` field contains the currently configured parameters for this service. See `--help alter service` for more details about altering service parameters.
Field | Description
----- | -----------
Service | Service name
Source | File where the object is stored in
Router | Router that the service uses
State | Service state
Started At | When the service was started
Users Loaded At | When the users for the service were loaded
Current Connections | Current connection count
Total Connections | Total connection count
Max Connections | Historical maximum connection count
Cluster | The cluster that the service uses
Servers | Servers that the service uses
Services | Services that the service uses
Filters | Filters that the service uses
Parameters | Service parameter
Router Diagnostics | Diagnostics provided by the router module
Usage: show services
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Show detailed information about all services.
Field | Description
----- | -----------
Service | Service name
Source | File where the object is stored in
Router | Router that the service uses
State | Service state
Started At | When the service was started
Users Loaded At | When the users for the service were loaded
Current Connections | Current connection count
Total Connections | Total connection count
Max Connections | Historical maximum connection count
Cluster | The cluster that the service uses
Servers | Servers that the service uses
Services | Services that the service uses
Filters | Filters that the service uses
Parameters | Service parameter
Router Diagnostics | Diagnostics provided by the router module
Usage: show monitor <monitor>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Show detailed information about a monitor. The `Parameters` field contains the currently configured parameters for this monitor. See `--help alter monitor` for more details about altering monitor parameters.
Field | Description
----- | -----------
Monitor | Monitor name
Source | File where the object is stored in
Module | Monitor module
State | Monitor state
Servers | The servers that this monitor monitors
Parameters | Monitor parameters
Monitor Diagnostics | Diagnostics provided by the monitor module
Usage: show monitors
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Show detailed information about all monitors.
Field | Description
----- | -----------
Monitor | Monitor name
Source | File where the object is stored in
Module | Monitor module
State | Monitor state
Servers | The servers that this monitor monitors
Parameters | Monitor parameters
Monitor Diagnostics | Diagnostics provided by the monitor module
Usage: show session <session>
Options:
--rdns Perform a reverse DNS lookup on client IPs [boolean] [default: false]
--version Show version number [boolean]
--help Show help [boolean]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Show detailed information about a single session. The list of sessions can be retrieved with the `list sessions` command. The <session> is the session ID of a particular session.
The `Connections` field lists the servers to which the session is connected and the `Connection IDs` field lists the IDs for those connections.
Field | Description
----- | -----------
Id | Session ID
Service | The service where the session connected
State | Session state
User | Username
Host | Client host address
Port | Client network port
Database | Current default database of the connection
Connected | Time when the session started
Idle | How long the session has been idle, in seconds
Parameters | Session parameters
Client TLS Cipher | Client TLS cipher
Connections | Ordered list of backend connections
Connection IDs | Thread IDs for the backend connections
Queries | Query history
Log | Per-session log messages
Memory | Memory usage (not exhaustive)
Usage: show sessions
Options:
--rdns Perform a reverse DNS lookup on client IPs [boolean] [default: false]
--version Show version number [boolean]
--help Show help [boolean]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Show detailed information about all sessions. See `--help show session` for more details.
Field | Description
----- | -----------
Id | Session ID
Service | The service where the session connected
State | Session state
User | Username
Host | Client host address
Port | Client network port
Database | Current default database of the connection
Connected | Time when the session started
Idle | How long the session has been idle, in seconds
Parameters | Session parameters
Client TLS Cipher | Client TLS cipher
Connections | Ordered list of backend connections
Connection IDs | Thread IDs for the backend connections
Queries | Query history
Log | Per-session log messages
Memory | Memory usage (not exhaustive)
Usage: show filter <filter>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Show detailed information about a filter. The list of services that use this filter is shown in the `Services` field.
Field | Description
----- | -----------
Filter | Filter name
Source | File where the object is stored in
Module | The module that the filter uses
Services | Services that use the filter
Parameters | Filter parameters
Diagnostics | Filter diagnostics
Usage: show filters
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Show detailed information of all filters.
Field | Description
----- | -----------
Filter | Filter name
Source | File where the object is stored in
Module | The module that the filter uses
Services | Services that use the filter
Parameters | Filter parameters
Diagnostics | Filter diagnostics
Usage: show listener <listener>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Field | Description
----- | -----------
Name | Listener name
Source | File where the object is stored in
Service | Services that the listener points to
Parameters | Listener parameters
Usage: show module <module>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
This command shows all available parameters as well as detailed version information of a loaded module.
Field | Description
----- | -----------
Module | Module name
Type | Module type
Version | Module version
Maturity | Module maturity
Description | Short description about the module
Parameters | All the parameters that the module accepts
Commands | Commands that the module provides
Usage: show modules
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Displays detailed information about all modules.
Field | Description
----- | -----------
Module | Module name
Type | Module type
Version | Module version
Maturity | Module maturity
Description | Short description about the module
Parameters | All the parameters that the module accepts
Commands | Commands that the module provides
Usage: show maxscale
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
See `--help alter maxscale` for more details about altering MaxScale parameters.
Field | Description
----- | -----------
Version | MaxScale version
Commit | MaxScale commit ID
Started At | Time when MaxScale was started
Activated At | Time when MaxScale left passive mode
Uptime | Time MaxScale has been running
Config Sync | MaxScale configuration synchronization
Parameters | Global MaxScale parameters
System | System Information
Usage: show thread <thread>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Show detailed information about a worker thread.
Field | Description
----- | -----------
Id | Thread ID
State | The state of the thread
Accepts | Number of TCP accepts done by this thread
Reads | Number of EPOLLIN events
Writes | Number of EPOLLOUT events
Hangups | Number of EPOLLHUP and EPOLLRDUP events
Errors | Number of EPOLLERR events
Avg event queue length | Average number of events returned by one epoll_wait call
Max event queue length | Maximum number of events returned by one epoll_wait call
Max exec time | The longest time spent processing events returned by an epoll_wait call
Max queue time | The longest time an event had to wait before it was processed
Current FDs | Current number of managed file descriptors
Total FDs | Total number of managed file descriptors
Load (1s) | Load percentage over the last second
Load (1m) | Load percentage over the last minute
Load (1h) | Load percentage over the last hour
QC cache size | Query classifier size
QC cache inserts | Number of times a new query was added into the query classification cache
QC cache hits | How many times a query classification was found in the query classification cache
QC cache misses | How many times a query classification was not found in the query classification cache
QC cache evictions | How many times a query classification result was evicted from the query classification cache
Sessions | The current number of sessions
Zombies | The current number of zombie connections, waiting to be discarded
Memory | The current (partial) memory usage
Usage: show threads
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
--kind The kind of threads to display, only the running or all. [string] [choices: "running", "all"] [default: "running"]
Show detailed information about all worker threads.
Field | Description
----- | -----------
Id | Thread ID
State | The state of the thread
Accepts | Number of TCP accepts done by this thread
Reads | Number of EPOLLIN events
Writes | Number of EPOLLOUT events
Hangups | Number of EPOLLHUP and EPOLLRDUP events
Errors | Number of EPOLLERR events
Avg event queue length | Average number of events returned by one epoll_wait call
Max event queue length | Maximum number of events returned by one epoll_wait call
Max exec time | The longest time spent processing events returned by an epoll_wait call
Max queue time | The longest time an event had to wait before it was processed
Current FDs | Current number of managed file descriptors
Total FDs | Total number of managed file descriptors
Load (1s) | Load percentage over the last second
Load (1m) | Load percentage over the last minute
Load (1h) | Load percentage over the last hour
QC cache size | Query classifier size
QC cache inserts | Number of times a new query was added into the query classification cache
QC cache hits | How many times a query classification was found in the query classification cache
QC cache misses | How many times a query classification was not found in the query classification cache
QC cache evictions | How many times a query classification result was evicted from the query classification cache
Sessions | The current number of sessions
Zombies | The current number of zombie connections, waiting to be discarded
Memory | The current (partial) memory usage

Usage: show logging
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
See `--help alter logging` for more details about altering logging parameters.
Field | Description
----- | -----------
Current Log File | The current log file MaxScale is logging into
Enabled Log Levels | List of log levels enabled in MaxScale
Parameters | Logging parameters

Usage: show commands <module>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
This command shows the parameters the command expects with the parameter descriptions.
Field | Description
----- | -----------
Command | Command name
Parameters | Parameters the command supports
Descriptions | Parameter descriptions

Usage: show qc_cache
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Show the contents (statement and hits) of the query classifier cache.

Usage: show dbusers <service>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Show information about the database users of the service.
Field | Description
----- | -----------
User | The user name of the account
Host | The host of the account
Plugin | Authentication plugin
TLS | Whether TLS is required from this user
Super | Does the user have a SUPER grant
Global | Does the user have global database access
Proxy | Whether this is a proxy user
Role | The default role for this user

Usage: set server <server> <state>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Set options:
--force If combined with the `maintenance` state, this forcefully closes all connections to the target server [boolean] [default: false]
Options:
--version Show version number [boolean]
--help Show help [boolean]
If <server> is monitored by a monitor, this command should only be used to set the server into the `maintenance` or the `drain` state. Any other states will be overridden by the monitor on the next monitoring interval. To control server states manually, first stop the monitor with the `stop monitor <name>` command and then set the server states.
When a server is set into the `drain` state, no new connections to it are allowed but existing connections are allowed to gracefully close. Servers with the `Master` status cannot be drained or set into maintenance mode. To clear a state set by this command, use the `clear server` command.
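For example, a server might first be drained and later placed into maintenance; the server name below is a placeholder:

```
# Stop routing new connections to the server, let existing ones finish
maxctrl set server server1 drain

# Take the server out of use entirely
maxctrl set server server1 maintenance
```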
To forcefully close all connections to a server, use `set server <name> maintenance --force`.

Usage: clear server <server> <state>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
This command clears a server state set by the `set server <server> <state>` command.

Usage: enable log-priority <log>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
The `debug` log priority is only available for debug builds of MaxScale.

Usage: disable log-priority <log>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
The `debug` log priority is only available for debug builds of MaxScale.

Usage: create server <name> <host|socket> [port] [params...]
Create server options:
--services Link the created server to these services [array]
--monitors Link the created server to these monitors [array]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
The created server will not be used by any services or monitors unless the --services or --monitors options are given. The list of servers a service or a monitor uses can be altered with the `link` and `unlink` commands. If the <host|socket> argument is an absolute path, the server will use a local UNIX domain socket connection. In this case the [port] argument is ignored.
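For example, the following sketch creates a server and links it to an existing monitor; the name, address and monitor are placeholders:

```
# Create a server object and add it to an existing monitor
maxctrl create server server1 192.0.2.10 3306 --monitors MyMonitor
```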
The recommended way of declaring parameters is with the new `key=value` syntax added in MaxScale 6.2.0. Note that for some parameters (e.g. `extra_port` and `proxy_protocol`) this is the only way to pass them. The redundant option parameters have been deprecated in MaxScale 22.08.

Usage: create monitor <name> <module> [params...]
Create monitor options:
--servers Link the created monitor to these servers. All non-option arguments after --servers are interpreted as server names e.g. `--servers srv1 srv2 srv3`. [array]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
The list of servers given with the --servers option should not contain any servers that are already monitored by another monitor. The last argument to this command is a list of key=value parameters given as the monitor parameters. The redundant option parameters have been deprecated in MaxScale 22.08.
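For example, a MariaDB Monitor could be created and linked to two existing servers as follows; the names and credentials are placeholders:

```
# Create a mariadbmon monitor with monitoring credentials and two servers
maxctrl create monitor MyMonitor mariadbmon user=maxuser password=maxpwd --servers server1 server2
```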

Usage: create service <name> <router> <params...>
Create service options:
--servers Link the created service to these servers. All non-option arguments after --servers are interpreted as server names e.g. `--servers srv1 srv2 srv3`. [array]
--filters Link the created service to these filters. All non-option arguments after --filters are interpreted as filter names e.g. `--filters f1 f2 f3`. [array]
--services Link the created service to these services. All non-option arguments after --services are interpreted as service names e.g. `--services svc1 svc2 svc3`. [array]
--cluster Link the created service to this cluster (i.e. a monitor) [string]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
The last argument to this command is a list of key=value parameters given as the service parameters. If the --servers, --services or --filters options are used, they must be defined after the service parameters. The --cluster option is mutually exclusive with the --servers and --services options.
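For example, a readwritesplit service could be created like this; the names and credentials are placeholders and the servers are assumed to already exist:

```
# Create a read/write splitting service and link two existing servers to it
maxctrl create service MyService readwritesplit user=maxuser password=maxpwd --servers server1 server2
```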
Note that the `user` and `password` parameters must be defined.

Usage: create filter <name> <module> [params...]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
The last argument to this command is a list of key=value parameters given as the filter parameters.

Usage: create listener <service> <name> <port> [params...]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
The new listener will be taken into use immediately. The last argument to this command is a list of key=value parameters given as the listener parameters. These parameters override any parameters set via command line options: e.g. using `protocol=mariadb` will override the `--protocol=cdc` option. The redundant option parameters have been deprecated in MaxScale 22.08.
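For example, the following adds a listener on port 4006 to an existing service; the service and listener names are placeholders:

```
# Accept client connections for MyService on port 4006
maxctrl create listener MyService MyListener 4006
```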

Usage: create user <name> <password>
Create user options:
--type Type of user to create [string] [choices: "admin", "basic"] [default: "basic"]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
By default the created user will have read-only privileges. To make the user an administrative user, use the `--type=admin` option. Basic users can only perform `list` and `show` commands.

Usage: create report <file>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
The generated report contains the state of all the objects in MaxScale as well as any other information needed to diagnose problems.

Usage: destroy server <name>
Destroy options:
--force Remove the server from monitors and services before destroying it [boolean] [default: false]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
The server must be unlinked from all services and monitors before it can be destroyed.

Usage: destroy monitor <name>
Destroy options:
--force Remove monitored servers from the monitor before destroying it [boolean] [default: false]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
The monitor must be unlinked from all servers before it can be destroyed.

Usage: destroy listener { <listener> | <service> <listener> }
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Destroying a listener closes the listening socket, opening it up for immediate reuse. If only one argument is given and it is the name of a listener, it is unconditionally destroyed. If two arguments are given and they are a service and a listener, the listener is only destroyed if it is for the given service.

Usage: destroy service <name>
Destroy options:
--force Remove filters, listeners and servers from service before destroying it [boolean] [default: false]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
The service must be unlinked from all servers and filters. All listeners for the service must be destroyed before the service itself can be destroyed.

Usage: destroy filter <name>
Destroy options:
--force Automatically remove the filter from all services before destroying it [boolean] [default: false]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
The filter must not be used by any service when it is destroyed.

Usage: destroy user <name>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
The last remaining administrative user cannot be removed. Create a replacement administrative user before attempting to remove the last administrative user.

Usage: destroy session <id>
Destroy options:
--ttl Give session this many seconds to gracefully close [number] [default: 0]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
This causes the client session with the given ID to be closed. If the --ttl option is used, the session is given that many seconds to gracefully stop. If no TTL value is given, the session is closed immediately.

Usage: link service <name> <target...>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
This command links targets to a service, making them available for any connections that use the service. A target can be a server, another service or a cluster (i.e. a monitor). Before a server is linked to a service, it should be linked to a monitor so that the server state is up to date. Newly linked targets are only available to new connections; existing connections will use the old list of targets. If a monitor (a cluster of servers) is linked to a service, the service must not have any other targets linked to it.

Usage: link monitor <name> <server...>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Linking a server to a monitor will add it to the list of servers that are monitored by that monitor. A server can be monitored by only one monitor at a time.

Usage: unlink service <name> <target...>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
This command unlinks targets from a service, removing them from the list of available targets for that service. New connections to the service will not use the unlinked targets but existing connections can still use the targets. A target can be a server, another service or a cluster (a monitor).

Usage: unlink monitor <name> <server...>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
This command unlinks servers from a monitor, removing them from the list of monitored servers. The servers will be left in their current state when they are unlinked from a monitor.

Usage: start service <name>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
This starts a service stopped by `stop service <name>`.

Usage: start listener <name>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
This starts a listener stopped by `stop listener <name>`.

Usage: start monitor <name>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
This starts a monitor stopped by `stop monitor <name>`.

Usage: start [services|maxscale]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
This command will execute the `start service` command for all services in MaxScale.

Usage: stop service <name>
Stop options:
--force Close existing connections after stopping the service [boolean] [default: false]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Stopping a service will prevent all the listeners for that service from accepting new connections. Existing connections will still be handled normally until they are closed.

Usage: stop listener <name>
Stop options:
--force Close existing connections after stopping the listener [boolean] [default: false]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Stopping a listener will prevent it from accepting new connections. Existing connections will still be handled normally until they are closed.

Usage: stop monitor <name>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Stopping a monitor will pause the monitoring of the servers. This can be used to manually control server states with the `set server` command.

Usage: stop [services|maxscale]
Stop options:
--force Close existing connections after stopping all services [boolean] [default: false]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
This command will execute the `stop service` command for all services in MaxScale.

Usage: alter server <server> <key=value> ...
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
To display the server parameters, execute `show server <server>`.
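For example, the address and port of a server could be changed as follows; the server name and values are placeholders:

```
# Point the server object at a new address and port
maxctrl alter server server1 address=192.0.2.11 port=3307
```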
The parameters should be given in the `key=value` format. This command also supports the legacy method
of passing parameters as `key value` pairs but the use of this is not recommended.

Usage: alter monitor <monitor> <key=value> ...
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
To display the monitor parameters, execute `show monitor <monitor>`
The parameters should be given in the `key=value` format. This command also supports the legacy method
of passing parameters as `key value` pairs but the use of this is not recommended.
Usage: alter service <service> <key=value> ...
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
To display the service parameters, execute `show service <service>`.
The parameters should be given in the `key=value` format. This command also supports the legacy method
of passing parameters as `key value` pairs but the use of this is not recommended.
Usage: alter service-filters <service> [filters...]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
The order of the filters given as the second parameter will also be the order in which queries pass through the filter chain. If no filters are given, all existing filters are removed from the service.
For example, the command `maxctrl alter service-filters my-service A B C` will set the filter chain for the service `my-service` so that A gets the query first after which it is passed to B and finally to C. This behavior is the same as if the `filters=A|B|C` parameter was defined for the service.
The parameters should be given in the `key=value` format. This command also supports the legacy method
of passing parameters as `key value` pairs but the use of this is not recommended.
Usage: alter filter <filter> <key=value> ...
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
To display the filter parameters, execute `show filter <filter>`. Some filters support runtime configuration changes to all parameters. Refer to the filter documentation for details on whether it supports runtime configuration changes and which parameters can be altered.
The parameters should be given in the `key=value` format. This command also supports the legacy method
of passing parameters as `key value` pairs but the use of this is not recommended.
Note: To pass options with dashes in them, surround them in both single and double quotes:
maxctrl alter filter my-namedserverfilter target01 '"->master"'
Usage: alter listener <listener> <key=value> ...
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
To display the listener parameters, execute `show listener <listener>`
The parameters should be given in the `key=value` format. This command also supports the legacy method
of passing parameters as `key value` pairs but the use of this is not recommended.
Usage: alter logging <key=value> ...
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
To display the logging parameters, execute `show logging`
The parameters should be given in the `key=value` format. This command also supports the legacy method
of passing parameters as `key value` pairs but the use of this is not recommended.
Usage: alter maxscale <key=value> ...
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
To display the MaxScale parameters, execute `show maxscale`.
The parameters should be given in the `key=value` format. This command also supports the legacy method
of passing parameters as `key value` pairs but the use of this is not recommended.
Usage: alter user <name> <password>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Changes the password for a user. To change the user type, destroy the user and then create it again.
Usage: alter session <session> <key=value> ...
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Alter parameters of a session. To get the list of modifiable parameters, use `show session <session>`
The parameters should be given in the `key=value` format. This command also supports the legacy method
of passing parameters as `key value` pairs but the use of this is not recommended.
Usage: alter session-filters <session> [filters...]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
The order of the filters given as the second parameter will also be the order in which queries pass through the filter chain. If no filters are given, all existing filters are removed from the session. The syntax is similar to `alter service-filters`.
Usage: rotate logs
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
This command is intended to be used with the `logrotate` command.
Usage: reload service <service>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Usage: reload tls
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
This command reloads the TLS certificates for all listeners and servers as well as the REST API in MaxScale. The REST API JWT signature keys are also rotated by this command.
Usage: reload session <id>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
This command reloads the configuration of a session. When a session is reloaded, it internally restarts the MaxScale session. This means that new connections are created and taken into use before the old connections are discarded. The session will use the latest configuration of the service the listener it used pointed to. This means that the behavior of the session can change as a result of a reload if the configuration has changed. If the reloading fails, the old configuration will remain in use. The external session ID of the connection will remain the same as well as any statistics or session level alterations that were done before the reload.
Usage: reload sessions
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
This command reloads the configuration of all sessions. When a session is reloaded, it internally restarts the MaxScale session. This means that new connections are created and taken into use before the old connections are discarded. The session will use the latest configuration of the service the listener it used pointed to. This means that the behavior of the session can change as a result of a reload if the configuration has changed. If the reloading fails, the old configuration will remain in use. The external session ID of the connection will remain the same as well as any statistics or session level alterations that were done before the reload.
Usage: call command <module> <command> [params...]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
Options:
--version Show version number [boolean]
--help Show help [boolean]
To inspect the list of module commands, execute `list commands`
Usage: get <resource> [path]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
API options:
--sum Calculate sum of API result. Only works for arrays of numbers e.g. `api get --sum servers data[].attributes.statistics.connections`. [boolean] [default: false]
--pretty Pretty-print output. [boolean] [default: false]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Perform a raw REST API call. The path definition uses JavaScript syntax to extract values. For example, the following command extracts all server states as an array of JSON values: maxctrl api get servers data[].attributes.state
Usage: post <resource> <value>
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
API options:
--sum Calculate sum of API result. Only works for arrays of numbers e.g. `api get --sum servers data[].attributes.statistics.connections`. [boolean] [default: false]
--pretty Pretty-print output. [boolean] [default: false]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Perform a raw REST API call. The provided value is passed as-is to the REST API after building it with JSON.parse.
Usage: patch <resource> [path]
Global Options:
-c, --config MaxCtrl configuration file [string] [default: "~/.maxctrl.cnf"]
-u, --user Username to use [string] [default: "admin"]
-p, --password Password for the user. To input the password manually, use -p '' or --password='' [string] [default: "mariadb"]
-h, --hosts List of MaxScale hosts. The hosts must be in HOST:PORT format and each value must be separated by a comma. [string] [default: "127.0.0.1:8989"]
-t, --timeout Request timeout in plain milliseconds, e.g '-t 1000', or as duration with suffix [h|m|s|ms], e.g. '-t 10s' [string] [default: "10000"]
-q, --quiet Silence all output. Ignored while in interactive mode. [boolean] [default: false]
--tsv Print tab separated output [boolean] [default: false]
--skip-sync Disable configuration synchronization for this command [boolean] [default: false]
HTTPS/TLS Options:
-s, --secure Enable HTTPS requests [boolean] [default: false]
--tls-key Path to TLS private key [string]
--tls-passphrase Password for the TLS private key [string]
--tls-cert Path to TLS public certificate [string]
--tls-ca-cert Path to TLS CA certificate [string]
-n, --tls-verify-server-cert Whether to verify server TLS certificates [boolean] [default: true]
API options:
--sum Calculate sum of API result. Only works for arrays of numbers e.g. `api get --sum servers data[].attributes.statistics.connections`. [boolean] [default: false]
--pretty Pretty-print output. [boolean] [default: false]
Options:
--version Show version number [boolean]
--help Show help [boolean]
Perform a raw REST API call. The provided value is passed as-is to the REST API after building it with JSON.parse.
The nosqlprotocol module allows a MariaDB server or cluster to be
used as the backend of an application using a MongoDB® client library.
Internally, all documents are stored in a table containing two columns:
an id column for the object id and a doc column for the document itself.
When the MongoDB® client application issues MongoDB protocol commands, either directly or indirectly via the client library, they are transparently converted into the equivalent SQL and executed against the MariaDB backend. The MariaDB responses are then in turn converted into the format expected by the MongoDB® client library and application.
There are a number of optional parameters with which the behavior of nosqlprotocol can be adjusted. A minimal configuration looks like:
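```
# A sketch of a nosqlprotocol listener; the service name and port are illustrative.
[NoSQL-Listener]
type=listener
service=TheService
protocol=nosqlprotocol
nosqlprotocol.user=the_user
nosqlprotocol.password=the_password
port=17017
```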
nosqlprotocol.user and nosqlprotocol.password specify the
credentials that will be used when accessing the backend database or
cluster. Note that the same credentials will be used for all connecting
MongoDB® clients.
Since nosqlprotocol is a listener, there must be a service to which the client requests will be sent. Nosqlprotocol places no limitations on what filters, routers or backends can be used.
To configure the same listener with MaxCtrl, the parameters must be passed in a JSON object in the following manner:
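A sketch of such a command, assuming the illustrative service name and port used above; the exact quoting depends on the shell:
```
maxctrl create listener TheService NoSQL-Listener 17017 \
    protocol=nosqlprotocol \
    'nosqlprotocol={"user": "the_user", "password": "the_password"}'
```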
All the parameters that the nosqlprotocol module takes must be passed in the same JSON object.
A complete example can be found at the end of this document.
Nosqlprotocol supports SCRAM authentication as implemented by MongoDB®.
The mechanisms SCRAM-SHA-1 and SCRAM-SHA-256 are both supported.
If nosqlprotocol has been setup so that no authentication is required, then when connecting only the host and port should be provided, but neither a username nor a password.
For instance, if the MongoDB Node.JS Driver is used, then the connection string should look like:
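Assuming the NoSQL listener is on port 17017 of the local host:
```
mongodb://127.0.0.1:17017
```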
Similarly, if the Mongo Shell is used, only the host and port should be provided:
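Again assuming port 17017 on the local host:
```
mongo --host 127.0.0.1 --port 17017
```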
A MariaDB user consists of a name and a host part. A user 'user'@'%'
and a user 'user'@'127.0.0.1' are completely different. The host part
specifies where a user may connect from, with % being a wildcard that
matches all hosts. What data a user is allowed to access and modify is
specified by what privileges are granted to the user.
A NoSQL user is somewhat different. It is created in the context of a
particular database, so there may be a user userx in the database dbA
and a different user with the same name userx in the database dbB. What
hosts a user may connect from can be restricted, but that is a property of
the user and not an implicit part of it. What data a user is allowed to
access and modify is specified by the roles that have been assigned to
the user.
From the above it should be clear that there is not a 1-to-1 correspondence between the concept of a user in NoSQL and the concept of a user in MariaDB, but that some additional conventions are needed.
To make it possible to have different NoSQL users with the same name, the database in whose context the user is created is prepended to the user name, separated with a dot, when the MariaDB user is created.
This is perhaps easiest to illustrate using an example:
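For instance, a NoSQL user bob could be created in the context of the test database with the Mongo Shell; the password and roles below are illustrative:
```
use test
db.createUser({user: "bob", pwd: "bobspassword", roles: ["readWrite"]})
```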
Currently there are two user accounts defined. Even though there is
a user bob, creating a NoSQL user bob succeeds.
If we now, from the MariaDB prompt, check the users we will see:
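A sketch of the check; only the rows relevant to the example are shown:
```
SELECT user, host FROM mysql.user;
-- ...
-- bob        %
-- test.bob   %
```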
The MariaDB user corresponding to the NoSQL user bob, created in the
context of the database test, has test as a prefix.
mariadb database
The fact that NoSQL users have the database embedded in the MariaDB name may be inconvenient if the same data is accessed both as NoSQL via nosqlprotocol and as SQL directly from MariaDB. It also makes it impossible to use an existing MariaDB account from NoSQL.
To provide a solution for this problem, the database mariadb is treated
in a specific fashion. A user created in the context of the mariadb
database is created in the MariaDB server without the database prefix.
If we now try to create a user bob in the mariadb database it will fail,
because the user 'bob'@'%' exists already.
If we create a user with another name it will succeed.
And if we check the situation from MariaDB,
we will see that alice was created without a database prefix.
When creating a user nosqlprotocol accepts all roles as predefined by MongoDB®, but not all of them are translated into GRANT privileges. The following table shows what privilege(s) a particular role is converted to.
The following roles are shorthands for several other roles.
dbOwner differs from root in that the privileges of the former
apply only to a particular database, while the privileges of the
latter apply to all databases. However, the role root can
only be assigned to a user in the admin database.
In addition there are AnyDatabase versions of dbAdmin, read and readWrite (e.g. readAnyDatabase) that can be assigned to a user in
the admin database. If so, then the privilege is granted on *.*,
otherwise on <db>.*.
If the root role is assigned to a user in the admin database,
then the privileges are granted on *.*, otherwise on <db>.*.
Other pre-defined roles are recognized and stored in the local nosqlprotocol account database, but they do not affect what privileges are granted to the MariaDB user. Currently user-defined roles are not supported.
In terms of authentication, nosqlprotocol can be used in three different ways:
Anonymously
Shared credentials
Unique credentials
If there is an anonymous user on the MariaDB server and if nosqlprotocol is configured without a user/password, then all nosqlprotocol clients will access the MariaDB server as anonymous users.
Note that the anonymous MariaDB user is only intended for testing and should in general not be used, but deleted.
If nosqlprotocol is configured with
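```
nosqlprotocol.user=the_user
nosqlprotocol.password=the_password
```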
then each MongoDB® client will use those credentials when accessing the MariaDB server. Note that from the perspective of the MariaDB server, it is not possible to distinguish between different MongoDB® clients.
If nosqlprotocol authentication has been taken into use and a MongoDB® client authenticates, either when connecting or later, then the credentials of the MongoDB® client will be used when accessing the MariaDB server.
Note that even if nosqlprotocol authentication has been enabled, authentication
is not required, and if the MongoDB® client has not authenticated itself, the
credentials specified with nosqlprotocol.[user|password] (or the anonymous
user) will be used when accessing the MariaDB server.
To enforce authentication, specify
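```
nosqlprotocol.authentication_required=true
```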
in the configuration. If authentication is required, then any command that requires access to the MariaDB server will fail, unless the client has authenticated.
By default nosqlprotocol does no authorization. However, a nosqlprotocol client is always subject to the authorization performed by the MariaDB server.
When nosqlprotocol authorization is enabled by adding
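```
nosqlprotocol.authorization_enabled=true
```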
to the configuration file, some commands will be subject to authorization, by nosqlprotocol. The following table lists the commands and what role they require.
It is important to note that even if nosqlprotocol authorization is enabled, the MariaDB server has the final word. That is, even if the roles of a user would be sufficient for a particular operation, if the granted privileges are not, the operation will not succeed. There may be a mismatch between roles and grants, for instance, if the wrong roles were specified when the user was added, or if the grants have been altered directly and not via nosqlprotocol.
The authentication/authorization can be bootstrapped explicitly or implicitly. Bootstrapping explicitly provides more control, while bootstrapping implicitly is much more convenient.
In order to enable authorization you need to have NoSQL users; those can either be created as new users or added from existing MariaDB accounts.
If you want to create a user, then you first need to configure nosqlprotocol with credentials that are sufficient for creating a user:
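```
# Credentials of a MariaDB account that is allowed to create users and
# grant privileges; the names are illustrative.
nosqlprotocol.user=maxscale_admin
nosqlprotocol.password=maxscale_admin_password
```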
At this point nosqlprotocol.authentication_required and nosqlprotocol.authorization_enabled should both be false. Note that
as those are their default values, they do not have to be specified.
Start MaxScale and connect to it with the MongoDB® command line client
Then create the user.
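For example, a user administrator could be created along the following lines; the name, password and roles are illustrative:
```
use admin
db.createUser({
    user: "user_admin",
    pwd: "user_admin_password",
    roles: ["userAdmin"]
})
```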
Alternatively you can add an existing user. Note that it should be
added to the mariadb database, unless it was created with the
convention of having the database as a prefix, e.g. db.bob.
Now you should shutdown MaxScale and add the entries
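```
nosqlprotocol.authentication_required=true
nosqlprotocol.authorization_enabled=true
```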
and start MaxScale.
The nosqlprotocol.user and nosqlprotocol.password entries can be removed, but as they will be ignored
when nosqlprotocol.authentication_required=true is present, removing them is not mandatory.
If you now try to create a user when not having been authenticated or
when authenticated as a user without the userAdmin role, the result
will be:
NOTE When a client authenticates, the password will not be transferred in cleartext over the network, so, even without SSL, it is not possible to gain access to a password by monitoring the network traffic.
However, when a user is created or added (or the password is changed), the password will be transferred in cleartext. To prevent eavesdropping, create/add users when connecting over a domain socket, or use TLS.
With implicit bootstrapping, you should first create the MariaDB user that should appear as the initial NoSQL user. As explained earlier, the concept of a user is somewhat different in MariaDB and NoSQL, which means that certain factors must be taken into account when creating the MariaDB user. Then at first startup, nosqlprotocol will create the corresponding NoSQL user, which will enable the authenticated and authorized use of nosqlprotocol.
When MaxScale is started, if the following hold
nosqlprotocol.authentication_required and nosqlprotocol.authorization_enabled are true in the configuration
section of the nosqlprotocol listener,
nosqlprotocol.user and nosqlprotocol.password are provided, and
there are no NoSQL users in the NoSQL account database.
then, MaxScale will
wait until the primary of the service pointed to by the listener is available,
connect using the credentials specified in nosqlprotocol.user
and nosqlprotocol.password,
execute SHOW GRANTS and, based on the grants returned, create the corresponding NoSQL user.
Immediately thereafter it is possible to connect to the nosqlprotocol port with a MongoDB® client using the specified credentials.
Note that after the bootstrapping, nosqlprotocol will not use
the user and password settings and they can be removed.
Grants
When a NoSQL user is created explicitly, the MariaDB grants are obtained from the specified NoSQL roles as explained earlier.
When implicitly creating a NoSQL user from an existing user in MariaDB, the inverse operation must be performed. There are many factors that affect what NoSQL roles the grants of a user are translated into:
whether the user is a regular or admin user,
whether the privileges are on *.* or some specific db.*, and
the privileges themselves, e.g. SELECT, DELETE, etc.
In NoSQL, every user resides in a specific database. Note that this does not mean that the database would have to exist in MariaDB.
When it comes to users, the database effectively means a scope, which in the case of nosqlprotocol is handled by prefixing the corresponding MariaDB user name with the database/scope name.
When creating a user to be used from NoSQL, there are three options for the user's name:
The name can be of the format some_db.user_name wheresome_db can be anything (subject to the naming rules of
MariaDB), except admin or mariadb. In this case, the
user will be a regular user, who can access data in
databases that she has been granted access to.
The name can be of the format admin.user_name where admin
is exactly just that. In this case, the user will be an
admin user, who can access any database.
What database the privileges can be specified ON depends on
what kind of user is being created.
If it is a regular user, the privileges must be granted on a
specific database, such as `dbA`.*. Note that there is no
dependency between this database and the (conceptual) database
the user resides in.
If it is an admin user, the privileges must be granted on
the *.* database.
In NoSQL, a role can be database specific or generic. However,
a generic role can only be assigned to a user in the admin
database. In practice this means that if the privileges are
on *.*, then the user must reside in the admin database
(e.g. admin.bob) or it is treated as an error.
The following table shows what privileges are required for a
role to be assigned. Note that ALL PRIVILEGES can be used as
well.
Only required if the user is an admin user.
The AnyDatabase version will be assigned, if the user is
an admin user.
If certain roles are assigned, then other roles will be assigned as well.
If the roles dbAdmin, readWrite and userAdmin are
assigned, then dbOwner will be assigned as well.
If the roles dbAdminAnyDatabase, readWriteAnyDatabase
and userAdminAnyDatabase are assigned, then root will
be assigned as well.
Once the user has been created and the desired privileges have been granted, the NoSQL listener should be configured as follows:
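```
# 'the_user' and 'the_password' refer to the MariaDB account created above;
# the names are illustrative.
nosqlprotocol.user=the_user
nosqlprotocol.password=the_password
nosqlprotocol.authentication_required=true
nosqlprotocol.authorization_enabled=true
```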
At MaxScale startup, the NoSQL user will then be created.
Examples
Admin User
We want the initial NoSQL user to be an administrator, with full rights.
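A sketch of the MariaDB statements; the password is illustrative:
```
CREATE USER 'admin.nosql_admin'@'%' IDENTIFIED BY 'nosql_admin_password';
GRANT ALL PRIVILEGES ON *.* TO 'admin.nosql_admin'@'%';
```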
As we want an admin user, the name is prefixed with admin,
which will have that effect. And since it is an admin user, the
privileges are granted ON *.*.
Thereafter, we specify the following in the configuration file,
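```
# Added to the nosqlprotocol listener section; the password is illustrative.
nosqlprotocol.user=admin.nosql_admin
nosqlprotocol.password=nosql_admin_password
nosqlprotocol.authentication_required=true
nosqlprotocol.authorization_enabled=true
```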
and start MaxScale.
As the creation of the initial user can be made only after the monitor for the listener's service has marked one server as primary, whether the creation succeeded or not must be checked from MaxScale's log file:
Under normal conditions, the bootstrapping will be almost instantaneous.
It is now possible to connect using any MongoDB® client application.
Note that when connecting the user is passed as nosql_admin and not
as admin.nosql_admin. The fact that we want to authenticate against
the admin database is expressed by passing the database as the last
argument.
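For instance, with the Mongo Shell; host, port and password are illustrative:
```
mongo --host 127.0.0.1 --port 17017 -u nosql_admin -p nosql_admin_password admin
> db.getUser("nosql_admin")
```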
As can be seen, the user has the AnyDatabase roles on the admin
database, which means that all databases can be accessed and
modified, and that new users can be created.
Test User
We want the initial NoSQL user to be a user with limited rights, intended to be used for testing.
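A sketch of the MariaDB statements; the password and the exact privilege list are illustrative:
```
CREATE USER 'test.test_user'@'%' IDENTIFIED BY 'test_user_password';
GRANT SELECT, INSERT, UPDATE, DELETE ON test.* TO 'test.test_user'@'%';
```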
As we want a user with limited rights, the name is not prefixed
with admin. The privileges are granted specifically on database test.*. Indeed, if *.* had been used, the creation of the initial
NoSQL user would have failed with an error. Here, the user is created
in the same database that the user is given access to, but it could
have been another one. Further, several GRANT statements could have
been used, had we wanted to give access to several databases.
Thereafter, we specify the following in the configuration file,
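```
# Added to the nosqlprotocol listener section; the password is illustrative.
nosqlprotocol.user=test.test_user
nosqlprotocol.password=test_user_password
nosqlprotocol.authentication_required=true
nosqlprotocol.authorization_enabled=true
```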
and start MaxScale.
As the creation of the initial user can be made only after the monitor for the listener's service has marked one server as primary, whether the creation succeeded or not must be checked from MaxScale's log file:
Under normal conditions, the bootstrapping will be almost instantaneous.
It is now possible to connect using any MongoDB® client application.
Note that when connecting the user is passed as test_user and not
as test.test_user. The fact that we want to authenticate against
the test database is expressed by passing the database as the last
argument.
As can be seen, the user has the readWrite role on the test database,
which means that only the test database can be accessed and modified.
Since nosqlprotocol is a regular protocol module used in a listener,
the TLS/SSL support of listeners is available. Please see
for details.
So as to be able to connect to the MariaDB server on behalf of clients, nosqlprotocol must know their password. As the password is not transferred to nosqlprotocol during the authentication in a way that could be used when logging into MariaDB, the password must be stored when the user is created or added.
Note that the password is not stored in cleartext but as three
different hashes; hashed with sha1 for use with MariaDB, salted
and hashed with sha1 for use with the SCRAM-SHA-1 authentication
mechanism (if that is enabled for the user) and salted and hashed
with sha256 for use with the SCRAM-SHA-256 authentication mechanism
(if that is enabled for the user).
The account information can be stored privately, in which case it can be used only by a particular MaxScale instance, or in a shared manner, in which case multiple MaxScale instances can share the information and a user created/added on one instance can be used on another.
In the private case, the account information of nosqlprotocol is
stored in a local database
whose name is <libdir>/nosqlprotocol/<listener-name>-v1.db,
where <libdir> is the libdir of MaxScale, typically /var/lib/maxscale, <listener-name> is the name of the
listener section in the MaxScale configuration file, and -v1
a suffix for making schema evolution easier, should there be
a need for that.
For instance, given a configuration like
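```
# The service name and port are illustrative.
[NoSQL-Listener]
type=listener
service=TheService
protocol=nosqlprotocol
port=17017
```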
the account information will be stored in the file <libdir>/nosqlprotocol/NoSQL-Listener-v1.db.
Note that since the database name is derived from the listener name, changing the name of the listener in the configuration file will have the effect of making all accounts disappear. To retain the accounts, the database file should also be renamed.
At first startup, the nosqlprotocol directory and
the file NoSQL-Listener-v1.db will be created. They will
be created with file permissions that only allow MaxScale
access. At subsequent startups the permissions will be checked
and MaxScale will refuse to start if the permissions allow
access to others.
We strongly recommend that no manual modifications are made to the database.
Note that we make no guarantees that the way in which the
account information is stored by nosqlprotocol will remain the
same even between maintenance releases. We do guarantee,
however, that even if the way in which the account information is
stored changes, existing account information will automatically
be converted and no manual intervention, such as re-creation of
accounts, will be needed.
In the shared case, the account information of nosqlprotocol is stored in the cluster of the service in front of which the NoSQL listener resides. The primary of the cluster will be used both for reading and writing data.
A table whose name is the same as the listener's name in the
MaxScale configuration will be created in the database
specified with the authentication_db
parameter. If it is not specified explicitly, the default is nosqlprotocol. The name of the table will be the name of
the listener section in the MaxScale configuration file.
For instance, given a configuration like
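```
# The service name, port and credentials are illustrative.
[NoSQL-Listener]
type=listener
service=TheService
protocol=nosqlprotocol
port=17017
nosqlprotocol.authentication_shared=true
nosqlprotocol.authentication_user=account_user
nosqlprotocol.authentication_password=account_password
```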
the account information will be stored in the table nosqlprotocol.NoSQL-Listener.
Note that since the table name is derived from the listener name, changing the name of the listener in the configuration file will have the effect of making all accounts disappear. To retain the accounts, the table should also be renamed.
nosqlprotocol will create the table when needed, so the
user specified with authentication_user
must have sufficient grants to be able to do that.
nosqlprotocol will store in the table data that allows
any MaxScale instance to authenticate a MongoDB® client, irrespective
of which MaxScale instance was used when the user was created.
nosqlprotocol also stores in the table the SHA1 of a user's
password, to be able to authenticate against the MariaDB server.
Therefore it is strongly suggested to enable encryption key
management in MaxScale and to provide an encryption
key ID with authentication_key_id so
that the data will be encrypted.
If shared authentication has been enabled with authentication_shared, then authentication_user and authentication_password must also be provided. With authentication_db the database name can optionally be changed, and with authentication_key_id an encryption key ID, using which the sensitive data is encrypted, can optionally be provided.
Note that we make no guarantees that the table in which the
account information is stored by nosqlprotocol will remain the
same even between maintenance releases. We do guarantee,
however, that even if the way in which the account information is
stored changes, existing account information will automatically
be converted and no manual intervention, such as re-creation of
accounts, will be needed.
Nosqlprotocol fully supports wire protocol version 6 and only provides rudimentary support for earlier wire protocol versions, but reports at startup that it would support versions 0 to 6. The reason is that some client libraries are buggy and use an old wire protocol version if the server claims to support only version 6. Consequently, one should use a client library version that at least supports wire protocol version 6.
As the goal of nosqlprotocol is to implement, to the extent that it is feasible, the wire protocol and the database commands the way MongoDB® implements them, it should be possible to use any language specific driver.
However, during the development of nosqlprotocol, the only client library that has been verified to work is version 3.6 of MongoDB Node.JS Driver.
Using the following parameters, the behavior of nosqlprotocol can be
adjusted. As they are not generic listener parameters, but specific to nosqlprotocol, they must be qualified with the `nosqlprotocol.` prefix.
For instance:
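```
nosqlprotocol.on_unknown_command=return_empty
```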
user
Type: string
Mandatory: No
Default: ""
Specifies the user to be used when connecting to the backend, if the MongoDB® client is not authenticated.
password
Type: string
Mandatory: No
Default: ""
Specifies the password to be used when connecting to the backend, if the MongoDB® client is not authenticated. Note that the same user/password combination will be used for all unauthenticated MongoDB® clients connecting to the same listener port.
authentication_required
Type: boolean
Mandatory: No
Default: false
Specifies whether the client always must authenticate. If authentication is required,
it does not matter whether user and password have been specified, the client must
authenticate.
Authentication should not be required before users have been created or added, with authentication being optional and authorization being disabled.
NOTE: All client activity is always subject to authorization performed by the MariaDB server.
authentication_shared
Type: boolean
Mandatory: No
Default: false
Specifies whether the NoSQL account information should be stored in a shared manner or privately.
authentication_db
Type: string
Mandatory: No
Default: "NoSQL"
Specifies the database of the table where the NoSQL account information
is stored, if authentication_shared is true. If the database does not
exist, nosqlprotocol will attempt to create it, so either is should be
manually created or the used specified with authentication_user should
have the grants required to do so.
authentication_key_id
Type: string
Mandatory: No
Default: ""
The encryption key ID using which the NoSQL account information should be encrypted when stored in the MariaDB server. If an encryption key ID is given, the encryption key manager in MaxScale must also be enabled.
The encryption key must be a 256-bit key. Keys of shorter length are rejected as invalid encryption keys.
authentication_user
Type: string
Mandatory: Yes, if authentication_shared is true.
Specifies the user to be used when modifying and accessing the NoSQL account information stored in the MariaDB server.
authentication_password
Type: string
Mandatory: No
Default: ""
Specifies the password of authentication_user.
authorization_enabled
Type: boolean
Mandatory: No
Default: false
Specifies whether nosqlprotocol itself should perform authorization in the context of certain user management commands. Authorization should not be enabled before users have been created or added while authorization is still disabled.
NOTE: All client activity is always subject to authorization performed by the MariaDB server.
host
Type: string
Mandatory: No
Default: "%"
Specifies the host to be used when a MariaDB user is created via nosqlprotocol.
By default all users are created as ...@'%', which means that it is possible to
connect to the MariaDB server from any host using the credentials of the created
user. For tighter security, the IP-address of the MaxScale host can be specified.
NOTE: This value does not specify from which host it is allowed to connect to MaxScale.
on_unknown_command
Type: enumeration
Mandatory: No
Values: return_error, return_empty
Default: return_error
Specifies what should happen in case a client sends an unrecognized command.
Enumeration values:
return_error: An error document is returned.
return_empty: An empty document is returned.
log_unknown_command
Type: boolean
Mandatory: No
Default: false
Specifies whether an unknown command should be logged. This is primarily for debugging purposes, to find out whether a client uses a command that currently is not supported.
auto_create_databases
Type: boolean
Mandatory: No
Default: true
Specifies whether databases should automatically be created, as needed.
Note that setting this parameter to true, without also setting auto_create_tables to true, has no effect at all.
auto_create_tables
Type: boolean
Mandatory: No
Default: true
Specifies whether tables should automatically be created, as needed.
Note that this applies only if the relevant database already exists.
If a database should also be created if needed, then auto_create_databases
must also be set to true.
id_length
Type: count
Mandatory: No
Range: [35, 2048]
Default: 35
Specifies the length of the id column in tables that are automatically created.
ordered_insert_behavior
Type: enumeration
Mandatory: No
Values: atomic, default
Default: default
Enumeration values:
default: Each document is inserted using a separate INSERT, either in a
multi-statement or in a compound statement. Whether an error causes the remaining
insertions to be aborted, depends on the value of ordered specified in the
insert command.
atomic: If the value of ordered in the insert command is true
(the default) then all documents are inserted using a single INSERT statement,
that is, either all insertions succeed or none will. If the value of ordered is false, the behavior is the same as in the default case.
What combination of ordered_insert_behavior and ordered (in the insert command
document) is used has an impact on the performance. Please see the discussion at .
cursor_timeout
Type: duration
Mandatory: No
Default: 60s
Specifies how long a cursor can be idle, that is, not accessed, before it is automatically closed.
debug
Type: enumeration
Mandatory: No
Values: none, in, out, back
Specifies what should be logged as notice messages.
Enumeration values:
none: Nothing is logged.
in: The incoming protocol command is logged.
out: The outgoing SQL sent to the backend is logged.
back: The response sent back to the client is logged.
So, to have the incoming command, the corresponding SQL sent to the backend, and the resulting response sent to the client logged, specify all three values, for example:
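nosqlprotocol.debug=in,out,back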
internal_cache
Type: string
Mandatory: No
Default: ''
Specifies what internal cache to use, if any. Currently, the only
permissible value is cache, which refers to the cache filter.
Please see the section on the internal cache below for more information.
By default, nosqlprotocol automatically creates databases as needed.
The default behavior can be changed by setting auto_create_databases to
false. In that case, databases must manually be created.
Each MongoDB® collection corresponds to a MariaDB table with the same name. However, it is always possible to access a collection irrespective of whether the corresponding table exists or not; it will simply appear to be empty.
Inserting documents into a collection, whose corresponding table does not
exist, succeeds, provided auto_create_tables is true, as the table will
in that case be created.
When nosqlprotocol creates a table, it uses a statement like
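CREATE TABLE name (id VARCHAR(35) AS (JSON_COMPACT(JSON_EXTRACT(doc, "$._id"))) UNIQUE KEY,
                   doc JSON,
                   CONSTRAINT id_not_null CHECK(id IS NOT NULL));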
where the length of the VARCHAR is specified by the value of id_length,
whose default and minimum is 35.
NOTE: If the tables are created manually, then the CREATE statement must contain a similar AS-clause as the one above and should contain
a similar constraint.
Note that nosqlprotocol does not in any way verify that the table
corresponding to a collection being accessed or modified does indeed
have the expected columns id and doc of the expected types; it
simply uses the table, and the operation will fail if the layout is not
the expected one.
To reduce the risk of confusion, the recommendation is to use a specific database for tables that contain documents.
The following operators are currently supported.
$eq
$gt
$gte
$in
$and
$not
$nor
$or
$exists
$type
$type
When $type is used, it will be converted into a condition involving one or more comparisons. The following subset
of types can be used in $type queries:
The "number" alias is supported and will match values whose MariaDB type isDOUBLE or INTEGER.
$mod
$regex
$all
$elemMatch
$size
$elemMatch
As arguments, only the operators $eq and $ne are supported.
$bit
$currentDate
$inc
$max
The following commands are supported. For each command, the fields that are relevant to it are specified.
All non-listed fields are ignored; their presence or absence has no impact, unless otherwise explicitly specified.
The following fields are relevant.
The following fields are relevant.
The following fields are relevant.
Each element of the deletes array contains the following fields:
The following fields are relevant.
All other fields are ignored.
Projection
The projection parameter determines which fields are returned in the matching documents.
The projection parameter takes a document of the following form:
If a projection document is not provided or if it is empty, the entire document
will be returned.
Embedded Field Specification
For fields in embedded documents, the field can be specified using:
dot notation; e.g. "field.nestedfield": <value>
In particular, specifying fields in embedded documents using nested form is not supported.
_id Field Projection
The _id field is included in the returned documents by default unless you
explicitly specify _id: 0 in the projection to suppress the field.
Inclusion or Exclusion
A projection cannot contain both include and exclude specifications,
with the exception of the _id field:
In projections that explicitly include fields, the _id field is the only field that can be explicitly excluded.
In projections that explicitly exclude fields, the _id field is the only field that can be explicitly included; however, the _id field is included by default.
NOTE: Currently _id is the only field that can be excluded, and only
if other fields are explicitly included. Exclusion of fields other than _id is not supported.
Filtering by _id
Note that there is a significant difference between filtering with { _id: <value> }
and with { _id: { $eq: <value> }}.
In the former case the generated WHERE clause uses the indexed column id;
in the latter it does not, as illustrated below.
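For example:
> db.runCommand({find: "collection", filter: { _id: 4711 }});
... WHERE (id = '4711')
> db.runCommand({find: "collection", filter: { _id: { $eq: 4711 }}});
... WHERE (JSON_EXTRACT(doc, '$._id') = 4711)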
The following fields are relevant.
All other fields are ignored.
The following fields are relevant.
The following fields are relevant.
The insert command inserts one or more documents into the table whose
name is the same as that of the collection. If the option auto_create_tables
is true, then the table is created if it does not already exist. If the
value is false, then the insert will fail unless the table already exists.
The following fields are relevant.
ordered
The impact of ordered is dependent upon the value of ordered_insert_behavior.
default
In this case ordered has the same impact as in MongoDB®. That is, if the value
is true, then when an insert of a document fails, return without inserting any
remaining documents listed in the inserts array. If false, then when an insert
of a document fails, continue to insert the remaining documents.
atomic
If ordered is true, then all documents will be inserted using a single
INSERT command. That is, if the insertion of any document fails, for instance,
due to a duplicate id, then no document will be inserted. If ordered is false,
then the behavior is identical with that of default.
Performance
The combination of ordered_insert_behavior and ordered that is used has an
impact on performance.
Of these, atomic + true is the fastest and atomic/default + false the slowest,
being roughly twice as slow. The performance of default + true is halfway between
the two.
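For example, to favor the faster atomic behavior, the listener configuration could contain the following line, a sketch following the nosqlprotocol configuration conventions used in this guide:
nosqlprotocol.ordered_insert_behavior=atomic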
The following fields are relevant.
The following fields are relevant.
All other fields are ignored.
Update Statements
Each element of the updates array is an update statement document. Each document contains the following fields:
Note that currently it is possible to set multi to true in conjunction
with a replacement-style update, even though MongoDB® rejects that.
All other fields are ignored, with the exception of upsert that if present
with the value of true will cause the command to fail.
Behavior
Currently only updating using update operator expressions or with a replacement document is supported. In particular, updating using an aggregation pipeline is not supported.
Update with an Update Operator Expressions document
The update statement field u can accept a document that only contains expressions. For example:
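updates: [
   {
      q: <query>,
      u: { $set: { status: "D" } },
      ...
   },
   ...
]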
In this case, the update command updates only the corresponding fields in the document.
Update with a Replacement Document
The update statement field u can accept a replacement document,
i.e. a document that contains only field:value expressions. For example:
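updates: [
   {
      q: <query>,
      u: { status: "D", quantity: 4 },
      ...
   },
   ...
]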
In this case, the update command replaces the matching document with the update document. The update command can only replace a single matching document; i.e. the multi field cannot be true.
Note: If the replacement document contains an _id field, it will be ignored and the
document id will remain unchanged while the document otherwise is replaced. This is
different from MongoDB® where the presence of the _id field in the replacement document
causes an error, if the value is not the same as in the document being replaced.
The following fields are relevant.
If you are not logged in and using authentication, logout has no effect.
Note that in order to be logged out, the logging out must be done while using the same database that was used when you logged on.
Always returns
Creates a new MariaDB user and adds an entry to the local nosqlprotocol account database.
The following fields are relevant.
The MariaDB user will be created as '<db>.<user>'@'%', where <db> is
the name of the NoSQL database in whose context the user is created, and <user> is the value of the createUser field. For instance, with the
following command
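> use myDatabase;
> db.runCommand({createUser: "user1", pwd: "pwd1", roles: []});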
the MariaDB user 'myDatabase.user1'@'%' will be created.
The elements of the roles array are converted into privileges
as explained in the discussion of how NoSQL roles map to MariaDB grants.
In practice the creation is performed as follows: first the MariaDB user is created, then the privileges are granted,
and finally the local nosqlprotocol account database is updated.
If the granting of privileges fails, an attempt will be made to drop the user.
Drops all users from the local nosqlprotocol account database and the corresponding MariaDB users.
The following fields are relevant.
If no users can be dropped, e.g. due to an authorization error, then an error will be returned. If even a single user can be dropped the returned document tells how many were dropped, which does not necessarily indicate that all users were dropped.
The following fields are relevant.
The user will first be dropped from the MariaDB server and if that succeeds also from the local nosqlprotocol account database.
This command adds more roles to a NoSQL user, which may imply that additional privileges are granted to the corresponding MariaDB user.
Note that roles assigned to different databases will result in separate GRANT statements, which means that it is possible that some succeed and others do not.
This command removes roles from a NoSQL user, which may imply that privileges are revoked from the corresponding MariaDB user.
Note that roles to be removed from different databases will result in separate REVOKE statements, which means that it is possible that some succeed and others do not.
This command updates the information about a particular user.
Changes to customData or mechanisms are made only to the local
nosqlprotocol database, but changes to pwd or roles require
the MariaDB server to be updated.
This command returns information about one or more users.
The following fields are relevant.
The returned information depends on the value of usersInfo:
Note that users may always view their own information. Otherwise the user must
have the userAdmin or userAdminAnyDatabase role.
If showCredentials is true, the returned object(s) will contain a mariadb: { password: "*..." } field, where password is the SHA1(SHA1()) value of the password used when logging in to MariaDB.
That is, the same string that is found in the password column of
the mysql.user table.
The following fields are relevant.
The following fields are relevant.
All other fields are ignored.
This command will always return the document
The following fields are relevant.
The following document will always be returned:
The following fields are relevant.
Currently, capped collections and views are not supported. Consequently, specifying that the collection should be capped or that it should be a view on another collection, will cause the command to fail.
The following fields are relevant.
NOTE: Currently it is not possible to create indexes, but the command will nonetheless return success, provided the index specification passes some rudimentary sanity checks. Note also that the collection will be created if it does not exist.
The following fields are relevant.
The following fields are relevant.
The following fields are relevant.
NOTE Currently it is not possible to create indexes and thus there
will never be any indexes that could be dropped. However, provided the
specified collection exists, dropping indexes will always succeed except
for an attempt to drop the built-in _id_ index.
The following fields are relevant.
The response will always be
The following fields are relevant.
The following fields are relevant.
Note that the command lists all collections (that is, tables) that are found in the current database. The listed collections may or may not be suitable for being accessed using nosqlprotocol.
The following fields are relevant.
The following fields are relevant.
NOTE As it currently is not possible to actually create indexes,
although an attempt to do so using createIndexes will succeed, the
result will always only contain information about the built-in
index _id_.
The following fields are relevant.
The following fields are relevant.
Any kind of parameter is accepted and the response will always be:
The following fields are relevant.
The command returns a document containing the stable fields. In addition, there is a field maxscale whose value is the MaxScale version, expressed as a string.
The following fields are relevant.
The command will return a document of the expected layout, but the content is only rudimentary.
The following fields are relevant.
The following fields are relevant.
The command returns a document of the correct format, but no actual log data will be returned.
The following fields are relevant.
The following fields are relevant.
The following fields are relevant.
The following fields are relevant.
The following fields are relevant.
The command does not actually perform any validation other than checking
that the collection exists. The response will contain in nrecords
the current number of documents/rows the collection contains.
The following fields are relevant.
This is an internal command, implemented only because the Mongo Shell uses it.
The following fields are relevant.
The following document will always be returned:
Definition
mxsAddUser
The mxsAddUser command adds an existing MariaDB user to the local
nosqlprotocol account database. Use createUser if the
MariaDB user should be created as well.
Note that the mxsAddUser command does not check that the user exists
or that the specified roles are compatible with the grants of the user.
Syntax
The 'mxsAddUser' command has the following syntax:
Command Fields
The command has the following fields:
The value of mxsAddUser should be the name (without the host part) of
an existing user in the MariaDB server and the value of pwd should be
that user's password in cleartext.
The roles array should contain roles that are compatible with the
grants of the user. Please see the discussion of how roles map to grants.
Returns
If the addition of the user succeeds, the command returns a document
with the single field ok whose value is 1.
If there is a failure of some kind, the command returns an error document
Definition
mxsCreateDatabase
The 'mxsCreateDatabase' command creates a new database and must be run
against the admin database.
Syntax
The 'mxsCreateDatabase' has the following syntax:
Command Fields
The command takes the following fields:
Returns
If database creation succeeds, the command returns a document with the
single field ok whose value is 1.
If the database creation fails, the command returns an error document.
Definition
mxsDiagnose
The mxsDiagnose command provides diagnostics for any other command; that is, how
MaxScale will handle that command.
Syntax
The mxsDiagnose command has the following syntax:
Command Fields
The command takes the following fields:
Returns
The command returns a document that contains diagnostics of the command provided as argument. For example:
kind specifies of what kind the command is; an immediate command is one for
which MaxScale autonomously can generate the response, a single command is one
where the command will cause a single SQL statement to be sent to the backend, and
a multi command is one where potentially multiple SQL statements will be sent to
the backend.
If the command is immediate then there will be a field response containing
the actual response of the command, if the command is single then there will be
a field sql containing the actual statement that would have been sent to the backend,
and if the command is multi then there will be a field sql containing an array
of statements that would have been sent to the backend.
If an error occurs while the command is being diagnosed, then there will be no response field but an error field whose value is an error document. Note that
the value of ok will always be 1.
Definition
mxsGetConfig
The mxsGetConfig command returns the current configuration of the session
and must be run against the 'admin' database.
Syntax
The mxsGetConfig has the following syntax:
Command Fields
The command takes the following fields:
Returns
The command returns a document that contains the current configuration of the session. For example:
Definition
mxsRemoveUser
The mxsRemoveUser command removes a user from the local nosqlprotocol account
database. Use dropUser if the MariaDB user should be dropped
as well.
Syntax
The 'mxsRemoveUser' command has the following syntax:
Command Fields
The command has the following fields:
Returns
If the removal of the user succeeds, the command returns a document
with the single field ok whose value is 1.
If there is a failure of some kind, the command returns an error document
Definition
mxsSetConfig
The mxsSetConfig command changes the configuration of the session
and must be run against the 'admin' database.
Note that the changes only affect the current session and are not persisted.
Syntax
The mxsSetConfig has the following syntax:
Command Fields
The command takes the following fields:
The document takes the following fields:
Returns
The command returns a document that contains the changed configuration of the session. For example:
Definition
mxsUpdateUser
The mxsUpdateUser command updates a user in the local nosqlprotocol
account database. Use updateUser to update the MariaDB user
as well.
Note that the mxsUpdateUser command does not check that the changed
data is compatible e.g. with the grants of the corresponding MariaDB
user.
Syntax
The 'mxsUpdateUser' command has the following syntax:
Command Fields
The command has the following fields:
The roles array should contain roles that are compatible with the
grants of the user. Please see the discussion of how roles map to grants.
Returns
If the updating of the user succeeds, the command returns a document
with the single field ok whose value is 1.
If there is a failure of some kind, the command returns an error document
When a document is created, an id of type ObjectId will be autogenerated by
the MongoDB® client library. If the id is provided explicitly, by assigning a
value to the _id field, the value must be an ObjectId, a string or an
integer.
The conversion of the BSON used in the communication between the client and MaxScale to the SQL used in the communication between MaxScale and the server carries a not insignificant cost, as does the conversion of result sets returned by the server to the BSON returned by MaxScale to the client. The regular cache filter provides no remedy for this, as it is located after the protocol and uses SQL as the key and stores result sets as values.
From 23.08 onwards, the nosqlprotocol has a built-in cache that when used under certain conditions diminishes the conversion cost. The cache is enabled by adding the following line to the NoSQL listener.
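nosqlprotocol.internal_cache=cache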
This effectively causes the cache filter to be used inside the NoSQL protocol module. The internal cache can be configured just like the cache filter, by using the following nested configuration syntax.
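[My-NoSQL-Listener]
...
nosqlprotocol.internal_cache=cache
nosqlprotocol.cache.max_size=10M
nosqlprotocol.cache.soft_ttl=30s
nosqlprotocol.cache.hard_ttl=40s
...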
A limitation is that only the default storage storage_inmemory is
supported; storage_redis and storage_memcached cannot be used.
The cache works on both the SQL and the BSON layer. When a NoSQL request that potentially is cacheable arrives, a lookup for the BSON response is made. If it is found, the response is returned to the client. That is, in this case neither BSON -> SQL, nor a Result Set -> BSON translation will be made.
If the request is not potentially cacheable or the response is not available, the request is processed normally, which may mean that the request is translated into SQL. If the SQL is a SELECT, a second lookup will be made for the corresponding result set. If that is found, the result set is processed, but the roundtrip to the server will be saved.
So, when a result set is received from the server, it will be cached and
if the generated NoSQL response is also cacheable, it will be cached as
well. The benefit of this approach is that two find NoSQL requests
may effectively return the same documents, even though one but not the
other NoSQL response is cacheable. Both will benefit from the result set being
in the cache. Since the used storage follows an LRU approach when evicting
data from the cache, the less valuable result will be evicted first.
The responses of the following commands are cached.
find, provided all found documents can be returned in one response,
i.e., if singleBatch is true or batchSize is large enough.
Currently 30% of the tests in the test-suite pass.
The following is a minimal setup for getting nosqlprotocol up and running. It is assumed the reader knows how to configure MaxScale for normal use. If not, please start with the . Note that as nosqlprotocol is the first component in the MaxScale routing chain, it can be used with all routers and filters.
In the following it is assumed that MaxScale already has been configured
for normal use and that there exists a service [TheService].
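For example:
[TheService]
type=service
...

[NoSQL-Listener]
type=listener
service=TheService
protocol=nosqlprotocol
nosqlprotocol.user=the_user
nosqlprotocol.password=the_password
port=17017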
The values the_user and the_password must be replaced with the
actual credentials to be used for every MongoDB® client that connects.
If MaxScale is now started, the following entry should appear in the log file.
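... notice : (NoSQL-Listener); Listening for connections at [127.0.0.1]:17017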
The mongo Shell is a powerful tool with which to access and manipulate a MongoDB database. It is part of the MongoDB® package. Having the native MongoDB database installed is convenient, as it makes it easy to ascertain whether a problem is due to nosqlprotocol not fully implementing something or due to the API not being used in the correct fashion.
With the mongo shell, all that is needed is to invoke it with the port nosqlprotocol is listening on:
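$ mongo --host 127.0.0.1 --port 17017
MongoDB shell version v4.4.1
...
>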
If the shell prompt appears, then a connection was successfully established and the shell can be used.
The db variable is implicitly available, and refers by default to
the test database.
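For example:
> db.runCommand({insert: "collection", documents: [{_id: 1, "hello": "world"}]});
{ "n" : 1, "ok" : 1 }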
The command inserted a document into the collection called collection.
The table corresponding to that collection is created implicitly because
the default value of auto_create_tables is true. Here, the object id
is specified explicitly, but there is no need for that, as one will be
created if needed.
To check whether the document was inserted into the collection, the find command can be issued:
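> db.runCommand({find: "collection"});
{
    "cursor" : {
        "firstBatch" : [
            {
                "_id" : 1,
                "hello" : "world"
            }
        ],
        "id" : NumberLong(0),
        "ns" : "test.collection"
    },
    "ok" : 1
}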
As can be seen, the document was indeed inserted into the collection
With the mysql shell, the content of the actual table can be checked.
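MariaDB [(none)]> select * from test.collection;
+------+------------------------------------+
| id   | doc                                |
+------+------------------------------------+
| 1.0  | { "_id" : 1.0, "hello" : "world" } |
+------+------------------------------------+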
The collection collection is represented by a table collection with
the two columns id and doc. id is a virtual column whose content is
the value of the _id field of the document in the doc column.
All MongoDB® commands that nosqlprotocol supports (apart from the ones that
do not require database access) basically access or manipulate the
content in the doc column using the JSON functions of MariaDB.
From within the mongo shell itself it is easy to find out just what SQL a particular MongoDB command is translated into.
For instance, the SQL generated for the insert command with which the document was added can be found out like this:
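> db.runCommand({mxsDiagnose: {insert: "collection", documents: [{_id: 1, "hello": "world"}]}});
{
    "kind" : "multi",
    "sql" : [
        "INSERT INTO `test`.`collection` (doc) VALUES ('{ \"_id\" : 1.0, \"hello\" : \"world\" }')"
    ],
    "ok" : 1
}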
Similarly, the SQL of the find command can be found out like this:
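> db.runCommand({mxsDiagnose: {find: "collection"}});
{
    "kind" : "single",
    "sql" : "SELECT doc FROM `test`.`collection` ",
    "ok" : 1
}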
The returned SQL can be directly pasted at the mysql prompt, which is
quite convenient in case the MongoDB® command does not behave as expected.
As all client libraries implement and depend on the MongoDB® wire protocol, all client libraries should work with nosqlprotocol. However, the only client library that has been used and that has been verified to work is version 3.6 of the MongoDB Node.JS Driver.
In principle, the only thing that needs to be altered in an existing program using the library is to change the uri string that typically is something like
const uri = "mongodb+srv://<user>:<password>@<cluster-url>?writeConcern=majority";
to
const uri = "mongodb://<maxscale-ip>:17017";
with the assumption that the default nosqlprotocol port is used.
In practice, additional modifications may be needed since nosqlprotocol does not implement all commands and does not in all cases implement the full functionality of the commands that it supports.
Store the following into a file called insert.js.
Then, run the program like
As the id is not explicitly provided, it will not be the same.
Store the following into a file called find.js.
Then, run the program like
This page is licensed: CC BY-SA / Gnu FDL
userAdmin
userAdmin
userAdmin
create a corresponding NoSQL user into the NoSQL account database.
The name can be of the format user_name. In this case, the
user will be a regular user that from the NoSQL side appears
to reside in the mariadb database. The primary purpose of
this alternative is to enable the use of existing users from
the NoSQL side.
X
X
dbAdmin[AnyDatabase]
X
read[AnyDatabase]
X
X
X
X
X
X
readWrite[AnyDatabase]
X
X
userAdmin[AnyDatabase]
ordered
default
$lte
$ne
$nin
$alwaysTrue
Array
4
"array"
ARRAY
Boolean
5
"bool"
BOOLEAN
32-bit integer
16
"int"
INTEGER
$mul
$pop
$push
$rename
$set
$unset
skip
Positive integer
Optional. Number of documents to skip. Defaults to 0.
limit
Non-negative integer
Optional. The maximum number of documents to return. If unspecified, then defaults to no limit. A limit of 0 is equivalent to setting no limit.
batchSize
Non-negative integer
Optional. The number of documents to return in the first batch. Defaults to 101. A batchSize of 0 means that the cursor will be established, but no documents will be returned in the first batch.
singleBatch
boolean
Optional. Determines whether to close the cursor after the first batch. Defaults to false.
update
document
Mandatory, if remove is not specified. See for details.
new
boolean
Optional. If true the modified document and not the original document is returned. If remove is specified, then the original document is always returned.
fields
document
Optional. Specified which fields to return. See for details.
upsert
boolean
Optional. If true then a document will be created, if one is not found.
mechanisms
array
Optional. The specific supported SCRAM mechanisms for this user. Must be a subset of the supported mechanisms.
digestPassword
boolean
Optional. If specified, must be true.
mechanisms
array
Optional. The specific SCRAM mechanisms for user credentials. Note that if a new pwd is provided, then the array can contain all supported SCRAM mechanisms. If a new pwd is not provided, then the array must be a subset of the existing mechanisms of the user.
mechanisms
array
Optional. The specific supported SCRAM mechanisms for this user. Must be a subset of the supported mechanisms.
digestPassword
boolean
Optional. If specified, must be true.
mechanisms
array
Optional. The specific supported SCRAM mechanisms for this user. If a new password is not provided, the specified mechanisms must be a subset of the current mechanisms.
digestPassword
boolean
Optional. If specified, must be true.
dbAdmin
ALTER, CREATE, DROP, SHOW DATABASES, SELECT
read
SELECT
readWrite
CREATE, DELETE, INDEX, INSERT, SELECT, UPDATE
userAdmin
CREATE USER, GRANT OPTION
dbOwner
dbAdmin, readWrite, userAdmin
root
dbAdmin, readWrite, userAdmin
userAdmin
userAdmin
userAdmin
userAdmin
userAdmin
userAdmin
X
X
Double
1
"double"
DOUBLE
String
2
"string"
STRING
object
3
"object"
count
string
The name of the collection to count.
query
document
Optional. A query that selects which documents to count in the collection
limit
integer
Optional. The maximum number of matching documents to return.
skip
integer
Optional. The number of matching documents to skip before returning results.
distinct
string
The name of the collection to query for distinct values.
key
string
The field for which to return distinct values.
query
document
Optional. A query that selects which documents to count in the collection
delete
string
The name of the target table.
deletes
array
An array of one or more delete statements to perform in the named collection.
ordered
boolean
Optional. If true, then when a delete statement fails, return without performing the remaining delete statements. If false, then when a delete statement fails, continue with the remaining delete statements, if any. Defaults to true.
q
document
The query that matches documents to delete.
limit
integer
The number of matching documents to delete. Specify either a 0 to delete all matching documents or 1 to delete a single document.
find
string
The name of the target table.
filter
document
Optional. The query predicate. If unspecified, then all documents in the collection will match the predicate.
sort
document
Optional. The sort specification for the ordering of the results.
projection
document
Optional. The projection specification to determine which fields to includein the returned documents.
: <1 or true>
Specifies the inclusion of a field.
: <0 or false>
Specifies the exclusion of a field.
findAndModify
string
The name of the target table.
query
document
Optional. The query predicate.
sort
document
Optional. The sort specification used when the document is selected.
remove
boolean
Mandatory, if update is not specified. If true, the document will be deleted.
getLastError
any
Ignored.
getMore
long
The cursor id.
collection
string
The name of the collection over which the cursor is operating.
batchSize
positive integer
Optional. The number of documents to return in the batch.
insert
string
The name of the target collection (i.e. table).
documents
array
An array of one or more documents to be inserted to the named collection.
ordered
boolean
Optional, with default being true. See below for description.
default
All documents are inserted within a compound statement, in a transaction containing as many INSERT statements as there are documents.
All documents are inserted in a single multi-statement transaction containing as many INSERT IGNORE statements as there are documents.
atomic
All documents are inserted using a single INSERT statement.
Same as above
resetError
any
Ignored.
update
string
The name of the target table.
updates
array
An array of documents that describe what to updated.
q
document
The query that matches documents to update.
u
document
The modifications to apply. See behavior below for details.
multi
boolean
Optional. If true, updates all documents that meet the query criteria. If false limit the update to one document that meets the query criteria. Defaults to false.
logout
any
Ignored.
createUser
string
The name of the user to be added.
pwd
string
The password in cleartext.
customData
document
Optional. Any arbitrary information.
roles
array
The roles granted to the user.
dropAllUsersFromDatabase
any
Ignored.
dropUser
string
The name of the user to be dropped.
grantRolesToUser
string
The name of the user to give additional roles.
roles
array
An array of additional roles.
revokeRolesFromUser
string
The name of the user to remove roles from.
roles
array
An array of roles to remove.
updateUser
string
The user whose information should be updated.
pwd
string
Optional. The new password in cleartext.
customData
document
Optional. Any arbitrary information.
roles
array
Optional. The roles granted to the user. Note that the existing ones are replaced and not amended with these roles.
usersInfo
various
Specifies what to return. See below.
showCredentials
boolean
Optional, default false. Specifies whether the credentials should be returned.
{ usersInfo: 1 }
Returns information of all users in the database where the command is run.
{ usersInfo: }
Returns information about a specific user in the database where the command is run.
{ usersInfo: { user: , db: }}
Returns information about the user specified by the name and database.
{ usersInfo: [{ user: , db: }, ...]}
Returns information about specified users.
{ usersInfo: [ , ... ]}
Returns information about specified users in the database where the command is run.
isMaster
any
Ignored.
replSetGetStatus
any
Ignored.
endSessions
array
Ignored.
create
string
The name of the collection to create.
capped
boolean
Optional. If specified, the value must be false as capped collections are not supported.
viewOn
string
Optional. If specified, the command will fail as views are not supported.
createIndexes
string
The collection for which to create indexes.
drop
string
The name of the collection to drop.
dropDatabase
any
Ignored.
dropIndexes
any
Ignored.
fsync
any
Ignored
killCursors
string
The name of the collection.
cursors
array
The ids of the cursors to kill.
listCollections
any
Ignored.
filter
document
The field name is honored, other fields are not but cause warnings to be logged.
nameOnly
boolean
Optional. A flag to indicate whether the command should return just the collection names and type or return both the name and other information.
listDatabases
any
Ignored.
nameOnly
boolean
Optional. A flag to indicate whether the command should return just the database names, or return both database names and size information.
listIndexes
string
The name of the collection.
renameCollection
string
The namespace of the collection to rename. The namespace is a combination of the database name and the name of the collection.
to
string
The new namespace of the collection. Moving a collection/table from one database to another succeeds provided the databases reside in the same filesystem.
dropTarget
boolean
Optional. If true, the target collection/table will be dropped before the renaming is made. The default value is false.
setParameter
any
Ignored.
buildInfo
any
Ignored.
explain
document
Document specifying the command to be explained. The commands are aggregate, count, delete, distinct, find, findAndModify, mapReduce and update.
verbosity
string
Either queryPlanner, executionStats or allPlansExecution.
getCmdLineOpts
any
Ignored.
getLog
string
*, global and startupWarnings
hostInfo
any
Ignored.
listCommands
any
Ignored.
ping
any
Ignored.
serverStatus
any
Ignored.
validate
string
The name of the collection to validate.
whatsmyri
any
Ignored.
getFreeMonitoringStatus
any
Ignored.
mxsAddUser
string
The name of the user to be added.
pwd
string
The password in cleartext.
customData
document
Optional. Any arbitrary information.
roles
array
The roles granted to the user.
mxsCreateDatabase
string
The name of the database to be created.
mxsDiagnose
document
A command as provided to db.runCommand(...).
mxsGetConfig
Ignored.
mxsRemoveUser
string
The name of the user to be removed.
mxsSetConfig
document
A document specifying the configuration.
on_unknown_command
string
Either "return_error" or "return_empty"
auto_create_tables
boolean
Whether tables should be created as needed.
id_length
integer
id column VARCHAR size in created tables.
mxsUpdateUser
string
The name of the user to be updated.
pwd
string
The password in cleartext.
customData
document
Optional. Any arbitrary information.
roles
array
The roles granted to the user.
X
OBJECT
[TheService]
type=service
...
[NoSQL-Listener]
type=listener
service=TheService
protocol=nosqlprotocol
nosqlprotocol.user=the_user
nosqlprotocol.password=the_password
port=17017
maxctrl create listener TheService MongoDB-Listener --protocol=nosqlprotocol 'nosqlprotocol={"user":"the_user", "password": "the_password"}'
const uri = "mongodb://127.0.0.1:17017"
$ mongo --host 127.0.0.1 --port 17017
MongoDB shell version v4.4.1
...
>MariaDB [(none)]> select user, host from mysql.user;
+-------------+-----------+
| User | Host |
+-------------+-----------+
| bob | % |
| mysql | localhost |
+-------------+-----------+
2 rows in set (0.001 sec)
> use test;
switched to db test
> db.runCommand({createUser: "bob", pwd: "bobspwd", roles: []});
{ "ok" : 1 }MariaDB [(none)]> select user, host from mysql.user;
+-------------+-----------+
| User | Host |
+-------------+-----------+
| bob | % |
| test.bob | % |
| mysql | localhost |
+-------------+-----------+
3 rows in set (0.001 sec)
> use mariadb
switched to db mariadb
> db.runCommand({createUser: "bob", pwd: "bobspwd", roles: []});
{
"ok" : 0,
"errmsg" : "User \"bob\" already exists",
"code" : 51003,
"codeName" : "Location51003"
}> db.runCommand({createUser: "alice", pwd: "alicespwd", roles: []});
{ "ok" : 1 }MariaDB [(none)]> select user, host from mysql.user;
+-------------+-----------+
| User | Host |
+-------------+-----------+
| alice | % |
| bob | % |
| test.bob | % |
| mysql | localhost |
+-------------+-----------+
4 rows in set (0.001 sec)
...
nosqlprotocol.user=theuser
nosqlprotocol.password=thepassword
nosqlprotocol.authentication_required=true
nosqlprotocol.authorization_enabled=true
nosqlprotocol.user = user_with_privileges_for_creating_a_user
nosqlprotocol.password = the_users_password
$ mongo --port 17017
...
>> use admin;
switched to db admin
> db.runCommand({createUser: "nosql_admin", pwd: "nosql_pwd", roles: ["userAdmin"]});
{ "ok" : 1 }> use mariadb;
switched to db admin
> db.runCommand({mxsAddUser: "bob", pwd: "bob_pwd", roles: ["userAdmin"]});
{ "ok" : 1 }nosqlprotocol.authentication_required=true
nosqlprotocol.authorization_enabled=true
> use test;
switched to db test
> db.runCommand({createUser: "alice", pwd: "alices_pwd", roles: []});
{
"ok" : 0,
"errmsg" : "command createUser requires authentication",
"code" : 13,
"codeName" : "Unauthorized"
}
[NoSQL-Listener]
...
nosqlprotocol.user=db.the_user
nosqlprotocol.password=the_password
nosqlprotocol.authentication_required=true
nosqlprotocol.authorization_enabled=true
...
CREATE USER 'admin.nosql_admin'@'%' IDENTIFIED BY 'nosql_password';
GRANT ALL PRIVILEGES ON *.* TO 'admin.nosql_admin'@'%' WITH GRANT OPTION;
[NoSQL-Listener]
type=listener
service=...
protocol=nosqlprotocol
nosqlprotocol.user=admin.nosql_admin
nosqlprotocol.password=nosql_password
nosqlprotocol.authentication_required=true
nosqlprotocol.authorization_enabled=true
... notice : [nosqlprotocol] Created initial NoSQL user 'admin.nosql_admin'.
$ mongo --quiet --port 17017 -u nosql_admin -p nosql_password admin
>> db.runCommand({usersInfo: 1});
{
"users" : [
{
"_id" : "admin.nosql_admin",
"userId" : UUID("7d921459-3099-42a7-ad06-ed37ac002161"),
"user" : "nosql_admin",
"db" : "admin",
"roles" : [
{
"db" : "admin",
"role" : "dbAdminAnyDatabase"
},
{
"db" : "admin",
"role" : "readWriteAnyDatabase"
},
{
"db" : "admin",
"role" : "userAdminAnyDatabase"
},
{
"db" : "admin",
"role" : "root"
}
],
"mechanisms" : [
"SCRAM-SHA-256"
]
}
],
"ok" : 1
}
CREATE USER 'test.test_user'@'%' IDENTIFIED BY 'test_password';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, INDEX ON `test`.* TO 'test.test_user'@'%';
[NoSQL-Listener]
type=listener
service=...
protocol=nosqlprotocol
nosqlprotocol.user=test.test_user
nosqlprotocol.password=test_password
nosqlprotocol.authentication_required=true
nosqlprotocol.authorization_enabled=true
... notice : [nosqlprotocol] Created initial NoSQL user 'test.test_user'.
$ mongo --quiet --port 17017 -u test_user -p test_password test
>> db.runCommand({usersInfo: 1});
{
"users" : [
{
"_id" : "test.test_user",
"userId" : UUID("714f35e7-4276-45af-863c-0be4d1f5dd74"),
"user" : "test_user",
"db" : "test",
"roles" : [
{
"db" : "test",
"role" : "readWrite"
}
],
"mechanisms" : [
"SCRAM-SHA-256"
]
}
],
"ok" : 1
}
[NoSQL-Listener]
type=listener
service=TheService
protocol=nosqlprotocol
...
[NoSQL-Listener]
type=listener
service=TheService
protocol=nosqlprotocol
...
[NoSQL-Listener]
type=listener
service=TheService
protocol=nosqlprotocol
nosqlprotocol.user=the_user
nosqlprotocol.password=the_password
nosqlprotocol.on_unknown_command=return_error
...
nosqlprotocol.debug=in,out,back
CREATE TABLE name (id VARCHAR(35) AS (JSON_COMPACT(JSON_EXTRACT(doc, "$._id"))) UNIQUE KEY,
doc JSON,
CONSTRAINT id_not_null CHECK(id IS NOT NULL));
{ <field1>: <value>, <field2>: <value> ... }
> db.runCommand({find: "collection", filter: { _id: 4711 }});
> db.runCommand({find: "collection", filter: { _id: { $eq: 4711 }}});
... WHERE (id = '4711')
... WHERE (JSON_EXTRACT(doc, '$._id') = 4711)
updates: [
{
q: <query>,
u: { $set: { status: "D" } },
...
},
...
]
updates: [
{
q: <query>,
u: { status: "D", quantity: 4 },
...
},
...
]
{ ok: 1 }
> use myDatabase;
> db.runCommand({createUser: "user1", pwd: "pwd1", roles: []});{
"ok" : 0,
"errmsg" : "not running with --replSet",
"code" : 76,
"codeName" : "NoReplicationEnabled"
}{ "ok" : 1 }{
"errmsg" : "fsync not supported by MaxScale:nosqlprotocol",
"code" : 115,
"codeName" : "CommandNotSupported",
"ok" : 0
}{ "ok" : 1 }{ "state" : "undecided", "ok" : 1 }db.runCommand(
{
mxsAddUser: "<name>",
pwd: passwordPrompt(), // Or "<cleartext password>"
customData: { <any information> },
roles: [
{ role: "<role>", db: "<database>" } | "<role>",
...
],
mechanisms: [ "<scram-mechanism>", ...],
digestPassword: <boolean>
}
)> db.runCommand({mxsAddUser: "user", pwd: "pwd", roles: ["readWrite"]});
{ "ok" : 1 }> db.runCommand({mxsAddUser: "user2", pwd: "pwd2", roles: ["redWrite"]});
{
"ok" : 0,
"errmsg" : "No role named redWrite@test",
"code" : 31,
"codeName" : "RoleNotFound"
}
db.adminCommand(
{
mxsCreateDatabase: <name>
}
)
> db.adminCommand({mxsCreateDatabase: "db"});
{ "ok" : 1 }> db.adminCommand({mxsCreateDatabase: "db"});
{
"ok" : 0,
"errmsg" : "The database 'db' exists already.",
"code" : 48,
"codeName" : "NamespaceExists"
}
db.runCommand(
{
mxsDiagnose: <command>
}
)
> db.runCommand({mxsDiagnose: {ping:1}});
{ "kind" : "immediate", "response" : { "ok" : 1 }, "ok" : 1 }
> db.runCommand({mxsDiagnose: {find:"person", filter: { name: "Bob"}}});
{
"kind" : "single",
"sql" : "SELECT doc FROM `test`.`person` WHERE ( JSON_EXTRACT(doc, '$.name') = 'Bob') ",
"ok" : 1
}
> db.runCommand({mxsDiagnose: {delete:"person", deletes: [{q: { name: "Bob"}, limit:0}, {q: {name: "Alice"}, limit:0}]}});
{
"kind" : "single",
"sql" : [
"DELETE FROM `test`.`person` WHERE ( JSON_EXTRACT(doc, '$.name') = 'Bob') ",
"DELETE FROM `test`.`person` WHERE ( JSON_EXTRACT(doc, '$.name') = 'Alice') "
],
"ok" : 1
}
db.runCommand(
{
mxsGetConfig: <any>
});
> db.runCommand({mxsGetConfig: 1});
{
"config" : {
"on_unknown_command" : "return_error",
"auto_create_tables" : true,
"id_length" : 35
...
},
"ok" : 1
}
db.runCommand(
{
mxsRemoveUser: "<name>"
}
)> db.runCommand({mxsRemoveUser: "user"});
{ "ok" : 1 }> db.runCommand({mxsRemoveUser: "user"});
{
"ok" : 0,
"errmsg" : "User 'user@test' not found",
"code" : 11,
"codeName" : "UserNotFound"
}
db.runCommand(
{
mxsSetConfig: document
});
> db.runCommand({mxsGetConfig: 1});
{
"config" : {
"on_unknown_command" : "return_error",
"auto_create_tables" : true,
"id_length" : 35
...
},
"ok" : 1
}
> db.runCommand({mxsSetConfig: { auto_create_tables: false}});
{
"config" : {
"on_unknown_command" : "return_error",
"auto_create_tables" : false,
"id_length" : 35
...
},
"ok" : 1
}
db.runCommand(
{
mxsUpdateUser: "<name>",
pwd: passwordPrompt(), // Or "<cleartext password>"
customData: { <any information> },
roles: [
{ role: "<role>", db: "<database>" } | "<role>",
...
],
mechanisms: [ "<scram-mechanism>", ...],
digestPassword: <boolean>
}
)> db.runCommand({mxsUpdateUser: "user", pwd: "pwd", roles: ["readWrite"]});
{ "ok" : 1 }> db.runCommand({mxsUpdateUser: "user", roles: ["redWrite"]});
{
"ok" : 0,
"errmsg" : "No role named redWrite@test",
"code" : 31,
"codeName" : "RoleNotFound"
}
[My-NoSQL-Listener]
...
nosqlprotocol.internal_cache=cache
[My-NoSQL-Listener]
...
nosqlprotocol.internal_cache=cache
nosqlprotocol.cache.max_size=10M
nosqlprotocol.cache.soft_ttl=30s
nosqlprotocol.cache.hard_ttl=40s
...
[TheService]
type=service
...
[NoSQL-Listener]
type=listener
service=TheService
protocol=nosqlprotocol
nosqlprotocol.user=the_user
nosqlprotocol.password=the_password
port=17017
... notice : (NoSQL-Listener); Listening for connections at [127.0.0.1]:17017
$ mongo --port 17017
MongoDB shell version v4.4.1
connecting to: mongodb://127.0.0.1:17017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("694f3eed-329f-487a-8d73-9a2d4cf82d62") }
MongoDB server version: 4.4.1
---
...
---
>> db.runCommand({insert: "collection", documents: [{_id: 1, "hello": "world"}]});
{ "n" : 1, "ok" : 1 }> db.runCommand({find: "collection"});
{
"cursor" : {
"firstBatch" : [
{
"_id" : 1,
"hello" : "world"
}
],
"id" : NumberLong(0),
"ns" : "test.collection"
},
"ok" : 1
}
MariaDB [(none)]> select * from test.collection;
+------+------------------------------------+
| id | doc |
+------+------------------------------------+
| 1.0 | { "_id" : 1.0, "hello" : "world" } |
+------+------------------------------------+
> db.runCommand({mxsDiagnose: {insert: "collection", documents: [{_id: 1, "hello": "world"}]}});
{
"kind" : "multi",
"sql" : [
"INSERT INTO `test`.`collection` (doc) VALUES ('{ \"_id\" : 1.0, \"hello\" : \"world\" }')"
],
"ok" : 1
}
> db.runCommand({mxsDiagnose: {find: "collection"}});
{
"kind" : "single",
"sql" : "SELECT doc FROM `test`.`collection` ",
"ok" : 1
}const uri = "mongodb+srv://<user>:<password>@<cluster-url>?writeConcern=majority";const uri = "mongodb://<maxscale-ip>:17017";const { MongoClient } = require("mongodb");
const uri = "mongodb://127.0.0.1:17017";
const client = new MongoClient(uri, { useUnifiedTopology: true });
async function run() {
try {
await client.connect();
const database = client.db("mydb");
const movies = database.collection("movies");
// create a document to be inserted
const movie = { title: "Apocalypse Now", director: "Francis Ford Coppola" };
const result = await movies.insertOne(movie);
console.log(
`${result.insertedCount} documents were inserted with the _id: ${result.insertedId}`,
);
} finally {
await client.close();
}
}
run().catch(console.dir);
$ nodejs insert.js
1 documents were inserted with the _id: 60afca73bf486114e3fb48b8
const { MongoClient } = require("mongodb");
const uri = "mongodb://127.0.0.1:17017";
const client = new MongoClient(uri, { useUnifiedTopology: true });
async function run() {
try {
await client.connect();
const database = client.db("mydb");
const movies = database.collection("movies");
// Query for a movie that has the title 'Apocalypse Now'
const query = { title: "Apocalypse Now" };
const options = {
// Include only the 'director' field in the returned document
projection: { _id: 0, director: 1 },
};
const movie = await movies.findOne(query, options);
// Returns a document and not a cursor, so print directly.
console.log(movie);
} finally {
await client.close();
}
}
run().catch(console.dir);
$ nodejs find.js
{ director: 'Francis Ford Coppola' }
The MaxScale resource represents a MaxScale instance and is the core on top of which the modules build.
Retrieve global information about a MaxScale instance. This includes various file locations, configuration options and version information.
Response
Status: 200 OK
Update MaxScale parameters. The request body must define updated values for the data.attributes.parameters object. The parameters that can be modified are
listed in the /v1/maxscale/modules/maxscale endpoint and have the modifiable
value set to true.
Response
Parameters modified:
Status: 204 No Content
Invalid JSON body:
Status: 400 Bad Request
Get the information and statistics of a particular thread. The :id in
the URI must map to a valid thread number between 0 and the configured
value of threads.
Response
Status: 200 OK
Get the information for all threads. Returns a collection of threads resources.
Response
Status: 200 OK
Get information about the current state of logging, enabled log files and the location where the log files are stored.
Note: The parameters in this endpoint are a subset of the parameters in the /v1/maxscale endpoint. Because of this, the parameters in this endpoint are
deprecated as of MaxScale 6.0.
Note: In MaxScale 2.5 the log_throttling and ms_timestamp parameters
were incorrectly named as throttling and highprecision. In MaxScale 6,
the parameter names are now correct which means the parameters declared here
aren't fully backwards compatible.
Response
Status: 200 OK
Get the contents of the MaxScale logs. This endpoint was added in MaxScale 6.
To navigate the log, use the prev link to move backwards to older log
entries. The latest log entries can be read with the last link.
The entries are sorted in ascending order by the time they were logged. This means that with the default parameters, the latest logged event is the last element in the returned array.
Parameters
This endpoint supports the following parameters:
page[size]
Set number of rows of data to read. By default, 50 rows of data are read from the log.
page[cursor]
Set position from where the log data is retrieved. The default position to retrieve the log data is the end of the log. This value should not be modified by the user and the values returned in the
Response
Status: 200 OK
Get the contents of the MaxScale logs as separate entries. This endpoint was
added in MaxScale 24.02. This endpoint is nearly identical to the /v1/maxscale/logs/data endpoint except that this is a resource collection
where each log line is a separate resource.
Parameters
This endpoint supports the same parameters as the /v1/maxscale/logs/data endpoint.
Response
Status: 200 OK
Stream the contents of the MaxScale logs. This endpoint was added in MaxScale 6.
This endpoint opens a WebSocket
connection and streams the contents of the log to it. Each WebSocket message
will contain the JSON representation of the log message. The JSON is formatted
in the same way as the values in the log array of the /v1/maxscale/logs/data
endpoint:
If the client writes any data to the open socket, it will be treated as an error and the stream is closed.
The WebSocket ping and close commands are not yet supported and will be treated as errors.
When maxlog is used as source of log data, any log messages logged after log
rotation will not be sent if the file was moved or truncated. To fetch new
events after log rotation, reopen the WebSocket connection.
Parameters
This endpoint supports the following parameters:
page[cursor]
Set position from where the log data is retrieved. The default position to
retrieve the log data is the end of the log.
To stream data from a known point, first read the data via the /v1/maxscale/logs/data endpoint and then use the id value of the newest
log message (i.e. the first value in the log array) to start the stream.
priority
Response
Upgrade started:
Status: 101 Switching Protocols
Client didn't request a WebSocket upgrade:
Status: 426 Upgrade Required
Note: The modification of logging parameters via this endpoint has been
deprecated in MaxScale 6.0. The parameters should be modified with the /v1/maxscale endpoint instead.
Any PATCH requests done to this endpoint will be redirected to the /v1/maxscale endpoint. Due to the misspelling of the ms_timestamp and log_throttling parameters, this is not fully backwards compatible.
Update logging parameters. The request body must define updated values for the data.attributes.parameters object. All logging parameters can be altered at runtime.
Response
Parameters modified:
Status: 204 No Content
Invalid JSON body:
Status: 400 Bad Request
Flushes any pending messages to disk and reopens the log files. The body of the message is ignored.
Response
Status: 204 No Content
Reloads all TLS certificates for listeners and servers as well as the REST API itself. If the reloading fails, the old certificates will remain in use for the objects that failed to reload. This also causes the JWT signature keys to be reloaded if one of the asymmetric key algorithms is being used. If JWTs are being signed with a random symmetric keys, a new random key is created.
The reloading is not transactional: if a single listener or server fails to reload its certificates, the remaining ones are not reloaded. This means that a failed reload can partially reload certificates. The REST API certificates are only reloaded if all other certificate reloads were successful.
Response
Status: 204 No Content
Retrieve information about a loaded module. The :name must be the name of a
valid loaded module or either maxscale or servers.
The maxscale module will display the global configuration options
(i.e. everything under the [maxscale] section) as a module.
The servers module displays the server object type and the configuration
parameters it accepts as a module.
Any parameter with the modifiable value set to true can be modified
at runtime using a PATCH command on the corresponding object endpoint.
Response
Status: 200 OK
Retrieve information about all loaded modules.
This endpoint supports the load=all parameter. When defined, all modules
located in the MaxScale module directory (libdir) will be loaded. This allows
one to see the parameters of a module before the object is created.
Response
Status: 200 OK
For read-only commands:
For commands that can modify data:
Modules can expose commands that can be called via the REST API. The module
resource lists all commands in the data.attributes.commands list. Each value
is a command sub-resource identified by its id field and the HTTP method the
command uses is defined by the attributes.method field.
The :module in the URI must be a valid name of a loaded module and :command must be a valid command identifier that is exposed by that module. All parameters to the module commands are passed as HTTP request parameters.
Here is an example POST request to the mariadbmon module command reset-replication with two parameters, the name of the monitor instance and the server name:
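A sketch of what such a request might look like, assuming a monitor named MariaDB-Monitor and a server named server1 (both placeholders), with the two parameters passed as HTTP request parameters:
POST /v1/maxscale/modules/mariadbmon/reset-replication?MariaDB-Monitor&server1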
Response
Command with output:
Status: 200 OK
The contents of the meta field will contain the output of the module
command. This output depends on the command that is being executed. It can
contain any valid JSON value.
Command with no output:
Status: 204 No Content
Classify provided statement and return the result.
Response
Status: 200 OK
This page is licensed: CC BY-SA / Gnu FDL
links
id
priority
Include messages only from these log levels. The default is to include all
messages.
The value given should be a comma-separated list of log priorities. The
priorities are alert, error, warning, notice, info and debug. Note that the debug log level is only used in debug builds of
MaxScale.
GET /v1/maxscale
{
"data": {
"attributes": {
"activated_at": "Fri, 05 Jan 2024 07:23:54 GMT",
"commit": "af392f1a43e72e5538e1956aa500d77aca4d4456",
"config_sync": null,
"parameters": {
"admin_audit": false,
"admin_audit_exclude_methods": [],
"admin_audit_file": "/var/log/maxscale/admin_audit.csv",
"admin_auth": true,
"admin_enabled": true,
"admin_gui": true,
"admin_host": "127.0.0.1",
"admin_jwt_algorithm": "auto",
"admin_jwt_issuer": "maxscale",
"admin_jwt_key": null,
"admin_jwt_max_age": "86400000ms",
"admin_log_auth_failures": true,
"admin_oidc_url": null,
"admin_pam_readonly_service": null,
"admin_pam_readwrite_service": null,
"admin_port": 8989,
"admin_readonly_hosts": "*",
"admin_readwrite_hosts": "*",
"admin_secure_gui": false,
"admin_ssl_ca": null,
"admin_ssl_cert": null,
"admin_ssl_key": null,
"admin_ssl_version": "MAX",
"admin_verify_url": null,
"auth_connect_timeout": "10000ms",
"auth_read_timeout": "10000ms",
"auth_write_timeout": "10000ms",
"auto_tune": [],
"cachedir": "/var/cache/maxscale",
"config_sync_cluster": null,
"config_sync_db": "mysql",
"config_sync_interval": "5000ms",
"config_sync_password": null,
"config_sync_timeout": "10000ms",
"config_sync_user": null,
"connector_plugindir": "/usr/lib64/maxscale/plugin",
"datadir": "/var/lib/maxscale",
"debug": null,
"dump_last_statements": "never",
"execdir": "/usr/bin",
"key_manager": "none",
"language": "/var/lib/maxscale",
"libdir": "/usr/lib64/maxscale",
"load_persisted_configs": true,
"local_address": null,
"log_debug": false,
"log_info": false,
"log_notice": true,
"log_throttling": {
"count": 10,
"suppress": 10000,
"window": 1000
},
"log_warn_super_user": false,
"log_warning": true,
"logdir": "/var/log/maxscale",
"max_auth_errors_until_block": 10,
"max_read_amount": 0,
"maxlog": true,
"module_configdir": "/etc/maxscale.modules.d",
"ms_timestamp": false,
"passive": false,
"persist_runtime_changes": true,
"persistdir": "/var/lib/maxscale/maxscale.cnf.d",
"piddir": "/var/run/maxscale",
"query_classifier_cache_size": 5003753472,
"query_retries": 1,
"query_retry_timeout": "5000ms",
"rebalance_period": "0ms",
"rebalance_threshold": 20,
"rebalance_window": 10,
"retain_last_statements": 0,
"session_trace": 0,
"session_trace_match": null,
"skip_name_resolve": false,
"sql_mode": "default",
"syslog": false,
"threads": 3,
"threads_max": 256,
"users_refresh_interval": "0ms",
"users_refresh_time": "0ms",
"writeq_high_water": 65536,
"writeq_low_water": 1024
},
"process_datadir": "/var/lib/maxscale/data1",
"started_at": "Fri, 05 Jan 2024 07:23:54 GMT",
"system": {
"machine": {
"cores_available": 8,
"cores_physical": 8,
"cores_virtual": 8.0,
"memory_available": 33358356480,
"memory_physical": 33358356480
},
"maxscale": {
"query_classifier_cache_size": 5003753472,
"threads": 3
},
"os": {
"machine": "x86_64",
"nodename": "monolith",
"release": "6.6.4-100.fc38.x86_64",
"sysname": "Linux",
"version": "#1 SMP PREEMPT_DYNAMIC Sun Dec 3 18:11:27 UTC 2023"
}
},
"uptime": 12,
"version": "24.02.0"
},
"id": "maxscale",
"type": "maxscale"
},
"links": {
"self": "http://localhost:8989/v1/maxscale/"
}
}
PATCH /v1/maxscale
GET /v1/maxscale/threads/:id
{
"data": {
"attributes": {
"stats": {
"accepts": 0,
"avg_event_queue_length": 1,
"current_descriptors": 5,
"errors": 0,
"hangups": 0,
"listening": true,
"load": {
"last_hour": 0,
"last_minute": 0,
"last_second": 0
},
"max_event_queue_length": 1,
"max_exec_time": 0,
"max_queue_time": 0,
"memory": {
"query_classifier": 0,
"sessions": 0,
"total": 0,
"zombies": 0
},
"query_classifier_cache": {
"evictions": 0,
"hits": 0,
"inserts": 0,
"misses": 0,
"size": 0
},
"reads": 20,
"sessions": 0,
"state": "Active",
"total_descriptors": 5,
"writes": 0,
"zombies": 0
}
},
"id": "0",
"links": {
"self": "http://localhost:8989/v1/threads/0/"
},
"type": "threads"
},
"links": {
"self": "http://localhost:8989/v1/maxscale/threads/0/"
}
}

GET /v1/maxscale/threads
{
"data": [
{
"attributes": {
"stats": {
"accepts": 0,
"avg_event_queue_length": 1,
"current_descriptors": 5,
"errors": 0,
"hangups": 0,
"listening": true,
"load": {
"last_hour": 0,
"last_minute": 0,
"last_second": 0
},
"max_event_queue_length": 1,
"max_exec_time": 0,
"max_queue_time": 0,
"memory": {
"query_classifier": 0,
"sessions": 0,
"total": 0,
"zombies": 0
},
"query_classifier_cache": {
"evictions": 0,
"hits": 0,
"inserts": 0,
"misses": 0,
"size": 0
},
"reads": 21,
"sessions": 0,
"state": "Active",
"total_descriptors": 5,
"writes": 0,
"zombies": 0
}
},
"id": "0",
"links": {
"self": "http://localhost:8989/v1/threads/0/"
},
"type": "threads"
},
{
"attributes": {
"stats": {
"accepts": 1,
"avg_event_queue_length": 1,
"current_descriptors": 8,
"errors": 0,
"hangups": 0,
"listening": true,
"load": {
"last_hour": 0,
"last_minute": 0,
"last_second": 0
},
"max_event_queue_length": 2,
"max_exec_time": 1,
"max_queue_time": 0,
"memory": {
"query_classifier": 1481,
"sessions": 70221,
"total": 71702,
"zombies": 0
},
"query_classifier_cache": {
"evictions": 0,
"hits": 0,
"inserts": 3,
"misses": 4,
"size": 1481
},
"reads": 35,
"sessions": 1,
"state": "Active",
"total_descriptors": 8,
"writes": 15,
"zombies": 0
}
},
"id": "1",
"links": {
"self": "http://localhost:8989/v1/threads/1/"
},
"type": "threads"
},
{
"attributes": {
"stats": {
"accepts": 0,
"avg_event_queue_length": 1,
"current_descriptors": 5,
"errors": 0,
"hangups": 0,
"listening": true,
"load": {
"last_hour": 0,
"last_minute": 0,
"last_second": 0
},
"max_event_queue_length": 1,
"max_exec_time": 0,
"max_queue_time": 0,
"memory": {
"query_classifier": 0,
"sessions": 0,
"total": 0,
"zombies": 0
},
"query_classifier_cache": {
"evictions": 0,
"hits": 0,
"inserts": 0,
"misses": 0,
"size": 0
},
"reads": 20,
"sessions": 0,
"state": "Active",
"total_descriptors": 5,
"writes": 0,
"zombies": 0
}
},
"id": "2",
"links": {
"self": "http://localhost:8989/v1/threads/2/"
},
"type": "threads"
}
],
"links": {
"self": "http://localhost:8989/v1/maxscale/threads/"
}
}

GET /v1/maxscale/logs
{
"data": {
"attributes": {
"log_file": "/var/log/maxscale/maxscale.log",
"log_priorities": [
"alert",
"error",
"warning",
"notice"
],
"parameters": {
"log_debug": false,
"log_info": false,
"log_notice": true,
"log_throttling": {
"count": 10,
"suppress": 10000,
"window": 1000
},
"log_warning": true,
"maxlog": true,
"ms_timestamp": false,
"syslog": false
}
},
"id": "logs",
"type": "logs"
},
"links": {
"self": "http://localhost:8989/v1/maxscale/logs/"
}
}

GET /v1/maxscale/logs/data
{
"data": {
"attributes": {
"log": [
{
"id": "37",
"message": "MaxScale started with 3 worker threads.",
"priority": "notice",
"timestamp": "2024-01-05 07:23:54",
"unix_timestamp": 1704439434
},
{
"id": "38",
"message": "Read 8 user@host entries from 'server1' for service 'RW-Split-Router'.",
"priority": "notice",
"timestamp": "2024-01-05 07:23:55",
"unix_timestamp": 1704439435
},
{
"id": "39",
"message": "Read 8 user@host entries from 'server1' for service 'Read-Connection-Router'.",
"priority": "notice",
"timestamp": "2024-01-05 07:23:55",
"unix_timestamp": 1704439435
}
],
"log_source": "maxlog"
},
"id": "log_data",
"type": "log_data"
},
"links": {
"last": "http://localhost:8989/v1/maxscale/logs/data/?page%5Bsize%5D=3",
"prev": "http://localhost:8989/v1/maxscale/logs/data/?page%5Bcursor%5D=34&page%5Bsize%5D=3",
"self": "http://localhost:8989/v1/maxscale/logs/data/?page%5Bcursor%5D=40&page%5Bsize%5D=3"
}
}

GET /v1/maxscale/logs/entries
{
"data": [
{
"attributes": {
"log_source": "maxlog",
"message": "MaxScale started with 3 worker threads.",
"priority": "notice",
"timestamp": "2024-01-05 07:23:54",
"unix_timestamp": 1704439434
},
"id": "37",
"type": "log_entry"
},
{
"attributes": {
"log_source": "maxlog",
"message": "Read 8 user@host entries from 'server1' for service 'RW-Split-Router'.",
"priority": "notice",
"timestamp": "2024-01-05 07:23:55",
"unix_timestamp": 1704439435
},
"id": "38",
"type": "log_entry"
},
{
"attributes": {
"log_source": "maxlog",
"message": "Read 8 user@host entries from 'server1' for service 'Read-Connection-Router'.",
"priority": "notice",
"timestamp": "2024-01-05 07:23:55",
"unix_timestamp": 1704439435
},
"id": "39",
"type": "log_entry"
}
],
"links": {
"last": "http://localhost:8989/v1/maxscale/logs/entries/?page%5Bsize%5D=3",
"prev": "http://localhost:8989/v1/maxscale/logs/entries/?page%5Bcursor%5D=34&page%5Bsize%5D=3",
"self": "http://localhost:8989/v1/maxscale/logs/entries/?page%5Bcursor%5D=40&page%5Bsize%5D=3"
},
"meta": {
"total": 3
}
}

GET /v1/maxscale/logs/stream
{
"id": "572",
"message": "MaxScale started with 8 worker threads, each with a stack size of 8388608 bytes.",
"priority": "notice",
"timestamp": "2020-09-25 10:01:29"
}

PATCH /v1/maxscale/logs

POST /v1/maxscale/logs/flush

POST /v1/maxscale/tls/reload

GET /v1/maxscale/modules/:name
{
"data": {
"attributes": {
"api": "router",
"commands": [
{
"attributes": {
"arg_max": 1,
"arg_min": 1,
"description": "Reset global GTID state in readwritesplit.",
"method": "POST",
"parameters": [
{
"description": "Readwritesplit service",
"required": true,
"type": "SERVICE"
}
]
},
"id": "reset-gtid",
"links": {
"self": "http://localhost:8989/v1/modules/readwritesplit/reset-gtid/"
},
"type": "module_command"
}
],
"description": "A Read/Write splitting router for enhancement read scalability",
"maturity": "GA",
"module_type": "Router",
"parameters": [
{
"default_value": "none",
"description": "Causal reads mode",
"enum_values": [
"none",
"local",
"global",
"fast_global",
"fast",
"universal",
"fast_universal",
"false",
"off",
"0",
"true",
"on",
"1"
],
"mandatory": false,
"modifiable": true,
"name": "causal_reads",
"type": "enum"
},
{
"default_value": "10000ms",
"description": "Timeout for the slave synchronization",
"mandatory": false,
"modifiable": true,
"name": "causal_reads_timeout",
"type": "duration",
"unit": "ms"
},
{
"default_value": false,
"description": "Retry failed writes outside of transactions",
"mandatory": false,
"modifiable": true,
"name": "delayed_retry",
"type": "bool"
},
{
"default_value": "10000ms",
"description": "Timeout for delayed_retry",
"mandatory": false,
"modifiable": true,
"name": "delayed_retry_timeout",
"type": "duration",
"unit": "ms"
},
{
"default_value": false,
"description": "Create connections only when needed",
"mandatory": false,
"modifiable": true,
"name": "lazy_connect",
"type": "bool"
},
{
"default_value": false,
"description": "Use master for reads",
"mandatory": false,
"modifiable": true,
"name": "master_accept_reads",
"type": "bool"
},
{
"default_value": "fail_on_write",
"description": "Master failure mode behavior",
"enum_values": [
"fail_instantly",
"fail_on_write",
"error_on_write"
],
"mandatory": false,
"modifiable": true,
"name": "master_failure_mode",
"type": "enum"
},
{
"default_value": true,
"description": "Reconnect to master",
"mandatory": false,
"modifiable": true,
"name": "master_reconnection",
"type": "bool"
},
{
"default_value": "0ms",
"description": "Maximum replication lag",
"mandatory": false,
"modifiable": true,
"name": "max_replication_lag",
"type": "duration",
"unit": "ms"
},
{
"default_value": 255,
"description": "Maximum number of slave connections",
"mandatory": false,
"modifiable": true,
"name": "max_slave_connections",
"type": "count"
},
{
"deprecated": true,
"description": "Alias for 'max_replication_lag'",
"mandatory": false,
"modifiable": true,
"name": "max_slave_replication_lag",
"type": "duration"
},
{
"default_value": false,
"description": "Optimistically offload transactions to slaves",
"mandatory": false,
"modifiable": true,
"name": "optimistic_trx",
"type": "bool"
},
{
"default_value": true,
"description": "Automatically retry failed reads outside of transactions",
"mandatory": false,
"modifiable": true,
"name": "retry_failed_reads",
"type": "bool"
},
{
"default_value": false,
"description": "Reuse identical prepared statements inside the same connection",
"mandatory": false,
"modifiable": true,
"name": "reuse_prepared_statements",
"type": "bool"
},
{
"default_value": 255,
"description": "Starting number of slave connections",
"mandatory": false,
"modifiable": true,
"name": "slave_connections",
"type": "count"
},
{
"default_value": "least_current_operations",
"description": "Slave selection criteria",
"enum_values": [
"least_global_connections",
"least_router_connections",
"least_behind_master",
"least_current_operations",
"adaptive_routing",
"LEAST_GLOBAL_CONNECTIONS",
"LEAST_ROUTER_CONNECTIONS",
"LEAST_BEHIND_MASTER",
"LEAST_CURRENT_OPERATIONS",
"ADAPTIVE_ROUTING"
],
"mandatory": false,
"modifiable": true,
"name": "slave_selection_criteria",
"type": "enum"
},
{
"default_value": false,
"description": "Lock connection to master after multi-statement query",
"mandatory": false,
"modifiable": true,
"name": "strict_multi_stmt",
"type": "bool"
},
{
"default_value": false,
"description": "Lock connection to master after a stored procedure is executed",
"mandatory": false,
"modifiable": true,
"name": "strict_sp_calls",
"type": "bool"
},
{
"default_value": true,
"description": "Prevent reconnections if temporary tables exist",
"mandatory": false,
"modifiable": true,
"name": "strict_tmp_tables",
"type": "bool"
},
{
"default_value": false,
"description": "Retry failed transactions",
"mandatory": false,
"modifiable": true,
"name": "transaction_replay",
"type": "bool"
},
{
"default_value": 5,
"description": "Maximum number of times to retry a transaction",
"mandatory": false,
"modifiable": true,
"name": "transaction_replay_attempts",
"type": "count"
},
{
"default_value": "full",
"description": "Type of checksum to calculate for results",
"enum_values": [
"full",
"result_only",
"no_insert_id"
],
"mandatory": false,
"modifiable": true,
"name": "transaction_replay_checksum",
"type": "enum"
},
{
"default_value": 1048576,
"description": "Maximum size of transaction to retry",
"mandatory": false,
"modifiable": true,
"name": "transaction_replay_max_size",
"type": "size"
},
{
"default_value": false,
"description": "Retry transaction on deadlock",
"mandatory": false,
"modifiable": true,
"name": "transaction_replay_retry_on_deadlock",
"type": "bool"
},
{
"default_value": false,
"description": "Retry transaction on checksum mismatch",
"mandatory": false,
"modifiable": true,
"name": "transaction_replay_retry_on_mismatch",
"type": "bool"
},
{
"default_value": true,
"description": "Prevent replaying of about-to-commit transaction",
"mandatory": false,
"modifiable": true,
"name": "transaction_replay_safe_commit",
"type": "bool"
},
{
"default_value": "30000ms",
"description": "Timeout for transaction replay",
"mandatory": false,
"modifiable": true,
"name": "transaction_replay_timeout",
"type": "duration",
"unit": "ms"
},
{
"default_value": "all",
"description": "Whether to route SQL variable modifications to all servers or only to the master",
"enum_values": [
"all",
"master"
],
"mandatory": false,
"modifiable": true,
"name": "use_sql_variables_in",
"type": "enum"
},
{
"default_value": false,
"deprecated": true,
"description": "Retrieve users from all backend servers instead of only one",
"mandatory": false,
"modifiable": true,
"name": "auth_all_servers",
"type": "bool"
},
{
"default_value": "300000ms",
"description": "How often idle connections are pinged",
"mandatory": false,
"modifiable": true,
"name": "connection_keepalive",
"type": "duration",
"unit": "ms"
},
{
"deprecated": true,
"description": "Alias for 'wait_timeout'",
"mandatory": false,
"modifiable": true,
"name": "connection_timeout",
"type": "duration"
},
{
"default_value": false,
"description": "Disable session command history",
"mandatory": false,
"modifiable": true,
"name": "disable_sescmd_history",
"type": "bool"
},
{
"default_value": false,
"description": "Allow the root user to connect to this service",
"mandatory": false,
"modifiable": true,
"name": "enable_root_user",
"type": "bool"
},
{
"default_value": false,
"description": "Ping connections unconditionally",
"mandatory": false,
"modifiable": true,
"name": "force_connection_keepalive",
"type": "bool"
},
{
"default_value": "-1ms",
"description": "Put connections into pool after session has been idle for this long",
"mandatory": false,
"modifiable": true,
"name": "idle_session_pool_time",
"type": "duration",
"unit": "ms"
},
{
"default_value": true,
"description": "Match localhost to wildcard host",
"mandatory": false,
"modifiable": true,
"name": "localhost_match_wildcard_host",
"type": "bool"
},
{
"default_value": true,
"description": "Log a warning when client authentication fails",
"mandatory": false,
"modifiable": true,
"name": "log_auth_warnings",
"type": "bool"
},
{
"default_value": false,
"description": "Log debug messages for this service (debug builds only)",
"mandatory": false,
"modifiable": true,
"name": "log_debug",
"type": "bool"
},
{
"default_value": false,
"description": "Log info messages for this service",
"mandatory": false,
"modifiable": true,
"name": "log_info",
"type": "bool"
},
{
"default_value": false,
"description": "Log notice messages for this service",
"mandatory": false,
"modifiable": true,
"name": "log_notice",
"type": "bool"
},
{
"default_value": false,
"description": "Log warning messages for this service",
"mandatory": false,
"modifiable": true,
"name": "log_warning",
"type": "bool"
},
{
"default_value": 0,
"description": "Maximum number of connections",
"mandatory": false,
"modifiable": true,
"name": "max_connections",
"type": "count"
},
{
"default_value": 50,
"description": "Session command history size",
"mandatory": false,
"modifiable": true,
"name": "max_sescmd_history",
"type": "count"
},
{
"default_value": "60000ms",
"description": "How long a session can wait for a connection to become available",
"mandatory": false,
"modifiable": true,
"name": "multiplex_timeout",
"type": "duration",
"unit": "ms"
},
{
"default_value": "0ms",
"description": "Network write timeout",
"mandatory": false,
"modifiable": true,
"name": "net_write_timeout",
"type": "duration",
"unit": "ms"
},
{
"description": "Password for the user used to retrieve database users",
"mandatory": true,
"modifiable": true,
"name": "password",
"type": "password"
},
{
"default_value": true,
"description": "Prune old session command history if the limit is exceeded",
"mandatory": false,
"modifiable": true,
"name": "prune_sescmd_history",
"type": "bool"
},
{
"default_value": "primary",
"description": "Service rank",
"enum_values": [
"primary",
"secondary"
],
"mandatory": false,
"modifiable": true,
"name": "rank",
"type": "enum"
},
{
"default_value": -1,
"description": "Number of statements kept in memory",
"mandatory": false,
"modifiable": true,
"name": "retain_last_statements",
"type": "int"
},
{
"default_value": false,
"description": "Enable session tracing for this service",
"mandatory": false,
"modifiable": true,
"name": "session_trace",
"type": "bool"
},
{
"default_value": false,
"deprecated": true,
"description": "Track session state using server responses",
"mandatory": false,
"modifiable": true,
"name": "session_track_trx_state",
"type": "bool"
},
{
"default_value": true,
"deprecated": true,
"description": "Strip escape characters from database names",
"mandatory": false,
"modifiable": true,
"name": "strip_db_esc",
"type": "bool"
},
{
"description": "Username used to retrieve database users",
"mandatory": true,
"modifiable": true,
"name": "user",
"type": "string"
},
{
"description": "Load additional users from a file",
"mandatory": false,
"modifiable": false,
"name": "user_accounts_file",
"type": "path"
},
{
"default_value": "add_when_load_ok",
"description": "When and how the user accounts file is used",
"enum_values": [
"add_when_load_ok",
"file_only_always"
],
"mandatory": false,
"modifiable": false,
"name": "user_accounts_file_usage",
"type": "enum"
},
{
"description": "Custom version string to use",
"mandatory": false,
"modifiable": true,
"name": "version_string",
"type": "string"
},
{
"default_value": "0ms",
"description": "Connection idle timeout",
"mandatory": false,
"modifiable": true,
"name": "wait_timeout",
"type": "duration",
"unit": "ms"
}
],
"version": "V1.1.0"
},
"id": "readwritesplit",
"links": {
"self": "http://localhost:8989/v1/modules/readwritesplit/"
},
"type": "modules"
},
"links": {
"self": "http://localhost:8989/v1/maxscale/modules/"
}
}

GET /v1/maxscale/modules
{
"data": [
{
"attributes": {
"commands": [],
"description": "maxscale",
"maturity": "GA",
"module_type": "maxscale",
"parameters": [
{
"default_value": false,
"description": "Enable REST audit logging",
"mandatory": false,
"modifiable": true,
"name": "admin_audit",
"type": "bool"
},
{
"default_value": [],
"description": "List of HTTP methods to exclude from audit logging, e.g. \"GET\"",
"enum_values": [
"GET",
"PUT",
"POST",
"PATCH",
"DELETE",
"HEAD",
"CONNECT",
"OPTIONS",
"TRACE"
],
"mandatory": false,
"modifiable": true,
"name": "admin_audit_exclude_methods",
"type": "enum list"
},
{
"default_value": "/var/log/maxscale/admin_audit.csv",
"description": "Full path to admin audit file",
"mandatory": false,
"modifiable": true,
"name": "admin_audit_file",
"type": "string"
},
{
"default_value": true,
"description": "Admin interface authentication.",
"mandatory": false,
"modifiable": false,
"name": "admin_auth",
"type": "bool"
},
{
"default_value": true,
"description": "Admin interface is enabled.",
"mandatory": false,
"modifiable": false,
"name": "admin_enabled",
"type": "bool"
},
{
"default_value": true,
"description": "Enable admin GUI.",
"mandatory": false,
"modifiable": false,
"name": "admin_gui",
"type": "bool"
},
{
"default_value": "127.0.0.1",
"description": "Admin interface host.",
"mandatory": false,
"modifiable": false,
"name": "admin_host",
"type": "string"
},
{
"default_value": "auto",
"description": "JWT signature algorithm",
"enum_values": [
"auto",
"HS256",
"HS384",
"HS512",
"RS256",
"RS384",
"RS512",
"ES256",
"ES384",
"ES512",
"PS256",
"PS384",
"PS512",
"ED25519",
"ED448"
],
"mandatory": false,
"modifiable": false,
"name": "admin_jwt_algorithm",
"type": "enum"
},
{
"default_value": "maxscale",
"description": "The issuer claim for all JWTs generated by MaxScale.",
"mandatory": false,
"modifiable": false,
"name": "admin_jwt_issuer",
"type": "string"
},
{
"description": "Encryption key ID for symmetric signature algorithms. If left empty, MaxScale will generate a random key that is used to sign the JWT.",
"mandatory": false,
"modifiable": false,
"name": "admin_jwt_key",
"type": "string"
},
{
"default_value": "86400000ms",
"description": "Maximum age of the JWTs generated by MaxScale",
"mandatory": false,
"modifiable": true,
"name": "admin_jwt_max_age",
"type": "duration",
"unit": "ms"
},
{
"default_value": true,
"description": "Log admin interface authentication failures.",
"mandatory": false,
"modifiable": true,
"name": "admin_log_auth_failures",
"type": "bool"
},
{
"description": "Extra public certificates used to validate externally signed JWTs",
"mandatory": false,
"modifiable": true,
"name": "admin_oidc_url",
"type": "string"
},
{
"description": "PAM service for read-only users.",
"mandatory": false,
"modifiable": false,
"name": "admin_pam_readonly_service",
"type": "string"
},
{
"description": "PAM service for read-write users.",
"mandatory": false,
"modifiable": false,
"name": "admin_pam_readwrite_service",
"type": "string"
},
{
"default_value": 8989,
"description": "Admin interface port.",
"mandatory": false,
"modifiable": false,
"name": "admin_port",
"type": "int"
},
{
"default_value": "*",
"description": "Allowed hosts for read-only rest-api users.",
"mandatory": false,
"modifiable": false,
"name": "admin_readonly_hosts",
"type": "host pattern list"
},
{
"default_value": "*",
"description": "Allowed hosts for read-only rest-api users.",
"mandatory": false,
"modifiable": false,
"name": "admin_readwrite_hosts",
"type": "host pattern list"
},
{
"default_value": true,
"description": "Only serve GUI over HTTPS.",
"mandatory": false,
"modifiable": false,
"name": "admin_secure_gui",
"type": "bool"
},
{
"description": "Admin SSL CA cert",
"mandatory": false,
"modifiable": false,
"name": "admin_ssl_ca",
"type": "path"
},
{
"deprecated": true,
"description": "Alias for 'admin_ssl_ca'",
"mandatory": false,
"modifiable": false,
"name": "admin_ssl_ca_cert",
"type": "path"
},
{
"description": "Admin SSL cert",
"mandatory": false,
"modifiable": true,
"name": "admin_ssl_cert",
"type": "path"
},
{
"description": "Admin SSL key",
"mandatory": false,
"modifiable": true,
"name": "admin_ssl_key",
"type": "path"
},
{
"default_value": "MAX",
"description": "Minimum required TLS protocol version for the REST API",
"enum_values": [
"MAX",
"TLSv10",
"TLSv11",
"TLSv12",
"TLSv13"
],
"mandatory": false,
"modifiable": false,
"name": "admin_ssl_version",
"type": "enum"
},
{
"description": "URL for third-party verification of client tokens",
"mandatory": false,
"modifiable": false,
"name": "admin_verify_url",
"type": "string"
},
{
"default_value": "10000ms",
"description": "Connection timeout for fetching user accounts.",
"mandatory": false,
"modifiable": true,
"name": "auth_connect_timeout",
"type": "duration",
"unit": "ms"
},
{
"default_value": "10000ms",
"description": "Read timeout for fetching user accounts (deprecated).",
"mandatory": false,
"modifiable": true,
"name": "auth_read_timeout",
"type": "duration",
"unit": "ms"
},
{
"default_value": "10000ms",
"description": "Write timeout for fetching user accounts (deprecated).",
"mandatory": false,
"modifiable": true,
"name": "auth_write_timeout",
"type": "duration",
"unit": "ms"
},
{
"default_value": [],
"description": "Specifies whether a MaxScale parameter whose value depends on a specific global server variable, should automatically be updated to match the variable's current value.",
"mandatory": false,
"modifiable": false,
"name": "auto_tune",
"type": "stringlist"
},
{
"description": "Cluster used for configuration synchronization. If left empty (i.e. value is \"\"), synchronization is not done.",
"mandatory": false,
"modifiable": true,
"name": "config_sync_cluster",
"type": "string"
},
{
"default_value": "mysql",
"description": "Database where the 'maxscale_config' table is created.",
"mandatory": false,
"modifiable": false,
"name": "config_sync_db",
"type": "string"
},
{
"default_value": "5000ms",
"description": "How often to synchronize the configuration.",
"mandatory": false,
"modifiable": true,
"name": "config_sync_interval",
"type": "duration",
"unit": "ms"
},
{
"description": "Password for the user used for configuration synchronization.",
"mandatory": false,
"modifiable": true,
"name": "config_sync_password",
"type": "password"
},
{
"default_value": "10000ms",
"description": "Timeout for the configuration synchronization operations.",
"mandatory": false,
"modifiable": true,
"name": "config_sync_timeout",
"type": "duration",
"unit": "ms"
},
{
"description": "User account used for configuration synchronization.",
"mandatory": false,
"modifiable": true,
"name": "config_sync_user",
"type": "string"
},
{
"description": "Debug options",
"mandatory": false,
"modifiable": false,
"name": "debug",
"type": "string"
},
{
"default_value": "never",
"description": "In what circumstances should the last statements that a client sent be dumped.",
"enum_values": [
"on_close",
"on_error",
"never"
],
"mandatory": false,
"modifiable": true,
"name": "dump_last_statements",
"type": "enum"
},
{
"default_value": "none",
"description": "Key manager type",
"enum_values": [
"none",
"file",
"kmip",
"vault"
],
"mandatory": false,
"modifiable": true,
"name": "key_manager",
"type": "enum"
},
{
"default_value": true,
"description": "Specifies whether persisted configuration files should be loaded on startup.",
"mandatory": false,
"modifiable": false,
"name": "load_persisted_configs",
"type": "bool"
},
{
"description": "Local address to use when connecting.",
"mandatory": false,
"modifiable": false,
"name": "local_address",
"type": "string"
},
{
"default_value": false,
"description": "Specifies whether debug messages should be logged (meaningful only with debug builds).",
"mandatory": false,
"modifiable": true,
"name": "log_debug",
"type": "bool"
},
{
"default_value": false,
"description": "Specifies whether info messages should be logged.",
"mandatory": false,
"modifiable": true,
"name": "log_info",
"type": "bool"
},
{
"default_value": true,
"description": "Specifies whether notice messages should be logged.",
"mandatory": false,
"modifiable": true,
"name": "log_notice",
"type": "bool"
},
{
"default_value": {
"count": 10,
"suppress": 10000,
"window": 1000
},
"description": "Limit the amount of identical log messages than can be logged during a certain time period.",
"mandatory": false,
"modifiable": true,
"name": "log_throttling",
"type": "throttling"
},
{
"default_value": false,
"description": "Log a warning when a user with super privilege logs in.",
"mandatory": false,
"modifiable": false,
"name": "log_warn_super_user",
"type": "bool"
},
{
"default_value": true,
"description": "Specifies whether warning messages should be logged.",
"mandatory": false,
"modifiable": true,
"name": "log_warning",
"type": "bool"
},
{
"default_value": 10,
"description": "The maximum number of authentication failures that are tolerated before a host is temporarily blocked.",
"mandatory": false,
"modifiable": true,
"name": "max_auth_errors_until_block",
"type": "int"
},
{
"default_value": 0,
"description": "Maximum amount of data read before return to epoll_wait.",
"mandatory": false,
"modifiable": false,
"name": "max_read_amount",
"type": "size"
},
{
"default_value": true,
"description": "Log to MaxScale's own log.",
"mandatory": false,
"modifiable": true,
"name": "maxlog",
"type": "bool"
},
{
"default_value": false,
"description": "Enable or disable high precision timestamps.",
"mandatory": false,
"modifiable": true,
"name": "ms_timestamp",
"type": "bool"
},
{
"default_value": false,
"description": "True if MaxScale is in passive mode.",
"mandatory": false,
"modifiable": true,
"name": "passive",
"type": "bool"
},
{
"default_value": true,
"description": "Persist configurations changes done at runtime.",
"mandatory": false,
"modifiable": false,
"name": "persist_runtime_changes",
"type": "bool"
},
{
"default_value": "qc_sqlite",
"deprecated": true,
"description": "The name of the query classifier to load.",
"mandatory": false,
"modifiable": false,
"name": "query_classifier",
"type": "string"
},
{
"deprecated": true,
"description": "Arguments for the query classifier.",
"mandatory": false,
"modifiable": false,
"name": "query_classifier_args",
"type": "string"
},
{
"default_value": 5003753472,
"description": "Maximum amount of memory used by query classifier cache.",
"mandatory": false,
"modifiable": true,
"name": "query_classifier_cache_size",
"type": "size"
},
{
"default_value": 1,
"description": "Number of times an interrupted query is retried.",
"mandatory": false,
"modifiable": false,
"name": "query_retries",
"type": "int"
},
{
"default_value": "5000ms",
"description": "The total timeout in seconds for any retried queries.",
"mandatory": false,
"modifiable": true,
"name": "query_retry_timeout",
"type": "duration",
"unit": "ms"
},
{
"default_value": "0ms",
"description": "How often should the load of the worker threads be checked and rebalancing be made.",
"mandatory": false,
"modifiable": true,
"name": "rebalance_period",
"type": "duration",
"unit": "ms"
},
{
"default_value": 20,
"description": "If the difference in load between the thread with the maximum load and the thread with the minimum load is larger than the value of this parameter, then work will be moved from the former to the latter.",
"mandatory": false,
"modifiable": true,
"name": "rebalance_threshold",
"type": "int"
},
{
"default_value": 10,
"description": "The load of how many seconds should be taken into account when rebalancing.",
"mandatory": false,
"modifiable": true,
"name": "rebalance_window",
"type": "count"
},
{
"default_value": 0,
"description": "How many statements should be retained for each session for debugging purposes.",
"mandatory": false,
"modifiable": true,
"name": "retain_last_statements",
"type": "count"
},
{
"default_value": 0,
"description": "How many log entries are stored in the session specific trace log.",
"mandatory": false,
"modifiable": true,
"name": "session_trace",
"type": "count"
},
{
"description": "Regular expression that is matched against the contents of the session trace log and if it matches the contents are logged when the session stops.",
"mandatory": false,
"modifiable": true,
"name": "session_trace_match",
"type": "regex"
},
{
"default_value": false,
"description": "Do not resolve client IP addresses to hostnames during authentication",
"mandatory": false,
"modifiable": true,
"name": "skip_name_resolve",
"type": "bool"
},
{
"default_value": false,
"deprecated": true,
"description": "Skip service and monitor permission checks.",
"mandatory": false,
"modifiable": true,
"name": "skip_permission_checks",
"type": "bool"
},
{
"default_value": "default",
"description": "The query classifier sql mode.",
"enum_values": [
"default",
"oracle"
],
"mandatory": false,
"modifiable": false,
"name": "sql_mode",
"type": "enum"
},
{
"default_value": false,
"description": "Log to syslog.",
"mandatory": false,
"modifiable": true,
"name": "syslog",
"type": "bool"
},
{
"default_value": 8,
"description": "This parameter specifies how many threads will be used for handling the routing.",
"mandatory": false,
"modifiable": true,
"name": "threads",
"type": "count"
},
{
"default_value": 256,
"description": "This parameter specifies a hard maximum for the number of routing threads.",
"mandatory": false,
"modifiable": false,
"name": "threads_max",
"type": "count"
},
{
"default_value": "0ms",
"description": "How often the users will be refreshed.",
"mandatory": false,
"modifiable": true,
"name": "users_refresh_interval",
"type": "duration",
"unit": "ms"
},
{
"default_value": "30000ms",
"description": "How often the users can be refreshed.",
"mandatory": false,
"modifiable": true,
"name": "users_refresh_time",
"type": "duration",
"unit": "ms"
},
{
"default_value": 65536,
"description": "High water mark of dcb write queue.",
"mandatory": false,
"modifiable": true,
"name": "writeq_high_water",
"type": "size"
},
{
"default_value": 1024,
"description": "Low water mark of dcb write queue.",
"mandatory": false,
"modifiable": true,
"name": "writeq_low_water",
"type": "size"
}
],
"version": "24.02.0"
},
"id": "maxscale",
"links": {
"self": "http://localhost:8989/v1/modules/maxscale/"
},
"type": "modules"
},
{
"attributes": {
"commands": [],
"description": "servers",
"maturity": "GA",
"module_type": "servers",
"parameters": [
{
"description": "Server address",
"mandatory": false,
"modifiable": true,
"name": "address",
"type": "string"
},
{
"description": "Server authenticator (deprecated)",
"mandatory": false,
"modifiable": false,
"name": "authenticator",
"type": "string"
},
{
"description": "Server disk space threshold",
"mandatory": false,
"modifiable": true,
"name": "disk_space_threshold",
"type": "disk_space_limits"
},
{
"default_value": 0,
"description": "Server extra port",
"mandatory": false,
"modifiable": true,
"name": "extra_port",
"type": "count"
},
{
"default_value": 0,
"description": "Maximum routing connections",
"mandatory": false,
"modifiable": true,
"name": "max_routing_connections",
"type": "count"
},
{
"description": "Monitor password",
"mandatory": false,
"modifiable": true,
"name": "monitorpw",
"type": "password"
},
{
"description": "Monitor user",
"mandatory": false,
"modifiable": true,
"name": "monitoruser",
"type": "string"
},
{
"default_value": "0ms",
"description": "Maximum time that a connection can be in the pool",
"mandatory": false,
"modifiable": true,
"name": "persistmaxtime",
"type": "duration",
"unit": "ms"
},
{
"default_value": 0,
"description": "Maximum size of the persistent connection pool",
"mandatory": false,
"modifiable": true,
"name": "persistpoolmax",
"type": "count"
},
{
"default_value": 3306,
"description": "Server port",
"mandatory": false,
"modifiable": true,
"name": "port",
"type": "count"
},
{
"default_value": 0,
"description": "Server priority",
"mandatory": false,
"modifiable": true,
"name": "priority",
"type": "int"
},
{
"description": "Server private address (replication)",
"mandatory": false,
"modifiable": true,
"name": "private_address",
"type": "string"
},
{
"description": "Server protocol (deprecated)",
"mandatory": false,
"modifiable": false,
"name": "protocol",
"type": "string"
},
{
"default_value": false,
"description": "Enable proxy protocol",
"mandatory": false,
"modifiable": true,
"name": "proxy_protocol",
"type": "bool"
},
{
"default_value": "primary",
"description": "Server rank",
"enum_values": [
"primary",
"secondary"
],
"mandatory": false,
"modifiable": true,
"name": "rank",
"type": "enum"
},
{
"description": "Custom CHANGE MASTER TO options",
"mandatory": false,
"modifiable": true,
"name": "replication_custom_options",
"type": "string"
},
{
"description": "Server UNIX socket",
"mandatory": false,
"modifiable": true,
"name": "socket",
"type": "string"
},
{
"default_value": false,
"description": "Enable TLS for server",
"mandatory": false,
"modifiable": true,
"name": "ssl",
"type": "bool"
},
{
"description": "TLS certificate authority",
"mandatory": false,
"modifiable": true,
"name": "ssl_ca",
"type": "path"
},
{
"deprecated": true,
"description": "Alias for 'ssl_ca'",
"mandatory": false,
"modifiable": true,
"name": "ssl_ca_cert",
"type": "path"
},
{
"description": "TLS public certificate",
"mandatory": false,
"modifiable": true,
"name": "ssl_cert",
"type": "path"
},
{
"default_value": 9,
"description": "TLS certificate verification depth",
"mandatory": false,
"modifiable": true,
"name": "ssl_cert_verify_depth",
"type": "count"
},
{
"description": "TLS cipher list",
"mandatory": false,
"modifiable": true,
"name": "ssl_cipher",
"type": "string"
},
{
"description": "TLS private key",
"mandatory": false,
"modifiable": true,
"name": "ssl_key",
"type": "path"
},
{
"default_value": false,
"description": "Verify TLS peer certificate",
"mandatory": false,
"modifiable": true,
"name": "ssl_verify_peer_certificate",
"type": "bool"
},
{
"default_value": false,
"description": "Verify TLS peer host",
"mandatory": false,
"modifiable": true,
"name": "ssl_verify_peer_host",
"type": "bool"
},
{
"default_value": "MAX",
"description": "Minimum TLS protocol version",
"enum_values": [
"MAX",
"TLSv10",
"TLSv11",
"TLSv12",
"TLSv13"
],
"mandatory": false,
"modifiable": true,
"name": "ssl_version",
"type": "enum"
},
{
"default_value": "server",
"description": "Object type",
"mandatory": false,
"modifiable": false,
"name": "type",
"type": "string"
}
],
"version": "24.02.0"
},
"id": "servers",
"links": {
"self": "http://localhost:8989/v1/modules/servers/"
},
"type": "modules"
},
{
"attributes": {
"api": "filter",
"commands": [],
"description": "A hint parsing filter",
"maturity": "Alpha",
"module_type": "Filter",
"parameters": [],
"version": "V1.0.0"
},
"id": "hintfilter",
"links": {
"self": "http://localhost:8989/v1/modules/hintfilter/"
},
"type": "modules"
},
{
"attributes": {
"api": "authenticator",
"commands": [],
"description": "Standard MySQL/MariaDB authentication (mysql_native_password)",
"maturity": "GA",
"module_type": "Authenticator",
"parameters": null,
"version": "V2.1.0"
},
"id": "MariaDBAuth",
"links": {
"self": "http://localhost:8989/v1/modules/MariaDBAuth/"
},
"type": "modules"
},
{
"attributes": {
"api": "monitor",
"commands": [
{
"attributes": {
"arg_max": 3,
"arg_min": 1,
"description": "Switch primary server with replica",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
},
{
"description": "New primary (optional)",
"required": false,
"type": "[SERVER]"
},
{
"description": "Current primary (optional)",
"required": false,
"type": "[SERVER]"
}
]
},
"id": "switchover",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/switchover/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 3,
"arg_min": 1,
"description": "Switch primary server with replica. Ignores most errors.",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
},
{
"description": "New primary (optional)",
"required": false,
"type": "[SERVER]"
},
{
"description": "Current primary (optional)",
"required": false,
"type": "[SERVER]"
}
]
},
"id": "switchover-force",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/switchover-force/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 3,
"arg_min": 1,
"description": "Schedule primary switchover. Does not wait for completion.",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
},
{
"description": "New primary (optional)",
"required": false,
"type": "[SERVER]"
},
{
"description": "Current primary (optional)",
"required": false,
"type": "[SERVER]"
}
]
},
"id": "async-switchover",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/async-switchover/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 1,
"arg_min": 1,
"description": "Perform primary failover",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
}
]
},
"id": "failover",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/failover/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 1,
"arg_min": 1,
"description": "Schedule primary failover. Does not wait for completion.",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
}
]
},
"id": "async-failover",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/async-failover/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 2,
"arg_min": 2,
"description": "Rejoin server to a cluster",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
},
{
"description": "Joining server",
"required": true,
"type": "SERVER"
}
]
},
"id": "rejoin",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/rejoin/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 2,
"arg_min": 2,
"description": "Rejoin server to a cluster. Does not wait for completion.",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
},
{
"description": "Joining server",
"required": true,
"type": "SERVER"
}
]
},
"id": "async-rejoin",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/async-rejoin/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 2,
"arg_min": 1,
"description": "Delete replica connections, delete binary logs and set up replication (dangerous)",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
},
{
"description": "Primary server (optional)",
"required": false,
"type": "[SERVER]"
}
]
},
"id": "reset-replication",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/reset-replication/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 2,
"arg_min": 1,
"description": "Delete replica connections, delete binary logs and set up replication (dangerous). Does not wait for completion.",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
},
{
"description": "Primary server (optional)",
"required": false,
"type": "[SERVER]"
}
]
},
"id": "async-reset-replication",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/async-reset-replication/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 1,
"arg_min": 1,
"description": "Release any held server locks for 1 minute.",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
}
]
},
"id": "release-locks",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/release-locks/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 1,
"arg_min": 1,
"description": "Release any held server locks for 1 minute. Does not wait for completion.",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
}
]
},
"id": "async-release-locks",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/async-release-locks/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 1,
"arg_min": 1,
"description": "Fetch result of the last scheduled command.",
"method": "GET",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
}
]
},
"id": "fetch-cmd-result",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/fetch-cmd-result/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 1,
"arg_min": 1,
"description": "Cancel the last scheduled command.",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
}
]
},
"id": "cancel-cmd",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/cancel-cmd/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 3,
"arg_min": 3,
"description": "Add a node to a ColumnStore cluster. Does not wait for completion.",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
},
{
"description": "Hostname/IP of node to add to ColumnStore cluster",
"required": true,
"type": "STRING"
},
{
"description": "Timeout",
"required": true,
"type": "STRING"
}
]
},
"id": "async-cs-add-node",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/async-cs-add-node/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 3,
"arg_min": 3,
"description": "Remove a node from a ColumnStore cluster. Does not wait for completion.",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
},
{
"description": "Hostname/IP of node to remove from ColumnStore cluster",
"required": true,
"type": "STRING"
},
{
"description": "Timeout",
"required": true,
"type": "STRING"
}
]
},
"id": "async-cs-remove-node",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/async-cs-remove-node/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 1,
"arg_min": 1,
"description": "Get ColumnStore cluster status.",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
}
]
},
"id": "cs-get-status",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/cs-get-status/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 1,
"arg_min": 1,
"description": "Get ColumnStore cluster status. Does not wait for completion.",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
}
]
},
"id": "async-cs-get-status",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/async-cs-get-status/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 2,
"arg_min": 2,
"description": "Start ColumnStore cluster. Does not wait for completion.",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
},
{
"description": "Timeout",
"required": true,
"type": "STRING"
}
]
},
"id": "async-cs-start-cluster",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/async-cs-start-cluster/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 2,
"arg_min": 2,
"description": "Stop ColumnStore cluster. Does not wait for completion.",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
},
{
"description": "Timeout",
"required": true,
"type": "STRING"
}
]
},
"id": "async-cs-stop-cluster",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/async-cs-stop-cluster/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 2,
"arg_min": 2,
"description": "Set ColumnStore cluster read-only. Does not wait for completion.",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
},
{
"description": "Timeout",
"required": true,
"type": "STRING"
}
]
},
"id": "async-cs-set-readonly",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/async-cs-set-readonly/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 2,
"arg_min": 2,
"description": "Set ColumnStore cluster readwrite. Does not wait for completion.",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
},
{
"description": "Timeout",
"required": true,
"type": "STRING"
}
]
},
"id": "async-cs-set-readwrite",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/async-cs-set-readwrite/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 4,
"arg_min": 2,
"description": "Rebuild a server with mariadb-backup. Does not wait for completion.",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
},
{
"description": "Target server",
"required": true,
"type": "SERVER"
},
{
"description": "Source server (optional)",
"required": false,
"type": "[SERVER]"
},
{
"description": "Target data directory (optional)",
"required": false,
"type": "[STRING]"
}
]
},
"id": "async-rebuild-server",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/async-rebuild-server/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 3,
"arg_min": 3,
"description": "Create a backup with mariadb-backup. Does not wait for completion.",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
},
{
"description": "Source server",
"required": true,
"type": "SERVER"
},
{
"description": "Backup name",
"required": true,
"type": "STRING"
}
]
},
"id": "async-create-backup",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/async-create-backup/"
},
"type": "module_command"
},
{
"attributes": {
"arg_max": 4,
"arg_min": 3,
"description": "Restore a server from a backup. Does not wait for completion.",
"method": "POST",
"parameters": [
{
"description": "Monitor name",
"required": true,
"type": "MONITOR"
},
{
"description": "Target server",
"required": true,
"type": "SERVER"
},
{
"description": "Backup name",
"required": true,
"type": "STRING"
},
{
"description": "Target data directory (optional)",
"required": false,
"type": "[STRING]"
}
]
},
"id": "async-restore-from-backup",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/async-restore-from-backup/"
},
"type": "module_command"
}
],
"description": "A MariaDB Primary/Replica replication monitor",
"maturity": "GA",
"module_type": "Monitor",
"parameters": [
{
"default_value": true,
"description": "Assume that hostnames are unique",
"mandatory": false,
"modifiable": true,
"name": "assume_unique_hostnames",
"type": "bool"
},
{
"default_value": false,
"description": "Enable automatic server failover",
"mandatory": false,
"modifiable": true,
"name": "auto_failover",
"type": "bool"
},
{
"default_value": false,
"description": "Enable automatic server rejoin",
"mandatory": false,
"modifiable": true,
"name": "auto_rejoin",
"type": "bool"
},
{
"description": "Address of backup storage.",
"mandatory": false,
"modifiable": true,
"name": "backup_storage_address",
"type": "string"
},
{
"description": "Backup storage directory path.",
"mandatory": false,
"modifiable": true,
"name": "backup_storage_path",
"type": "string"
},
{
"default_value": "none",
"description": "Cooperative monitoring type",
"enum_values": [
"none",
"majority_of_running",
"majority_of_all"
],
"mandatory": false,
"modifiable": true,
"name": "cooperative_monitoring_locks",
"type": "enum"
},
{
"description": "The API key used in communication with the ColumnStore admin daemon.",
"mandatory": false,
"modifiable": false,
"name": "cs_admin_api_key",
"type": "string"
},
{
"default_value": "/cmapi/0.4.0",
"description": "The base path to be used when accessing the ColumnStore administrative daemon. If, for instance, a daemon URL is https://localhost:8640/cmapi/0.4.0/node/start then the admin_base_path is \"/cmapi/0.4.0\".",
"mandatory": false,
"modifiable": false,
"name": "cs_admin_base_path",
"type": "string"
},
{
"default_value": 8640,
"description": "Port of the ColumnStore administrative daemon.",
"mandatory": false,
"modifiable": false,
"name": "cs_admin_port",
"type": "count"
},
{
"description": "Path to SQL file that is executed during node demotion",
"mandatory": false,
"modifiable": true,
"name": "demotion_sql_file",
"type": "path"
},
{
"default_value": false,
"description": "Enable read_only on all slave servers",
"mandatory": false,
"modifiable": true,
"name": "enforce_read_only_slaves",
"type": "bool"
},
{
"default_value": false,
"description": "Enforce a simple topology",
"mandatory": false,
"modifiable": true,
"name": "enforce_simple_topology",
"type": "bool"
},
{
"default_value": false,
"description": "Disable read_only on the current master server",
"mandatory": false,
"modifiable": true,
"name": "enforce_writable_master",
"type": "bool"
},
{
"default_value": 5,
"description": "Number of failures to tolerate before failover occurs",
"mandatory": false,
"modifiable": true,
"name": "failcount",
"type": "count"
},
{
"default_value": "90000ms",
"description": "Timeout for failover",
"mandatory": false,
"modifiable": true,
"name": "failover_timeout",
"type": "duration",
"unit": "ms"
},
{
"default_value": true,
"description": "Manage server-side events",
"mandatory": false,
"modifiable": true,
"name": "handle_events",
"type": "bool"
},
{
"default_value": true,
"description": "Put the server into maintenance mode when it runs out of disk space",
"mandatory": false,
"modifiable": true,
"name": "maintenance_on_low_disk_space",
"type": "bool"
},
{
"default_value": 1,
"description": "mariadb-backup thread count.",
"mandatory": false,
"modifiable": true,
"name": "mariadb-backup_parallel",
"type": "int"
},
{
"default_value": "1G",
"description": "mariadb-backup buffer pool size.",
"mandatory": false,
"modifiable": true,
"name": "mariadb-backup_use_memory",
"type": "string"
},
{
"default_value": "primary_monitor_master",
"description": "Conditions that the master servers must meet",
"enum_values": [
"none",
"connecting_slave",
"connected_slave",
"running_slave",
"primary_monitor_master"
],
"mandatory": false,
"modifiable": true,
"name": "master_conditions",
"type": "enum_mask"
},
{
"default_value": "10000ms",
"description": "Master failure timeout",
"mandatory": false,
"modifiable": true,
"name": "master_failure_timeout",
"type": "duration",
"unit": "ms"
},
{
"description": "Path to SQL file that is executed during node promotion",
"mandatory": false,
"modifiable": true,
"name": "promotion_sql_file",
"type": "path"
},
{
"default_value": 4444,
"description": "Listen port used for transferring server backup.",
"mandatory": false,
"modifiable": true,
"name": "rebuild_port",
"type": "count"
},
{
"description": "Custom CHANGE MASTER TO options",
"mandatory": false,
"modifiable": true,
"name": "replication_custom_options",
"type": "string"
},
{
"default_value": false,
"description": "Enable SSL when configuring replication",
"mandatory": false,
"modifiable": true,
"name": "replication_master_ssl",
"type": "bool"
},
{
"description": "Password for the user that is used for replication",
"mandatory": false,
"modifiable": true,
"name": "replication_password",
"type": "password"
},
{
"description": "User used for replication",
"mandatory": false,
"modifiable": true,
"name": "replication_user",
"type": "string"
},
{
"default_value": -1,
"description": "Replication lag limit at which the script is run",
"mandatory": false,
"modifiable": true,
"name": "script_max_replication_lag",
"type": "int"
},
{
"description": "List of servers that are never promoted",
"mandatory": false,
"modifiable": true,
"name": "servers_no_promotion",
"type": "serverlist"
},
{
"default_value": "",
"description": "Conditions that the slave servers must meet",
"enum_values": [
"linked_master",
"running_master",
"writable_master",
"primary_monitor_master",
"none"
],
"mandatory": false,
"modifiable": true,
"name": "slave_conditions",
"type": "enum_mask"
},
{
"default_value": true,
"description": "Is SSH host key check enabled.",
"mandatory": false,
"modifiable": true,
"name": "ssh_check_host_key",
"type": "bool"
},
{
"description": "SSH keyfile. Used for running remote commands on servers.",
"mandatory": false,
"modifiable": false,
"name": "ssh_keyfile",
"type": "path"
},
{
"default_value": 22,
"description": "SSH port. Used for running remote commands on servers.",
"mandatory": false,
"modifiable": true,
"name": "ssh_port",
"type": "count"
},
{
"default_value": "10000ms",
"description": "SSH connection and command timeout",
"mandatory": false,
"modifiable": true,
"name": "ssh_timeout",
"type": "duration",
"unit": "ms"
},
{
"description": "SSH username. Used for running remote commands on servers.",
"mandatory": false,
"modifiable": false,
"name": "ssh_user",
"type": "string"
},
{
"default_value": false,
"description": "Perform a switchover when a server runs out of disk space",
"mandatory": false,
"modifiable": true,
"name": "switchover_on_low_disk_space",
"type": "bool"
},
{
"default_value": "90000ms",
"description": "Timeout for switchover",
"mandatory": false,
"modifiable": true,
"name": "switchover_timeout",
"type": "duration",
"unit": "ms"
},
{
"default_value": true,
"description": "Verify master failure",
"mandatory": false,
"modifiable": true,
"name": "verify_master_failure",
"type": "bool"
},
{
"default_value": 1,
"description": "Number of connection attempts to make to a server",
"mandatory": false,
"modifiable": true,
"name": "backend_connect_attempts",
"type": "count"
},
{
"default_value": "3000ms",
"description": "Connection timeout for monitor connections",
"mandatory": false,
"modifiable": true,
"name": "backend_connect_timeout",
"type": "duration",
"unit": "ms"
},
{
"default_value": "3000ms",
"description": "Read timeout for monitor connections",
"mandatory": false,
"modifiable": true,
"name": "backend_read_timeout",
"type": "duration",
"unit": "ms"
},
{
"default_value": "3000ms",
"description": "Write timeout for monitor connections",
"mandatory": false,
"modifiable": true,
"name": "backend_write_timeout",
"type": "duration",
"unit": "ms"
},
{
"default_value": "0ms",
"description": "How often the disk space is checked",
"mandatory": false,
"modifiable": true,
"name": "disk_space_check_interval",
"type": "duration",
"unit": "ms"
},
{
"description": "Disk space threshold",
"mandatory": false,
"modifiable": true,
"name": "disk_space_threshold",
"type": "string"
},
{
"default_value": "all,master_down,master_up,slave_down,slave_up,server_down,server_up,synced_down,synced_up,donor_down,donor_up,lost_master,lost_slave,lost_synced,lost_donor,new_master,new_slave,new_synced,new_donor",
"description": "Events that cause the script to be called",
"enum_values": [
"all",
"master_down",
"master_up",
"slave_down",
"slave_up",
"server_down",
"server_up",
"synced_down",
"synced_up",
"donor_down",
"donor_up",
"lost_master",
"lost_slave",
"lost_synced",
"lost_donor",
"new_master",
"new_slave",
"new_synced",
"new_donor"
],
"mandatory": false,
"modifiable": true,
"name": "events",
"type": "enum_mask"
},
{
"default_value": "28800000ms",
"description": "The time the on-disk cached server states are valid for",
"mandatory": false,
"modifiable": true,
"name": "journal_max_age",
"type": "duration",
"unit": "ms"
},
{
"default_value": "2000ms",
"description": "How often the servers are monitored",
"mandatory": false,
"modifiable": true,
"name": "monitor_interval",
"type": "duration",
"unit": "ms"
},
{
"description": "Password for the user used to monitor the servers",
"mandatory": true,
"modifiable": true,
"name": "password",
"type": "password"
},
{
"description": "Script to run whenever an event occurs",
"mandatory": false,
"modifiable": true,
"name": "script",
"type": "string"
},
{
"default_value": "90000ms",
"description": "Timeout for the script",
"mandatory": false,
"modifiable": true,
"name": "script_timeout",
"type": "duration",
"unit": "ms"
},
{
"description": "List of servers to use",
"mandatory": false,
"modifiable": true,
"name": "servers",
"type": "serverlist"
},
{
"description": "Username used to monitor the servers",
"mandatory": true,
"modifiable": true,
"name": "user",
"type": "string"
}
],
"version": "V1.5.0"
},
"id": "mariadbmon",
"links": {
"self": "http://localhost:8989/v1/modules/mariadbmon/"
},
"type": "modules"
},
{
"attributes": {
"api": "protocol",
"commands": [],
"description": "The client to MaxScale MySQL protocol implementation",
"maturity": "GA",
"module_type": "Protocol",
"parameters": [
{
"default_value": true,
"description": "Allow use of the replication protocol through this listener",
"mandatory": false,
"modifiable": false,
"name": "allow_replication",
"type": "bool"
},
{
"default_value": "::",
"description": "Listener address",
"mandatory": false,
"modifiable": false,
"name": "address",
"type": "string"
},
{
"description": "Listener authenticator",
"mandatory": false,
"modifiable": false,
"name": "authenticator",
"type": "string"
},
{
"description": "Authenticator options",
"mandatory": false,
"modifiable": false,
"name": "authenticator_options",
"type": "string"
},
{
"description": "Path to connection initialization SQL",
"mandatory": false,
"modifiable": true,
"name": "connection_init_sql_file",
"type": "path"
},
{
"default_value": [
"character_set_client=auto",
"character_set_connection=auto",
"character_set_results=auto",
"max_allowed_packet=auto",
"system_time_zone=auto",
"time_zone=auto",
"tx_isolation=auto",
"maxscale=auto"
],
"description": "Metadata that's sent to all connecting clients.",
"mandatory": false,
"modifiable": true,
"name": "connection_metadata",
"type": "stringlist"
},
{
"default_value": 0,
"description": "Listener port",
"mandatory": false,
"modifiable": false,
"name": "port",
"type": "count"
},
{
"default_value": "MariaDBProtocol",
"description": "Listener protocol to use",
"mandatory": false,
"modifiable": false,
"name": "protocol",
"type": "module"
},
{
"description": "Allowed (sub)networks for proxy protocol connections. Should be a comma-separated list of IPv4 or IPv6 addresses.",
"mandatory": false,
"modifiable": true,
"name": "proxy_protocol_networks",
"type": "string"
},
{
"description": "Service to which the listener connects to",
"mandatory": true,
"modifiable": false,
"name": "service",
"type": "service"
},
{
"description": "Listener UNIX socket",
"mandatory": false,
"modifiable": false,
"name": "socket",
"type": "string"
},
{
"default_value": "default",
"description": "SQL parsing mode",
"enum_values": [
"default",
"oracle"
],
"mandatory": false,
"modifiable": true,
"name": "sql_mode",
"type": "enum"
},
{
"default_value": false,
"description": "Enable TLS for server",
"mandatory": false,
"modifiable": true,
"name": "ssl",
"type": "bool"
},
{
"description": "TLS certificate authority",
"mandatory": false,
"modifiable": true,
"name": "ssl_ca",
"type": "path"
},
{
"deprecated": true,
"description": "Alias for 'ssl_ca'",
"mandatory": false,
"modifiable": true,
"name": "ssl_ca_cert",
"type": "path"
},
{
"description": "TLS public certificate",
"mandatory": false,
"modifiable": true,
"name": "ssl_cert",
"type": "path"
},
{
"default_value": 9,
"description": "TLS certificate verification depth",
"mandatory": false,
"modifiable": true,
"name": "ssl_cert_verify_depth",
"type": "count"
},
{
"description": "TLS cipher list",
"mandatory": false,
"modifiable": true,
"name": "ssl_cipher",
"type": "string"
},
{
"description": "TLS certificate revocation list",
"mandatory": false,
"modifiable": true,
"name": "ssl_crl",
"type": "string"
},
{
"description": "TLS private key",
"mandatory": false,
"modifiable": true,
"name": "ssl_key",
"type": "path"
},
{
"default_value": false,
"description": "Verify TLS peer certificate",
"mandatory": false,
"modifiable": true,
"name": "ssl_verify_peer_certificate",
"type": "bool"
},
{
"default_value": false,
"description": "Verify TLS peer host",
"mandatory": false,
"modifiable": true,
"name": "ssl_verify_peer_host",
"type": "bool"
},
{
"default_value": "MAX",
"description": "Minimum TLS protocol version",
"enum_values": [
"MAX",
"TLSv10",
"TLSv11",
"TLSv12",
"TLSv13"
],
"mandatory": false,
"modifiable": true,
"name": "ssl_version",
"type": "enum"
},
{
"description": "Path to user and group mapping file",
"mandatory": false,
"modifiable": true,
"name": "user_mapping_file",
"type": "path"
}
],
"version": "V1.1.0"
},
"id": "MariaDBProtocol",
"links": {
"self": "http://localhost:8989/v1/modules/MariaDBProtocol/"
},
"type": "modules"
},
{
"attributes": {
"api": "parser",
"commands": [],
"description": "MariaDB SQL parser using sqlite3.",
"maturity": "GA",
"module_type": "Parser",
"parameters": null,
"version": "V1.0.0"
},
"id": "pp_sqlite",
"links": {
"self": "http://localhost:8989/v1/modules/pp_sqlite/"
},
"type": "modules"
},
{
"attributes": {
"api": "filter",
"commands": [
{
"attributes": {
"arg_max": 3,
"arg_min": 1,
"description": "Show unified log file as a JSON array",
"method": "GET",
"parameters": [
{
"description": "Filter to read logs from",
"required": true,
"type": "FILTER"
},
{
"description": "Start reading from this line",
"required": false,
"type": "[STRING]"
},
{
"description": "Stop reading at this line (exclusive)",
"required": false,
"type": "[STRING]"
}
]
},
"id": "log",
"links": {
"self": "http://localhost:8989/v1/modules/qlafilter/log/"
},
"type": "module_command"
}
],
"description": "A simple query logging filter",
"maturity": "GA",
"module_type": "Filter",
"parameters": [
{
"default_value": true,
"description": "Append new entries to log files instead of overwriting them",
"mandatory": false,
"modifiable": true,
"name": "append",
"type": "bool"
},
{
"default_value": "ms",
"description": "Duration in milliseconds (ms) or microseconds (us)",
"enum_values": [
"ms",
"milliseconds",
"us",
"microseconds"
],
"mandatory": false,
"modifiable": true,
"name": "duration_unit",
"type": "enum"
},
{
"description": "Exclude queries matching this pattern from the log",
"mandatory": false,
"modifiable": true,
"name": "exclude",
"type": "regex"
},
{
"description": "The basename of the output file",
"mandatory": true,
"modifiable": true,
"name": "filebase",
"type": "string"
},
{
"default_value": false,
"description": "Flush log files after every write",
"mandatory": false,
"modifiable": true,
"name": "flush",
"type": "bool"
},
{
"default_value": "date,user,query",
"description": "Type of data to log in the log files",
"enum_values": [
"service",
"session",
"date",
"user",
"query",
"reply_time",
"total_reply_time",
"default_db",
"num_rows",
"reply_size",
"transaction",
"transaction_time",
"num_warnings",
"error_msg",
"server",
"command"
],
"mandatory": false,
"modifiable": true,
"name": "log_data",
"type": "enum_mask"
},
{
"default_value": "session",
"description": "The type of log file to use",
"enum_values": [
"session",
"unified",
"stdout"
],
"mandatory": false,
"modifiable": true,
"name": "log_type",
"type": "enum_mask"
},
{
"description": "Only log queries matching this pattern",
"mandatory": false,
"modifiable": true,
"name": "match",
"type": "regex"
},
{
"default_value": " ",
"description": "Value used to replace newlines",
"mandatory": false,
"modifiable": true,
"name": "newline_replacement",
"type": "string"
},
{
"default_value": "",
"description": "Regular expression options",
"enum_values": [
"case",
"ignorecase",
"extended"
],
"mandatory": false,
"modifiable": true,
"name": "options",
"type": "enum_mask"
},
{
"default_value": ",",
"description": "Defines the separator between elements of a log entry",
"mandatory": false,
"modifiable": true,
"name": "separator",
"type": "string"
},
{
"description": "Log queries only from this network address",
"mandatory": false,
"modifiable": true,
"name": "source",
"type": "string"
},
{
"description": "Exclude queries from hosts that match this pattern",
"mandatory": false,
"modifiable": true,
"name": "source_exclude",
"type": "regex"
},
{
"description": "Log queries only from hosts that match this pattern",
"mandatory": false,
"modifiable": true,
"name": "source_match",
"type": "regex"
},
{
"default_value": false,
"description": "Write queries in canonical form",
"mandatory": false,
"modifiable": true,
"name": "use_canonical_form",
"type": "bool"
},
{
"description": "Log queries only from this user",
"mandatory": false,
"modifiable": true,
"name": "user",
"type": "string"
},
{
"description": "Exclude queries from users that match this pattern",
"mandatory": false,
"modifiable": true,
"name": "user_exclude",
"type": "regex"
},
{
"description": "Log queries only from users that match this pattern",
"mandatory": false,
"modifiable": true,
"name": "user_match",
"type": "regex"
}
],
"version": "V1.1.1"
},
"id": "qlafilter",
"links": {
"self": "http://localhost:8989/v1/modules/qlafilter/"
},
"type": "modules"
},
{
"attributes": {
"api": "router",
"commands": [],
"description": "A connection based router to load balance based on connections",
"maturity": "GA",
"module_type": "Router",
"parameters": [
{
"default_value": true,
"description": "Use master for reads",
"mandatory": false,
"modifiable": true,
"name": "master_accept_reads",
"type": "bool"
},
{
"default_value": "0ms",
"description": "Maximum acceptable replication lag",
"mandatory": false,
"modifiable": true,
"name": "max_replication_lag",
"type": "duration",
"unit": "ms"
},
{
"default_value": "running",
"description": "A comma separated list of server roles",
"enum_values": [
"master",
"slave",
"running",
"synced"
],
"mandatory": false,
"modifiable": true,
"name": "router_options",
"type": "enum_mask"
},
{
"default_value": false,
"deprecated": true,
"description": "Retrieve users from all backend servers instead of only one",
"mandatory": false,
"modifiable": true,
"name": "auth_all_servers",
"type": "bool"
},
{
"default_value": "300000ms",
"description": "How often idle connections are pinged",
"mandatory": false,
"modifiable": true,
"name": "connection_keepalive",
"type": "duration",
"unit": "ms"
},
{
"deprecated": true,
"description": "Alias for 'wait_timeout'",
"mandatory": false,
"modifiable": true,
"name": "connection_timeout",
"type": "duration"
},
{
"default_value": false,
"description": "Disable session command history",
"mandatory": false,
"modifiable": true,
"name": "disable_sescmd_history",
"type": "bool"
},
{
"default_value": false,
"description": "Allow the root user to connect to this service",
"mandatory": false,
"modifiable": true,
"name": "enable_root_user",
"type": "bool"
},
{
"default_value": false,
"description": "Ping connections unconditionally",
"mandatory": false,
"modifiable": true,
"name": "force_connection_keepalive",
"type": "bool"
},
{
"default_value": "-1ms",
"description": "Put connections into pool after session has been idle for this long",
"mandatory": false,
"modifiable": true,
"name": "idle_session_pool_time",
"type": "duration",
"unit": "ms"
},
{
"default_value": true,
"description": "Match localhost to wildcard host",
"mandatory": false,
"modifiable": true,
"name": "localhost_match_wildcard_host",
"type": "bool"
},
{
"default_value": true,
"description": "Log a warning when client authentication fails",
"mandatory": false,
"modifiable": true,
"name": "log_auth_warnings",
"type": "bool"
},
{
"default_value": false,
"description": "Log debug messages for this service (debug builds only)",
"mandatory": false,
"modifiable": true,
"name": "log_debug",
"type": "bool"
},
{
"default_value": false,
"description": "Log info messages for this service",
"mandatory": false,
"modifiable": true,
"name": "log_info",
"type": "bool"
},
{
"default_value": false,
"description": "Log notice messages for this service",
"mandatory": false,
"modifiable": true,
"name": "log_notice",
"type": "bool"
},
{
"default_value": false,
"description": "Log warning messages for this service",
"mandatory": false,
"modifiable": true,
"name": "log_warning",
"type": "bool"
},
{
"default_value": 0,
"description": "Maximum number of connections",
"mandatory": false,
"modifiable": true,
"name": "max_connections",
"type": "count"
},
{
"default_value": 50,
"description": "Session command history size",
"mandatory": false,
"modifiable": true,
"name": "max_sescmd_history",
"type": "count"
},
{
"default_value": "60000ms",
"description": "How long a session can wait for a connection to become available",
"mandatory": false,
"modifiable": true,
"name": "multiplex_timeout",
"type": "duration",
"unit": "ms"
},
{
"default_value": "0ms",
"description": "Network write timeout",
"mandatory": false,
"modifiable": true,
"name": "net_write_timeout",
"type": "duration",
"unit": "ms"
},
{
"description": "Password for the user used to retrieve database users",
"mandatory": true,
"modifiable": true,
"name": "password",
"type": "password"
},
{
"default_value": true,
"description": "Prune old session command history if the limit is exceeded",
"mandatory": false,
"modifiable": true,
"name": "prune_sescmd_history",
"type": "bool"
},
{
"default_value": "primary",
"description": "Service rank",
"enum_values": [
"primary",
"secondary"
],
"mandatory": false,
"modifiable": true,
"name": "rank",
"type": "enum"
},
{
"default_value": -1,
"description": "Number of statements kept in memory",
"mandatory": false,
"modifiable": true,
"name": "retain_last_statements",
"type": "int"
},
{
"default_value": false,
"description": "Enable session tracing for this service",
"mandatory": false,
"modifiable": true,
"name": "session_trace",
"type": "bool"
},
{
"default_value": false,
"deprecated": true,
"description": "Track session state using server responses",
"mandatory": false,
"modifiable": true,
"name": "session_track_trx_state",
"type": "bool"
},
{
"default_value": true,
"deprecated": true,
"description": "Strip escape characters from database names",
"mandatory": false,
"modifiable": true,
"name": "strip_db_esc",
"type": "bool"
},
{
"description": "Username used to retrieve database users",
"mandatory": true,
"modifiable": true,
"name": "user",
"type": "string"
},
{
"description": "Load additional users from a file",
"mandatory": false,
"modifiable": false,
"name": "user_accounts_file",
"type": "path"
},
{
"default_value": "add_when_load_ok",
"description": "When and how the user accounts file is used",
"enum_values": [
"add_when_load_ok",
"file_only_always"
],
"mandatory": false,
"modifiable": false,
"name": "user_accounts_file_usage",
"type": "enum"
},
{
"description": "Custom version string to use",
"mandatory": false,
"modifiable": true,
"name": "version_string",
"type": "string"
},
{
"default_value": "0ms",
"description": "Connection idle timeout",
"mandatory": false,
"modifiable": true,
"name": "wait_timeout",
"type": "duration",
"unit": "ms"
}
],
"version": "V2.0.0"
},
"id": "readconnroute",
"links": {
"self": "http://localhost:8989/v1/modules/readconnroute/"
},
"type": "modules"
},
{
"attributes": {
"api": "router",
"commands": [
{
"attributes": {
"arg_max": 1,
"arg_min": 1,
"description": "Reset global GTID state in readwritesplit.",
"method": "POST",
"parameters": [
{
"description": "Readwritesplit service",
"required": true,
"type": "SERVICE"
}
]
},
"id": "reset-gtid",
"links": {
"self": "http://localhost:8989/v1/modules/readwritesplit/reset-gtid/"
},
"type": "module_command"
}
],
"description": "A Read/Write splitting router for enhancement read scalability",
"maturity": "GA",
"module_type": "Router",
"parameters": [
{
"default_value": "none",
"description": "Causal reads mode",
"enum_values": [
"none",
"local",
"global",
"fast_global",
"fast",
"universal",
"fast_universal",
"false",
"off",
"0",
"true",
"on",
"1"
],
"mandatory": false,
"modifiable": true,
"name": "causal_reads",
"type": "enum"
},
{
"default_value": "10000ms",
"description": "Timeout for the slave synchronization",
"mandatory": false,
"modifiable": true,
"name": "causal_reads_timeout",
"type": "duration",
"unit": "ms"
},
{
"default_value": false,
"description": "Retry failed writes outside of transactions",
"mandatory": false,
"modifiable": true,
"name": "delayed_retry",
"type": "bool"
},
{
"default_value": "10000ms",
"description": "Timeout for delayed_retry",
"mandatory": false,
"modifiable": true,
"name": "delayed_retry_timeout",
"type": "duration",
"unit": "ms"
},
{
"default_value": false,
"description": "Create connections only when needed",
"mandatory": false,
"modifiable": true,
"name": "lazy_connect",
"type": "bool"
},
{
"default_value": false,
"description": "Use master for reads",
"mandatory": false,
"modifiable": true,
"name": "master_accept_reads",
"type": "bool"
},
{
"default_value": "fail_on_write",
"description": "Master failure mode behavior",
"enum_values": [
"fail_instantly",
"fail_on_write",
"error_on_write"
],
"mandatory": false,
"modifiable": true,
"name": "master_failure_mode",
"type": "enum"
},
{
"default_value": true,
"description": "Reconnect to master",
"mandatory": false,
"modifiable": true,
"name": "master_reconnection",
"type": "bool"
},
{
"default_value": "0ms",
"description": "Maximum replication lag",
"mandatory": false,
"modifiable": true,
"name": "max_replication_lag",
"type": "duration",
"unit": "ms"
},
{
"default_value": 255,
"description": "Maximum number of slave connections",
"mandatory": false,
"modifiable": true,
"name": "max_slave_connections",
"type": "count"
},
{
"deprecated": true,
"description": "Alias for 'max_replication_lag'",
"mandatory": false,
"modifiable": true,
"name": "max_slave_replication_lag",
"type": "duration"
},
{
"default_value": false,
"description": "Optimistically offload transactions to slaves",
"mandatory": false,
"modifiable": true,
"name": "optimistic_trx",
"type": "bool"
},
{
"default_value": true,
"description": "Automatically retry failed reads outside of transactions",
"mandatory": false,
"modifiable": true,
"name": "retry_failed_reads",
"type": "bool"
},
{
"default_value": false,
"description": "Reuse identical prepared statements inside the same connection",
"mandatory": false,
"modifiable": true,
"name": "reuse_prepared_statements",
"type": "bool"
},
{
"default_value": 255,
"description": "Starting number of slave connections",
"mandatory": false,
"modifiable": true,
"name": "slave_connections",
"type": "count"
},
{
"default_value": "least_current_operations",
"description": "Slave selection criteria",
"enum_values": [
"least_global_connections",
"least_router_connections",
"least_behind_master",
"least_current_operations",
"adaptive_routing",
"LEAST_GLOBAL_CONNECTIONS",
"LEAST_ROUTER_CONNECTIONS",
"LEAST_BEHIND_MASTER",
"LEAST_CURRENT_OPERATIONS",
"ADAPTIVE_ROUTING"
],
"mandatory": false,
"modifiable": true,
"name": "slave_selection_criteria",
"type": "enum"
},
{
"default_value": false,
"description": "Lock connection to master after multi-statement query",
"mandatory": false,
"modifiable": true,
"name": "strict_multi_stmt",
"type": "bool"
},
{
"default_value": false,
"description": "Lock connection to master after a stored procedure is executed",
"mandatory": false,
"modifiable": true,
"name": "strict_sp_calls",
"type": "bool"
},
{
"default_value": true,
"description": "Prevent reconnections if temporary tables exist",
"mandatory": false,
"modifiable": true,
"name": "strict_tmp_tables",
"type": "bool"
},
{
"default_value": false,
"description": "Retry failed transactions",
"mandatory": false,
"modifiable": true,
"name": "transaction_replay",
"type": "bool"
},
{
"default_value": 5,
"description": "Maximum number of times to retry a transaction",
"mandatory": false,
"modifiable": true,
"name": "transaction_replay_attempts",
"type": "count"
},
{
"default_value": "full",
"description": "Type of checksum to calculate for results",
"enum_values": [
"full",
"result_only",
"no_insert_id"
],
"mandatory": false,
"modifiable": true,
"name": "transaction_replay_checksum",
"type": "enum"
},
{
"default_value": 1048576,
"description": "Maximum size of transaction to retry",
"mandatory": false,
"modifiable": true,
"name": "transaction_replay_max_size",
"type": "size"
},
{
"default_value": false,
"description": "Retry transaction on deadlock",
"mandatory": false,
"modifiable": true,
"name": "transaction_replay_retry_on_deadlock",
"type": "bool"
},
{
"default_value": false,
"description": "Retry transaction on checksum mismatch",
"mandatory": false,
"modifiable": true,
"name": "transaction_replay_retry_on_mismatch",
"type": "bool"
},
{
"default_value": true,
"description": "Prevent replaying of about-to-commit transaction",
"mandatory": false,
"modifiable": true,
"name": "transaction_replay_safe_commit",
"type": "bool"
},
{
"default_value": "30000ms",
"description": "Timeout for transaction replay",
"mandatory": false,
"modifiable": true,
"name": "transaction_replay_timeout",
"type": "duration",
"unit": "ms"
},
{
"default_value": "all",
"description": "Whether to route SQL variable modifications to all servers or only to the master",
"enum_values": [
"all",
"master"
],
"mandatory": false,
"modifiable": true,
"name": "use_sql_variables_in",
"type": "enum"
},
{
"default_value": false,
"deprecated": true,
"description": "Retrieve users from all backend servers instead of only one",
"mandatory": false,
"modifiable": true,
"name": "auth_all_servers",
"type": "bool"
},
{
"default_value": "300000ms",
"description": "How often idle connections are pinged",
"mandatory": false,
"modifiable": true,
"name": "connection_keepalive",
"type": "duration",
"unit": "ms"
},
{
"deprecated": true,
"description": "Alias for 'wait_timeout'",
"mandatory": false,
"modifiable": true,
"name": "connection_timeout",
"type": "duration"
},
{
"default_value": false,
"description": "Disable session command history",
"mandatory": false,
"modifiable": true,
"name": "disable_sescmd_history",
"type": "bool"
},
{
"default_value": false,
"description": "Allow the root user to connect to this service",
"mandatory": false,
"modifiable": true,
"name": "enable_root_user",
"type": "bool"
},
{
"default_value": false,
"description": "Ping connections unconditionally",
"mandatory": false,
"modifiable": true,
"name": "force_connection_keepalive",
"type": "bool"
},
{
"default_value": "-1ms",
"description": "Put connections into pool after session has been idle for this long",
"mandatory": false,
"modifiable": true,
"name": "idle_session_pool_time",
"type": "duration",
"unit": "ms"
},
{
"default_value": true,
"description": "Match localhost to wildcard host",
"mandatory": false,
"modifiable": true,
"name": "localhost_match_wildcard_host",
"type": "bool"
},
{
"default_value": true,
"description": "Log a warning when client authentication fails",
"mandatory": false,
"modifiable": true,
"name": "log_auth_warnings",
"type": "bool"
},
{
"default_value": false,
"description": "Log debug messages for this service (debug builds only)",
"mandatory": false,
"modifiable": true,
"name": "log_debug",
"type": "bool"
},
{
"default_value": false,
"description": "Log info messages for this service",
"mandatory": false,
"modifiable": true,
"name": "log_info",
"type": "bool"
},
{
"default_value": false,
"description": "Log notice messages for this service",
"mandatory": false,
"modifiable": true,
"name": "log_notice",
"type": "bool"
},
{
"default_value": false,
"description": "Log warning messages for this service",
"mandatory": false,
"modifiable": true,
"name": "log_warning",
"type": "bool"
},
{
"default_value": 0,
"description": "Maximum number of connections",
"mandatory": false,
"modifiable": true,
"name": "max_connections",
"type": "count"
},
{
"default_value": 50,
"description": "Session command history size",
"mandatory": false,
"modifiable": true,
"name": "max_sescmd_history",
"type": "count"
},
{
"default_value": "60000ms",
"description": "How long a session can wait for a connection to become available",
"mandatory": false,
"modifiable": true,
"name": "multiplex_timeout",
"type": "duration",
"unit": "ms"
},
{
"default_value": "0ms",
"description": "Network write timeout",
"mandatory": false,
"modifiable": true,
"name": "net_write_timeout",
"type": "duration",
"unit": "ms"
},
{
"description": "Password for the user used to retrieve database users",
"mandatory": true,
"modifiable": true,
"name": "password",
"type": "password"
},
{
"default_value": true,
"description": "Prune old session command history if the limit is exceeded",
"mandatory": false,
"modifiable": true,
"name": "prune_sescmd_history",
"type": "bool"
},
{
"default_value": "primary",
"description": "Service rank",
"enum_values": [
"primary",
"secondary"
],
"mandatory": false,
"modifiable": true,
"name": "rank",
"type": "enum"
},
{
"default_value": -1,
"description": "Number of statements kept in memory",
"mandatory": false,
"modifiable": true,
"name": "retain_last_statements",
"type": "int"
},
{
"default_value": false,
"description": "Enable session tracing for this service",
"mandatory": false,
"modifiable": true,
"name": "session_trace",
"type": "bool"
},
{
"default_value": false,
"deprecated": true,
"description": "Track session state using server responses",
"mandatory": false,
"modifiable": true,
"name": "session_track_trx_state",
"type": "bool"
},
{
"default_value": true,
"deprecated": true,
"description": "Strip escape characters from database names",
"mandatory": false,
"modifiable": true,
"name": "strip_db_esc",
"type": "bool"
},
{
"description": "Username used to retrieve database users",
"mandatory": true,
"modifiable": true,
"name": "user",
"type": "string"
},
{
"description": "Load additional users from a file",
"mandatory": false,
"modifiable": false,
"name": "user_accounts_file",
"type": "path"
},
{
"default_value": "add_when_load_ok",
"description": "When and how the user accounts file is used",
"enum_values": [
"add_when_load_ok",
"file_only_always"
],
"mandatory": false,
"modifiable": false,
"name": "user_accounts_file_usage",
"type": "enum"
},
{
"description": "Custom version string to use",
"mandatory": false,
"modifiable": true,
"name": "version_string",
"type": "string"
},
{
"default_value": "0ms",
"description": "Connection idle timeout",
"mandatory": false,
"modifiable": true,
"name": "wait_timeout",
"type": "duration",
"unit": "ms"
}
],
"version": "V1.1.0"
},
"id": "readwritesplit",
"links": {
"self": "http://localhost:8989/v1/modules/readwritesplit/"
},
"type": "modules"
}
],
"links": {
"self": "http://localhost:8989/v1/maxscale/modules/"
}
}
GET /v1/maxscale/modules/:module/:command
POST /v1/maxscale/modules/:module/:command
POST /v1/maxscale/modules/mariadbmon/reset-replication?MariaDB-Monitor&server1
{
"links": {
"self": "http://localhost:8989/v1/maxscale/modules/mariadbmon/reset-replication"
},
"meta": [ // Output of module command (module dependent)
{
"name": "value"
}
]
}
GET /v1/maxscale/query_classifier/classify?sql=<statement>
GET /v1/maxscale/query_classifier/classify?sql=SELECT+1
{
"data": {
"attributes": {
"canonical": "SELECT ?",
"fields": [],
"functions": [],
"operation": "sql::OP_SELECT",
"parse_result": "Parser::Result::PARSED",
"type_mask": "sql::TYPE_READ"
},
"id": "classify",
"type": "classify"
},
"links": {
"self": "http://localhost:8989/v1/maxscale/query_classifier/classify/"
}
}
This document describes how to configure MariaDB MaxScale and presents some possible usage scenarios. MariaDB MaxScale is designed with flexibility in mind, and consists of an event processing core with various support functions and plugin modules that tailor the behavior of the program.
A server represents an individual database server to which a client can be connected via MariaDB MaxScale. The status of a server varies during the lifetime of the server and typically the status is updated by some monitor. However, it is also possible to update the status of a server manually.
For more information on how to manually set these states via MaxCtrl, read the .
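For reference, a minimal server section in the configuration file could look like the following sketch; the object name, address and port are illustrative:

```
[server1]
type=server
address=192.168.0.10
port=3306
```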
A monitor module is capable of monitoring the state of a particular kind of cluster and making that state available to the routers of MaxScale.
Examples of monitor modules are mariadbmon, which is capable of monitoring
a regular primary-replica cluster and, in addition, of performing both switchover
and failover; galeramon, which is capable of monitoring a Galera cluster;
and csmon, which is capable of monitoring a Columnstore cluster.
Monitor modules have sections of their own in the MaxScale configuration file.
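As a sketch, a mariadbmon monitor section could be defined as follows; the object name, server list and credentials are illustrative:

```
[MariaDB-Monitor]
type=monitor
module=mariadbmon
servers=server1,server2
user=monitor_user
password=monitor_pw
monitor_interval=2s
```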
A filter module resides in front of routers in the request processing chain of MaxScale. That is, a filter will see a request before it reaches the router and before a response is sent back to the client. This allows filters to reject, handle, alter or log information about a request.
Examples of filters are cache, which provides query caching according to rules, regexfilter, which can rewrite requests according to regular expressions, and qlafilter, which logs information about requests.
Filters have sections of their own in the MaxScale configuration file that are referred to from services.
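For example, a qlafilter section referenced by a service could look like this sketch; the object name and log location are illustrative:

```
[QLA-Filter]
type=filter
module=qlafilter
filebase=/tmp/qla.log
```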
A router module is capable of routing requests to backend servers according to
the characteristics of a request and/or the algorithm the router
implements. Examples of routers are readconnroute that provides connection
routing, that is, the server is chosen according to specified rules when the
session is created and all requests are subsequently routed to that server,
and readwritesplit that provides statement routing, that is, each
individual request is routed to the most appropriate server.
Routers do not have sections of their own in the MaxScale configuration file, but are referred to from services.
A service abstracts a set of databases and makes them appear as a single one
to the client. Depending on what router (e.g. readconnroute or readwritesplit) the service uses, the servers are used in some particular
way. If the service uses filters, then all requests will be pre-processed in
some way before they reach the router.
Services have sections of their own in the MaxScale configuration file.
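A minimal service section using readwritesplit might look like the following sketch; the object, server and filter names and the credentials are illustrative:

```
[Read-Write-Service]
type=service
router=readwritesplit
servers=server1,server2
user=service_user
password=service_pw
filters=QLA-Filter
```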
A listener defines a port MaxScale listens on. Connection requests arriving on that port will be forwarded to the service the listener is associated with. A listener may be associated with a single service, but several listeners may be associated with the same service.
Listeners have sections of their own in the MaxScale configuration file.
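For instance, a listener for the service sketched above could be defined as follows; the object name and port are illustrative:

```
[Read-Write-Listener]
type=listener
service=Read-Write-Service
protocol=MariaDBProtocol
port=4006
```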
An defines common parameters used in other configuration sections.
The administration of MaxScale can be divided in two parts:
Writing the MaxScale configuration file, which is described in the following .
Performing runtime modifications using
For detailed information about MaxCtrl, please refer to the specific documentation referred to above. In the following, it is only explained how MaxCtrl and MaxScale relate to each other, as far as user credentials go.
Note: By default all runtime configuration changes are saved on disk and loaded on startup. Refer to the section for more details on how it works and how to disable it.
MaxCtrl can connect using TCP/IP sockets. When connecting with MaxCtrl using
TCP/IP sockets, the user and password must be provided and are checked against a
separate user credentials database. By default, that database contains the user admin whose password is mariadb.
Note that if MaxCtrl is invoked without explicitly providing a user and password
then it will by default use admin and mariadb. That means that when the
default user is removed, the credentials must always be provided.
The REST API calls to MaxScale can be logged by enabling admin_audit.
For more detail see the admin audit configuration values admin_audit, admin_audit_file and admin_audit_exclude_methods below
and .
The following list of global configuration parameters can NOT be changed at runtime and can only be defined in a configuration file:
admin_auth
admin_enabled
admin_gui
admin_host
All other parameters that relate to objects can be altered at runtime or can be changed by destroying and recreating the object in question.
MaxScale by default reads configuration from the file /etc/maxscale.cnf. If
the command line argument --configdir=<path> is given, maxscale.cnf is
searched for in <path> instead. If the argument --config=<file> is given,
configuration is read from the file <file>.
MaxScale also looks for a directory with the same name as the configuration
file, followed by ".d" (for example /etc/maxscale.cnf.d). If found, MaxScale
recursively reads all files with the ".cnf" suffix in the directory hierarchy.
Other files are ignored.
After loading normal configuration files, MaxScale reads runtime-generated configuration files, if any, from the .
Different configuration sections can be arranged with few restrictions.
Global path settings such as logdir, piddir and datadir are only read from
the main configuration file. Other global settings are also best left in the
main file to ensure they are read before other configuration sections are
parsed.
The configuration file format used is , similar to the MariaDB Server. The files contain sections and each section can contain multiple key-value pairs.
Comments are defined by prefixing a row with a hash (#). Trailing comments are not supported.
A parameter can be defined on multiple lines as shown below. A value spread over multiple lines is simply concatenated. The additional lines of the value definition need to have at least one whitespace character in the beginning.
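A sketch of such a multi-line definition, using illustrative object and server names:

```
[MyService]
type=service
router=readconnroute
servers=server1,
        server2,
        server3
```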
Section names may not contain whitespace and must not start with the characters @@.
As the object names are used to form URLs in the MaxScale REST API, they must be
safe for use in URLs. This means that only alphanumeric characters (i.e. a-zA-Z and 0-9) and the special characters _.~- can be used.
By default all changes done at runtime via the MaxScale GUI, MaxCtrl or the REST API will be saved on disk, inside the directory. The changes done at runtime will override the configuration found in the static configuration files for that particular object.
This means that if an object that is found in /etc/maxscale.cnf is modified at
runtime, all future changes to it must also be done at runtime. Any
modifications done to /etc/maxscale.cnf after a runtime change has been made
are ignored for that object.
To prevent the saving of runtime changes and to make all runtime changes
volatile, add and under the [maxscale]
section. This will make MaxScale behave like the MariaDB server does: any
changes done with SET GLOBAL statements are lost if the process is restarted.
Boolean type parameters interpret the values true, yes, on and 1 as true values and false, no, off and 0 as false values. Starting with
MaxScale 23.02, the REST API also accepts the same boolean values for boolean
type parameters.
Where specifically noted, a number denoting a size can be suffixed by a subset
of the IEC binary prefixes or the SI prefixes. In the former case the number
will be interpreted as a certain multiple of 1024 and in the latter case as a
certain multiple of 1000. The supported IEC binary suffixes are Ki, Mi, Gi
and Ti and the supported SI suffixes are k, M, G and T. In both cases,
the matching is case-insensitive.
For instance, the following entries
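```
# 'max_size' is an illustrative parameter name; all of these equal 1099511627776 bytes
max_size=1099511627776
max_size=1073741824Ki
max_size=1048576Mi
max_size=1024Gi
max_size=1Ti
```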
are equivalent, as are the following
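```
# the same illustrative parameter with SI prefixes; all of these equal 1000000000000 bytes
max_size=1000000000000
max_size=1000000000k
max_size=1000000M
max_size=1000G
max_size=1T
```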
A number denoting a duration can be suffixed by one of the case-insensitive
suffixes h, m or min, s and ms, for specifying durations in hours,
minutes, seconds and milliseconds, respectively.
For instance, the following entries
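```
# 'soft_ttl' is an illustrative parameter name; all of these equal two hours
soft_ttl=2h
soft_ttl=120m
soft_ttl=7200s
soft_ttl=7200000ms
```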
are equivalent.
Note that if an explicit unit is not specified, then it is specific to the configuration parameter whether the duration is interpreted as seconds or milliseconds.
Not providing an explicit unit has been deprecated in MaxScale 2.4.
A number denoting a percent must be suffixed with %.
For instance
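```
# 'threshold' is an illustrative parameter name
threshold=80%
```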
Many modules have settings which accept a regular expression. In most cases, these settings are named either match or exclude, and are used to filter users or queries. MaxScale uses the PCRE2 library for matching regular expressions.
When writing a regular expression (regex) type parameter to a MaxScale configuration file,
the pattern string should be enclosed in slashes e.g. ^select -> match=/^select/. This
clarifies where the pattern begins and ends, even if it includes whitespace. Without
slashes the configuration loader trims the pattern from the ends. The slashes are removed
before compiling the pattern. For backwards compatibility, the slashes are not yet
mandatory. Omitting them is, however, deprecated and will be rejected in a future release
of MaxScale. Currently, binlogfilter, ccrfilter, qlafilter, tee and avrorouter
accept parameters in this type of regular expression form. Some other modules may not
yet handle the slashes correctly.
PCRE2 supports a complicated regular expression syntax. MaxScale typically uses
regular expressions simply, only checking whether the pattern and subject match at some
point. For example, using the QLAFilter and setting match=/SELECT/ causes the filter to
accept any query with the text "SELECT" somewhere within. To force the pattern to only
match at the beginning of the query, set match=/^SELECT/. To only match the end, set match=/SELECT$/.
Modules which accept regular expression parameters also often accept options which affect
how the patterns are compiled. Typically, this setting is called options and accepts
values such as ignorecase, case and extended.
ignorecase: Causes the regular expression matcher to ignore letter case, and
is often on by default. When enabled, /SELECT/ would match both SELECT and select.
extended: Ignores whitespace and # comments in the pattern. Note that this
is not the same as the extended regular expression syntax that for example grep -E uses.
These settings can also be defined in the pattern itself, so they can be
used even in modules without pattern compilation settings. The pattern
settings are (?i) for ignorecase and (?x) for extended. See the
for more information.
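For instance, a match parameter written as follows would ignore letter case even in a module that has no separate options setting; the pattern itself is illustrative:

```
match=/(?i)^select/
```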
Standard regular expression settings for filters
Many filters use the settings match, exclude and options. Since these settings are used in a similar way across these filters, the settings are explained here. The documentation of the filters link here and describe any exceptions to this generalized explanation.
These settings typically limit the queries the filter module acts on. match and exclude define PCRE2 regular expression patterns while options affects how both of the
patterns are compiled. options works as explained above, accepting the values ignorecase, case and extended, with ignorecase being the default.
The queries are matched as they arrive at the filter on their way to a routing module. If match is defined, the filter only acts on queries matching that pattern. If match is not defined, all queries are considered to match.
If exclude is defined, the filter only acts on queries not matching that pattern. If exclude is not defined, nothing is excluded.
If both are defined, the query needs to match match but not match exclude.
Even if a filter does not act on a query, the query is not lost. The query is simply passed on to the next module in the processing chain as if the filter was not there.
Enumeration type parameters have a pre-defined set of accepted values. For types
declared as enum, only one value is accepted. For enum_mask types, multiple
values can be defined by separating them with commas. All enumeration values in
MaxScale are case-sensitive.
For example the router_options parameter in the readconnroute router is a
mask type enumeration:
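```
# one or more of the accepted values, separated by commas
router_options=master,slave
```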
A pathlist type parameter expects one or more filesystem paths separated by
colons. The value must not include space between the separators.
Here is an example path list parameter that points to /tmp/something.log and /var/log/maxscale/maxscale.log:
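```
# 'extra_logs' is an illustrative parameter name
extra_logs=/tmp/something.log:/var/log/maxscale/maxscale.log
```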
The global settings, in a section named [MaxScale], allow various parameters
that affect MariaDB MaxScale as a whole to be tuned. This section must be
defined in the root configuration file which by default is /etc/maxscale.cnf.
core_file
Type: boolean
Default: true
Dynamic: No
This parameter specifies whether a core file should be generated if MaxScale
crashes. The default is true although usually a core file is not needed,
as MaxScale is capable of logging the full stack trace of all threads
when it crashes.
auto_tune
Type: string list
Values: all or list of auto tunable parameters, separated by ,
Default: No
Mandatory: No
An auto tunable parameter is a parameter whose value can be derived from a
particular server variable. With this parameter it can be specified whether all or a specific set of parameters should automatically be set.
The current auto tunable parameters are:
The values of the server variables are collected by monitors, which means that if the servers of a service are not monitored by a monitor, then the parameters of that service will not be auto tuned.
Note that even if auto_tune is set to all, the auto tunable parameters
can still be set in the configuration file and modified with maxctrl.
However, the specified value will be overwritten at the next auto tuning
round, but only if the servers of the service are monitored by a monitor.
threads
Type: number or auto
Mandatory: No
Dynamic: No
Default: auto
This parameter controls the number of worker threads that are handling the
events coming from the kernel. The default is auto which uses as many threads
as there are CPU cores. MaxScale versions older than 6 used one thread by
default.
You can explicitly enable automatic configuration of this value by setting the
value to auto. This way MariaDB MaxScale will detect the number of available
processors and set the amount of threads to be equal to that number.
Note that if MaxScale is running in a container where the CPU resources
have been limited, the use of auto may cause MaxScale to use more resources
than what is available. In such a situation auto should not be used, but instead
an explicit number that corresponds to the amount of CPU resources available in
the container. As a rule of thumb, an appropriate value for threads is the vCPU of the container rounded up to the nearest integer. For instance, if
the vCPU of the container is 0.5 then 1 is an appropriate value for threads; if the vCPU is 2.3 then 3 is.
The maximum value for threads is specified by threads_max.
From 23.02 onwards it is possible to change the number of threads at runtime. Please see for more details.
Additional threads will be created to execute other internal services within MariaDB MaxScale. This setting is used to configure the number of threads that will be used to manage the user connections.
threads_max
Type: positive integer
Default: 256
Dynamic: No
This parameter specifies the hard limit for the number of worker threads, which is specified using threads.
At startup, if the value of threads is larger than that of threads_max,
the value of threads will be reduced to that. At runtime, an attempt to
increase the value of threads beyond that of threads_max is an error.
rebalance_period
Type: duration
Mandatory: No
Dynamic: Yes
Default: 0s
This duration parameter controls how often the load of the worker threads should be checked. The default value is 0, which means that no checks and no rebalancing will be performed.
Note that the value of rebalance_period should not be smaller than the
value of rebalance_window whose default value is 10.
If the value of rebalance_period is significantly shorter than that
of rebalance_window, it may lead to oscillation where work is constantly
moved from one thread to another.
rebalance_threshold
Type: number
Mandatory: No
Dynamic: Yes
Default: 20
This integer parameter controls at which point MaxScale should start moving work from one worker thread to another.
If the difference in load between the thread with the maximum load and the thread with the minimum load is larger than the value of this parameter, then work will be moved from the former to the latter.
Although the load of a thread can vary between 0 and 100, the value of this parameter must be between 5 and 100.
Note that rebalancing will not be performed unless rebalance_period
has been specified.
rebalance_window
Type: number
Mandatory: No
Dynamic: Yes
Default: 10
This integer parameter controls how many seconds of load should be taken into account when deciding whether work should be moved from one thread to another.
The default value is 10, which means that the load during the last 10 seconds is considered when deciding whether work should be moved.
The minimum value is 1 and the maximum 60.
skip_name_resolve
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
This parameter controls whether reverse domain name lookups are made to convert client IP addresses to hostnames. If enabled, client IP addresses will not be resolved to hostnames during authentication or for the REST API even if requested.
If you have database users that use a hostname in the host part of the user
(i.e. 'user'@'my-hostname.org'), a reverse lookup on the client IP address is
done to see if it matches the host. Reverse DNS lookups can be very slow which
is why it is recommended that they are disabled and that users are defined using
an IP address.
auth_connect_timeout
Type: duration
Mandatory: No
Dynamic: Yes
Default: 10s
Duration, default 10s. This setting defines the connection timeout when attempting to fetch MariaDB/MySQL/Clustrix users from a backend server. The same value is also used for read and write timeouts. Increasing this value causes MaxScale to wait longer for a response from a server before user fetching fails. Other servers may then be attempted.
The value is given as . If no explicit unit is provided, the value is interpreted as seconds. In subsequent versions a value without a unit may be rejected. Since the granularity of the timeout is seconds, a timeout specified in milliseconds will be rejected even if the given value is longer than a second.
auth_read_timeout
Deprecated and ignored as of MaxScale 2.5.0. See auth_connect_timeout above.
auth_write_timeout
Deprecated and ignored as of MaxScale 2.5.0. See auth_connect_timeout above.
query_retries
Type: number
Mandatory: No
Dynamic: No
Default: 1
The number of times an interrupted internal query will be retried. The default is to retry the query once. This feature was added in MaxScale 2.1.10 and was disabled by default until MaxScale 2.3.0.
An interrupted query is any query that is interrupted by a network
error. Connection timeouts are included in network errors and thus it is
advisable to make sure that the value of query_retry_timeout is set to an
adequate value. Internal queries are only used to retrieve authentication data
and monitor the servers.
query_retry_timeout
Type: duration
Mandatory: No
Dynamic: Yes
Default: 10s
The total timeout in seconds for any retried queries. The default value is 5 seconds.
An interrupted query is retried for either the configured amount of attempts or until the configured timeout is reached.
The value is specified as documented . If no explicit unit is provided, the value is interpreted as seconds in MaxScale 2.4. In subsequent versions a value without a unit may be rejected. Note that since the granularity of the timeout is seconds, a timeout specified in milliseconds will be rejected, even if the duration is longer than a second.
passive
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
Controls whether MaxScale is a passive node in a cluster of multiple MaxScale instances.
This parameter is intended to be used with multiple MaxScale instances that use failover functionality to manipulate the cluster in some form. Passive nodes only observe the clusters being monitored and take no direct actions.
The following functionality is disabled when passive mode is enabled:
Automatic failover in the mariadbmon module
Automatic rejoin in the mariadbmon module
Launching of monitor scripts
NOTE: Even if MaxScale is in passive mode, it will still accept clients and route any traffic sent to it. The only operations affected by the passive mode are the ones listed above.
ms_timestamp
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
Enable or disable the high precision timestamps in logfiles. Enabling this adds millisecond precision to all logfile timestamps.
syslog
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
Log messages to the system journal. This logs messages using the native SystemD
journal interface. The logs can be viewed with journalctl.
MaxScale 22.08 changed the default value of syslog from true to false. This was done to remove the redundant logging that it caused as both syslog and maxlog were enabled by default. This caused each message to be
logged twice: once into the system journal and once into MaxScale's own logfile.
maxlog
Type: boolean
Mandatory: No
Dynamic: Yes
Default: true
Log messages to MariaDB MaxScale's log file. The name of the log file is maxscale.log and it is located in the directory pointed to by logdir.
log_warning
Type: boolean
Mandatory: No
Dynamic: Yes
Default: true
Log messages whose syslog priority is warning.
MaxScale logs warning level messages whenever a condition is encountered that the user should be notified of but that does not require immediate action, or that indicates a minor problem.
log_notice
Type: boolean
Mandatory: No
Dynamic: Yes
Default: true
Log messages whose syslog priority is notice.
These messages contain information that is helpful for the user and they usually do not indicate a problem. These are logged whenever something worth noting happens in either MaxScale or in the servers it monitors.
log_info
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
Log messages whose syslog priority is info.
These messages provide detailed information about the internal workings of MariaDB MaxScale. These messages should only be enabled when there is a need to inspect the internal logic of MaxScale. A common use-case is to see why a particular query was handled in a certain way. Almost all modules log some messages on the info level and this can be very helpful when trying to solve routing related problems.
log_debug
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
Log messages whose syslog priority is debug.
These messages are intended for development purposes and are disabled by default. These are rarely useful outside of debugging core MaxScale issues.
Note: If MariaDB MaxScale has been built in release mode, then debug messages are excluded from the build and this setting will not have any effect. If an attempt to enable these is made, a warning is logged.
log_warn_super_user
Type: boolean
Mandatory: No
Dynamic: No
Default: false
When enabled, a warning is logged whenever a client with SUPER-privilege successfully authenticates. This also applies to COM_CHANGE_USER-commands. The setting is intended for diagnosing situations where a client interferes with a primary server switchover. Super-users bypass the read_only-flag which switchover uses to block writes to the primary.
log_augmentation
Type: number
Mandatory: No
Dynamic: Yes
Default: 0
Enable or disable the augmentation of messages. If this is enabled, then each logged message is appended with the name of the function where the message was logged. This is primarily for development purposes and hence is disabled by default.
To disable the augmentation use the value 0 and to enable it use the value 1.
log_throttling
Type: number, duration, duration
Mandatory: No
Dynamic: Yes
Default: 10, 1000ms, 10000ms
It is possible that a particular error (or warning) is logged over and over again, if the cause for the error persistently remains. To prevent the log from flooding, it is possible to specify how many times a particular error may be logged within a time period, before the logging of that error is suppressed for a while.
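For instance, an entry of the following form could be used; the values are chosen to match the description in the next paragraph:

```
log_throttling=8, 2s, 15s
```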
In the example above, the logging of a particular error will be suppressed for 15 seconds if the error has been logged 8 times in 2 seconds.
The default is 10, 1000ms, 10000ms, which means that if the same error is
logged 10 times in one second, the logging of that error is suppressed for the
following 10 seconds.
Whenever an error message that is being throttled is logged within the triggering window (the second argument), the suppression window is extended. This continues until there is a pause in the messages that is longer than the triggering window.
For example, with the default configuration the messages must pause for at least one second in order for the throttling to eventually stop. This mechanism prevents long-lasting error conditions from slowly filling up the log with short bursts of messages.
To disable log throttling, add an entry with an empty value
or one where any of the integers is 0.
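For example, either of the following entries would disable throttling, assuming the same comma-separated form as above:

```
log_throttling=
log_throttling=0, 1000ms, 10000ms
```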
The durations can be specified as documented . If no explicit unit is provided, the value is interpreted as milliseconds in MaxScale 2.4. In subsequent versions a value without a unit may be rejected.
Note that notice, info and debug messages are never throttled.
logdir
Type: path
Mandatory: No
Dynamic: No
Default: /var/log/maxscale
Set the directory where the logfiles are stored. The folder needs to be both readable and writable by the user running MariaDB MaxScale.
datadir
Type: path
Mandatory: No
Dynamic: No
Default: /var/lib/maxscale
Set the directory where the data files used by MariaDB MaxScale are stored. Modules can write to this directory and for example the binlogrouter uses this folder as the default location for storing binary logs.
This is also the directory where the password encryption key is read from that
is generated by maxkeys.
secretsdir
Type: path
Mandatory: No
Dynamic: No
Default: ""
The location where the .secrets file is read from. If secretsdir is not
defined, the file is read from datadir.
This parameter was added in MaxScale 6.4.16, 22.08.13, 23.02.10, 23.08.6 and 24.02.2.
libdir
Type: path
Mandatory: No
Dynamic: No
Default: OS Dependent
Set the directory where MariaDB MaxScale looks for modules. The library directory is the only directory that MariaDB MaxScale uses when it searches for modules. If you have custom modules for MariaDB MaxScale, make sure you have them in this folder.
The default value depends on the operating system. For RHEL versions the value
is /usr/lib64/maxscale/. For Debian and Ubuntu it is /usr/lib/x86_64-linux-gnu/maxscale/.
sharedir
Type: path
Mandatory: No
Dynamic: No
Default: /usr/share/maxscale
Sets the directory where static data assets are loaded.
The MaxScale GUI static files are located in the gui/ subdirectory. If the GUI
files have been manually moved somewhere else, this path must be configured to
point to the parent directory of the gui/ subdirectory.
The MaxScale REST API only serves files for the GUI that are located in the gui/ subdirectory of the configured sharedir. Any files whose real path
resolves to outside of this directory are not served by the MaxScale GUI: this
is done to prevent other files from being accessible via the MaxScale REST
API. This means that the path to the GUI source directory can contain symbolic links
but all parts after the /gui/ directory must reside inside it.
cachedir
Type: path
Mandatory: No
Dynamic: No
Default: /var/cache/maxscale
Configure the directory MariaDB MaxScale uses to store cached data.
piddir
Type: path
Mandatory: No
Dynamic: No
Default: /var/run/maxscale
Configure the directory for the PID file for MariaDB MaxScale. This file contains the Process ID for the running MariaDB MaxScale process.
execdir
Type: path
Mandatory: No
Dynamic: No
Default: /usr/bin
Configure the directory where the executable files reside. All internal processes which are launched will use this directory to look for executable files.
connector_plugindir
Type: path
Mandatory: No
Dynamic: No
Default: OS Dependent
Location of the MariaDB Connector-C plugin directory. The MariaDB Connector-C used in MaxScale can use this directory to load authentication plugins. The versions of the plugins must be binary compatible with the connector version that MaxScale was built with.
Starting with version 6.2.0, the plugins are bundled with MaxScale and the
default value now points to the bundled plugins. The location where the plugins
are stored depends on the operating system. For RHEL versions the value is /usr/lib64/maxscale/plugin/. For Debian and Ubuntu it is /usr/lib/x86_64-linux-gnu/maxscale/plugin/.
Older versions of MaxScale used /usr/lib/mysql/plugin/ as the default value.
persistdir
Type: path
Mandatory: No
Dynamic: No
Default: /var/lib/maxscale/maxscale.cnf.d/
Configure the directory where persisted configurations are stored. When a new object is created via MaxCtrl, it will be stored in this directory. Do not use this directory for normal configuration files, use /etc/maxscale.cnf.d/ instead. The user MaxScale is running as must be able to write into this directory.
module_configdir
Type: path
Mandatory: No
Dynamic: No
Default: /etc/maxscale.modules.d/
Configure the directory where module configurations are stored. Path arguments are resolved relative to this directory. This directory should be used to store module specific configurations.
Any configuration parameter that is not an absolute path will be interpreted as a relative path. The relative paths use the module configuration directory as the working directory.
For example, the configuration parameter file=my_file.txt would be interpreted
as /etc/maxscale.modules.d/my_file.txt whereas file=/home/user/my_file.txt would
be interpreted as /home/user/my_file.txt.
language
Type: path
Mandatory: No
Dynamic: No
Default: /var/lib/maxscale/
Set the directory in which the errmsg.sys file is located. MariaDB MaxScale will look for the errmsg.sys file installed with MariaDB MaxScale in this directory.
query_classifier
Deprecated since MariaDB MaxScale 23.08.
query_classifier_cache_size
Type:
Mandatory: No
Dynamic: Yes
Default: System Dependent
Specifies the maximum size of the query classifier cache. The default limit is 15% of total system memory starting with MaxScale 2.3.7. In older versions the default limit was 40% of total system memory. This feature was added in MaxScale 2.3.0.
When the query classifier cache has been enabled, MaxScale will, after a statement has been parsed, store the classification result using the canonicalized version of the statement as the key.
If the classification result for a statement is needed, MaxScale will first canonicalize the statement and check whether the result can be found in the cache. If it can, the statement will not be parsed at all but the cached result is used.
The configuration parameter takes one integer that specifies the maximum size of the cache. The size of the cache can be specified as explained .
Note that MaxScale uses a separate cache for each worker thread. To obtain the
amount of memory available for each thread, divide the cache size with the value
of threads. If statements are evicted from the cache (visible in the
diagnostic output), consider increasing the cache size.
Note also that limit is not a hard limit, but an approximate one. Namely, although the memory needed for storing the canonicalized statement and the classification result is correctly accounted for, there is additional overhead whose size is not exactly known and over which we do not have direct control.
Using maxctrl show threads it is possible to check what the actual size of
the cache is and to see performance statistics.
query_classifier_args
Deprecated since MariaDB MaxScale 23.08.
substitute_variables
Type:
Mandatory: No
Dynamic: No
Default: false
Enable or disable the substitution of environment variables in the MaxScale configuration file. If the substitution of variables is enabled and a configuration line like the one sketched below is encountered, then $SOME_VALUE will be replaced with the actual value of the environment variable SOME_VALUE.
Note: Variable substitution is only performed if '$' is the first character of the value. Everything following '$' is interpreted as the name of the environment variable.
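For example, a sketch of such a configuration line (the parameter name address is only illustrative):
address=$SOME_VALUE
Here MaxScale would replace $SOME_VALUE with the contents of the SOME_VALUE environment variable before using the parameter.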
Referring to a non-existing environment variable is a fatal error.
The setting of substitute_variables will have an effect on all parameters in all other sections, irrespective of where the [maxscale] section is placed in the configuration file. However, in the [maxscale] section, to ensure that substitution will take place, place the substitute_variables=true line first.
sql_mode
Type:
Mandatory: No
Dynamic: No
Values: default, oracle
Specifies whether the query classifier parser should initially expect MariaDB or PL/SQL kind of SQL.
The allowed values are:
default: The parser expects regular MariaDB SQL.
oracle: The parser expects PL/SQL.
NOTE If sql_mode is set to oracle, then MaxScale will also assume
that autocommit initially is off.
At runtime, MariaDB MaxScale will recognize statements like set sql_mode=oracle; and set sql_mode=default; and change mode accordingly.
NOTE If set sql_mode=oracle; is encountered, then MaxScale will also
behave as if autocommit had been turned off and conversely, if set sql_mode=default; is encountered, then MaxScale will also behave
as if autocommit had been turned on.
Note that MariaDB MaxScale is not explicitly aware of the sql mode of
the server, so the value of sql_mode should reflect the sql mode used
when the server is started.
local_address
Type: string
Mandatory: No
Dynamic: No
Default: ""
What specific local address/interface to use when connecting to servers.
This can be used for ensuring that MaxScale uses a particular interface when connecting to servers, in case the computer MaxScale is running on has multiple interfaces.
If given as a hostname, MaxScale will perform name lookup on the address when starting and reuse the result.
users_refresh_time
Type:
Mandatory: No
Dynamic: Yes
Default: 30s
How often, in seconds, MaxScale at most may refresh the users from the backend server.
MaxScale will at startup load the users from the backend server, but if the authentication of a user fails, MaxScale assumes it is because a new user has been created and will thus refresh the users. By default, MaxScale will do that at most once per 30 seconds and with this configuration option that can be changed. A value of 0 allows infinite refreshes and a negative value disables the refreshing entirely.
The value is specified as documented . If no explicit unit is provided, the value is interpreted as seconds in MaxScale 2.4. In subsequent versions a value without a unit may be rejected. Note that since the granularity of the timeout is seconds, a timeout specified in milliseconds will be rejected, even if the duration is longer than a second.
In MaxScale 2.3.9 and older versions, the minimum allowed value was 10 seconds but, due to a bug, the default value was 0 which allowed infinite refreshes.
users_refresh_interval
Type:
Mandatory: No
Dynamic: Yes
Default: 0s
How often, in seconds, MaxScale will automatically refresh the users from the backend server.
This configuration is used to periodically refresh the backend users, making sure
they are up to date. The default value for this setting is 0, meaning the users
are not periodically refreshed. However, they can still be refreshed in case of
failed authentication depending on users_refresh_time.
retain_last_statements
Type: number
Mandatory: No
Dynamic: Yes
Default: 0
How many statements MaxScale should store for each session. This is for debugging purposes, as in case of problems it is often of value to be able to find out exactly what statements were sent before a particular problem turned up.
Note: See also dump_last_statements, which enables the actual dumping of the statements. Unless both parameters are defined, the statement dumping mechanism does not work.
dump_last_statements
Type:
Mandatory: No
Dynamic: Yes
Values: on_close, on_error, never
This configuration item specifies the circumstances in which MaxScale should dump the last statements that a client sent. The allowed values are never, on_error and on_close. With never the statements are never logged, with on_error they are logged if the client closes the connection improperly, and with on_close they are always logged when a client session is closed.
Note that retain_last_statements specifies how many statements MaxScale should retain for each session. Unless it has been set to a value other than 0, this configuration setting has no effect.
session_trace
Type: number
Mandatory: No
Dynamic: Yes
Default: 0
How many log entries are stored in the session specific trace log. This log is written to disk when a session ends abnormally and can be used for debugging purposes. Currently the session trace log is written to the log in the following situations:
When MaxScale receives a fatal signal and is about to crash.
Whenever an unexpected response is read from a server
If the session is not closed gracefully (i.e. client doesn't send a COM_QUIT packet)
Whenever readwritesplit receives a response that it was not expecting.
Enabling this is useful when a session is disconnected and the log is not detailed enough; in such cases the info-level messages in the trace log might reveal the true cause of why the connection was closed.
The default value is 0.
The session trace log is also exposed by the REST API and is shown with maxctrl show sessions.
The order in which the session trace messages are logged into the log changed in MaxScale 6.4.9 (MXS-4716). Newer versions will log the messages in the "normal log order" of older events coming first and newer events appearing later in the file. Older versions of MaxScale logged the trace dump in the reverse order with the newest messages first and oldest ones last.
session_trace_match
Type:
Mandatory: No
Dynamic: Yes
Default: None
If both session_trace and session_trace_match are defined, and a trace log
entry of a session matches the regular expression, the trace log is written to
disk. The check for the match is done when the session is stopping.
The most effective way to debug MaxScale related issues is to turn on log_info
and observe the events written into the MaxScale log. The only problem with this
approach is that it can cause a severe performance bottleneck and can easily
fill up the disk as the amount of data written to it is significant. With session_trace and session_trace_match, the content that actually gets logged
can be filtered to only what is needed.
For example, the following configuration would only log the trace log messages from sessions that execute SQL queries with syntax errors:
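A sketch of such a configuration (the regular expression is illustrative; any pattern matching the logged error text would work):
[maxscale]
session_trace=1000
session_trace_match=/syntax error/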
This can be used to easily identify which applications execute the queries without having to gather the info level log output from all the sessions that connect to MaxScale. For every session that ends up logging a syntax error message, the last 1000 lines of log output from that session are written into the MaxScale log.
writeq_high_water
Type:
Mandatory: No
Dynamic: Yes
Default: 65536
High water mark for the network write buffer. When the size of the outbound network buffer in MaxScale for a single connection exceeds this value, network traffic throttling for that connection is started. The parameter accepts size values. The default value was 16777216 bytes before 22.08.4.
More specifically, if the client side write queue is above this value, it will block traffic coming from backend servers. If the backend side write queue is above this value, it will block traffic from client.
The buffer that this parameter controls is the buffer internal to MaxScale and
is not the kernel TCP send buffer. This means that the total amount of buffered
data is determined by both the kernel TCP buffers and the value of writeq_high_water.
Network throttling is only enabled when writeq_high_water is non-zero. In
MaxScale 23.02 and earlier, also writeq_low_water had to be non-zero.
writeq_low_water
Type:
Mandatory: No
Dynamic: Yes
Default: 1024
Low water mark for network write buffer. Once the traffic throttling is enabled,
it will only be disabled when the network write buffer is below writeq_low_water bytes. The parameter accepts size values. The
default value was 8192 bytes before 22.08.4.
The value of writeq_high_water must always be greater than the value of writeq_low_water.
persist_runtime_changes
Type:
Default: true
Dynamic: No
Persist changes done at runtime. This parameter was added in MaxScale 22.08.0.
When persist_runtime_changes is enabled, runtime configuration changes done
with the GUI, MaxCtrl or via the REST API cause a new configuration file to be
saved in /var/lib/maxscale/maxscale.cnf.d/. If load_persisted_configs is
enabled, these files will be applied on top of any existing values found in
static configuration files whenever MaxScale is starting up.
load_persisted_configs
Type:
Mandatory: No
Dynamic: No
Default: true
Load persisted runtime changes on startup. This parameter was added in MaxScale 2.3.6.
All runtime configuration changes are persisted in generated configuration files
located by default in /var/lib/maxscale/maxscale.cnf.d/ and are loaded on
startup after main configuration files have been read. To make runtime
configurations volatile (i.e. they are lost when MaxScale is restarted), use load_persisted_configs=false. All changes are still persisted since it stores
the current runtime state of MaxScale. This makes problem analysis easier if an
unexpected outage happens.
max_auth_errors_until_block
Type: number
Mandatory: No
Dynamic: Yes
Default: 10
The maximum number of authentication failures that are tolerated before a host is temporarily blocked. The default value is 10 failures. After a host is blocked, connections from it are rejected for 60 seconds. To disable this feature, set the value to 0.
Note that the configured value is not a hard limit. The number of tolerated
failures is between max_auth_errors_until_block and threads * max_auth_errors_until_block where max_auth_errors_until_block is the
configured value of this parameter and threads is the number of configured
threads.
debug
Type: string
Mandatory: No
Dynamic: No
Default: ""
Define debug options from the --debug command line option. Either the command line option or the parameter should be used, not both. The debug options are only for testing purposes and are not to be used in production.
The MaxScale REST API is an HTTP interface that provides JSON format data intended to be consumed by monitoring applications and visualization tools.
The following options must be defined under the [maxscale] section in the
configuration file.
admin_host
Type: string
Mandatory: No
Dynamic: No
Default: "127.0.0.1"
The network interface where the REST API listens on. The default value is the
IPv4 address 127.0.0.1 which only listens for local connections.
admin_port
Type: number
Mandatory: No
Dynamic: No
Default: 8989
The port where the REST API listens on. The default value is port 8989.
admin_auth
Type:
Mandatory: No
Dynamic: No
Default: true
Enable REST API authentication using HTTP Basic Access authentication. This is not a secure method of authentication without HTTPS but it does add a small layer of security.
For more information, read the .
admin_ssl_key
Type: path
Mandatory: No
Dynamic: No
Default: ""
The path to the TLS private key in PEM format for the admin interface.
If both the admin_ssl_key and admin_ssl_cert options are defined, the admin
interface will use encrypted HTTPS instead of plain HTTP.
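A sketch of such a configuration, with hypothetical certificate paths:
admin_ssl_key=/etc/maxscale/ssl/maxscale-key.pem
admin_ssl_cert=/etc/maxscale/ssl/maxscale-cert.pem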
admin_ssl_cert
Type: path
Mandatory: No
Dynamic: No
Default: ""
The path to the TLS public certificate in PEM format. See admin_ssl_key
documentation for more details.
admin_ssl_ca_cert
Deprecated since MariaDB MaxScale 22.08. See admin_ssl_ca.
admin_ssl_ca
Type: path
Mandatory: No
Dynamic: No
Default: ""
The path to the TLS CA certificate in PEM format. If defined, the client certificate, if provided, will be validated against it. This parameter is optional starting with MaxScale 2.3.19.
NOTE Up until MariaDB MaxScale 6, the parameter was called admin_ssl_ca_cert,
which is still accepted as an alias for admin_ssl_ca.
admin_ssl_version
Type:
Mandatory: No
Dynamic: No
Values: MAX, TLSv10, TLSv11, TLSv12, TLSv13
This parameter controls the enabled TLS versions in the REST API. Accepted values are:
TLSv10
TLSv11
TLSv12
TLSv13
MaxScale versions 6.4.16, 22.08.13, 23.02.10, 23.08.6, 24.02.2 and all newer releases accept also the following alias values:
TLSv1.0
TLSv1.1
TLSv1.2
TLSv1.3
The default value is MAX which negotiates the highest level of encryption that
both the client and server support. The list of supported TLS versions depends
on the operating system and what TLS versions the GnuTLS library supports.
For example, to enable only TLSv1.1 and TLSv1.3, use admin_ssl_version=TLSv1.1,TLSv1.3.
This parameter was added in MaxScale 2.5.7.
Older versions of MaxScale interpreted admin_ssl_version as the minimum
allowed TLS version. In those versions, admin_ssl_version=TLSv1.2 allowed both
TLSv1.2 and TLSv1.3. In MaxScale 6.4.16, 22.08.13, 23.02.10, 23.08.6, 24.02.2
and all newer versions, the value is an enumeration of accepted TLS protocol
versions. In these versions, admin_ssl_version=TLSv1.2 only allows TLSv1.2. To
retain the old behavior, specify all the accepted values with admin_ssl_version=TLSv1.2,TLSv1.3.
admin_enabled
Type:
Mandatory: No
Dynamic: No
Default: true
Enable or disable the admin interface. This allows the admin interface to be completely disabled to prevent access to it.
admin_gui
Type:
Mandatory: No
Dynamic: No
Default: true
Enable or disable the admin graphical user interface.
MaxScale provides a GUI for administrative operations via the REST API. When the
GUI is enabled, the root REST API resource (i.e. http://localhost:8989/) will
serve the GUI. When disabled, the REST API will respond with a 200 OK to the
request. By disabling the GUI, the root resource can be used as a low overhead
health check.
admin_secure_gui
Type:
Mandatory: No
Dynamic: No
Default: true
Whether to serve the GUI only over secure HTTPS connections.
To be secure by default, the GUI is only served over HTTPS connections as
it uses a token authentication scheme. This also controls whether the /auth endpoint requires an encrypted connection.
To allow use of the GUI without having to configure TLS certificates for the MaxScale REST API, set this parameter to false.
admin_log_auth_failures
Type:
Mandatory: No
Dynamic: Yes
Default: true
Log authentication failures for the admin interface.
admin_pam_readwrite_service / admin_pam_readonly_service
Type: string
Mandatory: No
Dynamic: No
Default: ""
Use Pluggable Authentication Modules (PAM) for REST API authentication. The settings
accept a PAM service name which is used during authentication if normal authentication
fails. admin_pam_readwrite_service should accept users who can do any
MaxCtrl/REST-API-operation. admin_pam_readonly_service should accept users who can only
do read operations. Because REST-API does not support back and forth communication between
the client and MaxScale, the PAM services must be simple. They should only ask for the
password and nothing else.
If only admin_pam_readwrite_service is configured, both read and write operations can be
authenticated by PAM. If only admin_pam_readonly_service is configured, only read
operations can be authenticated by PAM. If both are set, the service used is determined by
the requested operation. Leave or set both empty to disable PAM for REST-API.
admin_readwrite_hosts
Type: string
Mandatory: No
Dynamic: No
Default: %
Limit REST-API logins to specific source addresses/hosts. Supports
a comma-separated list of addresses and hostnames. Addresses can be given in
CIDR-notation. Admin clients still need to supply credentials as usual.
By default, all source addresses are allowed. admin_readwrite_hosts lists
the hosts from which any operation is allowed.
When listing hostnames, % and _ act as wildcards, similar to the hostname
component in MariaDB Server user accounts. localhost is a reserved hostname
and will not match any connection (use 127.0.0.1 for loopback connections).
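An illustrative sketch (the addresses and hostname pattern are hypothetical):
admin_readwrite_hosts=127.0.0.1,192.168.1.0/24,%.example.com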
When checking the source host of the incoming REST-API client, MaxScale first compares against addresses and address masks. If a match was not found and the setting values contain hostnames, reverse name lookup is performed on the client address. The lookup can take a while in rare cases. To prevent such slowdown, use only IP-addresses in the host lists.
skip_name_resolve cannot be enabled if admin_readwrite_hosts or admin_readonly_hosts includes hostname patterns, as these would not work.
admin_readonly_hosts
Works similarly to admin_readwrite_hosts. Lists the hosts from which only read
operations are allowed. An admin client can do a read operation if their source
address matches either admin_readwrite_hosts or admin_readonly_hosts.
admin_jwt_algorithm
Type:
Mandatory: No
Dynamic: No
Values: auto, HS256, HS384
The signature algorithm used by the MaxScale REST API when generating JSON Web Tokens.
For more information about the tokens and how they work, refer to .
If a symmetric algorithm is used (i.e. HS256, HS384 or HS512), MaxScale
will generate a random encryption key on startup and use that to sign the
messages. The symmetric key can also be retrieved from an encryption key manager if the admin_jwt_key parameter is defined.
If an asymmetric algorithm (i.e. public key authentication) is used, both the admin_ssl_cert and admin_ssl_key parameters must be defined and they must
contain a private key and a public certificate of the correct type. If the wrong
key type, key length or elliptic curve is used, MaxScale will refuse to start.
Asymmetric key algorithms make it possible for the clients of the REST API to validate that the token was indeed generated by the correct entity.
Symmetric algorithms make it easy to share the same tokens between multiple MaxScale instances as the shared secret can be stored in a key management system.
The possible values for this parameter are:
auto
MaxScale will attempt to detect the best algorithm to use for
signatures. The algorithm used depends on the private key type: RSA keys use PS256, EC keys use ES256, ES384 or ES512 depending on the curve,
Ed25519 keys use ED25519 and Ed448 keys use ED448. If MaxScale cannot
auto-detect the key type, it falls back to HS256.
admin_jwt_key
Type: string
Mandatory: No
Dynamic: No
Default: ""
The ID of the encryption key used to sign the JSON Web Tokens. If configured, an encryption key manager must also be configured and it must contain the key with the given ID. If no key is defined, MaxScale will use a random encryption key whenever a symmetric signature algorithm is used.
Currently, the encryption key is only read on startup. This means that the tokens will be signed by the latest key version that is available on startup: rotating the encryption key in the key management system will not cause the JWTs to be signed with newer versions of the key.
admin_jwt_max_age
Type:
Mandatory: No
Dynamic: No
Default: 24h
The maximum lifetime of a token generated by the /auth endpoint.
If a client requests a token with a lifetime that exceeds the configured value, the token lifetime is silently truncated to this value. This can be used to control the maximum length of a MaxGUI session.
This also acts as the effective maximum age of any database connection created
from the /sql endpoint.
admin_oidc_url
Type: string
Mandatory: No
Dynamic: No
Default: ""
The URL of an OpenID Connect server that is used for JWT validation.
If defined, any tokens signed by this server are accepted as valid bearer tokens
for the MaxScale REST API. The "sub" field of the token is assumed to be the
username of an administrative user in MaxScale and the "account" claim is
assumed to be the type of the user: "admin" for administrative users with full
access to the REST-API and "basic" for users with read-only access to the
REST-API. This means that if the OIDC provider is not able to add the "account" claim, all users must first be created with maxctrl create user before the tokens are accepted.
If this URL is changed at runtime, the new certificates will not be
fetched until a maxctrl reload tls command is executed.
admin_verify_url
Type: string
Mandatory: No
Dynamic: No
Default: ""
URL to a server to which the REST API token verification is delegated.
If the URL is defined, any tokens passed to the REST API will be validated by
doing a GET request to the URL with the client's token as a bearer token. The Referer header of the request is set to the URL being requested by the client
and the custom X-Referrer-Method header is set to the HTTP method being used
(PUT, GET etc.).
Note: When admin_verify_url is used and the remote server cannot
be accessed, all REST API access that uses tokens will be disabled. The
only way to use the REST API with tokens is to remove admin_verify_url
from the configuration which requires restarting MaxScale. The REST API
still accepts HTTP Basic Access authentication even if the remote server
cannot be reached.
By delegating the authentication and authorization of the REST API to an external server, users can implement custom access control systems for the MaxScale REST API.
admin_jwt_issuer
Type: string
Mandatory: No
Dynamic: No
Default: maxscale
The issuer ("iss") claim of all JWTs generated by MaxScale. This can be set
to a custom value to uniquely identify which MaxScale issued a JWT. This is
especially useful for cases where the MaxScale GUI is used from behind
a reverse proxy.
admin_audit
Type:
Mandatory: No
Dynamic: Yes
Default: false
Enable logging of incoming REST API calls.
admin_audit_file
Type: string
Mandatory: No
Dynamic: Yes
Default: /var/log/maxscale/admin_audit.csv
The file where the REST API auditing information is logged.
If a non-default value is used, the directory where the file resides must
exist. For example, with /var/log/maxscale/audit_files/audit.csv, the
directory /var/log/maxscale/audit_files must exist.
admin_audit_exclude_methods
Type:
Mandatory: No
Dynamic: Yes
Values: GET, PUT, POST
List of comma separated HTTP methods to exclude from logging
Currently MaxScale does not use CONNECT or TRACE.
Resetting to log all methods can be done in the configuration file by
writing admin_audit_exclude_methods= or at runtime with maxctrl alter maxscale admin_audit_exclude_methods=.
Remember that once a runtime change has been made, the entry for that
setting is ignored in the main configuration file (usually maxscale.cnf).
config_sync_cluster
Type: monitor
Mandatory: No
Dynamic: Yes
Default: None
This parameter controls which cluster (i.e. monitor) is used to synchronize
configuration changes between MaxScale instances. The first server labeled Master will be used for the synchronization.
By default configuration synchronization is not enabled and it must be
explicitly enabled by defining a monitor name for config_sync_cluster.
When config_sync_cluster is defined, config_sync_user and config_sync_password must also be defined.
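A minimal sketch of enabling the feature (the monitor name and credentials are illustrative):
[maxscale]
config_sync_cluster=MariaDB-Monitor
config_sync_user=maxscale_sync
config_sync_password=maxscale_sync_pw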
For a detailed description of this feature, refer to the section.
config_sync_user
Type: string
Mandatory: No
Dynamic: Yes
Default: None
The username for the account that is used to synchronize configuration changes
across MaxScale instances. Both this parameter and config_sync_password are
required if config_sync_cluster is configured.
This account must have the following grants:
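As an illustrative sketch only (the account and host names are hypothetical; consult the configuration synchronization documentation for the authoritative grant list):
GRANT SELECT, INSERT, UPDATE, CREATE ON mysql.maxscale_config TO 'maxscale_sync'@'maxscalehost';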
The mysql.maxscale_config table can be pre-created in which case the CREATE
grant is not needed by the user configured in config_sync_user. The following
SQL is used to create the table.
If the database where the table is created is changed with config_sync_db, the
grants must be adjusted to target that database instead.
config_sync_password
Type: password
Mandatory: No
Dynamic: Yes
Default: None
The password for config_sync_user. Both this parameter and config_sync_user
are required if config_sync_cluster is configured. This password can
optionally be encrypted using maxpasswd.
config_sync_db
Type: string
Mandatory: No
Dynamic: No
Default: mysql
The database where the maxscale_config table is created. By default the table
is created in the mysql database. This parameter was added in MaxScale
versions 6.4.6 and 22.08.5.
As tables in the mysql database cannot have triggers on them, the database
must be changed to a user-created one in order to create triggers on the table.
An example use-case for triggers on this table is to track all configuration
changes done to MaxScale by inserting them into a separate table.
config_sync_interval
Type:
Mandatory: No
Dynamic: Yes
Default: 5s
How often to synchronize the configuration with the cluster.
As the synchronization involves selecting the configuration version from the database, this value should not be set to an unreasonably low value. The default value of 5 seconds should provide a good compromise between responsiveness and how much load it places on the database.
config_sync_timeout
Type:
Mandatory: No
Dynamic: Yes
Default: 10s
Timeout for all SQL operations done during the configuration synchronization. If an operation exceeds this timeout, the configuration change is treated as failed and an error is reported to the client that did the change.
key_manager
Type:
Dynamic: Yes
Values: none, file, kmip, vault
The encryption key manager to use. The available encryption key managers are:
none - No key manager; encryption keys are not available.
file - The file-based key manager.
kmip - The KMIP key manager.
vault - The Vault key manager.
Refer to the section for more information on how to configure the key managers. The key managers each have their configuration in their own namespace and must have their name as a prefix.
For example to configure the file key manager, the following must be used:
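A sketch of such a configuration; the file.keyfile option name is an assumption used here only for illustration:
[maxscale]
key_manager=file
file.keyfile=/var/lib/maxscale/encryption.key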
MaxScale logs warnings and errors for various reasons and often it is self-evident and generally applicable whether some occurrence should warrant a warning or an error, or perhaps just an info-level message.
However, there are events whose seriousness is not self-evident. For instance, in some environments an authentication failure may simply indicate that someone has made a typo, while in some other environment that can only happen if there has been a security breach.
To handle events like these, MaxScale defines events whose logging
facility and level can be controlled by the administrator. Given an event X, its facility and level are controlled in the following manner:
The above means that if event X occurs, then that is logged using the
facility LOG_LOCAL0 and the level LOG_ERR.
The valid values of facility are the facility values reported by man syslog, e.g. LOG_AUTH, LOG_LOCAL0 and LOG_USER. Likewise, the valid values for level are the ones also reported by man syslog, e.g. LOG_WARNING, LOG_ERR and LOG_CRIT.
Note that MaxScale does not act upon the level, that is, even if the level
of a particular event is defined to be LOG_EMERG, MaxScale will not shut
down if that event occurs.
The default facility is LOG_USER and the default level is LOG_WARNING.
Note that you may also have to configure rsyslog to ensure that the
event can be logged to the intended log file. For instance, if the facility
is chosen to be LOG_AUTH, then /etc/rsyslog.conf should contain a line
like the following:
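A sketch of such an rsyslog rule (the destination path may differ between distributions):
auth.* /var/log/auth.log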
This makes the logged events end up in /var/log/auth.log; the initial auth selector is the relevant entry.
The available events are:
authentication_failure
This event occurs when there is an authentication failure.
A service represents the database service that MariaDB MaxScale offers to the clients. In general a service consists of a set of backend database servers and a routing algorithm that determines how MariaDB MaxScale decides to send statements or route connections to those backend servers.
A service may be considered as a virtual database server that MariaDB MaxScale makes available to its clients.
Several different services may be defined using the same set of backend servers. For example a connection based routing service might be used by clients that already performed internal read/write splitting, whilst a different statement based router may be used by clients that are not written with this functionality in place. Both sets of applications could access the same data in the same databases.
A service is identified by a service name, which is the name of the configuration file section and a type parameter of service.
In order for MariaDB MaxScale to forward any requests, it must have at least one service defined within the configuration file. The definition of a service alone is not enough to allow MariaDB MaxScale to forward requests, however; the service merely links together the other configuration elements.
router
Type: router
Mandatory: Yes
Dynamic: No
The router parameter of a service defines the name of the router module that
will be used to implement the routing algorithm between the client of MariaDB
MaxScale and the backend databases. Additionally routers may also be passed a
comma separated list of options that are used to control the behavior of the
routing algorithm. The two parameters that control the routing choice are router
and router_options. The router options are specific to a particular router and
are used to modify the behavior of the router. The read connection router can be
passed options of master, slave or synced. An example of configuring a service
to use this router while limiting the choice of servers to those in the slave state
is shown in the sketch below.
To also allow connections to servers in the master state, the router options can
be modified to include the master state.
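A sketch of such a service section (server names are illustrative); the commented-out line shows the variant that also accepts servers in the master state:
[Read-Service]
type=service
router=readconnroute
servers=server1,server2
router_options=slave
# router_options=master,slave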
A more complete description of router options and what is available for a given router is included with the documentation of the router itself.
filters
Type: filter list
Mandatory: No
Dynamic: Yes
Default: None
The filters option allows a set of filters to be defined for a service; requests from the client are passed through these filters before being sent to the router for dispatch to the backend server. The filters parameter takes one or more filter names, as defined within the filter definition section of the configuration file. Multiple filters are separated using the | character.
The requests pass through the filters from left to right in the order defined in the configuration parameter.
targets
Type: target list
Mandatory: No
Dynamic: Yes
Default: None
The targets parameter is a comma separated list of server and/or service names
that comprise the routing targets of the service. This parameter was added in
MaxScale 2.5.0.
This parameter allows nested service configurations to be defined without having to configure listeners for all services. For example, one use-case is to use multiple readwritesplit services behind a schemarouter service to have both the sharding of schemarouter with the high-availability of readwritesplit.
NOTE: The targets parameter is mutually exclusive with the cluster and servers parameters.
servers
Type: server list
Mandatory: No
Dynamic: Yes
Default: None
The servers parameter in a service definition provides a comma separated list of the backend servers that comprise the service. The server names are those used in the name section of a block with a type parameter of server (see below).
NOTE: The servers parameter is mutually exclusive with the cluster and targets parameters.
cluster
Type: monitor
Mandatory: No
Dynamic: Yes
Default: None
The servers the service uses are defined by the monitor specified as value of this configuration parameter.
NOTE: The cluster parameter is mutually exclusive with the servers and targets parameters.
user
Type: string
Mandatory: Yes
Dynamic: Yes
This setting defines the user the service uses to fetch user account information from backends. The corresponding password is specified with the password parameter.
See for more information (such as required grants) and troubleshooting tips regarding user account management and client authentication.
password
Type: string
Mandatory: Yes
Dynamic: Yes
This setting defines the password the service uses to fetch user account information from backends. The password may be either a plain text password or one encrypted with maxpasswd. The user is specified with the user parameter.
See for more information (such as required grants) and troubleshooting tips regarding user account management and client authentication.
From 23.08.0 onwards, MaxScale will remember the previous password when the password is changed. If the fetching of the user account information fails using the new password, it will be attempted using the previous one. The purpose of this change is to make it a smoother operation to change the password of the service user. The steps are as follows:
$ maxctrl alter service MyService password=TheNewPassword
MariaDB [(none)]> set password for TheServiceUser = password('TheNewPassword');
Since the old password is remembered and used if the new password does not work, it is no longer necessary to perform those steps simultaneously.
enable_root_user
Type:
Mandatory: No
Dynamic: Yes
Default: false
This parameter controls the ability of the root user to connect to MariaDB MaxScale and hence onwards to the backend servers via MariaDB MaxScale.
localhost_match_wildcard_host
Deprecated and ignored.
version_string
Type: string
Mandatory: No
Dynamic: No
Default: None
This parameter sets a custom version string that is sent in the MySQL Handshake from MariaDB MaxScale to clients.
Example:
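(The value shown is illustrative.)
version_string=10.6.14-MariaDB-MaxScale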
If not set, MaxScale will attempt to use a version string from the
backend databases by selecting the version string of the database with
the lowest version number. If the selected version is from the MariaDB
10 series, a 5.5.5- prefix will be added to it similarly to how the
MariaDB 10 series versions added it.
If MaxScale has not been able to connect to a single database and the
versions are unknown, the default value of 5.5.5-10.4.32 <MaxScale version>-maxscale is used where <MaxScale version> is the version of
MaxScale.
auth_all_servers
Type:
Mandatory: No
Dynamic: Yes
Default: false
This parameter controls whether only a single server or all of the servers are used when loading the users from the backend servers.
By default MaxScale uses the first server labeled as Master as the source of
the authentication data. When this option is enabled, the authentication data is
loaded from all the servers and combined into one big data set.
Note: This parameter was deprecated in MaxScale 24.02.0 but it was then un-deprecated as there were still uses for it. Modules that required this to function correctly (e.g. schemarouter) now automatically enable it.
strip_db_esc
Type:
Mandatory: No
Dynamic: Yes
Default: true
Note: This parameter has been deprecated in MaxScale 23.08. The stripping of escape characters is in all known cases the correct thing to do.
This setting controls whether escape characters (\) are removed from database
names when loading user grants from a backend server. When enabled, a grant
such as grant select on test\_.* to 'user'@'%'; is read as grant select on test_.* to 'user'@'%';
This setting has no effect on database-level grants fetched from a MariaDB Server. The database names of a MariaDB Server are compared using the LIKE operator to properly handle wildcards and escaped wildcards. This setting may affect database names in table and column level grants, although these typically do not contain backslashes.
Some visual database management tools automatically escape some characters and this might cause conflicts when MaxScale tries to authenticate users.
log_auth_warnings
Type:
Mandatory: No
Dynamic: Yes
Default: true
Enable or disable the logging of authentication failures and warnings. If enabled, messages about failed authentication attempts will be logged with details about who tried to connect to MariaDB MaxScale and from where.
log_warning
Type:
Mandatory: No
Dynamic: Yes
Default: false
When enabled, this allows a service to log warning messages even if the global log level configuration disables them.
Note that disabling the service level logging does not override the global
logging configuration: with log_warning=false in the service and log_warning=true globally, warnings will still be logged for all services.
log_notice
Type:
Mandatory: No
Dynamic: Yes
Default: false
When enabled, this allows a service to log notice messages even if the global log level configuration disables them.
log_info
Type:
Mandatory: No
Dynamic: Yes
Default: false
When enabled, this allows a service to log info messages even if the global log level configuration disables them.
log_debug
Type:
Mandatory: No
Dynamic: Yes
Default: false
When enabled, this allows a service to log debug messages even if the global log level configuration disables them.
Debug messages are only enabled for debug builds. Enabling log_debug in a
release build does nothing.
wait_timeout
Type:
Mandatory: No
Dynamic: Yes
Default: 28800s (>= MaxScale 24.02.5, 25.01.2), 0s (<= MaxScale 24.02.4, 25.01.1)
The wait_timeout parameter is used to disconnect sessions to MariaDB MaxScale that have been idle for too long. The session timeout is set to 28800 seconds by default. A value of zero is interpreted as no timeout.
This parameter used to be called connection_timeout and this name is still
accepted as an alias for wait_timeout. The old name has been deprecated in
MaxScale 23.08.
The default value of wait_timeout changed from 0s to 28800s in MaxScale
versions 24.02.5 and 25.01.2 to match the default value of MariaDB
().
Note that since the granularity of the timeout is seconds, a timeout specified in milliseconds will be rejected, even if the duration is longer than a second.
This parameter only takes effect in top-level services. A top-level service is
the service where the listener that the client connected to points (i.e. the
value of service in the listener). If a service defines other services in its targets parameter, the wait_timeout for those is not used.
The value of wait_timeout in MaxScale should be lower than the lowest wait_timeout value on the backend servers. This way idle clients are
disconnected by MaxScale before the backend servers have to close them. Any
client-side idle timeouts (e.g. maximum lifetime for connection pools) should be
lower than wait_timeout in both MaxScale and MariaDB. This way the client
application will end up closing the connection itself which most of the time
results in better and more helpful error messages.
Warning: If a connection is idle for longer than the configured connection timeout, it will be forcefully disconnected and a warning will be logged in the MaxScale log file.
Example:
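(The value shown is illustrative.)
wait_timeout=3600s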
max_connections
Type: number
Mandatory: No
Dynamic: Yes
Default: 0
The maximum number of simultaneous connections MaxScale should permit to this service. If the parameter is zero or is omitted, there is no limit. Any attempt to make more connections after the limit is reached will result in a "Too many connections" error being returned.
Warning: In MaxScale 2.5, it is possible that the number of concurrent
connections temporarily exceeds the value of max_connections. This has been
fixed in MaxScale 6.
Example:
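(The value shown is illustrative.)
max_connections=100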
session_track_trx_state
Type:
Mandatory: No
Dynamic: Yes
Default: false
Note: This parameter has been deprecated in MaxScale 23.08 as the feature is now used automatically if needed. In addition, session tracking no longer needs to be enabled in MariaDB for the transaction state tracking to work correctly.
Enable transaction state tracking by offloading it to the backend servers. Getting the transaction state from the server will be more accurate for stored procedures or multi-statement SQL that modifies the transaction state non-atomically.
In general, it is better to avoid using this type of SQL as tracking the
transaction state via the server responses is not compatible with features such
as transaction_replay in readwritesplit. session_track_trx_state should only
be enabled if the default transaction tracking done by MaxScale does not produce
the desired outcome.
This is only supported by MariaDB versions 10.3 or newer. The following must be configured in the MariaDB server in order for this feature to work. Not configuring the MariaDB server with it can result in the transaction state being wrong in MaxScale which can result in data inconsistency.
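As a sketch, the server-side session tracking variable this refers to is likely the following; confirm the exact requirement against the MariaDB and MaxScale documentation:
session_track_transaction_info=CHARACTERISTICS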
retain_last_statements
Type: number
Mandatory: No
Dynamic: Yes
Default: -1
How many statements MaxScale should store for each session of this service.
This overrides the value of the global setting with the same name. If retain_last_statements has been specified in the global section of the
MaxScale configuration file, then if it has not been explicitly specified
for the service, the global value holds, otherwise the service specific
value rules. That is, it is possible to enable the setting globally and
turn it off for a specific service, or just enable it for specific services.
The value of this parameter can be changed at runtime using maxctrl and the
new value will take effect for sessions created thereafter.
connection_keepalive
Type:
Mandatory: No
Dynamic: Yes
Default: 300s
Keep idle connections alive by sending pings to backend servers. This feature was introduced in MaxScale 2.5.0 where it was changed from a readwritesplit-specific feature to a generic service feature. The default value for this parameter is 300 seconds. To disable this feature, set the value to 0.
The keepalive interval is specified as documented . If no explicit unit is provided, the value is interpreted as seconds in MaxScale 2.5. In subsequent versions a value without a unit may be rejected. Note that since the granularity of the keepalive is seconds, a keepalive specified in milliseconds will be rejected, even if the duration is longer than a second.
The parameter value is the interval in seconds between each keepalive ping. A keepalive ping will be sent to a backend server if the connection has been idle for longer than the configured keepalive interval.
Starting with MaxScale 2.5.21 and 6.4.0, the keepalive pings are not sent if the client
has been idle for longer than the configured value of connection_keepalive. Older versions of MaxScale sent the keepalive pings
regardless of the client state.
This parameter only takes effect in top-level services. A top-level service is
the service where the listener that the client connected to points (i.e. the
value of service in the listener). If a service defines other services in its targets parameter, the connection_keepalive for those is not used.
If the value of connection_keepalive is changed at runtime, the change in the
value takes effect immediately.
As the connection keepalive pings must be done only when there's no ongoing
query, all requests and responses must be tracked by MaxScale. In the case of readconnroute, this will incur a small drop in performance. For routers that
rely on result tracking (e.g. readwritesplit and schemarouter), the
performance will be the same with or without connection_keepalive.
If you want to avoid the performance cost and you don't need the connection
keepalive feature, you can disable it with connection_keepalive=0s.
force_connection_keepalive
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
By default, connection keepalive pings are only sent if the client is either
executing a query or has been idle for less than the duration configured in connection_keepalive. When this parameter is enabled, keepalive pings are
unconditionally sent to any backends that have been idle for longer than connection_keepalive seconds. This option was added in MaxScale 6.4.9 and can
be used to emulate the pre-2.5.21 behavior if long-lived application connections
rely on the old unconditional keepalive pings.
Note: if force_connection_keepalive is enabled and connection_keepalive in
MaxScale is set to a lower value than the wait_timeout on the database, the
client idle timeouts that wait_timeout control are no longer effective. This
happens because MaxScale unconditionally sends the pings which make the client
behave like it is not idle and thus the connections will never be killed due to wait_timeout.
net_write_timeout
Type:
Mandatory: No
Dynamic: Yes
Default: 0s
This parameter controls how long a network write to the client can stay buffered. This feature is disabled by default.
When net_write_timeout is configured and data is buffered on the client
network connection, if the time since the last successful network write exceeds
the configured limit, the client connection will be disconnected.
The value is specified as documented . If no explicit unit is provided, the value is interpreted as seconds in MaxScale 2.4. In subsequent versions a value without a unit may be rejected. Note that since the granularity of the timeout is seconds, a timeout specified in milliseconds will be rejected, even if the duration is longer than a second.
max_sescmd_history
Type: number
Mandatory: No
Dynamic: Yes
Default: 50
max_sescmd_history sets a limit on how many distinct session commands are
stored in the session command history. When the history limit is exceeded, the
history is either pruned to the last max_sescmd_history commands (when prune_sescmd_history is enabled) or the history is disabled and server
reconnections are no longer possible.
The required history size can be estimated by counting the total number of
session state modifying commands (e.g SET NAMES) that are used by a
client. Note that connectors usually add some commands that aren't visible to
the application developer which means a safety margin should be added. A good
rule of thumb is to count the expected number of statements and double that
number. The default value of 50 is a value that'll work for most applications
that do not rely heavily on user variables.
Starting with MaxScale versions 21.06.18, 22.08.15, 23.02.12, 23.08.8, 24.02.4
and 24.08.1, binary protocol prepared statements do not count towards the max_sescmd_history limit. In practice this means that all binary protocol
prepared statements opened by the client are also kept open by MaxScale and are
restored whenever a reconnection to a server happens. The limits imposed by max_sescmd_history apply to other text protocol commands, e.g. SET NAMES.
Note that text protocol prepared statements count as text protocol commands and
are thus potentially pruned when history pruning happens. If an application uses
a lot of PREPARE stmt FROM <sql> commands, it is recommended that the value of max_sescmd_history is increased accordingly.
In older versions of MaxScale, binary protocol prepared statements were limited
by max_sescmd_history and were also pruned by prune_sescmd_history but this
caused problems when the binary protocol prepared statements were pruned while
they were still open from the client's point of view. In older versions, the
recommended value of max_sescmd_history is the number of state modifying
commands plus the maximum number of open prepared statements that any application
may use.
This parameter was moved into the MaxScale core in MaxScale 6.0. The parameter
can be configured for all routers that support the session command
history. Currently only readwritesplit and schemarouter support it.
In addition to limiting the number of commands to store, it also acts as a hard
limit on the number of packets that may be queued up on a backend before it is
closed. Packets are queued while the TCP socket is being opened and when
prepared statements are being prepared. In certain rare cases, a slow server may
fall behind and not catch up to the rest of the cluster and a backlog of packets
forms. In these cases, if more than max_sescmd_history packets are queued, the
connection to the server is closed.
prune_sescmd_history
Type:
Mandatory: No
Dynamic: Yes
Default: true
This option enables pruning of the session command history when it exceeds the
value configured in max_sescmd_history. When this option is enabled, only a
set number of statements are stored in the history. This limits the per-session
memory use while still allowing safe reconnections.
This parameter is intended to be used with pooled connections that remain in use for a very long time. Most connection pool implementations do not reset the session state and instead re-initialize it with new values. This causes the session command history to grow at roughly a constant rate for the lifetime of the pooled connection.
Starting with MaxScale 23.08, the session command history is also simplified before being stored. The simplification is done by removing repeated occurrences of the same command and only executing the latest one of them. The order in which the commands are executed still remains the same but inter-dependencies between commands are not preserved.
For example, the following set of commands demonstrates how the history simplification works and how inter-dependencies can be lost.
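An illustrative sequence (the variable names match the explanation below):
SET @my_planet='Earth';
SET @my_home=CONCAT('My home is planet ', @my_planet);
SET @my_planet='Mars';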
In the example, the value of @my_home has a dependency on the value of @my_planet which is lost when the same statement is executed again and
the history simplification removes the earlier one.
This same problem can occur even in older versions of MaxScale that used a sliding window of the history when the window moves past the statement that later statement depended on. If inter-dependent session commands are being used, the history pruning should be disabled.
Each client-side session that uses a pooled connection only executes a finite amount of session commands. By retaining a shorter history that encompasses all session commands the individual clients execute, the session state of a pooled connection can be accurately recreated on another server.
When the session command history pruning is enabled, there is a theoretical possibility that upon server reconnection the session states of the connections are inconsistent. This can only happen if the length of the stored history is shorter than the list of relevant statements that affect the session state. In practice the default value of 50 session commands is a fairly reasonable value and the risk of inconsistent session state is relatively low.
In case the default history length is too short for safe pruning, set the value
of max_sescmd_history to the total number of commands that affect the session
state plus a safety margin of 10. The safety margin reserves some extra space
for new commands that might be executed due to changes in the client side
application.
Starting with MaxScale 24.02.1, the execution of simple session commands done with binary protocol prepared statements are also stored in the history. A simple session command in the binary protocol is one that:
Takes no parameters
Modifies the session state
Is executed while the original prepared statement is still in the history
The same limitations that apply to the text protocol session commands apply to the binary protocol session commands.
This parameter was moved into the MaxScale core in MaxScale 6.0. The parameter
can be configured for all routers that support the session command
history. Currently only readwritesplit and schemarouter support it.
disable_sescmd_history
Type:
Mandatory: No
Dynamic: Yes
Default: false
This option disables the session command history. This way no history is stored and if a replica server fails, the router will not try to replace the failed replica. Disabling session command history will allow long-lived connections without causing a constant growth in the memory consumption.
This parameter should only be used when either the memory footprint must be as small as possible or when the pruning of the session command history is not acceptable.
This parameter was moved into the MaxScale core in MaxScale 6.0. The parameter
can be configured for all routers that support the session command
history. Currently only readwritesplit and schemarouter support it.
user_accounts_file
Type: path
Mandatory: No
Dynamic: No
Default: ""
Defines path to a file with additional user accounts for incoming clients. Default value is empty, which disables the feature.
In addition to querying the backends, MaxScale can read users from a file. This feature is useful when backends have limitations on the type of users that can be created, or if MaxScale needs to allow users to log in even when backends are down (e.g. binlog router). The users read from the file are only present on MaxScale, so logging into backends can still fail. The format of the file is protocol-specific. The following only applies to MariaDB-protocol, which is also the only protocol supporting this feature.
The file contains JSON text. Three objects are read from it: user, db and roles_mapping, none of which are mandatory. These objects must be arrays which contain user information similar to the mysql.user, mysql.db and mysql.roles_mapping tables on the server. Each array element must define at least the string fields user and host, which define the user account to add or modify.
The elements in the user-array may contain the following additional fields. If a field is not defined, it is assumed either empty (string) or false (boolean).
password: String. Password hash, similar to the equivalent column on server.
plugin: String. Authentication plugin used by client, similar to server.
authentication_string: String. Additional authentication info, similar to server.
default_role: String. Default role of user, similar to server.
The elements in the db-array must contain the following additional field:
db: String. Database which the user can access. Can contain % and _ wildcards.
The elements in the roles_mapping-array must contain the following additional field:
role: String. Role the user can access.
When users are read from both servers and the file, the server takes priority.
That is, if user 'joe'@'%' is defined on both, the file-version is discarded.
The file can still affect the database grants and roles of 'joe'@'%', as the db and roles_mapping arrays are read separately and added to existing grant
and role lists.
An example users file is below.
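This sketch is illustrative (account names, hashes and databases are hypothetical):
{
  "user": [
    { "user": "appuser", "host": "%", "password": "*94BDCEBE19083CE2A1F959FD02F964C7AF4CFC29", "default_role": "app_role" }
  ],
  "db": [
    { "user": "appuser", "host": "%", "db": "appdb" }
  ],
  "roles_mapping": [
    { "user": "appuser", "host": "%", "role": "app_role" }
  ]
}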
user_accounts_file_usage
Type:
Mandatory: No
Dynamic: No
Values: add_when_load_ok, file_only_always
Defines when user_accounts_file is read. The value is an enum, either "add_when_load_ok" (default) or "file_only_always".
"add_when_load_ok" means that the file is only read when users are successfully read from a server. The file contents are then added to the server-based data. If reading from server fails (e.g. servers are down), the file is ignored.
"file_only_always" means that users are not read from the servers at all and the file contents is all that matters. The state of the servers is ignored. This mode can be useful with the binlog router, as it allows clients to log in and fetch binary logs from MaxScale even when backend servers are down.
idle_session_pool_time
Type: duration
Mandatory: No
Dynamic: Yes
Default: -1s
Normally, MaxScale only pools backend connections when a session is closed (controlled by the server settings persistpoolmax and persistmaxtime). Other sessions can use the pooled connections instead of creating new connections to backends. If connection sharing is enabled, MaxScale can pool backend connections also from running sessions, and re-attach a pooled connection when a session is doing a query. This effectively allows multiple sessions to share backend connections.
idle_session_pool_time defines the amount of time a session must be idle before its backend connections may be pooled. To enable connection sharing, set idle_session_pool_time to zero or greater. The value can be given in seconds or milliseconds.
This feature has a significant drawback: when a backend connection is reused, it needs to be restored to the correct state. This means reauthenticating and replaying session commands. This can add a significant delay before the connection is actually ready for a query. If the session command history size exceeds the value of max_sescmd_history, connection sharing is disabled for the session.
This feature should only be used when limiting the backend connection count is a priority, even at the cost of query delay and throughput. This feature only works when the following server settings are also set in MaxScale configuration:
Since reusing a backend connection is an expensive operation, MaxScale only
pools connections when another session requires them. idle_session_pool_time
thus effectively limits the frequency at which a connection can be moved from
one session to another. Setting idle_session_pool_time=0ms causes MaxScale to
move connections as soon as possible.
See below for more information on configuring connection sharing.
Details, limitations and suggestions for connection sharing
As noted above, when a connection is pooled and reused its state is lost. Although session variables and prepared statements are restored by replaying session commands, some state information cannot be transferred.
The most common such state is a transaction. When a transaction is on, connection sharing is disabled for that session until the transaction completes. Other similar situations may not be properly detected, and it's the responsibility of the user to avoid introducing such state to the session when using connection sharing. This means that the following should not be used:
Statements such as LOCK TABLES and GET_LOCK() or any other statement that
introduces state into the connection.
Temporary tables and some problematic user or session variables such as LAST_INSERT_ID(). For LAST_INSERT_ID(), the value returned by the connector
must be used instead of the variable.
Stored procedures that cause session level side-effects.
Several settings affect connection sharing and its effectiveness. Reusing a connection is an expensive operation so its frequency should be minimized. The important configuration settings in addition to idle_session_pool_time are the MaxScale server settings persistpoolmax, persistmaxtime and max_routing_connections. The service settings max_sescmd_history, prune_sescmd_history and multiplex_timeout also have an effect. These settings should be tuned according to the use case.
persistpoolmax limits how many connections can be kept in a pool for a given server. If the pool is full, no more connections are detached from sessions even if they are idle and required. The pool size should be large enough to contain any connections being transferred between sessions, but not be greater than max_routing_connections. Using the value of max_routing_connections is a reasonable starting point.
persistmaxtime limits the time a connection may stay in the pool. This should
be high enough so that pooled connections are not unnecessarily closed. Cleaning
up clearly unneeded connections from the pool may be useful when max_routing_connections is restrictively tuned. Because each MaxScale routing
thread has its own connection pool, one thread can monopolize access to a
server. For example, if the pool of thread 1 has 100 connections to ServerA
with max_routing_connections=100, other threads can no longer connect to the
server. In such a situation, reducing persistmaxtime of ServerA may help as
it would cause unneeded connections in the pool to be closed faster. Such
connection slots then become available to other routing threads. Reducing the
number of routing threads may also help, as it reduces pool
fragmentation. This may reduce overall throughput, though. When using connection
sharing, backend connections are only in the pool momentarily. Consequently, persistmaxtime can be set quite low, e.g. 10s.
If a client session exceeds max_sescmd_history (default 50), pooling is disabled for that session. If many sessions do this and max_routing_connections is set, other sessions will stall as they cannot find a backend connection. This can be avoided with prune_sescmd_history. However, pruning means that old session commands will not be replayed when a pooled connection is reused. If the pruned commands are important (e.g. statement preparations), the session may fail later on.
If the number of clients actively running queries is greater than max_routing_connections, query throughput will suffer as clients will need to take turns. In this situation, it's imperative to minimize the number of backend connections a single session uses. The settings to achieve this depend on the router. For ReadWriteSplit the following should be used:
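One possible combination, assuming the readwritesplit parameters max_slave_connections and lazy_connect, is sketched below; each session then holds at most one replica connection and backend connections are only opened when actually needed.
  max_slave_connections=1
  lazy_connect=true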
The above settings mean that MaxScale can process roughly (number of replica servers X max_routing_connections) read queries simultaneously. Write queries will still need to take turns as there is only one primary server.
The following configuration snippet shows example server and service configurations for connection sharing with ReadWriteSplit.
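The snippet below is a sketch only; the addresses, credentials and limits are placeholders to be adapted.
  [server1]
  type=server
  address=198.51.100.10
  port=3306
  max_routing_connections=100
  persistpoolmax=100
  persistmaxtime=10s

  [RW-Split-Service]
  type=service
  router=readwritesplit
  servers=server1
  user=maxuser
  password=maxpwd
  idle_session_pool_time=500ms
  max_slave_connections=1
  lazy_connect=true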
multiplex_timeout
Type: duration
Mandatory: No
Dynamic: Yes
Default: 60s
When connection sharing (as described above) is on, clients may have to wait for their turn to use a backend connection. If too much time passes without a connection becoming available, MaxScale returns an error to the client, usually also ending the session. multiplex_timeout sets this timeout. Increase it if queries are failing with "Timed out when waiting for a connection". Decrease it if failing early is preferable to stalling.
Server sections define the backend database servers MaxScale uses. A server is identified by its section name in the configuration file. The only mandatory parameter of a server is type, but address and port are also usually defined. A server may be a member of one or more services. A server may only be monitored by at most one monitor.
address
Type: string
Mandatory: Yes, if socket is not provided.
Dynamic: Yes
Default: ""
The IP-address or hostname of the machine running the database server. MaxScale uses this address to connect to the server.
Either address or socket must be defined, but not both. If the address is given as a hostname, MaxScale will perform name lookup on the hostname when starting and update the result every minute and when the address changes.
port
Type: number
Mandatory: No
Dynamic: Yes
Default: 3306
The port the backend server listens on for incoming connections. MaxScale uses this port to connect to the server.
socket
Type: string
Mandatory: Yes, if address is not provided.
Dynamic: Yes
Default: ""
The absolute path to a UNIX domain socket the MariaDB server is listening on.
private_address
Type: string
Mandatory: No
Dynamic: Yes
Default: ""
Alternative IP-address or hostname for the server. This is currently only used by MariaDB Monitor to detect and set up replication. See for more information.
monitoruser
Type: string
Mandatory: No
Dynamic: Yes
Default: None
This setting together with monitorpw defines server-specific credentials for monitoring the server. Monitors typically use the credentials in their own configuration sections to connect to all servers. If server-specific settings are given, the monitor uses those instead.
monitorpw
Type: string
Mandatory: No
Dynamic: Yes
Default: None
This setting together with monitoruser defines server-specific credentials for monitoring the server. Monitors typically use the credentials in their own configuration sections to connect to all servers. If server-specific settings are given, the monitor uses those instead.
monitorpw may be either a plain text password or an encrypted password. See
the section for more information.
extra_port
Type: number
Mandatory: No
Dynamic: Yes
Default: 0
An alternative port used for administrative connections to the server. If this setting is defined, MaxScale uses it for monitoring the server and to fetch user accounts. Client sessions will still use the normal port.
Defining extra_port allows MaxScale to connect even when max_connections on the backend server has been reached. Extra-port connections have their own connection limit, which is one by default. This needs to be increased to allow both monitor and user account manager to connect.
If the connection to the extra-port fails due to connection number limit or if the port is not open on the server, normal port is used.
For more information, see and .
persistpoolmax
Type: number
Mandatory: No
Dynamic: Yes
Default: 0
Sets the size of the server connection pool. Disabled by default. When enabled, MaxScale places unused connections to the server to a pool and reuses them later. Connections typically become unused when a session closes. If the size of the pool reaches persistpoolmax, unused connections are closed instead.
Every routing thread has its own pool. As of version 6.3.0, MaxScale will round up persistpoolmax so that every thread has an equal size pool.
When a MariaDB-protocol connection is taken from the pool to be used in a new session, the state of the connection is dependent on the router. ReadWriteSplit restores the connection to match the session state. Other routers do not.
persistmaxtime
Type: duration
Mandatory: No
Dynamic: Yes
Default: 0s
The persistmaxtime parameter defaults to zero but can be set to a duration as
documented . If no explicit unit is provided, the value is
interpreted as seconds in MaxScale 2.4. In subsequent versions a value without a
unit may be rejected. Note that since the granularity of the parameter is
seconds, a value specified in milliseconds will be rejected, even if the
duration is longer than a second.
A DCB placed in the persistent pool for a server will only be reused if the elapsed time since it joined the pool is less than the given value. Otherwise, the DCB will be discarded and the connection closed.
max_routing_connections
Type: number
Mandatory: No
Dynamic: Yes
Default: 0
Maximum number of routing connections to this server. Connections held in a pool also count towards this maximum. Does not limit monitor connections or user account fetching. A value of 0 (default) means no limit.
Since every client session can generate a connection to a server, the server may run out of memory when the number of clients is high enough. This setting limits server memory use caused by MaxScale. The effect depends on whether the service setting idle_session_pool_time, i.e. connection sharing, is enabled or not.
If connection sharing is not on, max_routing_connections simply sets a limit. Any sessions attempting to exceed this limit will fail to connect to the backend. The client can still connect to MaxScale, but queries will fail.
If connection sharing is on, sessions exceeding the limit will be put on hold until a connection is available. Such sessions will appear unresponsive, as queries will hang, possibly for a long time. The timeout is controlled by multiplex_timeout.
proxy_protocol
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
If proxy_protocol is enabled, MaxScale will send a
header when connecting client sessions to the server. The header contains the
original client IP address and port, as seen by MaxScale. The server will then
read the header and perform authentication as if the connection originated from
this address instead of MaxScale's IP address. With this feature, the user
accounts on the backend server can be simplified to only contain the actual
client hosts and not the MaxScale host.
NOTE: If you use a cloud load balancer like AWS ELB that supports the proxy
protocol in front of a MaxScale, you need to configure proxy_protocol_networks in MaxScale. This also needs
to be done whenever one MaxScale may connect to another MaxScale and the
connecting MaxScale has proxy_protocol enabled.
The PROXY protocol is supported by MariaDB 10.3 and later, which this feature has been tested with. To use it, enable the PROXY protocol in MaxScale for every compatible server and configure the MariaDB servers themselves to accept the protocol headers from MaxScale's IP address. On the server side, the protocol should be enabled only for trusted IPs, as it allows the sender to spoof the connection origin. If a proxy header is sent to a server not expecting it, the connection will fail. Usually PROXY protocol should be enabled for every server in a cluster, as they typically have similar grants.
Other SQL-servers may support PROXY protocol as well, but the implementation may
be highly restricting. Strict adherence to the protocol requires that the
backend server does not allow mixing of un-proxied and proxied connections from
a given IP. MaxScale requires normal connections to backends for monitoring and
authentication data queries, which would be blocked. To bypass this restriction,
the server monitor needs to be disabled and the service listener needs to be
configured to disregard authentication errors (skip_authentication=true).
Server states also need to be set manually in MaxCtrl. These steps are not
required for MariaDB 10.3, since its implementation is more flexible and allows
both PROXY-headered and headerless connections from a proxy-enabled IP.
disk_space_threshold
Type: Custom
Mandatory: No
Dynamic: No
Default: None
This parameter specifies how full a disk may be, before MaxScale should start
logging warnings or take other actions (e.g. perform a switchover). This
functionality will only work with MariaDB server versions 10.1.32, 10.2.14 and
10.3.6 onwards, if the DISKS information schema plugin has been installed.
NOTE: Since MariaDB 10.4.7, MariaDB 10.3.17 and MariaDB 10.2.26, the
information will be available only if the monitor user has the FILE
privilege.
A limit is specified as a path followed by a colon and a percentage specifying how full the corresponding disk may be, before action is taken. E.g. an entry like /data:80
specifies that the disk that has been mounted on /data may be used until 80%
of the total space has been consumed. Multiple entries can be specified by
separating them by a comma. If the path is specified using *, then the limit
applies to all disks. However, the value of * is only applied if there is not
an exact match.
Note that if a particular disk has been mounted on several paths, only one path needs to be specified. If several are specified, then the one with the smallest percentage will be applied.
Examples:
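The entries below are illustrative; the paths and percentages match the description that follows.
  disk_space_threshold=/data:80
  disk_space_threshold=/data1:80,/data2:60,*:90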
The last line means that the disk mounted at /data1 may be used up to
80%, the disk mounted at /data2 may be used up to 60% and all other disks
mounted at any paths may be used up until 90% of maximum capacity, before
MaxScale starts to warn or take action.
Note that the path to be used is one of the paths returned by querying the DISKS information schema plugin mentioned above, for example:
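  SELECT * FROM information_schema.DISKS;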
There is no default value, but this parameter must be explicitly specified if the disk space situation should be monitored.
rank
Type: enum
Mandatory: No
Dynamic: Yes
Values: primary, secondary
This parameter controls the order in which servers are used. Valid values for
this parameter are primary and secondary. The default value is primary.
This behavior depends on the router implementation but the general rule of thumb is that primary servers will be used before secondary servers.
Readconnroute will always use primary servers before secondary servers as long as they match the configured server type.
Readwritesplit will pick servers that have the same rank as the current primary. Read the for a detailed description of the behavior.
The following example server configuration demonstrates how rank can be used
to exclude DR-site servers from routing.
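The following sketch uses placeholder addresses; only the rank values matter here.
  [main-site-primary]
  type=server
  address=192.168.0.11
  rank=primary

  [main-site-replica]
  type=server
  address=192.168.0.12
  rank=primary

  [DR-site-primary]
  type=server
  address=192.168.50.11
  rank=secondary

  [DR-site-replica]
  type=server
  address=192.168.50.12
  rank=secondary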
The main-site-primary and main-site-replica servers will be used as long as
they are available. When they are no longer available, the DR-site-primary and DR-site-replica will be used.
priority
Type: number
Mandatory: No
Dynamic: Yes
Default: 0
Server priority. Currently only used by galeramon to choose the order in which nodes are selected as the current primary server. Refer to the section of the galeramon documentation for more information on how to use it.
Starting with MaxScale 2.5.21, this parameter also accepts negative values. In older versions, the parameter only accepted non-negative values.
replication_custom_options
Type: string
Default: None
Dynamic: Yes
Server-specific custom string added to "CHANGE MASTER TO"-commands sent by
MariaDB Monitor. Overrides replication_custom_options setting set in
the monitor. This setting affects the server where the command is run, not
the source of the replication. That is, if monitor sends a "CHANGE MASTER TO"-
command to server A telling it to replicate from server B, the setting value
from MaxScale configuration for server A would be used.
See for more information.
Monitor sections are used to define the monitoring module that watches a set of servers. Each server can only be monitored by one monitor.
Common monitor parameters are described in the common monitor documentation.
A listener defines a port MaxScale listens on for incoming connections. Accepted connections are linked with a MaxScale service. Multiple listeners can feed the same service. Mandatory parameters are type and service. protocol is optional and defaults to the MariaDB protocol. address is optional; it limits connections to a certain network interface only. socket is also optional and is used for Unix socket connections.
The network socket where the listener listens may have a backlog of connections. The size of this backlog is controlled by the net.ipv4.tcp_max_syn_backlog and net.core.somaxconn kernel parameters.
Increasing the size of the backlog by modifying the kernel parameters helps with sudden connection spikes and rejected connections. For more information see .
service
Type: service
Mandatory: Yes
Dynamic: No
The service to which the listener is associated. This is the name of a service that is defined elsewhere in the configuration file.
protocol
Type: protocol
Mandatory: No
Dynamic: No
Default: mariadb
The name of the protocol module used for communication between the client and MaxScale. The same protocol is also used for backend communication.
Usually this does not need to be defined as the default protocol is the MariaDB network protocol that is used by SQL connections.
For NoSQL client connections, the protocol must be set to protocol=nosqlprotocol. For more details on how to configure the NoSQL
protocol, refer to the module
documentation.
address
Type: string
Mandatory: No
Dynamic: No
Default: "::"
This sets the address the listening socket is bound to. The address may be specified as an IP address in 'dot notation' or as a hostname. If left undefined the listener will bind to all network interfaces.
port
Type: number
Mandatory: Yes, if socket is not provided.
Dynamic: No
Default: 0
The port the listener listens on. If left undefined a default port for the protocol is used.
socket
Type: string
Mandatory: Yes, if port is not provided.
Dynamic: No
Default: ""
If defined, the listener uses Unix domain sockets to listen for incoming connections. The parameter value is the name of the socket to use.
If you want to use both network ports and UNIX domain sockets with a service, define two separate listeners that connect to the same service.
authenticator
Type: string
Mandatory: No
Dynamic: No
Default: ""
The authenticator module to use. Each protocol module defines a default
authentication module, which is used if the setting is left undefined.
MariaDB and PostgreSQL protocols support multiple authenticators and they can
be used simultaneously by giving a comma-separated list, e.g. authenticator=PAMAuth,mariadbauth,gssapiauth.
authenticator_options
Type: string
Mandatory: No
Dynamic: No
Default: ""
This defines additional options for authentication. As of MaxScale 2.5.0, only MariaDBClient and its authenticators support additional options. The value of this parameter should be a comma-separated list of key-value pairs. See authenticator specific documentation for more details.
sql_mode
Type: enum
Mandatory: No
Dynamic: Yes
Values: default, oracle
Specify the SQL mode for the listener, similarly to the global sql_mode setting.
If both are used, this setting overrides the global setting for this listener.
proxy_protocol_networks
Define an IP-address or a subnetwork which may send a proxy protocol header when connecting. The proxy header contains the original client IP-address and port, and MaxScale will use that information in its internal bookkeeping. This means the client is authenticated as if it was connecting from the host in the proxy header. If proxy protocol is also enabled in MaxScale server settings, MaxScale will relay the original client address and port to the server. See for more information.
This setting may be useful if a compatible load balancer is relaying client connections to MaxScale. If proxy headers are used, both MaxScale and the backends will know where the client originally came from.
The proxy_protocol_networks setting works similarly to the equivalent setting
in MariaDB Server.
The value can be a single IP or subnetwork, or a comma-separated list of them.
Subnetworks are given in CIDR-format, e.g. "192.168.0.0/16". "*" is a valid
value, allowing anyone to send the header. "localhost" allows proxy headers
from domain socket connections.
Only trusted IPs should be added to the list, as the proxy header may affect authentication results.
Similar to MariaDB Server, MaxScale will also accept normal connections even
if proxy_protocol_networks is configured for the listener.
connection_init_sql_file
Type: path
Mandatory: No
Dynamic: Yes
Default: ""
Path to a text file with sql queries. Any sessions created from the listener will send the contents of the file to backends after authentication. Each non-empty line in the file is interpreted as a query. Each query must succeed for the backend connection to be usable for client queries. The queries should not return any data.
Example query file:
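For illustration, such a file could contain a few SET statements, one per line; the values here are arbitrary.
  SET SESSION sql_mode='STRICT_ALL_TABLES';
  SET SESSION max_statement_time=30;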
user_mapping_file
Type: path
Mandatory: No
Dynamic: Yes
Default: ""
Path to a json-text file with user and group mapping, as well as server credentials. Only affects MariaDB-protocol based listeners. Default value is empty, which disables the feature.
This setting should not be used together with the PAM authenticator
settings pam_backend_mapping or pam_mapped_pw_file, as these may overwrite
the mapped credentials. It is most powerful when combined with the service setting user_accounts_file, as then MaxScale can accept users that do not exist on
backends and map them to backend users.
This file functions very similar to . Both user-to-user and group-to-user mappings can be defined. Also, the password and authentication plugin for the mapped users can be added. The file is only read during listener creation (typically MaxScale start) or when a listener is modified during runtime. When a client logs into MaxScale, their username is searched from the mapping data. If the name matches either a name mapping or a Linux group mapping, the username is replaced by the mapped name. The mapped name is then used when logging into backends. If the file also contains credentials for the mapped user, then those are used. Otherwise, MaxScale tries to log in with an empty password and default MariaDB authentication.
Three arrays are read from the file: user_map, group_map and server_credentials, none of which are mandatory.
Each array element in the user_map-array must define the following fields:
original_user: String. Incoming client username.
mapped_user: String. Username the client is mapped to.
Each array element in the group_map-array must define the following fields:
original_group: String. Incoming client Linux group.
mapped_user: String. Username the client is mapped to.
Each array element in the server_credentials-array can define the following fields:
mapped_user: String. The mapped username this password is for.
password: String. Backend server password. Can be encrypted with maxpasswd.
plugin: String, optional. Authentication plugin to use. Must be enabled on the listener. Defaults to empty, which results in standard MariaDB authentication.
When a client successfully logs into MaxScale, MaxScale first searches for name-based mapping. The incoming client does not need to be a Linux user for name-based mapping to take place. If the name is not found, MaxScale checks if the client is a Linux user with a group membership matching an element in the group mapping array. If the client is a member of more than 100 groups, this check may fail.
If a mapping is found, MaxScale searches the credentials array for a matching username, and uses the password and plugin listed. The plugin need not be the same as the one the original user used. Currently, "mysql_native_password" and "pam" are supported as mapped plugins.
An example mapping file is below.
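The sketch below uses placeholder names, groups and passwords.
{
  "user_map": [
    { "original_user": "bob", "mapped_user": "janet" }
  ],
  "group_map": [
    { "original_group": "accounting", "mapped_user": "janet" }
  ],
  "server_credentials": [
    {
      "mapped_user": "janet",
      "password": "secret_pw",
      "plugin": "mysql_native_password"
    }
  ]
}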
connection_metadata
Type: stringlist
Default: character_set_client=auto,character_set_connection=auto,character_set_results=auto,max_allowed_packet=auto,system_time_zone=auto,time_zone=auto,tx_isolation=auto,maxscale=auto
Dynamic: Yes
Mandatory: No
Metadata that's sent to all connecting clients. The value must be a comma-separated list of key-value arguments. The keys or values cannot contain commas in them.
Any values that are set to auto will be substituted with the value of the
corresponding MariaDB system variable. Any system variables that do not
exist or have empty or null values will not be sent to the client. The system
variable values are read from the first Master server that's reachable from
the listener's service. If no Master server is reachable, the value is read
from the first Slave server and if no Slave servers are available, from the
first Running server. If no running servers are available, the system
variables are not sent.
The exception to this is the maxscale=auto value where the auto will be
replaced with the MaxScale version string. This is useful for detecting whether
a client is connected to MaxScale. To make MaxScale completely transparent to
the client application, the maxscale=auto value can be removed from connection_metadata.
MaxScale will always send a metadata value for threads_connected that contains
the current number of connections to the service that the listener points to and
for connection_id that contains the 64-bit connection ID value. The values can
be overridden by defining them with some value, for example, connection_metadata=threads_connected=0,connection_id=0.
The metadata is implemented using session state tracking information that is embedded in the OK packets that are generated by MaxScale. The values are encoded as system variable changes. This information can be accessed by all connectors that support reading the session state information. One example of this is the MariaDB Connector/C, which implements it with the mysql_session_track_get_first and mysql_session_track_get_next functions.
The following example demonstrates the use of connection_metadata:
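For instance, a listener could contain:
  connection_metadata=redirect_url=localhost:3306,service_name=my-service,max_allowed_packet=auto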
The configuration has three variables, redirect_url, service_name and max_allowed_packet, that have the values localhost:3306, my-service and auto. The auto value is special and gets replaced with the max_allowed_packet value from the MariaDB server. This means that the final
metadata that is sent to the client would be redirect_url=localhost:3306, service_name=my-service and max_allowed_packet=16777216.
If the connection_metadata variable list contains the tx_isolation variable
and the backend MariaDB server from which the variable is retrieved is MariaDB
11 or newer, the value is renamed to transaction_isolation. The tx_isolation
parameter was deprecated in favor of transaction_isolation in MariaDB 11
(MDEV-21921).
An include section defines common parameters used in other configuration sections. Consider the following configuration.
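The sketch below uses placeholder server names and credentials.
  [Monitor1]
  type=monitor
  module=mariadbmon
  servers=server1,server2
  user=monitor_user
  password=monitor_pw
  monitor_interval=2s

  [Monitor2]
  type=monitor
  module=mariadbmon
  servers=server3,server4
  user=monitor_user
  password=monitor_pw
  monitor_interval=2s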
The two monitor sections are identical except for the servers setting.
If they otherwise should remain identical, a change must be made in two
places. With an include section the situation can be simplified.
With an include section, all common settings can be defined in one
place, and then included in any number of other sections using the @include parameter.
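Assuming include sections are declared with type=include, the example above could be rewritten roughly as follows.
  [MonitorCommon]
  type=include
  user=monitor_user
  password=monitor_pw
  monitor_interval=2s

  [Monitor1]
  type=monitor
  module=mariadbmon
  @include=MonitorCommon
  servers=server1,server2

  [Monitor2]
  type=monitor
  module=mariadbmon
  @include=MonitorCommon
  servers=server3,server4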
The @include parameter takes a list of section names, so the settings
can be distributed across several include sections.
It is permissible to specify, in the including section, parameters
that have already been specified in the included section; they
will take precedence. For instance, if Monitor2 in the example
above should have a longer backend connect timeout it can be
specified as follows.
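For example, keeping the placeholder names from above and assuming the parameter in question is backend_connect_timeout:
  [Monitor2]
  type=monitor
  module=mariadbmon
  @include=MonitorCommon
  servers=server3,server4
  backend_connect_timeout=20s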
Note that an included section must be an include section and
that an include section cannot include another include
section. For instance, both of the following sections would cause
an error at startup.
Note also that if an included parameter is changed using maxctrl,
it will be changed only on the actual object the change is applied
on, not on the include section where the parameter is originally
specified.
Protocol modules in MaxScale define what kind of clients can connect to a listener and what type of backend servers are supported. Protocol is defined in listener settings, and affects both the listener and any services the listener is linked to.
MariaDB or MariaDBClient
Implements the MariaDB protocol. The listener will accept MariaDB/MySQL connections from clients and route the client queries through a linked MaxScale service to backend servers. The backends used by the service should be MariaDB servers or compatible.
CDC
See for more information.
Postgresql or Postgresprotocol
Implements the PostgreSQL protocol. The listener will accept PostgreSQL connections from clients and route the client queries through a linked MaxScale service to backend servers. The backends used by the service should be PostgreSQL servers or compatible.
nosqlprotocol
Accepts MongoDB® connections, yet stores and fetches results to/from MariaDB servers. See for more information.
This section describes configuration parameters for both servers and listeners that control the TLS/SSL encryption method and the various certificate files involved in it.
To enable TLS/SSL for a listener, you must set the ssl parameter to true and provide at least the ssl_cert and ssl_key parameters.
To enable TLS/SSL for a server, you must set the ssl parameter to true. If the backend database server has certificate verification
enabled, the ssl_cert and ssl_key parameters must also be defined.
Custom CA certificates can be defined with the ssl_ca parameter. If ssl_verify_peer_certificate is enabled yet ssl_ca is not set, MaxScale
will load CA certificates from the system default location.
After this, MaxScale connections between the server and/or the client will be encrypted. Note that the database must also be configured to use TLS/SSL connections if backend connection encryption is used.
Note: MaxScale does not allow mixed use of TLS/SSL and normal connections on the same port.
If TLS encryption is enabled for a listener, any unencrypted connections to it will be rejected. MaxScale does this to improve security by preventing accidental creation of unencrypted connections.
The separation of secure and insecure connections differs from the MariaDB Server which allows both secure and insecure connections on the same port. As MaxScale is the gateway through which all connections go, MaxScale enforces a stricter security policy than MariaDB Server. Multiple listeners with different configurations can be created to enable different encryption schemes.
TLS encryption must be enabled for listeners when they are created. For servers, the TLS can be enabled after creation but it cannot be disabled or altered.
Starting with MaxScale 2.5.20, if the TLS certificate given to MaxScale has the X509v3 extended key usage information, MaxScale will check it and refuse to use a certificate with the wrong usage. This means that a certificate with only clientAuth can only be used with servers and a certificate with only serverAuth can only be used with listeners. In order to use the same certificate for both listeners and servers, it must have both the clientAuth and serverAuth usages.
ssl
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
This enables SSL connections when set to true. The legacy values required and disabled were removed in MaxScale 6.0.
If enabled, the certificate files mentioned above must also be supplied. MaxScale connections will then be encrypted with TLS/SSL.
Starting with MaxScale 21.06.18, 22.08.15, 23.02.12, 23.08.8, 24.02.4 and
24.08.1, if ssl is disabled for a listener, MariaDB user accounts that require
ssl cannot log in through that listener. Any user account with a non-empty ssl_type field in the mysql.user table is blocked. This includes users created
with REQUIRE SSL or REQUIRE X509.
ssl_key
Type: path
Mandatory: No
Dynamic: Yes
Default: ""
A string giving a file path that identifies an existing readable file. The file must be the SSL client private key MaxScale should use. This is a required parameter for listeners but an optional parameter for servers.
ssl_cert
Type: path
Mandatory: No
Dynamic: Yes
Default: ""
A string giving a file path that identifies an existing readable file. The file
must be the SSL client certificate MaxScale should use with the server. The
certificate must match the key defined in ssl_key. This is a required
parameter for listeners but an optional parameter for servers.
ssl_ca_cert
Deprecated since MariaDB MaxScale 22.08. See ssl_ca.
ssl_ca
Type: path
Mandatory: No
Dynamic: Yes
Default: ""
A string giving a file path that identifies an existing readable file. The file must be a Certificate Authority (CA) certificate. It will be used to verify that the peer certificate (sent by either client or a MariaDB Server) is valid. The CA certificate can consist of a certificate chain.
NOTE Up until MariaDB MaxScale 6, the parameter was called ssl_ca_cert,
which is still accepted as an alias for ssl_ca.
ssl_version
Type: enum
Mandatory: No
Dynamic: No
Values: MAX, TLSv10, TLSv11, TLSv12, TLSv13, TLSv1.0, TLSv1.1, TLSv1.2, TLSv1.3
Default: MAX
This parameter controls the allowed TLS version. Accepted values are:
TLSv10
TLSv11
TLSv12
TLSv13
MaxScale versions 6.4.16, 22.08.13, 23.02.10, 23.08.6, 24.02.2 and all newer releases accept also the following alias values:
TLSv1.0
TLSv1.1
TLSv1.2
TLSv1.3
The default setting (MAX) allows all supported versions. MaxScale supports
TLSv1.0, TLSv1.1, TLSv1.2 and TLSv1.3 depending on the OpenSSL library version.
TLSv1.0 and TLSv1.1 are considered deprecated and should not be used, so setting ssl_version=TLSv1.2,TLSv1.3 or ssl_version=TLSv1.3 is recommended.
In MaxScale versions 6.4.13, 22.08.11, 23.02.7, 23.08.3 and earlier, this
setting defined the only allowed TLS version, e.g. ssl_version=TLSv12 would
only enable TLSv12. The interpretation changed in MaxScale versions 6.4.14,
22.08.12, 23.02.8, 23.08.4 to enable the user to disable old versions while
allowing multiple recent TLS versions. In these versions, ssl_version=TLSv1.2
enabled both TLSv1.2 and TLSv1.3.
The interpretation changed again in MaxScale versions 6.4.16, 22.08.13,
23.02.10, 23.08.6, 24.02.2. In these versions the value of ssl_version is an
enumeration of accepted TLS protocol versions. This means that ssl_version=TLSv1.2 again only allows TLSv1.2. To retain the behavior
from the previous releases where the newer versions were automatically enabled,
the protocol versions must be explicitly listed, for example ssl_version=TLSv1.2,TLSv1.3. The change was done to make ssl_version behave identically to how the MariaDB tls_version
parameter works.
ssl_cipher
Type: string
Mandatory: No
Dynamic: Yes
Default: ""
Set the list of TLS ciphers. By default, no explicit ciphers are defined and the system defaults are used. Note that this parameter does not modify TLSv1.3 ciphers.
ssl_cert_verify_depth
Type: number
Mandatory: No
Dynamic: Yes
Default: 9
The maximum length of the certificate authority chain that will be accepted. The default value is 9, same as the OpenSSL default. The configured value must be larger than 0.
ssl_verify_peer_certificate
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
Peer certificate verification. This functionality is disabled by default. In versions prior to 2.3.17 the feature was enabled by default.
When this feature is enabled, the peer (client or MariaDB Server) must send a
certificate. The certificate sent by the peer is verified against the
configured Certificate Authority to ensure the peer is who they claim to be.
For listeners, this behaves as if REQUIRE X509 was defined for all users.
ssl_verify_peer_host
Type: boolean
Mandatory: No
Dynamic: Yes
Default: false
Peer host verification.
When this feature is enabled, the peer (client or MariaDB Server) hostname or IP is verified against the certificate sent by the peer. If the IP address or the hostname does not match the one in the certificate, the connection is closed.
If the peer does not provide a certificate, host verification is skipped.
To require peer certificates, also enable ssl_verify_peer_certificate.
For servers, the combination of ssl_verify_peer_certificate and ssl_verify_peer_host
behaves like the --ssl-verify-server-cert command line option for the mysql client.
ssl_crl
Type: path
Mandatory: No
Dynamic: Yes
Default: ""
A string giving a file path that identifies an existing readable file. The file must be a Certificate Revocation List in the PEM format that defines the revoked certificates. This parameter is only accepted by listeners.
Example SSL enabled server configuration
This example configuration requires all connections to this server to be encrypted with SSL. The paths to the certificate files and the Certificate Authority file are also provided.
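A sketch with placeholder paths:
  [MyServer]
  type=server
  address=198.51.100.10
  port=3306
  ssl=true
  ssl_cert=/path/to/client-cert.pem
  ssl_key=/path/to/client-key.pem
  ssl_ca=/path/to/ca-cert.pem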
Example SSL enabled listener configuration
This example configuration requires all connections to be encrypted with SSL. The paths to the certificate files and the Certificate Authority file are also provided.
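A sketch with placeholder paths and a placeholder service name:
  [MyListener]
  type=listener
  service=MyService
  port=4006
  ssl=true
  ssl_cert=/path/to/server-cert.pem
  ssl_key=/path/to/server-key.pem
  ssl_ca=/path/to/ca-cert.pem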
The main task of MariaDB MaxScale is to accept database connections from client applications and route the connections or the statements sent over those connections to the various services supported by MariaDB MaxScale.
Currently a number of routing modules are available; these are designed for a range of different needs.
Connection based load balancing: readconnroute
Read/Write aware statement based router: readwritesplit
Simple sharding on database level: schemarouter
Binary log server: binlogrouter
Monitor modules are used by MariaDB MaxScale to internally monitor the state of the backend databases in order to set the server flags for each of those servers. The router modules then use these flags to determine if the particular server is a suitable destination for routing connections for particular query classifications. The monitors are run within separate threads of MariaDB MaxScale and do not affect MariaDB MaxScale's routing performance.
The use of monitors in MaxScale is not absolutely mandatory: it is possible to run MariaDB MaxScale without a monitor module. In this case an external monitoring system must set the status of each server via MaxCtrl or the REST API. Only do this if you know what you are doing.
Filters provide a means to manipulate or process requests as they pass through MariaDB MaxScale between the client side protocol and the query router. A full explanation of each filter's functionality can be found in its documentation.
The document shows how you can add a filter to a service and combine multiple filters in one service.
Passwords stored in the maxscale.cnf file may optionally be encrypted for added security.
This is done by creation of an encryption key on installation of MariaDB MaxScale.
Encryption keys may be created manually by executing the maxkeys utility with the argument
of the filename to store the key. The default location MariaDB MaxScale stores
the keys is /var/lib/maxscale. The passwords are encrypted using 256-bit AES CBC encryption.
Changing the encryption key for MariaDB MaxScale will invalidate any currently encrypted keys stored in the maxscale.cnf file.
Note: The password encryption format changed in MaxScale 2.5. All encrypted passwords created with MaxScale 2.4 or older need to be re-encrypted.
Encrypted passwords are created by executing the maxpasswd command with the location of the .secrets file and the password you require to encrypt as an argument.
The output of the maxpasswd command is a hexadecimal string that should be inserted into the maxscale.cnf file in place of the ordinary plain text password. MariaDB MaxScale will recognize it as an encrypted password and automatically decrypt it before sending it to the database server.
Read the following documents for different methods of altering the MaxScale configuration at runtime.
MaxCtrl
All changes to the configuration done via MaxCtrl are persisted as individual
configuration files in /var/lib/maxscale/maxscale.cnf.d/. The content of these
files will override any configurations found in the main configuration file or
any auxiliary configuration files.
Refer to the section for more details on how this mechanism works and how to disable it.
The configuration synchronization mechanism is intended for synchronizing configuration changes done on one MaxScale to all other MaxScales. This is done by propagating the changes via the database cluster used by MaxScale.
When configuring configuration synchronization for the first time, the same
static configuration files should be used on all MaxScale instances that use the
same cluster: the value of config_sync_cluster must be the same on all
MaxScale instances and the cluster (i.e. the monitor) pointed to by it and its
servers must be the same in every configuration.
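For instance, if every MaxScale instance uses a monitor named MariaDB-Monitor (a placeholder name) to monitor the cluster, each static configuration would contain:
  [maxscale]
  config_sync_cluster=MariaDB-Monitor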
Whenever the MaxScale configuration is modified at runtime, the latest
configuration is stored in the database cluster in the mysql.maxscale_config
table. The table is created when the first modification to the configuration is
done. A local copy of the configuration is stored in the data directory to allow
MaxScale to function even if a connection to the cluster cannot be made. By
default this file is stored at /var/lib/maxscale/maxscale-config.json.
Whenever MaxScale starts up, it checks if a local version of this configuration
exists. If it does and it is a valid cached configuration, the static
configuration file as well as any other generated configuration files are
ignored. The exception is the [maxscale] section of the main static
configuration file which is always read.
Each configuration has a version number with the initial configuration being version 0. Each time the configuration is modified, the version number is incremented. This version number is used to detect when MaxScale needs to update its configuration.
When doing a configuration change on the local MaxScale, if the configuration change completes on MaxScale but fails to be committed to the database, MaxScale will attempt to revert the local configuration change. If this attempt fails, MaxScale will discard the cached configuration and abort the process.
When synchronizing with the cluster, if MaxScale fails to apply a configuration retrieved from the cluster, it attempts to revert the configuration to the previous version. If successful, the failed configuration update is ignored. If the configuration update that fails cannot be reverted, the MaxScale configuration will be in an indeterminate state. When this happens, MaxScale will discard the cached configuration and abort the process.
When loading a locally cached configuration during startup, if any errors are found in the cached configuration, it is discarded and the MaxScale process will attempt to restart by exiting with code 75 from the main process. If MaxScale is being used as a SystemD service, this will automatically trigger a restart of MaxScale and no further actions are needed.
The most common reason for a failed configuration update is missing files. For example, if a configuration update adds encrypted connections to a server and the TLS certificates it uses were not copied over to all MaxScale nodes before the change was done, the operation will fail on all nodes that do not have these files.
If the synchronization of the configuration change fails at the step when the database transaction is being committed, the new configuration can be momentarily visible to the local MaxScale. This means the changes are not guaranteed to be atomic on the local MaxScale but are atomic from the cluster's point of view.
Starting with MaxScale 6.4.9, any passwords that are transmitted by the
configuration synchronization are encrypted if password encryption has been
enabled in MaxScale. This means that all MaxScale nodes in the same
configuration cluster must be configured to use password encryption and they
need to all use the same encryption keys that were created with maxkeys.
The output of maxctrl show maxscale contains the Config Sync field with
information about the current configuration state of the local MaxScale as well
as the state of any other nodes using this cluster.
The version field is the logical configuration version and the origin is the
node that originates the latest configuration change. The checksum field is
the checksum of the logical configuration and can be used to compare whether two
MaxScale instances are in the same configuration state. The nodes field
contains the status of each MaxScale instance mapped to the hostname of the
server. This field is updated whenever MaxScale reads the configuration from the
cluster and can thus be used to detect which MaxScales have updated their
configuration.
The mysql.maxscale_config table where the configuration changes are stored
must not be modified manually. The only case when the table should be modified
is when resetting the configuration synchronization.
To reset the configuration synchronization:
Stop all MaxScale instances
Remove the cached configuration file stored at/var/lib/maxscale/maxscale-config.json on all MaxScale instances
Drop the mysql.maxscale_config table
Start all MaxScale instances
To disable configuration synchronization, remove config_sync_cluster from the
configuration file or set it to an empty string: config_sync_cluster="". This
can be done at runtime with MaxCtrl by passing an empty string to config_sync_cluster:
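  maxctrl alter maxscale config_sync_cluster ""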
If MaxScale cannot create a connection to the database cluster, configuration
changes are not possible until communication with the database is possible. To
override this behavior and force the changes to be done, use the --skip-sync
option for maxctrl or the sync=false HTTP parameter for the REST API. Any
updates done with --skip-sync will be overwritten by changes coming from the
cluster.
Only the MaxScale configuration is synchronized. Any external files (TLS certificates, configuration files for modules or data generated by MaxScale) are not synchronized. For example, the rule files for the cache filter must be synchronized separately if the filter itself is modified.
Starting with MaxScale 22.08, the Maintenance and Draining states of servers
and modifications to the administrative users will be synchronized. In older
versions servers had to be put into maintenance mode and users had to be
modified separately on each MaxScale.
External files are not synchronized.
The --export-config
option will not export the cluster configuration and instead exports only the
static configuration files. To start a new MaxScale based off of a clustered
configuration, copy the static configuration files as well as the JSON
configuration in /var/lib/maxscale/maxscale-config.json to the new MaxScale
instance.
The combination of configuration files can be done either manually
(e.g. rsync) or with the maxscale --export-config=FILE command line
option. See maxscale --help for more information about how to use the --export-config flag.
For example, to export the current runtime configuration, run the following command.
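  maxscale --export-config=/tmp/maxscale.cnf.combined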
This will create the /tmp/maxscale.cnf.combined file and write the current
configuration into it. This allows new MaxScale instances to be easily set
up without requiring copying of all runtime configuration files. The user
executing the command must be able to read all MaxScale configuration files as
well as create and write the provided filename.
The encryption key managers are how MaxScale retrieves symmetric encryption keys
from a key management system. Some parts of MaxScale require the key_manager
to be configured in order to work. The key manager that is used is selected with
the key_manager parameter and the key manager itself is
configured by placing the parameters in the [maxscale] section.
The encryption key managers can be enabled at runtime using maxctrl alter maxscale but cannot be disabled once enabled. To disable the encryption key
management, stop MaxScale, remove any persisted configuration files and remove key_manager as well as any key manager options from the static configuration
files.
The encryption keys are stored in a text file stored on a local filesystem.
The file uses the same format as the MariaDB server file_key_management plugin: a file consisting of an encryption key ID number and the hex-encoded encryption key separated by a semicolon. Read the plugin documentation for more details on how to create the file.
For example, to configure encryption for the nosqlprotocol shared credentials
using the file-based encryption key:
Create the key file with (echo -n '1;' ; openssl rand -hex 32) | cat > /var/lib/maxscale/encryption.key
Give MaxScale read permissions on it with chown maxscale:maxscale /var/lib/maxscale/encryption.key
Configure MaxScale with the [maxscale] settings shown in the sketch after this list
Start MaxScale
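A minimal [maxscale] section for the configuration step, assuming the key file created above, could look like this:
  [maxscale]
  key_manager=file
  file.keyfile=/var/lib/maxscale/encryption.key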
Key versioning is not supported
file.keyfile
Type: path
Mandatory: Yes
Dynamic: Yes
Path to the file that contains the encryption keys. The user MaxScale runs as
(almost always maxscale) must be able to read this file. Encryption keys are
read from disk only during startup or when any global MaxScale parameter is
modified at runtime.
Encryption keys are read from a KMIP server.
The KMIP key manager has been verified to work with the PyKMIP server.
Key versioning is not supported
Encryption keys are not cached locally: whenever MaxScale needs an encryption key, it retrieves it from the KMIP server.
kmip.host
Type: string
Mandatory: Yes
Dynamic: Yes
The host where the KMIP server is.
kmip.port
Type: integer
Mandatory: Yes
Dynamic: Yes
The port on which the KMIP server listens.
kmip.cert
Type: path
Mandatory: Yes
Dynamic: Yes
The client public certificate used when connecting to the KMIP server.
kmip.key
Type: path
Mandatory: Yes
Dynamic: Yes
The client private key used when connecting to the KMIP server.
kmip.ca
Type: path
Default: ""
Dynamic: Yes
The CA certificate to use. By default the system default certificates are used.
Encryption keys are read from a local or remote Vault server using the key-value secrets engine included in Vault. This key manager supports versioned keys. Only version 2 key-value stores are supported.
The encryption keys use the same format as the MariaDB file_key_management encryption keys:
The key-value secret for each encryption key ID must contain the field data
which must contain a hex-encoded string that is either 32, 48 or 64 characters
long.
An easy way to generate a correct encryption key is to use the vault and openssl command line clients. The following command creates a 256-bit
encryption key using openssl and stores it using the key ID 1:
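  vault kv put secret/1 data=$(openssl rand -hex 32)
This assumes the default secret mount; adjust the path if vault.mount has been changed.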
Encryption keys are not cached locally: whenever MaxScale needs an encryption key, it retrieves it from the Vault server.
vault.token
Type: password
Mandatory: Yes
Dynamic: Yes
The authentication token used to connect to the Vault server. This can be
encrypted using maxpasswd, similar to how other passwords are encrypted.
vault.host
Type: string
Default: localhost
Dynamic: Yes
The host where the Vault server is.
vault.port
Type: integer
Default: 8200
Dynamic: Yes
The port on which the Vault server listens.
vault.ca
Type: path
Default: ""
Dynamic: Yes
The CA certificate to use. By default the system default certificates are used.
vault.tls
Type:
Default: true
Dynamic: Yes
Whether to use encrypted connections (i.e. HTTPS instead of HTTP) when communicating with the Vault server.
vault.mount
Type: string
Default: secret
Dynamic: Yes
The Key-Value mount where the secret is stored. By default the secret mount is
used which is present by default in most Vault installations.
vault.timeout
Type:
Default: 30s
Dynamic: Yes
The connection and request timeout used with the Vault server.
For routing, MaxScale uses asynchronous I/O and a fixed number of threads (aka routing workers), whose number up until 23.02 was fixed at startup. From 23.02 onwards the number of threads can be altered at runtime, which is convenient, for instance, if MaxScale is running in a container whose properties are changed during the lifetime of the container.
A thread can be in three different states:
Active: The thread is routing client traffic and is listening for new connections.
Draining: The thread is routing client traffic but is not listening for new connections.
Dormant: The thread is not routing client traffic (all sessions have ended), and is not listening for new connections, and is waiting to be terminated.
All threads start as Active and may become Draining if the number of threads is reduced. A draining thread will eventually become Dormant, unless the number of threads is increased while the thread is still Draining.
Note that it is not possible to terminate a specific thread, but it is only possible to specify the number of threads that MaxScale should use, and that the threads will be terminated from the end. This has implications if the number of threads is reduced by more than 1, as a Dormant thread will not be terminated before it is the last thread.
In the following, MaxScale has been started with threads=4.
All threads are Active. If we now decrease the number of threads
we will see that the threads 2 and 3 are now Draining. The reason is that threads 2 and 3 still handle client sessions. If some client sessions now end, the situation may become like
That is, thread 2 is Dormant and thread 3 is Draining. All client sessions that were handled by thread 2 have ended and the thread is ready to be terminated. However, as thread 3 is still Draining, thread 2 will not be terminated but stay Dormant.
If the sessions handled by thread 3 end, then it will become Dormant at which point first thread 3 will be terminated and immediately after that thread 2.
If the situation is like
that is, the number of threads was 4 but has been reduced to 2, and while thread 2 has become drained it stays as Dormant since thread 3 is still Draining, it is possible to make thread 2 Active again by increasing the number of threads to 3.
Once the sessions of thread 3 end, we will have
MariaDB MaxScale is designed to be executed as a service, therefore all error
reports, including configuration errors, are written to the MariaDB MaxScale
error log file. By default, MariaDB MaxScale will log to a file in /var/log/maxscale and the system log.
The current limitations of MaxScale are listed in the document.
Tune query_classifier_cache_size to allow maximal use of the query
classifier cache. Increase the value and/or system memory until the set of
unique SQL patterns fits into memory. By default at most 15% of the system
memory is used for this cache. To detect if the SQL statements fit into
memory, monitor the QC cache evictions value in maxctrl show threads to
see how many evictions take place. If it keeps increasing, increase the size
of the query classifier cache. Using the query classifier cache with a CPU
bound workload gives a roughly 20% improvement in performance compared to when
it is turned off.
A faster CPU with more CPU cores is better. This is true for most applications
but especially for MaxScale as it is mostly limited by the speed of the
CPU. Using threads=auto is recommended (the default starting with MaxScale
6).
From 22.08.2 onwards, maxctrl show maxscale shows a System object with
information about the system MaxScale is running on. The fields are:
In addition there is an os object that contains what the Linux command uname displays.
threads
If threads has not been specified at all in the MaxScale configuration file,
or if its value is auto, then MaxScale will use as many routing threads as
there are physical cores on the machine. This is the right choice, if MaxScale
is running on a dedicated machine or in a container that has not been restricted
in any way.
However, if the number of cores available to MaxScale has been restricted or if MaxScale is running in a container whose CPU quota and period have been limited, this will lead to MaxScale using more routing threads than is appropriate in the environment where it is running.
If machine.cores_virtual is less than machine.cores_physical, then threads
should be specified explicitly in the MaxScale configuration file and its value
should be that of machine.cores_virtual rounded up to the nearest integer. If
that value is 1 it may be beneficial to check whether 2 gives better performance.
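For example, if maxctrl show maxscale reported machine.cores_virtual as 2.5, the value would be rounded up and set explicitly; the numbers here are illustrative only:

# Inspect the System object for machine.cores_virtual
maxctrl show maxscale

# maxscale.cnf: 2.5 virtual cores rounded up to 3
[maxscale]
threads=3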
query_classifier_cache_size
If query_classifier_cache_size has not been specified in the MaxScale
configuration file, then MaxScale will use at most 15% of the amount of physical
memory in the machine for the cache. This is a good starting point if MaxScale
is running on a dedicated machine or in a container that has not been restricted
in any way. Note that the amount specifies the maximum memory the cache is
allowed to use, not what is immediately allocated for the cache.
However, if the amount of memory available to MaxScale has been restricted, as may be the case when MaxScale is running in a container, the cache may grow beyond what is actually available, leading to a crash or to MaxScale being killed.
If the value of machine.memory_available is less than that of machine.memory_physical, then query_classifier_cache_size should be explicitly
set to 15% of machine.memory_available. The value can be larger, but it should not
be an unreasonably large share of machine.memory_available.
As can be seen, maxscale.threads is larger than machine.cores_virtual and thus threads=4 should be specified explicitly in the MaxScale configuration file.
maxscale.query_classifier_cache_size is the default 15% of machine.memory_physical,
but as machine.memory_available is just half of that, something like query_classifier_cache_size=3100000000 (~15% of machine.memory_available) should be
added to the configuration file.
For a list of common problems and their solutions, read the article in the MariaDB documentation.
If MaxScale is running as a systemd service, the systemd Watchdog will be
enabled by default. To configure it, change the WatchdogSec option in the
Service section of the maxscale systemd configuration file located in /lib/systemd/system/maxscale.service:
It is not recommended to use a watchdog timeout of less than 30 seconds. When enabled, MaxScale will check that all threads are running and notify systemd with a "keep-alive" ping.
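As a sketch, the timeout can also be changed with a systemd drop-in instead of editing the unit file directly; the 60-second value is only an example:

# Create a drop-in override for the maxscale unit
$ systemctl edit maxscale.service

# Add the following in the editor that opens:
[Service]
WatchdogSec=60s

# Reload systemd and restart MaxScale to apply the change
$ systemctl daemon-reload
$ systemctl restart maxscale.service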
Systemd reference:
This page is licensed: CC BY-SA / Gnu FDL
Maintenance
The server is under maintenance. Typically this status bit is turned on manually using maxctrl (see the example after this list), but it will also be turned on for a server that is, for some reason, blocking connections from MaxScale. When a server is in maintenance mode, no connections will be created to it and existing connections will be closed.
Slave of External Master
The server is a replica of a primary that is not being monitored.
Master Stickiness
The server is monitored by a galeramon with disable_master_failback=true. See for more information.
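The Maintenance status bit described above is typically set and cleared with maxctrl; a minimal sketch, assuming a server named server1 (the drain command similarly sets the Draining bit discussed later in this guide):

# Put server1 into maintenance mode; existing connections are closed
maxctrl set server server1 maintenance

# Take it out of maintenance again
maxctrl clear server server1 maintenance

# Alternatively, drain the server: existing connections stay, no new ones are created
maxctrl set server server1 drain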
admin_pam_readonly_service
admin_pam_readwrite_service
admin_readonly_hosts
admin_readwrite_hosts
admin_port
admin_secure_gui
admin_ssl_ca
admin_ssl_version
admin_jwt_algorithm
admin_jwt_key
admin_jwt_issuer
auto_tune
cachedir
connector_plugindir
core_file
datadir
debug
execdir
language
libdir
load_persisted_configs
persist_runtime_changes
local_address
log_augmentation
log_warn_super_user
logdir
module_configdir
persistdir
piddir
query_retries
secretsdir
sharedir
sql_mode
substitute_variables
threads_max
case: Turns on case-sensitive matching. This means that /SELECT/ will not
match select.
Dynamic: No
Default: default
Default: never
TLSv1.2, TLSv1.3, TLSv10, TLSv11, TLSv12, TLSv13. Default: MAX
MAX
HS512, RS256, RS384, RS512, PS256, PS384, PS512, ES256, ES384, ES512, ED25519, ED448. Default: auto
HS256, HS384 or HS512
HMAC with SHA-2 Functions. If admin_jwt_key is not defined, a random encryption key of the correct size is used.
RS256, RS384 or RS512
Digital Signature with RSASSA-PKCS1-v1_5. Requires at least a 2048-bit RSA key.
PS256, PS384 or PS512
Digital Signature with RSASSA-PSS. Requires at least a 2048-bit RSA key.
ES256, ES384 or ES512
Digital Signature with
ECDSA. Requires
an EC key with the correct curve: P-256 for ES256, P-384 for ES384 and
P-521 for ES512.
ED25519 or ED448
Edwards-curve Digital Signature Algorithm
(EdDSA). Requires an
Ed25519 key for ED25519 or an Ed448 key for ED448.
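For illustration, the following openssl commands generate keys that satisfy the requirements above; the file names are arbitrary, and how the key is supplied to MaxScale (e.g. via the configured key manager) is not shown here:

# 2048-bit RSA key, usable with RS256/RS384/RS512 and PS256/PS384/PS512
openssl genrsa -out jwt-rsa.key 2048

# EC key on the P-256 curve, usable with ES256
openssl ecparam -name prime256v1 -genkey -noout -out jwt-es256.key

# Ed25519 key, usable with ED25519 (requires a reasonably recent OpenSSL)
openssl genpkey -algorithm ED25519 -out jwt-ed25519.key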
PATCH, DELETE, HEAD, OPTIONS, CONNECT, TRACE. Default: No exclusions
Default: none
vault - HashiCorp Vault key manager. This key manager is only included on systems with GCC 9 or newer, which means it cannot be used on Ubuntu 18.04.
Auto tune: Yes
Auto tune: Yes
super_priv: Boolean. True if user has SUPER grant.
global_db_priv: Boolean. True if user can access any database on login.
proxy_priv: Boolean. True if user has a PROXY grant.
is_role: Boolean. True if user is a role.
Default: add_when_load_ok
Default: primary
Default: default
TLSv1.2, TLSv1.3, TLSv10, TLSv11, TLSv12, TLSv13. Default: MAX
MAX
REST API documentation
Network throughput between the client, MaxScale and the database nodes governs how much traffic can be handled. The client-to-MaxScale network is likely to be saturated first: having multiple MaxScales in front of the cluster is an easy way of solving this problem.
Certain MaxScale modules store data on disk. A faster disk improves their
performance but, depending on the module, this might not be a big enough
problem to worry about. Filters like the qlafilter that write information to
disk for every SQL query can cause performance bottlenecks.
The number of routing threads used by MaxScale.
connection routing
Connection routing is a method of handling requests in which MariaDB MaxScale will accept connections from a client and route data on that connection to a single database using a single connection. Connection based routing will not examine individual requests on a connection and it will not move that connection once it is established.
statement routing
Statement routing is a method of handling requests in which each request within a connection will be handled individually. Requests may be sent to one or more servers and connections may be dynamically added or removed from the session. A configuration sketch contrasting the two routing methods follows these definitions.
module
A module is a separate code entity that may be loaded dynamically into MariaDB MaxScale to increase the available functionality. Modules are implemented as run-time loadable shared objects.
connection failover
When a connection currently in use between MariaDB MaxScale and a database server fails, MariaDB MaxScale automatically creates a replacement connection to another server without client intervention.
backend database
A term used to refer to a database that sits behind MariaDB MaxScale and is accessed by applications via MariaDB MaxScale.
REST API
HTTP administrative interface
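As referenced under statement routing above, a minimal sketch contrasting the two routing methods; the service names, user and password are placeholders, and the servers are assumed to be defined elsewhere in the configuration:

# Connection routing: each client connection is bound to a single backend
[Conn-Service]
type=service
router=readconnroute
servers=server1,server2
user=maxscale
password=maxscale_pw

# Statement routing: individual statements may be routed to different backends
[Stmt-Service]
type=service
router=readwritesplit
servers=server1,server2
user=maxscale
password=maxscale_pw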
Running
The server is running.
Master
The server is the primary.
Slave
The server is a replica.
Draining
The server is being drained. Existing connections can continue to be used, but no new connections will be created to the server. Typically this status bit is turned on manually using maxctrl, but a monitor may also turn it on.
Drained
The server has been drained. The server was being drained and now the number of connections to the server has dropped to 0.
Auth Error
The monitor cannot login and query the server due to insufficient privileges.
80% of the smallest wait_timeout value of the servers used by the service
The smallest value of the servers used by the service
QC cache size
The current size of the cache (bytes).
QC cache inserts
How many entries have been inserted into the cache.
QC cache hits
How many times the classification result has been found from the cache.
QC cache misses
How many times the classification result has not been found from the cache, but the classification had to be performed.
QC cache evictions
How many times a cache entry has had to be removed from the cache to make room for another.
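One way to keep an eye on the eviction counter from the shell; a minimal sketch assuming grep is available and that the row label matches the name listed above:

# Print only the eviction rows of the per-thread statistics
maxctrl show threads | grep "QC cache evictions"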
machine.cores_physical
The number of physical CPU cores on the machine.
machine.cores_available
The number of CPU cores available to MaxScale. This number may be smaller than machine.cores_physical, if CPU affinities are used and only a subset of the physical cores are available to MaxScale.
machine.cores_virtual
The number of virtual CPU cores available to MaxScale. This number may be a decimal and smaller than machine.cores_available, if MaxScale is running in a container whose CPU quota and period have been restricted. Note that if MaxScale is not running in a container, or fails to detect that it is, the value shown will be identical to machine.cores_available.
machine.memory_physical
The amount of physical memory on the machine.
machine.memory_available
The amount of memory available to MaxScale. This number may be smaller than machine.memory_physical, if MaxScale is running in a container whose memory has been restricted. Note that if MaxScale is not running in a container, or fails to detect that it is, the value shown will be identical to machine.memory_physical. Note also that this amount is available to all processes running in the same container, not just to MaxScale.
maxscale.query_classifier_cache_size
The maximum size of the MaxScale query classifier cache.

maxscale.threads
The number of routing threads MaxScale is using.
# This is a comment before a parameter
some_parameter=123

[MyService]
type=service
router=readconnroute
servers=server1,
server2,
server3

max_size=1099511628000
max_size=1073741824Ki
max_size=1048576Mi
max_size=1024Gi
max_size=1Ti

max_size=1000000000000
max_size=1000000000k
max_size=1000000M
max_size=1000G
max_size=1T

soft_ttl=1h
soft_ttl=60m
soft_ttl=60min
soft_ttl=3600s
soft_ttl=3600000ms

some_param=42%

router_options=master,slave

path_list_parameter=/tmp/something.log:/var/log/maxscale/maxscale.log

# Valid options are:
# threads=[<number of threads> | auto ]
[MaxScale]
threads=auto

rebalance_period=10s

rebalance_threshold=15

auth_connect_timeout=10s

# Valid options are:
# log_augmentation=<0|1>
log_augmentation=1

# A valid value looks like
# log_throttling = X, Y, Z
#
# where the first value X is a positive integer and means the number of times
# a specific error may be logged within a duration of Y, before the logging
# of that error is suppressed for a duration of Z.
log_throttling=8, 2s, 15000ms

log_throttling=

log_throttling=0, 0, 0

logdir=/var/log/maxscale/

datadir=/var/lib/maxscale/

libdir=/usr/lib64/maxscale/

cachedir=/var/cache/maxscale/

piddir=/var/run/maxscale/

execdir=/usr/bin/

connector_plugindir=/usr/lib64/maxscale/plugin/

persistdir=/var/lib/maxscale/maxscale.cnf.d/

module_configdir=/etc/maxscale.modules.d/

language=/var/lib/maxscale/

# 1MB query classifier cache
query_classifier_cache_size=1MB

some_parameter=$SOME_VALUE

substitute_variables=true

sql_mode=oracle

set sql_mode=oracle;

set sql_mode=default;

local_address=192.168.1.254

users_refresh_time=120s

users_refresh_interval=2h

retain_last_statements=20

dump_last_statements=on_error

session_trace=20

session_trace=1000
session_trace_match=/You have an error in your SQL syntax/

admin_readwrite_hosts=192.168.1.1,127.0.0.1/21

admin_readonly_hosts=mydomain%.com

GRANT SELECT, INSERT, UPDATE, CREATE ON `mysql`.`maxscale_config`

CREATE TABLE IF NOT EXISTS mysql.maxscale_config(
cluster VARCHAR(256) PRIMARY KEY,
version BIGINT NOT NULL,
config JSON NOT NULL,
origin VARCHAR(254) NOT NULL,
nodes JSON NOT NULL
) ENGINE=InnoDB;

key_manager=file
file.keyfile=/path/to/keyfile

event.X.facility=LOG_LOCAL0
event.X.level=LOG_ERR

auth,authpriv.* /var/log/auth.log

event.authentication_failure.facility=LOG_AUTH
event.authentication_failure.level=LOG_CRIT

[Test-Service]
type=service

router=readconnroute
router_options=slave

router=readconnroute
router_options=master,slave

filters=counter | QLA

targets=My-Service,server2

servers=server1,server2,server3

cluster=TheMonitor

user=maxscale
password=Mhu87p2D

user=maxscale
password=Mhu87p2D

version_string=10.11.2-MariaDB-RWsplit

[Test-Service]
wait_timeout=300s

[Test-Service]
max_connections=100

session_track_state_change = ON
session_track_transaction_info = CHARACTERISTICS

maxctrl alter service MyService retain_last_statements 5

SET @my_planet='Earth'; -- This command will be removed by history simplification
SET @my_home='My home is: ' || @my_planet; -- Command #1 in the history
SET @my_planet='Earth'; -- Command #2 in the history

user_accounts_file=/home/root/users.json

{
"user": [
{
"user": "test1",
"host": "%",
"global_db_priv": true
},
{
"user": "test2",
"host": "127.0.0.1",
"password": "*032169CDF0B90AF8C00992D43D354E29A2EACB42",
"plugin": "mysql_native_password",
"default_role": "role2"
},
{
"user": "",
"host": "%",
"plugin": "pam",
"proxy_priv": true
}
],
"db": [
{
"user": "test2",
"host": "127.0.0.1",
"db": "test"
}
],
"roles_mapping": [
{
"user": "test2",
"host": "127.0.0.1",
"role": "role2"
}
]
}

user_accounts_file_usage=file_only_always

idle_session_pool_time=900ms

max_slave_connections=1
lazy_connect=1
transaction_replay=true[server1]
type=server
max_routing_connections=1000 #this should be based on MariaDB Server capacity
persistpoolmax=1000 #same as above
persistmaxtime=10
#other server settings...
[myservice]
type=service
max_slave_connections=1
transaction_replay=true
idle_session_pool_time=500ms
lazy_connect=1
#other service settings...

multiplex_timeout=33s

[MyMariaDBServer1]
type=server
address=127.0.0.1
port=3000

monitoruser=mymonitoruser
monitorpw=mymonitorpasswd

monitoruser=mymonitoruser
monitorpw=mymonitorpasswd

max_routing_connections=1234

/data:80

disk_space_threshold=*:80
disk_space_threshold=/data:80
disk_space_threshold=/data1:80,/data2:60,*:90

> use information_schema;
> select * from disks;
+-----------+----------------------+-----------+----------+-----------+
| Disk | Path | Total | Used | Available |
+-----------+----------------------+-----------+----------+-----------+
| /dev/sda3 | / | 47929956 | 34332348 | 11139820 |
| /dev/sdb1 | /data | 961301832 | 83764 | 912363644 |
...

[main-site-primary]
type=server
address=192.168.0.11
rank=primary
[main-site-replica]
type=server
address=192.168.0.12
rank=primary
[DR-site-primary]
type=server
address=192.168.0.21
rank=secondary
[DR-site-replica]
type=server
address=192.168.0.22
rank=secondary

[MyListener1]
type=listener
service=MyService1
port=3006

proxy_protocol_networks=192.168.0.1,198.168.0.0/16

connection_init_sql_file=/home/dba/init_queries.txt

set @myvar = 'mytext';
set @myvar2 = 4;

user_mapping_file=/home/root/mapping.json

{
"user_map": [
{
"original_user": "bob",
"mapped_user": "janet"
},
{
"original_user": "karen",
"mapped_user": "janet"
}
],
"group_map": [
{
"original_group": "visitors",
"mapped_user": "db_user"
}
],
"server_credentials": [
{
"mapped_user": "janet",
"password": "secret_pw",
"plugin": "mysql_native_password"
},
{
"mapped_user": "db_user",
"password": "secret_pw2",
"plugin": "pam"
}
]
}

connection_metadata=redirect_url=localhost:3306,service_name=my-service,max_allowed_packet=auto

[Monitor1]
type=monitor
module=mariadbmon
user=the_user
password=the_password
handle_events=false
monitor_interval=2000ms
backend_connect_timeout = 3s
backend_connect_attempts = 5
servers=Server1, Server2
[Monitor2]
type=monitor
module=mariadbmon
user=the_user
password=the_password
handle_events=false
monitor_interval=2000ms
backend_connect_timeout = 3s
backend_connect_attempts = 5
servers=Server3, Server4

[Monitor-Common]
type=include
module=mariadbmon
user=the_user
password=the_password
handle_events=false
monitor_interval=2000ms
backend_connect_timeout = 3s
backend_connect_attempts = 5
[Monitor1]
type=monitor
@include=Monitor-Common
servers=Server1, Server2
[Monitor2]
type=monitor
@include=Monitor-Common
servers=Server3, Server3

@include=Some-Common-Attributes, Other-Common-Attributes

[Monitor2]
type=monitor
@include=Monitor-Common
servers=Server3, Server3
backend_connect_timeout = 5s

[Monitor-Common]
type=include
@include=Base-Common
...
[Monitor2]
type=monitor
@include=Monitor1
...

ssl_verify_peer_certificate=true
ssl_verify_peer_host=true

[server1]
type=server
address=10.131.24.62
port=3306
ssl=true
ssl_cert=/usr/local/mariadb/maxscale/ssl/crt.max-client.pem
ssl_key=/usr/local/mariadb/maxscale/ssl/key.max-client.pem
ssl_ca_cert=/usr/local/mariadb/maxscale/ssl/crt.ca.maxscale.pem

[RW-Split-Listener]
type=listener
service=RW-Split-Router
port=3306
ssl=true
ssl_cert=/usr/local/mariadb/maxscale/ssl/crt.maxscale.pem
ssl_key=/usr/local/mariadb/maxscale/ssl/key.csr.maxscale.pem
ssl_ca_cert=/usr/local/mariadb/maxscale/ssl/crt.ca.maxscale.pem

# Usage: maxkeys [PATH]
maxkeys /var/lib/maxscale/

# Usage: maxpasswd PATH PASSWORD
maxpasswd /var/lib/maxscale/ MaxScalePw001
61DD955512C39A4A8BC4BB1E5F116705

[Split-Service]
type=service
router=readwritesplit
servers=server1,server2,server3,server4
user=maxscale
password=61DD955512C39A4A8BC4BB1E5F116705

├──────────────┼─────────────────────────────────────────────────────────────┤
│ Config Sync │ { │
│ │ "checksum": "3dd6b467760d1d2023f2bc3871a60dd903a3341e", │
│ │ "nodes": { │
│ │ "maxscale": "OK", │
│ │ "maxscale2": "OK" │
│ │ }, │
│ │ "origin": "maxscale", │
│ │ "status": "OK", │
│ │ "version": 2 │
│ │ } │
├──────────────┼─────────────────────────────────────────────────────────────┤

maxctrl alter maxscale config_sync_cluster ""

maxscale --export-config=/tmp/maxscale.cnf.combined

[maxscale]
key_manager=file
file.keyfile=/var/lib/maxscale/encryption.key
[NoSQL-Listener]
type=listener
service=My-Service
protocol=nosqlprotocol
nosqlprotocol.authentication_key_id=1
nosqlprotocol.authentication_user=my_user
nosqlprotocol.authentication_password=my_password
# Add services, servers, monitors etc.

$ openssl rand -hex 32|vault kv put secret/1 data=-
== Secret Path ==
secret/data/1
======= Metadata =======
Key Value
--- -----
created_time 2022-06-23T06:50:55.29063873Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version              1

$ bin/maxctrl show threads
┌────────────────────────┬────────┬────────┬────────┬────────┬─────┐
│ Id │ 0 │ 1 │ 2 │ 3 │ All │
├────────────────────────┼────────┼────────┼────────┼────────┼─────┤
│ State │ Active │ Active │ Active │ Active │ N/A │
├────────────────────────┼────────┼────────┼────────┼────────┼─────┤
...

$ bin/maxctrl alter maxscale threads=2
OK
$ bin/maxctrl show threads
┌────────────────────────┬────────┬────────┬──────────┬──────────┬─────────┐
│ Id │ 0 │ 1 │ 2 │ 3 │ All │
├────────────────────────┼────────┼────────┼──────────┼──────────┼─────────┤
│ State │ Active │ Active │ Draining │ Draining │ N/A │
├────────────────────────┼────────┼────────┼──────────┼──────────┼─────────┤
...

┌────────────────────────┬────────┬────────┬─────────┬──────────┬────────┐
│ Id │ 0 │ 1 │ 2 │ 3 │ All │
├────────────────────────┼────────┼────────┼─────────┼──────────┼────────┤
│ State │ Active │ Active │ Dormant │ Draining │ N/A │
├────────────────────────┼────────┼────────┼─────────┼──────────┼────────┤
...

$ bin/maxctrl show threads
┌────────────────────────┬────────┬────────┬──────┐
│ Id │ 0 │ 1 │ All │
├────────────────────────┼────────┼────────┼──────┤
│ State │ Active │ Active │ N/A │
├────────────────────────┼────────┼────────┼──────┤
...

$ bin/maxctrl show threads
┌────────────────────────┬────────┬────────┬─────────┬──────────┬────────┐
│ Id │ 0 │ 1 │ 2 │ 3 │ All │
├────────────────────────┼────────┼────────┼─────────┼──────────┼────────┤
│ State │ Active │ Active │ Dormant │ Draining │ N/A │
├────────────────────────┼────────┼────────┼─────────┼──────────┼────────┤
...

$ bin/maxctrl alter maxscale threads=3
OK
$ bin/maxctrl show threads
┌────────────────────────┬────────┬────────┬────────┬──────────┬────────┐
│ Id │ 0 │ 1 │ 2 │ 3 │ All │
├────────────────────────┼────────┼────────┼────────┼──────────┼────────┤
│ State │ Active │ Active │ Active │ Draining │ N/A │
├────────────────────────┼────────┼────────┼────────┼──────────┼────────┤
...

$ bin/maxctrl show threads
┌────────────────────────┬────────┬────────┬────────┬──────┐
│ Id │ 0 │ 1 │ 2 │ All │
├────────────────────────┼────────┼────────┼────────┼──────┤
│ State │ Active │ Active │ Active │ N/A │
├────────────────────────┼────────┼────────┼────────┼──────┤
...

$ maxctrl show maxscale
...
├──────────────┼────────────────────────────────────────────────────────────────────────────┤
│ System │ { │
│ │ "machine": { │
│ │ "cores_available": 8, │
│ │ "cores_physical": 8, │
│ │ "cores_virtual": 4, │
│ │ "memory_available": 20858544128, │
│ │ "memory_physical": 41717088256 │
│ │ }, │
│ │ "maxscale": { │
│ │ "query_classifier_cache_size": 6257563238, │
│ │ "threads": 8 │
│ │ }, │
│ │ "os": { │
│ │ "machine": "x86_64", │
│ │ "nodename": "johan-P53s", │
│ │ "release": "5.4.0-125-generic", │
│ │ "sysname": "Linux", │
│ │ "version": "#141~18.04.1-Ubuntu SMP Thu Aug 11 20:15:56 UTC 2022" │
│ │ } │
│ │ } │
└──────────────┴────────────────────────────────────────────────────────────────────────────┘

[maxscale]
threads=4
query_classifier_cache_size=3100000000
...

WatchdogSec=30s