Operating System support varies by product and product version. Operating System support is detailed in MariaDB plc Engineering Policies.
This section explains various deployment architectures, including standalone, replication, and clustering, to help you design scalable and highly available database solutions.
Understand MariaDB Server's architecture. Explore its components, storage engines, and how they interact to provide a high-performance, reliable database solution.
Understand MariaDB Server's architectural constraints. This section details limitations & design considerations, helping you optimize your database deployments for maximum efficiency and scalability.
MariaDB offers a range of deployment topologies, grouped by workload and technology. Each topology is named and diagrammed, and its benefits are listed. Custom configurations are also supported.
MariaDB products can be deployed in many different topologies. The topologies described in this section are representative. MariaDB products can be deployed to form other topologies, leverage advanced product capabilities, or combine the capabilities of multiple topologies.
Topologies are the arrangement of nodes and links to achieve a purpose. This documentation describes a few of the many topologies that can be deployed using MariaDB database products.
We group topologies by workload (transactional, analytical, hybrid) and technologies (Enterprise Spider). Single-node topologies are listed separately.
To help you select the correct topology:
Each topology is named and this name is used consistently throughout the documentation.
A thumbnail diagram provides a small-scale summary of the topology's architecture.
Finally, we provide a list of the benefits of the topology.
Although multiple topologies are listed on this page, the listed topologies are not the only options. MariaDB products are flexible, configurable, and extensible, so it is possible to deploy other topologies that combine the capabilities of the topologies listed on this page. The topologies listed here are primarily intended to represent the most commonly requested use cases.
MariaDB Enterprise Server provides DDL (Data Definition Language) syntax to create, alter, or drop constraints.
In the context of ISO 9075-2:2016, this page uses "DDL (Data Definition Language)" to refer to SQL statements in the standard's "SQL-schema statements" group.
MariaDB Replication
Highly available
Asynchronous or semi-synchronous replication
Automatic failover via MaxScale
Manual provisioning of new nodes from backup
Scales reads via MaxScale
Enterprise Server 10.3+, MaxScale 2.5+
Galera Cluster Topology
Multi-Primary Cluster Powered by Galera for Transactional/OLTP Workloads
InnoDB Storage Engine
Highly available
Virtually synchronous, certification-based replication
Automated provisioning of new nodes (IST/SST)
Scales reads via MaxScale
Enterprise Server 10.3+, MariaDB Enterprise Cluster (powered by Galera), MaxScale 2.5+
Columnar storage engine with S3-compatible object storage
Highly available
Automatic failover via MaxScale and CMAPI
Scales reads via MaxScale
Bulk data import
Enterprise Server 10.5, Enterprise ColumnStore 5, MaxScale 2.5
Enterprise Server 10.6, Enterprise ColumnStore 23.02, MaxScale 22.08
Columnar storage engine with shared local storage
Highly available
Automatic failover via MaxScale and CMAPI
Scales reads via MaxScale
Bulk data import
Enterprise Server 10.5, Enterprise ColumnStore 5, MaxScale 2.5
Enterprise Server 10.6, Enterprise ColumnStore 23.02, MaxScale 22.08
Single-stack hybrid transactional/analytical workloads
ColumnStore for analytics with scalable S3-compatible object storage
InnoDB for transactions
Cross-engine JOINs
Enterprise Server 10.5, Enterprise ColumnStore 5, MaxScale 2.5
Enterprise Server 10.6, Enterprise ColumnStore 23.02, MaxScale 22.08
Read from and write to tables on remote ES nodes
Spider Node uses Spider storage engine for Federated Spider Tables
Federated Spider Table is a "virtual" table
Spider uses MariaDB foreign data wrapper to query Data Table on Data Node
Data Node uses non-Spider storage engine for Data Tables
Supports transactions
Enterprise Server 10.3+, Enterprise Spider
Shard tables for horizontal scalability
Spider Node uses Spider storage engine for Sharded Spider Tables
Sharded Spider Table is a partitioned "virtual" table
Spider uses MariaDB foreign data wrapper to query Data Tables on Data Nodes for each partition
Data Node uses non-Spider storage engine for Data Tables
Supports transactions
Enterprise Server 10.3+, Enterprise Spider
This page details step 4 of the 6-step procedure "Deploy Galera Cluster Topology".
This step installs MariaDB MaxScale.
MariaDB Enterprise Cluster requires 1 or more MaxScale nodes.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Corporation provides package repositories for YUM (RHEL / CentOS), APT (Debian / Ubuntu), and ZYpp (SLES). A download token is required to access the MariaDB Enterprise Repository.
Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.
To retrieve the token for your account:
Navigate to https://customers.mariadb.com/downloads/token/.
Log in.
Copy the Customer Download Token.
Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.
On the MaxScale node, install the prerequisites for downloading the software from the Web.
Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
Install on SLES (ZYpp):
On the MaxScale node, configure package repositories and specify MariaDB MaxScale 25.01:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the Versions section at the bottom of the MariaDB Package Repository Setup and Usage page. Substitute ${checksum} in the example above with the latest checksum.
On the MaxScale node, install MariaDB MaxScale.
Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
Install on SLES (ZYpp):
Navigation in the procedure "Deploy Galera Cluster Topology":
This page was step 4 of 6.
Next: Step 5: Start and Configure MariaDB MaxScale
Explore MariaDB Enterprise Spider topologies with MaxScale. This section details how MaxScale integrates with Spider to manage and route traffic efficiently across sharded and distributed database environments.
MariaDB Enterprise Spider enables reading from and writing to tables on remote Enterprise Server nodes. It uses the Spider storage engine for "virtual" Federated Spider Tables, querying Data Tables on Data Nodes (which use non-Spider engines) via a MariaDB foreign data wrapper. This solution supports transactions and is available with Enterprise Server 10.3+.
MariaDB Enterprise Spider facilitates horizontal scalability by sharding tables. It uses the Spider storage engine for partitioned "virtual" Sharded Spider Tables, querying Data Tables on Data Nodes (using non-Spider engines) for each partition via a MariaDB foreign data wrapper. This solution supports transactions and is available with Enterprise Server 10.3+.
When creating a table, you can define a column with a NOT NULL constraint to prevent null values. This constraint ensures data integrity by guaranteeing that a column must always contain a value. If an attempt is made to insert or update a row with a null value in a NOT NULL column, the operation will be rejected, thus maintaining the integrity of the database.
MariaDB Server supports NOT NULL constraints to ensure that a column's value is not set to NULL:
When a column is declared with a NOT NULL constraint, Enterprise Server rejects operations that would write a NULL value to the column
With MariaDB Server, the CREATE TABLE statement can be used to create a new table with a NOT NULL constraint on one or more columns:
With MariaDB Server, the ALTER TABLE statement can be used to add the NOT NULL constraint to a column using the MODIFY COLUMN clause:
With MariaDB Server, the ALTER TABLE statement can be used to remove the NOT NULL constraint from a column using the MODIFY COLUMN clause:
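For illustration, a minimal sketch of all three operations; the table and column names below are placeholders, not taken from any procedure in this guide:
CREATE TABLE t1 (
   id INT NOT NULL,
   note VARCHAR(100)
);
ALTER TABLE t1
   MODIFY COLUMN note VARCHAR(100) NOT NULL;
ALTER TABLE t1
   MODIFY COLUMN note VARCHAR(100) NULL;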
This page details step 1 of the 6-step procedure "Deploy Galera Cluster Topology".
This step installs MariaDB Enterprise Server.
MariaDB Enterprise Server installations support MariaDB Enterprise Cluster, powered by Galera. MariaDB Enterprise Cluster uses the Galera Enterprise 4 wsrep provider plugin.
MariaDB Enterprise Cluster requires an odd number of 3 or more nodes. Nodes must meet requirements.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
This page details step 1 of the 7-step procedure "Deploy Primary/Replica Topology".
This step installs MariaDB Enterprise Server.
The Primary/Replica topology requires 3 or more MariaDB Enterprise Server nodes for High Availability (HA). Nodes must meet requirements.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.
Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.
To retrieve the token for your account:
Navigate to https://customers.mariadb.com/downloads/token/
Log in.
Copy the Customer Download Token.
Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.
On each Enterprise ColumnStore node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
On each Enterprise ColumnStore node, configure package repositories and specify Enterprise Server:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the Versions section at the bottom of the MariaDB Package Repository Setup and Usage page. Substitute ${checksum} in the example above with the latest checksum.
On each Enterprise ColumnStore node, install additional dependencies:
Install on CentOS and RHEL (YUM):
Install on Debian 9 and Ubuntu 18.04 (APT):
Install on Debian 10 and Ubuntu 20.04 (APT):
On each Enterprise ColumnStore node, install MariaDB Enterprise Server and MariaDB Enterprise ColumnStore:
Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
Navigation in the procedure "Deploy ColumnStore Object Storage Topology".
This page was step 3 of 9.
Next: Step 4: Start and Configure MariaDB Enterprise Server.
MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.
Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.
To retrieve the token for your account:
Navigate to https://customers.mariadb.com/downloads/token/.
Log in.
Copy the Customer Download Token.
Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.
On each Enterprise Cluster node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
On each Enterprise Cluster node, configure package repositories and specify Enterprise Server:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the Versions section at the bottom of the MariaDB Package Repository Setup and Usage page. Substitute ${checksum} in the example above with the latest checksum.
On each Enterprise Cluster node, install MariaDB Enterprise Server and MariaDB Enterprise Backup.
Install via CentOS / RHEL (YUM):
Install via Debian / Ubuntu (APT):
Install via SLES (ZYpp):
Navigation in the procedure "Deploy Galera Cluster Topology":
This page was step 1 of 6.
Next: Step 2: Start and Configure MariaDB Enterprise Server
MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.
Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.
To retrieve the token for your account:
Navigate to https://customers.mariadb.com/downloads/token/.
Log in.
Copy the Customer Download Token.
Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.
On the MaxScale node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
On the MaxScale node, configure package repositories and specify MariaDB MaxScale 22.08:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the Versions section at the bottom of the MariaDB Package Repository Setup and Usage page. Substitute ${checksum} in the example above with the latest checksum.
On the MaxScale node, install MariaDB MaxScale.
Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".
This page was step 6 of 9.
Next: Step 7: Start and Configure MariaDB MaxScale.
MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.
Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.
To retrieve the token for your account:
Navigate to https://customers.mariadb.com/downloads/token/
Log in.
Copy the Customer Download Token.
Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.
On each Enterprise Server node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
On each Enterprise Server node, configure package repositories and specify Enterprise Server:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the Versions section at the bottom of the MariaDB Package Repository Setup and Usage page. Substitute ${checksum} in the example above with the latest checksum.
On each Enterprise Server node, install MariaDB Enterprise Server and MariaDB Enterprise Backup.
Install via CentOS / RHEL (YUM):
Install via Debian / Ubuntu (APT):
Install via SLES (ZYpp):
Navigation in the procedure "Deploy Primary/Replica Topology":
This page was step 1 of 7.
Next: Step 2: Start and Configure MariaDB Enterprise Server on Primary Server

Installation and configuration instructions are available for the deployment of single-node topologies of MariaDB Enterprise Server and MariaDB Community Server.
These instructions detail a single-node deployment of a columnar data store, for MariaDB Community Server, with data optionally stored on S3-compatible object storage:
For high availability and scalability, see "" or "".
These instructions detail a single-node deployment of MariaDB Community Server:
These instructions detail a single-node deployment of a columnar data store:
For high availability and scalability, see "".
These instructions detail a single-node deployment of a columnar data store for MariaDB Enterprise Server, with data stored on S3-compatible object storage:
For high availability and scalability, see "".
These instructions detail a single-node deployment of MariaDB Enterprise Server:
This page details step 5 of the 7-step procedure "Deploy Primary/Replica Topology".
This step installs MariaDB MaxScale 25.01.
The MaxScale node must meet requirements.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Corporation provides package repositories for YUM (RHEL, CentOS), APT (Debian, Ubuntu), and ZYpp (SLES). A download token is required to access the MariaDB Enterprise Repository.
Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.
To retrieve the token for your account:
Navigate to https://customers.mariadb.com/downloads/token/.
Log in.
Copy the Customer Download Token.
Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.
On the MaxScale node, install the prerequisites for downloading the software from the Web.
Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
Install on SLES (ZYpp):
On the MaxScale node, configure package repositories and specify MariaDB MaxScale 25.01:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the Versions section at the bottom of the MariaDB Package Repository Setup and Usage page. Substitute ${checksum} in the example above with the latest checksum.
On the MaxScale node, install MariaDB MaxScale.
Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
Install on SLES (ZYpp):
Navigation in the procedure "Deploy Primary/Replica Topology":
This page was step 5 of 7.
Next: Step 6: Start and Configure MariaDB MaxScale
This page details step 2 of a 5-step procedure for deploying Single-Node Enterprise ColumnStore with Local storage.
This step installs MariaDB Enterprise Server and MariaDB Enterprise ColumnStore 23.10.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.
Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.
To retrieve the token for your account:
Navigate to https://customers.mariadb.com/downloads/token/.
Log in.
Copy the Customer Download Token.
Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.
On each Enterprise ColumnStore node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
On each Enterprise ColumnStore node, configure package repositories and specify Enterprise Server:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the Versions section at the bottom of the MariaDB Package Repository Setup and Usage page. Substitute ${checksum} in the example above with the latest checksum.
Install additional dependencies:
Install on CentOS / RHEL (YUM):
Install on Debian 10 and Ubuntu 20.04 (APT):
Install on Debian 9 and Ubuntu 18.04 (APT):
Install MariaDB Enterprise Server and MariaDB Enterprise ColumnStore:
Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
Navigation in the Single-Node Enterprise ColumnStore topology with Local storage deployment procedure:
This page was step 2 of 5.
Next: Step 3: Start and Configure MariaDB Enterprise ColumnStore.
This page details step 5 of a 5-step procedure for deploying Single-Node Enterprise ColumnStore with Local storage.
This step bulk imports data to Enterprise ColumnStore.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
Before data can be imported into the tables, create a matching schema.
On the primary server, create the schema:
For each database that you are importing, create the database with the CREATE DATABASE statement:
For each table that you are importing, create the table with the CREATE TABLE statement:
Enterprise ColumnStore supports multiple methods to import data into ColumnStore tables.
MariaDB Enterprise ColumnStore includes cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a TSV (tab-separated values) file, on the primary server run cpimport:
When data is loaded with the LOAD DATA INFILE statement, MariaDB Enterprise ColumnStore loads the data using cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a TSV (tab-separated values) file, on the primary server use the LOAD DATA INFILE statement:
MariaDB Enterprise ColumnStore can also import data directly from a remote database. A simple method is to query the table using the SELECT statement, and then pipe the results into cpimport, which is a command-line utility that is designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a remote MariaDB database:
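For example, a sketch of the SELECT-to-cpimport pipe, mirroring the cpimport example shown elsewhere in this guide; the inventory.products database and table are placeholders, and a remote server would additionally need connection options such as --host:
$ mariadb --quick \
--skip-column-names \
--execute="SELECT * FROM inventory.products" \
| cpimport -s '\t' inventory products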
Navigation in the Single-Node Enterprise ColumnStore topology with Local storage deployment procedure:
This page was step 5 of 5.
This procedure is complete.
This page details step 2 of the 4-step procedure "Deploy HTAP Topology".
This step installs MariaDB Enterprise Server, MariaDB Enterprise ColumnStore 23.10, and dependencies.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.
Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.
To retrieve the token for your account:
Navigate to https://customers.mariadb.com/downloads/token/.
Log in.
Copy the Customer Download Token.
Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.
On each Enterprise ColumnStore node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
On each Enterprise ColumnStore node, configure package repositories and specify Enterprise Server:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the Versions section at the bottom of the MariaDB Package Repository Setup and Usage page. Substitute ${checksum} in the example above with the latest checksum.
On each Enterprise ColumnStore node, install additional dependencies:
Install on CentOS / RHEL (YUM):
Install on Debian 10 and Ubuntu 20.04 (APT):
Install on Debian 9 and Ubuntu 18.04 (APT):
On the Enterprise ColumnStore node, install MariaDB Enterprise Server and MariaDB Enterprise ColumnStore:
Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
Navigation in the procedure "Deploy HTAP Topology".
This page was step 2 of 4.
Next: Step 3: Start and Configure MariaDB Enterprise Server.
$ sudo yum install curl
$ sudo apt install curl apt-transport-https
$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -
$ chmod +x mariadb_es_repo_setup
$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--skip-maxscale \
--skip-tools \
--mariadb-server-version="11.4"$ sudo yum install jemalloc jq curl$ sudo apt install libjemalloc1 jq curl$ sudo apt install libjemalloc2 jq curl$ sudo yum install MariaDB-server \
MariaDB-backup \
MariaDB-shared \
MariaDB-client \
MariaDB-columnstore-engine \
MariaDB-columnstore-cmapi
$ sudo apt install mariadb-server \
mariadb-backup \
libmariadb3 \
mariadb-client \
mariadb-plugin-columnstore \
mariadb-columnstore-cmapi
$ sudo yum install curl
$ sudo apt install curl apt-transport-https
$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -
$ chmod +x mariadb_es_repo_setup
$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--skip-maxscale \
--skip-tools \
--mariadb-server-version="11.4"$ sudo yum install MariaDB-server MariaDB-backup$ sudo apt update
$ sudo apt install mariadb-server mariadb-backup$ sudo zypper install MariaDB-server MariaDB-backup$ sudo yum install curl$ sudo apt install curl apt-transport-https$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -
$ chmod +x mariadb_es_repo_setup
$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--skip-server \
--skip-tools \
--mariadb-maxscale-version="22.08"$ sudo yum install maxscale$ sudo apt install maxscale$ sudo yum install curl$ sudo apt install curl apt-transport-https$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -$ chmod +x mariadb_es_repo_setup$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--skip-maxscale \
--skip-tools \
--mariadb-server-version="11.4"$ sudo yum install MariaDB-server MariaDB-backup$ sudo apt update
$ sudo apt install mariadb-server mariadb-backup$ sudo zypper install MariaDB-server MariaDB-backup$ sudo yum install curl$ sudo apt install curl apt-transport-https$ sudo zypper install curl$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -
$ chmod +x mariadb_es_repo_setup
$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--skip-server \
--skip-tools \
--mariadb-maxscale-version="25.01"$ sudo yum install maxscale$ sudo apt install maxscale$ sudo zypper install maxscaleCREATE TABLE hq_sales.invoices (
invoice_id BIGINT UNSIGNED NOT NULL,
branch_id INT NOT NULL,
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY (invoice_id)
);
ALTER TABLE hq_sales.invoices
MODIFY COLUMN customer_id INT NOT NULL;
ALTER TABLE .. MODIFY COLUMN .. NULL
ALTER TABLE hq_sales.invoices
MODIFY COLUMN customer_id INT NULL;
MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.
Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.
To retrieve the token for your account:
Navigate to https://customers.mariadb.com/downloads/token/
Log in.
Copy the Customer Download Token.
Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.
On each Enterprise ColumnStore node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
On each Enterprise ColumnStore node, configure package repositories and specify Enterprise Server:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the Versions section at the bottom of the MariaDB Package Repository Setup and Usage page. Substitute ${checksum} in the example above with the latest checksum.
On each Enterprise ColumnStore node, install additional dependencies:
Install on CentOS and RHEL (YUM):
Install on Debian 9 and Ubuntu 18.04 (APT):
Install on Debian 10 and Ubuntu 20.04 (APT):
On each Enterprise ColumnStore node, install MariaDB Enterprise Server and MariaDB Enterprise ColumnStore:
Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".
This page was step 3 of 9.
Next: Step 4: Start and Configure MariaDB Enterprise Server.
Before data can be imported into the tables, create a matching schema.
On the primary server, create the schema:
For each database that you are importing, create the database with the CREATE DATABASE statement:
For each table that you are importing, create the table with the CREATE TABLE statement:
Enterprise ColumnStore supports multiple methods to import data into ColumnStore tables.
MariaDB Enterprise ColumnStore includes cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a TSV (tab-separated values) file, on the primary server run cpimport:
When data is loaded with the LOAD DATA INFILE statement, MariaDB Enterprise ColumnStore loads the data using cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a TSV (tab-separated values) file, on the primary server use the LOAD DATA INFILE statement:
MariaDB Enterprise ColumnStore can also import data directly from a remote database. A simple method is to query the table using the SELECT statement, and then pipe the results into cpimport, which is a command-line utility that is designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a remote MariaDB database:
Navigation in the Single-Node Enterprise ColumnStore topology with Object storage deployment procedure:
This page was step 5 of 5.
This procedure is complete.
MariaDB Enterprise ColumnStore performs best with Linux kernel optimizations.
To optimize the kernel:
Set the relevant kernel parameters in a sysctl configuration file. To ensure proper change management, use an Enterprise ColumnStore-specific configuration file.
Create a /etc/sysctl.d/90-mariadb-enterprise-columnstore.conf file:
Use the sysctl command to set the kernel parameters at runtime:
The Linux Security Modules (LSM) should be temporarily disabled on each Enterprise ColumnStore node during installation.
The LSM will be configured and re-enabled later in this deployment procedure.
The steps to disable the LSM depend on the specific LSM used by the operating system.
SELinux must be set to permissive mode before installing MariaDB Enterprise ColumnStore.
To set SELinux to permissive mode:
Set SELinux to permissive mode:
Set SELinux to permissive mode by setting SELINUX=permissive in /etc/selinux/config.
For example, the file will usually look like this after the change:
Confirm that SELinux is in permissive mode:
SELinux will be configured and re-enabled later in this deployment procedure. This configuration is not persistent. If you restart the server before configuring and re-enabling SELinux later in the deployment procedure, you must reset the enforcement to permissive mode.
AppArmor must be disabled before installing MariaDB Enterprise ColumnStore.
Disable AppArmor:
Reboot the system.
Confirm that no AppArmor profiles are loaded using aa-status:
AppArmor will be configured and re-enabled later in this deployment procedure.
When using MariaDB Enterprise ColumnStore, it is recommended to set the system's locale to UTF-8.
On RHEL 8, install additional dependencies:
Set the system's locale to en_US.UTF-8 by executing localedef:
With the HTAP topology, it is important to create the S3 bucket before you start ColumnStore.
If you already have an S3 bucket, confirm that the bucket is empty.
S3 bucket configuration will be performed later in this procedure.
Navigation in the procedure "Deploy HTAP Topology".
This page was step 1 of 4.
Next: Step 2: Install MariaDB Enterprise Server.
MariaDB Enterprise ColumnStore performs best with Linux kernel optimizations.
On each server to host an Enterprise ColumnStore node, optimize the kernel:
Set the relevant kernel parameters in a sysctl configuration file. To ensure proper change management, use an Enterprise ColumnStore-specific configuration file.
Create a /etc/sysctl.d/90-mariadb-enterprise-columnstore.conf file:
Use the sysctl command to set the kernel parameters at runtime:
The Linux Security Modules (LSM) should be temporarily disabled on each Enterprise ColumnStore node during installation.
The LSM will be configured and re-enabled later in this deployment procedure.
The steps to disable the LSM depend on the specific LSM used by the operating system.
SELinux must be set to permissive mode before installing MariaDB Enterprise ColumnStore.
To set SELinux to permissive mode:
Set SELinux to permissive mode:
Set SELinux to permissive mode by setting SELINUX=permissive in /etc/selinux/config.
For example, the file will usually look like this after the change:
Confirm that SELinux is in permissive mode:
SELinux will be configured and re-enabled later in this deployment procedure. This configuration is not persistent. If you restart the server before configuring and re-enabling SELinux later in the deployment procedure, you must reset the enforcement to permissive mode.
AppArmor must be disabled before installing MariaDB Enterprise ColumnStore.
Disable AppArmor:
Reboot the system.
Confirm that no AppArmor profiles are loaded using aa-status:
AppArmor will be configured and re-enabled later in this deployment procedure.
When using MariaDB Enterprise ColumnStore, it is recommended to set the system's locale to UTF-8.
On RHEL 8, install additional dependencies:
Set the system's locale to en_US.UTF-8 by executing localedef:
If you want to use S3-compatible storage, it is important to create the S3 bucket before you start ColumnStore.
If you already have an S3 bucket, confirm that the bucket is empty.
S3 bucket configuration will be performed later in this procedure.
Navigation in the Single-Node Enterprise ColumnStore topology with Object storage deployment procedure:
This page was step 1 of 5.
Next: Step 2: Install MariaDB Enterprise ColumnStore.
Connect to the server with MariaDB Client, using the root@localhost user account:
Query information_schema.PLUGINS and confirm that the ColumnStore storage engine plugin is ACTIVE:
Create a test database, if it does not exist:
Create a ColumnStore table:
Add sample data into the table:
Read data from table:
Create an InnoDB table:
Add data to the table:
Perform a cross-engine join:
Navigation in the Single-Node Enterprise ColumnStore topology with Local storage deployment procedure:
This page was step 4 of 5.
Next: Step 5: Bulk Import of Data.
MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.
Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.
To retrieve the token for your account:
Navigate to https://customers.mariadb.com/downloads/token/
Log in.
Copy the Customer Download Token.
Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.
On each Enterprise ColumnStore node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
On each Enterprise ColumnStore node, configure package repositories and specify Enterprise Server:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the Versions section at the bottom of the MariaDB Package Repository Setup and Usage page. Substitute ${checksum} in the example above with the latest checksum.
Install additional dependencies:
Install on CentOS / RHEL (YUM):
Install on Debian 10 and Ubuntu 20.04 (APT):
Install on Debian 9 and Ubuntu 18.04 (APT):
Install MariaDB Enterprise Server and MariaDB Enterprise ColumnStore:
Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
Navigation in the Single-Node Enterprise ColumnStore topology with Object storage deployment procedure:
This page was step 2 of 5.
Next: Step 3: Start and Configure MariaDB Enterprise ColumnStore.
MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.
Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.
To retrieve the token for your account:
Navigate to https://customers.mariadb.com/downloads/token/
Log in.
Copy the Customer Download Token.
Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.
On the MaxScale node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
On the MaxScale node, configure package repositories and specify MariaDB MaxScale 22.08:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the Versions section at the bottom of the MariaDB Package Repository Setup and Usage page. Substitute ${checksum} in the example above with the latest checksum.
On the MaxScale node, install MariaDB MaxScale.
Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 6 of 9.
Next: Step 7: Start and Configure MariaDB MaxScale.
$ sudo yum install curl
$ sudo apt install curl apt-transport-https
$ sudo zypper install curl
$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -
$ chmod +x mariadb_es_repo_setup
$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--skip-server \
--skip-tools \
--mariadb-maxscale-version="25.01"$ sudo yum install maxscale$ sudo apt install maxscale$ sudo zypper install maxscale$ sudo yum install curl$ sudo apt install curl apt-transport-https$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -$ chmod +x mariadb_es_repo_setup$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--skip-maxscale \
--skip-tools \
--mariadb-server-version="11.4"$ sudo yum install epel-release
$ sudo yum install jemalloc$ sudo apt install libjemalloc2$ sudo apt install libjemalloc1$ sudo yum install MariaDB-server \
MariaDB-backup \
MariaDB-shared \
MariaDB-client \
MariaDB-columnstore-engine
$ sudo apt install mariadb-server \
mariadb-backup \
libmariadb3 \
mariadb-client \
mariadb-plugin-columnstore
CREATE DATABASE inventory;
CREATE TABLE inventory.products (
product_name VARCHAR(11) NOT NULL DEFAULT '',
supplier VARCHAR(128) NOT NULL DEFAULT '',
quantity VARCHAR(128) NOT NULL DEFAULT '',
unit_cost VARCHAR(128) NOT NULL DEFAULT ''
) ENGINE=Columnstore DEFAULT CHARSET=utf8;
$ sudo cpimport -s '\t' inventory products /tmp/inventory-products.tsv
LOAD DATA INFILE '/tmp/inventory-products.tsv'
INTO TABLE inventory.products;
$ mariadb --quick \
--skip-column-names \
--execute="SELECT * FROM inventory.products" \
| cpimport -s '\t' inventory products
$ sudo yum install curl
$ sudo apt install curl apt-transport-https
$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -
$ chmod +x mariadb_es_repo_setup
$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--skip-maxscale \
--skip-tools \
--mariadb-server-version="11.4"$ sudo yum install epel-release
$ sudo yum install jemalloc$ sudo apt install libjemalloc2$ sudo apt install libjemalloc1$ sudo yum install MariaDB-server \
MariaDB-backup \
MariaDB-shared \
MariaDB-client \
MariaDB-columnstore-engine
$ sudo apt install mariadb-server \
mariadb-backup \
libmariadb3 \
mariadb-client \
mariadb-plugin-columnstore
$ sudo yum install curl
$ sudo apt install curl apt-transport-https
$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -
$ chmod +x mariadb_es_repo_setup
$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--skip-maxscale \
--skip-tools \
--mariadb-server-version="11.4"$ sudo yum install jemalloc jq curl$ sudo apt install libjemalloc1 jq curl$ sudo apt install libjemalloc2 jq curl$ sudo yum install MariaDB-server \
MariaDB-backup \
MariaDB-shared \
MariaDB-client \
MariaDB-columnstore-engine \
MariaDB-columnstore-cmapi
$ sudo apt install mariadb-server \
mariadb-backup \
libmariadb3 \
mariadb-client \
mariadb-plugin-columnstore \
mariadb-columnstore-cmapi
CREATE DATABASE inventory;
CREATE TABLE inventory.products (
product_name VARCHAR(11) NOT NULL DEFAULT '',
supplier VARCHAR(128) NOT NULL DEFAULT '',
quantity VARCHAR(128) NOT NULL DEFAULT '',
unit_cost VARCHAR(128) NOT NULL DEFAULT ''
) ENGINE=Columnstore DEFAULT CHARSET=utf8;
$ sudo cpimport -s '\t' inventory products /tmp/inventory-products.tsv
LOAD DATA INFILE '/tmp/inventory-products.tsv'
INTO TABLE inventory.products;
$ mariadb --quick \
--skip-column-names \
--execute="SELECT * FROM inventory.products" \
| cpimport -s '\t' inventory products
# minimize swapping
vm.swappiness = 1
# Increase the TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Increase the TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1
# for 1 GigE, increase this to 2500
# for 10 GigE, increase this to 30000
net.core.netdev_max_backlog = 2500
$ sudo sysctl --load=/etc/sysctl.d/90-mariadb-enterprise-columnstore.conf
$ sudo setenforce permissive
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
$ sudo getenforce
Permissive
$ sudo systemctl disable apparmor
$ sudo aa-status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
$ sudo yum install glibc-locale-source glibc-langpack-en
$ sudo localedef -i en_US -f UTF-8 en_US.UTF-8
# minimize swapping
vm.swappiness = 1
# Increase the TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Increase the TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1
# for 1 GigE, increase this to 2500
# for 10 GigE, increase this to 30000
net.core.netdev_max_backlog = 2500
$ sudo sysctl --load=/etc/sysctl.d/90-mariadb-enterprise-columnstore.conf
$ sudo setenforce permissive
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
$ sudo getenforce
Permissive
$ sudo systemctl disable apparmor
$ sudo aa-status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
$ sudo yum install glibc-locale-source glibc-langpack-en
$ sudo localedef -i en_US -f UTF-8 en_US.UTF-8
$ sudo mariadb
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 38
Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_LIBRARY LIKE 'ha_columnstore%';
+---------------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+---------------------+---------------+
| Columnstore | ACTIVE |
| COLUMNSTORE_COLUMNS | ACTIVE |
| COLUMNSTORE_TABLES | ACTIVE |
| COLUMNSTORE_FILES | ACTIVE |
| COLUMNSTORE_EXTENTS | ACTIVE |
+---------------------+---------------+
CREATE DATABASE IF NOT EXISTS test;
CREATE TABLE IF NOT EXISTS test.contacts (
first_name VARCHAR(50),
last_name VARCHAR(50),
email VARCHAR(100)
) ENGINE=ColumnStore;
INSERT INTO test.contacts (first_name, last_name, email)
VALUES
("Kai", "Devi", "kai.devi@example.com"),
("Lee", "Wang", "lee.wang@example.com");SELECT * FROM test.contacts;+------------+-----------+----------------------+
| first_name | last_name | email |
+------------+-----------+----------------------+
| Kai | Devi | kai.devi@example.com |
| Lee | Wang | lee.wang@example.com |
+------------+-----------+----------------------+
CREATE TABLE test.addresses (
email VARCHAR(100),
street_address VARCHAR(255),
city VARCHAR(100),
state_code VARCHAR(2)
) ENGINE = InnoDB;
INSERT INTO test.addresses (email, street_address, city, state_code)
VALUES
("kai.devi@example.com", "1660 Amphibious Blvd.", "Redwood City", "CA"),
("lee.wang@example.com", "32620 Little Blvd", "Redwood City", "CA");SELECT name AS "Name", addr AS "Address"
FROM (SELECT CONCAT(first_name, " ", last_name) AS name,
email FROM test.contacts) AS contacts
INNER JOIN (SELECT CONCAT(street_address, ", ", city, ", ", state_code) AS addr,
email FROM test.addresses) AS addr
WHERE contacts.email = addr.email;
+----------+-----------------------------------------+
| Name | Address |
+----------+-----------------------------------------+
| Kai Devi | 1660 Amphibious Blvd., Redwood City, CA |
| Lee Wang | 32620 Little Blvd, Redwood City, CA |
+----------+-----------------------------------------+
+-------------------+-------------------------------------+
| Name | Address |
+-------------------+-------------------------------------+
| Walker Percy | 500 Thomas More Dr., Covington, LA |
| Flannery O'Connor | 300 Tarwater Rd., Milledgeville, GA |
+-------------------+-------------------------------------+
$ sudo yum install curl
$ sudo apt install curl apt-transport-https
$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -
$ chmod +x mariadb_es_repo_setup
$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--skip-maxscale \
--skip-tools \
--mariadb-server-version="11.4"$ sudo yum install epel-release
$ sudo yum install jemalloc$ sudo apt install libjemalloc2$ sudo apt install libjemalloc1$ sudo yum install MariaDB-server \
MariaDB-backup \
MariaDB-shared \
MariaDB-client \
MariaDB-columnstore-engine
$ sudo apt install mariadb-server \
mariadb-backup \
libmariadb3 \
mariadb-client \
mariadb-plugin-columnstore
$ sudo yum install curl
$ sudo apt install curl apt-transport-https
$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -
$ chmod +x mariadb_es_repo_setup
$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--skip-server \
--skip-tools \
--mariadb-maxscale-version="22.08"$ sudo yum install maxscale$ sudo apt install maxscaleBefore data can be imported into the tables, create a matching schema.
On the primary server, create the schema:
For each database that you are importing, create the database with the CREATE DATABASE statement:
For each table that you are importing, create the table with the CREATE TABLE statement:
Enterprise ColumnStore supports multiple methods to import data into ColumnStore tables.
Shell (cpimport): SQL access is not required.
SQL (LOAD DATA INFILE): Shell access is not required.
Remote database (SELECT piped into cpimport): Use a normal database client; avoid dumping data to an intermediate file.
MariaDB Enterprise ColumnStore includes cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a TSV (tab-separated values) file, on the primary server run cpimport:
When data is loaded with the LOAD DATA INFILE statement, MariaDB Enterprise ColumnStore loads the data using cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a TSV (tab-separated values) file, on the primary server use the LOAD DATA INFILE statement:
MariaDB Enterprise ColumnStore can also import data directly from a remote database. A simple method is to query the table using the SELECT statement, and then pipe the results into cpimport, which is a command-line utility that is designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a remote MariaDB database:
Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".
This page was step 9 of 9.
This procedure is complete.
Before data can be imported into the tables, create a matching schema.
On the primary server, create the schema:
For each database that you are importing, create the database with the CREATE DATABASE statement:
For each table that you are importing, create the table with the CREATE TABLE statement:
Enterprise ColumnStore supports multiple methods to import data into ColumnStore tables.
Shell (cpimport): SQL access is not required.
SQL (LOAD DATA INFILE): Shell access is not required.
Remote database (SELECT piped into cpimport): Use a normal database client; avoid dumping data to an intermediate file.
MariaDB Enterprise ColumnStore includes cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a TSV (tab-separated values) file, on the primary server run cpimport:
When data is loaded with the LOAD DATA INFILE statement, MariaDB Enterprise ColumnStore loads the data using cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a TSV (tab-separated values) file, on the primary server use the LOAD DATA INFILE statement:
MariaDB Enterprise ColumnStore can also import data directly from a remote database. A simple method is to query the table using the SELECT statement, and then pipe the results into cpimport, which is a command-line utility that is designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a remote MariaDB database:
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 9 of 9.
This procedure is complete.
This page details step 3 of the 6-step procedure "Deploy Galera Cluster Topology".
This step tests MariaDB Enterprise Server.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
Use Systemd to test whether the MariaDB Enterprise Server service is running.
This action is performed on each Enterprise Cluster node.
Check if the MariaDB Enterprise Server service is running by executing the following:
If the service is not running on any node, start the service by executing the following on that node:
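For reference, a typical systemd check and start, assuming the service is named mariadb as in a default Enterprise Server package installation:
$ sudo systemctl status mariadb
$ sudo systemctl start mariadb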
Use MariaDB Client to test the local connection to the Enterprise Server node.
This action is performed on each Enterprise Cluster node:
The sudo command is used here to connect to the Enterprise Server node using the root@localhost user account, which authenticates using the unix_socket authentication plugin. Other user accounts can be used by specifying the --user and --password command-line options.
MariaDB Enterprise Cluster is operational when the cluster has a Primary Component. Query the wsrep_cluster_status status variable with SHOW GLOBAL STATUS to confirm that each node belongs to the Primary Component.
This action is performed on each Enterprise Cluster node.
Check the cluster status by executing the following:
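For example, using the wsrep_cluster_status status variable (the standard Galera indicator of Primary Component membership):
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';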
If the Value column does not contain Primary on any node, then the node is not part of the Primary Component. Investigate network connectivity between the node and the nodes in the Primary Component.
MariaDB Enterprise Cluster maintains a count of the cluster size. Query the wsrep_cluster_size status variable with SHOW GLOBAL STATUS to confirm the number of nodes currently in the cluster.
This action is performed on each Enterprise Cluster node.
Check the cluster size by executing the following:
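For example, using the wsrep_cluster_size status variable:
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';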
If the Value column does not contain the expected number of nodes, then some nodes might not be in the cluster. Check the value of the wsrep_cluster_name system variable on each node to confirm that all nodes are in the same cluster.
Use MariaDB Client to test DDL.
On a single Enterprise Cluster node, use the MariaDB Client to connect to the node:
Create a test database and InnoDB table:
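For example, a minimal sketch; the database and table names below are illustrative, not mandated by the procedure:
CREATE DATABASE IF NOT EXISTS test;
CREATE TABLE IF NOT EXISTS test.galera_ddl_check (
   id INT PRIMARY KEY,
   note VARCHAR(50)
) ENGINE=InnoDB;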
On each other Enterprise Cluster node, use the MariaDB Client to connect to the node:
Confirm that the database and table exist:
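For example, using the illustrative names from the previous step:
SHOW TABLES FROM test;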
If the database or table do not exist on any node, then check that:
The nodes are in the Primary Component of the same cluster.
The wsrep_osu_method system variable is not set to RSU.
Use MariaDB Client to test DML.
On a single Enterprise Cluster node, use the MariaDB Client to connect to the node:
Insert sample data into the table created in the DDL test:
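Continuing the illustrative table created in the DDL test:
INSERT INTO test.galera_ddl_check (id, note)
VALUES (1, 'written on node 1');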
On each other Enterprise Cluster node, use the MariaDB Client to connect to the node:
Execute a query to retrieve the data:
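For example, reading back the row inserted in the previous step from the illustrative table:
SELECT * FROM test.galera_ddl_check;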
If the data is not returned on any node, then check that:
The nodes are in the Primary Component of the same cluster.
The table uses the InnoDB storage engine.
The wsrep_on system variable is set to ON.
Navigation in the procedure "Deploy Galera Cluster Topology":
This page was step 3 of 6.
Next: Step 4: Install MariaDB MaxScale
This page details step 1 of a 5-step procedure for deploying Single-Node Enterprise ColumnStore with Local storage.
This step prepares the system to host MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Enterprise ColumnStore performs best with Linux kernel optimizations.
On each server to host an Enterprise ColumnStore node, optimize the kernel:
Set the relevant kernel parameters in a sysctl configuration file. To ensure proper change management, use an Enterprise ColumnStore-specific configuration file.
Create a /etc/sysctl.d/90-mariadb-enterprise-columnstore.conf file:
Use the sysctl command to set the kernel parameters at runtime:
The Linux Security Modules (LSM) should be temporarily disabled on each Enterprise ColumnStore node during installation.
The LSM will be configured and re-enabled later in this deployment procedure.
The steps to disable the LSM depend on the specific LSM used by the operating system.
SELinux must be set to permissive mode before installing MariaDB Enterprise ColumnStore.
To set SELinux to permissive mode:
Set SELinux to permissive mode:
Set SELinux to permissive mode by setting SELINUX=permissive in /etc/selinux/config.
For example, the file will usually look like this after the change:
Confirm that SELinux is in permissive mode:
SELinux will be configured and re-enabled later in this deployment procedure. This configuration is not persistent. If you restart the server before configuring and re-enabling SELinux later in the deployment procedure, you must reset the enforcement to permissive mode.
AppArmor must be disabled before installing MariaDB Enterprise ColumnStore.
Disable AppArmor:
Reboot the system.
Confirm that no AppArmor profiles are loaded using aa-status:
AppArmor will be configured and re-enabled later in this deployment procedure.
When using MariaDB Enterprise ColumnStore, it is recommended to set the system's locale to UTF-8.
On RHEL 8, install additional dependencies:
Set the system's locale to en_US.UTF-8 by executing localedef:
Navigation in the Single-Node Enterprise ColumnStore topology with Local storage deployment procedure:
This page was step 1 of 5.
Next: Step 2: Install MariaDB Enterprise ColumnStore.
This page details step 3 of a 5-step procedure for deploying Single-Node Enterprise ColumnStore with Local storage.
This step starts and configures MariaDB Enterprise Server and MariaDB Enterprise ColumnStore 23.10.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
Mandatory system variables and options for Single-Node Enterprise ColumnStore include:
Start and enable the MariaDB Enterprise Server service, so that it starts automatically upon reboot:
Start and enable the MariaDB Enterprise ColumnStore service, so that it starts automatically upon reboot:
Enterprise ColumnStore requires a mandatory utility user account. By default, it connects to the server using the root user with no password. MariaDB Enterprise Server 10.6 will reject this login attempt by default, so you will need to configure Enterprise ColumnStore to use a different user account and password and create this user account on Enterprise Server.
On the Enterprise ColumnStore node, create the user account with the CREATE USER statement:
On the Enterprise ColumnStore node, grant the user account SELECT privileges on all databases with the GRANT statement:
Configure Enterprise ColumnStore to use the utility user:
Set the password:
For details about how to encrypt the password, see "".
Passwords should meet your organization's password policies. If your MariaDB Enterprise Server instance has a password validation plugin installed, then the password should also meet the configured requirements.
The specific steps to configure the security module depend on the operating system.
Configure SELinux for Enterprise ColumnStore:
To configure SELinux, you have to install the packages required for audit2allow. On CentOS 7 and RHEL 7, install the following:
On RHEL 8, install the following:
Allow the system to run under load for a while to generate SELinux audit events.
After the system has taken some load, generate an SELinux policy from the audit events using audit2allow:
If no audit events were found, this will print the following:
If audit events were found, the new SELinux policy can be loaded using semodule:
Set SELinux to enforcing mode by setting SELINUX=enforcing in /etc/selinux/config.
For example, the file will usually look like this after the change:
Set SELinux to enforcing mode:
For information on how to create an AppArmor profile, see the AppArmor documentation on ubuntu.com.
Navigation in the Single-Node Enterprise ColumnStore topology with Local storage deployment procedure:
This page was step 3 of 5.
Next: Step 4: Test MariaDB Enterprise ColumnStore.
This page details step 1 of the 3-step procedure "Deploy Spider Federated Topology".
This step installs the Enterprise Spider storage engine plugin on the Spider Node.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Enterprise Spider requires network connectivity between the Spider Node and the Data Node. This may require adjustments to firewall and security settings.
The plugin is not installed with MariaDB Enterprise Server by default. An additional package must be installed.
Install via YUM (CentOS, RHEL, Rocky Linux)
On the Spider Node, install MariaDB Enterprise Spider:
Install via APT (Debian, Ubuntu)
On the Spider Node, install MariaDB Enterprise Spider:
Install via ZYpp (SLES)
On the Spider Node, install MariaDB Enterprise Spider:
The plugin must be loaded by MariaDB Enterprise Server.
On the Spider Node, use one of the following methods to configure MariaDB Enterprise Server to load the Enterprise Spider storage engine plugin:
On the Spider Node, set the plugin_load_add option to ha_spider in a configuration file. This option configures MariaDB Enterprise Server to load the Enterprise Spider storage engine plugin. The Spider Node must be restarted to detect the configuration change.
Choose a configuration file for custom changes to system variables and options. It is not recommended to make custom changes to Enterprise Server's default configuration files, because your custom changes can be overwritten by other default configuration files that are loaded after. Ensure that your custom changes will be read last by creating a custom configuration file in one of the included directories. Configuration files in included directories are read in alphabetical order. Ensure that your custom configuration file is read last by using the z- prefix in the file name. Some example configuration file paths for different distributions are shown in the following table:
Set the plugin_load_add option in the configuration file.
It must be set in a group that will be read by MariaDB Server, such as [mariadb] or [server].
For example:
Restart MariaDB Enterprise Server:
On the Spider Node, execute the INSTALL SONAME statement with the library name ha_spider. The INSTALL SONAME statement configures MariaDB Enterprise Server to load the Enterprise Spider storage engine plugin. The INSTALL SONAME statement requires the SUPER privilege.
The INSTALL SONAME statement adds the Enterprise Spider storage engine to the mysql.plugin system table. When the Spider Node is restarted, MariaDB Enterprise Server reads the system table and reloads the plugin, so the statement only needs to be executed once.
Connect to the Spider Node using MariaDB Client:
Use the INSTALL SONAME statement to install the Enterprise Spider storage engine plugin:
On the Spider Node, confirm that the Enterprise Spider storage engine plugin is loaded by querying the information_schema.PLUGINS table:
When the Enterprise Spider storage engine is loaded, the PLUGIN_NAME column contains the value SPIDER and the PLUGIN_STATUS column contains the value ACTIVE.
Navigation in the procedure "Deploy Spider Federated Topology":
This page was step 1 of 3.
Next: Step 2: Configure Spider Node and Data Node
This page details step 3 of the 3-step procedure "Deploy Spider Federated Topology".
This step tests MariaDB Enterprise Spider.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
Use Systemd to test whether the MariaDB Enterprise Server service is running.
This action is performed on the Spider Node and Data Node.
Check if the MariaDB Enterprise Server service is running by executing the following:
If the service is not running on any node, start the service by executing the following on that node:
Use MariaDB Client to test the local connection to the Enterprise Server node.
This action is performed on the Spider Node and Data Node:
The sudo command is used here to connect to the Enterprise Server node using the root@localhost user account, which authenticates using the unix_socket authentication plugin. Other user accounts can be used by specifying the --user and --password command-line options.
Use MariaDB Client to test a client connection to the Data Node from the Spider Node using the Spider user.
This action is performed on the Spider Node:
The host and port of the Data Node can be provided using the --host and --port command-line options. The credentials for the Spider user can be provided using the --user and --password command-line options.
If the Spider user is unable to connect to the Data Node from the Spider Node, check the password for the Spider user account on the Data Node.
For additional information, see "".
Query the information_schema.PLUGINS table to confirm that the Enterprise Spider storage engine is loaded.
This action is performed on the Spider Node.
Execute the following query:
The PLUGIN_STATUS column for each Spider-related plugin should contain ACTIVE.
For additional information, see "".
Write to the Spider Table using an INSERT statement to test write operations.
This action is performed on the Spider Node.
Execute the following query:
Read from the Spider Table using a SELECT statement to test read operations.
This action is performed on the Spider Node.
Execute the following query:
Navigation in the procedure "Deploy Spider Federated Topology".
This page was step 3 of 3.
This procedure is complete.
CREATE DATABASE inventory;
CREATE TABLE inventory.products (
  product_name VARCHAR(11) NOT NULL DEFAULT '',
  supplier VARCHAR(128) NOT NULL DEFAULT '',
  quantity VARCHAR(128) NOT NULL DEFAULT '',
  unit_cost VARCHAR(128) NOT NULL DEFAULT ''
) ENGINE=Columnstore DEFAULT CHARSET=utf8;
$ sudo cpimport -s '\t' inventory products /tmp/inventory-products.tsv
LOAD DATA INFILE '/tmp/inventory-products.tsv'
INTO TABLE inventory.products;
$ mariadb --quick \
  --skip-column-names \
  --execute="SELECT * FROM inventory.products" \
  | cpimport -s '\t' inventory products
character_set_server
Set this system variable to utf8
collation_server
Set this system variable to utf8_general_ci
columnstore_use_import_for_batchinsert
Set this system variable to ALWAYS to always use cpimport for LOAD DATA INFILE and INSERT...SELECT statements.
Shell
SQL access is not required
SUPER privilege is not required
Configuration file can be version controlled
SQL
Shell access is not required
File system privileges on configuration file are not required
Plugin is included in backup of mysql.plugin system table
Spider Node restart is not required
CentOS
RHEL
Rocky Linux
SLES
/etc/my.cnf.d/z-custom-mariadb.cnf
Debian
Ubuntu
/etc/mysql/mariadb.conf.d/z-custom-mariadb.cnf
In a ColumnStore Shared Local Storage topology, MariaDB Enterprise ColumnStore requires the Storage Manager directory and the DB Root directories to be located on shared local storage.
The Storage Manager directory is at the following path:
/var/lib/columnstore/storagemanager
The DB Root directories are at the following paths:
/var/lib/columnstore/dataN
The N in dataN represents a range of integers that starts at 1 and stops at the number of nodes in the deployment. For example, with a 3-node Enterprise ColumnStore deployment, this would refer to the following directories:
/var/lib/columnstore/data1
/var/lib/columnstore/data2
/var/lib/columnstore/data3
The DB Root directories must be mounted on every ColumnStore node.
Select a Shared Local Storage solution for the Storage Manager directory:
EBS (Elastic Block Store) Multi-Attach
EFS (Elastic File System)
Filestore
GlusterFS
NFS (Network File System)
For additional information, see "Shared Local Storage Options".
EBS is a high-performance block-storage service for AWS (Amazon Web Services). EBS Multi-Attach allows an EBS volume to be attached to multiple instances in AWS. Only clustered file systems, such as GFS2, are supported.
For Enterprise ColumnStore deployments in AWS:
EBS Multi-Attach is a recommended option for the Storage Manager directory.
Amazon S3 storage is the recommended option for data.
Consult the vendor documentation for details on how to configure EBS Multi-Attach.
EFS is a scalable, elastic, cloud-native NFS file system for AWS (Amazon Web Services)
For deployments in AWS:
EFS is a recommended option for the Storage Manager directory.
Amazon S3 storage is the recommended option for data.
Consult the vendor documentation for details on how to configure EFS.
Filestore is high-performance, fully managed storage for GCP (Google Cloud Platform).
For Enterprise ColumnStore deployments in GCP:
Filestore is the recommended option for the Storage Manager directory.
Google Object Storage (S3-compatible) is the recommended option for data.
Consult the vendor documentation for details on how to configure Filestore.
GlusterFS is a distributed file system.
GlusterFS is a shared local storage option, but it is not one of the recommended options.
For more information, see "Recommended Storage Options".
On each Enterprise ColumnStore node, install GlusterFS.
Install on CentOS / RHEL 8 (YUM):
Install on CentOS / RHEL 7 (YUM):
Install on Debian (APT):
Install on Ubuntu (APT):
Start the GlusterFS daemon:
Before you can create a volume with GlusterFS, you must probe each node from a peer node.
On the primary node, probe all of the other cluster nodes:
On one of the replica nodes, probe the primary node to confirm that it is connected:
On the primary node, check the peer status:
Number of Peers: 2
Create the GlusterFS volumes for MariaDB Enterprise ColumnStore. Each volume must have the same number of replicas as the number of Enterprise ColumnStore nodes.
On each Enterprise ColumnStore node, create the directory for each brick in the /brick directory:
On the primary node, create the GlusterFS volumes:
On the primary node, start the volume:
On each Enterprise ColumnStore node, create mount points for the volumes:
On each Enterprise ColumnStore node, add the mount points to /etc/fstab:
On each Enterprise ColumnStore node, mount the volumes:
NFS is a distributed file system. NFS is available in most Linux distributions. If NFS is used for an Enterprise ColumnStore deployment, the storage must be mounted with the sync option to ensure that each node flushes its changes immediately.
For on-premises deployments:
NFS is the recommended option for the Storage Manager directory.
Any S3-compatible storage is the recommended option for data.
Consult the documentation for your NFS implementation for details on how to configure NFS.
Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".
This page was step 2 of 9.
Next: Step 3: Install MariaDB Enterprise Server.
MariaDB Enterprise ColumnStore 23.10 includes a testS3Connection command to test the S3 configuration, permissions, and connectivity.
On each Enterprise ColumnStore node, test the S3 configuration:
If the testS3Connection command does not return OK, investigate the S3 configuration.
Connect to the server using MariaDB Client using the root@localhost user account:
Query information_schema.PLUGINS and confirm that the ColumnStore storage engine plugin is ACTIVE:
Create a test database, if it does not exist:
Create a ColumnStore table:
Add sample data into the table:
Read data from table:
Create an InnoDB table:
Add data to the table:
Perform a cross-engine join:
Navigation in the Single-Node Enterprise ColumnStore topology with Object storage deployment procedure:
This page was step 4 of 5.
Next: Step 5: Bulk Import of Data.
# minimize swapping
vm.swappiness = 1
# Increase the TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Increase the TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1
# for 1 GigE, increase this to 2500
# for 10 GigE, increase this to 30000
net.core.netdev_max_backlog = 2500
$ sudo sysctl --load=/etc/sysctl.d/90-mariadb-enterprise-columnstore.conf
$ sudo setenforce permissive
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
$ sudo getenforce
Permissive
$ sudo systemctl disable apparmor
$ sudo aa-status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
$ sudo yum install glibc-locale-source glibc-langpack-en
$ sudo localedef -i en_US -f UTF-8 en_US.UTF-8
[mariadb]
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci
$ sudo systemctl start mariadb
$ sudo systemctl enable mariadb
$ sudo systemctl start mariadb-columnstore
$ sudo systemctl enable mariadb-columnstore
CREATE USER 'util_user'@'127.0.0.1'
IDENTIFIED BY 'util_user_passwd';
GRANT SELECT, PROCESS ON *.*
TO 'util_user'@'127.0.0.1';
$ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
$ sudo mcsSetConfig CrossEngineSupport Port 3306
$ sudo mcsSetConfig CrossEngineSupport User util_user
$ sudo mcsSetConfig CrossEngineSupport Password util_user_passwd
$ sudo yum install policycoreutils policycoreutils-python
$ sudo yum install policycoreutils python3-policycoreutils policycoreutils-python-utils
$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
Nothing to do
$ sudo semodule -i mariadb_local.pp
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
$ sudo setenforce enforcing
$ sudo yum install MariaDB-spider-engine
$ sudo apt install mariadb-plugin-spider
$ sudo zypper install MariaDB-spider-engine
[mariadb]
...
plugin_load_add = "ha_spider"
$ sudo systemctl restart mariadb
$ sudo mariadb
INSTALL SONAME "ha_spider";
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_LIBRARY LIKE "ha_spider%";
+--------------------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+--------------------------+---------------+
| SPIDER | ACTIVE |
| SPIDER_ALLOC_MEM | ACTIVE |
| SPIDER_WRAPPER_PROTOCOLS | ACTIVE |
+--------------------------+---------------+
$ systemctl status mariadb
$ sudo systemctl start mariadb
$ sudo mariadb
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 38
Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
$ mariadb \
--host 192.0.2.2 \
--user spider_user \
--password
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 38
Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_LIBRARY LIKE 'ha_spider%';
+--------------------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+--------------------------+---------------+
| SPIDER | ACTIVE |
| SPIDER_ALLOC_MEM | ACTIVE |
| SPIDER_WRAPPER_PROTOCOLS | ACTIVE |
+--------------------------+---------------+
INSERT INTO spider_hq_sales.invoices
  (branch_id, invoice_id, customer_id, invoice_date, invoice_total, payment_method)
VALUES (1, 4, 1, '2021-03-10 12:45:10', 3045.73, 'CREDIT_CARD');
SELECT * FROM spider_hq_sales.invoices;
+-----------+------------+-------------+----------------------------+---------------+----------------+
| branch_id | invoice_id | customer_id | invoice_date | invoice_total | payment_method |
+-----------+------------+-------------+----------------------------+---------------+----------------+
| 1 | 1 | 1 | 2020-05-10 12:35:10.000000 | 1087.23 | CREDIT_CARD |
| 1 | 2 | 2 | 2020-05-10 14:17:32.000000 | 1508.57 | WIRE_TRANSFER |
| 1 | 3 | 3 | 2020-05-10 14:25:16.000000 | 227.15 | CASH |
| 1 | 4 | 1 | 2021-03-10 12:45:10.000000 | 3045.73 | CREDIT_CARD |
+-----------+------------+-------------+----------------------------+---------------+----------------+
$ sudo yum install --enablerepo=PowerTools glusterfs-server
$ sudo yum install centos-release-gluster
$ sudo yum install glusterfs-server
$ wget -O - https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub | apt-key add -
$ DEBID=$(grep 'VERSION_ID=' /etc/os-release | cut -d '=' -f 2 | tr -d '"')
$ DEBVER=$(grep 'VERSION=' /etc/os-release | grep -Eo '[a-z]+')
$ DEBARCH=$(dpkg --print-architecture)
$ echo deb https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/${DEBID}/${DEBARCH}/apt ${DEBVER} main > /etc/apt/sources.list.d/gluster.list
$ sudo apt update
$ sudo apt install glusterfs-server
$ sudo apt update
$ sudo apt install glusterfs-server
$ sudo systemctl start glusterd
$ sudo systemctl enable glusterd
$ sudo gluster peer probe mcs2
$ sudo gluster peer probe mcs3
$ sudo gluster peer probe mcs1
peer probe: Host mcs1 port 24007 already in peer list
$ sudo gluster peer status
Hostname: mcs2
Uuid: 3c8a5c79-22de-45df-9034-8ae624b7b23e
State: Peer in Cluster (Connected)
Hostname: mcs3
Uuid: 862af7b2-bb5e-4b1c-8311-630fa32ed451
State: Peer in Cluster (Connected)
$ sudo mkdir -p /brick/storagemanager
$ sudo gluster volume create storagemanager \
    replica 3 \
    mcs1:/brick/storagemanager \
    mcs2:/brick/storagemanager \
    mcs3:/brick/storagemanager \
    force
$ sudo gluster volume start storagemanager
$ sudo mkdir -p /var/lib/columnstore/storagemanager
127.0.0.1:storagemanager /var/lib/columnstore/storagemanager glusterfs defaults,_netdev 0 0
$ sudo mount -a
$ sudo testS3Connection
StorageManager[26887]: Using the config file found at /etc/columnstore/storagemanager.cnf
StorageManager[26887]: S3Storage: S3 connectivity & permissions are OK
S3 Storage Manager Configuration OK
$ sudo mariadb
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 38
Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_LIBRARY LIKE 'ha_columnstore%';
+---------------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+---------------------+---------------+
| Columnstore | ACTIVE |
| COLUMNSTORE_COLUMNS | ACTIVE |
| COLUMNSTORE_TABLES | ACTIVE |
| COLUMNSTORE_FILES | ACTIVE |
| COLUMNSTORE_EXTENTS | ACTIVE |
+---------------------+---------------+
CREATE DATABASE IF NOT EXISTS test;
CREATE TABLE IF NOT EXISTS test.contacts (
  first_name VARCHAR(50),
  last_name VARCHAR(50),
  email VARCHAR(100)
) ENGINE=ColumnStore;
INSERT INTO test.contacts (first_name, last_name, email)
VALUES
  ("Kai", "Devi", "kai.devi@example.com"),
  ("Lee", "Wang", "lee.wang@example.com");
SELECT * FROM test.contacts;
+------------+-----------+----------------------+
| first_name | last_name | email |
+------------+-----------+----------------------+
| Kai | Devi | kai.devi@example.com |
| Lee | Wang | lee.wang@example.com |
+------------+-----------+----------------------+
CREATE TABLE test.addresses (
  email VARCHAR(100),
  street_address VARCHAR(255),
  city VARCHAR(100),
  state_code VARCHAR(2)
) ENGINE = InnoDB;
INSERT INTO test.addresses (email, street_address, city, state_code)
VALUES
  ("kai.devi@example.com", "1660 Amphibious Blvd.", "Redwood City", "CA"),
  ("lee.wang@example.com", "32620 Little Blvd", "Redwood City", "CA");
SELECT name AS "Name", addr AS "Address"
  FROM (SELECT CONCAT(first_name, " ", last_name) AS name,
    email FROM test.contacts) AS contacts
  INNER JOIN (SELECT CONCAT(street_address, ", ", city, ", ", state_code) AS addr,
    email FROM test.addresses) AS addr
  WHERE contacts.email = addr.email;
+----------+-----------------------------------------+
| Name | Address |
+----------+-----------------------------------------+
| Kai Devi | 1660 Amphibious Blvd., Redwood City, CA |
| Lee Wang | 32620 Little Blvd, Redwood City, CA |
+----------+-----------------------------------------+
+-------------------+-------------------------------------+
| Name | Address |
+-------------------+-------------------------------------+
| Walker Percy | 500 Thomas More Dr., Covington, LA |
| Flannery O'Connor | 300 Tarwater Rd., Milledgeville, GA |
+-------------------+-------------------------------------+
In a ColumnStore Object Storage topology, MariaDB Enterprise ColumnStore requires the Storage Manager directory to be located on shared local storage.
The Storage Manager directory is at the following path:
/var/lib/columnstore/storagemanager
The Storage Manager directory must be mounted on every ColumnStore node.
Select a Shared Local Storage solution for the Storage Manager directory:
For additional information, see "Shared Local Storage Options".
EBS is a high-performance block-storage service for AWS (Amazon Web Services). EBS Multi-Attach allows an EBS volume to be attached to multiple instances in AWS. Only clustered file systems, such as GFS2, are supported.
For Enterprise ColumnStore deployments in AWS:
EBS Multi-Attach is a recommended option for the Storage Manager directory.
Amazon S3 storage is the recommended option for data.
Consult the vendor documentation for details on how to configure EBS Multi-Attach.
EFS is a scalable, elastic, cloud-native NFS file system for AWS (Amazon Web Services)
For deployments in AWS:
EFS is a recommended option for the Storage Manager directory.
Amazon S3 storage is the recommended option for data.
Consult the vendor documentation for details on how to configure EFS.
Filestore is high-performance, fully managed storage for GCP (Google Cloud Platform).
For Enterprise ColumnStore deployments in GCP:
Filestore is the recommended option for the Storage Manager directory.
Google Object Storage (S3-compatible) is the recommended option for data.
Consult the vendor documentation for details on how to configure Filestore.
GlusterFS is a distributed file system.
GlusterFS is a shared local storage option, but it is not one of the recommended options.
For more information, see "Recommended Storage Options".
On each Enterprise ColumnStore node, install GlusterFS.
Install on CentOS / RHEL 8 (YUM):
Install on CentOS / RHEL 7 (YUM):
Install on Debian (APT):
Install on Ubuntu (APT):
Start the GlusterFS daemon:
Before you can create a volume with GlusterFS, you must probe each node from a peer node.
On the primary node, probe all of the other cluster nodes:
On one of the replica nodes, probe the primary node to confirm that it is connected:
On the primary node, check the peer status:
Create the GlusterFS volumes for MariaDB Enterprise ColumnStore. Each volume must have the same number of replicas as the number of Enterprise ColumnStore nodes.
On each Enterprise ColumnStore node, create the directory for each brick in the /brick directory:
On the primary node, create the GlusterFS volumes:
On the primary node, start the volume:
On each Enterprise ColumnStore node, create mount points for the volumes:
On each Enterprise ColumnStore node, add the mount points to /etc/fstab:
On each Enterprise ColumnStore node, mount the volumes:
NFS is a distributed file system. NFS is available in most Linux distributions. If NFS is used for an Enterprise ColumnStore deployment, the storage must be mounted with the sync option to ensure that each node flushes its changes immediately.
For on-premises deployments:
NFS is the recommended option for the Storage Manager directory.
Any S3-compatible storage is the recommended option for data.
Consult the documentation for your NFS implementation for details on how to configure NFS.
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 2 of 9.
Next: Step 3: Install MariaDB Enterprise Server.
The installation process might have started the Enterprise Server service. The service should be stopped prior to making configuration changes.
Stop the MariaDB Enterprise Server service:
Enterprise Server nodes require that you set the following system variables and options:
bind_address
The network socket Enterprise Server listens on for incoming TCP/IP client connections. On Debian or Ubuntu, this system variable must be set to override the 127.0.0.1 default configuration.
log_bin
Set this option to the file you want to use for the Binary Log. Setting this option enables binary logging.
server_id
Sets the numeric Server ID for this MariaDB Enterprise Server. The value set on this option must be unique to each node.
MariaDB Enterprise Server also supports group commit.
Group commit can help performance by reducing I/O.
If you would like to configure parallel replication on replica servers, then you must also configure group commit on the primary server.
binlog_commit_wait_count
Sets the number of transactions that the server commits as a group to the binary log.
binlog_commit_wait_usec
Sets the number of microseconds that the server waits for transactions to group commit before it commits the current group.
On each Enterprise Server node, edit a configuration file and set these system variables and options:
Set the server_id option to a value that is unique for each Enterprise Server node.
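A minimal sketch of such a configuration file for the primary server is shown below. The file path is the RHEL-family example path used in this guide (use the Debian/Ubuntu path where applicable), and the server_id value, binary log file name, and bind_address value are placeholders to adapt to your environment:
$ sudo tee /etc/my.cnf.d/z-custom-mariadb.cnf <<'EOF'
[mariadb]
log_error = mariadbd.err
log_bin = mariadb-bin
server_id = 1
bind_address = 0.0.0.0
EOF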
Start MariaDB Enterprise Server. If the Enterprise Server process is already running, restart it to apply the changes from the configuration file.
For additional information, see "Starting and Stopping MariaDB".
The Primary/Replica topology requires several user accounts. Each user account should be created on the primary server, so that it is replicated to the replica servers.
Primary/Replica uses MariaDB Replication to replicate writes between the primary and replica servers. As MaxScale can promote a replica server to become a new primary in the event of node failure, all nodes must have a replication user.
The action is performed on the primary server.
Create the replication user and grant it the required privileges:
Use the CREATE USER statement to create replication user.
Replace the referenced IP address with the relevant address for your environment.
Ensure that the user account can connect to the primary server from each replica.
Grant the user account the required privileges with the GRANT statement.
The following privileges are required:
Use this username and password for the MASTER_USER and MASTER_PASSWORD in the CHANGE MASTER TO statement when configuring replica servers in Step 3.
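A sketch of the statements described above, using a hypothetical user name, password, and client network (replace these with values appropriate to your environment); replication user accounts typically require at least the REPLICATION SLAVE privilege:
$ sudo mariadb <<'SQL'
CREATE USER 'repl'@'192.0.2.%' IDENTIFIED BY 'repl_passwd';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.0.2.%';
SQL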
Primary/Replica uses MariaDB MaxScale 25.01 to load balance between the nodes. MaxScale requires a database user to connect to the primary server when routing queries and to promote replicas in the event that the primary server fails.
This action is performed on the primary server.
Use the CREATE USER statement to create the MaxScale user:
Replace the referenced IP address with the relevant address for your environment.
Ensure that the user account can connect from the IP address of the MaxScale instance.
Use the GRANT statement to grant the privileges required by the router:
Use the GRANT statement to grant privileges required by the MariaDB Monitor.
The following privileges are required:
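As an illustrative sketch only (the account name, password, and MaxScale host are placeholders, and the exact privilege list depends on your MaxScale version and routers, so verify it against the MaxScale documentation), the statements might resemble:
$ sudo mariadb <<'SQL'
CREATE USER 'mxs_user'@'192.0.2.10' IDENTIFIED BY 'mxs_passwd';
GRANT SELECT ON mysql.* TO 'mxs_user'@'192.0.2.10';
GRANT SHOW DATABASES ON *.* TO 'mxs_user'@'192.0.2.10';
SQL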
Navigation in the procedure "Deploy Primary/Replica Topology":
This page was step 2 of 7.
Next: Step 3: Start and Configure MariaDB Enterprise Server on Replica Servers
MariaDB Enterprise Spider requires network connectivity between the Spider Node and all Data Nodes. This may require adjustments to firewall and security settings.
The Enterprise Spider storage engine plugin is not installed with MariaDB Enterprise Server by default. An additional package must be installed.
Install via YUM (CentOS, RHEL, Rocky Linux)
On the Spider Node, install MariaDB Enterprise Spider:
Install via APT (Debian, Ubuntu)
On the Spider Node, install MariaDB Enterprise Spider:
Install via ZYpp (SLES)
On the Spider Node, install MariaDB Enterprise Spider:
The Enterprise Spider storage engine plugin must be loaded by MariaDB Enterprise Server.
On the Spider Node, use one of the following methods to configure MariaDB Enterprise Server to load the Enterprise Spider storage engine plugin:
Shell
SQL access is not required
SUPER privilege is not required
Configuration file can be version controlled
SQL
Shell access is not required
File system privileges on configuration file are not required
Plugin is included in backup of mysql.plugin system table
Spider Node restart is not required
On the Spider Node, set the plugin_load_add option to ha_spider in a configuration file. The plugin_load_add option configures MariaDB Enterprise Server to load the Enterprise Spider storage engine plugin. The Spider Node must be restarted to detect the configuration change.
Choose a configuration file for custom changes to system variables and options. It is not recommended to make custom changes to Enterprise Server's default configuration files, because your custom changes can be overwritten by other default configuration files that are loaded after. Ensure that your custom changes will be read last by creating a custom configuration file in one of the included directories. Configuration files in included directories are read in alphabetical order. Ensure that your custom configuration file is read last by using the z- prefix in the file name. Some example configuration file paths for different distributions are shown in the following table:
CentOS
RHEL
Rocky Linux
SLES
/etc/my.cnf.d/z-custom-mariadb.cnf
Debian
Ubuntu
/etc/mysql/mariadb.conf.d/z-custom-mariadb.cnf
Set the plugin_load_add option in the configuration file.
It must be set in a group that will be read by MariaDB Server, such as [mariadb] or [server].
For example:
Restart MariaDB Enterprise Server:
On the Spider Node, execute the INSTALL SONAME statement with the library name ha_spider. The INSTALL SONAME statement configures MariaDB Enterprise Server to load the Enterprise Spider storage engine plugin. The INSTALL SONAME statement requires the SUPER privilege.
The INSTALL SONAME statement adds the Enterprise Spider storage engine to the mysql.plugin system table. When the Spider Node is restarted, MariaDB Enterprise Server reads the system table and reloads the plugin, so the statement only needs to be executed once.
Connect to the Spider Node using MariaDB Client:
Use the INSTALL SONAME statement to install the Enterprise Spider storage engine plugin:
On the Spider Node, confirm that the Enterprise Spider storage engine plugin is loaded by querying the information_schema.PLUGINS table:
When the Enterprise Spider storage engine is loaded, the PLUGIN_NAME column contains the value SPIDER and the PLUGIN_STATUS column contains the value ACTIVE.
Navigation in the procedure "Deploy Spider Sharded Topology":
This page was step 1 of 3.
Next: Step 2: Configure Spider Node and Data Nodes.
This page details step 1 of the 9-step procedure "Deploy ColumnStore Shared Local Storage Topology".
This step prepares systems to host MariaDB Enterprise Server and MariaDB Enterprise ColumnStore 23.10.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Enterprise ColumnStore performs best with Linux kernel optimizations.
On each server to host an Enterprise ColumnStore node, optimize the kernel:
Set the relevant kernel parameters in a sysctl configuration file. To ensure proper change management, use an Enterprise ColumnStore-specific configuration file.
Create a /etc/sysctl.d/90-mariadb-enterprise-columnstore.conf file:
Use the sysctl command to set the kernel parameters at runtime
The Linux Security Modules (LSM) should be temporarily disabled on each Enterprise ColumnStore node during installation.
The LSM will be configured and re-enabled later in this deployment procedure.
The steps to disable the LSM depend on the specific LSM used by the operating system.
SELinux must be set to permissive mode before installing MariaDB Enterprise ColumnStore.
To set SELinux to permissive mode:
Set SELinux to permissive mode:
Set SELinux to permissive mode by setting SELINUX=permissive in /etc/selinux/config.
For example, the file will usually look like this after the change:
Confirm that SELinux is in permissive mode:
SELinux will be configured and re-enabled later in this deployment procedure. This configuration is not persistent. If you restart the server before configuring and re-enabling SELinux later in the deployment procedure, you must reset the enforcement to permissive mode.
AppArmor must be disabled before installing MariaDB Enterprise ColumnStore.
Disable AppArmor:
Reboot the system.
Confirm that no AppArmor profiles are loaded using aa-status:
AppArmor will be configured and re-enabled later in this deployment procedure.
MariaDB Enterprise ColumnStore requires the following TCP ports:
The firewall should be temporarily disabled on each Enterprise ColumnStore node during installation.
The firewall will be configured and re-enabled later in this deployment procedure.
The steps to disable the firewall depend on the specific firewall used by the operating system.
Check if the firewalld service is running:
If the firewalld service is running, stop it:
Firewalld will be configured and re-enabled later in this deployment procedure.
Check if the UFW service is running:
If the UFW service is running, stop it:
UFW will be configured and re-enabled later in this deployment procedure.
To install Enterprise ColumnStore on Amazon Web Services (AWS), the security group must be modified prior to installation.
Enterprise ColumnStore requires all internal communications to be open between Enterprise ColumnStore nodes. Therefore, the security group should allow all protocols and all ports to be open between the Enterprise ColumnStore nodes and the MaxScale proxy.
When using MariaDB Enterprise ColumnStore, it is recommended to set the system's locale to UTF-8.
On RHEL 8, install additional dependencies:
Set the system's locale to en_US.UTF-8 by executing localedef:
MariaDB Enterprise ColumnStore requires all nodes to have host names that are resolvable on all other nodes. If your infrastructure does not configure DNS centrally, you may need to configure static DNS entries in the /etc/hosts file of each server.
On each Enterprise ColumnStore node, edit the /etc/hosts file to map host names to the IP address of each Enterprise ColumnStore node:
Replace the IP addresses with the addresses in your own environment.
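For example, with three hypothetical nodes named mcs1, mcs2, and mcs3 on a documentation-range network (adjust the names and addresses for your environment), the entries could be appended like this:
$ sudo tee -a /etc/hosts <<'EOF'
192.0.2.101 mcs1
192.0.2.102 mcs2
192.0.2.103 mcs3
EOF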
Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".
This page was step 1 of 9.
This page details step 7 of the 9-step procedure "Deploy ColumnStore Shared Local Storage Topology".
This step starts and configures MariaDB MaxScale 22.08.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB MaxScale installations include a configuration file with some example objects. This configuration file should be replaced.
On the MaxScale node, replace the default /etc/maxscale.cnf with the following configuration:
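A minimal sketch of what the replacement file's global section might start with (threads=auto is an example parameter; the server, monitor, service, and listener objects are created with maxctrl in the following steps, and the "Global Parameters" documentation referenced below lists further options):
$ sudo tee /etc/maxscale.cnf <<'EOF'
[maxscale]
threads = auto
EOF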
For additional information, see "Global Parameters".
On the MaxScale node, restart the MaxScale service to ensure that MaxScale picks up the new configuration:
For additional information, see "Start and Stop Services".
On the MaxScale node, use maxctrl create server to create a server object for each Enterprise ColumnStore node:
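For example, assuming three ColumnStore nodes named mcs1, mcs2, and mcs3 with hypothetical addresses:
$ maxctrl create server mcs1 192.0.2.101 3306
$ maxctrl create server mcs2 192.0.2.102 3306
$ maxctrl create server mcs3 192.0.2.103 3306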
MaxScale uses monitors to retrieve additional information from the servers. This information is used by other services in filtering and routing connections based on the current state of the node. For MariaDB Enterprise ColumnStore, use the MariaDB Monitor (mariadbmon).
On the MaxScale node, use maxctrl create monitor to create a MariaDB Monitor:
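For example, using the server objects created above and placeholder credentials (MAXSCALE_USER and MAXSCALE_PASSWORD stand in for the MaxScale database user and its password):
$ maxctrl create monitor columnstore_monitor mariadbmon user=MAXSCALE_USER password=MAXSCALE_PASSWORD --servers mcs1 mcs2 mcs3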
In this example:
columnstore_monitor is an arbitrary name that is used to identify the new monitor.
mariadbmon is the name of the module that implements the MariaDB Monitor.
user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to monitor the ColumnStore nodes.
Routers control how MaxScale balances the load between Enterprise ColumnStore nodes. Each router uses a different approach to routing queries. Consider the specific use case of your application and database load and select the router that best suits your needs.
Use the MaxScale Read Connection Router (readconnroute) to route connections to replica servers for a read-only pool.
On the MaxScale node, use maxctrl create service to create a router:
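For example, with placeholder credentials and the server names from the earlier step:
$ maxctrl create service connection_router_service readconnroute user=MAXSCALE_USER password=MAXSCALE_PASSWORD --servers mcs1 mcs2 mcs3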
In this example:
connection_router_service is an arbitrary name that is used to identify the new service.
readconnroute is the name of the module that implements the Read Connection Router.
user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to connect to the ColumnStore nodes.
These instructions reference TCP port 3308. You can use a different TCP port. The TCP port used must not be bound by any other listener.
On the MaxScale node, use the maxctrl create listener command to configure MaxScale to use a listener for the Read Connection Router (readconnroute):
In this example:
connection_router_service is the name of the readconnroute service that was previously created.
connection_router_listener is an arbitrary name that is used to identify the new listener.
3308 is the TCP port.
The MaxScale Read/Write Split Router (readwritesplit) performs query-based load balancing. The router routes write queries to the primary and read queries to the replicas.
On the MaxScale node, use the maxctrl create service command to configure MaxScale to use the Read/Write Split Router (readwritesplit):
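For example, with placeholder credentials and the server names from the earlier step:
$ maxctrl create service query_router_service readwritesplit user=MAXSCALE_USER password=MAXSCALE_PASSWORD --servers mcs1 mcs2 mcs3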
In this example:
query_router_service is an arbitrary name that is used to identify the new service.
readwritesplit is the name of the module that implements the Read/Write Split Router.
user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to connect to the ColumnStore nodes.
These instructions reference TCP port 3307. You can use a different TCP port. The TCP port used must not be bound by any other listener.
On the MaxScale node, use the maxctrl create listener command to configure MaxScale to use a listener for the Read/Write Split Router (readwritesplit):
In this example:
query_router_service is the name of the readwritesplit service that was previously created.
query_router_listener is an arbitrary name that is used to identify the new listener.
3307 is the TCP port.
protocol=MariaDBClient sets the protocol used by the listener.
To start the services and monitors, on the MaxScale node use maxctrl start services:
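For example:
$ maxctrl start services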
Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".
This page was step 7 of 9.
Next: Step 8: Test MariaDB MaxScale.
This page details step 1 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".
This step prepares systems to host MariaDB Enterprise Server and MariaDB Enterprise ColumnStore 23.10.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Enterprise ColumnStore performs best with Linux kernel optimizations.
On each server to host an Enterprise ColumnStore node, optimize the kernel:
Set the relevant kernel parameters in a sysctl configuration file. To ensure proper change management, use an Enterprise ColumnStore-specific configuration file.
Create a /etc/sysctl.d/90-mariadb-enterprise-columnstore.conf file:
Use the sysctl command to set the kernel parameters at runtime
The Linux Security Modules (LSM) should be temporarily disabled on each Enterprise ColumnStore node during installation.
The LSM will be configured and re-enabled later in this deployment procedure.
The steps to disable the LSM depend on the specific LSM used by the operating system.
SELinux must be set to permissive mode before installing MariaDB Enterprise ColumnStore.
To set SELinux to permissive mode:
Set SELinux to permissive mode:
Set SELinux to permissive mode by setting SELINUX=permissive in /etc/selinux/config.
For example, the file will usually look like this after the change:
Confirm that SELinux is in permissive mode:
SELinux will be configured and re-enabled later in this deployment procedure. This configuration is not persistent. If you restart the server before configuring and re-enabling SELinux later in the deployment procedure, you must reset the enforcement to permissive mode.
AppArmor must be disabled before installing MariaDB Enterprise ColumnStore.
Disable AppArmor:
Reboot the system.
Confirm that no AppArmor profiles are loaded using aa-status:
AppArmor will be configured and re-enabled later in this deployment procedure.
MariaDB Enterprise ColumnStore requires the following TCP ports:
The firewall should be temporarily disabled on each Enterprise ColumnStore node during installation.
The firewall will be configured and re-enabled later in this deployment procedure.
The steps to disable the firewall depend on the specific firewall used by the operating system.
Check if the firewalld service is running:
If the firewalld service is running, stop it:
Firewalld will be configured and re-enabled later in this deployment procedure.
Check if the UFW service is running:
If the UFW service is running, stop it:
UFW will be configured and re-enabled later in this deployment procedure.
To install Enterprise ColumnStore on Amazon Web Services (AWS), the security group must be modified prior to installation.
Enterprise ColumnStore requires all internal communications to be open between Enterprise ColumnStore nodes. Therefore, the security group should allow all protocols and all ports to be open between the Enterprise ColumnStore nodes and the MaxScale proxy.
When using MariaDB Enterprise ColumnStore, it is recommended to set the system's locale to UTF-8.
On RHEL 8, install additional dependencies:
Set the system's locale to en_US.UTF-8 by executing localedef:
MariaDB Enterprise ColumnStore requires all nodes to have host names that are resolvable on all other nodes. If your infrastructure does not configure DNS centrally, you may need to configure static DNS entries in the /etc/hosts file of each server.
On each Enterprise ColumnStore node, edit the /etc/hosts file to map host names to the IP address of each Enterprise ColumnStore node:
Replace the IP addresses with the addresses in your own environment.
With the ColumnStore Object Storage topology, it is important to create the S3 bucket before you start ColumnStore. All Enterprise ColumnStore nodes access data from the same bucket.
If you already have an S3 bucket, confirm that the bucket is empty.
S3 bucket configuration will be performed later in this procedure.
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 1 of 9.
Next: Step 2: Configure Shared Local Storage.
This page details step 6 of the 7-step procedure "Deploy Primary/Replica Topology".
This step starts and configures MariaDB MaxScale.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB MaxScale installations include a configuration file with some example objects. This configuration file should be replaced.
On the MaxScale node, replace the default /etc/maxscale.cnf with the following configuration:
For additional information, see "".
On the MaxScale node, restart the MaxScale service to ensure that MaxScale picks up the new configuration:
For additional information, see "".
On the MaxScale node, use maxctrl create server to create a server object for each MariaDB Enterprise Server:
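For example, assuming one primary and two replicas with hypothetical names and addresses (adjust to your environment):
$ maxctrl create server mdb01 192.0.2.11 3306
$ maxctrl create server mdb02 192.0.2.12 3306
$ maxctrl create server mdb03 192.0.2.13 3306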
MaxScale uses monitors to retrieve additional information from the servers. This information is used by other services in filtering and routing connections based on the current state of the node. For MariaDB Replication, use the MariaDB Monitor (mariadbmon).
On the MaxScale node, use maxctrl create monitor to create a MariaDB Monitor:
In this example:
mdb_monitor is an arbitrary name that is used to identify the new monitor.
mariadbmon is the name of the module that implements the MariaDB Monitor.
user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to monitor the ES nodes.
Routers control how MaxScale balances the load between Enterprise Server nodes. Each router uses a different approach to routing queries. Consider the specific use case of your application and database load and select the router that best suits your needs.
Use MaxScale Read Connection Router (readconnroute) to route connections to replica servers for a read-only pool.
On the MaxScale node, use maxctrl create service to create a router:
In this example:
connection_router_service is an arbitrary name that is used to identify the new service.
readconnroute is the name of the module that implements the Read Connection Router.
user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to connect to the ES nodes.
These instructions reference TCP port 3308. You can use a different TCP port. The TCP port used must not be bound by any other listener.
On the MaxScale node, use the maxctrl create listener command to configure MaxScale to use a listener for the Read Connection Router (readconnroute):
In this example:
connection_router_service is the name of the readconnroute service that was previously created.
connection_router_listener is an arbitrary name that is used to identify the new listener.
3308 is the TCP port.
MaxScale Read/Write Split Router (readwritesplit) performs query-based load balancing. The router routes write queries to the primary and read queries to the replicas.
On the MaxScale node, use the maxctrl create service command to configure MaxScale to use the Read/Write Split Router (readwritesplit):
In this example:
query_router_service is an arbitrary name that is used to identify the new service.
readwritesplit is the name of the module that implements the Read/Write Split Router.
user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to connect to the ES nodes.
These instructions reference TCP port 3307. You can use a different TCP port. The TCP port used must not be bound by any other listener.
On the MaxScale node, use the maxctrl create listener command to configure MaxScale to use a listener for the Read/Write Split Router (readwritesplit):
In this example:
query_router_service is the name of the readwritesplit service that was previously created.
query_router_listener is an arbitrary name that is used to identify the new listener.
3307 is the TCP port.
To start the services and monitors, on the MaxScale node use maxctrl start services:
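For example:
$ maxctrl start services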
Navigation in the procedure "Deploy Primary/Replica Topology":
This page was step 6 of 7.
Next: Step 7: Test MariaDB MaxScale
In the Federated MariaDB Enterprise Spider topology, a Spider Node contains one or more "virtual" Spider Tables. A Spider Table does not store data. When a Spider Table is queried, the Enterprise Spider storage engine uses a MariaDB foreign data wrapper to read from and write to a Data Table on a Data Node.
MariaDB Enterprise Spider:
Supports a MariaDB foreign data wrapper. The MariaDB foreign data wrapper can be used to replace the older Federated and FederatedX storage engines.
Supports an ODBC foreign data wrapper in MariaDB Enterprise Server 10.5 and later. The ODBC foreign data wrapper is beta maturity. The maturity can be confirmed by querying the information_schema.PLUGINS table.
The Spider Federated topology:
Can be used to query tables located on a different MariaDB Enterprise Server node from the Spider Node using the MariaDB foreign data wrapper.
Can be used to join tables located on a different MariaDB Enterprise Server node with tables on the Spider Node using the MariaDB foreign data wrapper.
Can be used to migrate tables located on a different MariaDB Enterprise Server node to the Spider Node using the MariaDB foreign data wrapper.
In the Spider Federated topology, a Spider Node contains one or more "virtual" Spider Tables. A Spider Table does not store data. When a Spider Table is queried, the Enterprise Spider storage engine uses a MariaDB foreign data wrapper to read from and write to a Data Table on a Data Node.
The Spider Federated topology consists of:
One MariaDB Enterprise Server node is a Spider Node
One MariaDB Enterprise Server node is a Data Node
The Spider Node:
Contains one or more Spider Tables
Uses the Enterprise Spider storage engine plugin for Spider Tables
Uses a MariaDB foreign data wrapper to query the Data Table on the Data Node
The Data Node:
Contains a Data Table for each Spider Table
Uses a non-Spider storage engine for each Data Table, such as InnoDB
The Spider Federated topology can be used to query tables located on another MariaDB Enterprise Server node:
The MariaDB Enterprise Server node with the desired table is configured as a Data Node.
The MariaDB Enterprise Server node that needs to query the table is configured as a Spider Node.
The Data Table is the desired table on the Data Node.
A Spider Table is created on the Spider Node that references the Data Table on the Data Node.
Non-Spider tables can also be referenced in queries with the Spider Table.
The Spider Federated topology can be used to migrate tables from one MariaDB Enterprise Server node to another MariaDB Enterprise Server node:
The MariaDB Enterprise Server node with the source table is configured as a Data Node.
The MariaDB Enterprise Server node with the destination table is configured as a Spider Node.
The Data Table is the source table on the Data Node.
A Spider Table is created on the Spider Node that references the Data Table on the Data Node.
In the ODBC MariaDB Enterprise Spider topology, a Spider Node contains one or more "virtual" Spider Tables. A Spider Table does not store data. When the Spider Table is queried in this topology, the Enterprise Spider storage engine uses an ODBC foreign data wrapper to read from and write to an ODBC Data Source.
MariaDB Enterprise Spider:
Supports a MariaDB foreign data wrapper. The MariaDB foreign data wrapper can be used to replace the older Federated and FederatedX storage engines.
Supports an ODBC foreign data wrapper in MariaDB Enterprise Server 10.5 and later. The maturity can be confirmed by querying the information_schema.PLUGINS table.
The Spider ODBC topology:
Can be used to query ODBC Data Sources from the Spider Node using the ODBC foreign data wrapper.
Can be used to join ODBC Data Sources with tables on the Spider Node using the ODBC foreign data wrapper.
Can be used to migrate table data from ODBC Data Sources to the Spider Node using the ODBC foreign data wrapper.
ODBC MariaDB Enterprise Spider Topology
In the Spider ODBC topology, a Spider Node contains one or more "virtual" Spider Tables. A Spider Table does not store data. When the Spider Table is queried in this topology, the Enterprise Spider storage engine uses an ODBC foreign data wrapper to read from and write to an ODBC Data Source.
MariaDB Enterprise Spider implemented support for the ODBC foreign data wrapper in MariaDB Enterprise Server 10.5.
The maturity can be confirmed by querying the information_schema.PLUGINS table.
The Spider ODBC topology consists of:
One MariaDB Enterprise Server node is a Spider Node
One ODBC Data Source stores data for Spider Tables
The Spider Node:
Contains one or more Spider Tables
Uses the Enterprise Spider storage engine plugin for Spider Tables
Uses an ODBC foreign data wrapper to query the ODBC Data Source
The ODBC Data Source:
Contains the Data for each Spider Table
Can be a non-MariaDB database server, such as Microsoft SQL Server, Oracle, or PostgreSQL
The Spider ODBC topology can be used to query tables located on non-MariaDB databases:
The non-MariaDB database is configured as an ODBC Data Source in the ODBC Driver Manager.
The MariaDB Enterprise Server node that needs to query the ODBC Data Source is configured as a Spider Node.
A Spider Table is created on the Spider Node that references the ODBC Data Source.
On the Spider Node, the ODBC Data Source is queried by querying the Spider Table like the following:
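For example, assuming a Spider Table named spider_odbc.remote_orders has been created against the ODBC Data Source (the database and table names here are hypothetical), the remote data is read with an ordinary SELECT:
SELECT * FROM spider_odbc.remote_orders;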
The Spider ODBC topology can be used to migrate tables from a non-MariaDB database to a MariaDB Enterprise Server node:
The non-MariaDB database is configured as an ODBC Data Source in the ODBC Driver Manager.
The MariaDB Enterprise Server node with the destination table is configured as a Spider Node.
A Spider Table is created on the Spider Node that references the ODBC Data Source.
Follow the linked documentation for further information on the setup.
This page details step 3 of the 3-step procedure "Deploy Spider Sharded Topology".
This step tests MariaDB Enterprise Spider.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
Use Systemd to test whether the MariaDB Enterprise Server service is running.
This action is performed on the Spider Node and each Data Node.
Check if the MariaDB Enterprise Server service is running by executing the following:
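For example (the service name matches the examples used later in this guide):
$ systemctl status mariadb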
If the service is not running on any node, start the service by executing the following on that node:
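For example:
$ sudo systemctl start mariadb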
Use MariaDB Client to test the local connection to the Enterprise Server node.
This action is performed on the Spider Node and each Data Node:
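For example:
$ sudo mariadb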
The sudo command is used here to connect to the Enterprise Server node using the root@localhost user account, which authenticates using the unix_socket authentication plugin. Other user accounts can be used by specifying the --user and --password command-line options.
Use MariaDB Client to test a client connection to the Data Node from the Spider Node using the Spider user.
This action is performed on the Spider Node:
The host and port of the Data Node can be provided using the --host and --port command-line options. The credentials for the Spider user can be provided using the --user and --password command-line options.
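For example, using the sample Data Node address and Spider user from this guide's examples (adjust the host, user, and password for your environment):
$ mariadb \
--host 192.0.2.2 \
--user spider_user \
--password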
If the Spider user is unable to connect to the Data Node from the Spider Node, check the password for the Spider user account on the Data Node.
For additional information, see "".
Query the information_schema.PLUGINS table to confirm that the Enterprise Spider storage engine is loaded.
This action is performed on the Spider Node.
Execute the following query:
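For example, matching the query shown in the sample output later in this guide:
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_LIBRARY LIKE 'ha_spider%';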
The PLUGIN_STATUS column for each Spider-related plugin should contain ACTIVE.
For additional information, see "".
Write to the Spider Table using an INSERT statement to test write operations.
This action is performed on the Spider Node.
Execute the following query:
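For example, a minimal sketch using the sharded invoices table from this guide's sample schema (the spider_sharded_sales database and the column values are illustrative; adjust them to your own schema):
INSERT INTO spider_sharded_sales.invoices
(branch_id, invoice_id, customer_id, invoice_date, invoice_total, payment_method)
VALUES (1, 4, 1, '2021-03-10 12:45:10', 3045.73, 'CREDIT_CARD');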
Read from the Spider Table using a SELECT statement to test read operations.
This action is performed on the Spider Node.
Execute the following query:
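For example, continuing the same sample schema:
SELECT * FROM spider_sharded_sales.invoices;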
Use the EXPLAIN PARTITIONS statement with a SELECT statement to determine which shards Spider will read for the query.
This action is performed on the Spider Node.
Execute the following query:
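For example, using the sample sharded table (the WHERE clause value is illustrative):
EXPLAIN PARTITIONS
SELECT * FROM spider_sharded_sales.invoices
WHERE customer_id = 4;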
The specific shards read by the query are listed in the partitions column. If partition pruning does not eliminate unnecessary shards for a query with a restrictive filter, then check the partition definitions.
Navigation in the procedure "Deploy Spider Sharded Topology":
This page was step 3 of 3.
This procedure is complete.
This page details step 5 of the 6-step procedure "Deploy Galera Cluster Topology".
This step configures MariaDB MaxScale to route connections to MariaDB Enterprise Cluster.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB MaxScale installations include a configuration file with some example objects. This configuration file should be replaced.
On the MaxScale node, replace the default /etc/maxscale.cnf with the following configuration:
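For example, a minimal configuration matching the one used in this guide's examples; binding the admin interface to 0.0.0.0 and disabling the secure GUI are example settings that should be reviewed for production:
[maxscale]
threads = auto
admin_host = 0.0.0.0
admin_secure_gui = false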
For additional information, see "Global Parameters".
On the MaxScale node, restart the MaxScale service to ensure that MaxScale picks up the new configuration:
For additional information, see "Start and Stop Services".
MariaDB MaxScale connects to Enterprise Cluster through the client port. MaxScale requires its own user account to monitor and orchestrate Enterprise Cluster nodes.
On any Enterprise Cluster node, use the CREATE USER statement to create a new user for MaxScale:
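For example, using the sample MaxScale node address and user name from this guide's examples (replace the address and password with your own):
CREATE USER mxs@192.0.2.104 IDENTIFIED BY 'MAXSCALE_USER_PASSWORD';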
Use the GRANT statement to grant required privileges to the MaxScale user:
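For example, granting the privileges used by the monitor and routers in this guide's examples (the host address is a placeholder):
GRANT SHOW DATABASES ON *.* TO mxs@192.0.2.104;
GRANT SELECT ON mysql.columns_priv TO mxs@192.0.2.104;
GRANT SELECT ON mysql.db TO mxs@192.0.2.104;
GRANT SELECT ON mysql.procs_priv TO mxs@192.0.2.104;
GRANT SELECT ON mysql.proxies_priv TO mxs@192.0.2.104;
GRANT SELECT ON mysql.roles_mapping TO mxs@192.0.2.104;
GRANT SELECT ON mysql.tables_priv TO mxs@192.0.2.104;
GRANT SELECT ON mysql.user TO mxs@192.0.2.104;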
Enterprise Cluster replicates the new user and privileges to the other Enterprise Cluster nodes.
MariaDB MaxScale uses server objects to define the connections it makes to MariaDB Enterprise Servers.
On the MaxScale node, use maxctrl to create a server object for each MariaDB Enterprise Server:
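For example, using the sample node names and addresses from this guide's examples:
$ maxctrl create server node1 192.0.2.101
$ maxctrl create server node2 192.0.2.102
$ maxctrl create server node3 192.0.2.103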
MaxScale uses monitors to retrieve additional information from the servers. This information is used by other services in filtering and routing connections based on the current state of the node. For Enterprise Cluster, use the Galera Monitor (galeramon).
On the MaxScale node, use maxctrl to create a Galera Monitor for the cluster:
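For example, using the sample user and server objects created above (the password is a placeholder):
$ maxctrl create monitor cluster_monitor galeramon \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
--servers node1 node2 node3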
In this example:
cluster_monitor is an arbitrary name that is used to identify the new monitor.
galeramon is the name of the module that implements the Galera Monitor.
user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to monitor the ES nodes.
Routers control how MaxScale balances the load between Enterprise Cluster nodes. Each router uses a different approach to routing queries. Consider the specific use case of your application and database load and select the router that best suits your needs.
Use MaxScale Read Connection Router (readconnroute) to route connections to replica servers for a read-only pool.
On the MaxScale node, use maxctrl create service to create a router:
In this example:
connection_router_service is an arbitrary name that is used to identify the new service.
readconnroute is the name of the module that implements the Read Connection Router.
user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to connect to the ES nodes.
These instructions reference TCP port 3308. You can use a different TCP port. The TCP port used must not be bound by any other listener.
On the MaxScale node, use the maxctrl create listener command to configure MaxScale to use a listener for the Read Connection Router (readconnroute):
In this example:
connection_router_service is the name of the readconnroute service that was previously created.
connection_router_listener is an arbitrary name that is used to identify the new listener.
3308 is the TCP port.
protocol=MariaDBClient sets the protocol parameter.
MaxScale Read/Write Split Router (readwritesplit) performs query-based load balancing. The router routes write queries to the primary and read queries to the replicas.
On the MaxScale node, use the maxctrl create service command to configure MaxScale to use the Read/Write Split Router (readwritesplit):
In this example:
query_router_service is an arbitrary name that is used to identify the new service.
readwritesplit is the name of the module that implements the Read/Write Split Router.
user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to connect to the ES nodes.
These instructions reference TCP port 3307. You can use a different TCP port. The TCP port used must not be bound by any other listener.
On the MaxScale node, use the maxctrl create listener command to configure MaxScale to use a listener for the Read/Write Split Router (readwritesplit):
In this example:
query_router_service is the name of the readwritesplit service that was previously created.
query_router_listener is an arbitrary name that is used to identify the new listener.
3307 is the TCP port.
protocol=MariaDBClient sets the protocol parameter.
To start the services and monitors, on the MaxScale node use maxctrl start services:
Navigation in the procedure "Deploy Galera Cluster Topology":
This page was step 5 of 6.
Next: Step 6: Test MariaDB MaxScale
This page details step 2 of the 3-step procedure "Deploy Spider Federated Topology".
This step configures the Spider Node and Data Node and creates the Spider Table and Data Table.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
The Data Node requires a user account that the Spider Node uses to connect.
On the Data Node, create the Spider user account for the Spider Node using the CREATE USER statement:
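For example, using the sample Spider Node address and user name from this guide's examples (replace the address and password with your own):
CREATE USER spider_user@192.0.2.1 IDENTIFIED BY 'password';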
Privileges will be granted to the user account in a later step.
On the Spider Node, confirm that the Spider user account can connect to the Data Node using MariaDB Client:
The Spider Node requires connection details for the Data Node.
On the Spider Node, create a server object to configure the connection details for the Data Node using the CREATE SERVER statement:
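For example, using the sample Data Node connection details from this guide's examples (the host, port, credentials, and database are placeholders for your environment):
CREATE SERVER hq_server
FOREIGN DATA WRAPPER mariadb
OPTIONS (
HOST "192.0.2.2",
PORT 5801,
USER "spider_user",
PASSWORD "password",
DATABASE "hq_sales"
);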
The Data Node runs MariaDB Enterprise Server, so the FOREIGN DATA WRAPPER is set to mariadb.
Using a server object for connection details is optional. Alternatively, the connection details for the Data Node can be specified in the COMMENT table option of the CREATE TABLE statement when creating the Spider Table.
When queries read and write to a Spider Table, Spider reads and writes to the Data Table on the Data Node. The Data Table must be created on the Data Node with the same structure as the Spider Table.
If your Data Table already exists, grant privileges on the table to the Spider user.
On the Data Node, create the Data Table:
The Spider Node reads and writes to the Data Table using the server object and user account configured in the previous steps. The user account must have sufficient privileges on the Data Table.
The Spider Node connects to the Data Node with the user account configured in the previous steps.
On the Data Node, grant the Spider user sufficient privileges to operate on the Data Table:
By default, the Spider user also requires the CREATE TEMPORARY TABLES privilege on the database containing the Data Table. The CREATE TEMPORARY TABLES privilege is required because Spider uses temporary tables to optimize read queries when Spider BKA Mode is 1.
Spider BKA Mode is configured using the following methods:
The session value is configured by setting the spider_bka_mode system variable on the Spider Node. The default value is -1. When the session value is -1, the value for each Spider Table is used.
The value for each Spider Table is configured by setting the bka_mode option in the COMMENT table option. When the bka_mode option is not set, the implicit value is 1.
The default value is -1, and the implicit Spider Table value is 1, so the default Spider BKA Mode is 1.
On the Data Node, grant the Spider user the CREATE TEMPORARY TABLES privilege on the database:
The Spider Table must be created on the Spider Node with the same structure as the Data Table.
On the Spider Node, create the Spider Table and reference the Data Node in the COMMENT table option:
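For example, using the sample invoices table and the hq_server server object from this guide's examples:
CREATE TABLE spider_hq_sales.invoices (
branch_id INT NOT NULL,
invoice_id INT NOT NULL,
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(branch_id, invoice_id)
) ENGINE=Spider
COMMENT='server "hq_server", table "invoices"';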
The COMMENT table option is used to configure the Data Node and the Data Table. Set the server option to the server object configured in the previous step. Set the table option to the name of the Data Table on the Data Node.
An alternative syntax is available. When you don't want to create a server object, the connection details for the Data Node can be specified in the COMMENT table option:
Navigation in the procedure "Deploy Spider Federated Topology":
This page was step 2 of 3.
Next: Step 3: Test Spider Federated Topology.
In the Sharded MariaDB Enterprise Spider topology, a Spider Node contains one or more "virtual" Spider Tables. A Spider Table does not store data. When a Spider Table is queried in this topology, the Enterprise Spider storage engine uses a MariaDB foreign data wrapper to read from and write to Data Tables on Data Nodes. The data for the Spider Table is partitioned among the Data Nodes using the regular partitioning syntax.
$ sudo yum install --enablerepo=PowerTools glusterfs-server
$ sudo yum install centos-release-gluster
$ sudo yum install glusterfs-server
$ wget -O - https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub | apt-key add -
$ DEBID=$(grep 'VERSION_ID=' /etc/os-release | cut -d '=' -f 2 | tr -d '"')
$ DEBVER=$(grep 'VERSION=' /etc/os-release | grep -Eo '[a-z]+')
$ DEBARCH=$(dpkg --print-architecture)
$ echo deb https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/${DEBID}/${DEBARCH}/apt ${DEBVER} main > /etc/apt/sources.list.d/gluster.list
$ sudo apt update
$ sudo apt install glusterfs-server
$ sudo apt update
$ sudo apt install glusterfs-server
$ sudo systemctl start glusterd
$ sudo systemctl enable glusterd
$ sudo gluster peer probe mcs2
$ sudo gluster peer probe mcs3
$ sudo gluster peer probe mcs1
peer probe: Host mcs1 port 24007 already in peer list
$ sudo gluster peer status
Number of Peers: 2
Hostname: mcs2
Uuid: 3c8a5c79-22de-45df-9034-8ae624b7b23e
State: Peer in Cluster (Connected)
Hostname: mcs3
Uuid: 862af7b2-bb5e-4b1c-8311-630fa32ed451
State: Peer in Cluster (Connected)
$ sudo mkdir -p /brick/storagemanager
$ sudo gluster volume create storagemanager \
replica 3 \
mcs1:/brick/storagemanager \
mcs2:/brick/storagemanager \
mcs3:/brick/storagemanager \
force
$ sudo gluster volume start storagemanager
$ sudo mkdir -p /var/lib/columnstore/storagemanager
127.0.0.1:storagemanager /var/lib/columnstore/storagemanager glusterfs defaults,_netdev 0 0
$ sudo mount -a
$ sudo systemctl stop mariadb
[mariadb]
bind_address = 0.0.0.0
log_bin = mariadb-bin.log
server_id = 1
$ systemctl start mariadb
CREATE USER 'repl'@'192.0.2.%' IDENTIFIED BY 'repl_passwd';
GRANT REPLICATION SLAVE,
REPLICATION CLIENT
ON *.* TO repl@'%';
CREATE USER 'mxs'@'192.0.2.%'
IDENTIFIED BY 'mxs_passwd';
GRANT SHOW DATABASES ON *.* TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.columns_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.db TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.procs_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.proxies_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.roles_mapping TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.tables_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.user TO 'mxs'@'192.0.2.%';
GRANT SUPER,
REPLICATION CLIENT,
RELOAD,
PROCESS,
SHOW DATABASES,
EVENT
ON *.* TO 'mxs'@'192.0.2.%';
$ sudo yum install MariaDB-spider-engine
$ sudo apt install mariadb-plugin-spider
$ sudo zypper install MariaDB-spider-engine
[mariadb]
...
plugin_load_add = "ha_spider"
$ sudo systemctl restart mariadb
$ sudo mariadb
INSTALL SONAME "ha_spider";
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_LIBRARY LIKE "ha_spider%";
+--------------------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+--------------------------+---------------+
| SPIDER | ACTIVE |
| SPIDER_ALLOC_MEM | ACTIVE |
| SPIDER_WRAPPER_PROTOCOLS | ACTIVE |
+--------------------------+---------------+
$ systemctl status mariadb
$ sudo systemctl start mariadb
$ sudo mariadb
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 38
Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';
+----------------------+---------+
| Variable_name | Value |
+----------------------+---------+
| wsrep_cluster_status | Primary |
+----------------------+---------+
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
+--------------------+---------+
| Variable_name | Value |
+--------------------+---------+
| wsrep_cluster_size | 3 |
+--------------------+---------+
$ sudo mariadb
CREATE DATABASE IF NOT EXISTS test;
CREATE TABLE test.contacts (
id INT PRIMARY KEY AUTO_INCREMENT,
first_name VARCHAR(50),
last_name VARCHAR(50),
email VARCHAR(100)
);
$ sudo mariadb
SHOW CREATE TABLE test.contacts\G;
$ sudo mariadb
INSERT INTO test.contacts (first_name, last_name, email)
VALUES
("Kai", "Devi", "kai.devi@example.com"),
("Lee", "Wang", "lee.wang@example.com");$ sudo mariadbSELECT * FROM test.contacts;
+----+------------+-----------+----------------------+
| id | first_name | last_name | email |
+----+------------+-----------+----------------------+
| 1 | Kai | Devi | kai.devi@example.com |
| 2 | Lee | Wang | lee.wang@example.com |
+----+------------+-----------+----------------------+
3306
Port used for MariaDB Client traffic
8600-8630
Port range used for inter-node communication
8640
Port used by CMAPI
8700
Port used for inter-node communication
8800
Port used for inter-node communication
password='MAXSCALE_USER_PASSWORD' sets the password parameter to the password used by the database user account that MaxScale uses to monitor the ColumnStore nodes.
replication_user=REPLICATION_USER sets the replication_user parameter to the database user account that MaxScale uses to setup replication.
replication_password='REPLICATION_USER_PASSWORD' sets the replication_password parameter to the password used by the database user account that MaxScale uses to setup replication.
--servers sets the servers parameter to the set of nodes that MaxScale should monitor. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by mariadbmon in MaxScale 22.08 can also be specified.
password=MAXSCALE_USER_PASSWORD sets the password parameter to the password used by the database user account that MaxScale uses to connect to the ColumnStore nodes.
router_options=slave sets the router_options parameter to slave, so that MaxScale only routes connections to the replica nodes.
--servers sets the servers parameter to the set of nodes to which MaxScale should route connections. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by readconnroute in MaxScale 22.08 can also be specified.
protocol=MariaDBClient sets the protocol parameter.
Other Module Parameters supported by listeners in MaxScale 22.08 can also be specified.
password=MAXSCALE_USER_PASSWORD sets the password parameter to the password used by the database user account that MaxScale uses to connect to the ColumnStore nodes.
--servers sets the servers parameter to the set of nodes to which MaxScale should route queries. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by readwritesplit in MaxScale 22.08 can also be specified.
Other Module Parameters supported by listeners in MaxScale 22.08 can also be specified.
Connection-based load balancing
Routes connections to Enterprise ColumnStore nodes designated as replica servers for a read-only pool
Routes connections to an Enterprise ColumnStore node designated as the primary server for a read-write pool.
Query-based load balancing
Routes write queries to an Enterprise ColumnStore node designated as the primary server
Routes read queries to Enterprise ColumnStore nodes designated as replica servers
Automatically reconnects after node failures
Automatically replays transactions after node failures
Optionally enforces causal reads
3306
Port used for MariaDB Client traffic
8600-8630
Port range used for inter-node communication
8640
Port used by CMAPI
8700
Port used for inter-node communication
8800
Port used for inter-node communication
password='MAXSCALE_USER_PASSWORD' sets the password parameter to the password used by the database user account that MaxScale uses to monitor the ES nodes.
--servers sets the servers parameter to the set of nodes that MaxScale should monitor. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by galeramon in MaxScale 25.01 can also be specified.
password=MAXSCALE_USER_PASSWORD sets the password parameter to the password used by the database user account that MaxScale uses to connect to the ES nodes.
router_options=slave sets the router_options parameter to slave, so that MaxScale only routes connections to the replica nodes.
--servers sets the servers parameter to the set of nodes to which MaxScale should route connections. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by readconnroute in MaxScale 25.01 can also be specified.
Other Module Parameters supported by listeners in MaxScale 25.01 can also be specified.
password=MAXSCALE_USER_PASSWORD sets the password parameter to the password used by the database user account that MaxScale uses to connect to the ES nodes.
--servers sets the servers parameter to the set of nodes to which MaxScale should route queries. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by readwritesplit in MaxScale 25.01 can also be specified.
Other Module Parameters supported by listeners in MaxScale 25.01 can also be specified.
Configure Read Connection Router
• Connection-based load balancing • Routes connections to Enterprise ColumnStore nodes designated as replica servers for a read-only pool • Routes connections to an Enterprise ColumnStore node designated as the primary server for a read-write pool.
Configure Read/Write Split
• Query-based load balancing • Routes write queries to an Enterprise ColumnStore node designated as the primary server • Routes read queries to Enterprise ColumnStore node designated as replica servers • Automatically reconnects after node failures • Automatically replays transactions after node failures • Optionally enforces causal reads
Use Systemd to test whether the MariaDB Enterprise Server service is running.
This action is performed on each Enterprise ColumnStore node.
Check if the MariaDB Enterprise Server service is running by executing the following:
If the service is not running on any node, start the service by executing the following on that node:
Use MariaDB Client to test the local connection to the Enterprise Server node.
This action is performed on each Enterprise ColumnStore node:
The sudo command is used here to connect to the Enterprise Server node using the root@localhost user account, which authenticates using the unix_socket authentication plugin. Other user accounts can be used by specifying the --user and --password command-line options.
Query the information_schema.PLUGINS table to confirm that the ColumnStore storage engine is loaded.
This action is performed on each Enterprise ColumnStore node.
Execute the following query:
The PLUGIN_STATUS column for each ColumnStore-related plugin should contain ACTIVE.
Use Systemd to test whether the CMAPI service is running.
This action is performed on each Enterprise ColumnStore node.
Check if the CMAPI service is running by executing the following:
If the service is not running on any node, start the service by executing the following on that node:
Use CMAPI to request the ColumnStore status. The API key needs to be provided as part of the X-API-key HTTP header.
This action is performed with the CMAPI service on the primary server.
Check the ColumnStore status using curl by executing the following:
Use MariaDB Client to test DDL.
On the primary server, use the MariaDB Client to connect to the node:
Create a test database and ColumnStore table:
On each replica server, use the MariaDB Client to connect to the node:
Confirm that the database and table exist:
If the database or table do not exist on any node, then check the replication configuration.
Use MariaDB Client to test DML.
On the primary server, use the MariaDB Client to connect to the node:
Insert sample data into the table created in the DDL test:
On each replica server, use the MariaDB Client to connect to the node:
Execute a SELECT query to retrieve the data:
If the data is not returned on any node, check the ColumnStore status and the storage configuration.
Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".
This page was step 5 of 9.
Next: Step 6: Install MariaDB MaxScale.
# minimize swapping
vm.swappiness = 1
# Increase the TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Increase the TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1
# for 1 GigE, increase this to 2500
# for 10 GigE, increase this to 30000
net.core.netdev_max_backlog = 2500
$ sudo sysctl --load=/etc/sysctl.d/90-mariadb-enterprise-columnstore.conf
$ sudo setenforce permissive
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
$ sudo getenforce
Permissive
$ sudo systemctl disable apparmor
$ sudo aa-status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
$ sudo systemctl status firewalld
$ sudo systemctl stop firewalld
$ sudo ufw status verbose
$ sudo ufw disable
$ sudo yum install glibc-locale-source glibc-langpack-en
$ sudo localedef -i en_US -f UTF-8 en_US.UTF-8
192.0.2.1 mcs1
192.0.2.2 mcs2
192.0.2.3 mcs3
192.0.2.100 mxs1
[maxscale]
threads = auto
admin_host = 0.0.0.0
admin_secure_gui = false
$ sudo systemctl restart maxscale
$ maxctrl create server mcs1 192.0.2.101
$ maxctrl create server mcs2 192.0.2.102
$ maxctrl create server mcs3 192.0.2.103
$ maxctrl create monitor columnstore_monitor mariadbmon \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
replication_user=repl \
replication_password='REPLICATION_USER_PASSWORD' \
--servers mcs1 mcs2 mcs3
$ maxctrl create service connection_router_service readconnroute \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
router_options=slave \
--servers mcs1 mcs2 mcs3
$ maxctrl create listener connection_router_service connection_router_listener 3308 \
protocol=MariaDBClient
$ maxctrl create service query_router_service readwritesplit \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
--servers mcs1 mcs2 mcs3
$ maxctrl create listener query_router_service query_router_listener 3307 \
protocol=MariaDBClient
$ maxctrl start services
# minimize swapping
vm.swappiness = 1
# Increase the TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Increase the TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1
# for 1 GigE, increase this to 2500
# for 10 GigE, increase this to 30000
net.core.netdev_max_backlog = 2500
$ sudo sysctl --load=/etc/sysctl.d/90-mariadb-enterprise-columnstore.conf
$ sudo setenforce permissive
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
$ sudo getenforce
Permissive
$ sudo systemctl disable apparmor
$ sudo aa-status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
$ sudo systemctl status firewalld
$ sudo systemctl stop firewalld
$ sudo ufw status verbose
$ sudo ufw disable
$ sudo yum install glibc-locale-source glibc-langpack-en
$ sudo localedef -i en_US -f UTF-8 en_US.UTF-8
192.0.2.1 mcs1
192.0.2.2 mcs2
192.0.2.3 mcs3
192.0.2.100 mxs1
$ systemctl status mariadb
$ sudo systemctl start mariadb
$ sudo mariadb
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 38
Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
$ mariadb \
--host 192.0.2.2 \
--user spider_user \
--password
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 38
Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_LIBRARY LIKE 'ha_spider%';
+--------------------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+--------------------------+---------------+
| SPIDER | ACTIVE |
| SPIDER_ALLOC_MEM | ACTIVE |
| SPIDER_WRAPPER_PROTOCOLS | ACTIVE |
+--------------------------+---------------+
INSERT INTO spider_hq_sales.invoices
(branch_id, invoice_id, customer_id, invoice_date, invoice_total, payment_method)
VALUES (1, 4, 1, '2021-03-10 12:45:10', 3045.73, 'CREDIT_CARD');
SELECT * FROM spider_sharded_sales.invoices;
+-----------+------------+-------------+----------------------------+---------------+----------------+
| branch_id | invoice_id | customer_id | invoice_date | invoice_total | payment_method |
+-----------+------------+-------------+----------------------------+---------------+----------------+
| 1 | 1 | 1 | 2020-05-10 12:35:10.000000 | 1087.23 | CREDIT_CARD |
| 1 | 2 | 2 | 2020-05-10 14:17:32.000000 | 1508.57 | WIRE_TRANSFER |
| 1 | 3 | 3 | 2020-05-10 14:25:16.000000 | 227.15 | CASH |
| 1 | 4 | 1 | 2021-03-10 12:45:10.000000 | 3045.73 | CREDIT_CARD |
| 2 | 1 | 2 | 2020-05-10 12:31:00.000000 | 1351.04 | CREDIT_CARD |
| 2 | 2 | 2 | 2020-05-10 12:45:27.000000 | 162.11 | WIRE_TRANSFER |
| 2 | 3 | 4 | 2020-05-10 13:11:23.000000 | 350.00 | CASH |
| 3 | 1 | 5 | 2020-05-10 12:31:00.000000 | 111.50 | CREDIT_CARD |
| 3 | 2 | 8 | 2020-05-10 12:45:27.000000 | 1509.23 | WIRE_TRANSFER |
| 3 | 3 | 3 | 2020-05-10 13:11:23.000000 | 3301.66 | CASH |
+-----------+------------+-------------+----------------------------+---------------+----------------+
EXPLAIN PARTITIONS
SELECT * FROM spider_sharded_sales.invoices
WHERE customer_id = 4;
+------+-------------+----------+--------------------------------------------------+------+---------------+------+---------+------+------+-----------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+----------+--------------------------------------------------+------+---------------+------+---------+------+------+-----------------------------------+
| 1 | SIMPLE | invoices | hq_partition,eastern_partition,western_partition | ALL | NULL | NULL | NULL | NULL | 9 | Using where with pushed condition |
+------+-------------+----------+--------------------------------------------------+------+---------------+------+---------+------+------+-----------------------------------+
[maxscale]
threads = auto
admin_host = 0.0.0.0
admin_secure_gui = false
$ sudo systemctl restart maxscale
CREATE USER mxs@192.0.2.104 IDENTIFIED BY "passwd";
GRANT SHOW DATABASES ON *.* TO mxs@192.0.2.104;
GRANT SELECT ON mysql.columns_priv TO mxs@192.0.2.104;
GRANT SELECT ON mysql.db TO mxs@192.0.2.104;
GRANT SELECT ON mysql.procs_priv TO mxs@192.0.2.104;
GRANT SELECT ON mysql.proxies_priv TO mxs@192.0.2.104;
GRANT SELECT ON mysql.roles_mapping TO mxs@192.0.2.104;
GRANT SELECT ON mysql.tables_priv TO mxs@192.0.2.104;
GRANT SELECT ON mysql.user TO mxs@192.0.2.104;
$ maxctrl create server node1 192.0.2.101
$ maxctrl create server node2 192.0.2.102
$ maxctrl create server node3 192.0.2.103
$ maxctrl create monitor cluster_monitor galeramon \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
--servers node1 node2 node3
$ maxctrl create service connection_router_service readconnroute \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
router_options=slave \
--servers node1 node2 node3
$ maxctrl create listener connection_router_service connection_router_listener 3308 \
protocol=MariaDBClient
$ maxctrl create service query_router_service readwritesplit \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
--servers node1 node2 node3
$ maxctrl create listener query_router_service query_router_listener 3307 \
protocol=MariaDBClient
$ maxctrl start services
CREATE USER spider_user@192.0.2.1 IDENTIFIED BY "password";
$ mariadb --user spider_user --host 192.0.2.2 --password
CREATE SERVER hq_server
FOREIGN DATA WRAPPER mariadb
OPTIONS (
HOST "192.0.2.2",
PORT 5801,
USER "spider_user",
PASSWORD "password",
DATABASE "hq_sales"
);
CREATE DATABASE hq_sales;
CREATE SEQUENCE hq_sales.invoice_seq;
CREATE TABLE hq_sales.invoices (
branch_id INT NOT NULL DEFAULT (1) CHECK (branch_id=1),
invoice_id INT NOT NULL DEFAULT (NEXT VALUE FOR hq_sales.invoice_seq),
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(branch_id, invoice_id)
) ENGINE=InnoDB;
INSERT INTO hq_sales.invoices
(customer_id, invoice_date, invoice_total, payment_method)
VALUES
(1, '2020-05-10 12:35:10', 1087.23, 'CREDIT_CARD'),
(2, '2020-05-10 14:17:32', 1508.57, 'WIRE_TRANSFER'),
(3, '2020-05-10 14:25:16', 227.15, 'CASH');
GRANT ALL PRIVILEGES ON hq_sales.invoices TO 'spider_user'@'192.0.2.1';
GRANT CREATE TEMPORARY TABLES ON hq_sales.* TO 'spider_user'@'192.0.2.1';
CREATE DATABASE spider_hq_sales;
CREATE TABLE spider_hq_sales.invoices (
branch_id INT NOT NULL,
invoice_id INT NOT NULL,
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(branch_id, invoice_id)
) ENGINE=Spider
COMMENT='server "hq_server", table "invoices"';
CREATE TABLE spider_hq_sales.invoices_alternate (
branch_id INT NOT NULL,
invoice_id INT NOT NULL,
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(branch_id, invoice_id)
) ENGINE=Spider
COMMENT='table "invoices", host "192.0.2.2", port "5801", user "spider_user", password "user_password", database "hq_sales"';
$ systemctl status mariadb
$ sudo systemctl start mariadb
$ sudo mariadb
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 38
Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_LIBRARY LIKE 'ha_columnstore%';
+---------------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+---------------------+---------------+
| Columnstore | ACTIVE |
| COLUMNSTORE_COLUMNS | ACTIVE |
| COLUMNSTORE_TABLES | ACTIVE |
| COLUMNSTORE_FILES | ACTIVE |
| COLUMNSTORE_EXTENTS | ACTIVE |
+---------------------+---------------+
$ systemctl status mariadb-columnstore-cmapi
$ sudo systemctl start mariadb-columnstore-cmapi
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .
{
"timestamp": "2020-12-15 00:40:34.353574",
"192.0.2.1": {
"timestamp": "2020-12-15 00:40:34.362374",
"uptime": 11467,
"dbrm_mode": "master",
"cluster_mode": "readwrite",
"dbroots": [
"1"
],
"module_id": 1,
"services": [
{
"name": "workernode",
"pid": 19202
},
{
"name": "controllernode",
"pid": 19232
},
{
"name": "PrimProc",
"pid": 19254
},
{
"name": "ExeMgr",
"pid": 19292
},
{
"name": "WriteEngine",
"pid": 19316
},
{
"name": "DMLProc",
"pid": 19332
},
{
"name": "DDLProc",
"pid": 19366
}
]
},
"192.0.2.2": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"192.0.2.3": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"num_nodes": 3
}
$ sudo mariadb
CREATE DATABASE IF NOT EXISTS test;
CREATE TABLE IF NOT EXISTS test.contacts (
first_name VARCHAR(50),
last_name VARCHAR(50),
email VARCHAR(100)
) ENGINE = ColumnStore;
$ sudo mariadb
SHOW CREATE TABLE test.contacts\G;
$ sudo mariadb
INSERT INTO test.contacts (first_name, last_name, email)
VALUES
("Kai", "Devi", "kai.devi@example.com"),
("Lee", "Wang", "lee.wang@example.com");
$ sudo mariadb
SELECT * FROM test.contacts;
+------------+-----------+----------------------+
| first_name | last_name | email |
+------------+-----------+----------------------+
| Kai | Devi | kai.devi@example.com |
| Lee | Wang | lee.wang@example.com |
+------------+-----------+----------------------+
password='MAXSCALE_USER_PASSWORD' sets the password parameter to the password used by the database user account that MaxScale uses to monitor the ES nodes.
replication_user=REPLICATION_USER sets the replication_user parameter to the database user account that MaxScale uses to setup replication.
replication_password='REPLICATION_USER_PASSWORD' sets the replication_password parameter to the password used by the database user account that MaxScale uses to setup replication.
--servers sets the servers parameter to the set of nodes that MaxScale should monitor. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by mariadbmon in MaxScale can also be specified.
password=MAXSCALE_USER_PASSWORD sets the password parameter to the password used by the database user account that MaxScale uses to connect to the ES nodes.
router_options=slave sets the router_options parameter to slave, so that MaxScale only routes connections to the replica nodes.
--servers sets the servers parameter to the set of nodes to which MaxScale should route connections. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by readconnroute in MaxScale 25.01 can also be specified.
protocol=MariaDBClient sets the protocol parameter.
Other Module Parameters supported by listeners in MaxScale can also be specified.
password=MAXSCALE_USER_PASSWORD sets the password parameter to the password used by the database user account that MaxScale uses to connect to the ES nodes.
--servers sets the servers parameter to the set of nodes to which MaxScale should route queries. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by readwritesplit in MaxScale can also be specified.
protocol=MariaDBClient sets the protocol parameter.
Other Module Parameters supported by listeners in MaxScale can also be specified.
Connection-based load balancing
Routes connections to Enterprise ColumnStore nodes designated as replica servers for a read-only pool
Routes connections to an Enterprise ColumnStore node designated as the primary server for a read-write pool.
Query-based load balancing
Routes write queries to an Enterprise ColumnStore node designated as the primary server
Routes read queries to Enterprise ColumnStore nodes designated as replica servers
Automatically reconnects after node failures
Automatically replays transactions after node failures
Optionally enforces causal reads
A Spider Table is a virtual table that does not store data. When a Spider Table is queried, the Enterprise Spider storage engine uses foreign data wrappers to read from and write to Data Tables on Data Nodes or ODBC Data Sources.
On the Spider Node, the Data Table is queried by querying the Spider Table like the following:
On the Spider node, the Data Table's data is migrated to the destination table by querying the Spider Table like the following:
Data Node
A Data Node is a MariaDB Enterprise Server node that contains one or more Data Tables.
Data Table
A Data Table stores data for a Spider Table. When a Spider Table is queried, the Enterprise Spider storage engine uses the MariaDB foreign data wrapper to read from and write to the Data Table on a Data Node. The Data Table must be created on the Data Node with the same structure as the Spider Table. The Data Table must use a non-Spider storage engine, such as InnoDB or ColumnStore.
ODBC Data Source
An ODBC Data Source relies on an ODBC Driver and an ODBC Driver Manager to query an external data source.
ODBC Driver
An ODBC Driver is a library that integrates with an ODBC Driver Manager to query an external data source.
ODBC Driver Manager
An ODBC Driver Manager allows applications to use ODBC Drivers.
Spider Node
A Spider Node is a MariaDB Enterprise Server node that contains one or more Spider Tables.
Spider Table
A Spider Table is a virtual table that does not store data. When a Spider Table is queried, the Enterprise Spider storage engine uses foreign data wrappers to read from and write to Data Tables on Data Nodes or ODBC Data Sources.
Local tables can also be referenced in queries:
On the Spider Node, the ODBC Data Source's data is migrated to the destination table by querying the Spider Table like the following:
Data Node
A Data Node is a MariaDB Enterprise Server node that contains one or more Data Tables.
Data Table
A Data Table stores data for a Spider Table. When a Spider Table is queried, the Enterprise Spider storage engine uses the MariaDB foreign data wrapper to read from and write to the Data Table on a Data Node. The Data Table must be created on the Data Node with the same structure as the Spider Table. The Data Table must use a non-Spider storage engine, such as InnoDB or ColumnStore.
ODBC Data Source
An ODBC Data Source relies on an ODBC Driver and an ODBC Driver Manager to query an external data source.
ODBC Driver
An ODBC Driver is a library that integrates with an ODBC Driver Manager to query an external data source.
ODBC Driver Manager
An ODBC Driver Manager allows applications to use ODBC Drivers.
Spider Node
A Spider Node is a MariaDB Enterprise Server node that contains one or more Spider Tables.
Spider Table
MariaDB MaxScale installations include a configuration file with some example objects. This configuration file should be replaced.
On the MaxScale node, replace the default /etc/maxscale.cnf with the following configuration:
For additional information, see "Global Parameters".
On the MaxScale node, restart the MaxScale service to ensure that MaxScale picks up the new configuration:
For additional information, see "Start and Stop Services".
On the MaxScale node, use maxctrl create to create a server object for each Enterprise ColumnStore node:
MaxScale uses monitors to retrieve additional information from the servers. This information is used by other services in filtering and routing connections based on the current state of the node. For MariaDB Enterprise ColumnStore, use the MariaDB Monitor (mariadbmon).
On the MaxScale node, use maxctrl create monitor to create a MariaDB Monitor:
In this example:
columnstore_monitor is an arbitrary name that is used to identify the new monitor.
mariadbmon is the name of the module that implements the MariaDB Monitor.
user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to monitor the ColumnStore nodes.
password='MAXSCALE_USER_PASSWORD' sets the password parameter to the password used by the database user account that MaxScale uses to monitor the ColumnStore nodes.
replication_user=REPLICATION_USER sets the replication_user parameter to the database user account that MaxScale uses to setup replication.
replication_password='REPLICATION_USER_PASSWORD' sets the replication_password parameter to the password used by the database user account that MaxScale uses to setup replication.
--servers sets the servers parameter to the set of nodes that MaxScale should monitor. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by mariadbmon in MaxScale 22.08 can also be specified.
Routers control how MaxScale balances the load between Enterprise ColumnStore nodes. Each router uses a different approach to routing queries. Consider the specific use case of your application and database load and select the router that best suits your needs.
Connection-based load balancing
Routes connections to Enterprise ColumnStore nodes designated as replica servers for a read-only pool
Routes connections to an Enterprise ColumnStore node designated as the primary server for a read-write pool.
Query-based load balancing
Routes write queries to an Enterprise ColumnStore node designated as the primary server
Routes read queries to Enterprise ColumnStore nodes designated as replica servers
Automatically reconnects after node failures
Automatically replays transactions after node failures
Use MaxScale Read Connection Router (readconnroute) to route connections to replica servers for a read-only pool.
On the MaxScale node, use maxctrl create service to create a router:
In this example:
connection_router_service is an arbitrary name that is used to identify the new service.
readconnroute is the name of the module that implements the Read Connection Router.
user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to connect to the ColumnStore nodes.
password=MAXSCALE_USER_PASSWORD sets the password parameter to the password used by the database user account that MaxScale uses to connect to the ColumnStore nodes.
router_options=slave sets the router_options parameter to slave, so that MaxScale only routes connections to the replica nodes.
--servers sets the servers parameter to the set of nodes to which MaxScale should route connections. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by readconnroute in MaxScale 22.08 can also be specified.
These instructions reference TCP port 3308. You can use a different TCP port. The TCP port used must not be bound by any other listener.
On the MaxScale node, use the maxctrl create listener command to configure MaxScale to use a listener for the Read Connection Router (readconnroute):
In this example:
connection_router_service is the name of the readconnroute service that was previously created.
connection_router_listener is an arbitrary name that is used to identify the new listener.
3308 is the TCP port.
protocol=MariaDBClient sets the protocol parameter.
Other Module Parameters supported by listeners in MaxScale 22.08 can also be specified.
MaxScale Read/Write Split Router (readwritesplit) performs query-based load balancing. The router routes write queries to the primary and read queries to the replicas.
On the MaxScale node, use the maxctrl create service command to configure MaxScale to use the Read/Write Split Router (readwritesplit):
In this example:
query_router_service is an arbitrary name that is used to identify the new service.
readwritesplit is the name of the module that implements the Read/Write Split Router.
user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to connect to the ColumnStore nodes.
password=MAXSCALE_USER_PASSWORD sets the password parameter to the password used by the database user account that MaxScale uses to connect to the ColumnStore nodes.
--servers sets the servers parameter to the set of nodes to which MaxScale should route queries. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by readwritesplit in MaxScale 22.08 can also be specified.
These instructions reference TCP port 3307. You can use a different TCP port. The TCP port used must not be bound by any other listener.
On the MaxScale node, use the maxctrl create listener command to configure MaxScale to use a listener for the Read/Write Split Router (readwritesplit):
In this example:
query_router_service is the name of the readwritesplit service that was previously created.
query_router_listener is an arbitrary name that is used to identify the new listener.
3307 is the TCP port.
protocol=MariaDBClient sets the protocol parameter.
Other Module Parameters supported by listeners in MaxScale 22.08 can also be specified.
To start the services and monitors, on the MaxScale node use maxctrl start services:
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 7 of 9.
Next: Step 8: Test MariaDB MaxScale
This procedure incrementally deploys MariaDB Enterprise Spider on an existing MariaDB Enterprise Server deployment.
In the Spider Sharded topology, a Spider Node contains one or more "virtual" Spider Tables. A Spider Table does not store data. When a Spider Table is queried in this topology, the Enterprise Spider storage engine uses a MariaDB foreign data wrapper to read from and write to Data Tables on Data Nodes. The data for the Spider Table is partitioned among the Data Nodes using the regular partitioning syntax.
This procedure has 3 steps, which are executed in sequence.
This page provides an overview of the topology, requirements, and deployment procedure.
The topology described is representative of basic product capabilities. MariaDB products can be deployed to form other topologies, leverage advanced product capabilities, or combine the capabilities of multiple topologies.
If you have not yet deployed MariaDB Enterprise Server on the Spider Node and Data Nodes, first deploy a topology containing MariaDB Enterprise Server. Several topologies are documented.
Step 1
Step 2
Step 3
Customers can obtain support by submitting a support case.
The following components are deployed during this procedure:
MariaDB Enterprise Server: modern SQL RDBMS with high availability, pluggable storage engines, hot online backups, and audit logging.
MariaDB Enterprise Spider: storage engine used by Spider Tables to read from and write to Data Tables using the MariaDB foreign data wrapper.
Data Node
A Data Node is a MariaDB Enterprise Server node that contains one or more Data Tables.
Data Table
A Data Table stores data for a Spider Table. When a Spider Table is queried, the Enterprise Spider storage engine uses the MariaDB foreign data wrapper to read from and write to the Data Table on a Data Node. The Data Table must be created on the Data Node with the same structure as the Spider Table. The Data Table must use a non-Spider storage engine, such as InnoDB or ColumnStore.
ODBC Data Source
An ODBC Data Source relies on an ODBC Driver and an ODBC Driver Manager to query an external data source.
ODBC Driver
An ODBC Driver is a library that integrates with an ODBC Driver Manager to query an external data source.
ODBC Driver Manager
An ODBC Driver Manager allows applications to use ODBC Drivers.
Spider Node
A Spider Node is a MariaDB Enterprise Server node that contains one or more Spider Tables.
In the Spider Sharded topology, a Spider Node contains one or more "virtual" Spider Tables. A Spider Table does not store data. When a Spider Table is queried, the Enterprise Spider storage engine uses a MariaDB foreign data wrapper to read from and write to Data Tables on Data Nodes. The data for the Spider Table is partitioned among the Data Nodes using the regular partitioning syntax.
The Spider Sharded topology consists of:
One MariaDB Enterprise Server node is a Spider Node
One or more MariaDB Enterprise Server nodes are Data Nodes
The Spider Node:
Contains one or more Spider Tables
Uses the Enterprise Spider storage engine plugin for Spider Tables
Uses a MariaDB foreign data wrapper to query the Data Tables on the Data Nodes
The Data Nodes:
Contain Data Tables for one or more partitions of the Spider Table
Use a non-Spider storage engine for each Data Table, such as InnoDB or ColumnStore
For additional information, see "Spider Sharded Topology".
These requirements are for the Spider Sharded topology when deployed with MariaDB Enterprise Server.
One or more MariaDB Enterprise Server nodes must be deployed as Spider Nodes. The Spider Nodes contain Spider Tables.
One or more MariaDB Enterprise Server nodes must be deployed as Data Nodes. The Data Nodes contain Data Tables.
In alignment with MariaDB plc Engineering Policies, the Spider Sharded topology with MariaDB Enterprise Server is provided for:
AlmaLinux 8 (x86_64, ARM64)
AlmaLinux 9 (x86_64, ARM64)
Debian 11 (x86_64, ARM64)
Debian 12 (x86_64, ARM64)
Microsoft Windows (x86_64)
Red Hat Enterprise Linux 8 (x86_64, ARM64)
Red Hat Enterprise Linux 9 (x86_64, PPC64LE, ARM64)
Red Hat UBI 8 (x86_64, ARM64)
Rocky Linux 8 (x86_64, ARM64)
Rocky Linux 9 (x86_64, ARM64)
SUSE Linux Enterprise Server 15 (x86_64, ARM64)
Ubuntu 20.04 LTS (x86_64, ARM64)
Ubuntu 22.04 LTS (x86_64, ARM64)
Ubuntu 24.04 LTS (x86_64, ARM64)
Navigation in the procedure "Deploy Spider Sharded Topology":
Next: Step 1: Install Enterprise Spider
Enterprise Server 10.4
Enterprise Server 10.5
Enterprise Server 10.6
Enterprise Server 11.4
Shard tables for horizontal scalability
Spider Node uses Spider storage engine for Sharded Spider Tables
Sharded Spider Table is a partitioned "virtual" table
Spider uses MariaDB foreign data wrapper to query Data Tables on Data Nodes for each partition
Data Node uses non-Spider storage engine for Data Tables
Supports transactions
Enterprise Server 10.3+, Enterprise Spider
MariaDB Enterprise Spider:
Supports a MariaDB foreign data wrapper. The MariaDB foreign data wrapper can be used to replace the older Federated and FederatedX storage engines.
Supports an ODBC foreign data wrapper in MariaDB Enterprise Server 10.5 and later. The ODBC foreign data wrapper was backported to MariaDB Enterprise Server in a previous version. The ODBC foreign data wrapper is beta maturity. The maturity can be confirmed by querying the information_schema.SPIDER_WRAPPER_PROTOCOLS table.
The Spider Sharded topology:
Can be used to consolidate multiple tables on multiple MariaDB Enterprise Server nodes into a single "virtual" table on the Spider Node using the MariaDB foreign data wrapper.
Can be used to partition a large table across multiple MariaDB Enterprise Server nodes for horizontal scalability using the MariaDB foreign data wrapper.
Defines Sharded Spider Tables with MariaDB Enterprise Server's regular partitioning syntax.
In the Spider Sharded topology, a Spider Node contains one or more "virtual" Spider Tables. A Spider Table does not store data. When a Spider Table is queried in this topology, the Enterprise Spider storage engine uses a MariaDB foreign data wrapper to read from and write to Data Tables on Data Nodes. The data for the Spider Table is partitioned among the Data Nodes using the regular partitioning syntax.
The Spider Sharded topology consists of:
One MariaDB Enterprise Server node is a Spider Node
One or more MariaDB Enterprise Server nodes are Data Nodes
The Spider Node:
Contains one or more partitioned Spider Tables
Uses the Enterprise Spider storage engine plugin for Spider Tables
Uses a MariaDB foreign data wrapper to query the Data Tables on the Data Nodes
The Data Nodes:
Contain Data Tables for one or more partitions of the Spider Table
Use a non-Spider storage engine for each Data Table, such as InnoDB or ColumnStore
Data Node
A Data Node is a MariaDB Enterprise Server node that contains one or more Data Tables.
Data Table
A Data Table stores data for a Spider Table. When a Spider Table is queried, the Enterprise Spider storage engine uses the MariaDB foreign data wrapper to read from and write to the Data Table on a Data Node. The Data Table must be created on the Data Node with the same structure as the Spider Table. The Data Table must use a non-Spider storage engine, such as InnoDB or ColumnStore.
ODBC Data Source
An ODBC Data Source relies on an ODBC Driver and an ODBC Driver Manager to query an external data source.
ODBC Driver
An ODBC Driver is a library that integrates with an ODBC Driver Manager to query an external data source.
ODBC Driver Manager
An ODBC Driver Manager allows applications to use ODBC Drivers.
Spider Node
A Spider Node is a MariaDB Enterprise Server node that contains one or more Spider Tables.
The Spider Sharded topology can be used to split table data into multiple shards stored on remote MariaDB Enterprise Server nodes for horizontal scalability:
One MariaDB Enterprise Server node is configured as a Spider Node and accepts application queries.
One or more MariaDB Enterprise Server nodes are configured as Data Nodes and store shards.
The Spider Sharded topology can be used to implement a consolidated view of multiple remote databases:
One MariaDB Enterprise Server is configured as a Spider Node and provides a consolidated view using Spider Tables.
One or more MariaDB Enterprise Server nodes are configured as Data Nodes and contain the local data.
This page is: Copyright © 2025 MariaDB. All rights reserved.
This procedure incrementally deploys MariaDB Enterprise Spider on an existing MariaDB Enterprise Server deployment.
In the Spider Federated topology, a Spider Node contains one or more "virtual" Spider Tables. A Spider Table does not store data. When a Spider Table is queried, the Enterprise Spider storage engine uses a MariaDB foreign data wrapper to read from and write to a Data Table on a Data Node.
This procedure has 3 steps, which are executed in sequence.
This page provides an overview of the topology, requirements, and deployment procedure.
The topology described is representative of basic product capabilities. MariaDB products can be deployed to form other topologies, leverage advanced product capabilities, or combine the capabilities of multiple topologies.
If you have not yet deployed MariaDB Enterprise Server on the Spider Node and Data Node, first deploy a topology containing MariaDB Enterprise Server. Several topologies are documented.
Step 1
Step 2
Step 3
Customers can obtain support by submitting a support case.
The following components are deployed during this procedure:
MariaDB Enterprise Server: Modern SQL RDBMS with high availability, pluggable storage engines, hot online backups, and audit logging.
MariaDB Enterprise Spider: Storage engine used by Spider Tables to read from and write to Data Tables using the MariaDB foreign data wrapper.
Data Node
A Data Node is a MariaDB Enterprise Server node that contains one or more Data Tables.
Data Table
A Data Table stores data for a Spider Table. When a Spider Table is queried, the Enterprise Spider storage engine uses the MariaDB foreign data wrapper to read from and write to the Data Table on a Data Node. The Data Table must be created on the Data Node with the same structure as the Spider Table. The Data Table must use a non-Spider storage engine, such as InnoDB or ColumnStore.
ODBC Data Source
An ODBC Data Source relies on an ODBC Driver and an ODBC Driver Manager to query an external data source.
ODBC Driver
An ODBC Driver is a library that integrates with an ODBC Driver Manager to query an external data source.
ODBC Driver Manager
An ODBC Driver Manager allows applications to use ODBC Drivers.
Spider Node
A Spider Node is a MariaDB Enterprise Server node that contains one or more Spider Tables.
In the Spider Federated topology, a Spider Node contains one or more "virtual" Spider Tables. A Spider Table does not store data. When a Spider Table is queried, the Enterprise Spider storage engine uses a MariaDB foreign data wrapper to read from and write to a Data Table on a Data Node.
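As an illustration only (the server, database, and table names below are placeholders), a Federated Spider Table points at a single remote Data Table through the COMMENT table option:
CREATE TABLE spider_db.customers (
    customer_id INT NOT NULL PRIMARY KEY,
    customer_name VARCHAR(100)
) ENGINE=Spider
COMMENT='server "data_node_server", table "customers"';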
The Spider Federated topology consists of:
One MariaDB Enterprise Server node is a Spider Node
One MariaDB Enterprise Server node is a Data Node
The Spider Node:
Contains one or more Spider Tables
Uses the Enterprise Spider storage engine plugin for Spider Tables
Uses a MariaDB foreign data wrapper to query the Data Table on the Data Node
The Data Node:
Contains a Data Table for each Spider Table
Uses a non-Spider storage engine for each Data Table, such as InnoDB or ColumnStore
For additional information, see "Spider Federated Topology Operations".
These requirements are for the Spider Federated topology when deployed with MariaDB Enterprise Server.
One or more MariaDB Enterprise Server nodes must be deployed as Spider Nodes. The Spider Nodes contain Spider Tables.
One or more MariaDB Enterprise Server nodes must be deployed as Data Nodes. The Data Nodes contain Data Tables.
In alignment with MariaDB plc Engineering Policies, the Spider Federated topology with MariaDB Enterprise Server is provided for:
AlmaLinux 8 (x86_64, ARM64)
AlmaLinux 9 (x86_64, ARM64)
Debian 11 (x86_64, ARM64)
Debian 12 (x86_64, ARM64)
Microsoft Windows (x86_64)
Red Hat Enterprise Linux 8 (x86_64, ARM64)
Red Hat Enterprise Linux 9 (x86_64, PPC64LE, ARM64)
Red Hat UBI 8 (x86_64, ARM64)
Rocky Linux 8 (x86_64, ARM64)
Rocky Linux 9 (x86_64, ARM64)
SUSE Linux Enterprise Server 15 (x86_64, ARM64)
Ubuntu 20.04 LTS (x86_64, ARM64)
Ubuntu 22.04 LTS (x86_64, ARM64)
Ubuntu 24.04 LTS (x86_64, ARM64)
Navigation in the procedure "Deploy Spider Federated Topology":
Next: Step 1: Install Enterprise Spider
Enterprise Server 10.4
Enterprise Server 10.5
Enterprise Server 10.6
Enterprise Server 11.4
Read from and write to tables on remote ES nodes
Spider Node uses Spider storage engine for Federated Spider Tables
Federated Spider Table is a "virtual" table
Spider uses MariaDB foreign data wrapper to query Data Table on Data Node
Data Node uses non-Spider storage engine for Data Tables
Supports transactions
Enterprise Server 10.3+, Enterprise Spider
This page is: Copyright © 2025 MariaDB. All rights reserved.
Mandatory system variables and options for Single-Node Enterprise ColumnStore include:
character_set_server
Set this system variable to utf8
collation_server
Set this system variable to utf8_general_ci
columnstore_use_import_for_batchinsert
Set this system variable to ALWAYS to always use cpimport for LOAD DATA INFILE and INSERT INTO ... SELECT statements.
Configure Enterprise ColumnStore S3 Storage Manager to use S3-compatible storage by editing the /etc/columnstore/storagemanager.cnf configuration file:
The S3-compatible object storage options are configured under [S3]:
The bucket option must be set to the name of the bucket that you created in "Create an S3 Bucket".
The endpoint option must be set to the endpoint for the S3-compatible object storage.
The aws_access_key_id and aws_secret_access_key options must be set to the access key ID and secret access key for the S3-compatible object storage.
To use a specific IAM role, you must uncomment and set iam_role_name, sts_region, and sts_endpoint.
To use the IAM role assigned to an EC2 instance, you must uncomment ec2_iam_mode=enabled.
The local cache options are configured under [Cache]:
The cache_size option is set to 2 GB by default.
The path option is set to /var/lib/columnstore/storagemanager/cache by default.
Ensure that the specified path has sufficient storage space for the specified cache size.
Start and enable the MariaDB Enterprise Server service, so that it starts automatically upon reboot:
Start and enable the MariaDB Enterprise ColumnStore service, so that it starts automatically upon reboot:
Enterprise ColumnStore requires a mandatory utility user account to perform cross-engine joins and similar operations.
Create the user account with the CREATE USER statement:
Grant the user account SELECT privileges on all databases with the GRANT statement:
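For example, covering the two preceding steps and mirroring the util_user account shown in the command reference later on this page (the account name and password are illustrative):
CREATE USER 'util_user'@'127.0.0.1' IDENTIFIED BY 'util_user_passwd';
GRANT SELECT, PROCESS ON *.* TO 'util_user'@'127.0.0.1';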
Configure Enterprise ColumnStore to use the utility user:
Set the password:
For details about how to encrypt the password, see "Credentials Management for MariaDB Enterprise ColumnStore".
Passwords should meet your organization's password policies. If your MariaDB Enterprise Server instance has a password validation plugin installed, then the password should also meet the configured requirements.
The specific steps to configure the security module depend on the operating system.
Configure SELinux for Enterprise ColumnStore:
To configure SELinux, you have to install the packages required for audit2allow. On CentOS 7 and RHEL 7, install the following:
On RHEL 8, install the following:
Allow the system to run under load for a while to generate SELinux audit events.
After the system has taken some load, generate an SELinux policy from the audit events using audit2allow:
If no audit events were found, this will print the following:
If audit events were found, the new SELinux policy can be loaded using semodule:
Set SELinux to enforcing mode by setting SELINUX=enforcing in /etc/selinux/config.
For example, the file will usually look like this after the change:
Set SELinux to enforcing mode:
For information on how to create a profile, see How to create an AppArmor Profile on ubuntu.com.
Navigation in the Single-Node Enterprise ColumnStore topology with Object storage deployment procedure:
This page was step 3 of 5.
Next: Step 4: Test MariaDB Enterprise ColumnStore.
This page is: Copyright © 2025 MariaDB. All rights reserved.
This page details step 2 of the 6-step procedure "Deploy Galera Cluster Topology".
This step configures MariaDB Enterprise Servers to operate as Enterprise Cluster nodes and starts MariaDB Enterprise Cluster.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
The installation process might have started the Enterprise Server service. The service should be stopped prior to making configuration changes.
On each Enterprise Cluster node, stop the MariaDB Enterprise Server service:
By default, MariaDB Enterprise Cluster requires data-in-transit encryption to secure Galera replication traffic.
MariaDB Enterprise Cluster encrypts the data using the Transport Layer Security (TLS) protocol, the successor to the Secure Sockets Layer (SSL) protocol.
In MariaDB Enterprise Cluster 10.5 and earlier, TLS was supported, but not required. For backward compatibility, MariaDB Enterprise Cluster supports the Provider WSREP TLS Mode, which is equivalent to Enterprise Cluster's TLS implementation in ES 10.5 and earlier. For additional information, see "WSREP TLS Modes".
TLS configuration requires three files: a CA certificate, a server certificate, and a private key.
Self-signed certificates are supported. However, in environments where security is critical, it is recommended to use certificates signed by a trusted Certificate Authority (CA).
For additional information, see "Data-in-Transit Encryption".
MariaDB Enterprise Server installations support MariaDB Enterprise Cluster, powered by Galera. MariaDB Enterprise Cluster uses the Galera Enterprise 4 wsrep provider plugin. The path to the wsrep provider plugin must be configured using the wsrep_provider system variable.
Enterprise Cluster nodes require that you set the following system variables and options:
Edit a configuration file and set these system variables and options:
For additional information, see "MariaDB Enterprise Server Configuration Management".
MariaDB Enterprise Cluster can be deployed alongside MariaDB Replication. Deploying MariaDB Enterprise Cluster with MariaDB Replication enables integrating Enterprise Cluster with other products and clusters, for example as separate clusters in different data centers, or as a small dedicated write cluster with two larger dedicated read clusters.
For additional information, see "Replication Configuration".
When an Enterprise Cluster node starts, it checks the addresses in the wsrep_cluster_address system variable to establish connections with other nodes. The node does not become active until it finds a node that belongs to the Primary Component.
To start the cluster when all nodes are down, you must bootstrap the Primary Component on one node. This allows the other nodes to connect to a working cluster.
On one Enterprise Cluster node, when all nodes are down, bootstrap the Primary Component.
Bootstrap the Primary Component:
For additional information, see "Bootstrap a Galera Cluster".
Connect with MariaDB Client:
The sudo command is used here to connect to the Enterprise Server node using the root@localhost user account, which authenticates using the unix_socket authentication plugin. Other user accounts can be used by specifying the --user and --password command-line options.
Use the SHOW STATUS statement to check the wsrep_cluster_size status variable:
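A minimal check (the expected value on a freshly bootstrapped node is shown as a comment):
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
-- Expected on the bootstrapped node: wsrep_cluster_size = 1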
The Enterprise Cluster node launches as the Primary Component of a single-node cluster.
To add nodes to a cluster that has a Primary Component running, complete the following procedure for each Enterprise Cluster node to be added. Nodes should be added one at a time.
On the Enterprise Cluster node being added, start MariaDB Enterprise Server:
For additional information, see "Start and Stop Services".
On the Enterprise Cluster node being added, connect with MariaDB Client:
On the bootstrapped Enterprise Cluster node, use the SHOW STATUS statement to check the wsrep_cluster_size status variable:
On the Enterprise Cluster node being added, use the SHOW STATUS statement to check the wsrep_local_state_comment status variable. If wsrep_local_state_comment is SYNCED, the node has been successfully added, and the Add Node procedure can be repeated to add more nodes.
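A minimal sketch of both checks:
-- On the bootstrapped node, the cluster size should increase by one:
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
-- On the node being added, the node is fully joined once this reports SYNCED:
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';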
When each new Enterprise Cluster node joins the cluster, it requests the current cluster position. If the new node is missing transactions, it initiates either a State Snapshot Transfer (SST) or an Incremental State Transfer (IST) from a donor node to synchronize its data with the Primary Component. Depending on the value of wsrep_sst_method, the donor node may or may not be blocked during an SST.
When the new Enterprise Cluster node finishes its state transfer, the node updates the status variable to SYNCED. MaxScale registers the change and begins routing connections or queries to the new node.
Navigation in the procedure "Deploy Galera Cluster Topology":
This page was step 2 of 6.
Next: Step 3: Test MariaDB Enterprise Server
Deployment instructions for a single-node MariaDB Enterprise Server instance with the ColumnStore engine using local storage.
This procedure describes the deployment of the Single-Node Enterprise ColumnStore topology with Local storage.
MariaDB Enterprise ColumnStore 23.10 is a columnar storage engine for MariaDB Enterprise Server 10.6. Enterprise ColumnStore is best suited for Online Analytical Processing (OLAP) workloads.
This procedure has 5 steps, which are executed in sequence.
This page provides an overview of the topology, requirements, and deployment procedures.
Please read and understand this procedure before executing.
Customers can obtain support by submitting a support case.
The following components are deployed during this procedure:
The Single-Node Enterprise ColumnStore topology provides support for Online Analytical Processing (OLAP) workloads to MariaDB Enterprise Server.
The Enterprise ColumnStore node:
Receives queries from the application
Executes queries
Uses the local disk for storage.
Single-Node Enterprise ColumnStore does not provide high availability (HA) for Online Analytical Processing (OLAP). If you would like to deploy Enterprise ColumnStore with high availability, see the multi-node ColumnStore Object Storage topology.
These requirements are for the Single-Node Enterprise ColumnStore, when deployed with MariaDB Enterprise Server 10.6 and MariaDB Enterprise ColumnStore 23.10.
Debian 11 (x86_64, ARM64)
Debian 12 (x86_64, ARM64)
Red Hat Enterprise Linux 8 (x86_64, ARM64)
Red Hat Enterprise Linux 9 (x86_64, ARM64)
MariaDB Enterprise ColumnStore's minimum hardware requirements are not intended for production environments, but the minimum hardware requirements can be appropriate for development and test environments. For production environments, see the recommended hardware requirements instead.
The minimum hardware requirements are:
MariaDB Enterprise ColumnStore will refuse to start if the system has less than 3 GB of memory.
If Enterprise ColumnStore is started on a system with less memory, the following error message will be written to the ColumnStore system log called crit.log:
And the following error message will be raised to the client:
MariaDB Enterprise ColumnStore's recommended hardware requirements are intended for production analytics.
The recommended hardware requirements are:
MariaDB Enterprise Server packages are configured to read configuration files from different paths, depending on the operating system. Making custom changes to Enterprise Server default configuration files is not recommended because custom changes may be overwritten by other default configuration files that are loaded later.
To ensure that your custom changes will be read last, create a custom configuration file with the z- prefix in one of the include directories.
The systemctl command is used to start and stop the MariaDB Enterprise Server service.
Navigation in the Single-Node Enterprise ColumnStore topology with Local storage deployment procedure:
Next: Step 1: Install MariaDB Enterprise ColumnStore 23.10.
This page details step 4 of the 7-step procedure "Deploy Primary/Replica Topology".
This step tests MariaDB Enterprise Server.
Several actions require connection to MariaDB Enterprise Server. A command-line client (mariadb) was included with your ES installation. These instructions describe connection via Unix domain socket. Alternatively, a different client and connection method could be used.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
Use Systemd to test whether the MariaDB Enterprise Server service is running.
This action is performed on each Enterprise Server node.
Check if the MariaDB Enterprise Server service is running by executing the following:
If the service is not running on any node, start the service by executing the following on that node:
Use MariaDB Client to test the local connection to the Enterprise Server node.
This action is performed on each Enterprise Server node:
The sudo command is used here to connect to the Enterprise Server node using the root@localhost user account, which authenticates using the unix_socket authentication plugin. Other user accounts can be used by specifying the --user and --password command-line options.
Use the SHOW REPLICA STATUS statement to check that replication is running properly on the replica servers.
This action is performed on each replica server.
Execute the following:
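For example:
SHOW REPLICA STATUS\G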
If the Slave_IO_Running column is not Yes on any replica server, then check:
The network connectivity between the replica server and the primary server
The Last_IO_Error column for details on any errors
If the Slave_SQL_Running column is not Yes on any replica server, then check:
The GTID position in gtid_slave_pos
The Last_SQL_Error column for details on any errors
If both columns are not Yes on any replica server, then check:
The replication configuration on the replica server.
If you need to make any corrections, the slave threads can be restarted with STOP REPLICA followed by START REPLICA.
Use MariaDB Client to test DDL.
On the primary server, use the MariaDB Client to connect to the node:
Create a test database and table:
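For example, using the test database and contacts table shown in the command reference later on this page:
CREATE DATABASE IF NOT EXISTS test;
CREATE TABLE test.contacts (
    id INT PRIMARY KEY AUTO_INCREMENT,
    first_name VARCHAR(50),
    last_name VARCHAR(50),
    email VARCHAR(100)
);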
On each replica server, use the MariaDB Client to connect to the node:
Confirm that the database and table exist:
If the database or table do not exist on any node, then check the replication status on the node.
Use MariaDB Client to test DML.
On the primary server, use the MariaDB Client to connect to the node:
Insert sample data into the table created in the DDL test:
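For example:
INSERT INTO test.contacts (first_name, last_name, email)
VALUES
    ("Kai", "Devi", "kai.devi@example.com"),
    ("Lee", "Wang", "lee.wang@example.com");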
On each replica server, use the MariaDB Client to connect to the node:
Execute a query to retrieve the data:
If the data is not returned on any node, then check the replication status on the node.
Navigation in the procedure "Deploy Primary/Replica Topology":
This page was step 4 of 7.
Next: Step 5: Install MariaDB MaxScale
This page details step 5 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".
This step tests MariaDB Enterprise Server and MariaDB Enterprise ColumnStore 23.10.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Enterprise ColumnStore 23.10 includes a testS3Connection command to test the S3 configuration, permissions, and connectivity.
This action is performed on each Enterprise ColumnStore node.
Test the S3 configuration by executing the following:
If the testS3Connection command does not return OK, investigate the S3 configuration.
Use Systemd to test whether the MariaDB Enterprise Server service is running.
This action is performed on each Enterprise ColumnStore node.
Check if the MariaDB Enterprise Server service is running by executing the following:
If the service is not running on any node, start the service by executing the following on that node:
Use MariaDB Client to test the local connection to the Enterprise Server node.
This action is performed on each Enterprise ColumnStore node:
The sudo command is used here to connect to the Enterprise Server node using the root@localhost user account, which authenticates using the unix_socket authentication plugin. Other user accounts can be used by specifying the --user and --password command-line options.
Query the information_schema.PLUGINS table to confirm that the ColumnStore storage engine is loaded.
This action is performed on each Enterprise ColumnStore node.
Execute the following query:
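For example:
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_LIBRARY LIKE 'ha_columnstore%';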
The PLUGIN_STATUS column for each ColumnStore-related plugin should contain ACTIVE.
Use Systemd to test whether the CMAPI service is running.
This action is performed on each Enterprise ColumnStore node.
Check if the CMAPI service is running by executing the following:
If the service is not running on any node, start the service by executing the following on that node:
Use CMAPI to request the ColumnStore status. The API key needs to be provided as part of the X-API-key HTTP header.
This action is performed with the CMAPI service on the primary server.
Check the ColumnStore status using curl by executing the following:
Use MariaDB Client to test DDL.
On the primary server, use the MariaDB Client to connect to the node:
Create a test database and ColumnStore table:
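For example, using the test database and contacts table shown in the command reference later on this page:
CREATE DATABASE IF NOT EXISTS test;
CREATE TABLE IF NOT EXISTS test.contacts (
    first_name VARCHAR(50),
    last_name VARCHAR(50),
    email VARCHAR(100)
) ENGINE=ColumnStore;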
On each replica server, use the MariaDB Client to connect to the node:
Confirm that the database and table exist:
If the database or table do not exist on any node, then check the replication configuration.
Use MariaDB Client to test DML.
On the primary server, use the MariaDB Client to connect to the node:
Insert sample data into the table created in the DDL test:
On each replica server, use the MariaDB Client to connect to the node:
Execute a query to retrieve the data:
If the data is not returned on any node, check the ColumnStore status and the storage configuration.
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 5 of 9.
Next: Step 6: Install MariaDB MaxScale.
This page is: Copyright © 2025 MariaDB. All rights reserved.
This page details step 2 of the 3-step procedure "Deploy Spider Sharded Topology".
This step configures the Spider Node and Data Nodes and creates the Spider Table and Data Tables.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
The data node requires a user account that the Spider Node uses to connect.
On each Data Node, create the Spider user account for the Spider Node using the CREATE USER statement:
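For example, assuming the Spider Node's address is 192.0.2.1, as in the sample statements later on this page:
CREATE USER spider_user@192.0.2.1 IDENTIFIED BY "password";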
Privileges will be granted to the user account in a later part of this procedure.
On the Spider Node, confirm that the Spider user account can connect to the Data Node using MariaDB Client:
The Spider Node requires connection details for each Data Node.
On the Spider Node, create a server object to configure the connection details for each Data Node using the CREATE SERVER statement:
Create a server object to configure the connection details for the Data Node at the headquarters branch:
Create a server object to configure the connection details for the Data Node at the eastern branch:
Create a server object to configure the connection details for the Data Node at the western branch:
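For example, using the host addresses, port, and database names from the sample data later on this page (all values are illustrative):
CREATE SERVER hq_server
  FOREIGN DATA WRAPPER mariadb
  OPTIONS (
    HOST "192.0.2.2",
    PORT 5801,
    USER "spider_user",
    PASSWORD "password",
    DATABASE "hq_sales"
  );
CREATE SERVER eastern_server
  FOREIGN DATA WRAPPER mariadb
  OPTIONS (
    HOST "192.0.2.3",
    PORT 5801,
    USER "spider_user",
    PASSWORD "password",
    DATABASE "eastern_sales"
  );
CREATE SERVER western_server
  FOREIGN DATA WRAPPER mariadb
  OPTIONS (
    HOST "192.0.2.4",
    PORT 5801,
    USER "spider_user",
    PASSWORD "password",
    DATABASE "western_sales"
  );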
The Data Node runs MariaDB Enterprise Server, so the FOREIGN DATA WRAPPER is set to mariadb.
Using a server object for connection details is optional. Alternatively, the connection details for the Data Node can be specified in the COMMENT table option of the CREATE TABLE statement when creating the Spider Table.
When queries read and write to a Spider Table, Spider reads and writes to the Data Tables for each partition on the Data Nodes. The Data Tables must be created on the Data Nodes with the same structure as the Spider Table.
If your Data Tables already exist, grant the required privileges on them to the Spider user.
On each Data Node, create the Data Tables:
On the Data Node for the headquarters server, create a database and table and add sample data:
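For example, on the headquarters Data Node (the eastern and western Data Nodes follow the same pattern with their own databases and sample data):
CREATE DATABASE hq_sales;
CREATE SEQUENCE hq_sales.invoice_seq;
CREATE TABLE hq_sales.invoices (
    branch_id INT NOT NULL DEFAULT (1) CHECK (branch_id=1),
    invoice_id INT NOT NULL DEFAULT (NEXT VALUE FOR hq_sales.invoice_seq),
    customer_id INT,
    invoice_date DATETIME(6),
    invoice_total DECIMAL(13, 2),
    payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
    PRIMARY KEY(branch_id, invoice_id)
) ENGINE=InnoDB;
INSERT INTO hq_sales.invoices
    (customer_id, invoice_date, invoice_total, payment_method)
VALUES
    (1, '2020-05-10 12:35:10', 1087.23, 'CREDIT_CARD'),
    (2, '2020-05-10 14:17:32', 1508.57, 'WIRE_TRANSFER'),
    (3, '2020-05-10 14:25:16', 227.15, 'CASH');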
The Spider Node reads and writes to the Data Table using the server object and user account configured earlier on this page. The user account must have sufficient privileges on the Data Table.
On the Data Node for the eastern branch of the business, create a database and table and add sample data:
The Spider Node reads and writes to the Data Table using the server object and user account configured earlier on this page. The user account must have sufficient privileges on the Data Table.
On the Data Node for the western branch of the business, create a database and table and add sample data:
The Spider Node connects to the Data Nodes with the user account configured earlier on this page.
On each Data Node, grant the Spider user sufficient privileges to operate on the Data Table:
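For example, on the headquarters Data Node:
GRANT ALL PRIVILEGES ON hq_sales.invoices TO 'spider_user'@'192.0.2.1';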
By default, the Spider user also requires the CREATE TEMPORARY TABLES privilege on the database containing the Data Table. The CREATE TEMPORARY TABLES privilege is required, because Spider uses temporary tables to optimize read queries when Spider BKA Mode is 1.
Spider BKA Mode is configured using the following methods:
The session value is configured by setting the spider_bka_mode system variable on the Spider Node. The default value is -1. When the session value is -1, the value for each Spider Table is used.
The value for each Spider Table is configured by setting the bka_mode option in the COMMENT table option. When the bka_mode option is not set, the implicit value is 1.
The default value is -1, and the implicit Spider Table value is 1, so the default Spider BKA Mode is 1.
On the Data Node, grant the Spider user the CREATE TEMPORARY TABLES privilege on the database:
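For example, on the headquarters Data Node:
GRANT CREATE TEMPORARY TABLES ON hq_sales.* TO 'spider_user'@'192.0.2.1';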
The Spider Table must be created on the Spider Node with the same structure as the Data Tables. The Spider Table must have a partition for each Data Table.
On the Spider Node, create the Spider Table and reference the Data Node in the COMMENT table option:
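For example, using the server objects created earlier on this page:
CREATE DATABASE spider_sharded_sales;
CREATE TABLE spider_sharded_sales.invoices (
    branch_id INT NOT NULL,
    invoice_id INT NOT NULL,
    customer_id INT,
    invoice_date DATETIME(6),
    invoice_total DECIMAL(13, 2),
    payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
    PRIMARY KEY(branch_id, invoice_id)
) ENGINE=Spider
PARTITION BY LIST(branch_id) (
    PARTITION hq_partition VALUES IN (1) COMMENT = 'server "hq_server", table "invoices"',
    PARTITION eastern_partition VALUES IN (2) COMMENT = 'server "eastern_server", table "invoices"',
    PARTITION western_partition VALUES IN (3) COMMENT = 'server "western_server", table "invoices"'
);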
The COMMENT partition option is used to configure the Data Node and the Data Table for each partition. Set the server option to the server object configured for the partition earlier on this page. Set the table option to the Data Table for the partition.
An alternative syntax is available. When you don't want to create a server object, the connection details for the Data Nodes can be specified in the COMMENT partition option:
Navigation in the procedure "Deploy Spider Sharded Topology":
This page was step 2 of 3.
Next: Step 3: Test Spider Sharded Topology.
This page details step 3 of the 7-step procedure "Deploy Primary/Replica Topology".
This step configures and starts MariaDB Enterprise Server 11.4 to operate as a replica server in MariaDB Replication.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
The installation process might have started the Enterprise Server service. The service should be stopped prior to making configuration changes.
On each Enterprise Server node, stop the MariaDB Enterprise Server service:
Enterprise Server nodes require that you set the following system variables and options:
MariaDB Enterprise Server also supports group commit.
Writes to the primary server that are group committed or logged with a Global Transaction ID in different replication domains can be applied on the replica server using parallel threads to improve performance.
On each Enterprise Server node, edit a configuration file and set these system variables and options:
Set the server_id option to a value that is unique for each Enterprise Server node.
When deploying a new replica server to an existing system, back up the primary server and restore it on the replica server to initialize the database.
Use mariadb-backup to back up the primary server.
On the primary server, take a full backup:
Confirm successful completion of the backup operation.
On the primary server, prepare the backup:
Confirm successful completion of the prepare operation.
On the primary server, copy the backup directory to each replica server:
On the replica server, move the default data directory to another location:
On the replica server, use mariadb-backup to restore the backup to the data directory:
On the replica server, set the file permissions for the data directory:
Start MariaDB Enterprise Server. If the Enterprise Server process is already running, restart it to apply the changes from the configuration file.
For additional information, see "Start and Stop Services".
If the replica server was restored from a backup of the primary, set the GTID position.
Get the GTID position that corresponds to the restored backup. This can be found in the xtrabackup_binlog_info file.
The GTID position from the above output is 0-1-2001,1-2-5139.
Connect to the replica server:
Set the gtid_slave_pos system variable to the GTID position:
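For example, using the position from the sample output above:
SET GLOBAL gtid_slave_pos='0-1-2001,1-2-5139';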
Execute the CHANGE MASTER TO statement to configure the replica server to connect to the primary server at this position:
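For example:
CHANGE MASTER TO
    MASTER_USER = "repl",
    MASTER_HOST = "192.0.2.10",
    MASTER_PASSWORD = "repl_passwd",
    MASTER_USE_GTID = slave_pos;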
The above statement configures the replica server to connect to a primary server located at 192.0.2.10 using the repl user account. This account must first be configured on the primary server.
Use the START REPLICA statement to start replication:
Use the SHOW REPLICA STATUS statement to confirm replication is running:
Navigation in the procedure "Deploy Primary/Replica Topology":
This page was step 3 of 7.
Next: Step 4: Test MariaDB Enterprise Server


Deployment instructions for a single-node MariaDB Enterprise Server instance with the ColumnStore engine using S3-compatible object storage.
This procedure describes the deployment of the Single-Node Enterprise ColumnStore topology with Object storage.
MariaDB Enterprise ColumnStore 23.10 is a columnar storage engine for MariaDB Enterprise Server 10.6. Enterprise ColumnStore is best suited for Online Analytical Processing (OLAP) workloads.
This procedure has 5 steps, which are executed in sequence.
This page provides an overview of the topology, requirements, and deployment procedures.
Please read and understand this procedure before executing.
The unique_checks system variable in MariaDB Server can be used to disable unique checks, which can be useful in scenarios where unique constraints do not need to be enforced temporarily, such as to improve performance during bulk inserts.
MariaDB Server supports UNIQUE constraints to ensure that a column's value is unique within a table:
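For example (a minimal sketch; the table and column names are placeholders, and the current database is assumed to exist):
CREATE TABLE users (
    user_id INT PRIMARY KEY,
    email VARCHAR(100),
    UNIQUE KEY (email)  -- duplicate email values are rejected
);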
[maxscale]
threads = auto
admin_host = 0.0.0.0
admin_secure_gui = false$ sudo systemctl restart maxscale$ maxctrl create server node1 192.0.2.101
$ maxctrl create server node2 192.0.2.102
$ maxctrl create server node3 192.0.2.103$ maxctrl create monitor mdb_monitor mariadbmon \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
replication_user=repl \
replication_password='REPLICATION_USER_PASSWORD' \
--servers node1 node2 node3$ maxctrl create service connection_router_service readconnroute \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
router_options=slave \
--servers node1 node2 node3$ maxctrl create listener connection_router_service connection_router_listener 3308 \
protocol=MariaDBClient$ maxctrl create service query_router_service readwritesplit \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
--servers node1 node2 node3$ maxctrl create listener query_router_service query_router_listener 3307 \
protocol=MariaDBClient$ maxctrl start servicesSELECT * FROM spider_tab;SELECT *
FROM spider_tab s
JOIN innodb_tab i
ON s.id=i.id;INSERT INTO innodb_tab
SELECT * FROM spider_tab;[mariadb]
...
plugin_load_add = "ha_spider"INSTALL SONAME "ha_spider";SELECT * FROM information_schema.SPIDER_WRAPPER_PROTOCOLS;CREATE SERVER hq_server
FOREIGN DATA WRAPPER mariadb
OPTIONS (
HOST "192.0.2.2",
PORT 5801,
USER "spider_user",
PASSWORD "password",
DATABASE "hq_sales"
);
CREATE DATABASE spider_hq_sales;
CREATE TABLE spider_hq_sales.invoices (
branch_id INT NOT NULL,
invoice_id INT NOT NULL,
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(branch_id, invoice_id)
) ENGINE=Spider
COMMENT='server "hq_server", table "invoices"';SELECT *
FROM spider_tab s
JOIN local_tab l
ON s.id=l.id;INSERT INTO destination_tab
SELECT * FROM spider_tab;SELECT * FROM spider_tab;[mariadb]
...
plugin_load_add = "ha_spider"INSTALL SONAME "ha_spider";SELECT * FROM information_schema.SPIDER_WRAPPER_PROTOCOLS;INSTALL SONAME 'ha_spider';
CREATE DATABASE spider_test;
USE spider_test;
CREATE OR REPLACE TABLE contacts (
contact_id BIGINT NOT NULL PRIMARY KEY,
first_name VARCHAR(255) NOT NULL,
last_name VARCHAR(255) NOT NULL,
email VARCHAR(255) NOT NULL,
phone VARCHAR(20),
customer_id BIGINT
) ENGINE=SPIDER COMMENT='wrapper "odbc", dsn "ORARDS", table "CONTACTS"';[maxscale]
threads = auto
admin_host = 0.0.0.0
admin_secure_gui = false$ sudo systemctl restart maxscale$ maxctrl create server mcs1 192.0.2.101$ maxctrl create server mcs2 192.0.2.102$ maxctrl create server mcs3 192.0.2.103$ maxctrl create monitor columnstore_monitor mariadbmon \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
replication_user=repl \
replication_password='REPLICATION_USER_PASSWORD' \
--servers mcs1 mcs2 mcs3$ maxctrl create service connection_router_service readconnroute \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
router_options=slave \
--servers mcs1 mcs2 mcs3$ maxctrl create listener connection_router_service connection_router_listener 3308 \
protocol=MariaDBClient$ maxctrl create service query_router_service readwritesplit \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
--servers mcs1 mcs2 mcs3$ maxctrl create listener query_router_service query_router_listener 3307 \
protocol=MariaDBClient$ maxctrl start services[mariadb]
...
plugin_load_add = "ha_spider"INSTALL SONAME "ha_spider";SELECT * FROM information_schema.SPIDER_WRAPPER_PROTOCOLS;CREATE SERVER hq_server
FOREIGN DATA WRAPPER mariadb
OPTIONS (
HOST "192.0.2.2",
PORT 5801,
USER "spider_user",
PASSWORD "password",
DATABASE "hq_sales"
);
CREATE SERVER eastern_server
FOREIGN DATA WRAPPER mariadb
OPTIONS (
HOST "192.0.2.3",
PORT 5801,
USER "spider_user",
PASSWORD "password",
DATABASE "eastern_sales"
);
CREATE SERVER western_server
FOREIGN DATA WRAPPER mariadb
OPTIONS (
HOST "192.0.2.4",
PORT 5801,
USER "spider_user",
PASSWORD "password",
DATABASE "western_sales"
);
CREATE DATABASE spider_sharded_sales;
CREATE TABLE spider_sharded_sales.invoices (
branch_id INT NOT NULL,
invoice_id INT NOT NULL,
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(branch_id, invoice_id)
) ENGINE=Spider
PARTITION BY LIST(branch_id) (
PARTITION hq_partition VALUES IN (1) COMMENT = 'server "hq_server", table "invoices"',
PARTITION eastern_partition VALUES IN (2) COMMENT = 'server "eastern_server", table "invoices"',
PARTITION western_partition VALUES IN (3) COMMENT = 'server "western_server", table "invoices"'
);[mariadb]
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci[ObjectStorage]
…
service = S3
…
[S3]
bucket = your_columnstore_bucket_name
endpoint = your_s3_endpoint
aws_access_key_id = your_s3_access_key_id
aws_secret_access_key = your_s3_secret_key
# iam_role_name = your_iam_role
# sts_region = your_sts_region
# sts_endpoint = your_sts_endpoint
# ec2_iam_mode = enabled
[Cache]
cache_size = your_local_cache_size
path = your_local_cache_path$ sudo systemctl start mariadb
$ sudo systemctl enable mariadb$ sudo systemctl start mariadb-columnstore
$ sudo systemctl enable mariadb-columnstoreCREATE USER 'util_user'@'127.0.0.1'
IDENTIFIED BY 'util_user_passwd';GRANT SELECT, PROCESS ON *.*
TO 'util_user'@'127.0.0.1';$ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
$ sudo mcsSetConfig CrossEngineSupport Port 3306
$ sudo mcsSetConfig CrossEngineSupport User util_user$ sudo mcsSetConfig CrossEngineSupport Password util_user_passwd$ sudo yum install policycoreutils policycoreutils-python$ sudo yum install policycoreutils python3-policycoreutils policycoreutils-python-utils$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
Nothing to do$ sudo semodule -i mariadb_local.pp# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
$ sudo setenforce enforcing




Rocky Linux 9 (x86_64, ARM64)
Ubuntu 20.04 LTS (x86_64, ARM64)
Ubuntu 22.04 LTS (x86_64, ARM64)
Ubuntu 24.04 LTS (x86_64, ARM64)
Step 1
Step 2
Step 3
Step 4
Step 5
MariaDB Enterprise Server
Modern SQL RDBMS with high availability, pluggable storage engines, hot online backups, and audit logging.
Columnar Storage Engine
Optimized for Online Analytical Processing (OLAP) workloads
Enterprise ColumnStore node (minimum): 4+ cores, 16+ GB memory
Enterprise ColumnStore node (recommended): 64+ cores, 128+ GB memory
Configuration File
Configuration files (such as /etc/my.cnf) can be used to set system variables and options. The server must be restarted to apply changes made to configuration files.
Command-line
The server can be started with command-line options that set system variables and options.
SQL
Users can set system variables that support dynamic changes on-the-fly using the SET statement.
CentOS, Red Hat Enterprise Linux (RHEL): /etc/my.cnf.d/z-custom-mariadb.cnf
Debian, Ubuntu: /etc/mysql/mariadb.conf.d/z-custom-mariadb.cnf
Start
sudo systemctl start mariadb
Stop
sudo systemctl stop mariadb
Restart
sudo systemctl restart mariadb
Enable during startup
sudo systemctl enable mariadb
Disable during startup
sudo systemctl disable mariadb
Status
sudo systemctl status mariadb

This page is: Copyright © 2025 MariaDB. All rights reserved.
bind_address
The network socket Enterprise Server listens on for incoming TCP/IP client connections. On Debian or Ubuntu, this system variable must be set to override the 127.0.0.1 default configuration.
log_bin
Enables binary logging and sets the name of the binlog file.
server_id
Unique numeric identifier for each Enterprise Server node.
slave_parallel_threads
Sets the number of threads the replica server uses to apply replication events in parallel. Use a non-zero value to enable Parallel Replication.
slave_parallel_mode
Sets how the replica server applies replicated transactions.
This page is: Copyright © 2025 MariaDB. All rights reserved.
MariaDB Enterprise ColumnStore includes a testS3Connection command to test the S3 configuration, permissions, and connectivity.
Test the S3 configuration by executing the following:
If the testS3Connection command does not return OK, investigate the S3 configuration.
Use Systemd to test whether the MariaDB Enterprise Server service is running.
Check if the MariaDB Enterprise Server service is running by executing the following:
If the service is not running, start the service by executing the following:
Use MariaDB Client to test the local connection to the Enterprise Server node:
The sudo command is used here to connect to the Enterprise Server node using the root@localhost user account, which authenticates using the unix_socket authentication plugin. Other user accounts can be used by specifying the --user and --password command-line options.
Query the information_schema.PLUGINS table to confirm that the ColumnStore storage engine is loaded.
Execute the following query:
The PLUGIN_STATUS column for each ColumnStore-related plugin should contain ACTIVE.
Use the SHOW REPLICA STATUS statement to check the status of MariaDB Replication:
Create a test database, if it does not exist:
Create a ColumnStore table:
Add sample data into the table:
Read data from table:
Create an InnoDB table:
Add data to the table:
Perform a cross-engine join:
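A minimal sketch of these steps, with placeholder table names (cross-engine joins require the utility user configured earlier):
CREATE DATABASE IF NOT EXISTS test;
-- ColumnStore table for the analytical data
CREATE TABLE test.cs_orders (
    order_id INT,
    amount DECIMAL(13,2)
) ENGINE=ColumnStore;
INSERT INTO test.cs_orders VALUES (1, 100.00), (2, 250.50);
SELECT * FROM test.cs_orders;
-- InnoDB table for the transactional data
CREATE TABLE test.in_customers (
    order_id INT,
    customer_name VARCHAR(100)
) ENGINE=InnoDB;
INSERT INTO test.in_customers VALUES (1, 'Kai Devi'), (2, 'Lee Wang');
-- Cross-engine join between the InnoDB and ColumnStore tables
SELECT c.customer_name, o.amount
FROM test.in_customers c
JOIN test.cs_orders o ON c.order_id = o.order_id;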
Connect to the server using MariaDB Client using the root@localhost user account:
Create the databases for the InnoDB and ColumnStore tables using the CREATE DATABASE statement:
Create the InnoDB versions of the HTAP tables using the CREATE TABLE statement:
Confirm that the tables were replicated using the SHOW TABLES statement:
The replication initially creates empty InnoDB tables. These tables must then be converted to ColumnStore tables and populated with an initial copy of the data:
Insert data into the InnoDB versions of the HTAP tables using the INSERT statement:
Confirm that the data was replicated using the SELECT statement:
Create an InnoDB table that will not be replicated:
Confirm that the table was not replicated:
Create a ColumnStore table that will not be replicated:
Confirm that the table was not replicated:
Navigation in the procedure "Deploy HTAP Topology". This page was step 4 of 4.
This procedure is complete.
This page is: Copyright © 2025 MariaDB. All rights reserved.
MariaDB Server uses unique indexes to speed up query execution and enforce unique constraints
MariaDB Server supports single column and composite (multi-column) unique indexes
MariaDB Community Server can have up to 64 total indexes for a given table.
MariaDB Server can have up to 128 total indexes for a given table.
InnoDB stores unique indexes in the same tablespace file as the clustered index and data.
Unique indexes are B+ trees, which are very efficient for searching for exact values, performing range scans, and checking uniqueness.
If no primary key is defined for a table, then InnoDB will use the table's first NOT NULL unique index as the table's primary key.
If no primary key or NOT NULL unique index is defined for a table, then InnoDB will automatically create a primary key called GEN_CLUST_INDEX, using an internal 48-bit DB_ROW_ID column as the key. Replication with such tables can be very slow, especially when binlog_format is MIXED or ROW.
MariaDB Server provides the unique_checks system variable, which can be used to disable unique checks.
When unique checks are disabled, the InnoDB change buffer is used for inserts into unique indexes, and duplicate values will not be detected.
Disabling unique checks can speed up bulk data loads, but it is dangerous to do so.
Let's create an InnoDB table with a single column unique index after confirming that the default storage engine is InnoDB:
Connect to the server using MariaDB Client:
Confirm that the default storage engine is InnoDB by checking the default_storage_engine system variable using the SHOW SESSION VARIABLES statement:
If the database does not exist, then create the database for the table using the CREATE DATABASE statement:
Create the table using the CREATE TABLE statement and specify the unique index with the UNIQUE INDEX() clause:
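For example (placeholder database, table, and column names; the database is assumed to have been created in the previous step):
CREATE TABLE inventory.products (
    product_id INT PRIMARY KEY,
    product_sku VARCHAR(50),
    UNIQUE INDEX (product_sku)
) ENGINE=InnoDB;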
For a single column unique index, the unique index can also be specified with the UNIQUE column option:
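For example, the equivalent definition using the UNIQUE column option (again with placeholder names):
CREATE TABLE inventory.products_alt (
    product_id INT PRIMARY KEY,
    product_sku VARCHAR(50) UNIQUE
) ENGINE=InnoDB;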
Let's create an InnoDB table with a composite (multi-column) unique index after confirming that the default storage engine is InnoDB:
Connect to the server using MariaDB Client:
Confirm that the default storage engine is InnoDB by checking the default_storage_engine system variable using the SHOW SESSION VARIABLES statement:
If the database does not exist, then create the database for the table using the CREATE DATABASE statement:
Create the table using the CREATE TABLE statement and specify the unique index with the UNIQUE INDEX() clause:
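For example (placeholder names):
CREATE TABLE inventory.product_prices (
    product_id INT NOT NULL,
    region_id INT NOT NULL,
    price DECIMAL(13,2),
    -- the combination of product_id and region_id must be unique
    UNIQUE INDEX (product_id, region_id)
) ENGINE=InnoDB;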
Let's create an InnoDB table with a unique index on a single column prefix after confirming that the default storage engine is InnoDB:
Connect to the server using MariaDB Client:
Confirm that the default storage engine is InnoDB by checking the default_storage_engine system variable using the SHOW SESSION VARIABLES statement:
If the database does not exist, then create the database for the table using the CREATE DATABASE statement:
Create the table using the CREATE TABLE statement and specify the unique index with the UNIQUE INDEX() clause:
The unique index is specified with the product_description(1000) prefix, so only the first 1000 characters of the product_description column for each row will be indexed and checked for uniqueness.
Let's create an InnoDB table without a unique index, and then add a unique index to it:
Connect to the server using MariaDB Client:
Confirm that the default storage engine is InnoDB by checking the default_storage_engine system variable using the SHOW SESSION VARIABLES statement:
If the database does not exist, then create the database for the table using the CREATE DATABASE statement:
Create the table without a primary key using the CREATE TABLE statement:
Alter the table using the ALTER TABLE statement and specify the new unique index with the ADD UNIQUE INDEX() clause:
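A minimal sketch of the two preceding steps (placeholder names):
CREATE TABLE inventory.suppliers (
    supplier_id INT NOT NULL,
    supplier_code VARCHAR(50)
) ENGINE=InnoDB;
ALTER TABLE inventory.suppliers
    ADD UNIQUE INDEX (supplier_code);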
Let's drop the unique index from the table created in the Creating an InnoDB Table with a Single Column Unique Index section:
Obtain the name of the index by joining the information_schema.INNODB_SYS_INDEXES, information_schema.INNODB_SYS_TABLES, and information_schema.INNODB_SYS_FIELDS tables:
Alter the table using the ALTER TABLE statement and specify the DROP INDEX clause:
Let's rebuild the unique index in the table created in the Creating an InnoDB Table with a Single Column Unique Index section:
Obtain the name of the index by joining the information_schema.INNODB_SYS_INDEXES, information_schema.INNODB_SYS_TABLES, and information_schema.INNODB_SYS_FIELDS tables:
Alter the table using the ALTER TABLE statement and specify the DROP INDEX clause:
Alter the table using the ALTER TABLE statement and specify the unique index with the ADD UNIQUE INDEX() clause:
This page is: Copyright © 2025 MariaDB. All rights reserved.
Apr 30 21:54:35 a1ebc96a2519 PrimProc[1004]: 35.668435 |0|0|0| C 28 CAL0000: Error total memory available is less than 3GB.ERROR 1815 (HY000): Internal error: System is not ready yet. Please try again.$ systemctl status mariadb$ sudo systemctl start mariadb$ sudo mariadb
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 38
Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>SHOW REPLICA STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.0.2.1
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mariadb-bin.000001
Read_Master_Log_Pos: 645
Relay_Log_File: li282-189-relay-bin.000002
Relay_Log_Pos: 946
Relay_Master_Log_File: mariadb-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 645
Relay_Log_Space: 1259
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
Master_SSL_Crl:
Master_SSL_Crlpath:
Using_Gtid: Slave_Pos
Gtid_IO_Pos: 0-1-2
Replicate_Do_Domain_Ids:
Replicate_Ignore_Domain_Ids:
Parallel_Mode: optimistic
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Slave_DDL_Groups: 2
Slave_Non_Transactional_Groups: 0
Slave_Transactional_Groups: 0$ sudo mariadbCREATE DATABASE IF NOT EXISTS test;
CREATE TABLE test.contacts (
id INT PRIMARY KEY AUTO_INCREMENT,
first_name VARCHAR(50),
last_name VARCHAR(50),
email VARCHAR(100)
);$ sudo mariadbSHOW CREATE TABLE test.contacts\G;$ sudo mariadbINSERT INTO test.contacts (first_name, last_name, email)
VALUES
("Kai", "Devi", "kai.devi@example.com"),
("Lee", "Wang", "lee.wang@example.com");$ sudo mariadbSELECT * FROM test.contacts;
+----+------------+-----------+----------------------+
| id | first_name | last_name | email |
+----+------------+-----------+----------------------+
| 1 | Kai | Devi | kai.devi@example.com |
| 2 | Lee | Wang | lee.wang@example.com |
+----+------------+-----------+----------------------+$ sudo testS3ConnectionStorageManager[26887]: Using the config file found at /etc/columnstore/storagemanager.cnf
StorageManager[26887]: S3Storage: S3 connectivity & permissions are OK
S3 Storage Manager Configuration OK$ systemctl status mariadb$ sudo systemctl start mariadb$ sudo mariadb
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 38
Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_LIBRARY LIKE 'ha_columnstore%';
+---------------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+---------------------+---------------+
| Columnstore | ACTIVE |
| COLUMNSTORE_COLUMNS | ACTIVE |
| COLUMNSTORE_TABLES | ACTIVE |
| COLUMNSTORE_FILES | ACTIVE |
| COLUMNSTORE_EXTENTS | ACTIVE |
+---------------------+---------------+$ systemctl status mariadb-columnstore-cmapi$ sudo systemctl start mariadb-columnstore-cmapi$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .
{
"timestamp": "2020-12-15 00:40:34.353574",
"192.0.2.1": {
"timestamp": "2020-12-15 00:40:34.362374",
"uptime": 11467,
"dbrm_mode": "master",
"cluster_mode": "readwrite",
"dbroots": [
"1"
],
"module_id": 1,
"services": [
{
"name": "workernode",
"pid": 19202
},
{
"name": "controllernode",
"pid": 19232
},
{
"name": "PrimProc",
"pid": 19254
},
{
"name": "ExeMgr",
"pid": 19292
},
{
"name": "WriteEngine",
"pid": 19316
},
{
"name": "DMLProc",
"pid": 19332
},
{
"name": "DDLProc",
"pid": 19366
}
]
},
"192.0.2.2": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"192.0.2.3": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"num_nodes": 3
}$ sudo mariadbCREATE DATABASE IF NOT EXISTS test;
CREATE TABLE IF NOT EXISTS test.contacts (
first_name VARCHAR(50),
last_name VARCHAR(50),
email VARCHAR(100)
) ENGINE = ColumnStore;$ sudo mariadbSHOW CREATE TABLE test.contacts\G;$ sudo mariadbINSERT INTO test.contacts (first_name, last_name, email)
VALUES
("Kai", "Devi", "kai.devi@example.com"),
("Lee", "Wang", "lee.wang@example.com");$ sudo mariadbSELECT * FROM test.contacts;
+------------+-----------+----------------------+
| first_name | last_name | email |
+------------+-----------+----------------------+
| Kai | Devi | kai.devi@example.com |
| Lee | Wang | lee.wang@example.com |
+------------+-----------+----------------------+CREATE USER spider_user@192.0.2.1 IDENTIFIED BY "password";$ mariadb --user spider_user --host 192.0.2.2 --passwordCREATE SERVER hq_server
FOREIGN DATA WRAPPER mariadb
OPTIONS (
HOST "192.0.2.2",
PORT 5801,
USER "spider_user",
PASSWORD "password",
DATABASE "hq_sales"
);CREATE SERVER eastern_server
FOREIGN DATA WRAPPER mariadb
OPTIONS (
HOST "192.0.2.3",
PORT 5801,
USER "spider_user",
PASSWORD "password",
DATABASE "eastern_sales"
);CREATE SERVER western_server
FOREIGN DATA WRAPPER mariadb
OPTIONS (
HOST "192.0.2.4",
PORT 5801,
USER "spider_user",
PASSWORD "password",
DATABASE "western_sales"
);CREATE DATABASE hq_sales;
CREATE SEQUENCE hq_sales.invoice_seq;
CREATE TABLE hq_sales.invoices (
branch_id INT NOT NULL DEFAULT (1) CHECK (branch_id=1),
invoice_id INT NOT NULL DEFAULT (NEXT VALUE FOR hq_sales.invoice_seq),
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(branch_id, invoice_id)
) ENGINE=InnoDB;
INSERT INTO hq_sales.invoices
(customer_id, invoice_date, invoice_total, payment_method)
VALUES
(1, '2020-05-10 12:35:10', 1087.23, 'CREDIT_CARD'),
(2, '2020-05-10 14:17:32', 1508.57, 'WIRE_TRANSFER'),
(3, '2020-05-10 14:25:16', 227.15, 'CASH');CREATE DATABASE eastern_sales;
CREATE SEQUENCE eastern_sales.invoice_seq;
CREATE TABLE eastern_sales.invoices (
branch_id INT NOT NULL DEFAULT (2) CHECK (branch_id=2),
invoice_id INT NOT NULL DEFAULT (NEXT VALUE FOR eastern_sales.invoice_seq),
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(branch_id, invoice_id)
) ENGINE=InnoDB;
INSERT INTO eastern_sales.invoices
(customer_id, invoice_date, invoice_total, payment_method)
VALUES
(2, '2020-05-10 12:31:00', 1351.04, 'CREDIT_CARD'),
(2, '2020-05-10 12:45:27', 162.11, 'WIRE_TRANSFER'),
(4, '2020-05-10 13:11:23', 350.00, 'CASH');CREATE DATABASE western_sales;
CREATE SEQUENCE western_sales.invoice_seq;
CREATE TABLE western_sales.invoices (
branch_id INT NOT NULL DEFAULT (3) CHECK (branch_id=3),
invoice_id INT NOT NULL DEFAULT (NEXT VALUE FOR western_sales.invoice_seq),
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(branch_id, invoice_id)
) ENGINE=InnoDB;
INSERT INTO western_sales.invoices
(customer_id, invoice_date, invoice_total, payment_method)
VALUES
(5, '2020-05-10 12:31:00', 111.50, 'CREDIT_CARD'),
(8, '2020-05-10 12:45:27', 1509.23, 'WIRE_TRANSFER'),
(3, '2020-05-10 13:11:23', 3301.66, 'CASH');GRANT ALL PRIVILEGES ON hq_sales.invoices TO 'spider_user'@'192.0.2.1';GRANT CREATE TEMPORARY TABLES ON hq_sales.* TO 'spider_user'@'192.0.2.1';CREATE DATABASE spider_sharded_sales;
CREATE TABLE spider_sharded_sales.invoices (
branch_id INT NOT NULL,
invoice_id INT NOT NULL,
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(branch_id, invoice_id)
) ENGINE=Spider
PARTITION BY LIST(branch_id) (
PARTITION hq_partition VALUES IN (1) COMMENT = 'server "hq_server", table "invoices"',
PARTITION eastern_partition VALUES IN (2) COMMENT = 'server "eastern_server", table "invoices"',
PARTITION western_partition VALUES IN (3) COMMENT = 'server "western_server", table "invoices"'
);CREATE TABLE spider_hq_sales.invoices_alternate (
branch_id INT NOT NULL,
invoice_id INT NOT NULL,
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(branch_id, invoice_id)
) ENGINE=Spider
COMMENT='table "invoices", host "192.0.2.2", port "5801", user "spider_user", password "user_password", database "hq_sales"';$ sudo systemctl stop mariadb[mariadb]
bind_address = 0.0.0.0
log_bin = mariadb-bin.log
server_id = 1$ sudo mariadb-backup --backup \
--user=mariadb-backup_user \
--password=mariadb-backup_passwd \
--target-dir=/data/backup/replica_backup$ sudo mariadb-backup --prepare \
--target-dir=/data/backup/replica_backup$ sudo rsync -av /data/backup/replica_backup 192.0.2.11:/data/backup/$ sudo mv /var/lib/mysql /var/lib/mysql_backup$ sudo mariadb-backup --copy-back \
--target-dir=/data/backup/replica_backup$ sudo chown -R mysql:mysql /var/lib/mysql$ systemctl start mariadb$ cat xtrabackup_binlog_info
mariadb-bin.000096 568 0-1-2001,1-2-5139$ sudo mariadbSET GLOBAL gtid_slave_pos='0-1-2001,1-2-5139';CHANGE MASTER TO
MASTER_USER = "repl",
MASTER_HOST = "192.0.2.10",
MASTER_PASSWORD = "repl_passwd",
MASTER_USE_GTID = slave_pos;START REPLICA;SHOW REPLICA STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.0.2.10
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mariadb-bin.000001
Read_Master_Log_Pos: 645
Relay_Log_File: li282-189-relay-bin.000002
Relay_Log_Pos: 946
Relay_Master_Log_File: mariadb-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 645
Relay_Log_Space: 1259
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
Master_SSL_Crl:
Master_SSL_Crlpath:
Using_Gtid: Slave_Pos
Gtid_IO_Pos: 0-1-2
Replicate_Do_Domain_Ids:
Replicate_Ignore_Domain_Ids:
Parallel_Mode: optimistic
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Slave_DDL_Groups: 2
Slave_Non_Transactional_Groups: 0
Slave_Transactional_Groups: 0
$ sudo testS3Connection
StorageManager[26887]: Using the config file found at /etc/columnstore/storagemanager.cnf
StorageManager[26887]: S3Storage: S3 connectivity & permissions are OK
S3 Storage Manager Configuration OK
$ systemctl status mariadb
$ sudo systemctl start mariadb
$ sudo mariadb
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 38
Server version: MariaDB-Enterprise MariaDB Enterprise Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_LIBRARY LIKE 'ha_columnstore%';
+---------------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+---------------------+---------------+
| Columnstore | ACTIVE |
| COLUMNSTORE_COLUMNS | ACTIVE |
| COLUMNSTORE_TABLES | ACTIVE |
| COLUMNSTORE_FILES | ACTIVE |
| COLUMNSTORE_EXTENTS | ACTIVE |
+---------------------+---------------+SHOW REPLICA STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: localhost
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mariadb-bin.000002
Read_Master_Log_Pos: 695
Relay_Log_File: mysqld-relay-bin.000002
Relay_Log_Pos: 996
Relay_Master_Log_File: mariadb-bin.000002
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table: columnstore_db.htap%
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 695
Relay_Log_Space: 1306
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
Master_SSL_Crl:
Master_SSL_Crlpath:
Using_Gtid: Slave_Pos
Gtid_IO_Pos: 0-1-7
Replicate_Do_Domain_Ids:
Replicate_Ignore_Domain_Ids:
Parallel_Mode: optimistic
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Slave_DDL_Groups: 1
Slave_Non_Transactional_Groups: 0
Slave_Transactional_Groups: 1
CREATE DATABASE IF NOT EXISTS test;
CREATE TABLE IF NOT EXISTS test.contacts (
first_name VARCHAR(50),
last_name VARCHAR(50),
email VARCHAR(100)
) ENGINE=ColumnStore;INSERT INTO test.contacts (first_name, last_name, email)
VALUES
("Kai", "Devi", "kai.devi@example.com"),
("Lee", "Wang", "lee.wang@example.com");SELECT * FROM test.contacts;
+------------+-----------+----------------------+
| first_name | last_name | email |
+------------+-----------+----------------------+
| Kai | Devi | kai.devi@example.com |
| Lee | Wang | lee.wang@example.com |
+------------+-----------+----------------------+CREATE TABLE test.addresses (
email VARCHAR(100),
street_address VARCHAR(255),
city VARCHAR(100),
state_code VARCHAR(2)
) ENGINE = InnoDB;INSERT INTO test.addresses (email, street_address, city, state_code)
VALUES
("kai.devi@example.com", "1660 Amphibious Blvd.", "Redwood City", "CA"),
("lee.wang@example.com", "32620 Little Blvd", "Redwood City", "CA");SELECT name AS "Name", addr AS "Address"
FROM (SELECT CONCAT(first_name, " ", last_name) AS name,
email FROM test.contacts) AS contacts
INNER JOIN (SELECT CONCAT(street_address, ", ", city, ", ", state_code) AS addr,
email FROM test.addresses) AS addr
WHERE contacts.email = addr.email;
+----------+-----------------------------------------+
| Name | Address |
+----------+-----------------------------------------+
| Kai Devi | 1660 Amphibious Blvd., Redwood City, CA |
| Lee Wang | 32620 Little Blvd, Redwood City, CA |
+----------+-----------------------------------------+$ sudo mariadbCREATE DATABASE columnstore_db;
CREATE DATABASE innodb_db;USE innodb_db;
CREATE TABLE htap_test1 (
id INT
) ENGINE = InnoDB;
CREATE TABLE htap_test2 (
id INT
) ENGINE = InnoDB;SHOW TABLES FROM columnstore_db;
+--------------------------+
| Tables_in_columnstore_db |
+--------------------------+
| htap_test1 |
| htap_test2 |
+--------------------------+DROP TABLE IF EXISTS columnstore_db.htap_test1;
CREATE TABLE columnstore_db.htap_test1
ENGINE=COLUMNSTORE
SELECT * FROM innodb_db.htap_test1;
DROP TABLE IF EXISTS columnstore_db.htap_test2;
CREATE TABLE columnstore_db.htap_test2
ENGINE=COLUMNSTORE
SELECT * FROM innodb_db.htap_test2;USE innodb_db;
INSERT INTO htap_test1
VALUES (100);
INSERT INTO htap_test2
VALUES (200);SELECT * FROM columnstore_db.htap_test1;
+------+
| id |
+------+
| 100 |
+------+
SELECT * FROM columnstore_db.htap_test2;
+------+
| id |
+------+
| 200 |
+------+USE innodb_db;
CREATE TABLE transactional_test1 (
id INT
) ENGINE = InnoDB;SHOW TABLES FROM columnstore_db LIKE 'transactional_%';
Empty set (0.02 sec)USE columnstore_db;
CREATE TABLE analytical_test1 (
id INT
) ENGINE = ColumnStore;SHOW TABLES FROM innodb_db LIKE 'analytical_%';
Empty set (0.02 sec)$ mariadb --user=rootSHOW SESSION VARIABLES
LIKE 'default_storage_engine';+------------------------+--------+
| Variable_name | Value |
+------------------------+--------+
| default_storage_engine | InnoDB |
+------------------------+--------+CREATE DATABASE hq_sales;CREATE TABLE hq_sales.customers (
customer_id BIGINT AUTO_INCREMENT NOT NULL,
customer_name VARCHAR(500),
customer_email VARCHAR(200),
PRIMARY KEY(customer_id),
UNIQUE INDEX(customer_email)
);CREATE TABLE hq_sales.customers (
customer_id BIGINT AUTO_INCREMENT NOT NULL,
customer_name VARCHAR(500),
customer_email VARCHAR(200) UNIQUE,
PRIMARY KEY(customer_id)
);$ mariadb --user=rootSHOW SESSION VARIABLES
LIKE 'default_storage_engine';+------------------------+--------+
| Variable_name | Value |
+------------------------+--------+
| default_storage_engine | InnoDB |
+------------------------+--------+CREATE DATABASE hq_sales;CREATE TABLE hq_sales.invoices (
invoice_id BIGINT UNSIGNED NOT NULL,
branch_id INT NOT NULL,
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(invoice_id),
UNIQUE INDEX(invoice_date, customer_id, branch_id)
);$ mariadb --user=rootSHOW SESSION VARIABLES
LIKE 'default_storage_engine';+------------------------+--------+
| Variable_name | Value |
+------------------------+--------+
| default_storage_engine | InnoDB |
+------------------------+--------+CREATE DATABASE hq_sales;CREATE TABLE hq_sales.products (
product_id BIGINT AUTO_INCREMENT NOT NULL,
product_name VARCHAR(500),
product_brand VARCHAR(500),
product_description TEXT,
PRIMARY KEY(product_id),
UNIQUE INDEX(product_description(1000))
);$ mariadb --user=rootSHOW SESSION VARIABLES
LIKE 'default_storage_engine';+------------------------+--------+
| Variable_name | Value |
+------------------------+--------+
| default_storage_engine | InnoDB |
+------------------------+--------+CREATE DATABASE hq_sales;CREATE TABLE hq_sales.customers (
customer_id BIGINT AUTO_INCREMENT NOT NULL,
customer_name VARCHAR(500),
customer_email VARCHAR(200),
PRIMARY KEY(customer_id)
);ALTER TABLE hq_sales.customers ADD UNIQUE INDEX (customer_email);SELECT isi.NAME AS index_name, isf.NAME AS index_column
FROM information_schema.INNODB_SYS_INDEXES isi
JOIN information_schema.INNODB_SYS_TABLES ist
ON isi.TABLE_ID = ist.TABLE_ID
JOIN information_schema.INNODB_SYS_FIELDS isf
ON isi.INDEX_ID = isf.INDEX_ID
WHERE ist.NAME = 'hq_sales/customers'
ORDER BY isf.INDEX_ID, isf.POS;+----------------+----------------+
| index_name | index_column |
+----------------+----------------+
| PRIMARY | customer_id |
| customer_email | customer_email |
+----------------+----------------+ALTER TABLE hq_sales.customers DROP INDEX customer_email;SELECT isi.NAME AS index_name, isf.NAME AS index_column
FROM information_schema.INNODB_SYS_INDEXES isi
JOIN information_schema.INNODB_SYS_TABLES ist
ON isi.TABLE_ID = ist.TABLE_ID
JOIN information_schema.INNODB_SYS_FIELDS isf
ON isi.INDEX_ID = isf.INDEX_ID
WHERE ist.NAME = 'hq_sales/customers'
ORDER BY isf.INDEX_ID, isf.POS;+----------------+----------------+
| index_name | index_column |
+----------------+----------------+
| PRIMARY | customer_id |
| customer_email | customer_email |
+----------------+----------------+ALTER TABLE hq_sales.customers DROP INDEX customer_email;ALTER TABLE hq_sales.customers ADD UNIQUE INDEX (customer_email);Sets the Group Communications back-end (usually gcomm:), followed by a comma-separated list of IP addresses or domain names for each Cluster Node. It is best practice to include all Enterprise Cluster nodes in this list.
Sets the logical name for the cluster. Must be the same on all Cluster Nodes.
wsrep_on
Enables Enterprise Cluster.
Path to the Galera Enterprise 4 wsrep provider plugin. In MariaDB Enterprise Cluster 11.4, the path is /usr/lib/galera/libgalera_enterprise_smm.so on Debian and Ubuntu and /usr/lib64/galera/libgalera_enterprise_smm.so on CentOS, RHEL, and SLES.
Accepts options for the Galera Enterprise 4 wsrep provider plugin. Multiple options can be specified, separated by a semicolon (;). For example, it can be used to configure the Galera cache size or to enable TLS. For a complete list of available options, see wsrep_provider_options.
ca-cert.pem
Certificate Authority (CA) file.
server-cert.pem
X.509 certificate file.
server-key.pem
X.509 key file.
bind_address
The network socket Enterprise Cluster listens on for incoming TCP/IP client connections. On Debian or Ubuntu, this system variable must be set to override the 127.0.0.1 default configuration.
binlog_format
Enterprise Cluster requires use of the ROW Binary Log format.
innodb_autoinc_lock_mode
Enterprise Cluster requires an auto-increment lock mode of 2.
ssl_ca
Certificate Authority (CA) file in PEM format.
ssl_cert
X.509 certificate file in PEM format.
ssl_key
X.509 key file in PEM format.
This page is: Copyright © 2025 MariaDB. All rights reserved.
Step 1
Step 2
Step 3
Step 4
Step 5
Customers can obtain support by submitting a support case.
The following components are deployed during this procedure:
Modern SQL RDBMS with high availability, pluggable storage engines, hot online backups, and audit logging.
Columnar Storage Engine
Optimized for Online Analytical Processing (OLAP) workloads
S3-compatible object storage
The Single-Node Enterprise ColumnStore topology provides support for Online Analytical Processing (OLAP) workloads to MariaDB Enterprise Server.
The Enterprise ColumnStore node:
Receives queries from the application
Executes queries
Uses S3-compatible object storage for data
Single-Node Enterprise ColumnStore does not provide high availability (HA) for Online Analytical Processing (OLAP). If you would like to deploy Enterprise ColumnStore with high availability, see Enterprise ColumnStore with Object storage.
These requirements are for the Single-Node Enterprise ColumnStore, when deployed with MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.
Debian 11 (x86_64, ARM64)
Debian 12 (x86_64, ARM64)
Red Hat Enterprise Linux 8 (x86_64, ARM64)
Red Hat Enterprise Linux 9 (x86_64, PPC64LE, ARM64)
Red Hat UBI 8 (x86_64, ARM64)
Rocky Linux 8 (x86_64, ARM64)
Rocky Linux 9 (x86_64, ARM64)
Ubuntu 20.04 LTS (x86_64, ARM64)
Ubuntu 22.04 LTS (x86_64, ARM64)
Ubuntu 24.04 LTS (x86_64, ARM64)
MariaDB Enterprise ColumnStore's minimum hardware requirements are not intended for production environments, but the minimum hardware requirements can be appropriate for development and test environments. For production environments, see the recommended hardware requirements instead.
The minimum hardware requirements are:
Enterprise ColumnStore node
4+ cores
16+ GB
MariaDB Enterprise ColumnStore will refuse to start if the system has less than 3 GB of memory.
If Enterprise ColumnStore is started on a system with less memory, the following error message will be written to the ColumnStore system log called crit.log:
And the following error message will be raised to the client:
MariaDB Enterprise ColumnStore's recommended hardware requirements are intended for production analytics.
The recommended hardware requirements are:
Enterprise ColumnStore node
64+ cores
128+ GB
Single-node Enterprise ColumnStore with Object Storage requires the following storage type:
Single-node Enterprise ColumnStore with Object Storage uses S3-compatible object storage to store data.
Single-node Enterprise ColumnStore with Object Storage uses S3-compatible object storage to store data.
Many S3-compatible object storage services exist. MariaDB Corporation cannot make guarantees about all S3-compatible object storage services, because different services provide different functionality.
For the preferred S3-compatible object storage providers that provide cloud and hardware solutions, see the following sections:
The use of non-cloud and non-hardware providers is at your own risk.
If you have any questions about using specific S3-compatible object storage with MariaDB Enterprise ColumnStore, contact us.
Amazon Web Services (AWS) S3
Google Cloud Storage
Azure Storage
Alibaba Cloud Object Storage Service
Cloudian HyperStore
Dell EMC
Seagate Lyve Rack
Quantum ActiveScale
IBM Cloud Object Storage
MariaDB Enterprise Server Configuration Management
Configuration File
Configuration files (such as /etc/my.cnf) can be used to set system variables and options. The server must be restarted to apply changes made to configuration files.
Command-line
The server can be started with command-line options that set system variables and options.
SQL
Users can set system variables that support dynamic changes on-the-fly using the SET statement, as shown in the example below.
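For example, max_connections is a dynamic system variable, so it can be changed at runtime without a server restart (a minimal sketch; the value shown is only illustrative):
-- Change the value for the running server; it is not persisted to a configuration file.
SET GLOBAL max_connections = 500;
-- Confirm the new value.
SHOW GLOBAL VARIABLES LIKE 'max_connections';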
MariaDB Enterprise Server packages are configured to read configuration files from different paths, depending on the operating system. Making custom changes to Enterprise Server default configuration files is not recommended because custom changes may be overwritten by other default configuration files that are loaded later.
To ensure that your custom changes will be read last, create a custom configuration file with the z- prefix in one of the include directories.
CentOS
Red Hat Enterprise Linux (RHEL)
/etc/my.cnf.d/z-custom-mariadb.cnf
Debian
Ubuntu
/etc/mysql/mariadb.conf.d/z-custom-mariadb.cnf
The systemctl command is used to start and stop the MariaDB Enterprise Server service.
Start
sudo systemctl start mariadb
Stop
sudo systemctl stop mariadb
Restart
sudo systemctl restart mariadb
Enable during startup
sudo systemctl enable mariadb
Disable during startup
sudo systemctl disable mariadb
Status
sudo systemctl status mariadb
Navigation in the Single-Node Enterprise ColumnStore topology with Object storage deployment procedure:
Next: Step 1: Install MariaDB Enterprise ColumnStore.
This page is: Copyright © 2025 MariaDB. All rights reserved.
Spider Node
A Spider Table is a virtual table that does not store data. When a Spider Table is queried, the Enterprise Spider storage engine uses foreign data wrappers to read from and write to Data Tables on Data Nodes or ODBC Data Sources.
Spider Table
A Spider Table is a virtual table that does not store data. When a Spider Table is queried, the Enterprise Spider storage engine uses foreign data wrappers to read from and write to Data Tables on Data Nodes or ODBC Data Sources.
This page details step 3 of the 4-step procedure "Deploy HTAP Topology".
This step starts and configures MariaDB Enterprise Server and MariaDB Enterprise ColumnStore 23.10.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
The installation process might have started some of the ColumnStore services. The services should be stopped prior to making configuration changes.
On each Enterprise ColumnStore node, stop the MariaDB Enterprise Server service:
On each Enterprise ColumnStore node, stop the MariaDB Enterprise ColumnStore service:
On each Enterprise ColumnStore node, stop the CMAPI service:
On each Enterprise ColumnStore node, configure Enterprise Server.
Configure Enterprise ColumnStore S3 Storage Manager to use S3-compatible storage by editing the /etc/columnstore/storagemanager.cnf configuration file:
The S3-compatible object storage options are configured under [S3]:
The bucket option must be set to the name of the bucket that you created in "Create an S3 Bucket".
The endpoint option must be set to the endpoint for the S3-compatible object storage.
The aws_access_key_id and aws_secret_access_key options must be set to the access key ID and secret access key for the S3-compatible object storage.
To use a specific IAM role, you must uncomment and set iam_role_name, sts_region, and sts_endpoint.
The local cache options are configured under [Cache]:
The cache_size option is set to 2 GB by default.
The path option is set to /var/lib/columnstore/storagemanager/cache by default.
Ensure that the specified path has sufficient storage space for the specified cache size.
Start and enable the MariaDB Enterprise Server service, so that it starts automatically upon reboot:
Start and enable the MariaDB Enterprise ColumnStore service, so that it starts automatically upon reboot:
For additional information, see "".
The HTAP topology requires several user accounts.
Enterprise ColumnStore requires a mandatory utility user account. By default, it connects to the server using the root user with no password. MariaDB Enterprise Server 10.6 will reject this login attempt by default, so you will need to configure Enterprise ColumnStore to use a different user account and password and create this user account on Enterprise Server.
On the Enterprise ColumnStore node, create the user account with the CREATE USER statement:
On the Enterprise ColumnStore node, grant the user account SELECT privileges on all databases with the GRANT statement:
Configure Enterprise ColumnStore to use the utility user:
Set the password:
For details about how to encrypt the password, see "".
Passwords should meet your organization's password policies. If your MariaDB Enterprise Server instance has a password validation plugin installed, then the password should also meet the configured requirements.
Enterprise HTAP uses MariaDB replication to replicate writes between InnoDB tables and ColumnStore tables.
Create a replication user and grant it the required privileges:
Use the CREATE USER statement to create replication users for each replica server:
Grant the user account several global privileges with the GRANT statement.
Set the GTID position by setting the gtid_slave_pos system variable. If this is a new deployment, then it would be set to the empty string:
Use the CHANGE MASTER TO statement to configure the server to replicate from itself starting from this position:
Start replication using the START REPLICA statement:
Confirm that replication is working using the SHOW REPLICA STATUS statement:
The specific steps to configure the security module depend on the operating system.
Configure SELinux for Enterprise ColumnStore:
To configure SELinux, you have to install the packages required for audit2allow. On CentOS 7 and RHEL 7, install the following:
On RHEL 8, install the following:
Allow the system to run under load for a while to generate SELinux audit events.
After the system has taken some load, generate an SELinux policy from the audit events using audit2allow:
If no audit events were found, this will print the following:
If audit events were found, the new SELinux policy can be loaded using semodule:
Set SELinux to enforcing mode by setting SELINUX=enforcing in /etc/selinux/config.
For example, the file will usually look like this after the change:
Set SELinux to enforcing mode:
For information on how to create a profile, see the AppArmor documentation on ubuntu.com.
Navigation in the procedure "Deploy HTAP Topology".
This page was step 3 of 4.
Next: Step 4: Test MariaDB Enterprise Server.
Using an AUTO_INCREMENT column as the primary key in MariaDB is beneficial for ensuring unique row identification and efficient data retrieval. This approach automatically generates a unique integer for each new row, simplifying primary key management. It is especially useful in tables where data insertion occurs frequently, as it facilitates quick indexing and minimizes storage overhead due to its numeric nature. This practice helps maintain optimal performance in database operations, particularly when combined with clustered indexing in InnoDB.
MariaDB Server supports PRIMARY KEY constraints to uniquely identify rows:
There can only be a single primary key for a given table.
InnoDB uses the primary key as a clustered index, which means that InnoDB stores table data in the order determined by the primary key.
Primary key indexes are B+ trees, which are very efficient for searching for exact values, performing range scans, and checking uniqueness.
If no primary key is defined for a table, then InnoDB will use the table's first NOT NULL unique index as the table's primary key.
If your table does not have a column or a set of columns that could act as a natural primary key, then you can define a single AUTO_INCREMENT column, which can serve as the table's primary key. See for more details.
If your table does not have a column or a set of columns that could act as a natural primary key, then you can use a sequence to generate an integer value to use as the table's primary key. Sequences were first added in MariaDB Server 10.3 and MariaDB Community Server 10.3. See for more details.
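For illustration, the hypothetical tables below show both approaches, reusing the hq_sales database from the examples in this section; the orders_auto, orders_seq, and order_seq names are placeholders:
-- AUTO_INCREMENT generates a unique integer for each inserted row.
CREATE TABLE hq_sales.orders_auto (
   order_id BIGINT UNSIGNED AUTO_INCREMENT NOT NULL,
   order_total DECIMAL(13, 2),
   PRIMARY KEY(order_id)
);
-- A sequence can supply the default value for the primary key column instead.
CREATE SEQUENCE hq_sales.order_seq;
CREATE TABLE hq_sales.orders_seq (
   order_id BIGINT UNSIGNED NOT NULL DEFAULT (NEXT VALUE FOR hq_sales.order_seq),
   order_total DECIMAL(13, 2),
   PRIMARY KEY(order_id)
);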
Let's create an InnoDB table with a primary key after confirming that the default storage engine is InnoDB:
Connect to the server using MariaDB Client:
Confirm that the default storage engine is InnoDB by checking the default_storage_engine system variable using the SHOW SESSION VARIABLES statement:
If the database does not exist, then create the database for the table using the CREATE DATABASE statement:
Create the table using the CREATE TABLE statement and specify the primary key with the PRIMARY KEY() clause:
For a single column primary key, the primary key can also be specified with the PRIMARY KEY column option:
Let's create an InnoDB table with a composite (multi-column) primary key after confirming that the default storage engine is InnoDB:
Connect to the server using MariaDB Client:
Confirm that the default storage engine is InnoDB by checking the default_storage_engine system variable using the SHOW SESSION VARIABLES statement:
If the database does not exist, then create the database for the table using the CREATE DATABASE statement:
Create the table using the CREATE TABLE statement and specify the primary key with the PRIMARY KEY() clause:
Let's create an InnoDB table without a primary key, and then add a primary key to it:
Connect to the server using MariaDB Client:
Confirm that the default storage engine is InnoDB by checking the default_storage_engine system variable using the SHOW SESSION VARIABLES statement:
If the database does not exist, then create the database for the table using the CREATE DATABASE statement:
Create the table without a primary key using the CREATE TABLE statement:
Alter the table using the ALTER TABLE statement and specify the new primary key with the ADD PRIMARY KEY() clause:
Let's drop the primary key from the table created in the previous section:
Alter the table using the ALTER TABLE statement and specify the DROP PRIMARY KEY clause:
Let's change the primary key of the table created in the previous section:
Alter the table using the ALTER TABLE statement and specify the DROP PRIMARY KEY clause to drop the old primary key, and specify the new primary key with the ADD PRIMARY KEY() clause:
For best performance, every InnoDB table should have a primary key. It is possible to find tables without a primary key using a basic SELECT statement against the information_schema tables.
Let's create an InnoDB table without a primary key, and then use a SELECT statement to confirm that it does not have one:
Connect to the server using MariaDB Client:
Confirm that the default storage engine is InnoDB by checking the default_storage_engine system variable using the SHOW SESSION VARIABLES statement:
If the database does not exist, then create the database for the table using the CREATE DATABASE statement:
Create the table without a primary key using the CREATE TABLE statement:
Query the information_schema.TABLES and information_schema.KEY_COLUMN_USAGE tables to find InnoDB tables that do not have a primary key:
To add a primary key, alter the table using the ALTER TABLE statement, and specify the primary key with the ADD PRIMARY KEY() clause:
A primary key uniquely identifies every row. Therefore, if a second row is inserted with an identical value, it will fail.
Let's try to insert two identical primary key values into the table created in the previous section:
Insert a row with the INSERT statement:
Insert a second row that has the same primary key value with the INSERT statement:
This will fail with the ER_DUP_ENTRY error code:
Fix the problem by inserting the row with a unique primary key value:
To easily generate unique values for a primary key, consider using an AUTO_INCREMENT column or a sequence.
Deploy MariaDB Community Server
These instructions detail the deployment of MariaDB Community Server 10.5 in a Single Standalone Server configuration on a range of supported Operating Systems.
These instructions detail how to deploy a single-node row database, which is suited for a transactional or OLTP workload that does not require high availability (HA). This deployment type is generally for non-production use cases, such as for development and testing.
$ sudo systemctl stop mariadb[mariadb]
bind_address = 0.0.0.0
binlog_format = ROW
innodb_autoinc_lock_mode = 2
wsrep_cluster_address = gcomm://192.0.2.101,192.0.2.102,192.0.2.103
wsrep_cluster_name = example-cluster
wsrep_on = ON
# wsrep provider path for Debian and Ubuntu:
wsrep_provider = /usr/lib/galera/libgalera_enterprise_smm.so
# wsrep provider path for CentOS, RHEL, and SLES:
# wsrep_provider = /usr/lib64/galera/libgalera_enterprise_smm.so
wsrep_provider_options = "gcache.size=2G;gcs.fc_limit=128"
# TLS Configuration
ssl_ca = /path/to/ca-cert.pem
ssl_cert = /path/to/server-cert.pem
ssl_key = /path/to/server-key.pem
$ sudo galera_new_cluster
$ sudo mariadb
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 1 |
+--------------------+-------+$ sudo systemctl start mariadb$ sudo mariadbSHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 2 |
+--------------------+-------+SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';
+---------------------------+--------+
| Variable_name | Value |
+---------------------------+--------+
| wsrep_local_state_comment | Synced |
+---------------------------+--------+
Apr 30 21:54:35 a1ebc96a2519 PrimProc[1004]: 35.668435 |0|0|0| C 28 CAL0000: Error total memory available is less than 3GB.
ERROR 1815 (HY000): Internal error: System is not ready yet. Please try again.
For best performance, users should always create primary keys for their tables, and primary keys should be short, because the primary key columns are duplicated in every secondary index record.
This page is: Copyright © 2025 MariaDB. All rights reserved.
$ mariadb --user=rootSHOW SESSION VARIABLES
LIKE 'default_storage_engine';
+------------------------+--------+
| Variable_name | Value |
+------------------------+--------+
| default_storage_engine | InnoDB |
+------------------------+--------+CREATE DATABASE hq_sales;CREATE TABLE hq_sales.invoices (
invoice_id BIGINT UNSIGNED NOT NULL,
branch_id INT NOT NULL,
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(invoice_id)
);CREATE TABLE hq_sales.invoices (
invoice_id BIGINT UNSIGNED NOT NULL PRIMARY KEY,
branch_id INT NOT NULL,
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD')
);$ mariadb --user=rootSHOW SESSION VARIABLES
LIKE 'default_storage_engine';
+------------------------+--------+
| Variable_name | Value |
+------------------------+--------+
| default_storage_engine | InnoDB |
+------------------------+--------+CREATE DATABASE hq_sales;CREATE TABLE hq_sales.invoices (
invoice_id BIGINT UNSIGNED NOT NULL,
branch_id INT NOT NULL,
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(invoice_id, branch_id)
);$ mariadb --user=rootSHOW SESSION VARIABLES
LIKE 'default_storage_engine';
+------------------------+--------+
| Variable_name | Value |
+------------------------+--------+
| default_storage_engine | InnoDB |
+------------------------+--------+CREATE DATABASE hq_sales;CREATE TABLE hq_sales.invoices (
invoice_id BIGINT UNSIGNED NOT NULL,
branch_id INT NOT NULL,
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD')
);ALTER TABLE hq_sales.invoices ADD PRIMARY KEY (invoice_id);ALTER TABLE hq_sales.invoices DROP PRIMARY KEY;ALTER TABLE hq_sales.invoices
DROP PRIMARY KEY,
ADD PRIMARY KEY (invoice_id, branch_id);$ mariadb --user=rootSHOW SESSION VARIABLES
LIKE 'default_storage_engine';
+------------------------+--------+
| Variable_name | Value |
+------------------------+--------+
| default_storage_engine | InnoDB |
+------------------------+--------+CREATE DATABASE hq_sales;CREATE TABLE hq_sales.invoices (
invoice_id BIGINT UNSIGNED NOT NULL,
branch_id INT NOT NULL,
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD')
);SELECT t.TABLE_SCHEMA, t.TABLE_NAME
FROM information_schema.TABLES AS t
LEFT JOIN information_schema.KEY_COLUMN_USAGE AS c
ON t.TABLE_SCHEMA = c.CONSTRAINT_SCHEMA
AND t.TABLE_NAME = c.TABLE_NAME
AND c.CONSTRAINT_NAME = 'PRIMARY'
WHERE t.TABLE_SCHEMA != 'mysql'
AND t.ENGINE = 'InnoDB'
AND c.CONSTRAINT_NAME IS NULL;+--------------+------------+
| TABLE_SCHEMA | TABLE_NAME |
+--------------+------------+
| hq_sales | invoices |
+--------------+------------+ALTER TABLE hq_sales.invoices ADD PRIMARY KEY (invoice_id);INSERT INTO hq_sales.invoices
(invoice_id, branch_id, customer_id, invoice_date, invoice_total, payment_method)
VALUES
(1, 1, 1, '2020-05-10 12:35:10', 1087.23, 'CREDIT_CARD');INSERT INTO hq_sales.invoices
(invoice_id, branch_id, customer_id, invoice_date, invoice_total, payment_method)
VALUES
(1, 1, 2, '2020-05-10 14:17:32', 1508.57, 'WIRE_TRANSFER');ERROR 1062 (23000): Duplicate entry '1' for key 'PRIMARY'INSERT INTO hq_sales.invoices
(invoice_id, branch_id, customer_id, invoice_date, invoice_total, payment_method)
VALUES
(2, 1, 2, '2020-05-10 14:17:32', 1508.57, 'WIRE_TRANSFER');Set this option to the file you want to use for the Binary Log. Setting this option enables binary logging.
Set this system variable to ON.
Set this option to the file you want to use for the Relay Logs. Setting this option enables relay logging.
Set this option to the file you want to use to index Relay Log filenames.
Sets the numeric Server ID for this MariaDB Enterprise Server. The value set on this option must be unique to each node.
iam_role_name, sts_region, and sts_endpoint
To use the IAM role assigned to an EC2 instance, you must uncomment ec2_iam_mode=enabled.
Set this to the name of the database to replicate from InnoDB to ColumnStore.
Set this to STATEMENT for HTAP.
Set this system variable to utf8
Set this system variable to utf8_general_ci
columnstore_use_import_for_batchinsert
Set this system variable to ALWAYS to always use cpimport for LOAD DATA INFILE and INSERT...SELECT statements.
Set this system variable to ON.
This page is: Copyright © 2025 MariaDB. All rights reserved.
These instructions detail the deployment of the following MariaDB Community Server components:
It is a general purpose storage engine.
It is ACID-compliant.
It is performant.
It is the transactional component of MariaDB's single stack solution.
row database
A database where all columns of each row are stored together.
Best suited for transactional and OLTP workloads.
Also known as a "row-oriented database".
MariaDB Corporation provides package repositories for YUM (RHEL, CentOS), APT (Debian, Ubuntu), and ZYpp (SLES).
Configure the YUM package repository.
Prefix the version with mariadb- and pass the version string to the --mariadb-server-version flag to mariadb_repo_setup. The following directions reference 11.4.
To configure YUM package repositories:
Checksums of the various releases of the mariadb_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.
Install MariaDB Community Server and package dependencies:
Configure MariaDB.
Installation only loads MariaDB Community Server to the system. MariaDB Community Server requires configuration before the database server is ready for use.
See .
Configure the APT package repository.
Prefix the version with mariadb- and pass the version string to the --mariadb-server-version flag to mariadb_repo_setup. The following directions reference 11.4.
To configure APT package repositories:
Checksums of the various releases of the mariadb_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.
Install MariaDB Community Server and package dependencies:
Configure MariaDB.
Installation only loads MariaDB Community Server to the system. MariaDB Community Server requires configuration before the database server is ready for use.
See .
Configure the ZYpp package repository.
Prefix the version with mariadb- and pass the version string to the --mariadb-server-version flag to mariadb_repo_setup. The following directions reference 11.4.
To configure ZYpp package repositories:
Checksums of the various releases of the mariadb_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.
Install MariaDB Community Server and package dependencies:
Configure MariaDB.
Installation only loads MariaDB Community Server to the system. MariaDB Community Server requires configuration before the database server is ready for use.
See .
MariaDB Community Server can be configured in the following ways:
System variables and options can be set in a configuration file (such as /etc/my.cnf). MariaDB Community Server must be restarted to apply changes made to the configuration file.
System variables and options can be set on the command-line.
If a system variable supports dynamic changes, then it can be set on-the-fly using the SET statement.
MariaDB's packages include several bundled configuration files. It is also possible to create custom configuration files.
On RHEL, CentOS, and SLES, MariaDB's packages bundle the following configuration files:
/etc/my.cnf
/etc/my.cnf.d/client.cnf
/etc/my.cnf.d/mysql-clients.cnf
/etc/my.cnf.d/server.cnf
And on RHEL, CentOS, and SLES, custom configuration files from the following directories are read by default:
/etc/my.cnf.d/
On Debian and Ubuntu, MariaDB's packages bundle the following configuration files:
/etc/mysql/my.cnf
/etc/mysql/mariadb.cnf
/etc/mysql/mariadb.conf.d/50-client.cnf
/etc/mysql/mariadb.conf.d/50-mysql-clients.cnf
/etc/mysql/mariadb.conf.d/50-mysqld_safe.cnf
/etc/mysql/mariadb.conf.d/50-server.cnf
/etc/mysql/mariadb.conf.d/60-galera.cnf
And on Debian and Ubuntu, custom configuration files from the following directories are read by default:
/etc/mysql/conf.d/
/etc/mysql/mariadb.conf.d/
Determine which system variables and options you need to configure.
Useful system variables and options for MariaDB Community Server include:
System Variable/Option
Description
Sets the path to the data directory. MariaDB Community Server writes data files to this directory, including tablespaces, logs, and schemas. Change it to use a non-standard location or to start the Server on a different data directory for testing.
Sets the local TCP/IP address on which MariaDB Community Server listens for incoming connections. When testing on a local system, bind the address to the local host at 127.0.0.1 to prevent network access.
Sets the port MariaDB Community Server listens on. Use this system variable to use a non-standard port or when running multiple Servers on the same host for testing.
Choose a configuration file in which to configure your system variables and options.
It is not recommended to make custom changes to one of the bundled configuration files. Instead, it is recommended to create a custom configuration file in one of the included directories. Configuration files in included directories are read in alphabetical order. If you want your custom configuration file to override the bundled configuration files, then it is a good idea to prefix the custom configuration file's name with a string that will be sorted last, such as z-.
On RHEL, CentOS, and SLES, a good custom configuration file would be: /etc/my.cnf.d/z-custom-my.cnf
Set your system variables and options in the configuration file.
They need to be set in a group that will be read by MariaDB Server processes, such as [mariadb] or [server].
For example:
MariaDB Community Server includes configuration to start, stop, restart, enable/disable on boot, and check the status of the Server using the operating system default process management system.
For distributions that use systemd (most supported OSes), you can manage the Server process using the systemctl command:
Start
sudo systemctl start mariadb
Stop
sudo systemctl stop mariadb
Restart
sudo systemctl restart mariadb
Enable during startup
sudo systemctl enable mariadb
Disable during startup
sudo systemctl disable mariadb
Status
sudo systemctl status mariadb
When MariaDB Community Server is up and running on your system, you should test that it is working and there weren't any issues during startup.
Connect to the Server with the mariadb client using the root@localhost user account:
This page is: Copyright © 2025 MariaDB. All rights reserved.
A guide to installing MariaDB Enterprise Server on various operating systems using package managers (YUM, APT, ZYpp) or binary tarballs.
These instructions detail the deployment of MariaDB Enterprise Server in a Single Standalone Server configuration on a range of supported Operating Systems.
These instructions detail how to deploy a single-node row database, which is suited for a transactional or OLTP workload that does not require high availability (HA). This deployment type is generally for non-production use cases, such as for development and testing.
These instructions detail the deployment of the following MariaDB database products:
These instructions detail the deployment of the following components:
MariaDB Corporation provides package repositories for YUM (RHEL, CentOS), APT (Debian, Ubuntu), and ZYpp (SLES).
Retrieve your Customer Download Token at and substitute for CUSTOMER_DOWNLOAD_TOKEN in the following directions.
Configure the YUM package repository.
Pass the version to install using the --mariadb-server-version flag to mariadb_es_repo_setup. The following directions reference 11.4.
To configure YUM package repositories:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.
Install MariaDB Enterprise Server and package dependencies:
Configure MariaDB.
Installation only loads MariaDB Enterprise Server to the system. MariaDB Enterprise Server requires configuration before the database server is ready for use. See .
Retrieve your Customer Download Token at and substitute for CUSTOMER_DOWNLOAD_TOKEN in the following directions.
Configure the APT package repository.
Pass the version to install using the --mariadb-server-version flag to mariadb_es_repo_setup. The following directions reference 11.4.
To configure APT package repositories:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.
Install MariaDB Enterprise Server and package dependencies:
Configure MariaDB.
Installation only loads MariaDB Enterprise Server to the system. MariaDB Enterprise Server requires configuration before the database server is ready for use. See .
Retrieve your Customer Download Token at and substitute for CUSTOMER_DOWNLOAD_TOKEN in the following directions.
Configure the ZYpp package repository.
Pass the version to install using the --mariadb-server-version flag to mariadb_es_repo_setup. The following directions reference 11.4.
To configure ZYpp package repositories:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.
Install MariaDB Enterprise Server and package dependencies:
Configure MariaDB.
Installation only loads MariaDB Enterprise Server to the system. MariaDB Enterprise Server requires configuration before the database server is ready for use. See .
MariaDB Enterprise Server can be configured in the following ways:
and can be set in a configuration file (such as /etc/my.cnf). MariaDB Enterprise Server must be restarted to apply changes made to the configuration file.
and can be set on the command-line.
If a system variable supports dynamic changes, then it can be set on-the-fly using the statement.
MariaDB's packages include several bundled configuration files. It is also possible to create custom configuration files.
On RHEL, CentOS, and SLES, MariaDB's packages bundle the following configuration files:
/etc/my.cnf
/etc/my.cnf.d/client.cnf
/etc/my.cnf.d/mariadb-enterprise.cnf
And on RHEL, CentOS, and SLES, custom configuration files from the following directories are read by default:
/etc/my.cnf.d/
On Debian and Ubuntu, MariaDB's packages bundle the following configuration files:
/etc/mysql/my.cnf
/etc/mysql/mariadb.cnf
/etc/mysql/mariadb.conf.d/50-client.cnf
And on Debian and Ubuntu, custom configuration files from the following directories are read by default:
/etc/mysql/conf.d/
/etc/mysql/mariadb.conf.d/
Determine which system variables and options you need to configure.
Useful system variables and options for MariaDB Enterprise Server include:
Choose a configuration file in which to configure your system variables and options.
It is not recommended to make custom changes to one of the bundled configuration files. Instead, it is recommended to create a custom configuration file in one of the included directories. Configuration files in included directories are read in alphabetical order. If you want your custom configuration file to override the bundled configuration files, then it is a good idea to prefix the custom configuration file's name with a string that will be sorted last, such as z-.
On RHEL, CentOS, and SLES, a good custom configuration file would be: /etc/my.cnf.d/z-custom-my.cnf
On Debian and Ubuntu, a good custom configuration file would be: /etc/mysql/mariadb.conf.d/z-custom-my.cnf
Set your system variables and options in the configuration file.
They need to be set in a group that will be read by MariaDB Server processes, such as [mariadb] or [server].
For example:
MariaDB Enterprise Server includes configuration to start, stop, restart, enable/disable on boot, and check the status of the Server using the operating system default process management system.
For distributions that use systemd (most supported OSes), you can manage the Server process using the systemctl command:
When MariaDB Enterprise Server is up and running on your system, you should test that it is working and there weren't any issues during startup.
Connect to the server using MariaDB Client with the root@localhost user account:
$ sudo systemctl stop mariadb$ sudo systemctl stop mariadb-columnstore$ sudo systemctl stop mariadb-columnstore-cmapi[mariadb]
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci
# Replication Configuration (HTAP Server)
server_id = 1
log_bin = mariadb-bin
binlog_format = STATEMENT
log_slave_updates = OFF
columnstore_replication_slave = ON
# HTAP filtering rules
# Transactions replicate from same server
replicate_same_server_id = ON
# Only write queries that touch 'innodb_db' to the binary log
binlog_do_db = innodb_db
# Rewrite innodb_db to columnstore_db prior to applying transaction
replicate_rewrite_db = innodb_db->columnstore_db
# Only replicate tables that begin with "htap"
replicate_wild_do_table = columnstore_db.htap%[ObjectStorage]
…
service = S3
…
[S3]
bucket = your_columnstore_bucket_name
endpoint = your_s3_endpoint
aws_access_key_id = your_s3_access_key_id
aws_secret_access_key = your_s3_secret_key
# iam_role_name = your_iam_role
# sts_region = your_sts_region
# sts_endpoint = your_sts_endpoint
# ec2_iam_mode = enabled
[Cache]
cache_size = your_local_cache_size
path = your_local_cache_path
$ sudo systemctl start mariadb
$ sudo systemctl enable mariadb
$ sudo systemctl start mariadb-columnstore
$ sudo systemctl enable mariadb-columnstore
CREATE USER 'util_user'@'127.0.0.1'
IDENTIFIED BY 'util_user_passwd';GRANT SELECT, PROCESS ON *.*
TO 'util_user'@'127.0.0.1';$ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
$ sudo mcsSetConfig CrossEngineSupport Port 3306
$ sudo mcsSetConfig CrossEngineSupport User util_user$ sudo mcsSetConfig CrossEngineSupport Password util_user_passwdCREATE USER 'repl'@'localhost' IDENTIFIED BY 'passwd';GRANT REPLICA MONITOR,
REPLICATION REPLICA
ON *.* TO 'repl'@'localhost';SET GLOBAL gtid_slave_pos='';CHANGE MASTER TO
MASTER_HOST='localhost',
MASTER_USER='repl',
MASTER_PASSWORD='passwd',
MASTER_USE_GTID=slave_pos;
START REPLICA;
SHOW REPLICA STATUS;
$ sudo yum install policycoreutils policycoreutils-python
$ sudo yum install policycoreutils python3-policycoreutils policycoreutils-python-utils
$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
Nothing to do
$ sudo semodule -i mariadb_local.pp
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted$ sudo setenforce enforcing$ sudo yum install curl$ curl -LsSO https://r.mariadb.com/downloads/mariadb_repo_setup$ echo "${checksum} mariadb_repo_setup" \
| sha256sum -c -$ chmod +x mariadb_repo_setup$ sudo ./mariadb_repo_setup \
--mariadb-server-version="mariadb-11.4"$ sudo apt install curl$ curl -LsSO https://r.mariadb.com/downloads/mariadb_repo_setup$ echo "${checksum} mariadb_repo_setup" \
| sha256sum -c -$ chmod +x mariadb_repo_setup$ sudo ./mariadb_repo_setup \
--mariadb-server-version="mariadb-11.4"$ sudo apt update$ sudo zypper install curl$ curl -LsSO https://r.mariadb.com/downloads/mariadb_repo_setup$ echo "${checksum} mariadb_repo_setup" \
| sha256sum -c -$ chmod +x mariadb_repo_setup$ sudo ./mariadb_repo_setup \
--mariadb-server-version="mariadb-11.4"$ sudo mariadb
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 38
Server version: 10.5.28-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
/etc/mysql/mariadb.conf.d/z-custom-my.cnf
Sets the maximum number of simultaneous connections MariaDB Community Server allows.
Sets how MariaDB Community Server handles threads for client connections.
Sets the file name for the error log.
Sets the amount of memory InnoDB reserves for the Buffer Pool.
Sets the size for each Redo Log file and innodb_log_files_in_group sets the number of Redo Log files used by InnoDB.
Sets the maximum number of I/O operations per second that InnoDB can use.
$ sudo yum install MariaDB-server MariaDB-backup$ sudo apt install mariadb-server mariadb-backup$ sudo zypper install MariaDB-server MariaDB-backup[mariadb]
log_error = mariadb-test.err
innodb_buffer_pool_size = 1G
/etc/my.cnf.d/mysql-clients.cnf
/etc/my.cnf.d/server.cnf
/etc/mysql/mariadb.conf.d/50-mysql-clients.cnf
/etc/mysql/mariadb.conf.d/50-mysqld_safe.cnf
/etc/mysql/mariadb.conf.d/50-server.cnf
/etc/mysql/mariadb.conf.d/60-galera.cnf
/etc/mysql/mariadb.conf.d/mariadb-enterprise.cnf
Sets the amount of memory InnoDB reserves for the Buffer Pool.
Sets the size for each Redo Log file and innodb_log_files_in_group sets the number of Redo Log files used by InnoDB.
Sets the maximum number of I/O operations per second that InnoDB can use.
MariaDB Enterprise Server
Modern SQL RDBMS with high availability, pluggable storage engines, hot online backups, and audit logging.
InnoDB
It is a general purpose storage engine
It is ACID-compliant
It is performant
It is the transactional component of MariaDB's single stack Hybrid Transactional/Analytical Processing (HTAP) solution
row database
A database where all columns of each row are stored together
Best suited for transactional and OLTP workloads
Also known as a "row-oriented database"
Sets the path to the data directory. MariaDB Enterprise Server writes data files to this directory, including tablespaces, logs, and schemas. Change it to use a non-standard location or to start the Server on a different data directory for testing.
Sets the local TCP/IP address on which MariaDB Enterprise Server listens for incoming connections. When testing on a local system, bind the address to the local host at 127.0.0.1 to prevent network access.
Sets the port MariaDB Enterprise Server listens on. Use this system variable to use a non-standard port or when running multiple Servers on the same host for testing.
Sets the maximum number of simultaneous connections MariaDB Enterprise Server allows.
Sets how MariaDB Enterprise Server handles threads for client connections.
Sets the file name for the error log.
Start
sudo systemctl start mariadb
Stop
sudo systemctl stop mariadb
Restart
sudo systemctl restart mariadb
Enable during startup
sudo systemctl enable mariadb
Disable during startup
sudo systemctl disable mariadb
Status
sudo systemctl status mariadb
This page is: Copyright © 2025 MariaDB. All rights reserved.
Enterprise Server 10.5
Enterprise Server 10.6
Enterprise Server 11.4
Single-stack hybrid transactional/analytical workloads
ColumnStore for analytics with scalable S3-compatible object storage
InnoDB for transactions
Cross-engine JOINs
Enterprise Server 10.5, Enterprise ColumnStore 5, MaxScale 2.5
This procedure describes the deployment of the HTAP topology with MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.
MariaDB Enterprise ColumnStore is a columnar storage engine for MariaDB Enterprise Server. This topology is best suited for Hybrid Transactional-Analytical Processing (HTAP) workloads.
This procedure has 4 steps, which are executed in sequence.
This procedure represents the basic product capability and uses 1 Enterprise ColumnStore node.
This page provides an overview of the topology, requirements, and deployment procedures.
Please read and understand this procedure before executing.
Customers can obtain support by submitting a support case.
The following components are deployed during this procedure:
The MariaDB Enterprise ColumnStore HTAP topology is designed for hybrid transactional-analytical processing (HTAP) workloads.
The topology consists of:
One MaxScale node
One ColumnStore node running ES and Enterprise ColumnStore
The MaxScale node:
Monitors the health and availability of the ColumnStore node using the MariaDB Monitor (mariadbmon)
Accepts client and application connections
Routes queries to the ColumnStore node using the Read/Write Split Router (readwritesplit)
The ColumnStore node:
Receives queries from MaxScale
Executes queries
Uses a row-based storage engine, such as InnoDB, to handle transactional queries
Uses Enterprise ColumnStore as the columnar storage engine to handle analytical queries
These requirements are for the HTAP topology when deployed with MariaDB Enterprise Server 11.4 and MariaDB Enterprise ColumnStore.
In alignment with MariaDB plc Engineering Policies, the HTAP topology with MariaDB Enterprise Server 11.4 and MariaDB Enterprise ColumnStore is provided for:
Debian 11 (x86_64, ARM64)
Debian 12 (x86_64, ARM64)
Red Hat Enterprise Linux 8 (x86_64, ARM64)
Red Hat Enterprise Linux 9 (x86_64, ARM64)
The HTAP topology can use S3-compatible object storage to store ColumnStore data, but it is not required.
Many S3-compatible object storage services exist. MariaDB Corporation cannot make guarantees about all S3-compatible object storage services, because different services provide different functionality.
For the preferred S3-compatible object storage providers that provide cloud and hardware solutions, see the following sections:
The use of non-cloud and non-hardware providers is at your own risk.
If you have any questions about using specific S3-compatible object storage with MariaDB Enterprise ColumnStore, contact us.
Amazon Web Services (AWS) S3
Google Cloud Storage
Azure Storage
Alibaba Cloud Object Storage Service
Cloudian HyperStore
Dell EMC
Seagate Lyve Rack
Quantum ActiveScale
This implementation relies on replicate_rewrite_db, so it does not support cross-database queries with statement-based replication.
For example, if the replicated database is selected with the USE statement, then the query will replicate properly:
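A hypothetical illustration, using the innodb_db database and htap_test1 table from this procedure; because innodb_db is the default database, the statement is replicated and rewritten to columnstore_db on apply:
-- The replicated database is the default database, so the rewrite rule applies.
USE innodb_db;
INSERT INTO htap_test1 VALUES (100);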
However, if the replicated database is not selected, and it is instead specified as a prefix on the table name in the query, then the query will not replicate properly:
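A hypothetical illustration of the failing case: a different default database (here, test) is selected, so the rewrite rule does not apply to the database-prefixed statement and the change is not replicated to the ColumnStore table:
-- The default database is not innodb_db, so the statement is not rewritten or replicated.
USE test;
INSERT INTO innodb_db.htap_test1 VALUES (100);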
This implementation has not been tested with semi-synchronous replication.
This implementation has not been tested with parallel replication.
This implementation requires the binlog_format system variable to be set to STATEMENT. Row-based replication is not currently supported.
MariaDB Enterprise Server packages are configured to read configuration files from different paths, depending on the operating system. Making custom changes to Enterprise Server default configuration files is not recommended because custom changes may be overwritten by other default configuration files that are loaded later.
To ensure that your custom changes will be read last, create a custom configuration file with the z- prefix in one of the include directories.
The systemctl command is used to start and stop the MariaDB Enterprise Server service.
For additional information, see "".
MariaDB Enterprise Server produces log data that can be helpful in problem diagnosis.
Log filenames and locations may be overridden in the server configuration. The default location of logs is the data directory. The data directory is specified by the datadir system variable.
The systemctl command is used to start and stop the ColumnStore service.
Navigation in the procedure "Deploy HTAP Topology".
Next: Step 1: Prepare ColumnStore Node.
This page is: Copyright © 2025 MariaDB. All rights reserved.
$ sudo yum install curl
$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -
$ chmod +x mariadb_es_repo_setup
$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--mariadb-server-version="11.4"$ sudo yum install MariaDB-server MariaDB-backup$ sudo apt install curl
$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -
$ chmod +x mariadb_es_repo_setup
$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--mariadb-server-version="11.4"
$ sudo apt update

$ sudo apt install mariadb-server mariadb-backup

$ sudo zypper install curl
$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -
$ chmod +x mariadb_es_repo_setup
$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
   --mariadb-server-version="11.4"

$ sudo zypper install MariaDB-server MariaDB-backup

[mariadb]
log_error = mariadbd.err
innodb_buffer_pool_size = 1G

$ sudo mariadb
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 38
Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>

Uses cross-engine JOINs to join transactional and analytical tables
Replicates data between engines using MariaDB Replication
Optionally uses S3-compatible object storage for Enterprise ColumnStore data
Rocky Linux 9 (x86_64, ARM64)
Ubuntu 20.04 LTS (x86_64, ARM64)
Ubuntu 22.04 LTS (x86_64, ARM64)
Ubuntu 24.04 LTS (x86_64, ARM64)
<hostname>-bin
Enterprise Server 10.6, Enterprise ColumnStore 23.02, MaxScale 22.08
Prepare ColumnStore Node
Install MariaDB Enterprise Server
Start and Configure MariaDB Enterprise Server
Test MariaDB Enterprise Server
MariaDB Enterprise Server
Modern SQL RDBMS with high availability, pluggable storage engines, hot online backups, and audit logging.
General purpose storage engine
Support for Online Transactional Processing (OLTP) workloads
ACID-compliant
Performance
Columnar storage engine
Optimized for Online Analytical Processing (OLAP) workloads
Scalable query execution
Configuration File
Configuration files (such as /etc/my.cnf) can be used to set system-variables and options. The server must be restarted to apply changes made to configuration files.
Command-line
The server can be started with command-line options that set system-variables and options.
SQL
Users can set system-variables that support dynamic changes on-the-fly using the SET statement.
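For example, a dynamic system variable such as max_connections can be changed at runtime (the value shown is illustrative, not a recommendation):

SET GLOBAL max_connections = 500;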
CentOS
Red Hat Enterprise Linux (RHEL)
/etc/my.cnf.d/z-custom-mariadb.cnf
Debian
Ubuntu
/etc/mysql/mariadb.conf.d/z-custom-mariadb.cnf
Start
sudo systemctl start mariadb
Stop
sudo systemctl stop mariadb
Restart
sudo systemctl restart mariadb
Enable during startup
sudo systemctl enable mariadb
Disable during startup
sudo systemctl disable mariadb
Status
sudo systemctl status mariadb
<hostname>.err
server_audit.log
<hostname>-slow.log
Start
sudo systemctl start mariadb-columnstore
Stop
sudo systemctl stop mariadb-columnstore
Restart
sudo systemctl restart mariadb-columnstore
Enable during startup
sudo systemctl enable mariadb-columnstore
Disable during startup
sudo systemctl disable mariadb-columnstore
Status
sudo systemctl status mariadb-columnstore
<hostname>.log
This procedure describes the deployment of the Primary/Replica topology with MariaDB Enterprise Server and MariaDB MaxScale.
Primary/Replica topology provides read scalability and fault tolerance through asynchronous or semi-synchronous single-primary replication.
This procedure has 7 steps, which are executed in sequence.
MariaDB products can be deployed in many different topologies to suit specific use cases. The Primary/Replica topology can be deployed on its own or integrated with MariaDB Enterprise Cluster.
This procedure represents basic product capability with 3 Enterprise Server nodes and 1 MaxScale node.
This page provides an overview of the topology, requirements, and deployment procedure.
Please read and understand this procedure before executing.
Install MariaDB Enterprise Server
Start and Configure MariaDB Enterprise Server on Primary Server
Start and Configure MariaDB Enterprise Server on Replica Servers
Test MariaDB Enterprise Server
Install MariaDB MaxScale
Start and Configure MariaDB MaxScale
The following components are deployed during this procedure:
Modern SQL RDBMS with high availability, pluggable storage engines, hot online backups, and audit logging.
Database proxy that extends the availability, scalability, and security of MariaDB Enterprise Servers
General purpose storage engine
ACID-compliant
Performance
Listener
Listens for client connections to MaxScale, then passes them to the router service associated with the listener
MariaDB Monitor
Tracks changes in the state of MariaDB Enterprise Servers.
Read Connection Router
Routes connections from the listener to any available Enterprise Server node
Read/Write Split Router
Routes read operations from the listener to any available Enterprise Server node, and routes write operations from the listener to a specific server operating as the primary server
Server Module
Connection configuration in MaxScale to an Enterprise Server node
Primary/Replica topology provides read scalability and fault tolerance through asynchronous or semi-synchronous single-primary replication of MariaDB Enterprise Server 11.4.
The Primary/Replica topology consists of:
1 or more MaxScale nodes
1 Enterprise Server node operating as the primary server
2 or more Enterprise Server nodes operating as replica servers.
The MaxScale nodes:
Monitor the health and availability of the Enterprise Server nodes
Route queries to Enterprise Server nodes using Read/Write Split (readwritesplit) and Read Connection (readconnroute) routers.
Promote replica servers in the event that the primary server fails.
The Enterprise Server node operating as the primary server:
Receives write queries from MaxScale, logging them to the Binary Log
Provides Binary Logs to replica servers for replication
The Enterprise Server nodes operating as replica servers:
Receive read queries from MaxScale
Replicate writes asynchronously or semi-synchronously from the primary server
These requirements are for the Primary/Replica topology when deployed with MariaDB Enterprise Server 11.4 and MariaDB MaxScale 25.01.
In alignment with the MariaDB Engineering Policies, the Primary/Replica topology with MariaDB Enterprise Server 11.4 and MariaDB MaxScale 25.01 is provided for:
AlmaLinux 8 (x86_64, ARM64)
AlmaLinux 9 (x86_64, ARM64)
Debian 11 (x86_64, ARM64)
Debian 12 (x86_64, ARM64)
Microsoft Windows (x86_64)
Red Hat Enterprise Linux 8 (x86_64, ARM64)
Red Hat Enterprise Linux 9 (x86_64, PPC64LE, ARM64)
Red Hat UBI 8 (x86_64, ARM64)
Rocky Linux 8 (x86_64, ARM64)
Rocky Linux 9 (x86_64, ARM64)
SUSE Linux Enterprise Server 15 (x86_64, ARM64)
Ubuntu 20.04 LTS (x86_64, ARM64)
Ubuntu 22.04 LTS (x86_64, ARM64)
Ubuntu 24.04 LTS (x86_64, ARM64)
maxscale
MaxScale process owner
mysql
Enterprise Server process owner
Configuration File
Configuration files (such as /etc/my.cnf) can be used to set system-variables and options. The server must be restarted to apply changes made to configuration files.
Command-line
The server can be started with command-line options that set system-variables and options.
SQL
Users can set system-variables that support dynamic changes on-the-fly using the SET statement.
MariaDB Enterprise Server packages are configured to read configuration files from different paths, depending on the operating system. Making custom changes to Enterprise Server default configuration files is not recommended because custom changes may be overwritten by other default configuration files that are loaded later.
To ensure that your custom changes will be read last, create a custom configuration file with the z- prefix in one of the include directories.
CentOS
Red Hat Enterprise Linux (RHEL)
SUSE Linux Enterprise Server (SLES)
/etc/my.cnf.d/z-custom-mariadb.cnf
Debian
Ubuntu
/etc/mysql/mariadb.conf.d/z-custom-mariadb.cnf
The systemctl command is used to start and stop the MariaDB Enterprise Server service.
Start
sudo systemctl start mariadb
Stop
sudo systemctl stop mariadb
Restart
sudo systemctl restart mariadb
Enable during startup
sudo systemctl enable mariadb
Disable during startup
sudo systemctl disable mariadb
Status
sudo systemctl status mariadb
For additional information, see "Starting and Stopping MariaDB".
MariaDB Enterprise Server produces log data that can be helpful in problem diagnosis.
Log filenames and locations may be overridden in the server configuration. The default location of logs is the data directory. The data directory is specified by the datadir system variable.
<hostname>.err
server_audit.log
<hostname>-slow.log
MaxScale can be configured using several methods. These methods make use of MaxScale's REST API.
MaxCtrl is a command-line utility that performs administrative tasks through the REST API. See MaxCtrl Commands.
MaxGUI is a graphical utility that can perform administrative tasks through the REST API.
The REST API can be used directly. For example, the curl utility could be used to make REST API calls from the command-line. Many programming languages also have libraries to interact with REST APIs.
The procedure on these pages configures MaxScale using MaxCtrl.
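As an illustrative sketch only (the object names, credentials, port, and address below are placeholders, and the exact parameters come from the configuration performed in the later steps), MaxCtrl commands follow this general pattern:

$ maxctrl create server server1 192.0.2.1 3306
$ maxctrl create monitor mariadb-monitor mariadbmon user=mxs password=mxs_passwd --servers server1
$ maxctrl create service query-router readwritesplit user=mxs password=mxs_passwd --servers server1
$ maxctrl create listener query-router query-listener 3307
$ maxctrl list servers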
The systemctl command is used to start and stop the MaxScale service.
Start
sudo systemctl start maxscale
Stop
sudo systemctl stop maxscale
Restart
sudo systemctl restart maxscale
Enable during startup
sudo systemctl enable maxscale
Disable during startup
sudo systemctl disable maxscale
Status
sudo systemctl status maxscale
For additional information, see "Starting and Stopping MariaDB".
Navigation in the procedure "Deploy Primary/Replica Topology":
Enterprise Server 10.4
Enterprise Server 10.5
Enterprise Server 10.6
Enterprise Server 11.4
MariaDB Replication
Highly available
Asynchronous or semi-synchronous replication
Automatic failover via MaxScale
Manual provisioning of new nodes from backup
MariaDB Enterprise Cluster is powered by Galera.
MariaDB Enterprise Cluster provides read scalability and fault tolerance through virtually synchronous multi-primary certification-based write-set replication (wsrep).
This procedure has 6 steps, which are executed in sequence.
MariaDB products can be deployed in many different topologies to suit specific use cases. Enterprise Cluster can be deployed on its own, or integrated with MariaDB Replication to integrate with other clusters or topologies.
This procedure represents basic product capability with 3 Enterprise Cluster nodes and 1 MaxScale node.
This page provides an overview of the topology, requirements, and deployment procedures.
Please read and understand this procedure before executing.
Customers can obtain support by submitting a support case.
The following components are deployed during this procedure:
Modern SQL RDBMS with high availability, pluggable storage engines, hot online backups, and audit logging.
Database proxy that extends the availability, scalability, and security of MariaDB Enterprise Servers
MariaDB Enterprise Server leverages the Galera Enterprise 4 wsrep provider plugin
Provides virtually synchronous multi-primary replication for MariaDB Enterprise Server
All nodes can handle both reads and writes
Replicates write-sets to all other nodes in the cluster
Supports data-at-rest encryption of the write-set cache
General purpose storage engine
ACID-compliant
Performance
Required for Enterprise Cluster
Galera Monitor
Tracks changes in the state of MariaDB Enterprise Servers operating as Enterprise Cluster nodes.
Listener
Listens for client connections to MaxScale, then passes them to the router service associated with the listener
Read Connection Router
Routes connections from the listener to any available Enterprise Cluster node
Read/Write Split Router
Routes read operations from the listener to any available Enterprise Cluster node, and routes write operations from the listener to a specific server that MaxScale uses as the primary server
Server Module
Connection configuration in MaxScale to an Enterprise Cluster node
MariaDB Enterprise Cluster topology provides read scalability through certification-based write-set replication (wsrep) that is multi-primary and virtually synchronous.
The Enterprise Cluster topology consists of:
1 or more MaxScale nodes
3 or more MariaDB Enterprise Servers (ES) configured as Enterprise Cluster nodes
The MaxScale nodes:
Monitor the health and availability of each Enterprise Cluster node using the Galera Monitor (galeramon)
Accept client and application connections
Route queries to the Enterprise Cluster nodes using the Read Connection (readconnroute) or the Read/Write Split (readwritesplit) routers.
The Enterprise Cluster nodes:
Receive queries from MaxScale
Store data locally using the InnoDB storage engine
Perform certification-based virtually synchronous replication to other Enterprise Cluster nodes
Provide State Snapshot Transfers (SST) to bring MariaDB Enterprise Server nodes into sync with the cluster
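A minimal, illustrative configuration sketch for an Enterprise Cluster node is shown below. The cluster name and node addresses are placeholders, the wsrep provider path varies by platform and package, and the complete set of required settings is covered in the deployment steps:

[mariadb]
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
wsrep_on = ON
# Placeholder path; locate the Galera Enterprise 4 provider library installed by your packages
wsrep_provider = /usr/lib64/galera-enterprise-4/libgalera_enterprise_smm.so
wsrep_cluster_name = example_cluster
wsrep_cluster_address = gcomm://192.0.2.1,192.0.2.2,192.0.2.3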
These requirements are for the Galera Cluster topology when deployed with MariaDB Enterprise Server and MariaDB MaxScale 25.01.
MaxScale nodes, 1 or more are required.
Enterprise Cluster nodes, 3 or more are required.
To avoid problems in establishing a quorum in the event of a network partition or outage, MariaDB recommends deploying an odd number of Enterprise Cluster nodes. When using multiple network switches, deploy across an odd number of switches, each with an odd number of nodes. When using multiple data centers, deploy across an odd number of data centers, each with an odd number of switches.
In alignment with the enterprise lifecycle, the Galera Cluster topology with MariaDB Enterprise Server and MariaDB MaxScale 25.01 is provided for:
AlmaLinux 8 (x86_64, ARM64)
AlmaLinux 9 (x86_64, ARM64)
Debian 11 (x86_64, ARM64)
Debian 12 (x86_64, ARM64)
Red Hat Enterprise Linux 8 (x86_64, ARM64)
Red Hat Enterprise Linux 9 (x86_64, PPC64LE, ARM64)
Red Hat UBI 8 (x86_64, ARM64)
Rocky Linux 8 (x86_64, ARM64)
Rocky Linux 9 (x86_64, ARM64)
SUSE Linux Enterprise Server 15 (x86_64, ARM64)
Ubuntu 20.04 LTS (x86_64, ARM64)
Ubuntu 22.04 LTS (x86_64, ARM64)
Ubuntu 24.04 LTS (x86_64, ARM64)
MariaDB Enterprise Server Configuration Management
Configuration File
Configuration files (such as /etc/my.cnf) can be used to set system-variables and options. The server must be restarted to apply changes made to configuration files.
Command-line
The server can be started with command-line options that set system-variables and options.
SQL
Users can set system-variables that support dynamic changes on-the-fly using the SET statement.
MariaDB Enterprise Server packages are configured to read configuration files from different paths, depending on the operating system. Making custom changes to Enterprise Server default configuration files is not recommended because custom changes may be overwritten by other default configuration files that are loaded later.
To ensure that your custom changes will be read last, create a custom configuration file with the z- prefix in one of the include directories.
CentOS
Red Hat Enterprise Linux (RHEL)
SUSE Linux Enterprise Server (SLES)
/etc/my.cnf.d/z-custom-mariadb.cnf
Debian
Ubuntu
/etc/mysql/mariadb.conf.d/z-custom-mariadb.cnf
The systemctl command is used to start and stop the MariaDB Enterprise Server service. The galera_new_cluster and galera_recovery scripts are used for Enterprise Cluster-specific operations.
Start
sudo systemctl start mariadb
Stop
sudo systemctl stop mariadb
Restart
sudo systemctl restart mariadb
Enable during startup
sudo systemctl enable mariadb
Disable during startup
sudo systemctl disable mariadb
Status
sudo systemctl status mariadb
For additional information, see "Starting and Stopping MariaDB".
MariaDB Enterprise Server produces log data that can be helpful in problem diagnosis.
Log filenames and locations may be overridden in the server configuration. The default location of logs is the data directory. The data directory is specified by the datadir system variable.
<hostname>.err
server_audit.log
<hostname>-slow.log
MaxScale can be configured using several methods. These methods make use of MaxScale's REST API.
MaxCtrl is a command-line utility that performs administrative tasks through the REST API. See MaxCtrl Commands.
MaxGUI is a graphical utility that can perform administrative tasks through the REST API.
The REST API can be used directly. For example, the curl utility could be used to make REST API calls from the command-line. Many programming languages also have libraries to interact with REST APIs.
The procedure on these pages configures MaxScale using MaxCtrl.
The systemctl command is used to start and stop the MaxScale service.
Start
sudo systemctl start maxscale
Stop
sudo systemctl stop maxscale
Restart
sudo systemctl restart maxscale
Enable during startup
sudo systemctl enable maxscale
Disable during startup
sudo systemctl disable maxscale
Status
sudo systemctl status maxscale
For additional information, see "Starting and Stopping MariaDB".
Navigation in the procedure "Deploy Galera Cluster Topology":
Next: Step 1: Install MariaDB Enterprise Server
Software Version
Diagram
Features
Enterprise Server 10.4
Enterprise Server 10.5
Enterprise Server 10.6
Enterprise Server 11.4
Multi-Primary Cluster Powered by Galera for Transactional/OLTP Workloads
InnoDB Storage Engine
Highly available
Virtually synchronous, certification-based replication
Automated provisioning of new nodes (IST/SST)
Scales reads via MaxScale
Enterprise Server 10.3+, MariaDB Enterprise Cluster (powered by Galera), MaxScale 2.5+
Learn about AUTO_INCREMENT constraints in MariaDB Server. This section details how to automatically generate unique, sequential values for table columns, simplifying data management.
To define an AUTO_INCREMENT column, select an integer data type to accommodate the range of anticipated values. Common choices include INT, BIGINT, TINYINT, SMALLINT, and MEDIUMINT, depending on size requirements. Ensure the chosen type provides sufficient capacity to avoid overflow while optimizing storage efficiency.
MariaDB Enterprise Server supports AUTO_INCREMENT constraints:
A column with an AUTO_INCREMENT constraint can act as a table's primary key when a natural primary key is not available
Generated values for auto-increment columns are guaranteed to be unique and monotonically increasing
Auto-increment columns provide compatibility with schemas designed for MariaDB Enterprise Server and MySQL
Alternatively, MariaDB Enterprise Server can use sequences as the primary key instead of columns with AUTO_INCREMENT constraints. Sequences are compliant with the SQL standard, while AUTO_INCREMENT constraints are not, so sequences are the better option for applications that require standard-compliant features.
AUTO_INCREMENT Column
When designing a schema, AUTO_INCREMENT columns should use integer data types. The following types can be used:
To determine which type to use, consider the following points:
Do you want to be able to manually insert negative values? If not, then specify the UNSIGNED attribute for the column.
InnoDB can't generate negative AUTO_INCREMENT values, so it is only beneficial to use a signed integer column if you want the option to manually insert negative values, which would bypass the AUTO_INCREMENT handling.
How large will your table grow?
If your AUTO_INCREMENT column is being used as the table's primary key, then the maximum value for the chosen data type should be considered the maximum number of rows that can fit in the table:
If you want to give your table the most room to grow, then it would be best to choose BIGINT UNSIGNED.
Let's create a table with an AUTO_INCREMENT column after confirming that the default storage engine is InnoDB:
Connect to the server using MariaDB Client:
Confirm that the default storage engine is InnoDB by checking the default_storage_engine system variable using the SHOW SESSION VARIABLES statement:
If the database does not exist, then create the database for the table using the CREATE DATABASE statement:
Create the table using the CREATE TABLE statement:
If a column is specified as AUTO_INCREMENT, then its value will be automatically generated. There are multiple ways to insert rows with these automatically generated values.
If the column is not specified, then InnoDB will automatically generate the value.
Let's insert a row into the table created in the previous section:
Connect to the server using MariaDB Client:
Insert a row with the INSERT statement, but do not specify the AUTO_INCREMENT column:
Select the same row with the SELECT statement to confirm that a value was automatically generated:
If the column's value is specified as 0, then InnoDB will automatically generate the value if the sql_mode system variable does not contain NO_AUTO_VALUE_ON_ZERO.
Let's insert a row into the table created in the previous section:
Connect to the server using MariaDB Client:
Confirm that the session's value of the sql_mode system variable does not contain NO_AUTO_VALUE_ON_ZERO with the SHOW SESSION VARIABLES statement:
Insert a row with the INSERT statement, and specify the AUTO_INCREMENT column's value as 0:
Select the same row with the SELECT statement to confirm that a value was automatically generated:
If the column's value is specified as NULL, then InnoDB will automatically generate the value if the column is defined as NOT NULL.
Let's insert a row into the table created in the previous section:
Connect to the server using MariaDB Client:
Insert a row with the INSERT statement, and specify the AUTO_INCREMENT column's value as NULL:
Select the same row with the SELECT statement to confirm that a value was automatically generated:
After InnoDB inserts an automatically generated value into an AUTO_INCREMENT column, the application sometimes needs to know what value it inserted. For example, the application may need to use the value to insert a foreign key column in a dependent table. The LAST_INSERT_ID() function can be used to get the last inserted value for an AUTO_INCREMENT column without re-reading the row from the table.
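For example, using a hypothetical dependent table (hq_sales.invoice_items, not created in the examples on this page) that references the generated invoice_id:

INSERT INTO hq_sales.invoices
   (branch_id, customer_id, invoice_date, invoice_total, payment_method)
   VALUES
   (1, 9, '2020-05-12 10:00:00', 49.99, 'CASH');

INSERT INTO hq_sales.invoice_items (invoice_id, line_total)
   VALUES (LAST_INSERT_ID(), 49.99);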
Let's insert a row into the table created in the previous section and then use the LAST_INSERT_ID() function:
Connect to the server using MariaDB Client:
Insert a row with the INSERT statement, but do not specify the AUTO_INCREMENT column:
Execute the LAST_INSERT_ID() function to get the AUTO_INCREMENT value for the new row:
Select the same row with the SELECT statement to confirm that the AUTO_INCREMENT column has the same value:
When multiple rows are inserted into a table concurrently, InnoDB needs to be able to generate multiple values concurrently in a safe manner. It has several different modes that can be used to do this, and each mode has its own advantages and disadvantages.
InnoDB's AUTO_INCREMENT lock mode is configured with the innodb_autoinc_lock_mode system variable. Users can choose between 3 different values:
• In interleaved lock mode, InnoDB never holds a table-level lock while generating AUTO_INCREMENT values. • Interleaved lock mode is not safe to use if binlog_format is set to STATEMENT.
The innodb_autoinc_lock_mode system variable configures the AUTO_INCREMENT Lock Mode for InnoDB.
Choose a configuration file for custom changes to system variables and options.
It is not recommended to make custom changes to Enterprise Server's default configuration files, because your custom changes can be overwritten by other default configuration files that are loaded after.
Ensure that your custom changes are read last by creating a custom configuration file in one of the included directories. Configuration files in included directories are read in alphabetical order. Use the z- prefix in the file name to ensure that your custom configuration file is read last.
Some example configuration file paths for different distributions are shown in the following table:
Set the innodb_autoinc_lock_mode system variable in the configuration file. It needs to be set in a group that will be read by MariaDB Server, such as [mariadb] or [server].
For example:
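[mariadb]
...
innodb_autoinc_lock_mode = 2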
Restart the server:
A table's next AUTO_INCREMENT value can be set with the ALTER TABLE statement. The value is set using the AUTO_INCREMENT table option.
Let's alter the AUTO_INCREMENT value for the table created in the previous section and then insert a row into the table, so we can confirm that it uses the new value:
Connect to the server using MariaDB Client:
Alter the table's next AUTO_INCREMENT value with the ALTER TABLE statement:
Insert a row with the INSERT statement, but do not specify the AUTO_INCREMENT column:
Execute the LAST_INSERT_ID() function to get the AUTO_INCREMENT value for the new row:
Select the same row with the SELECT statement to confirm that the AUTO_INCREMENT column has the same value:
The offset and increment values can be configured by setting the auto_increment_offset and auto_increment_increment system variables.
When Galera Cluster is used, the offset and increment values are managed automatically by default. They can be managed manually by disabling the wsrep_auto_increment_control system variable.
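As an illustrative sketch (the values are placeholders), manual control would look like this:

SET GLOBAL wsrep_auto_increment_control = OFF;
SET GLOBAL auto_increment_increment = 3;
SET GLOBAL auto_increment_offset = 1;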
USE innodb_db;

INSERT INTO htap_test1
VALUES (100);

SELECT * FROM columnstore_db.htap_test1;
+------+
| id   |
+------+
|  100 |
+------+

USE columnstore_db;

INSERT INTO innodb_db.htap_test1
VALUES (200);

SELECT * FROM columnstore_db.htap_test1;
+------+
| id   |
+------+
|  100 |
+------+

Integer data type ranges:
TINYINT: -128 to 127 (signed), 0 to 255 (UNSIGNED)
SMALLINT: -32768 to 32767 (signed), 0 to 65535 (UNSIGNED)
MEDIUMINT: -8388608 to 8388607 (signed), 0 to 16777215 (UNSIGNED)
INT: -2147483648 to 2147483647 (signed), 0 to 4294967295 (UNSIGNED)
BIGINT: -9223372036854775808 to 9223372036854775807 (signed), 0 to 18446744073709551615 (UNSIGNED)

Maximum number of rows when the column is the primary key:
TINYINT: 127 (signed), 255 (UNSIGNED)
SMALLINT: 32767 (signed), 65535 (UNSIGNED)
MEDIUMINT: 8388607 (signed), 16777215 (UNSIGNED)
INT: 2147483647 (signed), 4294967295 (UNSIGNED)
BIGINT: 9223372036854775807 (signed), 18446744073709551615 (UNSIGNED)
0
• This value configures Traditional Lock Mode. • Don't use traditional lock mode. • Traditional lock mode performs very poorly. • In traditional lock mode, InnoDB holds a table-level lock while generating AUTO_INCREMENT values.
1
• This value configures Consecutive Lock Mode. • Consecutive lock mode is the default lock mode. • In consecutive lock mode, InnoDB holds a table-level lock while generating AUTO_INCREMENT values for statements that insert multiple new rows. However, InnoDB uses a lightweight internal lock to improve performance when generating an AUTO_INCREMENT value for statements that insert a single new row.
2
• This value configures Interleaved Lock Mode. • Interleaved lock mode is the recommended lock mode for best performance. • If Galera Cluster is being used, then interleaved lock mode must be configured.
CentOS RHEL Rocky Linux SLES
/etc/my.cnf.d/z-custom-mariadb.cnf
Debian Ubuntu
/etc/mysql/mariadb.conf.d/z-custom-mariadb.cnf
Scales reads via MaxScale
Enterprise Server 10.3+, MaxScale 2.5+
Test MariaDB MaxScale
<hostname>.log
<hostname>-bin
Bootstrap a cluster node
sudo galera_new_cluster
Recover a cluster node's position
sudo galera_recovery
<hostname>.log
<hostname>-bin
In MariaDB Server, FOREIGN KEY constraints are key elements in defining relationships between InnoDB tables: they ensure referential integrity by maintaining valid associations between rows in parent and child tables. Changes in the parent table can either be restricted or propagated to maintain consistency with child tables.
MariaDB Server supports FOREIGN KEY constraints to define referential constraints between InnoDB tables:
Referential constraints allow the database to automatically ensure that each row in the child table is associated with a valid row in the parent table
If the row in the parent table changes, MariaDB Server can block the change to protect child rows, or propagate the change to the child rows
Foreign keys allow the database to handle certain types of referential constraint checks, so that the application does not have to handle them. As a consequence, application logic can be simplified.
Some example use cases are described below.
Many different applications have account management features. Foreign keys can be used to simplify certain aspects of managing accounts, such as:
If an account is deleted, foreign keys can be used to automatically delete data associated with the account.
If an account is deleted, foreign keys can be used to automatically disassociate its data, so that it is no longer associated with any account.
If an account's user name or user ID is updated, foreign keys can be used to automatically associate its data with the new user name or user ID.
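For illustration, a hypothetical pair of tables (the names and columns are placeholders) could implement the first behavior with ON DELETE CASCADE; ON DELETE SET NULL would implement the second, and ON UPDATE CASCADE the third:

CREATE TABLE accounts (
   account_id BIGINT UNSIGNED AUTO_INCREMENT NOT NULL,
   user_name VARCHAR(100) NOT NULL,
   PRIMARY KEY(account_id)
);

CREATE TABLE account_notes (
   note_id BIGINT UNSIGNED AUTO_INCREMENT NOT NULL,
   account_id BIGINT UNSIGNED,
   note_text TEXT,
   PRIMARY KEY(note_id),
   CONSTRAINT fk_account_notes_accounts
      FOREIGN KEY (account_id) REFERENCES accounts (account_id)
      ON DELETE CASCADE
      ON UPDATE CASCADE
);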
Foreign keys can also be used to simplify certain aspects of managing a store inventory, such as:
If a product is deleted, foreign keys can be used to automatically delete data associated with the product, such as reviews and images.
If a product's name is changed, foreign keys can be used to automatically associate the new name with the product's other data.
If a customer purchases a product, foreign keys can be used to associate the order with the product.
If a customer reviews a product, foreign keys can be used to associate the review with the product.
When an InnoDB table has a foreign key or when it is referenced by a foreign key, InnoDB performs a referential constraint check when certain types of operations try to write to the table. The specific details depend on whether the table is a parent table or a child table.
When an InnoDB table has a foreign key, it is known as a parent table. InnoDB performs referential constraint checks for the following operations on parent tables:
When InnoDB performs a referential constraint check, the outcome depends on several factors. The following table describes the details:
When an InnoDB table is referenced by a foreign key, it is known as a child table. InnoDB performs referential constraint checks for the following operations on child tables:
When InnoDB performs a referential constraint check, the outcome depends on several factors. The following table describes the details:
Let's create InnoDB tables with a foreign key constraint after confirming that the default storage engine is InnoDB:
Connect to the server using MariaDB Client:
Confirm that the default storage engine is InnoDB by checking the default_storage_engine system variable using the SHOW SESSION VARIABLES statement:
If the database does not exist, then create the database for the tables using the CREATE DATABASE statement:
Create the parent table using the CREATE TABLE statement:
Create the child table using the CREATE TABLE statement:
Insert some rows into the parent table using the INSERT statement:
Insert a row into the child table for each row in the parent table using the INSERT statement:
Attempt to delete a row from the parent table that has a corresponding row in the child table using the DELETE statement:
This will fail with the ER_ROW_IS_REFERENCED_2 error code as explained in the Operating on a Parent Table section:
Attempt to insert a row into the child table for a non-existent row in the parent table using the INSERT statement:
This will fail with the ER_NO_REFERENCED_ROW_2 error code as explained in the Operating on a Child Table section:
Let's create InnoDB tables after confirming that the default storage engine is InnoDB, and then let's add a foreign key constraint between them:
Connect to the server using MariaDB Client:
Confirm that the default storage engine is InnoDB by checking the default_storage_engine system variable using the SHOW SESSION VARIABLES statement:
If the database does not exist, then create the database for the tables using the CREATE DATABASE statement:
Create the parent table using the CREATE TABLE statement:
Create the child table using the CREATE TABLE statement:
Alter the child table to add the foreign key constraint using the ALTER TABLE statement:
Insert some rows into the parent table using the INSERT statement:
Insert a row into the child table for each row in the parent table using the INSERT statement:
Attempt to delete a row from the parent table that has a corresponding row in the child table using the DELETE statement:
This will fail with the ER_ROW_IS_REFERENCED_2 error code as explained in the Operating on a Parent Table section:
Attempt to insert a row into the child table for a non-existent row in the parent table using the INSERT statement:
This will fail with the ER_NO_REFERENCED_ROW_2 error code as explained in the Operating on a Child Table section:
Let's drop the foreign key constraint from the child table created in the previous section:
Connect to the server using MariaDB Client:
Obtain the name of the foreign key constraint by querying the information_schema.TABLE_CONSTRAINTS table:
Drop the foreign key constraint from the child table using the ALTER TABLE ... DROP FOREIGN KEY statement:
When performing bulk data loads, dropping a table, or rebuilding a table, it can improve performance to temporarily disable foreign key constraint checks. Foreign key constraint checks can be temporarily disabled by setting the session value for the foreign_key_checks system variable. If foreign key constraint checks are temporarily disabled, then rows can be inserted that violate the constraint, so users must proceed with caution. This feature is most useful when you are loading a data set that is known to be valid.
Let's temporarily disable foreign key constraint checks, and then perform some tasks with the tables created in the previous section:
Connect to the server using MariaDB Client:
Temporarily disable foreign key constraint checks by setting the foreign_key_checks system variable with the SET SESSION statement:
If you want to test how to introduce inconsistencies, then attempt to delete a row from the parent table that has a corresponding row in the child table using the DELETE statement:
This operation would usually fail with the ER_ROW_IS_REFERENCED_2 error code as explained in the Operating on a Parent Table section, but if foreign key constraint checks are disabled, then it will succeed.
If you want to test how to introduce inconsistencies, then also attempt to insert a row into the child table for a non-existent row in the parent table using the INSERT statement:
This operation would usually fail with the ER_NO_REFERENCED_ROW_2 error code as explained in the Operating on a Child Table section, but if foreign key constraint checks are disabled, then it will succeed.
Re-enable foreign key constraint checks by setting the foreign_key_checks system variable with the SET SESSION statement:
When a foreign key constraint is created without a name, InnoDB implicitly gives it a name in the format:
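<table_name>_ibfk_<constraint_count>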
If foreign key constraints are explicitly created with names in the same format, it is possible for collisions to occur during table renames. In that case, the error log would contain messages like the following:
A foreign key constraint requires an index on the column. If a foreign key constraint is added to a column without an index, InnoDB will automatically create an index to enforce the foreign key constraint.
When the foreign_key_checks system variable is disabled, it is possible to drop the index used by a foreign key constraint. When the system variable is re-enabled, InnoDB will have no way to enforce the foreign key constraint, so all operations that could potentially violate the foreign key constraint will fail.
$ mariadb --user=root

SHOW SESSION VARIABLES
LIKE 'default_storage_engine';
+------------------------+--------+
| Variable_name | Value |
+------------------------+--------+
| default_storage_engine | InnoDB |
+------------------------+--------+

CREATE DATABASE hq_sales;

CREATE TABLE hq_sales.invoices (
invoice_id BIGINT UNSIGNED AUTO_INCREMENT NOT NULL,
branch_id INT NOT NULL,
customer_id INT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(invoice_id)
);

$ mariadb --user=root

INSERT INTO hq_sales.invoices
(branch_id, customer_id, invoice_date, invoice_total, payment_method)
VALUES
(1, 1, '2020-05-10 12:35:10', 1087.23, 'CREDIT_CARD');

SELECT invoice_id
FROM hq_sales.invoices
WHERE branch_id = 1
AND customer_id = 1
AND invoice_date = '2020-05-10 12:35:10';
+------------+
| invoice_id |
+------------+
| 1 |
+------------+

$ mariadb --user=root

SHOW SESSION VARIABLES
LIKE 'sql_mode';
+---------------+-------------------------------------------------------------------------------------------+
| Variable_name | Value |
+---------------+-------------------------------------------------------------------------------------------+
| sql_mode | STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION |
+---------------+-------------------------------------------------------------------------------------------+

INSERT INTO hq_sales.invoices
(invoice_id, branch_id, customer_id, invoice_date, invoice_total, payment_method)
VALUES
(0, 1, 2, '2020-05-10 14:17:32', 1508.57, 'WIRE_TRANSFER');

SELECT invoice_id
FROM hq_sales.invoices
WHERE branch_id = 1
AND customer_id = 2
AND invoice_date = '2020-05-10 14:17:32';
+------------+
| invoice_id |
+------------+
| 2 |
+------------+

$ mariadb --user=root

INSERT INTO hq_sales.invoices
(invoice_id, branch_id, customer_id, invoice_date, invoice_total, payment_method)
VALUES
(NULL, 1, 3, '2020-05-10 14:25:16', 227.15, 'CASH');

SELECT invoice_id
FROM hq_sales.invoices
WHERE branch_id = 1
AND customer_id = 3
AND invoice_date = '2020-05-10 14:25:16';
+------------+
| invoice_id |
+------------+
| 3 |
+------------+

$ mariadb --user=root

INSERT INTO hq_sales.invoices
(branch_id, customer_id, invoice_date, invoice_total, payment_method)
VALUES
(1, 4, '2020-05-10 12:37:22', 104.19, 'CREDIT_CARD');

SELECT LAST_INSERT_ID();
+------------------+
| LAST_INSERT_ID() |
+------------------+
| 4 |
+------------------+

SELECT invoice_id
FROM hq_sales.invoices
WHERE branch_id = 1
AND customer_id = 4
AND invoice_date = '2020-05-10 12:37:22';
+------------+
| invoice_id |
+------------+
| 4 |
+------------+

[mariadb]
...
innodb_autoinc_lock_mode = 2

$ sudo systemctl restart mariadb

$ mariadb --user=root

ALTER TABLE hq_sales.invoices
AUTO_INCREMENT = 100;

INSERT INTO hq_sales.invoices
(branch_id, customer_id, invoice_date, invoice_total, payment_method)
VALUES
(1, 5, '2020-05-10 12:43:19', 1105.98, 'CREDIT_CARD');

SELECT LAST_INSERT_ID();
+------------------+
| LAST_INSERT_ID() |
+------------------+
| 100 |
+------------------+

SELECT invoice_id
FROM hq_sales.invoices
WHERE branch_id = 1
AND customer_id = 5
AND invoice_date = '2020-05-10 12:43:19';
+------------+
| invoice_id |
+------------+
| 100 |
+------------+

InnoDB finds corresponding rows in the child table
ON UPDATE SET NULL
• Success. • Row in the parent table is updated. • Corresponding rows in the child table are also updated with NULL. If the child table has an update trigger, the trigger will not be executed for the update.
InnoDB does not find any corresponding rows in the child table
NA
• Success. • Row in the parent table is updated
InnoDB finds corresponding rows in the child table
ON DELETE RESTRICT
• Fails with ER_ROW_IS_REFERENCED_2 error code
InnoDB finds corresponding rows in the child table
ON DELETE NO ACTION
• Fails with ER_ROW_IS_REFERENCED_2 error code
InnoDB finds corresponding rows in the child table
ON DELETE CASCADE
• Success. • Row in the parent table is deleted. • Corresponding rows in the child table are also deleted. If the child table has a delete trigger, the trigger will not be executed for the delete.
InnoDB finds rows in the child table for the row
ON DELETE SET NULL
• Success. • Row in the parent table is deleted. • Corresponding rows in the child table are updated with NULL. If the child table has an update trigger, the trigger will not be executed for the update.
InnoDB does not find any rows in the child table for the row
NA
• Success. • Row in the parent table is deleted
New foreign key value is not present in parent table
Fails with ER_NO_REFERENCED_ROW_2 error code
New foreign key value is NULL
Success
Table is referenced by a foreign key
Fails with error code
parent table
A table that has a foreign key.
child table
A table that is referenced by a foreign key.
InnoDB finds corresponding rows in the child table
ON UPDATE RESTRICT
• Fails with ER_ROW_IS_REFERENCED_2 error code
InnoDB finds corresponding rows in the child table
ON UPDATE NO ACTION
• Fails with ER_ROW_IS_REFERENCED_2 error code
InnoDB finds corresponding rows in the child table
ON UPDATE CASCADE
New foreign key value is present in parent table
Success
New foreign key value is not present in parent table
Fails with ER_NO_REFERENCED_ROW_2 error code
New foreign key value is NULL
Success
New foreign key value is present in parent table
• Success. • Row in the parent table is updated. • Corresponding rows in the child table are also updated with the new foreign key value. If the child table has an update trigger, the trigger will not be executed for the update.
Success
The installation process might have started some of the ColumnStore services. The services should be stopped prior to making configuration changes.
On each Enterprise ColumnStore node, stop the MariaDB Enterprise Server service:
On each Enterprise ColumnStore node, stop the MariaDB Enterprise ColumnStore service:
On each Enterprise ColumnStore node, stop the CMAPI service:
On each Enterprise ColumnStore node, configure Enterprise Server.
character_set_server
Set this system variable to utf8
collation_server
Set this system variable to utf8_general_ci
columnstore_use_import_for_batchinsert
Set this system variable to ALWAYS to always use cpimport for LOAD DATA INFILE and INSERT...SELECT statements.
gtid_strict_mode
Set this system variable to ON.
log_bin
Set this option to the file you want to use for the Binary Log. Setting this option enables binary logging.
log_bin_index
Set this option to the file you want to use to track binlog filenames.
Mandatory system variables and options for ColumnStore Object Storage include:
Example Configuration
On each Enterprise ColumnStore node, start and enable the MariaDB Enterprise Server service, so that it starts automatically upon reboot:
On each Enterprise ColumnStore node, stop the MariaDB Enterprise ColumnStore service:
After the CMAPI service is installed in the next step, CMAPI will start the Enterprise ColumnStore service as-needed on each node. CMAPI disables the Enterprise ColumnStore service to prevent systemd from automatically starting Enterprise ColumnStore upon reboot.
On each Enterprise ColumnStore node, start and enable the CMAPI service, so that it starts automatically upon reboot:
For additional information, see "Start and Stop Services".
The ColumnStore Object Storage topology requires several user accounts. Each user account should be created on the primary server, so that it is replicated to the replica servers.
Enterprise ColumnStore requires a mandatory utility user account to perform cross-engine joins and similar operations.
On the primary server, create the user account with the CREATE USER statement:
On the primary server, grant the user account SELECT privileges on all databases with the GRANT statement:
On each Enterprise ColumnStore node, configure the ColumnStore utility user:
On each Enterprise ColumnStore node, set the password:
For details about how to encrypt the password, see "Credentials Management for MariaDB Enterprise ColumnStore".
Passwords should meet your organization's password policies. If your MariaDB Enterprise Server instance has a password validation plugin installed, then the password should also meet the configured requirements.
ColumnStore Object Storage uses MariaDB Replication to replicate writes between the primary and replica servers. As MaxScale can promote a replica server to become a new primary in the event of node failure, all nodes must have a replication user.
The action is performed on the primary server.
Create the replication user and grant it the required privileges:
Use the CREATE USER statement to create replication user.
Replace the referenced IP address with the relevant address for your environment.
Ensure that the user account can connect to the primary server from each replica.
Grant the user account the required privileges with the GRANT statement.
ColumnStore Object Storage 23.10 uses MariaDB MaxScale 22.08 to load balance between the nodes.
This action is performed on the primary server.
Use the CREATE USER statement to create the MaxScale user:
Replace the referenced IP address with the relevant address for your environment.
Ensure that the user account can connect from the IP address of the MaxScale instance.
Use the GRANT statement to grant the privileges required by the router:
Use the GRANT statement to grant privileges required by the MariaDB Monitor.
On each replica server, configure MariaDB Replication:
Use the CHANGE MASTER TO statement to configure the connection to the primary server:
Start replication using the START REPLICA statement:
Confirm that replication is working using the SHOW REPLICA STATUS statement:
Ensure that the replica server cannot accept local writes by setting the read_only system variable to ON using the SET GLOBAL statement:
Initiate the primary server using CMAPI.
Create an API key for the cluster. This API key should be stored securely and kept confidential, because it can be used to add cluster nodes to the multi-node Enterprise ColumnStore deployment.
For example, to create a random 256-bit API key using openssl rand:
This document will use the following API key in further examples, but users should create their own:
Use CMAPI to add the primary server to the cluster and set the API key. The new API key needs to be provided as part of the X-API-key HTTP header.
For example, if the primary server's host name is mcs1 and its IP address is 192.0.2.1, use the following node command:
Use CMAPI to check the status of the cluster node:
Add the replica servers with CMAPI:
For each replica server, use CMAPI to add the replica server to the cluster. The previously set API key needs to be provided as part of the X-API-key HTTP header.
For example, if the primary server's host name is mcs1 and the replica server's IP address is 192.0.2.2, use the following node command:
After all replica servers have been added, use CMAPI to confirm that all cluster nodes have been successfully added:
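For example, a status check is a sketch like the following (reusing the example host name and API key from this procedure):

$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
   --header 'Content-Type:application/json' \
   --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd'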
The specific steps to configure the security module depend on the operating system.
Configure SELinux for Enterprise ColumnStore:
To configure SELinux, you have to install the packages required for audit2allow. On CentOS 7 and RHEL 7, install the following:
On RHEL 8, install the following:
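For example (the package names are assumptions about where audit2allow is packaged on these platforms):

$ sudo yum install policycoreutils-python        # CentOS 7 / RHEL 7
$ sudo yum install policycoreutils-python-utils  # RHEL 8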
Allow the system to run under load for a while to generate SELinux audit events.
After the system has taken some load, generate an SELinux policy from the audit events using audit2allow:
If no audit events were found, this will print the following:
If audit events were found, the new SELinux policy can be loaded using semodule:
Set SELinux to enforcing mode:
Set SELinux to enforcing mode by setting SELINUX=enforcing in /etc/selinux/config.
For example, the file will usually look like this after the change:
Confirm that SELinux is in enforcing mode:
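An illustrative command sequence for these steps (the module name mariadb_local is a placeholder):

$ sudo audit2allow -a -M mariadb_local
$ sudo semodule -i mariadb_local.pp

# /etc/selinux/config after the change (typical contents)
SELINUX=enforcing
SELINUXTYPE=targeted

$ sudo getenforce
Enforcing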
For information on how to create a profile, see How to create an AppArmor Profile on Ubuntu.com.
The specific steps to configure the firewall service depend on the platform.
Configure firewalld for Enterprise ColumnStore on CentOS and RHEL:
Check if the firewalld service is running:
If the firewalld service was stopped to perform the installation, start it now:
For example, if your cluster nodes are in the 192.0.2.0/24 subnet:
Open up the relevant ports using firewall-cmd:
Reload the runtime configuration:
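An illustrative sketch (the port list is an assumption; confirm the ports your Enterprise ColumnStore release actually uses):

$ sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.0.2.0/24" port port="3306" protocol="tcp" accept'
$ sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.0.2.0/24" port port="8600-8630" protocol="tcp" accept'
$ sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.0.2.0/24" port port="8640" protocol="tcp" accept'
$ sudo firewall-cmd --reload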
Configure UFW for Enterprise ColumnStore on Ubuntu:
Check if the UFW service is running:
If the UFW service was stopped to perform the installation, start it now:
Open up the relevant ports using ufw.
For example, if your cluster nodes are in the 192.0.2.0/24 subnet in the range 192.0.2.1 - 192.0.2.3:
Reload the runtime configuration:
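An illustrative sketch (the port list is an assumption; confirm the ports your Enterprise ColumnStore release actually uses):

$ sudo ufw allow from 192.0.2.0/24 to any port 3306 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to any port 8600:8630 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to any port 8640 proto tcp
$ sudo ufw reload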
Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".
This page was step 4 of 9.
Next: Step 5: Test MariaDB Enterprise Server.
This page details step 8 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".
This step tests MariaDB MaxScale 22.08.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
Use the maxctrl show maxscale command to view the global MaxScale configuration.
This action is performed on the MaxScale node:
Output should align to the global MaxScale configuration in the new configuration file you created.
Check Server Configuration
Use the maxctrl list servers and maxctrl show server commands to view the configured server objects.
This action is performed on the MaxScale node:
Obtain the full list of server objects:
For each server object, view the configuration:
Output should align to the Server Object configuration you performed.
Use the maxctrl list monitors and maxctrl show monitor commands to view the configured monitors.
This action is performed on the MaxScale node:
Obtain the full list of monitors:
For each monitor, view the monitor configuration:
Output should align to the MariaDB Monitor (mariadbmon) configuration you performed.
Use the maxctrl list services and maxctrl show service commands to view the configured routing services.
This action is performed on the MaxScale node:
Obtain the full list of routing services:
For each service, view the service configuration:
Output should align to the Read Connection Router (readconnroute) or Read/Write Split Router (readwritesplit) configuration you performed.
Applications should use a dedicated user account. The user account must be created on the primary server.
When users connect to MaxScale, MaxScale authenticates the user connection before routing it to an Enterprise Server node. Enterprise Server authenticates the connection as originating from the IP address of the MaxScale node.
The application users must have one user account with the host IP address of the application server and a second user account with the host IP address of the MaxScale node.
The requirement of a duplicate user account can be avoided by enabling the proxy_protocol parameter for MaxScale and the proxy_protocol_networks system variable for Enterprise Server.
This action is performed on the primary Enterprise ColumnStore node:
Connect to the primary Enterprise ColumnStore node:
Create the database user account for your MaxScale node:
Replace 192.0.2.10 with the relevant IP address specification for your MaxScale node.
Passwords should meet your organization's password policies.
Grant the privileges required by your application to the database user account for your MaxScale node:
The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.
This action is performed on the primary Enterprise ColumnStore node:
Create the database user account for your application server:
Replace 192.0.2.11 with the relevant IP address specification for your application server.
Passwords should meet your organization's password policies.
Grant the privileges required by your application to the database user account for your application server:
The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.
To test the connection, use the MariaDB Client from your application server to connect to an Enterprise ColumnStore node through MaxScale.
This action is performed on a client connected to the MaxScale node:
If you configured the Read Connection Router, confirm that MaxScale routes connections to the replica servers.
On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:
Open multiple terminals connected to your application server. In each, use MariaDB Client to connect to the listener port for the Read Connection Router (in the example, 3308):
Use the application user credentials you created for the --user and --password options.
In each terminal, query the hostname and server_id values to identify which server you're connected to:
Different terminals should return different values since MaxScale routes the connections to different nodes.
Since the router was configured with the slave router option, the Read Connection Router only routes connections to replica servers.
If you configured the Read/Write Split Router, confirm that MaxScale routes write queries on this router to the primary Enterprise ColumnStore node.
On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:
Open multiple terminals connected to your application server, in each, use MariaDB Client to connect to the listener port for the Read/Write Split Router (in the example, 3307):
Use the application user credentials you created for the --user and --password options.
In one terminal, create the test table:
In each terminal, issue an INSERT statement to add a row to the example table with the values of the hostname and server_id system variables:
In one terminal, issue a SELECT statement to query the results:
While MaxScale is handling multiple connections from different terminals, it routes all of them to the current primary Enterprise ColumnStore node, which in the example is mcs1.
If you configured the Read/Write Split Router, confirm that MaxScale routes read queries on this router to replica servers.
On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:
In a terminal connected to your application server, use MariaDB Client to connect to the listener port for the Read/Write Split Router (in the example, 3307):
Use the application user credentials you created for the --user and --password options.
Query the hostname and server_id to identify which server MaxScale routed you to.
Resend the query:
Confirm that MaxScale routes the SELECT statements to different replica servers.
For more information on different routing criteria, see slave_selection_criteria
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 8 of 9.
Next: Step 9: Import Data
$ mariadb --user=root

SHOW SESSION VARIABLES
LIKE 'default_storage_engine';
+------------------------+--------+
| Variable_name | Value |
+------------------------+--------+
| default_storage_engine | InnoDB |
+------------------------+--------+

CREATE DATABASE hq_sales;

CREATE TABLE hq_sales.customers (
customer_id BIGINT AUTO_INCREMENT NOT NULL,
customer_name VARCHAR(500) NOT NULL,
customer_email VARCHAR(200),
PRIMARY KEY(customer_id)
);

CREATE TABLE hq_sales.invoices (
invoice_id BIGINT AUTO_INCREMENT NOT NULL,
branch_id INT NOT NULL,
customer_id BIGINT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(invoice_id),
CONSTRAINT fk_invoices_customers
FOREIGN KEY (customer_id) REFERENCES hq_sales.customers (customer_id)
ON DELETE RESTRICT
ON UPDATE RESTRICT
);

INSERT INTO hq_sales.customers (customer_id, customer_name)
VALUES
(1, 'John Doe'),
(2, 'Jane Doe');

INSERT INTO hq_sales.invoices
(branch_id, customer_id, invoice_date, invoice_total, payment_method)
VALUES
(1, 1, '2020-05-10 12:35:10', 1087.23, 'CREDIT_CARD'),
(1, 2, '2020-05-10 14:17:32', 1508.57, 'WIRE_TRANSFER');

DELETE FROM hq_sales.customers
WHERE customer_id = 1;

ERROR 1451 (23000): Cannot delete or update a parent row: a foreign key constraint fails
(`hq_sales`.`invoices`, CONSTRAINT `fk_invoices_customers`
FOREIGN KEY (`customer_id`) REFERENCES `customers` (`customer_id`))

INSERT INTO hq_sales.invoices
(branch_id, customer_id, invoice_date, invoice_total, payment_method)
VALUES
(1, 3, '2020-05-10 14:25:16', 227.15, 'CASH');

ERROR 1452 (23000): Cannot add or update a child row: a foreign key constraint fails
(`hq_sales`.`invoices`, CONSTRAINT `fk_invoices_customers`
FOREIGN KEY (`customer_id`) REFERENCES `customers` (`customer_id`))

$ mariadb --user=root

SHOW SESSION VARIABLES
LIKE 'default_storage_engine';
+------------------------+--------+
| Variable_name | Value |
+------------------------+--------+
| default_storage_engine | InnoDB |
+------------------------+--------+

CREATE DATABASE hq_sales;

CREATE TABLE hq_sales.customers (
customer_id BIGINT AUTO_INCREMENT NOT NULL,
customer_name VARCHAR(500) NOT NULL,
customer_email VARCHAR(200),
PRIMARY KEY(customer_id)
);

CREATE TABLE hq_sales.invoices (
invoice_id BIGINT AUTO_INCREMENT NOT NULL,
branch_id INT NOT NULL,
customer_id BIGINT,
invoice_date DATETIME(6),
invoice_total DECIMAL(13, 2),
payment_method ENUM('NONE', 'CASH', 'WIRE_TRANSFER', 'CREDIT_CARD', 'GIFT_CARD'),
PRIMARY KEY(invoice_id)
);

ALTER TABLE hq_sales.invoices ADD CONSTRAINT fk_invoices_customers
FOREIGN KEY (customer_id) REFERENCES hq_sales.customers (customer_id)
ON DELETE RESTRICT
ON UPDATE RESTRICT;

INSERT INTO hq_sales.customers (customer_id, customer_name)
VALUES
(1, 'John Doe'),
(2, 'Jane Doe');

INSERT INTO hq_sales.invoices
(branch_id, customer_id, invoice_date, invoice_total, payment_method)
VALUES
(1, 1, '2020-05-10 12:35:10', 1087.23, 'CREDIT_CARD'),
(1, 2, '2020-05-10 14:17:32', 1508.57, 'WIRE_TRANSFER');

DELETE FROM hq_sales.customers
WHERE customer_id = 1;

ERROR 1451 (23000): Cannot delete or update a parent row: a foreign key constraint fails
(`hq_sales`.`invoices`, CONSTRAINT `fk_invoices_customers`
FOREIGN KEY (`customer_id`) REFERENCES `customers` (`customer_id`))

INSERT INTO hq_sales.invoices
(branch_id, customer_id, invoice_date, invoice_total, payment_method)
VALUES
(1, 3, '2020-05-10 14:25:16', 227.15, 'CASH');

ERROR 1452 (23000): Cannot add or update a child row: a foreign key constraint fails
(`hq_sales`.`invoices`, CONSTRAINT `fk_invoices_customers`
FOREIGN KEY (`customer_id`) REFERENCES `customers` (`customer_id`))

$ mariadb --user=root

SELECT CONSTRAINT_NAME
FROM information_schema.TABLE_CONSTRAINTS
WHERE TABLE_SCHEMA = 'hq_sales'
AND TABLE_NAME = 'invoices'
AND CONSTRAINT_TYPE = 'FOREIGN KEY';
+-----------------------+
| CONSTRAINT_NAME |
+-----------------------+
| fk_invoices_customers |
+-----------------------+

ALTER TABLE hq_sales.invoices DROP FOREIGN KEY fk_invoices_customers;

$ mariadb --user=root

SET SESSION foreign_key_checks=OFF;

DELETE FROM hq_sales.customers
WHERE customer_id = 1;

INSERT INTO hq_sales.invoices
(branch_id, customer_id, invoice_date, invoice_total, payment_method)
VALUES
(1, 3, '2020-05-10 14:25:16', 227.15, 'CASH');SET SESSION foreign_key_checks=ON;<table_name>_ibfk_<constraint_count>2021-06-16 11:50:39 139702710404864 [ERROR] InnoDB: Possible reasons:
2021-06-16 11:50:39 139702710404864 [ERROR] InnoDB: (1) Table rename would cause two FOREIGN KEY constraints to have the same internal name in case-insensitive comparison.
2021-06-16 11:50:39 139702710404864 [ERROR] InnoDB: (2) Table `test`.`t3` exists in the InnoDB internal data dictionary though MySQL is trying to rename table `test`.`t2b` to it. Have you deleted the .frm file and not used DROP TABLE?
2021-06-16 11:50:39 139702710404864 [ERROR] InnoDB: If table `test`.`t3` is a temporary table #sql..., then it can be that there are still queries running on the table, and it will be dropped automatically when the queries end. You can drop the orphaned table inside InnoDB by creating an InnoDB table with the same name in another database and copying the .frm file to the current database. Then MySQL thinks the table exists, and DROP TABLE will succeed.$ sudo systemctl stop mariadb$ sudo systemctl stop mariadb-columnstore$ sudo systemctl stop mariadb-columnstore-cmapi[mariadb]
bind_address = 0.0.0.0
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci
log_bin = mariadb-bin
log_bin_index = mariadb-bin.index
relay_log = mariadb-relay
relay_log_index = mariadb-relay.index
log_slave_updates = ON
gtid_strict_mode = ON
# This must be unique on each Enterprise ColumnStore node
server_id = 1$ sudo systemctl start mariadb$ sudo systemctl enable mariadb$ sudo systemctl stop mariadb-columnstore$ sudo systemctl start mariadb-columnstore-cmapi$ sudo systemctl enable mariadb-columnstore-cmapiCREATE USER 'util_user'@'127.0.0.1'
IDENTIFIED BY 'util_user_passwd';GRANT SELECT, PROCESS ON *.*
TO 'util_user'@'127.0.0.1';$ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1$ sudo mcsSetConfig CrossEngineSupport Port 3306$ sudo mcsSetConfig CrossEngineSupport User util_user$ sudo mcsSetConfig CrossEngineSupport Password util_user_passwdCREATE USER 'repl'@'192.0.2.%' IDENTIFIED BY 'repl_passwd';GRANT REPLICA MONITOR,
REPLICATION REPLICA,
REPLICATION REPLICA ADMIN,
REPLICATION MASTER ADMIN
ON *.* TO 'repl'@'192.0.2.%';CREATE USER 'mxs'@'192.0.2.%'
IDENTIFIED BY 'mxs_passwd';GRANT SHOW DATABASES ON *.* TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.columns_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.db TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.procs_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.proxies_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.roles_mapping TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.tables_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.user TO 'mxs'@'192.0.2.%';GRANT BINLOG ADMIN,
READ_ONLY ADMIN,
RELOAD,
REPLICA MONITOR,
REPLICATION MASTER ADMIN,
REPLICATION REPLICA ADMIN,
REPLICATION REPLICA,
SHOW DATABASES,
SELECT
ON *.* TO 'mxs'@'192.0.2.%';CHANGE MASTER TO
MASTER_HOST='192.0.2.1',
MASTER_USER='repl',
MASTER_PASSWORD='repl_passwd',
MASTER_USE_GTID=slave_pos;START REPLICA;SHOW REPLICA STATUS;SET GLOBAL read_only=ON;$ openssl rand -hex 32
93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":120, "node": "192.0.2.1"}' \
| jq .{
"timestamp": "2020-10-28 00:39:14.672142",
"node_id": "192.0.2.1"
}$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .{
"timestamp": "2020-12-15 00:40:34.353574",
"192.0.2.1": {
"timestamp": "2020-12-15 00:40:34.362374",
"uptime": 11467,
"dbrm_mode": "master",
"cluster_mode": "readwrite",
"dbroots": [
"1"
],
"module_id": 1,
"services": [
{
"name": "workernode",
"pid": 19202
},
{
"name": "controllernode",
"pid": 19232
},
{
"name": "PrimProc",
"pid": 19254
},
{
"name": "ExeMgr",
"pid": 19292
},
{
"name": "WriteEngine",
"pid": 19316
},
{
"name": "DMLProc",
"pid": 19332
},
{
"name": "DDLProc",
"pid": 19366
}
]
}$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":120, "node": "192.0.2.2"}' \
| jq .{
"timestamp": "2020-10-28 00:42:42.796050",
"node_id": "192.0.2.2"
}$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .{
"timestamp": "2020-12-15 00:40:34.353574",
"192.0.2.1": {
"timestamp": "2020-12-15 00:40:34.362374",
"uptime": 11467,
"dbrm_mode": "master",
"cluster_mode": "readwrite",
"dbroots": [
"1"
],
"module_id": 1,
"services": [
{
"name": "workernode",
"pid": 19202
},
{
"name": "controllernode",
"pid": 19232
},
{
"name": "PrimProc",
"pid": 19254
},
{
"name": "ExeMgr",
"pid": 19292
},
{
"name": "WriteEngine",
"pid": 19316
},
{
"name": "DMLProc",
"pid": 19332
},
{
"name": "DDLProc",
"pid": 19366
}
]
},
"192.0.2.2": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"192.0.2.3": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"num_nodes": 3
}$ sudo yum install policycoreutils policycoreutils-python$ sudo yum install policycoreutils python3-policycoreutils policycoreutils-python-utils$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
Nothing to do$ sudo semodule -i mariadb_local.pp$ sudo setenforce enforcing# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted$ sudo getenforceEnforcing$ sudo systemctl status firewalld$ sudo systemctl start firewalld$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="3306" protocol="tcp"
accept'$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8600-8630" protocol="tcp"
accept'$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8640" protocol="tcp"
accept'$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8700" protocol="tcp"
accept'$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8800" protocol="tcp"
accept'$ sudo firewall-cmd --reload$ sudo ufw status verbose$ sudo ufw enable$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 3306 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8600:8630 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8640 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8700 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8800 proto tcp
$ sudo ufw reload
Set this system variable to ON.
Set this option to the file you want to use for the Relay Logs. Setting this option enables relay logging.
Set this option to the file you want to use to index Relay Log filenames.
Sets the numeric Server ID for this MariaDB Enterprise Server. The value set on this option must be unique to each node.


Use the maxctrl show maxscale command to view the global MaxScale configuration.
This action is performed on the MaxScale node:
Output should align to the global MaxScale configuration in the new configuration file you created.
Use the maxctrl list servers and maxctrl show server commands to view the configured server objects.
This action is performed on the MaxScale node:
Obtain the full list of server objects:
For each server object, view the configuration:
Output should align to the Server Object configuration you performed.
Use the maxctrl list monitors and maxctrl show monitor commands to view the configured monitors.
This action is performed on the MaxScale node:
Obtain the full list of monitors:
For each monitor, view the monitor configuration:
Output should align to the MariaDB Monitor (mariadbmon) configuration you performed.
Use the maxctrl list services and maxctrl show service commands to view the configured routing services.
This action is performed on the MaxScale node:
Obtain the full list of routing services:
For each service, view the service configuration:
Output should align to the Read Connection Router (readconnroute) or Read/Write Split Router (readwritesplit) configuration you performed.
Applications should use a dedicated user account. The user account must be created on the primary server.
When users connect to MaxScale, MaxScale authenticates the user connection before routing it to an Enterprise Server node. Enterprise Server authenticates the connection as originating from the IP address of the MaxScale node.
The application users must have one user account with the host IP address of the application server and a second user account with the host IP address of the MaxScale node.
The requirement for a duplicate user account can be avoided by enabling the proxy_protocol parameter for MaxScale and the proxy_protocol_networks system variable for Enterprise Server, as sketched below.
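As a minimal sketch (the server object name, IP addresses, and file locations are placeholders, not part of this procedure), the proxy protocol could be enabled like this:

# In the MaxScale configuration, on each server object:
[mcs1]
type=server
address=192.0.2.1
port=3306
proxy_protocol=true

# In the Enterprise Server configuration, trust proxy headers from the MaxScale node:
[mariadb]
proxy_protocol_networks=192.0.2.10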
This action is performed on the primary Enterprise ColumnStore node:
Connect to the primary Enterprise ColumnStore node:
Create the database user account for your MaxScale node:
Replace 192.0.2.10 with the relevant IP address specification for your MaxScale node.
Passwords should meet your organization's password policies.
Grant the privileges required by your application to the database user account for your MaxScale node:
The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.
This action is performed on the primary Enterprise ColumnStore node:
Create the database user account for your application server:
Replace 192.0.2.11 with the relevant IP address specification for your application server.
Passwords should meet your organization's password policies.
Grant the privileges required by your application to the database user account for your application server:
The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.
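For reference, the statements used later on this page create one account for the MaxScale node and one for the application server (192.0.2.10, 192.0.2.11, the password, and the test database are example values):

CREATE USER 'app_user'@'192.0.2.10' IDENTIFIED BY 'app_user_passwd';
GRANT ALL ON test.* TO 'app_user'@'192.0.2.10';

CREATE USER 'app_user'@'192.0.2.11' IDENTIFIED BY 'app_user_passwd';
GRANT ALL ON test.* TO 'app_user'@'192.0.2.11';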
To test the connection, use the MariaDB Client from your application server to connect to an Enterprise ColumnStore node through MaxScale.
This action is performed on a client connected to the MaxScale node:
If you configured the Read Connection Router, confirm that MaxScale routes connections to the replica servers.
On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:
Open multiple terminals connected to your application server. In each, use MariaDB Client to connect to the listener port for the Read Connection Router (3308 in the example):
Use the application user credentials you created for the --user and --password options.
In each terminal, query the hostname and server_id system variables to identify which server you are connected to:
Different terminals should return different values since MaxScale routes the connections to different nodes.
Since the router was configured with the slave router option, the Read Connection Router only routes connections to replica servers.
If you configured the Read/Write Split Router, confirm that MaxScale routes write queries on this router to the primary Enterprise ColumnStore node.
On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:
Open multiple terminals connected to your application server. In each, use MariaDB Client to connect to the listener port for the Read/Write Split Router (3307 in the example):
Use the application user credentials you created for the --user and --password options.
In one terminal, create the test table:
In each terminal, issue an INSERT statement to add a row to the example table with the values of the hostname and server_id system variables:
In one terminal, issue a SELECT statement to query the results:
Although MaxScale handled connections from multiple terminals, it routed all of them to the current primary Enterprise ColumnStore node, which in the example is mcs1.
If you configured the Read/Write Split Router (readwritesplit), confirm that MaxScale routes read queries on this router to replica servers.
On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:
In a terminal connected to your application server, use MariaDB Client to connect to the listener port for the Read/Write Split Router (readwritesplit) (in the example, 3307):
Use the application user credentials you created for the --user and --password options.
Query the hostname and server_id to identify which server MaxScale routed you to.
Resend the query:
Confirm that MaxScale routes the SELECT statements to different replica servers.
For more information on the available routing criteria, see slave_selection_criteria.
This page is step 8 of the 9-step procedure "Deploy ColumnStore Shared Local Storage Topology".
Next: Step 9: Import Data.
$ maxctrl show maxscale┌──────────────┬───────────────────────────────────────────────────────┐
│ Version │ 22.08.15 │
├──────────────┼───────────────────────────────────────────────────────┤
│ Commit │ 3761fa7a52046bc58faad8b5a139116f9e33364c │
├──────────────┼───────────────────────────────────────────────────────┤
│ Started At │ Thu, 05 Aug 2021 20:21:20 GMT │
├──────────────┼───────────────────────────────────────────────────────┤
│ Activated At │ Thu, 05 Aug 2021 20:21:20 GMT │
├──────────────┼───────────────────────────────────────────────────────┤
│ Uptime │ 868 │
├──────────────┼───────────────────────────────────────────────────────┤
│ Config Sync │ null │
├──────────────┼───────────────────────────────────────────────────────┤
│ Parameters │ { │
│ │ "admin_auth": true, │
│ │ "admin_enabled": true, │
│ │ "admin_gui": true, │
│ │ "admin_host": "0.0.0.0", │
│ │ "admin_log_auth_failures": true, │
│ │ "admin_pam_readonly_service": null, │
│ │ "admin_pam_readwrite_service": null, │
│ │ "admin_port": 8989, │
│ │ "admin_secure_gui": false, │
│ │ "admin_ssl_ca_cert": null, │
│ │ "admin_ssl_cert": null, │
│ │ "admin_ssl_key": null, │
│ │ "admin_ssl_version": "MAX", │
│ │ "auth_connect_timeout": "10000ms", │
│ │ "auth_read_timeout": "10000ms", │
│ │ "auth_write_timeout": "10000ms", │
│ │ "cachedir": "/var/cache/maxscale", │
│ │ "config_sync_cluster": null, │
│ │ "config_sync_interval": "5000ms", │
│ │ "config_sync_password": "*****", │
│ │ "config_sync_timeout": "10000ms", │
│ │ "config_sync_user": null, │
│ │ "connector_plugindir": "/usr/lib64/mysql/plugin", │
│ │ "datadir": "/var/lib/maxscale", │
│ │ "debug": null, │
│ │ "dump_last_statements": "never", │
│ │ "execdir": "/usr/bin", │
│ │ "language": "/var/lib/maxscale", │
│ │ "libdir": "/usr/lib64/maxscale", │
│ │ "load_persisted_configs": true, │
│ │ "local_address": null, │
│ │ "log_debug": false, │
│ │ "log_info": false, │
│ │ "log_notice": true, │
│ │ "log_throttling": { │
│ │ "count": 10, │
│ │ "suppress": 10000, │
│ │ "window": 1000 │
│ │ }, │
│ │ "log_warn_super_user": false, │
│ │ "log_warning": true, │
│ │ "logdir": "/var/log/maxscale", │
│ │ "max_auth_errors_until_block": 10, │
│ │ "maxlog": true, │
│ │ "module_configdir": "/etc/maxscale.modules.d", │
│ │ "ms_timestamp": false, │
│ │ "passive": false, │
│ │ "persistdir": "/var/lib/maxscale/maxscale.cnf.d", │
│ │ "piddir": "/var/run/maxscale", │
│ │ "query_classifier": "qc_sqlite", │
│ │ "query_classifier_args": null, │
│ │ "query_classifier_cache_size": 289073971, │
│ │ "query_retries": 1, │
│ │ "query_retry_timeout": "5000ms", │
│ │ "rebalance_period": "0ms", │
│ │ "rebalance_threshold": 20, │
│ │ "rebalance_window": 10, │
│ │ "retain_last_statements": 0, │
│ │ "session_trace": 0, │
│ │ "skip_permission_checks": false, │
│ │ "sql_mode": "default", │
│ │ "syslog": true, │
│ │ "threads": 1, │
│ │ "users_refresh_interval": "0ms", │
│ │ "users_refresh_time": "30000ms", │
│ │ "writeq_high_water": 16777216, │
│ │ "writeq_low_water": 8192 │
│ │ } │
└──────────────┴───────────────────────────────────────────────────────┘$ maxctrl list servers┌────────┬────────────────┬──────┬─────────────┬─────────────────┬────────┐
│ Server │ Address │ Port │ Connections │ State │ GTID │
├────────┼────────────────┼──────┼─────────────┼─────────────────┼────────┤
│ mcs1 │ 192.0.2.1 │ 3306 │ 1 │ Master, Running │ 0-1-25 │
├────────┼────────────────┼──────┼─────────────┼─────────────────┼────────┤
│ mcs2 │ 192.0.2.2 │ 3306 │ 1 │ Slave, Running │ 0-1-25 │
├────────┼────────────────┼──────┼─────────────┼─────────────────┼────────┤
│ mcs3 │ 192.0.2.3 │ 3306 │ 1 │ Slave, Running │ 0-1-25 │
└────────┴────────────────┴──────┴─────────────┴─────────────────┴────────┘$ maxctrl show server mcs1┌─────────────────────┬───────────────────────────────────────────┐
│ Server │ mcs1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Address │ 192.0.2.1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Port │ 3306 │
├─────────────────────┼───────────────────────────────────────────┤
│ State │ Master, Running │
├─────────────────────┼───────────────────────────────────────────┤
│ Version │ 11.4.5-3-MariaDB-enterprise-log │
├─────────────────────┼───────────────────────────────────────────┤
│ Last Event │ master_up │
├─────────────────────┼───────────────────────────────────────────┤
│ Triggered At │ Thu, 05 Aug 2021 20:22:26 GMT │
├─────────────────────┼───────────────────────────────────────────┤
│ Services │ connection_router_service │
│ │ query_router_service │
├─────────────────────┼───────────────────────────────────────────┤
│ Monitors │ columnstore_monitor │
├─────────────────────┼───────────────────────────────────────────┤
│ Master ID │ -1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Node ID │ 1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Slave Server IDs │ │
├─────────────────────┼───────────────────────────────────────────┤
│ Current Connections │ 1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Total Connections │ 1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Max Connections │ 1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Statistics │ { │
│ │ "active_operations": 0, │
│ │ "adaptive_avg_select_time": "0ns", │
│ │ "connection_pool_empty": 0, │
│ │ "connections": 1, │
│ │ "max_connections": 1, │
│ │ "max_pool_size": 0, │
│ │ "persistent_connections": 0, │
│ │ "reused_connections": 0, │
│ │ "routed_packets": 0, │
│ │ "total_connections": 1 │
│ │ } │
├─────────────────────┼───────────────────────────────────────────┤
│ Parameters │ { │
│ │ "address": "192.0.2.1", │
│ │ "disk_space_threshold": null, │
│ │ "extra_port": 0, │
│ │ "monitorpw": null, │
│ │ "monitoruser": null, │
│ │ "persistmaxtime": "0ms", │
│ │ "persistpoolmax": 0, │
│ │ "port": 3306, │
│ │ "priority": 0, │
│ │ "proxy_protocol": false, │
│ │ "rank": "primary", │
│ │ "socket": null, │
│ │ "ssl": false, │
│ │ "ssl_ca_cert": null, │
│ │ "ssl_cert": null, │
│ │ "ssl_cert_verify_depth": 9, │
│ │ "ssl_cipher": null, │
│ │ "ssl_key": null, │
│ │ "ssl_verify_peer_certificate": false, │
│ │ "ssl_verify_peer_host": false, │
│ │ "ssl_version": "MAX" │
│ │ } │
└─────────────────────┴───────────────────────────────────────────┘$ maxctrl list monitors┌─────────────────────┬─────────┬──────────────────┐
│ Monitor │ State │ Servers │
├─────────────────────┼─────────┼──────────────────┤
│ columnstore_monitor │ Running │ mcs1, mcs2, mcs3 │
└─────────────────────┴─────────┴──────────────────┘$ maxctrl show monitor columnstore_monitor┌─────────────────────┬─────────────────────────────────────┐
│ Monitor │ columnstore_monitor │
├─────────────────────┼─────────────────────────────────────┤
│ Module │ mariadbmon │
├─────────────────────┼─────────────────────────────────────┤
│ State │ Running │
├─────────────────────┼─────────────────────────────────────┤
│ Servers │ mcs1 │
│ │ mcs2 │
│ │ mcs3 │
├─────────────────────┼─────────────────────────────────────┤
│ Parameters │ { │
│ │ "backend_connect_attempts": 1, │
│ │ "backend_connect_timeout": 3, │
│ │ "backend_read_timeout": 3, │
│ │ "backend_write_timeout": 3, │
│ │ "disk_space_check_interval": 0, │
│ │ "disk_space_threshold": null, │
│ │ "events": "all", │
│ │ "journal_max_age": 28800, │
│ │ "module": "mariadbmon", │
│ │ "monitor_interval": 2000, │
│ │ "password": "*****", │
│ │ "script": null, │
│ │ "script_timeout": 90, │
│ │ "user": "mxs" │
│ │ } │
├─────────────────────┼─────────────────────────────────────┤
│ Monitor Diagnostics │ {} │
└─────────────────────┴─────────────────────────────────────┘$ maxctrl list services┌───────────────────────────┬────────────────┬─────────────┬───────────────────┬──────────────────┐
│ Service │ Router │ Connections │ Total Connections │ Servers │
├───────────────────────────┼────────────────┼─────────────┼───────────────────┼──────────────────┤
│ connection_router_Service │ readconnroute │ 0 │ 0 │ mcs1, mcs2, mcs3 │
├───────────────────────────┼────────────────┼─────────────┼───────────────────┼──────────────────┤
│ query_router_service │ readwritesplit │ 0 │ 0 │ mcs1, mcs2, mcs3 │
└───────────────────────────┴────────────────┴─────────────┴───────────────────┴──────────────────┘$ maxctrl show service query_router_service┌─────────────────────┬─────────────────────────────────────────────────────────────┐
│ Service │ query_router_service │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Router │ readwritesplit │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ State │ Started │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Started At │ Sat Aug 28 21:41:16 2021 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Current Connections │ 0 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Total Connections │ 0 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Max Connections │ 0 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Cluster │ │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Servers │ mcs1 │
│ │ mcs2 │
│ │ mcs3 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Services │ │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Filters │ │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Parameters │ { │
│ │ "auth_all_servers": false, │
│ │ "causal_reads": "false", │
│ │ "causal_reads_timeout": "10000ms", │
│ │ "connection_keepalive": "300000ms", │
│ │ "connection_timeout": "0ms", │
│ │ "delayed_retry": false, │
│ │ "delayed_retry_timeout": "10000ms", │
│ │ "disable_sescmd_history": false, │
│ │ "enable_root_user": false, │
│ │ "idle_session_pool_time": "-1000ms", │
│ │ "lazy_connect": false, │
│ │ "localhost_match_wildcard_host": true, │
│ │ "log_auth_warnings": true, │
│ │ "master_accept_reads": false, │
│ │ "master_failure_mode": "fail_instantly", │
│ │ "master_reconnection": false, │
│ │ "max_connections": 0, │
│ │ "max_sescmd_history": 50, │
│ │ "max_slave_connections": 255, │
│ │ "max_slave_replication_lag": "0ms", │
│ │ "net_write_timeout": "0ms", │
│ │ "optimistic_trx": false, │
│ │ "password": "*****", │
│ │ "prune_sescmd_history": true, │
│ │ "rank": "primary", │
│ │ "retain_last_statements": -1, │
│ │ "retry_failed_reads": true, │
│ │ "reuse_prepared_statements": false, │
│ │ "router": "readwritesplit", │
│ │ "session_trace": false, │
│ │ "session_track_trx_state": false, │
│ │ "slave_connections": 255, │
│ │ "slave_selection_criteria": "LEAST_CURRENT_OPERATIONS", │
│ │ "strict_multi_stmt": false, │
│ │ "strict_sp_calls": false, │
│ │ "strip_db_esc": true, │
│ │ "transaction_replay": false, │
│ │ "transaction_replay_attempts": 5, │
│ │ "transaction_replay_max_size": 1073741824, │
│ │ "transaction_replay_retry_on_deadlock": false, │
│ │ "type": "service", │
│ │ "use_sql_variables_in": "all", │
│ │ "user": "mxs", │
│ │ "version_string": null │
│ │ } │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Router Diagnostics │ { │
│ │ "avg_sescmd_history_length": 0, │
│ │ "max_sescmd_history_length": 0, │
│ │ "queries": 0, │
│ │ "replayed_transactions": 0, │
│ │ "ro_transactions": 0, │
│ │ "route_all": 0, │
│ │ "route_master": 0, │
│ │ "route_slave": 0, │
│ │ "rw_transactions": 0, │
│ │ "server_query_statistics": [] │
│ │ } │
└─────────────────────┴─────────────────────────────────────────────────────────────┘$ sudo mariadbCREATE USER 'app_user'@'192.0.2.10' IDENTIFIED BY 'app_user_passwd';GRANT ALL ON test.* TO 'app_user'@'192.0.2.10';CREATE USER 'app_user'@'192.0.2.11' IDENTIFIED BY 'app_user_passwd';GRANT ALL ON test.* TO 'app_user'@'192.0.2.11';$ mariadb --host 192.0.2.10 --port 3307
--user app_user --password$ maxctrl list listeners┌────────────────────────────┬──────┬──────┬─────────┬───────────────────────────┐
│ Name │ Port │ Host │ State │ Service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ connection_router_listener │ 3308 │ :: │ Running │ connection_router_service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ query_router_listener │ 3307 │ :: │ Running │ query_router_service │
└────────────────────────────┴──────┴──────┴─────────┴───────────────────────────┘$ mariadb --host 192.0.2.10 --port 3308 \
--user app_user --passwordSELECT @@global.hostname, @@global.server_id;
+-------------------+--------------------+
| @@global.hostname | @@global.server_id |
+-------------------+--------------------+
| mcs2 | 2 |
+-------------------+--------------------+$ maxctrl list listeners┌────────────────────────────┬──────┬──────┬─────────┬───────────────────────────┐
│ Name │ Port │ Host │ State │ Service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ connection_router_listener │ 3308 │ :: │ Running │ connection_router_service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ query_router_listener │ 3307 │ :: │ Running │ query_router_service │
└────────────────────────────┴──────┴──────┴─────────┴───────────────────────────┘$ mariadb --host 192.0.2.10 --port 3307 \
--user app_user --passwordCREATE TABLE test.load_balancing_test (
id INT PRIMARY KEY AUTO_INCREMENT,
hostname VARCHAR(256),
server_id INT
);INSERT INTO test.load_balancing_test (hostname, server_id)
VALUES (@@global.hostname, @@global.server_id);SELECT * FROM test.load_balancing_test;+----+----------+-----------+
| id | hostname | server_id |
+----+----------+-----------+
| 1 | mcs1 | 1 |
| 2 | mcs1 | 1 |
| 3 | mcs1 | 1 |
+----+----------+-----------+$ maxctrl list listeners┌────────────────────────────┬──────┬──────┬─────────┬───────────────────────────┐
│ Name │ Port │ Host │ State │ Service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ connection_router_listener │ 3308 │ :: │ Running │ connection_router_service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ query_router_listener │ 3307 │ :: │ Running │ query_router_service │
└────────────────────────────┴──────┴──────┴─────────┴───────────────────────────┘$ mariadb --host 192.0.2.10 --port 3307 \
--user app_user --passwordSELECT @@global.hostname, @@global.server_id;+-------------------+--------------------+
| @@global.hostname | @@global.server_id |
+-------------------+--------------------+
| mcs2 | 2 |
+-------------------+--------------------+SELECT @@global.hostname, @@global.server_id;+-------------------+--------------------+
| @@global.hostname | @@global.server_id |
+-------------------+--------------------+
| mcs3 | 3 |
+-------------------+--------------------+
MariaDB Enterprise ColumnStore 5 is a columnar storage engine for MariaDB Enterprise Server 10.5. Enterprise ColumnStore is suitable for Online Analytical Processing (OLAP) workloads.
This procedure has 9 steps, which are executed in sequence.
This procedure represents basic product capability and deploys 3 Enterprise ColumnStore nodes and 1 MaxScale node.
This page provides an overview of the topology, requirements, and deployment procedures.
Please read and understand this procedure before executing.
Prepare ColumnStore Nodes
Configure Shared Local Storage
Install MariaDB Enterprise Server
Start and Configure MariaDB Enterprise Server
Test MariaDB Enterprise Server
Install MariaDB MaxScale
Customers can obtain support by submitting a support case.
The following components are deployed during this procedure:
Modern SQL RDBMS with high availability, pluggable storage engines, hot online backups, and audit logging.
Database proxy that extends the availability, scalability, and security of MariaDB Enterprise Servers
• Columnar storage engine • Highly available • Optimized for Online Analytical Processing (OLAP) workloads • Scalable query execution • Cluster Management API (CMAPI) provides a REST API for multi-node administration.
Listener
Listens for client connections to MaxScale then passes them to the router service
MariaDB Monitor
Tracks changes in the state of MariaDB Enterprise Servers.
Read Connection Router
Routes connections from the listener to any available Enterprise ColumnStore node
Read/Write Split Router
Routes read operations from the listener to any available Enterprise ColumnStore node, and routes write operations from the listener to a specific server that MaxScale uses as the primary server
Server Module
Connection configuration in MaxScale to an Enterprise ColumnStore node
The MariaDB Enterprise ColumnStore topology with Object Storage delivers production analytics with high availability, fault tolerance, and limitless data storage by leveraging S3-compatible storage.
The topology consists of:
One or more MaxScale nodes
An odd number of ColumnStore nodes (minimum of 3) running ES, Enterprise ColumnStore, and CMAPI
The MaxScale nodes:
Monitor the health and availability of each ColumnStore node using the MariaDB Monitor (mariadbmon)
Accept client and application connections
Route queries to ColumnStore nodes using the Read/Write Split Router (readwritesplit)
The ColumnStore nodes:
Receive queries from MaxScale
Execute queries
Use shared local storage for the Storage Manager directory
These requirements are for the ColumnStore Object Storage topology when deployed with MariaDB Enterprise Server 10.5, MariaDB Enterprise ColumnStore 5, and MariaDB MaxScale 2.5.
Node Count
Operating System
Minimum Hardware Requirements
Recommended Hardware Requirements
Storage Requirements
S3-Compatible Object Storage Requirements
Preferred Object Storage Providers: Cloud
Preferred Object Storage Providers: Hardware
Shared Local Storage Directories
Shared Local Storage Options
Recommended Storage Options
MaxScale nodes, 1 or more are required.
Enterprise ColumnStore nodes, 3 or more are required for high availability. You should always have an odd number of nodes in a multi-node ColumnStore deployment to avoid split brain scenarios.
In alignment with MariaDB plc Engineering Policies, the ColumnStore Object Storage topology with MariaDB Enterprise Server 10.5, MariaDB Enterprise ColumnStore 5, and MariaDB MaxScale 2.5 is provided for:
CentOS Linux 7 (x86_64)
Debian 10 (x86_64)
Red Hat Enterprise Linux 7 (x86_64)
Red Hat Enterprise Linux 8 (x86_64)
Ubuntu 18.04 LTS (x86_64)
Ubuntu 20.04 LTS (x86_64)
MariaDB Enterprise ColumnStore's minimum hardware requirements are not intended for production environments, but the minimum hardware requirements can be appropriate for development and test environments. For production environments, see the recommended hardware requirements instead.
The minimum hardware requirements are:
MaxScale node
4+ cores
4+ GB
Enterprise ColumnStore node
4+ cores
4+ GB
MariaDB Enterprise ColumnStore will refuse to start if the system has less than 3 GB of memory.
If Enterprise ColumnStore is started on a system with less memory, the following error message will be written to the ColumnStore system log called crit.log:
And the following error message will be raised to the client:
MariaDB Enterprise ColumnStore's recommended hardware requirements are intended for production analytics.
The recommended hardware requirements are:
MaxScale node
8+ cores
16+ GB
Enterprise ColumnStore node
64+ cores
128+ GB
The ColumnStore Object Storage topology requires the following storage types:
The ColumnStore Object Storage topology uses shared local storage for the Storage Manager directory to store metadata.
The Storage Manager directory is located at the following path by default:
/var/lib/columnstore/storagemanager
The most common shared local storage options for the ColumnStore Object Storage topology are:
EBS (Elastic Block Store) Multi-Attach
AWS
• EBS is a high-performance block-storage service for AWS (Amazon Web Services). • EBS Multi-Attach allows an EBS volume to be attached to multiple instances in AWS. Only clustered file systems, such as GFS2, are supported. • For deployments in AWS, EBS Multi-Attach is a recommended option for the Storage Manager directory, and Amazon S3 storage is the recommended option for data.
EFS (Elastic File System)
AWS
• EFS is a scalable, elastic, cloud-native NFS file system for AWS (Amazon Web Services). • For deployments in AWS, EFS is a recommended option for the Storage Manager directory, and Amazon S3 storage is the recommended option for data.
Filestore
GCP
• Filestore is high-performance, fully managed storage for GCP (Google Cloud Platform). • For deployments in GCP, Filestore is the recommended option for the Storage Manager directory, and Google Object Storage (S3-compatible) is the recommended option for data.
GlusterFS
On-premises
Enterprise ColumnStore's CMAPI (Cluster Management API) is a REST API that can be used to manage a multi-node Enterprise ColumnStore cluster.
Many tools are capable of interacting with REST APIs. For example, the curl utility could be used to make REST API calls from the command-line.
Many programming languages also have libraries for interacting with REST APIs.
The examples below show how to use the CMAPI with curl.
For example:
'x-api-key': '93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd'
'Content-Type': 'application/json'
x-api-key can be set to any value of your choice during the first call to the server. Subsequent connections will require this same key.
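For instance, a cluster status request carrying these headers looks like the following (the mcs1 hostname and the key value match the examples elsewhere on this page):

$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
   --header 'Content-Type:application/json' \
   --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
   | jq .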
Configuration File
Configuration files (such as /etc/my.cnf) can be used to set system-variables and options. The server must be restarted to apply changes made to configuration files.
Command-line
The server can be started with command-line options that set system-variables and options.
SQL
Users can set system-variables that support dynamic changes on-the-fly using the SET statement.
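For example, a dynamic system variable can be changed at runtime with SET (max_connections is used here only as an illustration):

SET GLOBAL max_connections = 500;
SHOW GLOBAL VARIABLES LIKE 'max_connections';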
MariaDB Enterprise Server packages are configured to read configuration files from different paths, depending on the operating system. Making custom changes to Enterprise Server default configuration files is not recommended because custom changes may be overwritten by other default configuration files that are loaded later.
To ensure that your custom changes will be read last, create a custom configuration file with the z- prefix in one of the include directories.
CentOS
Red Hat Enterprise Linux (RHEL)
/etc/my.cnf.d/z-custom-mariadb.cnf
Debian
Ubuntu
/etc/mysql/mariadb.conf.d/z-custom-mariadb.cnf
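As a brief sketch, a custom configuration file at one of the paths above might contain settings such as the following (the values shown are illustrative):

[mariadb]
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci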
The systemctl command is used to start and stop the MariaDB Enterprise Server service.
Start
sudo systemctl start mariadb
Stop
sudo systemctl stop mariadb
Restart
sudo systemctl restart mariadb
Enable during startup
sudo systemctl enable mariadb
Disable during startup
sudo systemctl disable mariadb
Status
sudo systemctl status mariadb
For additional information, see "Starting and Stopping MariaDB".
MariaDB Enterprise Server produces log data that can be helpful in problem diagnosis.
Log filenames and locations may be overridden in the server configuration. The default location of logs is the data directory. The data directory is specified by the datadir system variable.
<hostname>.err
server_audit.log
<hostname>-slow.log
The systemctl command is used to start and stop the ColumnStore service.
Start
sudo systemctl start mariadb-columnstore
Stop
sudo systemctl stop mariadb-columnstore
Restart
sudo systemctl restart mariadb-columnstore
Enable during startup
sudo systemctl enable mariadb-columnstore
Disable during startup
sudo systemctl disable mariadb-columnstore
Status
sudo systemctl status mariadb-columnstore
In the ColumnStore Object Storage topology, the mariadb-columnstore service should not be enabled. The CMAPI service restarts Enterprise ColumnStore as needed, so it does not need to start automatically upon reboot.
The systemctl command is used to start and stop the CMAPI service.
Start
sudo systemctl start mariadb-columnstore-cmapi
Stop
sudo systemctl stop mariadb-columnstore-cmapi
Restart
sudo systemctl restart mariadb-columnstore-cmapi
Enable during startup
sudo systemctl enable mariadb-columnstore-cmapi
Disable during startup
sudo systemctl disable mariadb-columnstore-cmapi
Status
sudo systemctl status mariadb-columnstore-cmapi
For additional information on endpoints, see "CMAPI".
MaxScale can be configured using several methods. These methods make use of MaxScale's REST API.
Command-line utility to perform administrative tasks through the REST API. See MaxCtrl Commands.
MaxGUI is a graphical utility that can perform administrative tasks through the REST API.
The REST API can be used directly. For example, the curl utility could be used to make REST API calls from the command-line. Many programming languages also have libraries to interact with REST APIs.
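As a hedged sketch (the default REST API port 8989 and the default admin credentials are assumptions about your installation), the configured servers could be listed directly through the REST API like this:

$ curl -s -u admin:mariadb http://localhost:8989/v1/servers | jq .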
The procedure on these pages configures MaxScale using MaxCtrl.
The systemctl command is used to start and stop the MaxScale service.
Start
sudo systemctl start maxscale
Stop
sudo systemctl stop maxscale
Restart
sudo systemctl restart maxscale
Enable during startup
sudo systemctl enable maxscale
Disable during startup
sudo systemctl disable maxscale
Status
sudo systemctl status maxscale
For additional information, see "Start and Stop Services".
Navigation in this deployment procedure:
Next: Step 1: Prepare ColumnStore Nodes.
Enterprise Server 10.5
Enterprise Server 10.6
Enterprise Server 11.4
Columnar storage engine with S3-compatible object storage
Highly available
Automatic failover via MaxScale and CMAPI
Scales reads via MaxScale
Bulk data import
Enterprise Server 10.5, Enterprise ColumnStore 5, MaxScale 2.5
Enterprise Server 10.6, Enterprise ColumnStore 23.02, MaxScale 22.08
This page details step 4 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".
This step starts and configures MariaDB Enterprise Server and MariaDB Enterprise ColumnStore 23.10.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
The installation process might have started some of the ColumnStore services. The services should be stopped prior to making configuration changes.
On each Enterprise ColumnStore node, stop the MariaDB Enterprise Server service:
On each Enterprise ColumnStore node, stop the MariaDB Enterprise ColumnStore service:
On each Enterprise ColumnStore node, stop the CMAPI service:
On each Enterprise ColumnStore node, configure Enterprise Server.
Mandatory system variables and options for ColumnStore Object Storage include:
Example Configuration
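Reproducing the example configuration shown earlier on this page (server_id must be unique on each Enterprise ColumnStore node; the other values are illustrative):

[mariadb]
bind_address = 0.0.0.0
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci
log_bin = mariadb-bin
log_bin_index = mariadb-bin.index
relay_log = mariadb-relay
relay_log_index = mariadb-relay.index
log_slave_updates = ON
gtid_strict_mode = ON
# This must be unique on each Enterprise ColumnStore node
server_id = 1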
On each Enterprise ColumnStore node, configure S3 Storage Manager to use S3-compatible storage by editing the /etc/columnstore/storagemanager.cnf configuration file:
The S3-compatible object storage options are configured under [S3]:
The bucket option must be set to the name of the bucket that you created in "Create an S3 Bucket".
The endpoint option must be set to the endpoint for the S3-compatible object storage.
The aws_access_key_id and aws_secret_access_key options must be set to the access key ID and secret access key for the S3-compatible object storage.
To use a specific IAM role, you must uncomment and set iam_role_name, sts_region, and sts_endpoint. To use the IAM role assigned to an EC2 instance, you must also uncomment ec2_iam_mode=enabled.
The local cache options are configured under [Cache]:
The cache_size option is set to 2 GB by default.
The path option is set to /var/lib/columnstore/storagemanager/cache by default.
Ensure that the specified path has sufficient storage space for the specified cache size.
On each Enterprise ColumnStore node, start and enable the MariaDB Enterprise Server service, so that it starts automatically upon reboot:
On each Enterprise ColumnStore node, stop the MariaDB Enterprise ColumnStore service:
After the CMAPI service is installed in the next step, CMAPI will start the Enterprise ColumnStore service as-needed on each node. CMAPI disables the Enterprise ColumnStore service to prevent systemd from automatically starting Enterprise ColumnStore upon reboot.
On each Enterprise ColumnStore node, start and enable the CMAPI service, so that it starts automatically upon reboot:
For additional information, see "".
The ColumnStore Object Storage topology requires several user accounts. Each user account should be created on the primary server, so that it is replicated to the replica servers.
Enterprise ColumnStore requires a mandatory utility user account to perform cross-engine joins and similar operations.
On the primary server, create the user account with the CREATE USER statement:
On the primary server, grant the user account SELECT and PROCESS privileges on all databases with the GRANT statement:
On each Enterprise ColumnStore node, configure the ColumnStore utility user:
On each Enterprise ColumnStore node, set the password:
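For example, using placeholder credentials (the CREATE USER and GRANT statements run on the primary server; the mcsSetConfig commands run on each node):
CREATE USER 'util_user'@'127.0.0.1' IDENTIFIED BY 'util_user_passwd';
GRANT SELECT, PROCESS ON *.* TO 'util_user'@'127.0.0.1';
$ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
$ sudo mcsSetConfig CrossEngineSupport Port 3306
$ sudo mcsSetConfig CrossEngineSupport User util_user
$ sudo mcsSetConfig CrossEngineSupport Password util_user_passwd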
For details about how to encrypt the password, see "".
Passwords should meet your organization's password policies. If your MariaDB Enterprise Server instance has a password validation plugin installed, then the password should also meet the configured requirements.
ColumnStore Object Storage uses MariaDB Replication to replicate writes between the primary and replica servers. As MaxScale can promote a replica server to become a new primary in the event of node failure, all nodes must have a replication user.
The action is performed on the primary server.
Create the replication user and grant it the required privileges:
Use the CREATE USER statement to create the replication user.
Replace the referenced IP address with the relevant address for your environment.
Ensure that the user account can connect to the primary server from each replica.
Grant the user account the required privileges with the GRANT statement.
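For example, using a placeholder password and the example subnet:
CREATE USER 'repl'@'192.0.2.%' IDENTIFIED BY 'repl_passwd';
GRANT REPLICA MONITOR,
REPLICATION REPLICA,
REPLICATION REPLICA ADMIN,
REPLICATION MASTER ADMIN
ON *.* TO 'repl'@'192.0.2.%';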
ColumnStore Object Storage 23.10 uses MariaDB MaxScale 22.08 to load balance between the nodes.
This action is performed on the primary server.
Use the CREATE USER statement to create the MaxScale user:
Replace the referenced IP address with the relevant address for your environment.
Ensure that the user account can connect from the IP address of the MaxScale instance.
Use the GRANT statement to grant the privileges required by the router:
Use the GRANT statement to grant the privileges required by the MariaDB Monitor.
On each replica server, configure MariaDB Replication:
Use the CHANGE MASTER TO statement to configure the connection to the primary server:
Start replication using the START REPLICA statement:
Confirm that replication is working using the SHOW REPLICA STATUS statement:
Ensure that the replica server cannot accept local writes by setting the read_only system variable to ON using the SET GLOBAL statement:
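For example, using the example primary server address and the replication user created earlier:
CHANGE MASTER TO
MASTER_HOST='192.0.2.1',
MASTER_USER='repl',
MASTER_PASSWORD='repl_passwd',
MASTER_USE_GTID=slave_pos;
START REPLICA;
SHOW REPLICA STATUS;
SET GLOBAL read_only=ON;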
Initiate the primary server using CMAPI.
Create an API key for the cluster. This API key should be stored securely and kept confidential, because it can be used to add cluster nodes to the multi-node Enterprise ColumnStore deployment.
For example, to create a random 256-bit API key using openssl rand:
This document will use the following API key in further examples, but users should create their own:
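$ openssl rand -hex 32
93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd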
Use CMAPI to add the primary server to the cluster and set the API key. The new API key needs to be provided as part of the X-API-key HTTP header.
For example, if the primary server's host name is mcs1 and its IP address is 192.0.2.1, use the following node command:
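$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":120, "node": "192.0.2.1"}' \
| jq .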
Use CMAPI to check the status of the cluster node:
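$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .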
Add the replica servers with CMAPI:
For each replica server, use CMAPI to add the replica server to the cluster. The previously set API key needs to be provided as part of the X-API-key HTTP header.
For example, if the primary server's host name is mcs1 and the replica server's IP address is 192.0.2.2, use the following node command:
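$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":120, "node": "192.0.2.2"}' \
| jq .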
After all replica servers have been added, use CMAPI to confirm that all cluster nodes have been successfully added:
The specific steps to configure the security module depend on the operating system.
Configure SELinux for Enterprise ColumnStore:
To configure SELinux, you have to install the packages required for audit2allow. On CentOS 7 and RHEL 7, install the following:
On RHEL 8, install the following:
Allow the system to run under load for a while to generate SELinux audit events.
After the system has taken some load, generate an SELinux policy from the audit events using audit2allow:
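For example, using the mariadb_local module name from this page's example output:
$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local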
If no audit events were found, this will print the following:
If audit events were found, the new SELinux policy can be loaded using semodule:
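$ sudo semodule -i mariadb_local.pp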
Set SELinux to enforcing mode:
Set SELinux to enforcing mode by setting SELINUX=enforcing in /etc/selinux/config.
For example, the file will usually look like this after the change:
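A typical result (the SELINUXTYPE value may differ on your system):
# This file controls the state of SELinux on the system.
SELINUX=enforcing
SELINUXTYPE=targeted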
Confirm that SELinux is in enforcing mode:
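For example, getenforce should report Enforcing:
$ sudo getenforce
Enforcing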
For information on how to create an AppArmor profile, see the AppArmor documentation on Ubuntu.com.
The specific steps to configure the firewall service depend on the platform.
Configure firewalld for Enterprise ColumnStore on CentOS and RHEL:
Check if the firewalld service is running:
If the firewalld service was stopped to perform the installation, start it now:
Open up the relevant ports using firewall-cmd.
For example, if your cluster nodes are in the 192.0.2.0/24 subnet:
Reload the runtime configuration:
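A minimal sketch of opening ports and reloading, assuming the example subnet; only ports 3306 (MariaDB protocol) and 8640 (CMAPI) are shown, and your deployment may require additional Enterprise ColumnStore ports:
$ sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.0.2.0/24" port port="3306" protocol="tcp" accept'
$ sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.0.2.0/24" port port="8640" protocol="tcp" accept'
$ sudo firewall-cmd --reload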
Configure UFW for Enterprise ColumnStore on Ubuntu:
Check if the UFW service is running:
If the UFW service was stopped to perform the installation, start it now:
Open up the relevant ports using ufw.
For example, if your cluster nodes are in the 192.0.2.0/24 subnet in the range 192.0.2.1 - 192.0.2.3:
Reload the runtime configuration:
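A minimal sketch, assuming the example subnet; only ports 3306 (MariaDB protocol) and 8640 (CMAPI) are shown, and your deployment may require additional Enterprise ColumnStore ports:
$ sudo ufw allow proto tcp from 192.0.2.0/24 to any port 3306
$ sudo ufw allow proto tcp from 192.0.2.0/24 to any port 8640
$ sudo ufw reload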
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 4 of 9.
Next: Step 5: Test MariaDB Enterprise Server.
This page details step 7 of the 7-step procedure "Deploy Primary/Replica Topology".
This step tests MariaDB MaxScale.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
Use the maxctrl show maxscale command to view the global MaxScale configuration.
This action is performed on the MaxScale node:
Output should align to the global MaxScale configuration in the new configuration file you created.
Use the maxctrl list servers and maxctrl show server commands to view the configured server objects.
This action is performed on the MaxScale node:
Obtain the full list of server objects:
For each server object, view the configuration:
Output should align to the Server Object configuration you performed.
Use the maxctrl list monitors and maxctrl show monitor commands to view the configured monitors.
This action is performed on the MaxScale node:
Obtain the full list of monitors:
For each monitor, view the monitor configuration:
Output should align to the Galera Monitor (galeramon) configuration you performed.
Use the maxctrl list services and maxctrl show service commands to view the configured routing services.
This action is performed on the MaxScale node:
Obtain the full list of routing services:
For each service, view the service configuration:
Output should align to the Read Connection Router (readconnroute) or Read/Write Split Router (readwritesplit) configuration you performed.
Applications should use a dedicated user account. The user account must be created on the primary server.
When users connect to MaxScale, MaxScale authenticates the user connection before routing it to an Enterprise Server node. Enterprise Server authenticates the connection as originating from the IP address of the MaxScale node.
The application users must have one user account with the host IP address of the application server and a second user account with the host IP address of the MaxScale node.
The requirement of a duplicate user account can be avoided by enabling the proxy_protocol parameter for MaxScale and the proxy_protocol_networks system variable for Enterprise Server.
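A minimal sketch, assuming a MaxScale server object named node1 and the example addresses used on this page; proxy_protocol is set on each server object in the MaxScale configuration, and proxy_protocol_networks is set in the Enterprise Server configuration:
[node1]
type=server
address=192.0.2.101
port=3306
proxy_protocol=true

[mariadb]
proxy_protocol_networks=192.0.2.104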
This action is performed on any Enterprise Cluster node:
Connect to the node:
Create the database user account for your MaxScale node:
Replace 192.0.2.104 with the relevant IP address specification for your MaxScale node.
Passwords should meet your organization's password policies.
Grant the privileges required by your application to the database user account for your MaxScale node:
The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.
This action is performed on any Enterprise Cluster node:
Create the database user account for your application server:
Replace 192.0.2.11 with the relevant IP address specification for your application server.
Passwords should meet your organization's password policies.
Grant the privileges required by your application to the database user account for your application server:
The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.
To test the connection, use the MariaDB Client from your application server to connect to an Enterprise Cluster node through MaxScale.
This action is performed on the application server:
If you configured the Read Connection Router, confirm that MaxScale routes connections to the replica servers.
On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:
Open multiple terminals connected to your application server. In each terminal, use the MariaDB Client to connect to the listener port for the Read Connection Router (3308 in the example):
Use the application user credentials you created for the --user and --password options.
In each terminal, query the hostname system variable to identify the node to which you're connected:
Different terminals should return different values since MaxScale routes the connections to different nodes.
Since the router was configured with the slave router option, the Read Connection Router only routes connections to replica servers.
If you configured the Read/Write Split Router, confirm that readwritesplit correctly routes write queries.
This action is performed with multiple client connections to the MaxScale node.
On the MaxScale node, use the maxctrl list servers command to identify the Enterprise Cluster node currently operating as the primary server:
The server listed as Master is currently operating as the primary server.
On the MaxScale node, use the maxctrl list listeners command to identify the correct listener port:
In the example, the listener port for the Read/Write Split router is 3307.
Use the MariaDB Client to establish multiple connections to the listener configured for the Read/Write Split routing service, query_router_listener, on the MaxScale node:
The database user account for your application server should be specified by the --user option.
Using any client connection, create a test table:
Using each client connection, insert the values of the hostname system variable into the table using the INSERT statement to identify the node that executes the statement:
Using any client connection, query the table using the SELECT statement:
The output shows the hostname from the Enterprise Cluster node operating as the primary server. (Enterprise Cluster offsets auto-increment values by node to avoid write conflicts.)
Confirm that MaxScale is routing write queries to the Enterprise Cluster node operating as the primary server by checking that the test table only contains the hostname of the correct Enterprise Cluster node.
If you configured the Read/Write Split Router, confirm that readwritesplit properly routes read queries to multiple replica servers.
This action is performed with multiple clients connected to the MaxScale node.
On the MaxScale node, use maxctrl list servers to identify the Enterprise Cluster nodes that are currently operating as replica servers:
The servers listed as Slave are currently operating as replica servers.
On the MaxScale node, use the maxctrl list listeners command to identify the correct listener port:
In the example, the listener port for the Read/Write Split router is 3307.
Use the MariaDB Client to establish multiple connections to query_router_listener which is the listener configured for the Read/Write Split routing service on the MaxScale node:
The database user account for your application server should be specified by the --user option.
Using each client connection, query the hostname system variable to identify the node that executes the statement:
Confirm that MaxScale routes the SELECT statements to different replica servers.
For more information on different routing criteria, see slave_selection_criteria
Navigation in the procedure "Deploy Primary/Replica Topology":
This page was step 7 of 7.
This procedure is complete.
Apr 30 21:54:35 a1ebc96a2519 PrimProc[1004]: 35.668435 |0|0|0| C 28 CAL0000: Error total memory available is less than 3GB.
ERROR 1815 (HY000): Internal error: System is not ready yet. Please try again.
https://{server}:{port}/cmapi/{version}/{route}/{command}
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/start \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20}' \
| jq .
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/shutdown \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20}' \
| jq .
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.2"}' \
| jq .
$ curl -k -s -X DELETE https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.2"}' \
| jq .
Use the maxctrl show maxscale command to view the global MaxScale configuration.
This action is performed on the MaxScale node:
Output should align to the global MaxScale configuration in the new configuration file you created.
Use the maxctrl list servers and maxctrl show server commands to view the configured server objects.
This action is performed on the MaxScale node:
Obtain the full list of server objects:
For each server object, view the configuration:
Output should align to the Server Object configuration you performed.
Use the maxctrl list monitors and maxctrl show monitor commands to view the configured monitors.
This action is performed on the MaxScale node:
Obtain the full list of monitors:
For each monitor, view the monitor configuration:
Output should align to the Galera Monitor (galeramon) configuration you performed.
Use the maxctrl list services and maxctrl show service commands to view the configured routing services.
This action is performed on the MaxScale node:
Obtain the full list of routing services:
For each service, view the service configuration:
Output should align to the Read Connection Router (readconnroute) or Read/Write Split Router (readwritesplit) configuration you performed.
Applications should use a dedicated user account. The user account must be created on the primary server.
When users connect to MaxScale, MaxScale authenticates the user connection before routing it to an Enterprise Server node. Enterprise Server authenticates the connection as originating from the IP address of the MaxScale node.
The application users must have one user account with the host IP address of the application server and a second user account with the host IP address of the MaxScale node.
The requirement of a duplicate user account can be avoided by enabling the proxy_protocol parameter for MaxScale and the proxy_protocol_networks system variable for Enterprise Server.
This action is performed on any Enterprise Cluster node:
Connect to the node:
Create the database user account for your MaxScale node:
Replace 192.0.2.104 with the relevant IP address specification for your MaxScale node.
Passwords should meet your organization's password policies.
Grant the privileges required by your application to the database user account for your MaxScale node:
The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.
This action is performed on any Enterprise Cluster node:
Create the database user account for your application server:
Replace 192.0.2.11 with the relevant IP address specification for your application server.
Passwords should meet your organization's password policies.
Grant the privileges required by your application to the database user account for your application server:
The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.
To test the connection, use the MariaDB Client from your application server to connect to an Enterprise Cluster node through MaxScale.
This action is performed on the application server:
If you configured the Read Connection Router, confirm that MaxScale routes connections to the replica servers.
On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:
Open multiple terminals connected to your application server. In each terminal, use the MariaDB Client to connect to the listener port for the Read Connection Router (3308 in the example):
Use the application user credentials you created for the --user and --password options.
In each terminal, query the hostname system variable to identify the node to which you're connected:
Different terminals should return different values since MaxScale routes the connections to different nodes.
Since the router was configured with the slave router option, the Read Connection Router only routes connections to replica servers.
If you configured the Read/Write Split Router, confirm that readwritesplit correctly routes write queries.
This action is performed with multiple client connections to the MaxScale node.
On the MaxScale node, use the maxctrl list servers command to identify the Enterprise Cluster node currently operating as the primary server:
The server listed as Master is currently operating as the primary server.
On the MaxScale node, use the maxctrl list listeners command to identify the correct listener port:
In the example, the listener port for the Read/Write Split router is 3307.
Use the MariaDB Client to establish multiple connections to the listener configured for the Read/Write Split routing service, query_router_listener, on the MaxScale node:
The database user account for your application server should be specified by the --user option.
Using any client connection, create a test table:
Using each client connection, insert the values of the hostname system variable into the table using the INSERT statement to identify the node that executes the statement:
Using any client connection, query the table using the SELECT statement:
The output shows the hostname from the Enterprise Cluster node operating as the primary server. (Enterprise Cluster offsets auto-increment values by node to avoid write conflicts.)
Confirm that MaxScale is routing write queries to the Enterprise Cluster node operating as the primary server by checking that the test table only contains the hostname of the correct Enterprise Cluster node.
If you configured the Read/Write Split Router, confirm that readwritesplit properly routes read queries to multiple replica servers.
This action is performed with multiple clients connected to the MaxScale node.
On the MaxScale node, use maxctrl list servers to identify the Enterprise Cluster nodes that are currently operating as replica servers:
The servers listed as Slave are currently operating as replica servers.
On the MaxScale node, use the maxctrl list listeners command to identify the correct listener port:
In the example, the listener port for the Read/Write Split router is 3307.
Use the MariaDB Client to establish multiple connections to query_router_listener which is the listener configured for the Read/Write Split routing service on the MaxScale node:
The database user account for your application server should be specified by the --user option.
Using each client connection, query the hostname system variable to identify the node that executes the statement:
The output shows the hostname value from one of the Enterprise Cluster nodes operating as replica servers.
Confirm that MaxScale is routing read queries to multiple Enterprise Cluster nodes operating as replica servers by checking that different client connections return different hostname values.
For more information on different routing criteria, see slave_selection_criteria.
Navigation in the procedure "Deploy Galera Cluster Topology":
This page was step 6 of 6.
This procedure is complete.
$ maxctrl show maxscale
┌──────────────┬───────────────────────────────────────────────────────┐
│ Version │ 25.01.2 │
├──────────────┼───────────────────────────────────────────────────────┤
│ Commit │ 3761fa7a52046bc58faad8b5a139116f9e33364c │
├──────────────┼───────────────────────────────────────────────────────┤
│ Started At │ Thu, 05 Aug 2021 20:21:20 GMT │
├──────────────┼───────────────────────────────────────────────────────┤
│ Activated At │ Thu, 05 Aug 2021 20:21:20 GMT │
├──────────────┼───────────────────────────────────────────────────────┤
│ Uptime │ 868 │
├──────────────┼───────────────────────────────────────────────────────┤
│ Config Sync │ null │
├──────────────┼───────────────────────────────────────────────────────┤
│ Parameters │ { │
│ │ "admin_auth": true, │
│ │ "admin_enabled": true, │
│ │ "admin_gui": true, │
│ │ "admin_host": "0.0.0.0", │
│ │ "admin_log_auth_failures": true, │
│ │ "admin_pam_readonly_service": null, │
│ │ "admin_pam_readwrite_service": null, │
│ │ "admin_port": 8989, │
│ │ "admin_secure_gui": false, │
│ │ "admin_ssl_ca_cert": null, │
│ │ "admin_ssl_cert": null, │
│ │ "admin_ssl_key": null, │
│ │ "admin_ssl_version": "MAX", │
│ │ "auth_connect_timeout": "10000ms", │
│ │ "auth_read_timeout": "10000ms", │
│ │ "auth_write_timeout": "10000ms", │
│ │ "cachedir": "/var/cache/maxscale", │
│ │ "config_sync_cluster": null, │
│ │ "config_sync_interval": "5000ms", │
│ │ "config_sync_password": "*****", │
│ │ "config_sync_timeout": "10000ms", │
│ │ "config_sync_user": null, │
│ │ "connector_plugindir": "/usr/lib64/mysql/plugin", │
│ │ "datadir": "/var/lib/maxscale", │
│ │ "debug": null, │
│ │ "dump_last_statements": "never", │
│ │ "execdir": "/usr/bin", │
│ │ "language": "/var/lib/maxscale", │
│ │ "libdir": "/usr/lib64/maxscale", │
│ │ "load_persisted_configs": true, │
│ │ "local_address": null, │
│ │ "log_debug": false, │
│ │ "log_info": false, │
│ │ "log_notice": true, │
│ │ "log_throttling": { │
│ │ "count": 10, │
│ │ "suppress": 10000, │
│ │ "window": 1000 │
│ │ }, │
│ │ "log_warn_super_user": false, │
│ │ "log_warning": true, │
│ │ "logdir": "/var/log/maxscale", │
│ │ "max_auth_errors_until_block": 10, │
│ │ "maxlog": true, │
│ │ "module_configdir": "/etc/maxscale.modules.d", │
│ │ "ms_timestamp": false, │
│ │ "passive": false, │
│ │ "persistdir": "/var/lib/maxscale/maxscale.cnf.d", │
│ │ "piddir": "/var/run/maxscale", │
│ │ "query_classifier": "qc_sqlite", │
│ │ "query_classifier_args": null, │
│ │ "query_classifier_cache_size": 289073971, │
│ │ "query_retries": 1, │
│ │ "query_retry_timeout": "5000ms", │
│ │ "rebalance_period": "0ms", │
│ │ "rebalance_threshold": 20, │
│ │ "rebalance_window": 10, │
│ │ "retain_last_statements": 0, │
│ │ "session_trace": 0, │
│ │ "skip_permission_checks": false, │
│ │ "sql_mode": "default", │
│ │ "syslog": true, │
│ │ "threads": 1, │
│ │ "users_refresh_interval": "0ms", │
│ │ "users_refresh_time": "30000ms", │
│ │ "writeq_high_water": 16777216, │
│ │ "writeq_low_water": 8192 │
│ │ } │
└──────────────┴───────────────────────────────────────────────────────┘
$ maxctrl list servers
┌────────┬─────────────┬──────┬─────────────┬─────────────────────────┬──────┐
│ Server │ Address │ Port │ Connections │ State │ GTID │
├────────┼─────────────┼──────┼─────────────┼─────────────────────────┼──────┤
│ node1 │ 192.0.2.101 │ 3306 │ 0 │ Slave, Synced, Running │ │
├────────┼─────────────┼──────┼─────────────┼─────────────────────────┼──────┤
│ node2 │ 192.0.2.102 │ 3306 │ 0 │ Slave, Synced, Running │ │
├────────┼─────────────┼──────┼─────────────┼─────────────────────────┼──────┤
│ node3 │ 192.0.2.103 │ 3306 │ 0 │ Master, Synced, Running │ │
└────────┴─────────────┴──────┴─────────────┴─────────────────────────┴──────┘
$ maxctrl show server node3
┌─────────────────────┬───────────────────────────────────────────┐
│ Server │ node3 │
├─────────────────────┼───────────────────────────────────────────┤
│ Address │ 192.0.2.103 │
├─────────────────────┼───────────────────────────────────────────┤
│ Port │ 3306 │
├─────────────────────┼───────────────────────────────────────────┤
│ State │ Master, Synced, Running │
├─────────────────────┼───────────────────────────────────────────┤
│ Version │ 11.4.5-3-MariaDB-enterprise-log │
├─────────────────────┼───────────────────────────────────────────┤
│ Last Event │ master_up │
├─────────────────────┼───────────────────────────────────────────┤
│ Triggered At │ Thu, 05 Aug 2021 20:22:26 GMT │
├─────────────────────┼───────────────────────────────────────────┤
│ Services │ connection_router_service │
│ │ query_router_service │
├─────────────────────┼───────────────────────────────────────────┤
│ Monitors │ cluster_monitor │
├─────────────────────┼───────────────────────────────────────────┤
│ Master ID │ -1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Node ID │ 1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Slave Server IDs │ │
├─────────────────────┼───────────────────────────────────────────┤
│ Current Connections │ 1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Total Connections │ 1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Max Connections │ 1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Statistics │ { │
│ │ "active_operations": 0, │
│ │ "adaptive_avg_select_time": "0ns", │
│ │ "connection_pool_empty": 0, │
│ │ "connections": 1, │
│ │ "max_connections": 1, │
│ │ "max_pool_size": 0, │
│ │ "persistent_connections": 0, │
│ │ "reused_connections": 0, │
│ │ "routed_packets": 0, │
│ │ "total_connections": 1 │
│ │ } │
├─────────────────────┼───────────────────────────────────────────┤
│ Parameters │ { │
│ │ "address": "192.0.2.103", │
│ │ "disk_space_threshold": null, │
│ │ "extra_port": 0, │
│ │ "monitorpw": null, │
│ │ "monitoruser": null, │
│ │ "persistmaxtime": "0ms", │
│ │ "persistpoolmax": 0, │
│ │ "port": 3306, │
│ │ "priority": 0, │
│ │ "proxy_protocol": false, │
│ │ "rank": "primary", │
│ │ "socket": null, │
│ │ "ssl": false, │
│ │ "ssl_ca_cert": null, │
│ │ "ssl_cert": null, │
│ │ "ssl_cert_verify_depth": 9, │
│ │ "ssl_cipher": null, │
│ │ "ssl_key": null, │
│ │ "ssl_verify_peer_certificate": false, │
│ │ "ssl_verify_peer_host": false, │
│ │ "ssl_version": "MAX" │
│ │ } │
└─────────────────────┴───────────────────────────────────────────┘
$ maxctrl list monitors
┌─────────────────┬─────────┬─────────────────────┐
│ Monitor │ State │ Servers │
├─────────────────┼─────────┼─────────────────────┤
│ cluster_monitor │ Running │ node1, node2, node3 │
└─────────────────┴─────────┴─────────────────────┘
$ maxctrl show monitor cluster_monitor
┌─────────────────────┬──────────────────────────────────────────────────────┐
│ Monitor │ cluster_monitor │
├─────────────────────┼──────────────────────────────────────────────────────┤
│ Module │ galeramon │
├─────────────────────┼──────────────────────────────────────────────────────┤
│ State │ Running │
├─────────────────────┼──────────────────────────────────────────────────────┤
│ Servers │ node1 │
│ │ node2 │
│ │ node3 │
├─────────────────────┼──────────────────────────────────────────────────────┤
│ Parameters │ { │
│ │ .. │
│ │ } │
├─────────────────────┼──────────────────────────────────────────────────────┤
│ Monitor Diagnostics │ { │
│ │ .. │
│ │ } │
└─────────────────────┴──────────────────────────────────────────────────────┘
$ maxctrl list services
┌───────────────────────────┬────────────────┬─────────────┬───────────────────┬─────────────────────┐
│ Service │ Router │ Connections │ Total Connections │ Servers │
├───────────────────────────┼────────────────┼─────────────┼───────────────────┼─────────────────────┤
│ connection_router_service │ readconnroute │ 0 │ 0 │ node1, node2, node3 │
├───────────────────────────┼────────────────┼─────────────┼───────────────────┼─────────────────────┤
│ query_router_service │ readwritesplit │ 1 │ 1 │ node1, node2, node3 │
└───────────────────────────┴────────────────┴─────────────┴───────────────────┴─────────────────────┘
$ maxctrl show service query_router_service
┌─────────────────────┬─────────────────────────────────────────────────────────────┐
│ Service │ query_router_service │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Router │ readwritesplit │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ State │ Started │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Started At │ Thu Aug 5 20:23:38 2021 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Current Connections │ 1 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Total Connections │ 1 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Max Connections │ 1 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Cluster │ │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Servers │ node1 │
│ │ node2 │
│ │ node3 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Services │ │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Filters │ │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Parameters │ { │
│ │ "auth_all_servers": false, │
│ │ "causal_reads": "false", │
│ │ "causal_reads_timeout": "10000ms", │
│ │ "connection_keepalive": "300000ms", │
│ │ "connection_timeout": "0ms", │
│ │ "delayed_retry": false, │
│ │ "delayed_retry_timeout": "10000ms", │
│ │ "disable_sescmd_history": false, │
│ │ "enable_root_user": false, │
│ │ "idle_session_pool_time": "-1000ms", │
│ │ "lazy_connect": false, │
│ │ "localhost_match_wildcard_host": true, │
│ │ "log_auth_warnings": true, │
│ │ "master_accept_reads": false, │
│ │ "master_failure_mode": "fail_instantly", │
│ │ "master_reconnection": false, │
│ │ "max_connections": 0, │
│ │ "max_sescmd_history": 50, │
│ │ "max_slave_connections": 255, │
│ │ "max_slave_replication_lag": "0ms", │
│ │ "net_write_timeout": "0ms", │
│ │ "optimistic_trx": false, │
│ │ "password": "*****", │
│ │ "prune_sescmd_history": true, │
│ │ "rank": "primary", │
│ │ "retain_last_statements": -1, │
│ │ "retry_failed_reads": true, │
│ │ "reuse_prepared_statements": false, │
│ │ "router": "readwritesplit", │
│ │ "session_trace": false, │
│ │ "session_track_trx_state": false, │
│ │ "slave_connections": 255, │
│ │ "slave_selection_criteria": "LEAST_CURRENT_OPERATIONS", │
│ │ "strict_multi_stmt": false, │
│ │ "strict_sp_calls": false, │
│ │ "strip_db_esc": true, │
│ │ "transaction_replay": false, │
│ │ "transaction_replay_attempts": 5, │
│ │ "transaction_replay_max_size": 1073741824, │
│ │ "transaction_replay_retry_on_deadlock": false, │
│ │ "type": "service", │
│ │ "use_sql_variables_in": "all", │
│ │ "user": "mxs", │
│ │ "version_string": null │
│ │ } │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Router Diagnostics │ { │
│ │ "avg_sescmd_history_length": 0, │
│ │ "max_sescmd_history_length": 0, │
│ │ "queries": 1, │
│ │ "replayed_transactions": 0, │
│ │ "ro_transactions": 0, │
│ │ "route_all": 0, │
│ │ "route_master": 0, │
│ │ "route_slave": 1, │
│ │ "rw_transactions": 0, │
│ │ "server_query_statistics": [ │
│ │ { │
│ │ "avg_selects_per_session": 0, │
│ │ "avg_sess_duration": "0ns", │
│ │ "id": "node2", │
│ │ "read": 1, │
│ │ "total": 1, │
│ │ "write": 0 │
│ │ } │
│ │ ] │
│ │ } │
└─────────────────────┴─────────────────────────────────────────────────────────────┘
$ sudo mariadb
CREATE USER 'app_user'@'192.0.2.104' IDENTIFIED BY 'app_user_passwd';
GRANT ALL ON test.* TO 'app_user'@'192.0.2.104';
CREATE USER 'app_user'@'192.0.2.11' IDENTIFIED BY 'app_user_passwd';
GRANT ALL ON test.* TO 'app_user'@'192.0.2.11';
$ mariadb --host 192.0.2.104 --port 3307 \
--user app_user --password
$ maxctrl list listeners
┌────────────────────────────┬──────┬──────┬─────────┬───────────────────────────┐
│ Name │ Port │ Host │ State │ Service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ connection_router_listener │ 3308 │ :: │ Running │ connection_router_service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ query_router_listener │ 3307 │ :: │ Running │ query_router_service │
└────────────────────────────┴──────┴──────┴─────────┴───────────────────────────┘
$ mariadb --host 192.0.2.104 --port 3308 \
--user app_user --password
SELECT @@global.hostname;
+-------------------+
| @@global.hostname |
+-------------------+
| node2 |
+-------------------+
$ maxctrl list servers
┌────────┬─────────────┬──────┬─────────────┬─────────────────────────┬──────┐
│ Server │ Address │ Port │ Connections │ State │ GTID │
├────────┼─────────────┼──────┼─────────────┼─────────────────────────┼──────┤
│ node1 │ 192.0.2.101 │ 3306 │ 0 │ Slave, Synced, Running │ │
├────────┼─────────────┼──────┼─────────────┼─────────────────────────┼──────┤
│ node2 │ 192.0.2.102 │ 3306 │ 0 │ Slave, Synced, Running │ │
├────────┼─────────────┼──────┼─────────────┼─────────────────────────┼──────┤
│ node3 │ 192.0.2.103 │ 3306 │ 0 │ Master, Synced, Running │ │
└────────┴─────────────┴──────┴─────────────┴─────────────────────────┴──────┘
$ maxctrl list listeners galerarouter
┌────────────────────────────┬──────┬───────┬─────────┬───────────────────────────┐
│ Name │ Port │ Host │ State │ Service │
├────────────────────────────┼──────┼───────┼─────────┼───────────────────────────┤
│ connection_router_listener │ 3308 │ │ Running │ connection_router_service │
│ query_router_listener │ 3307 │ │ Running │ query_router_service │
└────────────────────────────┴──────┴───────┴─────────┴───────────────────────────┘
$ mariadb --host=192.0.2.104 --port=3307 \
--user=app_user --password=app_user_passwd
CREATE TABLE test.load_balancing_test (
id INT PRIMARY KEY AUTO_INCREMENT,
hostname VARCHAR(256)
);
INSERT INTO test.load_balancing_test (hostname)
VALUES (@@global.hostname);
SELECT * FROM test.load_balancing_test;
+----+----------+
| id | hostname |
+----+----------+
| 1 | node3 |
| 4 | node3 |
| 7 | node3 |
+----+----------+
$ maxctrl list servers
┌────────┬─────────────┬──────┬─────────────┬─────────────────────────┬──────┐
│ Server │ Address │ Port │ Connections │ State │ GTID │
├────────┼─────────────┼──────┼─────────────┼─────────────────────────┼──────┤
│ node1 │ 192.0.2.101 │ 3306 │ 0 │ Slave, Synced, Running │ │
├────────┼─────────────┼──────┼─────────────┼─────────────────────────┼──────┤
│ node2 │ 192.0.2.102 │ 3306 │ 0 │ Slave, Synced, Running │ │
├────────┼─────────────┼──────┼─────────────┼─────────────────────────┼──────┤
│ node3 │ 192.0.2.103 │ 3306 │ 0 │ Master, Synced, Running │ │
└────────┴─────────────┴──────┴─────────────┴─────────────────────────┴──────┘
$ maxctrl list listeners
┌────────────────────────────┬──────┬───────┬─────────┬───────────────────────────┐
│ Name │ Port │ Host │ State │ Service │
├────────────────────────────┼──────┼───────┼─────────┼───────────────────────────┤
│ connection_router_listener │ 3308 │ │ Running │ connection_router_service │
│ query_router_listener │ 3307 │ │ Running │ query_router_service │
└────────────────────────────┴──────┴───────┴─────────┴───────────────────────────┘
$ mariadb --host=192.0.2.104 --port=3307 \
--user=app_user --password=app_user_passwd
SELECT @@global.hostname;
+-------------------+
| @@global.hostname |
+-------------------+
| node2 |
+-------------------+
For reference, the mandatory system variables and options for ColumnStore Object Storage are:
character_set_server: Set this system variable to utf8.
collation_server: Set this system variable to utf8_general_ci.
columnstore_use_import_for_batchinsert: Set this system variable to ALWAYS to always use cpimport for LOAD DATA INFILE and INSERT...SELECT statements.
gtid_strict_mode: Set this system variable to ON.
log_bin: Set this option to the file you want to use for the Binary Log. Setting this option enables binary logging.
log_bin_index: Set this option to the file you want to use to track binlog filenames.
log_slave_updates: Set this system variable to ON.
relay_log: Set this option to the file you want to use for the Relay Logs. Setting this option enables relay logging.
relay_log_index: Set this option to the file you want to use to index Relay Log filenames.
server_id: Sets the numeric Server ID for this MariaDB Enterprise Server. The value set on this option must be unique to each node.
Subsequent steps in this procedure:
Start and Configure MariaDB MaxScale
Test MariaDB MaxScale
Import Data
Shared storage options for on-premises deployments:
GlusterFS: GlusterFS is a distributed file system. GlusterFS supports replication and failover.
NFS (Network File System): NFS is a distributed file system. If NFS is used, the storage should be mounted with the sync option to ensure that each node flushes its changes immediately. For on-premises deployments, NFS is the recommended option for the Storage Manager directory, and any S3-compatible storage is the recommended option for data.
$ sudo systemctl stop mariadb
$ sudo systemctl stop mariadb-columnstore
$ sudo systemctl stop mariadb-columnstore-cmapi
[mariadb]
bind_address = 0.0.0.0
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci
log_bin = mariadb-bin
log_bin_index = mariadb-bin.index
relay_log = mariadb-relay
relay_log_index = mariadb-relay.index
log_slave_updates = ON
gtid_strict_mode = ON
# This must be unique on each Enterprise ColumnStore node
server_id = 1
[ObjectStorage]
…
service = S3
…
[S3]
bucket = your_columnstore_bucket_name
endpoint = your_s3_endpoint
aws_access_key_id = your_s3_access_key_id
aws_secret_access_key = your_s3_secret_key
# iam_role_name = your_iam_role
# sts_region = your_sts_region
# sts_endpoint = your_sts_endpoint
# ec2_iam_mode = enabled
[Cache]
cache_size = your_local_cache_size
path = your_local_cache_path
$ sudo systemctl start mariadb
$ sudo systemctl enable mariadb
$ sudo systemctl stop mariadb-columnstore
$ sudo systemctl start mariadb-columnstore-cmapi
$ sudo systemctl enable mariadb-columnstore-cmapi
CREATE USER 'util_user'@'127.0.0.1'
IDENTIFIED BY 'util_user_passwd';
GRANT SELECT, PROCESS ON *.*
TO 'util_user'@'127.0.0.1';
$ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
$ sudo mcsSetConfig CrossEngineSupport Port 3306
$ sudo mcsSetConfig CrossEngineSupport User util_user
$ sudo mcsSetConfig CrossEngineSupport Password util_user_passwd
CREATE USER 'repl'@'192.0.2.%' IDENTIFIED BY 'repl_passwd';
GRANT REPLICA MONITOR,
REPLICATION REPLICA,
REPLICATION REPLICA ADMIN,
REPLICATION MASTER ADMIN
ON *.* TO 'repl'@'192.0.2.%';
CREATE USER 'mxs'@'192.0.2.%'
IDENTIFIED BY 'mxs_passwd';
GRANT SHOW DATABASES ON *.* TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.columns_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.db TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.procs_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.proxies_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.roles_mapping TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.tables_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.user TO 'mxs'@'192.0.2.%';
GRANT BINLOG ADMIN,
READ_ONLY ADMIN,
RELOAD,
REPLICA MONITOR,
REPLICATION MASTER ADMIN,
REPLICATION REPLICA ADMIN,
REPLICATION REPLICA,
SHOW DATABASES,
SELECT
ON *.* TO 'mxs'@'192.0.2.%';
CHANGE MASTER TO
MASTER_HOST='192.0.2.1',
MASTER_USER='repl',
MASTER_PASSWORD='repl_passwd',
MASTER_USE_GTID=slave_pos;
START REPLICA;
SHOW REPLICA STATUS;
SET GLOBAL read_only=ON;
$ openssl rand -hex 32
93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":120, "node": "192.0.2.1"}' \
| jq .
{
"timestamp": "2020-10-28 00:39:14.672142",
"node_id": "192.0.2.1"
}
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .
{
"timestamp": "2020-12-15 00:40:34.353574",
"192.0.2.1": {
"timestamp": "2020-12-15 00:40:34.362374",
"uptime": 11467,
"dbrm_mode": "master",
"cluster_mode": "readwrite",
"dbroots": [
"1"
],
"module_id": 1,
"services": [
{
"name": "workernode",
"pid": 19202
},
{
"name": "controllernode",
"pid": 19232
},
{
"name": "PrimProc",
"pid": 19254
},
{
"name": "ExeMgr",
"pid": 19292
},
{
"name": "WriteEngine",
"pid": 19316
},
{
"name": "DMLProc",
"pid": 19332
},
{
"name": "DDLProc",
"pid": 19366
}
]
}
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":120, "node": "192.0.2.2"}' \
| jq .
{
"timestamp": "2020-10-28 00:42:42.796050",
"node_id": "192.0.2.2"
}
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .
{
"timestamp": "2020-12-15 00:40:34.353574",
"192.0.2.1": {
"timestamp": "2020-12-15 00:40:34.362374",
"uptime": 11467,
"dbrm_mode": "master",
"cluster_mode": "readwrite",
"dbroots": [
"1"
],
"module_id": 1,
"services": [
{
"name": "workernode",
"pid": 19202
},
{
"name": "controllernode",
"pid": 19232
},
{
"name": "PrimProc",
"pid": 19254
},
{
"name": "ExeMgr",
"pid": 19292
},
{
"name": "WriteEngine",
"pid": 19316
},
{
"name": "DMLProc",
"pid": 19332
},
{
"name": "DDLProc",
"pid": 19366
}
]
},
"192.0.2.2": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"192.0.2.3": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"num_nodes": 3
}
$ sudo yum install policycoreutils policycoreutils-python
$ sudo yum install policycoreutils python3-policycoreutils policycoreutils-python-utils
$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
Nothing to do
$ sudo semodule -i mariadb_local.pp
$ sudo setenforce enforcing
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
$ sudo getenforce
Enforcing
$ sudo systemctl status firewalld
$ sudo systemctl start firewalld
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="3306" protocol="tcp"
accept'
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8600-8630" protocol="tcp"
accept'
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8640" protocol="tcp"
accept'
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8700" protocol="tcp"
accept'
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8800" protocol="tcp"
accept'
$ sudo firewall-cmd --reload
$ sudo ufw status verbose
$ sudo ufw enable
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 3306 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8600:8630 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8640 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8700 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8800 proto tcp
$ sudo ufw reload
Enterprise Server 10.5
Enterprise Server 10.6
Enterprise Server 11.4
Columnar storage engine with S3-compatible object storage
Highly available
Automatic failover via MaxScale and CMAPI
Scales reads via MaxScale
Bulk data import
This procedure describes the deployment of the ColumnStore Object Storage topology with MariaDB Enterprise Server 10.5, MariaDB Enterprise ColumnStore 5, and MariaDB MaxScale 2.5.
MariaDB Enterprise ColumnStore 5 is a columnar storage engine for MariaDB Enterprise Server 10.5. Enterprise ColumnStore is suitable for Online Analytical Processing (OLAP) workloads.
This procedure has 9 steps, which are executed in sequence.
This procedure represents basic product capability and deploys 3 Enterprise ColumnStore nodes and 1 MaxScale node.
This page provides an overview of the topology, requirements, and deployment procedures.
Please read and understand this procedure before executing.
Customers can obtain support by submitting a support case.
The following components are deployed during this procedure:
The MariaDB Enterprise ColumnStore topology with Object Storage delivers production analytics with high availability, fault tolerance, and limitless data storage by leveraging S3-compatible storage.
The topology consists of:
One or more MaxScale nodes
An odd number of ColumnStore nodes (minimum of 3) running ES, Enterprise ColumnStore, and CMAPI
The MaxScale nodes:
Monitor the health and availability of each ColumnStore node using the MariaDB Monitor (mariadbmon)
Accept client and application connections
Route queries to ColumnStore nodes using the Read/Write Split Router (readwritesplit)
The ColumnStore nodes:
Receive queries from MaxScale
Execute queries
Use S3-compatible object storage for data
Use shared local storage for the Storage Manager directory
These requirements are for the ColumnStore Object Storage topology when deployed with MariaDB Enterprise Server 10.5, MariaDB Enterprise ColumnStore 5, and MariaDB MaxScale 2.5.
Node Count
Operating System
Minimum Hardware Requirements
Recommended Hardware Requirements
MaxScale nodes: 1 or more are required.
Enterprise ColumnStore nodes: 3 or more are required for high availability. You should always have an odd number of nodes in a multi-node ColumnStore deployment to avoid split-brain scenarios.
In alignment with MariaDB Engineering Policies, the ColumnStore Object Storage topology with MariaDB Enterprise Server 10.5, MariaDB Enterprise ColumnStore 5, and MariaDB MaxScale 2.5 is provided for:
CentOS Linux 7 (x86_64)
Debian 10 (x86_64)
Red Hat Enterprise Linux 7 (x86_64)
Red Hat Enterprise Linux 8 (x86_64)
MariaDB Enterprise ColumnStore's minimum hardware requirements are not intended for production environments, but the minimum hardware requirements can be appropriate for development and test environments. For production environments, see the recommended hardware requirements instead.
The minimum hardware requirements are:
MariaDB Enterprise ColumnStore will refuse to start if the system has less than 3 GB of memory.
If Enterprise ColumnStore is started on a system with less memory, the following error message will be written to the ColumnStore system log called crit.log:
And the following error message will be raised to the client:
MariaDB Enterprise ColumnStore's recommended hardware requirements are intended for production analytics.
The recommended hardware requirements are:
The ColumnStore Object Storage topology requires the following storage types:
The ColumnStore Object Storage topology uses S3-compatible object storage to store data.
Many S3-compatible object storage services exist. MariaDB Corporation cannot make guarantees about all S3-compatible object storage services, because different services provide different functionality.
For the preferred S3-compatible object storage providers that provide cloud and hardware solutions, see the following sections:
The use of non-cloud and non-hardware providers is at your own risk.
If you have any questions about using specific S3-compatible object storage with MariaDB Enterprise ColumnStore, contact us.
Amazon Web Services (AWS) S3
Google Cloud Storage
Azure Storage
Alibaba Cloud Object Storage Service
Cloudian HyperStore
Cohesity S3
Dell EMC
IBM Cloud Object Storage
The ColumnStore Object Storage topology uses shared local storage for the Storage Manager directory to store metadata.
The Storage Manager directory is located at the following path by default:
/var/lib/columnstore/storagemanager
The most common shared local storage options for the ColumnStore Object Storage topology are:
For best results, MariaDB Corporation recommends the following storage options:
Enterprise ColumnStore's CMAPI (Cluster Management API) is a REST API that can be used to manage a multi-node Enterprise ColumnStore cluster.
Many tools are capable of interacting with REST APIs. For example, the curl utility could be used to make REST API calls from the command-line.
Many programming languages also have libraries for interacting with REST APIs.
The examples below show how to use the CMAPI with curl.
For example:
https://mcs1:8640/cmapi/0.4.0/cluster/shutdown
https://mcs1:8640/cmapi/0.4.0/cluster/start
https://mcs1:8640/cmapi/0.4.0/cluster/status
With CMAPI 1.4 and later:
https://mcs1:8640/cmapi/0.4.0/cluster/node
With CMAPI 1.3 and earlier:
https://mcs1:8640/cmapi/0.4.0/cluster/add-node
https://mcs1:8640/cmapi/0.4.0/cluster/remove-node
'x-api-key': '93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd'
'Content-Type': 'application/json'
x-api-key can be set to any value of your choice during the first call to the server. Subsequent connections will require this same key.
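For example, a cluster status request that passes both headers could look like the following (the hostname mcs1 and the key value shown here are placeholders for your own values):
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .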
curl examples remain valid but are now considered legacy.
$ mcs cluster status
$ mcs cluster start --timeout 20
$ mcs cluster shutdown --timeout 20
With CMAPI 1.4 and later:
With CMAPI 1.3 and earlier:
With CMAPI 1.4 and later:
With CMAPI 1.3 and earlier:
MariaDB Enterprise Server packages are configured to read configuration files from different paths, depending on the operating system. Making custom changes to Enterprise Server default configuration files is not recommended because custom changes may be overwritten by other default configuration files that are loaded later.
To ensure that your custom changes will be read last, create a custom configuration file with the z- prefix in one of the include directories.
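For example, on a RHEL-style system the custom file could be created as follows (a minimal sketch; the filename and the single setting shown are illustrative, so use the settings your deployment actually requires):
$ sudo tee /etc/my.cnf.d/z-custom-mariadb.cnf <<'EOF'
[mariadb]
log_error = mariadbd.err
EOF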
The systemctl command is used to start and stop the MariaDB Enterprise Server service.
For additional information, see "".
MariaDB Enterprise Server produces log data that can be helpful in problem diagnosis.
Log filenames and locations may be overridden in the server configuration. The default location of logs is the data directory. The data directory is specified by the datadir system variable.
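For example, to confirm the data directory (and therefore the default log location) on a running server, you could query the datadir system variable (a minimal sketch):
$ sudo mariadb -e "SHOW GLOBAL VARIABLES LIKE 'datadir'"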
The systemctl command is used to start and stop the ColumnStore service.
In the ColumnStore Object Storage topology, the mariadb-columnstore service should not be enabled. The CMAPI service restarts Enterprise ColumnStore as needed, so it does not need to start automatically upon reboot.
The systemctl command is used to start and stop the CMAPI service.
For additional information on endpoints, see "CMAPI".
MaxScale can be configured using several methods. These methods make use of MaxScale's REST API.
The procedure on these pages configures MaxScale using MaxCtrl.
The systemctl command is used to start and stop the MaxScale service.
For additional information, see "".
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
Next: Step 1: Prepare ColumnStore Nodes.
These instructions detail the deployment of MariaDB ColumnStore 6 with MariaDB Community Server 10.6 in a Single-node ColumnStore Deployment configuration on a range of supported Operating Systems.
These instructions detail how to deploy a single-node columnar database, which is suited for an analytical or OLAP workload that does not require high availability (HA). This deployment type is generally for non-production use cases, such as for development and testing.
These instructions detail the deployment of the following MariaDB Community Server components:
Single-node ColumnStore 6 does not support high availability.
For high availability and scalability, instead see "" or "".
Systems hosting a ColumnStore deployment require some additional configuration prior to installation:
MariaDB ColumnStore performs best when certain Linux kernel parameters are optimized.
Set the relevant kernel parameters in a sysctl configuration file. For proper change management, we recommend setting them in a ColumnStore-specific configuration file.
For example, create a /etc/sysctl.d/90-mariadb-columnstore.conf file with the following contents:
Set the same kernel parameters at runtime using the sysctl command:
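For example, assuming the configuration file described above has been created:
$ sudo sysctl --load=/etc/sysctl.d/90-mariadb-columnstore.conf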
To avoid confusion and potential problems, we recommend configuring the system's Linux Security Module (LSM) during installation. The specific steps to configure the security module will depend on the platform.
In a later section, we will configure the security module and restart it.
SELinux must be set to permissive mode before installing MariaDB ColumnStore.
Set SELinux to permissive mode by setting SELINUX=permissive in /etc/selinux/config.
For example, the file will usually look like this after the change:
Reboot the system.
Confirm that SELinux is in permissive mode using getenforce:
AppArmor must be disabled before installing MariaDB ColumnStore.
Disable AppArmor:
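For example, the AppArmor service can be disabled with systemctl:
$ sudo systemctl disable apparmor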
Reboot the system.
Confirm that no AppArmor profiles are loaded using aa-status:
Example output:
When using MariaDB ColumnStore, it is recommended to set the system's locale to UTF-8.
On RHEL 8, install additional dependencies.
Set the system's locale to en_US.UTF-8 by executing localedef:
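For example, on RHEL-style systems, after installing the dependencies above:
$ sudo localedef -i en_US -f UTF-8 en_US.UTF-8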
MariaDB ColumnStore supports S3-compatible object storage.
S3-compatible object storage is optional, but highly recommended.
S3-compatible object storage is:
Compatible: Many object storage services are compatible with the Amazon S3 API.
Economical: S3-compatible object storage is often very low cost.
Flexible: S3-compatible object storage is available for both cloud and on-premises deployments.
Limitless: S3-compatible object storage is often virtually limitless.
Many S3-compatible object storage services exist. MariaDB Corporation cannot make guarantees about all S3-compatible object storage services, because different services provide different functionality.
If you have any questions about using specific S3-compatible object storage with MariaDB ColumnStore, contact us.
If you want to use S3-compatible storage, it is important to create the S3 bucket before you start ColumnStore.
If you already have an S3 bucket, confirm that the bucket is empty.
We will configure ColumnStore to use the S3 bucket later in this procedure.
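As an illustration only, with the AWS CLI an empty bucket could be created and checked roughly as follows (the bucket name is a placeholder, and other object storage providers use their own tooling):
$ aws s3 mb s3://your_columnstore_bucket_name
$ aws s3 ls s3://your_columnstore_bucket_name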
MariaDB Corporation provides package repositories for YUM (RHEL, CentOS) and APT (Debian, Ubuntu).
MariaDB ColumnStore ships as a storage engine plugin for MariaDB Community Server and a platform engine to handle back-end storage processes. MariaDB Community Server 10.6 does not require any additional software to operate as a single-node analytics database.
Configure the YUM package repository.
MariaDB ColumnStore 6 is available on MariaDB Community Server 10.6.
To configure YUM package repositories:
Checksums of the various releases of the mariadb_repo_setup script can be found at the bottom of the page. Substitute the appropriate checksum value for ${checksum} in the commands below.
Configure the APT package repository.
MariaDB ColumnStore 6 is available on MariaDB Community Server 10.6.
To configure APT package repositories:
Checksums of the various releases of the mariadb_repo_setup script can be found at the bottom of the page. Substitute the appropriate checksum value for ${checksum} in the commands below.
MariaDB ColumnStore requires configuration after it is installed. The configuration file location depends on your operating system.
MariaDB Community Server can be configured in the following ways:
System variables and options can be set in a configuration file (such as /etc/my.cnf). MariaDB Community Server must be restarted to apply changes made to the configuration file.
System variables and options can be set on the command-line.
If a system variable supports dynamic changes, then it can be set on-the-fly using the SET statement.
MariaDB's packages include several bundled configuration files. It is also possible to create custom configuration files.
On RHEL and CentOS, MariaDB's packages bundle the following configuration files:
/etc/my.cnf
/etc/my.cnf.d/client.cnf
/etc/my.cnf.d/mysql-clients.cnf
And on RHEL and CentOS, custom configuration files from the following directories are read by default:
/etc/my.cnf.d/
On Debian and Ubuntu, MariaDB's packages bundle the following configuration files:
/etc/mysql/my.cnf
/etc/mysql/mariadb.cnf
/etc/mysql/mariadb.conf.d/50-client.cnf
And on Debian and Ubuntu, custom configuration files from the following directories are read by default:
/etc/mysql/conf.d/
/etc/mysql/mariadb.conf.d/
Determine which system variables and options you need to configure.
Mandatory system variables and options for single-node MariaDB ColumnStore include:
Choose a configuration file in which to configure your system variables and options.
We recommend not making custom changes to one of the bundled configuration files. Instead, create a custom configuration file in one of the included directories. Configuration files in included directories are read in alphabetical order. If you want your custom configuration file to override the bundled configuration files, it is a good idea to prefix the custom configuration file's name with a string that will be sorted last, such as z-.
On RHEL and CentOS, a good custom configuration file would be: /etc/my.cnf.d/z-custom-my.cnf
On Debian and Ubuntu, a good custom configuration file would be: /etc/mysql/mariadb.conf.d/z-custom-my.cnf
Set your system variables and options in the configuration file.
They need to be set in a group that will be read by the server, such as [mariadb] or [server].
For example:
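[mariadb]
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci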
When a cross engine join is executed, the ExeMgr process connects to the server using the root user with no password by default. MariaDB Community Server 10.6 will reject this login attempt by default. If you plan to use cross engine joins, you need to configure ColumnStore to use a different user account and password. These directions are for configuring the cross engine join user. Directions for creating the cross engine join user are provided in a later section.
To configure cross engine joins, use the mcsSetConfig command.
For example, to configure ColumnStore to use the cross_engine user account to connect to the server at 127.0.0.1:
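$ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
$ sudo mcsSetConfig CrossEngineSupport Port 3306
$ sudo mcsSetConfig CrossEngineSupport User cross_engine
$ sudo mcsSetConfig CrossEngineSupport Password cross_engine_passwd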
MariaDB ColumnStore can use S3-compatible object storage, but it is not required. S3-compatible storage must be configured before it can be used.
To configure ColumnStore to use S3-compatible storage, edit /etc/columnstore/storagemanager.cnf:
The default local cache size is 2 GB.
The default local cache path is /var/lib/columnstore/storagemanager/cache.
Ensure that the local cache path has sufficient storage space to store the local cache.
The bucket option must be set to the name of your S3 bucket.
The Community Server and ColumnStore processes can be started using the systemctl command. In case the processes were started during the installation process, use the restart command to ensure that the processes pick up the new configuration. Perform the following procedure.
Start the MariaDB Community Server process and configure it to start automatically:
Start the MariaDB ColumnStore processes and configure them to start automatically:
For single-node ColumnStore deployments, only a single user account needs to be created.
The credentials for cross engine joins were previously configured in the section. The user account must also be created and granted the necessary privileges to access data.
Connect to the server using the root@localhost user account:
Create the user account with the CREATE USER statement:
Grant the user account SELECT privileges on all databases with the GRANT statement:
Now that the ColumnStore system is running, you can bulk import your data.
Before data can be imported into the tables, the schema needs to be created.
Connect to the server using the root@localhost user account:
For each database that you are importing, create the database with the CREATE DATABASE statement:
For each table that you are importing, create the table with the CREATE TABLE statement:
MariaDB ColumnStore includes cpimport, which is a command-line utility that is designed to efficiently load data in bulk.
To import your data from a TSV (tab-separated values) file with cpimport:
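$ sudo cpimport -s '\t' inventory products /tmp/inventory-products.tsv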
When data is loaded with the LOAD DATA INFILE statement, MariaDB ColumnStore loads the data using cpimport, which is a command-line utility that is designed to efficiently load data in bulk.
To import your data from a TSV (tab-separated values) file with the LOAD DATA INFILE statement:
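LOAD DATA INFILE '/tmp/inventory-products.tsv'
INTO TABLE inventory.products;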
MariaDB ColumnStore can also import data directly from a remote database. A simple method is to query the table using a SELECT statement, and then pipe the results into cpimport, which is a command-line utility that is designed to efficiently load data in bulk.
To import your data from a remote MariaDB database:
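$ mariadb --quick \
--skip-column-names \
--execute="SELECT * FROM inventory.products" \
| cpimport -s '\t' inventory products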
If you stopped the Linux Security Module (LSM) during installation, you can restart the module and configure it.
The specific steps to configure the security module depend on the operating system.
We set SELinux to permissive mode in an earlier section, but we have to create an SELinux policy for ColumnStore before re-enabling it. This will ensure that SELinux does not interfere with ColumnStore's functionality. A policy can be generated while SELinux is still in permissive mode using the audit2allow command.
To configure SELinux, you have to install the packages required for audit2allow.
On RHEL 7 and CentOS 7, install the following:
On RHEL 8, install the following:
Allow the system to run under load for a while to generate SELinux audit events.
After the system has taken some load, generate an SELinux policy from the audit events using audit2allow:
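$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local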
We disabled AppArmor in an earlier section, but we have to create an AppArmor profile for ColumnStore before re-enabling it. This will ensure that AppArmor does not interfere with ColumnStore's functionality.
For information on how to create a profile, see the AppArmor documentation on ubuntu.com.
ColumnStore has several components. Each of those components needs to be administered.
MariaDB Community Server uses systemctl to start and stop the server processes:
MariaDB ColumnStore uses systemctl to start and stop the ColumnStore processes:
When you have MariaDB ColumnStore up and running, you should test it to ensure that it is in working order and that there were not any issues during startup.
Connect to the server using the root@localhost user account:
Resilient: S3-compatible object storage is often low maintenance and highly available, since many services use resilient cloud infrastructure.
Scalable: S3-compatible object storage is often highly optimized for read and write scaling.
Secure: S3-compatible object storage is often encrypted-at-rest.
Install the EPEL repository:
Install some additional dependencies for ColumnStore:
Install MariaDB ColumnStore and package dependencies:
Configure MariaDB ColumnStore.
Installation only loads MariaDB ColumnStore to the system. MariaDB ColumnStore requires configuration and additional post-installation steps before the database server is ready for use.
Install some additional dependencies for ColumnStore.
On Debian 10 and Ubuntu 20.04, install the following:
On Debian 9 and Ubuntu 18.04, install the following:
Install MariaDB ColumnStore and package dependencies:
Configure MariaDB ColumnStore.
Installation only loads MariaDB ColumnStore to the system. MariaDB ColumnStore requires configuration and additional post-installation steps before the database server is ready for use.
/etc/my.cnf.d/server.cnf
/etc/mysql/mariadb.conf.d/50-mysql-clients.cnf
/etc/mysql/mariadb.conf.d/50-mysqld_safe.cnf
/etc/mysql/mariadb.conf.d/50-server.cnf
/etc/mysql/mariadb.conf.d/60-galera.cnf
To use an IAM role, you must also uncomment and set iam_role_name, sts_region, and sts_endpoint.
If audit events were found, the new SELinux policy can be loaded using semodule:
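$ sudo semodule -i mariadb_local.pp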
Set SELinux to enforcing mode by setting SELINUX=enforcing in /etc/selinux/config:
Reboot the system.
Confirm that SELinux is in enforcing mode using getenforce:
MariaDB ColumnStore 6
It is a columnar storage engine that provides distributed, columnar storage for scalable analytical processing and smart transactions.
It is the analytical component of MariaDB's single stack Hybrid Transactional/Analytical Processing (HTAP) solution.
columnar database
A database where the columns of each row are stored separately.
Best suited for analytical and OLAP workloads.
Also known as a "column-oriented database".
row database
A database where all columns of each row are stored together.
Best suited for transactional and OLTP workloads.
Also known as a "row-oriented database".
Set this system variable to utf8
Set this system variable to utf8_general_ci
columnstore_use_import_for_batchinsert
Set this system variable to ALWAYS to always use cpimport for LOAD DATA INFILE and INSERT...SELECT statements.
Start: sudo systemctl start mariadb
Stop: sudo systemctl stop mariadb
Restart: sudo systemctl restart mariadb
Enable during startup: sudo systemctl enable mariadb
Disable during startup: sudo systemctl disable mariadb
Status: sudo systemctl status mariadb
Start: sudo systemctl start mariadb-columnstore
Stop: sudo systemctl stop mariadb-columnstore
Restart: sudo systemctl restart mariadb-columnstore
Enable during startup: sudo systemctl enable mariadb-columnstore
Disable during startup: sudo systemctl disable mariadb-columnstore
Status: sudo systemctl status mariadb-columnstore
$ sudo yum install epel-release
$ sudo yum install jemalloc
$ sudo yum install MariaDB-server MariaDB-backup \
MariaDB-shared MariaDB-client \
MariaDB-columnstore-engine
$ sudo apt install libjemalloc2
$ sudo apt install libjemalloc1
$ sudo apt install mariadb-server mariadb-backup \
libmariadb3 mariadb-client \
mariadb-plugin-columnstore
$ sudo semodule -i mariadb_local.pp
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
$ sudo getenforce
# minimize swapping
vm.swappiness = 1
# Increase the TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Increase the TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1
# for 1 GigE, increase this to 2500
# for 10 GigE, increase this to 30000
net.core.netdev_max_backlog = 2500
$ sudo sysctl --load=/etc/sysctl.d/90-mariadb-columnstore.conf
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
$ sudo systemctl disable apparmor
$ sudo aa-status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
$ sudo yum install glibc-locale-source glibc-langpack-en
$ sudo localedef -i en_US -f UTF-8 en_US.UTF-8
$ sudo yum install curl
$ curl -LsSO https://r.mariadb.com/downloads/mariadb_repo_setup
$ echo "${checksum} mariadb_repo_setup" \
| sha256sum -c -
$ chmod +x mariadb_repo_setup
$ sudo ./mariadb_repo_setup \
--mariadb-server-version="mariadb-10.6"
$ sudo apt install curl
$ curl -LsSO https://r.mariadb.com/downloads/mariadb_repo_setup
$ echo "${checksum} mariadb_repo_setup" \
| sha256sum -c -
$ chmod +x mariadb_repo_setup
$ sudo ./mariadb_repo_setup \
--mariadb-server-version="mariadb-10.6"
$ sudo apt update
[mariadb]
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci
$ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
$ sudo mcsSetConfig CrossEngineSupport Port 3306
$ sudo mcsSetConfig CrossEngineSupport User cross_engine
$ sudo mcsSetConfig CrossEngineSupport Password cross_engine_passwd
[ObjectStorage]
…
service = S3
…
[S3]
bucket = your_columnstore_bucket_name
endpoint = your_s3_endpoint
aws_access_key_id = your_s3_access_key_id
aws_secret_access_key = your_s3_secret_key
# iam_role_name = your_iam_role
# sts_region = your_sts_region
# sts_endpoint = your_sts_endpoint
[Cache]
cache_size = your_local_cache_size
path = your_local_cache_path
$ sudo systemctl restart mariadb
$ sudo systemctl enable mariadb
$ sudo systemctl restart mariadb-columnstore
$ sudo systemctl enable mariadb-columnstore
$ sudo mariadb
CREATE USER 'cross_engine'@'127.0.0.1'
IDENTIFIED BY "cross_engine_passwd";CREATE USER 'cross_engine'@'localhost'
IDENTIFIED BY "cross_engine_passwd";GRANT SELECT, PROCESS ON *.*
TO 'cross_engine'@'127.0.0.1';GRANT SELECT, PROCESS ON *.*
TO 'cross_engine'@'localhost';$ sudo mariadbCREATE DATABASE inventory;CREATE TABLE inventory.products (
product_name VARCHAR(11) NOT NULL DEFAULT '',
supplier VARCHAR(128) NOT NULL DEFAULT '',
quantity VARCHAR(128) NOT NULL DEFAULT '',
unit_cost VARCHAR(128) NOT NULL DEFAULT ''
) ENGINE=Columnstore DEFAULT CHARSET=utf8;
$ sudo cpimport -s '\t' inventory products /tmp/inventory-products.tsv
LOAD DATA INFILE '/tmp/inventory-products.tsv'
INTO TABLE inventory.products;
$ mariadb --quick \
--skip-column-names \
--execute="SELECT * FROM inventory.products" \
| cpimport -s '\t' inventory products
$ sudo yum install policycoreutils policycoreutils-python
$ sudo yum install policycoreutils python3-policycoreutils policycoreutils-python-utils
$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
$ sudo mariadb
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 38
Server version: 10.6.21-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
$ sudo getenforce
Permissive
$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
Nothing to do
Start and Configure MariaDB MaxScale
Test MariaDB MaxScale
Import Data
S3-Compatible Object Storage Requirements
Preferred Object Storage Providers: Cloud
Preferred Object Storage Providers: Hardware
Shared Local Storage Directories
Shared Local Storage Options
Ubuntu 20.04 LTS (x86_64)
Quantum ActiveScale
NFS (Network File System)
On-premises
NFS is a distributed file system.
If NFS is used, the storage should be mounted with the sync option to ensure that each node flushes its changes immediately.
For on-premises deployments, NFS is the recommended option for the Storage Manager directory, and any S3-compatible storage is the recommended option for data.
<hostname>-bin
Enterprise Server 10.5, Enterprise ColumnStore 5, MaxScale 2.5
Enterprise Server 10.6, Enterprise ColumnStore 23.02, MaxScale 22.08
Prepare ColumnStore Nodes
Configure Shared Local Storage
Install MariaDB Enterprise Server
Start and Configure MariaDB Enterprise Server
Test MariaDB Enterprise Server
Install MariaDB MaxScale
Modern SQL RDBMS with high availability, pluggable storage engines, hot online backups, and audit logging.
Database proxy that extends the availability, scalability, and security of MariaDB Enterprise Servers
Columnar storage engine
Highly available
Optimized for Online Analytical Processing (OLAP) workloads
Scalable query execution
CMAPI provides a REST API for multi-node administration
Listener
Listens for client connections to MaxScale then passes them to the router service
MariaDB Monitor
Tracks changes in the state of MariaDB Enterprise Servers.
Read Connection Router
Routes connections from the listener to any available Enterprise ColumnStore node
Read/Write Split Router
Routes read operations from the listener to any available Enterprise ColumnStore node, and routes write operations from the listener to a specific server that MaxScale uses as the primary server
Server Module
Connection configuration in MaxScale to an Enterprise ColumnStore node
MaxScale node (minimum): 4+ cores, 4+ GB
Enterprise ColumnStore node (minimum): 4+ cores, 4+ GB
MaxScale node (recommended): 8+ cores, 16+ GB
Enterprise ColumnStore node (recommended): 64+ cores, 128+ GB
The ColumnStore Object Storage topology uses S3-compatible object storage to store data.
The ColumnStore Object Storage topology uses shared local storage for the Storage Manager directory to store metadata.
EBS (Elastic Block Store) Multi-Attach
AWS
EBS is a high-performance block-storage service for AWS (Amazon Web Services).
EBS Multi-Attach allows an EBS volume to be attached to multiple instances in AWS. Only clustered file systems, such as GFS2, are supported.
For deployments in AWS, EBS Multi-Attach is a recommended option for the Storage Manager directory, and Amazon S3 storage is the recommended option for data.
EFS (Elastic File System)
AWS
EFS is a scalable, elastic, cloud-native NFS file system for AWS (Amazon Web Services).
For deployments in AWS, EFS is a recommended option for the Storage Manager directory, and Amazon S3 storage is the recommended option for data.
Filestore
GCP
Filestore is high-performance, fully managed storage for GCP (Google Cloud Platform).
For deployments in GCP, Filestore is the recommended option for the Storage Manager directory, and Google Object Storage (S3-compatible) is the recommended option for data.
GlusterFS
On-premises
AWS: Amazon S3 storage for data; EBS Multi-Attach or EFS for the Storage Manager directory
GCP: Google Object Storage (S3-compatible) for data; Filestore for the Storage Manager directory
On-premises: any S3-compatible object storage for data; NFS for the Storage Manager directory
Configuration File: Configuration files (such as /etc/my.cnf) can be used to set system-variables and options. The server must be restarted to apply changes made to configuration files.
Command-line: The server can be started with command-line options that set system-variables and options.
SQL: Users can set system-variables that support dynamic changes on-the-fly using the SET statement.
Example configuration file path by distribution:
CentOS, Red Hat Enterprise Linux (RHEL): /etc/my.cnf.d/z-custom-mariadb.cnf
Debian, Ubuntu: /etc/mysql/mariadb.conf.d/z-custom-mariadb.cnf
Start: sudo systemctl start mariadb
Stop: sudo systemctl stop mariadb
Restart: sudo systemctl restart mariadb
Enable during startup: sudo systemctl enable mariadb
Disable during startup: sudo systemctl disable mariadb
Status: sudo systemctl status mariadb
<hostname>.err
server_audit.log
<hostname>-slow.log
Start: sudo systemctl start mariadb-columnstore
Stop: sudo systemctl stop mariadb-columnstore
Restart: sudo systemctl restart mariadb-columnstore
Enable during startup: sudo systemctl enable mariadb-columnstore
Disable during startup: sudo systemctl disable mariadb-columnstore
Status: sudo systemctl status mariadb-columnstore
Start: sudo systemctl start mariadb-columnstore-cmapi
Stop: sudo systemctl stop mariadb-columnstore-cmapi
Restart: sudo systemctl restart mariadb-columnstore-cmapi
Enable during startup: sudo systemctl enable mariadb-columnstore-cmapi
Disable during startup: sudo systemctl disable mariadb-columnstore-cmapi
Status: sudo systemctl status mariadb-columnstore-cmapi
Command-line utility to perform administrative tasks through the REST API. See MaxCtrl Commands.
MaxGUI is a graphical utility that can perform administrative tasks through the REST API.
The REST API can be used directly. For example, the curl utility could be used to make REST API calls from the command-line. Many programming languages also have libraries to interact with REST APIs.
Start: sudo systemctl start maxscale
Stop: sudo systemctl stop maxscale
Restart: sudo systemctl restart maxscale
Enable during startup: sudo systemctl enable maxscale
Disable during startup: sudo systemctl disable maxscale
Status: sudo systemctl status maxscale
GlusterFS is a distributed file system.
GlusterFS supports replication and failover.
<hostname>.log
Apr 30 21:54:35 a1ebc96a2519 PrimProc[1004]: 35.668435 |0|0|0| C 28 CAL0000: Error total memory available is less than 3GB.
ERROR 1815 (HY000): Internal error: System is not ready yet. Please try again.
https://{server}:{port}/cmapi/{version}/{route}/{command}
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/start \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20}' \
| jq .$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/shutdown \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20}' \
| jq .$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.2"}' \
| jq .$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/add-node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.2"}' \
| jq .$ curl -k -s -X DELETE https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.2"}' \
| jq .$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/remove-node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.2"}' \
| jq .
