Managing MariaDB ColumnStore involves setup, configuration, and tools like mcsadmin and cpimport for efficient analytics.
This section provides instructions for installing and configuring MariaDB ColumnStore. It covers various deployment scenarios, including single- and multi-node setups with both local and S3 storage.
Managing MariaDB ColumnStore means deploying its architecture, scaling modules, and maintaining performance through monitoring, optimization, and backups.
Step 4: Test Enterprise ColumnStore
Connect to the server with MariaDB Client, using the root@localhost user account:
Query and confirm that the ColumnStore storage engine plugin is ACTIVE:
Create a test database, if it does not exist:
Create a ColumnStore table:
Add sample data into the table:
Read data from table:
Create an InnoDB table:
Add data to the table:
Perform a cross-engine join:
Navigation in the Single-Node Enterprise ColumnStore topology with Local storage deployment procedure:
This page was step 4 of 5.
MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.
Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.
To retrieve the token for your account:
Navigate to https://customers.mariadb.com/downloads/token/
Log in.
Copy the Customer Download Token.
Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.
On each Enterprise ColumnStore node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
On each Enterprise ColumnStore node, configure package repositories and specify Enterprise Server:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.
Install additional dependencies:
Install on CentOS / RHEL (YUM):
Install on Debian 10 and Ubuntu 20.04 (APT):
Install on Debian 9 and Ubuntu 18.04 (APT):
Install MariaDB Enterprise Server and MariaDB Enterprise ColumnStore:
Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
Navigation in the Single-Node Enterprise ColumnStore topology with Local storage deployment procedure:
This page was step 2 of 5.
Step 1: Prepare Systems for Enterprise ColumnStore Nodes
This page details step 1 of a 5-step procedure for deploying Single-Node Enterprise ColumnStore with Local storage.
This step prepares the system to host MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Enterprise ColumnStore performs best with Linux kernel optimizations.
On each server to host an Enterprise ColumnStore node, optimize the kernel:
Set the relevant kernel parameters in a sysctl configuration file. To ensure proper change management, use an Enterprise ColumnStore-specific configuration file.
Create a /etc/sysctl.d/90-mariadb-enterprise-columnstore.conf file:
Use the sysctl command to set the kernel parameters at runtime
The Linux Security Modules (LSM) should be temporarily disabled on each Enterprise ColumnStore node during installation.
The LSM will be configured and re-enabled later in this deployment procedure.
The steps to disable the LSM depend on the specific LSM used by the operating system.
SELinux must be set to permissive mode before installing MariaDB Enterprise ColumnStore.
To set SELinux to permissive mode:
Set SELinux to permissive mode:
Set SELinux to permissive mode by setting SELINUX=permissive in /etc/selinux/config.
For example, the file will usually look like this after the change:
Confirm that SELinux is in permissive mode:
SELinux will be configured and re-enabled later in this deployment procedure. This configuration is not persistent. If you restart the server before configuring and re-enabling SELinux later in the deployment procedure, you must reset the enforcement to permissive mode.
AppArmor must be disabled before installing MariaDB Enterprise ColumnStore.
Disable AppArmor:
Reboot the system.
Confirm that no AppArmor profiles are loaded using aa-status:
AppArmor will be configured and re-enabled later in this deployment procedure.
When using MariaDB Enterprise ColumnStore, it is recommended to set the system's locale to UTF-8.
On RHEL 8, install additional dependencies:
Set the system's locale to en_US.UTF-8 by executing localedef:
Navigation in the Single-Node Enterprise ColumnStore topology with Local storage deployment procedure:
This page was step 1 of 5.
Step 5: Bulk Import of Data
This page details step 5 of a 5-step procedure for deploying Single-Node Enterprise ColumnStore with Local storage.
This step bulk imports data to Enterprise ColumnStore.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
Before data can be imported into the tables, create a matching schema.
On the primary server, create the schema:
For each database that you are importing, create the database with the CREATE DATABASE statement:
For each table that you are importing, create the table with the CREATE TABLE statement:
Enterprise ColumnStore supports multiple methods to import data into ColumnStore tables.
MariaDB Enterprise ColumnStore includes cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a TSV (tab-separated values) file, on the primary server run cpimport:
When data is loaded with the LOAD DATA INFILE statement, MariaDB Enterprise ColumnStore loads the data using cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a TSV (tab-separated values) file, on the primary server use the LOAD DATA INFILE statement:
MariaDB Enterprise ColumnStore can also import data directly from a remote database. A simple method is to query the table using the SELECT statement, and then pipe the results into cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a remote MariaDB database:
Navigation in the Single-Node Enterprise ColumnStore topology with Local storage deployment procedure:
This page was step 5 of 5.
This procedure is complete.
Step 6: Install MariaDB MaxScale
This page details step 6 of the 9-step procedure "Deploy ColumnStore Shared Local Storage Topology".
This step installs MariaDB MaxScale 22.08. ColumnStore Shared Local Storage requires one or more MaxScale nodes.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.
Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.
To retrieve the token for your account:
Navigate to https://customers.mariadb.com/downloads/token/
Log in.
Copy the Customer Download Token.
Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.
On the MaxScale node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
On the MaxScale node, configure package repositories and specify MariaDB MaxScale 22.08:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.
On the MaxScale node, install MariaDB MaxScale.
Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".
This page was step 6 of 9.
Step 5: Bulk Import of Data
This page details step 5 of a 5-step procedure for deploying Single-Node Enterprise ColumnStore with Object storage.
This step bulk imports data to Enterprise ColumnStore.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
Before data can be imported into the tables, create a matching schema.
On the primary server, create the schema:
For each database that you are importing, create the database with the CREATE DATABASE statement:
For each table that you are importing, create the table with the CREATE TABLE statement:
Enterprise ColumnStore supports multiple methods to import data into ColumnStore tables.
MariaDB Enterprise ColumnStore includes cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a TSV (tab-separated values) file, on the primary server run cpimport:
When data is loaded with the LOAD DATA INFILE statement, MariaDB Enterprise ColumnStore loads the data using cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a TSV (tab-separated values) file, on the primary server use the LOAD DATA INFILE statement:
MariaDB Enterprise ColumnStore can also import data directly from a remote database. A simple method is to query the table using the SELECT statement, and then pipe the results into cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a remote MariaDB database:
Navigation in the Single-Node Enterprise ColumnStore topology with Object storage deployment procedure:
This page was step 5 of 5.
This procedure is complete.
To rejoin a node with Enterprise ColumnStore, perform the following procedure.
The node can be rejoined in MaxScale using maxctrl:
Use maxctrl or another supported REST client.
Call a module command using the call command command.
As the first argument, provide the name of the module, which is mariadbmon.
As the second argument, provide the module command, which is rejoin.
As the third argument, provide the name of the monitor.
As the fourth argument, provide the name of the server.
For example:
MaxScale is capable of checking the status of the Enterprise ColumnStore nodes using maxctrl:
List the servers using the list servers command, like this:
If the node properly rejoined, the State column of the node shows Slave, Running.
To switchover to a new primary node with Enterprise ColumnStore, perform the following procedure.
The primary node can be switched in MaxScale using maxctrl:
Use maxctrl or another supported REST client.
Call a module command using the call command command.
As the first argument, provide the name of the module, which is mariadbmon.
As the second argument, provide the module command, which is switchover.
As the third argument, provide the name of the monitor.
For example:
With the above syntax, MaxScale will choose the most up-to-date replica to be the new primary.
If you want to manually select a new primary, provide the server name of the new primary as the fourth argument:
MaxScale is capable of checking the status of the Enterprise ColumnStore nodes using maxctrl:
List the servers using the list servers command, like this:
If switchover was properly performed, the State column of the new primary shows Master, Running.
MariaDB Enterprise ColumnStore acquires table locks for some operations, and it provides utilities to view and clear those locks.
MariaDB Enterprise ColumnStore acquires table locks for some operations, such as:
DDL statements
DML statements
Bulk data loads
If an operation fails, the table lock does not always get released. If you try to access the table, you can see errors like the following:
To solve this problem, MariaDB Enterprise ColumnStore provides two utilities to view and clear the table locks:
cleartablelock
viewtablelock
The viewtablelock utility shows table locks currently held by MariaDB Enterprise ColumnStore:
To view all table locks:
To view table locks for a specific table, specify the database and table:
The cleartablelock utility clears table locks currently held by MariaDB Enterprise ColumnStore.
To clear a table lock, specify the lock ID shown by the viewtablelock utility:
MariaDB ColumnStore has a hard limit of 4096 columns per table.
However, it's likely that you run into other limitations before hitting that limit, including:
Row size limit of tables. This varies depending on the storage engine you're using and indirectly limits the number of columns a table can hold.
Size limit of .frm files. Those files hold the column description of tables. Column descriptions vary in length. Once all column descriptions combined reach a length of 64KB, the table's .frm file is full, limiting the number of columns you can have in a table.
Given that, the maximum number of columns a ColumnStore table can effectively have is around 2000 columns.
Step 3: Start and Configure Enterprise ColumnStore
Step 6: Install MariaDB MaxScale
This page details step 6 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".
This step installs MariaDB MaxScale 22.08.
ColumnStore Object Storage requires 1 or more MaxScale nodes.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
Step 2: Install Enterprise ColumnStore
This page details step 2 of a 5-step procedure for deploying Single-Node Enterprise ColumnStore with Object storage.
This step installs MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
$ sudo mariadb
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 38
Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_LIBRARY LIKE 'ha_columnstore%';

+---------------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+---------------------+---------------+
| Columnstore | ACTIVE |
| COLUMNSTORE_COLUMNS | ACTIVE |
| COLUMNSTORE_TABLES | ACTIVE |
| COLUMNSTORE_FILES | ACTIVE |
| COLUMNSTORE_EXTENTS | ACTIVE |
+---------------------+---------------+

CREATE DATABASE IF NOT EXISTS test;

CREATE TABLE IF NOT EXISTS test.contacts (
first_name VARCHAR(50),
last_name VARCHAR(50),
email VARCHAR(100)
) ENGINE=ColumnStore;

INSERT INTO test.contacts (first_name, last_name, email)
VALUES
("Kai", "Devi", "kai.devi@example.com"),
("Lee", "Wang", "lee.wang@example.com");SELECT * FROM test.contacts;+------------+-----------+----------------------+
| first_name | last_name | email |
+------------+-----------+----------------------+
| Kai | Devi | kai.devi@example.com |
| Lee | Wang | lee.wang@example.com |
+------------+-----------+----------------------+

CREATE TABLE test.addresses (
email VARCHAR(100),
street_address VARCHAR(255),
city VARCHAR(100),
state_code VARCHAR(2)
) ENGINE = InnoDB;

INSERT INTO test.addresses (email, street_address, city, state_code)
VALUES
("kai.devi@example.com", "1660 Amphibious Blvd.", "Redwood City", "CA"),
("lee.wang@example.com", "32620 Little Blvd", "Redwood City", "CA");SELECT name AS "Name", addr AS "Address"
FROM (SELECT CONCAT(first_name, " ", last_name) AS name,
email FROM test.contacts) AS contacts
INNER JOIN (SELECT CONCAT(street_address, ", ", city, ", ", state_code) AS addr,
email FROM test.addresses) AS addr
WHERE contacts.email = addr.email;

+----------+-----------------------------------------+
| Name | Address |
+----------+-----------------------------------------+
| Kai Devi | 1660 Amphibious Blvd., Redwood City, CA |
| Lee Wang | 32620 Little Blvd, Redwood City, CA |
+----------+-----------------------------------------+
$ sudo yum install curl

$ sudo apt install curl apt-transport-https

$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup

$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -

$ chmod +x mariadb_es_repo_setup

$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--skip-maxscale \
--skip-tools \
--mariadb-server-version="11.4"

$ sudo yum install epel-release
$ sudo yum install jemalloc

$ sudo apt install libjemalloc2

$ sudo apt install libjemalloc1

$ sudo yum install MariaDB-server \
MariaDB-backup \
MariaDB-shared \
MariaDB-client \
MariaDB-columnstore-engine

$ sudo apt install mariadb-server \
mariadb-backup \
libmariadb3 \
mariadb-client \
mariadb-plugin-columnstore

MariaDB ColumnStore backup and restore manage distributed data using snapshots or tools like mariadb-backup, with restoration ensuring cluster sync via cpimport or file system recovery.
Mandatory system variables and options for Single-Node Enterprise ColumnStore include:
character_set_server: Set this system variable to utf8.
collation_server: Set this system variable to utf8_general_ci.
columnstore_use_import_for_batchinsert: Set this system variable to ALWAYS to always use cpimport for LOAD DATA INFILE and INSERT INTO ... SELECT statements.
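For reference, these settings can be collected in a configuration file. The following is a minimal sketch; the exact file path depends on your operating system, and the log_error line is only an illustration mirroring the example configuration shown later in this section:

[mariadb]
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci
columnstore_use_import_for_batchinsert = ALWAYS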
Start and enable the MariaDB Enterprise Server service, so that it starts automatically upon reboot:
Start and enable the MariaDB Enterprise ColumnStore service, so that it starts automatically upon reboot:
Enterprise ColumnStore requires a mandatory utility user account. By default, it connects to the server using the root user with no password. MariaDB Enterprise Server 10.6 will reject this login attempt by default, so you will need to configure Enterprise ColumnStore to use a different user account and password and create this user account on Enterprise Server.
On the Enterprise ColumnStore node, create the user account with the CREATE USER statement:
On the Enterprise ColumnStore node, grant the user account SELECT privileges on all databases with the GRANT statement:
Configure Enterprise ColumnStore to use the utility user:
Set the password:
For details about how to encrypt the password, see "Credentials Management for MariaDB Enterprise ColumnStore".
Passwords should meet your organization's password policies. If your MariaDB Enterprise Server instance has a password validation plugin installed, then the password should also meet the configured requirements.
The specific steps to configure the security module depend on the operating system.
Configure SELinux for Enterprise ColumnStore:
To configure SELinux, you have to install the packages required for audit2allow. On CentOS 7 and RHEL 7, install the following:
On RHEL 8, install the following:
Allow the system to run under load for a while to generate SELinux audit events.
After the system has taken some load, generate an SELinux policy from the audit events using audit2allow:
If no audit events were found, this will print the following:
If audit events were found, the new SELinux policy can be loaded using semodule:
Set SELinux to enforcing mode by setting SELINUX=enforcing in /etc/selinux/config.
For example, the file will usually look like this after the change:
Set SELinux to enforcing mode:
For information on how to create a profile, see How to create an AppArmor Profile on ubuntu.com.
Navigation in the Single-Node Enterprise ColumnStore topology with Local storage deployment procedure:
This page was step 3 of 5.
MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.
Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.
To retrieve the token for your account:
Navigate to https://customers.mariadb.com/downloads/token/
Log in.
Copy the Customer Download Token.
Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.
On the MaxScale node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
On the MaxScale node, configure package repositories and specify MariaDB MaxScale 22.08:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.
On the MaxScale node, install MariaDB MaxScale.
Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 6 of 9.
MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.
Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.
To retrieve the token for your account:
Navigate to https://customers.mariadb.com/downloads/token/
Log in.
Copy the Customer Download Token.
Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.
On each Enterprise ColumnStore node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
On each Enterprise ColumnStore node, configure package repositories and specify Enterprise Server:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.
Install additional dependencies:
Install on CentOS / RHEL (YUM):
Install on Debian 10 and Ubuntu 20.04 (APT):
Install on Debian 9 and Ubuntu 18.04 (APT):
Install MariaDB Enterprise Server and MariaDB Enterprise ColumnStore:
Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
Navigation in the Single-Node Enterprise ColumnStore topology with Object storage deployment procedure:
This page was step 2 of 5.
Set the server object to maintenance mode using the set server command.
As the first argument, provide the name for the server.
As the second argument, provide maintenance as the state.
For example:
If the specified server is a primary server, then MaxScale will allow open transactions to complete before closing any connections.
If you would like MaxScale to immediately close all connections, the --force option can be provided as a third argument:
Confirm the state of the server object in MaxScale using maxctrl:
List the servers using the list servers command, like this:
If the node is properly in maintenance mode, then the State column will show Maintenance as one of the states.
Now that the server is in maintenance mode in MaxScale, you can perform your maintenance.
While the server is in maintenance mode:
MaxScale doesn't route traffic to the node.
MaxScale doesn't select the node to be primary during failover.
The node can be rebooted.
The node's services can be restarted.
Maintenance mode for the node's server object can be cleared in MaxScale using maxctrl:
Use maxctrl or another supported REST client.
Clear the server object's state using the clear server command.
As the first argument, provide the name for the server.
As the second argument, provide maintenance as the state.
For example:
Confirm the state of the server object in MaxScale using maxctrl:
List the servers using the list servers command, like this:
If the node is no longer in maintenance mode, the State column no longer shows Maintenance as one of the states.
maxctrl call command \
mariadbmon \
rejoin \
mcs_monitor \
mcs3

maxctrl call command \
mariadbmon \
switchover \
mcs_monitor

ERROR 1815 (HY000): Internal error: CAL0009: Drop table failed due to IDB-2009: Unable to perform the drop table operation because cpimport with PID 16301 is currently holding the table lock for session -1.

Step 3: Install MariaDB Enterprise Server
This page details step 3 of the 9-step procedure "Deploy ColumnStore Shared Local Storage Topology".
This step installs MariaDB Enterprise Server, MariaDB Enterprise ColumnStore, CMAPI, and dependencies.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.
Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.
To retrieve the token for your account:
Navigate to https://customers.mariadb.com/downloads/token/
Log in.
Copy the Customer Download Token.
Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.
On each Enterprise ColumnStore node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
On each Enterprise ColumnStore node, configure package repositories and specify Enterprise Server:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.
On each Enterprise ColumnStore node, install additional dependencies:
Install on CentOS and RHEL (YUM):
Install on Debian 9 and Ubuntu 18.04 (APT):
Install on Debian 10 and Ubuntu 20.04 (APT):
On each Enterprise ColumnStore node, install MariaDB Enterprise Server and MariaDB Enterprise ColumnStore:
Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".
This page was step 3 of 9.
Step 3: Install MariaDB Enterprise Server
This page details step 3 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".
This step installs MariaDB Enterprise Server, MariaDB Enterprise ColumnStore, CMAPI, and dependencies.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.
Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.
To retrieve the token for your account:
Navigate to https://customers.mariadb.com/downloads/token/
Log in.
Copy the Customer Download Token.
Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.
On each Enterprise ColumnStore node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
On each Enterprise ColumnStore node, configure package repositories and specify Enterprise Server:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.
On each Enterprise ColumnStore node, install additional dependencies:
Install on CentOS and RHEL (YUM):
Install on Debian 9 and Ubuntu 18.04 (APT):
Install on Debian 10 and Ubuntu 20.04 (APT):
On each Enterprise ColumnStore node, install MariaDB Enterprise Server and MariaDB Enterprise ColumnStore:
Install on CentOS / RHEL (YUM):
Install on Debian / Ubuntu (APT):
Navigation in the procedure "Deploy ColumnStore Object Storage Topology".
This page was step 3 of 9.
Step 9: Import Data
This page details step 9 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".
This step bulk imports data to Enterprise ColumnStore.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
Before data can be imported into the tables, create a matching schema.
On the primary server, create the schema:
For each database that you are importing, create the database with the CREATE DATABASE statement:
For each table that you are importing, create the table with the CREATE TABLE statement:
Enterprise ColumnStore supports multiple methods to import data into ColumnStore tables.
MariaDB Enterprise ColumnStore includes cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a TSV (tab-separated values) file, on the primary server run cpimport:
When data is loaded with the LOAD DATA INFILE statement, MariaDB Enterprise ColumnStore loads the data using cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a TSV (tab-separated values) file, on the primary server use the LOAD DATA INFILE statement:
MariaDB Enterprise ColumnStore can also import data directly from a remote database. A simple method is to query the table using the SELECT statement, and then pipe the results into cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a remote MariaDB database:
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 9 of 9.
This procedure is complete.
MariaDB Enterprise ColumnStore supports backup and restore.
Before you determine a backup strategy for your Enterprise ColumnStore deployment, it is a good idea to determine the system of record for your Enterprise ColumnStore data.
A system of record is the authoritative data source for a given piece of information. Organizations often store duplicate information in several systems, but only a single system can be the authoritative data source.
Enterprise ColumnStore is designed to handle analytical processing for OLAP, data warehousing, DSS, and hybrid workloads on very large data sets. Analytical processing does not generally happen on the system of record. Instead, analytical processing generally occurs on a specialized database that is loaded with data from the separate system of record. Additionally, very large data sets can be difficult to back up. Therefore, it may be beneficial to only backup the system of record.
If Enterprise ColumnStore is not acting as the system of record for your data, you should determine how the system of record affects your backup plan:
If your system of record is another database server, you should ensure that the other database server is properly backed up and that your organization has procedures to reload Enterprise ColumnStore from the other database server.
If your system of record is a set of data files, you should ensure that the set of data files is properly backed up and that your organization has procedures to reload Enterprise ColumnStore from the set of data files.
MariaDB Enterprise ColumnStore supports full backup and restore for all storage types. A full backup includes:
Enterprise ColumnStore's data and metadata
With S3: an S3 snapshot of the S3 bucket and a file system snapshot or copy of the Storage Manager directory. Without S3: a file system snapshot or copy of the DB Root directories.
The MariaDB data directory from the primary node
To see the procedure to perform a full backup and restore, choose the storage type:
This guide provides steps for deploying a single-node ColumnStore, setting up the environment, installing the software, and bulk importing data for online analytical processing (OLAP) workloads.
This procedure describes the deployment of the Single-Node Enterprise ColumnStore topology with Local storage.
MariaDB Enterprise ColumnStore is a columnar storage engine for MariaDB Enterprise Server 10.6. Enterprise ColumnStore is best suited for Online Analytical Processing (OLAP) workloads.
This procedure has 5 steps, which are executed in sequence.
This page provides an overview of the topology, requirements, and deployment procedures.
Please read and understand this procedure before executing.
This page provides a major release upgrade procedure for MariaDB Enterprise ColumnStore. A major release upgrade is an upgrade from an older major release to a newer major release, such as an upgrade from MariaDB Enterprise ColumnStore 5 to MariaDB Enterprise ColumnStore 22.08.
Enterprise ColumnStore 5
Step 1: Prepare Systems for Enterprise ColumnStore Nodes
This page details step 1 of a 5-step procedure for deploying Single-Node Enterprise ColumnStore with Object storage.
This step prepares the system to host MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
[mariadb]
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci

$ sudo systemctl start mariadb
$ sudo systemctl enable mariadb

$ sudo systemctl start mariadb-columnstore
$ sudo systemctl enable mariadb-columnstore

CREATE USER 'util_user'@'127.0.0.1'
IDENTIFIED BY 'util_user_passwd';

GRANT SELECT, PROCESS ON *.*
TO 'util_user'@'127.0.0.1';

$ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
$ sudo mcsSetConfig CrossEngineSupport Port 3306
$ sudo mcsSetConfig CrossEngineSupport User util_user

$ sudo mcsSetConfig CrossEngineSupport Password util_user_passwd

$ sudo yum install policycoreutils policycoreutils-python

$ sudo yum install policycoreutils python3-policycoreutils policycoreutils-python-utils

$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local

$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
Nothing to do

$ sudo semodule -i mariadb_local.pp

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted

$ sudo setenforce enforcing

$ sudo yum install curl

$ sudo apt install curl apt-transport-https

$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -
$ chmod +x mariadb_es_repo_setup
$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--skip-server \
--skip-tools \
--mariadb-maxscale-version="22.08"

$ sudo yum install maxscale

$ sudo apt install maxscale

$ sudo yum install curl

$ sudo apt install curl apt-transport-https

$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup

$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -

$ chmod +x mariadb_es_repo_setup

$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--skip-maxscale \
--skip-tools \
--mariadb-server-version="11.4"

$ sudo yum install epel-release
$ sudo yum install jemalloc

$ sudo apt install libjemalloc2

$ sudo apt install libjemalloc1

$ sudo yum install MariaDB-server \
MariaDB-backup \
MariaDB-shared \
MariaDB-client \
MariaDB-columnstore-engine

$ sudo apt install mariadb-server \
mariadb-backup \
libmariadb3 \
mariadb-client \
mariadb-plugin-columnstore

maxctrl set server \
mcs3 \
maintenance

maxctrl set server \
mcs3 \
maintenance \
--force

maxctrl list servers

maxctrl clear server \
mcs3 \
maintenance

maxctrl list servers

# minimize swapping
vm.swappiness = 1
# Increase the TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Increase the TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1
# for 1 GigE, increase this to 2500
# for 10 GigE, increase this to 30000
net.core.netdev_max_backlog = 2500

$ sudo sysctl --load=/etc/sysctl.d/90-mariadb-enterprise-columnstore.conf

$ sudo setenforce permissive

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted

$ sudo getenforce
Permissive

$ sudo systemctl disable apparmor

$ sudo aa-status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

$ sudo yum install glibc-locale-source glibc-langpack-en

$ sudo localedef -i en_US -f UTF-8 en_US.UTF-8

CREATE DATABASE inventory;

CREATE TABLE inventory.products (
product_name VARCHAR(11) NOT NULL DEFAULT '',
supplier VARCHAR(128) NOT NULL DEFAULT '',
quantity VARCHAR(128) NOT NULL DEFAULT '',
unit_cost VARCHAR(128) NOT NULL DEFAULT ''
) ENGINE=Columnstore DEFAULT CHARSET=utf8;

$ sudo cpimport -s '\t' inventory products /tmp/inventory-products.tsv

LOAD DATA INFILE '/tmp/inventory-products.tsv'
INTO TABLE inventory.products;

$ mariadb --quick \
--skip-column-names \
--execute="SELECT * FROM inventory.products" \
| cpimport -s '\t' inventory products

$ sudo yum install curl

$ sudo apt install curl apt-transport-https

$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -
$ chmod +x mariadb_es_repo_setup
$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--skip-server \
--skip-tools \
--mariadb-maxscale-version="22.08"

$ sudo yum install maxscale

$ sudo apt install maxscale

CREATE DATABASE inventory;

CREATE TABLE inventory.products (
product_name VARCHAR(11) NOT NULL DEFAULT '',
supplier VARCHAR(128) NOT NULL DEFAULT '',
quantity VARCHAR(128) NOT NULL DEFAULT '',
unit_cost VARCHAR(128) NOT NULL DEFAULT ''
) ENGINE=Columnstore DEFAULT CHARSET=utf8;

$ sudo cpimport -s '\t' inventory products /tmp/inventory-products.tsv

LOAD DATA INFILE '/tmp/inventory-products.tsv'
INTO TABLE inventory.products;

$ mariadb --quick \
--skip-column-names \
--execute="SELECT * FROM inventory.products" \
| cpimport -s '\t' inventory products

maxctrl list servers

maxctrl call command \
mariadbmon \
switchover \
mcs_monitor \
mcs2

maxctrl list servers

viewtablelock
There is 1 table lock
Table LockID Process PID Session Txn CreationTime State DBRoots
hq_sales.invoices 1 cpimport 16301 BulkLoad n/a Wed April 7 14:20:42 2021 LOADING 1

viewtablelock hq_sales invoices
There is 1 table lock
Table LockID Process PID Session Txn CreationTime State DBRoots
hq_sales.invoices 1 cpimport 16301 BulkLoad n/a Wed April 7 14:20:42 2021 LOADING 1

cleartablelock 1
Customers can obtain support by submitting a support case.
The following components are deployed during this procedure:
MariaDB Enterprise Server: Modern SQL RDBMS with high availability, pluggable storage engines, hot online backups, and audit logging.
MariaDB Enterprise ColumnStore: Columnar storage engine optimized for Online Analytical Processing (OLAP) workloads.
The Single-Node Enterprise ColumnStore topology provides support for Online Analytical Processing (OLAP) workloads to MariaDB Enterprise Server.
The Enterprise ColumnStore node:
Receives queries from the application
Executes queries
Uses the local disk for storage.
Single-Node Enterprise ColumnStore does not provide high availability (HA) for Online Analytical Processing (OLAP). If you would like to deploy Enterprise ColumnStore with high availability, see Enterprise ColumnStore with Shared Local storage.
These requirements are for the Single-Node Enterprise ColumnStore, when deployed with MariaDB Enterprise Server 10.6 and MariaDB Enterprise ColumnStore 23.10.
Debian 11 (x86_64, ARM64)
Debian 12 (x86_64, ARM64)
Red Hat Enterprise Linux 8 (x86_64, ARM64)
Red Hat Enterprise Linux 9 (x86_64, ARM64)
Rocky Linux 8 (x86_64, ARM64)
Rocky Linux 9 (x86_64, ARM64)
Ubuntu 20.04 LTS (x86_64, ARM64)
Ubuntu 22.04 LTS (x86_64, ARM64)
Ubuntu 24.04 LTS (x86_64, ARM64)
MariaDB Enterprise ColumnStore's minimum hardware requirements are not intended for production environments, but the minimum hardware requirements can be appropriate for development and test environments. For production environments, see the recommended hardware requirements instead.
The minimum hardware requirements are:
Enterprise ColumnStore node: 4+ cores, 16+ GB memory
MariaDB Enterprise ColumnStore will refuse to start if the system has less than 3 GB of memory.
If Enterprise ColumnStore is started on a system with less memory, the following error message will be written to the ColumnStore system log called crit.log:
And the following error message will be raised to the client:
MariaDB Enterprise ColumnStore's recommended hardware requirements are intended for production analytics.
The recommended hardware requirements are:
Enterprise ColumnStore node: 64+ cores, 128+ GB memory
Configuration File: Configuration files (such as /etc/my.cnf) can be used to set system variables and options. The server must be restarted to apply changes made to configuration files.
Command-line: The server can be started with command-line options that set system variables and options.
SQL: Users can set system variables that support dynamic changes on-the-fly using the SET statement.
MariaDB Enterprise Server packages are configured to read configuration files from different paths, depending on the operating system. Making custom changes to Enterprise Server default configuration files is not recommended because custom changes may be overwritten by other default configuration files that are loaded later.
To ensure that your custom changes will be read last, create a custom configuration file with the z- prefix in one of the include directories.
CentOS, Red Hat Enterprise Linux (RHEL): /etc/my.cnf.d/z-custom-mariadb.cnf
Debian, Ubuntu: /etc/mysql/mariadb.conf.d/z-custom-mariadb.cnf
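For example, a custom configuration file with the z- prefix might contain only the overrides you need. A minimal sketch; the values shown are illustrative and mirror the example configuration used elsewhere in this procedure:

[mariadb]
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci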
The systemctl command is used to start and stop the MariaDB Enterprise Server service.
Start: sudo systemctl start mariadb
Stop: sudo systemctl stop mariadb
Restart: sudo systemctl restart mariadb
Enable during startup: sudo systemctl enable mariadb
Disable during startup: sudo systemctl disable mariadb
Status: sudo systemctl status mariadb
Navigation in the Single-Node Enterprise ColumnStore topology with Local storage deployment procedure:
Next: Step 1: Install MariaDB Enterprise ColumnStore 23.10.
Enterprise ColumnStore 22.08
This procedure assumes that the new Enterprise ColumnStore version will be installed onto new servers.
To reuse existing servers for the new Enterprise ColumnStore version, you must adapt the procedure detailed below. After step 1, confirm that all data has been backed up, and verify the backups. The old version of Enterprise ColumnStore should then be uninstalled, and all Enterprise ColumnStore files should be deleted before continuing with step 2.
On the old ColumnStore cluster, perform a full backup.
MariaDB recommends backing up the table schemas to a single SQL file and backing up the table data to table-specific CSV files.
For each table, obtain the table's schema by executing the SHOW CREATE TABLE statement:
Back up the table schemas by copying the output to an SQL file. This procedure assumes that the SQL file is named schema-backup.sql.
For each table, back up the table data to a CSV file using the SELECT ... INTO OUTFILE statement:
Copy the SQL file containing the table schemas and the CSV files containing the table data to the primary node of the new ColumnStore cluster.
On the new ColumnStore cluster, follow the deployment instructions of the desired topology for the new ColumnStore version.
For deployment instructions, see the deployment procedure for the desired topology.
On the new ColumnStore cluster, restore the table schemas and data.
Restore the schema backup using the mariadb client:
HOST and PORT should refer to the following:
If you are connecting with MaxScale as a proxy, they should refer to the host and port of the MaxScale listener
If you are connecting directly to a multi-node ColumnStore cluster, they should refer to the host and port of the primary ColumnStore node
If you are connecting directly to single-node ColumnStore, they should refer to the host and port of the ColumnStore node
When the command is executed, the mariadb client prompts for the user password.
For each table, restore the data from the table's CSV file by executing cpimport on the primary ColumnStore node:
On the new ColumnStore cluster, verify that the table schemas and data have been restored.
For each table, verify the table's definition by executing the SHOW CREATE TABLE statement:
For each table, verify the number of rows in the table by executing SELECT COUNT(*):
For each table, verify the data in the table by executing the SELECT statement.
If the table is very large, you can limit the number of rows in the result set by adding a LIMIT clause:
MariaDB Enterprise ColumnStore performs best with Linux kernel optimizations.
On each server to host an Enterprise ColumnStore node, optimize the kernel:
Set the relevant kernel parameters in a sysctl configuration file. To ensure proper change management, use an Enterprise ColumnStore-specific configuration file.
Create a /etc/sysctl.d/90-mariadb-enterprise-columnstore.conf file:
Use the sysctl command to set the kernel parameters at runtime
The Linux Security Modules (LSM) should be temporarily disabled on each Enterprise ColumnStore node during installation.
The LSM will be configured and re-enabled later in this deployment procedure.
The steps to disable the LSM depend on the specific LSM used by the operating system.
SELinux must be set to permissive mode before installing MariaDB Enterprise ColumnStore.
To set SELinux to permissive mode:
Set SELinux to permissive mode:
Set SELinux to permissive mode by setting SELINUX=permissive in /etc/selinux/config.
For example, the file will usually look like this after the change:
Confirm that SELinux is in permissive mode:
SELinux will be configured and re-enabled later in this deployment procedure. This configuration is not persistent. If you restart the server before configuring and re-enabling SELinux later in the deployment procedure, you must reset the enforcement to permissive mode.
AppArmor must be disabled before installing MariaDB Enterprise ColumnStore.
Disable AppArmor:
Reboot the system.
Confirm that no AppArmor profiles are loaded using aa-status:
AppArmor will be configured and re-enabled later in this deployment procedure.
When using MariaDB Enterprise ColumnStore, it is recommended to set the system's locale to UTF-8.
On RHEL 8, install additional dependencies:
Set the system's locale to en_US.UTF-8 by executing localedef:
If you want to use S3-compatible storage, it is important to create the S3 bucket before you start ColumnStore. If you already have an S3 bucket, confirm that the bucket is empty.
S3 bucket configuration will be performed later in this procedure.
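For example, with the AWS CLI the bucket can be created and checked before ColumnStore is started. A minimal sketch; the bucket name and region are placeholders:

$ aws s3 mb s3://my-columnstore-bucket --region us-east-1
$ aws s3 ls s3://my-columnstore-bucket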
Navigation in the Single-Node Enterprise ColumnStore topology with Object storage deployment procedure:
This page was step 1 of 5.
Shell (cpimport): SQL access is not required.
SQL (LOAD DATA INFILE): Shell access is not required.
Remote Database: Use a normal database client. Avoid dumping data to intermediate files.
MariaDB ColumnStore utilizes an Extent Map to manage data distribution across extents—logical blocks within physical segment files ranging from 8 to 64 MB. Each extent holds a consistent number of rows, with the Extent Map cataloging these extents, their corresponding block identifiers (LBIDs), and the minimum and maximum values for each column's data within the extent.
The primary node maintains the master copy of the Extent Map. Upon system startup, this map is loaded into memory and propagated to other nodes for redundancy and quick access. Corruption of the master Extent Map can render the system unusable and lead to data loss.
ColumnStore's extent map is a smart structure that underpins its performance. By providing a logical partitioning scheme, it avoids the overhead associated with indexing and other common row-based database optimizations.
The primary node in a ColumnStore cluster holds the master copy of the extent map. Upon system startup, this master copy is read into memory and then replicated to all other participating nodes for high availability and disaster recovery. Nodes keep the extent map in memory for rapid access during query processing. As data within extents is modified, these updates are broadcast to all participating nodes to maintain consistency.
If the master copy of the extent map becomes corrupted, the entire system could become unusable, potentially leading to data loss. Having a recent backup of the extent map allows for a much faster recovery compared to reloading the entire database in such a scenario.
To safeguard against potential Extent Map corruption, regularly back up the master copy:
Lock Table:
Save BRM:
Create Backup Directory:
Copy Extent Map:
Unlock Tables:
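As a minimal sketch of the backup steps above, assuming a default installation with the Extent Map master copy at /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_em and a hypothetical backup directory /extent_map_backup (adjust paths for your environment):

-- 1. In a mariadb client session on the primary node, lock the tables
--    and keep the session open:
FLUSH TABLES WITH READ LOCK;

# 2. In a separate shell, flush the BRM / Extent Map data to disk:
$ sudo save_brm

# 3. Create a backup directory and copy the Extent Map master copy:
$ sudo mkdir -p /extent_map_backup
$ sudo cp -p /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_em /extent_map_backup/

-- 4. Back in the mariadb client session, release the lock:
UNLOCK TABLES;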
Stop ColumnStore:
Rename Corrupted Map:
Clear Versioning Files:
Restore Backup:
Set Ownership:
Start ColumnStore:
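As a minimal sketch of the single-node restore steps above, assuming the same default paths as the backup example; the versioning file names (BRM_saves_vbbm, BRM_saves_vss) are assumptions that should be confirmed against your installation before running anything:

# 1. Stop ColumnStore:
$ sudo systemctl stop mariadb-columnstore

# 2. Rename the corrupted Extent Map out of the way:
$ sudo mv /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_em /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_em.bad

# 3. Clear the versioning files (file names assumed):
$ sudo truncate -s 0 /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_vbbm
$ sudo truncate -s 0 /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_vss

# 4. Restore the backup copy of the Extent Map:
$ sudo cp -p /extent_map_backup/BRM_saves_em /var/lib/columnstore/data1/systemFiles/dbrm/

# 5. Set ownership to the user that runs ColumnStore (mysql by default):
$ sudo chown mysql:mysql /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_em

# 6. Start ColumnStore:
$ sudo systemctl start mariadb-columnstore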
Shutdown Cluster:
Rename Corrupted Map:
Clear Versioning Files:
Restore Backup:
Set Ownership:
Start Cluster:
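In a multi-node cluster, the same rename / clear / restore / ownership steps are performed on the primary node, but the whole cluster is stopped and started through CMAPI rather than systemd on a single node. A minimal sketch, assuming the mcs command-line client is installed and configured:

# Stop all nodes in the cluster:
$ sudo mcs cluster stop

# ... perform the restore steps shown above on the primary node ...

# Start the cluster again:
$ sudo mcs cluster start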
Incorporate the save_brm command into your data import scripts (e.g., those using cpimport) to automate Extent Map backups. This practice ensures regular backups without manual intervention.
Refer to the MariaDB ColumnStore Backup Script for an example implementation.
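For example, a load script might run save_brm immediately after a successful import so that a fresh copy of the Extent Map is always available. A minimal sketch, using a hypothetical input file and the backup directory from the examples above:

#!/bin/bash
set -euo pipefail

# Bulk load new data into the ColumnStore table
sudo cpimport -s '\t' inventory products /data/new-products.tsv

# Flush the in-memory BRM / Extent Map to disk and keep a dated copy
sudo save_brm
sudo cp -p /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_em \
    /extent_map_backup/BRM_saves_em.$(date +%Y%m%d%H%M%S)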
Step 1: Prepare ColumnStore Nodes
Step 2: Configure Shared Local Storage
This page details step 2 of the 9-step procedure "Deploy ColumnStore Shared Local Storage Topology".
This step configures shared local storage on systems hosting Enterprise ColumnStore.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
Step 2: Configure Shared Local Storage
Step 7: Start and Configure MariaDB MaxScale
Step 4: Test Enterprise ColumnStore
This page details step 4 of a 5-step procedure for deploying Single-Node Enterprise ColumnStore with Object storage.
This step tests MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
Apr 30 21:54:35 a1ebc96a2519 PrimProc[1004]: 35.668435 |0|0|0| C 28 CAL0000: Error total memory available is less than 3GB.

ERROR 1815 (HY000): Internal error: System is not ready yet. Please try again.

SHOW CREATE TABLE DATABASE_NAME.TABLE_NAME\G

SELECT * INTO OUTFILE '/path/to/DATABASE_NAME-TABLE_NAME.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM DATABASE_NAME.TABLE_NAME;

mariadb --host HOST --port PORT --user USER --password < schema-backup.sql

SHOW CREATE TABLE DATABASE_NAME.TABLE_NAME\G

SELECT COUNT(*) FROM DATABASE_NAME.TABLE_NAME;

SELECT * FROM DATABASE_NAME.TABLE_NAME LIMIT 100;

# minimize swapping
vm.swappiness = 1
# Increase the TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Increase the TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1
# for 1 GigE, increase this to 2500
# for 10 GigE, increase this to 30000
net.core.netdev_max_backlog = 2500

$ sudo sysctl --load=/etc/sysctl.d/90-mariadb-enterprise-columnstore.conf

$ sudo setenforce permissive

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted

$ sudo getenforce
Permissive

$ sudo systemctl disable apparmor

$ sudo aa-status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

$ sudo yum install glibc-locale-source glibc-langpack-en

$ sudo localedef -i en_US -f UTF-8 en_US.UTF-8

$ sudo yum install curl

$ sudo apt install curl apt-transport-https

$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup

$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -

$ chmod +x mariadb_es_repo_setup

$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--skip-maxscale \
--skip-tools \
--mariadb-server-version="11.4"

$ sudo yum install jemalloc jq curl

$ sudo apt install libjemalloc1 jq curl

$ sudo apt install libjemalloc2 jq curl

$ sudo yum install MariaDB-server \
MariaDB-backup \
MariaDB-shared \
MariaDB-client \
MariaDB-columnstore-engine \
MariaDB-columnstore-cmapi

$ sudo apt install mariadb-server \
mariadb-backup \
libmariadb3 \
mariadb-client \
mariadb-plugin-columnstore \
mariadb-columnstore-cmapi

$ sudo yum install curl

$ sudo apt install curl apt-transport-https

$ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup

$ echo "${checksum} mariadb_es_repo_setup" \
| sha256sum -c -

$ chmod +x mariadb_es_repo_setup

$ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--skip-maxscale \
--skip-tools \
--mariadb-server-version="11.4"

$ sudo yum install jemalloc jq curl

$ sudo apt install libjemalloc1 jq curl

$ sudo apt install libjemalloc2 jq curl

$ sudo yum install MariaDB-server \
MariaDB-backup \
MariaDB-shared \
MariaDB-client \
MariaDB-columnstore-engine \
MariaDB-columnstore-cmapi

$ sudo apt install mariadb-server \
mariadb-backup \
libmariadb3 \
mariadb-client \
mariadb-plugin-columnstore \
mariadb-columnstore-cmapi

CREATE DATABASE inventory;

CREATE TABLE inventory.products (
product_name VARCHAR(11) NOT NULL DEFAULT '',
supplier VARCHAR(128) NOT NULL DEFAULT '',
quantity VARCHAR(128) NOT NULL DEFAULT '',
unit_cost VARCHAR(128) NOT NULL DEFAULT ''
) ENGINE=Columnstore DEFAULT CHARSET=utf8;

$ sudo cpimport -s '\t' inventory products /tmp/inventory-products.tsv

LOAD DATA INFILE '/tmp/inventory-products.tsv'
INTO TABLE inventory.products;

$ mariadb --quick \
--skip-column-names \
--execute="SELECT * FROM inventory.products" \
| cpimport -s '\t' inventory products

MariaDB Enterprise ColumnStore performs best with Linux kernel optimizations.
On each server to host an Enterprise ColumnStore node, optimize the kernel:
Set the relevant kernel parameters in a sysctl configuration file. To ensure proper change management, use an Enterprise ColumnStore-specific configuration file.
Create a /etc/sysctl.d/90-mariadb-enterprise-columnstore.conf file:
Use the sysctl command to set the kernel parameters at runtime
The Linux Security Modules (LSM) should be temporarily disabled on each Enterprise ColumnStore node during installation.
The LSM will be configured and re-enabled later in this deployment procedure.
The steps to disable the LSM depend on the specific LSM used by the operating system.
SELinux must be set to permissive mode before installing MariaDB Enterprise ColumnStore.
To set SELinux to permissive mode:
Set SELinux to permissive mode:
Set SELinux to permissive mode by setting SELINUX=permissive in /etc/selinux/config.
For example, the file will usually look like this after the change:
Confirm that SELinux is in permissive mode:
SELinux will be configured and re-enabled later in this deployment procedure. This configuration is not persistent. If you restart the server before configuring and re-enabling SELinux later in the deployment procedure, you must reset the enforcement to permissive mode.
AppArmor must be disabled before installing MariaDB Enterprise ColumnStore.
Disable AppArmor:
Reboot the system.
Confirm that no AppArmor profiles are loaded using aa-status:
AppArmor will be configured and re-enabled later in this deployment procedure.
MariaDB Enterprise ColumnStore requires the following TCP ports:
3306
Port used for MariaDB Client traffic
8600-8630
Port range used for inter-node communication
8640
Port used by CMAPI
8700
Port used for inter-node communication
8800
Port used for inter-node communication
The firewall should be temporarily disabled on each Enterprise ColumnStore node during installation.
The firewall will be configured and re-enabled later in this deployment procedure.
The steps to disable the firewall depend on the specific firewall used by the operating system.
Check if the firewalld service is running:
If the firewalld service is running, stop it:
Firewalld will be configured and re-enabled later in this deployment procedure.
Check if the UFW service is running:
If the UFW service is running, stop it:
UFW will be configured and re-enabled later in this deployment procedure.
To install Enterprise ColumnStore on Amazon Web Services (AWS), the security group must be modified prior to installation.
Enterprise ColumnStore requires all internal communications to be open between Enterprise ColumnStore nodes. Therefore, the security group should allow all protocols and all ports to be open between the Enterprise ColumnStore nodes and the MaxScale proxy.
When using MariaDB Enterprise ColumnStore, it is recommended to set the system's locale to UTF-8.
On RHEL 8, install additional dependencies:
Set the system's locale to en_US.UTF-8 by executing localedef:
MariaDB Enterprise ColumnStore requires all nodes to have host names that are resolvable on all other nodes. If your infrastructure does not configure DNS centrally, you may need to configure static DNS entries in the /etc/hosts file of each server.
On each Enterprise ColumnStore node, edit the /etc/hosts file to map host names to the IP address of each Enterprise ColumnStore node:
Replace the IP addresses with the addresses in your own environment.
Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".
This page was step 1 of 9.
In a ColumnStore Shared Local Storage topology, MariaDB Enterprise ColumnStore requires the Storage Manager directory and the DB Root directories to be located on shared local storage.
The Storage Manager directory is at the following path:
/var/lib/columnstore/storagemanager
The DB Root directories are at the paths /var/lib/columnstore/dataN. The N in dataN represents a range of integers that starts at 1 and stops at the number of nodes in the deployment. For example, with a 3-node Enterprise ColumnStore deployment, this would refer to the following directories:
/var/lib/columnstore/data1
/var/lib/columnstore/data2
/var/lib/columnstore/data3
The DB Root directories must be mounted on every ColumnStore node.
Select a Shared Local Storage solution for the Storage Manager directory:
EBS (Elastic Block Store) Multi-Attach
EFS (Elastic File System)
Filestore
GlusterFS
NFS (Network File System)
For additional information, see "Shared Local Storage Options".
EBS is a high-performance block-storage service for AWS (Amazon Web Services). EBS Multi-Attach allows an EBS volume to be attached to multiple instances in AWS. Only clustered file systems, such as GFS2, are supported.
For Enterprise ColumnStore deployments in AWS:
EBS Multi-Attach is a recommended option for the Storage Manager directory.
Amazon S3 storage is the recommended option for data.
Consult the vendor documentation for details on how to configure EBS Multi-Attach.
EFS is a scalable, elastic, cloud-native NFS file system for AWS (Amazon Web Services).
For deployments in AWS:
EFS is a recommended option for the Storage Manager directory.
Amazon S3 storage is the recommended option for data.
Consult the vendor documentation for details on how to configure EFS.
Filestore is high-performance, fully managed storage for GCP (Google Cloud Platform).
For Enterprise ColumnStore deployments in GCP:
Filestore is the recommended option for the Storage Manager directory.
Google Object Storage (S3-compatible) is the recommended option for data.
Consult the vendor documentation for details on how to configure Filestore.
GlusterFS is a distributed file system.
GlusterFS is a shared local storage option, but it is not one of the recommended options.
For more information, see "Recommended Storage Options".
On each Enterprise ColumnStore node, install GlusterFS.
Install on CentOS / RHEL 8 (YUM):
Install on CentOS / RHEL 7 (YUM):
Install on Debian (APT):
Install on Ubuntu (APT):
Start the GlusterFS daemon:
Before you can create a volume with GlusterFS, you must probe each node from a peer node.
On the primary node, probe all of the other cluster nodes:
On one of the replica nodes, probe the primary node to confirm that it is connected:
On the primary node, check the peer status:
Number of Peers: 2
Create the GlusterFS volumes for MariaDB Enterprise ColumnStore. Each volume must have the same number of replicas as the number of Enterprise ColumnStore nodes.
On each Enterprise ColumnStore node, create the directory for each brick in the /brick directory:
On the primary node, create the GlusterFS volumes:
On the primary node, start the volume:
On each Enterprise ColumnStore node, create mount points for the volumes:
On each Enterprise ColumnStore node, add the mount points to /etc/fstab:
On each Enterprise ColumnStore node, mount the volumes:
NFS is a distributed file system. NFS is available in most Linux distributions. If NFS is used for an Enterprise ColumnStore deployment, the storage must be mounted with the sync option to ensure that each node flushes its changes immediately.
For on-premises deployments:
NFS is the recommended option for the Storage Manager directory.
Any S3-compatible storage is the recommended option for data.
Consult the documentation for your NFS implementation for details on how to configure NFS.
Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".
This page was step 2 of 9.
In a ColumnStore Object Storage topology, MariaDB Enterprise ColumnStore requires the Storage Manager directory to be located on shared local storage.
The Storage Manager directory is at the following path:
/var/lib/columnstore/storagemanager
The Storage Manager directory must be mounted on every ColumnStore node.
Select a Shared Local Storage solution for the Storage Manager directory:
For additional information, see "Shared Local Storage Options".
EBS is a high-performance block-storage service for AWS (Amazon Web Services). EBS Multi-Attach allows an EBS volume to be attached to multiple instances in AWS. Only clustered file systems, such as GFS2, are supported.
For Enterprise ColumnStore deployments in AWS:
EBS Multi-Attach is a recommended option for the Storage Manager directory.
Amazon S3 storage is the recommended option for data.
Consult the vendor documentation for details on how to configure EBS Multi-Attach.
EFS is a scalable, elastic, cloud-native NFS file system for AWS (Amazon Web Services).
For deployments in AWS:
EFS is a recommended option for the Storage Manager directory.
Amazon S3 storage is the recommended option for data.
Consult the vendor documentation for details on how to configure EFS.
Filestore is high-performance, fully managed storage for GCP (Google Cloud Platform).
For Enterprise ColumnStore deployments in GCP:
Filestore is the recommended option for the Storage Manager directory.
Google Object Storage (S3-compatible) is the recommended option for data.
Consult the vendor documentation for details on how to configure Filestore.
GlusterFS is a distributed file system. GlusterFS is a shared local storage option, but it is not one of the recommended options.
For more information, see "Recommended Storage Options".
On each Enterprise ColumnStore node, install GlusterFS.
Install on CentOS / RHEL 8 (YUM):
Install on CentOS / RHEL 7 (YUM):
Install on Debian (APT):
Install on Ubuntu (APT):
Start the GlusterFS daemon:
Before you can create a volume with GlusterFS, you must probe each node from a peer node.
On the primary node, probe all of the other cluster nodes:
On one of the replica nodes, probe the primary node to confirm that it is connected:
On the primary node, check the peer status:
Create the GlusterFS volumes for MariaDB Enterprise ColumnStore. Each volume must have the same number of replicas as the number of Enterprise ColumnStore nodes.
On each Enterprise ColumnStore node, create the directory for each brick in the /brick directory:
On the primary node, create the GlusterFS volumes:
On the primary node, start the volume:
On each Enterprise ColumnStore node, create mount points for the volumes:
On each Enterprise ColumnStore node, add the mount points to /etc/fstab:
On each Enterprise ColumnStore node, mount the volumes:
NFS is a distributed file system. NFS is available in most Linux distributions. If NFS is used for an Enterprise ColumnStore deployment, the storage must be mounted with the sync option to ensure that each node flushes its changes immediately.
For on-premises deployments:
NFS is the recommended option for the Storage Manager directory.
Any S3-compatible storage is the recommended option for data.
Consult the documentation for your NFS implementation for details on how to configure NFS.
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 2 of 9.
MariaDB MaxScale installations include a configuration file with some example objects. This configuration file should be replaced.
On the MaxScale node, replace the default /etc/maxscale.cnf with the following configuration:
For additional information, see "Global Parameters".
On the MaxScale node, restart the MaxScale service to ensure that MaxScale picks up the new configuration:
For additional information, see "Start and Stop Services".
On the MaxScale node, use maxctrl create to create a server object for each Enterprise ColumnStore node:
MaxScale uses monitors to retrieve additional information from the servers. This information is used by other services in filtering and routing connections based on the current state of the node. For MariaDB Enterprise ColumnStore, use the MariaDB Monitor (mariadbmon).
On the MaxScale node, use maxctrl create monitor to create a MariaDB Monitor:
In this example:
columnstore_monitor is an arbitrary name that is used to identify the new monitor.
mariadbmon is the name of the module that implements the MariaDB Monitor.
user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to monitor the ColumnStore nodes.
password='MAXSCALE_USER_PASSWORD' sets the password parameter to the password used by the database user account that MaxScale uses to monitor the ColumnStore nodes.
replication_user=REPLICATION_USER sets the replication_user parameter to the database user account that MaxScale uses to setup replication.
replication_password='REPLICATION_USER_PASSWORD' sets the replication_password parameter to the password used by the database user account that MaxScale uses to setup replication.
--servers sets the servers parameter to the set of nodes that MaxScale should monitor. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by mariadbmon in MaxScale 22.08 can also be specified.
Routers control how MaxScale balances the load between Enterprise ColumnStore nodes. Each router uses a different approach to routing queries. Consider the specific use case of your application and database load and select the router that best suits your needs.
Connection-based load balancing
Routes connections to Enterprise ColumnStore nodes designated as replica servers for a read-only pool
Routes connections to an Enterprise ColumnStore node designated as the primary server for a read-write pool.
Query-based load balancing
Routes write queries to an Enterprise ColumnStore node designated as the primary server
Routes read queries to Enterprise ColumnStore nodes designated as replica servers
Automatically reconnects after node failures
Automatically replays transactions after node failures
Use MaxScale Read Connection Router (readconnroute) to route connections to replica servers for a read-only pool.
On the MaxScale node, use maxctrl create service to create a router:
In this example:
connection_router_service is an arbitrary name that is used to identify the new service.
readconnroute is the name of the module that implements the Read Connection Router.
user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to connect to the ColumnStore nodes.
password=MAXSCALE_USER_PASSWORD sets the password parameter to the password used by the database user account that MaxScale uses to connect to the ColumnStore nodes.
router_options=slave sets the router_options parameter to slave, so that MaxScale only routes connections to the replica nodes.
--servers sets the servers parameter to the set of nodes to which MaxScale should route connections. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by readconnroute in MaxScale 22.08 can also be specified.
These instructions reference TCP port 3308. You can use a different TCP port. The TCP port used must not be bound by any other listener.
On the MaxScale node, use the maxctrl create listener command to configure MaxScale to use a listener for the Read Connection Router (readconnroute):
In this example:
connection_router_service is the name of the readconnroute service that was previously created.
connection_router_listener is an arbitrary name that is used to identify the new listener.
3308 is the TCP port.
protocol=MariaDBClient sets the protocol parameter.
Other Module Parameters supported by listeners in MaxScale 22.08 can also be specified.
MaxScale Read/Write Split Router (readwritesplit) performs query-based load balancing. The router routes write queries to the primary and read queries to the replicas.
On the MaxScale node, use the maxctrl create service command to configure MaxScale to use the Read/Write Split Router (readwritesplit):
In this example:
query_router_service is an arbitrary name that is used to identify the new service.
readwritesplit is the name of the module that implements the Read/Write Split Router.
user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to connect to the ColumnStore nodes.
password=MAXSCALE_USER_PASSWORD sets the password parameter to the password used by the database user account that MaxScale uses to connect to the ColumnStore nodes.
--servers sets the servers parameter to the set of nodes to which MaxScale should route queries. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by readwritesplit in MaxScale 22.08 can also be specified.
These instructions reference TCP port 3307. You can use a different TCP port. The TCP port used must not be bound by any other listener.
On the MaxScale node, use the maxctrl create listener command to configure MaxScale to use a listener for the Read/Write Split Router (readwritesplit):
In this example:
query_router_service is the name of the readwritesplit service that was previously created.
query_router_listener is an arbitrary name that is used to identify the new listener.
3307 is the TCP port.
protocol=MariaDBClient sets the protocol parameter.
Other Module Parameters supported by listeners in MaxScale 22.08 can also be specified.
To start the services and monitors, on the MaxScale node use maxctrl start services:
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 7 of 9.
MariaDB Enterprise ColumnStore 23.10 includes a testS3Connection command to test the S3 configuration, permissions, and connectivity.
On each Enterprise ColumnStore node, test the S3 configuration:
If the testS3Connection command does not return OK, investigate the S3 configuration.
Connect to the server using the root@localhost user account:
Query and confirm that the ColumnStore storage engine plugin is ACTIVE:
Create a test database, if it does not exist:
Create a ColumnStore table:
Add sample data into the table:
Read data from table:
Create an InnoDB table:
Add data to the table:
Perform a cross-engine join:
Navigation in the Single-Node Enterprise ColumnStore topology with Object storage deployment procedure:
This page was step 4 of 5.



Step 7: Start and Configure MariaDB MaxScale
This page details step 7 of the 9-step procedure "Deploy ColumnStore Shared Local Storage Topology".
This step starts and configures MariaDB MaxScale 22.08.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB MaxScale installations include a configuration file with some example objects. This configuration file should be replaced.
On the MaxScale node, replace the default /etc/maxscale.cnf with the following configuration:
For additional information, see "Global Parameters".
On the MaxScale node, restart the MaxScale service to ensure that MaxScale picks up the new configuration:
For additional information, see "Start and Stop Services".
On the MaxScale node, use maxctrl create server to create a server object for each Enterprise ColumnStore node:
MaxScale uses monitors to retrieve additional information from the servers. This information is used by other services in filtering and routing connections based on the current state of the node. For MariaDB Enterprise ColumnStore, use the MariaDB Monitor (mariadbmon).
On the MaxScale node, use maxctrl create monitor to create a MariaDB Monitor:
In this example:
columnstore_monitor is an arbitrary name that is used to identify the new monitor.
mariadbmon is the name of the module that implements the MariaDB Monitor.
user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to monitor the ColumnStore nodes.
Routers control how MaxScale balances the load between Enterprise ColumnStore nodes. Each router uses a different approach to routing queries. Consider the specific use case of your application and database load and select the router that best suits your needs.
Use the MaxScale Read Connection Router (readconnroute) to route connections to replica servers for a read-only pool.
On the MaxScale node, use maxctrl create service to create a router:
In this example:
connection_router_service is an arbitrary name that is used to identify the new service.
readconnroute is the name of the module that implements the Read Connection Router.
user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to connect to the ColumnStore nodes.
These instructions reference TCP port 3308. You can use a different TCP port. The TCP port used must not be bound by any other listener.
On the MaxScale node, use the maxctrl create listener command to configure MaxScale to use a listener for the Read Connection Router (readconnroute):
In this example:
connection_router_service is the name of the readconnroute service that was previously created.
connection_router_listener is an arbitrary name that is used to identify the new listener.
3308 is the TCP port.
The MaxScale Read/Write Split Router (readwritesplit) performs query-based load balancing. The router routes write queries to the primary and read queries to the replicas.
On the MaxScale node, use the maxctrl create service command to configure MaxScale to use the Read/Write Split Router (readwritesplit):
In this example:
query_router_service is an arbitrary name that is used to identify the new service.
readwritesplit is the name of the module that implements the Read/Write Split Router.
user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to connect to the ColumnStore nodes.
These instructions reference TCP port 3307. You can use a different TCP port. The TCP port used must not be bound by any other listener.
On the MaxScale node, use the maxctrl create listener command to configure MaxScale to use a listener for the Read/Write Split Router (readwritesplit):
In this example:
query_router_service is the name of the readwritesplit service that was previously created.
query_router_listener is an arbitrary name that is used to identify the new listener.
3307 is the TCP port.
protocol=MariaDBClient sets the protocol parameter.
To start the services and monitors, on the MaxScale node use maxctrl start services:
Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".
This page was step 7 of 9.
Step 1: Prepare ColumnStore Nodes
This page details step 1 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".
This step prepares systems to host MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Enterprise ColumnStore performs best with Linux kernel optimizations.
On each server to host an Enterprise ColumnStore node, optimize the kernel:
Set the relevant kernel parameters in a sysctl configuration file. To ensure proper change management, use an Enterprise ColumnStore-specific configuration file.
Create a /etc/sysctl.d/90-mariadb-enterprise-columnstore.conf file:
Use the sysctl command to set the kernel parameters at runtime
The Linux Security Modules (LSM) should be temporarily disabled on each Enterprise ColumnStore node during installation.
The LSM will be configured and re-enabled later in this deployment procedure.
The steps to disable the LSM depend on the specific LSM used by the operating system.
SELinux must be set to permissive mode before installing MariaDB Enterprise ColumnStore.
To set SELinux to permissive mode:
Set SELinux to permissive mode:
Set SELinux to permissive mode by setting SELINUX=permissive in /etc/selinux/config.
For example, the file will usually look like this after the change:
Confirm that SELinux is in permissive mode:
SELinux will be configured and re-enabled later in this deployment procedure. This configuration is not persistent. If you restart the server before configuring and re-enabling SELinux later in the deployment procedure, you must reset the enforcement to permissive mode.
AppArmor must be disabled before installing MariaDB Enterprise ColumnStore.
Disable AppArmor:
Reboot the system.
Confirm that no AppArmor profiles are loaded using aa-status:
AppArmor will be configured and re-enabled later in this deployment procedure.
MariaDB Enterprise ColumnStore requires the following TCP ports:
The firewall should be temporarily disabled on each Enterprise ColumnStore node during installation.
The firewall will be configured and re-enabled later in this deployment procedure.
The steps to disable the firewall depend on the specific firewall used by the operating system.
Check if the firewalld service is running:
If the firewalld service is running, stop it:
Firewalld will be configured and re-enabled later in this deployment procedure.
Check if the UFW service is running:
If the UFW service is running, stop it:
UFW will be configured and re-enabled later in this deployment procedure.
To install Enterprise ColumnStore on Amazon Web Services (AWS), the security group must be modified prior to installation.
Enterprise ColumnStore requires all internal communications to be open between Enterprise ColumnStore nodes. Therefore, the security group should allow all protocols and all ports to be open between the Enterprise ColumnStore nodes and the MaxScale proxy.
When using MariaDB Enterprise ColumnStore, it is recommended to set the system's locale to UTF-8.
On RHEL 8, install additional dependencies:
Set the system's locale to en_US.UTF-8 by executing localedef:
MariaDB Enterprise ColumnStore requires all nodes to have host names that are resolvable on all other nodes. If your infrastructure does not configure DNS centrally, you may need to configure static DNS entries in the /etc/hosts file of each server.
On each Enterprise ColumnStore node, edit the /etc/hosts file to map host names to the IP address of each Enterprise ColumnStore node:
Replace the IP addresses with the addresses in your own environment.
With the ColumnStore Object Storage topology, it is important to create the S3 bucket before you start ColumnStore. All Enterprise ColumnStore nodes access data from the same bucket.
If you already have an S3 bucket, confirm that the bucket is empty.
S3 bucket configuration will be performed later in this procedure.
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 1 of 9.
Step 3: Start and Configure Enterprise ColumnStore
This page details step 3 of a 5-step procedure for deploying Single-Node Enterprise ColumnStore with Object storage.
This step starts and configures MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
Mandatory system variables and options for Single-Node Enterprise ColumnStore include:
Configure Enterprise ColumnStore S3 Storage Manager to use S3-compatible storage by editing the /etc/columnstore/storagemanager.cnf configuration file:
The S3-compatible object storage options are configured under [S3]:
The bucket option must be set to the name of the bucket that you created in "Create an S3 Bucket".
The endpoint option must be set to the endpoint for the S3-compatible object storage.
The aws_access_key_id and aws_secret_access_key options must be set to the access key ID and secret access key for the S3-compatible object storage.
The local cache options are configured under [Cache]:
The cache_size option is set to 2 GB by default.
The path option is set to /var/lib/columnstore/storagemanager/cache by default.
Ensure that the specified path has sufficient storage space for the specified cache size.
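Purely as an illustration, the relevant portion of storagemanager.cnf might look like the following sketch; the bucket name, endpoint, and credential values are placeholders that must be replaced with the details of your own S3-compatible storage:
[S3]
bucket = your-columnstore-bucket
endpoint = s3.us-east-1.amazonaws.com
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
[Cache]
cache_size = 2g
path = /var/lib/columnstore/storagemanager/cache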
Start and enable the MariaDB Enterprise Server service, so that it starts automatically upon reboot:
Start and enable the MariaDB Enterprise ColumnStore service, so that it starts automatically upon reboot:
Enterprise ColumnStore requires a mandatory utility user account to perform cross-engine joins and similar operations.
Create the user account with the CREATE USER statement:
Grant the user account SELECT privileges on all databases with the GRANT statement:
Configure Enterprise ColumnStore to use the utility user, as shown in the sketch after this list:
Set the password:
For details about how to encrypt the password, see "".
Passwords should meet your organization's password policies. If your MariaDB Enterprise Server instance has a password validation plugin installed, then the password should also meet the configured requirements.
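A hedged sketch of the utility user steps follows; the cross_engine user name and password are placeholders, and the mcsSetConfig CrossEngineSupport settings reflect a typical Enterprise ColumnStore installation, so verify them against your version's documentation before use:
CREATE USER 'cross_engine'@'127.0.0.1' IDENTIFIED BY 'cross_engine_passwd';
GRANT SELECT ON *.* TO 'cross_engine'@'127.0.0.1';
$ sudo mcsSetConfig CrossEngineSupport User "cross_engine"
$ sudo mcsSetConfig CrossEngineSupport Password "cross_engine_passwd"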
The specific steps to configure the security module depend on the operating system.
Configure SELinux for Enterprise ColumnStore:
To configure SELinux, you have to install the packages required for audit2allow. On CentOS 7 and RHEL 7, install the following:
On RHEL 8, install the following:
Allow the system to run under load for a while to generate SELinux audit events.
After the system has taken some load, generate an SELinux policy from the audit events using audit2allow:
If no audit events were found, this will print the following:
If audit events were found, the new SELinux policy can be loaded using semodule:
Set SELinux to enforcing mode by setting SELINUX=enforcing in /etc/selinux/config.
For example, the file will usually look like this after the change:
Set SELinux to enforcing mode:
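The following shell sketch illustrates generating and loading the policy and then re-enabling enforcement; the module name mariadb_local is arbitrary, and the exact audit2allow invocation may differ in your environment:
$ sudo audit2allow -a -M mariadb_local
$ sudo semodule -i mariadb_local.pp
$ sudo setenforce enforcing
$ sudo getenforce
Enforcing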
For information on how to create a profile, see the AppArmor documentation on ubuntu.com.
Navigation in the Single-Node Enterprise ColumnStore topology with Object storage deployment procedure:
This page was step 3 of 5.
MariaDB ColumnStore automatically creates logical horizontal partitions across every column. For ordered or semi-ordered data fields such as an order date this will result in a highly effective partitioning scheme based on that column. This allows for increased performance of queries filtering on that column since partition elimination can be performed. It also allows for data lifecycle management as data can be disabled or dropped by partition cheaply. Caution should be used when disabling or dropping partitions as these commands are destructive.
It is important to understand that a Partition in ColumnStore terms is actually 2 extents (16 million rows) and that extents & partitions are created according to the following algorithm in 1.0.x:
Create 4 extents in 4 files
When these are filled up (after 32M rows), create 4 more extents in the 4 files created in step 1.
When these are filled up (after 64M rows), create a new partition.
Information about all partitions for a given column can be retrieved using the calShowPartitions stored procedure which takes either two or three mandatory parameters: [database_name], table_name, and column_name. If two parameters are provided the current database is assumed. For example:
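A hypothetical invocation, assuming an orders table with an orderdate column in the current database, might look like this:
SELECT calShowPartitions('orders', 'orderdate');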
The calEnablePartitions stored procedure allows for enabling of one or more partitions. The procedure takes the same set of parameters as calDisablePartitions.
For example:
The result showing the first partition has been enabled:
The calDisablePartitions stored procedure allows for disabling of one or more partitions. A disabled partition still exists on the file system (and can be enabled again at a later time) but will not participate in any query, DML or import activity. The procedure takes either two or three mandatory parameters: [database_name], table_name, and partition_numbers separated by commas. If two parameters are provided the current database is assumed.
For example:
The result showing the first partition has been disabled:
The calDropPartitions stored procedure allows for dropping of one or more partitions. Dropping means that the underlying storage is deleted and the partition is completely removed. A partition can be dropped from either enabled or disabled state. The procedure takes the same set of parameters as calDisablePartitions. Extra caution should be used with this procedure since it is destructive and cannot be reversed.
For example:
The result showing the first partition has been dropped:
Information about a range of partitions for a given column can be retrieved using the calShowPartitionsByValue stored procedure. This procedure takes either four or five parameters: [database_name], table_name, column_name, start_value, and end_value. If four parameters are provided, the current database is assumed. Only casual partition column types (INTEGER, DECIMAL, DATE, DATETIME, CHAR up to 8 bytes, and VARCHAR up to 7 bytes) are supported for this function.
The function returns a list of partitions whose minimum and maximum values for the column col_name fall completely within the range of start_value and end_value.
For example:
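A hypothetical invocation, again assuming an orders table with an orderdate column, might look like this:
SELECT calShowPartitionsByValue('orders', 'orderdate', '1992-01-01', '1998-08-02');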
The calEnablePartitionsbyValue stored procedure allows for enabling of one or more partitions by value. The procedure takes the same set of arguments as calShowPartitionsByValue.
A good practice is to use calShowPartitionsByValue to identify the partitions to be enabled and then the same argument values used to construct the calEnablePartitionsbyValue call.
For example:
The result showing the first partition has been enabled:
The calDisablePartitionsByValue stored procedure allows for disabling of one or more partitions by value. A disabled partition still exists on the file system (and can be enabled again at a later time) but will not participate in any query, DML or import activity. The procedure takes the same set of arguments as calShowPartitionsByValue.
A good practice is to use calShowPartitionsByValue to identify the partitions to be disabled and then the same argument values used to construct the calDisablePartitionsByValue call. For example:
The result showing the first partition has been disabled:
The calDropPartitionsByValue stored procedure allows for dropping of one or more partitions by value. Dropping means that the underlying storage is deleted and the partition is completely removed. A partition can be dropped from either enabled or disabled state. The procedure takes the same set of arguments as calShowPartitionsByValue. A good practice is to use calShowPartitionsByValue to identify the partitions to be enabled and then the same argument values used to construct the calDropPartitionsByValue call. Extra caution should be used with this procedure since it is destructive and cannot be reversed.
For example:
The result showing the first partition has been dropped:
Since the partitioning scheme is system-maintained, the minimum and maximum values are not directly specified, but influenced by the order of data loading. If you want to drop a specific date range, additional deletes are required to achieve this. The following cases may occur:
For semi-ordered data, there may be overlap between minimum and maximum values between partitions.
As in the example above, the partition ranges from 1992-01-01 to 1998-08-02. It may be desirable to drop the remaining 1998 rows.
A bulk-delete statement can be used to delete the remaining rows that do not fall exactly within partition ranges. The partition drops will be fastest; however, the system optimizes bulk-delete statements to delete by block internally. This is still relatively fast.
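As a hypothetical illustration, assuming the orders table above, the remaining 1998 rows that do not fall exactly on a partition boundary could be removed with a bulk delete such as:
DELETE FROM orders WHERE orderdate >= '1998-01-01';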
To remove a node from Enterprise ColumnStore, perform the following procedure.
The server object for the node must be unlinked from the service using maxctrl:
Unlink the server object from the service using the unlink service command.
As the first argument, provide the name of the service.
As the second argument, provide the name of the server.
To confirm that the server object was properly unlinked from the service, the service should be checked using maxctrl:
Show the services using the show services command, like this:
The server object for the node must be unlinked from the monitor using maxctrl:
Unlink a server object from the monitor using the unlink monitor command.
As the first argument, provide the name of the monitor.
As the second argument, provide the name of the server.
To confirm that the server object was properly unlinked from the monitor, the monitor should be checked using maxctrl:
Show the monitors using the show monitors command, like this:
The server object for the node must also be removed from MaxScale using maxctrl:
Use maxctrl or another supported REST client.
Remove the server object using the destroy server command.
As the first argument, provide the name for the server.
For example:
To confirm that the server object was properly removed, the server objects should be checked using maxctrl:
Show the server objects using the show servers command, like this:
The Enterprise Server, Enterprise ColumnStore, and CMAPI services can be stopped using the systemctl command.
Perform the following procedure on the node:
Stop the MariaDB Enterprise Server service:
Stop the MariaDB Enterprise ColumnStore service:
Stop the CMAPI service:
The node must be removed from Enterprise ColumnStore using the CMAPI:
Remove the node using the endpoint path.
Use a REST client, such as curl.
Format the JSON output using jq for enhanced readability.
For example, if the primary node's host name is mcs1 and the IP address for the node to remove is 192.0.2.3:
In ES 10.5.10-7 and later:
In ES 10.5.9-6 and earlier:
Example output:
To confirm that the node was properly removed, the status of Enterprise ColumnStore should be checked using the CMAPI:
Check the status using the endpoint path.
For example, if the primary node's host name is mcs1:
Example output:
This page provides information on optimizing Linux kernel parameters for improved performance with MariaDB ColumnStore.
MariaDB ColumnStore is a high-performance columnar database designed for analytical workloads. By optimizing the Linux kernel parameters, you can further enhance the performance of your MariaDB ColumnStore deployments.
The following table lists the recommended optimized Linux kernel parameters for MariaDB ColumnStore:
For more information, refer to the Linux kernel documentation for each parameter.
To configure these parameters, you can add them to the /etc/sysctl.conf file. For example:
After making changes to the /etc/sysctl.conf file, you need to apply the changes by running the following command:
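For instance, a few of the parameters from the table above could be appended to /etc/sysctl.conf and then loaded at runtime; the exact parameter set should follow the table and your deployment's needs:
vm.swappiness = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
$ sudo sysctl -p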
These optimized parameters are recommended for all MariaDB ColumnStore deployments, regardless of the specific workload. They can improve performance for various use cases, including:
Large-scale data warehousing
Real-time analytics
Business intelligence
Machine learning
By optimizing the Linux kernel parameters, you can significantly improve the performance of your MariaDB ColumnStore deployments. These recommendations provide a starting point for optimizing your system, and you may need to adjust the values based on your specific hardware and workload.
Step 5: Test MariaDB Enterprise Server
The ColumnStore engine does not fully support recursive Common Table Expressions (CTEs). Attempting to use recursive CTEs directly against ColumnStore tables typically results in an error.
The purpose of the following examples is to demonstrate three potential workarounds for this issue. The best fit for your organization will depend on your specific needs and ability to refactor queries and adjust your approach.
The examples simulate a simple organizational chart of employees and managers to illustrate the problem and the workarounds.
First, an InnoDB table for comparison:
Step 9: Import Data
# minimize swapping
vm.swappiness = 1
# Increase the TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Increase the TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1
# for 1 GigE, increase this to 2500
# for 10 GigE, increase this to 30000
net.core.netdev_max_backlog = 2500
$ sudo sysctl --load=/etc/sysctl.d/90-mariadb-enterprise-columnstore.conf
$ sudo setenforce permissive
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
$ sudo getenforce
Permissive
$ sudo systemctl disable apparmor
$ sudo aa-status
apparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.
$ sudo systemctl status firewalld
$ sudo systemctl stop firewalld
$ sudo ufw status verbose
$ sudo ufw disable
$ sudo yum install glibc-locale-source glibc-langpack-en
$ sudo localedef -i en_US -f UTF-8 en_US.UTF-8
192.0.2.1 mcs1
192.0.2.2 mcs2
192.0.2.3 mcs3
192.0.2.100 mxs1
$ sudo yum install --enablerepo=PowerTools glusterfs-server
$ sudo yum install centos-release-gluster
$ sudo yum install glusterfs-server
$ wget -O - https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub | apt-key add -
$ DEBID=$(grep 'VERSION_ID=' /etc/os-release | cut -d '=' -f 2 | tr -d '"')
$ DEBVER=$(grep 'VERSION=' /etc/os-release | grep -Eo '[a-z]+')
$ DEBARCH=$(dpkg --print-architecture)
$ echo deb https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/${DEBID}/${DEBARCH}/apt ${DEBVER} main > /etc/apt/sources.list.d/gluster.list
$ sudo apt update
$ sudo apt install glusterfs-server
$ sudo apt update
$ sudo apt install glusterfs-server
$ sudo systemctl start glusterd
$ sudo systemctl enable glusterd
$ sudo gluster peer probe mcs2
$ sudo gluster peer probe mcs3
$ sudo gluster peer probe mcs1
peer probe: Host mcs1 port 24007 already in peer list
$ sudo gluster peer status
Hostname: mcs2
Uuid: 3c8a5c79-22de-45df-9034-8ae624b7b23e
State: Peer in Cluster (Connected)
Hostname: mcs3
Uuid: 862af7b2-bb5e-4b1c-8311-630fa32ed451
State: Peer in Cluster (Connected)
$ sudo mkdir -p /brick/storagemanager
$ sudo gluster volume create storagemanager \
replica 3 \
mcs1:/brick/storagemanager \
mcs2:/brick/storagemanager \
mcs3:/brick/storagemanager \
force
$ sudo gluster volume start storagemanager
$ sudo mkdir -p /var/lib/columnstore/storagemanager
127.0.0.1:storagemanager /var/lib/columnstore/storagemanager glusterfs defaults,_netdev 0 0
$ sudo mount -a
$ sudo yum install --enablerepo=PowerTools glusterfs-server
$ sudo yum install centos-release-gluster
$ sudo yum install glusterfs-server
$ wget -O - https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub | apt-key add -
$ DEBID=$(grep 'VERSION_ID=' /etc/os-release | cut -d '=' -f 2 | tr -d '"')
$ DEBVER=$(grep 'VERSION=' /etc/os-release | grep -Eo '[a-z]+')
$ DEBARCH=$(dpkg --print-architecture)
$ echo deb https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/${DEBID}/${DEBARCH}/apt ${DEBVER} main > /etc/apt/sources.list.d/gluster.list
$ sudo apt update
$ sudo apt install glusterfs-server
$ sudo apt update
$ sudo apt install glusterfs-server
$ sudo systemctl start glusterd
$ sudo systemctl enable glusterd
$ sudo gluster peer probe mcs2
$ sudo gluster peer probe mcs3
$ sudo gluster peer probe mcs1
peer probe: Host mcs1 port 24007 already in peer list
$ sudo gluster peer status
Number of Peers: 2
Hostname: mcs2
Uuid: 3c8a5c79-22de-45df-9034-8ae624b7b23e
State: Peer in Cluster (Connected)
Hostname: mcs3
Uuid: 862af7b2-bb5e-4b1c-8311-630fa32ed451
State: Peer in Cluster (Connected)
$ sudo mkdir -p /brick/storagemanager
$ sudo gluster volume create storagemanager \
replica 3 \
mcs1:/brick/storagemanager \
mcs2:/brick/storagemanager \
mcs3:/brick/storagemanager \
force
$ sudo gluster volume start storagemanager
$ sudo mkdir -p /var/lib/columnstore/storagemanager
127.0.0.1:storagemanager /var/lib/columnstore/storagemanager glusterfs defaults,_netdev 0 0
$ sudo mount -a
[maxscale]
threads = auto
admin_host = 0.0.0.0
admin_secure_gui = false
$ sudo systemctl restart maxscale
$ maxctrl create server mcs1 192.0.2.101
$ maxctrl create server mcs2 192.0.2.102
$ maxctrl create server mcs3 192.0.2.103
$ maxctrl create monitor columnstore_monitor mariadbmon \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
replication_user=repl \
replication_password='REPLICATION_USER_PASSWORD' \
--servers mcs1 mcs2 mcs3
$ maxctrl create service connection_router_service readconnroute \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
router_options=slave \
--servers mcs1 mcs2 mcs3
$ maxctrl create listener connection_router_service connection_router_listener 3308 \
protocol=MariaDBClient
$ maxctrl create service query_router_service readwritesplit \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
--servers mcs1 mcs2 mcs3
$ maxctrl create listener query_router_service query_router_listener 3307 \
protocol=MariaDBClient
$ maxctrl start services
$ sudo testS3Connection
StorageManager[26887]: Using the config file found at /etc/columnstore/storagemanager.cnf
StorageManager[26887]: S3Storage: S3 connectivity & permissions are OK
S3 Storage Manager Configuration OK
$ sudo mariadb
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 38
Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_LIBRARY LIKE 'ha_columnstore%';
+---------------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+---------------------+---------------+
| Columnstore | ACTIVE |
| COLUMNSTORE_COLUMNS | ACTIVE |
| COLUMNSTORE_TABLES | ACTIVE |
| COLUMNSTORE_FILES | ACTIVE |
| COLUMNSTORE_EXTENTS | ACTIVE |
+---------------------+---------------+
CREATE DATABASE IF NOT EXISTS test;
CREATE TABLE IF NOT EXISTS test.contacts (
first_name VARCHAR(50),
last_name VARCHAR(50),
email VARCHAR(100)
) ENGINE=ColumnStore;
INSERT INTO test.contacts (first_name, last_name, email)
VALUES
("Kai", "Devi", "kai.devi@example.com"),
("Lee", "Wang", "lee.wang@example.com");
SELECT * FROM test.contacts;
+------------+-----------+----------------------+
| first_name | last_name | email |
+------------+-----------+----------------------+
| Kai | Devi | kai.devi@example.com |
| Lee | Wang | lee.wang@example.com |
+------------+-----------+----------------------+
CREATE TABLE test.addresses (
email VARCHAR(100),
street_address VARCHAR(255),
city VARCHAR(100),
state_code VARCHAR(2)
) ENGINE = InnoDB;
INSERT INTO test.addresses (email, street_address, city, state_code)
VALUES
("kai.devi@example.com", "1660 Amphibious Blvd.", "Redwood City", "CA"),
("lee.wang@example.com", "32620 Little Blvd", "Redwood City", "CA");
SELECT name AS "Name", addr AS "Address"
FROM (SELECT CONCAT(first_name, " ", last_name) AS name,
email FROM test.contacts) AS contacts
INNER JOIN (SELECT CONCAT(street_address, ", ", city, ", ", state_code) AS addr,
email FROM test.addresses) AS addr
WHERE contacts.email = addr.email;
+----------+-----------------------------------------+
| Name | Address |
+----------+-----------------------------------------+
| Kai Devi | 1660 Amphibious Blvd., Redwood City, CA |
| Lee Wang | 32620 Little Blvd, Redwood City, CA |
+----------+-----------------------------------------+
+-------------------+-------------------------------------+
| Name | Address |
+-------------------+-------------------------------------+
| Walker Percy | 500 Thomas More Dr., Covington, LA |
| Flannery O'Connor | 300 Tarwater Rd., Milledgeville, GA |
+-------------------+-------------------------------------+
sudo cpimport -s ',' \
DATABASE_NAME \
TABLE_NAME \
/path/to/DATABASE_NAME-TABLE_NAME.csv
mariadb -e "FLUSH TABLES WITH READ LOCK;"
save_brm
mkdir -p /extent_map_backup
cp -f /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_em /extent_map_backup
mariadb -e "UNLOCK TABLES;"
systemctl stop mariadb-columnstore
mv /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_em /tmp/BRM_saves_em.bad
> /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_vbbm
> /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_vss
cp -f /extent_map_backup/BRM_saves_em /var/lib/columnstore/data1/systemFiles/dbrm/
chown -R mysql:mysql /var/lib/columnstore/data1/systemFiles/dbrm/
systemctl start mariadb-columnstore
curl -s -X PUT https://127.0.0.1:8640/cmapi/0.4.0/cluster/shutdown \
--header 'Content-Type:application/json' \
--header 'x-api-key:your_api_key' \
--data '{"timeout":60}' -k
mv /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_em /tmp/BRM_saves_em.bad
> /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_vbbm
> /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_vss
cp -f /extent_map_backup/BRM_saves_em /var/lib/columnstore/data1/systemFiles/dbrm/
chown -R mysql:mysql /var/lib/columnstore/data1/systemFiles/dbrm
curl -s -X PUT https://127.0.0.1:8640/cmapi/0.4.0/cluster/start \
--header 'Content-Type:application/json' \
--header 'x-api-key:your_api_key' \
--data '{"timeout":60}' -k
password='MAXSCALE_USER_PASSWORD' sets the password parameter to the password used by the database user account that MaxScale uses to monitor the ColumnStore nodes.
replication_user=REPLICATION_USER sets the replication_user parameter to the database user account that MaxScale uses to setup replication.
replication_password='REPLICATION_USER_PASSWORD' sets the replication_password parameter to the password used by the database user account that MaxScale uses to setup replication.
--servers sets the servers parameter to the set of nodes that MaxScale should monitor. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by mariadbmon in MaxScale 22.08 can also be specified.
password=MAXSCALE_USER_PASSWORD sets the password parameter to the password used by the database user account that MaxScale uses to connect to the ColumnStore nodes.
router_options=slave sets the router_options parameter to slave, so that MaxScale only routes connections to the replica nodes.
--servers sets the servers parameter to the set of nodes to which MaxScale should route connections. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by readconnroute in MaxScale 22.08 can also be specified.
protocol=MariaDBClient sets the protocol parameter.
Other Module Parameters supported by listeners in MaxScale 22.08 can also be specified.
password=MAXSCALE_USER_PASSWORD sets the password parameter to the password used by the database user account that MaxScale uses to connect to the ColumnStore nodes.
--servers sets the servers parameter to the set of nodes to which MaxScale should route queries. All non-option arguments after --servers are interpreted as server names.
Other Module Parameters supported by readwritesplit in MaxScale 22.08 can also be specified.
Other Module Parameters supported by listeners in MaxScale 22.08 can also be specified.
Connection-based load balancing
Routes connections to Enterprise ColumnStore nodes designated as replica servers for a read-only pool
Routes connections to an Enterprise ColumnStore node designated as the primary server for a read-write pool.
Query-based load balancing
Routes write queries to an Enterprise ColumnStore node designated as the primary server
Routes read queries to Enterprise ColumnStore nodes designated as replica servers
Automatically reconnects after node failures
Automatically replays transactions after node failures
Optionally enforces causal reads
To use a specific IAM role, you must uncomment and set iam_role_name, sts_region, and sts_endpoint.
To use the IAM role assigned to an EC2 instance, you must uncomment ec2_iam_mode=enabled.
character_set_server
Set this system variable to utf8
collation_server
Set this system variable to utf8_general_ci
columnstore_use_import_for_batchinsert
Set this system variable to ALWAYS to always use cpimport for LOAD DATA INFILE and INSERT...SELECT statements.
Include the required headers.
net.core.netdev_max_backlog
2500
Sets the maximum number of packets that can be queued for a network device. A higher value allows for more packets to be queued, improving performance.
net.core.rmem_max
16777216
Sets the maximum receive buffer size for TCP sockets. A higher value allows for larger buffers, improving performance.
net.core.wmem_max
16777216
Sets the maximum send buffer size for TCP sockets. A higher value allows for larger buffers, improving performance.
net.ipv4.tcp_max_syn_backlog
8192
Sets the maximum number of queued SYN requests. A higher value allows for more queued requests, improving performance.
net.ipv4.tcp_timestamps
0
Disables TCP timestamps, reducing overhead and improving performance.
vm.max_map_count
4,262,144
Increases the maximum number of memory map areas a process may have. The default is 65,530, which can be too low for workloads like MariaDB ColumnStore. Raising this prevents mapping errors for processes that need large address spaces.
kernel.pid_max
4,194,304
Defines the maximum process ID value. Older Linux versions defaulted to 32,768; newer versions default to 4,194,304. Raising this ensures support for systems running a very large number of processes concurrently.
kernel.threads-max
2,000,000
Specifies the maximum number of threads allowed on the system. The default varies depending on available RAM. A value of 2 million is suitable for systems with 32–64GB RAM. Increase further if running with more RAM or requiring more threads.
vm.overcommit_memory
1
Sets the kernel memory overcommit mode. A value of 1 allows all allocation requests without heuristic checks, which prevents large MariaDB ColumnStore allocations from being refused.
vm.dirty_background_ratio
5
Sets the percentage of dirty memory that can be written back to disk in the background. A lower value reduces the amount of dirty memory, improving performance.
vm.dirty_ratio
10
Sets the percentage of dirty memory that can be written back to disk before the kernel starts to write out clean pages. A lower value reduces the amount of dirty memory, improving performance.
vm.vfs_cache_pressure
50
Sets the pressure level for the kernel's VFS cache. A lower value reduces the amount of memory used by the VFS cache, improving performance.
Use Systemd to test whether the MariaDB Enterprise Server service is running. This action is performed on each Enterprise ColumnStore node.
Check if the MariaDB Enterprise Server service is running by executing the following:
If the service is not running on any node, start the service by executing the following on that node:
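A sketch of both commands is shown below; mariadb is the usual systemd unit name for MariaDB Enterprise Server, but confirm it on your system:
$ sudo systemctl status mariadb
$ sudo systemctl start mariadb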
Use MariaDB Client to test the local connection to the Enterprise Server node.
This action is performed on each Enterprise ColumnStore node:
The sudo command is used here to connect to the Enterprise Server node using the root@localhost user account, which authenticates using the unix_socket authentication plugin. Other user accounts can be used by specifying the --user and --password command-line options.
Query the information_schema.PLUGINS table to confirm that the ColumnStore storage engine is loaded.
This action is performed on each Enterprise ColumnStore node.
Execute the following query:
The PLUGIN_STATUS column for each ColumnStore-related plugin should contain ACTIVE.
Use Systemd to test whether the CMAPI service is running. This action is performed on each Enterprise ColumnStore node.
Check if the CMAPI service is running by executing the following:
If the service is not running on any node, start the service by executing the following on that node:
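A sketch of both commands, assuming the standard mariadb-columnstore-cmapi unit name, is shown below:
$ sudo systemctl status mariadb-columnstore-cmapi
$ sudo systemctl start mariadb-columnstore-cmapi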
Use CMAPI to request the ColumnStore status. The API key needs to be provided as part of the X-API-key HTTP header.
This action is performed with the CMAPI service on the primary server.
Check the ColumnStore status using curl by executing the following:
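The following curl sketch mirrors the CMAPI calls shown elsewhere in this procedure; replace API_KEY with your configured API key and adjust the host if you are not on the primary node:
$ curl -k -s https://127.0.0.1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:API_KEY' \
| jq .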
Use MariaDB Client to test DDL.
On the primary server, use the MariaDB Client to connect to the node:
Create a test database and ColumnStore table:
On each replica server, use the MariaDB Client to connect to the node:
Confirm that the database and table exist:
If the database or table do not exist on any node, then check the replication configuration.
Use MariaDB Client to test DML.
On the primary server, use the MariaDB Client to connect to the node:
Insert sample data into the table created in the DDL test:
On each replica server, use the MariaDB Client to connect to the node:
Execute a query to retrieve the data:
If the data is not returned on any node, check the ColumnStore status and the storage configuration.
Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".
This page was step 5 of 9.
This page discusses how to backup and restore Enterprise ColumnStore when it uses S3-compatible object storage for data and shared local storage (such as NFS) for the Storage Manager directory.
Any file can become corrupt due to hardware issues, crashes, power loss, and other reasons. If the Enterprise ColumnStore data or metadata become corrupt, Enterprise ColumnStore could become unusable, resulting in data loss.
If Enterprise ColumnStore is your system of record, it should be backed up regularly.
If Enterprise ColumnStore uses S3-compatible object storage for data and shared local storage for the Storage Manager directory, the following items must be backed up:
The MariaDB Data directory is backed up using MariaDB Enterprise Backup (mariadb-backup).
The S3 bucket must be backed up using the vendor's snapshot procedure.
The Storage Manager directory must be backed up.
See the instructions below for more details.
Use the following process to take a backup:
Determine which node is the primary server using curl to send the status command to the CMAPI Server:
The output will show "dbrm_mode": "master" for the primary server:
Connect to the primary server using MariaDB Client as a user account that has privileges to lock the database:
Lock the database with the FLUSH TABLES WITH READ LOCK statement:
Ensure that the client remains connected to the primary server, so that the lock is held for the remaining steps.
Make a copy or snapshot of the Storage Manager directory. By default, it is located at /var/lib/columnstore/storagemanager.
For example, to make a copy of the directory with rsync:
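A hypothetical rsync invocation, with a placeholder backup path, might be:
$ sudo rsync -av /var/lib/columnstore/storagemanager/ /backup/columnstore/storagemanager/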
Use mariadb-backup to back up the MariaDB data directory:
Use mariadb-backup --prepare to prepare the backup:
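A hedged sketch of the two mariadb-backup steps, with placeholder credentials and target directory, might look like this:
$ sudo mariadb-backup --backup --target-dir=/backup/mariadb --user=mariadb-backup --password='BACKUP_USER_PASSWORD'
$ sudo mariadb-backup --prepare --target-dir=/backup/mariadb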
Create a snapshot of the S3-compatible storage. Consult the storage vendor's manual for details on how to do this.
Ensure that all previous operations are complete.
In the original client connection to the primary server, unlock the database with the UNLOCK TABLES statement:
Use the following process to restore a backup:
Deploy Enterprise ColumnStore, so that you can restore the backup to an empty deployment.
Ensure that all services are stopped on each node:
Restore the backup of the Storage Manager directory. By default, it is located at /var/lib/columnstore/storagemanager.
For example, to restore the backup with rsync:
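A hypothetical rsync invocation, mirroring the backup path used earlier, might be:
$ sudo rsync -av /backup/columnstore/storagemanager/ /var/lib/columnstore/storagemanager/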
Use mariadb-backup --copy-back to restore the backup of the MariaDB data directory:
Restore the snapshot of your S3-compatible storage to the new S3 bucket that you plan to use. Consult the storage vendor's manual for details on how to do this.
Update storagemanager.cnf to configure Enterprise ColumnStore to use the S3 bucket. By default, it is located at /etc/columnstore/storagemanager.cnf.
For example:
The default local cache size is 2 GB.
The default local cache path is /var/lib/columnstore/storagemanager/cache.
Ensure that the local cache path has sufficient storage space to store the local cache.
The bucket option must be set to the name of the bucket that you created from your snapshot in the previous step.
To use an IAM role, you must also uncomment and set iam_role_name, sts_region, and sts_endpoint.
Start the services on each node:
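For example, on each node (the ColumnStore service itself is started as needed by CMAPI):
$ sudo systemctl start mariadb
$ sudo systemctl start mariadb-columnstore-cmapi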
Attempting to run a recursive CTE directly on the employees (ColumnStore) table:
This will result in the aforementioned error:
Here are three potential workarounds to address the recursive CTE limitation with MariaDB ColumnStore.
You can temporarily bypass ColumnStore's SELECT handler by disabling it at the session level before executing your recursive CTE and then re-enabling it afterwards.
Note: This workaround may not always be effective, as its success can depend on the specific MariaDB server version and table definitions.
If direct recursive CTEs fail or cause server crashes, you can simulate the recursive logic using a stored procedure and a temporary table. This approach iteratively populates the hierarchy.
First, create a temporary table to store the hierarchical data:
Next, create a stored procedure to iteratively populate the temp_org_chart table:
Finally, call the stored procedure and then select from the populated temporary table:
Another robust workaround is to clone the structure and data of the ColumnStore table into an InnoDB table. Once the data resides in an InnoDB table, you can execute the recursive CTE as usual, as InnoDB fully supports them.
This approach involves a few steps, often executed via shell commands interacting with the MariaDB client:
Extract and Modify CREATE TABLE Statement: Use SHOW CREATE TABLE to get the definition of your ColumnStore table, then modify it to change the engine to InnoDB and give the new table a different name (e.g., employees2).
Create New Table and Copy Data: Execute the modified CREATE TABLE script to create the new InnoDB table, then insert all data from the original ColumnStore table into it.
Run Recursive CTE on the InnoDB Table: Now, with the data in employees2 (an InnoDB table), you can run your recursive CTE without issues.
This page discusses how to backup and restore Enterprise ColumnStore when it uses shared local storage (such as NFS) for the DB Root directories.
Any file can become corrupt due to hardware issues, crashes, power loss, and other reasons. If the Enterprise ColumnStore data or metadata become corrupt, Enterprise ColumnStore could become unusable, resulting in data loss.
If Enterprise ColumnStore is your system of record, it should be backed up regularly.
If Enterprise ColumnStore uses shared local storage for the DB Root directories, the following items must be backed up:
The MariaDB data directory must be backed up using MariaDB Backup (mariadb-backup)
The Storage Manager directory must be backed up
Each DB Root directory must be backed up
See the instructions below for more details.
Use the following process to take a backup:
Determine which node is the primary server using curl to send the status command to the CMAPI Server:
The output will show "dbrm_mode": "master" for the primary server:
Connect to the primary server using MariaDB Client as a user account that has privileges to lock the database:
Lock the database with the FLUSH TABLES WITH READ LOCK statement:
Ensure that the client remains connected to the primary server, so that the lock is held for the remaining steps.
Make a copy or snapshot of the Storage Manager directory. By default, it is located at /var/lib/columnstore/storagemanager.
For example, to make a copy of the directory with rsync:
Make a copy or snapshot of the DB Root directories. By default, they are located at /var/lib/columnstore/dataN, where the N in dataN represents a range of integers that starts at 1 and stops at the number of nodes in the deployment.
For example, to make a copy of the directories with rsync in a 3-node deployment:
Use MariaDB Backup (mariadb-backup) to back up the MariaDB data directory:
Use MariaDB Backup to prepare the backup:
Ensure that all previous operations are complete.
In the original client connection to the primary server, unlock the database with the UNLOCK TABLES statement:
Use the following process to restore a backup:
Deploy Enterprise ColumnStore, so that you can restore the backup to an empty deployment.
Ensure that all services are stopped on each node:
Restore the backup of the Storage Manager directory. By default, it is located at /var/lib/columnstore/storagemanager.
For example, to restore the backup with rsync:
Restore the backup of the DB Root directories. By default, they are located at /var/lib/columnstore/dataN, where the N in dataN represents a range of integers that starts at 1 and stops at the number of nodes in the deployment.
For example, to restore the backup with rsync in a 3-node deployment:
Use MariaDB Backup to restore the backup of the MariaDB data directory:
Start the services on each node:
Before data can be imported into the tables, create a matching schema.
On the primary server, create the schema:
For each database that you are importing, create the database with the CREATE DATABASE statement:
For each table that you are importing, create the table with the CREATE TABLE statement:
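For example, the inventory.products table used in the import examples below:
CREATE DATABASE inventory;

CREATE TABLE inventory.products (
   product_name VARCHAR(11) NOT NULL DEFAULT '',
   supplier VARCHAR(128) NOT NULL DEFAULT '',
   quantity VARCHAR(128) NOT NULL DEFAULT '',
   unit_cost VARCHAR(128) NOT NULL DEFAULT ''
) ENGINE=Columnstore DEFAULT CHARSET=utf8;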
Enterprise ColumnStore supports multiple methods to import data into ColumnStore tables.
cpimport (Shell): SQL access is not required.
LOAD DATA INFILE (SQL): Shell access is not required.
Piping from a remote database: Use a normal database client and avoid dumping data to an intermediate file.
MariaDB Enterprise ColumnStore includes cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a TSV (tab-separated values) file, on the primary server run cpimport:
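For example, assuming the data is in /tmp/inventory-products.tsv and the target is the inventory.products table:
$ sudo cpimport -s '\t' inventory products /tmp/inventory-products.tsv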
When data is loaded with the LOAD DATA INFILE statement, MariaDB Enterprise ColumnStore loads the data using cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a TSV (tab-separated values) file, on the primary server use the LOAD DATA INFILE statement:
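For example:
LOAD DATA INFILE '/tmp/inventory-products.tsv'
INTO TABLE inventory.products;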
MariaDB Enterprise ColumnStore can also import data directly from a remote database. A simple method is to query the table using the SELECT statement, and then pipe the results into cpimport, which is a command-line utility that is designed to efficiently load data in bulk. Alternative methods are available.
To import your data from a remote MariaDB database:
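For example, query the remote table and pipe the tab-separated result set into cpimport; connection options for the remote server (such as --host) are omitted here for brevity:
$ mariadb --quick \
   --skip-column-names \
   --execute="SELECT * FROM inventory.products" \
   | cpimport -s '\t' inventory products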
Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".
This page was step 9 of 9.
This procedure is complete.
3306: Port used for MariaDB Client traffic
8600-8630: Port range used for inter-node communication
8640: Port used by CMAPI
8700: Port used for inter-node communication
8800: Port used for inter-node communication
Step 5: Test MariaDB Enterprise Server
This page details step 5 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".
This step tests MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
MariaDB Enterprise ColumnStore 23.10 includes a testS3Connection command to test the S3 configuration, permissions, and connectivity.
This action is performed on each Enterprise ColumnStore node.
Test the S3 configuration by executing the following:
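For example, a successful test looks similar to the following:
$ sudo testS3Connection
StorageManager[26887]: Using the config file found at /etc/columnstore/storagemanager.cnf
StorageManager[26887]: S3Storage: S3 connectivity & permissions are OK
S3 Storage Manager Configuration OK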
If the testS3Connection command does not return OK, investigate the S3 configuration.
Use Systemd to test whether the MariaDB Enterprise Server service is running.
This action is performed on each Enterprise ColumnStore node.
Check if the MariaDB Enterprise Server service is running by executing the following:
If the service is not running on any node, start the service by executing the following on that node:
Use MariaDB Client to test the local connection to the Enterprise Server node.
This action is performed on each Enterprise ColumnStore node:
The sudo command is used here to connect to the Enterprise Server node using the root@localhost user account, which authenticates using the unix_socket authentication plugin. Other user accounts can be used by specifying the --user and --password command-line options.
Query the information_schema.PLUGINS table to confirm that the ColumnStore storage engine is loaded.
This action is performed on each Enterprise ColumnStore node.
Execute the following query:
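For example:
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_LIBRARY LIKE 'ha_columnstore%';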
The PLUGIN_STATUS column for each ColumnStore-related plugin should contain ACTIVE.
Use Systemd to test whether the CMAPI service is running.
This action is performed on each Enterprise ColumnStore node.
Check if the CMAPI service is running by executing the following:
If the service is not running on any node, start the service by executing the following on that node:
Use CMAPI to request the ColumnStore status. The API key needs to be provided in the X-API-Key HTTP header.
This action is performed with the CMAPI service on the primary server.
Check the ColumnStore status using curl by executing the following:
Use MariaDB Client to test DDL.
On the primary server, use the MariaDB Client to connect to the node:
Create a test database and ColumnStore table:
On each replica server, use the MariaDB Client to connect to the node:
Confirm that the database and table exist:
If the database or table do not exist on any node, then check the replication configuration.
Use MariaDB Client to test DML.
On the primary server, use the MariaDB Client to connect to the node:
Insert sample data into the table created in the DDL test:
On each replica server, use the MariaDB Client to connect to the node:
Execute a query to retrieve the data:
If the data is not returned on any node, check the ColumnStore status and the storage configuration.
Navigation in the procedure "Deploy ColumnStore Object Storage Topology".
This page was step 5 of 9.
This guide provides steps for deploying a multi-node S3 ColumnStore, setting up the environment, installing the software, and bulk importing data for online analytical processing (OLAP) workloads.
This procedure describes the deployment of the Single-Node Enterprise ColumnStore topology with Object storage.
MariaDB Enterprise ColumnStore is a columnar storage engine for MariaDB Enterprise Server. Enterprise ColumnStore is best suited for Online Analytical Processing (OLAP) workloads.
This procedure has 5 steps, which are executed in sequence.
This page provides an overview of the topology, requirements, and deployment procedures.
Please read and understand this procedure before executing.
Customers can obtain support by contacting MariaDB Support.
The following components are deployed during this procedure:
The Single-Node Enterprise ColumnStore topology provides support for Online Analytical Processing (OLAP) workloads to MariaDB Enterprise Server.
The Enterprise ColumnStore node:
Receives queries from the application
Executes queries
Uses S3-compatible object storage for data
Single-Node Enterprise ColumnStore does not provide high availability (HA) for Online Analytical Processing (OLAP). If you would like to deploy Enterprise ColumnStore with high availability, see Enterprise ColumnStore with Object storage.
These requirements are for the Single-Node Enterprise ColumnStore, when deployed with MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.
Debian 11 (x86_64, ARM64)
Debian 12 (x86_64, ARM64)
Red Hat Enterprise Linux 8 (x86_64, ARM64)
Red Hat Enterprise Linux 9 (x86_64, PPC64LE, ARM64)
MariaDB Enterprise ColumnStore's minimum hardware requirements are not intended for production environments, but they can be appropriate for development and test environments. For production environments, see the recommended hardware requirements instead.
The minimum hardware requirements are:
MariaDB Enterprise ColumnStore will refuse to start if the system has less than 3 GB of memory.
If Enterprise ColumnStore is started on a system with less memory, the following error message will be written to the ColumnStore system log called crit.log:
And the following error message will be raised to the client:
MariaDB Enterprise ColumnStore's recommended hardware requirements are intended for production analytics.
The recommended hardware requirements are:
Single-node Enterprise ColumnStore with Object Storage requires the following storage type:
Single-node Enterprise ColumnStore with Object Storage uses S3-compatible object storage to store data.
Many S3-compatible object storage services exist. MariaDB Corporation cannot make guarantees about all S3-compatible object storage services, because different services provide different functionality.
For the preferred S3-compatible object storage providers that provide cloud and hardware solutions, see the following sections:
The use of non-cloud and non-hardware providers is at your own risk.
If you have any questions about using specific S3-compatible object storage with MariaDB Enterprise ColumnStore, contact MariaDB Support.
Amazon Web Services (AWS) S3
Google Cloud Storage
Azure Storage
Alibaba Cloud Object Storage Service
Cloudian HyperStore
Dell EMC
Seagate Lyve Rack
Quantum ActiveScale
MariaDB Enterprise Server Configuration Management
MariaDB Enterprise Server packages are configured to read configuration files from different paths, depending on the operating system. Making custom changes to Enterprise Server default configuration files is not recommended because custom changes may be overwritten by other default configuration files that are loaded later.
To ensure that your custom changes will be read last, create a custom configuration file with the z- prefix in one of the include directories.
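For example, on CentOS / RHEL a custom configuration file might look like the following sketch; the settings shown are only samples taken from elsewhere in this document:
# /etc/my.cnf.d/z-custom-mariadb.cnf
[mariadb]
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci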
The systemctl command is used to start and stop the MariaDB Enterprise Server service.
Navigation in the Single-Node Enterprise ColumnStore topology with Object storage deployment procedure:
These instructions detail the upgrade from MariaDB Enterprise ColumnStore 6 to MariaDB Enterprise ColumnStore 23.10 in a Multi-Node topology on a range of supported operating systems.
This action is performed for each replica server on the MaxScale node.
Prior to upgrading, the replica servers must be set to maintenance mode in MaxScale. The replicas can be set to maintenance mode using MaxScale's REST API. If you are using MaxCtrl, the replicas can be set to maintenance mode using the set server command:
[maxscale]
threads = auto
admin_host = 0.0.0.0
admin_secure_gui = false$ sudo systemctl restart maxscale$ maxctrl create server mcs1 192.0.2.101$ maxctrl create server mcs2 192.0.2.102$ maxctrl create server mcs3 192.0.2.103$ maxctrl create monitor columnstore_monitor mariadbmon \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
replication_user=repl \
replication_password='REPLICATION_USER_PASSWORD' \
--servers mcs1 mcs2 mcs3$ maxctrl create service connection_router_service readconnroute \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
router_options=slave \
--servers mcs1 mcs2 mcs3$ maxctrl create listener connection_router_service connection_router_listener 3308 \
protocol=MariaDBClient$ maxctrl create service query_router_service readwritesplit \
user=mxs \
password='MAXSCALE_USER_PASSWORD' \
--servers mcs1 mcs2 mcs3$ maxctrl create listener query_router_service query_router_listener 3307 \
protocol=MariaDBClient$ maxctrl start services[mariadb]
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci[ObjectStorage]
…
service = S3
…
[S3]
bucket = your_columnstore_bucket_name
endpoint = your_s3_endpoint
aws_access_key_id = your_s3_access_key_id
aws_secret_access_key = your_s3_secret_key
# iam_role_name = your_iam_role
# sts_region = your_sts_region
# sts_endpoint = your_sts_endpoint
# ec2_iam_mode = enabled
[Cache]
cache_size = your_local_cache_size
path = your_local_cache_path$ sudo systemctl start mariadb
$ sudo systemctl enable mariadb$ sudo systemctl start mariadb-columnstore
$ sudo systemctl enable mariadb-columnstoreCREATE USER 'util_user'@'127.0.0.1'
IDENTIFIED BY 'util_user_passwd';GRANT SELECT, PROCESS ON *.*
TO 'util_user'@'127.0.0.1';$ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
$ sudo mcsSetConfig CrossEngineSupport Port 3306
$ sudo mcsSetConfig CrossEngineSupport User util_user$ sudo mcsSetConfig CrossEngineSupport Password util_user_passwd$ sudo yum install policycoreutils policycoreutils-python$ sudo yum install policycoreutils python3-policycoreutils policycoreutils-python-utils$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
Nothing to do$ sudo semodule -i mariadb_local.pp# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted$ sudo setenforce enforcingmaxctrl unlink service \
mcs_service \
mcs3maxctrl show servicesmaxctrl unlink monitor \
mcs_monitor \
mcs3maxctrl show monitorsmaxctrl destroy server \
mcs3maxctrl show serverssudo systemctl stop mariadbsudo systemctl stop mariadb-columnstoresudo systemctl stop mariadb-columnstore-cmapicurl -k -s -X DELETE https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.3"}' \
| jq .curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/remove-node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.3"}' \
| jq .{
"timestamp": "2020-10-28 00:39:14.672142",
"node_id": "192.0.2.3"
}curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .{
"timestamp": "2020-12-15 00:40:34.353574",
"192.0.2.1": {
"timestamp": "2020-12-15 00:40:34.362374",
"uptime": 11467,
"dbrm_mode": "master",
"cluster_mode": "readwrite",
"dbroots": [
"1"
],
"module_id": 1,
"services": [
{
"name": "workernode",
"pid": 19202
},
{
"name": "controllernode",
"pid": 19232
},
{
"name": "PrimProc",
"pid": 19254
},
{
"name": "ExeMgr",
"pid": 19292
},
{
"name": "WriteEngine",
"pid": 19316
},
{
"name": "DMLProc",
"pid": 19332
},
{
"name": "DDLProc",
"pid": 19366
}
]
},
"192.0.2.2": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"num_nodes": 2
}vm.overcommit_memory=1
vm.dirty_background_ratio=5
vm.dirty_ratio=10
vm.vfs_cache_pressure=50
net.core.netdev_max_backlog=2500
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.ipv4.tcp_max_syn_backlog=8192
net.ipv4.tcp_timestamps=0sudo sysctl -pcat /proc/sys/kernel/threads-max
cat /proc/sys/kernel/pid_max
cat /proc/sys/vm/max_map_count
# RHEL: /etc/sysctl.conf
echo "vm.max_map_count=4262144" | sudo tee -a /etc/sysctl.conf
echo "kernel.pid_max = 4194304" | sudo tee -a /etc/sysctl.conf
echo "kernel.threads-max = 2000000" | sudo tee -a /etc/sysctl.conf
# There may be a file called 50-pid-max.conf or something similar. If so, modify it instead.
echo "vm.max_map_count=4262144" | sudo tee /usr/lib/sysctl.d/50-max_map_count.conf
echo "kernel.pid_max = 4194304" | sudo tee /usr/lib/sysctl.d/50-pid-max.conf
sudo sysctl -p$ systemctl status mariadb$ sudo systemctl start mariadb$ sudo mariadb
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 38
Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_LIBRARY LIKE 'ha_columnstore%';
+---------------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+---------------------+---------------+
| Columnstore | ACTIVE |
| COLUMNSTORE_COLUMNS | ACTIVE |
| COLUMNSTORE_TABLES | ACTIVE |
| COLUMNSTORE_FILES | ACTIVE |
| COLUMNSTORE_EXTENTS | ACTIVE |
+---------------------+---------------+$ systemctl status mariadb-columnstore-cmapi$ sudo systemctl start mariadb-columnstore-cmapi$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .{
"timestamp": "2020-12-15 00:40:34.353574",
"192.0.2.1": {
"timestamp": "2020-12-15 00:40:34.362374",
"uptime": 11467,
"dbrm_mode": "master",
"cluster_mode": "readwrite",
"dbroots": [
"1"
],
"module_id": 1,
"services": [
{
"name": "workernode",
"pid": 19202
},
{
"name": "controllernode",
"pid": 19232
},
{
"name": "PrimProc",
"pid": 19254
},
{
"name": "ExeMgr",
"pid": 19292
},
{
"name": "WriteEngine",
"pid": 19316
},
{
"name": "DMLProc",
"pid": 19332
},
{
"name": "DDLProc",
"pid": 19366
}
]
},
"192.0.2.2": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"192.0.2.3": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"num_nodes": 3
}$ sudo mariadbCREATE DATABASE IF NOT EXISTS test;
CREATE TABLE IF NOT EXISTS test.contacts (
first_name VARCHAR(50),
last_name VARCHAR(50),
email VARCHAR(100)
) ENGINE = ColumnStore;$ sudo mariadbSHOW CREATE TABLE test.contacts\G;$ sudo mariadbINSERT INTO test.contacts (first_name, last_name, email)
VALUES
("Kai", "Devi", "kai.devi@example.com"),
("Lee", "Wang", "lee.wang@example.com");$ sudo mariadbSELECT * FROM test.contacts;
+------------+-----------+----------------------+
| first_name | last_name | email |
+------------+-----------+----------------------+
| Kai | Devi | kai.devi@example.com |
| Lee | Wang | lee.wang@example.com |
+------------+-----------+----------------------+$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .{
"timestamp": "2020-12-15 00:40:34.353574",
"192.0.2.1": {
"timestamp": "2020-12-15 00:40:34.362374",
"uptime": 11467,
"dbrm_mode": "master",
"cluster_mode": "readwrite",
"dbroots": [
"1"
],
"module_id": 1,
"services": [
{
"name": "workernode",
"pid": 19202
},
{
"name": "controllernode",
"pid": 19232
},
{
"name": "PrimProc",
"pid": 19254
},
{
"name": "ExeMgr",
"pid": 19292
},
{
"name": "WriteEngine",
"pid": 19316
},
{
"name": "DMLProc",
"pid": 19332
},
{
"name": "DDLProc",
"pid": 19366
}
]
}$ mariadb --host=192.0.2.1 \
--user=root \
--passwordFLUSH TABLES WITH READ LOCK;$ sudo mkdir -p /backups/columnstore/202101291600/
$ sudo rsync -av /var/lib/columnstore/storagemanager /backups/columnstore/202101291600/$ sudo mkdir -p /backups/mariadb/202101291600/
$ sudo mariadb-backup --backup \
--target-dir=/backups/mariadb/202101291600/ \
--user=mariadb-backup \
--password=mbu_passwd$ sudo mariadb-backup --prepare \
--target-dir=/backups/mariadb/202101291600/UNLOCK TABLES;$ sudo systemctl stop mariadb-columnstore-cmapi
$ sudo systemctl stop mariadb-columnstore
$ sudo systemctl stop mariadb$ sudo rsync -av /backups/columnstore/202101291600/storagemanager/ /var/lib/columnstore/storagemanager/
$ sudo chown -R mysql:mysql /var/lib/columnstore/storagemanager$ sudo mariadb-backup --copy-back \
--target-dir=/backups/mariadb/202101291600/
$ sudo chown -R mysql:mysql /var/lib/mysql[ObjectStorage]
…
service = S3
…
[S3]
bucket = your_columnstore_bucket_name
endpoint = your_s3_endpoint
aws_access_key_id = your_s3_access_key_id
aws_secret_access_key = your_s3_secret_key
# iam_role_name = your_iam_role
# sts_region = your_sts_region
# sts_endpoint = your_sts_endpoint
[Cache]
cache_size = your_local_cache_size
path = your_local_cache_path$ sudo systemctl start mariadb
$ sudo systemctl start mariadb-columnstore-cmapiCREATE TABLE employees_innodb (
id INT PRIMARY KEY,
name VARCHAR(100),
manager_id INT -- references employees.id (nullable for top-level)
);
INSERT INTO employees_innodb (id, name, manager_id) VALUES
(1, 'CEO', NULL),
(2, 'VP of Sales', 1),
(3, 'Sales Rep A', 2),
(4, 'VP of Eng', 1),
(5, 'Eng A', 4),
(6, 'Eng B', 4);
CREATE TABLE employees (
id INT,
name VARCHAR(100),
manager_id INT -- references employees.id (nullable for top-level)
) engine=columnstore;
INSERT INTO employees (id, name, manager_id) VALUES
(1, 'CEO', NULL),
(2, 'VP of Sales', 1),
(3, 'Sales Rep A', 2),
(4, 'VP of Eng', 1),
(5, 'Eng A', 4),
(6, 'Eng B', 4);
WITH RECURSIVE org_chart AS (
-- Anchor: start with the top-level manager (CEO)
SELECT id, name, manager_id, 0 AS level
FROM employees
WHERE id = 1
UNION ALL
-- Recursive step: find employees who report to the previous level
SELECT e.id, e.name, e.manager_id, oc.level + 1
FROM employees e
JOIN org_chart oc ON e.manager_id = oc.id
)
SELECT * FROM org_chart;
ERROR 1178 (42000): The storage engine for the table doesn't support Recursive CTESET SESSION columnstore_select_handler=OFF;
WITH RECURSIVE org_chart AS (
-- Anchor: start with the top-level manager (CEO)
SELECT id, name, manager_id, 0 AS level
FROM employees
WHERE id = 1
UNION ALL
-- Recursive step: find employees who report to the previous level
SELECT e.id, e.name, e.manager_id, oc.level + 1
FROM employees e
JOIN org_chart oc ON e.manager_id = oc.id
)
SELECT * FROM org_chart;
SET SESSION columnstore_select_handler=ON;
CREATE TABLE temp_org_chart (
id INT,
name VARCHAR(100),
manager_id INT,
level INT
);
-- Initialize the temporary table with the top-level employees
INSERT INTO temp_org_chart (id, name, manager_id, level)
SELECT id, name, manager_id, 0 AS level FROM employees WHERE manager_id IS NULL;DELIMITER //
CREATE OR REPLACE PROCEDURE populate_org_chart()
BEGIN
DECLARE v_level INT DEFAULT 1;
DECLARE rows_inserted INT DEFAULT 1;
-- Loop until no more rows are inserted, indicating the hierarchy is fully traversed
WHILE rows_inserted > 0 DO
-- Insert employees who report to the previous level
INSERT INTO temp_org_chart (id, name, manager_id, level)
SELECT e.id, e.name, e.manager_id, v_level
FROM employees e
JOIN temp_org_chart t ON e.manager_id = t.id
WHERE t.level = v_level - 1
AND NOT EXISTS (
SELECT 1 FROM temp_org_chart x WHERE x.id = e.id
);
-- Get the number of rows inserted in the current iteration
SET rows_inserted = ROW_COUNT();
-- Increment the level for the next iteration
SET v_level = v_level + 1;
END WHILE;
END //
DELIMITER ;CALL populate_org_chart();
SELECT * FROM temp_org_chart;mariadb test -qsNe "SHOW CREATE TABLE employees" \
| awk -F '\t' '{print $2}' \
| sed -e 's/ENGINE=Columnstore/ENGINE=InnoDB/' \
-e 's/CREATE TABLE `employees`/CREATE TABLE `employees2`/' \
> create_employees2.sql
mariadb test < create_employees2.sql
mariadb test -e "INSERT INTO employees2 SELECT * FROM employees"WITH RECURSIVE org_chart AS (
-- Anchor: start with the top-level manager (CEO)
SELECT id, name, manager_id, 0 AS level
FROM employees2
WHERE id = 1
UNION ALL
-- Recursive step: find employees who report to the previous level
SELECT e.id, e.name, e.manager_id, oc.level + 1
FROM employees2 e
JOIN org_chart oc ON e.manager_id = oc.id
)
SELECT * FROM org_chart;$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .{
"timestamp": "2020-12-15 00:40:34.353574",
"192.0.2.1": {
"timestamp": "2020-12-15 00:40:34.362374",
"uptime": 11467,
"dbrm_mode": "master",
"cluster_mode": "readwrite",
"dbroots": [
"1"
],
"module_id": 1,
"services": [
{
"name": "workernode",
"pid": 19202
},
{
"name": "controllernode",
"pid": 19232
},
{
"name": "PrimProc",
"pid": 19254
},
{
"name": "ExeMgr",
"pid": 19292
},
{
"name": "WriteEngine",
"pid": 19316
},
{
"name": "DMLProc",
"pid": 19332
},
{
"name": "DDLProc",
"pid": 19366
}
]
}$ mariadb --host=192.0.2.1 \
--user=root \
--passwordFLUSH TABLES WITH READ LOCK;$ sudo mkdir -p /backups/columnstore/202101291600/
$ sudo rsync -av /var/lib/columnstore/storagemanager /backups/columnstore/202101291600/$ sudo rsync -av /var/lib/columnstore/data1 /backups/columnstore/202101291600/
$ sudo rsync -av /var/lib/columnstore/data2 /backups/columnstore/202101291600/
$ sudo rsync -av /var/lib/columnstore/data3 /backups/columnstore/202101291600/$ sudo mkdir -p /backups/mariadb/202101291600/
$ sudo mariadb-backup --backup \
--target-dir=/backups/mariadb/202101291600/ \
--user=mariadb-backup \
--password=mbu_passwd$ sudo mariadb-backup --prepare \
--target-dir=/backups/mariadb/202101291600/UNLOCK TABLES;$ sudo systemctl stop mariadb-columnstore-cmapi
$ sudo systemctl stop mariadb-columnstore
$ sudo systemctl stop mariadb$ sudo rsync -av /backups/columnstore/202101291600/storagemanager/ /var/lib/columnstore/storagemanager/
$ sudo chown -R mysql:mysql /var/lib/columnstore/storagemanager$ sudo rsync -av /backups/columnstore/202101291600/data1/ /var/lib/columnstore/data1/
$ sudo rsync -av /backups/columnstore/202101291600/data2/ /var/lib/columnstore/data2/
$ sudo rsync -av /backups/columnstore/202101291600/data3/ /var/lib/columnstore/data3/
$ sudo chown -R mysql:mysql /var/lib/columnstore/data1
$ sudo chown -R mysql:mysql /var/lib/columnstore/data2
$ sudo chown -R mysql:mysql /var/lib/columnstore/data3$ sudo mariadb-backup --copy-back \
--target-dir=/backups/mariadb/202101291600/
$ sudo chown -R mysql:mysql /var/lib/mysql$ sudo systemctl start mariadb
$ sudo systemctl start mariadb-columnstore-cmapiCREATE DATABASE inventory;CREATE TABLE inventory.products (
product_name VARCHAR(11) NOT NULL DEFAULT '',
supplier VARCHAR(128) NOT NULL DEFAULT '',
quantity VARCHAR(128) NOT NULL DEFAULT '',
unit_cost VARCHAR(128) NOT NULL DEFAULT ''
) ENGINE=Columnstore DEFAULT CHARSET=utf8;$ sudo cpimport -s '\t' inventory products /tmp/inventory-products.tsvLOAD DATA INFILE '/tmp/inventory-products.tsv'
INTO TABLE inventory.products;$ mariadb --quick \
--skip-column-names \
--execute="SELECT * FROM inventory.products" \
| cpimport -s '\t' inventory products# minimize swapping
vm.swappiness = 1
# Increase the TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# Increase the TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1
# for 1 GigE, increase this to 2500
# for 10 GigE, increase this to 30000
net.core.netdev_max_backlog = 2500$ sudo sysctl --load=/etc/sysctl.d/90-mariadb-enterprise-columnstore.conf$ sudo setenforce permissive# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted$ sudo getenforce
Permissive$ sudo systemctl disable apparmor$ sudo aa-statusapparmor module is loaded.
0 profiles are loaded.
0 profiles are in enforce mode.
0 profiles are in complain mode.
0 processes have profiles defined.
0 processes are in enforce mode.
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.$ sudo systemctl status firewalld$ sudo systemctl stop firewalld$ sudo ufw status verbose$ sudo ufw disable$ sudo yum install glibc-locale-source glibc-langpack-en$ sudo localedef -i en_US -f UTF-8 en_US.UTF-8192.0.2.1 mcs1
192.0.2.2 mcs2
192.0.2.3 mcs3
192.0.2.100 mxs1select calShowPartitions('orders','orderdate');
+-----------------------------------------+
| calShowPartitions('orders','orderdate') |
+-----------------------------------------+
| Part# Min Max Status
0.0.1 1992-01-01 1998-08-02 Enabled
0.1.2 1998-08-03 2004-05-15 Enabled
0.2.3 2004-05-16 2010-07-24 Enabled |
+-----------------------------------------+
1 row in set (0.05 sec)select calEnablePartitions('orders', '0.0.1');
+----------------------------------------+
| calEnablePartitions('orders', '0.0.1') |
+----------------------------------------+
| Partitions are enabled successfully. |
+----------------------------------------+
1 row in set (0.28 sec)select calShowPartitions('orders','orderdate');
+-----------------------------------------+
| calShowPartitions('orders','orderdate') |
+-----------------------------------------+
| Part# Min Max Status
0.0.1 1992-01-01 1998-08-02 Enabled
0.1.2 1998-08-03 2004-05-15 Enabled
0.2.3 2004-05-16 2010-07-24 Enabled |
+-----------------------------------------+
1 rows in set (0.05 sec)select calDisablePartitions('orders','0.0.1');
+----------------------------------------+
| calDisablePartitions('orders','0.0.1') |
+----------------------------------------+
| Partitions are disabled successfully. |
+----------------------------------------+
1 row in set (0.28 sec)select calShowPartitions('orders','orderdate');
+-----------------------------------------+
| calShowPartitions('orders','orderdate') |
+-----------------------------------------+
| Part# Min Max Status
0.0.1 1992-01-01 1998-08-02 Disabled
0.1.2 1998-08-03 2004-05-15 Enabled
0.2.3 2004-05-16 2010-07-24 Enabled |
+-----------------------------------------+
1 row in set (0.05 sec)select calDropPartitions('orders', '0.0.1');
+--------------------------------------+
| calDropPartitions('orders', '0.0.1') |
+--------------------------------------+
| Partitions are enabled successfully |
+--------------------------------------+
1 row in set (0.28 sec)select calShowPartitions('orders','orderdate');
+-----------------------------------------+
| calShowPartitions('orders','orderdate') |
+-----------------------------------------+
| Part# Min Max Status
0.1.2 1998-08-03 2004-05-15 Enabled
0.2.3 2004-05-16 2010-07-24 Enabled |
+-----------------------------------------+
1 row in set (0.05 sec)select calShowPartitionsByValue('orders','orderdate', '1992-01-01', '2010-07-24');
+----------------------------------------------------------------------------+
| calShowPartitionsbyvalue('orders','orderdate', '1992-01-02', '2010-07-24') |
+----------------------------------------------------------------------------+
| Part# Min Max Status
0.0.1 1992-01-01 1998-08-02 Enabled
0.1.2 1998-08-03 2004-05-15 Enabled
0.2.3 2004-05-16 2010-07-24 Enabled |
+----------------------------------------------------------------------------+
1 row in set (0.05 sec)select calEnablePartitionsByValue('orders','orderdate', '1992-01-01', '1998-08-02');
+--------------------------------------------------------------------------------+
| calenablepartitionsbyvalue ('orders', 'o_orderdate','1992-01-01','1998-08-02') |
+--------------------------------------------------------------------------------+
| Partitions are enabled successfully |
+--------------------------------------------------------------------------------+
1 row in set (0.28 sec)select calShowPartitionsByValue('orders','orderdate', '1992-01-01', '2010-07-24');
+----------------------------------------------------------------------------+
| calShowPartitionsbyvalue('orders','orderdate', '1992-01-02','2010-07-24' ) |
+----------------------------------------------------------------------------+
| Part# Min Max Status
0.0.1 1992-01-01 1998-08-02 Enabled
0.1.2 1998-08-03 2004-05-15 Enabled
0.2.3 2004-05-16 2010-07-24 Enabled |
+----------------------------------------------------------------------------+
1 rows in set (0.05 sec)select calDisablePartitionsByValue('orders','orderdate', '1992-01-01', '1998-08-02');
+---------------------------------------------------------------------------------+
| caldisablepartitionsbyvalue ('orders', 'o_orderdate','1992-01-01','1998-08-02') |
+---------------------------------------------------------------------------------+
| Partitions are disabled successfully |
+---------------------------------------------------------------------------------+
1 row in set (0.28 sec)select calShowPartitionsByValue('orders','orderdate', '1992-01-01', '2010-07-24');
+----------------------------------------------------------------------------+
| calShowPartitionsbyvalue('orders','orderdate', '1992-01-02','2010-07-24' ) |
+----------------------------------------------------------------------------+
| Part# Min Max Status
0.0.1 1992-01-01 1998-08-02 Disabled
0.1.2 1998-08-03 2004-05-15 Enabled
0.2.3 2004-05-16 2010-07-24 Enabled |
+----------------------------------------------------------------------------+
1 row in set (0.05 sec)select calDropPartitionsByValue('orders','orderdate', '1992-01-01', '1998-08-02');
+------------------------------------------------------------------------------+
| caldroppartitionsbyvalue ('orders', 'o_orderdate','1992-01-01','1998-08-02') |
+------------------------------------------------------------------------------+
| Partitions are enabled successfully. |
+------------------------------------------------------------------------------+
1 row in set (0.28 sec)select calShowPartitionsByValue('orders','orderdate', '1992-01-01', '2010-07-24');
+----------------------------------------------------------------------------+
| calShowPartitionsbyvalue('orders','orderdate', '1992-01-02','2010-07-24' ) |
+----------------------------------------------------------------------------+
| Part# Min Max Status
0.1.2 1998-08-03 2004-05-15 Enabled
0.2.3 2004-05-16 2010-07-24 Enabled |
+----------------------------------------------------------------------------+
1 row in set (0.05 sec)DELETE FROM orders WHERE orderdate <= '1998-12-31';Rocky Linux 8 (x86_64, ARM64)
Rocky Linux 9 (x86_64, ARM64)
Ubuntu 20.04 LTS (x86_64, ARM64)
Ubuntu 22.04 LTS (x86_64, ARM64)
Ubuntu 24.04 LTS (x86_64, ARM64)
Step 1
Step 2
Step 3
Step 4
Step 5
Modern SQL RDBMS with high availability, pluggable storage engines, hot online backups, and audit logging.
Columnar Storage Engine
Optimized for Online Analytical Processing (OLAP) workloads
S3-compatible object storage
Enterprise ColumnStore node
4+ cores
16+ GB
Enterprise ColumnStore node
64+ cores
128+ GB
Single-node Enterprise ColumnStore with Object Storage uses S3-compatible object storage to store data.
Configuration File
Configuration files (such as /etc/my.cnf) can be used to set system variables and options. The server must be restarted to apply changes made to configuration files.
Command-line
The server can be started with command-line options that set system variables and options.
SQL
Users can set system variables that support dynamic changes on-the-fly using the SET statement.
CentOS
Red Hat Enterprise Linux (RHEL)
/etc/my.cnf.d/z-custom-mariadb.cnf
Debian
Ubuntu
/etc/mysql/mariadb.conf.d/z-custom-mariadb.cnf
Start
sudo systemctl start mariadb
Stop
sudo systemctl stop mariadb
Restart
sudo systemctl restart mariadb
Enable during startup
sudo systemctl enable mariadb
Disable during startup
sudo systemctl disable mariadb
Status
sudo systemctl status mariadb
As the first argument, provide the name for the server
As the second argument, provide maintenance as the state
This action is performed on the MaxScale node.
Confirm that the replicas are set to maintenance mode in MaxScale using MaxScale's REST API. If you are using MaxCtrl, the state of the replicas can be viewed using the list servers command:
If the node is properly in maintenance mode, then the State column will show Maintenance as one of the states.
This action is performed on each replica server.
The gtid_strict_mode system variable must be disabled for this upgrade procedure. If the gtid_strict_mode system variable is enabled in any configuration files, disable it temporarily until the upgrade procedure is complete.
You can check if the gtid_strict_mode system variable is set in a configuration file by executing the my_print_defaults command with the --mysqld option:
If the gtid_strict_mode system variable is set, you can temporarily disable it by adding # in front of it in the configuration file, so that it will be treated as a comment and ignored:
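For example, the check and the temporary comment might look like this:
$ my_print_defaults --mysqld \
   | grep "gtid[-_]strict[-_]mode"
--gtid_strict_mode=1
And in the configuration file:
[mariadb]
...
# temporarily commented out for upgrade
# gtid_strict_mode=1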
Prior to upgrading, MariaDB Enterprise ColumnStore must be shutdown.
This action is performed on each ColumnStore node.
Prior to upgrading, several services must be stopped on each ColumnStore node:
Stop the CMAPI service:
Stop the MariaDB Enterprise ColumnStore service:
Stop the MariaDB Enterprise Server service:
MariaDB Corporation provides package repositories for YUM (RHEL, CentOS, Rocky Linux) and APT (Debian, Ubuntu).
Retrieve your Customer Download Token at https://customers.mariadb.com/downloads/token/ and substitute for CUSTOMER_DOWNLOAD_TOKEN in the following directions.
Configure the YUM package repository.
Enterprise ColumnStore 23.10 is included with MariaDB Enterprise Server 11.4. Pass the version to install using the --mariadb-server-version flag to mariadb_es_repo_setup.
To configure YUM package repositories:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.
Update MariaDB Enterprise Server and package dependencies:
Retrieve your Customer Download Token at https://customers.mariadb.com/downloads/token/ and substitute for CUSTOMER_DOWNLOAD_TOKEN in the following directions.
Configure the APT package repository.
Enterprise ColumnStore 23.10 is included with MariaDB Enterprise Server 11.4. Pass the version to install using the --mariadb-server-version flag to mariadb_es_repo_setup.
To configure APT package repositories:
Checksums of the various releases of the mariadb_es_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.
Update MariaDB Enterprise Server and package dependencies.
The update command depends on the installed APT version, which can be determined by executing the following command:
For versions prior to APT 2.0, execute the following command:
For APT 2.0 and later, execute the following command:
This action is performed on each ColumnStore node.
After upgrading, the MariaDB Enterprise ColumnStore service should be stopped, since it will be controlled by CMAPI:
CMAPI disables the Enterprise ColumnStore service in a multi-node deployment. The Enterprise ColumnStore service will be started as-needed by the CMAPI service, so it does not need to start automatically upon reboot.
This action is performed on each ColumnStore node.
After upgrading, the CMAPI service and the MariaDB Enterprise Server service must be started on each ColumnStore node:
Start the CMAPI service:
Start the MariaDB Enterprise Server service:
On the primary server, run mariadb-upgrade with the --write-binlog option to update the system tables and write the changes to the binary log, so that they are replicated to the replica servers:
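For example (run as a user account with sufficient privileges):
$ sudo mariadb-upgrade --write-binlog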
After upgrading, MariaDB Enterprise ColumnStore must be started.
This action is performed on each replica server.
If you temporarily disabled the gtid_strict_mode system variable in any configuration files during the Disable GTID Strict Mode step, re-enable it now.
This action is performed on each ColumnStore node.
After upgrading, it is recommended to confirm the Enterprise ColumnStore version on each ColumnStore node. Connect to the node using MariaDB Client and query the Columnstore_version status variable with SHOW GLOBAL STATUS:
This action is performed on each ColumnStore node.
After upgrading, it is recommended to confirm the ES version on each ColumnStore node. Connect to the node using MariaDB Client and query the version system variable with SHOW GLOBAL VARIABLES:
This action is performed for each replica server on the MaxScale node.
After the upgrade, maintenance mode for each replica must be cleared in MaxScale using MaxScale's REST API. If you are using MaxCtrl, maintenance mode can be cleared using the clear server command:
As the first argument, provide the name for the server
As the second argument, provide maintenance as the state
This action is performed for each replica server on the MaxScale node.
Confirm that maintenance mode in MaxScale has been cleared for each replica using MaxScale's REST API. If you are using MaxCtrl, the state of the replicas can be viewed using the list servers command:
If the node is no longer in maintenance mode, then the State column will no longer show Maintenance as one of the states.

Adding a Node to MariaDB Enterprise ColumnStore
To add a new node to Enterprise ColumnStore, perform the following procedure.
Before you can add a node to Enterprise ColumnStore, confirm that the Enterprise ColumnStore software has been deployed on the node in the desired topology.
For additional information, see "".
Apr 30 21:54:35 a1ebc96a2519 PrimProc[1004]: 35.668435 |0|0|0| C 28 CAL0000: Error total memory available is less than 3GB.ERROR 1815 (HY000): Internal error: System is not ready yet. Please try again.sudo systemctl stop mariadb-columnstore-cmapisudo systemctl stop mariadb-columnstoresudo systemctl stop mariadbsudo yum install curlcurl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setupecho "${checksum} mariadb_es_repo_setup" | sha256sum -c -chmod +x mariadb_es_repo_setupsudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--mariadb-server-version="11.4"sudo apt install curlcurl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setupecho "${checksum} mariadb_es_repo_setup" sha256sum -c -chmod +x mariadb_es_repo_setupsudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
--mariadb-server-version="11.4"sudo apt updatesudo systemctl start mariadb-columnstore-cmapisudo systemctl start mariadbmaxctrl set server \
mcs2 \
maintenancemaxctrl list servers┌────────┬───────────────┬──────┬─────────────┬──────────────────────┬────────┐
│ Server │ Address │ Port │ Connections │ State │ GTID │
├────────┼───────────────┼──────┼─────────────┼──────────────────────┼────────┤
│ mcs3 │ 192.0.2.3 │ 3306 │ 0 │ Maintenance, Running │ 0-1-17 │
├────────┼───────────────┼──────┼─────────────┼──────────────────────┼────────┤
│ mcs2 │ 192.0.2.2 │ 3306 │ 0 │ Maintenance, Running │ 0-1-17 │
├────────┼───────────────┼──────┼─────────────┼──────────────────────┼────────┤
│ mcs1 │ 192.0.2.1 │ 3306 │ 0 │ Master, Running │ 0-1-17 │
└────────┴───────────────┴──────┴─────────────┴──────────────────────┴────────┘my_print_defaults --mysqld \
| grep "gtid[-_]strict[-_]mode"--gtid_strict_mode=1[mariadb]
...
# temporarily commented out for upgrade
# gtid_strict_mode=1mcs cluster stopsudo systemctl stop mariadb-columnstore
sudo systemctl disable mariadb-columnstoremariadb-upgrade --write-binlogmcs cluster startSHOW GLOBAL STATUS LIKE 'Columnstore_version';+---------------------+---------+
| Variable_name | Value |
+---------------------+---------+
| Columnstore_version | 23.10.0 |
+---------------------+---------+SHOW GLOBAL VARIABLES LIKE 'version';+---------------+----------------------------------+
| Variable_name | Value |
+---------------+----------------------------------+
| version | 10.6.9-5-MariaDB-enterprise-log |
+---------------+----------------------------------+maxctrl clear server \
mcs2 \
maintenancemaxctrl list servers┌────────┬───────────────┬──────┬─────────────┬─────────────────┬─────────┐
│ Server │ Address │ Port │ Connections │ State │ GTID │
├────────┼───────────────┼──────┼─────────────┼─────────────────┼─────────┤
│ mcs3 │ 192.0.2.3 │ 3306 │ 0 │ Slave, Running │ 0-3-159 │
├────────┼───────────────┼──────┼─────────────┼─────────────────┼─────────┤
│ mcs2 │ 192.0.2.2 │ 3306 │ 0 │ Slave, Running │ 0-1-88 │
├────────┼───────────────┼──────┼─────────────┼─────────────────┼─────────┤
│ mcs1 │ 192.0.2.1 │ 3306 │ 0 │ Master, Running │ 0-1-88 │
└────────┴───────────────┴──────┴─────────────┴─────────────────┴─────────┘$ sudo testS3ConnectionStorageManager[26887]: Using the config file found at /etc/columnstore/storagemanager.cnf
StorageManager[26887]: S3Storage: S3 connectivity & permissions are OK
S3 Storage Manager Configuration OK$ systemctl status mariadb$ sudo systemctl start mariadb$ sudo mariadb
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 38
Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]>SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_LIBRARY LIKE 'ha_columnstore%';
+---------------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+---------------------+---------------+
| Columnstore | ACTIVE |
| COLUMNSTORE_COLUMNS | ACTIVE |
| COLUMNSTORE_TABLES | ACTIVE |
| COLUMNSTORE_FILES | ACTIVE |
| COLUMNSTORE_EXTENTS | ACTIVE |
+---------------------+---------------+$ systemctl status mariadb-columnstore-cmapi$ sudo systemctl start mariadb-columnstore-cmapi$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .
{
"timestamp": "2020-12-15 00:40:34.353574",
"192.0.2.1": {
"timestamp": "2020-12-15 00:40:34.362374",
"uptime": 11467,
"dbrm_mode": "master",
"cluster_mode": "readwrite",
"dbroots": [
"1"
],
"module_id": 1,
"services": [
{
"name": "workernode",
"pid": 19202
},
{
"name": "controllernode",
"pid": 19232
},
{
"name": "PrimProc",
"pid": 19254
},
{
"name": "ExeMgr",
"pid": 19292
},
{
"name": "WriteEngine",
"pid": 19316
},
{
"name": "DMLProc",
"pid": 19332
},
{
"name": "DDLProc",
"pid": 19366
}
]
},
"192.0.2.2": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"192.0.2.3": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"num_nodes": 3
}$ sudo mariadbCREATE DATABASE IF NOT EXISTS test;
CREATE TABLE IF NOT EXISTS test.contacts (
first_name VARCHAR(50),
last_name VARCHAR(50),
email VARCHAR(100)
) ENGINE = ColumnStore;$ sudo mariadbSHOW CREATE TABLE test.contacts\G;$ sudo mariadbINSERT INTO test.contacts (first_name, last_name, email)
VALUES
("Kai", "Devi", "kai.devi@example.com"),
("Lee", "Wang", "lee.wang@example.com");$ sudo mariadbSELECT * FROM test.contacts;
+------------+-----------+----------------------+
| first_name | last_name | email |
+------------+-----------+----------------------+
| Kai | Devi | kai.devi@example.com |
| Lee | Wang | lee.wang@example.com |
+------------+-----------+----------------------+Before the new node can be added, its MariaDB data directory must be consistent with the Primary Server. To ensure that it is consistent, take a backup of the Primary Server:
The instructions below show how to perform a backup using MariaDB Backup (mariadb-backup).
On the Primary Server, take a full backup:
Confirm successful completion of the backup operation.
On the Primary Server, prepare the backup:
Confirm successful completion of the prepare operation.
To make the new node consistent with the Primary Server, restore the new backup on the new node:
On the Primary Server, copy the backup to the new node:
On the new node, restore the backup using MariaDB Backup (mariadb-backup --copy-back).
On the new node, fix the file permissions of the restored backup:
The Enterprise Server, Enterprise ColumnStore, and CMAPI services can be started using the systemctl command. If the services were already started during the installation process, use the restart command.
Perform the following procedure on the new node:
Start and enable the MariaDB Enterprise Server service, so that it starts automatically upon reboot:
Start and disable the MariaDB Enterprise ColumnStore service, so that it does not start automatically upon reboot:
Note
The Enterprise ColumnStore service should not be enabled in a multi-node deployment. The Enterprise ColumnStore service will be started as-needed by the CMAPI service, so it does not require starting automatically upon reboot.
Start and enable the CMAPI service, so that it starts automatically upon reboot:
MariaDB Enterprise ColumnStore requires MariaDB Replication, which must be configured.
Get the GTID position that corresponds to the restored backup.
If the backup was taken with MariaDB Backup, this position will be located in the xtrabackup_binlog_info file:
The GTID position from the above output is 0-1-2001,1-2-5139.
Connect to the Replica Server using MariaDB Client as the root@localhost user account:
Set the gtid_slave_pos system variable to the GTID position:
Execute the CHANGE MASTER TO statement to configure the new node to connect to the Primary Server at this position:
The above statement configures the Replica Server to connect to a Primary Server located at 192.0.2.1 using the repl user account.
Start replication using the START REPLICA command:
The above statement configures the new node to connect to the Primary Server to retrieve new binary log events and replicate them into the local database.
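A minimal sketch of the statements referenced above, assuming the GTID position from the backup is 0-1-2001,1-2-5139 and replication uses the repl user account with the placeholder password shown earlier in this document:
SET GLOBAL gtid_slave_pos = '0-1-2001,1-2-5139';

CHANGE MASTER TO
   MASTER_HOST = '192.0.2.1',
   MASTER_USER = 'repl',
   MASTER_PASSWORD = 'REPLICATION_USER_PASSWORD',
   MASTER_USE_GTID = slave_pos;

START REPLICA;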
The new node must be added to Enterprise ColumnStore using CMAPI:
Add the node using the add-node endpoint path
Use a supported REST client, such as curl
Format the JSON output using jq for enhanced readability
Authenticate using the configured API key (passed in the x-api-key header)
Include the new node's hostname or IP address in the JSON request body
For example, if the primary node's host name is mcs1 and the new node's IP address is 192.0.2.3:
In ES 10.5.10-7 and later:
In ES 10.5.9-6 and earlier:
Example output:
To confirm that the node was properly added, the status of Enterprise ColumnStore should be checked using CMAPI:
Check the status using the status endpoint path
For example, if the primary node's host name is mcs1:
Example output:
A server object for the new node must also be added to MaxScale using MaxCtrl:
Use MaxCtrl or another supported REST client
Add the server object using the create server command
As the first argument, provide a name for the server
As the second argument, provide the IP address for the node
For example:
To confirm that the server object was properly added, the server objects should be checked using MaxCtrl:
Show the server objects using the show servers command
For example:
The server object for the new node must be linked to the monitor using MaxCtrl:
Link a server object to the monitor using the link monitor command
As the first argument, provide the name of the monitor
As the second argument, provide the name of the server
To confirm that the server object was properly linked to the monitor, the monitor should be checked using MaxCtrl:
Show the monitors using the show monitors command
For example:
The server object for the new node must be linked to the service using MaxCtrl:
Link the server object to the service using the link service command
As the first argument, provide the name of the service
As the second argument, provide the name of the server
To confirm that the server object was properly linked to the service, the service should be checked using MaxCtrl:
Show the services using the show services command
For example:
MaxScale is capable of checking the replication status of the servers using MaxCtrl:
List the servers using the list servers command
For example:
If the new node is properly replicating, then the State column will show Slave, Running.
This guide explains how to upgrade MariaDB Enterprise Server (ES) and MariaDB Enterprise ColumnStore across all nodes in a cluster using the unified mcs command-line tool, which only needs to be run once.
sudo mariadb-backup --backup \
--user=mariabackup_user \
--password=mariabackup_passwd \
--target-dir=/data/backup/replica_backup
sudo mariadb-backup --prepare \
--target-dir=/data/backup/replica_backup
sudo rsync -av /data/backup/replica_backup 192.0.2.3:/data/backup/
sudo mariadb-backup --copy-back \
--target-dir=/data/backup/replica_backup
sudo chown -R mysql:mysql /var/lib/mysql
sudo systemctl restart mariadb
sudo systemctl enable mariadb
sudo systemctl restart mariadb-columnstore
sudo systemctl disable mariadb-columnstore
sudo systemctl restart mariadb-columnstore-cmapi
sudo systemctl enable mariadb-columnstore-cmapi
cat xtrabackup_binlog_info
mariadb-bin.000096 568 0-1-2001,1-2-5139
sudo mariadb
curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.3"}' \
| jq .
curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/add-node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.3"}' \
| jq .
{
"timestamp": "2020-10-28 00:39:14.672142",
"node_id": "192.0.2.3"
}
curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .
{
"timestamp": "2020-12-15 00:40:34.353574",
"192.0.2.1": {
"timestamp": "2020-12-15 00:40:34.362374",
"uptime": 11467,
"dbrm_mode": "master",
"cluster_mode": "readwrite",
"dbroots": [
"1"
],
"module_id": 1,
"services": [
{
"name": "workernode",
"pid": 19202
},
{
"name": "controllernode",
"pid": 19232
},
{
"name": "PrimProc",
"pid": 19254
},
{
"name": "ExeMgr",
"pid": 19292
},
{
"name": "WriteEngine",
"pid": 19316
},
{
"name": "DMLProc",
"pid": 19332
},
{
"name": "DDLProc",
"pid": 19366
}
]
},
"192.0.2.2": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"192.0.2.3": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"num_nodes": 3
}
maxctrl create server \
mcs3 \
192.0.2.3
maxctrl show servers
maxctrl link monitor \
mcs_monitor \
mcs3
maxctrl show monitors
maxctrl link service \
mcs_service \
mcs3
maxctrl show services
maxctrl list servers
sudo yum update "MariaDB-*" "MariaDB-columnstore-engine" "MariaDB-columnstore-cmapi"
apt --version
apt 2.0.9 (amd64)
sudo apt install --only-upgrade "mariadb*"
sudo apt install --only-upgrade '?upgradable ?name(mariadb.*)'
The mcs command must be run as root. Either become root, or prefix the mcs commands on this page with sudo.
The mcs install_es command:
Validates your MariaDB Enterprise Repository access using an ES API token.
Stops ColumnStore and MariaDB services in a controlled sequence.
Installs/configures the ES repository for the target version.
Creates a pre‑upgrade backup of ColumnStore DBRM and config files on each node.
Upgrades MariaDB Enterprise Server, ColumnStore, and CMAPI.
Waits for CMAPI to come back online on each node and, for upgrades, automatically restarts services.
Administrative privileges on all cluster nodes (package installation and service management required).
A valid ES API token with access to the MariaDB Enterprise Repository.
Network access from the nodes to the MariaDB Enterprise Repository endpoints.
A maintenance window: the upgrade will stop ColumnStore and MariaDB services.
Recent backups:
At a minimum, ensure Extent Map and configuration backups exist.
Recommended: take a full backup with the mcs backup command.
Related docs:
General backup and restore guidance: ColumnStore Backup and Restore
Always back up your data before upgrading. While the tool performs a pre‑upgrade backup of DBRM and configs, it is not a substitute for a full database backup.
The command can target a specific ES version, or use the latest tested version (currently latest 10.6 version).
Install latest tested version (if you omit the --version option, mcs uses the latest version):
Install a specific version:
Proceed even if nodes report different installed package versions (use the majority version as baseline):
Options summary:
--token TEXT: The ES API token to use for the upgrade (required); obtain it from the MariaDB Customer Portal.
-v, --version TEXT: ES version to install; if omitted or set to latest, upgrades to the latest tested version.
For a different version, specify something like --version 10.6.23-19 or --version 11.4.8-5 .
--ignore-mismatch: Continue even if cluster nodes report different package versions; uses majority versions as the baseline.
Stop or pause write workloads and heavy ingestion (e.g., cpimport, large INSERT/LOAD DATA jobs).
Drain or put traffic managers/proxies (for example, MaxScale) into maintenance/drain mode (see the sketch after this list).
Ensure you have administrative/SSH and package manager access on all nodes.
Verify time synchronization across all nodes (NTP/Chrony) to avoid coordination issues.
Confirm recent backups are complete and restorable.
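For the MaxScale drain step above, a minimal sketch, assuming the example server names mcs1 through mcs3 used elsewhere in this document:
# Put each ColumnStore server object into maintenance mode before the upgrade...
maxctrl set server mcs1 maintenance
maxctrl set server mcs2 maintenance
maxctrl set server mcs3 maintenance
# ...and clear maintenance mode once the upgrade has been verified.
maxctrl clear server mcs1 maintenance
maxctrl clear server mcs2 maintenance
maxctrl clear server mcs3 maintenance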
Validate token and target version.
If --version=latest, the tool resolves the latest tested ES version.
If a specific version is requested, it is validated against the repository. Some versions may exist only for specific operating systems.
Stop services.
Gracefully stops the ColumnStore cluster.
Stops the MariaDB server.
Configure repository.
Installs/configures the MariaDB Enterprise Server repository for the chosen version on each node automatically.
Validates the installed repository on each node separately.
Pre‑upgrade backups (per node).
Creates a backup of DBRM and key configuration files, named preupgrade_dbrm_backup, in the default backup directory.
Upgrade packages (per node).
Upgrades MariaDB Enterprise Server and ColumnStore packages.
Upgrades CMAPI and waits for it to become ready again on each node (up to 5 minutes).
Service handling after upgrade.
On upgrades: automatically restarts MariaDB and the ColumnStore cluster.
On downgrades: automatic restarts are intentionally skipped; manual steps are required.
Run mcs cluster status to verify all services are up and the cluster is healthy. In case of a failure:
Verify CMAPI readiness on all nodes (for example, via mcs or an external monitoring tool).
Run a quick smoke test:
Create a small ColumnStore table, insert a few rows, and run a SELECT query (as sketched after this list).
Check for errors in server/ColumnStore logs.
Review /var/tmp/mcs_cli_install_es.log for the full sequence, and ensure no errors were reported.
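A minimal smoke-test sketch; the database and table names are illustrative:
# Create a small ColumnStore table, insert a few rows, read them back, then clean up.
sudo mariadb -e "CREATE DATABASE IF NOT EXISTS smoke;
CREATE TABLE smoke.t1 (id INT, msg VARCHAR(32)) ENGINE=ColumnStore;
INSERT INTO smoke.t1 VALUES (1, 'one'), (2, 'two');
SELECT * FROM smoke.t1;
DROP DATABASE smoke;"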
Downgrades are supported up to MariaDB 10.6.9-5 and ColumnStore 22.08.4.
When downgrading, the tool doesn't automatically restart services. Complete these steps manually:
Start MariaDB on each node (for example, via your service manager).
Start the ColumnStore cluster (for example, using the mcs cluster start command).
Verify cluster health before resuming traffic.
Downgrades can cause data loss or cluster inconsistency if not planned and validated. Always test and ensure backups are restorable.
After a successful upgrade, or after downgrading and a manual restart:
Validate that CMAPI is ready on all nodes:
mcs cmapi is-ready
Check ColumnStore and MariaDB services are running and the cluster is healthy:
mcs cluster status
The mcs install_es command writes a detailed run log to:
/var/tmp/mcs_cli_install_es.log
If CMAPI readiness times out or services do not start cleanly, review:
CMAPI logs: /var/log/mariadb/columnstore/cmapi_server.log
Service logs on each node: /var/log/mariadb/columnstore/
The install_es log file (/var/tmp/mcs_cli_install_es.log) for the full sequence and any errors
Mixed package versions across nodes.
If nodes report different installed versions of Server/ColumnStore/CMAPI, the command fails with a mismatch message.
You can force continuation with --ignore-mismatch; the tool uses the majority version per package as the baseline, but this carries risk—align versions whenever possible.
CMAPI readiness timeout
After upgrading CMAPI, the command waits up to 300 seconds per node for readiness.
On slow nodes or constrained environments, this timeout may be insufficient, and the command exits with a failure; verify services manually and adjust operational expectations.
Downgrade restarts are skipped by design.
After a downgrade, automatic restarts are not performed; you must start MariaDB and the ColumnStore cluster manually and validate health.
ColumnStore skips automatic restarts because it cannot guarantee that all expected API endpoints exist or are backward compatible.
MaxScale maintenance handling not automated.
Transitioning MaxScale to maintenance/normal mode during upgrades is not automated at this time; manage traffic routing and maintenance state manually if applicable.
Repository access and version validation.
Invalid tokens, network restrictions, or unsupported version strings can result in validation errors (for example, HTTP 422). Ensure the token has the correct entitlements and the requested version exists for your platform.
Single‑node detection.
If no active nodes are detected, the tool falls back to localhost only; ensure this matches your topology.
Downgrading to 22.08.4 (10.6.9-5) technically works, but it finishes with known issues:
An error is reported while waiting for CMAPI to become ready. In practice, CMAPI starts and works fine (check mcs status and systemctl status mariadb-columnstore-cmapi on each node).
Running a mariadb command may fail with an error about an unknown configuration option. The tool preserves the current configuration files while installing packages, and the older MariaDB version does not support newer configuration options. To fix this, remove the unsupported option from the configuration file, or restore the configuration shipped with the installed package.
The tool currently supports a limited set of packages.
Only the MariaDB-server (and its dependencies), MariaDB-columnstore-engine (MariaDB-plugin-columnstore), and MariaDB-columnstore-cmapi packages are removed and installed. Packages such as MariaDB-backup are not currently supported and must be upgraded or downgraded manually.
Re‑run with -v/--verbose to enable console debug logging.
Inspect /var/tmp/mcs_cli_install_es.log for the complete sequence and API responses.
If package repository installation fails, verify token validity and outbound access from all nodes.
If CMAPI does not become ready, check service logs on each node.
For mismatched node versions, align package versions before re‑running, or proceed with --ignore-mismatch , but only after assessing the risk.
Cluster state: ColumnStore cluster should be healthy before starting.
Node access: All nodes must be reachable (SSH/admin access) and responsive.
Disk space: Ensure sufficient free space for package downloads and pre-upgrade backups.
Internet access: Nodes must reach MariaDB Enterprise repositories (per your operating system).
CMAPI communication: Port 8640 (default) must be reachable between nodes.
Time sync: Keep NTP/Chrony synchronized across nodes.
Downgrades can be destructive.
A downgrade prompts for confirmation. After the downgrade, services are not restarted automatically; start MariaDB and the ColumnStore cluster manually and verify health.
If the upgrade fails or CMAPI does not become ready on all nodes:
Review the detailed log at /var/tmp/mcs_cli_install_es.log for errors.
Check service status on each node:
systemctl status mariadb
systemctl status mariadb-columnstore-cmapi
Verify network/ports (CMAPI 8640) and repository reachability.
Manually start services if safe to do so:
systemctl start mariadb
mcs start (or mcs cluster start)
If corruption is suspected, follow your backup recovery plan (for example, restore from a recent backup and/or extent map backup).
Prior to upgrading:
Create a full backup and verify restore procedures.
Test the process in staging with similar topology/data.
Document current package versions and configs.
Schedule a maintenance window and inform stakeholders.
During upgrading:
Monitor the console output and /var/tmp/mcs_cli_install_es.log .
Avoid interrupting the process; ensure network stability.
After upgrading:
Validate services and cluster health (mcs cluster status).
Run basic data integrity and application smoke tests.
Monitor performance and logs for regressions.
Contact MariaDB Support if you encounter unexpected failures, data issues, or performance regressions. Provide:
The complete log file: /var/tmp/mcs_cli_install_es.log .
The mcs review logs: mcs review --logs .
The exact command used (with parameters, masking sensitive values).
Cluster topology (nodes, versions, operating system, network).
Source and target versions (Server, ColumnStore, CMAPI).
Exact error messages and timestamps.
Command reference: mcs install_es in the command-line tool help and tool README.
Backups: mcs backup and Extent Map backup guidance.
Cluster management: mcs cluster start|stop|status .
mcs install_es --token <ES_API_TOKEN> --version latest
mcs install_es --token <ES_API_TOKEN> --version <ES_VERSION>
mcs install_es --token <ES_API_TOKEN> --version <ES_VERSION> --ignore-mismatch
mcs install_es --token <ES_API_TOKEN> --version 10.6.15-10
SET GLOBAL gtid_slave_pos='0-1-2001,1-2-5139';
CHANGE MASTER TO
MASTER_USER = "repl",
MASTER_HOST = "192.0.2.1",
MASTER_PASSWORD = "repl_passwd",
MASTER_USE_GTID=slave_pos;
START SLAVE;
Controls whether disk joins are forced to run even if they are not estimated to be the most efficient execution plan. This can be useful for debugging purposes or for situations where the optimizer's estimates are not accurate.
Scope: global, session
Data type:
Default value: OFF
Range: ON, OFF
Introduced in: MariaDB Enterprise Server 10.6
Sets the maximum depth of the partition tree that can be used for disk joins. A higher value allows for more complex joins, but may also increase the memory usage and execution time.
Scope: global, session
Data type:
Default value: 10
Sets the maximum number of values that can be used in an IN predicate on a Columnstore table. This limit helps to prevent performance issues caused by queries with a large number of IN values.
Scope: global, session
Data type:
Default value: 10000
Sets the maximum number of rows that can be returned by a parallel merge join on a Columnstore table. This limit helps to prevent memory issues caused by joins that return a large number of rows.
Scope: global, session
Data type:
Default value: 1000000
Command line: Yes
Scope: global, session
Data type:
Default value: 2
Command line: Yes
Scope: global, session
Data type:
Default value: 8
Command line: Yes
Scope: global, session
Data type:
Default value: 100
Command line: Yes
Scope: global, session
Data type:
Default value: 0
Command line: Yes
Scope: global, session
Data type:
Default value: 0
Command line: Yes
Scope: global, session
Data type:
Default value: OFF
Command line: Yes
Scope: global, session
Data type:
Default value: 7
Command line: Yes
Scope: global, session
Data type:
Default value: 17
Command line: Yes
Scope: global, session
Data type:
Default value: 0
Command line: Yes
Scope: global, session
Data type:
Default value: OFF
Command line: Yes
Scope: global, session
Data type:
Default value: 10
Command line: Yes
Scope: global, session
Data type:
Default value: 20
Command line: Yes
Scope: global, session
Data type:
Default value: 0
Command line: Yes
Scope: global, session
Data type:
Default value: OFF
Command line: Yes
Scope: global, session
Data type:
Default value: ON
Command line: Yes
Scope: global, session
Data type:
Default value: ON
Command line: Yes
Scope: global, session
Data type:
Default value: 1
MariaDB ColumnStore has the ability to compress data. This is controlled through a compression mode, which can be set as a default for the instance or set at the session level.
To set the compression mode at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance:
where n is:
compression is turned off. Any subsequent table create statements run will have compression turned off for that table unless any statement overrides have been performed. Any alter statements run to add a column will have compression turned off for that column unless any statement override has been performed.
compression is turned on. Any subsequent table create statements run will have compression turned on for that table unless any statement overrides have been performed. Any alter statements run to add a column will have compression turned on for that column unless any statement override has been performed. ColumnStore uses snappy compression in this mode.
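A minimal session-level sketch; the table name is illustrative, and the value 2 (snappy) is commonly the "on" setting while 0 disables compression (confirm the valid values for your release):
# Enable snappy compression for tables created in this session.
sudo mariadb -e "SET SESSION infinidb_compression_type = 2;
CREATE TABLE test.compressed_demo (id INT, val VARCHAR(64)) ENGINE=ColumnStore;"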
MariaDB ColumnStore has the ability to change intermediate decimal mathematical results from decimal type to double. The decimal type has approximately 17-18 digits of precision, but a smaller maximum range. Whereas the double type has approximately 15-16 digits of precision, but a much larger maximum range.
In typical mathematical and scientific applications, the ability to avoid overflow in intermediate results with double math is likely more beneficial than the additional two digits of precision. In banking applications, however, it may be more appropriate to keep the default decimal setting to ensure accuracy to the least significant digit.
The infinidb_double_for_decimal_math variable is used to control the data type for intermediate decimal results. This double-for-decimal math may be set as a default for the instance, set at the session level, or at the statement level by toggling this variable on and off.
To enable/disable the use of the decimal to double math at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance:
where n is:
off (disabled, default)
on (enabled)
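A minimal session-level sketch; the table and columns are hypothetical:
# Use double math for intermediate decimal results in this session only.
sudo mariadb -e "SET SESSION infinidb_double_for_decimal_math = ON;
SELECT SUM(price * qty) FROM sales.line_items;"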
ColumnStore has the ability to support varied internal precision on decimal calculations. infinidb_decimal_scale is used internally by the ColumnStore engine to control how many significant digits to the right of the decimal point are carried through in suboperations on calculated columns. If, while running a query, you receive the message ‘aggregate overflow’, try reducing infinidb_decimal_scale and running the query again.
Note that, as you decrease infinidb_decimal_scale, you may see reduced accuracy in the least significant digit(s) of a returned calculated column. infinidb_use_decimal_scale is used internally by the ColumnStore engine to turn the use of this internal precision on and off. These two system variables can be set as a default for the instance or at session level.
To enable/disable the use of the decimal scale at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance:
where n is off (disabled) or on (enabled).
To set the decimal scale at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance.
where n is the amount of precision desired for calculations.
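A minimal session-level sketch, for example after hitting an 'aggregate overflow' error; the table and columns are hypothetical:
# Enable internal decimal scale handling and reduce the carried precision for this session.
sudo mariadb -e "SET SESSION infinidb_use_decimal_scale = ON;
SET SESSION infinidb_decimal_scale = 4;
SELECT AVG(price * qty) FROM sales.line_items;"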
Joins are performed in memory. When a join operation exceeds the memory allocated for query joins, the query is aborted with an error code IDB-2001.
Disk-based joins enable such queries to use disk for intermediate join data in case when the memory needed for join exceeds the memory limit. Although slower in performance as compared to a fully in-memory join, and bound by the temporary space on disk, it does allow such queries to complete.
The following variables in the HashJoin element in the Columnstore.xml configuration file relate to disk-based joins. Columnstore.xml resides in /usr/local/mariadb/columnstore/etc/.
AllowDiskBasedJoin – Option to use disk-based joins. Valid values are Y (enabled) or N (disabled). Default is disabled.
TempFileCompression – Option to use compression for disk join files. Valid values are Y (use compressed files) or N (use non-compressed files).
TempFilePath – The directory path used for the disk joins. By default, this path is the tmp directory for your installation (i.e., /usr/local/mariadb/columnstore/tmp). Files (named infinidb-join-data*) in this directory are created and cleaned on an as-needed basis. The entire directory is removed and recreated by ExeMgr at startup.
In addition to the system-wide flags, the following system variables exist at the SQL global and session level for managing the per-user memory limit for joins.
infinidb_um_mem_limit - A value for memory limit in MB per user. When this limit is exceeded by a join, it will switch to a disk-based join. By default, the limit is not set (value of 0).
For modification at the global level:
In my.cnf file (typically /usr/local/mariadb/columnstore/mysql):
where value is the value in MB for in memory limitation per user.
For modification at the session level, before issuing your join query from the SQL client, set the session variable as follows.
MariaDB ColumnStore has the ability to utilize the cpimport fast data import tool for non-transactional LOAD DATA INFILE and INSERT INTO ... SELECT FROM SQL statements. Using this method results in a significant increase in performance when loading data through these two SQL statements. This optimization is independent of the storage engine used for the tables in the select statement.
The infinidb_use_import_for_batchinsert variable is used to control if cpimport is used for these statements. This variable may be set as a default for the instance, set at the session level, or at the statement level by toggling this variable on and off.
To enable/disable the use of the use cpimport for batch insert at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance.
where n is:
0 (disabled)
1 (enabled)
The infinidb_import_for_batchinsert_delimiter variable is used internally by MariaDB ColumnStore on a non-transactional INSERT INTO SELECT FROM statement as the default delimiter passed to the cpimport tool. With a default value ascii 7, there should be no need to change this value unless your data contains ascii 7 values.
To change this variable's value at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance.
where ascii_value is an ASCII value representation of the delimiter desired.
Note that this setting may cause issues with multi byte character set data. It is recommended to utilize UTF8 files directly with cpimport.
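A minimal session-level sketch of the cpimport-backed batch insert described above; the schemas and tables are hypothetical:
# Route a non-transactional INSERT INTO ... SELECT through cpimport for this session.
sudo mariadb -e "SET SESSION infinidb_use_import_for_batchinsert = 1;
INSERT INTO analytics.fact_sales SELECT * FROM staging.fact_sales;"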
If the following error is received, most likely with a transaction LOAD DATA INFILE or INSERT INTO SELECT, it is recommended to break up the load into multiple smaller chunks, increase the VersionBufferFileSize setting, consider a nontransactional LOAD DATA INFILE, or use cpimport.
The VersionBufferFileSize setting is updated in the ColumnStore.xml typically located under /usr/local/mariadb/columnstore/etc. This dictates the size of the version buffer file on disk which provides DML transactional consistency. The default value is '1GB' which reserves up to a 1 Gigabyte file size. Modify this on the primary node and restart the system if you require a larger value.
MariaDB ColumnStore has the ability to query data from just a single node instead of the whole cluster. To accomplish this, the infinidb_local_query variable in the my.cnf configuration file is used; it may be set as a system-wide default or set at the session level.
Local PrimProc query can be enabled system wide during the install process when running the install script postConfigure. Answer 'y' to this prompt during the install process:
To enable the use of the local PrimProc query at the instance level, specify infinidb_local_query =1 (enabled) in the my.cnf configuration file at /usr/local/mariadb/columnstore/mysql. The default is 0 (disabled).
To enable/disable the use of the local PrimProc query at the session level, the following statement is used. Once the session has ended, any subsequent session will return to the default for the instance:
where n is:
0 (disabled)
1 (enabled)
At the session level, this variable applies only to executing a query on an individual PrimProc node. The PrimProc must be set up with the local query option during installation.
With the infinidb_local_query variable set to 1 (default with local PrimProc Query):
With the infinidb_local_query variable set to 0 (default with local PrimProc Query):
Create a script (i.e., extract_query_script.sql in our example) similar to the following:
The infinidb_local_query is set to 0 to allow query across all PrimProc nodes.
The query is structured so PrimProc gets the fact table data locally from the PrimProc node (as indicated by the use of the idbPm() and idbLocalPm() functions), while the dimension table data is extracted from all the PrimProc nodes.
Then you can execute the script to pipe it directly into cpimport:
ColumnStore has the ability to support full MariaDB query syntax through an operating mode. This operating mode may be set as a default for the instance or set at the session level. To set the operating mode at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance.
where n is:
a generic, highly compatible row-by-row processing mode. Some WHERE clause components can be processed by ColumnStore, but joins are processed entirely by MySQL using a nested loop join mechanism.
(the default) query syntax is evaluated by ColumnStore for compatibility with distributed execution and incompatible queries are rejected. Queries executed in this mode take advantage of distributed execution and typically result in higher performance.
auto-switch mode: ColumnStore will attempt to process the query internally, if it cannot, it will automatically switch the query to run in row-by-row mode.
Introduced in: MariaDB Enterprise Server 10.6
Introduced in: MariaDB Enterprise Server 10.6
Introduced in: MariaDB Enterprise Server 10.6
SET infinidb_compression_type = n
SET infinidb_double_for_decimal_math = on
SET infinidb_use_decimal_scale = on
SET infinidb_decimal_scale = n
[mysqld]
...
infinidb_um_mem_limit = value
SET infinidb_um_mem_limit = value
SET infinidb_use_import_for_batchinsert = n
SET infinidb_import_for_batchinsert_delimiter = ascii_value
ERROR 1815 (HY000) at line 1 in file: 'ldi.sql': Internal error: CAL0006: IDB-2008: The version buffer overflowed. Increase VersionBufferFileSize or limit the rows to be processed.
NOTE: Local Query Feature allows the ability to query data from a single Performance
Module. Check MariaDB ColumnStore Admin Guide for additional information.
Enable Local Query feature? [y,n] (n) >
SET infinidb_local_query = n
mcsmysql -e 'select * from source_schema.source_table;' -N | /usr/local/Calpont/bin/cpimport target_schema target_table -s '\t' -n1
SET infinidb_local_query=0;
SELECT fact.column1, dim.column2
FROM fact JOIN dim USING (KEY)
WHERE idbPm(fact.KEY) = idbLocalPm();
mcsmysql source_schema -N < extract_query_script.sql | /usr/local/mariadb/columnstore/bin/cpimport target_schema target_table -s '\t' -n1
SET infinidb_vtable_mode = n
Step 4: Start and Configure MariaDB Enterprise Server
This page details step 4 of the 9-step procedure "Deploy ColumnStore Shared Local Storage Topology".
This step starts and configures MariaDB Enterprise Server, and MariaDB Enterprise ColumnStore.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
The installation process might have started some of the ColumnStore services. The services should be stopped prior to making configuration changes.
On each Enterprise ColumnStore node, stop the MariaDB Enterprise Server service:
On each Enterprise ColumnStore node, stop the MariaDB Enterprise ColumnStore service:
On each Enterprise ColumnStore node, stop the CMAPI service:
On each Enterprise ColumnStore node, configure Enterprise Server.
Mandatory system variables and options for ColumnStore Object Storage include:
Example Configuration
On each Enterprise ColumnStore node, start and enable the MariaDB Enterprise Server service, so that it starts automatically upon reboot:
On each Enterprise ColumnStore node, stop the MariaDB Enterprise ColumnStore service:
After the CMAPI service is installed in the next step, CMAPI will start the Enterprise ColumnStore service as needed on each node. CMAPI disables the Enterprise ColumnStore service to prevent systemd from automatically starting Enterprise ColumnStore upon reboot.
On each Enterprise ColumnStore node, start and enable the CMAPI service, so that it starts automatically upon reboot:
For additional information, see "Start and Stop Services".
The ColumnStore Object Storage topology requires several user accounts. Each user account should be created on the primary server, so that it is replicated to the replica servers.
Enterprise ColumnStore requires a mandatory utility user account to perform cross-engine joins and similar operations.
On the primary server, create the user account with the CREATE USER statement:
On the primary server, grant the user account SELECT privileges on all databases with the GRANT statement:
On each Enterprise ColumnStore node, configure the ColumnStore utility user:
On each Enterprise ColumnStore node, set the password:
For details about how to encrypt the password, see "".
Passwords should meet your organization's password policies. If your MariaDB Enterprise Server instance has a password validation plugin installed, then the password should also meet the configured requirements.
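A minimal sketch of the utility-user steps above; the user name, host, and password are illustrative, and the mcsSetConfig calls assume that utility is available for pointing cross-engine support at the account:
# On the primary server: create the utility user and grant read access.
sudo mariadb -e "CREATE USER 'util_user'@'127.0.0.1' IDENTIFIED BY 'util_user_passwd';
GRANT SELECT ON *.* TO 'util_user'@'127.0.0.1';"
# On each Enterprise ColumnStore node: record the account used for cross-engine joins.
sudo mcsSetConfig CrossEngineSupport User util_user
sudo mcsSetConfig CrossEngineSupport Password util_user_passwd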
ColumnStore Object Storage uses MariaDB Replication to replicate writes between the primary and replica servers. As MaxScale can promote a replica server to become a new primary in the event of node failure, all nodes must have a replication user.
The action is performed on the primary server.
Create the replication user and grant it the required privileges:
Use the CREATE USER statement to create replication user.
Replace the referenced IP address with the relevant address for your environment.
Ensure that the user account can connect to the primary server from each replica.
Grant the user account the required privileges with the GRANT statement.
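A minimal sketch, assuming the example 192.0.2.% address range and repl credentials used elsewhere on this page; adjust the address specification and privileges to your MariaDB version and policies:
# On the primary server: create the replication account for all ColumnStore nodes.
sudo mariadb -e "CREATE USER 'repl'@'192.0.2.%' IDENTIFIED BY 'repl_passwd';
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repl'@'192.0.2.%';"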
ColumnStore Object Storage 23.10 uses MariaDB MaxScale 22.08 to load balance between the nodes.
This action is performed on the primary server.
Use the CREATE USER statement to create the MaxScale user:
Replace the referenced IP address with the relevant address for your environment.
Ensure that the user account can connect from the IP address of the MaxScale instance.
Use the GRANT statement to grant the privileges required by the router:
Use the GRANT statement to grant the privileges required by the MariaDB Monitor.
On each replica server, configure MariaDB Replication:
Use the CHANGE MASTER TO statement to configure the connection to the primary server:
Start replication using the START REPLICA statement:
Confirm that replication is working using the SHOW REPLICA STATUS statement:
Ensure that the replica server cannot accept local writes by setting the read_only system variable to ON using the SET GLOBAL statement:
Initiate the primary server using CMAPI.
Create an API key for the cluster. This API key should be stored securely and kept confidential, because it can be used to add cluster nodes to the multi-node Enterprise ColumnStore deployment.
For example, to create a random 256-bit API key using openssl rand:
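One way to do this (a minimal sketch: 32 random bytes printed as 64 hexadecimal characters):
openssl rand -hex 32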
This document will use the following API key in further examples, but users should create their own:
Use CMAPI to add the primary server to the cluster and set the API key. The new API key needs to be provided as part of the X-API-key HTTP header.
For example, if the primary server's host name is mcs1 and its IP address is 192.0.2.1, use the following node command:
Use CMAPI to check the status of the cluster node:
Add the replica servers with CMAPI:
For each replica server, use the CMAPI add-node endpoint to add the replica server to the cluster. The previously set API key needs to be provided as part of the X-API-key HTTP header.
For example, if the primary server's host name is mcs1 and the replica server's IP address is 192.0.2.2, use the following node command:
After all replica servers have been added, use CMAPI to confirm that all cluster nodes have been successfully added:
The specific steps to configure the security module depend on the operating system.
Configure SELinux for Enterprise ColumnStore:
To configure SELinux, you have to install the packages required for audit2allow. On CentOS 7 and RHEL 7, install the following:
On RHEL 8, install the following:
Allow the system to run under load for a while to generate SELinux audit events.
After the system has taken some load, generate an SELinux policy from the audit events using audit2allow:
If no audit events were found, this will print the following:
If audit events were found, the new SELinux policy can be loaded using semodule:
Set SELinux to enforcing mode:
Set SELinux to enforcing mode by setting SELINUX=enforcing in /etc/selinux/config.
For example, the file will usually look like this after the change:
Confirm that SELinux is in enforcing mode:
For information on how to create an AppArmor profile, see the AppArmor documentation on Ubuntu.com.
The specific steps to configure the firewall service depend on the platform.
Configure firewalld for Enterprise ColumnStore on CentOS and RHEL:
Check if the firewalld service is running:
If the firewalld service was stopped to perform the installation, start it now:
For example, if your cluster nodes are in the 192.0.2.0/24 subnet:
Open up the relevant ports using firewall-cmd:
Reload the runtime configuration:
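A hedged sketch of the firewall-cmd steps above, using the example 192.0.2.0/24 subnet; the port list is illustrative (3306 for MariaDB, 8600-8630 for ColumnStore inter-node traffic, 8640 for CMAPI as noted elsewhere in this document), so confirm the full list for your release:
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.0.2.0/24" port port="3306" protocol="tcp" accept'
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.0.2.0/24" port port="8600-8630" protocol="tcp" accept'
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.0.2.0/24" port port="8640" protocol="tcp" accept'
sudo firewall-cmd --reload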
Configure UFW for Enterprise ColumnStore on Ubuntu:
Check if the UFW service is running:
If the UFW service was stopped to perform the installation, start it now:
Open up the relevant ports using ufw.
For example, if your cluster nodes are in the 192.0.2.0/24 subnet in the range 192.0.2.1 - 192.0.2.3:
Reload the runtime configuration:
Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".
This page was step 4 of 9.
Step 8: Test MariaDB MaxScale
Set this system variable to ON.
Set this option to the file you want to use for the Relay Logs. Setting this option enables relay logging.
Set this option to the file you want to use to index Relay Log filenames.
Sets the numeric Server ID for this MariaDB Enterprise Server. The value set on this option must be unique to each node.
Set this system variable to utf8
Set this system variable to utf8_general_ci
columnstore_use_import_for_batchinsert
Set this system variable to ALWAYS to always use cpimport for LOAD DATA INFILE and INSERT...SELECT statements.
Set this system variable to ON.
Set this option to the file you want to use for the Binary Log. Setting this option enables binary logging.
Set this option to the file you want to use to track binlog filenames.
Use the maxctrl show maxscale command to view the global MaxScale configuration.
This action is performed on the MaxScale node:
Output should align to the global MaxScale configuration in the new configuration file you created.
Use the maxctrl list servers and maxctrl show server commands to view the configured server objects.
This action is performed on the MaxScale node:
Obtain the full list of server objects:
For each server object, view the configuration:
Output should align to the Server Object configuration you performed.
Use the maxctrl list monitors and maxctrl show monitor commands to view the configured monitors.
This action is performed on the MaxScale node:
Obtain the full list of monitors:
For each monitor, view the monitor configuration:
Output should align to the MariaDB Monitor (mariadbmon) configuration you performed.
Use the maxctrl list services and maxctrl show service commands to view the configured routing services.
This action is performed on the MaxScale node:
Obtain the full list of routing services:
For each service, view the service configuration:
Output should align to the Read Connection Router (readconnroute) or Read/Write Split Router (readwritesplit) configuration you performed.
Applications should use a dedicated user account. The user account must be created on the primary server.
When users connect to MaxScale, MaxScale authenticates the user connection before routing it to an Enterprise Server node. Enterprise Server authenticates the connection as originating from the IP address of the MaxScale node.
The application users must have one user account with the host IP address of the application server and a second user account with the host IP address of the MaxScale node.
The requirement for a duplicate user account can be avoided by enabling the proxy_protocol parameter for MaxScale and the proxy_protocol_networks system variable for Enterprise Server.
This action is performed on the primary Enterprise ColumnStore node:
Connect to the primary Enterprise ColumnStore node:
Create the database user account for your MaxScale node:
Replace 192.0.2.10 with the relevant IP address specification for your MaxScale node.
Passwords should meet your organization's password policies.
Grant the privileges required by your application to the database user account for your MaxScale node:
The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.
This action is performed on the primary Enterprise ColumnStore node:
Create the database user account for your application server:
Replace 192.0.2.11 with the relevant IP address specification for your application server.
Passwords should meet your organization's password policies.
Grant the privileges required by your application to the database user account for your application server:
The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.
To test the connection, use the MariaDB Client from your application server to connect to an Enterprise ColumnStore node through MaxScale.
This action is performed on a client connected to the MaxScale node:
If you configured the Read Connection Router, confirm that MaxScale routes connections to the replica servers.
On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:
Open multiple terminals connected to your application server, in each, use MariaDB Client to connect to the listener port for the Read Connection Router (in the example, 3308):
Use the application user credentials you created for the --user and --password options.
In each terminal, query the hostname and server_id values to identify the server to which you're connected:
Different terminals should return different values since MaxScale routes the connections to different nodes.
Since the router was configured with the slave router option, the Read Connection Router only routes connections to replica servers.
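A minimal sketch of this check; the MaxScale address (192.0.2.10) and listener port (3308) are the examples used on this page, and app_user is a placeholder for the application account you created:
# Run from the application server in several terminals; each should report a different replica.
mariadb --host 192.0.2.10 --port 3308 --user app_user --password \
  -e "SELECT @@hostname, @@server_id;"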
If you configured the Read/Write Split Router, confirm that MaxScale routes write queries on this router to the primary Enterprise ColumnStore node.
On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:
Open multiple terminals connected to your application server, in each, use MariaDB Client to connect to the listener port for the Read/Write Split Router (in the example, 3307):
Use the application user credentials you created for the --user and --password options.
In one terminal, create the test table:
In each terminal, issue an INSERT statement to add a row to the example table with the values of the hostname and server_id for the connected server:
In one terminal, issue a SELECT statement to query the results:
Although MaxScale is handling multiple connections from different terminals, it routes all of these write connections to the current primary Enterprise ColumnStore node, which in the example is mcs1.
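A minimal sketch of the write test; the database and table names are illustrative, 3307 is the example readwritesplit listener port, and app_user is a placeholder for the application account:
# Run from the application server; repeat the INSERT from several terminals.
mariadb --host 192.0.2.10 --port 3307 --user app_user --password \
  -e "CREATE TABLE IF NOT EXISTS test.rwsplit_check (hostname VARCHAR(64), server_id INT) ENGINE=ColumnStore;
INSERT INTO test.rwsplit_check VALUES (@@hostname, @@server_id);
SELECT * FROM test.rwsplit_check;"
Every returned row should show the primary node's host name and server_id.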
If you configured the Read/Write Split Router (readwritesplit), confirm that MaxScale routes read queries on this router to replica servers.
On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:
In a terminal connected to your application server, use MariaDB Client to connect to the listener port for the Read/Write Split Router (readwritesplit) (in the example, 3307):
Use the application user credentials you created for the --user and --password options.
Query the hostname and server_id to identify which server MaxScale routed you to.
Resend the query:
Confirm that MaxScale routes the SELECT statements to different replica servers.
For more information on different routing criteria, see slave_selection_criteria
"Deploy ColumnStore Shared Local Storage Topology".
This page was step 8 of 9.
Use the maxctrl show maxscale command to view the global MaxScale configuration.
This action is performed on the MaxScale node:
Output should align to the global MaxScale configuration in the new configuration file you created.
Check Server Configuration
Use the maxctrl list servers and maxctrl show server commands to view the configured server objects.
This action is performed on the MaxScale node:
Obtain the full list of server objects:
For each server object, view the configuration:
Output should align to the Server Object configuration you performed.
Use the maxctrl list monitors and maxctrl show monitor commands to view the configured monitors.
This action is performed on the MaxScale node:
Obtain the full list of monitors:
For each monitor, view the monitor configuration:
Output should align to the MariaDB Monitor (mariadbmon) configuration you performed.
Use the maxctrl list services and maxctrl show service commands to view the configured routing services.
This action is performed on the MaxScale node:
Obtain the full list of routing services:
For each service, view the service configuration:
Output should align to the Read Connection Router (readconnroute) or Read/Write Split Router (readwritesplit) configuration you performed.
Applications should use a dedicated user account. The user account must be created on the primary server.
When users connect to MaxScale, MaxScale authenticates the user connection before routing it to an Enterprise Server node. Enterprise Server authenticates the connection as originating from the IP address of the MaxScale node.
The application users must have one user account with the host IP address of the application server and a second user account with the host IP address of the MaxScale node.
The requirement for a duplicate user account can be avoided by enabling the proxy_protocol parameter for MaxScale and the proxy_protocol_networks system variable for Enterprise Server.
This action is performed on the primary Enterprise ColumnStore node:
Connect to the primary Enterprise ColumnStore node:
Create the database user account for your MaxScale node:
Replace 192.0.2.10 with the relevant IP address specification for your MaxScale node.
Passwords should meet your organization's password policies.
Grant the privileges required by your application to the database user account for your MaxScale node:
The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.
This action is performed on the primary Enterprise ColumnStore node:
Create the database user account for your application server:
Replace 192.0.2.11 with the relevant IP address specification for your application server.
Passwords should meet your organization's password policies.
Grant the privileges required by your application to the database user account for your application server:
The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.
To test the connection, use the MariaDB Client from your application server to connect to an Enterprise ColumnStore node through MaxScale.
This action is performed on a client connected to the MaxScale node:
If you configured the Read Connection Router, confirm that MaxScale routes connections to the replica servers.
On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:
Open multiple terminals connected to your application server, in each, use MariaDB Client to connect to the listener port for the Read Connection Router (in the example, 3308):
Use the application user credentials you created for the --user and --password options.
In each terminal, query the hostname and server_id values to identify the server to which you're connected:
Different terminals should return different values since MaxScale routes the connections to different nodes.
Since the router was configured with the slave router option, the Read Connection Router only routes connections to replica servers.
If you configured the Read/Write Split Router, confirm that MaxScale routes write queries on this router to the primary Enterprise ColumnStore node.
On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:
Open multiple terminals connected to your application server, in each, use MariaDB Client to connect to the listener port for the Read/Write Split Router (in the example, 3307):
Use the application user credentials you created for the --user and --password options.
In one terminal, create the test table:
In each terminal, issue an INSERT statement to add a row to the example table with the values of the hostname and server_id for the connected server:
In one terminal, issue a SELECT statement to query the results:
Although MaxScale is handling multiple connections from different terminals, it routes all of these write connections to the current primary Enterprise ColumnStore node, which in the example is mcs1.
If you configured the Read/Write Split Router (readwritesplit), confirm that MaxScale routes read queries on this router to replica servers.
On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:
In a terminal connected to your application server, use MariaDB Client to connect to the listener port for the Read/Write Split Router (readwritesplit) (in the example, 3307):
Use the application user credentials you created for the --user and --password options.
Query the hostname and server_id to identify which server MaxScale routed you to.
Resend the query:
Confirm that MaxScale routes the SELECT statements to different replica servers.
For more information on different routing criteria, see slave_selection_criteria
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 8 of 9.
This guide provides steps for deploying a multi-node ColumnStore, setting up the environment, installing the software, and bulk importing data for online analytical processing (OLAP) workloads.
Enterprise Server 10.5
Enterprise Server 10.6
Enterprise Server 11.4
This procedure describes the deployment of the ColumnStore Shared Local Storage topology with MariaDB Enterprise Server 10.5, MariaDB Enterprise ColumnStore 5, and MariaDB MaxScale 2.5.
MariaDB Enterprise ColumnStore 5 is a columnar storage engine for MariaDB Enterprise Server 10.5. Enterprise ColumnStore is suitable for Online Analytical Processing (OLAP) workloads.
This procedure has 9 steps, which are executed in sequence.
This procedure represents basic product capability and deploys 3 Enterprise ColumnStore nodes and 1 MaxScale node.
This page provides an overview of the topology, requirements, and deployment procedures.
Please read and understand this procedure before executing.
Customers can obtain support by submitting a support case.
The following components are deployed during this procedure:
The MariaDB Enterprise ColumnStore topology with Object Storage delivers production analytics with high availability, fault tolerance, and limitless data storage by leveraging S3-compatible storage.
The topology consists of:
One or more MaxScale nodes
An odd number of ColumnStore nodes (minimum of 3) running ES, Enterprise ColumnStore, and CMAPI
The MaxScale nodes:
Monitor the health and availability of each ColumnStore node using the MariaDB Monitor (mariadbmon)
Accept client and application connections
Route queries to ColumnStore nodes using the Read/Write Split Router (readwritesplit)
The ColumnStore nodes:
Receive queries from MaxScale
Execute queries
Use S3-compatible object storage for data and shared local storage for the Storage Manager directory
These requirements are for the ColumnStore Object Storage topology when deployed with MariaDB Enterprise Server 10.5, MariaDB Enterprise ColumnStore 5, and MariaDB MaxScale 2.5.
Node Count
Operating System
Minimum Hardware Requirements
Recommended Hardware Requirements
MaxScale nodes, 1 or more are required.
Enterprise ColumnStore nodes, 3 or more are required for high availability. You should always have an odd number of nodes in a multi-node ColumnStore deployment to avoid split brain scenarios.
In alignment with MariaDB Corporation's engineering policies, the ColumnStore Object Storage topology with MariaDB Enterprise Server 10.5, MariaDB Enterprise ColumnStore 5, and MariaDB MaxScale 2.5 is provided for:
CentOS Linux 7 (x86_64)
Debian 10 (x86_64)
Red Hat Enterprise Linux 7 (x86_64)
Red Hat Enterprise Linux 8 (x86_64)
MariaDB Enterprise ColumnStore's minimum hardware requirements are not intended for production environments, but the minimum hardware requirements can be appropriate for development and test environments. For production environments, see the recommended hardware requirements instead.
The minimum hardware requirements are:
MariaDB Enterprise ColumnStore will refuse to start if the system has less than 3 GB of memory.
If Enterprise ColumnStore is started on a system with less memory, the following error message will be written to the ColumnStore system log called crit.log:
And the following error message will be raised to the client:
MariaDB Enterprise ColumnStore's recommended hardware requirements are intended for production analytics.
The recommended hardware requirements are:
The ColumnStore Object Storage topology requires the following storage types:
The ColumnStore Object Storage topology uses shared local storage for the Storage Manager directory to store metadata.
The Storage Manager directory is located at the following path by default:
/var/lib/columnstore/storagemanager
The most common shared local storage options for the ColumnStore Object Storage topology are:
Enterprise ColumnStore's CMAPI (Cluster Management API) is a REST API that can be used to manage a multi-node Enterprise ColumnStore cluster.
Many tools are capable of interacting with REST APIs. For example, the curl utility could be used to make REST API calls from the command-line.
Many programming languages also have libraries for interacting with REST APIs.
The examples below show how to use the CMAPI with curl.
For example:
'x-api-key': '93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd'
'Content-Type': 'application/json'
x-api-key can be set to any value of your choice during the first call to the server. Subsequent connections will require this same key.
MariaDB Enterprise Server packages are configured to read configuration files from different paths, depending on the operating system. Making custom changes to Enterprise Server default configuration files is not recommended because custom changes may be overwritten by other default configuration files that are loaded later.
To ensure that your custom changes will be read last, create a custom configuration file with the z- prefix in one of the include directories.
The systemctl command is used to start and stop the MariaDB Enterprise Server service.
For additional information, see "".
MariaDB Enterprise Server produces log data that can be helpful in problem diagnosis.
Log filenames and locations may be overridden in the server configuration. The default location of logs is the data directory. The data directory is specified by the datadir system variable.
The systemctl command is used to start and stop the ColumnStore service.
In the ColumnStore Object Storage topology, the mariadb-columnstore service should not be enabled. The CMAPI service restarts Enterprise ColumnStore as needed, so it does not need to start automatically upon reboot.
The systemctl command is used to start and stop the CMAPI service.
For additional information on endpoints, see "CMAPI".
MaxScale can be configured using several methods. These methods make use of MaxScale's .
The procedure on these pages configures MaxScale using MaxCtrl.
The systemctl command is used to start and stop the MaxScale service.
For additional information, see "Start and Stop Services".
Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology":
Step 4: Start and Configure MariaDB Enterprise Server
This page details step 4 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".
This step starts and configures MariaDB Enterprise Server, and MariaDB Enterprise ColumnStore.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
The installation process might have started some of the ColumnStore services. The services should be stopped prior to making configuration changes.
On each Enterprise ColumnStore node, stop the MariaDB Enterprise Server service:
On each Enterprise ColumnStore node, stop the MariaDB Enterprise ColumnStore service:
On each Enterprise ColumnStore node, stop the CMAPI service:
On each Enterprise ColumnStore node, configure Enterprise Server.
Mandatory system variables and options for ColumnStore Object Storage include:
Example Configuration
On each Enterprise ColumnStore node, configure S3 Storage Manager to use S3-compatible storage by editing the /etc/columnstore/storagemanager.cnf configuration file:
The S3-compatible object storage options are configured under [S3]:
The bucket option must be set to the name of the bucket that you created in "Create an S3 Bucket".
The endpoint option must be set to the endpoint for the S3-compatible object storage.
The aws_access_key_id and aws_secret_access_key options must be set to the access key ID and secret access key for the S3-compatible object storage.
To use a specific IAM role, you must uncomment and set the corresponding IAM role options in the [S3] section.
The local cache options are configured under [Cache]:
The cache_size option is set to 2 GB by default.
The path option is set to /var/lib/columnstore/storagemanager/cache by default.
Ensure that the specified path has sufficient storage space for the specified cache size.
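An illustrative excerpt of the relevant /etc/columnstore/storagemanager.cnf sections, not a complete file; the bucket, endpoint, and credential values are placeholders:
[S3]
bucket = your_columnstore_bucket
endpoint = s3.amazonaws.com
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

[Cache]
cache_size = 2g
path = /var/lib/columnstore/storagemanager/cache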
On each Enterprise ColumnStore node, start and enable the MariaDB Enterprise Server service, so that it starts automatically upon reboot:
On each Enterprise ColumnStore node, stop the MariaDB Enterprise ColumnStore service:
After the CMAPI service is installed in the next step, CMAPI will start the Enterprise ColumnStore service as needed on each node. CMAPI disables the Enterprise ColumnStore service to prevent systemd from automatically starting Enterprise ColumnStore upon reboot.
On each Enterprise ColumnStore node, start and enable the CMAPI service, so that it starts automatically upon reboot:
For additional information, see "".
The ColumnStore Object Storage topology requires several user accounts. Each user account should be created on the primary server, so that it is replicated to the replica servers.
Enterprise ColumnStore requires a mandatory utility user account to perform cross-engine joins and similar operations.
On the primary server, create the user account with the CREATE USER statement:
On the primary server, grant the user account SELECT privileges on all databases with the GRANT statement:
On each Enterprise ColumnStore node, configure the ColumnStore utility user:
On each Enterprise ColumnStore node, set the password:
For details about how to encrypt the password, see "".
Passwords should meet your organization's password policies. If your MariaDB Enterprise Server instance has a password validation plugin installed, then the password should also meet the configured requirements.
ColumnStore Object Storage uses MariaDB Replication to replicate writes between the primary and replica servers. As MaxScale can promote a replica server to become a new primary in the event of node failure, all nodes must have a replication user.
The action is performed on the primary server.
Create the replication user and grant it the required privileges:
Use the CREATE USER statement to create replication user.
Replace the referenced IP address with the relevant address for your environment.
Ensure that the user account can connect to the primary server from each replica.
Grant the user account the required privileges with the GRANT statement.
ColumnStore Object Storage 23.10 uses MariaDB MaxScale 22.08 to load balance between the nodes.
This action is performed on the primary server.
Use the CREATE USER statement to create the MaxScale user:
Replace the referenced IP address with the relevant address for your environment.
Ensure that the user account can connect from the IP address of the MaxScale instance.
Use the GRANT statement to grant the privileges required by the router:
Use the GRANT statement to grant the privileges required by the MariaDB Monitor.
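For example, with the placeholder user mxs and the example 192.0.2.% network (the first GRANT statements correspond to the router step, and the final GRANT to the MariaDB Monitor step):
CREATE USER 'mxs'@'192.0.2.%' IDENTIFIED BY 'mxs_passwd';
GRANT SHOW DATABASES ON *.* TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.columns_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.db TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.procs_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.proxies_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.roles_mapping TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.tables_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.user TO 'mxs'@'192.0.2.%';
GRANT BINLOG ADMIN, READ_ONLY ADMIN, RELOAD, REPLICA MONITOR,
      REPLICATION MASTER ADMIN, REPLICATION REPLICA ADMIN, REPLICATION REPLICA,
      SHOW DATABASES, SELECT
      ON *.* TO 'mxs'@'192.0.2.%';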
On each replica server, configure MariaDB Replication:
Use the CHANGE MASTER TO statement to configure the connection to the primary server:
Start replication using the START REPLICA statement:
Confirm that replication is working using the SHOW REPLICA STATUS statement:
Ensure that the replica server cannot accept local writes by setting the read_only system variable to ON using the SET GLOBAL statement:
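For example, on each replica, assuming the primary server is at 192.0.2.1 and the repl user created above:
CHANGE MASTER TO
   MASTER_HOST='192.0.2.1',
   MASTER_USER='repl',
   MASTER_PASSWORD='repl_passwd',
   MASTER_USE_GTID=slave_pos;
START REPLICA;
SHOW REPLICA STATUS;
SET GLOBAL read_only=ON;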
Initiate the primary server using CMAPI.
Create an API key for the cluster. This API key should be stored securely and kept confidential, because it can be used to add cluster nodes to the multi-node Enterprise ColumnStore deployment.
For example, to create a random 256-bit API key using openssl rand:
This document will use the following API key in further examples, but users should create their own:
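The command, and the example key reused in the examples that follow, are:
$ openssl rand -hex 32
93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd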
Use CMAPI to add the primary server to the cluster and set the API key. The new API key needs to be provided in the X-API-key HTTP header.
For example, if the primary server's host name is mcs1 and its IP address is 192.0.2.1, use the following command:
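The request looks like the following, using the example host, IP address, and API key from above:
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
   --header 'Content-Type:application/json' \
   --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
   --data '{"timeout":120, "node": "192.0.2.1"}' \
   | jq .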
Use CMAPI to check the status of the cluster node:
Add the replica servers with CMAPI:
For each replica server, use CMAPI to add the replica server to the cluster. The previously set API key needs to be provided in the X-API-key HTTP header.
For example, if the primary server's host name is mcs1 and the replica server's IP address is 192.0.2.2, use the following command:
After all replica servers have been added, use CMAPI to confirm that all cluster nodes have been successfully added:
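For example, using the same example host and API key:
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
   --header 'Content-Type:application/json' \
   --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
   | jq .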
The specific steps to configure the security module depend on the operating system.
Configure SELinux for Enterprise ColumnStore:
To configure SELinux, you have to install the packages required for audit2allow. On CentOS 7 and RHEL 7, install the following:
On RHEL 8, install the following:
Allow the system to run under load for a while to generate SELinux audit events.
After the system has taken some load, generate an SELinux policy from the audit events using audit2allow:
If no audit events were found, this will print the following:
If audit events were found, the new SELinux policy can be loaded using semodule:
Set SELinux to enforcing mode:
Set SELinux to enforcing mode by setting SELINUX=enforcing in /etc/selinux/config.
For example, the file will usually look like this after the change:
Confirm that SELinux is in enforcing mode:
For information on how to create an AppArmor profile, see the AppArmor documentation on Ubuntu.com.
The specific steps to configure the firewall service depend on the platform.
Configure firewalld for Enterprise ColumnStore on CentOS and RHEL:
Check if the firewalld service is running:
If the firewalld service was stopped to perform the installation, start it now:
Open up the relevant ports using firewall-cmd.
For example, if your cluster nodes are in the 192.0.2.0/24 subnet:
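For reference, the rich rule for port 3306 looks like the following; the same pattern is repeated for ports 8600-8630, 8640, 8700, and 8800:
$ sudo firewall-cmd --permanent --add-rich-rule='
  rule family="ipv4"
  source address="192.0.2.0/24"
  destination address="192.0.2.0/24"
  port port="3306" protocol="tcp"
  accept'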
Reload the runtime configuration:
Configure UFW for Enterprise ColumnStore on Ubuntu:
Check if the UFW service is running:
If the UFW service was stopped to perform the installation, start it now:
Open up the relevant ports using ufw.
For example, if your cluster nodes are in the 192.0.2.0/24 subnet in the range 192.0.2.1 - 192.0.2.3:
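For the node at 192.0.2.3, the commands look like the following (repeat with each node's own address):
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 3306 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8600:8630 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8640 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8700 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8800 proto tcp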
Reload the runtime configuration:
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
This page was step 4 of 9.
This page documents how to create and restore MariaDB Enterprise ColumnStore backups using the mcs CLI.
The mcs backup and mcs restore commands support the same workflows as the mcs_backup_manager.sh script, including:
Full and incremental backups
Local/shared storage and S3 storage topologies
Optional compression and parallelism
Separate DBRM (metadata) backup/restore workflows
On a ColumnStore node, determine which StorageManager service is configured:
Example output:
service = LocalStorage
service = S3
LocalStorage:
S3:
Run mcs backup as root on each node, starting with the primary node.
Use the same backup location on each node.
Example output:
Full backup:
Parallel backup:
Compressed backup:
Incremental backup (auto-select most recent full backup):
Save the backup to a remote host (SCP):
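Sketches of the corresponding commands, in the same order as the list above (the SCP target is a placeholder):
mcs backup
mcs backup --parallel 8
mcs backup --parallel 8 --compress pigz
mcs backup --incremental auto_most_recent
mcs backup --backup-destination Remote --scp root@192.168.0.1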
When you run a backup, by default the tooling performs polling checks and attempts to obtain a consistent point-in-time backup by:
checking for active writes
checking for running cpimport jobs
issuing write locks
saving BRM prior to copying
You can skip these safety mechanisms with:
--skip-polls
--skip-locks
--skip-save-brm
Skipping polls/locks/BRM saving can be useful for certain workflows, but it increases the risk of capturing a partially-written state that complicates restore.
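For example, a backup that skips all three safety mechanisms (use with caution) could be invoked as:
mcs backup --skip-polls --skip-locks --skip-save-brm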
Before you can run an incremental backup, you must already have a full backup.
When taking an incremental backup, specify the full backup to increment via the --incremental <full-backup-folder> flag.
Incremental backups add ColumnStore deltas to an existing full backup. You can either:
specify the full backup folder name explicitly, or
use auto_most_recent, which selects the most recent full backup directory under --backup-location and applies the incremental backup to it
Apply to the most recent full backup:
Apply to a specific full backup folder:
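The two forms look like this (the folder name is a placeholder):
mcs backup --incremental auto_most_recent
mcs backup --incremental <full-backup-folder>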
Create a cron job (run as root) that takes periodic backups and appends logs:
Nightly full backup, retaining the last 14 days:
Weekly full backup (Saturday night) with incremental backups on all other nights, retaining 21 days:
The most commonly used options are:
Ensure the node has access to your S3 endpoint and credentials.
Run mcs backup with --storage S3 and a backup bucket (--backup-bucket).
Run it as root on each node, starting with the primary node.
Full backup:
Compressed backup (and skip copying bucket data if you only want local artifacts):
Incremental backup:
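Sketches of the corresponding commands, in the same order (bucket names and the incremental folder name are placeholders):
mcs backup --storage S3 --backup-bucket s3://my-cs-backups
mcs backup --storage S3 --backup-bucket s3://my-cs-backups --compress pigz --quiet --skip-bucket-data
mcs backup --storage S3 --backup-bucket s3://my-cs-backups --incremental 12-18-2023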
On-premise S3 endpoints: the key flags for on-premise buckets are the following:
-url - the local address/IP of the S3 provider. For example, MinIO defaults to port 9000, and 127.0.0.1 would be used if MinIO is installed on the same machine that runs ColumnStore.
--no-verify-ssl - used when SSL certificates are not used/defined for the S3 provider/endpoint
As with LocalStorage, you can schedule mcs backup in cron. Consider including --name-backup to avoid collisions.
The most commonly used S3-specific options are:
List backups to find the folder name you want.
Restore on each node, starting with the primary node.
Standard restore:
Compressed backup restore:
Common options:
Use the same backup bucket that contains the backup.
Restore on each node, starting with the primary node.
When a ColumnStore S3 backup is run, a restoreS3.job file is created containing a command that can be run on each node to restore the backup.
Key Flags for restoring to a new bucket
-nb - the name of the new bucket to copy the backup into and configure columnstore to use post restore
-nr - the name of the region of the new bucket to configure columnstore to use post restore
-nk - the key of the new bucket to configure columnstore to use post restore
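A sketch of a restore into a new bucket (bucket, region, and credentials are placeholders):
mcs restore -s S3 -bb s3://my-cs-backups -l 11-21-2024 -nb s3://new-data-bucket -nr us-east-1 -nk <new-access-key> -ns <new-secret-key>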
Common options:
Both S3 and LocalStorage use the same commands for DBRM backups.
DBRM backups are intended for backing up internal ColumnStore metadata only.
Run mcs dbrm_backup as root, with the flags you need, ONLY on the primary node.
dbrm_backup:
dbrm_backup before upgrade:
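The corresponding commands are:
mcs dbrm_backup
mcs dbrm_backup -nb before_upgrade_11.21.2024_dbrm_backup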
Common options:
Both S3 and LocalStorage use the same commands for dbrm restore.
DBRM backups are intended for backing up internal ColumnStore metadata only.
List available DBRM backups.
Restore from the selected folder.
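For example (the backup folder name is a placeholder):
mcs dbrm_restore --list
mcs dbrm_restore --backup-location /tmp/dbrm_backups --load <dbrm_backup_folder>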
dbrm_restore:
dbrm_restore Flags
Common options:
$ sudo systemctl stop mariadb
$ sudo systemctl stop mariadb-columnstore
$ sudo systemctl stop mariadb-columnstore-cmapi
[mariadb]
bind_address = 0.0.0.0
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci
log_bin = mariadb-bin
log_bin_index = mariadb-bin.index
relay_log = mariadb-relay
relay_log_index = mariadb-relay.index
log_slave_updates = ON
gtid_strict_mode = ON
# This must be unique on each Enterprise ColumnStore node
server_id = 1
$ sudo systemctl start mariadb
$ sudo systemctl enable mariadb
$ sudo systemctl stop mariadb-columnstore
$ sudo systemctl start mariadb-columnstore-cmapi
$ sudo systemctl enable mariadb-columnstore-cmapi
CREATE USER 'util_user'@'127.0.0.1'
IDENTIFIED BY 'util_user_passwd';
GRANT SELECT, PROCESS ON *.*
TO 'util_user'@'127.0.0.1';
$ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
$ sudo mcsSetConfig CrossEngineSupport Port 3306
$ sudo mcsSetConfig CrossEngineSupport User util_user
$ sudo mcsSetConfig CrossEngineSupport Password util_user_passwd
CREATE USER 'repl'@'192.0.2.%' IDENTIFIED BY 'repl_passwd';
GRANT REPLICA MONITOR,
REPLICATION REPLICA,
REPLICATION REPLICA ADMIN,
REPLICATION MASTER ADMIN
ON *.* TO 'repl'@'192.0.2.%';
CREATE USER 'mxs'@'192.0.2.%'
IDENTIFIED BY 'mxs_passwd';
GRANT SHOW DATABASES ON *.* TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.columns_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.db TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.procs_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.proxies_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.roles_mapping TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.tables_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.user TO 'mxs'@'192.0.2.%';
GRANT BINLOG ADMIN,
READ_ONLY ADMIN,
RELOAD,
REPLICA MONITOR,
REPLICATION MASTER ADMIN,
REPLICATION REPLICA ADMIN,
REPLICATION REPLICA,
SHOW DATABASES,
SELECT
ON *.* TO 'mxs'@'192.0.2.%';
CHANGE MASTER TO
MASTER_HOST='192.0.2.1',
MASTER_USER='repl',
MASTER_PASSWORD='repl_passwd',
MASTER_USE_GTID=slave_pos;
START REPLICA;
SHOW REPLICA STATUS;
SET GLOBAL read_only=ON;
$ openssl rand -hex 32
93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":120, "node": "192.0.2.1"}' \
| jq .
{
"timestamp": "2020-10-28 00:39:14.672142",
"node_id": "192.0.2.1"
}
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .
{
"timestamp": "2020-12-15 00:40:34.353574",
"192.0.2.1": {
"timestamp": "2020-12-15 00:40:34.362374",
"uptime": 11467,
"dbrm_mode": "master",
"cluster_mode": "readwrite",
"dbroots": [
"1"
],
"module_id": 1,
"services": [
{
"name": "workernode",
"pid": 19202
},
{
"name": "controllernode",
"pid": 19232
},
{
"name": "PrimProc",
"pid": 19254
},
{
"name": "ExeMgr",
"pid": 19292
},
{
"name": "WriteEngine",
"pid": 19316
},
{
"name": "DMLProc",
"pid": 19332
},
{
"name": "DDLProc",
"pid": 19366
}
]
}
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":120, "node": "192.0.2.2"}' \
| jq .
{
"timestamp": "2020-10-28 00:42:42.796050",
"node_id": "192.0.2.2"
}
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .
{
"timestamp": "2020-12-15 00:40:34.353574",
"192.0.2.1": {
"timestamp": "2020-12-15 00:40:34.362374",
"uptime": 11467,
"dbrm_mode": "master",
"cluster_mode": "readwrite",
"dbroots": [
"1"
],
"module_id": 1,
"services": [
{
"name": "workernode",
"pid": 19202
},
{
"name": "controllernode",
"pid": 19232
},
{
"name": "PrimProc",
"pid": 19254
},
{
"name": "ExeMgr",
"pid": 19292
},
{
"name": "WriteEngine",
"pid": 19316
},
{
"name": "DMLProc",
"pid": 19332
},
{
"name": "DDLProc",
"pid": 19366
}
]
},
"192.0.2.2": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"192.0.2.3": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"num_nodes": 3
}
$ sudo yum install policycoreutils policycoreutils-python
$ sudo yum install policycoreutils python3-policycoreutils policycoreutils-python-utils
$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
Nothing to do
$ sudo semodule -i mariadb_local.pp
$ sudo setenforce enforcing
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
$ sudo getenforce
Enforcing
$ sudo systemctl status firewalld
$ sudo systemctl start firewalld
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="3306" protocol="tcp"
accept'
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8600-8630" protocol="tcp"
accept'
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8640" protocol="tcp"
accept'
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8700" protocol="tcp"
accept'
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8800" protocol="tcp"
accept'
$ sudo firewall-cmd --reload
$ sudo ufw status verbose
$ sudo ufw enable
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 3306 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8600:8630 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8640 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8700 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8800 proto tcp
$ sudo ufw reload
$ maxctrl show maxscale
┌──────────────┬───────────────────────────────────────────────────────┐
│ Version │ 22.08.15 │
├──────────────┼───────────────────────────────────────────────────────┤
│ Commit │ 3761fa7a52046bc58faad8b5a139116f9e33364c │
├──────────────┼───────────────────────────────────────────────────────┤
│ Started At │ Thu, 05 Aug 2021 20:21:20 GMT │
├──────────────┼───────────────────────────────────────────────────────┤
│ Activated At │ Thu, 05 Aug 2021 20:21:20 GMT │
├──────────────┼───────────────────────────────────────────────────────┤
│ Uptime │ 868 │
├──────────────┼───────────────────────────────────────────────────────┤
│ Config Sync │ null │
├──────────────┼───────────────────────────────────────────────────────┤
│ Parameters │ { │
│ │ "admin_auth": true, │
│ │ "admin_enabled": true, │
│ │ "admin_gui": true, │
│ │ "admin_host": "0.0.0.0", │
│ │ "admin_log_auth_failures": true, │
│ │ "admin_pam_readonly_service": null, │
│ │ "admin_pam_readwrite_service": null, │
│ │ "admin_port": 8989, │
│ │ "admin_secure_gui": false, │
│ │ "admin_ssl_ca_cert": null, │
│ │ "admin_ssl_cert": null, │
│ │ "admin_ssl_key": null, │
│ │ "admin_ssl_version": "MAX", │
│ │ "auth_connect_timeout": "10000ms", │
│ │ "auth_read_timeout": "10000ms", │
│ │ "auth_write_timeout": "10000ms", │
│ │ "cachedir": "/var/cache/maxscale", │
│ │ "config_sync_cluster": null, │
│ │ "config_sync_interval": "5000ms", │
│ │ "config_sync_password": "*****", │
│ │ "config_sync_timeout": "10000ms", │
│ │ "config_sync_user": null, │
│ │ "connector_plugindir": "/usr/lib64/mysql/plugin", │
│ │ "datadir": "/var/lib/maxscale", │
│ │ "debug": null, │
│ │ "dump_last_statements": "never", │
│ │ "execdir": "/usr/bin", │
│ │ "language": "/var/lib/maxscale", │
│ │ "libdir": "/usr/lib64/maxscale", │
│ │ "load_persisted_configs": true, │
│ │ "local_address": null, │
│ │ "log_debug": false, │
│ │ "log_info": false, │
│ │ "log_notice": true, │
│ │ "log_throttling": { │
│ │ "count": 10, │
│ │ "suppress": 10000, │
│ │ "window": 1000 │
│ │ }, │
│ │ "log_warn_super_user": false, │
│ │ "log_warning": true, │
│ │ "logdir": "/var/log/maxscale", │
│ │ "max_auth_errors_until_block": 10, │
│ │ "maxlog": true, │
│ │ "module_configdir": "/etc/maxscale.modules.d", │
│ │ "ms_timestamp": false, │
│ │ "passive": false, │
│ │ "persistdir": "/var/lib/maxscale/maxscale.cnf.d", │
│ │ "piddir": "/var/run/maxscale", │
│ │ "query_classifier": "qc_sqlite", │
│ │ "query_classifier_args": null, │
│ │ "query_classifier_cache_size": 289073971, │
│ │ "query_retries": 1, │
│ │ "query_retry_timeout": "5000ms", │
│ │ "rebalance_period": "0ms", │
│ │ "rebalance_threshold": 20, │
│ │ "rebalance_window": 10, │
│ │ "retain_last_statements": 0, │
│ │ "session_trace": 0, │
│ │ "skip_permission_checks": false, │
│ │ "sql_mode": "default", │
│ │ "syslog": true, │
│ │ "threads": 1, │
│ │ "users_refresh_interval": "0ms", │
│ │ "users_refresh_time": "30000ms", │
│ │ "writeq_high_water": 16777216, │
│ │ "writeq_low_water": 8192 │
│ │ } │
└──────────────┴───────────────────────────────────────────────────────┘
$ maxctrl list servers
┌────────┬────────────────┬──────┬─────────────┬─────────────────┬────────┐
│ Server │ Address │ Port │ Connections │ State │ GTID │
├────────┼────────────────┼──────┼─────────────┼─────────────────┼────────┤
│ mcs1 │ 192.0.2.1 │ 3306 │ 1 │ Master, Running │ 0-1-25 │
├────────┼────────────────┼──────┼─────────────┼─────────────────┼────────┤
│ mcs2 │ 192.0.2.2 │ 3306 │ 1 │ Slave, Running │ 0-1-25 │
├────────┼────────────────┼──────┼─────────────┼─────────────────┼────────┤
│ mcs3 │ 192.0.2.3 │ 3306 │ 1 │ Slave, Running │ 0-1-25 │
└────────┴────────────────┴──────┴─────────────┴─────────────────┴────────┘
$ maxctrl show server mcs1
┌─────────────────────┬───────────────────────────────────────────┐
│ Server │ mcs1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Address │ 192.0.2.1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Port │ 3306 │
├─────────────────────┼───────────────────────────────────────────┤
│ State │ Master, Running │
├─────────────────────┼───────────────────────────────────────────┤
│ Version │ 11.4.5-3-MariaDB-enterprise-log │
├─────────────────────┼───────────────────────────────────────────┤
│ Last Event │ master_up │
├─────────────────────┼───────────────────────────────────────────┤
│ Triggered At │ Thu, 05 Aug 2021 20:22:26 GMT │
├─────────────────────┼───────────────────────────────────────────┤
│ Services │ connection_router_service │
│ │ query_router_service │
├─────────────────────┼───────────────────────────────────────────┤
│ Monitors │ columnstore_monitor │
├─────────────────────┼───────────────────────────────────────────┤
│ Master ID │ -1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Node ID │ 1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Slave Server IDs │ │
├─────────────────────┼───────────────────────────────────────────┤
│ Current Connections │ 1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Total Connections │ 1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Max Connections │ 1 │
├─────────────────────┼───────────────────────────────────────────┤
│ Statistics │ { │
│ │ "active_operations": 0, │
│ │ "adaptive_avg_select_time": "0ns", │
│ │ "connection_pool_empty": 0, │
│ │ "connections": 1, │
│ │ "max_connections": 1, │
│ │ "max_pool_size": 0, │
│ │ "persistent_connections": 0, │
│ │ "reused_connections": 0, │
│ │ "routed_packets": 0, │
│ │ "total_connections": 1 │
│ │ } │
├─────────────────────┼───────────────────────────────────────────┤
│ Parameters │ { │
│ │ "address": "192.0.2.1", │
│ │ "disk_space_threshold": null, │
│ │ "extra_port": 0, │
│ │ "monitorpw": null, │
│ │ "monitoruser": null, │
│ │ "persistmaxtime": "0ms", │
│ │ "persistpoolmax": 0, │
│ │ "port": 3306, │
│ │ "priority": 0, │
│ │ "proxy_protocol": false, │
│ │ "rank": "primary", │
│ │ "socket": null, │
│ │ "ssl": false, │
│ │ "ssl_ca_cert": null, │
│ │ "ssl_cert": null, │
│ │ "ssl_cert_verify_depth": 9, │
│ │ "ssl_cipher": null, │
│ │ "ssl_key": null, │
│ │ "ssl_verify_peer_certificate": false, │
│ │ "ssl_verify_peer_host": false, │
│ │ "ssl_version": "MAX" │
│ │ } │
└─────────────────────┴───────────────────────────────────────────┘
$ maxctrl list monitors
┌─────────────────────┬─────────┬──────────────────┐
│ Monitor │ State │ Servers │
├─────────────────────┼─────────┼──────────────────┤
│ columnstore_monitor │ Running │ mcs1, mcs2, mcs3 │
└─────────────────────┴─────────┴──────────────────┘
$ maxctrl show monitor columnstore_monitor
┌─────────────────────┬─────────────────────────────────────┐
│ Monitor │ columnstore_monitor │
├─────────────────────┼─────────────────────────────────────┤
│ Module │ mariadbmon │
├─────────────────────┼─────────────────────────────────────┤
│ State │ Running │
├─────────────────────┼─────────────────────────────────────┤
│ Servers │ mcs1 │
│ │ mcs2 │
│ │ mcs3 │
├─────────────────────┼─────────────────────────────────────┤
│ Parameters │ { │
│ │ "backend_connect_attempts": 1, │
│ │ "backend_connect_timeout": 3, │
│ │ "backend_read_timeout": 3, │
│ │ "backend_write_timeout": 3, │
│ │ "disk_space_check_interval": 0, │
│ │ "disk_space_threshold": null, │
│ │ "events": "all", │
│ │ "journal_max_age": 28800, │
│ │ "module": "mariadbmon", │
│ │ "monitor_interval": 2000, │
│ │ "password": "*****", │
│ │ "script": null, │
│ │ "script_timeout": 90, │
│ │ "user": "mxs" │
│ │ } │
├─────────────────────┼─────────────────────────────────────┤
│ Monitor Diagnostics │ {} │
└─────────────────────┴─────────────────────────────────────┘
$ maxctrl list services
┌───────────────────────────┬────────────────┬─────────────┬───────────────────┬──────────────────┐
│ Service │ Router │ Connections │ Total Connections │ Servers │
├───────────────────────────┼────────────────┼─────────────┼───────────────────┼──────────────────┤
│ connection_router_Service │ readconnroute │ 0 │ 0 │ mcs1, mcs2, mcs3 │
├───────────────────────────┼────────────────┼─────────────┼───────────────────┼──────────────────┤
│ query_router_service │ readwritesplit │ 0 │ 0 │ mcs1, mcs2, mcs3 │
└───────────────────────────┴────────────────┴─────────────┴───────────────────┴──────────────────┘
$ maxctrl show service query_router_service
┌─────────────────────┬─────────────────────────────────────────────────────────────┐
│ Service │ query_router_service │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Router │ readwritesplit │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ State │ Started │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Started At │ Sat Aug 28 21:41:16 2021 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Current Connections │ 0 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Total Connections │ 0 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Max Connections │ 0 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Cluster │ │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Servers │ mcs1 │
│ │ mcs2 │
│ │ mcs3 │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Services │ │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Filters │ │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Parameters │ { │
│ │ "auth_all_servers": false, │
│ │ "causal_reads": "false", │
│ │ "causal_reads_timeout": "10000ms", │
│ │ "connection_keepalive": "300000ms", │
│ │ "connection_timeout": "0ms", │
│ │ "delayed_retry": false, │
│ │ "delayed_retry_timeout": "10000ms", │
│ │ "disable_sescmd_history": false, │
│ │ "enable_root_user": false, │
│ │ "idle_session_pool_time": "-1000ms", │
│ │ "lazy_connect": false, │
│ │ "localhost_match_wildcard_host": true, │
│ │ "log_auth_warnings": true, │
│ │ "master_accept_reads": false, │
│ │ "master_failure_mode": "fail_instantly", │
│ │ "master_reconnection": false, │
│ │ "max_connections": 0, │
│ │ "max_sescmd_history": 50, │
│ │ "max_slave_connections": 255, │
│ │ "max_slave_replication_lag": "0ms", │
│ │ "net_write_timeout": "0ms", │
│ │ "optimistic_trx": false, │
│ │ "password": "*****", │
│ │ "prune_sescmd_history": true, │
│ │ "rank": "primary", │
│ │ "retain_last_statements": -1, │
│ │ "retry_failed_reads": true, │
│ │ "reuse_prepared_statements": false, │
│ │ "router": "readwritesplit", │
│ │ "session_trace": false, │
│ │ "session_track_trx_state": false, │
│ │ "slave_connections": 255, │
│ │ "slave_selection_criteria": "LEAST_CURRENT_OPERATIONS", │
│ │ "strict_multi_stmt": false, │
│ │ "strict_sp_calls": false, │
│ │ "strip_db_esc": true, │
│ │ "transaction_replay": false, │
│ │ "transaction_replay_attempts": 5, │
│ │ "transaction_replay_max_size": 1073741824, │
│ │ "transaction_replay_retry_on_deadlock": false, │
│ │ "type": "service", │
│ │ "use_sql_variables_in": "all", │
│ │ "user": "mxs", │
│ │ "version_string": null │
│ │ } │
├─────────────────────┼─────────────────────────────────────────────────────────────┤
│ Router Diagnostics │ { │
│ │ "avg_sescmd_history_length": 0, │
│ │ "max_sescmd_history_length": 0, │
│ │ "queries": 0, │
│ │ "replayed_transactions": 0, │
│ │ "ro_transactions": 0, │
│ │ "route_all": 0, │
│ │ "route_master": 0, │
│ │ "route_slave": 0, │
│ │ "rw_transactions": 0, │
│ │ "server_query_statistics": [] │
│ │ } │
└─────────────────────┴─────────────────────────────────────────────────────────────┘
$ sudo mariadb
CREATE USER 'app_user'@'192.0.2.10' IDENTIFIED BY 'app_user_passwd';
GRANT ALL ON test.* TO 'app_user'@'192.0.2.10';
CREATE USER 'app_user'@'192.0.2.11' IDENTIFIED BY 'app_user_passwd';
GRANT ALL ON test.* TO 'app_user'@'192.0.2.11';
$ mariadb --host 192.0.2.10 --port 3307 \
    --user app_user --password
$ maxctrl list listeners
┌────────────────────────────┬──────┬──────┬─────────┬───────────────────────────┐
│ Name │ Port │ Host │ State │ Service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ connection_router_listener │ 3308 │ :: │ Running │ connection_router_service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ query_router_listener │ 3307 │ :: │ Running │ query_router_service │
└────────────────────────────┴──────┴──────┴─────────┴───────────────────────────┘
$ mariadb --host 192.0.2.10 --port 3308 \
    --user app_user --password
SELECT @@global.hostname, @@global.server_id;
+-------------------+--------------------+
| @@global.hostname | @@global.server_id |
+-------------------+--------------------+
| mcs2 | 2 |
+-------------------+--------------------+
$ maxctrl list listeners
┌────────────────────────────┬──────┬──────┬─────────┬───────────────────────────┐
│ Name │ Port │ Host │ State │ Service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ connection_router_listener │ 3308 │ :: │ Running │ connection_router_service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ query_router_listener │ 3307 │ :: │ Running │ query_router_service │
└────────────────────────────┴──────┴──────┴─────────┴───────────────────────────┘
$ mariadb --host 192.0.2.10 --port 3307 \
    --user app_user --password
CREATE TABLE test.load_balancing_test (
id INT PRIMARY KEY AUTO_INCREMENT,
hostname VARCHAR(256),
server_id INT
);
INSERT INTO test.load_balancing_test (hostname, server_id)
VALUES (@@global.hostname, @@global.server_id);
SELECT * FROM test.load_balancing_test;
+----+----------+-----------+
| id | hostname | server_id |
+----+----------+-----------+
| 1 | mcs1 | 1 |
| 2 | mcs1 | 1 |
| 3 | mcs1 | 1 |
+----+----------+-----------+
$ maxctrl list listeners
┌────────────────────────────┬──────┬──────┬─────────┬───────────────────────────┐
│ Name │ Port │ Host │ State │ Service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ connection_router_listener │ 3308 │ :: │ Running │ connection_router_service │
├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
│ query_router_listener │ 3307 │ :: │ Running │ query_router_service │
└────────────────────────────┴──────┴──────┴─────────┴───────────────────────────┘
$ mariadb --host 192.0.2.10 --port 3307 \
    --user app_user --password
SELECT @@global.hostname, @@global.server_id;
+-------------------+--------------------+
| @@global.hostname | @@global.server_id |
+-------------------+--------------------+
| mcs2 | 2 |
+-------------------+--------------------+
SELECT @@global.hostname, @@global.server_id;
+-------------------+--------------------+
| @@global.hostname | @@global.server_id |
+-------------------+--------------------+
| mcs3 | 3 |
+-------------------+--------------------+
--parallel (-P)
Enables parallel rsync and defines the number of parallel rsync threads to run. If combined with --compress this flag defines the number of compression threads to run. Default is 4
Example: --parallel 8
--compress (-c)
Compress backup in X format
Supported: pigz
--quiet (-q)
Silence verbose copy command outputs
Useful for cron jobs
--name-backup (-nb)
Define the name of the backup - default: date +%m-%d-%Y
Example: -nb before-upgrade-backup
--retention-days (-r)
Retain backups created within the last X days; delete older ones
0 means keep all
--apply-retention-only (-aro)
Only apply retention policy; do not run a backup
Works with --retention-days
--list (-li)
List backups
Lists backups in the configured location
--skip-mariadb-backup (-smdb)
Skip restoring MariaDB server data
Use when restoring ColumnStore only
-ns - the secret of the new bucket to configure columnstore to use post restore
--new-region (-nr)
Region for --new-bucket
S3 only
--new-key (-nk)
Access key for --new-bucket
S3 only
--new-secret (-ns)
Secret for --new-bucket
S3 only
--continue (-cont)
Allow deleting data in --new-bucket during restore
S3 only; dangerous if bucket contains important data
--skip-storage-manager (-ssm)
Skip backing up the storagemanager directory
Support-guided workflows
--skip-save-brm (-sbrm)
Skip saving BRM prior to DBRM backup
Can produce a dirtier backup
--skip-locks (-slock)
Skip issuing read locks
Support-guided workflows
--skip-polls (-spoll)
Skip polling to confirm locks are released
Support-guided workflows
--quiet (-q)
Silence verbose copy output
Useful for cron
--skip-storage-manager (-ssm)
Skip backing up/restoring storagemanager directory
Support-guided workflows
--list (-li)
List backups
Lists available DBRM backups
--backup-location (-bl)
Defines the path where the backup is saved. A date-based folder is created under this path for each backup run; the folder name can be changed with -nb
Typical default in tooling: /tmp/backups/
--backup-destination (-bd)
Where backups are stored relative to the node running the command. Two possible values: Local or Remote. Remote means the backup is sent over SSH, so -scp must also be defined; Local means the -bl path is local to the node running the command
Local or Remote
--secure-copy-protocol (-scp)
If --backup-destination is set to Remote, you must also define -scp. This defines the SSH connection used to rsync the backup to the remote host
Example: -bd Remote -scp root@192.168.0.1
--incremental (-i)
Creates an incremental backup based on an existing full backup
--storage S3 (-s)
Use S3 storage topology
Must be set to S3 for object storage workflows
--backup-bucket (-bb)
Bucket where backups are stored
Example: s3://my-cs-backups
--endpoint-url (-url)
Custom S3 endpoint URL
For on-premise S3 vendors (MinIO, etc.)
--no-verify-ssl (-nv-ssl)
Skip verifying SSL certificates
--load (-l)
Backup folder name to restore
Required for restore
--backup-location (-bl)
Where backups are located
Example: /tmp/backups/
--backup_destination (-bd)
Whether the backup is on the local server or a remote server
Local or Remote
--scp (-scp)
SCP source used when --backup_destination Remote
--backup-bucket (-bb)
Backup bucket to restore from
Example: s3://my-cs-backups
--endpoint-url (-url)
Custom S3 endpoint URL
For on-premise S3 vendors
--no-verify-ssl (-nv-ssl)
Skip verifying SSL certificates
Use with caution
--new-bucket (-nb)
New bucket to restore data into
--backup-location (-bl)
Where DBRM backups are written
Default example: /tmp/dbrm_backups
--retention-days (-r)
Retain DBRM backups created within last X days
Older backups are deleted
--mode (-m)
Run mode
once or loop
--interval (-i)
Sleep interval (minutes) when --mode loop
--backup-location (-bl)
Where DBRM backups exist on disk
Example: /tmp/dbrm_backups
--load (-l)
Backup directory name to restore
Required for restore
--no-start (-ns)
Do not attempt ColumnStore startup after restore
Useful for manual recovery steps
--skip-dbrm-backup (-sdbk)
Skip taking a safety backup before restoring
Value can be <folder> or auto_most_recent
Use with caution; primarily for test/non-standard TLS
Format: user@host
Use when restoring into a different bucket
Only used in loop mode
Use with caution
grep "service" /etc/columnstore/storagemanager.cnf
du -sh /var/lib/columnstore/
du -sh /var/lib/columnstore/storagemanager/ ;
aws s3 ls s3://<bucketname> --recursive | grep -v -E "(Bucket: |Prefix: |LastWriteTime|^$|--)" | awk 'BEGIN {total=0}{total+=$3}END{print total/1024/1024/1024" GB"}'
mcs backup --backup-location /tmp/backups/ --list
Existing Backups
--------------------------------------------------------------------------
Options Last-Updated Extent Map EM-Size Journal-Size VBBM-Size VSS-Size Days Old
12-03-2024 Dec 3 21:05 BRM_saves_em 77228 0 1884 2792 12
11-21-2024 Nov 21 21:05 BRM_saves_em 77228 0 1884 2792 12
--------------------------------------------------------------------------
Restore with mcs restore --path /tmp/backups/ --directory <backup_folder_from_above>
mcs backup
mcs backup --parallel 8
mcs backup --parallel 8 --compress pigz
mcs backup --incremental auto_most_recent
mcs backup --backup-destination Remote --scp root@192.68.0.1
mcs backup -P 8
# Run Full Backup
mcs backup -P 8
# Backup Complete @ /tmp/backups/12-03-2025
# Run the incremental backup - updating the full backup 12-03-2025
mcs backup --incremental 12-03-2025
mcs backup --incremental auto_most_recent
mcs backup --incremental <full-backup-folder>
sudo crontab -e
*/60 */24 * * * mcs backup --parallel 4 -r 14 >> /tmp/cs_backup.log 2>&1
59 23 * * 6 mcs backup -P 8 -r 21 >> /root/cs_backup.log 2>&1
59 23 * * 0-5 mcs backup --incremental auto_most_recent -r 21 >> /root/cs_backup.log 2>&1
59 23 * * 7 mcs backup --incremental auto_most_recent -r 21 >> /root/cs_backup.log 2>&1
mcs backup --storage S3 --backup-bucket s3://my-cs-backups
mcs backup --storage S3 --backup-bucket s3://my-cs-backups --compress pigz --quiet --skip-bucket-data
mcs backup --storage S3 --backup-bucket gs://my-cs-backups --incremental 12-18-2023
mcs backup --storage S3 --backup-bucket s3://my-on-premise-bucket --endpoint-url http://127.0.0.1:8000
scp /tmp/backups/12-03-2024/mysql root@pm2:/tmp/backups/12-03-2024/mysql
scp /tmp/backups/12-03-2024/configs root@pm2:/tmp/backups/12-03-2024/configs
scp /tmp/backups/12-03-2024/mysql root@pm3:/tmp/backups/12-03-2024/mysql
scp /tmp/backups/12-03-2024/configs root@pm3:/tmp/backups/12-03-2024/configs
mcs restore --backup-location /tmp/backups/ --list
# on primary
mcs cluster stop
# on every node
systemctl stop mariadb
systemctl stop mariadb-columnstore-cmapi
mcs restore -l 12-03-2024 -bl /mnt/custom/path/
mcs restore -l 11-21-2024 -bl /mnt/custom/path/ -c pigz
mcs restore -s S3 -bb s3://my-cs-backups -l 12-03-2025
mcs restore -s S3 -bb gs://on-premise-bucket -l 12-03-2025 -url http://127.0.0.1:8000
mcs restore -s S3 -bb s3://my-cs-backups -l 11-21-2022 -nb s3://new-data-bucket -nr us-east-1 -nk AKIAxxxxxxx3FHCADF -ns GGGuxxxxxxxxxxnqa72csk5 -ha
# on primary
mcs cluster stop
# on every node
systemctl stop mariadb
systemctl stop mariadb-columnstore-cmapi
mcs restore -s S3 -bb s3://my-cs-backups -l 11-21-2024
mcs restore -s S3 -bb gs://on-premise-bucket -l 12-03-2021 -url http://127.0.0.1:9000
mcs restore -s S3 -bb s3://my-cs-backups -l 11-21-2022 -nb s3://new-data-bucket -nr us-east-1 -nk AKIAxxxxxxx3FHCADF -ns GGGuxxxxxxxxxxnqa72csk5
mcs dbrm_backup --list
mcs dbrm_backup --mode loop --interval 90 --retention-days 7 --backup-location /mnt/dbrm_backups
mcs dbrm_backup --mode once --retention-days 7 --backup-location /mnt/dbrm_backups -nb my-one-off-backup
mcs dbrm_backup
mcs dbrm_backup -nb before_upgrade_11.21.2024_dbrm_backup
mcs dbrm_restore --list
./mcs_backup_manager.sh dbrm_restore --backup-location /tmp/dbrm_backups --load dbrm_backup_20241203_172842
./mcs_backup_manager.sh dbrm_restore --backup-location /tmp/dbrm_backups --load dbrm_backup_20241203_172842 --no-start
mcs dbrm_restore --backup-location /tmp/dbrm_backups --load <dbrm_backup_folder>
Start and Configure MariaDB MaxScale
Test MariaDB MaxScale
Import Data
S3-Compatible Object Storage Requirements
Preferred Object Storage Providers: Cloud
Preferred Object Storage Providers: Hardware
Shared Local Storage Directories
Shared Local Storage Options
Recommended Storage Options
Ubuntu 20.04 LTS (x86_64)
NFS (Network File System)
On-premises
• NFS is a distributed file system. • If NFS is used, the storage should be mounted with the sync option to ensure that each node flushes its changes immediately. • For on-premises deployments, NFS is the recommended option for the Storage Manager directory, and any S3-compatible storage is the recommended option for data.
<hostname>-bin
Columnar storage engine with S3-compatible object storage
Highly available
Automatic failover via MaxScale and CMAPI
Scales read via MaxScale
Bulk data import
Enterprise Server 10.5, Enterprise ColumnStore 5, MaxScale 2.5
Enterprise Server 10.6, Enterprise ColumnStore 23.02, MaxScale 22.08
Prepare ColumnStore Nodes
Configure Shared Local Storage
Install MariaDB Enterprise Server
Start and Configure MariaDB Enterprise Server
Test MariaDB Enterprise Server
Install MariaDB MaxScale
Modern SQL RDBMS with high availability, pluggable storage engines, hot online backups, and audit logging.
Database proxy that extends the availability, scalability, and security of MariaDB Enterprise Servers
• Columnar storage engine • Highly available • Optimized for Online Analytical Processing (OLAP) workloads • Scalable query execution • Cluster Management API (CMAPI) provides a REST API for multi-node administration.
Listener
Listens for client connections to MaxScale then passes them to the router service
MariaDB Monitor
Tracks changes in the state of MariaDB Enterprise Servers.
Read Connection Router
Routes connections from the listener to any available Enterprise ColumnStore node
Read/Write Split Router
Routes read operations from the listener to any available Enterprise ColumnStore node, and routes write operations from the listener to a specific server that MaxScale uses as the primary server
Server Module
Connection configuration in MaxScale to an Enterprise ColumnStore node
MaxScale node
4+ cores
4+ GB
Enterprise ColumnStore node
4+ cores
4+ GB
MaxScale node
8+ cores
16+ GB
Enterprise ColumnStore node
64+ cores
128+ GB
The ColumnStore Object Storage topology uses shared local storage for the Storage Manager directory to store metadata.
EBS (Elastic Block Store) Multi-Attach
AWS
• EBS is a high-performance block-storage service for AWS (Amazon Web Services). • EBS Multi-Attach allows an EBS volume to be attached to multiple instances in AWS. Only clustered file systems, such as GFS2, are supported. • For deployments in AWS, EBS Multi-Attach is a recommended option for the Storage Manager directory, and Amazon S3 storage is the recommended option for data.
EFS (Elastic File System)
AWS
• EFS is a scalable, elastic, cloud-native NFS file system for AWS (Amazon Web Services). • For deployments in AWS, EFS is a recommended option for the Storage Manager directory, and Amazon S3 storage is the recommended option for data.
Filestore
GCP
• Filestore is high-performance, fully managed storage for GCP (Google Cloud Platform). • For deployments in GCP, Filestore is the recommended option for the Storage Manager directory, and Google Object Storage (S3-compatible) is the recommended option for data.
GlusterFS
On-premises
Configuration File
Configuration files (such as /etc/my.cnf) can be used to set system-variables and options. The server must be restarted to apply changes made to configuration files.
Command-line
The server can be started with command-line options that set system-variables and options.
SQL
Users can set system-variables that support dynamic changes on-the-fly using the SET statement.
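For example, a dynamic system variable can be changed at runtime from the mariadb client; max_connections is used here only as an illustration, and the change does not persist across a restart unless it is also added to a configuration file:
# Change a dynamic system variable at runtime (illustrative example).
$ mariadb -u root -e "SET GLOBAL max_connections = 500;"
# Confirm the new value:
$ mariadb -u root -e "SHOW GLOBAL VARIABLES LIKE 'max_connections';"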
CentOS
Red Hat Enterprise Linux (RHEL)
/etc/my.cnf.d/z-custom-mariadb.cnf
Debian
Ubuntu
/etc/mysql/mariadb.conf.d/z-custom-mariadb.cnf
Start
sudo systemctl start mariadb
Stop
sudo systemctl stop mariadb
Restart
sudo systemctl restart mariadb
Enable during startup
sudo systemctl enable mariadb
Disable during startup
sudo systemctl disable mariadb
Status
sudo systemctl status mariadb
<hostname>.err
server_audit.log
<hostname>-slow.log
Start
sudo systemctl start mariadb-columnstore
Stop
sudo systemctl stop mariadb-columnstore
Restart
sudo systemctl restart mariadb-columnstore
Enable during startup
sudo systemctl enable mariadb-columnstore
Disable during startup
sudo systemctl disable mariadb-columnstore
Status
sudo systemctl status mariadb-columnstore
Start
sudo systemctl start mariadb-columnstore-cmapi
Stop
sudo systemctl stop mariadb-columnstore-cmapi
Restart
sudo systemctl restart mariadb-columnstore-cmapi
Enable during startup
sudo systemctl enable mariadb-columnstore-cmapi
Disable during startup
sudo systemctl disable mariadb-columnstore-cmapi
Status
sudo systemctl status mariadb-columnstore-cmapi
Command-line utility to perform administrative tasks through the REST API. See MaxCtrl Commands.
MaxGUI is a graphical utility that can perform administrative tasks through the REST API.
The REST API can be used directly. For example, the curl utility could be used to make REST API calls from the command-line. Many programming languages also have libraries to interact with REST APIs.
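For example, MaxScale's REST API listens on port 8989 by default, and the default admin account (admin/mariadb) can be used to query it with curl; change these credentials in production:
# List the servers known to MaxScale through its REST API.
$ curl -s -u admin:mariadb http://localhost:8989/v1/servers | jq .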
Start
sudo systemctl start maxscale
Stop
sudo systemctl stop maxscale
Restart
sudo systemctl restart maxscale
Enable during startup
sudo systemctl enable maxscale
Disable during startup
sudo systemctl disable maxscale
Status
sudo systemctl status maxscale

GlusterFS is a distributed file system.
GlusterFS supports replication and failover.
<hostname>.log
Set this system variable to ON.
Set this option to the file you want to use for the Relay Logs. Setting this option enables relay logging.
Set this option to the file you want to use to index Relay Log filenames.
Sets the numeric Server ID for this MariaDB Enterprise Server. The value set on this option must be unique to each node.
iam_role_name, sts_region, and sts_endpoint
To use the IAM role assigned to an EC2 instance, you must uncomment ec2_iam_mode=enabled.
Set this system variable to utf8
Set this system variable to utf8_general_ci
columnstore_use_import_for_batchinsert
Set this system variable to ALWAYS to always use cpimport for LOAD DATA INFILE and INSERT...SELECT statements.
Set this system variable to ON.
Set this option to the file you want to use for the Binary Log. Setting this option enables binary logging.
Set this option to the file you want to use to track binlog filenames.
Apr 30 21:54:35 a1ebc96a2519 PrimProc[1004]: 35.668435 |0|0|0| C 28 CAL0000: Error total memory available is less than 3GB.
ERROR 1815 (HY000): Internal error: System is not ready yet. Please try again.
https://{server}:{port}/cmapi/{version}/{route}/{command}
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/start \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20}' \
| jq .
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/shutdown \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20}' \
| jq .
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.2"}' \
| jq .
$ curl -k -s -X DELETE https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.2"}' \
| jq .
$ sudo systemctl stop mariadb
$ sudo systemctl stop mariadb-columnstore
$ sudo systemctl stop mariadb-columnstore-cmapi
[mariadb]
bind_address = 0.0.0.0
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci
log_bin = mariadb-bin
log_bin_index = mariadb-bin.index
relay_log = mariadb-relay
relay_log_index = mariadb-relay.index
log_slave_updates = ON
gtid_strict_mode = ON
# This must be unique on each Enterprise ColumnStore node
server_id = 1
[ObjectStorage]
…
service = S3
…
[S3]
bucket = your_columnstore_bucket_name
endpoint = your_s3_endpoint
aws_access_key_id = your_s3_access_key_id
aws_secret_access_key = your_s3_secret_key
# iam_role_name = your_iam_role
# sts_region = your_sts_region
# sts_endpoint = your_sts_endpoint
# ec2_iam_mode = enabled
[Cache]
cache_size = your_local_cache_size
path = your_local_cache_path
$ sudo systemctl start mariadb
$ sudo systemctl enable mariadb
$ sudo systemctl stop mariadb-columnstore
$ sudo systemctl start mariadb-columnstore-cmapi
$ sudo systemctl enable mariadb-columnstore-cmapi
CREATE USER 'util_user'@'127.0.0.1'
IDENTIFIED BY 'util_user_passwd';
GRANT SELECT, PROCESS ON *.*
TO 'util_user'@'127.0.0.1';
$ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
$ sudo mcsSetConfig CrossEngineSupport Port 3306
$ sudo mcsSetConfig CrossEngineSupport User util_user
$ sudo mcsSetConfig CrossEngineSupport Password util_user_passwd
CREATE USER 'repl'@'192.0.2.%' IDENTIFIED BY 'repl_passwd';
GRANT REPLICA MONITOR,
REPLICATION REPLICA,
REPLICATION REPLICA ADMIN,
REPLICATION MASTER ADMIN
ON *.* TO 'repl'@'192.0.2.%';
CREATE USER 'mxs'@'192.0.2.%'
IDENTIFIED BY 'mxs_passwd';
GRANT SHOW DATABASES ON *.* TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.columns_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.db TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.procs_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.proxies_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.roles_mapping TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.tables_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.user TO 'mxs'@'192.0.2.%';
GRANT BINLOG ADMIN,
READ_ONLY ADMIN,
RELOAD,
REPLICA MONITOR,
REPLICATION MASTER ADMIN,
REPLICATION REPLICA ADMIN,
REPLICATION REPLICA,
SHOW DATABASES,
SELECT
ON *.* TO 'mxs'@'192.0.2.%';
CHANGE MASTER TO
MASTER_HOST='192.0.2.1',
MASTER_USER='repl',
MASTER_PASSWORD='repl_passwd',
MASTER_USE_GTID=slave_pos;
START REPLICA;
SHOW REPLICA STATUS;
SET GLOBAL read_only=ON;
$ openssl rand -hex 32
93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":120, "node": "192.0.2.1"}' \
| jq .
{
"timestamp": "2020-10-28 00:39:14.672142",
"node_id": "192.0.2.1"
}
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .
{
"timestamp": "2020-12-15 00:40:34.353574",
"192.0.2.1": {
"timestamp": "2020-12-15 00:40:34.362374",
"uptime": 11467,
"dbrm_mode": "master",
"cluster_mode": "readwrite",
"dbroots": [
"1"
],
"module_id": 1,
"services": [
{
"name": "workernode",
"pid": 19202
},
{
"name": "controllernode",
"pid": 19232
},
{
"name": "PrimProc",
"pid": 19254
},
{
"name": "ExeMgr",
"pid": 19292
},
{
"name": "WriteEngine",
"pid": 19316
},
{
"name": "DMLProc",
"pid": 19332
},
{
"name": "DDLProc",
"pid": 19366
}
]
}
}
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":120, "node": "192.0.2.2"}' \
| jq .
{
"timestamp": "2020-10-28 00:42:42.796050",
"node_id": "192.0.2.2"
}
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .
{
"timestamp": "2020-12-15 00:40:34.353574",
"192.0.2.1": {
"timestamp": "2020-12-15 00:40:34.362374",
"uptime": 11467,
"dbrm_mode": "master",
"cluster_mode": "readwrite",
"dbroots": [
"1"
],
"module_id": 1,
"services": [
{
"name": "workernode",
"pid": 19202
},
{
"name": "controllernode",
"pid": 19232
},
{
"name": "PrimProc",
"pid": 19254
},
{
"name": "ExeMgr",
"pid": 19292
},
{
"name": "WriteEngine",
"pid": 19316
},
{
"name": "DMLProc",
"pid": 19332
},
{
"name": "DDLProc",
"pid": 19366
}
]
},
"192.0.2.2": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"192.0.2.3": {
"timestamp": "2020-12-15 00:40:34.428554",
"uptime": 11437,
"dbrm_mode": "slave",
"cluster_mode": "readonly",
"dbroots": [
"2"
],
"module_id": 2,
"services": [
{
"name": "workernode",
"pid": 17789
},
{
"name": "PrimProc",
"pid": 17813
},
{
"name": "ExeMgr",
"pid": 17854
},
{
"name": "WriteEngine",
"pid": 17877
}
]
},
"num_nodes": 3
}
$ sudo yum install policycoreutils policycoreutils-python
$ sudo yum install policycoreutils python3-policycoreutils policycoreutils-python-utils
$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
Nothing to do
$ sudo semodule -i mariadb_local.pp
$ sudo setenforce enforcing
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
$ sudo getenforce
Enforcing
$ sudo systemctl status firewalld
$ sudo systemctl start firewalld
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="3306" protocol="tcp"
accept'
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8600-8630" protocol="tcp"
accept'
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8640" protocol="tcp"
accept'
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8700" protocol="tcp"
accept'
$ sudo firewall-cmd --permanent --add-rich-rule='
rule family="ipv4"
source address="192.0.2.0/24"
destination address="192.0.2.0/24"
port port="8800" protocol="tcp"
accept'
$ sudo firewall-cmd --reload
$ sudo ufw status verbose
$ sudo ufw enable
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 3306 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8600:8630 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8640 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8700 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8800 proto tcp
$ sudo ufw reload
This guide provides steps for deploying MariaDB Enterprise ColumnStore with S3-compatible object storage, setting up the environment, installing the software, and bulk importing data for online analytical processing (OLAP) workloads.
Enterprise Server 10.5
Enterprise Server 10.6
Enterprise Server 11.4
Columnar storage engine with S3-compatible object storage
Highly available
Automatic failover via MaxScale and CMAPI
Scales read via MaxScale
Bulk data import
This procedure describes the deployment of the ColumnStore Object Storage topology with MariaDB Enterprise Server 10.5, MariaDB Enterprise ColumnStore 5, and MariaDB MaxScale 2.5.
MariaDB Enterprise ColumnStore 5 is a columnar storage engine for MariaDB Enterprise Server 10.5. Enterprise ColumnStore is suitable for Online Analytical Processing (OLAP) workloads.
This procedure has 9 steps, which are executed in sequence.
This procedure represents basic product capability and deploys 3 Enterprise ColumnStore nodes and 1 MaxScale node.
This page provides an overview of the topology, requirements, and deployment procedures.
Please read and understand this procedure before executing.
Customers can obtain support by submitting a support case.
The following components are deployed during this procedure:
The MariaDB Enterprise ColumnStore topology with Object Storage delivers production analytics with high availability, fault tolerance, and limitless data storage by leveraging S3-compatible storage.
The topology consists of:
One or more MaxScale nodes
An odd number of ColumnStore nodes (minimum of 3) running ES, Enterprise ColumnStore, and CMAPI
The MaxScale nodes:
Monitor the health and availability of each ColumnStore node using the MariaDB Monitor (mariadbmon)
Accept client and application connections
Route queries to ColumnStore nodes using the Read/Write Split Router (readwritesplit)
The ColumnStore nodes:
Receive queries from MaxScale
Execute queries
Use S3-compatible object storage for data
Use shared local storage for the Storage Manager directory
These requirements are for the ColumnStore Object Storage topology when deployed with MariaDB Enterprise Server 10.5, MariaDB Enterprise ColumnStore 5, and MariaDB MaxScale 2.5.
Node Count
Operating System
Minimum Hardware Requirements
Recommended Hardware Requirements
MaxScale nodes: 1 or more are required.
Enterprise ColumnStore nodes: 3 or more are required for high availability. You should always have an odd number of nodes in a multi-node ColumnStore deployment to avoid split-brain scenarios.
In alignment with the MariaDB Corporation Engineering Policy, the ColumnStore Object Storage topology with MariaDB Enterprise Server 10.5, MariaDB Enterprise ColumnStore 5, and MariaDB MaxScale 2.5 is provided for:
CentOS Linux 7 (x86_64)
Debian 10 (x86_64)
Red Hat Enterprise Linux 7 (x86_64)
Red Hat Enterprise Linux 8 (x86_64)
MariaDB Enterprise ColumnStore's minimum hardware requirements are not intended for production environments, but the minimum hardware requirements can be appropriate for development and test environments. For production environments, see the recommended hardware requirements instead.
The minimum hardware requirements are:
MariaDB Enterprise ColumnStore will refuse to start if the system has less than 3 GB of memory.
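You can confirm the available memory on each node before starting Enterprise ColumnStore, for example:
# Check total system memory; Enterprise ColumnStore requires at least 3 GB.
$ free -h
$ grep MemTotal /proc/meminfo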
If Enterprise ColumnStore is started on a system with less memory, the following error message will be written to the ColumnStore system log called crit.log:
And the following error message will be raised to the client:
MariaDB Enterprise ColumnStore's recommended hardware requirements are intended for production analytics.
The recommended hardware requirements are:
The ColumnStore Object Storage topology requires the following storage types:
The ColumnStore Object Storage topology uses S3-compatible object storage to store data.
Many S3-compatible object storage services exist. MariaDB Corporation cannot make guarantees about all S3-compatible object storage services, because different services provide different functionality.
For the preferred S3-compatible object storage providers that provide cloud and hardware solutions, see the following sections:
The use of non-cloud and non-hardware providers is at your own risk.
If you have any questions about using specific S3-compatible object storage with MariaDB Enterprise ColumnStore, contact us.
Amazon Web Services (AWS) S3
Google Cloud Storage
Azure Storage
Alibaba Cloud Object Storage Service
Cloudian HyperStore
Cohesity S3
Dell EMC
IBM Cloud Object Storage
The ColumnStore Object Storage topology uses shared local storage for the Storage Manager directory to store metadata.
The Storage Manager directory is located at the following path by default:
/var/lib/columnstore/storagemanager
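To confirm that the Storage Manager directory resides on the shared local storage, check the file system that backs it on each node, for example:
# Show the file system and mount point backing the Storage Manager directory.
$ df -hT /var/lib/columnstore/storagemanager
$ findmnt --target /var/lib/columnstore/storagemanager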
The most common shared local storage options for the ColumnStore Object Storage topology are:
For best results, MariaDB Corporation would recommend the following storage options:
Enterprise ColumnStore's CMAPI (Cluster Management API) is a REST API that can be used to manage a multi-node Enterprise ColumnStore cluster.
Many tools are capable of interacting with REST APIs. For example, the curl utility could be used to make REST API calls from the command-line.
Many programming languages also have libraries for interacting with REST APIs.
The examples below show how to use the CMAPI with curl.
For example:
https://mcs1:8640/cmapi/0.4.0/cluster/shutdown
https://mcs1:8640/cmapi/0.4.0/cluster/start
https://mcs1:8640/cmapi/0.4.0/cluster/status
With CMAPI 1.4 and later:
https://mcs1:8640/cmapi/0.4.0/cluster/node
With CMAPI 1.3 and earlier:
https://mcs1:8640/cmapi/0.4.0/cluster/add-node
https://mcs1:8640/cmapi/0.4.0/cluster/remove-node
'x-api-key': '93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd'
'Content-Type': 'application/json'
x-api-key can be set to any value of your choice during the first call to the server. Subsequent connections will require this same key.
curl examples remain valid but are now considered legacy.
$ mcs cluster status
$ mcs cluster start --timeout 20
$ mcs cluster shutdown --timeout 20
With CMAPI 1.4 and later:
With CMAPI 1.3 and earlier:
With CMAPI 1.4 and later:
With CMAPI 1.3 and earlier:
MariaDB Enterprise Server packages are configured to read configuration files from different paths, depending on the operating system. Making custom changes to Enterprise Server default configuration files is not recommended because custom changes may be overwritten by other default configuration files that are loaded later.
To ensure that your custom changes will be read last, create a custom configuration file with the z- prefix in one of the include directories.
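As an illustration, a custom configuration file could be created as follows; the path shown is for CentOS/RHEL (use /etc/mysql/mariadb.conf.d/ on Debian/Ubuntu), and the settings are examples taken from this procedure:
# Create a custom configuration file with the z- prefix so it is read last.
$ sudo tee /etc/my.cnf.d/z-custom-mariadb.cnf <<'EOF'
[mariadb]
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci
EOF
# Restart the server so the configuration file changes take effect:
$ sudo systemctl restart mariadb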
The systemctl command is used to start and stop the MariaDB Enterprise Server service.
For additional information, see "".
MariaDB Enterprise Server produces log data that can be helpful in problem diagnosis.
Log filenames and locations may be overridden in the server configuration. The default location of logs is the data directory. The data directory is specified by the datadir system variable.
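For example, the data directory and error log location can be confirmed from the mariadb client:
# Show where the data directory and error log are located.
$ mariadb -u root -e "SHOW GLOBAL VARIABLES WHERE Variable_name IN ('datadir','log_error');"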
The systemctl command is used to start and stop the ColumnStore service.
In the ColumnStore Object Storage topology, the mariadb-columnstore service should not be enabled. The CMAPI service restarts Enterprise ColumnStore as needed, so it does not need to start automatically upon reboot.
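You can verify that the service will not start automatically at boot, for example:
# Expected output is "disabled"; CMAPI starts ColumnStore when needed.
$ systemctl is-enabled mariadb-columnstore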
The systemctl command is used to start and stop the CMAPI service.
For additional information on endpoints, see "CMAPI".
MaxScale can be configured using several methods. These methods make use of MaxScale's REST API.
The procedure on these pages configures MaxScale using MaxCtrl.
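For example, once MaxScale is running, MaxCtrl can be used to inspect the configured objects:
# Inspect MaxScale objects with MaxCtrl (run on the MaxScale node).
$ maxctrl list servers
$ maxctrl list services
$ maxctrl list monitors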
The systemctl command is used to start and stop the MaxScale service.
For additional information, see "".
Navigation in the procedure "Deploy ColumnStore Object Storage Topology":
Start and Configure MariaDB MaxScale
Test MariaDB MaxScale
Import Data
S3-Compatible Object Storage Requirements
Preferred Object Storage Providers: Cloud
Preferred Object Storage Providers: Hardware
Shared Local Storage Directories
Shared Local Storage Options
Ubuntu 20.04 LTS (x86_64)
Quantum ActiveScale
NFS (Network File System)
On-premises
NFS is a distributed file system.
If NFS is used, the storage should be mounted with the sync option to ensure that each node flushes its changes immediately.
For on-premises deployments, NFS is the recommended option for the Storage Manager directory, and any S3-compatible storage is the recommended option for data.
<hostname>-bin
Enterprise Server 10.5, Enterprise ColumnStore 5, MaxScale 2.5
Enterprise Server 10.6, Enterprise ColumnStore 23.02, MaxScale 22.08
Prepare ColumnStore Nodes
Configure Shared Local Storage
Install MariaDB Enterprise Server
Start and Configure MariaDB Enterprise Server
Test MariaDB Enterprise Server
Install MariaDB MaxScale
Modern SQL RDBMS with high availability, pluggable storage engines, hot online backups, and audit logging.
Database proxy that extends the availability, scalability, and security of MariaDB Enterprise Servers
Columnar storage engine
Highly available
Optimized for Online Analytical Processing (OLAP) workloads
Scalable query execution
Cluster Management API (CMAPI) provides a REST API for multi-node administration
Listener
Listens for client connections to MaxScale then passes them to the router service
MariaDB Monitor
Tracks changes in the state of MariaDB Enterprise Servers.
Read Connection Router
Routes connections from the listener to any available Enterprise ColumnStore node
Read/Write Split Router
Routes read operations from the listener to any available Enterprise ColumnStore node, and routes write operations from the listener to a specific server that MaxScale uses as the primary server
Server Module
Connection configuration in MaxScale to an Enterprise ColumnStore node
MaxScale node
4+ cores
4+ GB
Enterprise ColumnStore node
4+ cores
4+ GB
MaxScale node
8+ cores
16+ GB
Enterprise ColumnStore node
64+ cores
128+ GB
The ColumnStore Object Storage topology uses S3-compatible object storage to store data.
The ColumnStore Object Storage topology uses shared local storage for the Storage Manager directory to store metadata.
EBS (Elastic Block Store) Multi-Attach
AWS
EBS is a high-performance block-storage service for AWS (Amazon Web Services).
EBS Multi-Attach allows an EBS volume to be attached to multiple instances in AWS. Only clustered file systems, such as GFS2, are supported.
For deployments in AWS, EBS Multi-Attach is a recommended option for the Storage Manager directory, and Amazon S3 storage is the recommended option for data.
EFS (Elastic File System)
AWS
EFS is a scalable, elastic, cloud-native NFS file system for AWS (Amazon Web Services).
For deployments in AWS, EFS is a recommended option for the Storage Manager directory, and Amazon S3 storage is the recommended option for data.
Filestore
GCP
Filestore is high-performance, fully managed storage for GCP (Google Cloud Platform).
For deployments in GCP, Filestore is the recommended option for the Storage Manager directory, and Google Object Storage (S3-compatible) is the recommended option for data.
GlusterFS
On-premises
AWS
Amazon S3 storage
EBS Multi-Attach or EFS
GCP
Google Object Storage (S3-compatible)
Filestore
On-premises
Any S3-compatible object storage
NFS
Configuration File
Configuration files (such as /etc/my.cnf) can be used to set system-variables and options. The server must be restarted to apply changes made to configuration files.
Command-line
The server can be started with command-line options that set system-variables and options.
SQL
Users can set system-variables that support dynamic changes on-the-fly using the SET statement.
Distribution
Example Configuration File Path
CentOS
Red Hat Enterprise Linux (RHEL)
/etc/my.cnf.d/z-custom-mariadb.cnf
Debian
Ubuntu
/etc/mysql/mariadb.conf.d/z-custom-mariadb.cnf
Start
sudo systemctl start mariadb
Stop
sudo systemctl stop mariadb
Restart
sudo systemctl restart mariadb
Enable during startup
sudo systemctl enable mariadb
Disable during startup
sudo systemctl disable mariadb
Status
sudo systemctl status mariadb
<hostname>.err
server_audit.log
<hostname>-slow.log
Start
sudo systemctl start mariadb-columnstore
Stop
sudo systemctl stop mariadb-columnstore
Restart
sudo systemctl restart mariadb-columnstore
Enable during startup
sudo systemctl enable mariadb-columnstore
Disable during startup
sudo systemctl disable mariadb-columnstore
Status
sudo systemctl status mariadb-columnstore
Start
sudo systemctl start mariadb-columnstore-cmapi
Stop
sudo systemctl stop mariadb-columnstore-cmapi
Restart
sudo systemctl restart mariadb-columnstore-cmapi
Enable during startup
sudo systemctl enable mariadb-columnstore-cmapi
Disable during startup
sudo systemctl disable mariadb-columnstore-cmapi
Status
sudo systemctl status mariadb-columnstore-cmapi
Command-line utility to perform administrative tasks through the REST API. See MaxCtrl Commands.
MaxGUI is a graphical utility that can perform administrative tasks through the REST API.
The REST API can be used directly. For example, the curl utility could be used to make REST API calls from the command-line. Many programming languages also have libraries to interact with REST APIs.
Start
sudo systemctl start maxscale
Stop
sudo systemctl stop maxscale
Restart
sudo systemctl restart maxscale
Enable during startup
sudo systemctl enable maxscale
Disable during startup
sudo systemctl disable maxscale
Status
sudo systemctl status maxscale


GlusterFS is a distributed file system.
GlusterFS supports replication and failover.
<hostname>.log
Apr 30 21:54:35 a1ebc96a2519 PrimProc[1004]: 35.668435 |0|0|0| C 28 CAL0000: Error total memory available is less than 3GB.
ERROR 1815 (HY000): Internal error: System is not ready yet. Please try again.
https://{server}:{port}/cmapi/{version}/{route}/{command}
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
| jq .
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/start \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20}' \
| jq .
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/shutdown \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20}' \
| jq .
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.2"}' \
| jq .
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/add-node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.2"}' \
| jq .
$ curl -k -s -X DELETE https://mcs1:8640/cmapi/0.4.0/cluster/node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.2"}' \
| jq .
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/remove-node \
--header 'Content-Type:application/json' \
--header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
--data '{"timeout":20, "node": "192.0.2.2"}' \
| jq .
This page is: Copyright © 2025 MariaDB. All rights reserved.