
Deployment

MariaDB ColumnStore Hardware Guide
Installing ColumnStore
Upgrading ColumnStore

Managing ColumnStore

Managing MariaDB ColumnStore involves setup, configuration, and tools such as mcsadmin and cpimport for efficient analytics.

Installing ColumnStore

This section provides instructions for installing and configuring MariaDB ColumnStore. It covers various deployment scenarios, including single- and multi-node setups with both local and S3 storage.

MariaDB ColumnStore Hardware Guide
Single-Node Local Storage
Multi-Node Local Storage
Single-Node S3
Multi-Node S3

Managing ColumnStore Database Environment

Managing MariaDB ColumnStore means deploying its architecture, scaling modules, and maintaining performance through monitoring, optimization, and backups.

Node Maintenance for MariaDB Enterprise ColumnStore

Upgrading ColumnStore

Upgrade Multi-Node MariaDB Enterprise ColumnStore from 6 to 23.10
Major Release Upgrades for MariaDB Enterprise ColumnStore

Backup & Restore

MariaDB ColumnStore backup and restore manage distributed data using snapshots or tools like mariadb-backup, with restoration ensuring cluster sync via cpimport or file system recovery.

View and Clear Table Locks

MariaDB Enterprise ColumnStore acquires table locks for some operations, and it provides utilities to view and clear those locks.

MariaDB Enterprise ColumnStore acquires table locks for some operations, such as:

  • DDL statements

  • DML statements

  • Bulk data loads

If an operation fails, the table lock does not always get released. If you try to access the table, you can see errors like the following:

To solve this problem, MariaDB Enterprise ColumnStore provides two utilities to view and clear the table locks:

  • cleartablelock

  • viewtablelock

Viewing Table Locks

The viewtablelock utility shows table locks currently held by MariaDB Enterprise ColumnStore:

To view all table locks:

To view table locks for a specific table, specify the database and table:

Clearing Table Locks

The cleartablelock utility clears table locks currently held by MariaDB Enterprise ColumnStore.

To clear a table lock, specify the lock ID shown by the viewtablelock utility:
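The two utilities combine naturally: read the lock ID out of the viewtablelock output, then pass it to cleartablelock. A minimal sketch, using the sample output format shown on this page (the parsing logic is an illustration, not part of the utilities):

```shell
# Sample viewtablelock output, matching the format shown on this page.
sample='  Table              LockID  Process   PID    Session   Txn  CreationTime               State    DBRoots
  hq_sales.invoices  1       cpimport  16301  BulkLoad  n/a  Wed April 7 14:20:42 2021  LOADING  1'

# The first row whose second field is numeric carries the lock ID;
# the header row has "LockID" there and is skipped.
lock_id=$(printf '%s\n' "$sample" | awk '$2 ~ /^[0-9]+$/ {print $2; exit}')
echo "$lock_id"    # → 1

# On a live system you would then clear it:
#   cleartablelock "$lock_id"
```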

This page is: Copyright © 2025 MariaDB. All rights reserved.

ColumnStore Table Size Limitations

MariaDB ColumnStore has a hard limit of 4096 columns per table.

However, you are likely to run into other limitations first, including:

  • The row size limit of tables, which varies depending on the storage engine you're using and indirectly limits the number of columns.

  • Size limit of .frm files. Those files hold the column description of tables. Column descriptions vary in length. Once all column descriptions combined reach a length of 64KB, the table's .frm file is full, limiting the number of columns you can have in a table.

Given that, the maximum number of columns a ColumnStore table can effectively have is around 2000 columns.

Step 6: Install MariaDB MaxScale

Step 6: Install MariaDB MaxScale

Overview

This page details step 6 of the 9-step procedure "Deploy ColumnStore Shared Local Storage Topology".

This step installs MariaDB MaxScale 22.08. This topology requires 1 or more MaxScale nodes.


Switchover of the Primary Node

To switchover to a new primary node with Enterprise ColumnStore, perform the following procedure.

Performing Switchover in MaxScale

The primary node can be switched over in MaxScale using maxctrl:

The instructions were tested against ColumnStore 23.10.

Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

Retrieve Customer Download Token

MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.

Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.

To retrieve the token for your account:

  1. Navigate to

  2. Log in.

  3. Copy the Customer Download Token.

Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.

Set Up Repository

  1. On the MaxScale node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):

Install on Debian / Ubuntu (APT):

  2. On the MaxScale node, configure package repositories and specify MariaDB MaxScale 22.08:

Checksums of the various releases of the mariadb_es_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.
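The verification pattern looks like this; the sketch is exercised against a locally created stand-in file so it is self-contained (in practice you verify the downloaded mariadb_es_repo_setup script against the published checksum):

```shell
# Create a stand-in file; in practice this is the downloaded script.
printf 'stand-in for mariadb_es_repo_setup\n' > mariadb_es_repo_setup

# Compute its SHA-256 and verify it the same way the published checksum
# would be verified (note the two spaces between hash and filename).
checksum=$(sha256sum mariadb_es_repo_setup | awk '{print $1}')
echo "${checksum}  mariadb_es_repo_setup" | sha256sum -c -
```

On success, sha256sum prints `mariadb_es_repo_setup: OK`; on mismatch it fails with a non-zero exit status, which is the signal to discard the download.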

Install MaxScale

On the MaxScale node, install MariaDB MaxScale.

Install on CentOS / RHEL (YUM):

Install on Debian / Ubuntu (APT):

Next Step

Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".

This page was step 6 of 9.

Next: Step 7: Start and Configure MariaDB MaxScale.


  • Use maxctrl or another supported REST client.

  • Call a module command using the call command command.

  • As the first argument, provide the name of the module, which is mariadbmon.

  • As the second argument, provide the module command, which is switchover.

  • As the third argument, provide the name of the monitor.

  • For example:

    With the above syntax, MaxScale will choose the most up-to-date replica to be the new primary.

    If you want to manually select a new primary, provide the server name of the new primary as the fourth argument:

    Checking the Replication Status with MaxScale

    MaxScale can check the replication status using maxctrl:

    • List the servers using the list servers command, like this:

    If switchover was properly performed, the State column of the new primary shows Master, Running.


    Step 5: Bulk Import of Data

    Step 5: Bulk Import of Data

    Overview

    This page details step 5 of a 5-step procedure for deploying Single-Node Enterprise ColumnStore with Local storage.

    This step bulk imports data to Enterprise ColumnStore.


    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Import the Schema

    Before data can be imported into the tables, create a matching schema.

    On the primary server, create the schema:

    1. For each database that you are importing, create the database with the CREATE DATABASE statement:

    2. For each table that you are importing, create the table with the CREATE TABLE statement:
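As a sketch, both statements can be staged in a file and applied with the mariadb client; the database and table names here are invented for illustration, and ENGINE=ColumnStore selects the ColumnStore storage engine:

```shell
# Write the schema to a file, then apply it with the mariadb client.
cat > schema.sql <<'EOF'
CREATE DATABASE IF NOT EXISTS inventory;

CREATE TABLE IF NOT EXISTS inventory.products (
    product_id    INT           NOT NULL,
    product_name  VARCHAR(100)  NOT NULL,
    unit_cost     DECIMAL(13,2) NOT NULL
) ENGINE=ColumnStore;
EOF

# Apply it on the primary server:
#   mariadb < schema.sql
```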

    Import the Data

    Enterprise ColumnStore supports multiple methods to import data into ColumnStore tables.

    cpimport

    MariaDB Enterprise ColumnStore includes cpimport, a command-line utility designed to efficiently load data in bulk. Alternative methods are available.

    To import your data from a TSV (tab-separated values) file, run cpimport on the primary server:
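A minimal sketch; the database, table, and file names are invented for illustration, and the cpimport invocation is left commented because it must run on a ColumnStore node:

```shell
# Create a sample tab-separated data file.
printf '1001\twidget\t4.25\n1002\tgadget\t9.99\n' > /tmp/inventory-products.tsv

# Load it into the inventory.products table; -s sets the field
# delimiter (cpimport defaults to '|'):
#   sudo cpimport -s '\t' inventory products /tmp/inventory-products.tsv
```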

    LOAD DATA INFILE

    When data is loaded with the LOAD DATA INFILE statement, MariaDB Enterprise ColumnStore loads the data using cpimport, a command-line utility designed to efficiently load data in bulk. Alternative methods are available.

    To import your data from a TSV (tab-separated values) file, use the LOAD DATA INFILE statement on the primary server:
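As a sketch (table and path names are invented for illustration), the statement can be staged in a file and run with the mariadb client; LOAD DATA INFILE defaults to tab-delimited fields, which matches a TSV file:

```shell
# Stage the statement; LOCAL reads the file from the client host.
cat > load-products.sql <<'EOF'
LOAD DATA LOCAL INFILE '/tmp/inventory-products.tsv'
INTO TABLE inventory.products;
EOF

# Run it on the primary server:
#   mariadb --local-infile inventory < load-products.sql
```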

    Import from Remote Database

    MariaDB Enterprise ColumnStore can also import data directly from a remote database. A simple method is to query the table using the SELECT statement, and then pipe the results into cpimport, a command-line utility designed to efficiently load data in bulk. Alternative methods are available.

    To import your data from a remote MariaDB database:
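A sketch of the pipeline, saved as a helper script; the host, user, and table names are invented for illustration, and the script must be run on the primary ColumnStore node:

```shell
# Save the pipeline as a helper script; review it, then run it on the
# primary ColumnStore node.
cat > import-remote.sh <<'EOF'
#!/bin/sh
# --quick streams rows instead of buffering the whole result set;
# --skip-column-names with tab-separated client output matches
# cpimport -s '\t'.
mariadb --quick --skip-column-names \
    --host=remote-db.example.com --user=import_user --password \
    --execute="SELECT * FROM inventory.products" \
  | cpimport -s '\t' inventory products
EOF
chmod +x import-remote.sh
```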

    Next Step

    Navigation in the Single-Node Enterprise ColumnStore topology with Local storage deployment procedure:

    This page was step 5 of 5.

    This procedure is complete.


    Step 3: Install MariaDB Enterprise Server

    Step 3: Install MariaDB Enterprise Server

    Overview

    This page details step 3 of the 9-step procedure "Deploy ColumnStore Shared Local Storage Topology".

    This step installs MariaDB Enterprise Server, MariaDB Enterprise ColumnStore, CMAPI, and dependencies.


    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Retrieve Download Token

    MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.

    Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.

    To retrieve the token for your account:

    1. Navigate to

    2. Log in.

    3. Copy the Customer Download Token.

    Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.

    Set Up Repository

    1. On each Enterprise ColumnStore node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):

    Install on Debian / Ubuntu (APT):

    2. On each Enterprise ColumnStore node, configure package repositories and specify Enterprise Server:

    Checksums of the various releases of the mariadb_es_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.

    Install Enterprise Server and Enterprise ColumnStore

    1. On each Enterprise ColumnStore node, install additional dependencies:

    Install on CentOS and RHEL (YUM):

    Install on Debian 9 and Ubuntu 18.04 (APT):

    Install on Debian 10 and Ubuntu 20.04 (APT):

    2. On each Enterprise ColumnStore node, install MariaDB Enterprise Server and MariaDB Enterprise ColumnStore:

    Install on CentOS / RHEL (YUM):

    Install on Debian / Ubuntu (APT):

    Next Step

    Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".

    This page was step 3 of 9.


    Rejoining a Node

    To rejoin a node with Enterprise ColumnStore, perform the following procedure.

    Performing Rejoin in MaxScale

    The node can be configured to rejoin in MaxScale using maxctrl:

    • Use maxctrl or another supported REST client.

    • Call a module command using the call command command.

    • As the first argument, provide the name of the module, which is mariadbmon.

    • As the second argument, provide the module command, which is rejoin.

    • As the third argument, provide the name of the monitor.

    • As the fourth argument, provide the name of the server.

    For example:

    Checking Replication Status with MaxScale

    MaxScale can check the replication status using maxctrl:

    • List the servers using the list servers command, like this:

    If the node properly rejoined, the State column of the node shows Slave, Running.


    Step 2: Install Enterprise ColumnStore

    Step 2: Install Enterprise ColumnStore

    Overview

    This page details step 2 of a 5-step procedure for deploying Single-Node Enterprise ColumnStore with Object storage.

    This step installs MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.


    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Retrieve Download Token

    MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.

    Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.

    To retrieve the token for your account:

    1. Navigate to

    2. Log in.

    3. Copy the Customer Download Token.

    Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.

    Set Up Repository

    1. On each Enterprise ColumnStore node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):

    Install on Debian / Ubuntu (APT):

    2. On each Enterprise ColumnStore node, configure package repositories and specify Enterprise Server:

    Checksums of the various releases of the mariadb_es_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.

    Install Enterprise ColumnStore

    Install additional dependencies:

    Install on CentOS / RHEL (YUM)

    Install on Debian 10 and Ubuntu 20.04 (APT):

    Install on Debian 9 and Ubuntu 18.04 (APT):

    Install MariaDB Enterprise Server and MariaDB Enterprise ColumnStore:

    Install on CentOS / RHEL (YUM):

    Install on Debian / Ubuntu (APT):

    Next Step

    Navigation in the Single-Node Enterprise ColumnStore topology with Object storage deployment procedure:

    This page was step 2 of 5.


    Step 1: Prepare Systems for Enterprise ColumnStore Nodes

    Step 1: Prepare Systems for Enterprise ColumnStore Nodes

    Overview

    This page details step 1 of a 5-step procedure for deploying Single-Node Enterprise ColumnStore with Local storage.

    This step prepares the system to host MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.


    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Optimize Linux Kernel Parameters

    MariaDB Enterprise ColumnStore performs best with Linux kernel optimizations.

    On each server to host an Enterprise ColumnStore node, optimize the kernel:

    1. Set the relevant kernel parameters in a sysctl configuration file. To ensure proper change management, use an Enterprise ColumnStore-specific configuration file.

    Create a /etc/sysctl.d/90-mariadb-enterprise-columnstore.conf file:
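For illustration, the file can be staged locally and then installed; the parameter values below are commonly recommended starting points for ColumnStore, not mandates, so tune them for your hardware and network:

```shell
# Stage the sysctl file locally, then install it to /etc/sysctl.d/.
cat > 90-mariadb-enterprise-columnstore.conf <<'EOF'
# minimize swapping
vm.swappiness = 1
# prefer caching directory and inode metadata
vm.vfs_cache_pressure = 10
# raise TCP buffer limits for large intra-cluster transfers
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
EOF

# Install and apply (requires root):
#   sudo install -m 0644 90-mariadb-enterprise-columnstore.conf /etc/sysctl.d/
#   sudo sysctl --load=/etc/sysctl.d/90-mariadb-enterprise-columnstore.conf
```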

    2. Use the sysctl command to set the kernel parameters at runtime:

    Temporarily Configure Linux Security Modules (LSM)

    The Linux Security Modules (LSM) should be temporarily disabled on each Enterprise ColumnStore node during installation.

    The LSM will be configured and re-enabled later in this deployment procedure.

    The steps to disable the LSM depend on the specific LSM used by the operating system.

    CentOS / RHEL Stop SELinux

    SELinux must be set to permissive mode before installing MariaDB Enterprise ColumnStore.

    To set SELinux to permissive mode:

    1. Set SELinux to permissive mode:

    2. Make the change persistent by setting SELINUX=permissive in /etc/selinux/config.

    For example, the file will usually look like this after the change:
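A typical /etc/selinux/config after the change; the SELINUXTYPE value varies by distribution and is shown here only as an illustration:

```ini
# This file controls the state of SELinux on the system.
SELINUX=permissive
SELINUXTYPE=targeted
```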

    3. Confirm that SELinux is in permissive mode:

    SELinux will be configured and re-enabled later in this deployment procedure. This configuration is not persistent. If you restart the server before configuring and re-enabling SELinux later in the deployment procedure, you must reset the enforcement to permissive mode.

    Debian / Ubuntu AppArmor

    AppArmor must be disabled before installing MariaDB Enterprise ColumnStore.

    1. Disable AppArmor:

    2. Reboot the system.

    3. Confirm that no AppArmor profiles are loaded using aa-status:

    AppArmor will be configured and re-enabled later in this deployment procedure.

    Configure Character Encoding

    When using MariaDB Enterprise ColumnStore, it is recommended to set the system's locale to UTF-8.

    1. On RHEL 8, install additional dependencies:

    2. Set the system's locale to en_US.UTF-8 by executing localedef:

    Next Step

    Navigation in the Single-Node Enterprise ColumnStore topology with Local storage deployment procedure:

    This page was step 1 of 5.


    Major Release Upgrades for MariaDB Enterprise ColumnStore

    This page provides a major release upgrade procedure for MariaDB Enterprise ColumnStore. A major release upgrade is an upgrade from an older major release to a newer major release, such as an upgrade from MariaDB Enterprise ColumnStore 5 to MariaDB Enterprise ColumnStore 22.08.

    Compatibility

    • Enterprise ColumnStore 5

    • Enterprise ColumnStore 6

    • Enterprise ColumnStore 22.08

    Prerequisites

    This procedure assumes that the new Enterprise ColumnStore version will be installed onto new servers.

    To reuse existing servers for the new Enterprise ColumnStore version, you must adapt the procedure detailed below. After step 1, confirm all data has been backed-up and verify backups. The old version of Enterprise ColumnStore should then be uninstalled, and all Enterprise ColumnStore files should be deleted before continuing with step 2.

    Step 1: Backup/Export Schemas and Data

    On the old ColumnStore cluster, perform a full backup.

    MariaDB recommends backing up the table schemas to a single SQL file and backing up the table data to table-specific CSV files.

    1. For each table, obtain the table's schema by executing the SHOW CREATE TABLE statement:

      Back up the table schemas by copying the output to an SQL file. This procedure assumes that the SQL file is named schema-backup.sql.

    2. For each table, back up the table data to a CSV file using the SELECT ... INTO OUTFILE statement:
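A sketch of both statements staged in one file; hq_sales.invoices is used as an illustrative table name (it appears in the lock-listing samples on this page), and the output path is invented:

```shell
# Dump a table's schema and data from the old cluster.
cat > backup-commands.sql <<'EOF'
-- Schema: copy this statement's output into schema-backup.sql
SHOW CREATE TABLE hq_sales.invoices;

-- Data: write the rows to a CSV file on the server
SELECT * FROM hq_sales.invoices
INTO OUTFILE '/tmp/hq_sales.invoices.csv'
  FIELDS TERMINATED BY ',' ENCLOSED BY '"';
EOF

# Run on the old cluster:
#   mariadb < backup-commands.sql
```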

    Step 2: Install New Major Release

    On the new ColumnStore cluster, follow the deployment instructions of the desired topology for the new ColumnStore version.

    For deployment instructions, see "".

    Step 3: Restore/Import Data

    On the new ColumnStore cluster, restore the table schemas and data.

    1. Restore the schema backup using the mariadb client:

      • HOST and PORT should refer to the following:

    Step 4: Test

    On the new ColumnStore cluster, verify that the table schemas and data have been restored.

    1. For each table, verify the table's definition by executing the SHOW CREATE TABLE statement:

    2. For each table, verify the number of rows in the table by executing SELECT COUNT(*):

    3. For each table, verify the data in the table by executing a SELECT statement.


    Step 3: Install MariaDB Enterprise Server

    Step 3: Install MariaDB Enterprise Server

    Overview

    This page details step 3 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".

    This step installs MariaDB Enterprise Server, MariaDB Enterprise ColumnStore, CMAPI, and dependencies.


    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Retrieve Download Token

    MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.

    Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.

    To retrieve the token for your account:

    1. Navigate to

    2. Log in.

    3. Copy the Customer Download Token.

    Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.

    Set Up Repository

    1. On each Enterprise ColumnStore node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):

    Install on Debian / Ubuntu (APT):

    2. On each Enterprise ColumnStore node, configure package repositories and specify Enterprise Server:

    Checksums of the various releases of the mariadb_es_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.

    Install Enterprise Server and Enterprise ColumnStore

    1. On each Enterprise ColumnStore node, install additional dependencies:

    Install on CentOS and RHEL (YUM):

    Install on Debian 9 and Ubuntu 18.04 (APT):

    Install on Debian 10 and Ubuntu 20.04 (APT):

    2. On each Enterprise ColumnStore node, install MariaDB Enterprise Server and MariaDB Enterprise ColumnStore:

    Install on CentOS / RHEL (YUM):

    Install on Debian / Ubuntu (APT):

    Next Step

    Navigation in the procedure "Deploy ColumnStore Object Storage Topology".

    This page was step 3 of 9.


    Step 6: Install MariaDB MaxScale

    Step 6: Install MariaDB MaxScale

    Overview

    This page details step 6 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".

    This step installs MariaDB MaxScale 22.08.


    The instructions were tested against ColumnStore 23.10.

    ColumnStore Object Storage requires 1 or more MaxScale nodes.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Retrieve Customer Download Token

    MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.

    Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.

    To retrieve the token for your account:

    1. Navigate to

    2. Log in.

    3. Copy the Customer Download Token.

    Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.

    Set Up Repository

    1. On the MaxScale node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):

    Install on Debian / Ubuntu (APT):

    2. On the MaxScale node, configure package repositories and specify MariaDB MaxScale 22.08:

    Checksums of the various releases of the mariadb_es_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.

    Install MaxScale

    On the MaxScale node, install MariaDB MaxScale.

    Install on CentOS / RHEL (YUM):

    Install on Debian / Ubuntu (APT):

    Next Step

    Navigation in the procedure "Deploy ColumnStore Object Storage Topology":

    This page was step 6 of 9.



    Backup and Restore Overview

    Overview

    MariaDB Enterprise ColumnStore supports backup and restore.

    System of Record

    Before you determine a backup strategy for your Enterprise ColumnStore deployment, it is a good idea to determine the system of record for your Enterprise ColumnStore data.

    A system of record is the authoritative data source for a given piece of information. Organizations often store duplicate information in several systems, but only a single system can be the authoritative data source.

    Enterprise ColumnStore is designed to handle analytical processing for OLAP, data warehousing, DSS, and hybrid workloads on very large data sets. Analytical processing does not generally happen on the system of record. Instead, analytical processing generally occurs on a specialized database that is loaded with data from the separate system of record. Additionally, very large data sets can be difficult to back up. Therefore, it may be beneficial to only backup the system of record.

    If Enterprise ColumnStore is not acting as the system of record for your data, you should determine how the system of record affects your backup plan:

    • If your system of record is another database server, you should ensure that the other database server is properly backed up and that your organization has procedures to reload Enterprise ColumnStore from the other database server.

    • If your system of record is a set of data files, you should ensure that the set of data files is properly backed up and that your organization has procedures to reload Enterprise ColumnStore from the set of data files.

    Full Backup and Restore

    MariaDB Enterprise ColumnStore supports full backup and restore for all storage types. A full backup includes:

    • Enterprise ColumnStore's data and metadata

    With S3: an S3 snapshot of the S3 bucket, plus a file system snapshot or copy of the Storage Manager directory. Without S3: a file system snapshot or copy of the DB Root directories.

    • The MariaDB data directory from the primary node

    To see the procedure to perform a full backup and restore, choose the storage type:



    Step 3: Start and Configure Enterprise ColumnStore

    Step 3: Start and Configure Enterprise ColumnStore

    Overview

    This page details step 3 of a 5-step procedure for deploying .

    This step starts and configures MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.


    Step 2: Install Enterprise ColumnStore

    Step 2: Install Enterprise ColumnStore

    Overview

    This page details step 2 of a 5-step procedure for deploying .

    This step installs MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.


    Step 9: Import Data

    Step 9: Import Data

    Overview

    This page details step 9 of the 9-step procedure "Deploy ColumnStore Shared Local Storage Topology".

    This step bulk imports data to Enterprise ColumnStore.


    Step 4: Test Enterprise ColumnStore

    Overview

    This page details step 4 of a 5-step procedure for deploying .

    This step tests MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.


    Step 5: Bulk Import of Data

    Step 5: Bulk Import of Data

    Overview

    This page details step 5 of a 5-step procedure for deploying Single-Node Enterprise ColumnStore with Object storage.

    This step bulk imports data to Enterprise ColumnStore.


    Extent Map Backup & Recovery

    Overview

    MariaDB ColumnStore uses an Extent Map to manage data distribution across extents, which are logical blocks within physical segment files ranging from 8 to 64 MB. Each extent holds a consistent number of rows. The Extent Map catalogs these extents, their corresponding logical block identifiers (LBIDs), and the minimum and maximum values of each column's data within the extent.

    The primary node maintains the master copy of the Extent Map. Upon system startup, this map is loaded into memory and propagated to other nodes for redundancy and quick access. Corruption of the master Extent Map can render the system unusable and lead to data loss.

    Setting a Node to Maintenance Mode

    To set a node to maintenance mode with Enterprise ColumnStore, perform the following procedure.

    Setting the Server State in MaxScale

    The server object for the node can be set to maintenance mode in MaxScale using maxctrl:

    $ sudo yum install curl
    $ sudo apt install curl apt-transport-https
    $ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
    
    $ echo "${checksum}  mariadb_es_repo_setup" \
           | sha256sum -c -
    
    $ chmod +x mariadb_es_repo_setup
    
    $ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
          --skip-server \
          --skip-tools \
          --mariadb-maxscale-version="22.08"
    $ sudo yum install maxscale
    $ sudo apt install maxscale
    maxctrl call command \
       mariadbmon \
       switchover \
       mcs_monitor
    maxctrl call command \
       mariadbmon \
       switchover \
       mcs_monitor \
       mcs2
    maxctrl list servers
    ERROR 1815 (HY000): Internal error: CAL0009: Drop table failed due to IDB-2009: Unable to perform the drop table operation because cpimport with PID 16301 is currently holding the table lock for session -1.
    viewtablelock
     There is 1 table lock
    
      Table                     LockID  Process   PID    Session   Txn  CreationTime               State    DBRoots
      hq_sales.invoices         1       cpimport  16301  BulkLoad  n/a  Wed April 7 14:20:42 2021  LOADING  1
    viewtablelock hq_sales invoices
     There is 1 table lock
    
      Table                     LockID  Process   PID    Session   Txn  CreationTime               State    DBRoots
      hq_sales.invoices         1       cpimport  16301  BulkLoad  n/a  Wed April 7 14:20:42 2021  LOADING  1
    cleartablelock 1

    Copy the SQL file containing the table schemas and the CSV files containing the table data to the primary node of the new ColumnStore cluster.

  • If you are connecting with MaxScale as a proxy, they should refer to the host and port of the MaxScale listener
  • If you are connecting directly to a multi-node ColumnStore cluster, they should refer to the host and port of the primary ColumnStore node

  • If you are connecting directly to single-node ColumnStore, they should refer to the host and port of the ColumnStore node

  • When the command is executed, mariadb client prompts for the user password

  • For each table, restore the data from the table's CSV file by executing the cpimport utility on the primary ColumnStore node:

  • If the table is very large, you can limit the number of rows in the result set by adding a LIMIT clause:

    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Configure Enterprise ColumnStore

    Mandatory system variables and options for Single-Node Enterprise ColumnStore include:

    • character_set_server: set this system variable to utf8

    • collation_server: set this system variable to utf8_general_ci

    • columnstore_use_import_for_batchinsert: set this system variable to ALWAYS to always use cpimport for LOAD DATA INFILE and INSERT ... SELECT statements.

    Example Configuration
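A minimal server configuration reflecting the mandatory variables; this is an illustration to merge into your existing configuration file, not a complete configuration:

```ini
[mariadb]
character_set_server                   = utf8
collation_server                       = utf8_general_ci
columnstore_use_import_for_batchinsert = ALWAYS
```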

    Start the Enterprise ColumnStore Services

    Start and enable the MariaDB Enterprise Server service, so that it starts automatically upon reboot:

    Start and enable the MariaDB Enterprise ColumnStore service, so that it starts automatically upon reboot:

    Create the Utility User

    Enterprise ColumnStore requires a mandatory utility user account. By default, it connects to the server using the root user with no password. MariaDB Enterprise Server 10.6 will reject this login attempt by default, so you will need to configure Enterprise ColumnStore to use a different user account and password and create this user account on Enterprise Server.

1. On the Enterprise ColumnStore node, create the user account with the CREATE USER statement:

2. On the Enterprise ColumnStore node, grant the user account SELECT and PROCESS privileges on all databases with the GRANT statement:

3. Configure Enterprise ColumnStore to use the utility user:

4. Set the password:

For details about how to encrypt the password, see "Credentials Management for MariaDB Enterprise ColumnStore".

    Passwords should meet your organization's password policies. If your MariaDB Enterprise Server instance has a password validation plugin installed, then the password should also meet the configured requirements.

    Configure Linux Security Modules (LSM)

    The specific steps to configure the security module depend on the operating system.

    Configure SELinux (CentOS, RHEL)

    Configure SELinux for Enterprise ColumnStore:

1. Install the packages required for audit2allow. On CentOS 7 and RHEL 7, install the following:

On RHEL 8, install the following:

2. Allow the system to run under load for a while to generate SELinux audit events.

3. After the system has taken some load, generate an SELinux policy from the audit events using audit2allow:

If no audit events were found, this will print the following:

4. If audit events were found, load the new SELinux policy using semodule:

5. Set SELinux to enforcing mode by setting SELINUX=enforcing in /etc/selinux/config.

For example, the file will usually look like this after the change:

6. Set SELinux to enforcing mode at runtime:

    Configure AppArmor (Ubuntu)

For information on how to create a profile, see How to create an AppArmor Profile on ubuntu.com.

    Next Step

    Navigation in the Single-Node Enterprise ColumnStore topology with Local storage deployment procedure:

    This page was step 3 of 5.

    Next: Step 4: Test MariaDB Enterprise ColumnStore.

    This page is: Copyright © 2025 MariaDB. All rights reserved.

    Single-Node Enterprise ColumnStore with Local storage
    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Retrieve Download Token

    MariaDB Corporation provides package repositories for CentOS / RHEL (YUM) and Debian / Ubuntu (APT). A download token is required to access the MariaDB Enterprise Repository.

    Customer Download Tokens are customer-specific and are available through the MariaDB Customer Portal.

    To retrieve the token for your account:

1. Navigate to https://customers.mariadb.com/downloads/token/

    2. Log in.

    3. Copy the Customer Download Token.

    Substitute your token for CUSTOMER_DOWNLOAD_TOKEN when configuring the package repositories.

    Set Up Repository

1. On each Enterprise ColumnStore node, install the prerequisites for downloading the software from the Web. Install on CentOS / RHEL (YUM):

Install on Debian / Ubuntu (APT):

2. On each Enterprise ColumnStore node, configure package repositories and specify Enterprise Server:

    Checksums of the various releases of the mariadb_es_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.
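The verification step can be sketched as follows. This is demonstrated against a throwaway file so it can be run anywhere; in the real procedure the file is the downloaded mariadb_es_repo_setup script, and ${checksum} is the published value from the checksum list, not one you compute yourself.

```shell
# Sketch of the checksum verification, on a throwaway file.
workdir=$(mktemp -d)
printf 'example script contents\n' > "$workdir/mariadb_es_repo_setup"

# Stand-in for the published checksum value.
checksum=$(sha256sum "$workdir/mariadb_es_repo_setup" | awk '{print $1}')

# sha256sum -c reads "<checksum>  <filename>" pairs from stdin and exits
# non-zero if any file fails verification.
( cd "$workdir" && echo "${checksum}  mariadb_es_repo_setup" | sha256sum -c - )
# prints: mariadb_es_repo_setup: OK
```

Note the two spaces between the checksum and the filename; that is the separator sha256sum's check format expects.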

    Install Enterprise ColumnStore

    Install additional dependencies:

Install on CentOS / RHEL (YUM):

Install on Debian 10 and Ubuntu 20.04 (APT):

Install on Debian 9 and Ubuntu 18.04 (APT):

    Install MariaDB Enterprise Server and MariaDB Enterprise ColumnStore:

    Install on CentOS / RHEL (YUM):

    Install on Debian / Ubuntu (APT):

    Next Step

    Navigation in the Single-Node Enterprise ColumnStore topology with Local storage deployment procedure:

    This page was step 2 of 5.

    Next: Step 3: Start and Configure MariaDB Enterprise ColumnStore.


    Single-Node Enterprise ColumnStore with Local storage
    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Import the Schema

    Before data can be imported into the tables, create a matching schema.

    On the primary server, create the schema:

1. For each database that you are importing, create the database with the CREATE DATABASE statement:

2. For each table that you are importing, create the table with the CREATE TABLE statement:

    Import the Data

    Enterprise ColumnStore supports multiple methods to import data into ColumnStore tables.

| Interface       | Method                     | Benefits                                                                 |
| --------------- | -------------------------- | ------------------------------------------------------------------------ |
| Shell           | cpimport                   | SQL access is not required                                               |
| SQL             | LOAD DATA INFILE           | Shell access is not required                                             |
| Remote Database | SELECT piped into cpimport | Use a normal database client; avoid dumping data to an intermediate file |

    cpimport

MariaDB Enterprise ColumnStore includes cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.

To import your data from a TSV (tab-separated values) file, on the primary server run cpimport:

    LOAD DATA INFILE

When data is loaded with the LOAD DATA INFILE statement, MariaDB Enterprise ColumnStore loads the data using cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.

To import your data from a TSV (tab-separated values) file, on the primary server use the LOAD DATA INFILE statement:

    Import from Remote Database

MariaDB Enterprise ColumnStore can also import data directly from a remote database. A simple method is to query the table using the SELECT statement, and then pipe the results into cpimport, which is a command-line utility that is designed to efficiently load data in bulk. Alternative methods are available.

    To import your data from a remote MariaDB database:

    Next Step

    Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".

    This page was step 9 of 9.

    This procedure is complete.


    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Test Local Connection

Connect to the server with MariaDB Client, using the root@localhost user account:

    Test ColumnStore Plugin Status

    Query and confirm that the ColumnStore storage engine plugin is ACTIVE:

    Test ColumnStore Table Creation

1. Create a test database, if it does not exist:

2. Create a ColumnStore table:

3. Insert sample data into the table:

4. Read data from the table:

    Test Cross Engine Join

1. Create an InnoDB table:

2. Add data to the table:

3. Perform a cross-engine join:

    Next Step

    Navigation in the Single-Node Enterprise ColumnStore topology with Local storage deployment procedure:

    This page was step 4 of 5.

    Next: Step 5: Bulk Import of Data.


    Single-Node Enterprise ColumnStore with Local storage
    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Import the Schema

    Before data can be imported into the tables, create a matching schema.

    On the primary server, create the schema:

1. For each database that you are importing, create the database with the CREATE DATABASE statement:

2. For each table that you are importing, create the table with the CREATE TABLE statement:

    Import the Data

    Enterprise ColumnStore supports multiple methods to import data into ColumnStore tables.

    cpimport

MariaDB Enterprise ColumnStore includes cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.

To import your data from a TSV (tab-separated values) file, on the primary server run cpimport:

    LOAD DATA INFILE

When data is loaded with the LOAD DATA INFILE statement, MariaDB Enterprise ColumnStore loads the data using cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.

To import your data from a TSV (tab-separated values) file, on the primary server use the LOAD DATA INFILE statement:

    Import from Remote Database

MariaDB Enterprise ColumnStore can also import data directly from a remote database. A simple method is to query the table using the SELECT statement, and then pipe the results into cpimport, which is a command-line utility that is designed to efficiently load data in bulk. Alternative methods are available.

    To import your data from a remote MariaDB database:

    Next Step

    Navigation in the Single-Node Enterprise ColumnStore topology with Object storage deployment procedure:

    This page was step 5 of 5.

    This procedure is complete.


    Purpose

    ColumnStore's extent map is a smart structure that underpins its performance. By providing a logical partitioning scheme, it avoids the overhead associated with indexing and other common row-based database optimizations.

    The primary node in a ColumnStore cluster holds the master copy of the extent map. Upon system startup, this master copy is read into memory and then replicated to all other participating nodes for high availability and disaster recovery. Nodes keep the extent map in memory for rapid access during query processing. As data within extents is modified, these updates are broadcast to all participating nodes to maintain consistency.

    If the master copy of the extent map becomes corrupted, the entire system could become unusable, potentially leading to data loss. Having a recent backup of the extent map allows for a much faster recovery compared to reloading the entire database in such a scenario.

    Backup Procedure


    Note: MariaDB recommends implementing regular backups to ensure data integrity and recovery. A common default is to back up every 3 hours and retain backups for at least 10 days.

    To safeguard against potential Extent Map corruption, regularly back up the master copy:

1. Lock tables:

2. Save the BRM:

3. Create a backup directory:

4. Copy the extent map:

5. Unlock tables:
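The 3-hour/10-day schedule in the note above implies a retention sweep as well. This is a hedged sketch: the /extent_map_backup directory and BRM_saves_em naming follow this procedure, the cron scheduling is left out, and the demonstration runs against a temporary directory so it is safe to execute.

```shell
# Retention sweep sketch: delete extent map backups older than 10 days.
# In practice backup_dir would be /extent_map_backup, run from cron.
backup_dir=$(mktemp -d)

touch -d '11 days ago' "$backup_dir/BRM_saves_em.old"   # outside the window
touch "$backup_dir/BRM_saves_em.new"                    # fresh backup

# -mtime +10 matches files last modified more than 10 full days ago.
find "$backup_dir" -name 'BRM_saves_em*' -mtime +10 -delete

ls "$backup_dir"   # only BRM_saves_em.new remains
```

Timestamping each copy (rather than overwriting one file) is what makes a retention window like this meaningful.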

    Recovery Procedures

    Single-Node System

1. Stop ColumnStore:

2. Rename Corrupted Map:

3. Clear Versioning Files:

4. Restore Backup:

5. Set Ownership:

6. Start ColumnStore:

    Clustered System

1. Shutdown Cluster:

2. Rename Corrupted Map:

3. Clear Versioning Files:

4. Restore Backup:

5. Set Ownership:

6. Start Cluster:

    Automation Recommendation

    Incorporate the save_brm command into your data import scripts (e.g., those using cpimport) to automate Extent Map backups. This practice ensures regular backups without manual intervention.
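A hypothetical wrapper following this recommendation is sketched below. Only cpimport and save_brm are commands from this page; the table names and the DRY_RUN switch (used so the sketch merely prints its steps, since the real commands require a ColumnStore node) are illustrative.

```shell
# Hypothetical import wrapper: bulk load, then immediately back up the
# extent map. DRY_RUN=1 prints each step instead of executing it.
run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

import_and_backup() {
    db=$1; table=$2; tsv=$3
    run cpimport -s '\t' "$db" "$table" "$tsv"   # bulk load the TSV
    run save_brm                                 # flush the BRM to disk
    run cp -f /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_em \
        /extent_map_backup/                      # keep a copy of the extent map
}

DRY_RUN=1 import_and_backup inventory products /tmp/inventory-products.tsv
```

Running save_brm before copying ensures the on-disk BRM_saves_em reflects the extents just created by the load.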

Refer to the MariaDB ColumnStore Backup Script for an example implementation.


• Use maxctrl or another supported REST client.
  • Set the server object to maintenance mode using the set server command.

  • As the first argument, provide the name for the server.

  • As the second argument, provide maintenance as the state.

  • For example:

    If the specified server is a primary server, then MaxScale will allow open transactions to complete before closing any connections.

    If you would like MaxScale to immediately close all connections, the --force option can be provided as a third argument:

    Confirming Maintenance Mode is Set with MaxScale

Confirm the state of the server object in MaxScale using maxctrl:

    • List the servers using the list servers command, like this:

    If the node is properly in maintenance mode, then the State column will show Maintenance as one of the states.
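As a minimal sketch, this check can be scripted. The helper and the sample line below are illustrative, imitating maxctrl's tabular output for a server named mcs3; in practice you would pipe the real `maxctrl list servers` output into the helper.

```shell
# Minimal check for the Maintenance state, assuming maxctrl's tabular output.
server_in_maintenance() {
    # $1 = server name; stdin = output of: maxctrl list servers
    grep "$1" | grep -q 'Maintenance'
}

sample='| mcs3 | 192.0.2.103 | 3306 | 0 | Maintenance, Running |'
echo "$sample" | server_in_maintenance mcs3 && echo 'mcs3 is in maintenance'
```

A non-zero exit status from the helper means the server's State column does not mention Maintenance.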

    Performing Maintenance

    Now that the server is in maintenance mode in MaxScale, you can perform your maintenance.

    While the server is in maintenance mode:

    • MaxScale doesn't route traffic to the node.

    • MaxScale doesn't select the node to be primary during failover.

    • The node can be rebooted.

    • The node's services can be restarted.

    Clear the Server State in MaxScale

Maintenance mode for the server object for the node can be cleared in MaxScale using maxctrl:

• Use maxctrl or another supported REST client.

    • Clear the server object's state using the clear server command.

    • As the first argument, provide the name for the server.

    • As the second argument, provide maintenance as the state.

    For example:

    Confirming Maintenance Mode is Cleared with MaxScale

Confirm the state of the server object in MaxScale using maxctrl:

    • List the servers using the list servers command, like this:

    If the node is no longer in maintenance mode, the State column no longer shows Maintenance as one of the states.




    Step 9: Import Data

    Overview

    This page details step 9 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".

    This step bulk imports data to Enterprise ColumnStore.


    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Import the Schema

    Before data can be imported into the tables, create a matching schema.

    On the primary server, create the schema:

1. For each database that you are importing, create the database with the CREATE DATABASE statement:

2. For each table that you are importing, create the table with the CREATE TABLE statement:

    Import the Data

    Enterprise ColumnStore supports multiple methods to import data into ColumnStore tables.

| Interface       | Method                     | Benefits                                                                 |
| --------------- | -------------------------- | ------------------------------------------------------------------------ |
| Shell           | cpimport                   | SQL access is not required                                               |
| SQL             | LOAD DATA INFILE           | Shell access is not required                                             |
| Remote Database | SELECT piped into cpimport | Use a normal database client; avoid dumping data to an intermediate file |

    cpimport

MariaDB Enterprise ColumnStore includes cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.

To import your data from a TSV (tab-separated values) file, on the primary server run cpimport:

    LOAD DATA INFILE

When data is loaded with the LOAD DATA INFILE statement, MariaDB Enterprise ColumnStore loads the data using cpimport, which is a command-line utility designed to efficiently load data in bulk. Alternative methods are available.

To import your data from a TSV (tab-separated values) file, on the primary server use the LOAD DATA INFILE statement:

    Import from Remote Database

MariaDB Enterprise ColumnStore can also import data directly from a remote database. A simple method is to query the table using the SELECT statement, and then pipe the results into cpimport, which is a command-line utility that is designed to efficiently load data in bulk. Alternative methods are available.

    To import your data from a remote MariaDB database:

    Next Step

    Navigation in the procedure "Deploy ColumnStore Object Storage Topology":

    This page was step 9 of 9.

    This procedure is complete.


    Step 1: Prepare Systems for Enterprise ColumnStore Nodes


    Overview

    This page details step 1 of a 5-step procedure for deploying Single-Node Enterprise ColumnStore with Object storage.

    This step prepares the system to host MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.


    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Optimize Linux Kernel Parameters

    MariaDB Enterprise ColumnStore performs best with Linux kernel optimizations.

    On each server to host an Enterprise ColumnStore node, optimize the kernel:

1. Set the relevant kernel parameters in a sysctl configuration file. To ensure proper change management, use an Enterprise ColumnStore-specific configuration file.

Create a /etc/sysctl.d/90-mariadb-enterprise-columnstore.conf file:

2. Use the sysctl command to set the kernel parameters at runtime:

    Temporarily Configure Linux Security Modules (LSM)

    The Linux Security Modules (LSM) should be temporarily disabled on each Enterprise ColumnStore node during installation.

    The LSM will be configured and re-enabled later in this deployment procedure.

    The steps to disable the LSM depend on the specific LSM used by the operating system.

    CentOS / RHEL Stop SELinux

    SELinux must be set to permissive mode before installing MariaDB Enterprise ColumnStore.

    To set SELinux to permissive mode:

1. Set SELinux to permissive mode:

2. Set SELinux to permissive mode by setting SELINUX=permissive in /etc/selinux/config.

For example, the file will usually look like this after the change:

3. Confirm that SELinux is in permissive mode:

    SELinux will be configured and re-enabled later in this deployment procedure. This configuration is not persistent. If you restart the server before configuring and re-enabling SELinux later in the deployment procedure, you must reset the enforcement to permissive mode.

    Debian / Ubuntu AppArmor

    AppArmor must be disabled before installing MariaDB Enterprise ColumnStore.

1. Disable AppArmor:

2. Reboot the system.

3. Confirm that no AppArmor profiles are loaded using aa-status:

    AppArmor will be configured and re-enabled later in this deployment procedure.

    Configure Character Encoding

    When using MariaDB Enterprise ColumnStore, it is recommended to set the system's locale to UTF-8.

1. On RHEL 8, install additional dependencies:

2. Set the system's locale to en_US.UTF-8 by executing localedef:

    Create an S3 Bucket

    If you want to use S3-compatible storage, it is important to create the S3 bucket before you start ColumnStore. If you already have an S3 bucket, confirm that the bucket is empty.

    S3 bucket configuration will be performed later in this procedure.
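The "bucket exists and is empty" precondition can be scripted. This is a hedged sketch: the bucket name is a placeholder, the AWS CLI call appears only in a comment (any S3-compatible CLI whose listing is empty for an empty bucket behaves the same), and the check itself runs on captured output.

```shell
# Sketch of the "bucket is empty" check. An empty bucket produces no
# listing output, so the test reduces to checking for an empty string.
bucket_is_empty() {
    # $1 = captured listing, e.g.: listing=$(aws s3 ls "s3://$BUCKET")
    [ -z "$1" ]
}

listing=''   # sample: what an empty bucket's listing looks like
if bucket_is_empty "$listing"; then echo 'bucket is empty'; fi
```

Failing this check before starting ColumnStore is cheaper than discovering a non-empty bucket after Storage Manager has begun writing to it.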

    Next Step

    Navigation in the Single-Node Enterprise ColumnStore topology with Object storage deployment procedure:

    This page was step 1 of 5.

Next: Step 2: Install MariaDB Enterprise ColumnStore.


    Step 4: Test Enterprise ColumnStore


    Overview

    This page details step 4 of a 5-step procedure for deploying Single-Node Enterprise ColumnStore with Object storage.

    This step tests MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.


    sudo cpimport -s ',' \
       DATABASE_NAME \
       TABLE_NAME \
       /path/to/DATABASE_NAME-TABLE_NAME.csv
    SHOW CREATE TABLE DATABASE_NAME.TABLE_NAME\G
    SELECT * INTO OUTFILE '/path/to/DATABASE_NAME-TABLE_NAME.csv'
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    FROM DATABASE_NAME.TABLE_NAME;
    mariadb --host HOST --port PORT --user USER --password < schema-backup.sql
    SHOW CREATE TABLE DATABASE_NAME.TABLE_NAME\G
    SELECT COUNT(*) FROM DATABASE_NAME.TABLE_NAME;
    SELECT * FROM DATABASE_NAME.TABLE_NAME LIMIT 100;
    [mariadb]
    log_error                              = mariadbd.err
    character_set_server                   = utf8
    collation_server                       = utf8_general_ci
    $ sudo systemctl start mariadb
    
    $ sudo systemctl enable mariadb
    $ sudo systemctl start mariadb-columnstore
    
    $ sudo systemctl enable mariadb-columnstore
    CREATE USER 'util_user'@'127.0.0.1'
    IDENTIFIED BY 'util_user_passwd';
    GRANT SELECT, PROCESS ON *.*
    TO 'util_user'@'127.0.0.1';
    $ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
    
    $ sudo mcsSetConfig CrossEngineSupport Port 3306
    
    $ sudo mcsSetConfig CrossEngineSupport User util_user
    $ sudo mcsSetConfig CrossEngineSupport Password util_user_passwd
    $ sudo yum install policycoreutils policycoreutils-python
    $ sudo yum install policycoreutils python3-policycoreutils policycoreutils-python-utils
    $ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
    $ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
    
    Nothing to do
    $ sudo semodule -i mariadb_local.pp
    # This file controls the state of SELinux on the system.
    # SELINUX= can take one of these three values:
    #     enforcing - SELinux security policy is enforced.
    #     permissive - SELinux prints warnings instead of enforcing.
    #     disabled - No SELinux policy is loaded.
    SELINUX=enforcing
    # SELINUXTYPE= can take one of three values:
    #     targeted - Targeted processes are protected,
    #     minimum - Modification of targeted policy. Only selected processes are protected.
    #     mls - Multi Level Security protection.
    SELINUXTYPE=targeted
    $ sudo setenforce enforcing
    $ sudo yum install curl
    $ sudo apt install curl apt-transport-https
    $ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
$ echo "${checksum}  mariadb_es_repo_setup" \
       | sha256sum -c -
    $ chmod +x mariadb_es_repo_setup
    $ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
          --skip-maxscale \
          --skip-tools \
          --mariadb-server-version="11.4"
    $ sudo yum install epel-release
    
    $ sudo yum install jemalloc
    $ sudo apt install libjemalloc2
    $ sudo apt install libjemalloc1
    $ sudo yum install MariaDB-server \
       MariaDB-backup \
       MariaDB-shared \
       MariaDB-client \
       MariaDB-columnstore-engine
    $ sudo apt install mariadb-server \
       mariadb-backup \
       libmariadb3 \
       mariadb-client \
       mariadb-plugin-columnstore
    CREATE DATABASE inventory;
    CREATE TABLE inventory.products (
       product_name VARCHAR(11) NOT NULL DEFAULT '',
       supplier VARCHAR(128) NOT NULL DEFAULT '',
       quantity VARCHAR(128) NOT NULL DEFAULT '',
       unit_cost VARCHAR(128) NOT NULL DEFAULT ''
    ) ENGINE=Columnstore DEFAULT CHARSET=utf8;
    $ sudo cpimport -s '\t' inventory products /tmp/inventory-products.tsv
    LOAD DATA INFILE '/tmp/inventory-products.tsv'
    INTO TABLE inventory.products;
    $ mariadb --quick \
       --skip-column-names \
       --execute="SELECT * FROM inventory.products" \
       | cpimport -s '\t' inventory products
    $ sudo mariadb
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 38
    Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
    
    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    MariaDB [(none)]>
    SELECT PLUGIN_NAME, PLUGIN_STATUS
    FROM information_schema.PLUGINS
    WHERE PLUGIN_LIBRARY LIKE 'ha_columnstore%';
    +---------------------+---------------+
    | PLUGIN_NAME         | PLUGIN_STATUS |
    +---------------------+---------------+
    | Columnstore         | ACTIVE        |
    | COLUMNSTORE_COLUMNS | ACTIVE        |
    | COLUMNSTORE_TABLES  | ACTIVE        |
    | COLUMNSTORE_FILES   | ACTIVE        |
    | COLUMNSTORE_EXTENTS | ACTIVE        |
    +---------------------+---------------+
    CREATE DATABASE IF NOT EXISTS test;
    CREATE TABLE IF NOT EXISTS test.contacts (
       first_name VARCHAR(50),
       last_name VARCHAR(50),
       email VARCHAR(100)
    ) ENGINE=ColumnStore;
    INSERT INTO test.contacts (first_name, last_name, email)
       VALUES
       ("Kai", "Devi", "kai.devi@example.com"),
       ("Lee", "Wang", "lee.wang@example.com");
    SELECT * FROM test.contacts;
    +------------+-----------+----------------------+
    | first_name | last_name | email                |
    +------------+-----------+----------------------+
    | Kai        | Devi      | kai.devi@example.com |
    | Lee        | Wang      | lee.wang@example.com |
    +------------+-----------+----------------------+
    CREATE TABLE test.addresses (
       email VARCHAR(100),
       street_address VARCHAR(255),
       city VARCHAR(100),
       state_code VARCHAR(2)
    ) ENGINE = InnoDB;
    INSERT INTO test.addresses (email, street_address, city, state_code)
       VALUES
       ("kai.devi@example.com", "1660 Amphibious Blvd.", "Redwood City", "CA"),
       ("lee.wang@example.com", "32620 Little Blvd", "Redwood City", "CA");
    SELECT name AS "Name", addr AS "Address"
    FROM (SELECT CONCAT(first_name, " ", last_name) AS name,
       email FROM test.contacts) AS contacts
    INNER JOIN (SELECT CONCAT(street_address, ", ", city, ", ", state_code) AS addr,
       email FROM test.addresses) AS addr
    WHERE  contacts.email = addr.email;
    +----------+-----------------------------------------+
    | Name     | Address                                 |
    +----------+-----------------------------------------+
    | Kai Devi | 1660 Amphibious Blvd., Redwood City, CA |
    | Lee Wang | 32620 Little Blvd, Redwood City, CA     |
    +----------+-----------------------------------------+
    
    CREATE DATABASE inventory;
    CREATE TABLE inventory.products (
       product_name VARCHAR(11) NOT NULL DEFAULT '',
       supplier VARCHAR(128) NOT NULL DEFAULT '',
       quantity VARCHAR(128) NOT NULL DEFAULT '',
       unit_cost VARCHAR(128) NOT NULL DEFAULT ''
    ) ENGINE=Columnstore DEFAULT CHARSET=utf8;
    $ sudo cpimport -s '\t' inventory products /tmp/inventory-products.tsv
    LOAD DATA INFILE '/tmp/inventory-products.tsv'
    INTO TABLE inventory.products;
    $ mariadb --quick \
       --skip-column-names \
       --execute="SELECT * FROM inventory.products" \
       | cpimport -s '\t' inventory products
    mariadb -e "FLUSH TABLES WITH READ LOCK;"
    save_brm
    mkdir -p /extent_map_backup
    cp -f /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_em /extent_map_backup
    mariadb -e "UNLOCK TABLES;"
    systemctl stop mariadb-columnstore
    mv /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_em /tmp/BRM_saves_em.bad
> /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_vbbm
> /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_vss
    cp -f /extent_map_backup/BRM_saves_em /var/lib/columnstore/data1/systemFiles/dbrm/
    chown -R mysql:mysql /var/lib/columnstore/data1/systemFiles/dbrm/
    systemctl start mariadb-columnstore
curl -s -X PUT https://127.0.0.1:8640/cmapi/0.4.0/cluster/shutdown \
   --header 'Content-Type:application/json' \
   --header 'x-api-key:your_api_key' \
   --data '{"timeout":60}' -k
    mv /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_em /tmp/BRM_saves_em.bad
> /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_vbbm
> /var/lib/columnstore/data1/systemFiles/dbrm/BRM_saves_vss
cp -f /extent_map_backup/BRM_saves_em /var/lib/columnstore/data1/systemFiles/dbrm/
    chown -R mysql:mysql /var/lib/columnstore/data1/systemFiles/dbrm
curl -s -X PUT https://127.0.0.1:8640/cmapi/0.4.0/cluster/start \
   --header 'Content-Type:application/json' \
   --header 'x-api-key:your_api_key' \
   --data '{"timeout":60}' -k
    maxctrl set server \
       mcs3 \
       maintenance
    maxctrl set server \
       mcs3 \
       maintenance \
       --force
    maxctrl list servers
    maxctrl clear server \
       mcs3 \
       maintenance
    maxctrl list servers
    CREATE DATABASE inventory;
    CREATE TABLE inventory.products (
       product_name VARCHAR(11) NOT NULL DEFAULT '',
       supplier VARCHAR(128) NOT NULL DEFAULT '',
       quantity VARCHAR(128) NOT NULL DEFAULT '',
       unit_cost VARCHAR(128) NOT NULL DEFAULT ''
    ) ENGINE=Columnstore DEFAULT CHARSET=utf8;
    $ sudo cpimport -s '\t' inventory products /tmp/inventory-products.tsv
    LOAD DATA INFILE '/tmp/inventory-products.tsv'
    INTO TABLE inventory.products;
    $ mariadb --quick \
       --skip-column-names \
       --execute="SELECT * FROM inventory.products" \
       | cpimport -s '\t' inventory products
    $ sudo yum install curl
    $ sudo apt install curl apt-transport-https
    $ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
    $ echo "${checksum}  mariadb_es_repo_setup" \
           | sha256sum -c -
    $ chmod +x mariadb_es_repo_setup
    $ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
          --skip-maxscale \
          --skip-tools \
          --mariadb-server-version="11.4"
    $ sudo yum install jemalloc jq curl
    $ sudo apt install libjemalloc1 jq curl
    $ sudo apt install libjemalloc2 jq curl
    $ sudo yum install MariaDB-server \
       MariaDB-backup \
       MariaDB-shared \
       MariaDB-client \
       MariaDB-columnstore-engine \
       MariaDB-columnstore-cmapi
    $ sudo apt install mariadb-server \
       mariadb-backup \
       libmariadb3 \
       mariadb-client \
       mariadb-plugin-columnstore \
       mariadb-columnstore-cmapi
    maxctrl call command \
       mariadbmon \
       rejoin \
       mcs_monitor \
       mcs3
    maxctrl list servers
    $ sudo yum install curl
    $ sudo apt install curl apt-transport-https
    $ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
    $ echo "${checksum}  mariadb_es_repo_setup" \
           | sha256sum -c -
    $ chmod +x mariadb_es_repo_setup
    $ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
          --skip-maxscale \
          --skip-tools \
          --mariadb-server-version="11.4"
    $ sudo yum install epel-release
    
    $ sudo yum install jemalloc
    $ sudo apt install libjemalloc2
    $ sudo apt install libjemalloc1
    $ sudo yum install MariaDB-server \
       MariaDB-backup \
       MariaDB-shared \
       MariaDB-client \
       MariaDB-columnstore-engine
    $ sudo apt install mariadb-server \
       mariadb-backup \
       libmariadb3 \
       mariadb-client \
       mariadb-plugin-columnstore
    # minimize swapping
    vm.swappiness = 1
    
    # Increase the TCP max buffer size
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    
    # Increase the TCP buffer limits
    # min, default, and max number of bytes to use
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    
    # don't cache ssthresh from previous connection
    net.ipv4.tcp_no_metrics_save = 1
    
    # for 1 GigE, increase this to 2500
    # for 10 GigE, increase this to 30000
    net.core.netdev_max_backlog = 2500
    $ sudo sysctl --load=/etc/sysctl.d/90-mariadb-enterprise-columnstore.conf
    $ sudo setenforce permissive
    # This file controls the state of SELinux on the system.
    # SELINUX= can take one of these three values:
    #     enforcing - SELinux security policy is enforced.
    #     permissive - SELinux prints warnings instead of enforcing.
    #     disabled - No SELinux policy is loaded.
    SELINUX=permissive
    # SELINUXTYPE= can take one of three values:
    #     targeted - Targeted processes are protected,
    #     minimum - Modification of targeted policy. Only selected processes are protected.
    #     mls - Multi Level Security protection.
    SELINUXTYPE=targeted
    $ sudo getenforce
    Permissive
    $ sudo systemctl disable apparmor
    $ sudo aa-status
    apparmor module is loaded.
    0 profiles are loaded.
    0 profiles are in enforce mode.
    0 profiles are in complain mode.
    0 processes have profiles defined.
    0 processes are in enforce mode.
    0 processes are in complain mode.
    0 processes are unconfined but have a profile defined.
    $ sudo yum install glibc-locale-source glibc-langpack-en
    $ sudo localedef -i en_US -f UTF-8 en_US.UTF-8
    $ sudo yum install curl
    $ sudo apt install curl apt-transport-https
    $ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
    $ echo "${checksum}  mariadb_es_repo_setup" \
           | sha256sum -c -
    $ chmod +x mariadb_es_repo_setup
    $ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
          --skip-maxscale \
          --skip-tools \
          --mariadb-server-version="11.4"
    $ sudo yum install jemalloc jq curl
    $ sudo apt install libjemalloc1 jq curl
    $ sudo apt install libjemalloc2 jq curl
    $ sudo yum install MariaDB-server \
       MariaDB-backup \
       MariaDB-shared \
       MariaDB-client \
       MariaDB-columnstore-engine \
       MariaDB-columnstore-cmapi
    $ sudo apt install mariadb-server \
       mariadb-backup \
       libmariadb3 \
       mariadb-client \
       mariadb-plugin-columnstore \
       mariadb-columnstore-cmapi
    $ sudo yum install curl
    $ sudo apt install curl apt-transport-https
    $ curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
    
    $ echo "${checksum}  mariadb_es_repo_setup" \
           | sha256sum -c -
    
    $ chmod +x mariadb_es_repo_setup
    
    $ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
          --skip-server \
          --skip-tools \
          --mariadb-maxscale-version="22.08"
    $ sudo yum install maxscale
    $ sudo apt install maxscale

    Shell: cpimport

    • SQL access is not required

    SQL: LOAD DATA INFILE

    • Shell access is not required

    Remote Database: Remote Database Import

    • Use a normal database client

    • Avoid dumping data to an intermediate file

    Next: Step 2: Install MariaDB Enterprise ColumnStore
    CREATE DATABASE inventory;
    CREATE TABLE inventory.products (
       product_name VARCHAR(11) NOT NULL DEFAULT '',
       supplier VARCHAR(128) NOT NULL DEFAULT '',
       quantity VARCHAR(128) NOT NULL DEFAULT '',
       unit_cost VARCHAR(128) NOT NULL DEFAULT ''
    ) ENGINE=Columnstore DEFAULT CHARSET=utf8;
    $ sudo cpimport -s '\t' inventory products /tmp/inventory-products.tsv
    LOAD DATA INFILE '/tmp/inventory-products.tsv'
    INTO TABLE inventory.products;
    $ mariadb --quick \
       --skip-column-names \
       --execute="SELECT * FROM inventory.products" \
       | cpimport -s '\t' inventory products
    # minimize swapping
    vm.swappiness = 1
    
    # Increase the TCP max buffer size
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    
    # Increase the TCP buffer limits
    # min, default, and max number of bytes to use
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    
    # don't cache ssthresh from previous connection
    net.ipv4.tcp_no_metrics_save = 1
    
    # for 1 GigE, increase this to 2500
    # for 10 GigE, increase this to 30000
    net.core.netdev_max_backlog = 2500
    $ sudo sysctl --load=/etc/sysctl.d/90-mariadb-enterprise-columnstore.conf
    $ sudo setenforce permissive
    # This file controls the state of SELinux on the system.
    # SELINUX= can take one of these three values:
    #     enforcing - SELinux security policy is enforced.
    #     permissive - SELinux prints warnings instead of enforcing.
    #     disabled - No SELinux policy is loaded.
    SELINUX=permissive
    # SELINUXTYPE= can take one of three values:
    #     targeted - Targeted processes are protected,
    #     minimum - Modification of targeted policy. Only selected processes are protected.
    #     mls - Multi Level Security protection.
    SELINUXTYPE=targeted
    $ sudo getenforce
    Permissive
    $ sudo systemctl disable apparmor
    $ sudo aa-status
    apparmor module is loaded.
    0 profiles are loaded.
    0 profiles are in enforce mode.
    0 profiles are in complain mode.
    0 processes have profiles defined.
    0 processes are in enforce mode.
    0 processes are in complain mode.
    0 processes are unconfined but have a profile defined.
    $ sudo yum install glibc-locale-source glibc-langpack-en
    $ sudo localedef -i en_US -f UTF-8 en_US.UTF-8
    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Test S3 Connection

    MariaDB Enterprise ColumnStore 23.10 includes a testS3Connection command to test the S3 configuration, permissions, and connectivity.

    On each Enterprise ColumnStore node, test the S3 configuration:

    If the testS3Connection command does not return OK, investigate the S3 configuration.

    Test Local Connection

    Connect to the server using the root@localhost user account:

    Test ColumnStore Plugin Status

    Query and confirm that the ColumnStore storage engine plugin is ACTIVE:

    Test ColumnStore Table Creation

    1. Create a test database, if it does not exist:

    2. Create a ColumnStore table:

    3. Add sample data into the table:

    4. Read data from the table:

    Test Cross Engine Join

    1. Create an InnoDB table:

    2. Add data to the table:

    3. Perform a cross-engine join:

    Next Step

    Navigation in the Single-Node Enterprise ColumnStore topology with Object storage deployment procedure:

    This page was step 4 of 5.

    Next: Step 5: Bulk Import of Data.

    This page is: Copyright © 2025 MariaDB. All rights reserved.


    Step 2: Configure Shared Local Storage


    Overview

    This page details step 2 of the 9-step procedure "Deploy ColumnStore Shared Local Storage Topology".

    This step configures shared local storage on systems hosting Enterprise ColumnStore.


    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Directories for Shared Local Storage

    In a ColumnStore Shared Local Storage topology, MariaDB Enterprise ColumnStore requires the Storage Manager directory and the DB Root directories to be located on shared local storage.

    The Storage Manager directory is at the following path:

    • /var/lib/columnstore/storagemanager

    The DB Root directories are at the paths /var/lib/columnstore/dataN. The N in dataN represents a range of integers that starts at 1 and stops at the number of nodes in the deployment. For example, with a 3-node Enterprise ColumnStore deployment, this would refer to the following directories:

    • /var/lib/columnstore/data1

    • /var/lib/columnstore/data2

    • /var/lib/columnstore/data3

    The DB Root directories must be mounted on every ColumnStore node.

    Choose a Shared Local Storage Solution

    Select a Shared Local Storage solution for the Storage Manager directory:

    • EBS (Elastic Block Store) Multi-Attach

    • EFS (Elastic File System)

    • Filestore

    • GlusterFS

    For additional information, see "Shared Local Storage Options".

    Configure EBS Multi-Attach

    EBS is a high-performance block-storage service for AWS (Amazon Web Services). EBS Multi-Attach allows an EBS volume to be attached to multiple instances in AWS. Only clustered file systems, such as GFS2, are supported.

    For Enterprise ColumnStore deployments in AWS:

    • EBS Multi-Attach is a recommended option for the Storage Manager directory.

    • Amazon S3 storage is the recommended option for data.

    • Consult the vendor documentation for details on how to configure EBS Multi-Attach.

    Configure Elastic File System (EFS)

    EFS is a scalable, elastic, cloud-native NFS file system for AWS (Amazon Web Services).

    For deployments in AWS:

    • EFS is a recommended option for the Storage Manager directory.

    • Amazon S3 storage is the recommended option for data.

    • Consult the vendor documentation for details on how to configure EFS.

    Configure Filestore

    Filestore is high-performance, fully managed storage for GCP (Google Cloud Platform).

    For Enterprise ColumnStore deployments in GCP:

    • Filestore is the recommended option for the Storage Manager directory.

    • Google Object Storage (S3-compatible) is the recommended option for data.

    • Consult the vendor documentation for details on how to configure Filestore.

    Configure GlusterFS

    GlusterFS is a distributed file system.

    GlusterFS is a shared local storage option, but it is not one of the recommended options.

    For more information, see the GlusterFS documentation.

    Install GlusterFS

    On each Enterprise ColumnStore node, install GlusterFS.

    Install on CentOS / RHEL 8 (YUM):

    Install on CentOS / RHEL 7 (YUM):

    Install on Debian (APT):

    Install on Ubuntu (APT):

    Start the GlusterFS Daemon

    Start the GlusterFS daemon:

    Probe the GlusterFS Peers

    Before you can create a volume with GlusterFS, you must probe each node from a peer node.

    1. On the primary node, probe all of the other cluster nodes:

    2. On one of the replica nodes, probe the primary node to confirm that it is connected:

    3. On the primary node, check the peer status:

    Number of Peers: 2
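The probe sequence above can be sketched as follows, assuming three nodes with the hypothetical hostnames mcs1 (primary), mcs2, and mcs3:

```
# On the primary node (mcs1), probe the other cluster nodes:
sudo gluster peer probe mcs2
sudo gluster peer probe mcs3

# On one of the replica nodes, probe the primary node:
sudo gluster peer probe mcs1

# On the primary node, check the peer status:
sudo gluster peer status
```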

    Configure and Mount GlusterFS Volumes

    Create the GlusterFS volumes for MariaDB Enterprise ColumnStore. Each volume must have the same number of replicas as the number of Enterprise ColumnStore nodes.

    1. On each Enterprise ColumnStore node, create the directory for each brick in the /brick directory:

    2. On the primary node, create the GlusterFS volumes:

    3. On the primary node, start the volume:

    4. On each Enterprise ColumnStore node, create mount points for the volumes:

    5. On each Enterprise ColumnStore node, add the mount points to /etc/fstab:

    6. On each Enterprise ColumnStore node, mount the volumes:
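The steps above can be sketched as follows for a single storagemanager volume, assuming a 3-node cluster with the hypothetical hostnames mcs1, mcs2, and mcs3:

```
# On each node, create the brick directory:
sudo mkdir -p /brick/storagemanager

# On the primary node, create a replicated volume (one replica per node):
sudo gluster volume create storagemanager replica 3 \
   mcs1:/brick/storagemanager \
   mcs2:/brick/storagemanager \
   mcs3:/brick/storagemanager

# On the primary node, start the volume:
sudo gluster volume start storagemanager

# On each node, create the mount point, register it in /etc/fstab, and mount it:
sudo mkdir -p /var/lib/columnstore/storagemanager
echo "localhost:/storagemanager /var/lib/columnstore/storagemanager glusterfs defaults,_netdev 0 0" \
   | sudo tee -a /etc/fstab
sudo mount /var/lib/columnstore/storagemanager
```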

    Configure Network File System (NFS)

    NFS is a distributed file system. NFS is available in most Linux distributions. If NFS is used for an Enterprise ColumnStore deployment, the storage must be mounted with the sync option to ensure that each node flushes its changes immediately.

    For on-premises deployments:

    • NFS is the recommended option for the Storage Manager directory.

    • Any S3-compatible storage is the recommended option for data.

    Consult the documentation for your NFS implementation for details on how to configure NFS.
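For example, an /etc/fstab entry that mounts the Storage Manager directory from a hypothetical NFS server named nfs1 would include the sync option:

```
nfs1:/exports/storagemanager /var/lib/columnstore/storagemanager nfs sync,hard 0 0
```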

    Next Step

    Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology":

    This page was step 2 of 9.

    This page is: Copyright © 2025 MariaDB. All rights reserved.

    Step 2: Configure Shared Local Storage


    Overview

    This page details step 2 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".

    This step configures shared local storage on systems hosting Enterprise ColumnStore.


    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Directories for Shared Local Storage

    In a ColumnStore Object Storage topology, MariaDB Enterprise ColumnStore requires the Storage Manager directory to be located on shared local storage.

    The Storage Manager directory is at the following path:

    • /var/lib/columnstore/storagemanager

    The Storage Manager directory must be mounted on every ColumnStore node.

    Choose a Shared Local Storage Solution

    Select a Shared Local Storage solution for the Storage Manager directory:

    For additional information, see "Shared Local Storage Options".

    Configure EBS Multi-Attach

    EBS is a high-performance block-storage service for AWS (Amazon Web Services). EBS Multi-Attach allows an EBS volume to be attached to multiple instances in AWS. Only clustered file systems, such as GFS2, are supported.

    For Enterprise ColumnStore deployments in AWS:

    • EBS Multi-Attach is a recommended option for the Storage Manager directory.

    • Amazon S3 storage is the recommended option for data.

    • Consult the vendor documentation for details on how to configure EBS Multi-Attach.

    Configure Elastic File System (EFS)

    EFS is a scalable, elastic, cloud-native NFS file system for AWS (Amazon Web Services).

    For deployments in AWS:

    • EFS is a recommended option for the Storage Manager directory.

    • Amazon S3 storage is the recommended option for data.

    • Consult the vendor documentation for details on how to configure EFS.

    Configure Filestore

    Filestore is high-performance, fully managed storage for GCP (Google Cloud Platform).

    For Enterprise ColumnStore deployments in GCP:

    • Filestore is the recommended option for the Storage Manager directory.

    • Google Object Storage (S3-compatible) is the recommended option for data.

    • Consult the vendor documentation for details on how to configure Filestore.

    Configure GlusterFS

    GlusterFS is a distributed file system. GlusterFS is a shared local storage option, but it is not one of the recommended options.

    For more information, see the GlusterFS documentation.

    Install GlusterFS

    On each Enterprise ColumnStore node, install GlusterFS.

    Install on CentOS / RHEL 8 (YUM):

    Install on CentOS / RHEL 7 (YUM):

    Install on Debian (APT):

    Install on Ubuntu (APT):

    Start the GlusterFS Daemon

    Start the GlusterFS daemon:

    Probe the GlusterFS Peers

    Before you can create a volume with GlusterFS, you must probe each node from a peer node.

    1. On the primary node, probe all of the other cluster nodes:

    2. On one of the replica nodes, probe the primary node to confirm that it is connected:

    3. On the primary node, check the peer status:

    Configure and Mount GlusterFS Volumes

    Create the GlusterFS volumes for MariaDB Enterprise ColumnStore. Each volume must have the same number of replicas as the number of Enterprise ColumnStore nodes.

    1. On each Enterprise ColumnStore node, create the directory for each brick in the /brick directory:

    2. On the primary node, create the GlusterFS volumes:

    3. On the primary node, start the volume:

    4. On each Enterprise ColumnStore node, create mount points for the volumes:

    5. On each Enterprise ColumnStore node, add the mount points to /etc/fstab:

    6. On each Enterprise ColumnStore node, mount the volumes:

    Configure Network File System (NFS)

    NFS is a distributed file system. NFS is available in most Linux distributions. If NFS is used for an Enterprise ColumnStore deployment, the storage must be mounted with the sync option to ensure that each node flushes its changes immediately.

    For on-premises deployments:

    • NFS is the recommended option for the Storage Manager directory.

    • Any S3-compatible storage is the recommended option for data.

    Consult the documentation for your NFS implementation for details on how to configure NFS.

    Next Step

    Navigation in the procedure "Deploy ColumnStore Object Storage Topology":

    This page was step 2 of 9.

    This page is: Copyright © 2025 MariaDB. All rights reserved.

    ColumnStore and Recursive CTE Limitations

    The ColumnStore engine does not fully support recursive Common Table Expressions (CTEs). Attempting to use recursive CTEs directly against ColumnStore tables typically results in an error.

    The purpose of the following examples is to demonstrate three potential workarounds for this issue. The best fit for your organization will depend on your specific needs and ability to refactor queries and adjust your approach.

    Setup: Simulating an Org Chart

    This setup simulates a simple organizational chart with employees and managers to illustrate the problem and the workarounds.

    First, an InnoDB table for comparison:

    Next, the ColumnStore table, which is where the CTE issue arises:

    Attempting to run a recursive CTE directly on the employees (ColumnStore) table:

    This will result in the aforementioned error:
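A sketch of what this setup and the failing query look like (the table definitions, column names, and sample data here are illustrative assumptions, not the original tutorial's exact DDL):

```sql
-- InnoDB table for comparison:
CREATE TABLE test.employees_innodb (
   id INT,
   name VARCHAR(100),
   manager_id INT
) ENGINE=InnoDB;

-- ColumnStore table, where the CTE issue arises:
CREATE TABLE test.employees (
   id INT,
   name VARCHAR(100),
   manager_id INT
) ENGINE=ColumnStore;

INSERT INTO test.employees VALUES
   (1, 'CEO', NULL),
   (2, 'VP', 1),
   (3, 'Engineer', 2);

-- Recursive CTE run directly against the ColumnStore table:
WITH RECURSIVE org_chart AS (
   SELECT id, name, manager_id, 1 AS level
   FROM test.employees
   WHERE manager_id IS NULL
   UNION ALL
   SELECT e.id, e.name, e.manager_id, oc.level + 1
   FROM test.employees e
   JOIN org_chart oc ON e.manager_id = oc.id
)
SELECT * FROM org_chart;
-- Typically fails with an internal/unsupported-feature error from ColumnStore.
```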

    Workarounds

    Here are three potential workarounds to address the recursive CTE limitation with MariaDB ColumnStore.

    Option 1: Toggle ColumnStore Select Handler

    You can temporarily bypass ColumnStore's SELECT handler by disabling it at the session level before executing your recursive CTE and then re-enabling it afterwards.

    Note: This workaround may not always be effective, as its success can depend on the specific MariaDB server version and table definitions.
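A sketch of the toggle, assuming the columnstore_select_handler session variable is available in your server version:

```sql
SET SESSION columnstore_select_handler = OFF;

-- ... run the recursive CTE here ...

SET SESSION columnstore_select_handler = ON;
```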

    Option 2: Use Procedural Simulation via Temporary Table

    If direct recursive CTEs fail or cause server crashes, you can simulate the recursive logic using a stored procedure and a temporary table. This approach iteratively populates the hierarchy.

    First, create a temporary table to store the hierarchical data:

    Next, create a stored procedure to iteratively populate the temp_org_chart table:

    Finally, call the stored procedure and then select from the populated temporary table:
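The three steps above can be sketched as follows. The temp_org_chart name comes from this page; the column names and the procedure name populate_org_chart are illustrative assumptions:

```sql
-- 1. Temporary table to hold the hierarchy:
CREATE TEMPORARY TABLE temp_org_chart (
   id INT,
   name VARCHAR(100),
   manager_id INT,
   level INT
) ENGINE=InnoDB;

-- 2. Procedure that iteratively populates one level per pass:
DELIMITER //
CREATE PROCEDURE populate_org_chart()
BEGIN
   DECLARE current_level INT DEFAULT 1;
   DECLARE rows_added INT DEFAULT 0;

   -- Seed the hierarchy with the top-level employees.
   INSERT INTO temp_org_chart
   SELECT id, name, manager_id, 1
   FROM test.employees
   WHERE manager_id IS NULL;
   SET rows_added = ROW_COUNT();

   -- Add the next level until no new rows appear.
   WHILE rows_added > 0 DO
      SET current_level = current_level + 1;
      INSERT INTO temp_org_chart
      SELECT e.id, e.name, e.manager_id, current_level
      FROM test.employees e
      JOIN temp_org_chart t
        ON e.manager_id = t.id AND t.level = current_level - 1;
      SET rows_added = ROW_COUNT();
   END WHILE;
END //
DELIMITER ;

-- 3. Populate and read the hierarchy:
CALL populate_org_chart();
SELECT * FROM temp_org_chart ORDER BY level, id;
```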

    Option 3: Clone Data into InnoDB

    Another robust workaround is to clone the structure and data of the ColumnStore table into an InnoDB table. Once the data resides in an InnoDB table, you can execute the recursive CTE as usual, as InnoDB fully supports them.

    This approach involves a few steps, often executed via shell commands interacting with the MariaDB client:

    1. Extract and Modify CREATE TABLE Statement: Use SHOW CREATE TABLE to get the definition of your ColumnStore table, then modify it to change the engine to InnoDB and give the new table a different name (e.g., employees2).

    2. Create New Table and Copy Data: Execute the modified CREATE TABLE script to create the new InnoDB table, then insert all data from the original ColumnStore table into it.

    3. Run Recursive CTE on the InnoDB Table: Now, with the data in employees2 (an InnoDB table), you can run your recursive CTE without issues.
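A compact sketch of the clone approach (employees2 is the example name used above; the column names are illustrative assumptions):

```sql
-- Create the InnoDB copy and load it with the ColumnStore table's data:
CREATE TABLE test.employees2 ENGINE=InnoDB
   SELECT * FROM test.employees;

-- Recursive CTEs now work against the InnoDB copy:
WITH RECURSIVE org_chart AS (
   SELECT id, name, manager_id, 1 AS level
   FROM test.employees2
   WHERE manager_id IS NULL
   UNION ALL
   SELECT e.id, e.name, e.manager_id, oc.level + 1
   FROM test.employees2 e
   JOIN org_chart oc ON e.manager_id = oc.id
)
SELECT * FROM org_chart;
```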

    This page is: Copyright © 2025 MariaDB. All rights reserved.

    Optimizing Linux Kernel Parameters for MariaDB ColumnStore

    This page provides information on optimizing Linux kernel parameters for improved performance with MariaDB ColumnStore.

    Introduction

    MariaDB ColumnStore is a high-performance columnar database designed for analytical workloads. By optimizing the Linux kernel parameters, you can further enhance the performance of your MariaDB ColumnStore deployments.

    Recommended Parameters

    The following are the recommended optimized Linux kernel parameters for MariaDB ColumnStore. Each parameter, its recommended value, and an explanation appear in the configuration example below.

    Configuration Example

    To configure these parameters, you can add them to the /etc/sysctl.conf file. For example:
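Using the recommended values, the additions look like this:

```
# minimize swapping
vm.swappiness = 1

# Increase the TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

# Increase the TCP buffer limits
# min, default, and max number of bytes to use
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1

# for 1 GigE, increase this to 2500
# for 10 GigE, increase this to 30000
net.core.netdev_max_backlog = 2500
```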

    After making changes to the /etc/sysctl.conf file, you need to apply the changes by running the following command:
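For /etc/sysctl.conf, the standard apply command is:

```
$ sudo sysctl -p
```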

    Increase the Limit for Memory-Mapped Areas
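This limit is controlled by the vm.max_map_count kernel parameter, which caps the number of memory-mapped areas a single process may use. A sketch of raising it in the same sysctl configuration file (the value shown is an illustrative assumption, not an official recommendation):

```
# Increase the limit for memory-mapped areas per process
vm.max_map_count = 1048576
```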

    Common Use Cases

    These optimized parameters are recommended for all MariaDB ColumnStore deployments, regardless of the specific workload. They can improve performance for various use cases, including:

    • Large-scale data warehousing

    • Real-time analytics

    • Business intelligence

    • Machine learning


    Conclusion

    By optimizing the Linux kernel parameters, you can significantly improve the performance of your MariaDB ColumnStore deployments. These recommendations provide a starting point for optimizing your system, and you may need to adjust the values based on your specific hardware and workload.

    This page is: Copyright © 2025 MariaDB. All rights reserved.

    Backup and Restore with Shared Local Storage

    Overview

    MariaDB Enterprise ColumnStore supports backup and restore. If Enterprise ColumnStore uses shared local storage for the DB Root directories, the DB Root directories and the MariaDB data directory must be backed up separately.

    Recovery Planning

    MariaDB Enterprise ColumnStore supports multiple storage options.

    This page discusses how to back up and restore Enterprise ColumnStore when it uses shared local storage (such as NFS) for the DB Root directories.

    Any file can become corrupt due to hardware issues, crashes, power loss, and other reasons. If the Enterprise ColumnStore data or metadata become corrupt, Enterprise ColumnStore could become unusable, resulting in data loss.

    If Enterprise ColumnStore is your system of record, it should be backed up regularly.

    If Enterprise ColumnStore uses shared local storage for the DB Root directories, the following items must be backed up:

    • The MariaDB data directory is backed up using mariadb-backup

    • The Storage Manager directory must be backed up

    • Each DB Root directory must be backed up

    See the instructions below for more details.

    Backup

    Use the following process to take a backup:

    1. Determine which node is the primary server using curl to send the status command to the CMAPI Server:

    The output will show dbrm_mode: master for the primary server:
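Following the pattern of the other CMAPI calls in this guide, the status command looks like this (your_api_key is a placeholder):

```
curl -k -s https://127.0.0.1:8640/cmapi/0.4.0/cluster/status \
   --header 'Content-Type:application/json' \
   --header 'x-api-key:your_api_key' \
   | jq .
```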

    2. Connect to the primary server using MariaDB Client as a user account that has privileges to lock the database:

    3. Lock the database with the FLUSH TABLES WITH READ LOCK statement:

    Ensure that the client remains connected to the primary server, so that the lock is held for the remaining steps.

    4. Make a copy or snapshot of the Storage Manager directory. By default, it is located at /var/lib/columnstore/storagemanager.

    For example, to make a copy of the directory with rsync:
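A sketch, assuming /backup/columnstore as the backup destination:

```
sudo rsync -a /var/lib/columnstore/storagemanager/ /backup/columnstore/storagemanager/
```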

    5. Make a copy or snapshot of the DB Root directories. By default, they are located at /var/lib/columnstore/dataN, where the N in dataN represents a range of integers that starts at 1 and stops at the number of nodes in the deployment.

    For example, to make a copy of the directories with rsync in a 3-node deployment:
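A sketch for three nodes, again assuming /backup/columnstore as the destination:

```
sudo rsync -a /var/lib/columnstore/data1/ /backup/columnstore/data1/
sudo rsync -a /var/lib/columnstore/data2/ /backup/columnstore/data2/
sudo rsync -a /var/lib/columnstore/data3/ /backup/columnstore/data3/
```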

    6. Use mariadb-backup to back up the MariaDB data directory:

    7. Use mariadb-backup to prepare the backup:
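The backup and prepare steps can be sketched as follows (the target directory and credentials are assumptions):

```
sudo mariadb-backup --backup \
   --target-dir=/backup/mariadb/ \
   --user=root

sudo mariadb-backup --prepare \
   --target-dir=/backup/mariadb/
```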

    8. Ensure that all previous operations are complete.

    9. In the original client connection to the primary server, unlock the database with the UNLOCK TABLES statement:

    Restore

    Use the following process to restore a backup:

    1. Redeploy Enterprise ColumnStore, so that you can restore the backup to an empty deployment.

    2. Ensure that all services are stopped on each node:

    3. Restore the backup of the Storage Manager directory. By default, it is located at /var/lib/columnstore/storagemanager.

    For example, to restore the backup with rsync:

    4. Restore the backup of the DB Root directories. By default, they are located at /var/lib/columnstore/dataN, where the N in dataN represents a range of integers that starts at 1 and stops at the number of nodes in the deployment.

    For example, to restore the backup with rsync in a 3-node deployment:

    5. Use mariadb-backup to restore the backup of the MariaDB data directory:
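With mariadb-backup, restoring a prepared backup into an empty data directory uses --copy-back (the target directory is an assumption):

```
sudo mariadb-backup --copy-back \
   --target-dir=/backup/mariadb/

# Restore ownership of the restored data directory:
sudo chown -R mysql:mysql /var/lib/mysql
```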

    6. Start the services on each node:

    This page is: Copyright © 2025 MariaDB. All rights reserved.


    Step 5: Test MariaDB Enterprise Server


    Overview

    This page details step 5 of the 9-step procedure "Deploy ColumnStore Shared Local Storage Topology".

    This step tests MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.


    Step 5: Test MariaDB Enterprise Server


    Overview

    This page details step 5 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".

    This step tests MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.


    Step 1: Prepare ColumnStore Nodes


    Overview

    This page details step 1 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".

    This step prepares systems to host MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.


    Removing a Node

    To remove a node from Enterprise ColumnStore, perform the following procedure.

    Unlinking from Service in MaxScale

    The server object for the node must be unlinked from the service using :

    Step 1: Prepare ColumnStore Nodes


    Overview

    This page details step 1 of the 9-step procedure "".

    This step prepares systems to host MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.


    Step 3: Start and Configure Enterprise ColumnStore


    Overview

    This page details step 3 of a 5-step procedure for deploying .

    This step starts and configures MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.


    $ sudo testS3Connection
    StorageManager[26887]: Using the config file found at /etc/columnstore/storagemanager.cnf
    StorageManager[26887]: S3Storage: S3 connectivity & permissions are OK
    S3 Storage Manager Configuration OK
    $ sudo mariadb
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 38
    Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
    
    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    MariaDB [(none)]>
    SELECT PLUGIN_NAME, PLUGIN_STATUS
    FROM information_schema.PLUGINS
    WHERE PLUGIN_LIBRARY LIKE 'ha_columnstore%';
    +---------------------+---------------+
    | PLUGIN_NAME         | PLUGIN_STATUS |
    +---------------------+---------------+
    | Columnstore         | ACTIVE        |
    | COLUMNSTORE_COLUMNS | ACTIVE        |
    | COLUMNSTORE_TABLES  | ACTIVE        |
    | COLUMNSTORE_FILES   | ACTIVE        |
    | COLUMNSTORE_EXTENTS | ACTIVE        |
    +---------------------+---------------+
    CREATE DATABASE IF NOT EXISTS test;
    CREATE TABLE IF NOT EXISTS test.contacts (
       first_name VARCHAR(50),
       last_name VARCHAR(50),
       email VARCHAR(100)
    ) ENGINE=ColumnStore;
    INSERT INTO test.contacts (first_name, last_name, email)
       VALUES
       ("Kai", "Devi", "kai.devi@example.com"),
       ("Lee", "Wang", "lee.wang@example.com");
    SELECT * FROM test.contacts;
    
    +------------+-----------+----------------------+
    | first_name | last_name | email                |
    +------------+-----------+----------------------+
    | Kai        | Devi      | kai.devi@example.com |
    | Lee        | Wang      | lee.wang@example.com |
    +------------+-----------+----------------------+
    CREATE TABLE test.addresses (
       email VARCHAR(100),
       street_address VARCHAR(255),
       city VARCHAR(100),
       state_code VARCHAR(2)
    ) ENGINE = InnoDB;
    INSERT INTO test.addresses (email, street_address, city, state_code)
       VALUES
       ("kai.devi@example.com", "1660 Amphibious Blvd.", "Redwood City", "CA"),
       ("lee.wang@example.com", "32620 Little Blvd", "Redwood City", "CA");
    SELECT name AS "Name", addr AS "Address"
    FROM (SELECT CONCAT(first_name, " ", last_name) AS name,
       email FROM test.contacts) AS contacts
    INNER JOIN (SELECT CONCAT(street_address, ", ", city, ", ", state_code) AS addr,
       email FROM test.addresses) AS addr
    WHERE  contacts.email = addr.email;
    +----------+-----------------------------------------+
    | Name     | Address                                 |
    +----------+-----------------------------------------+
    | Kai Devi | 1660 Amphibious Blvd., Redwood City, CA |
    | Lee Wang | 32620 Little Blvd, Redwood City, CA     |
    +----------+-----------------------------------------+
    
    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    hashtag
    Optimize Linux Kernel Parameters

    MariaDB Enterprise ColumnStore performs best with Linux kernel optimizations.

    On each server to host an Enterprise ColumnStore node, optimize the kernel:

    1. Set the relevant kernel parameters in a sysctl configuration file. To ensure proper change management, use an Enterprise ColumnStore-specific configuration file.

    Create a /etc/sysctl.d/90-mariadb-enterprise-columnstore.conf file:

    2. Use the sysctl command to set the kernel parameters at runtime.

    hashtag
    Temporarily Configure Linux Security Modules (LSM)

    The Linux Security Modules (LSM) should be temporarily disabled on each Enterprise ColumnStore node during installation.

    The LSM will be configured and re-enabled later in this deployment procedure.

    The steps to disable the LSM depend on the specific LSM used by the operating system.

    hashtag
    CentOS / RHEL Stop SELinux

    SELinux must be set to permissive mode before installing MariaDB Enterprise ColumnStore.

    To set SELinux to permissive mode:

    1. Set SELinux to permissive mode:

    2. Set SELinux to permissive mode by setting SELINUX=permissive in /etc/selinux/config.

    For example, the file will usually look like this after the change:

    3. Confirm that SELinux is in permissive mode:

    SELinux will be configured and re-enabled later in this deployment procedure. This configuration is not persistent. If you restart the server before configuring and re-enabling SELinux later in the deployment procedure, you must reset the enforcement to permissive mode.

    hashtag
    Debian / Ubuntu AppArmor

    AppArmor must be disabled before installing MariaDB Enterprise ColumnStore.

    1. Disable AppArmor:

    2. Reboot the system.

    3. Confirm that no AppArmor profiles are loaded using aa-status:

    AppArmor will be configured and re-enabled later in this deployment procedure.

    hashtag
    Temporarily Configure Firewall for Installation

    MariaDB Enterprise ColumnStore requires the following TCP ports:

    TCP Ports      Description
    3306           Port used for MariaDB Client traffic
    8600-8630      Port range used for inter-node communication
    8640           Port used by CMAPI
    8700           Port used for inter-node communication
    8800           Port used for inter-node communication
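    When the firewall is re-enabled later in the procedure, these ports need to be opened between the nodes. As a rough sketch of what that can look like with firewalld (default zone assumed; this is not part of the official steps):

    ```
    $ sudo firewall-cmd --permanent --add-port=3306/tcp
    $ sudo firewall-cmd --permanent --add-port=8600-8630/tcp
    $ sudo firewall-cmd --permanent --add-port=8640/tcp
    $ sudo firewall-cmd --permanent --add-port=8700/tcp
    $ sudo firewall-cmd --permanent --add-port=8800/tcp
    $ sudo firewall-cmd --reload
    ```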

    The firewall should be temporarily disabled on each Enterprise ColumnStore node during installation.

    The firewall will be configured and re-enabled later in this deployment procedure.

    The steps to disable the firewall depend on the specific firewall used by the operating system.

    hashtag
    CentOS / RHEL Stop firewalld

    1. Check if the firewalld service is running:

    2. If the firewalld service is running, stop it:

    Firewalld will be configured and re-enabled later in this deployment procedure.

    hashtag
    Ubuntu Stop UFW

    1. Check if the UFW service is running:

    2. If the UFW service is running, stop it:

    UFW will be configured and re-enabled later in this deployment procedure.

    hashtag
    Configure the AWS Security Group

    To install Enterprise ColumnStore on Amazon Web Services (AWS), the security group must be modified prior to installation.

    Enterprise ColumnStore requires all internal communications to be open between Enterprise ColumnStore nodes. Therefore, the security group should allow all protocols and all ports to be open between the Enterprise ColumnStore nodes and the MaxScale proxy.

    hashtag
    Configure Character Encoding

    When using MariaDB Enterprise ColumnStore, it is recommended to set the system's locale to UTF-8.

    1. On RHEL 8, install additional dependencies:

    2. Set the system's locale to en_US.UTF-8 by executing localedef:

    hashtag
    Configure DNS

    MariaDB Enterprise ColumnStore requires all nodes to have host names that are resolvable on all other nodes. If your infrastructure does not configure DNS centrally, you may need to configure static DNS entries in the /etc/hosts file of each server.

    On each Enterprise ColumnStore node, edit the /etc/hosts file to map host names to the IP address of each Enterprise ColumnStore node:

    Replace the IP addresses with the addresses in your own environment.

    hashtag
    Create an S3 Bucket

    With the ColumnStore Object Storage topology, it is important to create the S3 bucket before you start ColumnStore. All Enterprise ColumnStore nodes access data from the same bucket.

    If you already have an S3 bucket, confirm that the bucket is empty.

    S3 bucket configuration will be performed later in this procedure.
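    The bucket details are later supplied to ColumnStore's Storage Manager. As a rough sketch, the relevant sections of /etc/columnstore/storagemanager.cnf generally look like this (bucket name, region, and credentials are placeholders):

    ```ini
    [ObjectStorage]
    service = S3

    [S3]
    region = us-east-1
    bucket = your-columnstore-bucket
    aws_access_key_id = <access-key-id>
    aws_secret_access_key = <secret-access-key>
    ```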

    hashtag
    Next Step

    Navigation in the procedure "Deploy ColumnStore Object Storage Topology":

    This page was step 1 of 9.

    Next: Step 2: Configure Shared Local Storage.

    Copyright © 2025 MariaDB. All rights reserved.

    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    hashtag
    Optimize Linux Kernel Parameters

    MariaDB Enterprise ColumnStore performs best with Linux kernel optimizations.

    On each server to host an Enterprise ColumnStore node, optimize the kernel:

    1. Set the relevant kernel parameters in a sysctl configuration file. To ensure proper change management, use an Enterprise ColumnStore-specific configuration file. Create a /etc/sysctl.d/90-mariadb-enterprise-columnstore.conf file:

    2. Use the sysctl command to set the kernel parameters at runtime.

    hashtag
    Temporarily Configure Linux Security Modules (LSM)

    The Linux Security Modules (LSM) should be temporarily disabled on each Enterprise ColumnStore node during installation.

    The LSM will be configured and re-enabled later in this deployment procedure.

    The steps to disable the LSM depend on the specific LSM used by the operating system.

    hashtag
    CentOS / RHEL Stop SELinux

    SELinux must be set to permissive mode before installing MariaDB Enterprise ColumnStore.

    To set SELinux to permissive mode:

    1. Set SELinux to permissive mode:

    2. Set SELinux to permissive mode by setting SELINUX=permissive in /etc/selinux/config.

    For example, the file will usually look like this after the change:

    3. Confirm that SELinux is in permissive mode:

    SELinux will be configured and re-enabled later in this deployment procedure. This configuration is not persistent. If you restart the server before configuring and re-enabling SELinux later in the deployment procedure, you must reset the enforcement to permissive mode.

    hashtag
    Debian / Ubuntu AppArmor

    AppArmor must be disabled before installing MariaDB Enterprise ColumnStore.

    1. Disable AppArmor:

    2. Reboot the system.

    3. Confirm that no AppArmor profiles are loaded using aa-status:

    AppArmor will be configured and re-enabled later in this deployment procedure.

    hashtag
    Temporarily Configure Firewall for Installation

    MariaDB Enterprise ColumnStore requires the following TCP ports:

    TCP Ports      Description
    3306           Port used for MariaDB Client traffic
    8600-8630      Port range used for inter-node communication
    8640           Port used by CMAPI
    8700           Port used for inter-node communication
    8800           Port used for inter-node communication

    The firewall should be temporarily disabled on each Enterprise ColumnStore node during installation.

    The firewall will be configured and re-enabled later in this deployment procedure.

    The steps to disable the firewall depend on the specific firewall used by the operating system.

    hashtag
    CentOS / RHEL Stop firewalld

    1. Check if the firewalld service is running:

    2. If the firewalld service is running, stop it:

    Firewalld will be configured and re-enabled later in this deployment procedure.

    hashtag
    Ubuntu Stop UFW

    1. Check if the UFW service is running:

    2. If the UFW service is running, stop it:

    UFW will be configured and re-enabled later in this deployment procedure.

    hashtag
    Configure the AWS Security Group

    To install Enterprise ColumnStore on Amazon Web Services (AWS), the security group must be modified prior to installation.

    Enterprise ColumnStore requires all internal communications to be open between Enterprise ColumnStore nodes. Therefore, the security group should allow all protocols and all ports to be open between the Enterprise ColumnStore nodes and the MaxScale proxy.

    hashtag
    Configure Character Encoding

    When using MariaDB Enterprise ColumnStore, it is recommended to set the system's locale to UTF-8.

    1. On RHEL 8, install additional dependencies:

    2. Set the system's locale to en_US.UTF-8 by executing localedef:

    hashtag
    Configure DNS

    MariaDB Enterprise ColumnStore requires all nodes to have host names that are resolvable on all other nodes. If your infrastructure does not configure DNS centrally, you may need to configure static DNS entries in the /etc/hosts file of each server.

    On each Enterprise ColumnStore node, edit the /etc/hosts file to map host names to the IP address of each Enterprise ColumnStore node:

    Replace the IP addresses with the addresses in your own environment.

    hashtag
    Next Step

    Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology":

    This page was step 1 of 9.


    Multi-Node Localstorage
    $ sudo yum install --enablerepo=PowerTools glusterfs-server
    $ sudo yum install centos-release-gluster
    $ sudo yum install glusterfs-server
    $ wget -O - https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub | sudo apt-key add -
    
    $ DEBID=$(grep 'VERSION_ID=' /etc/os-release | cut -d '=' -f 2 | tr -d '"')
    $ DEBVER=$(grep 'VERSION=' /etc/os-release | grep -Eo '[a-z]+')
    $ DEBARCH=$(dpkg --print-architecture)
    $ echo deb https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/${DEBID}/${DEBARCH}/apt ${DEBVER} main | sudo tee /etc/apt/sources.list.d/gluster.list
    $ sudo apt update
    $ sudo apt install glusterfs-server
    $ sudo systemctl start glusterd
    $ sudo systemctl enable glusterd
    $ sudo gluster peer probe mcs2
    $ sudo gluster peer probe mcs3
    $ sudo gluster peer probe mcs1
    
    
    peer probe: Host mcs1 port 24007 already in peer list
    $ sudo gluster peer status
    Number of Peers: 2

    Hostname: mcs2
    Uuid: 3c8a5c79-22de-45df-9034-8ae624b7b23e
    State: Peer in Cluster (Connected)
    
    Hostname: mcs3
    Uuid: 862af7b2-bb5e-4b1c-8311-630fa32ed451
    State: Peer in Cluster (Connected)
    $ sudo mkdir -p /brick/storagemanager
    $ sudo gluster volume create storagemanager \
          replica 3 \
          mcs1:/brick/storagemanager \
          mcs2:/brick/storagemanager \
          mcs3:/brick/storagemanager \
          force
    $ sudo gluster volume start storagemanager
    $ sudo mkdir -p /var/lib/columnstore/storagemanager
    127.0.0.1:storagemanager /var/lib/columnstore/storagemanager glusterfs defaults,_netdev 0 0
    $ sudo mount -a
    CREATE TABLE employees_innodb (
        id INT PRIMARY KEY,
        name VARCHAR(100),
        manager_id INT  -- references employees.id (nullable for top-level)
    );
    
    INSERT INTO employees_innodb (id, name, manager_id) VALUES
    (1, 'CEO', NULL),
    (2, 'VP of Sales', 1),
    (3, 'Sales Rep A', 2),
    (4, 'VP of Eng', 1),
    (5, 'Eng A', 4),
    (6, 'Eng B', 4);
    
    CREATE TABLE employees (
        id INT,
        name VARCHAR(100),
        manager_id INT  -- references employees.id (nullable for top-level)
    ) engine=columnstore;
    
    INSERT INTO employees (id, name, manager_id) VALUES
    (1, 'CEO', NULL),
    (2, 'VP of Sales', 1),
    (3, 'Sales Rep A', 2),
    (4, 'VP of Eng', 1),
    (5, 'Eng A', 4),
    (6, 'Eng B', 4);
    
    WITH RECURSIVE org_chart AS (
        -- Anchor: start with the top-level manager (CEO)
        SELECT id, name, manager_id, 0 AS level
        FROM employees
        WHERE id = 1
    
        UNION ALL
    
        -- Recursive step: find employees who report to the previous level
        SELECT e.id, e.name, e.manager_id, oc.level + 1
        FROM employees e
        JOIN org_chart oc ON e.manager_id = oc.id
    )
    SELECT * FROM org_chart;
    
    ERROR 1178 (42000): The storage engine for the table doesn't support Recursive CTE
    SET SESSION columnstore_select_handler=OFF;
    
    WITH RECURSIVE org_chart AS (
        -- Anchor: start with the top-level manager (CEO)
        SELECT id, name, manager_id, 0 AS level
        FROM employees
        WHERE id = 1
    
        UNION ALL
    
        -- Recursive step: find employees who report to the previous level
        SELECT e.id, e.name, e.manager_id, oc.level + 1
        FROM employees e
        JOIN org_chart oc ON e.manager_id = oc.id
    )
    SELECT * FROM org_chart;
    
    SET SESSION columnstore_select_handler=ON;
    
    CREATE TABLE temp_org_chart (
        id INT,
        name VARCHAR(100),
        manager_id INT,
        level INT
    );
    
    -- Initialize the temporary table with the top-level employees
    INSERT INTO temp_org_chart (id, name, manager_id, level)
    SELECT id, name, manager_id, 0 AS level FROM employees WHERE manager_id IS NULL;
    DELIMITER //
    
    CREATE OR REPLACE PROCEDURE populate_org_chart()
    BEGIN
      DECLARE v_level INT DEFAULT 1;
      DECLARE rows_inserted INT DEFAULT 1;
    
      -- Loop until no more rows are inserted, indicating the hierarchy is fully traversed
      WHILE rows_inserted > 0 DO
    
        -- Insert employees who report to the previous level
        INSERT INTO temp_org_chart (id, name, manager_id, level)
        SELECT e.id, e.name, e.manager_id, v_level
        FROM employees e
        JOIN temp_org_chart t ON e.manager_id = t.id
        WHERE t.level = v_level - 1
          AND NOT EXISTS (
              SELECT 1 FROM temp_org_chart x WHERE x.id = e.id
          );
    
        -- Get the number of rows inserted in the current iteration
        SET rows_inserted = ROW_COUNT();
        -- Increment the level for the next iteration
        SET v_level = v_level + 1;
    
      END WHILE;
    END //
    
    DELIMITER ;
    CALL populate_org_chart();
    SELECT * FROM temp_org_chart;
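    With the sample data above, the procedure fills temp_org_chart with one row per employee and their depth in the hierarchy (row order may vary):

    ```
    +------+-------------+------------+-------+
    | id   | name        | manager_id | level |
    +------+-------------+------------+-------+
    |    1 | CEO         |       NULL |     0 |
    |    2 | VP of Sales |          1 |     1 |
    |    4 | VP of Eng   |          1 |     1 |
    |    3 | Sales Rep A |          2 |     2 |
    |    5 | Eng A       |          4 |     2 |
    |    6 | Eng B       |          4 |     2 |
    +------+-------------+------------+-------+
    ```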
    mariadb test -qsNe "SHOW CREATE TABLE employees" \
      | awk -F '\t' '{print $2}' \
      | sed -e 's/ENGINE=Columnstore/ENGINE=InnoDB/' \
            -e 's/CREATE TABLE `employees`/CREATE TABLE `employees2`/' \
      > create_employees2.sql
    
    mariadb test < create_employees2.sql
    mariadb test -e "INSERT INTO employees2 SELECT * FROM employees"
    WITH RECURSIVE org_chart AS (
        -- Anchor: start with the top-level manager (CEO)
        SELECT id, name, manager_id, 0 AS level
        FROM employees2
        WHERE id = 1
    
        UNION ALL
    
        -- Recursive step: find employees who report to the previous level
        SELECT e.id, e.name, e.manager_id, oc.level + 1
        FROM employees2 e
        JOIN org_chart oc ON e.manager_id = oc.id
    )
    SELECT * FROM org_chart;
    # minimize swapping
    vm.swappiness = 1
    
    # Increase the TCP max buffer size
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    
    # Increase the TCP buffer limits
    # min, default, and max number of bytes to use
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    
    # don't cache ssthresh from previous connection
    net.ipv4.tcp_no_metrics_save = 1
    
    # for 1 GigE, increase this to 2500
    # for 10 GigE, increase this to 30000
    net.core.netdev_max_backlog = 2500
    $ sudo sysctl --load=/etc/sysctl.d/90-mariadb-enterprise-columnstore.conf
    $ sudo setenforce permissive
    # This file controls the state of SELinux on the system.
    # SELINUX= can take one of these three values:
    #     enforcing - SELinux security policy is enforced.
    #     permissive - SELinux prints warnings instead of enforcing.
    #     disabled - No SELinux policy is loaded.
    SELINUX=permissive
    # SELINUXTYPE= can take one of three values:
    #     targeted - Targeted processes are protected,
    #     minimum - Modification of targeted policy. Only selected processes are protected.
    #     mls - Multi Level Security protection.
    SELINUXTYPE=targeted
    $ sudo getenforce
    Permissive
    $ sudo systemctl disable apparmor
    $ sudo aa-status
    apparmor module is loaded.
    0 profiles are loaded.
    0 profiles are in enforce mode.
    0 profiles are in complain mode.
    0 processes have profiles defined.
    0 processes are in enforce mode.
    0 processes are in complain mode.
    0 processes are unconfined but have a profile defined.
    $ sudo systemctl status firewalld
    $ sudo systemctl stop firewalld
    $ sudo ufw status verbose
    $ sudo ufw disable
    $ sudo yum install glibc-locale-source glibc-langpack-en
    $ sudo localedef -i en_US -f UTF-8 en_US.UTF-8
    192.0.2.1     mcs1
    192.0.2.2     mcs2
    192.0.2.3     mcs3
    192.0.2.100   mxs1

    net.core.netdev_max_backlog = 2500

    Sets the maximum number of packets that can be queued for a network device. A higher value allows more packets to be queued, improving performance.

    net.core.rmem_max = 16777216

    Sets the maximum receive buffer size for TCP sockets. A higher value allows larger buffers, improving performance.

    net.core.wmem_max = 16777216

    Sets the maximum send buffer size for TCP sockets. A higher value allows larger buffers, improving performance.

    net.ipv4.tcp_max_syn_backlog = 8192

    Sets the maximum number of queued SYN requests. A higher value allows more queued connection requests, improving performance.

    net.ipv4.tcp_timestamps = 0

    Disables TCP timestamps, reducing overhead and improving performance.

    vm.max_map_count = 4262144

    Increases the maximum number of memory map areas a process may have. The default is 65,530, which can be too low for workloads like MariaDB ColumnStore. Raising this prevents mapping errors for processes that need large address spaces.

    kernel.pid_max = 4194304

    Defines the maximum process ID value. Older Linux versions defaulted to 32,768; newer versions default to 4,194,304. Raising this ensures support for systems running a very large number of processes concurrently.

    kernel.threads-max = 2000000

    Specifies the maximum number of threads allowed on the system. The default varies depending on available RAM. A value of 2,000,000 is suitable for systems with 32–64 GB RAM. Increase it further when running with more RAM or requiring more threads.

    vm.overcommit_memory = 1

    Allows the kernel to always overcommit memory, which helps ensure that large allocations made by MariaDB ColumnStore succeed.

    vm.dirty_background_ratio = 5

    Sets the percentage of memory that may be dirty before background writeback begins. A lower value reduces the amount of accumulated dirty memory, improving performance.

    vm.dirty_ratio = 10

    Sets the percentage of memory that may be dirty before processes are forced to write dirty pages out synchronously. A lower value reduces the amount of accumulated dirty memory, improving performance.

    vm.vfs_cache_pressure = 50

    Controls how aggressively the kernel reclaims memory used for the VFS (directory and inode) caches. A value below 100 causes the kernel to retain these caches longer, which can improve performance for workloads that reopen many files.

    For more information, see:

    MariaDB ColumnStore Documentation
    Linux Kernel Documentation
    MCOL-5165: Add optimized Linux kernel parameters for MariaDB ColumnStore

    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    hashtag
    Test Enterprise Server Service

    Use Systemd to test whether the MariaDB Enterprise Server service is running. This action is performed on each Enterprise ColumnStore node.

    Check if the MariaDB Enterprise Server service is running by executing the following:

    If the service is not running on any node, start the service by executing the following on that node:

    hashtag
    Test Local Client Connections

    Use MariaDB Client to test the local connection to the Enterprise Server node.

    This action is performed on each Enterprise ColumnStore node:

    The sudo command is used here to connect to the Enterprise Server node using the root@localhost user account, which authenticates using the unix_socket authentication plugin. Other user accounts can be used by specifying the --user and --password command-line options.

    hashtag
    Test ColumnStore Storage Engine Plugin

    Query the information_schema.PLUGINS table to confirm that the ColumnStore storage engine is loaded.

    This action is performed on each Enterprise ColumnStore node.

    Execute the following query:

    The PLUGIN_STATUS column for each ColumnStore-related plugin should contain ACTIVE.
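    A query of roughly this shape works here (a sketch; the official instructions may select different columns):

    ```sql
    SELECT PLUGIN_NAME, PLUGIN_STATUS
    FROM information_schema.PLUGINS
    WHERE PLUGIN_NAME LIKE '%columnstore%';
    ```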

    hashtag
    Test CMAPI Service

    Use Systemd to test whether the CMAPI service is running. This action is performed on each Enterprise ColumnStore node.

    Check if the CMAPI service is running by executing the following:

    If the service is not running on any node, start the service by executing the following on that node:

    hashtag
    Test ColumnStore Status

    Use CMAPI to request the ColumnStore status. The API key needs to be provided as part of the X-API-key HTTP header.

    This action is performed with the CMAPI service on the primary server.

    Check the ColumnStore status using curl by executing the following:
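    The request generally has this shape (a sketch: the host name mcs1 and the API key are placeholders from this procedure's examples, and the CMAPI version segment in the path depends on the installed CMAPI):

    ```
    $ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
         --header 'Content-Type:application/json' \
         --header 'x-api-key:<configured-api-key>' \
         | jq .
    ```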

    hashtag
    Test DDL

    Use MariaDB Client to test DDL.

    1. On the primary server, use the MariaDB Client to connect to the node:

    2. Create a test database and ColumnStore table:

    3. On each replica server, use the MariaDB Client to connect to the node:

    4. Confirm that the database and table exist:

    If the database or table do not exist on any node, then check the replication configuration.
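    A minimal sketch of the DDL test (database and table names are arbitrary examples):

    ```sql
    CREATE DATABASE IF NOT EXISTS test;

    CREATE TABLE test.contacts (
       first_name VARCHAR(50),
       last_name VARCHAR(50),
       email VARCHAR(100)
    ) ENGINE = ColumnStore;

    -- On each replica, confirm the objects replicated:
    SHOW TABLES IN test;
    ```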

    hashtag
    Test DML

    Use MariaDB Client to test DML.

    1. On the primary server, use the MariaDB Client to connect to the node:

    2. Insert sample data into the table created in the DDL test:

    3. On each replica server, use the MariaDB Client to connect to the node:

    4. Execute a query to retrieve the data:

    If the data is not returned on any node, check the ColumnStore status and the storage configuration.
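    A minimal sketch of the DML test (values are arbitrary; the table mirrors the contacts example used earlier in this guide):

    ```sql
    INSERT INTO test.contacts (first_name, last_name, email)
       VALUES ("Kai", "Devi", "kai.devi@example.com"),
              ("Lee", "Wang", "lee.wang@example.com");

    -- On each replica:
    SELECT * FROM test.contacts;
    ```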

    hashtag
    Next Step

    Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".

    This page was step 5 of 9.

    Next: Step 6: Install MariaDB MaxScale.


    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    hashtag
    Test S3 Connection

    MariaDB Enterprise ColumnStore 23.10 includes a testS3Connection command to test the S3 configuration, permissions, and connectivity.

    This action is performed on each Enterprise ColumnStore node.

    Test the S3 configuration by executing the following:
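    As a sketch (the testS3Connection command ships with Enterprise ColumnStore; running it with sudo is assumed):

    ```
    $ sudo testS3Connection
    ```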

    If the testS3Connection command does not return OK, investigate the S3 configuration.

    hashtag
    Test Enterprise Server Service

    Use Systemd to test whether the MariaDB Enterprise Server service is running.

    This action is performed on each Enterprise ColumnStore node.

    Check if the MariaDB Enterprise Server service is running by executing the following:

    If the service is not running on any node, start the service by executing the following on that node:

    hashtag
    Test Local Client Connections

    Use MariaDB Client to test the local connection to the Enterprise Server node.

    This action is performed on each Enterprise ColumnStore node:

    The sudo command is used here to connect to the Enterprise Server node using the root@localhost user account, which authenticates using the unix_socket authentication plugin. Other user accounts can be used by specifying the --user and --password command-line options.

    hashtag
    Test ColumnStore Storage Engine Plugin

    Query the information_schema.PLUGINS table to confirm that the ColumnStore storage engine is loaded.

    This action is performed on each Enterprise ColumnStore node.

    Execute the following query:

    The PLUGIN_STATUS column for each ColumnStore-related plugin should contain ACTIVE.

    hashtag
    Test CMAPI Service

    Use Systemd to test whether the CMAPI service is running.

    This action is performed on each Enterprise ColumnStore node.

    Check if the CMAPI service is running by executing the following:

    If the service is not running on any node, start the service by executing the following on that node:

    hashtag
    Test ColumnStore Status

    Use CMAPI to request the ColumnStore status. The API key needs to be provided as part of the X-API-key HTTP header.

    This action is performed with the CMAPI service on the primary server.

    Check the ColumnStore status using curl by executing the following:

    hashtag
    Test DDL

    Use MariaDB Client to test DDL.

    1. On the primary server, use the MariaDB Client to connect to the node:

    2. Create a test database and ColumnStore table:

    3. On each replica server, use the MariaDB Client to connect to the node:

    4. Confirm that the database and table exist:

    If the database or table do not exist on any node, then check the replication configuration.

    hashtag
    Test DML

    Use MariaDB Client to test DML.

    1. On the primary server, use the MariaDB Client to connect to the node:

    2. Insert sample data into the table created in the DDL test:

    3. On each replica server, use the MariaDB Client to connect to the node:

    4. Execute a SELECT query to retrieve the data:

    If the data is not returned on any node, check the ColumnStore status and the storage configuration.

    hashtag
    Next Step

    Navigation in the procedure "Deploy ColumnStore Object Storage Topology":

    This page was step 5 of 9.

    Next: Step 6: Install MariaDB MaxScale.


    Unlink the server object from the service using the unlink service command.
  • As the first argument, provide the name of the service.

  • As the second argument, provide the name of the server.

    hashtag
    Checking the Service in MaxScale

    To confirm that the server object was properly unlinked from the service, the service should be checked using MaxCtrl:

    • Show the services using the show services command, like this:

    hashtag
    Unlinking from Monitor in MaxScale

    The server object for the node must be unlinked from the monitor using MaxCtrl:

    • Unlink a server object from the monitor using the unlink monitor command.

    • As the first argument, provide the name of the monitor.

    • As the second argument, provide the name of the server.

    hashtag
    Checking the Monitor in MaxScale

    To confirm that the server object was properly unlinked from the monitor, the monitor should be checked using MaxCtrl:

    • Show the monitors using the show monitors command, like this:

    hashtag
    Removing the Server from MaxScale

    The server object for the node must also be removed from MaxScale using MaxCtrl:

    • Use MaxCtrl or another supported REST client.

    • Remove the server object using the destroy server command.

    • As the first argument, provide the name for the server.

    For example:

    hashtag
    Checking the Server in MaxScale

To confirm that the server object was properly removed, the server objects should be checked using maxctrl:

    • Show the server objects using the show servers command, like this:

    Stopping the Enterprise ColumnStore Services

The Enterprise Server, Enterprise ColumnStore, and CMAPI services can be stopped using the systemctl command.

    Perform the following procedure on the node:

    1. Stop the MariaDB Enterprise Server service:

    2. Stop the MariaDB Enterprise ColumnStore service:

    3. Stop the CMAPI service:
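For example (the mariadb and mariadb-columnstore-cmapi service names appear elsewhere in this guide; mariadb-columnstore is assumed to be the ColumnStore service name):

```
$ sudo systemctl stop mariadb
$ sudo systemctl stop mariadb-columnstore
$ sudo systemctl stop mariadb-columnstore-cmapi
```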

    Removing the Node from Enterprise ColumnStore

    The node must be removed from Enterprise ColumnStore using CMAPI:

    • Remove the node using the remove-node endpoint path.

• Use a supported REST client, such as curl.

    • Format the JSON output using jq for enhanced readability.

• Authenticate using the configured API key.

• Include the Content-Type header.

    For example, if the primary node's host name is mcs1 and the IP address for the node to remove is 192.0.2.3:

    • In ES 10.5.10-7 and later:

    • In ES 10.5.9-6 and earlier:
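A sketch of the call for ES 10.5.10-7 and later (the endpoint path follows the remove-node description above; the API key, timeout, and addresses are placeholders; the pre-10.5.10 form is not shown):

```
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/remove-node \
   --header 'Content-Type:application/json' \
   --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
   --data '{"timeout": 60, "node": "192.0.2.3"}' \
   | jq .
```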

    Example output:

    Checking the Enterprise ColumnStore Status

    To confirm that the node was properly removed, the status of Enterprise ColumnStore should be checked using CMAPI:

    • Check the status using the status endpoint path.

    For example, if the primary node's host name is mcs1:
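This is the same status call shown elsewhere in this guide (the API key is a placeholder):

```
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
   --header 'Content-Type:application/json' \
   --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
   | jq .
```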

    Example output:


    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Configure Enterprise ColumnStore

    Mandatory system variables and options for Single-Node Enterprise ColumnStore include:

character_set_server

Set this system variable to utf8.

collation_server

Set this system variable to utf8_general_ci.

columnstore_use_import_for_batchinsert

Set this system variable to ALWAYS to always use cpimport for LOAD DATA INFILE and INSERT INTO ... SELECT statements.

    Example Configuration
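For example, the variables above can be set in a custom configuration file (the z- prefixed path follows the convention described in this guide; the exact include directory varies by distribution):

```ini
# /etc/my.cnf.d/z-columnstore.cnf (path varies by distribution)
[mariadb]
character_set_server = utf8
collation_server = utf8_general_ci
columnstore_use_import_for_batchinsert = ALWAYS
```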

    Configure the S3 Storage Manager

    Configure Enterprise ColumnStore S3 Storage Manager to use S3-compatible storage by editing the /etc/columnstore/storagemanager.cnf configuration file:

    The S3-compatible object storage options are configured under [S3]:

    • The bucket option must be set to the name of the bucket that you created in "Create an S3 Bucket".

    • The endpoint option must be set to the endpoint for the S3-compatible object storage.

    • The aws_access_key_id and aws_secret_access_key options must be set to the access key ID and secret access key for the S3-compatible object storage.

    • To use a specific IAM role, you must uncomment and set iam_role_name, sts_region, and sts_endpoint.

    • To use the IAM role assigned to an EC2 instance, you must uncomment ec2_iam_mode=enabled.

    The local cache options are configured under [Cache]:

    • The cache_size option is set to 2 GB by default.

    • The path option is set to /var/lib/columnstore/storagemanager/cache by default.

    Ensure that the specified path has sufficient storage space for the specified cache size.
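Putting the options above together, a sketch of the file (the bucket name, endpoint, and credentials are placeholders):

```ini
# /etc/columnstore/storagemanager.cnf (sketch)
[ObjectStorage]
service = S3

[S3]
bucket = your-columnstore-bucket
endpoint = s3.us-east-1.amazonaws.com
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
# iam_role_name = your-iam-role
# sts_region = us-east-1
# sts_endpoint = sts.us-east-1.amazonaws.com
# ec2_iam_mode = enabled

[Cache]
cache_size = 2g
path = /var/lib/columnstore/storagemanager/cache
```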

    Start the Enterprise ColumnStore Services

    Start and enable the MariaDB Enterprise Server service, so that it starts automatically upon reboot:

    Start and enable the MariaDB Enterprise ColumnStore service, so that it starts automatically upon reboot:
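For example, both services can be started and enabled in one step (the mariadb service name appears elsewhere in this guide; mariadb-columnstore is assumed to be the ColumnStore service name):

```
$ sudo systemctl enable --now mariadb
$ sudo systemctl enable --now mariadb-columnstore
```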

    Create the Utility User

Enterprise ColumnStore requires a utility user account to perform cross-engine joins and similar operations.

1. Create the user account with the CREATE USER statement:

2. Grant the user account SELECT privileges on all databases with the GRANT statement:

3. Configure Enterprise ColumnStore to use the utility user:

4. Set the password:
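For example, assuming a utility user named util_user (the user name, host, and password are placeholders; the mcsSetConfig keys are assumptions based on the Cross-Engine settings referenced below):

```
$ sudo mariadb
CREATE USER 'util_user'@'127.0.0.1' IDENTIFIED BY 'util_user_passwd';
GRANT SELECT ON *.* TO 'util_user'@'127.0.0.1';

$ sudo mcsSetConfig CrossEngineSupport User "util_user"
$ sudo mcsSetConfig CrossEngineSupport Password "util_user_passwd"
```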

For details about how to encrypt the password, see "Credentials Management for MariaDB Enterprise ColumnStore".

    Passwords should meet your organization's password policies. If your MariaDB Enterprise Server instance has a password validation plugin installed, then the password should also meet the configured requirements.

    Configure Linux Security Modules (LSM)

    The specific steps to configure the security module depend on the operating system.

    Configure SELinux (CentOS, RHEL)

    Configure SELinux for Enterprise ColumnStore:

    1. To configure SELinux, you have to install the packages required for audit2allow. On CentOS 7 and RHEL 7, install the following:

    On RHEL 8, install the following:

2. Allow the system to run under load for a while to generate SELinux audit events.

3. After the system has taken some load, generate an SELinux policy from the audit events using audit2allow:

    If no audit events were found, this will print the following:

4. If audit events were found, the new SELinux policy can be loaded using semodule:

5. Set SELinux to enforcing mode by setting SELINUX=enforcing in /etc/selinux/config.

    For example, the file will usually look like this after the change:

6. Set SELinux to enforcing mode at runtime:
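A sketch of the command sequence for steps 3 through 6 (the policy module name mariadb_local is a placeholder; the audit-log grep pattern is an assumption):

```
# Generate a policy module from audit events, then load it
$ sudo grep mysqld /var/log/audit/audit.log | sudo audit2allow -M mariadb_local
$ sudo semodule -i mariadb_local.pp

# Enforce at runtime
$ sudo setenforce 1
```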

    Configure AppArmor (Ubuntu)

For information on how to create a profile, see How to create an AppArmor Profile on ubuntu.com.

    Next Step

    Navigation in the Single-Node Enterprise ColumnStore topology with Object storage deployment procedure:

    This page was step 3 of 5.

    Next: Step 4: Test MariaDB Enterprise ColumnStore.



    Single-Node Localstorage

    This guide provides steps for deploying a single-node ColumnStore, setting up the environment, installing the software, and bulk importing data for online analytical processing (OLAP) workloads.

    This procedure describes the deployment of the Single-Node Enterprise ColumnStore topology with Local storage.

    MariaDB Enterprise ColumnStore is a columnar storage engine for MariaDB Enterprise Server 10.6. Enterprise ColumnStore is best suited for Online Analytical Processing (OLAP) workloads.

    This procedure has 5 steps, which are executed in sequence.


    The instructions were tested against ColumnStore 23.10.

    This page provides an overview of the topology, requirements, and deployment procedures.

    Please read and understand this procedure before executing.

    Procedure Steps

    Step
    Description

    Support

Customers can obtain support by contacting MariaDB Support.

    Components

    The following components are deployed during this procedure:

    Component
    Function

    MariaDB Enterprise Server Components

    Component
    Description

    Topology

    The Single-Node Enterprise ColumnStore topology provides support for Online Analytical Processing (OLAP) workloads to MariaDB Enterprise Server.

    The Enterprise ColumnStore node:

    • Receives queries from the application

    • Executes queries

    • Uses the local disk for storage.

    High Availability

Single-Node Enterprise ColumnStore does not provide high availability (HA) for Online Analytical Processing (OLAP). If you would like to deploy Enterprise ColumnStore with high availability, see the multi-node deployment procedures.

    Requirements

    These requirements are for the Single-Node Enterprise ColumnStore, when deployed with MariaDB Enterprise Server 10.6 and MariaDB Enterprise ColumnStore 23.10.

    Operating System

    • Debian 11 (x86_64, ARM64)

    • Debian 12 (x86_64, ARM64)

    • Red Hat Enterprise Linux 8 (x86_64, ARM64)

    • Red Hat Enterprise Linux 9 (x86_64, ARM64)

    Minimum Hardware Requirements

MariaDB Enterprise ColumnStore's minimum hardware requirements are not intended for production environments, but the minimum hardware requirements can be appropriate for development and test environments. For production environments, see the recommended hardware requirements instead.

    The minimum hardware requirements are:

    Component
    CPU
    Memory

    MariaDB Enterprise ColumnStore will refuse to start if the system has less than 3 GB of memory.

    If Enterprise ColumnStore is started on a system with less memory, the following error message will be written to the ColumnStore system log called crit.log:

    And the following error message will be raised to the client:

    Recommended Hardware Requirements

    MariaDB Enterprise ColumnStore's recommended hardware requirements are intended for production analytics.

    The recommended hardware requirements are:

    Component
    CPU
    Memory

    Quick Reference

    MariaDB Enterprise Server Configuration Management

    Method
    Description

    MariaDB Enterprise Server packages are configured to read configuration files from different paths, depending on the operating system. Making custom changes to Enterprise Server default configuration files is not recommended because custom changes may be overwritten by other default configuration files that are loaded later.

    To ensure that your custom changes will be read last, create a custom configuration file with the z- prefix in one of the include directories.

    Distribution
    Example Configuration File Path

    MariaDB Enterprise Server Service Management

    The systemctl command is used to start and stop the MariaDB Enterprise Server service.

    Operation
    Command

    Next Step

    Navigation in the Single-Node Enterprise ColumnStore topology with Local storage deployment procedure:

    • Next: Step 1: Install MariaDB Enterprise ColumnStore 23.10.


Step 7: Start and Configure MariaDB MaxScale

    Overview

    This page details step 7 of the 9-step procedure "Deploy ColumnStore Shared Local Storage Topology".

    This step starts and configures MariaDB MaxScale 22.08.


    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Replace the Default Configuration File

    MariaDB MaxScale installations include a configuration file with some example objects. This configuration file should be replaced.

    On the MaxScale node, replace the default /etc/maxscale.cnf with the following configuration:
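A minimal sketch of the global section (the parameter choices below are assumptions, not the exact configuration from the original procedure):

```ini
# /etc/maxscale.cnf (sketch)
[maxscale]
threads = auto
admin_host = 0.0.0.0
admin_secure_gui = false
```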

    For additional information, see "Global Parameters".

    Restart MaxScale

    On the MaxScale node, restart the MaxScale service to ensure that MaxScale picks up the new configuration:
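For example:

```
$ sudo systemctl restart maxscale
```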

    For additional information, see "Start and Stop Services".

    Configure Server Objects

On the MaxScale node, use maxctrl to create a server object for each Enterprise ColumnStore node:
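For example, using this guide's example addresses (the server names mcs1 through mcs3 are placeholders):

```
$ maxctrl create server mcs1 192.0.2.1 3306
$ maxctrl create server mcs2 192.0.2.2 3306
$ maxctrl create server mcs3 192.0.2.3 3306
```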

    Configure MariaDB Monitor

    MaxScale uses monitors to retrieve additional information from the servers. This information is used by other services in filtering and routing connections based on the current state of the node. For MariaDB Enterprise ColumnStore, use the MariaDB Monitor (mariadbmon).

On the MaxScale node, use maxctrl to create a MariaDB Monitor:

    In this example:

    • columnstore_monitor is an arbitrary name that is used to identify the new monitor.

    • mariadbmon is the name of the module that implements the MariaDB Monitor.

    • user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to monitor the ColumnStore nodes.
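For example (the password parameter and the mcs1 through mcs3 server names are placeholders):

```
$ maxctrl create monitor columnstore_monitor mariadbmon user=MAXSCALE_USER password=MAXSCALE_USER_PASSWORD
$ maxctrl link monitor columnstore_monitor mcs1 mcs2 mcs3
```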

    Choose a MaxScale Router

    Routers control how MaxScale balances the load between Enterprise ColumnStore nodes. Each router uses a different approach to routing queries. Consider the specific use case of your application and database load and select the router that best suits your needs.

    Router
    Configuration Procedure
    Description

    Configure Read Connection Router

Use the Read Connection Router (readconnroute) to route connections to replica servers for a read-only pool.

On the MaxScale node, use maxctrl to create a router:

    In this example:

    • connection_router_service is an arbitrary name that is used to identify the new service.

    • readconnroute is the name of the module that implements the Read Connection Router.

    • user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to connect to the ColumnStore nodes.
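For example (the password parameter, the router_options=slave setting for a replica-only pool, and the mcs1 through mcs3 server names are placeholders or assumptions):

```
$ maxctrl create service connection_router_service readconnroute user=MAXSCALE_USER password=MAXSCALE_USER_PASSWORD router_options=slave
$ maxctrl link service connection_router_service mcs1 mcs2 mcs3
```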

    Configure Listener for the Read Connection Router

    These instructions reference TCP port 3308. You can use a different TCP port. The TCP port used must not be bound by any other listener.

On the MaxScale node, use the maxctrl create listener command to configure MaxScale to use a listener for the Read Connection Router:

    In this example:

    • connection_router_service is the name of the readconnroute service that was previously created.

    • connection_router_listener is an arbitrary name that is used to identify the new listener.
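For example:

```
$ maxctrl create listener connection_router_service connection_router_listener 3308
```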

    Configure Read/Write Split Router for Queries

    MaxScale performs query-based load balancing. The router routes write queries to the primary and read queries to the replicas.

On the MaxScale node, use the maxctrl create service command to configure MaxScale to use the Read/Write Split Router:

    In this example:

    • query_router_service is an arbitrary name that is used to identify the new service.

    • readwritesplit is the name of the module that implements the Read/Write Split Router.

    • user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to connect to the ColumnStore nodes.
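For example (the password parameter and the mcs1 through mcs3 server names are placeholders):

```
$ maxctrl create service query_router_service readwritesplit user=MAXSCALE_USER password=MAXSCALE_USER_PASSWORD
$ maxctrl link service query_router_service mcs1 mcs2 mcs3
```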

    Configure a Listener for the Read/Write Split Router

    These instructions reference TCP port 3307. You can use a different TCP port. The TCP port used must not be bound by any other listener.

On the MaxScale node, use the maxctrl create listener command to configure MaxScale to use a listener for the Read/Write Split Router:

    In this example:

    • query_router_service is the name of the readwritesplit service that was previously created.

    • query_router_listener is an arbitrary name that is used to identify the new listener.

    • 3307 is the TCP port.
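For example:

```
$ maxctrl create listener query_router_service query_router_listener 3307
```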

    Start Services

To start the services and monitors, on the MaxScale node use maxctrl:
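For example:

```
$ maxctrl start services
$ maxctrl start monitor columnstore_monitor
```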

    Next Step

    Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".

    This page was step 7 of 9.


    ColumnStore Partition Management

    Introduction

    MariaDB ColumnStore automatically creates logical horizontal partitions across every column. For ordered or semi-ordered data fields such as an order date this will result in a highly effective partitioning scheme based on that column. This allows for increased performance of queries filtering on that column since partition elimination can be performed. It also allows for data lifecycle management as data can be disabled or dropped by partition cheaply. Caution should be used when disabling or dropping partitions as these commands are destructive.

    It is important to understand that a Partition in ColumnStore terms is actually 2 extents (16 million rows) and that extents & partitions are created according to the following algorithm in 1.0.x:

    1. Create 4 extents in 4 files

    2. When these are filled up (after 32M rows), create 4 more extents in the 4 files created in step 1.

    3. When these are filled up (after 64M rows), create a new partition.
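The allocation pattern above can be sketched numerically. This is a simplified model (row counts taken from the 1.0.x algorithm described here; the function names are illustrative):

```python
import math

FILL_ROWS = 32_000_000       # 4 extents x 8M rows, allocated together
PARTITION_ROWS = 64_000_000  # two fills -> one full partition

def extents_allocated(rows: int) -> int:
    """Extents are allocated 4 at a time, one batch per 32M-row fill."""
    return max(1, math.ceil(rows / FILL_ROWS)) * 4

def partitions_used(rows: int) -> int:
    """A new partition starts after each 64M rows."""
    return max(1, math.ceil(rows / PARTITION_ROWS))

# 70M rows need three 4-extent batches (12 extents) spanning 2 partitions.
```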

    Managing Partitions by Partition Number

    Displaying Partitioning Information

Information about all partitions for a given column can be retrieved using the calShowPartitions stored procedure, which takes either two or three parameters: [database_name], table_name, and column_name. If two parameters are provided, the current database is assumed. For example:
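Assuming a hypothetical database, table, and column (test, orders, order_date):

```sql
SELECT calShowPartitions('test', 'orders', 'order_date');
```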

    Enabling Partitions

    The calEnablePartitions stored procedure allows for enabling of one or more partitions. The procedure takes the same set of parameters as calDisablePartitions.

    For example:

    The result showing the first partition has been enabled:

    Disabling Partitions

The calDisablePartitions stored procedure allows for disabling of one or more partitions. A disabled partition still exists on the file system (and can be enabled again at a later time) but will not participate in any query, DML, or import activity. The procedure takes either two or three parameters: [database_name], table_name, and partition_numbers separated by commas. If two parameters are provided, the current database is assumed.

    For example:

    The result showing the first partition has been disabled:

    Dropping Partitions

    The calDropPartitions stored procedure allows for dropping of one or more partitions. Dropping means that the underlying storage is deleted and the partition is completely removed. A partition can be dropped from either enabled or disabled state. The procedure takes the same set of parameters as calDisablePartitions. Extra caution should be used with this procedure since it is destructive and cannot be reversed.

    For example:

    The result showing the first partition has been dropped:

    Managing Partitions by Column Value

    Displaying Partitioning Information

Information about a range of partitions for a given column can be retrieved using the calShowPartitionsByValue stored procedure. This procedure takes either four or five parameters: [database_name], table_name, column_name, start_value, and end_value. If four parameters are provided, the current database is assumed. Only casual partition column types (integer, decimal, date, datetime, char up to 8 bytes, and varchar up to 7 bytes) are supported for this function.

The function returns a list of partitions whose minimum and maximum values for column_name fall completely within the range of start_value and end_value.

    For example:
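Assuming a hypothetical table and column, and the date range discussed below:

```sql
SELECT calShowPartitionsByValue('test', 'orders', 'order_date', '1992-01-01', '1998-08-02');
```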

    Enabling Partitions

The calEnablePartitionsByValue stored procedure allows for enabling of one or more partitions by value. The procedure takes the same set of arguments as calShowPartitionsByValue.

A good practice is to use calShowPartitionsByValue to identify the partitions to be enabled and then the same argument values used to construct the calEnablePartitionsByValue call.

    For example:

    The result showing the first partition has been enabled:

    Disabling Partitions

    The calDisablePartitionsByValue stored procedure allows for disabling of one or more partitions by value. A disabled partition still exists on the file system (and can be enabled again at a later time) but will not participate in any query, DML or import activity. The procedure takes the same set of arguments as calShowPartitionsByValue.

    A good practice is to use calShowPartitionsByValue to identify the partitions to be disabled and then the same argument values used to construct the calDisablePartitionsByValue call. For example:

    The result showing the first partition has been disabled:

    Dropping Partitions

The calDropPartitionsByValue stored procedure allows for dropping of one or more partitions by value. Dropping means that the underlying storage is deleted and the partition is completely removed. A partition can be dropped from either enabled or disabled state. The procedure takes the same set of arguments as calShowPartitionsByValue. A good practice is to use calShowPartitionsByValue to identify the partitions to be dropped and then the same argument values used to construct the calDropPartitionsByValue call. Extra caution should be used with this procedure since it is destructive and cannot be reversed.

    For example:

    The result showing the first partition has been dropped:

    Dropping Data Outside of Partitions

    Since the partitioning scheme is system-maintained, the minimum and maximum values are not directly specified, but influenced by the order of data loading. If you want to drop a specific date range, additional deletes are required to achieve this. The following cases may occur:

    • For semi-ordered data, there may be overlap between minimum and maximum values between partitions.

    • As in the example above, the partition ranges from 1992-01-01 to 1998-08-02. It may be desirable to drop the remaining 1998 rows.

    A bulk-delete statement can be used to delete the remaining rows that do not fall exactly within partition ranges. The partition drops will be fastest; however, the system optimizes bulk-delete statements to delete by block internally. This is still relatively fast.
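For example, assuming the hypothetical orders table and the date range above, the remaining 1998 rows could be removed with:

```sql
DELETE FROM orders WHERE order_date >= '1998-01-01';
```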


    Backup and Restore with Object Storage

    Overview

    MariaDB Enterprise ColumnStore supports backup and restore. If Enterprise ColumnStore uses S3-compatible object storage for data and shared local storage for the Storage Manager directory, the S3 bucket, the Storage Manager directory, and the MariaDB data directory must be backed up separately.

    Recovery Planning

MariaDB Enterprise ColumnStore supports multiple storage options.

This page discusses how to back up and restore Enterprise ColumnStore when it uses S3-compatible object storage for data and shared local storage (such as NFS) for the Storage Manager directory.

    Any file can become corrupt due to hardware issues, crashes, power loss, and other reasons. If the Enterprise ColumnStore data or metadata become corrupt, Enterprise ColumnStore could become unusable, resulting in data loss.

If Enterprise ColumnStore is your system of record, it should be backed up regularly.

If Enterprise ColumnStore uses S3-compatible object storage for data and shared local storage for the Storage Manager directory, the following items must be backed up:

• The MariaDB data directory is backed up using mariadb-backup.

    • The S3 bucket must be backed up using the vendor's snapshot procedure.

• The Storage Manager directory must be backed up.

    See the instructions below for more details.

    Backup

    Use the following process to take a backup:

    1. Determine which node is the primary server using curl to send the status command to the CMAPI Server:

    The output will show "dbrm_mode": "master" for the primary server:
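Picking the primary out of the status payload can be sketched in Python (the fields follow the cluster/status output shown elsewhere in this guide; per-node entries are keyed by IP address alongside top-level metadata fields):

```python
# Trimmed sample payload in the shape returned by the cluster/status endpoint.
status = {
    "timestamp": "2020-12-15 00:40:34.353574",
    "192.0.2.1": {"dbrm_mode": "master", "cluster_mode": "readwrite"},
    "192.0.2.2": {"dbrm_mode": "slave", "cluster_mode": "readonly"},
    "num_nodes": 2,
}

def find_primary(payload: dict) -> str:
    """Return the address of the node reporting dbrm_mode == 'master'."""
    for key, value in payload.items():
        if isinstance(value, dict) and value.get("dbrm_mode") == "master":
            return key
    raise LookupError("no primary node found in status payload")
```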

2. Connect to the primary server using MariaDB Client as a user account that has privileges to lock the database:

3. Lock the database with the FLUSH TABLES WITH READ LOCK statement.

    Ensure that the client remains connected to the primary server, so that the lock is held for the remaining steps.

4. Make a copy or snapshot of the Storage Manager directory. By default, it is located at /var/lib/columnstore/storagemanager.

    For example, to make a copy of the directory with rsync:
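A sketch of the copy (the /backup destination path is a placeholder):

```
$ sudo rsync -av /var/lib/columnstore/storagemanager/ /backup/columnstore/storagemanager/
```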

5. Use mariadb-backup to back up the MariaDB data directory:

6. Use mariadb-backup to prepare the backup:
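A sketch of the backup and prepare steps (the target directory and user are placeholders):

```
$ sudo mariadb-backup --backup --user=root --target-dir=/backup/mariadb/
$ sudo mariadb-backup --prepare --target-dir=/backup/mariadb/
```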

7. Create a snapshot of the S3-compatible storage. Consult the storage vendor's manual for details on how to do this.

8. Ensure that all previous operations are complete.

9. In the original client connection to the primary server, unlock the database with the UNLOCK TABLES statement.

    Restore

    Use the following process to restore a backup:

1. Deploy a new, empty Enterprise ColumnStore installation, so that you can restore the backup to an empty deployment.

    2. Ensure that all services are stopped on each node:

3. Restore the backup of the Storage Manager directory. By default, it is located at /var/lib/columnstore/storagemanager.

    For example, to restore the backup with rsync:
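A sketch of the restore copy (the /backup source path is a placeholder):

```
$ sudo rsync -av /backup/columnstore/storagemanager/ /var/lib/columnstore/storagemanager/
```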

4. Use mariadb-backup to restore the backup of the MariaDB data directory:
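A sketch of the restore step (the target directory is a placeholder; the chown assumes the default mysql user and data directory):

```
$ sudo mariadb-backup --copy-back --target-dir=/backup/mariadb/
$ sudo chown -R mysql:mysql /var/lib/mysql
```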

5. Restore the snapshot of your S3-compatible storage to the new S3 bucket that you plan to use. Consult the storage vendor's manual for details on how to do this.

6. Update storagemanager.cnf to configure Enterprise ColumnStore to use the S3 bucket. By default, it is located at /etc/columnstore/storagemanager.cnf.

    For example:
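A sketch of the changed option (the bucket name is a placeholder):

```ini
[S3]
bucket = your-new-columnstore-bucket
```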

    • The default local cache size is 2 GB.

    • The default local cache path is /var/lib/columnstore/storagemanager/cache.

• Ensure that the local cache path has sufficient storage space for the local cache.

7. Start the services on each node:


    Multi-Node S3

    This guide provides steps for deploying a multi-node S3 ColumnStore, setting up the environment, installing the software, and bulk importing data for online analytical processing (OLAP) workloads.

    Overview

This procedure describes the deployment of the Multi-Node Enterprise ColumnStore topology with Object storage.

MariaDB Enterprise ColumnStore is a columnar storage engine for MariaDB Enterprise Server. Enterprise ColumnStore is best suited for Online Analytical Processing (OLAP) workloads.

This procedure has 9 steps, which are executed in sequence.

Step 7: Start and Configure MariaDB MaxScale

    Overview

    This page details step 7 of the 9-step procedure "Deploy ColumnStore Object Storage Topology".

    This step starts and configures MariaDB MaxScale 22.08.


    vm.overcommit_memory=1 
    vm.dirty_background_ratio=5 
    vm.dirty_ratio=10 
    vm.vfs_cache_pressure=50 
    net.core.netdev_max_backlog=2500 
    net.core.rmem_max=16777216 
    net.core.wmem_max=16777216 
    net.ipv4.tcp_max_syn_backlog=8192 
    net.ipv4.tcp_timestamps=0
    sudo sysctl -p
    cat /proc/sys/kernel/threads-max
    cat /proc/sys/kernel/pid_max
    cat /proc/sys/vm/max_map_count
    
    
    # RHEL: append to /etc/sysctl.conf ("sudo echo ... >> file" does not work,
    # because the redirection runs as the unprivileged user; use tee instead)
    echo "vm.max_map_count=4262144" | sudo tee -a /etc/sysctl.conf
    echo "kernel.pid_max = 4194304" | sudo tee -a /etc/sysctl.conf
    echo "kernel.threads-max = 2000000" | sudo tee -a /etc/sysctl.conf
    
    # There may be a file called 50-pid-max.conf or something similar. If so, modify it:
    echo "vm.max_map_count=4262144" | sudo tee /usr/lib/sysctl.d/50-max_map_count.conf
    echo "kernel.pid_max = 4194304" | sudo tee /usr/lib/sysctl.d/50-pid-max.conf
    sudo sysctl -p
    $ systemctl status mariadb
    $ sudo systemctl start mariadb
    $ sudo mariadb
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 38
    Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
    
    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    MariaDB [(none)]>
    SELECT PLUGIN_NAME, PLUGIN_STATUS
    FROM information_schema.PLUGINS
    WHERE PLUGIN_LIBRARY LIKE 'ha_columnstore%';
    
    +---------------------+---------------+
    | PLUGIN_NAME         | PLUGIN_STATUS |
    +---------------------+---------------+
    | Columnstore         | ACTIVE        |
    | COLUMNSTORE_COLUMNS | ACTIVE        |
    | COLUMNSTORE_TABLES  | ACTIVE        |
    | COLUMNSTORE_FILES   | ACTIVE        |
    | COLUMNSTORE_EXTENTS | ACTIVE        |
    +---------------------+---------------+
    $ systemctl status mariadb-columnstore-cmapi
    $ sudo systemctl start mariadb-columnstore-cmapi
    $ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       | jq .
    {
      "timestamp": "2020-12-15 00:40:34.353574",
      "192.0.2.1": {
        "timestamp": "2020-12-15 00:40:34.362374",
        "uptime": 11467,
        "dbrm_mode": "master",
        "cluster_mode": "readwrite",
        "dbroots": [
          "1"
        ],
        "module_id": 1,
        "services": [
          {
            "name": "workernode",
            "pid": 19202
          },
          {
            "name": "controllernode",
            "pid": 19232
          },
          {
            "name": "PrimProc",
            "pid": 19254
          },
          {
            "name": "ExeMgr",
            "pid": 19292
          },
          {
            "name": "WriteEngine",
            "pid": 19316
          },
          {
            "name": "DMLProc",
            "pid": 19332
          },
          {
            "name": "DDLProc",
            "pid": 19366
          }
        ]
      },
      "192.0.2.2": {
        "timestamp": "2020-12-15 00:40:34.428554",
        "uptime": 11437,
        "dbrm_mode": "slave",
        "cluster_mode": "readonly",
        "dbroots": [
          "2"
        ],
        "module_id": 2,
        "services": [
          {
            "name": "workernode",
            "pid": 17789
          },
          {
            "name": "PrimProc",
            "pid": 17813
          },
          {
            "name": "ExeMgr",
            "pid": 17854
          },
          {
            "name": "WriteEngine",
            "pid": 17877
          }
        ]
      },
      "192.0.2.3": {
        "timestamp": "2020-12-15 00:40:34.428554",
        "uptime": 11437,
        "dbrm_mode": "slave",
        "cluster_mode": "readonly",
        "dbroots": [
          "2"
        ],
        "module_id": 2,
        "services": [
          {
            "name": "workernode",
            "pid": 17789
          },
          {
            "name": "PrimProc",
            "pid": 17813
          },
          {
            "name": "ExeMgr",
            "pid": 17854
          },
          {
            "name": "WriteEngine",
            "pid": 17877
          }
        ]
      },
      "num_nodes": 3
    }
    $ sudo mariadb
    CREATE DATABASE IF NOT EXISTS test;
    
    CREATE TABLE IF NOT EXISTS test.contacts (
       first_name VARCHAR(50),
       last_name VARCHAR(50),
       email VARCHAR(100)
    ) ENGINE = ColumnStore;
    $ sudo mariadb
    SHOW CREATE TABLE test.contacts\G
    $ sudo mariadb
    INSERT INTO test.contacts (first_name, last_name, email)
       VALUES
       ("Kai", "Devi", "kai.devi@example.com"),
       ("Lee", "Wang", "lee.wang@example.com");
    $ sudo mariadb
    SELECT * FROM test.contacts;
    
    +------------+-----------+----------------------+
    | first_name | last_name | email                |
    +------------+-----------+----------------------+
    | Kai        | Devi      | kai.devi@example.com |
    | Lee        | Wang      | lee.wang@example.com |
    +------------+-----------+----------------------+
    $ sudo testS3Connection
    StorageManager[26887]: Using the config file found at /etc/columnstore/storagemanager.cnf
    StorageManager[26887]: S3Storage: S3 connectivity & permissions are OK
    S3 Storage Manager Configuration OK
    $ systemctl status mariadb
    $ sudo systemctl start mariadb
    $ sudo mariadb
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 38
    Server version: 11.4.5-3-MariaDB-Enterprise MariaDB Enterprise Server
    
    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    MariaDB [(none)]>
    SELECT PLUGIN_NAME, PLUGIN_STATUS
    FROM information_schema.PLUGINS
    WHERE PLUGIN_LIBRARY LIKE 'ha_columnstore%';
    
    +---------------------+---------------+
    | PLUGIN_NAME         | PLUGIN_STATUS |
    +---------------------+---------------+
    | Columnstore         | ACTIVE        |
    | COLUMNSTORE_COLUMNS | ACTIVE        |
    | COLUMNSTORE_TABLES  | ACTIVE        |
    | COLUMNSTORE_FILES   | ACTIVE        |
    | COLUMNSTORE_EXTENTS | ACTIVE        |
    +---------------------+---------------+
    $ systemctl status mariadb-columnstore-cmapi
    $ sudo systemctl start mariadb-columnstore-cmapi
    $ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       | jq .
    
    {
      "timestamp": "2020-12-15 00:40:34.353574",
      "192.0.2.1": {
        "timestamp": "2020-12-15 00:40:34.362374",
        "uptime": 11467,
        "dbrm_mode": "master",
        "cluster_mode": "readwrite",
        "dbroots": [
          "1"
        ],
        "module_id": 1,
        "services": [
          {
            "name": "workernode",
            "pid": 19202
          },
          {
            "name": "controllernode",
            "pid": 19232
          },
          {
            "name": "PrimProc",
            "pid": 19254
          },
          {
            "name": "ExeMgr",
            "pid": 19292
          },
          {
            "name": "WriteEngine",
            "pid": 19316
          },
          {
            "name": "DMLProc",
            "pid": 19332
          },
          {
            "name": "DDLProc",
            "pid": 19366
          }
        ]
      },
      "192.0.2.2": {
        "timestamp": "2020-12-15 00:40:34.428554",
        "uptime": 11437,
        "dbrm_mode": "slave",
        "cluster_mode": "readonly",
        "dbroots": [
          "2"
        ],
        "module_id": 2,
        "services": [
          {
            "name": "workernode",
            "pid": 17789
          },
          {
            "name": "PrimProc",
            "pid": 17813
          },
          {
            "name": "ExeMgr",
            "pid": 17854
          },
          {
            "name": "WriteEngine",
            "pid": 17877
          }
        ]
      },
      "192.0.2.3": {
        "timestamp": "2020-12-15 00:40:34.428554",
        "uptime": 11437,
        "dbrm_mode": "slave",
        "cluster_mode": "readonly",
        "dbroots": [
          "2"
        ],
        "module_id": 2,
        "services": [
          {
            "name": "workernode",
            "pid": 17789
          },
          {
            "name": "PrimProc",
            "pid": 17813
          },
          {
            "name": "ExeMgr",
            "pid": 17854
          },
          {
            "name": "WriteEngine",
            "pid": 17877
          }
        ]
      },
      "num_nodes": 3
    }
    $ sudo mariadb
    CREATE DATABASE IF NOT EXISTS test;
    
    CREATE TABLE IF NOT EXISTS test.contacts (
       first_name VARCHAR(50),
       last_name VARCHAR(50),
       email VARCHAR(100)
    ) ENGINE = ColumnStore;
    $ sudo mariadb
    SHOW CREATE TABLE test.contacts\G
    $ sudo mariadb
    INSERT INTO test.contacts (first_name, last_name, email)
       VALUES
       ("Kai", "Devi", "kai.devi@example.com"),
       ("Lee", "Wang", "lee.wang@example.com");
    $ sudo mariadb
    SELECT * FROM test.contacts;
    
    +------------+-----------+----------------------+
    | first_name | last_name | email                |
    +------------+-----------+----------------------+
    | Kai        | Devi      | kai.devi@example.com |
    | Lee        | Wang      | lee.wang@example.com |
    +------------+-----------+----------------------+
    sudo systemctl stop mariadb
    sudo systemctl stop mariadb-columnstore
    sudo systemctl stop mariadb-columnstore-cmapi
    curl -k -s -X DELETE https://mcs1:8640/cmapi/0.4.0/cluster/node \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       --data '{"timeout":20, "node": "192.0.2.3"}' \
       | jq .
    curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/remove-node \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       --data '{"timeout":20, "node": "192.0.2.3"}' \
       | jq .
    maxctrl unlink service \
       mcs_service \
       mcs3
    maxctrl show services
    maxctrl unlink monitor \
       mcs_monitor \
       mcs3
    maxctrl show monitors
    maxctrl destroy server \
       mcs3
    maxctrl show servers
    {
      "timestamp": "2020-10-28 00:39:14.672142",
      "node_id": "192.0.2.3"
    }
    curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       | jq .
    {
      "timestamp": "2020-12-15 00:40:34.353574",
      "192.0.2.1": {
        "timestamp": "2020-12-15 00:40:34.362374",
        "uptime": 11467,
        "dbrm_mode": "master",
        "cluster_mode": "readwrite",
        "dbroots": [
          "1"
        ],
        "module_id": 1,
        "services": [
          {
            "name": "workernode",
            "pid": 19202
          },
          {
            "name": "controllernode",
            "pid": 19232
          },
          {
            "name": "PrimProc",
            "pid": 19254
          },
          {
            "name": "ExeMgr",
            "pid": 19292
          },
          {
            "name": "WriteEngine",
            "pid": 19316
          },
          {
            "name": "DMLProc",
            "pid": 19332
          },
          {
            "name": "DDLProc",
            "pid": 19366
          }
        ]
      },
      "192.0.2.2": {
        "timestamp": "2020-12-15 00:40:34.428554",
        "uptime": 11437,
        "dbrm_mode": "slave",
        "cluster_mode": "readonly",
        "dbroots": [
          "2"
        ],
        "module_id": 2,
        "services": [
          {
            "name": "workernode",
            "pid": 17789
          },
          {
            "name": "PrimProc",
            "pid": 17813
          },
          {
            "name": "ExeMgr",
            "pid": 17854
          },
          {
            "name": "WriteEngine",
            "pid": 17877
          }
        ]
      },
      "num_nodes": 2
    }
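After the node is removed, the status output above reports `"num_nodes": 2`. That check can be scripted; a minimal sketch, where the inline JSON stands in for saved curl output and sed is used so the check does not require jq:

```shell
# Minimal sketch: confirm the node count reported by the CMAPI status output.
# EXPECTED and STATUS_JSON are illustrative stand-ins for real values.
EXPECTED=2
STATUS_JSON='{"timestamp": "2020-12-15 00:40:34", "num_nodes": 2}'

# Pull the num_nodes value out of the JSON with POSIX sed.
ACTUAL=$(printf '%s\n' "$STATUS_JSON" \
    | sed -n 's/.*"num_nodes"[[:space:]]*:[[:space:]]*\([0-9][0-9]*\).*/\1/p')

if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "cluster reports $ACTUAL nodes, as expected"
else
    echo "unexpected node count: ${ACTUAL:-none} (wanted $EXPECTED)" >&2
fi
```

With jq available, the same value is simply `curl ... | jq .num_nodes`.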
    [mariadb]
    log_error                              = mariadbd.err
    character_set_server                   = utf8
    collation_server                       = utf8_general_ci
    [ObjectStorage]
    …
    service = S3
    …
    [S3]
    bucket                = your_columnstore_bucket_name
    endpoint              = your_s3_endpoint
    aws_access_key_id     = your_s3_access_key_id
    aws_secret_access_key = your_s3_secret_key
    # iam_role_name       = your_iam_role
    # sts_region          = your_sts_region
    # sts_endpoint        = your_sts_endpoint
    # ec2_iam_mode        = enabled
    
    [Cache]
    cache_size = your_local_cache_size
    path       = your_local_cache_path
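Before starting the services, it is easy to miss a placeholder left in storagemanager.cnf. A minimal sketch that flags [S3] values still carrying the `your_` placeholder prefix from the template above (the helper name and inline sample are illustrative):

```shell
# Minimal sketch: report [S3] keys whose values still look like the
# "your_..." placeholders from the template configuration.
check_placeholders() {
    # $1: configuration file contents
    printf '%s\n' "$1" | awk -F'=' '
        /^[[:space:]]*#/ { next }                      # skip commented keys
        $2 ~ /^[[:space:]]*your_/ {                    # value still a placeholder
            gsub(/[[:space:]]/, "", $1); print $1
        }'
}

SAMPLE='[S3]
bucket                = your_columnstore_bucket_name
endpoint              = s3.example.com
aws_secret_access_key = EXAMPLEKEY'

check_placeholders "$SAMPLE"
```

In the sample, only `bucket` is reported, because the other keys have been given real-looking values.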
    $ sudo systemctl start mariadb
    
    $ sudo systemctl enable mariadb
    $ sudo systemctl start mariadb-columnstore
    
    $ sudo systemctl enable mariadb-columnstore
    CREATE USER 'util_user'@'127.0.0.1'
    IDENTIFIED BY 'util_user_passwd';
    GRANT SELECT, PROCESS ON *.*
    TO 'util_user'@'127.0.0.1';
    $ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
    
    $ sudo mcsSetConfig CrossEngineSupport Port 3306
    
    $ sudo mcsSetConfig CrossEngineSupport User util_user
    $ sudo mcsSetConfig CrossEngineSupport Password util_user_passwd
    $ sudo yum install policycoreutils policycoreutils-python
    $ sudo yum install policycoreutils python3-policycoreutils policycoreutils-python-utils
    $ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
    $ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
    
    Nothing to do
    $ sudo semodule -i mariadb_local.pp
    # This file controls the state of SELinux on the system.
    # SELINUX= can take one of these three values:
    #     enforcing - SELinux security policy is enforced.
    #     permissive - SELinux prints warnings instead of enforcing.
    #     disabled - No SELinux policy is loaded.
    SELINUX=enforcing
    # SELINUXTYPE= can take one of three values:
    #     targeted - Targeted processes are protected,
    #     minimum - Modification of targeted policy. Only selected processes are protected.
    #     mls - Multi Level Security protection.
    SELINUXTYPE=targeted
    $ sudo setenforce enforcing
    $ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       | jq .
    {
      "timestamp": "2020-12-15 00:40:34.353574",
      "192.0.2.1": {
        "timestamp": "2020-12-15 00:40:34.362374",
        "uptime": 11467,
        "dbrm_mode": "master",
        "cluster_mode": "readwrite",
        "dbroots": [
          "1"
        ],
        "module_id": 1,
        "services": [
          {
            "name": "workernode",
            "pid": 19202
          },
          {
            "name": "controllernode",
            "pid": 19232
          },
          {
            "name": "PrimProc",
            "pid": 19254
          },
          {
            "name": "ExeMgr",
            "pid": 19292
          },
          {
            "name": "WriteEngine",
            "pid": 19316
          },
          {
            "name": "DMLProc",
            "pid": 19332
          },
          {
            "name": "DDLProc",
            "pid": 19366
          }
        ]
      }
    $ mariadb --host=192.0.2.1 \
       --user=root \
       --password
    FLUSH TABLES WITH READ LOCK;
    $ sudo mkdir -p /backups/columnstore/202101291600/
    $ sudo rsync -av /var/lib/columnstore/storagemanager /backups/columnstore/202101291600/
    $ sudo rsync -av /var/lib/columnstore/data1 /backups/columnstore/202101291600/
    $ sudo rsync -av /var/lib/columnstore/data2 /backups/columnstore/202101291600/
    $ sudo rsync -av /var/lib/columnstore/data3 /backups/columnstore/202101291600/
    $ sudo mkdir -p /backups/mariadb/202101291600/
    $ sudo mariadb-backup --backup \
       --target-dir=/backups/mariadb/202101291600/ \
       --user=mariadb-backup \
       --password=mbu_passwd
    $ sudo mariadb-backup --prepare \
       --target-dir=/backups/mariadb/202101291600/
    UNLOCK TABLES;
    $ sudo systemctl stop mariadb-columnstore-cmapi
    $ sudo systemctl stop mariadb-columnstore
    $ sudo systemctl stop mariadb
    $ sudo rsync -av /backups/columnstore/202101291600/storagemanager/ /var/lib/columnstore/storagemanager/
    $ sudo chown -R mysql:mysql /var/lib/columnstore/storagemanager
    $ sudo rsync -av /backups/columnstore/202101291600/data1/ /var/lib/columnstore/data1/
    $ sudo rsync -av /backups/columnstore/202101291600/data2/ /var/lib/columnstore/data2/
    $ sudo rsync -av /backups/columnstore/202101291600/data3/ /var/lib/columnstore/data3/
    $ sudo chown -R mysql:mysql /var/lib/columnstore/data1
    $ sudo chown -R mysql:mysql /var/lib/columnstore/data2
    $ sudo chown -R mysql:mysql /var/lib/columnstore/data3
    $ sudo mariadb-backup --copy-back \
       --target-dir=/backups/mariadb/202101291600/
    $ sudo chown -R mysql:mysql /var/lib/mysql
    $ sudo systemctl start mariadb
    $ sudo systemctl start mariadb-columnstore-cmapi
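The transcripts above name each backup directory with a timestamp (202101291600). Deriving that name once and reusing it for both the ColumnStore files and the mariadb-backup target keeps the two in sync; a minimal sketch (the /backups paths mirror the examples above and are illustrative):

```shell
# Minimal sketch: one shared timestamp for both backup directories, in the
# same YYYYMMDDHHMM form used in the examples above.
STAMP=$(date +%Y%m%d%H%M)
CS_DIR="/backups/columnstore/${STAMP}/"
DB_DIR="/backups/mariadb/${STAMP}/"

echo "ColumnStore backup target: $CS_DIR"
echo "MariaDB backup target:     $DB_DIR"
```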
  • password='MAXSCALE_USER_PASSWORD' sets the password parameter to the password used by the database user account that MaxScale uses to monitor the ColumnStore nodes.

  • replication_user=REPLICATION_USER sets the replication_user parameter to the database user account that MaxScale uses to setup replication.

  • replication_password='REPLICATION_USER_PASSWORD' sets the replication_password parameter to the password used by the database user account that MaxScale uses to setup replication.

  • --servers sets the servers parameter to the set of nodes that MaxScale should monitor. All non-option arguments after --servers are interpreted as server names.

  • Other Module Parameters supported by mariadbmon in MaxScale 22.08 can also be specified.

  • password=MAXSCALE_USER_PASSWORD sets the password parameter to the password used by the database user account that MaxScale uses to connect to the ColumnStore nodes.

  • router_options=slave sets the router_options parameter to slave, so that MaxScale only routes connections to the replica nodes.

  • --servers sets the servers parameter to the set of nodes to which MaxScale should route connections. All non-option arguments after --servers are interpreted as server names.

  • Other Module Parameters supported by readconnroute in MaxScale 22.08 can also be specified.

  • 3308 is the TCP port.
  • protocol=MariaDBClient sets the protocol parameter.

  • Other Module Parameters supported by listeners in MaxScale 22.08 can also be specified.

  • password=MAXSCALE_USER_PASSWORD sets the password parameter to the password used by the database user account that MaxScale uses to connect to the ColumnStore nodes.

  • --servers sets the servers parameter to the set of nodes to which MaxScale should route queries. All non-option arguments after --servers are interpreted as server names.

  • Other Module Parameters supported by readwritesplit in MaxScale 22.08 can also be specified.

  • protocol=MariaDBClient sets the protocol parameter.

  • Other Module Parameters supported by listeners in MaxScale 22.08 can also be specified.

  • Read Connection (readconnroute)

    Configure Read Connection Router

    Connection-based load balancing

    • Routes connections to Enterprise ColumnStore nodes designated as replica servers for a read-only pool

    • Routes connections to an Enterprise ColumnStore node designated as the primary server for a read-write pool.

    Read/Write Split (readwritesplit)

    Configure Read/Write Split

    Query-based load balancing

    • Routes write queries to an Enterprise ColumnStore node designated as the primary server

    • Routes read queries to Enterprise ColumnStore nodes designated as replica servers

    • Automatically reconnects after node failures

    • Automatically replays transactions after node failures

    • Optionally enforces causal reads

    [maxscale]
    threads          = auto
    admin_host       = 0.0.0.0
    admin_secure_gui = false
    $ sudo systemctl restart maxscale
    $ maxctrl create server mcs1 192.0.2.101
    $ maxctrl create server mcs2 192.0.2.102
    $ maxctrl create server mcs3 192.0.2.103
    $ maxctrl create monitor columnstore_monitor mariadbmon \
         user=mxs \
         password='MAXSCALE_USER_PASSWORD' \
         replication_user=repl \
         replication_password='REPLICATION_USER_PASSWORD' \
         --servers mcs1 mcs2 mcs3
    $ maxctrl create service connection_router_service readconnroute \
         user=mxs \
         password='MAXSCALE_USER_PASSWORD' \
         router_options=slave \
         --servers mcs1 mcs2 mcs3
    $ maxctrl create listener connection_router_service connection_router_listener 3308 \
         protocol=MariaDBClient
    $ maxctrl create service query_router_service readwritesplit  \
         user=mxs \
         password='MAXSCALE_USER_PASSWORD' \
         --servers mcs1 mcs2 mcs3
    $ maxctrl create listener query_router_service query_router_listener 3307 \
         protocol=MariaDBClient
    $ maxctrl start services
  • Rocky Linux 8 (x86_64, ARM64)

  • Rocky Linux 9 (x86_64, ARM64)

  • Ubuntu 20.04 LTS (x86_64, ARM64)

  • Ubuntu 22.04 LTS (x86_64, ARM64)

  • Ubuntu 24.04 LTS (x86_64, ARM64)

  • Step 1

    Prepare System for Enterprise ColumnStore

    Step 2

    Install Enterprise ColumnStore

    Step 3

    Start and Configure Enterprise ColumnStore

    Step 4

    Test Enterprise ColumnStore

    Step 5

    Bulk Import Data to Enterprise ColumnStore

    MariaDB Enterprise Server

    Modern SQL RDBMS with high availability, pluggable storage engines, hot online backups, and audit logging.

    MariaDB Enterprise ColumnStore

    • Columnar Storage Engine

    • Optimized for Online Analytical Processing (OLAP) workloads

    Enterprise ColumnStore node

    4+ cores

    16+ GB

    Enterprise ColumnStore node

    64+ cores

    128+ GB

    Configuration File

    Configuration files (such as /etc/my.cnf) can be used to set system variables and options. The server must be restarted to apply changes made to configuration files.

    Command-line

    The server can be started with command-line options that set system variables and options.

    SQL

    Users can set system variables that support dynamic changes on-the-fly using the SET statement.

    • CentOS

    • Red Hat Enterprise Linux (RHEL)

    /etc/my.cnf.d/z-custom-mariadb.cnf

    • Debian

    • Ubuntu

    /etc/mysql/mariadb.conf.d/z-custom-mariadb.cnf

    Start

    sudo systemctl start mariadb

    Stop

    sudo systemctl stop mariadb

    Restart

    sudo systemctl restart mariadb

    Enable during startup

    sudo systemctl enable mariadb

    Disable during startup

    sudo systemctl disable mariadb

    Status

    sudo systemctl status mariadb

    The bucket option must be set to the name of the bucket that you created from your snapshot in the previous step.
  • To use an IAM role, you must also uncomment and set iam_role_name, sts_region, and sts_endpoint.

    (Figure: Enterprise ColumnStore Backup with S3 flowchart)

    The instructions were tested against ColumnStore 23.10.

    This page provides an overview of the topology, requirements, and deployment procedures.

    Please read and understand this procedure before executing.

    Procedure Steps

    Step
    Description

    Step 1

    Step 2

    Step 3

    Step 4

    Step 5

    Support

    Customers can obtain support by submitting a support case.

    Components

    The following components are deployed during this procedure:

    Component
    Function

    MariaDB Enterprise Server

    Modern SQL RDBMS with high availability, pluggable storage engines, hot online backups, and audit logging.

    MariaDB Enterprise Server Components

    Component
    Description

    MariaDB Enterprise ColumnStore

    • Columnar Storage Engine

    • Optimized for Online Analytical Processing (OLAP) workloads

    • S3-compatible object storage

    Topology

    The Single-Node Enterprise ColumnStore topology provides support for Online Analytical Processing (OLAP) workloads to MariaDB Enterprise Server.

    The Enterprise ColumnStore node:

    • Receives queries from the application

    • Executes queries

    • Uses S3-compatible object storage for data

    High Availability

    Single-Node Enterprise ColumnStore does not provide high availability (HA) for Online Analytical Processing (OLAP). If you would like to deploy Enterprise ColumnStore with high availability, see Enterprise ColumnStore with Object storage.

    Requirements

    These requirements are for the Single-Node Enterprise ColumnStore, when deployed with MariaDB Enterprise Server and MariaDB Enterprise ColumnStore.

    Operating System

    • Debian 11 (x86_64, ARM64)

    • Debian 12 (x86_64, ARM64)

    • Red Hat Enterprise Linux 8 (x86_64, ARM64)

    • Red Hat Enterprise Linux 9 (x86_64, PPC64LE, ARM64)

    • Red Hat UBI 8 (x86_64, ARM64)

    • Rocky Linux 8 (x86_64, ARM64)

    • Rocky Linux 9 (x86_64, ARM64)

    • Ubuntu 20.04 LTS (x86_64, ARM64)

    • Ubuntu 22.04 LTS (x86_64, ARM64)

    • Ubuntu 24.04 LTS (x86_64, ARM64)

    Minimum Hardware Requirements

    MariaDB Enterprise ColumnStore's minimum hardware requirements are not intended for production environments, but the minimum hardware requirements can be appropriate for development and test environments. For production environments, see the recommended hardware requirements instead.

    The minimum hardware requirements are:

    Component
    CPU
    Memory

    Enterprise ColumnStore node

    4+ cores

    16+ GB

    MariaDB Enterprise ColumnStore will refuse to start if the system has less than 3 GB of memory.
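A quick pre-flight check against that 3 GB floor can catch the problem before startup; a minimal sketch reading MemTotal from /proc/meminfo (the function name and messages are illustrative):

```shell
# Minimal sketch: compare total memory against ColumnStore's 3 GB minimum.
check_memory_kb() {
    # $1: total memory in kB; 3 GB = 3 * 1024 * 1024 kB
    if [ "$1" -lt $((3 * 1024 * 1024)) ]; then
        echo "TOO LOW"
    else
        echo "OK"
    fi
}

# Evaluate this host's MemTotal (reported in kB by /proc/meminfo).
MEM_KB=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "MemTotal: ${MEM_KB} kB -> $(check_memory_kb "$MEM_KB")"
```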

    If Enterprise ColumnStore is started on a system with less memory, the following error message will be written to the ColumnStore system log called crit.log:

    And the following error message will be raised to the client:

    Recommended Hardware Requirements

    MariaDB Enterprise ColumnStore's recommended hardware requirements are intended for production analytics.

    The recommended hardware requirements are:

    Component
    CPU
    Memory

    Enterprise ColumnStore node

    64+ cores

    128+ GB

    Storage Requirements

    Single-node Enterprise ColumnStore with Object Storage requires the following storage type:

    Storage Type
    Description

    S3-compatible object storage

    Single-node Enterprise ColumnStore with Object Storage uses S3-compatible object storage to store data.

    S3-Compatible Object Storage Requirements

    Single-node Enterprise ColumnStore with Object Storage uses S3-compatible object storage to store data.

    Many S3-compatible object storage services exist. MariaDB Corporation cannot make guarantees about all S3-compatible object storage services, because different services provide different functionality.

    For the preferred S3-compatible object storage providers that provide cloud and hardware solutions, see the following sections:

    • Cloud

    • Hardware

    The use of non-cloud and non-hardware providers is at your own risk.

    If you have any questions about using specific S3-compatible object storage with MariaDB Enterprise ColumnStore, contact us.

    Preferred Object Storage Providers: Cloud

    • Amazon Web Services (AWS) S3

    • Google Cloud Storage

    • Azure Storage

    • Alibaba Cloud Object Storage Service

    Preferred Object Storage Providers: Hardware

    • Cloudian HyperStore

    • Dell EMC

    • Seagate Lyve Rack

    • Quantum ActiveScale

    • IBM Cloud Object Storage

    Quick Reference

    MariaDB Enterprise Server Configuration Management

    Method
    Description

    Configuration File

    Configuration files (such as /etc/my.cnf) can be used to set system variables and options. The server must be restarted to apply changes made to configuration files.

    Command-line

    The server can be started with command-line options that set system variables and options.

    SQL

    Users can set system variables that support dynamic changes on-the-fly using the SET statement.

    MariaDB Enterprise Server packages are configured to read configuration files from different paths, depending on the operating system. Making custom changes to Enterprise Server default configuration files is not recommended because custom changes may be overwritten by other default configuration files that are loaded later.

    To ensure that your custom changes will be read last, create a custom configuration file with the z- prefix in one of the include directories.
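For example, on a RHEL-family system a custom file might look like this (the settings shown are illustrative, mirroring the configuration used elsewhere in this guide):

```ini
# /etc/my.cnf.d/z-custom-mariadb.cnf
[mariadb]
log_error            = mariadbd.err
character_set_server = utf8
collation_server     = utf8_general_ci
```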

    Distribution
    Example Configuration File Path
    • CentOS

    • Red Hat Enterprise Linux (RHEL)

    /etc/my.cnf.d/z-custom-mariadb.cnf

    • Debian

    • Ubuntu

    /etc/mysql/mariadb.conf.d/z-custom-mariadb.cnf

    MariaDB Enterprise Server Service Management

    The systemctl command is used to start and stop the MariaDB Enterprise Server service.

    Operation
    Command

    Start

    sudo systemctl start mariadb

    Stop

    sudo systemctl stop mariadb

    Restart

    sudo systemctl restart mariadb

    Enable during startup

    sudo systemctl enable mariadb

    Disable during startup

    sudo systemctl disable mariadb

    Status

    sudo systemctl status mariadb

    Next Step

    Navigation in the Single-Node Enterprise ColumnStore topology with Object storage deployment procedure:

    Next: Step 1: Install MariaDB Enterprise ColumnStore.

    Copyright © 2025 MariaDB. All rights reserved.

    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Replace the Default Configuration File

    MariaDB MaxScale installations include a configuration file with some example objects. This configuration file should be replaced.

    On the MaxScale node, replace the default /etc/maxscale.cnf with the following configuration:
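A minimal replacement configuration, matching the global parameters used by the example commands in this procedure:

```ini
# /etc/maxscale.cnf
[maxscale]
threads          = auto
admin_host       = 0.0.0.0
admin_secure_gui = false
```

Here threads = auto sizes the worker thread pool to the CPU count, and admin_host = 0.0.0.0 exposes the REST API on all interfaces, so restrict it appropriately in production.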

    For additional information, see "Global Parameters".

    Restart MaxScale

    On the MaxScale node, restart the MaxScale service to ensure that MaxScale picks up the new configuration:

    For additional information, see "Start and Stop Services".

    Configure Server Objects

    On the MaxScale node, use maxctrl create to create a server object for each Enterprise ColumnStore node:

    Configure MariaDB Monitor

    MaxScale uses monitors to retrieve additional information from the servers. This information is used by other services in filtering and routing connections based on the current state of the node. For MariaDB Enterprise ColumnStore, use the MariaDB Monitor (mariadbmon).

    On the MaxScale node, use maxctrl create monitor to create a MariaDB Monitor:

    In this example:

    • columnstore_monitor is an arbitrary name that is used to identify the new monitor.

    • mariadbmon is the name of the module that implements the MariaDB Monitor.

    • user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to monitor the ColumnStore nodes.

    • password='MAXSCALE_USER_PASSWORD' sets the password parameter to the password used by the database user account that MaxScale uses to monitor the ColumnStore nodes.

    • replication_user=REPLICATION_USER sets the replication_user parameter to the database user account that MaxScale uses to setup replication.

    • replication_password='REPLICATION_USER_PASSWORD' sets the replication_password parameter to the password used by the database user account that MaxScale uses to setup replication.

    • --servers sets the servers parameter to the set of nodes that MaxScale should monitor. All non-option arguments after --servers are interpreted as server names.

    • Other Module Parameters supported by mariadbmon in MaxScale 22.08 can also be specified.

    Choose a MaxScale Router

    Routers control how MaxScale balances the load between Enterprise ColumnStore nodes. Each router uses a different approach to routing queries. Consider the specific use case of your application and database load and select the router that best suits your needs.

    Router
    Configuration Procedure
    Description

    Read Connection (readconnroute)

    Configure Read Connection Router

    Connection-based load balancing:

    • Routes connections to Enterprise ColumnStore nodes designated as replica servers for a read-only pool

    • Routes connections to an Enterprise ColumnStore node designated as the primary server for a read-write pool

    Read/Write Split (readwritesplit)

    Configure Read/Write Split

    Query-based load balancing:

    • Routes write queries to an Enterprise ColumnStore node designated as the primary server

    • Routes read queries to Enterprise ColumnStore nodes designated as replica servers

    • Automatically reconnects after node failures

    • Automatically replays transactions after node failures

    • Optionally enforces causal reads

    Configure Read Connection Router

    Use the MaxScale Read Connection Router (readconnroute) to route connections to replica servers for a read-only pool.

    On the MaxScale node, use maxctrl create service to create a router:

    In this example:

    • connection_router_service is an arbitrary name that is used to identify the new service.

    • readconnroute is the name of the module that implements the Read Connection Router.

    • user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to connect to the ColumnStore nodes.

    • password=MAXSCALE_USER_PASSWORD sets the password parameter to the password used by the database user account that MaxScale uses to connect to the ColumnStore nodes.

    • router_options=slave sets the router_options parameter to slave, so that MaxScale only routes connections to the replica nodes.

    • --servers sets the servers parameter to the set of nodes to which MaxScale should route connections. All non-option arguments after --servers are interpreted as server names.

    • Other Module Parameters supported by readconnroute in MaxScale 22.08 can also be specified.

    Configure Listener for the Read Connection Router

    These instructions reference TCP port 3308. You can use a different TCP port. The TCP port used must not be bound by any other listener.
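A port conflict can be ruled out in advance by checking current listeners; a minimal sketch that greps `ss -ltn`-style output for a bound port (the helper name and inline sample are illustrative):

```shell
# Minimal sketch: test whether a TCP port appears in `ss -ltn`-style output.
port_in_use() {
    # $1: port number, $2: listener listing (e.g. "$(ss -ltn)")
    printf '%s\n' "$2" | grep -Eq "[:.]$1([[:space:]]|\$)"
}

SAMPLE='State   Recv-Q  Send-Q  Local Address:Port   Peer Address:Port
LISTEN  0       128     0.0.0.0:3306         0.0.0.0:*
LISTEN  0       128     127.0.0.1:8989       0.0.0.0:*'

if port_in_use 3308 "$SAMPLE"; then
    echo "port 3308 is already bound; pick another port"
else
    echo "port 3308 is free"
fi
```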

    On the MaxScale node, use the maxctrl create listener command to configure MaxScale to use a listener for the Read Connection Router (readconnroute):

    In this example:

    • connection_router_service is the name of the readconnroute service that was previously created.

    • connection_router_listener is an arbitrary name that is used to identify the new listener.

    • 3308 is the TCP port.

    • protocol=MariaDBClient sets the protocol parameter.

    • Other Module Parameters supported by listeners in MaxScale 22.08 can also be specified.

    Configure Read/Write Split Router for Queries

    The MaxScale Read/Write Split Router (readwritesplit) performs query-based load balancing. The router routes write queries to the primary and read queries to the replicas.

    On the MaxScale node, use the maxctrl create service command to configure MaxScale to use the Read/Write Split Router (readwritesplit):

    In this example:

    • query_router_service is an arbitrary name that is used to identify the new service.

    • readwritesplit is the name of the module that implements the Read/Write Split Router.

    • user=MAXSCALE_USER sets the user parameter to the database user account that MaxScale uses to connect to the ColumnStore nodes.

    • password=MAXSCALE_USER_PASSWORD sets the password parameter to the password used by the database user account that MaxScale uses to connect to the ColumnStore nodes.

    • --servers sets the servers parameter to the set of nodes to which MaxScale should route queries. All non-option arguments after --servers are interpreted as server names.

    • Other Module Parameters supported by readwritesplit in MaxScale 22.08 can also be specified.

    Configure a Listener for the Read/Write Split Router

    These instructions reference TCP port 3307. You can use a different TCP port. The TCP port used must not be bound by any other listener.

    On the MaxScale node, use the maxctrl create listenerarrow-up-right command to configure MaxScale to use a listener for the Read/Write Split Router (readwritesplit)arrow-up-right:

    In this example:

    • query_router_service is the name of the readwritesplit service that was previously created.

    • query_router_listener is an arbitrary name that is used to identify the new listener.

    • 3307 is the TCP port.

    • protocol=MariaDBClient sets the protocol parameter.

    • Other Module Parameters supported by listeners in MaxScale 22.08 can also be specified.

    hashtag
    Start Services

    To start the services and monitors, on the MaxScale node use maxctrl start servicesarrow-up-right:

    hashtag
    Next Step

    Navigation in the procedure "Deploy ColumnStore Object Storage Topology":

    This page was step 7 of 9.

    Next: Step 8: Test MariaDB MaxScale

    This page is: Copyright © 2025 MariaDB. All rights reserved.


    Upgrading MariaDB Enterprise ColumnStore (Alpha)

    circle-info

    This page documents an Alpha version of the upgrade procedure using the mcs install_es command. Behavior may change. Validate in a non‑production environment first.

    This guide explains how to upgrade MariaDB Enterprise Server (ES) and MariaDB Enterprise ColumnStore across all nodes in a cluster using the unified mcs command-line tool, which you only need to run once.


    Upgrade Multi-Node MariaDB Enterprise ColumnStore from 6 to 23.10

    These instructions detail the upgrade from MariaDB Enterprise ColumnStore 6 to MariaDB Enterprise ColumnStore 23.10 in a Multi-Node topology on a range of supported operating systems.

    hashtag
    Set Replicas to Maintenance Mode

    This action is performed for each replica server on the MaxScale node.

    Prior to upgrading, the replica servers must be set to maintenance mode in MaxScale.

    Apr 30 21:54:35 a1ebc96a2519 PrimProc[1004]: 35.668435 |0|0|0| C 28 CAL0000: Error total memory available is less than 3GB.
    ERROR 1815 (HY000): Internal error: System is not ready yet. Please try again.
    $ curl -k -s https://mcs1:8640/cmapi/0.4.0/mcs cluster status \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       | jq .
    {
      "timestamp": "2020-12-15 00:40:34.353574",
      "192.0.2.1": {
        "timestamp": "2020-12-15 00:40:34.362374",
        "uptime": 11467,
        "dbrm_mode": "master",
        "cluster_mode": "readwrite",
        "dbroots": [
          "1"
        ],
        "module_id": 1,
        "services": [
          {
            "name": "workernode",
            "pid": 19202
          },
          {
            "name": "controllernode",
            "pid": 19232
          },
          {
            "name": "PrimProc",
            "pid": 19254
          },
          {
            "name": "ExeMgr",
            "pid": 19292
          },
          {
            "name": "WriteEngine",
            "pid": 19316
          },
          {
            "name": "DMLProc",
            "pid": 19332
          },
          {
            "name": "DDLProc",
            "pid": 19366
          }
        ]
      }
    }
    $ mariadb --host=192.0.2.1 \
       --user=root \
       --password
    FLUSH TABLES WITH READ LOCK;
    $ sudo mkdir -p /backups/columnstore/202101291600/
    $ sudo rsync -av /var/lib/columnstore/storagemanager /backups/columnstore/202101291600/
    $ sudo mkdir -p /backups/mariadb/202101291600/
    $ sudo mariadb-backup --backup \
       --target-dir=/backups/mariadb/202101291600/ \
       --user=mariadb-backup \
       --password=mbu_passwd
    $ sudo mariadb-backup --prepare \
       --target-dir=/backups/mariadb/202101291600/
    UNLOCK TABLES;
    $ sudo systemctl stop mariadb-columnstore-cmapi
    $ sudo systemctl stop mariadb-columnstore
    $ sudo systemctl stop mariadb
    $ sudo rsync -av /backups/columnstore/202101291600/storagemanager/ /var/lib/columnstore/storagemanager/
    $ sudo chown -R mysql:mysql /var/lib/columnstore/storagemanager
    $ sudo mariadb-backup --copy-back \
       --target-dir=/backups/mariadb/202101291600/
    $ sudo chown -R mysql:mysql /var/lib/mysql
    [ObjectStorage]
    …
    service = S3
    …
    [S3]
    bucket = your_columnstore_bucket_name
    endpoint = your_s3_endpoint
    aws_access_key_id = your_s3_access_key_id
    aws_secret_access_key = your_s3_secret_key
    # iam_role_name = your_iam_role
    # sts_region = your_sts_region
    # sts_endpoint = your_sts_endpoint
    
    [Cache]
    cache_size = your_local_cache_size
    path = your_local_cache_path
    $ sudo systemctl start mariadb
    $ sudo systemctl start mariadb-columnstore-cmapi
    Apr 30 21:54:35 a1ebc96a2519 PrimProc[1004]: 35.668435 |0|0|0| C 28 CAL0000: Error total memory available is less than 3GB.
    ERROR 1815 (HY000): Internal error: System is not ready yet. Please try again.
    [maxscale]
    threads          = auto
    admin_host       = 0.0.0.0
    admin_secure_gui = false
    $ sudo systemctl restart maxscale
    $ maxctrl create server mcs1 192.0.2.101
    $ maxctrl create server mcs2 192.0.2.102
    $ maxctrl create server mcs3 192.0.2.103
    $ maxctrl create monitor columnstore_monitor mariadbmon \
         user=mxs \
         password='MAXSCALE_USER_PASSWORD' \
         replication_user=repl \
         replication_password='REPLICATION_USER_PASSWORD' \
         --servers mcs1 mcs2 mcs3
    $ maxctrl create service connection_router_service readconnroute \
         user=mxs \
         password='MAXSCALE_USER_PASSWORD' \
         router_options=slave \
         --servers mcs1 mcs2 mcs3
    $ maxctrl create listener connection_router_service connection_router_listener 3308 \
         protocol=MariaDBClient
    $ maxctrl create service query_router_service readwritesplit  \
         user=mxs \
         password='MAXSCALE_USER_PASSWORD' \
         --servers mcs1 mcs2 mcs3
    $ maxctrl create listener query_router_service query_router_listener 3307 \
         protocol=MariaDBClient
    $ maxctrl start services
    select calShowPartitions('orders','orderdate');
    +-----------------------------------------+
    | calShowPartitions('orders','orderdate') |
    +-----------------------------------------+
    | Part# Min        Max        Status
      0.0.1 1992-01-01 1998-08-02 Enabled
      0.1.2 1998-08-03 2004-05-15 Enabled
      0.2.3 2004-05-16 2010-07-24 Enabled |
    +-----------------------------------------+
    
    1 row in set (0.05 sec)
    select calEnablePartitions('orders', '0.0.1');
    +----------------------------------------+
    | calEnablePartitions('orders', '0.0.1') |
    +----------------------------------------+
    | Partitions are enabled successfully.   |
    +----------------------------------------+
    1 row in set (0.28 sec)
    select calShowPartitions('orders','orderdate');
    +-----------------------------------------+
    | calShowPartitions('orders','orderdate') |
    +-----------------------------------------+
    | Part# Min        Max        Status
      0.0.1 1992-01-01 1998-08-02 Enabled
      0.1.2 1998-08-03 2004-05-15 Enabled
      0.2.3 2004-05-16 2010-07-24 Enabled |
    +-----------------------------------------+
    1 row in set (0.05 sec)
    select calDisablePartitions('orders','0.0.1');
    +----------------------------------------+
    | calDisablePartitions('orders','0.0.1') |
    +----------------------------------------+
    | Partitions are disabled successfully.  |
    +----------------------------------------+
    1 row in set (0.28 sec)
    select calShowPartitions('orders','orderdate');
    +-----------------------------------------+
    | calShowPartitions('orders','orderdate') |
    +-----------------------------------------+
    | Part# Min        Max        Status
      0.0.1 1992-01-01 1998-08-02 Disabled
      0.1.2 1998-08-03 2004-05-15 Enabled
      0.2.3 2004-05-16 2010-07-24 Enabled |
    +-----------------------------------------+
    1 row in set (0.05 sec)
    select calDropPartitions('orders', '0.0.1');
    +--------------------------------------+
    | calDropPartitions('orders', '0.0.1') |
    +--------------------------------------+
    | Partitions are dropped successfully. |
    +--------------------------------------+
    1 row in set (0.28 sec)
    select calShowPartitions('orders','orderdate');
    +-----------------------------------------+
    | calShowPartitions('orders','orderdate') |
    +-----------------------------------------+
    | Part# Min        Max        Status
      0.1.2 1998-08-03 2004-05-15 Enabled
      0.2.3 2004-05-16 2010-07-24 Enabled |
    +-----------------------------------------+
    1 row in set (0.05 sec)
    select calShowPartitionsByValue('orders','orderdate', '1992-01-01', '2010-07-24');
    +----------------------------------------------------------------------------+
    | calShowPartitionsByValue('orders','orderdate', '1992-01-01', '2010-07-24') |
    +----------------------------------------------------------------------------+
    | Part# Min        Max        Status
      0.0.1 1992-01-01 1998-08-02 Enabled
      0.1.2 1998-08-03 2004-05-15 Enabled
      0.2.3 2004-05-16 2010-07-24 Enabled |
    +----------------------------------------------------------------------------+
    1 row in set (0.05 sec)
    select calEnablePartitionsByValue('orders','orderdate', '1992-01-01', '1998-08-02');
    +------------------------------------------------------------------------------+
    | calEnablePartitionsByValue('orders','orderdate', '1992-01-01', '1998-08-02') |
    +------------------------------------------------------------------------------+
    | Partitions are enabled successfully                                          |
    +------------------------------------------------------------------------------+
    1 row in set (0.28 sec)
    select calShowPartitionsByValue('orders','orderdate', '1992-01-01', '2010-07-24');
    +----------------------------------------------------------------------------+
    | calShowPartitionsByValue('orders','orderdate', '1992-01-01', '2010-07-24') |
    +----------------------------------------------------------------------------+
    | Part# Min        Max        Status
      0.0.1 1992-01-01 1998-08-02 Enabled
      0.1.2 1998-08-03 2004-05-15 Enabled
      0.2.3 2004-05-16 2010-07-24 Enabled |
    +----------------------------------------------------------------------------+
    1 row in set (0.05 sec)
    select calDisablePartitionsByValue('orders','orderdate', '1992-01-01', '1998-08-02');
    +-------------------------------------------------------------------------------+
    | calDisablePartitionsByValue('orders','orderdate', '1992-01-01', '1998-08-02') |
    +-------------------------------------------------------------------------------+
    | Partitions are disabled successfully                                          |
    +-------------------------------------------------------------------------------+
    1 row in set (0.28 sec)
    select calShowPartitionsByValue('orders','orderdate', '1992-01-01', '2010-07-24');
    +----------------------------------------------------------------------------+
    | calShowPartitionsByValue('orders','orderdate', '1992-01-01', '2010-07-24') |
    +----------------------------------------------------------------------------+
    | Part# Min        Max        Status
      0.0.1 1992-01-01 1998-08-02 Disabled
      0.1.2 1998-08-03 2004-05-15 Enabled
      0.2.3 2004-05-16 2010-07-24 Enabled |
    +----------------------------------------------------------------------------+
    1 row in set (0.05 sec)
    select calDropPartitionsByValue('orders','orderdate', '1992-01-01', '1998-08-02');
    +----------------------------------------------------------------------------+
    | calDropPartitionsByValue('orders','orderdate', '1992-01-01', '1998-08-02') |
    +----------------------------------------------------------------------------+
    | Partitions are dropped successfully.                                       |
    +----------------------------------------------------------------------------+
    1 row in set (0.28 sec)
    select calShowPartitionsByValue('orders','orderdate', '1992-01-01', '2010-07-24');
    +----------------------------------------------------------------------------+
    | calShowPartitionsByValue('orders','orderdate', '1992-01-01', '2010-07-24') |
    +----------------------------------------------------------------------------+
    | Part# Min        Max        Status
      0.1.2 1998-08-03 2004-05-15 Enabled
      0.2.3 2004-05-16 2010-07-24 Enabled |
    +----------------------------------------------------------------------------+
    1 row in set (0.05 sec)
    DELETE FROM orders WHERE orderdate <= '1998-12-31';

    The Read/Write Split (readwritesplit) router also:

    • Automatically replays transactions after node failures

    • Optionally enforces causal reads

    The mcs command must be run as root. Either become root, or prefix the mcs commands on this page with sudo.

    The mcs install_es command:

    • Validates your MariaDB Enterprise Repository access using an ES API token.

    • Stops ColumnStore and MariaDB services in a controlled sequence.

    • Installs/configures the ES repository for the target version.

    • Creates a pre‑upgrade backup of ColumnStore DBRM and config files on each node.

    • Upgrades MariaDB Enterprise Server, ColumnStore, and CMAPI.

    • Waits for CMAPI to come back online on each node and, for upgrades, automatically restarts services.

    hashtag
    Prerequisites

    • Administrative privileges on all cluster nodes (package installation and service management required).

    • A valid ES API token with access to the MariaDB Enterprise Repository.

    • Network access from the nodes to the MariaDB Enterprise Repository endpoints.

    • A maintenance window: the upgrade will stop ColumnStore and MariaDB services.

    • Recent backups:

      • At a minimum, ensure Extent Map and configuration backups exist.

      • Recommended: take a full backup with the mcs backup command.

    Related docs:

    • Extent Map backup and recovery

    • General backup and restore guidance: ColumnStore Backup and Restore

    triangle-exclamation

    Always back up your data before upgrading. While the tool performs a pre‑upgrade backup of DBRM and configs, it is not a substitute for a full database backup.

    hashtag
    Command Overview

    The command can target a specific ES version, or use the latest tested version (currently the latest 10.6 version).

    • Install the latest tested version (if you omit the --version option, mcs uses the latest tested version):

    • Install a specific version:

    • Proceed even if nodes report different installed package versions (use the majority version as baseline):

    Options summary:

    • --token TEXT: ES API Token to use for the upgrade (required) — get it from herearrow-up-right.

    • -v, --version TEXT: ES version to install; if omitted or set to latest, upgrades to the latest tested version.

      • For a different version, specify something like --version 10.6.23-19 or --version 11.4.8-5.

    • --ignore-mismatch: Continue even if cluster nodes report different package versions; uses majority versions as the baseline.
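    As a quick illustration, the version-string format shown above (major.minor.patch-build, or the keyword latest) can be sanity-checked in shell before invoking the tool. This check is our own sketch, not part of mcs:

    ```shell
    # Hypothetical pre-flight check: does the requested ES version string
    # look like <major>.<minor>.<patch>-<build>, or the keyword "latest"?
    version_ok() {
      case "$1" in
        latest) return 0 ;;
        *) echo "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+-[0-9]+$' ;;
      esac
    }

    version_ok "10.6.23-19" && v1=valid || v1=invalid
    version_ok "11.4.8-5"   && v2=valid || v2=invalid
    version_ok "10.6"       && v3=valid || v3=invalid   # too short: rejected
    echo "$v1 $v2 $v3"
    ```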

    hashtag
    Before you Begin

    • Stop or pause write workloads and heavy ingestion (e.g., cpimport, large INSERT/LOAD DATA jobs).

    • Drain or put traffic managers/proxies (for example, MaxScale) into maintenance/drain mode.

    • Ensure you have administrative/SSH and package manager access on all nodes.

    • Verify time synchronization across all nodes (NTP/Chrony) to avoid coordination issues.

    • Confirm recent backups are complete and restorable.
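    As one example of scripting these pre-flight items, the check below verifies free space on the filesystem that will hold package downloads. The 1 GiB threshold is an arbitrary placeholder, not a documented requirement:

    ```shell
    # Illustrative pre-flight check: free space on /var (where packages
    # and pre-upgrade backups typically land). Threshold is an assumption.
    threshold_mb=1024
    avail_mb=$(df -Pm /var | awk 'NR==2 {print $4}')

    if [ "$avail_mb" -ge "$threshold_mb" ]; then
      space_check=ok
    else
      space_check=low
    fi
    echo "free on /var: ${avail_mb} MiB -> ${space_check}"
    ```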

    hashtag
    What mcs install_es Does

    1

    Validate token and target version.

    • If --version=latest, the tool resolves the latest tested ES version.

    • If a specific version is requested, it is validated against the repository. Some versions may exist only for specific operating systems.

    2

    Stop services.

    • Gracefully stops the ColumnStore cluster.

    • Stops the MariaDB server.

    3

    Configure repository.

    • Installs/configures the MariaDB Enterprise Server repository for the chosen version on each node automatically.

    • Validates the installed repository on each node separately.

    4

    Pre‑upgrade backups (per node).

    Creates a backup of DBRM and key configuration files named preupgrade_dbrm_backup in the default backup directory.

    5

    Upgrade packages (per node).

    • Upgrades MariaDB Enterprise Server and ColumnStore packages.

    • Upgrades CMAPI and waits for it to become ready again on each node (up to 5 minutes).

    6

    Service handling after upgrade.

    • On upgrades: automatically restarts MariaDB and the ColumnStore cluster.

    • On downgrades: automatic restarts are intentionally skipped; manual steps are required.

    hashtag
    Post-Upgrade Checks

    • Run mcs cluster status to verify all services are up and the cluster is healthy. In case of a failure:

      • Verify CMAPI readiness on all nodes (for example, via mcs or an external monitoring tool).

    • Run a quick smoke test:

      • Create a small ColumnStore table, insert a few rows, and run a SELECT query.

      • Check for errors in server/ColumnStore logs.

    • Review /var/tmp/mcs_cli_install_es.log for the full sequence, and ensure no errors were reported.
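    Because the cluster status is returned as JSON, the health check can be scripted. The sketch below runs the filter against a captured sample (mirroring the status fields shown elsewhere on this page); in practice you would pipe the live mcs cluster status or CMAPI REST output through the same filter:

    ```shell
    # Sample CMAPI status payload, trimmed to the fields we check.
    status_json='{"192.0.2.1": {"dbrm_mode": "master", "cluster_mode": "readwrite"}}'

    # Extract the modes with python3 (jq works just as well, if installed).
    dbrm_mode=$(echo "$status_json" | python3 -c \
      'import json,sys; d=json.load(sys.stdin); print(d["192.0.2.1"]["dbrm_mode"])')
    cluster_mode=$(echo "$status_json" | python3 -c \
      'import json,sys; d=json.load(sys.stdin); print(d["192.0.2.1"]["cluster_mode"])')

    # A healthy primary reports dbrm_mode=master and cluster_mode=readwrite.
    [ "$dbrm_mode" = master ] && [ "$cluster_mode" = readwrite ] \
      && health=ok || health=degraded
    echo "cluster health: $health"
    ```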

    hashtag
    Downgrades

    • Downgrades are supported up to MariaDB 10.6.9-5 and ColumnStore 22.08.4.

    • When downgrading, the tool doesn't automatically restart services. Complete these steps manually:

      1. Start MariaDB on each node (for example, via your service manager).

      2. Start the ColumnStore cluster (for example, using the mcs cluster start command).

      3. Verify cluster health before resuming traffic.

    triangle-exclamation

    Downgrades can cause data loss or cluster inconsistency if not planned and validated. Always test and ensure backups are restorable.

    hashtag
    Verification and Logs

    After a successful upgrade, or after downgrading and a manual restart:

    • Validate that CMAPI is ready on all nodes: mcs cmapi is-ready

    • Check ColumnStore and MariaDB services are running and the cluster is healthy: mcs cluster status

    The mcs install_es command writes a detailed run log to:

    • /var/tmp/mcs_cli_install_es.log

    If CMAPI readiness times out or services do not start cleanly, review:

    • CMAPI logs: /var/log/mariadb/columnstore/cmapi_server.log

    • Service logs on each node: /var/log/mariadb/columnstore/

    • The install_es log file (/var/tmp/mcs_cli_install_es.log) for the full sequence and any errors

    hashtag
    Known Issues and Limitations (Alpha/Beta)

    • Mixed package versions across nodes.

      • If nodes report different installed versions of Server/ColumnStore/CMAPI, the command fails with a mismatch message.

      • You can force continuation with --ignore-mismatch; the tool uses the majority version per package as the baseline, but this carries risk—align versions whenever possible.

    • CMAPI readiness timeout

      • After upgrading CMAPI, the command waits up to 300 seconds per node for readiness.

      • On slow nodes or constrained environments, this timeout may be insufficient, and the command exits with a failure; verify services manually and adjust operational expectations.

    • Downgrade restarts are skipped by design.

      • After a downgrade, automatic restarts are not performed; you must start MariaDB and the ColumnStore cluster manually and validate health.

      • ColumnStore skips automatic restarts because it cannot guarantee that all the expected API endpoints exist or are backward-compatible.

    • MaxScale maintenance handling not automated.

      • Transitioning MaxScale to maintenance/normal mode during upgrades is not automated at this time; manage traffic routing and maintenance state manually if applicable.

    • Repository access and version validation.

      • Invalid tokens, network restrictions, or unsupported version strings can result in validation errors (for example, HTTP 422). Ensure the token has the correct entitlements and the requested version exists for your platform.

    • Single‑node detection.

      • If no active nodes are detected, the tool falls back to localhost only; ensure this matches your topology.

    • Downgrading to 22.08.4 (10.6.9-5) technically works, but finishes with known issues:

      • The command may report an error while waiting for CMAPI readiness, but CMAPI actually starts and works fine (check mcs status and systemctl status mariadb-columnstore-cmapi on each node).

      • If you try to run a mariadb

    • The tool currently supports a limited set of packages.

      • Only the MariaDB-server (and dependencies), MariaDB-columnstore-engine (MariaDB-plugin-columnstore), and MariaDB-columnstore-cmapi packages are removed and installed. Packages such as MariaDB-backup are not currently supported and must be upgraded or downgraded manually.
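    To illustrate the majority-baseline idea behind --ignore-mismatch, here is a sketch (our own, not the tool's actual implementation) that picks the most common version from a list of per-node versions:

    ```shell
    # Hypothetical per-node package versions (e.g. collected over SSH).
    node_versions="10.6.15-10
    10.6.15-10
    10.6.14-9"

    # Pick the version reported by the most nodes as the baseline.
    majority=$(echo "$node_versions" | sed 's/^ *//' \
      | sort | uniq -c | sort -rn | head -n 1 | awk '{print $2}')
    echo "majority baseline: $majority"
    ```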

    hashtag
    Troubleshooting

    • Re‑run with -v/--verbose to enable console debug logging.

    • Inspect /var/tmp/mcs_cli_install_es.log for the complete sequence and API responses.

    • If package repository installation fails, verify token validity and outbound access from all nodes.

    • If CMAPI does not become ready, check service logs on each node.

    • For mismatched node versions, align package versions before re‑running, or proceed with --ignore-mismatch, but only after assessing the risk.

    hashtag
    Environment and Network Requirements

    • Cluster state: ColumnStore cluster should be healthy before starting.

    • Node access: All nodes must be reachable (SSH/admin access) and responsive.

    • Disk space: Ensure sufficient free space for package downloads and pre-upgrade backups.

    • Internet access: Nodes must reach MariaDB Enterprise repositories (per your operating system).

    • CMAPI communication: Port 8640 (default) must be reachable between nodes.

    • Time sync: Keep NTP/Chrony synchronized across nodes.
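    Reachability of the CMAPI port can be verified with a plain TCP probe. The sketch below demonstrates the technique against a throwaway local listener on port 18640; on a real cluster, substitute each node's hostname and the actual CMAPI port (8640 by default):

    ```shell
    # Start a disposable local listener to probe (stands in for a CMAPI node).
    python3 -m http.server 18640 --bind 127.0.0.1 >/dev/null 2>&1 &
    listener_pid=$!
    sleep 1

    # TCP probe using bash's /dev/tcp; run the same against <node>:8640.
    if timeout 3 bash -c 'cat < /dev/null > /dev/tcp/127.0.0.1/18640'; then
      port_state=reachable
    else
      port_state=unreachable
    fi

    kill "$listener_pid" 2>/dev/null
    echo "CMAPI port probe: $port_state"
    ```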

    hashtag
    Additional Usage Example (Downgrade)

    triangle-exclamation

    Downgrades can be destructive.

    This prompts for confirmation. After downgrade, services are not restarted automatically; start MariaDB and the ColumnStore cluster manually and verify health.

    hashtag
    Recovery Procedures

    If the upgrade fails or CMAPI does not become ready on all nodes:

    1. Review the detailed log at /var/tmp/mcs_cli_install_es.log for errors.

    2. Check service status on each node:

      • systemctl status mariadb

      • systemctl status mariadb-columnstore-cmapi

    3. Verify network/ports (CMAPI 8640) and repository reachability.

    4. Manually start services if safe to do so:

      • systemctl start mariadb

      • mcs start (or mcs cluster start)

    5. If corruption is suspected, follow your backup recovery plan (for example, restore from a recent backup and/or extent map backup).

    hashtag
    Best Practices

    • Prior to upgrading:

      • Create a full backup and verify restore procedures.

      • Test the process in staging with similar topology/data.

      • Document current package versions and configs.

      • Schedule a maintenance window and inform stakeholders.

    • During upgrading:

      • Monitor the console output and /var/tmp/mcs_cli_install_es.log.

      • Avoid interrupting the process; ensure network stability.

    • After upgrading:

      • Validate services and cluster health (mcs cluster status).

      • Run basic data integrity and application smoke tests.

    hashtag
    Support and Reporting Issues

    Contact MariaDB Support if you encounter unexpected failures, data issues, or performance regressions. Provide:

    • The complete log file: /var/tmp/mcs_cli_install_es.log.

    • The mcs review logs: mcs review --logs.

    • The exact command used (with parameters, masking sensitive values).

    • Cluster topology (nodes, versions, operating system, network).

    • Source and target versions (Server, ColumnStore, CMAPI).

    • Exact error messages and timestamps.

    hashtag
    See Also

    • Command reference: mcs install_es in the command-line tool help and tool README.

    • Backups: mcs backup and Extent Map backup guidance.

    • Cluster management: mcs cluster start|stop|status.


    The replicas can be set to maintenance mode in MaxScale using MaxScale's REST APIarrow-up-right. If you are using MaxCtrlarrow-up-right, the replicas can be set to maintenance mode using the set server command:

    • As the first argument, provide the name for the server

    • As the second argument, provide maintenance as the state

    hashtag
    Confirm Maintenance Mode is Set for Replicas

    This action is performed on the MaxScale node.

    Confirm that the replicas are set to maintenance mode in MaxScale using MaxScale's REST APIarrow-up-right. If you are using MaxCtrlarrow-up-right, the state of the replicas can be viewed using the list serversarrow-up-right command:

    If the node is properly in maintenance mode, then the State column will show Maintenance as one of the states.
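    This state check can also be scripted. The sketch below greps a captured maxctrl list servers table (the sample mirrors the output shown on this page); in practice, pipe the live command output through the same filter:

    ```shell
    # Captured `maxctrl list servers` output (sample matching this page).
    server_table='│ mcs3   │ 192.0.2.3     │ 3306 │ 0           │ Maintenance, Running │ 0-1-17 │'

    # Is server mcs3 listed with Maintenance among its states?
    if echo "$server_table" | grep 'mcs3' | grep -q 'Maintenance'; then
      mcs3_state=maintenance
    else
      mcs3_state=active
    fi
    echo "mcs3: $mcs3_state"
    ```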

    hashtag
    Disable GTID Strict Mode

    This action is performed on each replica server.

    The gtid_strict_mode system variable must be disabled for this upgrade procedure. If it is enabled in any configuration files, disable it temporarily until the upgrade procedure is complete.

    You can check whether the gtid_strict_mode system variable is set in a configuration file by executing the my_print_defaults command with the mysqld option:

    If the gtid_strict_mode system variable is set, you can temporarily disable it by adding # in front of it in the configuration file, so that it will be treated as a comment and ignored:
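    The comment-out step works as shown below. The sketch operates on a throwaway copy of a config fragment for demonstration; on a real node you would first locate the setting with my_print_defaults and then edit the file it came from:

    ```shell
    # Work on a throwaway config fragment for demonstration.
    cnf=$(mktemp)
    cat > "$cnf" <<'EOF'
    [mariadb]
    log_bin = mariadb-bin
    gtid_strict_mode = ON
    EOF
    sed -i 's/^    //' "$cnf"   # strip the heredoc's display indentation

    # Comment out gtid_strict_mode so the server treats it as ignored.
    sed -i 's/^gtid_strict_mode/#gtid_strict_mode/' "$cnf"

    disabled_line=$(grep '^#gtid_strict_mode' "$cnf")
    echo "$disabled_line"
    rm -f "$cnf"
    ```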

    hashtag
    Shutdown ColumnStore

    Prior to upgrading, MariaDB Enterprise ColumnStore must be shutdown.

    hashtag
    Stop Services

    This action is performed on each ColumnStore node.

    Prior to upgrading, several services must be stopped on each ColumnStore node:

    1. Stop the CMAPI service:

    2. Stop the MariaDB Enterprise ColumnStore service:

    3. Stop the MariaDB Enterprise Server service:

    hashtag
    Upgrade to the New Version

    MariaDB Corporation provides package repositories for YUM (RHEL, CentOS, Rocky Linux) and APT (Debian, Ubuntu).

    hashtag
    Upgrade via YUM (RHEL, CentOS, Rocky Linux)

    1. Retrieve your Customer Download Token at https://customers.mariadb.com/downloads/token/arrow-up-right and substitute for CUSTOMER_DOWNLOAD_TOKEN in the following directions.

    2. Configure the YUM package repository.

      Enterprise ColumnStore 23.10 is included with MariaDB Enterprise Server 11.4. Pass the version to install using the --mariadb-server-version flag to mariadb_es_repo_setuparrow-up-right.

      To configure YUM package repositories:

      1. Checksums of the various releases of the mariadb_es_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.

    3. Update MariaDB Enterprise Server and package dependencies:
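    The checksum verification in step 2 works as demonstrated below. The demonstration uses a locally created stand-in file and a checksum we compute ourselves; in real use, ${checksum} comes from the published checksums for the mariadb_es_repo_setup release you downloaded:

    ```shell
    # Demonstrate `sha256sum -c` on a stand-in for mariadb_es_repo_setup.
    workdir=$(mktemp -d)
    cd "$workdir"
    printf 'echo repo setup placeholder\n' > mariadb_es_repo_setup

    # In real use, ${checksum} is the published value, not self-computed.
    checksum=$(sha256sum mariadb_es_repo_setup | awk '{print $1}')

    if echo "${checksum}  mariadb_es_repo_setup" | sha256sum -c - >/dev/null 2>&1; then
      checksum_result=verified
    else
      checksum_result=MISMATCH
    fi
    echo "checksum: $checksum_result"
    cd - >/dev/null
    rm -rf "$workdir"
    ```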

    hashtag
    Upgrade via APT (Debian, Ubuntu)

    1. Retrieve your Customer Download Token at https://customers.mariadb.com/downloads/token/arrow-up-right and substitute for CUSTOMER_DOWNLOAD_TOKEN in the following directions.

    2. Configure the APT package repository.

      Enterprise ColumnStore 23.10 is included with MariaDB Enterprise Server 11.4. Pass the version to install using the --mariadb-server-version flag to mariadb_es_repo_setuparrow-up-right.

      To configure APT package repositories:

      1. Checksums of the various releases of the mariadb_es_repo_setup script can be found in the section at the bottom of the page. Substitute ${checksum} in the example above with the latest checksum.

    3. Update MariaDB Enterprise Server and package dependencies.

      The update command depends on the installed APT version, which can be determined by executing the following command:

      For versions prior to APT 2.0, execute the following command:

      For APT 2.0 and later, execute the following command:
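    The APT version detection above can be scripted. The sketch below parses a captured sample of apt --version output so the logic is visible; on a real node, pipe the live command through the same filter:

    ```shell
    # Sample output of `apt --version` (replace with the live command).
    apt_version_line='apt 2.4.5 (amd64)'

    # Extract the major version number.
    apt_major=$(echo "$apt_version_line" | awk '{print $2}' | cut -d. -f1)

    if [ "$apt_major" -ge 2 ]; then
      apt_generation="2.0-or-later"
    else
      apt_generation="pre-2.0"
    fi
    echo "detected APT generation: $apt_generation"
    ```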

    hashtag
    Disable ColumnStore Service

    This action is performed on each ColumnStore node.

    After upgrading, the MariaDB Enterprise ColumnStore service should be stopped, since it will be controlled by CMAPI:

    CMAPI disables the Enterprise ColumnStore service in a multi-node deployment. The Enterprise ColumnStore service is started as needed by the CMAPI service, so it does not need to start automatically upon reboot.

    hashtag
    Start Services

    This action is performed on each ColumnStore node.

    After upgrading, the CMAPI service and the MariaDB Enterprise Server service must be started on each ColumnStore node:

    1. Start the CMAPI service:

    2. Start the MariaDB Enterprise Server service:

    hashtag
    Write Binary Log

    On the primary server, run mariadb-upgradearrow-up-right with binary logging enabled to upgrade the data directory and update the system tables:

    hashtag
    Start ColumnStore

    After upgrading, MariaDB Enterprise ColumnStore must be started.

    hashtag
    Enable GTID Strict Mode

    This action is performed on each replica server.

    If you temporarily disabled the gtid_strict_mode system variable in any configuration files during the Disable GTID Strict Mode step, re-enable it now by removing the # comment marker.

    hashtag
    Confirm ColumnStore Version

    This action is performed on each ColumnStore node.

    After upgrading, it is recommended to confirm the Enterprise ColumnStore version on each ColumnStore node. Connect to the node and query the Columnstore_version status variable:

    hashtag
    Confirm ES Version

    This action is performed on each ColumnStore node.

    After upgrading, it is recommended to confirm the ES version on each ColumnStore node. Connect to the node and query the versionarrow-up-right system variable:

    hashtag
    Clear Maintenance Mode for Replicas

    This action is performed for each replica server on the MaxScale node.

    After the upgrade, maintenance mode must be cleared for each replica in MaxScale using MaxScale's REST APIarrow-up-right. If you are using MaxCtrlarrow-up-right, maintenance mode can be cleared using the clear server command:

    • As the first argument, provide the name for the server

    • As the second argument, provide maintenance as the state

    hashtag
    Confirm Maintenance Mode is Cleared for Replicas

    This action is performed for each replica server on the MaxScale node.

    Confirm that maintenance mode in MaxScale has been cleared for each replica using MaxScale's REST APIarrow-up-right. If you are using MaxCtrlarrow-up-right, the state of the replicas can be viewed using the list servers command:

    If the node is no longer in maintenance mode, then the State column will no longer show Maintenance as one of the states.



    Adding a Node

    Adding a Node to MariaDB Enterprise ColumnStore

    To add a new node to Enterprise ColumnStore, perform the following procedure.

    hashtag
    Deploying Enterprise ColumnStore

    Before you can add a node to Enterprise ColumnStore, confirm that the Enterprise ColumnStore software has been deployed on the node in the desired topology.


    mcs install_es --token <ES_API_TOKEN> --version latest
    mcs install_es --token <ES_API_TOKEN> --version <ES_VERSION>
    mcs install_es --token <ES_API_TOKEN> --version <ES_VERSION> --ignore-mismatch
    mcs install_es --token <ES_API_TOKEN> --version 10.6.15-10
    sudo systemctl stop mariadb-columnstore-cmapi
    sudo systemctl stop mariadb-columnstore
    sudo systemctl stop mariadb
    sudo yum install curl
    curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
    echo "${checksum} mariadb_es_repo_setup" | sha256sum -c -
    chmod +x mariadb_es_repo_setup
    sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
       --mariadb-server-version="11.4"
    sudo apt install curl
    curl -LsSO https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
    echo "${checksum}  mariadb_es_repo_setup" | sha256sum -c -
    chmod +x mariadb_es_repo_setup
    sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
       --mariadb-server-version="11.4"
    sudo apt update
    sudo systemctl start mariadb-columnstore-cmapi
    sudo systemctl start mariadb
    maxctrl set server \
       mcs2 \
       maintenance
    maxctrl list servers
    ┌────────┬───────────────┬──────┬─────────────┬──────────────────────┬────────┐
    │ Server │ Address       │ Port │ Connections │ State                │ GTID   │
    ├────────┼───────────────┼──────┼─────────────┼──────────────────────┼────────┤
    │ mcs3   │ 192.0.2.3     │ 3306 │ 0           │ Maintenance, Running │ 0-1-17 │
    ├────────┼───────────────┼──────┼─────────────┼──────────────────────┼────────┤
    │ mcs2   │ 192.0.2.2     │ 3306 │ 0           │ Maintenance, Running │ 0-1-17 │
    ├────────┼───────────────┼──────┼─────────────┼──────────────────────┼────────┤
    │ mcs1   │ 192.0.2.1     │ 3306 │ 0           │ Master, Running      │ 0-1-17 │
    └────────┴───────────────┴──────┴─────────────┴──────────────────────┴────────┘
    my_print_defaults --mysqld \
       | grep "gtid[-_]strict[-_]mode"
    --gtid_strict_mode=1
    [mariadb]
    ...
    # temporarily commented out for upgrade
    # gtid_strict_mode=1
    mcs cluster stop
    sudo systemctl stop mariadb-columnstore
    sudo systemctl disable mariadb-columnstore
    mariadb-upgrade --write-binlog
    mcs cluster start
    SHOW GLOBAL STATUS LIKE 'Columnstore_version';
    +---------------------+---------+
    | Variable_name       | Value   |
    +---------------------+---------+
    | Columnstore_version | 23.10.0 |
    +---------------------+---------+
    SHOW GLOBAL VARIABLES LIKE 'version';
    +---------------+----------------------------------+
    | Variable_name | Value                            |
    +---------------+----------------------------------+
    | version       | 10.6.9-5-MariaDB-enterprise-log  |
    +---------------+----------------------------------+
    maxctrl clear server \
       mcs2 \
       maintenance
    maxctrl list servers
    ┌────────┬───────────────┬──────┬─────────────┬─────────────────┬─────────┐
    │ Server │ Address       │ Port │ Connections │ State           │ GTID    │
    ├────────┼───────────────┼──────┼─────────────┼─────────────────┼─────────┤
    │ mcs3   │ 192.0.2.3     │ 3306 │ 0           │ Slave, Running  │ 0-3-159 │
    ├────────┼───────────────┼──────┼─────────────┼─────────────────┼─────────┤
    │ mcs2   │ 192.0.2.2     │ 3306 │ 0           │ Slave, Running  │ 0-1-88  │
    ├────────┼───────────────┼──────┼─────────────┼─────────────────┼─────────┤
    │ mcs1   │ 192.0.2.1     │ 3306 │ 0           │ Master, Running │ 0-1-88  │
    └────────┴───────────────┴──────┴─────────────┴─────────────────┴─────────┘
    If the command failed with an error about an unknown configuration flag, the packaging tool kept the existing configuration files while installing the packages, and the older MariaDB version does not support the newer flag. To fix this, remove the flag from the configuration file, or restore the configuration shipped with the last installed package.
    Monitor performance and logs for regressions.
    hashtag
    Backing Up MariaDB Data Directory on the Primary Server

    Before the new node can be added, its MariaDB data directory must be consistent with the Primary Server. To ensure that it is consistent, take a backup of the Primary Server:

    The instructions below show how to perform a backup using mariadb-backup.

    1. On the Primary Server, take a full backup:

      Confirm successful completion of the backup operation.

    2. On the Primary Server, prepare the backup:

      Confirm successful completion of the prepare operation.

    hashtag
    Restoring the Backup on the New Node

    To make the new node consistent with the Primary Server, restore the new backup on the new node:

    1. On the Primary Server, copy the backup to the new node:

    2. On the new node, restore the backup using mariadb-backup.

    3. On the new node, fix the file permissions of the restored backup:

    hashtag
    Starting the Enterprise ColumnStore Services

    The Enterprise Server, Enterprise ColumnStore, and CMAPI services can be started using the systemctl command. If the services were already started during the installation process, use the restart command instead.

    Perform the following procedure on the new node:

    1. Start and enable the MariaDB Enterprise Server service, so that it starts automatically upon reboot:

    2. Start and disable the MariaDB Enterprise ColumnStore service, so that it does not start automatically upon reboot:

      Note

      The Enterprise ColumnStore service should not be enabled in a multi-node deployment. The Enterprise ColumnStore service will be started as-needed by the CMAPI service, so it does not require starting automatically upon reboot.

    3. Start and enable the CMAPI service, so that it starts automatically upon reboot:

    hashtag
    Configuring MariaDB Replication

    MariaDB Enterprise ColumnStore requires MariaDB Replication, which must be configured.

    1. Get the GTID position that corresponds to the restored backup.

      If the backup was taken with mariadb-backup, this position will be located in xtrabackup_binlog_info:

      The GTID position from the above output is 0-1-2001,1-2-5139.

    2. Connect to the Replica Server using the root@localhost user account:

    3. Set the gtid_slave_pos system variable to the GTID position:

    4. Execute the CHANGE MASTER TO statement to configure the new node to connect to the Primary Server at this position:

      The above statement configures the Replica Server to connect to a Primary Server located at 192.0.2.1 using the repl user account.

    5. Start replication using the START SLAVE command:

      The above statement configures the new node to connect to the Primary Server to retrieve new binary log events and replicate them into the local database.
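    The GTID lookup in step 1 can be scripted. A minimal sketch that extracts the third field of xtrabackup_binlog_info, recreating the sample contents shown above for demonstration:

```shell
# Recreate the sample xtrabackup_binlog_info from the example above,
# then extract the GTID position (the third whitespace-separated field).
printf 'mariadb-bin.000096\t568\t0-1-2001,1-2-5139\n' > xtrabackup_binlog_info
gtid_pos=$(awk '{print $3}' xtrabackup_binlog_info)
echo "$gtid_pos"
```

    In a real procedure, the file lives in the backup target directory; the extracted value is what you substitute into the SET GLOBAL gtid_slave_pos statement.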

    hashtag
    Adding the Node to Enterprise ColumnStore

    The new node must be added to Enterprise ColumnStore using CMAPI:

    • Add the node using the add-node endpoint path

    • Use a supported REST client, such as curl

    • Format the JSON output using jq for enhanced readability

    • Authenticate using the configured API key

    • Include the required headers

    For example, if the primary node's host name is mcs1 and the new node's IP address is 192.0.2.3:

    • In ES 10.5.10-7 and later:

    • In ES 10.5.9-6 and earlier:

    Example output:

    hashtag
    Checking Enterprise ColumnStore Status

    To confirm that the node was properly added, the status of Enterprise ColumnStore should be checked using CMAPI:

    • Check the status using the status endpoint path

    For example, if the primary node's host name is mcs1:

    Example output:
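    Because the status endpoint returns JSON, a saved response can be sanity-checked offline. A minimal sketch, using an abbreviated copy of the example response in this section (real responses carry many more fields):

```shell
# Abbreviated CMAPI status response; node keys and dbrm_mode values
# follow the example output shown in this section.
cat > status.json <<'EOF'
{"timestamp": "2020-12-15 00:40:34.353574",
 "192.0.2.1": {"dbrm_mode": "master", "cluster_mode": "readwrite"},
 "192.0.2.2": {"dbrm_mode": "slave", "cluster_mode": "readonly"},
 "192.0.2.3": {"dbrm_mode": "slave", "cluster_mode": "readonly"},
 "num_nodes": 3}
EOF
summary=$(python3 - <<'PY'
import json
with open("status.json") as f:
    status = json.load(f)
# Node entries are every top-level key other than the metadata fields.
nodes = [k for k in status if k not in ("timestamp", "num_nodes")]
masters = sum(1 for k in nodes if status[k]["dbrm_mode"] == "master")
print(f"{len(nodes)} nodes, {masters} master")
PY
)
echo "$summary"
```

    A healthy cluster after adding the node reports one dbrm_mode master and a node count matching num_nodes.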

    hashtag
    Adding a Server to MaxScale

    A server object for the new node must also be added to MaxScale using MaxCtrl:

    • Use MaxCtrl or another supported REST client

    • Add the server object using the create server command

    • As the first argument, provide a name for the server

    • As the second argument, provide the IP address for the node

    For example:

    hashtag
    Verifying the Server in MaxScale

    To confirm that the server object was properly added, the server objects should be checked using MaxCtrl:

    • Show the server objects using the show servers command

    For example:

    hashtag
    Linking to Monitor in MaxScale

    The server object for the new node must be linked to the monitor using MaxCtrl:

    • Link a server object to the monitor using the link monitor command

    • As the first argument, provide the name of the monitor

    • As the second argument, provide the name of the server

    hashtag
    Checking the Monitor in MaxScale

    To confirm that the server object was properly linked to the monitor, the monitor should be checked using MaxCtrl:

    • Show the monitors using the show monitors command

    For example:

    hashtag
    Linking to Service in MaxScale

    The server object for the new node must be linked to the service using MaxCtrl:

    • Link the server object to the service using the link service command

    • As the first argument, provide the name of the service

    • As the second argument, provide the name of the server

    hashtag
    Checking the Service in MaxScale

    To confirm that the server object was properly linked to the service, the service should be checked using MaxCtrl:

    • Show the services using the show services command

    For example:

    hashtag
    Checking the Replication Status with MaxScale

    MaxScale is capable of checking the status of replication using MaxCtrl:

    • List the servers using the list servers command

    For example:

    If the new node is properly replicating, then the State column will show Slave, Running.


    sudo mariadb-backup --backup \
          --user=mariabackup_user \
          --password=mariabackup_passwd \
          --target-dir=/data/backup/replica_backup
    sudo mariadb-backup --prepare \
          --target-dir=/data/backup/replica_backup
    sudo rsync -av /data/backup/replica_backup 192.0.2.3:/data/backup/
    sudo mariadb-backup --copy-back \
       --target-dir=/data/backup/replica_backup
    sudo chown -R mysql:mysql /var/lib/mysql
    sudo systemctl restart mariadb
    sudo systemctl enable mariadb
    sudo systemctl restart mariadb-columnstore
    sudo systemctl disable mariadb-columnstore
    cat xtrabackup_binlog_info
    mariadb-bin.000096 568 0-1-2001,1-2-5139
    curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       --data '{"timeout":20, "node": "192.0.2.3"}' \
       | jq .
    curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/add-node \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       --data '{"timeout":20, "node": "192.0.2.3"}' \
       | jq .
    {
      "timestamp": "2020-10-28 00:39:14.672142",
      "node_id": "192.0.2.3"
    }
    curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       | jq .
    {
      "timestamp": "2020-12-15 00:40:34.353574",
      "192.0.2.1": {
        "timestamp": "2020-12-15 00:40:34.362374",
        "uptime": 11467,
        "dbrm_mode": "master",
        "cluster_mode": "readwrite",
        "dbroots": [
          "1"
        ],
        "module_id": 1,
        "services": [
          {
            "name": "workernode",
            "pid": 19202
          },
          {
            "name": "controllernode",
            "pid": 19232
          },
          {
            "name": "PrimProc",
            "pid": 19254
          },
          {
            "name": "ExeMgr",
            "pid": 19292
          },
          {
            "name": "WriteEngine",
            "pid": 19316
          },
          {
            "name": "DMLProc",
            "pid": 19332
          },
          {
            "name": "DDLProc",
            "pid": 19366
          }
        ]
      },
      "192.0.2.2": {
        "timestamp": "2020-12-15 00:40:34.428554",
        "uptime": 11437,
        "dbrm_mode": "slave",
        "cluster_mode": "readonly",
        "dbroots": [
          "2"
        ],
        "module_id": 2,
        "services": [
          {
            "name": "workernode",
            "pid": 17789
          },
          {
            "name": "PrimProc",
            "pid": 17813
          },
          {
            "name": "ExeMgr",
            "pid": 17854
          },
          {
            "name": "WriteEngine",
            "pid": 17877
          }
        ]
      },
      "192.0.2.3": {
        "timestamp": "2020-12-15 00:40:34.428554",
        "uptime": 11437,
        "dbrm_mode": "slave",
        "cluster_mode": "readonly",
        "dbroots": [
          "2"
        ],
        "module_id": 2,
        "services": [
          {
            "name": "workernode",
            "pid": 17789
          },
          {
            "name": "PrimProc",
            "pid": 17813
          },
          {
            "name": "ExeMgr",
            "pid": 17854
          },
          {
            "name": "WriteEngine",
            "pid": 17877
          }
        ]
      },
      "num_nodes": 3
    }
    maxctrl create server \
       mcs3 \
       192.0.2.3
    maxctrl show servers
    maxctrl link monitor \
       mcs_monitor \
       mcs3
    maxctrl show monitors
    maxctrl link service \
       mcs_service \
       mcs3
    maxctrl show services
    maxctrl list servers
    sudo yum update "MariaDB-*" "MariaDB-columnstore-engine" "MariaDB-columnstore-cmapi"
    apt --version
    apt 2.0.9 (amd64)
    sudo apt install --only-upgrade "mariadb*"
    sudo apt install --only-upgrade '?upgradable ?name(mariadb.*)'
    sudo systemctl restart mariadb-columnstore-cmapi
    sudo systemctl enable mariadb-columnstore-cmapi
    sudo mariadb
    SET GLOBAL gtid_slave_pos='0-1-2001,1-2-5139';
    CHANGE MASTER TO
       MASTER_USER = "repl",
       MASTER_HOST = "192.0.2.1",
       MASTER_PASSWORD = "repl_passwd",
       MASTER_USE_GTID=slave_pos;
    START SLAVE;

    ColumnStore System Variables

    hashtag
    Variables

    hashtag
    columnstore_cache_inserts

    Scope: Global Dynamic: No (requires restart) Command Line: --columnstore-cache-inserts[={0|1}] Default: OFF Description: The feature can be enabled or disabled at the global level. When enabled, INSERT operations are directed to a memory‑optimized Aria cache table, which serves as a temporary buffer before the data is flushed into ColumnStore storage.

    hashtag
    columnstore_cache_flush_threshold

    Scope: Global / Session Dynamic: Yes Command Line: --columnstore-cache-flush-threshold=# Default: 500000 Description: Specifies the number of cached rows that trigger an automatic flush from the Aria cache table to the ColumnStore table.

    hashtag
    columnstore_cache_use_import

    Scope: Global Dynamic: Yes Command Line: --columnstore-cache-use-import[={0|1}] Default: OFF Description: When the insert cache is enabled, flush operations use the cpimport utility to achieve improved performance. When this variable is disabled, flushes are executed using ColumnStore's internal batch processing mode.

    hashtag
    columnstore_diskjoin_force_run

    • Controls whether disk joins are forced to run even if they are not estimated to be the most efficient execution plan. This can be useful for debugging purposes or for situations where the optimizer's estimates are not accurate.

    • Scope: global, session

    • Data type:

    hashtag
    columnstore_diskjoin_max_partition_tree_depth

    • Sets the maximum depth of the partition tree that can be used for disk joins. A higher value allows for more complex joins, but may also increase the memory usage and execution time.

    • Scope: global, session

    • Data type:

    hashtag
    columnstore_max_allowed_in_values

    • Sets the maximum number of values that can be used in an IN predicate on a Columnstore table. This limit helps to prevent performance issues caused by queries with a large number of IN values.

    • Scope: global, session

    • Data type:

    hashtag
    columnstore_max_pm_join_result_count

    • Sets the maximum number of rows that can be returned by a parallel merge join on a Columnstore table. This limit helps to prevent memory issues caused by joins that return a large number of rows.

    • Scope: global, session

    • Data type:

    hashtag
    infinidb_compression_type

    • Command line: Yes

    • Scope: global, session

    • Data type:

    • Default value: 2

    hashtag
    infinidb_decimal_scale

    • Command line: Yes

    • Scope: global, session

    • Data type:

    • Default value: 8

    hashtag
    infinidb_diskjoin_bucketsize

    • Command line: Yes

    • Scope: global, session

    • Data type:

    • Default value: 100

    hashtag
    infinidb_diskjoin_largesidelimit

    • Command line: Yes

    • Scope: global, session

    • Data type:

    • Default value: 0

    hashtag
    infinidb_diskjoin_smallsidelimit

    • Command line: Yes

    • Scope: global, session

    • Data type:

    • Default value: 0

    hashtag
    infinidb_double_for_decimal_math

    • Command line: Yes

    • Scope: global, session

    • Data type:

    • Default value: OFF

    hashtag
    infinidb_import_for_batchinsert_delimiter

    • Command line: Yes

    • Scope: global, session

    • Data type:

    • Default value: 7

    hashtag
    infinidb_import_for_batchinsert_enclosed_by

    • Command line: Yes

    • Scope: global, session

    • Data type:

    • Default value: 17

    hashtag
    infinidb_local_query

    • Command line: Yes

    • Scope: global, session

    • Data type:

    • Default value: 0

    hashtag
    infinidb_ordered_only

    • Command line: Yes

    • Scope: global, session

    • Data type:

    • Default value: OFF

    hashtag
    infinidb_string_scan_threshold

    • Command line: Yes

    • Scope: global, session

    • Data type:

    • Default value: 10

    hashtag
    infinidb_stringtable_threshold

    • Command line: Yes

    • Scope: global, session

    • Data type:

    • Default value: 20

    hashtag
    infinidb_um_mem_limit

    • Command line: Yes

    • Scope: global, session

    • Data type:

    • Default value: 0

    hashtag
    infinidb_use_decimal_scale

    • Command line: Yes

    • Scope: global, session

    • Data type:

    • Default value: OFF

    hashtag
    infinidb_use_import_for_batchinsert

    • Command line: Yes

    • Scope: global, session

    • Data type:

    • Default value: ON

    hashtag
    infinidb_varbin_always_hex

    • Command line: Yes

    • Scope: global, session

    • Data type:

    • Default value: ON

    hashtag
    infinidb_vtable_mode

    • Command line: Yes

    • Scope: global, session

    • Data type:

    • Default value: 1

    hashtag
    Compression Mode

    MariaDB ColumnStore has the ability to compress data. This is controlled through a compression mode, which can be set as a default for the instance or set at the session level.

    To set the compression mode at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance:

    where n is:

    • 0 – compression is turned off. Any subsequent CREATE TABLE statements will have compression turned off for the table unless a statement override is used. Any ALTER statements that add a column will have compression turned off for that column unless a statement override is used.

    • 2 – compression is turned on. Any subsequent CREATE TABLE statements will have compression turned on for the table unless a statement override is used. Any ALTER statements that add a column will have compression turned on for that column unless a statement override is used. ColumnStore uses snappy compression in this mode.

    hashtag
    ColumnStore Decimal-to-Double Math

    MariaDB ColumnStore has the ability to change intermediate decimal mathematical results from the decimal type to double. The decimal type has approximately 17-18 digits of precision but a smaller maximum range, whereas the double type has approximately 15-16 digits of precision but a much larger maximum range.

    In typical mathematical and scientific applications, the ability to avoid overflow in intermediate results with double math is likely more beneficial than the additional two digits of precision. In banking applications, however, it may be more appropriate to keep the default decimal setting to ensure accuracy to the least significant digit.
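    The trade-off can be demonstrated outside the database. A small sketch using Python's Decimal type versus its double-precision float — the specific value is illustrative only:

```shell
# A value with 19 significant digits survives Decimal math exactly, but
# is rounded to the nearest representable double when converted to float.
rounded=$(python3 -c '
from decimal import Decimal
d = Decimal("12345678901234567.89")
# Doubles near 1.2e16 are spaced 2 apart, so the value rounds to ...568.
print(float(d) == 12345678901234568.0)')
echo "$rounded"
```

    This mirrors the precision loss described above: double math trades the last couple of significant digits for a much larger range.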

    hashtag
    Enable/Disable Decimal-to-Double Math

    The infinidb_double_for_decimal_math variable is used to control the data type for intermediate decimal results. Decimal-to-double math may be set as a default for the instance, at the session level, or at the statement level by toggling this variable on and off.

    To enable/disable the use of the decimal to double math at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance:

    where n is:

    • off (disabled, default)

    • on (enabled)

    hashtag
    ColumnStore Decimal Scale

    ColumnStore has the ability to support varied internal precision on decimal calculations. infinidb_decimal_scale is used internally by the ColumnStore engine to control how many significant digits to the right of the decimal point are carried through in suboperations on calculated columns. If, while running a query, you receive the message ‘aggregate overflow’, try reducing infinidb_decimal_scale and running the query again.

    Note that, as you decrease infinidb_decimal_scale, you may see reduced accuracy in the least significant digit(s) of a returned calculated column. infinidb_use_decimal_scale is used internally by the ColumnStore engine to turn the use of this internal precision on and off. These two system variables can be set as a default for the instance or at session level.

    hashtag
    Enable/Disable Decimal Scale

    To enable/disable the use of the decimal scale at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance:

    where n is off (disabled) or on (enabled).

    hashtag
    Set Decimal Scale Level

    To set the decimal scale at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance.

    where n is the amount of precision desired for calculations.

    hashtag
    Disk-Based Joins

    hashtag
    Introduction

    Joins are performed in memory. When a join operation exceeds the memory allocated for query joins, the query is aborted with error code IDB-2001.

    Disk-based joins enable such queries to use disk for intermediate join data when the memory needed for the join exceeds the memory limit. Although slower than a fully in-memory join, and bound by the available temporary space on disk, this does allow such queries to complete.

    circle-info

    Disk-based joins do not apply to aggregation or DML joins.

    The following variables in the HashJoin element in the Columnstore.xml configuration file relate to disk-based joins. Columnstore.xml resides in /usr/local/mariadb/columnstore/etc/.

    • AllowDiskBasedJoin – Option to use disk-based joins. Valid values are Y (enabled) or N (disabled). Default is disabled.

    • TempFileCompression – Option to use compression for disk join files. Valid values are Y (use compressed files) or N (use non-compressed files).

    • TempFilePath – The directory path used for the disk joins. By default, this path is the tmp directory for your installation (i.e., /usr/local/mariadb/columnstore/tmp). Files (named infinidb-join-data*) in this directory will be created and cleaned on an as-needed basis. The entire directory is removed and recreated by ExeMgr at startup.

    circle-info

    When using disk-based joins, it is strongly recommended that the TempFilePath reside on its own partition as the partition may fill up as queries are executed.
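    Putting the three settings together, the HashJoin element might look like the following. This is a sketch only: the values shown are illustrative, and the surrounding elements of Columnstore.xml are omitted.

```xml
<!-- Fragment of Columnstore.xml; values are illustrative -->
<HashJoin>
  <AllowDiskBasedJoin>Y</AllowDiskBasedJoin>
  <TempFileCompression>Y</TempFileCompression>
  <TempFilePath>/usr/local/mariadb/columnstore/tmp</TempFilePath>
</HashJoin>
```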

    hashtag
    Per-User Join Memory Limit

    In addition to the system-wide flags, the following system variable exists at the SQL global and session level for managing the per-user memory limit for joins.

    • infinidb_um_mem_limit - A value for memory limit in MB per user. When this limit is exceeded by a join, it will switch to a disk-based join. By default, the limit is not set (value of 0).

    For modification at the global level: In my.cnf file (typically /usr/local/mariadb/columnstore/mysql):

    where value is the value in MB for in memory limitation per user.

    For modification at the session level, before issuing your join query from the SQL client, set the session variable as follows.

    hashtag
    Batch Insert Mode for INSERT Statements

    hashtag
    Introduction

    MariaDB ColumnStore has the ability to utilize the cpimport fast data import tool for non-transactional LOAD DATA INFILE and INSERT INTO ... SELECT SQL statements. Using this method results in a significant increase in performance when loading data through these two SQL statements. This optimization is independent of the storage engine used for the tables in the SELECT statement.

    hashtag
    Enable/Disable Using cpimport for Batch Insert

    The infinidb_use_import_for_batchinsert variable is used to control if cpimport is used for these statements. This variable may be set as a default for the instance, set at the session level, or at the statement level by toggling this variable on and off.

    To enable/disable the use of the use cpimport for batch insert at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance.

    where n is:

    • 0 (disabled)

    • 1 (enabled)

    hashtag
    Changing Default Delimiter for INSERT SELECT

    • The infinidb_import_for_batchinsert_delimiter variable is used internally by MariaDB ColumnStore on a non-transactional INSERT INTO SELECT FROM statement as the default delimiter passed to the cpimport tool. With a default value of ASCII 7, there should be no need to change this value unless your data contains ASCII 7 values.

    To change this variable value at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance.

    where ascii_value is an ASCII value representation of the delimiter desired.

    Note that this setting may cause issues with multi-byte character set data. It is recommended to use UTF-8 files directly with cpimport.

    hashtag
    Version Buffer File Management

    If the following error is received, most likely with a transaction LOAD DATA INFILE or INSERT INTO SELECT, it is recommended to break up the load into multiple smaller chunks, increase the VersionBufferFileSize setting, consider a nontransactional LOAD DATA INFILE, or use cpimport.

    The VersionBufferFileSize setting is updated in the ColumnStore.xml typically located under /usr/local/mariadb/columnstore/etc. This dictates the size of the version buffer file on disk which provides DML transactional consistency. The default value is '1GB' which reserves up to a 1 Gigabyte file size. Modify this on the primary node and restart the system if you require a larger value.

    hashtag
    Local PrimProc Query Mode

    MariaDB ColumnStore has the ability to query data from just a single node instead of the whole cluster. To accomplish this, the infinidb_local_query variable in the my.cnf configuration file is used, and it may be set as a system-wide default or at the session level.

    hashtag
    Enable Local PrimProc Query During Installation

    Local PrimProc query can be enabled system wide during the install process when running the install script postConfigure. Answer 'y' to this prompt during the install process:

    hashtag
    Enable Local PrimProc Query System-Wide

    To enable the use of the local PrimProc query at the instance level, specify infinidb_local_query=1 (enabled) in the my.cnf configuration file at /usr/local/mariadb/columnstore/mysql. The default is 0 (disabled).

    hashtag
    Enable/Disable Local PrimProc Query at the Session Level

    To enable/disable the use of the local PrimProc query at the session level, the following statement is used. Once the session has ended, any subsequent session will return to the default for the instance:

    where n is:

    • 0 (disabled)

    • 1 (enabled)

    At the session level, this variable applies only to executing a query on an individual PrimProc node. The PrimProc must be set up with the local query option during installation.

    hashtag
    Local PrimProc Query Examples

    hashtag
    Example 1 - SELECT from a single table on local PrimProc to import back on local PrimProc:

    With the infinidb_local_query variable set to 1 (default with local PrimProc Query):

    hashtag
    Example 2 - SELECT involving a join between a fact table on the PrimProc node and dimension table across all the nodes to import back on local PrimProc:

    With the infinidb_local_query variable set to 0 (default with local PrimProc Query):

    Create a script (e.g., extract_query_script.sql in our example) similar to the following:

    The infinidb_local_query is set to 0 to allow query across all PrimProc nodes.

    The query is structured so PrimProc gets the fact table data locally from the PrimProc node (as indicated by the use of the idbLocalPm() function), while the dimension table data is extracted from all the PrimProc nodes.

    Then you can execute the script to pipe it directly into cpimport:

    hashtag
    Operating Mode

    ColumnStore has the ability to support full MariaDB query syntax through an operating mode. This operating mode may be set as a default for the instance or set at the session level. To set the operating mode at the session level, the following command is used. Once the session has ended, any subsequent session will return to the default for the instance.

    where n is:

    • 0 – a generic, highly compatible row-by-row processing mode. Some WHERE clause components can be processed by ColumnStore, but joins are processed entirely by MySQL using a nested-loop join mechanism.

    • 1 – (the default) query syntax is evaluated by ColumnStore for compatibility with distributed execution, and incompatible queries are rejected. Queries executed in this mode take advantage of distributed execution and typically result in higher performance.

    • 2 – auto-switch mode: ColumnStore will attempt to process the query internally; if it cannot, it will automatically switch the query to run in row-by-row mode.



Step 4: Start and Configure MariaDB Enterprise Server

Overview

    This page details step 4 of the 9-step procedure "Deploy ColumnStore Shared Local Storage Topology".

    This step starts and configures MariaDB Enterprise Server, and MariaDB Enterprise ColumnStore.

Note: The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

Stop the Enterprise ColumnStore Services

    The installation process might have started some of the ColumnStore services. The services should be stopped prior to making configuration changes.

1. On each Enterprise ColumnStore node, stop the MariaDB Enterprise Server service:

2. On each Enterprise ColumnStore node, stop the MariaDB Enterprise ColumnStore service:

3. On each Enterprise ColumnStore node, stop the CMAPI service:
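As a sketch, assuming the standard systemd unit names shipped with MariaDB Enterprise ColumnStore packages (mariadb, mariadb-columnstore, mariadb-columnstore-cmapi), the three stops are:

```shell
sudo systemctl stop mariadb
sudo systemctl stop mariadb-columnstore
sudo systemctl stop mariadb-columnstore-cmapi
```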

Configure Enterprise ColumnStore

    On each Enterprise ColumnStore node, configure Enterprise Server.


Mandatory system variables and options for ColumnStore Shared Local Storage include:

    Example Configuration

Start the Enterprise ColumnStore Services

1. On each Enterprise ColumnStore node, start and enable the MariaDB Enterprise Server service, so that it starts automatically upon reboot:

2. On each Enterprise ColumnStore node, stop the MariaDB Enterprise ColumnStore service:

    After the CMAPI service is installed in the next step, CMAPI will start the Enterprise ColumnStore service as needed on each node. CMAPI disables the Enterprise ColumnStore service to prevent systemd from automatically starting Enterprise ColumnStore upon reboot.

3. On each Enterprise ColumnStore node, start and enable the CMAPI service, so that it starts automatically upon reboot:
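A minimal sketch, again assuming the standard systemd unit names (an assumption; adjust to your packages):

```shell
# Start and enable Enterprise Server
sudo systemctl start mariadb
sudo systemctl enable mariadb

# Leave the ColumnStore service stopped; CMAPI manages it after the next step
sudo systemctl stop mariadb-columnstore

# Start and enable CMAPI
sudo systemctl start mariadb-columnstore-cmapi
sudo systemctl enable mariadb-columnstore-cmapi
```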

    For additional information, see "Start and Stop Services".

Create User Accounts

The ColumnStore Shared Local Storage topology requires several user accounts. Each user account should be created on the primary server, so that it is replicated to the replica servers.

Create the Utility User

Enterprise ColumnStore requires a utility user account to perform cross-engine joins and similar operations.

1. On the primary server, create the user account with the CREATE USER statement:

2. On the primary server, grant the user account SELECT privileges on all databases with the GRANT statement:

3. On each Enterprise ColumnStore node, configure the ColumnStore utility user:

4. On each Enterprise ColumnStore node, set the password:
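A minimal sketch, assuming a utility account named util_user (the account name and password are examples, and mcsSetConfig is used here as the ColumnStore configuration helper):

```shell
# On the primary server: create the utility user and grant SELECT everywhere
mariadb -e "CREATE USER 'util_user'@'127.0.0.1' IDENTIFIED BY 'util_user_passwd';"
mariadb -e "GRANT SELECT ON *.* TO 'util_user'@'127.0.0.1';"

# On each node: point ColumnStore's cross-engine support at that account
sudo mcsSetConfig CrossEngineSupport User "util_user"
sudo mcsSetConfig CrossEngineSupport Password "util_user_passwd"
```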

    For details about how to encrypt the password, see "".

    Passwords should meet your organization's password policies. If your MariaDB Enterprise Server instance has a password validation plugin installed, then the password should also meet the configured requirements.

Create the Replication User

The ColumnStore Shared Local Storage topology uses MariaDB Replication to replicate writes between the primary and replica servers. As MaxScale can promote a replica server to become the new primary in the event of node failure, all nodes must have a replication user.

    The action is performed on the primary server.

Create the replication user and grant it the required privileges:

1. Use the CREATE USER statement to create the replication user.

Replace the referenced IP address with the relevant address for your environment.

Ensure that the user account can connect to the primary server from each replica.

2. Grant the user account the required privileges with the GRANT statement.
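For example (the user name, password, and subnet are placeholders; the privilege list follows what MariaDB Enterprise documentation typically grants a replication user, so verify it against your version):

```sql
-- On the primary server
CREATE USER 'repl'@'192.0.2.%' IDENTIFIED BY 'repl_passwd';

GRANT REPLICA MONITOR,
      REPLICATION REPLICA,
      REPLICATION REPLICA ADMIN,
      REPLICATION MASTER ADMIN
   ON *.* TO 'repl'@'192.0.2.%';
```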

Create MaxScale User

ColumnStore 23.10 uses MariaDB MaxScale 22.08 to load balance between the nodes.

    This action is performed on the primary server.

1. Use the CREATE USER statement to create the MaxScale user:

Replace the referenced IP address with the relevant address for your environment.

Ensure that the user account can connect from the IP address of the MaxScale instance.

2. Use the GRANT statement to grant the privileges required by the router:

3. Use the GRANT statement to grant the privileges required by the MariaDB Monitor.
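A sketch of the three statements (the account name, password, and host are examples; the monitor privilege list follows MaxScale's mariadbmon documentation and should be checked against your MaxScale version):

```sql
-- Account MaxScale connects as; replace the host with your MaxScale node's address
CREATE USER 'mxs'@'192.0.2.10' IDENTIFIED BY 'mxs_passwd';

-- Privileges required by the router to read user account information
GRANT SHOW DATABASES ON *.* TO 'mxs'@'192.0.2.10';
GRANT SELECT ON mysql.* TO 'mxs'@'192.0.2.10';

-- Privileges required by MariaDB Monitor to manage and fail over nodes
GRANT BINLOG ADMIN, READ_ONLY ADMIN, RELOAD, REPLICA MONITOR,
      REPLICATION MASTER ADMIN, REPLICATION REPLICA ADMIN, REPLICATION REPLICA
   ON *.* TO 'mxs'@'192.0.2.10';
```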

Configure MariaDB Replication

    On each replica server, configure MariaDB Replication:

1. Use the CHANGE MASTER TO statement to configure the connection to the primary server:

2. Start replication using the START REPLICA statement:

3. Confirm that replication is working using the SHOW REPLICA STATUS statement:

4. Ensure that the replica server cannot accept local writes by setting the read_only system variable to ON using the SET GLOBAL statement:
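The four steps can be sketched as follows (the host and credentials are examples; GTID mode via MASTER_USE_GTID=slave_pos is an assumption that matches common MariaDB replication setups):

```sql
-- On each replica server
CHANGE MASTER TO
  MASTER_HOST='192.0.2.1',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl_passwd',
  MASTER_USE_GTID=slave_pos;

START REPLICA;
SHOW REPLICA STATUS\G

SET GLOBAL read_only=ON;
```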

Initiate the Primary Server with CMAPI

    Initiate the primary server using CMAPI.

1. Create an API key for the cluster. This API key should be stored securely and kept confidential, because it can be used to add cluster nodes to the multi-node Enterprise ColumnStore deployment.

For example, to create a random 256-bit API key using openssl rand:

This document will use the following API key in further examples, but users should create their own:

2. Use CMAPI to add the primary server to the cluster and set the API key. The new API key needs to be provided in the X-API-Key HTTP header.

For example, if the primary server's host name is mcs1 and its IP address is 192.0.2.1, use the following node command:

3. Use CMAPI to check the status of the cluster node:
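A sketch of the three steps (the API key shown is an example; this assumes CMAPI's default port 8640 and its cluster endpoints, which should be checked against your CMAPI version):

```shell
# Generate a random 256-bit API key
openssl rand -hex 32

# Add the primary node and set the key (example key and addresses)
curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
  --header 'Content-Type:application/json' \
  --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
  --data '{"timeout": 120, "node": "192.0.2.1"}'

# Check the cluster status
curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
  --header 'Content-Type:application/json' \
  --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd'
```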

Add Replica Servers with CMAPI

    Add the replica servers with CMAPI:

1. For each replica server, use CMAPI to add the replica server to the cluster. The previously set API key needs to be provided in the X-API-Key HTTP header.

For example, if the primary server's host name is mcs1 and the replica server's IP address is 192.0.2.2, use the following node command:

2. After all replica servers have been added, use CMAPI to confirm that all cluster nodes have been successfully added:

Configure Linux Security Modules (LSM)

    The specific steps to configure the security module depend on the operating system.

Configure SELinux (CentOS, RHEL)

    Configure SELinux for Enterprise ColumnStore:

1. To configure SELinux, install the packages required for audit2allow. On CentOS 7 and RHEL 7, install the following:

On RHEL 8, install the following:

2. Allow the system to run under load for a while to generate SELinux audit events.

3. After the system has taken some load, generate an SELinux policy from the audit events using audit2allow:

If no audit events were found, this will print the following:

4. If audit events were found, the new SELinux policy can be loaded using semodule:

5. Set SELinux to enforcing mode:

6. Persist enforcing mode by setting SELINUX=enforcing in /etc/selinux/config.

For example, the file will usually look like this after the change:

7. Confirm that SELinux is in enforcing mode:
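The audit-to-policy loop can be sketched as follows (the module name mariadb_local is an example; the grep pattern assumes events were logged for the mysqld context):

```shell
# Generate a local policy module from collected audit events, then load it
grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
sudo semodule -i mariadb_local.pp

# Switch to enforcing mode now, and persist it across reboots
sudo setenforce enforcing
sudo sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config

# Confirm the current mode
getenforce
```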

Configure AppArmor (Ubuntu)

For information on how to create a profile, see the AppArmor documentation on Ubuntu.com.

Configure Firewalls

    The specific steps to configure the firewall service depend on the platform.

Configure firewalld (CentOS, RHEL)

Configure firewalld for Enterprise ColumnStore on CentOS and RHEL:

1. Check if the firewalld service is running:

2. If the firewalld service was stopped to perform the installation, start it now:

3. Open up the relevant ports using firewall-cmd. For example, if your cluster nodes are in the 192.0.2.0/24 subnet:

4. Reload the runtime configuration:
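A sketch of the firewall-cmd rules (the subnet is the example used on this page; the ports shown — 3306 for MariaDB, 8600-8630, 8700, and 8800 for ColumnStore, and 8640 for CMAPI — match the usual Enterprise ColumnStore defaults, so verify them for your deployment):

```shell
for ports in 3306 8600-8630 8640 8700 8800; do
  sudo firewall-cmd --permanent --add-rich-rule="rule family=\"ipv4\" \
    source address=\"192.0.2.0/24\" port port=\"${ports}\" protocol=\"tcp\" accept"
done
sudo firewall-cmd --reload
```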

Configure UFW (Ubuntu)

Configure UFW for Enterprise ColumnStore on Ubuntu:

1. Check if the UFW service is running:

2. If the UFW service was stopped to perform the installation, start it now:

3. Open up the relevant ports using ufw. For example, if your cluster nodes are in the 192.0.2.0/24 subnet in the range 192.0.2.1 - 192.0.2.3:

4. Reload the runtime configuration:
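The equivalent ufw rules can be sketched as follows (same example subnet and the same assumed default ports as in the firewalld section):

```shell
sudo ufw allow proto tcp from 192.0.2.0/24 to any port 3306
sudo ufw allow proto tcp from 192.0.2.0/24 to any port 8600:8630
sudo ufw allow proto tcp from 192.0.2.0/24 to any port 8640
sudo ufw allow proto tcp from 192.0.2.0/24 to any port 8700
sudo ufw allow proto tcp from 192.0.2.0/24 to any port 8800
sudo ufw reload
```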

Next Step

    Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology".

    This page was step 4 of 9.



    MCS backup and restore commands

    This page documents how to create and restore MariaDB Enterprise ColumnStore backups using the mcs CLI.

    The mcs backup and mcs restore commands support the same workflows as the mcs_backup_manager.sh script, including:

    • Full and incremental backups

    • Local/shared storage and S3 storage topologies

    • Optional compression and parallelism

    • Separate DBRM (metadata) backup/restore workflows

Note: The examples on this page assume the mcs command is available on the host and that you run the backup/restore operations as root.

Before You Start

Identify Your Storage Topology

    On a ColumnStore node, determine which StorageManager service is configured:

    Example output:

    • service = LocalStorage

    • service = S3

Note: Use service = LocalStorage when ColumnStore data lives on local/shared storage, and service = S3 when ColumnStore data is stored in object storage.

Estimate Backup Size

    LocalStorage:

    S3:

Backups

LocalStorage Topology Backups

Instructions

    1. Run mcs backup as root on each node, starting with the primary node.

    2. Use the same backup location on each node.

List Your Backups

    Example output:

Quick Examples

    Full backup:

    Parallel backup:

    Compressed backup:

    Incremental backup (auto-select most recent full backup):

    Save the backup to a remote host (SCP):
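A sketch of these invocations using the flags documented on this page (the remote host address is an example):

```shell
# Full backup to the default backup location
sudo mcs backup

# Parallel backup with 8 rsync/compression threads
sudo mcs backup --parallel 8

# Compressed backup (pigz)
sudo mcs backup --compress pigz

# Incremental backup on top of the most recent full backup
sudo mcs backup --incremental auto_most_recent

# Save the backup to a remote host over SSH/rsync
sudo mcs backup --backup-destination Remote --secure-copy-protocol root@192.0.2.5
```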

Online Backup Example

    When you run a backup, by default the tooling performs polling checks and attempts to obtain a consistent point-in-time backup by:

    • checking for active writes

    • checking for running cpimport jobs

    • issuing write locks

    You can skip these safety mechanisms with:

    • --skip-polls

    • --skip-locks

    • --skip-save-brm

Warning: Skipping polls/locks/BRM saving can be useful for certain workflows, but it increases the risk of capturing a partially written state that complicates restore.

Incremental Backup Example

Warning: Before you can run an incremental backup, you need an existing full backup.

When taking an incremental backup, specify the full backup to increment via the --incremental flag.

Incremental backups add ColumnStore deltas to an existing full backup. You can either:

• specify the full backup folder name explicitly, or

• use auto_most_recent to apply the incremental backup to the most recent full backup in --backup-location

    Apply to the most recent full backup:

    Apply to a specific full backup folder:

Cron Backup Example

    Create a cron job (run as root) that takes periodic backups and appends logs:

Nightly full backup, retaining the last 14 days:

Full backup once a week (Saturday night) with incremental backups on the other nights, keeping 21 days:
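These schedules can be sketched as /etc/cron.d entries (paths and schedules are examples; adjust to your environment):

```shell
# Nightly full backup, keeping 14 days
0 0 * * * root mcs backup -r 14 -q >> /var/log/columnstore_backup.log 2>&1

# Weekly full backup Saturday night, incrementals the other nights, keeping 21 days
0 0 * * 6   root mcs backup -r 21 -q >> /var/log/columnstore_backup.log 2>&1
0 0 * * 0-5 root mcs backup -r 21 -i auto_most_recent -q >> /var/log/columnstore_backup.log 2>&1
```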

LocalStorage Backup Flags

The most commonly used options are described in the flag reference later on this page.

S3 Topology Backups

Instructions

    1. Ensure the node has access to your S3 endpoint and credentials.

    2. Run mcs backup with --storage S3 and a backup bucket (--backup-bucket).

    3. Run it as root

Note: If you're using an on-premise S3-compatible solution, you may need --endpoint-url (and sometimes --no-verify-ssl).

Quick Examples

    Full backup:

    Compressed backup (and skip copying bucket data if you only want local artifacts):

    Incremental backup:

On-premise S3 endpoint: key flags for on-premise buckets are the following:

• -url - the local address/IP of the S3 provider. For example, MinIO defaults to port 9000; 127.0.0.1 would be used if MinIO is installed on the same machine running ColumnStore

• --no-verify-ssl - used when SSL certs are not used/defined for the S3 provider/endpoint
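A sketch of these S3 invocations (the bucket name and endpoint are examples):

```shell
# Full backup to an S3 backup bucket
sudo mcs backup --storage S3 --backup-bucket s3://my-cs-backups

# On-premise S3-compatible endpoint, e.g. MinIO on the same host
sudo mcs backup --storage S3 --backup-bucket s3://my-cs-backups \
  --endpoint-url http://127.0.0.1:9000 --no-verify-ssl

# Incremental backup on top of the most recent full backup in the bucket
sudo mcs backup --storage S3 --backup-bucket s3://my-cs-backups \
  --incremental auto_most_recent
```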

Cron Backup Example

    As with LocalStorage, you can schedule mcs backup in cron. Consider including --name-backup to avoid collisions.

S3 Backup Flags

The most commonly used S3-specific options are described in the flag reference later on this page.

Restore

LocalStorage Topology Restore

Instructions

Note: If the backup was made only on the primary node of a cluster that does not save backups to an NFS share, copy the primary node's backup mysql and configs directories to all nodes.

    1. List backups to find the folder name you want.

    2. Restore on each node, starting with the primary node.

Note: When running a ColumnStore backup, a restore.job file is created containing a command that can be run on each node to restore the backup.

List Your Backups to Restore

Quick Examples

    Standard restore:

    Compressed backup restore:
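For example (the folder name follows the default date-based naming; passing --compress pigz on restore for a compressed backup is an assumption based on the backup-side flag, so verify it against your tooling):

```shell
# List available backups, then restore on each node, primary first
sudo mcs backup --list
sudo mcs restore --load 12-29-2023

# Restore from a remote backup host over SSH
sudo mcs restore --load 12-29-2023 --backup-destination Remote --scp root@192.0.2.5

# Restore a backup that was compressed with pigz
sudo mcs restore --load 12-29-2023 --compress pigz
```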

LocalStorage Restore Flags

Common options are described in the flag reference later on this page.

S3 Topology Restore

Instructions

    1. Use the same backup bucket that contains the backup.

    2. Restore on each node, starting with the primary node.

Note: When running a ColumnStore backup, a restoreS3.job file is created containing a command that can be run on each node to restore the backup.

Quick Examples

Standard Restore:

On-premise S3 Endpoint:

Restoring to a New Bucket:

Key flags for restoring to a new bucket:

• -nb - the name of the new bucket to copy the backup into and configure ColumnStore to use post-restore

• -nr - the region of the new bucket to configure ColumnStore to use post-restore

• -nk - the access key of the new bucket to configure ColumnStore to use post-restore

• -ns - the secret of the new bucket to configure ColumnStore to use post-restore

S3 Restore Flags

Common options are described in the flag reference later on this page.

DBRM Backups

Both S3 and LocalStorage use the same commands for DBRM backups.

Warning: DBRM backups are intended for backing up internal ColumnStore metadata only.

Instructions

Run mcs dbrm_backup as root with the appropriate flags, on the primary node ONLY.

List Your DBRM Backups

Quick Examples

Standard dbrm_backup:

dbrm_backup before upgrade:
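A sketch of the two cases (the backup name is an example; --name-backup on dbrm_backup is an assumption carried over from the mcs backup flags, so verify it with mcs dbrm_backup --help):

```shell
# One-off DBRM metadata backup to the default location
sudo mcs dbrm_backup --mode once --backup-location /tmp/dbrm_backups

# Named safety backup before an upgrade, keeping 30 days of DBRM backups
sudo mcs dbrm_backup --mode once --retention-days 30 --name-backup dbrm_before_upgrade
```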

dbrm_backup Flags

Common options are described in the flag reference later on this page.

DBRM Restore

Instructions

    Both S3 and LocalStorage use the same commands for dbrm restore.

Warning: DBRM backups are intended for backing up internal ColumnStore metadata only.

    1. List available DBRM backups.

    2. Restore from the selected folder.

List Your DBRM Restore Options

Quick Examples

Standard dbrm_restore:
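For example (the backup directory name is hypothetical; list first to find the real one):

```shell
# List available DBRM backups, then restore a specific directory
sudo mcs dbrm_restore --list
sudo mcs dbrm_restore --backup-location /tmp/dbrm_backups --load dbrm_backup_example
```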

dbrm_restore Flags

Common options are described in the flag reference later on this page.

    Multi-Node Localstorage

    This guide provides steps for deploying a multi-node ColumnStore, setting up the environment, installing the software, and bulk importing data for online analytical processing (OLAP) workloads.



Step 8: Test MariaDB MaxScale

Overview

    This page details step 8 of the 9-step procedure "Deploy ColumnStore Shared Local Storage Topology".

    This step tests MariaDB MaxScale 22.08.



Flag Reference

The following consolidates the commonly used flags for mcs backup, mcs restore, mcs dbrm_backup, and mcs dbrm_restore.

Backup flags (mcs backup):

• --backup-location (-bl): Path where the backup should be saved. A date-based folder under this path is created per backup run automatically; the folder name can be changed with -nb. Typical default in tooling: /tmp/backups/

• --backup-destination (-bd): Where backups are stored relative to the node running the command. Two possible values: Local or Remote. Determines whether the backup requires SSH (so -scp needs to be defined too) or whether the -bl path is relative to the script.

• --secure-copy-protocol (-scp): If --backup-destination is Remote, defines the SSH connection to remotely rsync to. Format: user@host. Example: -bd Remote -scp root@192.168.0.1

• --storage (-s): Storage topology. Must be set to S3 for object storage workflows.

• --backup-bucket (-bb): Bucket where backups are stored. Example: s3://my-cs-backups

• --endpoint-url (-url): Custom S3 endpoint URL, for on-premise S3 vendors (MinIO, etc.).

• --no-verify-ssl (-nv-ssl): Skip verifying SSL certificates. Use with caution; primarily for test/non-standard TLS.

• --incremental (-i): Creates an incremental backup based on an existing full backup. Value can be <folder> or auto_most_recent.

• --parallel (-P): Enables parallel rsync and defines the number of parallel rsync threads to run. If combined with --compress, this flag defines the number of compression threads. Default is 4. Example: --parallel 8

• --compress (-c): Compress the backup in the given format. Supported: pigz

• --quiet (-q): Silence verbose copy command outputs. Useful for cron jobs.

• --name-backup (-nb): Define the name of the backup. Default: date +%m-%d-%Y. Example: -nb before-upgrade-backup

• --retention-days (-r): Retain backups created within the last X days; delete older ones. 0 means keep all.

• --apply-retention-only (-aro): Only apply the retention policy; do not run a backup. Works with --retention-days.

• --list (-li): List backups in the configured location.

• --skip-storage-manager (-ssm): Skip backing up the storagemanager directory. Support-guided workflows.

• --skip-save-brm (-sbrm): Skip saving BRM prior to copying. Can produce a dirtier backup.

• --skip-locks (-slock): Skip issuing read locks. Support-guided workflows.

• --skip-polls (-spoll): Skip polling to confirm locks are released. Support-guided workflows.

Restore flags (mcs restore):

• --load (-l): Backup folder name to restore. Required for restore.

• --backup-location (-bl): Where backups are located. Example: /tmp/backups/

• --backup-destination (-bd): Whether the backup is on the local server or a remote server. Local or Remote.

• --scp (-scp): SCP source used when --backup-destination is Remote. Format: user@host

• --backup-bucket (-bb): Backup bucket to restore from. Example: s3://my-cs-backups

• --endpoint-url (-url): Custom S3 endpoint URL, for on-premise S3 vendors.

• --no-verify-ssl (-nv-ssl): Skip verifying SSL certificates. Use with caution.

• --skip-mariadb-backup (-smdb): Skip restoring MariaDB server data. Use when restoring ColumnStore only.

• --skip-storage-manager (-ssm): Skip restoring the storagemanager directory. Support-guided workflows.

• --new-bucket (-nb): New bucket to restore data into. Use when restoring into a different bucket. S3 only.

• --new-region (-nr): Region for --new-bucket. S3 only.

• --new-key (-nk): Access key for --new-bucket. S3 only.

• --new-secret (-ns): Secret for --new-bucket. S3 only.

• --continue (-cont): Allow deleting data in --new-bucket during restore. S3 only; dangerous if the bucket contains important data.

DBRM backup flags (mcs dbrm_backup):

• --backup-location (-bl): Where DBRM backups are written. Default example: /tmp/dbrm_backups

• --retention-days (-r): Retain DBRM backups created within the last X days; older backups are deleted.

• --mode (-m): Run mode: once or loop.

• --interval (-i): Sleep interval in minutes; only used in loop mode.

• --quiet (-q): Silence verbose copy output. Useful for cron.

• --list (-li): List available DBRM backups.

DBRM restore flags (mcs dbrm_restore):

• --backup-location (-bl): Where DBRM backups exist on disk. Example: /tmp/dbrm_backups

• --load (-l): Backup directory name to restore. Required for restore.

• --no-start (-ns): Do not attempt ColumnStore startup after restore. Useful for manual recovery steps.

• --skip-dbrm-backup (-sdbk): Skip taking a safety backup before restoring.

    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

Check Global Configuration

Use the maxctrl show maxscale command to view the global MaxScale configuration.

    This action is performed on the MaxScale node:

    Output should align to the global MaxScale configuration in the new configuration file you created.

Check Server Configuration

Use the maxctrl list servers and maxctrl show server commands to view the configured server objects.

    This action is performed on the MaxScale node:

1. Obtain the full list of server objects:

2. For each server object, view the configuration:

    Output should align to the Server Object configuration you performed.

Check Monitor Configuration

Use the maxctrl list monitors and maxctrl show monitor commands to view the configured monitors.

    This action is performed on the MaxScale node:

1. Obtain the full list of monitors:

2. For each monitor, view the monitor configuration:

    Output should align to the MariaDB Monitor (mariadbmon) configuration you performed.

Check Service Configuration

Use the maxctrl list services and maxctrl show service commands to view the configured routing services.

    This action is performed on the MaxScale node:

1. Obtain the full list of routing services:

2. For each service, view the service configuration:

Output should align to the Read Connection Router (readconnroute) or Read/Write Split Router (readwritesplit) configuration you performed.
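The four checks can be sketched as follows (the server, monitor, and service names are examples; substitute the object names from your MaxScale configuration):

```shell
maxctrl show maxscale             # global configuration
maxctrl list servers              # configured server objects
maxctrl show server mcs1          # details for one server
maxctrl list monitors             # configured monitors
maxctrl show monitor mcs_monitor  # details for one monitor
maxctrl list services             # configured routing services
maxctrl show service mcs_service  # details for one service
```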

Test Application User

    Applications should use a dedicated user account. The user account must be created on the primary server.

    When users connect to MaxScale, MaxScale authenticates the user connection before routing it to an Enterprise Server node. Enterprise Server authenticates the connection as originating from the IP address of the MaxScale node.

    The application users must have one user account with the host IP address of the application server and a second user account with the host IP address of the MaxScale node.

    The requirement of a duplicate user account can be avoided by enabling the proxy_protocol parameter for MaxScale and the proxy_protocol_networks for Enterprise Server.

Create a User to Connect from MaxScale

    This action is performed on the primary Enterprise ColumnStore node:

1. Connect to the primary Enterprise ColumnStore node:

2. Create the database user account for your MaxScale node:

    Replace 192.0.2.10 with the relevant IP address specification for your MaxScale node.

    Passwords should meet your organization's password policies.

3. Grant the privileges required by your application to the database user account for your MaxScale node:

    The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.

Create a User to Connect from the Application Server

    This action is performed on the primary Enterprise ColumnStore node:

    1. Create the database user account for your application server:

    Replace 192.0.2.11 with the relevant IP address specification for your application server.

    Passwords should meet your organization's password policies.

2. Grant the privileges required by your application to the database user account for your application server:

    The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.

Test Connection with Application User

    To test the connection, use the MariaDB Client from your application server to connect to an Enterprise ColumnStore node through MaxScale.

    This action is performed on a client connected to the MaxScale node:

Test Connection with Read Connection Router

    If you configured the Read Connection Router, confirm that MaxScale routes connections to the replica servers.

On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:

1. Open multiple terminals connected to your application server. In each, use MariaDB Client to connect to the listener port for the Read Connection Router (in the example, 3308):

    Use the application user credentials you created for the --user and --password options.

2. In each terminal, query the hostname and server_id system variables to identify which node you're connected to:

    Different terminals should return different values since MaxScale routes the connections to different nodes.

    Since the router was configured with the slave router option, the Read Connection Router only routes connections to replica servers.
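The check in each terminal can be sketched as follows (the MaxScale address, port, and user credentials are the examples used on this page):

```shell
# From the application server, connect through the Read Connection Router listener
mariadb --host 192.0.2.10 --port 3308 \
  --user app_user --password \
  -e "SELECT @@hostname, @@server_id;"
```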

Test Write Queries with Read/Write Split Router

    If you configured the Read/Write Split Router, confirm that MaxScale routes write queries on this router to the primary Enterprise ColumnStore node.

On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:

1. Open multiple terminals connected to your application server. In each, use MariaDB Client to connect to the listener port for the Read/Write Split Router (in the example, 3307):

    Use the application user credentials you created for the --user and --password options.

2. In one terminal, create the test table:

3. In each terminal, issue an INSERT statement to add a row to the example table with the values of the hostname and server_id system variables:

4. In one terminal, issue a SELECT statement to query the results:

While MaxScale is handling multiple connections from different terminals, it routed all connections to the current primary Enterprise ColumnStore node, which in the example is mcs1.

Test Read Queries with Read/Write Split Router

If you configured the Read/Write Split Router (readwritesplit), confirm that MaxScale routes read queries on this router to replica servers.

1. On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:

2. In a terminal connected to your application server, use MariaDB Client to connect to the listener port for the Read/Write Split Router (in the example, 3307):

    Use the application user credentials you created for the --user and --password options.

3. Query the hostname and server_id to identify which server MaxScale routed you to.

4. Resend the query:

    Confirm that MaxScale routes the SELECT statements to different replica servers.

For more information on different routing criteria, see slave_selection_criteria.

Next Step

    "Deploy ColumnStore Shared Local Storage Topology".

    This page was step 8 of 9.

    Next: Step 9: Import Data.


    spinner
    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    hashtag
    Check Global Configuration

    Use maxctrl show maxscalearrow-up-right command to view the global MaxScale configuration.

    This action is performed on the MaxScale node:

    Output should align to the global MaxScale configuration in the new configuration file you created.

    Check Server Configuration Use the maxctrl list serversarrow-up-right and maxctrl show serverarrow-up-right commands to view the configured server objects.

    This action is performed on the MaxScale node:

    1. Obtain the full list of servers objects:

    For each server object, view the configuration:

    Output should align to the Server Object configuration you performed.

Check Monitor Configuration

Use the maxctrl list monitors and maxctrl show monitor commands to view the configured monitors.

    This action is performed on the MaxScale node:

    1. Obtain the full list of monitors:

    1. For each monitor, view the monitor configuration:

The output should match the MariaDB Monitor (mariadbmon) configuration you performed.

Check Service Configuration

Use the maxctrl list services and maxctrl show service commands to view the configured routing services.

    This action is performed on the MaxScale node:

    1. Obtain the full list of routing services:

    1. For each service, view the service configuration:

The output should match the Read Connection Router (readconnroute) or Read/Write Split Router (readwritesplit) configuration you performed.

Test Application User

    Applications should use a dedicated user account. The user account must be created on the primary server.

    When users connect to MaxScale, MaxScale authenticates the user connection before routing it to an Enterprise Server node. Enterprise Server authenticates the connection as originating from the IP address of the MaxScale node.

Application users therefore require two accounts: one with the host IP address of the application server and a second with the host IP address of the MaxScale node.

The requirement for duplicate user accounts can be avoided by enabling the proxy_protocol parameter in MaxScale and setting the proxy_protocol_networks system variable in Enterprise Server.
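As a sketch of that alternative (file paths, section names, and the server object name are assumptions, not taken from this page): enable proxy_protocol on the MaxScale server object, and on each Enterprise Server node allow the MaxScale node's address in proxy_protocol_networks:

```ini
# /etc/maxscale.cnf -- server object (name "mcs1" assumed)
[mcs1]
type=server
address=192.0.2.1
port=3306
proxy_protocol=true

# MariaDB Enterprise Server configuration, on each node
[mariadb]
# Trust proxy headers only from the MaxScale node
proxy_protocol_networks=192.0.2.10
```

With this in place, Enterprise Server sees the client's original IP address, so only the application-server account is needed.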

Create a User to Connect from MaxScale

    This action is performed on the primary Enterprise ColumnStore node:

    1. Connect to the primary Enterprise ColumnStore node:

    1. Create the database user account for your MaxScale node:

    Replace 192.0.2.10 with the relevant IP address specification for your MaxScale node.

    Passwords should meet your organization's password policies.

    1. Grant the privileges required by your application to the database user account for your MaxScale node:

    The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.

Create a User to Connect from the Application Server

    This action is performed on the primary Enterprise ColumnStore node:

    1. Create the database user account for your application server:

    Replace 192.0.2.11 with the relevant IP address specification for your application server.

    Passwords should meet your organization's password policies.

1. Grant the privileges required by your application to the database user account for your application server:

    The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.

Test Connection with Application User

    To test the connection, use the MariaDB Client from your application server to connect to an Enterprise ColumnStore node through MaxScale.

    This action is performed on a client connected to the MaxScale node:

Test Connection with Read Connection Router

    If you configured the Read Connection Router, confirm that MaxScale routes connections to the replica servers.

1. On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:

1. Open multiple terminals connected to your application server. In each, use MariaDB Client to connect to the listener port for the Read Connection Router (in the example, 3308):

    Use the application user credentials you created for the --user and --password options.

1. In each terminal, query the hostname and server_id system variables to identify which server you're connected to:

    Different terminals should return different values since MaxScale routes the connections to different nodes.

    Since the router was configured with the slave router option, the Read Connection Router only routes connections to replica servers.
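The identification query in the step above can look like the following sketch; the exact statement is assumed from context rather than shown on this page:

```sql
-- Identify the backend node this connection was routed to
SELECT @@hostname, @@server_id;
```

Running this in each terminal should report a different replica per connection.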

Test Write Queries with Read/Write Split Router

    If you configured the Read/Write Split Router, confirm that MaxScale routes write queries on this router to the primary Enterprise ColumnStore node.

1. On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:

1. Open multiple terminals connected to your application server. In each, use MariaDB Client to connect to the listener port for the Read/Write Split Router (in the example, 3307):

    Use the application user credentials you created for the --user and --password options.

    1. In one terminal, create the test table:

1. In each terminal, issue an INSERT statement to add a row to the example table with the values of the hostname and server_id system variables:

    1. In one terminal, issue a SELECT statement to query the results:

Although MaxScale handles multiple connections from different terminals, it routes all write queries to the current primary Enterprise ColumnStore node, which in the example is mcs1.
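A minimal version of this write test might look as follows; the table name and schema are illustrative assumptions, not taken from the original page:

```sql
-- Create a test table (run in one terminal)
CREATE TABLE test.maxscale_write_test (
    hostname  VARCHAR(255),
    server_id INT
) ENGINE=ColumnStore;

-- Record which backend handled the write (run in each terminal)
INSERT INTO test.maxscale_write_test
SELECT @@hostname, @@server_id;

-- Every row should name the same primary node
SELECT * FROM test.maxscale_write_test;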

Test Read Queries with Read/Write Split Router

If you configured the Read/Write Split Router (readwritesplit), confirm that MaxScale routes read queries on this router to replica servers.

1. On the MaxScale node, use the maxctrl list listeners command to view the available listeners and ports:

1. In a terminal connected to your application server, use MariaDB Client to connect to the listener port for the Read/Write Split Router (readwritesplit) (in the example, 3307):

    Use the application user credentials you created for the --user and --password options.

1. Query the hostname and server_id system variables to identify which server MaxScale routed you to.

    1. Resend the query:

    Confirm that MaxScale routes the SELECT statements to different replica servers.
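The two query steps above amount to running the same identification statement more than once on the readwritesplit connection; a sketch (the exact query text is assumed from context):

```sql
-- Repeat on the same connection; the reported server_id
-- should vary as reads are routed across replicas
SELECT @@hostname, @@server_id;
```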

For more information on routing criteria, see slave_selection_criteria.

Next Step

    Navigation in the procedure "Deploy ColumnStore Object Storage Topology":

    This page was step 8 of 9.

Next: Step 9: Import Data.


    # Check which storage backend ColumnStore is using (local or S3)
    grep "service" /etc/columnstore/storagemanager.cnf
    # Estimate the size of local ColumnStore data
    du -sh /var/lib/columnstore/
    du -sh /var/lib/columnstore/storagemanager/
    # Estimate the size of data stored in the S3 bucket
    aws s3 ls s3://<bucketname> --recursive | grep -v -E "(Bucket: |Prefix: |LastWriteTime|^$|--)" | awk 'BEGIN {total=0}{total+=$3}END{print total/1024/1024/1024" GB"}'
    mcs backup --backup-location /tmp/backups/ --list
    Existing Backups
    --------------------------------------------------------------------------
    Options                                       Last-Updated  Extent Map      EM-Size      Journal-Size VBBM-Size  VSS-Size   Days Old
    12-03-2024                                    Dec 3 21:05   BRM_saves_em    77228        0            1884       2792       12
    11-21-2024                                    Nov 21 21:05   BRM_saves_em    77228        0            1884       2792       12
    --------------------------------------------------------------------------
    Restore with mcs restore --path /tmp/backups/ --directory <backup_folder_from_above>
    # Full backup with default options
    mcs backup
    # Full backup with 8 parallel threads
    mcs backup --parallel 8
    # Full backup compressed with pigz
    mcs backup --parallel 8 --compress pigz
    # Incremental backup on top of the most recent full backup
    mcs backup --incremental auto_most_recent
    # Full backup to a remote host via scp
    mcs backup --backup-destination Remote --scp root@192.68.0.1
    mcs backup -P 8
    # Run Full Backup
    mcs backup -P 8
    # Backup Complete @ /tmp/backups/12-03-2025
    # Run the incremental backup - updating the full backup 12-03-2025
    mcs backup --incremental 12-03-2025
    mcs backup --incremental auto_most_recent
    mcs backup --incremental <full-backup-folder>
    sudo crontab -e
    # Effectively daily at 00:00 (*/60 in the minute field and */24 in the hour field resolve to 0 0), 14-day retention
    */60 */24 * * *  mcs backup --parallel 4 -r 14 >> /tmp/cs_backup.log  2>&1
    # Weekly full backup on Saturday at 23:59, 21-day retention
    59 23 * * 6 mcs backup -P 8 -r 21 >> /root/cs_backup.log 2>&1
    # Incremental backups Sunday through Friday at 23:59
    59 23 * * 0-5 mcs backup --incremental auto_most_recent -r 21 >> /root/cs_backup.log 2>&1
    # Incremental backup on Sunday (day 7) at 23:59
    59 23 * * 7 mcs backup --incremental auto_most_recent -r 21 >> /root/cs_backup.log 2>&1
    # Full backup to an S3 bucket
    mcs backup --storage S3 --backup-bucket s3://my-cs-backups
    # Compressed, quiet S3 backup that skips copying the bucket data
    mcs backup --storage S3 --backup-bucket s3://my-cs-backups --compress pigz --quiet --skip-bucket-data
    # Incremental backup into the 12-18-2023 full backup in a GCS bucket
    mcs backup --storage S3 --backup-bucket gs://my-cs-backups --incremental 12-18-2023
    # Backup to an on-premise S3-compatible endpoint
    mcs backup --storage S3 --backup-bucket s3://my-on-premise-bucket --endpoint-url http://127.0.0.1:8000
    # Copy the mysql and configuration backups to the other nodes (pm2, pm3)
    scp /tmp/backups/12-03-2024/mysql root@pm2:/tmp/backups/12-03-2024/mysql
    scp /tmp/backups/12-03-2024/configs root@pm2:/tmp/backups/12-03-2024/configs

    scp /tmp/backups/12-03-2024/mysql root@pm3:/tmp/backups/12-03-2024/mysql
    scp /tmp/backups/12-03-2024/configs root@pm3:/tmp/backups/12-03-2024/configs
    mcs restore --backup-location /tmp/backups/ --list
    # on primary
    mcs cluster stop
    
    # on every node
    systemctl stop mariadb
    systemctl stop mariadb-columnstore-cmapi
    # Restore the 12-03-2024 backup from a custom backup location
    mcs restore -l 12-03-2024 -bl /mnt/custom/path/
    # Restore a pigz-compressed backup
    mcs restore -l 11-21-2024 -bl /mnt/custom/path/ -c pigz
    # Restore from an S3 bucket
    mcs restore -s S3 -bb s3://my-cs-backups  -l 12-03-2025
    # Restore from an on-premise S3-compatible endpoint
    mcs restore -s S3 -bb gs://on-premise-bucket -l 12-03-2025 -url http://127.0.0.1:8000
    # Restore into a new bucket, region, and credentials (-nb/-nr/-nk/-ns), highly available mode (-ha)
    mcs restore -s S3 -bb s3://my-cs-backups -l 11-21-2022 -nb s3://new-data-bucket -nr us-east-1 -nk AKIAxxxxxxx3FHCADF -ns GGGuxxxxxxxxxxnqa72csk5 -ha
    # on primary
    mcs cluster stop
    
    # on every node
    systemctl stop mariadb
    systemctl stop mariadb-columnstore-cmapi
    mcs restore -s S3 -bb s3://my-cs-backups -l 11-21-2024
    mcs restore -s S3 -bb gs://on-premise-bucket -l 12-03-2021 -url http://127.0.0.1:9000
    mcs restore -s S3 -bb s3://my-cs-backups -l 11-21-2022 -nb s3://new-data-bucket -nr us-east-1 -nk AKIAxxxxxxx3FHCADF -ns GGGuxxxxxxxxxxnqa72csk5
    # List existing DBRM backups
    mcs dbrm_backup --list
    # Take a DBRM backup every 90 seconds in a loop, keeping 7 days
    mcs dbrm_backup --mode loop --interval 90 --retention-days 7 --backup-location /mnt/dbrm_backups
    # Take a single named DBRM backup
    mcs dbrm_backup --mode once --retention-days 7 --backup-location /mnt/dbrm_backups -nb my-one-off-backup
    # DBRM backup with default options
    mcs dbrm_backup
    # Named DBRM backup, for example before an upgrade
    mcs dbrm_backup -nb before_upgrade_11.21.2024_dbrm_backup
    mcs dbrm_restore --list
    ./mcs_backup_manager.sh dbrm_restore --backup-location /tmp/dbrm_backups --load dbrm_backup_20241203_172842
    ./mcs_backup_manager.sh dbrm_restore --backup-location /tmp/dbrm_backups --load dbrm_backup_20241203_172842 --no-start
    mcs dbrm_restore --backup-location /tmp/dbrm_backups --load <dbrm_backup_folder>
    $ maxctrl show maxscale
    ┌──────────────┬───────────────────────────────────────────────────────┐
    │ Version      │ 22.08.15                                              │
    ├──────────────┼───────────────────────────────────────────────────────┤
    │ Commit       │ 3761fa7a52046bc58faad8b5a139116f9e33364c              │
    ├──────────────┼───────────────────────────────────────────────────────┤
    │ Started At   │ Thu, 05 Aug 2021 20:21:20 GMT                         │
    ├──────────────┼───────────────────────────────────────────────────────┤
    │ Activated At │ Thu, 05 Aug 2021 20:21:20 GMT                         │
    ├──────────────┼───────────────────────────────────────────────────────┤
    │ Uptime       │ 868                                                   │
    ├──────────────┼───────────────────────────────────────────────────────┤
    │ Config Sync  │ null                                                  │
    ├──────────────┼───────────────────────────────────────────────────────┤
    │ Parameters   │ {                                                     │
    │              │     "admin_auth": true,                               │
    │              │     "admin_enabled": true,                            │
    │              │     "admin_gui": true,                                │
    │              │     "admin_host": "0.0.0.0",                          │
    │              │     "admin_log_auth_failures": true,                  │
    │              │     "admin_pam_readonly_service": null,               │
    │              │     "admin_pam_readwrite_service": null,              │
    │              │     "admin_port": 8989,                               │
    │              │     "admin_secure_gui": false,                        │
    │              │     "admin_ssl_ca_cert": null,                        │
    │              │     "admin_ssl_cert": null,                           │
    │              │     "admin_ssl_key": null,                            │
    │              │     "admin_ssl_version": "MAX",                       │
    │              │     "auth_connect_timeout": "10000ms",                │
    │              │     "auth_read_timeout": "10000ms",                   │
    │              │     "auth_write_timeout": "10000ms",                  │
    │              │     "cachedir": "/var/cache/maxscale",                │
    │              │     "config_sync_cluster": null,                      │
    │              │     "config_sync_interval": "5000ms",                 │
    │              │     "config_sync_password": "*****",                  │
    │              │     "config_sync_timeout": "10000ms",                 │
    │              │     "config_sync_user": null,                         │
    │              │     "connector_plugindir": "/usr/lib64/mysql/plugin", │
    │              │     "datadir": "/var/lib/maxscale",                   │
    │              │     "debug": null,                                    │
    │              │     "dump_last_statements": "never",                  │
    │              │     "execdir": "/usr/bin",                            │
    │              │     "language": "/var/lib/maxscale",                  │
    │              │     "libdir": "/usr/lib64/maxscale",                  │
    │              │     "load_persisted_configs": true,                   │
    │              │     "local_address": null,                            │
    │              │     "log_debug": false,                               │
    │              │     "log_info": false,                                │
    │              │     "log_notice": true,                               │
    │              │     "log_throttling": {                               │
    │              │         "count": 10,                                  │
    │              │         "suppress": 10000,                            │
    │              │         "window": 1000                                │
    │              │     },                                                │
    │              │     "log_warn_super_user": false,                     │
    │              │     "log_warning": true,                              │
    │              │     "logdir": "/var/log/maxscale",                    │
    │              │     "max_auth_errors_until_block": 10,                │
    │              │     "maxlog": true,                                   │
    │              │     "module_configdir": "/etc/maxscale.modules.d",    │
    │              │     "ms_timestamp": false,                            │
    │              │     "passive": false,                                 │
    │              │     "persistdir": "/var/lib/maxscale/maxscale.cnf.d", │
    │              │     "piddir": "/var/run/maxscale",                    │
    │              │     "query_classifier": "qc_sqlite",                  │
    │              │     "query_classifier_args": null,                    │
    │              │     "query_classifier_cache_size": 289073971,         │
    │              │     "query_retries": 1,                               │
    │              │     "query_retry_timeout": "5000ms",                  │
    │              │     "rebalance_period": "0ms",                        │
    │              │     "rebalance_threshold": 20,                        │
    │              │     "rebalance_window": 10,                           │
    │              │     "retain_last_statements": 0,                      │
    │              │     "session_trace": 0,                               │
    │              │     "skip_permission_checks": false,                  │
    │              │     "sql_mode": "default",                            │
    │              │     "syslog": true,                                   │
    │              │     "threads": 1,                                     │
    │              │     "users_refresh_interval": "0ms",                  │
    │              │     "users_refresh_time": "30000ms",                  │
    │              │     "writeq_high_water": 16777216,                    │
    │              │     "writeq_low_water": 8192                          │
    │              │ }                                                     │
    └──────────────┴───────────────────────────────────────────────────────┘
    $ maxctrl list servers
    ┌────────┬────────────────┬──────┬─────────────┬─────────────────┬────────┐
    │ Server │ Address        │ Port │ Connections │ State           │ GTID   │
    ├────────┼────────────────┼──────┼─────────────┼─────────────────┼────────┤
    │ mcs1   │ 192.0.2.1      │ 3306 │ 1           │ Master, Running │ 0-1-25 │
    ├────────┼────────────────┼──────┼─────────────┼─────────────────┼────────┤
    │ mcs2   │ 192.0.2.2      │ 3306 │ 1           │ Slave, Running  │ 0-1-25 │
    ├────────┼────────────────┼──────┼─────────────┼─────────────────┼────────┤
    │ mcs3   │ 192.0.2.3      │ 3306 │ 1           │ Slave, Running  │ 0-1-25 │
    └────────┴────────────────┴──────┴─────────────┴─────────────────┴────────┘
    $ maxctrl show server mcs1
    ┌─────────────────────┬───────────────────────────────────────────┐
    │ Server              │ mcs1                                      │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Address             │ 192.0.2.1                                 │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Port                │ 3306                                      │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ State               │ Master, Running                           │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Version             │ 11.4.5-3-MariaDB-enterprise-log           │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Last Event          │ master_up                                 │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Triggered At        │ Thu, 05 Aug 2021 20:22:26 GMT             │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Services            │ connection_router_service                 │
    │                     │ query_router_service                      │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Monitors            │ columnstore_monitor                       │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Master ID           │ -1                                        │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Node ID             │ 1                                         │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Slave Server IDs    │                                           │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Current Connections │ 1                                         │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Total Connections   │ 1                                         │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Max Connections     │ 1                                         │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Statistics          │ {                                         │
    │                     │     "active_operations": 0,               │
    │                     │     "adaptive_avg_select_time": "0ns",    │
    │                     │     "connection_pool_empty": 0,           │
    │                     │     "connections": 1,                     │
    │                     │     "max_connections": 1,                 │
    │                     │     "max_pool_size": 0,                   │
    │                     │     "persistent_connections": 0,          │
    │                     │     "reused_connections": 0,              │
    │                     │     "routed_packets": 0,                  │
    │                     │     "total_connections": 1                │
    │                     │ }                                         │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Parameters          │ {                                         │
    │                     │     "address": "192.0.2.1",               │
    │                     │     "disk_space_threshold": null,         │
    │                     │     "extra_port": 0,                      │
    │                     │     "monitorpw": null,                    │
    │                     │     "monitoruser": null,                  │
    │                     │     "persistmaxtime": "0ms",              │
    │                     │     "persistpoolmax": 0,                  │
    │                     │     "port": 3306,                         │
    │                     │     "priority": 0,                        │
    │                     │     "proxy_protocol": false,              │
    │                     │     "rank": "primary",                    │
    │                     │     "socket": null,                       │
    │                     │     "ssl": false,                         │
    │                     │     "ssl_ca_cert": null,                  │
    │                     │     "ssl_cert": null,                     │
    │                     │     "ssl_cert_verify_depth": 9,           │
    │                     │     "ssl_cipher": null,                   │
    │                     │     "ssl_key": null,                      │
    │                     │     "ssl_verify_peer_certificate": false, │
    │                     │     "ssl_verify_peer_host": false,        │
    │                     │     "ssl_version": "MAX"                  │
    │                     │ }                                         │
    └─────────────────────┴───────────────────────────────────────────┘
    $ maxctrl list monitors
    ┌─────────────────────┬─────────┬──────────────────┐
    │ Monitor             │ State   │ Servers          │
    ├─────────────────────┼─────────┼──────────────────┤
    │ columnstore_monitor │ Running │ mcs1, mcs2, mcs3 │
    └─────────────────────┴─────────┴──────────────────┘
    $ maxctrl show monitor columnstore_monitor
    ┌─────────────────────┬─────────────────────────────────────┐
    │ Monitor             │ columnstore_monitor                 │
    ├─────────────────────┼─────────────────────────────────────┤
    │ Module              │ mariadbmon                          │
    ├─────────────────────┼─────────────────────────────────────┤
    │ State               │ Running                             │
    ├─────────────────────┼─────────────────────────────────────┤
    │ Servers             │ mcs1                                │
    │                     │ mcs2                                │
    │                     │ mcs3                                │
    ├─────────────────────┼─────────────────────────────────────┤
    │ Parameters          │ {                                   │
    │                     │     "backend_connect_attempts": 1,  │
    │                     │     "backend_connect_timeout": 3,   │
    │                     │     "backend_read_timeout": 3,      │
    │                     │     "backend_write_timeout": 3,     │
    │                     │     "disk_space_check_interval": 0, │
    │                     │     "disk_space_threshold": null,   │
    │                     │     "events": "all",                │
    │                     │     "journal_max_age": 28800,       │
    │                     │     "module": "mariadbmon",         │
    │                     │     "monitor_interval": 2000,       │
    │                     │     "password": "*****",            │
    │                     │     "script": null,                 │
    │                     │     "script_timeout": 90,           │
    │                     │     "user": "mxs"                   │
    │                     │ }                                   │
    ├─────────────────────┼─────────────────────────────────────┤
    │ Monitor Diagnostics │ {}                                  │
    └─────────────────────┴─────────────────────────────────────┘
    $ maxctrl list services
    ┌───────────────────────────┬────────────────┬─────────────┬───────────────────┬──────────────────┐
    │ Service                   │ Router         │ Connections │ Total Connections │ Servers          │
    ├───────────────────────────┼────────────────┼─────────────┼───────────────────┼──────────────────┤
    │ connection_router_service │ readconnroute  │ 0           │ 0                 │ mcs1, mcs2, mcs3 │
    ├───────────────────────────┼────────────────┼─────────────┼───────────────────┼──────────────────┤
    │ query_router_service      │ readwritesplit │ 0           │ 0                 │ mcs1, mcs2, mcs3 │
    └───────────────────────────┴────────────────┴─────────────┴───────────────────┴──────────────────┘
    $ maxctrl show service query_router_service
    ┌─────────────────────┬─────────────────────────────────────────────────────────────┐
    │ Service             │ query_router_service                                        │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Router              │ readwritesplit                                              │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ State               │ Started                                                     │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Started At          │ Sat Aug 28 21:41:16 2021                                    │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Current Connections │ 0                                                           │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Total Connections   │ 0                                                           │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Max Connections     │ 0                                                           │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Cluster             │                                                             │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Servers             │ mcs1                                                        │
    │                     │ mcs2                                                        │
    │                     │ mcs3                                                        │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Services            │                                                             │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Filters             │                                                             │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Parameters          │ {                                                           │
    │                     │     "auth_all_servers": false,                              │
    │                     │     "causal_reads": "false",                                │
    │                     │     "causal_reads_timeout": "10000ms",                      │
    │                     │     "connection_keepalive": "300000ms",                     │
    │                     │     "connection_timeout": "0ms",                            │
    │                     │     "delayed_retry": false,                                 │
    │                     │     "delayed_retry_timeout": "10000ms",                     │
    │                     │     "disable_sescmd_history": false,                        │
    │                     │     "enable_root_user": false,                              │
    │                     │     "idle_session_pool_time": "-1000ms",                    │
    │                     │     "lazy_connect": false,                                  │
    │                     │     "localhost_match_wildcard_host": true,                  │
    │                     │     "log_auth_warnings": true,                              │
    │                     │     "master_accept_reads": false,                           │
    │                     │     "master_failure_mode": "fail_instantly",                │
    │                     │     "master_reconnection": false,                           │
    │                     │     "max_connections": 0,                                   │
    │                     │     "max_sescmd_history": 50,                               │
    │                     │     "max_slave_connections": 255,                           │
    │                     │     "max_slave_replication_lag": "0ms",                     │
    │                     │     "net_write_timeout": "0ms",                             │
    │                     │     "optimistic_trx": false,                                │
    │                     │     "password": "*****",                                    │
    │                     │     "prune_sescmd_history": true,                           │
    │                     │     "rank": "primary",                                      │
    │                     │     "retain_last_statements": -1,                           │
    │                     │     "retry_failed_reads": true,                             │
    │                     │     "reuse_prepared_statements": false,                     │
    │                     │     "router": "readwritesplit",                             │
    │                     │     "session_trace": false,                                 │
    │                     │     "session_track_trx_state": false,                       │
    │                     │     "slave_connections": 255,                               │
    │                     │     "slave_selection_criteria": "LEAST_CURRENT_OPERATIONS", │
    │                     │     "strict_multi_stmt": false,                             │
    │                     │     "strict_sp_calls": false,                               │
    │                     │     "strip_db_esc": true,                                   │
    │                     │     "transaction_replay": false,                            │
    │                     │     "transaction_replay_attempts": 5,                       │
    │                     │     "transaction_replay_max_size": 1073741824,              │
    │                     │     "transaction_replay_retry_on_deadlock": false,          │
    │                     │     "type": "service",                                      │
    │                     │     "use_sql_variables_in": "all",                          │
    │                     │     "user": "mxs",                                          │
    │                     │     "version_string": null                                  │
    │                     │ }                                                           │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Router Diagnostics  │ {                                                           │
    │                     │     "avg_sescmd_history_length": 0,                         │
    │                     │     "max_sescmd_history_length": 0,                         │
    │                     │     "queries": 0,                                           │
    │                     │     "replayed_transactions": 0,                             │
    │                     │     "ro_transactions": 0,                                   │
    │                     │     "route_all": 0,                                         │
    │                     │     "route_master": 0,                                      │
    │                     │     "route_slave": 0,                                       │
    │                     │     "rw_transactions": 0,                                   │
    │                     │     "server_query_statistics": []                           │
    │                     │ }                                                           │
    └─────────────────────┴─────────────────────────────────────────────────────────────┘
    $ sudo mariadb
    CREATE USER 'app_user'@'192.0.2.10' IDENTIFIED BY 'app_user_passwd';
    GRANT ALL ON test.* TO 'app_user'@'192.0.2.10';
    CREATE USER 'app_user'@'192.0.2.11' IDENTIFIED BY 'app_user_passwd';
    GRANT ALL ON test.* TO 'app_user'@'192.0.2.11';
    $ mariadb --host 192.0.2.10 --port 3307 \
          --user app_user --password
    $ maxctrl list listeners
    ┌────────────────────────────┬──────┬──────┬─────────┬───────────────────────────┐
    │ Name                       │ Port │ Host │ State   │ Service                   │
    ├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
    │ connection_router_listener │ 3308 │ ::   │ Running │ connection_router_service │
    ├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
    │ query_router_listener      │ 3307 │ ::   │ Running │ query_router_service      │
    └────────────────────────────┴──────┴──────┴─────────┴───────────────────────────┘
    $ mariadb --host 192.0.2.10 --port 3308 \
          --user app_user --password
    SELECT @@global.hostname, @@global.server_id;
    
    +-------------------+--------------------+
    | @@global.hostname | @@global.server_id |
    +-------------------+--------------------+
    | mcs2              |                  2 |
    +-------------------+--------------------+
    $ maxctrl list listeners
    ┌────────────────────────────┬──────┬──────┬─────────┬───────────────────────────┐
    │ Name                       │ Port │ Host │ State   │ Service                   │
    ├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
    │ connection_router_listener │ 3308 │ ::   │ Running │ connection_router_service │
    ├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
    │ query_router_listener      │ 3307 │ ::   │ Running │ query_router_service      │
    └────────────────────────────┴──────┴──────┴─────────┴───────────────────────────┘
    $ mariadb --host 192.0.2.10 --port 3307 \
          --user app_user --password
    CREATE TABLE test.load_balancing_test (
       id INT PRIMARY KEY AUTO_INCREMENT,
       hostname VARCHAR(256),
       server_id INT
    );
    INSERT INTO test.load_balancing_test (hostname, server_id)
    VALUES (@@global.hostname, @@global.server_id);
    SELECT * FROM test.load_balancing_test;
    +----+----------+-----------+
    | id | hostname | server_id |
    +----+----------+-----------+
    |  1 | mcs1     |         1 |
    |  2 | mcs1     |         1 |
    |  3 | mcs1     |         1 |
    +----+----------+-----------+
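All three INSERT statements above landed on mcs1, the primary, even though the connection went through MaxScale. The readwritesplit router classifies each statement and sends writes to the primary while distributing reads across replicas. A minimal Python sketch of that idea (a toy model, not MaxScale's actual SQL classifier):

```python
# Toy model of read/write splitting, loosely based on MaxScale's
# readwritesplit router. The real router uses a full SQL classifier;
# this sketch only inspects the first keyword of each statement.

READ_KEYWORDS = {"SELECT", "SHOW", "DESCRIBE", "EXPLAIN"}

def route(statement: str, primary: str, replicas: list, rr: int):
    """Return (target server, next round-robin index)."""
    keyword = statement.lstrip().split(None, 1)[0].upper()
    if keyword in READ_KEYWORDS:
        # Reads rotate across replicas (a stand-in for the
        # LEAST_CURRENT_OPERATIONS selection criteria).
        return replicas[rr % len(replicas)], rr + 1
    # Everything else is treated as a write and goes to the primary.
    return primary, rr

rr = 0
log = []
for stmt in [
    "INSERT INTO test.load_balancing_test (hostname, server_id) VALUES (...)",
    "INSERT INTO test.load_balancing_test (hostname, server_id) VALUES (...)",
    "SELECT * FROM test.load_balancing_test",
    "SELECT @@global.hostname, @@global.server_id",
]:
    target, rr = route(stmt, "mcs1", ["mcs2", "mcs3"], rr)
    log.append(target)

print(log)  # writes stick to mcs1, reads spread over mcs2/mcs3
```

This mirrors the transcript: the writes all appear on mcs1, while subsequent SELECT statements are free to land on mcs2 or mcs3.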
    $ maxctrl list listeners
    ┌────────────────────────────┬──────┬──────┬─────────┬───────────────────────────┐
    │ Name                       │ Port │ Host │ State   │ Service                   │
    ├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
    │ connection_router_listener │ 3308 │ ::   │ Running │ connection_router_service │
    ├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
    │ query_router_listener      │ 3307 │ ::   │ Running │ query_router_service      │
    └────────────────────────────┴──────┴──────┴─────────┴───────────────────────────┘
    $ mariadb --host 192.0.2.10 --port 3307 \
          --user app_user --password
    SELECT @@global.hostname, @@global.server_id;
    +-------------------+--------------------+
    | @@global.hostname | @@global.server_id |
    +-------------------+--------------------+
    | mcs2              |                  2 |
    +-------------------+--------------------+
    SELECT @@global.hostname, @@global.server_id;
    +-------------------+--------------------+
    | @@global.hostname | @@global.server_id |
    +-------------------+--------------------+
    | mcs3              |                  3 |
    +-------------------+--------------------+
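Consecutive SELECT statements above were answered by mcs2 and then mcs3 because query_router_service is configured with slave_selection_criteria set to LEAST_CURRENT_OPERATIONS: each read goes to the replica with the fewest in-flight operations. The core idea can be sketched in a few lines of Python (a simplification; the real router also weighs rank, replication lag, and connection state):

```python
# Toy model of LEAST_CURRENT_OPERATIONS replica selection: each read
# is routed to the replica with the fewest in-flight operations,
# which spreads consecutive queries across the replica pool.

def pick_replica(active_ops):
    # min() breaks ties by dict insertion order, acting as a
    # stable preference list when counts are equal.
    return min(active_ops, key=active_ops.get)

active = {"mcs2": 0, "mcs3": 0}

first = pick_replica(active)   # tie, so the first replica wins
active[first] += 1             # its operation is now in flight

second = pick_replica(active)  # the other replica now has fewer ops
active[second] += 1

print(first, second)
```

Once a query completes, its replica's counter drops again, so the distribution stays balanced under real concurrency rather than strictly alternating.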
    $ maxctrl show maxscale
    ┌──────────────┬───────────────────────────────────────────────────────┐
    │ Version      │ 22.08.15                                              │
    ├──────────────┼───────────────────────────────────────────────────────┤
    │ Commit       │ 3761fa7a52046bc58faad8b5a139116f9e33364c              │
    ├──────────────┼───────────────────────────────────────────────────────┤
    │ Started At   │ Thu, 05 Aug 2021 20:21:20 GMT                         │
    ├──────────────┼───────────────────────────────────────────────────────┤
    │ Activated At │ Thu, 05 Aug 2021 20:21:20 GMT                         │
    ├──────────────┼───────────────────────────────────────────────────────┤
    │ Uptime       │ 868                                                   │
    ├──────────────┼───────────────────────────────────────────────────────┤
    │ Config Sync  │ null                                                  │
    ├──────────────┼───────────────────────────────────────────────────────┤
    │ Parameters   │ {                                                     │
    │              │     "admin_auth": true,                               │
    │              │     "admin_enabled": true,                            │
    │              │     "admin_gui": true,                                │
    │              │     "admin_host": "0.0.0.0",                          │
    │              │     "admin_log_auth_failures": true,                  │
    │              │     "admin_pam_readonly_service": null,               │
    │              │     "admin_pam_readwrite_service": null,              │
    │              │     "admin_port": 8989,                               │
    │              │     "admin_secure_gui": false,                        │
    │              │     "admin_ssl_ca_cert": null,                        │
    │              │     "admin_ssl_cert": null,                           │
    │              │     "admin_ssl_key": null,                            │
    │              │     "admin_ssl_version": "MAX",                       │
    │              │     "auth_connect_timeout": "10000ms",                │
    │              │     "auth_read_timeout": "10000ms",                   │
    │              │     "auth_write_timeout": "10000ms",                  │
    │              │     "cachedir": "/var/cache/maxscale",                │
    │              │     "config_sync_cluster": null,                      │
    │              │     "config_sync_interval": "5000ms",                 │
    │              │     "config_sync_password": "*****",                  │
    │              │     "config_sync_timeout": "10000ms",                 │
    │              │     "config_sync_user": null,                         │
    │              │     "connector_plugindir": "/usr/lib64/mysql/plugin", │
    │              │     "datadir": "/var/lib/maxscale",                   │
    │              │     "debug": null,                                    │
    │              │     "dump_last_statements": "never",                  │
    │              │     "execdir": "/usr/bin",                            │
    │              │     "language": "/var/lib/maxscale",                  │
    │              │     "libdir": "/usr/lib64/maxscale",                  │
    │              │     "load_persisted_configs": true,                   │
    │              │     "local_address": null,                            │
    │              │     "log_debug": false,                               │
    │              │     "log_info": false,                                │
    │              │     "log_notice": true,                               │
    │              │     "log_throttling": {                               │
    │              │         "count": 10,                                  │
    │              │         "suppress": 10000,                            │
    │              │         "window": 1000                                │
    │              │     },                                                │
    │              │     "log_warn_super_user": false,                     │
    │              │     "log_warning": true,                              │
    │              │     "logdir": "/var/log/maxscale",                    │
    │              │     "max_auth_errors_until_block": 10,                │
    │              │     "maxlog": true,                                   │
    │              │     "module_configdir": "/etc/maxscale.modules.d",    │
    │              │     "ms_timestamp": false,                            │
    │              │     "passive": false,                                 │
    │              │     "persistdir": "/var/lib/maxscale/maxscale.cnf.d", │
    │              │     "piddir": "/var/run/maxscale",                    │
    │              │     "query_classifier": "qc_sqlite",                  │
    │              │     "query_classifier_args": null,                    │
    │              │     "query_classifier_cache_size": 289073971,         │
    │              │     "query_retries": 1,                               │
    │              │     "query_retry_timeout": "5000ms",                  │
    │              │     "rebalance_period": "0ms",                        │
    │              │     "rebalance_threshold": 20,                        │
    │              │     "rebalance_window": 10,                           │
    │              │     "retain_last_statements": 0,                      │
    │              │     "session_trace": 0,                               │
    │              │     "skip_permission_checks": false,                  │
    │              │     "sql_mode": "default",                            │
    │              │     "syslog": true,                                   │
    │              │     "threads": 1,                                     │
    │              │     "users_refresh_interval": "0ms",                  │
    │              │     "users_refresh_time": "30000ms",                  │
    │              │     "writeq_high_water": 16777216,                    │
    │              │     "writeq_low_water": 8192                          │
    │              │ }                                                     │
    └──────────────┴───────────────────────────────────────────────────────┘
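The parameters listed above correspond to the global [maxscale] section of /etc/maxscale.cnf; anything not set explicitly keeps the default shown by `maxctrl show maxscale`. A minimal explicit configuration matching this output might look like the following (illustrative values only):

```ini
# Illustrative [maxscale] section; unlisted parameters keep the
# defaults reported by `maxctrl show maxscale`.
[maxscale]
threads          = 1
admin_host       = 0.0.0.0
admin_port       = 8989
admin_secure_gui = false
log_warning      = true
log_notice       = true
```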
    $ maxctrl list servers
    ┌────────┬────────────────┬──────┬─────────────┬─────────────────┬────────┐
    │ Server │ Address        │ Port │ Connections │ State           │ GTID   │
    ├────────┼────────────────┼──────┼─────────────┼─────────────────┼────────┤
    │ mcs1   │ 192.0.2.1      │ 3306 │ 1           │ Master, Running │ 0-1-25 │
    ├────────┼────────────────┼──────┼─────────────┼─────────────────┼────────┤
    │ mcs2   │ 192.0.2.2      │ 3306 │ 1           │ Slave, Running  │ 0-1-25 │
    ├────────┼────────────────┼──────┼─────────────┼─────────────────┼────────┤
    │ mcs3   │ 192.0.2.3      │ 3306 │ 1           │ Slave, Running  │ 0-1-25 │
    └────────┴────────────────┴──────┴─────────────┴─────────────────┴────────┘
    $ maxctrl show server mcs1
    ┌─────────────────────┬───────────────────────────────────────────┐
    │ Server              │ mcs1                                      │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Address             │ 192.0.2.1                                 │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Port                │ 3306                                      │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ State               │ Master, Running                           │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Version             │ 11.4.5-3-MariaDB-enterprise-log           │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Last Event          │ master_up                                 │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Triggered At        │ Thu, 05 Aug 2021 20:22:26 GMT             │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Services            │ connection_router_service                 │
    │                     │ query_router_service                      │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Monitors            │ columnstore_monitor                       │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Master ID           │ -1                                        │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Node ID             │ 1                                         │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Slave Server IDs    │                                           │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Current Connections │ 1                                         │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Total Connections   │ 1                                         │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Max Connections     │ 1                                         │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Statistics          │ {                                         │
    │                     │     "active_operations": 0,               │
    │                     │     "adaptive_avg_select_time": "0ns",    │
    │                     │     "connection_pool_empty": 0,           │
    │                     │     "connections": 1,                     │
    │                     │     "max_connections": 1,                 │
    │                     │     "max_pool_size": 0,                   │
    │                     │     "persistent_connections": 0,          │
    │                     │     "reused_connections": 0,              │
    │                     │     "routed_packets": 0,                  │
    │                     │     "total_connections": 1                │
    │                     │ }                                         │
    ├─────────────────────┼───────────────────────────────────────────┤
    │ Parameters          │ {                                         │
    │                     │     "address": "192.0.2.1",               │
    │                     │     "disk_space_threshold": null,         │
    │                     │     "extra_port": 0,                      │
    │                     │     "monitorpw": null,                    │
    │                     │     "monitoruser": null,                  │
    │                     │     "persistmaxtime": "0ms",              │
    │                     │     "persistpoolmax": 0,                  │
    │                     │     "port": 3306,                         │
    │                     │     "priority": 0,                        │
    │                     │     "proxy_protocol": false,              │
    │                     │     "rank": "primary",                    │
    │                     │     "socket": null,                       │
    │                     │     "ssl": false,                         │
    │                     │     "ssl_ca_cert": null,                  │
    │                     │     "ssl_cert": null,                     │
    │                     │     "ssl_cert_verify_depth": 9,           │
    │                     │     "ssl_cipher": null,                   │
    │                     │     "ssl_key": null,                      │
    │                     │     "ssl_verify_peer_certificate": false, │
    │                     │     "ssl_verify_peer_host": false,        │
    │                     │     "ssl_version": "MAX"                  │
    │                     │ }                                         │
    └─────────────────────┴───────────────────────────────────────────┘
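The server object shown above maps to a maxscale.cnf section like the following sketch; mcs2 and mcs3 are defined the same way with their own addresses:

```ini
# Illustrative server definition matching `maxctrl show server mcs1`.
[mcs1]
type    = server
address = 192.0.2.1
port    = 3306
```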
    $ maxctrl list monitors
    ┌─────────────────────┬─────────┬──────────────────┐
    │ Monitor             │ State   │ Servers          │
    ├─────────────────────┼─────────┼──────────────────┤
    │ columnstore_monitor │ Running │ mcs1, mcs2, mcs3 │
    └─────────────────────┴─────────┴──────────────────┘
    $ maxctrl show monitor columnstore_monitor
    ┌─────────────────────┬─────────────────────────────────────┐
    │ Monitor             │ columnstore_monitor                 │
    ├─────────────────────┼─────────────────────────────────────┤
    │ Module              │ mariadbmon                          │
    ├─────────────────────┼─────────────────────────────────────┤
    │ State               │ Running                             │
    ├─────────────────────┼─────────────────────────────────────┤
    │ Servers             │ mcs1                                │
    │                     │ mcs2                                │
    │                     │ mcs3                                │
    ├─────────────────────┼─────────────────────────────────────┤
    │ Parameters          │ {                                   │
    │                     │     "backend_connect_attempts": 1,  │
    │                     │     "backend_connect_timeout": 3,   │
    │                     │     "backend_read_timeout": 3,      │
    │                     │     "backend_write_timeout": 3,     │
    │                     │     "disk_space_check_interval": 0, │
    │                     │     "disk_space_threshold": null,   │
    │                     │     "events": "all",                │
    │                     │     "journal_max_age": 28800,       │
    │                     │     "module": "mariadbmon",         │
    │                     │     "monitor_interval": 2000,       │
    │                     │     "password": "*****",            │
    │                     │     "script": null,                 │
    │                     │     "script_timeout": 90,           │
    │                     │     "user": "mxs"                   │
    │                     │ }                                   │
    ├─────────────────────┼─────────────────────────────────────┤
    │ Monitor Diagnostics │ {}                                  │
    └─────────────────────┴─────────────────────────────────────┘
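The monitor shown above corresponds to a maxscale.cnf section along these lines (a sketch; the user and password must match the monitoring account created for MaxScale, and the password value here is a placeholder):

```ini
# Illustrative monitor definition matching
# `maxctrl show monitor columnstore_monitor`.
[columnstore_monitor]
type             = monitor
module           = mariadbmon
servers          = mcs1, mcs2, mcs3
user             = mxs
password         = mxs_password   # placeholder
monitor_interval = 2000ms
```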
    $ maxctrl list services
    ┌───────────────────────────┬────────────────┬─────────────┬───────────────────┬──────────────────┐
    │ Service                   │ Router         │ Connections │ Total Connections │ Servers          │
    ├───────────────────────────┼────────────────┼─────────────┼───────────────────┼──────────────────┤
    │ connection_router_service │ readconnroute  │ 0           │ 0                 │ mcs1, mcs2, mcs3 │
    ├───────────────────────────┼────────────────┼─────────────┼───────────────────┼──────────────────┤
    │ query_router_service      │ readwritesplit │ 0           │ 0                 │ mcs1, mcs2, mcs3 │
    └───────────────────────────┴────────────────┴─────────────┴───────────────────┴──────────────────┘
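The two services and their listeners correspond to maxscale.cnf sections along these lines (a sketch with placeholder credentials; listener ports match `maxctrl list listeners`):

```ini
# Illustrative service and listener definitions matching
# `maxctrl list services` and `maxctrl list listeners`.
[query_router_service]
type     = service
router   = readwritesplit
servers  = mcs1, mcs2, mcs3
user     = mxs
password = mxs_password   # placeholder

[query_router_listener]
type     = listener
service  = query_router_service
protocol = MariaDBClient
port     = 3307

[connection_router_service]
type     = service
router   = readconnroute
servers  = mcs1, mcs2, mcs3
user     = mxs
password = mxs_password   # placeholder

[connection_router_listener]
type     = listener
service  = connection_router_service
protocol = MariaDBClient
port     = 3308
```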
    $ maxctrl show service query_router_service
    ┌─────────────────────┬─────────────────────────────────────────────────────────────┐
    │ Service             │ query_router_service                                        │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Router              │ readwritesplit                                              │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ State               │ Started                                                     │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Started At          │ Sat Aug 28 21:41:16 2021                                    │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Current Connections │ 0                                                           │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Total Connections   │ 0                                                           │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Max Connections     │ 0                                                           │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Cluster             │                                                             │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Servers             │ mcs1                                                        │
    │                     │ mcs2                                                        │
    │                     │ mcs3                                                        │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Services            │                                                             │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Filters             │                                                             │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Parameters          │ {                                                           │
    │                     │     "auth_all_servers": false,                              │
    │                     │     "causal_reads": "false",                                │
    │                     │     "causal_reads_timeout": "10000ms",                      │
    │                     │     "connection_keepalive": "300000ms",                     │
    │                     │     "connection_timeout": "0ms",                            │
    │                     │     "delayed_retry": false,                                 │
    │                     │     "delayed_retry_timeout": "10000ms",                     │
    │                     │     "disable_sescmd_history": false,                        │
    │                     │     "enable_root_user": false,                              │
    │                     │     "idle_session_pool_time": "-1000ms",                    │
    │                     │     "lazy_connect": false,                                  │
    │                     │     "localhost_match_wildcard_host": true,                  │
    │                     │     "log_auth_warnings": true,                              │
    │                     │     "master_accept_reads": false,                           │
    │                     │     "master_failure_mode": "fail_instantly",                │
    │                     │     "master_reconnection": false,                           │
    │                     │     "max_connections": 0,                                   │
    │                     │     "max_sescmd_history": 50,                               │
    │                     │     "max_slave_connections": 255,                           │
    │                     │     "max_slave_replication_lag": "0ms",                     │
    │                     │     "net_write_timeout": "0ms",                             │
    │                     │     "optimistic_trx": false,                                │
    │                     │     "password": "*****",                                    │
    │                     │     "prune_sescmd_history": true,                           │
    │                     │     "rank": "primary",                                      │
    │                     │     "retain_last_statements": -1,                           │
    │                     │     "retry_failed_reads": true,                             │
    │                     │     "reuse_prepared_statements": false,                     │
    │                     │     "router": "readwritesplit",                             │
    │                     │     "session_trace": false,                                 │
    │                     │     "session_track_trx_state": false,                       │
    │                     │     "slave_connections": 255,                               │
    │                     │     "slave_selection_criteria": "LEAST_CURRENT_OPERATIONS", │
    │                     │     "strict_multi_stmt": false,                             │
    │                     │     "strict_sp_calls": false,                               │
    │                     │     "strip_db_esc": true,                                   │
    │                     │     "transaction_replay": false,                            │
    │                     │     "transaction_replay_attempts": 5,                       │
    │                     │     "transaction_replay_max_size": 1073741824,              │
    │                     │     "transaction_replay_retry_on_deadlock": false,          │
    │                     │     "type": "service",                                      │
    │                     │     "use_sql_variables_in": "all",                          │
    │                     │     "user": "mxs",                                          │
    │                     │     "version_string": null                                  │
    │                     │ }                                                           │
    ├─────────────────────┼─────────────────────────────────────────────────────────────┤
    │ Router Diagnostics  │ {                                                           │
    │                     │     "avg_sescmd_history_length": 0,                         │
    │                     │     "max_sescmd_history_length": 0,                         │
    │                     │     "queries": 0,                                           │
    │                     │     "replayed_transactions": 0,                             │
    │                     │     "ro_transactions": 0,                                   │
    │                     │     "route_all": 0,                                         │
    │                     │     "route_master": 0,                                      │
    │                     │     "route_slave": 0,                                       │
    │                     │     "rw_transactions": 0,                                   │
    │                     │     "server_query_statistics": []                           │
    │                     │ }                                                           │
    └─────────────────────┴─────────────────────────────────────────────────────────────┘
    $ sudo mariadb
    CREATE USER 'app_user'@'192.0.2.10' IDENTIFIED BY 'app_user_passwd';
    GRANT ALL ON test.* TO 'app_user'@'192.0.2.10';
    CREATE USER 'app_user'@'192.0.2.11' IDENTIFIED BY 'app_user_passwd';
    GRANT ALL ON test.* TO 'app_user'@'192.0.2.11';
    $ mariadb --host 192.0.2.10 --port 3307 \
          --user app_user --password
    $ maxctrl list listeners
    ┌────────────────────────────┬──────┬──────┬─────────┬───────────────────────────┐
    │ Name                       │ Port │ Host │ State   │ Service                   │
    ├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
    │ connection_router_listener │ 3308 │ ::   │ Running │ connection_router_service │
    ├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
    │ query_router_listener      │ 3307 │ ::   │ Running │ query_router_service      │
    └────────────────────────────┴──────┴──────┴─────────┴───────────────────────────┘
    $ mariadb --host 192.0.2.10 --port 3308 \
          --user app_user --password
    SELECT @@global.hostname, @@global.server_id;
    
    +-------------------+--------------------+
    | @@global.hostname | @@global.server_id |
    +-------------------+--------------------+
    | mcs2              |                  2 |
    +-------------------+--------------------+
    $ maxctrl list listeners
    ┌────────────────────────────┬──────┬──────┬─────────┬───────────────────────────┐
    │ Name                       │ Port │ Host │ State   │ Service                   │
    ├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
    │ connection_router_listener │ 3308 │ ::   │ Running │ connection_router_service │
    ├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
    │ query_router_listener      │ 3307 │ ::   │ Running │ query_router_service      │
    └────────────────────────────┴──────┴──────┴─────────┴───────────────────────────┘
    $ mariadb --host 192.0.2.10 --port 3307 \
          --user app_user --password
    CREATE TABLE test.load_balancing_test (
       id INT PRIMARY KEY AUTO_INCREMENT,
       hostname VARCHAR(256),
       server_id INT
    );
    INSERT INTO test.load_balancing_test (hostname, server_id)
    VALUES (@@global.hostname, @@global.server_id);
    SELECT * FROM test.load_balancing_test;
    +----+----------+-----------+
    | id | hostname | server_id |
    +----+----------+-----------+
    |  1 | mcs1     |         1 |
    |  2 | mcs1     |         1 |
    |  3 | mcs1     |         1 |
    +----+----------+-----------+
    $ maxctrl list listeners
    ┌────────────────────────────┬──────┬──────┬─────────┬───────────────────────────┐
    │ Name                       │ Port │ Host │ State   │ Service                   │
    ├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
    │ connection_router_listener │ 3308 │ ::   │ Running │ connection_router_service │
    ├────────────────────────────┼──────┼──────┼─────────┼───────────────────────────┤
    │ query_router_listener      │ 3307 │ ::   │ Running │ query_router_service      │
    └────────────────────────────┴──────┴──────┴─────────┴───────────────────────────┘
    $ mariadb --host 192.0.2.10 --port 3307 \
          --user app_user --password
    SELECT @@global.hostname, @@global.server_id;
    +-------------------+--------------------+
    | @@global.hostname | @@global.server_id |
    +-------------------+--------------------+
    |              mcs2 |                  2 |
    +-------------------+--------------------+
    SELECT @@global.hostname, @@global.server_id;
    +-------------------+--------------------+
    | @@global.hostname | @@global.server_id |
    +-------------------+--------------------+
    |              mcs3 |                  3 |
    +-------------------+--------------------+

    log_slave_updates

    Set this system variable to ON.

    relay_log

    Set this option to the file you want to use for the Relay Logs. Setting this option enables relay logging.

    relay_log_index

    Set this option to the file you want to use to index Relay Log filenames.

    server_id

    Sets the numeric Server ID for this MariaDB Enterprise Server. The value set on this option must be unique to each node.

    character_set_server

    Set this system variable to utf8.

    collation_server

    Set this system variable to utf8_general_ci.

    columnstore_use_import_for_batchinsert

    Set this system variable to ALWAYS to always use cpimport for LOAD DATA INFILE and INSERT...SELECT statements.

    gtid_strict_mode

    Set this system variable to ON.

    log_bin

    Set this option to the file you want to use for the Binary Log. Setting this option enables binary logging.

    log_bin_index

    Set this option to the file you want to use to track binlog filenames.


    This procedure describes the deployment of the ColumnStore Shared Local Storage topology with MariaDB Enterprise Server 10.5, MariaDB Enterprise ColumnStore 5, and MariaDB MaxScale 2.5.

    MariaDB Enterprise ColumnStore 5 is a columnar storage engine for MariaDB Enterprise Server 10.5. Enterprise ColumnStore is suitable for Online Analytical Processing (OLAP) workloads.

    This procedure has 9 steps, which are executed in sequence.

    This procedure represents basic product capability and deploys 3 Enterprise ColumnStore nodes and 1 MaxScale node.

    This page provides an overview of the topology, requirements, and deployment procedures.

    Please read and understand this procedure before executing.

    Procedure Steps

    Step
    Description

    Prepare ColumnStore Nodes

    Configure Shared Local Storage

    Install MariaDB Enterprise Server

    Start and Configure MariaDB Enterprise Server

    Test MariaDB Enterprise Server

    Install MariaDB MaxScale

    Support

    Customers can obtain support by submitting a support case.

    Components

    The following components are deployed during this procedure:

    Component
    Function

    MariaDB Enterprise Server

    Modern SQL RDBMS with high availability, pluggable storage engines, hot online backups, and audit logging.

    MariaDB MaxScale

    Database proxy that extends the availability, scalability, and security of MariaDB Enterprise Servers.

    MariaDB Enterprise Server Components

    Component
    Description

    MariaDB Enterprise ColumnStore

    • Columnar storage engine • Highly available • Optimized for Online Analytical Processing (OLAP) workloads • Scalable query execution • Cluster Management API (CMAPI) provides a REST API for multi-node administration.

    MariaDB MaxScale Components

    Component
    Description

    Listener

    Listens for client connections to MaxScale then passes them to the router service

    MariaDB Monitor

    Tracks changes in the state of MariaDB Enterprise Servers.

    Read Connection Router

    Routes connections from the listener to any available Enterprise ColumnStore node

    Read/Write Split Router

    Routes read operations from the listener to any available Enterprise ColumnStore node, and routes write operations from the listener to a specific server that MaxScale uses as the primary server

    Server Module

    Connection configuration in MaxScale to an Enterprise ColumnStore node

    Topology

    The MariaDB Enterprise ColumnStore topology with Object Storage delivers production analytics with high availability, fault tolerance, and limitless data storage by leveraging S3-compatible storage.

    The topology consists of:

    • One or more MaxScale nodes

    • An odd number of ColumnStore nodes (minimum of 3) running ES, Enterprise ColumnStore, and CMAPI

    The MaxScale nodes:

    • Monitor the health and availability of each ColumnStore node using the MariaDB Monitor (mariadbmon)

    • Accept client and application connections

    • Route queries to ColumnStore nodes using the Read/Write Split Router (readwritesplit)

    The ColumnStore nodes:

    • Receive queries from MaxScale

    • Execute queries

      • Use shared local storage for the Storage Manager directory

    Requirements

    These requirements are for the ColumnStore Object Storage topology when deployed with MariaDB Enterprise Server 10.5, MariaDB Enterprise ColumnStore 5, and MariaDB MaxScale 2.5.

    • Node Count

    • Operating System

    • Minimum Hardware Requirements

    • Recommended Hardware Requirements

    • Storage Requirements

    • S3-Compatible Object Storage Requirements

    • Preferred Object Storage Providers: Cloud

    • Preferred Object Storage Providers: Hardware

    • Shared Local Storage Directories

    • Shared Local Storage Options

    • Recommended Storage Options

    Node Count

    • MaxScale nodes: 1 or more are required.

    • Enterprise ColumnStore nodes: 3 or more are required for high availability. Always use an odd number of nodes in a multi-node ColumnStore deployment to avoid split-brain scenarios.

    Operating System

    In alignment with the enterprise lifecycle, the ColumnStore Object Storage topology with MariaDB Enterprise Server 10.5, MariaDB Enterprise ColumnStore 5, and MariaDB MaxScale 2.5 is provided for:

    • CentOS Linux 7 (x86_64)

    • Debian 10 (x86_64)

    • Red Hat Enterprise Linux 7 (x86_64)

    • Red Hat Enterprise Linux 8 (x86_64)

    • Ubuntu 18.04 LTS (x86_64)

    • Ubuntu 20.04 LTS (x86_64)

    Minimum Hardware Requirements

    MariaDB Enterprise ColumnStore's minimum hardware requirements are not intended for production environments, but they can be appropriate for development and test environments. For production environments, see the recommended hardware requirements instead.

    The minimum hardware requirements are:

    Component
    CPU
    Memory

    MaxScale node

    4+ cores

    4+ GB

    Enterprise ColumnStore node

    4+ cores

    4+ GB

    MariaDB Enterprise ColumnStore will refuse to start if the system has less than 3 GB of memory.

    If Enterprise ColumnStore is started on a system with less memory, the following error message will be written to the ColumnStore system log called crit.log:

    And the following error message will be raised to the client:

    Recommended Hardware Requirements

    MariaDB Enterprise ColumnStore's recommended hardware requirements are intended for production analytics.

    The recommended hardware requirements are:

    Component
    CPU
    Memory

    MaxScale node

    8+ cores

    16+ GB

    Enterprise ColumnStore node

    64+ cores

    128+ GB

    Storage Requirements

    The ColumnStore Object Storage topology requires the following storage types:

    Storage Type
    Description

    The ColumnStore Object Storage topology uses shared local storage for the Storage Manager directory to store metadata.

    Shared Local Storage Directories

    The ColumnStore Object Storage topology uses shared local storage for the Storage Manager directory to store metadata.

    The Storage Manager directory is located at the following path by default:

    • /var/lib/columnstore/storagemanager

    Shared Local Storage Options

    The most common shared local storage options for the ColumnStore Object Storage topology are:

    Shared Local Storage
    Common Usage
    Description

    EBS (Elastic Block Store) Multi-Attach

    AWS

    • EBS is a high-performance block-storage service for AWS (Amazon Web Services). • EBS Multi-Attach allows an EBS volume to be attached to multiple instances in AWS. Only clustered file systems, such as GFS2, are supported. • For deployments in AWS, EBS Multi-Attach is a recommended option for the Storage Manager directory, and Amazon S3 storage is the recommended option for data.

    EFS (Elastic File System)

    AWS

    • EFS is a scalable, elastic, cloud-native NFS file system for AWS (Amazon Web Services). • For deployments in AWS, EFS is a recommended option for the Storage Manager directory, and Amazon S3 storage is the recommended option for data.

    Filestore

    GCP

    • Filestore is high-performance, fully managed storage for GCP (Google Cloud Platform). • For deployments in GCP, Filestore is the recommended option for the Storage Manager directory, and Google Object Storage (S3-compatible) is the recommended option for data.

    GlusterFS

    On-premises

    Enterprise ColumnStore Management with CMAPI

    Enterprise ColumnStore's CMAPI (Cluster Management API) is a REST API that can be used to manage a multi-node Enterprise ColumnStore cluster.

    Many tools are capable of interacting with REST APIs. For example, the curl utility could be used to make REST API calls from the command-line.

    Many programming languages also have libraries for interacting with REST APIs.

    The examples below show how to use the CMAPI with curl.

    URL Endpoint Format for REST API

    For example:

    • shutdown

    • start

    • status
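    As an illustration of this endpoint format, the small helper below builds the cluster URLs and the required request headers used in the curl examples later in this procedure. The function names (`cmapi_url`, `cmapi_headers`) are ours, for illustration only; the host, port 8640, and API version 0.4.0 match the curl examples in this document.

```python
# Illustrative helpers for the CMAPI endpoint format:
#   https://<host>:8640/cmapi/0.4.0/cluster/<action>
# The function names are ours; only the URL shape and headers come from CMAPI.

def cmapi_url(host: str, action: str, port: int = 8640, version: str = "0.4.0") -> str:
    """Build a CMAPI cluster endpoint URL for actions such as status, start, shutdown, node."""
    return f"https://{host}:{port}/cmapi/{version}/cluster/{action}"

def cmapi_headers(api_key: str) -> dict:
    """Headers required on every CMAPI request."""
    return {
        "x-api-key": api_key,
        "Content-Type": "application/json",
    }

print(cmapi_url("mcs1", "status"))
# → https://mcs1:8640/cmapi/0.4.0/cluster/status
```

    These values can then be passed to curl, Python's requests library, or any other HTTP client.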

    Required Request Headers

    • 'x-api-key': '93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd'

    • 'Content-Type': 'application/json'

    x-api-key can be set to any value of your choice during the first call to the server. Subsequent connections will require this same key.

    Get Status

    Start Cluster

    Stop Cluster

    Add Node

    Remove Node

    Quick Reference

    MariaDB Enterprise Server Configuration Management

    Method
    Description

    Configuration File

    Configuration files (such as /etc/my.cnf) can be used to set system-variables and options. The server must be restarted to apply changes made to configuration files.

    Command-line

    The server can be started with command-line options that set system-variables and options.

    SQL

    Users can set system-variables that support dynamic changes on-the-fly using the SET statement.

    MariaDB Enterprise Server packages are configured to read configuration files from different paths, depending on the operating system. Making custom changes to Enterprise Server default configuration files is not recommended because custom changes may be overwritten by other default configuration files that are loaded later.

    To ensure that your custom changes will be read last, create a custom configuration file with the z- prefix in one of the include directories.

    Distribution
    Example Configuration File Path
    • CentOS

    • Red Hat Enterprise Linux (RHEL)

    /etc/my.cnf.d/z-custom-mariadb.cnf

    • Debian

    • Ubuntu

    /etc/mysql/mariadb.conf.d/z-custom-mariadb.cnf

    MariaDB Enterprise Server Service Management

    The systemctl command is used to start and stop the MariaDB Enterprise Server service.

    Operation
    Command

    Start

    sudo systemctl start mariadb

    Stop

    sudo systemctl stop mariadb

    Restart

    sudo systemctl restart mariadb

    Enable during startup

    sudo systemctl enable mariadb

    Disable during startup

    sudo systemctl disable mariadb

    Status

    sudo systemctl status mariadb

    For additional information, see "".

    MariaDB Enterprise Server Logs

    MariaDB Enterprise Server produces log data that can be helpful in problem diagnosis.

    Log filenames and locations may be overridden in the server configuration. The default location of logs is the data directory. The data directory is specified by the datadir system variable.

    Log
    System Variable/Option
    Default Filename

    Error Log

    log_error

    <hostname>.err

    Audit Log

    server_audit_file_path

    server_audit.log

    Slow Query Log

    slow_query_log_file

    <hostname>-slow.log

    Enterprise ColumnStore Service Management

    The systemctl command is used to start and stop the ColumnStore service.

    Operation
    Command

    Start

    sudo systemctl start mariadb-columnstore

    Stop

    sudo systemctl stop mariadb-columnstore

    Restart

    sudo systemctl restart mariadb-columnstore

    Enable during startup

    sudo systemctl enable mariadb-columnstore

    Disable during startup

    sudo systemctl disable mariadb-columnstore

    Status

    sudo systemctl status mariadb-columnstore

    In the ColumnStore Object Storage topology, the mariadb-columnstore service should not be enabled. The CMAPI service restarts Enterprise ColumnStore as needed, so it does not need to start automatically upon reboot.

    Enterprise ColumnStore CMAPI Service Management

    The systemctl command is used to start and stop the CMAPI service.

    Operation
    Command

    Start

    sudo systemctl start mariadb-columnstore-cmapi

    Stop

    sudo systemctl stop mariadb-columnstore-cmapi

    Restart

    sudo systemctl restart mariadb-columnstore-cmapi

    Enable during startup

    sudo systemctl enable mariadb-columnstore-cmapi

    Disable during startup

    sudo systemctl disable mariadb-columnstore-cmapi

    Status

    sudo systemctl status mariadb-columnstore-cmapi

    For additional information on endpoints, see "CMAPI".

    MaxScale Configuration Management

    MaxScale can be configured using several methods. These methods make use of MaxScale's REST API.

    Method
    Benefits

    MaxCtrl

    Command-line utility to perform administrative tasks through the REST API. See MaxCtrl Commands.

    MaxGUI

    MaxGUI is a graphical utility that can perform administrative tasks through the REST API.

    REST API

    The REST API can be used directly. For example, the curl utility could be used to make REST API calls from the command-line. Many programming languages also have libraries to interact with REST APIs.

    The procedure on these pages configures MaxScale using MaxCtrl.

    MaxScale Service Management

    The systemctl command is used to start and stop the MaxScale service.

    Operation
    Command

    Start

    sudo systemctl start maxscale

    Stop

    sudo systemctl stop maxscale

    Restart

    sudo systemctl restart maxscale

    Enable during startup

    sudo systemctl enable maxscale

    Disable during startup

    sudo systemctl disable maxscale

    Status

    sudo systemctl status maxscale

    For additional information, see "Start and Stop Services".

    Next Step

    Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology":

    Next: Step 1: Prepare ColumnStore Nodes.

    This page is: Copyright © 2025 MariaDB. All rights reserved.

    • Enterprise Server 10.5

    • Enterprise Server 10.6

    • Enterprise Server 11.4

    Columnar storage engine with S3-compatible object storage

    • Highly available

    • Automatic failover via MaxScale and CMAPI

    • Scales read via MaxScale

    The instructions were tested against ColumnStore 23.10.

    Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

    Stop the Enterprise ColumnStore Services

    The installation process might have started some of the ColumnStore services. The services should be stopped prior to making configuration changes.

    1. On each Enterprise ColumnStore node, stop the MariaDB Enterprise Server service:

    2. On each Enterprise ColumnStore node, stop the MariaDB Enterprise ColumnStore service:

    3. On each Enterprise ColumnStore node, stop the CMAPI service:

    Configure Enterprise ColumnStore

    On each Enterprise ColumnStore node, configure Enterprise Server.


    Mandatory system variables and options for ColumnStore Object Storage include:

    character_set_server

    Set this system variable to utf8.

    collation_server

    Set this system variable to utf8_general_ci.

    columnstore_use_import_for_batchinsert

    Set this system variable to ALWAYS to always use cpimport for LOAD DATA INFILE and INSERT...SELECT statements.

    gtid_strict_mode

    Set this system variable to ON.

    log_bin

    Set this option to the file you want to use for the Binary Log. Setting this option enables binary logging.

    log_bin_index

    Set this option to the file you want to use to track binlog filenames.

    Example Configuration

    Configure the S3 Storage Manager

    On each Enterprise ColumnStore node, configure S3 Storage Manager to use S3-compatible storage by editing the /etc/columnstore/storagemanager.cnf configuration file:

    The S3-compatible object storage options are configured under [S3]:

    • The bucket option must be set to the name of the bucket that you created in "Create an S3 Bucket".

    • The endpoint option must be set to the endpoint for the S3-compatible object storage.

    • The aws_access_key_id and aws_secret_access_key options must be set to the access key ID and secret access key for the S3-compatible object storage.

    • To use a specific IAM role, you must uncomment and set iam_role_name, sts_region, and sts_endpoint.

    • To use the IAM role assigned to an EC2 instance, you must uncomment ec2_iam_mode=enabled.

    The local cache options are configured under [Cache]:

    • The cache_size option is set to 2 GB by default.

    • The path option is set to /var/lib/columnstore/storagemanager/cache by default.

    Ensure that the specified path has sufficient storage space for the specified cache size.
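    Putting these options together, a storagemanager.cnf might look like the following sketch. The bucket name, endpoint, and credentials shown here are placeholders and must be replaced with your own values:

```ini
; Sketch of /etc/columnstore/storagemanager.cnf for S3-compatible storage.
; All values below are illustrative placeholders.
[ObjectStorage]
service = S3

[S3]
bucket = your-columnstore-bucket
endpoint = s3.amazonaws.com
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

[Cache]
cache_size = 2g
path = /var/lib/columnstore/storagemanager/cache
```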

    Start the Enterprise ColumnStore Services

    1. On each Enterprise ColumnStore node, start and enable the MariaDB Enterprise Server service, so that it starts automatically upon reboot:

    2. On each Enterprise ColumnStore node, stop the MariaDB Enterprise ColumnStore service:

    After the CMAPI service is installed in the next step, CMAPI will start the Enterprise ColumnStore service as needed on each node. CMAPI disables the Enterprise ColumnStore service to prevent systemd from automatically starting Enterprise ColumnStore upon reboot.

    3. On each Enterprise ColumnStore node, start and enable the CMAPI service, so that it starts automatically upon reboot:

    For additional information, see "".

    Create User Accounts

    The ColumnStore Object Storage topology requires several user accounts. Each user account should be created on the primary server, so that it is replicated to the replica servers.

    Create the Utility User

    Enterprise ColumnStore requires a mandatory utility user account to perform cross-engine joins and similar operations.

    1. On the primary server, create the user account with the CREATE USER statement:

    2. On the primary server, grant the user account SELECT privileges on all databases with the GRANT statement:

    3. On each Enterprise ColumnStore node, configure the ColumnStore utility user:

    4. On each Enterprise ColumnStore node, set the password:

    For details about how to encrypt the password, see "Credentials Management for MariaDB Enterprise ColumnStore".

    Passwords should meet your organization's password policies. If your MariaDB Enterprise Server instance has a password validation plugin installed, then the password should also meet the configured requirements.

    Create the Replication User

    ColumnStore Object Storage uses MariaDB Replication to replicate writes between the primary and replica servers. As MaxScale can promote a replica server to become a new primary in the event of node failure, all nodes must have a replication user.

    The action is performed on the primary server.

    Create the replication user and grant it the required privileges:

    1. Use the CREATE USER statement to create the replication user.

    Replace the referenced IP address with the relevant address for your environment.

    Ensure that the user account can connect to the primary server from each replica.

    2. Grant the user account the required privileges with the GRANT statement.

    Create MaxScale User

    ColumnStore Object Storage 23.10 uses MariaDB MaxScale 22.08 to load balance between the nodes.

    This action is performed on the primary server.

    1. Use the CREATE USER statement to create the MaxScale user:

    Replace the referenced IP address with the relevant address for your environment.

    Ensure that the user account can connect from the IP address of the MaxScale instance.

    2. Use the GRANT statement to grant the privileges required by the router:

    3. Use the GRANT statement to grant the privileges required by the MariaDB Monitor.

    Configure MariaDB Replication

    On each replica server, configure MariaDB Replication:

    1. Use the CHANGE MASTER TO statement to configure the connection to the primary server:

    2. Start replication using the START REPLICA statement:

    3. Confirm that replication is working using the SHOW REPLICA STATUS statement:

    4. Ensure that the replica server cannot accept local writes by setting the read_only system variable to ON using the SET GLOBAL statement:

    Initiate the Primary Server with CMAPI

    Initiate the primary server using CMAPI.

    1. Create an API key for the cluster. This API key should be stored securely and kept confidential, because it can be used to add cluster nodes to the multi-node Enterprise ColumnStore deployment.

    For example, to create a random 256-bit API key using openssl rand:

    This document will use the following API key in further examples, but users should create their own:
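    If openssl is not available, an equivalent random 256-bit key can be generated with Python's standard library; this short sketch is an alternative to the openssl rand command, not part of the CMAPI tooling:

```python
# Alternative to "openssl rand -hex 32": generate a random 256-bit API key
# using only the Python standard library.
import secrets

api_key = secrets.token_hex(32)  # 32 random bytes -> 64 hexadecimal characters
print(api_key)
```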

    2. Use CMAPI to add the primary server to the cluster and set the API key. The new API key needs to be provided in the X-API-Key HTTP header.

    For example, if the primary server's host name is mcs1 and its IP address is 192.0.2.1, use the following node command:

    3. Use CMAPI to check the status of the cluster node:

    Add Replica Servers with CMAPI

    Add the replica servers with CMAPI:

    1. For each replica server, use CMAPI to add the replica server to the cluster. The previously set API key needs to be provided in the X-API-Key HTTP header.

    For example, if the primary server's host name is mcs1 and the replica server's IP address is 192.0.2.2, use the following node command:

    2. After all replica servers have been added, use CMAPI to confirm that all cluster nodes have been successfully added:
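    The status payload returned by CMAPI can also be checked programmatically. This is a minimal sketch (the function name `summarize_status` is ours) that assumes the payload shape shown in this document: per-node objects keyed by address, plus top-level "timestamp" and "num_nodes" keys:

```python
# Sketch: summarize a CMAPI /cluster/status response, reporting each node's
# DBRM role and cluster mode. Assumes the payload shape shown in this document.
import json

def summarize_status(payload: dict) -> dict:
    """Map each node address to its (dbrm_mode, cluster_mode) pair."""
    skip = {"timestamp", "num_nodes"}  # top-level keys that are not nodes
    return {
        node: (info["dbrm_mode"], info["cluster_mode"])
        for node, info in payload.items()
        if node not in skip
    }

# Abbreviated sample payload, matching the structure of the status output above.
raw = '''{
  "timestamp": "2020-12-15 00:40:34.353574",
  "192.0.2.1": {"dbrm_mode": "master", "cluster_mode": "readwrite"},
  "192.0.2.2": {"dbrm_mode": "slave", "cluster_mode": "readonly"},
  "num_nodes": 2
}'''
print(summarize_status(json.loads(raw)))
# → {'192.0.2.1': ('master', 'readwrite'), '192.0.2.2': ('slave', 'readonly')}
```

    A healthy three-node cluster should show exactly one node with dbrm_mode "master" and cluster_mode "readwrite".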

    Configure Linux Security Modules (LSM)

    The specific steps to configure the security module depend on the operating system.

    Configure SELinux (CentOS, RHEL)

    Configure SELinux for Enterprise ColumnStore:

    1. To configure SELinux, you have to install the packages required for audit2allow. On CentOS 7 and RHEL 7, install the following:

    On RHEL 8, install the following:

    2. Allow the system to run under load for a while to generate SELinux audit events.

    3. After the system has taken some load, generate an SELinux policy from the audit events using audit2allow:

    If no audit events were found, this will print the following:

    4. If audit events were found, the new SELinux policy can be loaded using semodule:

    5. Set SELinux to enforcing mode for the current session:

    6. Make enforcing mode persistent across reboots by setting SELINUX=enforcing in /etc/selinux/config.

    For example, the file will usually look like this after the change:

    7. Confirm that SELinux is in enforcing mode:

    Configure AppArmor (Ubuntu)

    For information on how to create a profile, see How to create an AppArmor Profile on Ubuntu.com.

    Configure Firewalls

    The specific steps to configure the firewall service depend on the platform.

    Configure firewalld (CentOS, RHEL)

    Configure firewalld for Enterprise Cluster on CentOS and RHEL:

    1. Check if the firewalld service is running:

    2. If the firewalld service was stopped to perform the installation, start it now:

    3. Open up the relevant ports using firewall-cmd. For example, if your cluster nodes are in the 192.0.2.0/24 subnet:

    4. Reload the runtime configuration:

    Configure UFW (Ubuntu)

    Configure UFW for Enterprise ColumnStore on Ubuntu:

    1. Check if the UFW service is running:

    2. If the UFW service was stopped to perform the installation, start it now:

    3. Open up the relevant ports using ufw.

    For example, if your cluster nodes are in the 192.0.2.0/24 subnet in the range 192.0.2.1 - 192.0.2.3:

    4. Reload the runtime configuration:

    Next Step

    Navigation in the procedure "Deploy ColumnStore Object Storage Topology":

    This page was step 4 of 9.

    Next: Step 5: Test MariaDB Enterprise Server.


    $ sudo systemctl stop mariadb
    $ sudo systemctl stop mariadb-columnstore
    $ sudo systemctl stop mariadb-columnstore-cmapi
    [mariadb]
    bind_address                           = 0.0.0.0
    log_error                              = mariadbd.err
    character_set_server                   = utf8
    collation_server                       = utf8_general_ci
    log_bin                                = mariadb-bin
    log_bin_index                          = mariadb-bin.index
    relay_log                              = mariadb-relay
    relay_log_index                        = mariadb-relay.index
    log_slave_updates                      = ON
    gtid_strict_mode                       = ON
    
    # This must be unique on each Enterprise ColumnStore node
    server_id                              = 1
    $ sudo systemctl start mariadb
    $ sudo systemctl enable mariadb
    $ sudo systemctl stop mariadb-columnstore
    $ sudo systemctl start mariadb-columnstore-cmapi
    $ sudo systemctl enable mariadb-columnstore-cmapi
    CREATE USER 'util_user'@'127.0.0.1'
    IDENTIFIED BY 'util_user_passwd';
    GRANT SELECT, PROCESS ON *.*
    TO 'util_user'@'127.0.0.1';
    $ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
    $ sudo mcsSetConfig CrossEngineSupport Port 3306
    $ sudo mcsSetConfig CrossEngineSupport User util_user
    $ sudo mcsSetConfig CrossEngineSupport Password util_user_passwd
    CREATE USER 'repl'@'192.0.2.%' IDENTIFIED BY 'repl_passwd';
    GRANT REPLICA MONITOR,
       REPLICATION REPLICA,
       REPLICATION REPLICA ADMIN,
       REPLICATION MASTER ADMIN
    ON *.* TO 'repl'@'192.0.2.%';
    CREATE USER 'mxs'@'192.0.2.%'
    IDENTIFIED BY 'mxs_passwd';
    GRANT SHOW DATABASES ON *.* TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.columns_priv TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.db TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.procs_priv TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.proxies_priv TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.roles_mapping TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.tables_priv TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.user TO 'mxs'@'192.0.2.%';
    GRANT BINLOG ADMIN,
       READ_ONLY ADMIN,
       RELOAD,
       REPLICA MONITOR,
       REPLICATION MASTER ADMIN,
       REPLICATION REPLICA ADMIN,
       REPLICATION REPLICA,
       SHOW DATABASES,
       SELECT
    ON *.* TO 'mxs'@'192.0.2.%';
    CHANGE MASTER TO
       MASTER_HOST='192.0.2.1',
       MASTER_USER='repl',
       MASTER_PASSWORD='repl_passwd',
       MASTER_USE_GTID=slave_pos;
    START REPLICA;
    SHOW REPLICA STATUS;
    SET GLOBAL read_only=ON;
    $ openssl rand -hex 32
    
    93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd
    $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       --data '{"timeout":120, "node": "192.0.2.1"}' \
       | jq .
    {
      "timestamp": "2020-10-28 00:39:14.672142",
      "node_id": "192.0.2.1"
    }
    $ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       | jq .
    {
      "timestamp": "2020-12-15 00:40:34.353574",
      "192.0.2.1": {
        "timestamp": "2020-12-15 00:40:34.362374",
        "uptime": 11467,
        "dbrm_mode": "master",
        "cluster_mode": "readwrite",
        "dbroots": [
          "1"
        ],
        "module_id": 1,
        "services": [
          {
            "name": "workernode",
            "pid": 19202
          },
          {
            "name": "controllernode",
            "pid": 19232
          },
          {
            "name": "PrimProc",
            "pid": 19254
          },
          {
            "name": "ExeMgr",
            "pid": 19292
          },
          {
            "name": "WriteEngine",
            "pid": 19316
          },
          {
            "name": "DMLProc",
            "pid": 19332
          },
          {
            "name": "DDLProc",
            "pid": 19366
          }
        ]
      }
    }
    $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       --data '{"timeout":120, "node": "192.0.2.2"}' \
       | jq .
    {
      "timestamp": "2020-10-28 00:42:42.796050",
      "node_id": "192.0.2.2"
    }
    Apr 30 21:54:35 a1ebc96a2519 PrimProc[1004]: 35.668435 |0|0|0| C 28 CAL0000: Error total memory available is less than 3GB.
    ERROR 1815 (HY000): Internal error: System is not ready yet. Please try again.
    https://{server}:{port}/cmapi/{version}/{route}/{command}
    $ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
          --header 'Content-Type:application/json' \
          --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
          | jq .
    $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/start \
          --header 'Content-Type:application/json' \
          --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
          --data '{"timeout":20}' \
          | jq .
    $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/shutdown \
          --header 'Content-Type:application/json' \
          --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
          --data '{"timeout":20}' \
          | jq .
    $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
          --header 'Content-Type:application/json' \
          --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
          --data '{"timeout":20, "node": "192.0.2.2"}' \
          | jq .
    $ curl -k -s -X DELETE https://mcs1:8640/cmapi/0.4.0/cluster/node \
          --header 'Content-Type:application/json' \
          --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
          --data '{"timeout":20, "node": "192.0.2.2"}' \
          | jq .
    $ sudo systemctl stop mariadb
    $ sudo systemctl stop mariadb-columnstore
    $ sudo systemctl stop mariadb-columnstore-cmapi
    [mariadb]
    bind_address                           = 0.0.0.0
    log_error                              = mariadbd.err
    character_set_server                   = utf8
    collation_server                       = utf8_general_ci
    log_bin                                = mariadb-bin
    log_bin_index                          = mariadb-bin.index
    relay_log                              = mariadb-relay
    relay_log_index                        = mariadb-relay.index
    log_slave_updates                      = ON
    gtid_strict_mode                       = ON
    
    # This must be unique on each Enterprise ColumnStore node
    server_id                              = 1
    [ObjectStorage]
    …
    service = S3
    …
    [S3]
    bucket                = your_columnstore_bucket_name
    endpoint              = your_s3_endpoint
    aws_access_key_id     = your_s3_access_key_id
    aws_secret_access_key = your_s3_secret_key
    # iam_role_name       = your_iam_role
    # sts_region          = your_sts_region
    # sts_endpoint        = your_sts_endpoint
    # ec2_iam_mode        = enabled
    
    [Cache]
    cache_size = your_local_cache_size
    path       = your_local_cache_path
    $ sudo systemctl start mariadb
    $ sudo systemctl enable mariadb
    $ sudo systemctl stop mariadb-columnstore
    $ sudo systemctl start mariadb-columnstore-cmapi
    $ sudo systemctl enable mariadb-columnstore-cmapi
    CREATE USER 'util_user'@'127.0.0.1'
    IDENTIFIED BY 'util_user_passwd';
    GRANT SELECT, PROCESS ON *.*
    TO 'util_user'@'127.0.0.1';
    $ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
    $ sudo mcsSetConfig CrossEngineSupport Port 3306
    $ sudo mcsSetConfig CrossEngineSupport User util_user
    $ sudo mcsSetConfig CrossEngineSupport Password util_user_passwd
    CREATE USER 'repl'@'192.0.2.%' IDENTIFIED BY 'repl_passwd';
    GRANT REPLICA MONITOR,
       REPLICATION REPLICA,
       REPLICATION REPLICA ADMIN,
       REPLICATION MASTER ADMIN
    ON *.* TO 'repl'@'192.0.2.%';
    CREATE USER 'mxs'@'192.0.2.%'
    IDENTIFIED BY 'mxs_passwd';
    GRANT SHOW DATABASES ON *.* TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.columns_priv TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.db TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.procs_priv TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.proxies_priv TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.roles_mapping TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.tables_priv TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.user TO 'mxs'@'192.0.2.%';
    GRANT BINLOG ADMIN,
       READ_ONLY ADMIN,
       RELOAD,
       REPLICA MONITOR,
       REPLICATION MASTER ADMIN,
       REPLICATION REPLICA ADMIN,
       REPLICATION REPLICA,
       SHOW DATABASES,
       SELECT
    ON *.* TO 'mxs'@'192.0.2.%';
    CHANGE MASTER TO
       MASTER_HOST='192.0.2.1',
       MASTER_USER='repl',
       MASTER_PASSWORD='repl_passwd',
       MASTER_USE_GTID=slave_pos;
    START REPLICA;
    SHOW REPLICA STATUS;
    SET GLOBAL read_only=ON;
    $ openssl rand -hex 32
    
    93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd
    $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       --data '{"timeout":120, "node": "192.0.2.1"}' \
       | jq .
    {
      "timestamp": "2020-10-28 00:39:14.672142",
      "node_id": "192.0.2.1"
    }
    $ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       | jq .
    {
      "timestamp": "2020-12-15 00:40:34.353574",
      "192.0.2.1": {
        "timestamp": "2020-12-15 00:40:34.362374",
        "uptime": 11467,
        "dbrm_mode": "master",
        "cluster_mode": "readwrite",
        "dbroots": [
          "1"
        ],
        "module_id": 1,
        "services": [
          {
            "name": "workernode",
            "pid": 19202
          },
          {
            "name": "controllernode",
            "pid": 19232
          },
          {
            "name": "PrimProc",
            "pid": 19254
          },
          {
            "name": "ExeMgr",
            "pid": 19292
          },
          {
            "name": "WriteEngine",
            "pid": 19316
          },
          {
            "name": "DMLProc",
            "pid": 19332
          },
          {
            "name": "DDLProc",
            "pid": 19366
          }
        ]
  },
  "num_nodes": 1
}
    $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       --data '{"timeout":120, "node": "192.0.2.2"}' \
       | jq .
    {
      "timestamp": "2020-10-28 00:42:42.796050",
      "node_id": "192.0.2.2"
    }
    $ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       | jq .
    {
      "timestamp": "2020-12-15 00:40:34.353574",
      "192.0.2.1": {
        "timestamp": "2020-12-15 00:40:34.362374",
        "uptime": 11467,
        "dbrm_mode": "master",
        "cluster_mode": "readwrite",
        "dbroots": [
          "1"
        ],
        "module_id": 1,
        "services": [
          {
            "name": "workernode",
            "pid": 19202
          },
          {
            "name": "controllernode",
            "pid": 19232
          },
          {
            "name": "PrimProc",
            "pid": 19254
          },
          {
            "name": "ExeMgr",
            "pid": 19292
          },
          {
            "name": "WriteEngine",
            "pid": 19316
          },
          {
            "name": "DMLProc",
            "pid": 19332
          },
          {
            "name": "DDLProc",
            "pid": 19366
          }
        ]
      },
      "192.0.2.2": {
        "timestamp": "2020-12-15 00:40:34.428554",
        "uptime": 11437,
        "dbrm_mode": "slave",
        "cluster_mode": "readonly",
        "dbroots": [
          "2"
        ],
        "module_id": 2,
        "services": [
          {
            "name": "workernode",
            "pid": 17789
          },
          {
            "name": "PrimProc",
            "pid": 17813
          },
          {
            "name": "ExeMgr",
            "pid": 17854
          },
          {
            "name": "WriteEngine",
            "pid": 17877
          }
        ]
      },
      "192.0.2.3": {
        "timestamp": "2020-12-15 00:40:34.428554",
        "uptime": 11437,
        "dbrm_mode": "slave",
        "cluster_mode": "readonly",
        "dbroots": [
          "2"
        ],
        "module_id": 2,
        "services": [
          {
            "name": "workernode",
            "pid": 17789
          },
          {
            "name": "PrimProc",
            "pid": 17813
          },
          {
            "name": "ExeMgr",
            "pid": 17854
          },
          {
            "name": "WriteEngine",
            "pid": 17877
          }
        ]
      },
      "num_nodes": 3
    }
    $ sudo yum install policycoreutils policycoreutils-python
    $ sudo yum install policycoreutils python3-policycoreutils policycoreutils-python-utils
    $ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
    $ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
    
    Nothing to do
    $ sudo semodule -i mariadb_local.pp
    $ sudo setenforce enforcing
    # This file controls the state of SELinux on the system.
    # SELINUX= can take one of these three values:
    #     enforcing - SELinux security policy is enforced.
    #     permissive - SELinux prints warnings instead of enforcing.
    #     disabled - No SELinux policy is loaded.
    SELINUX=enforcing
    # SELINUXTYPE= can take one of three values:
    #     targeted - Targeted processes are protected,
    #     minimum - Modification of targeted policy. Only selected processes are protected.
    #     mls - Multi Level Security protection.
    SELINUXTYPE=targeted
    $ sudo getenforce
    Enforcing
    $ sudo systemctl status firewalld
    $ sudo systemctl start firewalld
    $ sudo firewall-cmd --permanent --add-rich-rule='
       rule family="ipv4"
       source address="192.0.2.0/24"
       destination address="192.0.2.0/24"
       port port="3306" protocol="tcp"
       accept'
    $ sudo firewall-cmd --permanent --add-rich-rule='
       rule family="ipv4"
       source address="192.0.2.0/24"
       destination address="192.0.2.0/24"
       port port="8600-8630" protocol="tcp"
       accept'
    $ sudo firewall-cmd --permanent --add-rich-rule='
       rule family="ipv4"
       source address="192.0.2.0/24"
       destination address="192.0.2.0/24"
       port port="8640" protocol="tcp"
       accept'
    $ sudo firewall-cmd --permanent --add-rich-rule='
       rule family="ipv4"
       source address="192.0.2.0/24"
       destination address="192.0.2.0/24"
       port port="8700" protocol="tcp"
       accept'
    $ sudo firewall-cmd --permanent --add-rich-rule='
       rule family="ipv4"
       source address="192.0.2.0/24"
       destination address="192.0.2.0/24"
       port port="8800" protocol="tcp"
       accept'
    $ sudo firewall-cmd --reload
    $ sudo ufw status verbose
    $ sudo ufw enable
    $ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 3306 proto tcp
    
    $ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8600:8630 proto tcp
    
    $ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8640 proto tcp
    
    $ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8700 proto tcp
    
    $ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8800 proto tcp
    $ sudo ufw reload


    • GlusterFS is a distributed file system. • GlusterFS supports replication and failover.


    <hostname>.log



    Set this system variable to ON.

    Set this option to the file you want to use for the Relay Logs. Setting this option enables relay logging.

    Set this option to the file you want to use to index Relay Log filenames.

    Sets the numeric Server ID for this MariaDB Enterprise Server. The value set on this option must be unique to each node.

    Step 7
    Step 8
    Step 9

    Single-Node S3

This guide provides steps for deploying MariaDB ColumnStore with S3-compatible object storage: setting up the environment, installing the software, and bulk importing data for online analytical processing (OLAP) workloads.

    Overview

    • Enterprise Server 10.5

    • Enterprise Server 10.6

    • Enterprise Server 11.4

    Columnar storage engine with S3-compatible object storage

    • Highly available

    • Automatic failover via MaxScale and CMAPI

• Scales reads via MaxScale

    This procedure describes the deployment of the ColumnStore Object Storage topology with MariaDB Enterprise Server 10.5, MariaDB Enterprise ColumnStore 5, and MariaDB MaxScale 2.5.

    MariaDB Enterprise ColumnStore 5 is a columnar storage engine for MariaDB Enterprise Server 10.5. Enterprise ColumnStore is suitable for Online Analytical Processing (OLAP) workloads.

    This procedure has 9 steps, which are executed in sequence.

    This procedure represents basic product capability and deploys 3 Enterprise ColumnStore nodes and 1 MaxScale node.

    This page provides an overview of the topology, requirements, and deployment procedures.

    Please read and understand this procedure before executing.

    Procedure Steps

    Step
    Description

    Support

    Customers can obtain support by submitting a support case.

    Components

    The following components are deployed during this procedure:

    Component
    Function

    MariaDB Enterprise Server Components

    Component
    Description

    MariaDB MaxScale Components

    Component
    Description

    Topology

    The MariaDB Enterprise ColumnStore topology with Object Storage delivers production analytics with high availability, fault tolerance, and limitless data storage by leveraging S3-compatible storage.

    The topology consists of:

    • One or more MaxScale nodes

    • An odd number of ColumnStore nodes (minimum of 3) running ES, Enterprise ColumnStore, and CMAPI

    The MaxScale nodes:

    • Monitor the health and availability of each ColumnStore node using the MariaDB Monitor (mariadbmon)

    • Accept client and application connections

    • Route queries to ColumnStore nodes using the Read/Write Split Router (readwritesplit)

    The ColumnStore nodes:

    • Receive queries from MaxScale

    • Execute queries

• Use S3-compatible object storage for data

    Requirements

    These requirements are for the ColumnStore Object Storage topology when deployed with MariaDB Enterprise Server 10.5, MariaDB Enterprise ColumnStore 5, and MariaDB MaxScale 2.5.

    • Node Count

    • Operating System

    • Minimum Hardware Requirements

    • Recommended Hardware Requirements

    Node Count

    • MaxScale nodes, 1 or more are required.

    • Enterprise ColumnStore nodes, 3 or more are required for high availability. You should always have an odd number of nodes in a multi-node ColumnStore deployment to avoid split brain scenarios.
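The odd-node recommendation follows from majority quorum: more than half the nodes must agree for the cluster to stay writable. As a general illustration (the helper function is ours, not a ColumnStore tool), note that adding a fourth node raises the quorum size without improving fault tolerance:

```shell
# Majority quorum for an n-node cluster: floor(n/2) + 1 agreeing nodes.
quorum() { echo $(( $1 / 2 + 1 )); }

quorum 3   # → 2: a 3-node cluster survives 1 node failure
quorum 4   # → 3: a 4-node cluster still survives only 1 failure
quorum 5   # → 3: a 5-node cluster survives 2 failures
```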

    Operating System

In alignment with MariaDB Corporation engineering policy, the ColumnStore Object Storage topology with MariaDB Enterprise Server 10.5, MariaDB Enterprise ColumnStore 5, and MariaDB MaxScale 2.5 is provided for:

    • CentOS Linux 7 (x86_64)

    • Debian 10 (x86_64)

    • Red Hat Enterprise Linux 7 (x86_64)

    • Red Hat Enterprise Linux 8 (x86_64)

    Minimum Hardware Requirements

MariaDB Enterprise ColumnStore's minimum hardware requirements are not intended for production environments, but they can be appropriate for development and test environments. For production environments, see the recommended hardware requirements instead.

    The minimum hardware requirements are:

    Component
    CPU
    Memory

    MariaDB Enterprise ColumnStore will refuse to start if the system has less than 3 GB of memory.

    If Enterprise ColumnStore is started on a system with less memory, the following error message will be written to the ColumnStore system log called crit.log:

    And the following error message will be raised to the client:
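As a quick pre-flight check, you can compare a node's total memory against this minimum before starting Enterprise ColumnStore (the helper below is a sketch of ours, not a ColumnStore utility; the 3 GB threshold comes from the error message above):

```shell
# Check total system memory against ColumnStore's 3 GB minimum (3145728 KiB).
MIN_KB=3145728

meets_min_mem() {
  [ "$1" -ge "$MIN_KB" ]
}

# /proc/meminfo is Linux-specific, so guard the read.
if [ -r /proc/meminfo ]; then
  total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
  if meets_min_mem "$total_kb"; then
    echo "ok: ${total_kb} KiB"
  else
    echo "insufficient: ${total_kb} KiB (minimum ${MIN_KB} KiB)"
  fi
fi
```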

    Recommended Hardware Requirements

    MariaDB Enterprise ColumnStore's recommended hardware requirements are intended for production analytics.

    The recommended hardware requirements are:

    Component
    CPU
    Memory

    Storage Requirements

    The ColumnStore Object Storage topology requires the following storage types:

    Storage Type
    Description

    S3-Compatible Object Storage Requirements

    The ColumnStore Object Storage topology uses S3-compatible object storage to store data.

    Many S3-compatible object storage services exist. MariaDB Corporation cannot make guarantees about all S3-compatible object storage services, because different services provide different functionality.

    For the preferred S3-compatible object storage providers that provide cloud and hardware solutions, see the following sections:

    The use of non-cloud and non-hardware providers is at your own risk.

    If you have any questions about using specific S3-compatible object storage with MariaDB Enterprise ColumnStore, contact us.

    Preferred Object Storage Providers: Cloud

    • Amazon Web Services (AWS) S3

    • Google Cloud Storage

    • Azure Storage

    • Alibaba Cloud Object Storage Service

    Preferred Object Storage Providers: Hardware

    • Cloudian HyperStore

    • Cohesity S3

    • Dell EMC

    • IBM Cloud Object Storage

    Shared Local Storage Directories

The ColumnStore Object Storage topology uses shared local storage for the Storage Manager directory to store metadata.

    The Storage Manager directory is located at the following path by default:

    • /var/lib/columnstore/storagemanager

    Shared Local Storage Options

    The most common shared local storage options for the ColumnStore Object Storage topology are:

    Shared Local Storage
    Common Usage
    Description

    Recommended Storage Options

    For best results, MariaDB Corporation would recommend the following storage options:

    Environment
    Object Storage For Data
    Shared Local Storage For Storage Manager

    Enterprise ColumnStore Management with CMAPI

    Enterprise ColumnStore's CMAPI (Cluster Management API) is a REST API that can be used to manage a multi-node Enterprise ColumnStore cluster.

    Many tools are capable of interacting with REST APIs. For example, the curl utility could be used to make REST API calls from the command-line.

    Many programming languages also have libraries for interacting with REST APIs.

    The examples below show how to use the CMAPI with curl.

    URL Endpoint Format for REST API

    For example:

    • https://mcs1:8640/cmapi/0.4.0/cluster/shutdown

    • https://mcs1:8640/cmapi/0.4.0/cluster/start

    • https://mcs1:8640/cmapi/0.4.0/cluster/status

    With CMAPI 1.4 and later:

    • https://mcs1:8640/cmapi/0.4.0/cluster/node

    With CMAPI 1.3 and earlier:

    • https://mcs1:8640/cmapi/0.4.0/cluster/add-node

    • https://mcs1:8640/cmapi/0.4.0/cluster/remove-node
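The endpoint URLs above all follow the format shown earlier. A small sketch of ours for composing them (8640 and 0.4.0 are the defaults used throughout this procedure; the function name is hypothetical):

```shell
# Build a CMAPI cluster endpoint URL:
# https://{server}:{port}/cmapi/{version}/{route}/{command}
cmapi_url() {
  local server="$1" command="$2"
  printf 'https://%s:8640/cmapi/0.4.0/cluster/%s\n' "$server" "$command"
}

cmapi_url mcs1 status     # → https://mcs1:8640/cmapi/0.4.0/cluster/status
cmapi_url mcs1 shutdown   # → https://mcs1:8640/cmapi/0.4.0/cluster/shutdown
```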

    Required Request Headers

    • 'x-api-key': '93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd'

    • 'Content-Type': 'application/json'

    x-api-key can be set to any value of your choice during the first call to the server. Subsequent connections will require this same key.
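Since the key in this procedure is generated with `openssl rand -hex 32`, it is always 64 lowercase hex characters. A small sanity check of ours (the helper name is hypothetical, not part of CMAPI):

```shell
# Verify that an x-api-key value looks like `openssl rand -hex 32` output:
# exactly 64 characters, all lowercase hex.
valid_api_key() {
  case "$1" in
    *[!0-9a-f]*) return 1 ;;
  esac
  [ "${#1}" -eq 64 ]
}

valid_api_key '93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
  && echo 'key format ok'   # → key format ok
```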

    Get Status

The curl examples remain valid, but they are now considered legacy; the mcs command-line tool shown below is the current interface.

    $ mcs cluster status

    Start Cluster

    $ mcs cluster start --timeout 20

    Stop Cluster

    $ mcs cluster shutdown --timeout 20

    Add Node

    • With CMAPI 1.4 and later:

    • With CMAPI 1.3 and earlier:

    Remove Node

    • With CMAPI 1.4 and later:

    • With CMAPI 1.3 and earlier:

    Quick Reference

    MariaDB Enterprise Server Configuration Management

    Method
    Description

    MariaDB Enterprise Server packages are configured to read configuration files from different paths, depending on the operating system. Making custom changes to Enterprise Server default configuration files is not recommended because custom changes may be overwritten by other default configuration files that are loaded later.

    To ensure that your custom changes will be read last, create a custom configuration file with the z- prefix in one of the include directories.
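For example, on CentOS or RHEL a custom file at /etc/my.cnf.d/z-custom-mariadb.cnf (see the per-distribution paths below) might contain settings drawn from this procedure's server configuration:

```ini
# /etc/my.cnf.d/z-custom-mariadb.cnf
# The z- prefix ensures this file is read after the default configuration files.
[mariadb]
log_error            = mariadbd.err
character_set_server = utf8
collation_server     = utf8_general_ci
```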

    Distribution
    Example Configuration File Path

    MariaDB Enterprise Server Service Management

    The systemctl command is used to start and stop the MariaDB Enterprise Server service.

    Operation
    Command


    MariaDB Enterprise Server Logs

    MariaDB Enterprise Server produces log data that can be helpful in problem diagnosis.

    Log filenames and locations may be overridden in the server configuration. The default location of logs is the data directory. The data directory is specified by the datadir system variable.
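For example, the active log locations can be inspected from SQL using standard MariaDB system variables (the values returned depend on your configuration):

```sql
SELECT @@datadir, @@log_error, @@slow_query_log_file;
```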

    Log
    System Variable/Option
    Default Filename

    Enterprise ColumnStore Service Management

    The systemctl command is used to start and stop the ColumnStore service.

    Operation
    Command

    In the ColumnStore Object Storage topology, the mariadb-columnstore service should not be enabled. The CMAPI service restarts Enterprise ColumnStore as needed, so it does not need to start automatically upon reboot.

    Enterprise ColumnStore CMAPI Service Management

    The systemctl command is used to start and stop the CMAPI service.

    Operation
    Command

    For additional information on endpoints, see "CMAPI".

    MaxScale Configuration Management

MaxScale can be configured using several methods. These methods make use of MaxScale's REST API.

    Method
    Benefits

    The procedure on these pages configures MaxScale using MaxCtrl.

    MaxScale Service Management

    The systemctl command is used to start and stop the MaxScale service.

    Operation
    Command


    Next Step

Navigation in the procedure "Deploy ColumnStore Object Storage Topology": the next page is Step 1, Prepare ColumnStore Nodes.

Copyright © 2025 MariaDB. All rights reserved.

    Start and Configure MariaDB MaxScale

    Test MariaDB MaxScale

    Import Data


  • Ubuntu 18.04 LTS (x86_64)

  • Ubuntu 20.04 LTS (x86_64)

  • Seagate Lyve Rack

  • Quantum ActiveScale

  • NFS (Network File System)

    On-premises

    • NFS is a distributed file system.

    • If NFS is used, the storage should be mounted with the sync option to ensure that each node flushes its changes immediately.

    • For on-premises deployments, NFS is the recommended option for the Storage Manager directory, and any S3-compatible storage is the recommended option for data.
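As an illustration, an /etc/fstab entry mounting the Storage Manager directory over NFS with the sync option might look like this (the NFS server name and export path are placeholders):

```
# /etc/fstab — hypothetical NFS server and export path
nfs1:/exports/storagemanager  /var/lib/columnstore/storagemanager  nfs  sync,hard  0  0
```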

    <hostname>-bin

    Bulk data import

  • Enterprise Server 10.5, Enterprise ColumnStore 5, MaxScale 2.5

  • Enterprise Server 10.6, Enterprise ColumnStore 23.02, MaxScale 22.08

  • Step 1

    Prepare ColumnStore Nodes

    Step 2

    Configure Shared Local Storage

    Step 3

    Install MariaDB Enterprise Server

    Step 4

    Start and Configure MariaDB Enterprise Server

    Step 5

    Test MariaDB Enterprise Server

    Step 6

    Install MariaDB MaxScale

    MariaDB Enterprise Server

    Modern SQL RDBMS with high availability, pluggable storage engines, hot online backups, and audit logging.

    MariaDB MaxScale

    Database proxy that extends the availability, scalability, and security of MariaDB Enterprise Servers

MariaDB Enterprise ColumnStore

    • Columnar storage engine

    • Highly available

    • Optimized for Online Analytical Processing (OLAP) workloads

    • Scalable query execution

• CMAPI provides a REST API for multi-node administration

    Listener

    Listens for client connections to MaxScale then passes them to the router service

    MariaDB Monitor

    Tracks changes in the state of MariaDB Enterprise Servers.

    Read Connection Router

    Routes connections from the listener to any available Enterprise ColumnStore node

    Read/Write Split Router

    Routes read operations from the listener to any available Enterprise ColumnStore node, and routes write operations from the listener to a specific server that MaxScale uses as the primary server

    Server Module

    Connection configuration in MaxScale to an Enterprise ColumnStore node

Minimum (development and test):

• MaxScale node: 4+ cores, 4+ GB memory

• Enterprise ColumnStore node: 4+ cores, 4+ GB memory

Recommended (production):

• MaxScale node: 8+ cores, 16+ GB memory

• Enterprise ColumnStore node: 64+ cores, 128+ GB memory

    S3-Compatible Object Storage

    The ColumnStore Object Storage topology uses S3-compatible object storage to store data.

    Shared Local Storage

    The ColumnStore Object Storage topology uses shared local storage for the Storage Manager directory to store metadata.

    EBS (Elastic Block Store) Multi-Attach

    AWS

    • EBS is a high-performance block-storage service for AWS (Amazon Web Services).

    • EBS Multi-Attach allows an EBS volume to be attached to multiple instances in AWS. Only clustered file systems, such as GFS2, are supported.

    • For deployments in AWS, EBS Multi-Attach is a recommended option for the Storage Manager directory, and Amazon S3 storage is the recommended option for data.

    EFS (Elastic File System)

    AWS

    • EFS is a scalable, elastic, cloud-native NFS file system for AWS (Amazon Web Services).

• For deployments in AWS, EFS is a recommended option for the Storage Manager directory, and Amazon S3 storage is the recommended option for data.

    Filestore

    GCP

    • Filestore is high-performance, fully managed storage for GCP (Google Cloud Platform).

    • For deployments in GCP, Filestore is the recommended option for the Storage Manager directory, and Google Object Storage (S3-compatible) is the recommended option for data.

GlusterFS

On-premises

• GlusterFS is a distributed file system.

• GlusterFS supports replication and failover.

• AWS: Amazon S3 storage for data; EBS Multi-Attach or EFS for the Storage Manager directory

• GCP: Google Object Storage (S3-compatible) for data; Filestore for the Storage Manager directory

• On-premises: any S3-compatible object storage for data; NFS for the Storage Manager directory

    Configuration File

    Configuration files (such as /etc/my.cnf) can be used to set system-variables and options. The server must be restarted to apply changes made to configuration files.

    Command-line

    The server can be started with command-line options that set system-variables and options.

    SQL

    Users can set system-variables that support dynamic changes on-the-fly using the SET statement.
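For example, a dynamic system variable can be changed on a running server and verified immediately (max_connections is used here purely as an illustration):

```sql
SET GLOBAL max_connections = 500;
SHOW GLOBAL VARIABLES LIKE 'max_connections';
```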

    Distribution

    Example Configuration File Path

    • CentOS

    • Red Hat Enterprise Linux (RHEL)

    /etc/my.cnf.d/z-custom-mariadb.cnf

    • Debian

    • Ubuntu

    /etc/mysql/mariadb.conf.d/z-custom-mariadb.cnf

    The mariadb service is managed with systemctl:

    Operation              | Command
    Start                  | sudo systemctl start mariadb
    Stop                   | sudo systemctl stop mariadb
    Restart                | sudo systemctl restart mariadb
    Enable during startup  | sudo systemctl enable mariadb
    Disable during startup | sudo systemctl disable mariadb
    Status                 | sudo systemctl status mariadb

    Default log file names for MariaDB Server:

    Log                          | Default File Name
    MariaDB Error Log            | <hostname>.err
    MariaDB Enterprise Audit Log | server_audit.log
    Slow Query Log               | <hostname>-slow.log
    General Query Log            | <hostname>.log

    The mariadb-columnstore service is managed with systemctl:

    Operation              | Command
    Start                  | sudo systemctl start mariadb-columnstore
    Stop                   | sudo systemctl stop mariadb-columnstore
    Restart                | sudo systemctl restart mariadb-columnstore
    Enable during startup  | sudo systemctl enable mariadb-columnstore
    Disable during startup | sudo systemctl disable mariadb-columnstore
    Status                 | sudo systemctl status mariadb-columnstore

    The mariadb-columnstore-cmapi service is managed with systemctl:

    Operation              | Command
    Start                  | sudo systemctl start mariadb-columnstore-cmapi
    Stop                   | sudo systemctl stop mariadb-columnstore-cmapi
    Restart                | sudo systemctl restart mariadb-columnstore-cmapi
    Enable during startup  | sudo systemctl enable mariadb-columnstore-cmapi
    Disable during startup | sudo systemctl disable mariadb-columnstore-cmapi
    Status                 | sudo systemctl status mariadb-columnstore-cmapi

    MaxCtrl

    MaxCtrl is a command-line utility that performs administrative tasks through the REST API. See MaxCtrl Commands.

    MaxGUI

    MaxGUI is a graphical utility that performs administrative tasks through the REST API.

    REST API

    The REST API can also be used directly. For example, the curl utility can make REST API calls from the command line, and many programming languages have libraries for interacting with REST APIs.
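
    As a sketch of direct REST API use, the snippet below builds the HTTP Basic Authorization header such a call needs. The host, port, and credentials here are placeholder assumptions (MaxScale's REST API listens on port 8989 by default) and must match your MaxScale configuration.

    ```python
    import base64

    # Placeholder assumptions: host, port, and credentials must match your
    # MaxScale configuration (the REST API listens on port 8989 by default).
    host, port = "maxscale1", 8989
    user, password = "admin", "mariadb"

    # HTTP Basic authentication: base64-encode "user:password".
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {"Authorization": f"Basic {token}"}

    # A client such as curl or an HTTP library would send this header to an
    # endpoint like https://{host}:{port}/v1/servers to list configured servers.
    print(headers["Authorization"])
    ```

    The same header can be passed to curl with --header, or supplied to an HTTP library in the language of your choice.
    
    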

    Start

    sudo systemctl start maxscale

    Stop

    sudo systemctl stop maxscale

    Restart

    sudo systemctl restart maxscale

    Enable during startup

    sudo systemctl enable maxscale

    Disable during startup

    sudo systemctl disable maxscale

    Status

    sudo systemctl status maxscale

    Apr 30 21:54:35 a1ebc96a2519 PrimProc[1004]: 35.668435 |0|0|0| C 28 CAL0000: Error total memory available is less than 3GB.
    ERROR 1815 (HY000): Internal error: System is not ready yet. Please try again.
    https://{server}:{port}/cmapi/{version}/{route}/{command}
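
    The endpoint template above can be sketched as a small helper; the function name is illustrative, not part of CMAPI:

    ```python
    # Hypothetical helper that fills in the CMAPI endpoint template
    # https://{server}:{port}/cmapi/{version}/{route}/{command}.
    def cmapi_url(server: str, port: int, version: str, route: str, command: str) -> str:
        return f"https://{server}:{port}/cmapi/{version}/{route}/{command}"

    # Reproduces the endpoint used by the cluster status example below.
    print(cmapi_url("mcs1", 8640, "0.4.0", "cluster", "status"))
    # https://mcs1:8640/cmapi/0.4.0/cluster/status
    ```
    
    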
    Get cluster status:

    $ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
          --header 'Content-Type:application/json' \
          --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
          | jq .

    Start the cluster:

    $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/start \
          --header 'Content-Type:application/json' \
          --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
          --data '{"timeout":20}' \
          | jq .

    Shut down the cluster:

    $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/shutdown \
          --header 'Content-Type:application/json' \
          --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
          --data '{"timeout":20}' \
          | jq .

    Add a node (PUT cluster/node):

    $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
          --header 'Content-Type:application/json' \
          --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
          --data '{"timeout":20, "node": "192.0.2.2"}' \
          | jq .

    Add a node (PUT cluster/add-node):

    $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/add-node \
          --header 'Content-Type:application/json' \
          --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
          --data '{"timeout":20, "node": "192.0.2.2"}' \
          | jq .

    Remove a node (DELETE cluster/node):

    $ curl -k -s -X DELETE https://mcs1:8640/cmapi/0.4.0/cluster/node \
          --header 'Content-Type:application/json' \
          --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
          --data '{"timeout":20, "node": "192.0.2.2"}' \
          | jq .

    Remove a node (PUT cluster/remove-node):

    $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/remove-node \
          --header 'Content-Type:application/json' \
          --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
          --data '{"timeout":20, "node": "192.0.2.2"}' \
          | jq .