Deploy Multi-Node MariaDB Enterprise ColumnStore 5 on Debian 10

MariaDB Enterprise ColumnStore is a columnar storage engine for MariaDB Enterprise Server, which may be deployed in a multi-node configuration for high availability (HA).

These instructions cover MariaDB Enterprise ColumnStore 5 with MariaDB Enterprise Server 10.5 and MariaDB MaxScale 6.1 on Debian 10 in a Multi-Node Enterprise ColumnStore Deployment configuration.

While these instructions cover Debian 10, instructions are also available that cover all supported Operating Systems, as well as other individual Operating Systems.

MariaDB Enterprise Components

These instructions detail the deployment of the following MariaDB Enterprise components:

Component

Description

MariaDB Enterprise Server 10.5

  • A 100% Open Source modern SQL database.

MariaDB Enterprise ColumnStore 5

  • A columnar storage engine for MariaDB Enterprise Server that provides distributed, columnar storage for scalable analytical processing and smart transactions (HTAP).

MariaDB MaxScale 6.1

  • A database proxy, firewall, and load balancer.

Term Definitions

Term

Definition

Columnar Database

  • A database where the columns of each row are stored separately.

  • Best suited for analytical and OLAP workloads.

  • Also known as a "column-oriented database".

Row Database

  • A database where all columns of each row are stored together.

  • Best suited for transactional and OLTP workloads.

  • Also known as a "row-oriented database".

Primary Server

  • The server that is designated to receive write queries in a MariaDB Replication deployment.

  • Formerly known as a "master".

  • In the examples below, the primary server will be mcs1.

Replica Server

  • A server that replicates from a primary server using MariaDB Replication.

  • Formerly known as a "slave".

  • In the examples below, the replica servers will be mcs2 and mcs3.

MaxScale Instance

  • A server that runs MariaDB MaxScale to proxy traffic for the Enterprise ColumnStore cluster.

  • In the examples below, the MaxScale instance will be mxs1.

Cluster Node

  • A server that is part of a multi-node Enterprise ColumnStore cluster.

  • In the examples below, the cluster nodes will be mcs1, mcs2, and mcs3.

MariaDB MaxScale

Multi-node Enterprise ColumnStore 5 was designed to take advantage of MaxScale.

MaxScale 6.1 is required for multi-node Enterprise ColumnStore 5.

MaxScale 6.1 provides load balancing through the Read/Write Split Router (readwritesplit) and high availability features, such as automatic failover and automatic rejoin, through MariaDB Monitor (mariadbmon).

The procedure below includes the following MaxScale deployment steps:

  1. Install MaxScale.

  2. Create the MaxScale User.

  3. Configure MaxScale.

High Availability

Multi-node Enterprise ColumnStore 5 supports high availability when it is deployed with MaxScale and an odd number of at least three cluster nodes.

It has the following limitations:

  • The cluster only has a single primary server, which is the only server that can accept writes. MaxScale's Read/Write Split Router (readwritesplit) transparently routes writes to the primary server, so this should not affect the application.

  • The cluster requires a minimum of 3 cluster nodes. The cluster also requires an odd number of cluster nodes. These requirements help prevent split-brain scenarios from occurring, which can cause failover to fail in some cases.

  • If the primary server fails, cluster activity may be paused for up to 2 minutes while failover occurs.

Mandatory Utility User Account

Multi-node Enterprise ColumnStore 5 requires a mandatory utility user account.

The procedure below includes the following steps to create and configure this user account:

  1. Configure the Mandatory Utility User Account.

  2. Create the Mandatory Utility User Account.

Optimize Linux Kernel Parameters

MariaDB Enterprise ColumnStore performs best when certain Linux kernel parameters are optimized.

  1. Set the relevant kernel parameters in a sysctl configuration file. For proper change management, we recommend setting them in an Enterprise ColumnStore-specific configuration file.

    For example, create a /etc/sysctl.d/90-mariadb-enterprise-columnstore.conf file with the following contents:

    # minimize swapping
    vm.swappiness = 1
    
    # Increase the TCP max buffer size
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    
    # Increase the TCP buffer limits
    # min, default, and max number of bytes to use
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    
    # don't cache ssthresh from previous connection
    net.ipv4.tcp_no_metrics_save = 1
    
    # for 1 GigE, increase this to 2500
    # for 10 GigE, increase this to 30000
    net.core.netdev_max_backlog = 2500
    
  2. Set the same kernel parameters at runtime using the sysctl command:

    $ sudo sysctl --load=/etc/sysctl.d/90-mariadb-enterprise-columnstore.conf
    

LSM Configuration for Install

To avoid confusion and potential problems, we recommend temporarily disabling the system's Linux Security Module (LSM) on each cluster node during installation. The specific steps to disable the security module will depend on the platform.

In the Configure the Linux Security Module section, we will configure the security module and restart it.

AppArmor (Debian/Ubuntu/SLES)

AppArmor must be disabled before installing MariaDB Enterprise ColumnStore.

  1. Disable AppArmor:

    $ sudo systemctl disable apparmor
    
  2. Reboot the system.

  3. Confirm that no AppArmor profiles are loaded using aa-status:

    $ sudo aa-status
    

    Example output:

    apparmor module is loaded.
    0 profiles are loaded.
    0 profiles are in enforce mode.
    0 profiles are in complain mode.
    0 processes have profiles defined.
    0 processes are in enforce mode.
    0 processes are in complain mode.
    0 processes are unconfined but have a profile defined.
    

Note

Remember to configure and re-enable AppArmor after the installation is complete.

Firewall Considerations

MariaDB Enterprise ColumnStore requires the following TCP ports:

  • 3306: Port used for MariaDB Client traffic

  • 8600-8630: Port range used for inter-node communication

  • 8640: Port used by CMAPI

  • 8700: Port used for inter-node communication

  • 8800: Port used for inter-node communication

It is recommended to temporarily stop the system's firewall service on each cluster node during installation to avoid confusion and potential problems. The specific steps to stop the firewall will depend on the platform.
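
For example, on a Debian 10 cluster node that uses UFW (Uncomplicated Firewall), the firewall could be stopped temporarily as shown below. This is only an illustration; if your environment uses a different firewall, the commands will differ:

$ sudo systemctl stop ufw
$ sudo systemctl status ufw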

In the Configure the Firewall Service section, we will configure the firewall and restart it.

If you are on a cloud platform, such as AWS, you may need to modify the security group or firewall rules for the cloud instance.

Configure the AWS Security Group

If your Enterprise ColumnStore cluster is in Amazon Web Services, you will need to modify the cluster's security group prior to installation.

Enterprise ColumnStore requires all internal communications to be open between cluster nodes. Therefore, the security group should allow all protocols and all ports to be open between the Enterprise ColumnStore nodes and the MaxScale proxy.
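
For example, with the AWS CLI, a self-referencing rule that opens all protocols and ports between members of the same security group could be added as shown below. The security group ID is a placeholder; adapt the rule (or create it through the AWS Console) for your environment:

$ aws ec2 authorize-security-group-ingress \
   --group-id sg-0123456789abcdef0 \
   --protocol=-1 \
   --source-group sg-0123456789abcdef0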

Configure the Character Encoding

When using MariaDB Enterprise ColumnStore, it is recommended to set the system's locale to UTF-8.

  1. Set the system's locale to en_US.UTF-8 by executing localedef:

    $ sudo localedef -i en_US -f UTF-8 en_US.UTF-8
    

Configure DNS

MariaDB Enterprise ColumnStore requires all cluster nodes to have host names that are resolvable on all other nodes. If your infrastructure does not configure DNS centrally, you may need to configure static DNS entries in each server's /etc/hosts file.

On each cluster node, edit the /etc/hosts file to map host names to the IP address of each cluster node:

192.0.2.1     mcs1
192.0.2.2     mcs2
192.0.2.3     mcs3
192.0.2.100   mxs1

Note

Replace the IP addresses with the addresses in your own environment.

Storage Options

MariaDB Enterprise ColumnStore supports multiple storage types:

Storage Type

Description

S3-Compatible Object Storage

  • S3-compatible object storage is optional, but recommended

  • Enterprise ColumnStore can use S3-compatible object storage to store data.

  • With multi-node Enterprise ColumnStore, the Storage Manager directory should use shared local storage for high availability.

Shared Local Storage

  • Required for multi-node Enterprise ColumnStore with high availability

  • Enterprise ColumnStore can use shared local storage to store data and metadata.

  • If S3-compatible storage is used for data, the shared local storage will only be used for the Storage Manager directory.

Non-Shared Local Storage

  • Appropriate for single-node Enterprise ColumnStore

  • Enterprise ColumnStore can use non-shared local storage to store data and metadata.

S3-Compatible Object Storage

MariaDB Enterprise ColumnStore supports S3-compatible object storage.

S3-compatible object storage is optional, but highly recommended. If S3-compatible object storage is used, Enterprise ColumnStore requires the Storage Manager directory to use shared local storage (such as NFS) for high availability.

S3-compatible object storage is:

  • Compatible: Many object storage services are compatible with the Amazon S3 API.

  • Economical: S3-compatible object storage is often very low cost.

  • Flexible: S3-compatible object storage is available for both cloud and on-premises deployments.

  • Limitless: S3-compatible object storage is often virtually limitless.

  • Resilient: S3-compatible object storage is often low maintenance and highly available, since many services use resilient cloud infrastructure.

  • Scalable: S3-compatible object storage is often highly optimized for read and write scaling.

  • Secure: S3-compatible object storage is often encrypted-at-rest.

Many S3-compatible object storage services exist. MariaDB Corporation cannot make guarantees about all S3-compatible object storage services, because different services provide different functionality.

If you have any questions about using specific S3-compatible object storage with MariaDB Enterprise ColumnStore, contact us.

S3-Compatible Storage for On-Premises Deployments

In addition to cloud deployments, S3-compatible storage is also available for on-premises deployments. Many storage vendors provide S3-compatible storage for on-premises deployments. MariaDB Corporation does not recommend a specific storage vendor.

If you have any questions, please contact us.

Create an S3 Bucket

If you want to use S3-compatible storage, it is important to create the S3 bucket before you start ColumnStore. All cluster nodes access data from the same bucket.

If you already have an S3 bucket, confirm that the bucket is empty.

We will configure Enterprise ColumnStore to use the S3 bucket later in the Configure the S3 Storage Manager section.

Note

The specific steps to create the S3 bucket will depend on what S3-compatible storage you are using.
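
For example, if you are using Amazon S3 and the AWS CLI is already configured, an empty bucket could be created and verified as shown below. The bucket name and region are placeholders, and other S3-compatible services have their own tooling:

$ aws s3 mb s3://your_columnstore_bucket_name --region us-east-1
$ aws s3 ls s3://your_columnstore_bucket_name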

Shared Local Storage

MariaDB Enterprise ColumnStore can use shared local storage.

Shared local storage is required for high availability. The specific shared local storage requirements depend on whether Enterprise ColumnStore is configured to use S3-compatible object storage:

  • When S3-compatible object storage is used, Enterprise ColumnStore requires the Storage Manager directory to use shared local storage for high availability.

  • When S3-compatible object storage is not used, Enterprise ColumnStore requires the DB Root directories to use shared local storage for high availability.

The most common shared local storage options for on-premises and cloud deployments are:

  • NFS (Network File System)

  • GlusterFS

The most common shared local storage options for AWS (Amazon Web Services) deployments are:

  • EBS (Elastic Block Store) Multi-Attach

  • EFS (Elastic File System)

The most common shared local storage option for GCP (Google Cloud Platform) deployments is:

  • Filestore

Shared Local Storage Options

The most common options for shared local storage are:

Shared Local Storage

Description

EBS (Elastic Block Store) Multi-Attach

  • EBS is a high-performance block-storage service for AWS (Amazon Web Services).

  • EBS Multi-Attach allows an EBS volume to be attached to multiple instances in AWS. Only clustered file systems, such as GFS2, are supported.

  • For deployments in AWS, EBS Multi-Attach is a recommended option for the Storage Manager directory, and Amazon S3 storage is the recommended option for data.

EFS (Elastic File System)

  • EFS is a scalable, elastic, cloud-native NFS file system for AWS (Amazon Web Services).

  • For deployments in AWS, EFS is a recommended option for the Storage Manager directory, and Amazon S3 storage is the recommended option for data.

Filestore

  • Filestore is a fully managed NFS file storage service for GCP (Google Cloud Platform).

  • For deployments in GCP, Filestore is a recommended option for the Storage Manager directory, and S3-compatible object storage is the recommended option for data.

NFS (Network File System)

  • NFS is a distributed file system.

  • If NFS is used, the storage should be mounted with the sync option (see the example mount entry after this list).

  • For on-premises deployments, NFS is the recommended option for the Storage Manager directory, and any S3-compatible storage is the recommended option for data.

GlusterFS

  • GlusterFS is a stable and well-designed distributed file system.

  • GlusterFS supports replication and failover.
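
As an illustration of the NFS sync requirement mentioned above, the shared Storage Manager directory could be mounted on each cluster node with an /etc/fstab entry along the following lines. The NFS server address and export path are placeholders for your environment:

# Example /etc/fstab entry on each cluster node (placeholder server and export)
192.0.2.50:/exports/columnstore-sm   /var/lib/columnstore/storagemanager   nfs   sync,hard   0   0

After adding the entry, mount it on each cluster node:

$ sudo mount /var/lib/columnstore/storagemanager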

Directories Requiring Shared Local Storage for HA

Multi-node MariaDB Enterprise ColumnStore requires some directories to use shared local storage for high availability. The specific requirements depend on whether MariaDB Enterprise ColumnStore is configured to use S3-compatible object storage:

  • Using S3-compatible object storage: the Storage Manager directory must use shared local storage.

  • Not using S3-compatible object storage: the DB Root directories must use shared local storage.

Enterprise ColumnStore Installation

MariaDB Corporation provides an APT package repository for Debian 10.

MariaDB Enterprise ColumnStore ships as a storage engine plugin for MariaDB Enterprise Server and a platform engine to handle back-end storage processes. MariaDB Enterprise Server 10.5 does not require any additional software to operate as a single-node analytics database. However, MaxScale is needed for a multi-node analytics database and for high availability.

Install Enterprise ColumnStore on Debian/Ubuntu (APT)

On each cluster node, perform the following steps to install Enterprise ColumnStore.

  1. Retrieve your Customer Download Token at https://customers.mariadb.com/downloads/token/ and substitute for CUSTOMER_DOWNLOAD_TOKEN in the following directions.

  2. Configure the APT package repository.

    MariaDB Enterprise ColumnStore 5 is available on MariaDB Enterprise Server 10.5.

    To configure APT package repositories:

    $ sudo apt install wget
    
    $ wget https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
    
    $ echo "b741361ea3a0a9fcaa30888a63ff3a8a4021882f126cf4ef26cf616493a29315  mariadb_es_repo_setup" \
        | sha256sum -c -
    
    $ chmod +x mariadb_es_repo_setup
    
    $ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
       --mariadb-server-version="10.5" --mariadb-maxscale-version=6.1 \
       --skip-tools
    
    $ sudo apt update
    
  3. Install additional dependencies for Enterprise ColumnStore's CMAPI Server.

    On Debian 10, install the following:

    $ sudo apt install python python-apt python-pymysql python-pip \
       python3 python3-apt python3-pymysql python3-pip \
       aptitude libjemalloc2 jq curl
    
  4. Install MariaDB Enterprise ColumnStore and package dependencies:

    $ sudo apt install mariadb-server \
       mariadb-backup \
       libmariadb3 \
       mariadb-client \
       mariadb-plugin-columnstore \
       mariadb-columnstore-cmapi
    
  5. Configure MariaDB Enterprise ColumnStore.

    Installation only loads MariaDB Enterprise ColumnStore to the system. MariaDB Enterprise ColumnStore requires configuration and additional post-installation steps before the database server is ready for use.

Enterprise ColumnStore Configuration

MariaDB Enterprise ColumnStore requires configuration after it is installed. The configuration file location depends on your operating system.

Enterprise Server Configuration

MariaDB Enterprise Server can be configured in the following ways:

  • System variables and options can be set in a configuration file (such as /etc/my.cnf). MariaDB Enterprise Server must be restarted to apply changes made to the configuration file.

  • System variables and options can be set on the command-line.

  • If a system variable supports dynamic changes, it can be set on-the-fly using the SET statement.
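
For example, a dynamic system variable could be changed at runtime as shown below. The variable and value are only an illustration; the change takes effect immediately but is lost at restart unless it is also added to a configuration file:

$ sudo mariadb -e "SET GLOBAL max_connections=500;"
$ sudo mariadb -e "SHOW GLOBAL VARIABLES LIKE 'max_connections';"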

Configuration Files

MariaDB's packages include several bundled configuration files. It is also possible to create custom configuration files.

On Debian and Ubuntu, MariaDB's packages bundle the following configuration files:

  • /etc/mysql/my.cnf

  • /etc/mysql/mariadb.cnf

  • /etc/mysql/mariadb.conf.d/50-client.cnf

  • /etc/mysql/mariadb.conf.d/50-mysql-clients.cnf

  • /etc/mysql/mariadb.conf.d/50-mysqld_safe.cnf

  • /etc/mysql/mariadb.conf.d/50-server.cnf

  • /etc/mysql/mariadb.conf.d/60-galera.cnf

  • /etc/mysql/mariadb.conf.d/mariadb-enterprise.cnf

And on Debian and Ubuntu, custom configuration files from the following directories are read by default:

  • /etc/mysql/conf.d/

  • /etc/mysql/mariadb.conf.d/

Configure MariaDB for Multi-node Enterprise ColumnStore

Configure MariaDB Enterprise Server by performing the following process on all cluster nodes.

  1. Determine which system variables and options you need to configure.

    Mandatory system variables and options for multi-node Enterprise ColumnStore include:

    • character_set_server: Set this system variable to utf8.

    • collation_server: Set this system variable to utf8_general_ci.

    • columnstore_use_import_for_batchinsert: Set this system variable to ALWAYS to always use cpimport for LOAD DATA INFILE and INSERT...SELECT statements.

    • gtid_strict_mode: Set this system variable to ON.

    • log_bin: Set this option to enable the Binary Log. This is a file path.

    • log_bin_index: Set this option to track the names of the Binary Log files. This is a file path.

    • log_slave_updates: Set this system variable to ON.

    • relay_log: Set this option to enable Relay Logs. This is a file path.

    • relay_log_index: Set this option to track the names of the Relay Log files. This is a file path.

    • server_id: Sets the numeric Server ID for this MariaDB Enterprise Server. Must be unique for each cluster node in the deployment.

  2. Choose a configuration file in which to configure your system variables and options.

    We recommend not making custom changes to one of the bundled configuration files. Instead, create a custom configuration file in one of the included directories. Configuration files in included directories are read in alphabetical order. If you want your custom configuration file to override the bundled configuration files, it is a good idea to prefix the custom configuration file's name with a string that will be sorted last, such as z-.

    • On Debian and Ubuntu, a good custom configuration file would be: /etc/mysql/mariadb.conf.d/z-custom-my.cnf

  3. Set your system variables and options in the configuration file.

    They need to be set in a group that will be read by mariadbd, such as [mariadb] or [server].

    For example:

    [mariadb]
    log_error                              = mariadbd.err
    character_set_server                   = utf8
    collation_server                       = utf8_general_ci
    log_bin                                = mariadb-bin
    log_bin_index                          = mariadb-bin.index
    relay_log                              = mariadb-relay
    relay_log_index                        = mariadb-relay.index
    log_slave_updates                      = ON
    gtid_strict_mode                       = ON
    columnstore_use_import_for_batchinsert = ALWAYS
    
    # This must be unique on each cluster node
    server_id                              = 1
    
  4. On Debian and Ubuntu, a bundled configuration file must be modified for multi-node Enterprise ColumnStore.

    In /etc/mysql/mariadb.conf.d/50-server.cnf, comment out the line that sets the bind_address system variable to 127.0.0.1:

    # Commented out for multi-node Enterprise ColumnStore
    # bind-address            = 127.0.0.1
    

Configure the Mandatory Utility User Account

Multi-node Enterprise ColumnStore requires a mandatory utility user account. By default, it connects to the server using the root user with no password. MariaDB Enterprise Server 10.5 will reject this login attempt by default, so you will need to configure Enterprise ColumnStore to use a different user account and password.

The following directions are for configuring Enterprise ColumnStore to use a different user account and password using the mcsSetConfig command. Directions for creating the user account are in the Create the Mandatory Utility User Account section.

To configure cross engine joins, use the mcsSetConfig command.

For example, to configure Enterprise ColumnStore to use the cross_engine user account to connect to the server at 127.0.0.1:

$ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
$ sudo mcsSetConfig CrossEngineSupport Port 3306
$ sudo mcsSetConfig CrossEngineSupport User cross_engine
$ sudo mcsSetConfig CrossEngineSupport Password cross_engine_passwd

Passwords should meet your organization's password policies. If your MariaDB Enterprise Server instance has a password validation plugin installed, then the password should also meet the configured requirements.

By default, MariaDB Enterprise Server installs the simple_password_check plugin. Its requirements are configured by the simple_password_check_digits, simple_password_check_letters_same_case, simple_password_check_minimal_length, and simple_password_check_other_characters system variables.
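
To review the password requirements currently in effect, the plugin's system variables can be inspected, for example:

$ sudo mariadb -e "SHOW GLOBAL VARIABLES LIKE 'simple_password_check%';"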

Configure the S3 Storage Manager

If you want to use S3-compatible storage for Enterprise ColumnStore, you must configure Enterprise ColumnStore's Storage Manager to use it.

To configure Enterprise ColumnStore to use S3-compatible storage, edit /etc/columnstore/storagemanager.cnf:

[ObjectStorage]

service = S3

[S3]
bucket = your_columnstore_bucket_name
endpoint = your_s3_endpoint
aws_access_key_id = your_s3_access_key_id
aws_secret_access_key = your_s3_secret_key
# iam_role_name = your_iam_role
# sts_region = your_sts_region
# sts_endpoint = your_sts_endpoint
# ec2_iam_mode=enabled

[Cache]
cache_size = your_local_cache_size
path = your_local_cache_path

The S3-compatible object storage options are configured under [S3]:

  • The bucket option must be set to the name of the bucket that you created in the Create an S3 Bucket step.

  • The endpoint option must be set to the endpoint for the S3-compatible object storage.

  • The aws_access_key_id and aws_secret_access_key options must be set to the access key ID and secret access key for the S3-compatible object storage.

  • To use a specific IAM role, you must uncomment and set iam_role_name, sts_region, and sts_endpoint.

  • To use the IAM role assigned to an EC2 instance, you must uncomment ec2_iam_mode=enabled.

The local cache options are configured under [Cache]:

  • The cache_size option is set to 2 GB by default.

  • The path option is set to /var/lib/columnstore/storagemanager/cache by default.

Ensure that the specified path has sufficient storage space for the specified cache size.
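
As an illustration, a filled-in configuration for Amazon S3 might look like the following. Every value shown (bucket name, endpoint, credentials, and cache settings) is a placeholder to adapt to your environment and your S3-compatible service:

[ObjectStorage]
service = S3

[S3]
bucket = my-columnstore-bucket
endpoint = s3.us-east-1.amazonaws.com
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[Cache]
cache_size = 2g
path = /var/lib/columnstore/storagemanager/cache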

Start the Enterprise ColumnStore Services

The Enterprise Server, Enterprise ColumnStore, and CMAPI services can be started using the systemctl command. In case the services were started during the installation process, use the restart command to ensure that the services pick up the new configuration.

Perform the following procedure on all nodes:

  1. Start and enable the MariaDB Enterprise Server service, so that it starts automatically upon reboot:

    $ sudo systemctl restart mariadb
    $ sudo systemctl enable mariadb
    
  2. Start and disable the MariaDB Enterprise ColumnStore service, so that it does not start automatically upon reboot:

    $ sudo systemctl restart mariadb-columnstore
    $ sudo systemctl disable mariadb-columnstore
    

    Note

    The Enterprise ColumnStore service should not be enabled in a multi-node deployment. The Enterprise ColumnStore service will be started as-needed by the CMAPI service, so it does not need to start automatically upon reboot.

  3. Start and enable the CMAPI service, so that it starts automatically upon reboot:

    $ sudo systemctl restart mariadb-columnstore-cmapi
    $ sudo systemctl enable mariadb-columnstore-cmapi
    

Create User Accounts

Several user accounts need to be created on the primary server.

Create the Mandatory Utility User Account

The credentials for the mandatory utility user account were previously configured in the Configure the Mandatory Utility User Account section. The user account must also be created and granted the necessary privileges to access data.

The user account only needs to be created on the primary server. It will be replicated to the replica servers.

  1. Connect to the primary server using MariaDB Client using the root@localhost user account:

    $ sudo mariadb
    
  2. Create the user account with the CREATE USER statement:

    CREATE USER 'cross_engine'@'127.0.0.1'
       IDENTIFIED BY "cross_engine_passwd";
    

    Note

    Choose the same user name and password that was configured in the Configure the Mandatory Utility User Account section.

  3. Grant the user account SELECT privileges on all databases with the GRANT statement:

    GRANT SELECT, PROCESS ON *.*
       TO 'cross_engine'@'127.0.0.1';
    

Create the Replication User

The deployment requires MariaDB Replication, so a replication user account must be created and granted sufficient privileges to perform replication.

The user account only needs to be created on the primary server. It will be replicated to the replica servers.

  1. Connect to the primary server using MariaDB Client using the root@localhost user account:

    $ sudo mariadb
    
  2. Create the user account with the CREATE USER statement:

    CREATE USER 'repl'@'192.0.2.%'
       IDENTIFIED BY 'passwd';
    

    Replace the referenced IP address with the relevant address for your environment.

    Ensure that the user account can connect from the IP address of all replicas.

    Note

    Please choose a password that meets your organization's password policies. If your MariaDB Enterprise Server instance has a password validation plugin installed, then the password should also meet the configured requirements.

    By default, MariaDB Enterprise Server installs the simple_password_check plugin. Its requirements are configured by the simple_password_check_digits, simple_password_check_letters_same_case, simple_password_check_minimal_length, and simple_password_check_other_characters system variables.

  3. Grant the user account several global privileges with the GRANT statement.

    The required permissions depend on the version of MariaDB Enterprise Server that is being used.

    In ES 10.5.8-5 and later, the following privileges are required:

    GRANT REPLICA MONITOR,
       REPLICATION REPLICA,
       REPLICATION REPLICA ADMIN,
       REPLICATION MASTER ADMIN
       ON *.*
       TO 'repl'@'192.0.2.%';
    

    In ES 10.5.6-4 and earlier, the following privileges are required:

    GRANT BINLOG MONITOR,
       REPLICATION REPLICA,
       REPLICATION REPLICA ADMIN,
       REPLICATION MASTER ADMIN
       ON *.*
       TO 'repl'@'192.0.2.%';
    
  4. This user account will be used to configure replication in the Configure MariaDB Replication section.

Create the MaxScale User

The deployment requires MariaDB MaxScale, so a MaxScale user account must be created and granted sufficient privileges to manage and monitor the cluster.

The user account only needs to be created on the primary server. It will be replicated to the replica servers.

  1. Connect to the primary server using MariaDB Client using the root@localhost user account:

    $ sudo mariadb
    
  2. Create the user account with the CREATE USER statement:

    CREATE USER 'mxs'@'192.0.2.%'
       IDENTIFIED BY 'passwd';
    

    Replace the referenced IP address with the relevant address for your environment.

    Ensure that the user account can connect from the IP address of the MaxScale instance.

    Note

    Please choose a password that meets your organization's password policies. If your MariaDB Enterprise Server instance has a password validation plugin installed, then the password should also meet the configured requirements.

    By default, MariaDB Enterprise Server installs the simple_password_check plugin. Its requirements are configured by the simple_password_check_digits, simple_password_check_letters_same_case, simple_password_check_minimal_length, and simple_password_check_other_characters system variables.

  3. Grant the user account several global privileges and the SELECT privilege on the system database with the GRANT statement.

    The required permissions depend on the version of MariaDB Enterprise Server that is being used.

    In ES 10.5.8-5 and later, the following privileges are required:

    GRANT BINLOG ADMIN,
       READ_ONLY ADMIN,
       RELOAD,
       REPLICA MONITOR,
       REPLICATION MASTER ADMIN,
       REPLICATION REPLICA ADMIN,
       REPLICATION REPLICA,
       SHOW DATABASES
       ON *.*
       TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT
       ON *.*
       TO 'mxs'@'192.0.2.%';
    

    In ES 10.5.6-4 and earlier, the following privileges are required:

    GRANT BINLOG ADMIN,
       BINLOG MONITOR,
       READ_ONLY ADMIN,
       RELOAD,
       REPLICATION MASTER ADMIN,
       REPLICATION REPLICA ADMIN,
       REPLICATION REPLICA,
       SHOW DATABASES
       ON *.*
       TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT
       ON *.*
       TO 'mxs'@'192.0.2.%';
    
  4. This user account will be used to configure MaxScale in the Configure MaxScale section.

Prepare Cluster

A few steps need to be performed to add the cluster nodes to the cluster.

Configure MariaDB Replication

Multi-node Enterprise ColumnStore requires MariaDB Replication, which must be configured. Perform the following steps on each replica server.

  1. Connect to the replica server using MariaDB Client using the root@localhost user account:

    $ sudo mariadb
    
  2. Use the CHANGE MASTER TO statement to configure the replica server to replicate from the primary server:

    CHANGE MASTER TO
       MASTER_HOST='192.0.2.1',
       MASTER_USER='repl',
       MASTER_PASSWORD='passwd',
       MASTER_USE_GTID=slave_pos;
    
  3. Start replication using the START REPLICA statement:

    START REPLICA;
    
  4. Confirm that replication is working using the SHOW REPLICA STATUS statement:

    SHOW REPLICA STATUS;
    
  5. Ensure that the replica server cannot accept local writes by setting the read_only system variable to ON using the SET GLOBAL statement:

    SET GLOBAL read_only=ON;
    

Initiate Primary Server using CMAPI

The primary server needs to be initiated using the CMAPI. For more information about the CMAPI, see Working with CMAPI.

  1. Create an API key for the cluster.

    This API key should be stored securely and kept confidential, because it can be used to add cluster nodes to the multi-node Enterprise ColumnStore deployment.

    For example, to create a random 256-bit API key using openssl rand:

    $ openssl rand -hex 32
    93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd
    

    This document will use the following API key in further examples, but users should create their own instead: 93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd

  2. Add the primary server to the cluster and set the API key using curl to send the command to the CMAPI Server.

    The new API key needs to be provided in the X-API-key HTTP header.

    The output can be piped to jq, so that the JSON results are pretty-printed.

    For example, if the primary server's host name is mcs1 and its IP address is 192.0.2.1:

    • With CMAPI 1.4 and later, send the node command:

      $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
         --header 'Content-Type:application/json' \
         --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
         --data '{"timeout":120, "node": "192.0.2.1"}' \
         | jq .
      
    • With CMAPI 1.3 and earlier, send the add-node command:

      $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/add-node \
         --header 'Content-Type:application/json' \
         --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
         --data '{"timeout":120, "node": "192.0.2.1"}' \
         | jq .
      

    Example output:

    {
      "timestamp": "2020-10-28 00:39:14.672142",
      "node_id": "192.0.2.1"
    }
    
  3. Check the status of the new cluster node by using curl to send the status command to the CMAPI Server:

    $ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       | jq .
    

    Example output:

    {
      "timestamp": "2020-12-15 00:40:34.353574",
      "192.0.2.1": {
        "timestamp": "2020-12-15 00:40:34.362374",
        "uptime": 11467,
        "dbrm_mode": "master",
        "cluster_mode": "readwrite",
        "dbroots": [
          "1"
        ],
        "module_id": 1,
        "services": [
          {
            "name": "workernode",
            "pid": 19202
          },
          {
            "name": "controllernode",
            "pid": 19232
          },
          {
            "name": "PrimProc",
            "pid": 19254
          },
          {
            "name": "ExeMgr",
            "pid": 19292
          },
          {
            "name": "WriteEngine",
            "pid": 19316
          },
          {
            "name": "DMLProc",
            "pid": 19332
          },
          {
            "name": "DDLProc",
            "pid": 19366
          }
        ]
      }
    }
    

Initiate Replica Servers using CMAPI

The replica servers need to be initiated using the CMAPI. For more information about the CMAPI, see Working with CMAPI.

  1. For each replica server, use curl to send the command to the primary server's CMAPI Server.

    The previously set API key needs to be provided in the X-API-key HTTP header.

    The output can be piped to jq, so that the JSON results are pretty-printed.

    For example, if the primary server's host name is mcs1 and the replica server's IP address is 192.0.2.2:

    • With CMAPI 1.4 and later, send the node command:

      $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
         --header 'Content-Type:application/json' \
         --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
         --data '{"timeout":120, "node": "192.0.2.2"}' \
         | jq .
      
    • With CMAPI 1.3 and earlier, send the add-node command:

      $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/add-node \
         --header 'Content-Type:application/json' \
         --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
         --data '{"timeout":120, "node": "192.0.2.2"}' \
         | jq .
      

    Example output:

    {
      "timestamp": "2020-10-28 00:42:42.796050",
      "node_id": "192.0.2.2"
    }
    
  2. After all replica servers have been added, use curl to send the status command to the CMAPI Server.

    The previously set API key needs to be provided in the X-API-key HTTP header.

    The output can be piped to jq, so that the JSON results are pretty-printed.

    For example, if the primary server's host name is mcs1:

    $ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       | jq .
    

    Use the output to confirm that all cluster nodes have been successfully added.

    Example output:

    {
      "timestamp": "2020-12-15 00:40:34.353574",
      "192.0.2.1": {
        "timestamp": "2020-12-15 00:40:34.362374",
        "uptime": 11467,
        "dbrm_mode": "master",
        "cluster_mode": "readwrite",
        "dbroots": [
          "1"
        ],
        "module_id": 1,
        "services": [
          {
            "name": "workernode",
            "pid": 19202
          },
          {
            "name": "controllernode",
            "pid": 19232
          },
          {
            "name": "PrimProc",
            "pid": 19254
          },
          {
            "name": "ExeMgr",
            "pid": 19292
          },
          {
            "name": "WriteEngine",
            "pid": 19316
          },
          {
            "name": "DMLProc",
            "pid": 19332
          },
          {
            "name": "DDLProc",
            "pid": 19366
          }
        ]
      },
      "192.0.2.2": {
        "timestamp": "2020-12-15 00:40:34.428554",
        "uptime": 11437,
        "dbrm_mode": "slave",
        "cluster_mode": "readonly",
        "dbroots": [
          "2"
        ],
        "module_id": 2,
        "services": [
          {
            "name": "workernode",
            "pid": 17789
          },
          {
            "name": "PrimProc",
            "pid": 17813
          },
          {
            "name": "ExeMgr",
            "pid": 17854
          },
          {
            "name": "WriteEngine",
            "pid": 17877
          }
        ]
      },
      "192.0.2.3": {
        "timestamp": "2020-12-15 00:40:34.428554",
        "uptime": 11437,
        "dbrm_mode": "slave",
        "cluster_mode": "readonly",
        "dbroots": [
          "2"
        ],
        "module_id": 2,
        "services": [
          {
            "name": "workernode",
            "pid": 17789
          },
          {
            "name": "PrimProc",
            "pid": 17813
          },
          {
            "name": "ExeMgr",
            "pid": 17854
          },
          {
            "name": "WriteEngine",
            "pid": 17877
          }
        ]
      },
      "num_nodes": 3
    }
    

Install MaxScale

After the cluster is set up, you need to install MaxScale.

MariaDB Corporation provides an APT package repository for Debian 10.

Install MaxScale on Debian/Ubuntu (APT)

On the MaxScale instance, perform the following steps to install MaxScale.

  1. Retrieve your Customer Download Token at https://customers.mariadb.com/downloads/token/ and substitute for CUSTOMER_DOWNLOAD_TOKEN in the following directions.

  2. Configure the APT package repository.

    Enterprise ColumnStore 5 requires MaxScale 6.1. Pass the version to install using the --mariadb-maxscale-version flag to the mariadb_es_repo_setup script.

    To configure APT package repositories:

    $ sudo apt install wget
    
    $ wget https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
    
    $ echo "b741361ea3a0a9fcaa30888a63ff3a8a4021882f126cf4ef26cf616493a29315  mariadb_es_repo_setup" \
        | sha256sum -c -
    
    $ chmod +x mariadb_es_repo_setup
    
    $ sudo ./mariadb_es_repo_setup --token="CUSTOMER_DOWNLOAD_TOKEN" --apply \
       --mariadb-maxscale-version="6.1"
    
    $ sudo apt update
    
  3. Install MariaDB MaxScale and package dependencies:

    $ sudo apt install maxscale
    
  4. Configure MaxScale.

    Installation only loads MariaDB MaxScale to the system. MariaDB MaxScale requires configuration before MaxScale is ready for use.

Configure MaxScale

Multi-node Enterprise ColumnStore deployments require MaxScale, which must be configured.

Replace Default Configuration

Replace the default /etc/maxscale.cnf with the following configuration:

[maxscale]
threads=auto
admin_host=0.0.0.0
admin_secure_gui=false

Start the MaxScale Service

The MaxScale service must be started.

In case the service was started during the installation process, use the restart command to ensure that it picks up the new configuration:

$ sudo systemctl restart maxscale

Configure MaxScale with the REST API

The following sections will use MaxScale's REST API to configure MaxScale.

MaxScale provides several methods to use the REST API:

Method

Description

MaxCtrl

  • MaxCtrl is a command-line utility that can perform administrative tasks.

  • It supports many different commands.

  • See MaxCtrl Commands in 6.1 for a list of commands.

MaxGUI

  • MaxGUI is a graphical utility that can perform administrative tasks.

  • It was introduced in MaxScale 2.5.

REST API

  • The REST API can also be used directly.

  • For example, the curl utility could be used to make REST API calls from the command-line.

  • Many programming languages also have libraries for interacting with REST APIs.

The examples in the sections below will use MaxCtrl.

Configure Servers with REST API

MaxScale needs to be configured to recognize the cluster nodes. We can use the REST API to create the servers in MaxScale.

  1. Determine which server parameters you need to configure.

    Mandatory parameters for Server objects:

    • type: Set the module type to server.

    • address: Set the IP address for the MariaDB Enterprise Server.

    • port: Set the port the MariaDB Enterprise Server listens on.

    • protocol: Set the protocol to MariaDBBackend to connect to the Server.

  2. Configure MaxScale to recognize each server using MaxCtrl or another supported REST API access method.

    With MaxCtrl, we can use the create server command.

    $ maxctrl create server mcs1 192.0.2.1
    $ maxctrl create server mcs2 192.0.2.2
    $ maxctrl create server mcs3 192.0.2.3
    

    As the first argument, provide a name for the server.

    As the second argument, provide the IP address for the server. Replace the referenced IP addresses and host names with the relevant values for your environment.

Configure MariaDB Monitor with REST API

MaxScale can use MariaDB Monitor (mariadbmon) to monitor the cluster nodes. We can use the REST API to configure the monitor.

  1. Determine which parameters for MariaDB Monitor (mariadbmon) you need to configure.

    Mandatory parameters for MariaDB Monitor (mariadbmon) to obtain high availability:

    • type: Set the type to monitor.

    • module: Set to mariadbmon for MariaDB Replication.

    • servers: Set to a comma-separated list of the Server object names created in the Configure Servers with REST API section.

    • user: Set to the user MariaDB MaxScale uses to connect to the Servers. In the Create the MaxScale User section, we called this user mxs.

    • password: Set to the password MariaDB MaxScale uses to authenticate on the Servers.

    • replication_user: Set to the user that is used for replication. In the Create the Replication User section, we called this user repl.

    • replication_password: Set to the password used to authenticate for replication.

    • auto_failover: Enables Automatic Failover for MariaDB Replication deployments, allowing the monitor to respond to a primary server failure by reconfiguring a replica server to operate as the new primary server. Set to true.

    • auto_rejoin: Enables Automatic Rejoin for MariaDB Replication deployments, allowing the monitor to reconfigure replica servers that were down during Automatic Failover to use the new primary server. Set to true.

    • enforce_read_only_slaves: Ensures that replicas are read-only. Set to true.

    • monitor_interval: Sets the frequency at which MaxScale checks the back-end servers. A good value is 1000.

    • detect_replication_lag: Enables replication lag detection. Set to true.

  2. Configure MaxScale to start the monitor using MaxCtrl or another supported REST API access method.

    With MaxCtrl, we can use the create monitor command:

    $ maxctrl create monitor mcs_monitor mariadbmon \
       --servers mcs1 mcs2 mcs3 \
       --monitor-user=mxs \
       --monitor-password=passwd \
       replication_user=repl \
       replication_password=passwd \
       auto_failover=true \
       auto_rejoin=true \
       enforce_read_only_slaves=true \
       monitor_interval=1000 \
       detect_replication_lag=true
    

    The name of the monitor is provided as the first argument.

    The module parameter is provided as the second argument.

    The servers parameter can be set with the --servers option.

    The user parameter can be set with the --monitor-user option.

    The password parameter can be set with the --monitor-password option.

    The rest of the parameters are provided as key=value pairs.

Configure Read/Write Split Router with REST API

Use MaxScale's Read/Write Split Router (readwritesplit) to load-balance traffic between the cluster nodes.

  1. Determine which parameters for Read/Write Split Router (readwritesplit) you need to configure.

    Mandatory parameters for Read/Write Split Router (readwritesplit):

    • type: Set the module type to service.

    • router: Sets the type of router to use for the service. For the Read/Write Split Router, set to readwritesplit.

    • servers: Set as a comma-separated list of the Server object names created in the Configure Servers with REST API section.

    • user: Set to the user MaxScale uses when connecting to the Servers. In the Create the MaxScale User section, we called this user mxs.

    • password: Set to the password MaxScale uses when authenticating to the Servers.

  2. Configure MaxScale to start the router using MaxCtrl or another supported REST API access method.

    With MaxCtrl, we can use the create service command:

    $ maxctrl create service mcs_service readwritesplit \
       user=mxs \
       password=passwd \
       --servers mcs1 mcs2 mcs3
    

    The name of the new service is provided as the first argument.

    The router parameter is provided as the second argument.

    The servers parameter can be set with the --servers option.

    The rest of the parameters are provided as key=value pairs.

Configure the Router's Listener with the REST API

The Read/Write Split Router (readwritesplit) needs to have an associated listener, so that it can accept client traffic.

  1. Determine which parameters for the listener you need to configure.

    Mandatory parameters for the listener:

    • type: Set the module type to listener.

    • service: Use this parameter to connect the listener to a configured routing service.

    • protocol: Set this parameter to MariaDBClient to handle incoming client connections.

    • port: Set this parameter to specify the port for the MaxScale instance to listen on.

  2. Configure MaxScale to start the router's listener using MaxCtrl or another supported REST API access method.

    With MaxCtrl, we can use the create listener command:

    $ maxctrl create listener mcs_service mcs_listener 3306 \
       --protocol=MariaDBClient
    

    The service parameter is provided as the first argument.

    The name of the new listener is provided as the second argument.

    The port parameter is provided as the third argument.

    The protocol parameter can be set with the --protocol option.

Bulk Import Data

Now that the Enterprise ColumnStore system is running, you can bulk import your data.

Import the Schema

Before data can be imported into the tables, the schema needs to be created.

  1. Connect to the primary server using MariaDB Client using the root@localhost user account:

    $ sudo mariadb
    
  2. For each database that you are importing, create the database with the CREATE DATABASE statement:

    CREATE DATABASE inventory;
    
  3. For each table that you are importing, create the table with the CREATE TABLE statement:

    CREATE TABLE inventory.products (
       product_name varchar(11) NOT NULL DEFAULT '',
       supplier varchar(128) NOT NULL DEFAULT '',
       quantity varchar(128) NOT NULL DEFAULT '',
       unit_cost varchar(128) NOT NULL DEFAULT ''
    ) ENGINE=Columnstore DEFAULT CHARSET=utf8;
    

cpimport

MariaDB Enterprise ColumnStore includes cpimport, which is a command-line utility that is designed to efficiently load data in bulk.

To import your data from a TSV (tab-separated values) file with cpimport:

$ sudo cpimport -s '\t' inventory products /tmp/inventory-products.tsv

LOAD DATA INFILE

When data is loaded with the LOAD DATA INFILE statement, MariaDB Enterprise ColumnStore loads the data using cpimport, which is a command-line utility that is designed to efficiently load data in bulk.

To import your data from a TSV (tab-separated values) file with LOAD DATA INFILE statement:

LOAD DATA INFILE '/tmp/inventory-products.tsv'
INTO TABLE inventory.products;

Import from Remote Database

MariaDB Enterprise ColumnStore can also import data directly from a remote database. A simple method is to query the table using the SELECT statement, and then pipe the results into cpimport, which is a command-line utility that is designed to efficiently load data in bulk.

To import your data from a remote MariaDB database:

$ mariadb --quick \
   --skip-column-names \
   --execute="SELECT * FROM inventory.products" \
   | cpimport -s '\t' inventory products

Configure the Linux Security Module

If you disabled the Linux Security Module (LSM) on each cluster node during installation, you can now configure the module and re-enable it on each cluster node.

The specific steps to configure the security module depend on the platform.

Configure AppArmor (Debian/Ubuntu/SLES)

We disabled AppArmor in the AppArmor section, but we have to create an AppArmor profile for Enterprise ColumnStore before re-enabling it. This will ensure that AppArmor does not interfere with Enterprise ColumnStore's functionality.

For information on how to create a profile, see How to create an AppArmor Profile on ubuntu.com.
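
Once a profile for Enterprise ColumnStore is in place, AppArmor could be re-enabled along these lines (a minimal sketch; the profile itself must be created first, as described above):

$ sudo systemctl enable apparmor
$ sudo systemctl start apparmor
$ sudo aa-status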

Configure the Firewall Service

If you stopped the firewall service on each cluster node during installation, you can restart the service and configure it to open the relevant ports on each cluster node.

The specific steps to configure the firewall service depend on the platform.
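
For example, on a Debian 10 cluster node that uses UFW, the TCP ports listed in the Firewall Considerations section could be opened for the cluster's subnet as shown below before re-enabling the firewall. The subnet is a placeholder for your environment:

$ sudo ufw allow from 192.0.2.0/24 to any port 3306 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to any port 8600:8630 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to any port 8640 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to any port 8700 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to any port 8800 proto tcp
$ sudo ufw enable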

Service Administration

Multi-node Enterprise ColumnStore has several services.

Enterprise Server Service Administration

MariaDB Enterprise Server uses systemctl to administer its service:

  • Start: sudo systemctl start mariadb

  • Stop: sudo systemctl stop mariadb

  • Restart: sudo systemctl restart mariadb

  • Enable during startup: sudo systemctl enable mariadb

  • Disable during startup: sudo systemctl disable mariadb

  • Status: sudo systemctl status mariadb

Enterprise ColumnStore Service Administration

In a multi-node deployment, the Enterprise ColumnStore service should not be manually started or enabled, because the service will be managed by the CMAPI service.

MariaDB Enterprise ColumnStore uses systemctl to check the status of the Enterprise ColumnStore service:

  • Status: sudo systemctl status mariadb-columnstore

CMAPI Service Administration

MariaDB Enterprise ColumnStore uses systemctl to administer its CMAPI service:

  • Start: sudo systemctl start mariadb-columnstore-cmapi

  • Stop: sudo systemctl stop mariadb-columnstore-cmapi

  • Restart: sudo systemctl restart mariadb-columnstore-cmapi

  • Enable during startup: sudo systemctl enable mariadb-columnstore-cmapi

  • Disable during startup: sudo systemctl disable mariadb-columnstore-cmapi

  • Status: sudo systemctl status mariadb-columnstore-cmapi

MaxScale Service Administration

MariaDB MaxScale uses systemctl to administer its service:

  • Start: sudo systemctl start maxscale

  • Stop: sudo systemctl stop maxscale

  • Restart: sudo systemctl restart maxscale

  • Enable during startup: sudo systemctl enable maxscale

  • Disable during startup: sudo systemctl disable maxscale

  • Status: sudo systemctl status maxscale

Testing

When you have MariaDB Enterprise ColumnStore up and running, you should test it to ensure that it is in working order.

Check Server Status

  1. Connect to the server using MariaDB Client using the root@localhost user account:

    $ sudo mariadb
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 38
    Server version: 10.5.12-8-MariaDB-Enterprise MariaDB Enterprise Server
    
    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    MariaDB [(none)]>
    

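  2. Optionally, confirm that the ColumnStore storage engine is registered. This is a quick sanity check; it assumes the plugin was installed as described earlier:

    $ sudo mariadb -e "SHOW ENGINES;" | grep -i columnstore
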
Check System Status

  1. Use curl to send the status command to the CMAPI Server using your deployment's API key.

    $ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       | jq .
    

    Example output:

    {
      "timestamp": "2020-12-15 00:40:34.353574",
      "192.0.2.1": {
        "timestamp": "2020-12-15 00:40:34.362374",
        "uptime": 11467,
        "dbrm_mode": "master",
        "cluster_mode": "readwrite",
        "dbroots": [
          "1"
        ],
        "module_id": 1,
        "services": [
          {
            "name": "workernode",
            "pid": 19202
          },
          {
            "name": "controllernode",
            "pid": 19232
          },
          {
            "name": "PrimProc",
            "pid": 19254
          },
          {
            "name": "ExeMgr",
            "pid": 19292
          },
          {
            "name": "WriteEngine",
            "pid": 19316
          },
          {
            "name": "DMLProc",
            "pid": 19332
          },
          {
            "name": "DDLProc",
            "pid": 19366
          }
        ]
      },
      "192.0.2.2": {
        "timestamp": "2020-12-15 00:40:34.428554",
        "uptime": 11437,
        "dbrm_mode": "slave",
        "cluster_mode": "readonly",
        "dbroots": [
          "2"
        ],
        "module_id": 2,
        "services": [
          {
            "name": "workernode",
            "pid": 17789
          },
          {
            "name": "PrimProc",
            "pid": 17813
          },
          {
            "name": "ExeMgr",
            "pid": 17854
          },
          {
            "name": "WriteEngine",
            "pid": 17877
          }
        ]
      },
      "192.0.2.3": {
        "timestamp": "2020-12-15 00:40:34.428554",
        "uptime": 11437,
        "dbrm_mode": "slave",
        "cluster_mode": "readonly",
        "dbroots": [
          "2"
        ],
        "module_id": 2,
        "services": [
          {
            "name": "workernode",
            "pid": 17789
          },
          {
            "name": "PrimProc",
            "pid": 17813
          },
          {
            "name": "ExeMgr",
            "pid": 17854
          },
          {
            "name": "WriteEngine",
            "pid": 17877
          }
        ]
      },
      "num_nodes": 3
    }
    

Test Application Database User

Your application should have a dedicated database user account. The user account needs to be created on the primary node. The user account needs to be able to connect from both the MaxScale node and the application server.

This action is performed on the primary node.

  1. Connect to ES:

    $ mysql --user=root --password
    
  2. Create the database user account for your MaxScale node:

    CREATE USER 'col_app_user'@'192.0.2.10'
       IDENTIFIED BY 'col_app_user_passwd';
    

    Replace 192.0.2.10 with the relevant IP address specification for your MaxScale node.

    Passwords should meet your organization's password policies.

  3. Grant the database user account for your MaxScale node the proper privileges for your application:

    GRANT ALL ON test.*
       TO 'col_app_user'@'192.0.2.10';
    

    The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.

  4. Create the database user account for your application server:

    CREATE USER 'col_app_user'@'192.0.2.20'
       IDENTIFIED BY 'col_app_user_passwd';
    

    Replace 192.0.2.20 with the relevant IP address specification for your application server.

    Passwords should meet your organization's password policies.

  5. Grant the database user account for your application server the proper privileges for your application:

    GRANT ALL ON test.*
       TO 'col_app_user'@'192.0.2.20';
    

    The privileges shown are designed to allow the tests in the subsequent sections to work. The user account for your production application may require different privileges.

Test Application Connections

This action is performed on a client connected to the MaxScale node.

  1. Use the MariaDB Client to connect to the listener port on the MaxScale node:

    $ mariadb --host=192.0.2.10 \
       --user=col_app_user \
       --password
    

    The listener port (Configure a Listener) should be specified by the --port option. When the listener uses TCP port 3306, you do not need to specify the port to the client.

    The user account (Test Application Database User) should be specified by the --user option.

    MaxScale's Read/Write Split Router (Configure Read/Write Split Router) will route the connection to one of the nodes (Configure Server Objects).

  2. Use the client connection to query the node's hostname:

    SELECT @@global.hostname;
    

    The output will show the hostname of one of the nodes.

Test DDL

This action is performed on a client connected to the MaxScale node.

  1. Use the MariaDB Client to connect to the listener port on the MaxScale node:

    $ mariadb --host=192.0.2.10 \
       --user=col_app_user \
       --password
    

    The listener port (Configure a Listener) should be specified by the --port option. When the listener uses TCP port 3306, you do not need to specify the port to the client.

    The user account (Test Application Database User) should be specified by the --user option.

    MaxScale's Read/Write Split Router (Configure Read/Write Split Router) will route the connection to one of the nodes (Configure Server Objects).

  2. Create a database using the CREATE DATABASE statement:

    CREATE DATABASE test;
    
  3. Create a ColumnStore table with the CREATE TABLE statement:

    CREATE TABLE test.analytics_test (
       id INT,
       str VARCHAR(50)
    ) ENGINE = ColumnStore;
    

Working with MaxGUI

MaxGUI is a graphical utility that can perform administrative tasks using MaxScale's REST API. It is available starting in MaxScale 2.5. It supports many different operations.

MaxGUI can be used to:

  • Configure MaxScale.

  • View details about sessions, connections, and load.

  • View details about monitors, servers, and services.

Working with MaxCtrl

MaxCtrl is a command-line utility that can perform administrative tasks using MaxScale's REST API. It supports many different operations.

MaxCtrl can be used to:

  • Configure MaxScale.

  • View details about sessions and threads.

  • View details about monitors, servers, services, and filters.

  • Enable or disable log levels.

  • Test MaxScale's query classifier.

Working with CMAPI

Enterprise ColumnStore's CMAPI (Cluster Management API) is a REST API that can be used to manage a multi-node Enterprise ColumnStore cluster.

Many tools are capable of interacting with REST APIs. For example, the curl utility could be used to make REST API calls from the command-line.

Many programming languages also have libraries for interacting with REST APIs.

The examples below show how to use the CMAPI with curl.

Format of URL endpoints for REST API

https://{server}:{port}/cmapi/{version}/{route}/{command}

For example:

  • https://mcs1:8640/cmapi/0.4.0/cluster/status

  • https://mcs1:8640/cmapi/0.4.0/cluster/start

  • https://mcs1:8640/cmapi/0.4.0/cluster/shutdown

With CMAPI 1.4 and later:

  • https://mcs1:8640/cmapi/0.4.0/cluster/node

With CMAPI 1.3 and earlier:

  • https://mcs1:8640/cmapi/0.4.0/cluster/add-node

  • https://mcs1:8640/cmapi/0.4.0/cluster/remove-node

Required Request Headers

  • 'x-api-key': '93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd'

  • 'Content-Type': 'application/json'

Note

x-api-key can be set to any value of your choice during the first call to the server. Subsequent connections will require this same key.

Get Status

$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
   --header 'Content-Type:application/json' \
   --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
   | jq .

Start Cluster

$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/start \
   --header 'Content-Type:application/json' \
   --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
   --data '{"timeout":20}' \
   | jq .

Stop Cluster

$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/shutdown \
   --header 'Content-Type:application/json' \
   --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
   --data '{"timeout":20}' \
   | jq .

Add Node

  • With CMAPI 1.4 and later:

    $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       --data '{"timeout":20, "node": "192.0.2.2"}' \
       | jq .
    
  • With CMAPI 1.3 and earlier:

    $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/add-node \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       --data '{"timeout":20, "node": "192.0.2.2"}' \
       | jq .
    

Remove Node

  • With CMAPI 1.4 and later:

    $ curl -k -s -X DELETE https://mcs1:8640/cmapi/0.4.0/cluster/node \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       --data '{"timeout":20, "node": "192.0.2.2"}' \
       | jq .
    
  • With CMAPI 1.3 and earlier:

    $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/remove-node \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       --data '{"timeout":20, "node": "192.0.2.2"}' \
       | jq .