Step 4: Start and Configure MariaDB Enterprise Server

Overview

This page details step 4 of the 9-step procedure "Deploy ColumnStore Shared Local Storage Topology".

This step starts and configures MariaDB Enterprise Server 10.5 and MariaDB Enterprise ColumnStore 5.

Interactive commands are detailed. Alternatively, the described operations can be performed using automation.

Stop the Enterprise ColumnStore Services

The installation process might have started some of the ColumnStore services. The services should be stopped prior to making configuration changes.

  1. On each Enterprise ColumnStore node, stop the MariaDB Enterprise Server service:

    $ sudo systemctl stop mariadb
    
  2. On each Enterprise ColumnStore node, stop the MariaDB Enterprise ColumnStore service:

    $ sudo systemctl stop mariadb-columnstore
    
  3. On each Enterprise ColumnStore node, stop the CMAPI service:

    $ sudo systemctl stop mariadb-columnstore-cmapi
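
To confirm that nothing is left running before making configuration changes, the state of each service can be checked with systemctl is-active (a quick sketch; stopped services report "inactive"):

    $ for svc in mariadb mariadb-columnstore mariadb-columnstore-cmapi; do
         printf '%s: ' "$svc"; sudo systemctl is-active "$svc"
      done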
    

Configure Enterprise ColumnStore

On each Enterprise ColumnStore node, configure Enterprise Server.

Mandatory system variables and options for ColumnStore Shared Local Storage include:

character_set_server
    Set this system variable to utf8.

collation_server
    Set this system variable to utf8_general_ci.

columnstore_use_import_for_batchinsert
    Set this system variable to ALWAYS to always use cpimport for LOAD DATA INFILE and INSERT...SELECT statements.

gtid_strict_mode
    Set this system variable to ON.

log_bin
    Set this option to the file you want to use for the Binary Log. Setting this option enables binary logging.

log_bin_index
    Set this option to the file you want to use to track Binary Log file names.

log_slave_updates
    Set this system variable to ON.

relay_log
    Set this option to the file you want to use for the Relay Logs. Setting this option enables relay logging.

relay_log_index
    Set this option to the file you want to use to track Relay Log file names.

server_id
    Sets the numeric Server ID for this MariaDB Enterprise Server. The value must be unique on each node.

Example Configuration

[mariadb]
bind_address                           = 0.0.0.0
log_error                              = mariadbd.err
character_set_server                   = utf8
collation_server                       = utf8_general_ci
log_bin                                = mariadb-bin
log_bin_index                          = mariadb-bin.index
relay_log                              = mariadb-relay
relay_log_index                        = mariadb-relay.index
log_slave_updates                      = ON
gtid_strict_mode                       = ON

# This must be unique on each Enterprise ColumnStore node
server_id                              = 1
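
The server reads these settings from an option file. The option file location depends on the platform (commonly /etc/my.cnf.d/ on CentOS/RHEL and /etc/mysql/mariadb.conf.d/ on Ubuntu; verify the include path for your installation). For example, the configuration above could be written to a dedicated file, with a file name chosen here purely for illustration:

    $ sudo tee /etc/my.cnf.d/columnstore-cluster.cnf > /dev/null <<'EOF'
    [mariadb]
    bind_address                           = 0.0.0.0
    log_error                              = mariadbd.err
    character_set_server                   = utf8
    collation_server                       = utf8_general_ci
    log_bin                                = mariadb-bin
    log_bin_index                          = mariadb-bin.index
    relay_log                              = mariadb-relay
    relay_log_index                        = mariadb-relay.index
    log_slave_updates                      = ON
    gtid_strict_mode                       = ON
    
    # This must be unique on each Enterprise ColumnStore node
    server_id                              = 1
    EOF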

Start the Enterprise ColumnStore Services

  1. On each Enterprise ColumnStore node, start and enable the MariaDB Enterprise Server service, so that it starts automatically upon reboot:

    $ sudo systemctl start mariadb
    
    $ sudo systemctl enable mariadb
    
  2. On each Enterprise ColumnStore node, stop the MariaDB Enterprise ColumnStore service:

    $ sudo systemctl stop mariadb-columnstore
    

    After the CMAPI service is started in the next step, CMAPI will start the Enterprise ColumnStore service as needed on each node. CMAPI disables the Enterprise ColumnStore service to prevent systemd from automatically starting Enterprise ColumnStore upon reboot.

  3. On each Enterprise ColumnStore node, start and enable the CMAPI service, so that it starts automatically upon reboot:

    $ sudo systemctl start mariadb-columnstore-cmapi
    
    $ sudo systemctl enable mariadb-columnstore-cmapi
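
    Optionally confirm that the CMAPI service is listening on its default port, 8640:

    $ sudo ss -tlnp | grep 8640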
    

For additional information, see "Start and Stop Services".

Create User Accounts

The ColumnStore Shared Local Storage topology requires several user accounts. Each user account should be created on the primary server, so that it is replicated to the replica servers.

Create the Utility User

Enterprise ColumnStore requires a utility user account to perform cross-engine joins and similar operations.

  1. On the primary server, create the user account with the CREATE USER statement:

    CREATE USER 'util_user'@'127.0.0.1'
    IDENTIFIED BY 'util_user_passwd';
    
  2. On the primary server, grant the user account the required SELECT and PROCESS privileges with the GRANT statement:

    GRANT SELECT, PROCESS ON *.*
    TO 'util_user'@'127.0.0.1';
    
  3. On each Enterprise ColumnStore node, use mcsSetConfig to configure the ColumnStore utility user:

    $ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
    
    $ sudo mcsSetConfig CrossEngineSupport Port 3306
    
    $ sudo mcsSetConfig CrossEngineSupport User util_user
    
    $ sudo mcsSetConfig CrossEngineSupport Password util_user_passwd
    

Passwords should meet your organization's password policies. If your MariaDB Enterprise Server instance has a password validation plugin installed, then the password should also meet the configured requirements.
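
The values written with mcsSetConfig can be read back with the companion mcsGetConfig utility to confirm the configuration. For example:

    $ sudo mcsGetConfig CrossEngineSupport User
    
    util_user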

Create the Replication User

ColumnStore Shared Local Storage uses MariaDB Replication to replicate writes between the primary and replica servers. As MaxScale can promote a replica server to become a new primary in the event of node failure, all nodes must have a replication user.

This action is performed on the primary server.

Create the replication user and grant it the required privileges:

  1. Use the CREATE USER statement to create the replication user.

    CREATE USER 'repl'@'192.0.2.%' IDENTIFIED BY 'repl_passwd';
    

    Replace the referenced IP address with the relevant address for your environment.

    Ensure that the user account can connect to the primary server from each replica.

  2. Grant the user account the required privileges with the GRANT statement.

    The required privileges depend on the version of MariaDB Enterprise Server that is being used.

    In ES 10.5.8-5 and later, the following privileges are required:

    GRANT REPLICA MONITOR,
       REPLICATION REPLICA,
       REPLICATION REPLICA ADMIN,
       REPLICATION MASTER ADMIN
    ON *.* TO 'repl'@'192.0.2.%';
    

    In ES 10.5.6-4 and before, the following privileges are required:

    GRANT BINLOG MONITOR,
       REPLICATION REPLICA,
       REPLICATION REPLICA ADMIN,
       REPLICATION MASTER ADMIN
    ON *.* TO 'repl'@'192.0.2.%';
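
Before configuring replication, it can be useful to confirm from each replica server that the new account can reach the primary. A minimal check, assuming the primary server's IP address is 192.0.2.1:

    $ mariadb -h 192.0.2.1 -u repl -p'repl_passwd' -e "SELECT 1;"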
    

Create the MaxScale User

The ColumnStore Shared Local Storage topology uses MariaDB MaxScale 2.5 to load balance between the nodes.

This action is performed on the primary server.

  1. Use the CREATE USER statement to create the MaxScale user:

    CREATE USER 'mxs'@'192.0.2.%'
    IDENTIFIED BY 'mxs_passwd';
    

    Replace the referenced IP address with the relevant address for your environment.

    Ensure that the user account can connect from the IP address of the MaxScale instance.

  2. Use the GRANT statement to grant the privileges required by the router:

    GRANT SHOW DATABASES ON *.* TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.columns_priv TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.db TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.procs_priv TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.proxies_priv TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.roles_mapping TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.tables_priv TO 'mxs'@'192.0.2.%';
    
    GRANT SELECT ON mysql.user TO 'mxs'@'192.0.2.%';
    
  3. Use the GRANT statement to grant the privileges required by the MariaDB Monitor.

    The required privileges depend on the version of MariaDB Enterprise Server that is being used.

    In ES 10.5.8-5 and later, the following privileges are required:

    GRANT BINLOG ADMIN,
       READ_ONLY ADMIN,
       RELOAD,
       REPLICA MONITOR,
       REPLICATION MASTER ADMIN,
       REPLICATION REPLICA ADMIN,
       REPLICATION REPLICA,
       SHOW DATABASES,
       SELECT
    ON *.* TO 'mxs'@'192.0.2.%';
    

    In ES 10.5.6-4 and before, the following privileges are required:

    GRANT BINLOG ADMIN,
       BINLOG MONITOR,
       READ_ONLY ADMIN,
       RELOAD,
       REPLICATION MASTER ADMIN,
       REPLICATION REPLICA ADMIN,
       REPLICATION REPLICA,
       SHOW DATABASES,
       SELECT
    ON *.* TO 'mxs'@'192.0.2.%';
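
The privileges actually assigned to the account can be reviewed on the primary server with SHOW GRANTS. For example:

    $ sudo mariadb -e "SHOW GRANTS FOR 'mxs'@'192.0.2.%';"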
    

Configure MariaDB Replication

On each replica server, configure MariaDB Replication:

  1. Use the CHANGE MASTER TO statement to configure the connection to the primary server, replacing the referenced IP address with your primary server's address:

    CHANGE MASTER TO
       MASTER_HOST='192.0.2.1',
       MASTER_USER='repl',
       MASTER_PASSWORD='repl_passwd',
       MASTER_USE_GTID=slave_pos;
    
  2. Start replication using the START REPLICA statement:

    START REPLICA;
    
  3. Confirm that replication is working using the SHOW REPLICA STATUS statement:

    SHOW REPLICA STATUS;
    
  4. Ensure that the replica server cannot accept local writes by setting the read_only system variable to ON using the SET GLOBAL statement:

    SET GLOBAL read_only=ON;
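
Both replication threads should report Yes in the replica status output. As a compact check, the relevant fields can be filtered from the \G-formatted output (in ES 10.5 the columns are still named Slave_IO_Running and Slave_SQL_Running):

    $ sudo mariadb -e "SHOW REPLICA STATUS\G" | grep -E 'Slave_(IO|SQL)_Running:'
    
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes

Note that SET GLOBAL does not persist across server restarts. To keep a replica permanently read-only, read_only=ON can also be added to its configuration file.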
    

Initiate the Primary Server with CMAPI

Initiate the primary server using CMAPI.

  1. Create an API key for the cluster.

    This API key should be stored securely and kept confidential, because it can be used to add cluster nodes to the multi-node Enterprise ColumnStore deployment.

    For example, to create a random 256-bit API key using openssl rand:

    $ openssl rand -hex 32
    
    93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd
    

    The examples in this document use the API key shown above, but you should create your own.

  2. Use CMAPI to add the primary server to the cluster and set the API key.

    The new API key needs to be provided in the X-API-key HTTP header.

    For example, if the primary server's host name is mcs1 and its IP address is 192.0.2.1, use one of the following commands.

    • With CMAPI 1.4 and later, use the node command:

      $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
         --header 'Content-Type:application/json' \
         --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
         --data '{"timeout":120, "node": "192.0.2.1"}' \
         | jq .
      
    • With CMAPI 1.3 and earlier, use the add-node command:

      $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/add-node \
         --header 'Content-Type:application/json' \
         --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
         --data '{"timeout":120, "node": "192.0.2.1"}' \
         | jq .
      
    In both cases, the output looks like the following:

    {
      "timestamp": "2020-10-28 00:39:14.672142",
      "node_id": "192.0.2.1"
    }
    
  3. Use CMAPI to check the status of the cluster node:

    $ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       | jq .
    
    {
      "timestamp": "2020-12-15 00:40:34.353574",
      "192.0.2.1": {
        "timestamp": "2020-12-15 00:40:34.362374",
        "uptime": 11467,
        "dbrm_mode": "master",
        "cluster_mode": "readwrite",
        "dbroots": [
          "1"
        ],
        "module_id": 1,
        "services": [
          {
            "name": "workernode",
            "pid": 19202
          },
          {
            "name": "controllernode",
            "pid": 19232
          },
          {
            "name": "PrimProc",
            "pid": 19254
          },
          {
            "name": "ExeMgr",
            "pid": 19292
          },
          {
            "name": "WriteEngine",
            "pid": 19316
          },
          {
            "name": "DMLProc",
            "pid": 19332
          },
          {
            "name": "DDLProc",
            "pid": 19366
          }
        ]
      },
      "num_nodes": 1
    }

Add Replica Servers with CMAPI

Add the replica servers with CMAPI:

  1. For each replica server, use CMAPI to add the replica server to the cluster. (A scripted variant for adding several replicas is sketched after this list.)

    The previously set API key needs to be provided in the X-API-key HTTP header.

    For example, if the primary server's host name is mcs1 and the replica server's IP address is 192.0.2.2, use one of the following commands.

    • With CMAPI 1.4 and later, use the node command:

      $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
         --header 'Content-Type:application/json' \
         --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
         --data '{"timeout":120, "node": "192.0.2.2"}' \
         | jq .
      
    • With CMAPI 1.3 and earlier, use the add-node command:

      $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/add-node \
         --header 'Content-Type:application/json' \
         --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
         --data '{"timeout":120, "node": "192.0.2.2"}' \
         | jq .
      
    In both cases, the output looks like the following:

      {
        "timestamp": "2020-10-28 00:42:42.796050",
        "node_id": "192.0.2.2"
      }
      
  2. After all replica servers have been added, use CMAPI to confirm that all cluster nodes have been successfully added:

    $ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       | jq .
    
    {
      "timestamp": "2020-12-15 00:40:34.353574",
      "192.0.2.1": {
        "timestamp": "2020-12-15 00:40:34.362374",
        "uptime": 11467,
        "dbrm_mode": "master",
        "cluster_mode": "readwrite",
        "dbroots": [
          "1"
        ],
        "module_id": 1,
        "services": [
          {
            "name": "workernode",
            "pid": 19202
          },
          {
            "name": "controllernode",
            "pid": 19232
          },
          {
            "name": "PrimProc",
            "pid": 19254
          },
          {
            "name": "ExeMgr",
            "pid": 19292
          },
          {
            "name": "WriteEngine",
            "pid": 19316
          },
          {
            "name": "DMLProc",
            "pid": 19332
          },
          {
            "name": "DDLProc",
            "pid": 19366
          }
        ]
      },
      "192.0.2.2": {
        "timestamp": "2020-12-15 00:40:34.428554",
        "uptime": 11437,
        "dbrm_mode": "slave",
        "cluster_mode": "readonly",
        "dbroots": [
          "2"
        ],
        "module_id": 2,
        "services": [
          {
            "name": "workernode",
            "pid": 17789
          },
          {
            "name": "PrimProc",
            "pid": 17813
          },
          {
            "name": "ExeMgr",
            "pid": 17854
          },
          {
            "name": "WriteEngine",
            "pid": 17877
          }
        ]
      },
      "192.0.2.3": {
        "timestamp": "2020-12-15 00:40:34.428554",
        "uptime": 11437,
        "dbrm_mode": "slave",
        "cluster_mode": "readonly",
        "dbroots": [
          "2"
        ],
        "module_id": 2,
        "services": [
          {
            "name": "workernode",
            "pid": 17789
          },
          {
            "name": "PrimProc",
            "pid": 17813
          },
          {
            "name": "ExeMgr",
            "pid": 17854
          },
          {
            "name": "WriteEngine",
            "pid": 17877
          }
        ]
      },
      "num_nodes": 3
    }
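
When several replica servers need to be added, the add call from step 1 can be scripted. A sketch assuming CMAPI 1.4 or later and the CMAPI_KEY shell variable introduced earlier:

    $ for ip in 192.0.2.2 192.0.2.3; do
         curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
            --header 'Content-Type:application/json' \
            --header "x-api-key:$CMAPI_KEY" \
            --data "{\"timeout\":120, \"node\": \"$ip\"}" \
            | jq .
      done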
    

Configure Linux Security Modules (LSM)

The specific steps to configure the security module depend on the operating system.

Configure SELinux (CentOS, RHEL)

Configure SELinux for Enterprise ColumnStore:

  1. To configure SELinux, install the packages required for audit2allow.

    On CentOS 7 and RHEL 7, install the following:

    $ sudo yum install policycoreutils policycoreutils-python
    

    On RHEL 8, install the following:

    $ sudo yum install policycoreutils python3-policycoreutils policycoreutils-python-utils
    
  2. Allow the system to run under load for a while to generate SELinux audit events.

  3. After the system has taken some load, generate an SELinux policy from the audit events using audit2allow:

    $ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
    

    If no audit events were found, this will print the following:

    $ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
    
    Nothing to do
    
  4. If audit events were found, the new SELinux policy can be loaded using semodule:

    $ sudo semodule -i mariadb_local.pp
    
  5. Set SELinux to enforcing mode for the current boot:

    $ sudo setenforce enforcing
    
  6. Make enforcing mode persistent across reboots by setting SELINUX=enforcing in /etc/selinux/config.

    For example, the file will usually look like this after the change:

    # This file controls the state of SELinux on the system.
    # SELINUX= can take one of these three values:
    #     enforcing - SELinux security policy is enforced.
    #     permissive - SELinux prints warnings instead of enforcing.
    #     disabled - No SELinux policy is loaded.
    SELINUX=enforcing
    # SELINUXTYPE= can take one of three values:
    #     targeted - Targeted processes are protected,
    #     minimum - Modification of targeted policy. Only selected processes are protected.
    #     mls - Multi Level Security protection.
    SELINUXTYPE=targeted
    
  7. Confirm that SELinux is in enforcing mode:

    $ sudo getenforce
    
    Enforcing
    

Configure AppArmor (Ubuntu)

For information on how to create a profile, see How to create an AppArmor Profile on Ubuntu.com.

Configure Firewalls

Configure firewalls for Enterprise ColumnStore.

The specific steps to configure the firewall service depend on the platform.

Configure firewalld (CentOS, RHEL)

Configure firewalld for Enterprise ColumnStore on CentOS and RHEL:

  1. Check if the firewalld service is running:

    $ sudo systemctl status firewalld
    
  2. If the firewalld service was stopped to perform the installation, start it now:

    $ sudo systemctl start firewalld
    
  3. Open up the relevant ports using firewall-cmd.

    For example, if your cluster nodes are in the 192.0.2.0/24 subnet:

    $ sudo firewall-cmd --permanent --add-rich-rule='
       rule family="ipv4"
       source address="192.0.2.0/24"
       destination address="192.0.2.0/24"
       port port="3306" protocol="tcp"
       accept'
    
    $ sudo firewall-cmd --permanent --add-rich-rule='
       rule family="ipv4"
       source address="192.0.2.0/24"
       destination address="192.0.2.0/24"
       port port="8600-8630" protocol="tcp"
       accept'
    
    $ sudo firewall-cmd --permanent --add-rich-rule='
       rule family="ipv4"
       source address="192.0.2.0/24"
       destination address="192.0.2.0/24"
       port port="8640" protocol="tcp"
       accept'
    
    $ sudo firewall-cmd --permanent --add-rich-rule='
       rule family="ipv4"
       source address="192.0.2.0/24"
       destination address="192.0.2.0/24"
       port port="8700" protocol="tcp"
       accept'
    
    $ sudo firewall-cmd --permanent --add-rich-rule='
       rule family="ipv4"
       source address="192.0.2.0/24"
       destination address="192.0.2.0/24"
       port port="8800" protocol="tcp"
       accept'
    
  4. Reload the runtime configuration:

    $ sudo firewall-cmd --reload
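
    The loaded rules can then be reviewed with:

    $ sudo firewall-cmd --list-rich-rules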
    

Configure UFW (Ubuntu)

Configure UFW for Enterprise ColumnStore on Ubuntu:

  1. Check if the UFW service is running:

    $ sudo ufw status verbose
    
  2. If the UFW service was stopped to perform the installation, start it now:

    $ sudo ufw enable
    
  3. Open up the relevant ports using ufw.

    For example, if your cluster nodes are in the 192.0.2.0/24 subnet and the current node's IP address is 192.0.2.3:

    $ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 3306 proto tcp
    
    $ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8600:8630 proto tcp
    
    $ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8640 proto tcp
    
    $ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8700 proto tcp
    
    $ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8800 proto tcp
    
  4. Reload the runtime configuration:

    $ sudo ufw reload
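
    The resulting rules can then be reviewed with:

    $ sudo ufw status numbered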
    

Next Step

Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology":

  • This page was step 4 of 9.