Upgrade a Multi-Node MariaDB Enterprise ColumnStore Deployment from 1.2 to 1.5 with MariaDB Enterprise Server 10.5

These instructions detail the upgrade from MariaDB ColumnStore 1.2 to MariaDB Enterprise ColumnStore 1.5 with MariaDB Enterprise Server 10.5 in a multi-node ColumnStore deployment on a range of supported operating systems. Instructions for specific operating systems are provided in the relevant sections below.

In this upgrade process, the system is upgraded from MariaDB ColumnStore 1.2, which is bundled with a patched version of MariaDB Server, to MariaDB Enterprise ColumnStore 1.5, which integrates with MariaDB Enterprise Server 10.5 as a storage engine plugin.

When MariaDB ColumnStore is upgraded, the old version must be uninstalled and the new version installed. Additionally, the configuration of MariaDB ColumnStore 1.2 and earlier is not compatible with MariaDB Enterprise ColumnStore 1.5. System variables and plugins have been renamed, which requires that you edit your configuration files to the new standard in addition to upgrading your software.

Data Backup

Occasionally, issues can be encountered during upgrades. These issues can even potentially corrupt the database's data files, preventing you from easily reverting to the old installation. It is generally best to perform a backup prior to upgrading. If an issue is encountered during the upgrade, you can use the backup to restore your MariaDB ColumnStore database to the old version. If the upgrade finishes without issue, the backup can be deleted.

  1. Run columnstoreBackup for each Performance Module:

    $ columnstoreBackup -zv 192.0.2.1 /data/backups/pm1
    
  2. Confirm the successful completion of the backup operation.

  3. Test the backup.

Additional information on columnstoreBackup is available on the MariaDB Knowledge Base.
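As a quick sanity check between steps 2 and 3, you can confirm that the backup directory exists and contains files. This is a minimal sketch; the check_backup helper is illustrative, and the path matches the example above:

```shell
# check_backup DIR prints "backup present" if DIR exists and is
# non-empty, otherwise "backup missing or empty" (illustrative helper)
check_backup() {
    if [ -d "$1" ] && [ -n "$(ls -A "$1" 2>/dev/null)" ]; then
        echo "backup present"
    else
        echo "backup missing or empty"
    fi
}

# Using the backup path from the example above:
check_backup /data/backups/pm1
```

This only confirms that files were written; it is not a substitute for actually testing a restore from the backup.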

Uninstall Old Version

When upgrading MariaDB ColumnStore, the ColumnStore plugins and the previous installation of MariaDB Server must be uninstalled before installing the new release.

Remove ColumnStore Plugins

MariaDB ColumnStore 1.5 changes the names of several plugins used by the storage engine. These changes make the plugins loaded by MariaDB ColumnStore 1.2 incompatible with a 1.5 installation, so these plugins must be removed before removing the previous installation. Removing them ensures that the upgraded Server does not raise errors attempting to load plugins under their old names when it starts.

  1. Connect to the Server through the ColumnStore 1.2 Client, mcsmysql:

    $ mcsmysql
    
  2. Execute the UNINSTALL PLUGIN statement to remove the MariaDB ColumnStore plugins:

    UNINSTALL PLUGIN infinidb;
    UNINSTALL PLUGIN columnstore_tables;
    UNINSTALL PLUGIN columnstore_columns;
    UNINSTALL PLUGIN columnstore_extents;
    UNINSTALL PLUGIN columnstore_files;
    UNINSTALL PLUGIN columnstore;
    

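Before proceeding, you can confirm that no ColumnStore plugins remain loaded by filtering the plugin list. The has_cs_plugins helper below is illustrative:

```shell
# has_cs_plugins reads a plugin listing on stdin and prints any rows
# that still mention ColumnStore or InfiniDB; no output means the
# plugins were removed cleanly (illustrative helper)
has_cs_plugins() {
    grep -i -E 'columnstore|infinidb' || true
}

# On a live ColumnStore 1.2 system, before uninstalling the packages:
#   mcsmysql -e "SHOW PLUGINS" | has_cs_plugins
```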
Stop ColumnStore

Before the old version can be uninstalled, stop the current MariaDB ColumnStore process.

  1. Stop MariaDB ColumnStore with mcsadmin shutdownSystem:

    $ mcsadmin shutdownSystem y
    

Uninstall via YUM (RHEL/CentOS)

  1. Remove all MariaDB ColumnStore packages with the following command:

    $ sudo yum remove "mariadb-columnstore-*"
    

    Note that a wildcard character is used in the command to ensure that all MariaDB ColumnStore packages are uninstalled. Confirm that this wildcard does not unintentionally match any of your custom applications.

  2. Verify that all MariaDB ColumnStore packages are uninstalled with the following command:

    $ rpm --query --all | grep -i -E "mariadb-columnstore"
    

    Note that the command should not return any results. If it does, remove the remaining packages individually.

Uninstall via APT (Debian/Ubuntu)

  1. Remove all MariaDB ColumnStore packages with the following command:

    $ sudo apt remove "mariadb-columnstore-*"
    

    Note that a wildcard character is used in the command to ensure that all MariaDB ColumnStore packages are uninstalled. Confirm that this wildcard does not unintentionally match any of your custom applications.

  2. Verify that all MariaDB ColumnStore packages are uninstalled with the following command:

    $ apt list --installed | grep -i -E "mariadb-columnstore"
    

    Note that the command should not return any results. If it does, remove the remaining packages individually.

Uninstall via ZYpp (SLES)

  1. Remove all MariaDB ColumnStore packages with the following command:

    $ sudo zypper remove "mariadb-columnstore-*"
    

    Note that a wildcard character is used in the command to ensure that all MariaDB ColumnStore packages are uninstalled. Confirm that this wildcard does not unintentionally match any of your custom applications.

  2. Verify that all MariaDB ColumnStore packages are uninstalled with the following command:

    $ rpm --query --all | grep -i -E "mariadb-columnstore"
    

    Note that the command should not return any results. If it does, remove the remaining packages individually.

System Preparation

Systems hosting ColumnStore Instances require some additional configuration prior to installation.

Optimizing Linux Kernel Parameters

MariaDB ColumnStore performs best when certain Linux kernel parameters are optimized.

  1. Set the relevant kernel parameters in a sysctl configuration file. For proper change management, it is recommended to set them in a ColumnStore-specific configuration file.

    For example, create a /etc/sysctl.d/90-mariadb-columnstore.conf file with the following contents:

    # Increase the TCP max buffer size
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    
    # Increase the TCP buffer limits
    # min, default, and max number of bytes to use
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    
    # don't cache ssthresh from previous connection
    net.ipv4.tcp_no_metrics_save = 1
    
    # for 1 GigE, increase this to 2500
    # for 10 GigE, increase this to 30000
    net.core.netdev_max_backlog = 2500
    
    # optimize Linux to cache directories and inodes
    vm.vfs_cache_pressure = 10
    
    # minimize swapping
    vm.swappiness = 10
    
  2. Set the same kernel parameters at runtime using the sysctl command:

    $ sudo sysctl --load=/etc/sysctl.d/90-mariadb-columnstore.conf
    
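To spot-check that the new values are in effect, you can read them back from /proc/sys. A minimal sketch covering three of the parameters set above:

```shell
# Read back a few of the kernel parameters set above; the printed
# values should match the configuration file
for p in net/core/rmem_max net/core/wmem_max vm/swappiness; do
    printf '%s = %s\n' "$(echo "$p" | tr / .)" "$(cat "/proc/sys/$p")"
done
```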

Firewall Considerations

MariaDB ColumnStore requires the following TCP ports:

TCP Ports    Description

3306         Port used for MariaDB Client traffic
8600-8630    Port range used for inter-node communication
8640         Port used by CMAPI
8700         Port used for inter-node communication
8800         Port used for inter-node communication

It is recommended to stop the system's firewall service on each node during installation to avoid confusion and potential problems. The specific steps to stop the firewall will depend on the platform.

In the Configuring the Firewall Service section, we will configure the firewall and restart it.

Stopping the Firewall Service with firewalld (RHEL/CentOS/SLES 15)

  1. Check whether the firewalld service is running:

    $ sudo systemctl status firewalld
    
  2. If the firewalld service is running, then stop it:

    $ sudo systemctl stop firewalld
    

Warning

If you stop the firewall service, then be sure to restart it once installation is complete.

Stopping the Firewall Service with SuSEFirewall2 (SLES 12)

  1. Check whether the SuSEFirewall2 service is running:

    $ sudo systemctl status SuSEfirewall2_setup
    $ sudo systemctl status SuSEfirewall2_init
    
  2. If the SuSEFirewall2 services are running, then stop them:

    $ sudo systemctl stop SuSEfirewall2_setup
    $ sudo systemctl stop SuSEfirewall2_init
    

Warning

If you stop the firewall service, then be sure to restart it once installation is complete.

Stopping the Firewall Service with UFW (Ubuntu)

  1. Check whether the UFW service is running:

    $ sudo ufw status verbose
    
  2. If the UFW service is running, then stop it:

    $ sudo ufw disable
    

Warning

If you stop the firewall service, then be sure to restart it once installation is complete.
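When you later re-enable the firewall, the ColumnStore ports listed earlier must be opened. As an illustration only (not a step to run now), with firewalld this could look like the following; adapt the commands to your platform's firewall:

```shell
# Illustrative firewalld rules for the ColumnStore TCP ports
sudo firewall-cmd --permanent --add-port=3306/tcp
sudo firewall-cmd --permanent --add-port=8600-8630/tcp
sudo firewall-cmd --permanent --add-port=8640/tcp
sudo firewall-cmd --permanent --add-port=8700/tcp
sudo firewall-cmd --permanent --add-port=8800/tcp
sudo firewall-cmd --reload
```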

Linux Security Module Considerations

It is recommended to disable the system's Linux Security Module (LSM) on each node during installation to avoid confusion and potential problems. The specific steps to disable the security module will depend on the platform.

In the Configuring the Linux Security Module section, we will configure the security module and restart it.

Disabling the Linux Security Module with SELinux (RHEL/CentOS/SLES)

Prior to installing MariaDB ColumnStore, it is necessary to set SELinux to permissive mode:

  1. Set SELinux to permissive mode by setting SELINUX=permissive in /etc/selinux/config:

    # This file controls the state of SELinux on the system.
    # SELINUX= can take one of these three values:
    #     enforcing - SELinux security policy is enforced.
    #     permissive - SELinux prints warnings instead of enforcing.
    #     disabled - No SELinux policy is loaded.
    SELINUX=permissive
    # SELINUXTYPE= can take one of three values:
    #     targeted - Targeted processes are protected,
    #     minimum - Modification of targeted policy. Only selected processes are protected.
    #     mls - Multi Level Security protection.
    SELINUXTYPE=targeted
    
  2. Reboot the system.

  3. Confirm that SELinux is in permissive mode using getenforce:

    $ sudo getenforce
    

Disabling the Linux Security Module with AppArmor (Debian/Ubuntu/SLES)

Prior to installing MariaDB ColumnStore, it is necessary to disable AppArmor:

  1. Disable AppArmor:

    $ sudo systemctl disable apparmor
    
  2. Reboot the system.

  3. Confirm that no AppArmor profiles are loaded using aa-status:

    $ sudo aa-status
    

    Example output:

    apparmor module is loaded.
    0 profiles are loaded.
    0 profiles are in enforce mode.
    0 profiles are in complain mode.
    0 processes have profiles defined.
    0 processes are in enforce mode.
    0 processes are in complain mode.
    0 processes are unconfined but have a profile defined.
    

Character Encoding

When using MariaDB ColumnStore, it is recommended to set the system's locale to UTF-8.

  1. On some operating systems, you may need to install some dependencies.

    On RHEL 8 and CentOS 8, install the following:

    $ sudo yum install glibc-locale-source glibc-langpack-en
    
  2. Set the system's locale to en_US.UTF-8 by executing localedef:

    $ sudo localedef -i en_US -f UTF-8 en_US.UTF-8
    

Installation

MariaDB Corporation provides package repositories for YUM (RHEL/CentOS), APT (Debian/Ubuntu), and ZYpp (SLES).

MariaDB Enterprise Server 10.5 does not require additional software to operate as an analytics database with MariaDB ColumnStore.

Install via YUM (RHEL/CentOS)

  1. Retrieve your Customer Download Token at https://customers.mariadb.com/downloads/token/ and substitute for customer_download_token in the following directions.

  2. Configure the YUM package repository.

    MariaDB ColumnStore 1.5 is available on MariaDB Enterprise Server 10.5.

    To configure YUM package repositories:

    $ sudo yum install wget
    
    $ wget https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
    
    $ echo "93fa0df3d6491a791f5d699158dcfe3e6ce20c45ddc2f534ed2f5eac6468ff0a  mariadb_es_repo_setup" \
        | sha256sum -c -
    
    $ chmod +x mariadb_es_repo_setup
    
    $ sudo ./mariadb_es_repo_setup --token="customer_download_token" --apply \
       --mariadb-server-version="10.5"
    
  3. Install MariaDB ColumnStore and package dependencies.

    $ sudo yum install MariaDB-server MariaDB-backup \
        MariaDB-columnstore-engine
    

    Note that ColumnStore currently installs an x-columnstore.cnf configuration file that sets plugin_maturity to gamma. This means that other plugins with gamma maturity can also be installed.

  4. Install the EPEL repository.

  5. Install some dependencies for ColumnStore's CMAPI Server:

    $ sudo yum install python2 python2-PyMySQL python2-pip \
       python3 python3-PyMySQL python3-pip \
       htop jemalloc jq mlocate net-tools curl
    

    Note that jemalloc is not required, but it improves performance.

  6. Create a directory for the CMAPI Server installation, and change into the new directory:

    $ sudo mkdir -p /opt/cmapi
    $ sudo chmod 0755 /opt/cmapi
    $ cd /opt/cmapi
    
  7. Download the CMAPI Server into the directory:

    $ sudo wget https://dlm.mariadb.com/{{ customer_download_token }}/mariadb-enterprise-server/10.5.4-2/cmapi/mariadb-columnstore-cmapi.tar.gz
    
  8. Extract the CMAPI Server into the directory:

    $ sudo tar -xvzf mariadb-columnstore-cmapi.tar.gz
    
  9. Install the CMAPI Server:

    $ sudo ./service.sh install
    
  10. Configure MariaDB ColumnStore.

    Installation only loads MariaDB ColumnStore to the system. MariaDB ColumnStore may require configuration and additional post-installation steps before the database server is ready for use.

    See Configuration.

Install via APT (Debian/Ubuntu)

  1. Retrieve your Customer Download Token at https://customers.mariadb.com/downloads/token/ and substitute for customer_download_token in the following directions.

  2. Configure the APT package repository.

    MariaDB ColumnStore 1.5 is available on MariaDB Enterprise Server 10.5.

    To configure APT package repositories:

    $ sudo apt install wget
    
    $ wget https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
    
    $ echo "93fa0df3d6491a791f5d699158dcfe3e6ce20c45ddc2f534ed2f5eac6468ff0a  mariadb_es_repo_setup" \
        | sha256sum -c -
    
    $ chmod +x mariadb_es_repo_setup
    
    $ sudo ./mariadb_es_repo_setup --token="customer_download_token" --apply \
       --mariadb-server-version="10.5"
    
    $ sudo apt update
    
  3. Install MariaDB ColumnStore and package dependencies:

    $ sudo apt install mariadb-server mariadb-backup \
        mariadb-plugin-columnstore
    

    Note that ColumnStore currently installs an x-columnstore.cnf configuration file that sets plugin_maturity to gamma. This means that other plugins with gamma maturity can also be installed.

  4. Install some dependencies for ColumnStore's CMAPI Server.

    On Debian 10, install the following:

    $ sudo apt install python python-apt python-pymysql python-pip \
       python3 python3-apt python3-pymysql python3-pip \
       aptitude htop libjemalloc2 jq mlocate net-tools curl
    

    On Ubuntu 20.04, install the following:

    $ sudo apt install python3 python3-apt python3-pymysql python3-pip python-is-python3 \
       aptitude htop libjemalloc2 jq mlocate net-tools curl
    

    On Debian 9, Ubuntu 16.04, and Ubuntu 18.04, install the following:

    $ sudo apt install python python-apt python-pymysql python-pip \
       python3 python3-apt python3-pymysql python3-pip \
       aptitude htop libjemalloc1 jq mlocate net-tools curl
    

    Note that jemalloc is not required, but it improves performance.

  5. Create a directory for the CMAPI Server installation, and change into the new directory:

    $ sudo mkdir -p /opt/cmapi
    $ sudo chmod 0755 /opt/cmapi
    $ cd /opt/cmapi
    
  6. Download the CMAPI Server into the directory:

    $ sudo wget https://dlm.mariadb.com/{{ customer_download_token }}/mariadb-enterprise-server/10.5.4-2/cmapi/mariadb-columnstore-cmapi.tar.gz
    
  7. Extract the CMAPI Server into the directory:

    $ sudo tar -xvzf mariadb-columnstore-cmapi.tar.gz
    
  8. Install the CMAPI Server:

    $ sudo ./service.sh install
    
  9. Configure MariaDB ColumnStore.

    Installation only loads MariaDB ColumnStore to the system. MariaDB ColumnStore may require configuration and additional post-installation steps before the database server is ready for use.

    See Configuration.

Install via ZYpp (SLES)

  1. Retrieve your Customer Download Token at https://customers.mariadb.com/downloads/token/ and substitute for customer_download_token in the following directions.

  2. Configure the ZYpp package repository.

    MariaDB ColumnStore 1.5 is available on MariaDB Enterprise Server 10.5.

    To configure ZYpp package repositories:

    $ sudo zypper install wget
    
    $ wget https://dlm.mariadb.com/enterprise-release-helpers/mariadb_es_repo_setup
    
    $ echo "93fa0df3d6491a791f5d699158dcfe3e6ce20c45ddc2f534ed2f5eac6468ff0a  mariadb_es_repo_setup" \
        | sha256sum -c -
    
    $ chmod +x mariadb_es_repo_setup
    
    $ sudo ./mariadb_es_repo_setup --token="customer_download_token" --apply \
       --mariadb-server-version="10.5"
    
  3. Install MariaDB ColumnStore and package dependencies:

    $ sudo zypper install MariaDB-server MariaDB-backup \
        MariaDB-columnstore-engine
    

    Note that ColumnStore currently installs an x-columnstore.cnf configuration file that sets plugin_maturity to gamma. This means that other plugins with gamma maturity can also be installed.

  4. Install some dependencies for ColumnStore's CMAPI Server.

    On SLES 15, install the following:

    $ sudo zypper install python2 python2-PyMySQL python2-pip \
       python3 python3-PyMySQL python3-pip \
       htop jemalloc jq mlocate net-tools curl
    

    On SLES 12, install the following:

    $ sudo zypper install python python-PyMySQL python-pip \
       python3 python3-PyMySQL python3-pip \
       htop jemalloc jq mlocate net-tools curl
    

    Note that jemalloc is not required, but it improves performance.

  5. Create a directory for the CMAPI Server installation, and change into the new directory:

    $ sudo mkdir -p /opt/cmapi
    $ sudo chmod 0755 /opt/cmapi
    $ cd /opt/cmapi
    
  6. Download the CMAPI Server into the directory:

    $ sudo wget https://dlm.mariadb.com/{{ customer_download_token }}/mariadb-enterprise-server/10.5.4-2/cmapi/mariadb-columnstore-cmapi.tar.gz
    
  7. Extract the CMAPI Server into the directory:

    $ sudo tar -xvzf mariadb-columnstore-cmapi.tar.gz
    
  8. Install the CMAPI Server:

    $ sudo ./service.sh install
    
  9. Configure MariaDB ColumnStore.

    Installation only loads MariaDB ColumnStore to the system. MariaDB ColumnStore may require configuration and additional post-installation steps before the database server is ready for use.

    See Configuration.

Configuration

Configuration Compatibility

MariaDB ColumnStore 1.5 updates the prefix used in system variables. The infinidb_ prefix used in MariaDB ColumnStore 1.2 has been replaced with the columnstore_ prefix.

MariaDB ColumnStore 1.2 configuration files are not compatible with MariaDB ColumnStore 1.5.

Server Configuration

MariaDB Enterprise Server can be configured in the following ways:

  • System variables and options can be set in a configuration file (such as /etc/my.cnf). MariaDB Enterprise Server must be restarted to apply changes made to the configuration file.

  • System variables and options can be set on the command-line.

  • If a system variable supports dynamic changes, then it can be set on-the-fly using the SET statement.

Configuration Files

MariaDB's packages include several bundled configuration files. It is also possible to create custom configuration files.

On RHEL, CentOS, and SLES, MariaDB's packages bundle the following configuration files:

  • /etc/my.cnf

  • /etc/my.cnf.d/client.cnf

  • /etc/my.cnf.d/mariadb-enterprise.cnf

  • /etc/my.cnf.d/mysql-clients.cnf

  • /etc/my.cnf.d/server.cnf

And on RHEL, CentOS, and SLES, custom configuration files from the following directories are read by default:

  • /etc/my.cnf.d/

On Debian and Ubuntu, MariaDB's packages bundle the following configuration files:

  • /etc/mysql/my.cnf

  • /etc/mysql/mariadb.cnf

  • /etc/mysql/mariadb.conf.d/50-client.cnf

  • /etc/mysql/mariadb.conf.d/50-mysql-clients.cnf

  • /etc/mysql/mariadb.conf.d/50-mysqld_safe.cnf

  • /etc/mysql/mariadb.conf.d/50-server.cnf

  • /etc/mysql/mariadb.conf.d/60-galera.cnf

  • /etc/mysql/mariadb.conf.d/mariadb-enterprise.cnf

And on Debian and Ubuntu, custom configuration files from the following directories are read by default:

  • /etc/mysql/conf.d/

  • /etc/mysql/mariadb.conf.d/

Renamed Server Configuration

For platforms that use YUM or ZYpp as a package manager:

MariaDB Enterprise Server's packages bundle several configuration files:

  • /etc/my.cnf

  • /etc/my.cnf.d/client.cnf

  • /etc/my.cnf.d/mariadb-enterprise.cnf

  • /etc/my.cnf.d/mysql-clients.cnf

  • /etc/my.cnf.d/server.cnf

If your version of any of these configuration files contained any custom edits, then the package manager may save your edited version with the .rpmsave extension during the upgrade process. If you want to continue using your version with the custom edits, then you may need to move it back. For example, to move server.cnf back in place:

$ sudo mv /etc/my.cnf.d/server.cnf /etc/my.cnf.d/server.cnf.original
$ sudo mv /etc/my.cnf.d/server.cnf.rpmsave /etc/my.cnf.d/server.cnf

Configuring MariaDB

  1. Determine which system variables and options you need to configure.

    Mandatory system variables and options for multi-node ColumnStore include:

    System Variable/Option   Description

    character_set_server     Set this system variable to utf8.
    server_id                Sets the numeric server ID for this MariaDB Enterprise Server. Must be unique for each node in the deployment.
    log_bin                  Set this option to enable the binary log.

  2. Choose a configuration file in which to configure your system variables and options.

    It is not recommended to make custom changes to one of the bundled configuration files. Instead, create a custom configuration file in one of the included directories. Configuration files in included directories are read in alphabetical order. If you want your custom configuration file to override the bundled configuration files, then prefix its name with a string that sorts last, such as z-.

    • On RHEL, CentOS, and SLES, a good custom configuration file would be: /etc/my.cnf.d/z-custom-my.cnf

    • On Debian and Ubuntu, a good custom configuration file would be: /etc/mysql/mariadb.conf.d/z-custom-my.cnf

  3. Set your system variables and options in the configuration file.

    They need to be set in a group that will be read by mariadbd, such as [mariadb] or [server].

    For example:

    [mariadb]
    log_error   = mariadbd.err
    character_set_server = utf8
    log_bin     = mariadb-bin
    # This must be unique on each node
    server_id   = 1
    
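Since server_id must differ on every node, it can help to read the configured value back on each server and compare. The extract_server_id helper below is an illustrative sketch, using the custom configuration file path suggested above:

```shell
# extract_server_id FILE prints the server_id value from a
# my.cnf-style configuration file (illustrative helper)
extract_server_id() {
    awk -F= '/^[[:space:]]*server_id[[:space:]]*=/ {
        gsub(/[[:space:]]/, "", $2); print $2
    }' "$1"
}

# For example, on RHEL, CentOS, and SLES:
#   extract_server_id /etc/my.cnf.d/z-custom-my.cnf
```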

Cross Engine Joins

When a cross engine join is executed, the ExeMgr process connects to the server using the root user with no password by default. MariaDB Enterprise Server 10.5 will reject this login attempt by default. If you plan to use Cross Engine Joins, you need to configure ColumnStore to use a different user account and password.

  1. Configure the Cross Engine Join credentials using the mcsSetConfig command.

    For example, to configure ColumnStore to use the cross_engine user account to connect to the server at 127.0.0.1:

    $ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
    $ sudo mcsSetConfig CrossEngineSupport Port 3306
    $ sudo mcsSetConfig CrossEngineSupport User cross_engine
    $ sudo mcsSetConfig CrossEngineSupport Password cross_engine_passwd
    
  2. The cross_engine@127.0.0.1 user account needs to be created on the server after it has been started. This step is described in the Create the Cross Engine Join User section.
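You can read the stored values back with the mcsGetConfig command to confirm that step 1 took effect; for example:

```shell
# Read back the Cross Engine Join settings configured above
mcsGetConfig CrossEngineSupport Host
mcsGetConfig CrossEngineSupport Port
mcsGetConfig CrossEngineSupport User
```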

S3 Storage Manager

MariaDB ColumnStore supports using S3-compatible storage. If you want to use S3-compatible storage, then you need to configure it.

  1. Edit /etc/columnstore/storagemanager.cnf:

    [ObjectStorage]
    
    service = S3
    
    [S3]
    bucket = your_columnstore_bucket_name
    endpoint = your_s3_endpoint
    aws_access_key_id = your_s3_access_key_id
    aws_secret_access_key = your_s3_secret_key
    
    [Cache]
    cache_size = your_local_cache_size
    path = your_local_cache_path
    
    • The default local cache size is 2 GB.

    • The default local cache path is /var/lib/columnstore/storagemanager/cache.

  2. Ensure that the local cache path has sufficient storage space for the local cache.
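To check the available space for step 2, you can report free space on the filesystem that will hold the cache. The sketch below walks up from the default cache path to the nearest existing directory, since the cache directory may not exist yet:

```shell
# Report free space at (or nearest to) the Storage Manager cache path;
# the default path is used here, adjust if you changed it
CACHE_PATH=/var/lib/columnstore/storagemanager/cache
d=$CACHE_PATH
while [ ! -d "$d" ] && [ "$d" != "/" ]; do
    d=$(dirname "$d")
done
df -h "$d"
```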

Restore Data

MariaDB Enterprise Server with MariaDB ColumnStore 1.5 uses a different directory structure from MariaDB ColumnStore 1.2. In order to restore your data, you need to move it from the 1.2 directory locations to the new ones.

Restoring Server Data

In MariaDB ColumnStore 1.2, MariaDB Server writes ColumnStore data to DBRoots, and writes data from all other storage engines to a non-standard data directory, located at /usr/local/mariadb/columnstore/mysql/db/ in root installations. To restore the Server data from the 1.2 installation to the MariaDB Enterprise Server 10.5 installation, you need to move the old data directory to the new location.

  1. On each MariaDB Enterprise Server, move the default data directory to an alternate location:

    $ sudo mv /var/lib/mysql /var/lib/mysql_default
    
  2. Move the data directory from MariaDB ColumnStore 1.2 to the new location:

    $ sudo mv /usr/local/mariadb/columnstore/mysql/db /var/lib/mysql
    

Restoring ColumnStore Data

In MariaDB ColumnStore 1.2, DBRoot files containing ColumnStore data are stored on Performance Modules (PM), in the ColumnStore home directory. The convention for the file names is dataN.

  1. On each PM, identify the local DBRoots:

    $ sudo ls /usr/local/mariadb/columnstore | grep -E "data[0-9]+"
    data1   data2   data3
    
  2. For each DBRoot, move it from the directory used by the ColumnStore 1.2 installation to the one used by the ColumnStore 1.5 installation:

    $ sudo mv /usr/local/mariadb/columnstore/data1 /var/lib/columnstore/data1
    
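The per-DBRoot move in step 2 can be repeated for every dataN directory in one pass. The move_dbroots helper below is an illustrative sketch; on a real PM it needs to run as root:

```shell
# move_dbroots SRC DST moves every dataN directory found under SRC
# into DST (illustrative helper; run as root on a real system)
move_dbroots() {
    src=$1; dst=$2
    for d in "$src"/data[0-9]*; do
        [ -d "$d" ] || continue
        mv "$d" "$dst/$(basename "$d")"
    done
}

# On each PM (run as root):
#   move_dbroots /usr/local/mariadb/columnstore /var/lib/columnstore
```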

Post Installation

Installing the MariaDB Enterprise Server and MariaDB ColumnStore packages provides the software necessary to run the Server as a ColumnStore instance, but the Server is not yet configured to use the ColumnStore storage back-end. A few additional steps are needed to complete the configuration.

Update the Bash Profile

MariaDB ColumnStore 1.2 shipped with a columnstoreAlias.sh script, which created a series of aliases for various commands.

Remove reference to this file from your .bash_profile and any copies in /etc/profile.d/ to ensure that the commands below call the MariaDB ColumnStore 1.5 binaries in /usr/bin and not the MariaDB ColumnStore 1.2 aliases.

Note that you will need to open a new shell after the above change so that the updated environment variables take effect.

Start the Server

The server and the ColumnStore processes can be started using the systemctl command.

  1. Start the MariaDB Server process and configure it to start automatically:

    $ sudo systemctl start mariadb
    $ sudo systemctl enable mariadb
    
  2. Start the MariaDB ColumnStore processes and configure them to start automatically:

    $ sudo systemctl start mariadb-columnstore
    $ sudo systemctl enable mariadb-columnstore
    
  3. Start the CMAPI process and configure it to start automatically:

    $ sudo systemctl start mariadb-columnstore-cmapi
    $ sudo systemctl enable mariadb-columnstore-cmapi
    

Upgrade the Data Directory

  1. Execute mariadb-upgrade to update the data directory:

    $ sudo mariadb-upgrade
    

Create the Cross Engine Join User

The credentials for cross engine joins were previously configured in the Cross Engine Joins section. The user account must also be created, and the user account must be granted the necessary privileges to access data.

  1. Connect to the server using MariaDB Client using the root@localhost user account:

    $ sudo mariadb
    
  2. Create the user account with the CREATE USER statement:

    CREATE USER 'cross_engine'@'127.0.0.1'
       IDENTIFIED BY "cross_engine_passwd";
    
  3. Grant the user account SELECT privileges on all databases with the GRANT statement:

    GRANT SELECT ON *.*
       TO 'cross_engine'@'127.0.0.1';
    

Configure DNS

MariaDB ColumnStore's CMAPI requires all nodes to have host names that are resolvable on all other nodes. If your infrastructure does not configure DNS centrally, then you may need to configure static DNS entries in each server's /etc/hosts file.

  1. On each server, edit the /etc/hosts file to map host names to the IP address of each node:

    192.0.2.1     mcs1
    192.0.2.2     mcs2
    192.0.2.3     mcs3
    
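After editing /etc/hosts, confirm that each node's host name resolves locally. The check_host helper below is illustrative, and the host names mcs1 through mcs3 match the example above:

```shell
# check_host NAME prints whether a host name resolves on this server;
# any "unresolved" name needs a hosts-file or DNS fix
check_host() {
    if getent hosts "$1" >/dev/null 2>&1; then
        echo "resolved: $1"
    else
        echo "unresolved: $1"
    fi
}

# The node names from the example above:
for h in mcs1 mcs2 mcs3; do
    check_host "$h"
done
```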

Create the Replication User

The deployment requires MariaDB Replication, so the replication user account must be created, and the user account must be granted sufficient privileges to perform replication.

  1. Connect to the primary server using MariaDB Client using the root@localhost user account:

    $ sudo mariadb
    
  2. Create the user account with the CREATE USER statement:

    CREATE USER 'repl'@'192.0.2.%'
       IDENTIFIED BY 'passwd';
    

    Ensure that the user account can connect from the IP address of all replicas.

  3. Grant the user account REPLICATION SLAVE and the BINLOG MONITOR global privileges with the GRANT statement:

    GRANT REPLICATION SLAVE, BINLOG MONITOR ON *.*
       TO 'repl'@'192.0.2.%';
    
  4. This user account will be used to configure replication in the Configure MariaDB Replication section.

Configure MariaDB Replication

The deployment requires MariaDB Replication, which must be configured.

  1. Connect to the primary server using MariaDB Client using the root@localhost user account:

    $ sudo mariadb
    
  2. On the primary server, obtain the current GTID position by querying the gtid_current_pos system variable using SHOW GLOBAL VARIABLES:

    SHOW GLOBAL VARIABLES LIKE 'gtid_current_pos';
    +------------------+-------+
    | Variable_name    | Value |
    +------------------+-------+
    | gtid_current_pos | 0-1-7 |
    +------------------+-------+
    

    If the value is non-empty, then take note of the value for later.

  3. Connect to each replica server using MariaDB Client using the root@localhost user account:

    $ sudo mariadb
    
  4. If the current GTID position of the primary server was non-empty in the previous step, then you may need to set the GTID position on each replica server. Otherwise, you can skip this step.

    On each replica server, the current GTID position can be set by setting the gtid_slave_pos system variable using SET GLOBAL:

    SET GLOBAL gtid_slave_pos='0-1-7';
    
  5. Use the CHANGE MASTER TO statement to configure the server to replicate from the primary server starting from this position:

    CHANGE MASTER TO
       MASTER_HOST='192.0.2.1',
       MASTER_USER='repl',
       MASTER_PASSWORD='passwd',
       MASTER_USE_GTID=slave_pos;
    
  6. Start replication using the START REPLICA statement:

    START REPLICA;
    
  7. Confirm that replication is working properly using the SHOW REPLICA STATUS statement:

    SHOW REPLICA STATUS;
    
  8. Ensure that each replica server cannot accept local writes by setting the read_only system variable to ON using the SET GLOBAL statement:

    SET GLOBAL read_only=ON;
    

Initiate Primary Server using CMAPI

The primary server needs to be initiated using the CMAPI.

  1. Create an API key for the cluster.

    This API key should be stored securely and kept confidential, because it can be used to add nodes to the multi-node ColumnStore deployment.

    For example, to create a random 256-bit API key using openssl rand:

    $ openssl rand -hex 32
    93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd
    

    This document will use the following API key in further examples, but users should create their own instead: 93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd

  2. Set the API key for the cluster by using curl to send the status command to the CMAPI Server.

    The new API key needs to be provided as part of the X-API-key HTTP header.

    The output can be piped to jq, so that the JSON results are pretty-printed.

    For example, if the primary server's host name is mcs1:

    $ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       | jq .
    

    Example output:

    {
      "timestamp": "2020-07-30 21:26:04.367537",
      "127.0.0.1": {
        "timestamp": "2020-07-30 21:26:04.375173",
        "uptime": 4525,
        "dbrm_mode": "master",
        "cluster_mode": "readwrite",
        "dbroots": [
          "1"
        ],
        "module_id": 1,
        "services": [
          {
            "name": "workernode",
            "pid": 4406
          },
          {
            "name": "PrimProc",
            "pid": 4410
          },
          {
            "name": "controllernode",
            "pid": 4448
          },
          {
            "name": "WriteEngine",
            "pid": 4451
          },
          {
            "name": "ExeMgr",
            "pid": 4457
          },
          {
            "name": "DMLProc",
            "pid": 4464
          },
          {
            "name": "DDLProc",
            "pid": 4467
          }
        ]
      }
    }
    
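Steps 1 and 2 above can also be scripted. A hedged sketch using only Python's standard library: the key is generated with secrets.token_hex (equivalent in strength to openssl rand -hex 32), and the status request carries it in the x-api-key header. Certificate verification is disabled to mirror curl -k; the host name mcs1 is the example placeholder.

```python
import json
import secrets
import ssl
import urllib.request

# Step 1: a random 256-bit key, like `openssl rand -hex 32`.
api_key = secrets.token_hex(32)

# Step 2: the status request that the curl command above sends.
def build_status_request(host: str, key: str) -> urllib.request.Request:
    return urllib.request.Request(
        f"https://{host}:8640/cmapi/0.4.0/cluster/status",
        headers={"Content-Type": "application/json", "x-api-key": key},
    )

def fetch_status(host: str, key: str) -> dict:
    # Mirrors `curl -k`: skip verification of the self-signed certificate.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(build_status_request(host, key),
                                context=ctx) as resp:
        return json.load(resp)
```

Against a running CMAPI Server, fetch_status("mcs1", api_key) would return the JSON document shown in the example output.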

Initiate Replica Servers using CMAPI

The replica servers need to be initiated using the CMAPI.

  1. For each replica server, use curl to send the add-node command to the primary server's CMAPI Server.

    The previously set API key needs to be provided as part of the X-API-key HTTP header.

    The output can be piped to jq, so that the JSON results are pretty-printed.

    For example, if the primary server's host name is mcs1 and the replica server's host name is mcs2:

    $ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/add-node \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       --data '{"timeout":120, "node": "mcs2"}' \
       | jq .
    

    Example output:

    {
      "timestamp": "2020-07-30 21:34:07.365452",
      "node_id": "mcs2"
    }
    
  2. After all replica servers have been added, use curl to send the status command to the CMAPI Server.

    The previously set API key needs to be provided as part of the X-API-key HTTP header.

    The output can be piped to jq, so that the JSON results are pretty-printed.

    For example, if the primary server's host name is mcs1:

    $ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       | jq .
    

    Use the output to confirm that all nodes have been successfully added.

    Example output:

    {
      "timestamp": "2020-07-30 21:29:47.748102",
      "mcs1": {
        "timestamp": "2020-07-30 21:32:10.377710",
        "uptime": 75277,
        "dbrm_mode": "master",
        "cluster_mode": "readwrite",
        "dbroots": [
          "1"
        ],
        "module_id": 1,
        "services": [
          {
            "name": "workernode",
            "pid": 21471
          },
          {
            "name": "controllernode",
            "pid": 21482
          },
          {
            "name": "PrimProc",
            "pid": 21487
          },
          {
            "name": "ExeMgr",
            "pid": 21517
          },
          {
            "name": "WriteEngine",
            "pid": 21526
          },
          {
            "name": "DDLProc",
            "pid": 21572
          },
          {
            "name": "DMLProc",
            "pid": 21576
          }
        ]
      },
      "mcs2": {
        "timestamp": "2020-07-30 21:34:07.781330",
        "uptime": 75282,
        "dbrm_mode": "slave",
        "cluster_mode": "readwrite",
        "dbroots": [
          "2"
        ],
        "module_id": 2,
        "services": [
          {
            "name": "workernode",
            "pid": 21215
          },
          {
            "name": "PrimProc",
            "pid": 21224
          },
          {
            "name": "ExeMgr",
            "pid": 21254
          },
          {
            "name": "WriteEngine",
            "pid": 21263
          },
          {
            "name": "DDLProc",
            "pid": 21296
          },
          {
            "name": "DMLProc",
            "pid": 21299
          }
        ]
      },
      "mcs3": {
        "timestamp": "2020-07-30 21:34:47.811159",
        "uptime": 75286,
        "dbrm_mode": "slave",
        "cluster_mode": "readwrite",
        "dbroots": [
          "3"
        ],
        "module_id": 3,
        "services": [
          {
            "name": "workernode",
            "pid": 20799
          },
          {
            "name": "PrimProc",
            "pid": 20808
          },
          {
            "name": "ExeMgr",
            "pid": 20838
          },
          {
            "name": "WriteEngine",
            "pid": 20846
          },
          {
            "name": "DDLProc",
            "pid": 20880
          },
          {
            "name": "DMLProc",
            "pid": 20882
          }
        ]
      }
    }
    
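The add-node call in step 1 is a PUT request with a small JSON body. A sketch of how that request could be built with the standard library (host names and the API key are the example placeholders; sending it would need the same curl -k style TLS context as the status call):

```python
import json
import urllib.request

API_KEY = "93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd"

# Build the PUT request that `curl -X PUT .../cluster/add-node` sends.
def build_add_node_request(primary: str, node: str,
                           timeout: int = 120) -> urllib.request.Request:
    payload = json.dumps({"timeout": timeout, "node": node}).encode()
    return urllib.request.Request(
        f"https://{primary}:8640/cmapi/0.4.0/cluster/add-node",
        data=payload,
        method="PUT",
        headers={"Content-Type": "application/json", "x-api-key": API_KEY},
    )

req = build_add_node_request("mcs1", "mcs2")
print(req.get_method(), req.full_url)
# -> PUT https://mcs1:8640/cmapi/0.4.0/cluster/add-node
```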

Build System Tables

If you are using S3 storage, then you also need to build the system tables.

  1. Build the system tables using the dbbuilder command with the argument 7:

    $ sudo dbbuilder 7
    

    If your system tables already exist, then you will see a message like the following:

    Build system catalog was not completed. System catalog appears to exist.  It will remain intact for reuse.  The database is not recreated.
    

Configuring the Linux Security Module

If you stopped the Linux Security Module (LSM) on each node during installation, you can restart the module and configure it on each node.

The specific steps to configure the security module depend on the platform.

Configuring the Linux Security Module with SELinux (RHEL/CentOS/SLES)

After installation, SELinux can be properly configured to handle ColumnStore. This can be done while SELinux is still in permissive mode using the audit2allow command.

  1. To configure SELinux, you have to install the packages required for audit2allow.

    On RHEL 7 and CentOS 7, install the following:

    $ sudo yum install policycoreutils policycoreutils-python
    

    On RHEL 8 and CentOS 8, install the following:

    $ sudo yum install policycoreutils python3-policycoreutils policycoreutils-python-utils
    

    On SLES, install the following:

    $ sudo zypper install policycoreutils python3-policycoreutils
    
  2. Allow the system to run under load for a while to generate SELinux audit events.

  3. After the system has taken some load, generate an SELinux policy from the audit events using audit2allow:

    $ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
    

    If no audit events were found, then this will print the following:

    $ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
    Nothing to do
    
  4. If audit events were found, then the new SELinux policy can be loaded using semodule:

    $ sudo semodule -i mariadb_local.pp
    
  5. Set SELinux to enforcing mode by setting SELINUX=enforcing in /etc/selinux/config:

    # This file controls the state of SELinux on the system.
    # SELINUX= can take one of these three values:
    #     enforcing - SELinux security policy is enforced.
    #     permissive - SELinux prints warnings instead of enforcing.
    #     disabled - No SELinux policy is loaded.
    SELINUX=enforcing
    # SELINUXTYPE= can take one of three values:
    #     targeted - Targeted processes are protected,
    #     minimum - Modification of targeted policy. Only selected processes are protected.
    #     mls - Multi Level Security protection.
    SELINUXTYPE=targeted
    
  6. Reboot the system.

  7. Confirm that SELinux is in enforcing mode using getenforce:

    $ sudo getenforce
    

Configuring the Linux Security Module with AppArmor (Debian/Ubuntu/SLES)

After installation, AppArmor can be properly configured to handle ColumnStore.

An AppArmor profile must be created for ColumnStore. For information on how to do that, see How to create an AppArmor Profile.

Configuring the Firewall Service

If you stopped the firewall service on each node during installation, you can restart the service and configure it to open the relevant ports on each node.

The specific steps to configure the firewall service depend on the platform.

Configuring the Firewall Service with firewalld (RHEL/CentOS/SLES 15)

  1. Check whether the firewalld service is running:

    $ sudo systemctl status firewalld
    
  2. If the firewalld service was stopped to perform the installation, then start it now:

    $ sudo systemctl start firewalld
    
  3. Open up the relevant ports using firewall-cmd.

    For example, if your ColumnStore nodes are in the 192.0.2.0/24 subnet:

    $ sudo firewall-cmd --permanent --add-rich-rule='
       rule family="ipv4" source address="192.0.2.0/24" destination address="192.0.2.0/24" port port="3306" protocol="tcp" accept'
    $ sudo firewall-cmd --permanent --add-rich-rule='
       rule family="ipv4" source address="192.0.2.0/24" destination address="192.0.2.0/24" port port="8600-8630" protocol="tcp" accept'
    $ sudo firewall-cmd --permanent --add-rich-rule='
       rule family="ipv4" source address="192.0.2.0/24" destination address="192.0.2.0/24" port port="8640" protocol="tcp" accept'
    $ sudo firewall-cmd --permanent --add-rich-rule='
       rule family="ipv4" source address="192.0.2.0/24" destination address="192.0.2.0/24" port port="8700" protocol="tcp" accept'
    $ sudo firewall-cmd --permanent --add-rich-rule='
       rule family="ipv4" source address="192.0.2.0/24" destination address="192.0.2.0/24" port port="8800" protocol="tcp" accept'

  4. Reload the runtime configuration:

    $ sudo firewall-cmd --reload
    
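The five rich rules above differ only in the port field, so they can be generated in a loop rather than typed by hand. A sketch using the example subnet and the ColumnStore port list:

```python
SUBNET = "192.0.2.0/24"
PORTS = ["3306", "8600-8630", "8640", "8700", "8800"]

# One firewalld rich rule per port, matching the commands above.
def rich_rule(port: str, subnet: str = SUBNET) -> str:
    return (f'rule family="ipv4" source address="{subnet}" '
            f'destination address="{subnet}" '
            f'port port="{port}" protocol="tcp" accept')

for port in PORTS:
    print(f"sudo firewall-cmd --permanent --add-rich-rule='{rich_rule(port)}'")
```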

Configuring the Firewall Service with SuSEFirewall2 (SLES 12)

  1. Open up the relevant ports in a SuSEFirewall2 configuration file. For proper change management, it is recommended to use a ColumnStore-specific configuration file.

    For example, add the following to /etc/sysconfig/SuSEfirewall2.d/services/mariadb-columnstore:

    ## Name: MariaDB ColumnStore Service
    ## Description: Opens ports required for MariaDB ColumnStore
    
    TCP="3306 8600:8630 8640 8700 8800"
    
  2. Add the service to the relevant zone in /etc/sysconfig/SuSEfirewall2.

    For example, if your network interface is in the Internal Zone:

    FW_CONFIGURATIONS_INT="mariadb-columnstore"
    
  3. Restart the SuSEFirewall2 service:

    $ sudo systemctl restart SuSEfirewall2_init
    $ sudo systemctl restart SuSEfirewall2_setup
    

Configuring the Firewall Service with UFW (Ubuntu)

  1. Check whether the UFW service is running:

    $ sudo ufw status verbose
    
  2. If the UFW service was stopped to perform the installation, then start it now:

    $ sudo ufw enable
    
  3. Open up the relevant ports using ufw.

    For example, if your ColumnStore nodes are in the 192.0.2.0/24 subnet and the address of the node being configured is 192.0.2.3:

    $ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 3306 proto tcp
    $ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8600:8630 proto tcp
    $ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8640 proto tcp
    $ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8700 proto tcp
    $ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8800 proto tcp
    
  4. Reload the runtime configuration:

    $ sudo ufw reload
    
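Like the firewalld rules, the ufw commands repeat for each port, and the full rule set must be applied on each node with that node's own address as the destination. A sketch generating the commands from the example subnet and node addresses:

```python
SUBNET = "192.0.2.0/24"
NODES = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]  # example node addresses
PORTS = ["3306", "8600:8630", "8640", "8700", "8800"]

# The ufw allow commands from step 3, for one node address.
def ufw_commands(node: str) -> list:
    return [f"sudo ufw allow from {SUBNET} to {node} port {port} proto tcp"
            for port in PORTS]

for node in NODES:
    for command in ufw_commands(node):
        print(command)
```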

Administration

MariaDB Enterprise Server uses systemctl to start and stop the server processes:

    Operation                 Command
    ------------------------  ----------------------------------
    Start                     sudo systemctl start mariadb
    Stop                      sudo systemctl stop mariadb
    Restart                   sudo systemctl restart mariadb
    Enable during startup     sudo systemctl enable mariadb
    Disable during startup    sudo systemctl disable mariadb
    Status                    sudo systemctl status mariadb

MariaDB ColumnStore also uses systemctl to start and stop the ColumnStore processes:

    Operation                 Command
    ------------------------  ---------------------------------------------
    Start                     sudo systemctl start mariadb-columnstore
    Stop                      sudo systemctl stop mariadb-columnstore
    Restart                   sudo systemctl restart mariadb-columnstore
    Enable during startup     sudo systemctl enable mariadb-columnstore
    Disable during startup    sudo systemctl disable mariadb-columnstore
    Status                    sudo systemctl status mariadb-columnstore

MariaDB ColumnStore also uses systemctl to start and stop the CMAPI processes:

    Operation                 Command
    ------------------------  ---------------------------------------------------
    Start                     sudo systemctl start mariadb-columnstore-cmapi
    Stop                      sudo systemctl stop mariadb-columnstore-cmapi
    Restart                   sudo systemctl restart mariadb-columnstore-cmapi
    Enable during startup     sudo systemctl enable mariadb-columnstore-cmapi
    Disable during startup    sudo systemctl disable mariadb-columnstore-cmapi
    Status                    sudo systemctl status mariadb-columnstore-cmapi
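The three services above share the same set of systemctl operations and differ only in the unit name, so administration scripts only need those two values. A small sketch:

```python
UNITS = ("mariadb", "mariadb-columnstore", "mariadb-columnstore-cmapi")
OPERATIONS = ("start", "stop", "restart", "enable", "disable", "status")

# Compose a systemctl command as shown in the tables above.
def systemctl_command(operation: str, unit: str) -> str:
    if operation not in OPERATIONS:
        raise ValueError(f"unknown operation: {operation}")
    if unit not in UNITS:
        raise ValueError(f"unknown unit: {unit}")
    return f"sudo systemctl {operation} {unit}"

print(systemctl_command("restart", "mariadb-columnstore"))
# -> sudo systemctl restart mariadb-columnstore
```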

Testing

When you have MariaDB ColumnStore up and running, you should test it to ensure that it is in working order and that there were no issues during startup.

Checking Server Status

  1. Connect to the server using MariaDB Client using the root@localhost user account:

    $ sudo mariadb
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 38
    Server version: 10.5.5-3-MariaDB-Enterprise MariaDB Enterprise Server
    
    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    MariaDB [(none)]>
    

Checking System Status

  1. Use curl to send the status command to the CMAPI Server using your deployment's API key.

    $ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
       --header 'Content-Type:application/json' \
       --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
       | jq .
    

    Example output:

    {
      "timestamp": "2020-07-21 21:29:47.748102",
      "mcs1": {
        "timestamp": "2020-07-21 21:29:47.753729",
        "uptime": 75277,
        "dbrm_mode": "master",
        "cluster_mode": "readwrite",
        "dbroots": [
          "1"
        ],
        "module_id": 1,
        "services": [
          {
            "name": "workernode",
            "pid": 21471
          },
          {
            "name": "controllernode",
            "pid": 21482
          },
          {
            "name": "PrimProc",
            "pid": 21487
          },
          {
            "name": "ExeMgr",
            "pid": 21517
          },
          {
            "name": "WriteEngine",
            "pid": 21526
          },
          {
            "name": "DDLProc",
            "pid": 21572
          },
          {
            "name": "DMLProc",
            "pid": 21576
          }
        ]
      },
      "mcs2": {
        "timestamp": "2020-07-21 21:29:47.781330",
        "uptime": 75282,
        "dbrm_mode": "slave",
        "cluster_mode": "readwrite",
        "dbroots": [
          "2"
        ],
        "module_id": 2,
        "services": [
          {
            "name": "workernode",
            "pid": 21215
          },
          {
            "name": "PrimProc",
            "pid": 21224
          },
          {
            "name": "ExeMgr",
            "pid": 21254
          },
          {
            "name": "WriteEngine",
            "pid": 21263
          },
          {
            "name": "DDLProc",
            "pid": 21296
          },
          {
            "name": "DMLProc",
            "pid": 21299
          }
        ]
      },
      "mcs3": {
        "timestamp": "2020-07-21 21:29:47.811159",
        "uptime": 75286,
        "dbrm_mode": "slave",
        "cluster_mode": "readwrite",
        "dbroots": [
          "3"
        ],
        "module_id": 3,
        "services": [
          {
            "name": "workernode",
            "pid": 20799
          },
          {
            "name": "PrimProc",
            "pid": 20808
          },
          {
            "name": "ExeMgr",
            "pid": 20838
          },
          {
            "name": "WriteEngine",
            "pid": 20846
          },
          {
            "name": "DDLProc",
            "pid": 20880
          },
          {
            "name": "DMLProc",
            "pid": 20882
          }
        ]
      }
    }
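The status payload can also be checked programmatically: every expected node should appear as a top-level key, and exactly one node should report a dbrm_mode of master. A sketch over a trimmed-down version of the example output:

```python
import json

# Verify a CMAPI status payload: all expected nodes present,
# and exactly one DBRM master in the cluster.
def check_cluster(status: dict, expected_nodes: list) -> list:
    nodes = {k: v for k, v in status.items() if k != "timestamp"}
    problems = [n for n in expected_nodes if n not in nodes]
    masters = [n for n, v in nodes.items() if v.get("dbrm_mode") == "master"]
    if len(masters) != 1:
        problems.append("<no unique dbrm master>")
    return problems

# Trimmed-down version of the example output above.
status = json.loads("""
{
  "timestamp": "2020-07-21 21:29:47.748102",
  "mcs1": {"dbrm_mode": "master", "cluster_mode": "readwrite"},
  "mcs2": {"dbrm_mode": "slave", "cluster_mode": "readwrite"},
  "mcs3": {"dbrm_mode": "slave", "cluster_mode": "readwrite"}
}
""")

print(check_cluster(status, ["mcs1", "mcs2", "mcs3"]))  # -> []
```

An empty result means every expected node is present and the DBRM master is unique; any other result lists what is missing.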