Step 4: Start and Configure MariaDB Enterprise Server
Overview
This page details step 4 of the 9-step procedure "Deploy ColumnStore Shared Local Storage Topology".
This step starts and configures MariaDB Enterprise Server 10.6 and MariaDB Enterprise ColumnStore 23.10.
Interactive commands are detailed. Alternatively, the described operations can be performed using automation.
Stop the Enterprise ColumnStore Services
The installation process might have started some of the ColumnStore services. The services should be stopped prior to making configuration changes.
On each Enterprise ColumnStore node, stop the MariaDB Enterprise Server service:
$ sudo systemctl stop mariadb
On each Enterprise ColumnStore node, stop the MariaDB Enterprise ColumnStore service:
$ sudo systemctl stop mariadb-columnstore
On each Enterprise ColumnStore node, stop the CMAPI service:
$ sudo systemctl stop mariadb-columnstore-cmapi
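As noted above, these interactive steps can also be performed using automation. A minimal sketch of such automation, assuming SSH access with passwordless sudo and the hypothetical host names mcs1 through mcs3:

```python
# Sketch: build the "stop service" commands for each ColumnStore node.
# Host names mcs1-mcs3 are hypothetical placeholders; substitute your own.
NODES = ["mcs1", "mcs2", "mcs3"]
SERVICES = ["mariadb", "mariadb-columnstore", "mariadb-columnstore-cmapi"]

def stop_commands(nodes=NODES, services=SERVICES):
    """One ssh command per (service, node) pair, stopping each service
    on every node before moving on to the next service."""
    return [
        ["ssh", node, "sudo", "systemctl", "stop", service]
        for service in services
        for node in nodes
    ]

# Each command can then be executed with subprocess.run(cmd, check=True).
for cmd in stop_commands():
    print(" ".join(cmd))
```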
Configure Enterprise ColumnStore
On each Enterprise ColumnStore node, configure Enterprise Server.
Mandatory system variables and options for ColumnStore Shared Local Storage include:
System Variable/Option | Description
---|---
bind_address | Set this system variable to the address on which the server listens (0.0.0.0 in the example below).
character_set_server | Set this system variable to utf8.
collation_server | Set this system variable to utf8_general_ci.
gtid_strict_mode | Set this system variable to ON.
log_bin | Set this option to the file you want to use for the Binary Log. Setting this option enables binary logging.
log_bin_index | Set this option to the file you want to use to track binlog filenames.
log_slave_updates | Set this system variable to ON.
relay_log | Set this option to the file you want to use for the Relay Logs. Setting this option enables relay logging.
relay_log_index | Set this option to the file you want to use to index Relay Log filenames.
server_id | Sets the numeric Server ID for this MariaDB Enterprise Server. The value set on this option must be unique to each node.
Example Configuration
[mariadb]
bind_address = 0.0.0.0
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci
log_bin = mariadb-bin
log_bin_index = mariadb-bin.index
relay_log = mariadb-relay
relay_log_index = mariadb-relay.index
log_slave_updates = ON
gtid_strict_mode = ON
# This must be unique on each Enterprise ColumnStore node
server_id = 1
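Because the configuration differs between nodes only in server_id, the file contents can be generated per node. A minimal sketch using the example configuration above as a template:

```python
# Sketch: render the per-node Enterprise Server configuration.
# Only server_id varies between nodes; it must be unique on each node.
TEMPLATE = """[mariadb]
bind_address = 0.0.0.0
log_error = mariadbd.err
character_set_server = utf8
collation_server = utf8_general_ci
log_bin = mariadb-bin
log_bin_index = mariadb-bin.index
relay_log = mariadb-relay
relay_log_index = mariadb-relay.index
log_slave_updates = ON
gtid_strict_mode = ON
# This must be unique on each Enterprise ColumnStore node
server_id = {server_id}
"""

def render_config(server_id: int) -> str:
    """Return the configuration file contents for one node."""
    return TEMPLATE.format(server_id=server_id)

# e.g. write render_config(2) to the second node's configuration file
print(render_config(2))
```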
Start the Enterprise ColumnStore Services
On each Enterprise ColumnStore node, start and enable the MariaDB Enterprise Server service, so that it starts automatically upon reboot:
$ sudo systemctl start mariadb
$ sudo systemctl enable mariadb
On each Enterprise ColumnStore node, stop the MariaDB Enterprise ColumnStore service:
$ sudo systemctl stop mariadb-columnstore
When the CMAPI service is started in the next step, CMAPI will start the Enterprise ColumnStore service as needed on each node. CMAPI disables the Enterprise ColumnStore service to prevent systemd from automatically starting Enterprise ColumnStore upon reboot.
On each Enterprise ColumnStore node, start and enable the CMAPI service, so that it starts automatically upon reboot:
$ sudo systemctl start mariadb-columnstore-cmapi
$ sudo systemctl enable mariadb-columnstore-cmapi
For additional information, see "Start and Stop Services".
Create User Accounts
The ColumnStore Shared Local Storage topology requires several user accounts. Each user account should be created on the primary server, so that it is replicated to the replica servers.
Create the Utility User
Enterprise ColumnStore requires a mandatory utility user account to perform cross-engine joins and similar operations.
On the primary server, create the user account with the CREATE USER statement:
CREATE USER 'util_user'@'127.0.0.1' IDENTIFIED BY 'util_user_passwd';
On the primary server, grant the user account SELECT and PROCESS privileges on all databases with the GRANT statement:
GRANT SELECT, PROCESS ON *.* TO 'util_user'@'127.0.0.1';
On each Enterprise ColumnStore node, configure the ColumnStore utility user:
$ sudo mcsSetConfig CrossEngineSupport Host 127.0.0.1
$ sudo mcsSetConfig CrossEngineSupport Port 3306
$ sudo mcsSetConfig CrossEngineSupport User util_user
On each Enterprise ColumnStore node, set the password:
$ sudo mcsSetConfig CrossEngineSupport Password util_user_passwd
For details about how to encrypt the password, see "Credentials Management for MariaDB Enterprise ColumnStore".
Passwords should meet your organization's password policies. If your MariaDB Enterprise Server instance has a password validation plugin installed, then the password should also meet the configured requirements.
Create the Replication User
ColumnStore Shared Local Storage uses MariaDB Replication to replicate writes between the primary and replica servers. As MaxScale can promote a replica server to become a new primary in the event of node failure, all nodes must have a replication user.
The action is performed on the primary server.
Create the replication user and grant it the required privileges:
Use the CREATE USER statement to create the replication user:
CREATE USER 'repl'@'192.0.2.%' IDENTIFIED BY 'repl_passwd';
Replace the referenced IP address with the relevant address for your environment.
Ensure that the user account can connect to the primary server from each replica.
Grant the user account the required privileges with the GRANT statement.
GRANT REPLICA MONITOR, REPLICATION REPLICA, REPLICATION REPLICA ADMIN, REPLICATION MASTER ADMIN ON *.* TO 'repl'@'192.0.2.%';
Create the MaxScale User
ColumnStore Shared Local Storage 23.10 uses MariaDB MaxScale 22.08 to load balance between the nodes.
This action is performed on the primary server.
Use the CREATE USER statement to create the MaxScale user:
CREATE USER 'mxs'@'192.0.2.%' IDENTIFIED BY 'mxs_passwd';
Replace the referenced IP address with the relevant address for your environment.
Ensure that the user account can connect from the IP address of the MaxScale instance.
Use the GRANT statement to grant the privileges required by the router:
GRANT SHOW DATABASES ON *.* TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.columns_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.db TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.procs_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.proxies_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.roles_mapping TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.tables_priv TO 'mxs'@'192.0.2.%';
GRANT SELECT ON mysql.user TO 'mxs'@'192.0.2.%';
Use the GRANT statement to grant the privileges required by the MariaDB Monitor:
GRANT BINLOG ADMIN, READ_ONLY ADMIN, RELOAD, REPLICA MONITOR, REPLICATION MASTER ADMIN, REPLICATION REPLICA ADMIN, REPLICATION REPLICA, SHOW DATABASES, SELECT ON *.* TO 'mxs'@'192.0.2.%';
Configure MariaDB Replication
On each replica server, configure MariaDB Replication:
Use the CHANGE MASTER TO statement to configure the connection to the primary server:
CHANGE MASTER TO MASTER_HOST='192.0.2.1', MASTER_USER='repl', MASTER_PASSWORD='repl_passwd', MASTER_USE_GTID=slave_pos;
Start replication using the START REPLICA statement:
START REPLICA;
Confirm that replication is working using the SHOW REPLICA STATUS statement:
SHOW REPLICA STATUS;
Ensure that the replica server cannot accept local writes by setting the read_only system variable to ON using the SET GLOBAL statement:
SET GLOBAL read_only=ON;
Initiate the Primary Server with CMAPI
Initiate the primary server using CMAPI.
Create an API key for the cluster.
This API key should be stored securely and kept confidential, because it can be used to add cluster nodes to the multi-node Enterprise ColumnStore deployment.
For example, to create a random 256-bit API key using openssl rand:
$ openssl rand -hex 32
93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd
Further examples in this document use the API key shown above, but users should create their own.
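If openssl is unavailable, an equivalent 256-bit key can be generated with Python's standard-library secrets module:

```python
# Generate a random 256-bit API key as 64 hexadecimal characters,
# equivalent to `openssl rand -hex 32`.
import secrets

api_key = secrets.token_hex(32)  # 32 random bytes -> 64 hex characters
print(api_key)
```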
Use CMAPI to add the primary server to the cluster and set the API key.
The new API key needs to be provided in the X-API-Key HTTP header.
For example, if the primary server's host name is mcs1 and its IP address is 192.0.2.1, use the following node command:
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
   --header 'Content-Type:application/json' \
   --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
   --data '{"timeout":120, "node": "192.0.2.1"}' \
   | jq .
{
  "timestamp": "2020-10-28 00:39:14.672142",
  "node_id": "192.0.2.1"
}
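The same request can be built from a script. A minimal sketch using Python's urllib that only constructs the request shown above; mcs1, the API key, and 192.0.2.1 are the example values. Sending it requires relaxing TLS verification (as curl -k does), because CMAPI typically uses a self-signed certificate.

```python
# Sketch: build the CMAPI "add node" PUT request shown above.
# mcs1, the API key, and 192.0.2.1 are example values; substitute your own.
import json
import urllib.request

API_KEY = "93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd"

def add_node_request(host: str, node_ip: str) -> urllib.request.Request:
    """Construct (but do not send) the cluster/node PUT request."""
    body = json.dumps({"timeout": 120, "node": node_ip}).encode()
    return urllib.request.Request(
        url=f"https://{host}:8640/cmapi/0.4.0/cluster/node",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json", "x-api-key": API_KEY},
    )

req = add_node_request("mcs1", "192.0.2.1")
# To actually send it against a self-signed certificate:
#   import ssl
#   resp = urllib.request.urlopen(req, context=ssl._create_unverified_context())
```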
Use CMAPI to check the status of the cluster node:
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
   --header 'Content-Type:application/json' \
   --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
   | jq .
{
  "timestamp": "2020-12-15 00:40:34.353574",
  "192.0.2.1": {
    "timestamp": "2020-12-15 00:40:34.362374",
    "uptime": 11467,
    "dbrm_mode": "master",
    "cluster_mode": "readwrite",
    "dbroots": [ "1" ],
    "module_id": 1,
    "services": [
      { "name": "workernode", "pid": 19202 },
      { "name": "controllernode", "pid": 19232 },
      { "name": "PrimProc", "pid": 19254 },
      { "name": "ExeMgr", "pid": 19292 },
      { "name": "WriteEngine", "pid": 19316 },
      { "name": "DMLProc", "pid": 19332 },
      { "name": "DDLProc", "pid": 19366 }
    ]
  }
}
Add Replica Servers with CMAPI
Add the replica servers with CMAPI:
For each replica server, use CMAPI to add the replica server to the cluster.
The previously set API key needs to be provided in the X-API-Key HTTP header.
For example, if the primary server's host name is mcs1 and the replica server's IP address is 192.0.2.2, use the following node command:
$ curl -k -s -X PUT https://mcs1:8640/cmapi/0.4.0/cluster/node \
   --header 'Content-Type:application/json' \
   --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
   --data '{"timeout":120, "node": "192.0.2.2"}' \
   | jq .
{
  "timestamp": "2020-10-28 00:42:42.796050",
  "node_id": "192.0.2.2"
}
After all replica servers have been added, use CMAPI to confirm that all cluster nodes have been successfully added:
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
   --header 'Content-Type:application/json' \
   --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
   | jq .
{
  "timestamp": "2020-12-15 00:40:34.353574",
  "192.0.2.1": {
    "timestamp": "2020-12-15 00:40:34.362374",
    "uptime": 11467,
    "dbrm_mode": "master",
    "cluster_mode": "readwrite",
    "dbroots": [ "1" ],
    "module_id": 1,
    "services": [
      { "name": "workernode", "pid": 19202 },
      { "name": "controllernode", "pid": 19232 },
      { "name": "PrimProc", "pid": 19254 },
      { "name": "ExeMgr", "pid": 19292 },
      { "name": "WriteEngine", "pid": 19316 },
      { "name": "DMLProc", "pid": 19332 },
      { "name": "DDLProc", "pid": 19366 }
    ]
  },
  "192.0.2.2": {
    "timestamp": "2020-12-15 00:40:34.428554",
    "uptime": 11437,
    "dbrm_mode": "slave",
    "cluster_mode": "readonly",
    "dbroots": [ "2" ],
    "module_id": 2,
    "services": [
      { "name": "workernode", "pid": 17789 },
      { "name": "PrimProc", "pid": 17813 },
      { "name": "ExeMgr", "pid": 17854 },
      { "name": "WriteEngine", "pid": 17877 }
    ]
  },
  "192.0.2.3": {
    "timestamp": "2020-12-15 00:40:34.428554",
    "uptime": 11437,
    "dbrm_mode": "slave",
    "cluster_mode": "readonly",
    "dbroots": [ "2" ],
    "module_id": 2,
    "services": [
      { "name": "workernode", "pid": 17789 },
      { "name": "PrimProc", "pid": 17813 },
      { "name": "ExeMgr", "pid": 17854 },
      { "name": "WriteEngine", "pid": 17877 }
    ]
  },
  "num_nodes": 3
}
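The status response lends itself to automated checks. A minimal sketch that verifies the node count and that exactly one node reports a dbrm_mode of master, using an abbreviated form of the sample output above:

```python
# Sketch: sanity-check a parsed CMAPI cluster/status response.
def check_cluster(status: dict, expected_nodes: int) -> None:
    """Assert the cluster has the expected nodes and one DBRM master."""
    # Node entries are the keys other than the top-level metadata fields.
    nodes = {k: v for k, v in status.items()
             if k not in ("timestamp", "num_nodes")}
    assert status["num_nodes"] == expected_nodes, "unexpected node count"
    assert len(nodes) == expected_nodes, "missing node entries"
    masters = [k for k, v in nodes.items() if v["dbrm_mode"] == "master"]
    assert len(masters) == 1, f"expected one DBRM master, got {masters}"

# Abbreviated sample based on the output above:
sample = {
    "timestamp": "2020-12-15 00:40:34.353574",
    "192.0.2.1": {"dbrm_mode": "master", "cluster_mode": "readwrite"},
    "192.0.2.2": {"dbrm_mode": "slave", "cluster_mode": "readonly"},
    "192.0.2.3": {"dbrm_mode": "slave", "cluster_mode": "readonly"},
    "num_nodes": 3,
}
check_cluster(sample, expected_nodes=3)
```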
Configure Linux Security Modules (LSM)
The specific steps to configure the security module depend on the operating system.
Configure SELinux (CentOS, RHEL)
Configure SELinux for Enterprise ColumnStore:
To configure SELinux, you have to install the packages required for audit2allow.
On CentOS 7 and RHEL 7, install the following:
$ sudo yum install policycoreutils policycoreutils-python
On RHEL 8, install the following:
$ sudo yum install policycoreutils python3-policycoreutils policycoreutils-python-utils
Allow the system to run under load for a while to generate SELinux audit events.
After the system has taken some load, generate an SELinux policy from the audit events using audit2allow:
$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
If no audit events were found, this will print the following:
$ sudo grep mysqld /var/log/audit/audit.log | audit2allow -M mariadb_local
Nothing to do
If audit events were found, the new SELinux policy can be loaded using semodule:
$ sudo semodule -i mariadb_local.pp
Set SELinux to enforcing mode:
$ sudo setenforce enforcing
Make enforcing mode persist across reboots by setting SELINUX=enforcing in /etc/selinux/config.
For example, the file will usually look like this after the change:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
Confirm that SELinux is in enforcing mode:
$ sudo getenforce
Enforcing
Configure AppArmor (Ubuntu)
For information on how to create a profile, see How to create an AppArmor Profile on Ubuntu.com.
Configure Firewalls
Configure firewalls for Enterprise ColumnStore.
The specific steps to configure the firewall service depend on the platform.
Configure firewalld (CentOS, RHEL)
Configure firewalld for Enterprise ColumnStore on CentOS and RHEL:
Check if the firewalld service is running:
$ sudo systemctl status firewalld
If the firewalld service was stopped to perform the installation, start it now:
$ sudo systemctl start firewalld
Open up the relevant ports using firewall-cmd. For example, if your cluster nodes are in the 192.0.2.0/24 subnet:
$ sudo firewall-cmd --permanent --add-rich-rule=' rule family="ipv4" source address="192.0.2.0/24" destination address="192.0.2.0/24" port port="3306" protocol="tcp" accept'
$ sudo firewall-cmd --permanent --add-rich-rule=' rule family="ipv4" source address="192.0.2.0/24" destination address="192.0.2.0/24" port port="8600-8630" protocol="tcp" accept'
$ sudo firewall-cmd --permanent --add-rich-rule=' rule family="ipv4" source address="192.0.2.0/24" destination address="192.0.2.0/24" port port="8640" protocol="tcp" accept'
$ sudo firewall-cmd --permanent --add-rich-rule=' rule family="ipv4" source address="192.0.2.0/24" destination address="192.0.2.0/24" port port="8700" protocol="tcp" accept'
$ sudo firewall-cmd --permanent --add-rich-rule=' rule family="ipv4" source address="192.0.2.0/24" destination address="192.0.2.0/24" port port="8800" protocol="tcp" accept'
Reload the runtime configuration:
$ sudo firewall-cmd --reload
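The same five port rules (3306 for MariaDB; 8600-8630, 8640, 8700, and 8800 for ColumnStore and CMAPI) are applied on every node, so they can be generated. A minimal sketch that emits the rich rules used above; the subnet is the example value:

```python
# Sketch: generate the firewalld rich rules for the ColumnStore port list.
# The subnet below is the documentation's example value; substitute your own.
PORTS = ["3306", "8600-8630", "8640", "8700", "8800"]

def rich_rules(subnet: str) -> list:
    """Return one firewalld rich-rule string per required port range."""
    return [
        f'rule family="ipv4" source address="{subnet}" '
        f'destination address="{subnet}" '
        f'port port="{port}" protocol="tcp" accept'
        for port in PORTS
    ]

for rule in rich_rules("192.0.2.0/24"):
    print(f"sudo firewall-cmd --permanent --add-rich-rule='{rule}'")
```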
Configure UFW (Ubuntu)
Configure UFW for Enterprise ColumnStore on Ubuntu:
Check if the UFW service is running:
$ sudo ufw status verbose
If the UFW service was stopped to perform the installation, start it now:
$ sudo ufw enable
Open up the relevant ports using ufw. For example, if your cluster nodes are in the 192.0.2.0/24 subnet in the range 192.0.2.1 - 192.0.2.3:
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 3306 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8600:8630 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8640 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8700 proto tcp
$ sudo ufw allow from 192.0.2.0/24 to 192.0.2.3 port 8800 proto tcp
Reload the runtime configuration:
$ sudo ufw reload
Next Step
Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology":
This page was step 4 of 9.