Step 2: Configure Shared Local Storage

Overview

This page details step 2 of the 9-step procedure "Deploy ColumnStore Shared Local Storage Topology".

This step configures shared local storage on systems hosting Enterprise ColumnStore 5.

Interactive commands are detailed below. Alternatively, the described operations can be performed using automation.

Directories for Shared Local Storage

In a ColumnStore Shared Local Storage topology, MariaDB Enterprise ColumnStore requires the DB Root directories to be located on shared local storage.

The DB Root directories are at the following path:

  • /var/lib/columnstore/dataN

The N in dataN is an integer that ranges from 1 to the number of nodes in the deployment. For example, a 3-node Enterprise ColumnStore deployment uses the following directories:

  • /var/lib/columnstore/data1

  • /var/lib/columnstore/data2

  • /var/lib/columnstore/data3

The DB Root directories must be mounted on every ColumnStore node.
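
Once the shared storage has been configured and mounted (as described in the sections below), the mounts can be verified on each node. For example, a quick check with df, assuming a 3-node deployment:

$ df -h /var/lib/columnstore/data1 /var/lib/columnstore/data2 /var/lib/columnstore/data3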

Choose a Shared Local Storage Solution

Select a Shared Local Storage solution for the DB Root directories from the options described in the sections below.

For additional information, see "Shared Local Storage Options".

Configure EBS Multi-Attach

EBS is a high-performance block-storage service for AWS (Amazon Web Services). EBS Multi-Attach allows an EBS volume to be attached to multiple instances in AWS. Only clustered file systems, such as GFS2, are supported.

Consult the vendor documentation for details on how to configure EBS Multi-Attach.
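
For illustration only, a sketch of attaching a Multi-Attach-enabled volume to two instances with the AWS CLI. Multi-Attach requires a Provisioned IOPS (io1 or io2) volume, and the volume and instance IDs shown here are placeholders:

$ aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
      --instance-id i-0aaaaaaaaaaaaaaaa --device /dev/sdf
$ aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
      --instance-id i-0bbbbbbbbbbbbbbbb --device /dev/sdf

Once the volume is attached to every node, a clustered file system such as GFS2 must still be created and mounted on each node; that configuration is beyond the scope of this sketch.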

Configure Elastic File System (EFS)

EFS is a scalable, elastic, cloud-native NFS file system for AWS (Amazon Web Services).

Consult the vendor documentation for details on how to configure EFS.
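
For illustration, an EFS file system is mounted over NFS once a mount target is reachable from the node. A sketch, where the file system ID and region are placeholders:

$ sudo mount -t nfs4 -o nfsvers=4.1 \
      fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /var/lib/columnstore/data1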

Configure Filestore

Filestore is high-performance, fully managed storage for GCP (Google Cloud Platform).

Consult the vendor documentation for details on how to configure Filestore.
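
For illustration, a Filestore instance is mounted as a standard NFS export. A sketch, where the instance IP address and share name are placeholders:

$ sudo mount -t nfs 10.0.0.2:/share1 /var/lib/columnstore/data1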

Configure GlusterFS

GlusterFS is a distributed file system.

GlusterFS is a shared local storage option, but it is not one of the recommended options.

For more information, see "Recommended Storage Options".

Install GlusterFS

On each Enterprise ColumnStore node, install GlusterFS.

Install on CentOS / RHEL 8 (YUM):

$ sudo yum install --enablerepo=PowerTools glusterfs-server

Install on CentOS / RHEL 7 (YUM):

$ sudo yum install centos-release-gluster
$ sudo yum install glusterfs-server

Install on Debian (APT):

$ wget -O - https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub | sudo apt-key add -
$ DEBID=$(grep 'VERSION_ID=' /etc/os-release | cut -d '=' -f 2 | tr -d '"')
$ DEBVER=$(grep 'VERSION=' /etc/os-release | grep -Eo '[a-z]+')
$ DEBARCH=$(dpkg --print-architecture)
$ echo "deb https://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/${DEBID}/${DEBARCH}/apt ${DEBVER} main" | sudo tee /etc/apt/sources.list.d/gluster.list
$ sudo apt update
$ sudo apt install glusterfs-server

Install on Ubuntu (APT):

$ sudo apt update
$ sudo apt install glusterfs-server

Start the GlusterFS Daemon

Start the GlusterFS daemon:

$ sudo systemctl start glusterd
$ sudo systemctl enable glusterd
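
The enable command ensures that glusterd starts automatically at boot. To confirm that the daemon is running:

$ sudo systemctl status glusterd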

Probe the GlusterFS Peers

Before you can create a volume with GlusterFS, each node must be probed from a peer node. The examples below use a 3-node deployment with the host names mcs1 (the primary node), mcs2, and mcs3.

  1. On the primary node, probe all of the other cluster nodes:

    $ sudo gluster peer probe mcs2
    
    $ sudo gluster peer probe mcs3
    
  2. On one of the replica nodes, probe the primary node to confirm that it is connected:

    $ sudo gluster peer probe mcs1
    
    peer probe: Host mcs1 port 24007 already in peer list
    
  3. On the primary node, check the peer status:

    $ sudo gluster peer status
    
    Number of Peers: 2
    
    Hostname: mcs2
    Uuid: 3c8a5c79-22de-45df-9034-8ae624b7b23e
    State: Peer in Cluster (Connected)
    
    Hostname: mcs3
    Uuid: 862af7b2-bb5e-4b1c-8311-630fa32ed451
    State: Peer in Cluster (Connected)
    

Configure and Mount GlusterFS Volumes

Create the GlusterFS volumes for MariaDB Enterprise ColumnStore. Each volume must have the same number of replicas as the number of Enterprise ColumnStore nodes.

  1. On each Enterprise ColumnStore node, create the directories for each brick in the /brick directory:

    $ sudo mkdir -p /brick/data1
    
    $ sudo mkdir -p /brick/data2
    
    $ sudo mkdir -p /brick/data3
    
  2. On the primary node, create the GlusterFS volumes. The force option is required here because the bricks are located on the root partition:

    $ sudo gluster volume create data1 \
          replica 3 \
          mcs1:/brick/data1 \
          mcs2:/brick/data1 \
          mcs3:/brick/data1 \
          force
    
    $ sudo gluster volume create data2 \
          replica 3 \
          mcs1:/brick/data2 \
          mcs2:/brick/data2 \
          mcs3:/brick/data2 \
          force
    
    $ sudo gluster volume create data3 \
          replica 3 \
          mcs1:/brick/data3 \
          mcs2:/brick/data3 \
          mcs3:/brick/data3 \
          force
    
  3. On the primary node, start the volumes:

    $ sudo gluster volume start data1
    
    $ sudo gluster volume start data2
    
    $ sudo gluster volume start data3
    
  4. On each Enterprise ColumnStore node, create mount points for the volumes:

    $ sudo mkdir -p /var/lib/columnstore/data1
    
    $ sudo mkdir -p /var/lib/columnstore/data2
    
    $ sudo mkdir -p /var/lib/columnstore/data3
    
  5. On each Enterprise ColumnStore node, add the mount points to /etc/fstab:

    127.0.0.1:data1 /var/lib/columnstore/data1 glusterfs defaults,_netdev 0 0
    127.0.0.1:data2 /var/lib/columnstore/data2 glusterfs defaults,_netdev 0 0
    127.0.0.1:data3 /var/lib/columnstore/data3 glusterfs defaults,_netdev 0 0
    
  6. On each Enterprise ColumnStore node, mount the volumes:

    $ sudo mount -a
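
After mounting, you can confirm on each node that the volumes are served by GlusterFS and are healthy. For example:

$ findmnt -t fuse.glusterfs
$ sudo gluster volume status data1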
    

Configure Network File System (NFS)

NFS is a distributed file system that is available in most Linux distributions. If NFS is used for an Enterprise ColumnStore deployment, the storage must be mounted with the sync option to ensure that each node flushes its changes immediately.

Consult the documentation for your NFS implementation for details on how to configure NFS.
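
For illustration, an /etc/fstab entry that mounts an NFS export with the required sync option might look like the following sketch, where the server name nfs1 and the export path are placeholders:

nfs1:/exports/data1 /var/lib/columnstore/data1 nfs defaults,sync,_netdev 0 0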

Next Step

Navigation in the procedure "Deploy ColumnStore Shared Local Storage Topology":

  • This page was step 2 of 9.