Backup and Restore Overview

Overview

MariaDB Enterprise ColumnStore supports backup and restore.

System of Record

Before you determine a backup strategy for your Enterprise ColumnStore deployment, it is a good idea to determine the system of record for your Enterprise ColumnStore data.

A system of record is the authoritative data source for a given piece of information. Organizations often store duplicate information in several systems, but only a single system can be the authoritative data source.

Enterprise ColumnStore is designed to handle analytical processing for OLAP, data warehousing, DSS, and hybrid workloads on very large data sets. Analytical processing does not generally happen on the system of record. Instead, analytical processing generally occurs on a specialized database that is loaded with data from the separate system of record. Additionally, very large data sets can be difficult to back up. Therefore, it may be beneficial to only backup the system of record.

If Enterprise ColumnStore is not acting as the system of record for your data, you should determine how the system of record affects your backup plan (a reload sketch using cpimport follows this list):

  • If your system of record is another database server, you should ensure that the other database server is properly backed up and that your organization has procedures to reload Enterprise ColumnStore from the other database server.

  • If your system of record is a set of data files, you should ensure that the set of data files is properly backed up and that your organization has procedures to reload Enterprise ColumnStore from the set of data files.
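If you need to reload Enterprise ColumnStore from the system of record, the load is typically performed with cpimport. The following is a minimal sketch, assuming the system of record has been exported to a comma-delimited file; the database, table, and file names are hypothetical:

$ cpimport inventory_db products /data/exports/products.csv -s ','

Adjust the delimiter and path to match the export format produced by your system of record.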

Full Backup and Restore

MariaDB Enterprise ColumnStore supports full backup and restore for all storage types. A full backup includes:

  • Enterprise ColumnStore's data and metadata

With S3: an S3 snapshot of the S3 bucket and a file system snapshot or copy of the Storage Manager directory. Without S3: a file system snapshot or copy of the DB Root directories.

  • The MariaDB data directory from the primary node

To see the procedure to perform a full backup and restore, choose the storage type:

  • Enterprise ColumnStore with Object Storage (S3-compatible object storage and the Storage Manager directory); diagram: columnstore-topology-s3

  • Enterprise ColumnStore with Shared Local Storage (DB Root directories); diagram: columnstore-topology

Backup & Restore

MariaDB ColumnStore backup and restore manage distributed data using snapshots or tools such as mariadb-backup; restore procedures bring the cluster back in sync through cpimport reloads or file system recovery.

Backup and Restore with Object Storage

Overview

MariaDB Enterprise ColumnStore supports backup and restore. If Enterprise ColumnStore uses S3-compatible object storage for data and shared local storage for the Storage Manager directory, the S3 bucket, the Storage Manager directory, and the MariaDB data directory must be backed up separately.

Recovery Planning

MariaDB Enterprise ColumnStore supports multiple storage options.

This page discusses how to back up and restore Enterprise ColumnStore when it uses S3-compatible object storage for data and shared local storage (such as NFS) for the Storage Manager directory.

Any file can become corrupt due to hardware issues, crashes, power loss, and other reasons. If the Enterprise ColumnStore data or metadata become corrupt, Enterprise ColumnStore could become unusable, resulting in data loss.

If Enterprise ColumnStore is your system of record, it should be backed up regularly.

If Enterprise ColumnStore uses S3-compatible object storage for data and shared local storage for the Storage Manager directory, the following items must be backed up:

  • The MariaDB data directory must be backed up using MariaDB Backup (mariadb-backup).

  • The S3 bucket must be backed up using the vendor's snapshot procedure.

  • The Storage Manager directory must be backed up.

See the instructions below for more details.

Backup

Use the following process to take a backup:

  1. Determine which node is the primary server using curl to send the status command to the CMAPI Server:

The output will show "dbrm_mode": "master" for the primary server:

  2. Connect to the primary server using MariaDB Client as a user account that has privileges to lock the database:

  3. Lock the database with the FLUSH TABLES WITH READ LOCK statement:

Ensure that the client remains connected to the primary server, so that the lock is held for the remaining steps.

  4. Make a copy or snapshot of the Storage Manager directory. By default, it is located at /var/lib/columnstore/storagemanager.

For example, to make a copy of the directory with rsync:

  5. Use MariaDB Backup to back up the MariaDB data directory:

  6. Use MariaDB Backup to prepare the backup:

  7. Create a snapshot of the S3-compatible storage. Consult the storage vendor's manual for details on how to do this (a copy-based alternative is sketched after this list).

  8. Ensure that all previous operations are complete.

  9. In the original client connection to the primary server, unlock the database with the UNLOCK TABLES statement:
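Regarding step 7: if your storage vendor does not offer bucket snapshots, one common alternative is to copy the ColumnStore bucket to a dedicated backup bucket while the database is still locked. A minimal sketch, assuming the AWS CLI is installed; the bucket names are placeholders:

$ aws s3 sync s3://your_columnstore_bucket_name s3://your_columnstore_backup_bucket_name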

Restore

Use the following process to restore a backup:

  1. Deploy Enterprise ColumnStore, so that you can restore the backup to an empty deployment.

  2. Ensure that all services are stopped on each node:

  3. Restore the backup of the Storage Manager directory. By default, it is located at /var/lib/columnstore/storagemanager.

For example, to restore the backup with rsync:

  4. Use MariaDB Backup to restore the backup of the MariaDB data directory:

  5. Restore the snapshot of your S3-compatible storage to the new S3 bucket that you plan to use. Consult the storage vendor's manual for details on how to do this.

  6. Update storagemanager.cnf to configure Enterprise ColumnStore to use the S3 bucket. By default, it is located at /etc/columnstore/storagemanager.cnf.

For example:

  • The default local cache size is 2 GB.

  • The default local cache path is /var/lib/columnstore/storagemanager/cache.

  • Ensure that the local cache path has sufficient storage space to store the local cache.

  • The bucket option must be set to the name of the bucket that you created from your snapshot in the previous step.

  7. Start the services on each node:

To use an IAM role, you must also uncomment and set iam_role_name, sts_region, and sts_endpoint.
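For example, a minimal sketch of the [S3] section when authenticating with an IAM role instead of static keys; the role, region, and endpoint values are placeholders:

[S3]
bucket = your_columnstore_bucket_name
endpoint = your_s3_endpoint
# aws_access_key_id and aws_secret_access_key can be left unset when an IAM role is used
iam_role_name = your_iam_role
sts_region = your_sts_region
sts_endpoint = your_sts_endpoint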

Diagram: Enterprise ColumnStore Backup with S3 Flowchart
$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
   --header 'Content-Type:application/json' \
   --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
   | jq .
{
  "timestamp": "2020-12-15 00:40:34.353574",
  "192.0.2.1": {
    "timestamp": "2020-12-15 00:40:34.362374",
    "uptime": 11467,
    "dbrm_mode": "master",
    "cluster_mode": "readwrite",
    "dbroots": [
      "1"
    ],
    "module_id": 1,
    "services": [
      {
        "name": "workernode",
        "pid": 19202
      },
      {
        "name": "controllernode",
        "pid": 19232
      },
      {
        "name": "PrimProc",
        "pid": 19254
      },
      {
        "name": "ExeMgr",
        "pid": 19292
      },
      {
        "name": "WriteEngine",
        "pid": 19316
      },
      {
        "name": "DMLProc",
        "pid": 19332
      },
      {
        "name": "DDLProc",
        "pid": 19366
      }
    ]
  }
}
$ mariadb --host=192.0.2.1 \
   --user=root \
   --password
FLUSH TABLES WITH READ LOCK;
$ sudo mkdir -p /backups/columnstore/202101291600/
$ sudo rsync -av /var/lib/columnstore/storagemanager /backups/columnstore/202101291600/
$ sudo mkdir -p /backups/mariadb/202101291600/
$ sudo mariadb-backup --backup \
   --target-dir=/backups/mariadb/202101291600/ \
   --user=mariadb-backup \
   --password=mbu_passwd
$ sudo mariadb-backup --prepare \
   --target-dir=/backups/mariadb/202101291600/
UNLOCK TABLES;
$ sudo systemctl stop mariadb-columnstore-cmapi
$ sudo systemctl stop mariadb-columnstore
$ sudo systemctl stop mariadb
$ sudo rsync -av /backups/columnstore/202101291600/storagemanager/ /var/lib/columnstore/storagemanager/
$ sudo chown -R mysql:mysql /var/lib/columnstore/storagemanager
$ sudo mariadb-backup --copy-back \
   --target-dir=/backups/mariadb/202101291600/
$ sudo chown -R mysql:mysql /var/lib/mysql
[ObjectStorage]
…
service = S3
…
[S3]
bucket = your_columnstore_bucket_name
endpoint = your_s3_endpoint
aws_access_key_id = your_s3_access_key_id
aws_secret_access_key = your_s3_secret_key
# iam_role_name = your_iam_role
# sts_region = your_sts_region
# sts_endpoint = your_sts_endpoint

[Cache]
cache_size = your_local_cache_size
path = your_local_cache_path
$ sudo systemctl start mariadb
$ sudo systemctl start mariadb-columnstore-cmapi

Backup and Restore with Shared Local Storage

Overview

MariaDB Enterprise ColumnStore supports backup and restore. If Enterprise ColumnStore uses shared local storage for the DB Root directories, the DB Root directories and the MariaDB data directory must be backed up separately.

Recovery Planning

MariaDB Enterprise ColumnStore supports multiple storage options.

This page discusses how to back up and restore Enterprise ColumnStore when it uses shared local storage (such as NFS) for the DB Root directories.

Any file can become corrupt due to hardware issues, crashes, power loss, and other reasons. If the Enterprise ColumnStore data or metadata become corrupt, Enterprise ColumnStore could become unusable, resulting in data loss.

If Enterprise ColumnStore is your system of record, it should be backed up regularly.

If Enterprise ColumnStore uses shared local storage for the DB Root directories, the following items must be backed up:

  • The MariaDB data directory must be backed up using MariaDB Backup (mariadb-backup).

  • The Storage Manager directory must be backed up.

  • Each DB Root directory must be backed up.

See the instructions below for more details.

Backup

Use the following process to take a backup:

  1. Determine which node is the primary server using curl to send the status command to the CMAPI Server:

The output will show dbrm_mode: master for the primary server:

  2. Connect to the primary server using MariaDB Client as a user account that has privileges to lock the database:

  3. Lock the database with the FLUSH TABLES WITH READ LOCK statement:

Ensure that the client remains connected to the primary server, so that the lock is held for the remaining steps.

  4. Make a copy or snapshot of the Storage Manager directory. By default, it is located at /var/lib/columnstore/storagemanager.

For example, to make a copy of the directory with rsync:

  5. Make a copy or snapshot of the DB Root directories. By default, they are located at /var/lib/columnstore/dataN, where the N in dataN represents a range of integers that starts at 1 and stops at the number of nodes in the deployment.

For example, to make a copy of the directories with rsync in a 3-node deployment:

  6. Use MariaDB Backup to back up the MariaDB data directory:

  7. Use MariaDB Backup to prepare the backup:

  8. Ensure that all previous operations are complete.

  9. In the original client connection to the primary server, unlock the database with the UNLOCK TABLES statement:

Restore

Use the following process to restore a backup:

  1. Deploy Enterprise ColumnStore, so that you can restore the backup to an empty deployment.

  2. Ensure that all services are stopped on each node:

  3. Restore the backup of the Storage Manager directory. By default, it is located at /var/lib/columnstore/storagemanager.

For example, to restore the backup with rsync:

  4. Restore the backup of the DB Root directories. By default, they are located at /var/lib/columnstore/dataN, where the N in dataN represents a range of integers that starts at 1 and stops at the number of nodes in the deployment.

For example, to restore the backup with rsync in a 3-node deployment:

  5. Use MariaDB Backup to restore the backup of the MariaDB data directory:

  6. Start the services on each node:

$ curl -k -s https://mcs1:8640/cmapi/0.4.0/cluster/status \
   --header 'Content-Type:application/json' \
   --header 'x-api-key:93816fa66cc2d8c224e62275bd4f248234dd4947b68d4af2b29671dd7d5532dd' \
   | jq .
{
  "timestamp": "2020-12-15 00:40:34.353574",
  "192.0.2.1": {
    "timestamp": "2020-12-15 00:40:34.362374",
    "uptime": 11467,
    "dbrm_mode": "master",
    "cluster_mode": "readwrite",
    "dbroots": [
      "1"
    ],
    "module_id": 1,
    "services": [
      {
        "name": "workernode",
        "pid": 19202
      },
      {
        "name": "controllernode",
        "pid": 19232
      },
      {
        "name": "PrimProc",
        "pid": 19254
      },
      {
        "name": "ExeMgr",
        "pid": 19292
      },
      {
        "name": "WriteEngine",
        "pid": 19316
      },
      {
        "name": "DMLProc",
        "pid": 19332
      },
      {
        "name": "DDLProc",
        "pid": 19366
      }
    ]
  }
}
$ mariadb --host=192.0.2.1 \
   --user=root \
   --password
FLUSH TABLES WITH READ LOCK;
$ sudo mkdir -p /backups/columnstore/202101291600/
$ sudo rsync -av /var/lib/columnstore/storagemanager /backups/columnstore/202101291600/
$ sudo rsync -av /var/lib/columnstore/data1 /backups/columnstore/202101291600/
$ sudo rsync -av /var/lib/columnstore/data2 /backups/columnstore/202101291600/
$ sudo rsync -av /var/lib/columnstore/data3 /backups/columnstore/202101291600/
$ sudo mkdir -p /backups/mariadb/202101291600/
$ sudo mariadb-backup --backup \
   --target-dir=/backups/mariadb/202101291600/ \
   --user=mariadb-backup \
   --password=mbu_passwd
$ sudo mariadb-backup --prepare \
   --target-dir=/backups/mariadb/202101291600/
UNLOCK TABLES;
$ sudo systemctl stop mariadb-columnstore-cmapi
$ sudo systemctl stop mariadb-columnstore
$ sudo systemctl stop mariadb
$ sudo rsync -av /backups/columnstore/202101291600/storagemanager/ /var/lib/columnstore/storagemanager/
$ sudo chown -R mysql:mysql /var/lib/columnstore/storagemanager
$ sudo rsync -av /backups/columnstore/202101291600/data1/ /var/lib/columnstore/data1/
$ sudo rsync -av /backups/columnstore/202101291600/data2/ /var/lib/columnstore/data2/
$ sudo rsync -av /backups/columnstore/202101291600/data3/ /var/lib/columnstore/data3/
$ sudo chown -R mysql:mysql /var/lib/columnstore/data1
$ sudo chown -R mysql:mysql /var/lib/columnstore/data2
$ sudo chown -R mysql:mysql /var/lib/columnstore/data3
$ sudo mariadb-backup --copy-back \
   --target-dir=/backups/mariadb/202101291600/
$ sudo chown -R mysql:mysql /var/lib/mysql
$ sudo systemctl start mariadb
$ sudo systemctl start mariadb-columnstore-cmapi

MCS backup and restore commands

This page documents how to create and restore MariaDB Enterprise ColumnStore backups using the mcs CLI.

The mcs backup and mcs restore commands support the same workflows as the mcs_backup_manager.sh script, including:

  • Full and incremental backups

  • Local/shared storage and S3 storage topologies

  • Optional compression and parallelism

  • Separate DBRM (metadata) backup/restore workflows

The examples in this page assume the mcs command is available on the host and you run the backup/restore operations as root.
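A minimal pre-flight sketch (assuming a standard bash environment) that checks both assumptions before you script any of the commands below:

# Verify that the mcs CLI is on the PATH and that we are running as root
command -v mcs >/dev/null || { echo "mcs CLI not found on this host"; exit 1; }
[ "$(id -u)" -eq 0 ] || { echo "run backup and restore operations as root"; exit 1; }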

Before You Start

Identify Your Storage Topology

On a ColumnStore node, determine which StorageManager service is configured:

Example output:

  • service = LocalStorage

  • service = S3

Use service = LocalStorage when ColumnStore data lives on local/shared storage, and service = S3 when ColumnStore data is stored in object storage.
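If you script your backups, this check can be automated. A minimal sketch, assuming the default configuration path /etc/columnstore/storagemanager.cnf:

# Detect the configured StorageManager service (LocalStorage or S3)
if grep -Eq '^\s*service\s*=\s*S3' /etc/columnstore/storagemanager.cnf; then
  echo "S3 topology: run mcs backup with --storage S3 and --backup-bucket"
else
  echo "LocalStorage topology: run mcs backup"
fi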

Estimate Backup Size

LocalStorage:

S3:

Backups

LocalStorage Topology Backups

Instructions

  1. Run mcs backup as root on each node, starting with the primary node.

  2. Use the same backup location on each node.

List Your Backups

Example output:

Quick Examples

Full backup:

Parallel backup:

Compressed backup:

Incremental backup (auto-select most recent full backup):

Save the backup to a remote host (SCP):

Online Backup Example

When you run a backup, by default the tooling performs polling checks and attempts to obtain a consistent point-in-time backup by:

  • checking for active writes

  • checking for running cpimport jobs

  • issuing write locks

  • saving BRM prior to copying

You can skip these safety mechanisms with:

  • --skip-polls

  • --skip-locks

  • --skip-save-brm

Skipping polls/locks/BRM saving can be useful for certain workflows, but it increases the risk of capturing a partially-written state that complicates restore.
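For example, a sketch of a backup that bypasses these safety mechanisms; only advisable when you have already confirmed that no writes or cpimport jobs are running:

mcs backup --skip-polls --skip-locks --skip-save-brm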

Incremental Backup Example

Before you can run an incremental backup, you need an existing full backup.

When you take an incremental backup, specify the full backup to increment with the --incremental flag.

Incremental backups add ColumnStore deltas to an existing full backup. You can either:

  • specify the full backup folder name explicitly, or

  • use auto_most_recent, which selects the most recent full backup under --backup-location and applies the incremental backup to it

Apply to the most recent full backup:

Apply to a specific full backup folder:

Cron Backup Example

Create a cron job (run as root) that takes periodic backups and appends logs:

Nightly full backup, retaining the last 14 days:

Full backup once a week (Saturday night) with incremental backups on all other nights (keep 21 days):

LocalStorage Backup Flags

The most commonly used options are listed in the Flag Reference later on this page.

S3 Topology Backups

Instructions

  1. Ensure the node has access to your S3 endpoint and credentials.

  2. Run mcs backup with --storage S3 and a backup bucket (--backup-bucket).

  3. Run it as root on each node, starting with the primary node.

If you're using an on-premise S3-compatible solution, you may need --endpoint-url (and sometimes --no-verify-ssl).
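For example, a sketch against a self-hosted MinIO endpoint that uses a self-signed certificate; the bucket name and address are placeholders:

mcs backup --storage S3 --backup-bucket s3://my-cs-backups --endpoint-url https://127.0.0.1:9000 --no-verify-ssl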

Quick Examples

Full backup:

Compressed backup (and skip copying bucket data if you only want local artifacts):

Incremental backup:

On-premise S3 endpoint: the key flags for on-premise buckets are the following:

  • -url: the local address or IP of the S3 provider. For example, MinIO defaults to port 9000, so 127.0.0.1 would be used if MinIO is installed on the same machine that runs ColumnStore.

  • --no-verify-ssl: used when SSL certificates are not configured for the S3 provider/endpoint.

Cron Backup Example

As with LocalStorage, you can schedule mcs backup in cron. Consider including --name-backup to avoid collisions.
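A sketch of such a cron entry, run from root's crontab; the bucket, backup name, retention, and log path are placeholders:

59 23 * * * mcs backup --storage S3 --backup-bucket s3://my-cs-backups -nb s3-nightly-$(date +\%m-\%d-\%Y_\%H\%M) -r 14 >> /root/cs_backup.log 2>&1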

S3 Backup Flags

The most commonly used S3-specific options are listed in the Flag Reference later on this page.

Restore

LocalStorage Topology Restore

Instructions

If the backup was made only on the primary node and the cluster does NOT save the backup to an NFS share, copy the primary node's backup mysql and configs directories to all nodes.

  1. List backups to find the folder name you want.

  2. Restore on each node, starting with the primary node.

When a ColumnStore backup runs, a restore.job file is created containing a command that can be run on each node to restore the backup.

List Your Backups to Restore

Quick Examples

Standard restore:

Compressed backup restore:

LocalStorage Restore Flags

Common options are listed in the Flag Reference later on this page.

S3 Topology Restore

Instructions

  1. Use the same backup bucket that contains the backup.

  2. Restore on each node, starting with the primary node.

When a ColumnStore backup runs, a restoreS3.job file is created containing a command that can be run on each node to restore the backup.

Quick Examples

Standard Restore:

On-premise S3 Endpoint:

Restoring to a New Bucket:

Key flags for restoring to a new bucket:

  • -nb: the name of the new bucket to copy the backup into and to configure ColumnStore to use after the restore

  • -nr: the region of the new bucket to configure ColumnStore to use after the restore

  • -nk: the access key of the new bucket to configure ColumnStore to use after the restore

  • -ns: the secret of the new bucket to configure ColumnStore to use after the restore

S3 Restore Flags

Common options are listed in the Flag Reference later on this page.

DBRM Backups

Both S3 and LocalStorage use the same commands for DBRM backups.

DBRM backups are intended for backing up internal ColumnStore metadata only.

Instructions

Run mcs dbrm_backup as root with the appropriate flags, ONLY on the primary node.

List Your dbrm Backups

Quick Examples

Standard dbrm_backup:

dbrm_backup before upgrade:

dbrm_backup Flags

Common options are listed in the Flag Reference later on this page.

DBRM Restore

Instructions

Both S3 and LocalStorage use the same commands for dbrm restore.

DBRM backups are intended for backing up internal ColumnStore metadata only.

  1. List available DBRM backups.

  2. Restore from the selected folder.

List Your dbrm Restore Options

Quick Examples

Standard dbrm_restore:

dbrm_restore Flags

Common options are listed in the Flag Reference later on this page.

Flag Reference

The flags below are shared by the mcs backup, mcs restore, mcs dbrm_backup, and mcs dbrm_restore commands described above. Short forms are shown in parentheses.

  • --backup-location (-bl): Defines the path where backups are saved and later loaded from. A date-based folder is created under this path per backup run; the folder name can be changed with -nb. Typical default in tooling: /tmp/backups/ (for the dbrm commands: /tmp/dbrm_backups).

  • --backup-destination (-bd): Where backups are stored relative to the node running the command. Two possible values: Local or Remote. Remote requires -scp to be defined; otherwise the -bl path is relative to the script.

  • --secure-copy-protocol (-scp): If --backup-destination is Remote, defines the SSH connection to rsync to. Format: user@host. Example: -bd Remote -scp root@192.168.0.1

  • --load (-l): Backup folder name to restore. Required for restore.

  • --incremental (-i): Creates an incremental backup based on an existing full backup. Value can be <folder> or auto_most_recent.

  • --parallel (-P): Enables parallel rsync and defines the number of parallel rsync threads to run. If combined with --compress, this flag defines the number of compression threads. Default is 4. Example: --parallel 8

  • --compress (-c): Compress the backup in the given format. Supported: pigz

  • --quiet (-q): Silence verbose copy command output. Useful for cron jobs.

  • --name-backup (-nb): Define the name of the backup; default: date +%m-%d-%Y. Example: -nb before-upgrade-backup

  • --retention-days (-r): Retain backups (including DBRM backups) created within the last X days; older backups are deleted. 0 means keep all.

  • --apply-retention-only (-aro): Only apply the retention policy; do not run a backup. Works with --retention-days.

  • --list (-li): List backups in the configured location (for the dbrm commands, lists available DBRM backups).

  • --skip-mariadb-backup (-smdb): Skip restoring MariaDB server data. Use when restoring ColumnStore only.

  • --skip-storage-manager (-ssm): Skip backing up/restoring the storagemanager directory. Support-guided workflows.

  • --skip-save-brm (-sbrm): Skip saving BRM prior to DBRM backup. Can produce a dirtier backup.

  • --skip-locks (-slock): Skip issuing read locks. Support-guided workflows.

  • --skip-polls (-spoll): Skip polling to confirm locks are released. Support-guided workflows.

  • --storage S3 (-s): Use the S3 storage topology. Must be set to S3 for object storage workflows.

  • --backup-bucket (-bb): Bucket where backups are stored and restored from. Example: s3://my-cs-backups

  • --endpoint-url (-url): Custom S3 endpoint URL. For on-premise S3 vendors (MinIO, etc.).

  • --no-verify-ssl (-nv-ssl): Skip verifying SSL certificates. Use with caution; primarily for test or non-standard TLS setups.

  • --new-bucket (-nb): New bucket to restore data into. Use when restoring into a different bucket. S3 only.

  • --new-region (-nr): Region for --new-bucket. S3 only.

  • --new-key (-nk): Access key for --new-bucket. S3 only.

  • --new-secret (-ns): Secret for --new-bucket. S3 only.

  • --continue (-cont): Allow deleting data in --new-bucket during the restore. S3 only; dangerous if the bucket contains important data.

  • --mode (-m): Run mode for dbrm_backup: once or loop.

  • --interval (-i): Sleep interval (minutes) when --mode loop. Only used in loop mode.

  • --no-start (-ns): Do not attempt ColumnStore startup after a DBRM restore. Useful for manual recovery steps.

  • --skip-dbrm-backup (-sdbk): Skip taking a safety backup before restoring. Use with caution.

grep "service" /etc/columnstore/storagemanager.cnf
du -sh /var/lib/columnstore/
du -sh /var/lib/columnstore/storagemanager/ ;
aws s3 ls s3://<bucketname> --recursive | grep -v -E "(Bucket: |Prefix: |LastWriteTime|^$|--)" | awk 'BEGIN {total=0}{total+=$3}END{print total/1024/1024/1024" GB"}'
mcs backup --backup-location /tmp/backups/ --list
Existing Backups
--------------------------------------------------------------------------
Options                                       Last-Updated  Extent Map      EM-Size      Journal-Size VBBM-Size  VSS-Size   Days Old
12-03-2024                                    Dec 3 21:05   BRM_saves_em    77228        0            1884       2792       12
11-21-2024                                    Nov 21 21:05   BRM_saves_em    77228        0            1884       2792       12
--------------------------------------------------------------------------
Restore with mcs restore --path /tmp/backups/ --directory <backup_folder_from_above>
mcs backup
mcs backup --parallel 8
mcs backup --parallel 8 --compress pigz
mcs backup --incremental auto_most_recent
mcs backup --backup-destination Remote --scp root@192.168.0.1
mcs backup -P 8
# Run Full Backup
mcs backup -P 8
# Backup Complete @ /tmp/backups/12-03-2025
# Run the incremental backup - updating the full backup 12-03-2025
mcs backup --incremental 12-03-2025
mcs backup --incremental auto_most_recent
mcs backup --incremental <full-backup-folder>
sudo crontab -e
*/60 */24 * * *  mcs backup --parallel 4 -r 14 >> /tmp/cs_backup.log  2>&1
59 23 * * 6 mcs backup -P 8 -r 21 >> /root/cs_backup.log 2>&1
59 23 * * 0-5 mcs backup --incremental auto_most_recent -r 21 >> /root/cs_backup.log 2>&1
59 23 * * 7 mcs backup --incremental auto_most_recent -r 21 >> /root/cs_backup.log 2>&1
mcs backup --storage S3 --backup-bucket s3://my-cs-backups
mcs backup --storage S3 --backup-bucket s3://my-cs-backups --compress pigz --quiet --skip-bucket-data
mcs backup --storage S3 --backup-bucket gs://my-cs-backups --incremental 12-18-2023
mcs backup --storage S3 --backup-bucket s3://my-on-premise-bucket --endpoint-url http://127.0.0.1:8000
scp /tmp/backups/12-03-2024/mysql root@pm2:/tmp/backups/12-03-2024/mysql
scp /tmp/backups/12-03-2024/configs root@pm2:/tmp/backups/12-03-2024/configs

scp /tmp/backups/12-03-2024/mysql root@pm3:/tmp/backups/12-03-2024/mysql
scp /tmp/backups/12-03-2024/configs root@pm3:/tmp/backups/12-03-2024/configs
mcs restore --backup-location /tmp/backups/ --list
# on primary
mcs cluster stop

# on every node
systemctl stop mariadb
systemctl stop mariadb-columnstore-cmapi
mcs restore -l 12-03-2024 -bl /mnt/custom/path/
mcs restore -l 11-21-2024 -bl /mnt/custom/path/ -c pigz
mcs restore -s S3 -bb s3://my-cs-backups  -l 12-03-2025
mcs restore -s S3 -bb gs://on-premise-bucket -l 12-03-2025 -url http://127.0.0.1:8000
mcs restore -s S3 -bb s3://my-cs-backups -l 11-21-2022 -nb s3://new-data-bucket -nr us-east-1 -nk AKIAxxxxxxx3FHCADF -ns GGGuxxxxxxxxxxnqa72csk5 -ha
# on primary
mcs cluster stop

# on every node
systemctl stop mariadb
systemctl stop mariadb-columnstore-cmapi
mcs restore -s S3 -bb s3://my-cs-backups -l 11-21-2024
mcs restore -s S3 -bb gs://on-premise-bucket -l 12-03-2021 -url http://127.0.0.1:9000
mcs restore -s S3 -bb s3://my-cs-backups -l 11-21-2022 -nb s3://new-data-bucket -nr us-east-1 -nk AKIAxxxxxxx3FHCADF -ns GGGuxxxxxxxxxxnqa72csk5
mcs dbrm_backup --list
mcs dbrm_backup --mode loop --interval 90 --retention-days 7 --backup-location /mnt/dbrm_backups
mcs dbrm_backup --mode once --retention-days 7 --backup-location /mnt/dbrm_backups -nb my-one-off-backup
mcs dbrm_backup
mcs dbrm_backup -nb before_upgrade_11.21.2024_dbrm_backup
mcs dbrm_restore --list
./mcs_backup_manager.sh dbrm_restore --backup-location /tmp/dbrm_backups --load dbrm_backup_20241203_172842
./mcs_backup_manager.sh dbrm_restore --backup-location /tmp/dbrm_backups --load dbrm_backup_20241203_172842 --no-start
mcs dbrm_restore --backup-location /tmp/dbrm_backups --load <dbrm_backup_folder>

This page is: Copyright © 2025 MariaDB. All rights reserved.