MCS backup and restore commands
This page documents how to create and restore MariaDB Enterprise ColumnStore backups using the mcs CLI.
The mcs backup and mcs restore commands support the same workflows as the mcs_backup_manager.sh script, including:
Full and incremental backups
Local/shared storage and S3 storage topologies
Optional compression and parallelism
Separate DBRM (metadata) backup/restore workflows
Before You Start
Identify Your Storage Topology
On a ColumnStore node, determine which StorageManager service is configured:
```shell
grep "service" /etc/columnstore/storagemanager.cnf
```
Example output (one of):
```
service = LocalStorage
service = S3
```
Estimate Backup Size
LocalStorage:
S3:
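One hedged way to estimate sizes, assuming the default ColumnStore data path and, for S3, that the AWS CLI is installed and configured (the bucket name is a placeholder):

```shell
# LocalStorage: total size of the ColumnStore data directory (default path assumed)
du -sh /var/lib/columnstore

# S3: total size of the objects in the ColumnStore bucket
aws s3 ls s3://your-columnstore-bucket --recursive --summarize --human-readable
```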
Backups
LocalStorage Topology Backups
Instructions
Run `mcs backup` as root on each node, starting with the primary node. Use the same backup location on each node.
List Your Backups
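A minimal sketch using the `--list` flag documented in the flag reference below (backup location assumed to be the default):

```shell
mcs backup --list -bl /tmp/backups/
```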
Quick Examples
Full backup:
Parallel backup:
Compressed backup:
Incremental backup (auto-select most recent full backup):
Save the backup to a remote host (SCP):
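The examples above might look like the following (a sketch built from the flags documented below; the paths and host address are placeholders):

```shell
# Full backup (default backup location assumed)
mcs backup -bl /tmp/backups/

# Parallel backup with 8 rsync threads
mcs backup -bl /tmp/backups/ --parallel 8

# Compressed backup (pigz)
mcs backup -bl /tmp/backups/ --compress pigz

# Incremental backup, auto-selecting the most recent full backup
mcs backup -bl /tmp/backups/ --incremental auto_most_recent

# Save the backup to a remote host over SSH
mcs backup -bl /backups/ --backup-destination Remote -scp root@192.168.0.1
```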
Online Backup Example
When you run a backup, by default the tooling performs polling checks and attempts to obtain a consistent point-in-time backup by:
checking for active writes
checking for running cpimport jobs
issuing write locks
saving BRM prior to copying
You can skip these safety mechanisms with:
--skip-polls
--skip-locks
--skip-save-brm
Skipping polls/locks/BRM saving can be useful for certain workflows, but it increases the risk of capturing a partially-written state that complicates restore.
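For example, to trade safety for speed by skipping all three checks (a sketch; use only when you accept the risk described above):

```shell
mcs backup -bl /tmp/backups/ --skip-polls --skip-locks --skip-save-brm
```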
Incremental Backup Example
Before you can run an incremental backup, you need an existing full backup.
When taking an incremental backup, specify the name of the full backup to increment via the flag --incremental <folder>.
Incremental backups add ColumnStore deltas to an existing full backup. You can either:
specify the full backup folder name explicitly, or
use auto_most_recent, which selects the most recent full backup directory under --backup-location and applies the incremental backup to it
Apply to the most recent full backup:
Apply to a specific full backup folder:
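Sketches of both variants (the folder name 12-29-2023 is a placeholder matching the default date-based naming):

```shell
# Apply to the most recent full backup
mcs backup -bl /tmp/backups/ --incremental auto_most_recent

# Apply to a specific full backup folder
mcs backup -bl /tmp/backups/ --incremental 12-29-2023
```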
Cron Backup Example
Create a cron job (run as root) that takes periodic backups and appends logs:
Every Night Full Backup retaining the last 14 days:
Full backup once a week (Saturday night) with incremental backups all the other nights (keep 21 days):
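A hedged sketch in /etc/cron.d format (the schedules, log path, and retention values are illustrative):

```shell
# Nightly full backup at midnight, keeping the last 14 days
0 0 * * * root mcs backup -bl /tmp/backups/ -r 14 -q >> /var/log/mcs_backup.log 2>&1

# Weekly full backup Saturday night, incrementals the other nights, keep 21 days
0 0 * * 6 root mcs backup -bl /tmp/backups/ -r 21 -q >> /var/log/mcs_backup.log 2>&1
0 0 * * 0-5 root mcs backup -bl /tmp/backups/ -i auto_most_recent -r 21 -q >> /var/log/mcs_backup.log 2>&1
```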
LocalStorage Backup Flags
The most commonly used options are:
--backup-location (-bl)
Defines the path where the backup is saved. A date-based folder is created under this path for each backup run; you can change the folder name with -nb
Typical default in tooling: /tmp/backups/
--backup-destination (-bd)
Whether the backup is stored on the node running the command (Local) or on a remote host (Remote). Remote requires SSH, so -scp must also be defined; with Local, the -bl path is local to the node
Local or Remote
--secure-copy-protocol (-scp)
If --backup-destination is set to Remote, you must also define -scp, which specifies the SSH destination to rsync the backup to
Example: -bd Remote -scp root@192.168.0.1
--incremental (-i)
Creates an incremental backup based on an existing full backup
Value can be <folder> or auto_most_recent
--parallel (-P)
Enables parallel rsync and defines the number of parallel rsync threads to run. If combined with --compress this flag defines the number of compression threads to run. Default is 4
Example: --parallel 8
--compress (-c)
Compress the backup using the specified format
Supported: pigz
--quiet (-q)
Silence verbose copy command outputs
Useful for cron jobs
--name-backup (-nb)
Define the name of the backup - default: date +%m-%d-%Y
Example: -nb before-upgrade-backup
--retention-days (-r)
Retain backups created within the last X days; delete older ones
0 means keep all
--apply-retention-only (-aro)
Only apply retention policy; do not run a backup
Works with --retention-days
--list (-li)
List backups
Lists backups in the configured location
S3 Topology Backups
Instructions
Ensure the node has access to your S3 endpoint and credentials.
Run `mcs backup` with `--storage S3` and a backup bucket (`--backup-bucket`). Run it as root on each node, starting with the primary node.
Quick Examples
Full backup:
Compressed backup (and skip copying bucket data if you only want local artifacts):
Incremental backup:
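Sketches using the S3 flags documented below (the bucket name is a placeholder):

```shell
# Full backup to an S3 bucket
mcs backup -bb s3://my-cs-backups -s S3

# Compressed backup (pigz)
mcs backup -bb s3://my-cs-backups -s S3 -c pigz

# Incremental backup onto the most recent full backup
mcs backup -bb s3://my-cs-backups -s S3 -i auto_most_recent
```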
On-premise S3 endpoint: key flags for on-premise buckets are the following:
-url: the URL/IP address of the S3 provider. For example, MinIO defaults to port 9000, and 127.0.0.1 would be used if MinIO is installed on the same machine running ColumnStore.
--no-verify-ssl: used when SSL certificates are not used/defined for the S3 provider/endpoint.
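For example, against a local MinIO instance (the address and bucket name are placeholders):

```shell
mcs backup -bb s3://my-cs-backups -s S3 -url 127.0.0.1:9000 --no-verify-ssl
```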
Cron Backup Example
As with LocalStorage, you can schedule mcs backup in cron. Consider including --name-backup to avoid collisions.
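A minimal sketch in /etc/cron.d format (the schedule and log path are illustrative):

```shell
# Nightly S3 full backup at midnight, quiet output for cron logs
0 0 * * * root mcs backup -bb s3://my-cs-backups -s S3 -q >> /var/log/mcs_backup.log 2>&1
```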
S3 Backup Flags
The most commonly used S3-specific options are:
--storage S3 (-s)
Use S3 storage topology
Must be set to S3 for object storage workflows
--backup-bucket (-bb)
Bucket where backups are stored
Example: s3://my-cs-backups
--endpoint-url (-url)
Custom S3 endpoint URL
For on-premise S3 vendors (MinIO, etc.)
--no-verify-ssl (-nv-ssl)
Skip verifying SSL certificates
Use with caution; primarily for test/non-standard TLS
Restore
LocalStorage Topology Restore
Instructions
List backups to find the folder name you want.
Restore on each node, starting with the primary node.
List Your Backups to Restore
Quick Examples
Standard restore:
Compressed backup restore:
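Sketches using the restore flags documented below (the folder name 12-29-2023 is a placeholder matching the default date-based naming):

```shell
# List backups first to find the folder name to restore
mcs backup --list -bl /tmp/backups/

# Standard restore of a chosen backup folder
mcs restore -bl /tmp/backups/ -l 12-29-2023
```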
LocalStorage Restore Flags
Common options:
--load (-l)
Backup folder name to restore
Required for restore
--backup-location (-bl)
Where backups are located
Example: /tmp/backups/
--backup-destination (-bd)
Whether the backup is on the local server or a remote server
Local or Remote
--secure-copy-protocol (-scp)
SCP source used when --backup-destination is Remote
Format: user@host
--skip-mariadb-backup (-smdb)
Skip restoring MariaDB server data
Use when restoring ColumnStore only
S3 Topology Restore
Instructions
Use the same backup bucket that contains the backup.
Restore on each node, starting with the primary node.
When an S3 ColumnStore backup runs, a restoreS3.job file is created containing a ready-to-run command for restoring that backup on each node.
Quick Examples
Standard Restore:
On-premise S3 Endpoint:
Restoring to a New Bucket:
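Sketches using the S3 restore flags documented below (bucket names, folder name, endpoint, and credentials are placeholders):

```shell
# Standard restore from the backup bucket
mcs restore -bb s3://my-cs-backups -l 12-29-2023

# On-premise S3 endpoint
mcs restore -bb s3://my-cs-backups -l 12-29-2023 -url 127.0.0.1:9000 --no-verify-ssl

# Restore into a new bucket, reconfiguring ColumnStore post-restore
mcs restore -bb s3://my-cs-backups -l 12-29-2023 -nb s3://my-new-bucket -nr us-east-1 -nk <new-key> -ns <new-secret>
```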
Key flags for restoring to a new bucket:
-nb: the name of the new bucket to copy the backup into and configure ColumnStore to use post-restore
-nr: the region of the new bucket to configure ColumnStore to use post-restore
-nk: the key of the new bucket to configure ColumnStore to use post-restore
-ns: the secret of the new bucket to configure ColumnStore to use post-restore
S3 Restore Flags
Common options:
--backup-bucket (-bb)
Backup bucket to restore from
Example: s3://my-cs-backups
--endpoint-url (-url)
Custom S3 endpoint URL
For on-premise S3 vendors
--no-verify-ssl (-nv-ssl)
Skip verifying SSL certificates
Use with caution
--new-bucket (-nb)
New bucket to restore data into
Use when restoring into a different bucket
--new-region (-nr)
Region for --new-bucket
S3 only
--new-key (-nk)
Access key for --new-bucket
S3 only
--new-secret (-ns)
Secret for --new-bucket
S3 only
--continue (-cont)
Allow deleting data in --new-bucket during restore
S3 only; dangerous if bucket contains important data
DBRM Backups
Both S3 and LocalStorage use the same commands for DBRM backups.
DBRM backups are intended for backing up internal ColumnStore metadata only.
Instructions
Run mcs dbrm_backup as root with the flags you need, on the primary node ONLY.
List Your dbrm Backups
Quick Examples
Standard dbrm_backup:
dbrm_backup before upgrade:
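Sketches built from the dbrm_backup flags documented below (paths are placeholders; the listing step simply inspects the backup location):

```shell
# List existing DBRM backup folders (default location assumed)
ls /tmp/dbrm_backups

# Standard one-shot DBRM backup
mcs dbrm_backup -m once -bl /tmp/dbrm_backups

# DBRM backup before an upgrade, written to a dedicated folder
mcs dbrm_backup -m once -bl /tmp/dbrm_backups_before_upgrade

# Continuous mode: a backup every 90 minutes, keeping 7 days
mcs dbrm_backup -m loop -i 90 -r 7 -bl /tmp/dbrm_backups
```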
dbrm_backup Flags
Common options:
--backup-location (-bl)
Where DBRM backups are written
Default example: /tmp/dbrm_backups
--retention-days (-r)
Retain DBRM backups created within last X days
Older backups are deleted
--mode (-m)
Run mode
once or loop
--interval (-i)
Sleep interval (minutes) when --mode loop
Only used in loop mode
--skip-storage-manager (-ssm)
Skip backing up the storagemanager directory
Support-guided workflows
--skip-save-brm (-sbrm)
Skip saving BRM prior to DBRM backup
Can produce a dirtier backup
--skip-locks (-slock)
Skip issuing read locks
Support-guided workflows
--skip-polls (-spoll)
Skip polling to confirm locks are released
Support-guided workflows
--quiet (-q)
Silence verbose copy output
Useful for cron
DBRM Restore
Instructions
Both S3 and LocalStorage use the same commands for dbrm restore.
DBRM backups are intended for backing up internal ColumnStore metadata only.
List available DBRM backups.
Restore from the selected folder.
List Your dbrm Restore Options
Quick Examples
Standard dbrm_restore:
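A sketch using the dbrm_restore flags documented below (the directory name is a placeholder; use --list to find real names):

```shell
# List available DBRM backups
mcs dbrm_restore --list -bl /tmp/dbrm_backups

# Restore a specific DBRM backup directory
mcs dbrm_restore -bl /tmp/dbrm_backups -l <dbrm_backup_directory>
```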
dbrm_restore Flags
Common options:
--backup-location (-bl)
Where DBRM backups exist on disk
Example: /tmp/dbrm_backups
--load (-l)
Backup directory name to restore
Required for restore
--no-start (-ns)
Do not attempt ColumnStore startup after restore
Useful for manual recovery steps
--skip-dbrm-backup (-sdbk)
Skip taking a safety backup before restoring
Use with caution
--skip-storage-manager (-ssm)
Skip backing up/restoring storagemanager directory
Support-guided workflows
--list (-li)
List backups
Lists available DBRM backups